Compare commits

...

94 Commits

SHA1 Message Date
3f7882b467 feat(aliyundrive_open): rapid upload (close #4766) 2023-07-15 19:33:46 +08:00
a4511c1963 refactor: change hash function 2023-07-15 16:29:44 +08:00
9d1f122717 fix(local): thumbnail rotated if exist orientation tag (close #4749) 2023-07-15 14:31:03 +08:00
5dd73d80d8 fix(123): remove stream upload method (close #4772) 2023-07-14 19:12:18 +08:00
fce872bc1b feat(123): thumbnail support (#3953) 2023-07-14 14:43:40 +08:00
df6c4c80c2 fix(123): update app-version (close #4758) 2023-07-14 14:17:29 +08:00
d2ff040cf8 feat(s3): add SessionToken field (close #4761) 2023-07-13 15:58:19 +08:00
a31af209cc fix(pikpak): hash calculation and fast upload judgment (#4745 fix #1081) 2023-07-11 22:19:21 +08:00
3f8b3da52b feat(server): add HEAD method support (close #4740) 2023-07-11 13:47:49 +08:00
6887f14ec6 feat(pikpak): allow disable media link (close #4735) 2023-07-11 13:40:58 +08:00
3e0de5eaac fix(deps): adapt module github.com/caarlos0/env/v9 (#4728) 2023-07-10 22:06:50 +08:00
61101a60f4 fix(s3): unable to copy empty folder (close #4620) 2023-07-10 14:55:19 +08:00
3529023bf9 fix(mopan): size field type(close #4734 in #4736) 2023-07-10 14:25:27 +08:00
d1d1a089a4 fix(deps): update module github.com/caarlos0/env/v7 to v9 (#4728)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-07-09 18:15:04 +08:00
fa66358b1e fix(sftp): read target obj of symlink file (close #4713) 2023-07-09 14:42:57 +08:00
2b533e4b91 feat: allow customize perm of unix file (close #4709) 2023-07-08 20:17:05 +08:00
d3530a8d80 fix(deps): update module golang.org/x/image to v0.9.0 (#4725)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-07-08 19:21:15 +08:00
6052eb3512 fix(deps): update module golang.org/x/oauth2 to v0.10.0 (#4522)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-07-08 15:44:42 +08:00
d17f7f7cad fix(123): judge status on get redirect_url (close #4718) 2023-07-07 19:55:37 +08:00
8bdc67ec3d fix(webdav): return 404 if error happened on handlePropfind 2023-07-05 13:52:21 +08:00
4fabc27366 fix(aliyundrive_open): panic if driver not init 2023-07-05 13:51:46 +08:00
e4c7b0f17c fix: https port is not effective 2023-07-05 13:02:52 +08:00
5e8bfb017e fix(123): add Referer to request (close #4631) 2023-07-04 18:36:46 +08:00
7d20a01dba feat!: support listen to the unix (close #4671)
Starting from this commit, the HTTP server related config all move to the scheme
2023-07-04 17:56:02 +08:00
59dbf4496f feat(offline_download): try to init client if not ready (close #4674) 2023-07-03 22:57:42 +08:00
12f40608e6 fix(oidc): use TOTP as state verification to replace the static 'state' parameter (#4665) 2023-07-03 22:41:08 +08:00
89832c296f fix: judge can proxy with ext (close #4688) 2023-07-03 20:41:37 +08:00
f09bb88846 fix(thunder): upload issues (close #4663 in #4667) 2023-06-29 13:21:30 +08:00
c518f59528 feat: add MoPan driver (close #4325 in #4659)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-06-28 14:53:43 +08:00
e9c74f9959 fix: regexp rename error (close #4644 in #4653)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-06-26 15:15:57 +08:00
21b8e7f6e5 fix(aliyundrive_share): add limit rate and lift rate limit restrictions (#4587) 2023-06-26 14:49:21 +08:00
2ae9cd8634 fix(dropbox): failed get link in #4639
close cfee536b96 (commitcomment-119404554)
2023-06-25 17:07:31 +08:00
cfee536b96 feat: add Dropbox driver (#4639 close #4590)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-06-23 17:36:40 +08:00
1c8fe3b24c fix(aliyundrive_open): adaptive part size adjustment (#4609)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-06-23 14:25:30 +08:00
84e23c397d fix(baidu_netdisk): rollback #3652 (close #4628) 2023-06-21 18:37:25 +08:00
f7baec2e65 feat: add WoPan driver (close #4541) 2023-06-17 20:20:00 +08:00
378bab32f1 chore(aliyundrive_share): increase the limit of the list api (#4588) 2023-06-17 20:10:34 +08:00
6cd8151cad fix(aliyundrive_open): change default oauth_token_url 2023-06-16 15:03:27 +08:00
541449e10f docs: add special sponsor [skip ci] 2023-06-14 05:42:21 +08:00
ca5a53fc24 fix(aliyundrive_open): openFile/list rate limit 2023-06-11 18:18:09 +08:00
f646d2a699 feat!: listen to both http & https (#4536)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-06-11 18:17:37 +08:00
363e036bf0 chore: fix typo [skip ci] 2023-06-10 22:25:35 +08:00
e23f00f349 fix(139): avoid panic due to Authorization for emptiness 2023-06-10 00:12:04 +08:00
9600267bda ci: add linux-musl-amd64/arm64 to dev build 2023-06-09 23:43:52 +08:00
a66b0e0151 feat(139): auto extract account from Authorization 2023-06-09 23:41:41 +08:00
3bfa00d5d2 fix(189pc): add REQID header 2023-06-09 23:33:12 +08:00
6cbd2532cc fix(139): modify the authentication mode 2023-06-09 23:02:02 +08:00
47976af0d3 feat: set ProxyFromEnvironment for default http client (#4546) 2023-06-09 22:08:54 +08:00
4dca52be85 fix(s3): optional add filename to disposition (close #4538) 2023-06-06 22:47:27 +08:00
62bb09300d chore: fix typo [skip ci] 2023-06-06 19:34:10 +08:00
f9e067abec feat: support delayed start (#4532) 2023-06-05 16:00:31 +08:00
1e62666406 feat(baidu_netdisk): allow custom crack ua 2023-06-04 15:57:41 +08:00
0e0cdf15ef chore: change daysUntilClose [skip ci] 2023-06-03 21:15:52 +08:00
b124fdc092 perf(baidu): avoid refreshing the token on every startup 2023-06-02 18:31:42 +08:00
5141b3c165 fix(deps): update module github.com/gin-gonic/gin to v1.9.1 [security] [skip ci] (#4521)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-06-02 18:31:14 +08:00
881d6e271e feat: add OIDC single sign-on (#4496)
close #3914
close #4315
2023-06-02 18:22:07 +08:00
bd2418c438 feat(deps): update alpine to 3.18 2023-05-28 19:30:42 +08:00
8421c72c5c fix(seafile): driver panic while downloading or uploading file (#4491)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-28 16:45:46 +08:00
a80e21997c feat(cloudreve): auto remove trailing slash in address (#4492)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-28 16:18:09 +08:00
4369cbbac3 fix(alist_v3): missed Content-Length on upload (close #4457) 2023-05-27 20:23:36 +08:00
89f76d7899 feat: add UC driver (close #1127 in #4459)
Co-authored-by: lj98568 <lj98568@alibaba-inc.com>
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-27 19:36:14 +08:00
ef68f84787 fix(baidu_photo): legal album title check (close #4479 in #4487) 2023-05-27 17:07:57 +08:00
2c1f70fbe9 fix(189pc): large file upload error (close #4417 in #4438) 2023-05-27 14:28:58 +08:00
b2f5757f8d fix(copy): copy from driver that return writer (close #4291) 2023-05-26 21:57:43 +08:00
6b97b4eb20 feat(s3): set content type from stream when uploading (#4460)
Co-authored-by: guopeilun <guopl@flatincbr.com>
2023-05-24 18:02:49 +08:00
645c10c11f fix(deps): update module github.com/sirupsen/logrus to v1.9.2 (#4402)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-05-20 22:15:32 +08:00
571bcf07b0 fix(alias): add api prefix for proxy url (close #4392) 2023-05-19 00:12:57 +08:00
63de65be45 fix: increase timeout for http_client (close #4409) 2023-05-18 23:32:05 +08:00
a3446720a2 fix: make TlsInsecureSkipVerify enable for all request (#4386) 2023-05-14 17:05:47 +08:00
3c4c2ad4e0 feat(teambition): support s3 upload method (close #4365) 2023-05-13 23:06:25 +08:00
077a525961 fix(189): adapt new login method (close #4378) 2023-05-13 17:28:40 +08:00
5be79eb26e feat: add robots.txt setting (close #4303) 2023-05-12 16:53:15 +08:00
ddc19ab699 fix(deps): update module github.com/blevesearch/bleve/v2 to v2.3.8 [skip ci] (#4322)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-05-12 16:34:25 +08:00
ddfca5a29b fix(deps): update module github.com/aws/aws-sdk-go to v1.44.262 (#3285)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-05-12 16:25:30 +08:00
c19166be1c feat(google_drive): support sa (close #3132 in #4360)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-12 14:47:50 +08:00
daad61443c feat(local): support thumbnail cache (close #4216) 2023-05-11 19:57:24 +08:00
4b0c01158d fix: panic on nil pointer 2023-05-11 19:44:44 +08:00
f97f1d532e fix(webdav): don't retry for put if body isn't seeker (close #4149 close #4238) 2023-05-11 18:57:35 +08:00
e15755fef0 fix(189): enable TlsInsecureSkipVerify (close #4355) 2023-05-11 18:48:31 +08:00
ea88998325 docs: add help message for mount path (#4364)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-11 18:40:56 +08:00
74d971aa8a docs: fix git address [skip ci] (#4366) 2023-05-11 15:05:33 +08:00
d41d868a8d fix(baidu_photo): change folder name length limit (close #4351 in #4353)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-09 20:44:57 +08:00
555cc26cbf fix(deps): update module golang.org/x/crypto to v0.9.0 (#4350)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-05-09 20:28:52 +08:00
ab4215080b fix(deps): update module golang.org/x/net to v0.10.0 (#4347)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-05-09 16:31:17 +08:00
9502f5acd7 fix(cloudreve): skip init login when using cookie (#4341) 2023-05-08 19:25:36 +08:00
b03879403f feat(cloudreve): support use cookie to login (close #4324 in #4339)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-08 15:19:51 +08:00
ee4ac81677 fix(webdav): can't rename on infini-cloud (close #4333) 2023-05-08 14:21:12 +08:00
b69fc8c306 ci: increase daysUntilClose to avoid use stale-bot [skip ci] 2023-05-07 21:07:31 +08:00
ee6c31332d feat(drivers): ipfs api (#4265)
Co-authored-by: Andy Hsu <i@nn.ci>
2023-05-05 17:42:22 +08:00
9fa16bd5fc ci: use github helper to close stale issue 2023-05-05 16:29:59 +08:00
c77ed5fcb0 feat(aliyundrive_open): limit rate for List and Link (close #4290) 2023-05-02 22:06:03 +08:00
822be17fb9 feat(aliyundrive_open): add expiration for link (close #4061) 2023-05-02 16:12:40 +08:00
7e3b13ea2d fix: fs/list interface conversion from copy alias (close #4279) 2023-05-01 15:45:45 +08:00
f8fb48fb32 fix: cannot connect to Casdoor SSO (close #4266 in #4274) 2023-05-01 15:32:34 +08:00
129 changed files with 4249 additions and 1066 deletions

2
.github/stale.yml vendored
View File

@ -1,7 +1,7 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 44
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 8
daysUntilClose: 20
# Issues with these labels will never be considered stale
exemptLabels:
- accepted

View File

@ -2,7 +2,7 @@ name: Close need info
on:
schedule:
- cron: "0 0 */7 * *"
- cron: "0 0 */1 * *"
workflow_dispatch:
jobs:
@ -15,8 +15,8 @@ jobs:
actions: 'close-issues'
token: ${{ secrets.GITHUB_TOKEN }}
labels: 'question'
inactive-day: 7
inactive-day: 3
close-reason: 'not_planned'
body: |
Hello @${{ github.event.issue.user.login }}, this issue was closed due to no activities in 7 days.
你好 @${{ github.event.issue.user.login }}此issue因超过7天未回复被关闭。
Hello @${{ github.event.issue.user.login }}, this issue was closed due to no activities in 3 days.
你好 @${{ github.event.issue.user.login }}此issue因超过3天未回复被关闭。

21
.github/workflows/issue_close_stale.yml vendored Normal file
View File

@ -0,0 +1,21 @@
name: Close inactive
on:
schedule:
- cron: "0 0 */7 * *"
workflow_dispatch:
jobs:
close-inactive:
runs-on: ubuntu-latest
steps:
- name: close-issues
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issues'
token: ${{ secrets.GITHUB_TOKEN }}
labels: 'stale'
inactive-day: 8
close-reason: 'not_planned'
body: |
Hello @${{ github.event.issue.user.login }}, this issue was closed due to inactive more than 52 days. You can reopen or recreate it if you think it should continue. Thank you for your contributions again.

View File

@ -16,5 +16,5 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }}
body: |
Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 7 days.
你好 @${{ github.event.issue.user.login }}请按照issue模板填写, 并详细说明问题/日志记录/复现步骤/复现链接/实现思路或提供更多信息等, 7天内未回复issue自动关闭。
Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 3 days.
你好 @${{ github.event.issue.user.login }}请按照issue模板填写, 并详细说明问题/日志记录/复现步骤/复现链接/实现思路或提供更多信息等, 3天内未回复issue自动关闭。

View File

@ -6,7 +6,7 @@
Prerequisites:
- [git](https://nodejs.org/zh-cn/)
- [git](https://git-scm.com)
- [Go 1.19+](https://golang.org/doc/install)
- [gcc](https://gcc.gnu.org/)
- [nodejs](https://nodejs.org/)

View File

@ -1,11 +1,11 @@
FROM alpine:3.17 as builder
FROM alpine:3.18 as builder
LABEL stage=go-builder
WORKDIR /app/
COPY ./ ./
RUN apk add --no-cache bash curl gcc git go musl-dev; \
bash build.sh release docker
FROM alpine:3.17
FROM alpine:3.18
LABEL MAINTAINER="i@nn.ci"
VOLUME /opt/alist/data/
WORKDIR /opt/alist/
@ -14,5 +14,5 @@ COPY entrypoint.sh /entrypoint.sh
RUN apk add --no-cache bash ca-certificates su-exec tzdata; \
chmod +x /entrypoint.sh
ENV PUID=0 PGID=0 UMASK=022
EXPOSE 5244
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://alist.nn.ci"><img height="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂A file list program that supports multiple storage, powered by Gin and Solidjs.</em></p>
<p><em>🗂A file list program that supports multiple storages, powered by Gin and Solidjs.</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
<img src="https://goreportcard.com/badge/github.com/alist-org/alist/v3" alt="latest version" />
@ -62,6 +62,7 @@ English | [中文](./README_cn.md) | [Contributing](./CONTRIBUTING.md) | [CODE_O
- [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main)
- [x] [UC](https://drive.uc.cn)
- [x] [Quark](https://pan.quark.cn)
- [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com/)
@ -72,6 +73,7 @@ English | [中文](./README_cn.md) | [Contributing](./CONTRIBUTING.md) | [CODE_O
- [x] SMB
- [x] [115](https://115.com/)
- [X] Cloudreve
- [x] [Dropbox](https://www.dropbox.com/)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode
@ -109,6 +111,7 @@ https://alist.nn.ci/guide/sponsor.html
### Special sponsors
- [亚洲云 - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商](https://www.asiayun.com/aff/QQCOOQKZ) (sponsored Chinese API server)
- [找资源 - 阿里云盘资源搜索引擎](https://zhaoziyuan.la/)
- [KinhDown 百度云盘不限速下载永久免费已稳定运行3年非常可靠Q群 -> 786799372](https://kinhdown.com)
- [JetBrains: Essential tools for software developers and teams](https://www.jetbrains.com/)

View File

@ -61,6 +61,7 @@
- [x] [和彩云](https://yun.139.com/) (个人云, 家庭云)
- [x] [Yandex.Disk](https://disk.yandex.com/)
- [x] [百度网盘](http://pan.baidu.com/)
- [x] [UC网盘](https://drive.uc.cn)
- [x] [夸克网盘](https://pan.quark.cn)
- [x] [迅雷网盘](https://pan.xunlei.com)
- [x] [蓝奏云](https://www.lanzou.com/)
@ -71,6 +72,7 @@
- [x] SMB
- [x] [115](https://115.com/)
- [X] Cloudreve
- [x] [Dropbox](https://www.dropbox.com/)
- [x] 部署方便,开箱即用
- [x] 文件预览PDF、markdown、代码、纯文本……
- [x] 画廊模式下的图像预览
@ -107,6 +109,7 @@ AList 是一个开源软件,如果你碰巧喜欢这个项目,并希望我
### 特别赞助
- [亚洲云 - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商](https://www.asiayun.com/aff/QQCOOQKZ) (国内API服务器赞助)
- [找资源 - 阿里云盘资源搜索引擎](https://zhaoziyuan.la/)
- [KinhDown 百度云盘不限速下载永久免费已稳定运行3年非常可靠Q群 -> 786799372](https://kinhdown.com)
- [JetBrains: Essential tools for software developers and teams](https://www.jetbrains.com/)

View File

@ -54,11 +54,30 @@ BuildWinArm64() {
BuildDev() {
rm -rf .git/
xgo -targets=linux/amd64,windows/amd64,darwin/amd64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
mkdir -p "dist"
muslflags="--extldflags '-static -fpic' $ldflags"
BASE="https://musl.nn.ci/"
FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross)
for i in "${FILES[@]}"; do
url="${BASE}${i}.tgz"
curl -L -o "${i}.tgz" "${url}"
sudo tar xf "${i}.tgz" --strip-components 1 -C /usr/local
done
OS_ARCHES=(linux-musl-amd64 linux-musl-arm64)
CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc)
for i in "${!OS_ARCHES[@]}"; do
os_arch=${OS_ARCHES[$i]}
cgo_cc=${CGO_ARGS[$i]}
echo building for ${os_arch}
export GOOS=${os_arch%%-*}
export GOARCH=${os_arch##*-}
export CC=${cgo_cc}
export CGO_ENABLED=1
go build -o ./dist/$appName-$os_arch -ldflags="$muslflags" -tags=jsoniter .
done
xgo -targets=windows/amd64,darwin/amd64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
mv alist-* dist
cd dist
upx -9 ./alist-linux*
cp ./alist-windows-amd64.exe ./alist-windows-amd64-upx.exe
upx -9 ./alist-windows-amd64-upx.exe
find . -type f -print0 | xargs -0 md5sum >md5.txt

View File

@ -24,7 +24,7 @@ func Execute() {
}
func init() {
RootCmd.PersistentFlags().StringVar(&flags.DataDir, "data", "data", "config file")
RootCmd.PersistentFlags().StringVar(&flags.DataDir, "data", "data", "data folder")
RootCmd.PersistentFlags().BoolVar(&flags.Debug, "debug", false, "start with debug mode")
RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix")
RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode")

View File

@ -3,9 +3,12 @@ package cmd
import (
"context"
"fmt"
"net"
"net/http"
"os"
"os/signal"
"strconv"
"sync"
"syscall"
"time"
@ -28,6 +31,10 @@ var ServerCmd = &cobra.Command{
the address is defined in config file`,
Run: func(cmd *cobra.Command, args []string) {
Init()
if conf.Conf.DelayedStart != 0 {
utils.Log.Infof("delayed start for %d seconds", conf.Conf.DelayedStart)
time.Sleep(time.Duration(conf.Conf.DelayedStart) * time.Second)
}
bootstrap.InitAria2()
bootstrap.InitQbittorrent()
bootstrap.LoadStorages()
@ -37,42 +44,95 @@ the address is defined in config file`,
r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
base := fmt.Sprintf("%s:%d", conf.Conf.Address, conf.Conf.Port)
utils.Log.Infof("start server @ %s", base)
srv := &http.Server{Addr: base, Handler: r}
var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: r}
go func() {
var err error
if conf.Conf.Scheme.Https {
//err = r.RunTLS(base, conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
err = srv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
} else {
err = srv.ListenAndServe()
}
err := httpSrv.ListenAndServe()
if err != nil && err != http.ErrServerClosed {
utils.Log.Fatalf("failed to start: %s", err.Error())
utils.Log.Fatalf("failed to start http: %s", err.Error())
}
}()
}
if conf.Conf.Scheme.HttpsPort != -1 {
httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
utils.Log.Infof("start HTTPS server @ %s", httpsBase)
httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
go func() {
err := httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
if err != nil && err != http.ErrServerClosed {
utils.Log.Fatalf("failed to start https: %s", err.Error())
}
}()
}
if conf.Conf.Scheme.UnixFile != "" {
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: r}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {
utils.Log.Fatalf("failed to listen unix: %+v", err)
}
// set socket file permission
mode, err := strconv.ParseUint(conf.Conf.Scheme.UnixFilePerm, 8, 32)
if err != nil {
utils.Log.Errorf("failed to parse socket file permission: %+v", err)
} else {
err = os.Chmod(conf.Conf.Scheme.UnixFile, os.FileMode(mode))
if err != nil {
utils.Log.Errorf("failed to chmod socket file: %+v", err)
}
}
err = unixSrv.Serve(listener)
if err != nil && err != http.ErrServerClosed {
utils.Log.Fatalf("failed to start unix: %s", err.Error())
}
}()
}
// Wait for interrupt signal to gracefully shutdown the server with
// a timeout of 5 seconds.
quit := make(chan os.Signal)
// a timeout of 1 second.
quit := make(chan os.Signal, 1)
// kill (no param) default send syscanll.SIGTERM
// kill -2 is syscall.SIGINT
// kill -9 is syscall. SIGKILL but can"t be catch, so don't need add it
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
utils.Log.Println("Shutdown Server ...")
utils.Log.Println("Shutdown server...")
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
if err := srv.Shutdown(ctx); err != nil {
utils.Log.Fatal("Server Shutdown:", err)
var wg sync.WaitGroup
if conf.Conf.Scheme.HttpPort != -1 {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTP server shutdown err: ", err)
}
// catching ctx.Done(). timeout of 3 seconds.
select {
case <-ctx.Done():
utils.Log.Println("timeout of 1 seconds.")
}()
}
utils.Log.Println("Server exiting")
if conf.Conf.Scheme.HttpsPort != -1 {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpsSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTPS server shutdown err: ", err)
}
}()
}
if conf.Conf.Scheme.UnixFile != "" {
wg.Add(1)
go func() {
defer wg.Done()
if err := unixSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("Unix server shutdown err: ", err)
}
}()
}
wg.Wait()
utils.Log.Println("Server exit")
},
}
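
The cmd/server.go changes above replace the single srv instance with separate HTTP, HTTPS and unix-socket servers that share one gin handler and are shut down together through a WaitGroup. Below is a minimal standalone sketch of that pattern, not the project's code: the port, socket path and 1-second shutdown timeout are illustrative assumptions, and an HTTPS listener (ListenAndServeTLS with cert/key files) would follow the same shape as the HTTP one.

package main

import (
    "context"
    "log"
    "net"
    "net/http"
    "os"
    "os/signal"
    "sync"
    "syscall"
    "time"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
        _, _ = w.Write([]byte("pong"))
    })

    // One handler, several listeners (HTTPS would use ListenAndServeTLS the same way).
    httpSrv := &http.Server{Addr: ":5244", Handler: mux} // illustrative port
    unixSrv := &http.Server{Handler: mux}

    go func() {
        if err := httpSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("failed to start http: %s", err)
        }
    }()
    go func() {
        listener, err := net.Listen("unix", "/tmp/sketch.sock") // hypothetical socket path
        if err != nil {
            log.Fatalf("failed to listen unix: %v", err)
        }
        if err := unixSrv.Serve(listener); err != nil && err != http.ErrServerClosed {
            log.Fatalf("failed to start unix: %s", err)
        }
    }()

    // Block until SIGINT/SIGTERM, then shut every server down concurrently
    // under a single short timeout, mirroring the WaitGroup logic above.
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit

    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel()
    var wg sync.WaitGroup
    for _, srv := range []*http.Server{httpSrv, unixSrv} {
        wg.Add(1)
        go func(s *http.Server) {
            defer wg.Done()
            if err := s.Shutdown(ctx); err != nil {
                log.Printf("shutdown err: %v", err)
            }
        }(srv)
    }
    wg.Wait()
}

Shutting each server down in its own goroutine keeps total shutdown time bounded by the single context timeout rather than summing a timeout per listener.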

View File

@ -6,6 +6,7 @@ services:
- '/etc/alist:/opt/alist/data'
ports:
- '5244:5244'
- '5245:5245'
environment:
- PUID=0
- PGID=0

View File

@ -4,6 +4,7 @@ import (
"fmt"
"github.com/SheltonZhu/115driver/pkg/driver"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/pkg/errors"
)
@ -15,6 +16,7 @@ func (d *Pan115) login() error {
driver.UA(UserAgent),
}
d.client = driver.New(opts...)
d.client.SetHttpClient(base.HttpClient)
cr := &driver.Credential{}
if d.Addition.QRCodeToken != "" {
s := &driver.QRCodeSession{

View File

@ -1,11 +1,9 @@
package _123
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
"fmt"
"io"
@ -45,6 +43,9 @@ func (d *Pan123) Init(ctx context.Context) error {
}
func (d *Pan123) Drop(ctx context.Context) error {
_, _ = d.request(Logout, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, nil)
return nil
}
@ -98,7 +99,7 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}
u_ := u.String()
log.Debug("download url: ", u_)
res, err := base.NoRedirectClient.R().Get(u_)
res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
if err != nil {
return nil, err
}
@ -109,9 +110,12 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
log.Debugln("res code: ", res.StatusCode())
if res.StatusCode() == 302 {
link.URL = res.Header().Get("location")
} else if res.StatusCode() == 200 {
} else if res.StatusCode() < 300 {
link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
}
link.Header = http.Header{
"Referer": []string{"https://www.123pan.com/"},
}
return &link, nil
} else {
return nil, fmt.Errorf("can't convert obj")
@ -177,24 +181,9 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
const DEFAULT int64 = 10485760
var uploadFile io.Reader
// const DEFAULT int64 = 10485760
h := md5.New()
if d.StreamUpload && stream.GetSize() > DEFAULT {
// 只计算前10MIB
buf := bytes.NewBuffer(make([]byte, 0, DEFAULT))
if n, err := io.CopyN(io.MultiWriter(buf, h), stream, DEFAULT); err != io.EOF && n == 0 {
return err
}
// 增加额外参数防止MD5碰撞
h.Write([]byte(stream.GetName()))
num := make([]byte, 8)
binary.BigEndian.PutUint64(num, uint64(stream.GetSize()))
h.Write(num)
// 拼装
uploadFile = io.MultiReader(buf, stream)
} else {
// 计算完整文件MD5
// need to calculate md5 of the full content
tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
if err != nil {
return err
@ -210,8 +199,6 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
uploadFile = tempFile
}
etag := hex.EncodeToString(h.Sum(nil))
data := base.Json{
"driveId": 0,
@ -234,7 +221,8 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return nil
}
if resp.Data.AccessKeyId == "" || resp.Data.SecretAccessKey == "" || resp.Data.SessionToken == "" {
err = d.newUpload(ctx, &resp, stream, uploadFile, up)
err = d.newUpload(ctx, &resp, stream, tempFile, up)
return err
} else {
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(resp.Data.AccessKeyId, resp.Data.SecretAccessKey, resp.Data.SessionToken),
@ -250,7 +238,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
input := &s3manager.UploadInput{
Bucket: &resp.Data.Bucket,
Key: &resp.Data.Key,
Body: uploadFile,
Body: tempFile,
}
_, err = uploader.UploadWithContext(ctx, input)
}

View File

@ -11,7 +11,6 @@ type Addition struct {
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"file_name,size,update_at" default:"file_name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
StreamUpload bool `json:"stream_upload"`
AccessToken string
}

View File

@ -1,7 +1,10 @@
package _123
import (
"net/url"
"path"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
@ -42,7 +45,30 @@ func (f File) GetID() string {
return strconv.FormatInt(f.FileId, 10)
}
func (f File) Thumb() string {
if f.DownloadUrl == "" {
return ""
}
du, err := url.Parse(f.DownloadUrl)
if err != nil {
return ""
}
du.Path = strings.TrimSuffix(du.Path, "_24_24") + "_70_70"
query := du.Query()
query.Set("w", "70")
query.Set("h", "70")
if !query.Has("type") {
query.Set("type", strings.TrimPrefix(path.Base(f.FileName), "."))
}
if !query.Has("trade_key") {
query.Set("trade_key", "123pan-thumbnail")
}
du.RawQuery = query.Encode()
return du.String()
}
var _ model.Obj = (*File)(nil)
var _ model.Thumb = (*File)(nil)
//func (f File) Thumb() string {
//

View File

@ -34,14 +34,17 @@ func (d *Pan123) getS3PreSignedUrls(ctx context.Context, upReq *UploadResp, star
return &s3PreSignedUrls, nil
}
func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp) error {
func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.FileStreamer, isMultipart bool) error {
data := base.Json{
"StorageNode": upReq.Data.StorageNode,
"bucket": upReq.Data.Bucket,
"fileId": upReq.Data.FileId,
"fileSize": file.GetSize(),
"isMultipart": isMultipart,
"key": upReq.Data.Key,
"uploadId": upReq.Data.UploadId,
"StorageNode": upReq.Data.StorageNode,
}
_, err := d.request(S3Complete, http.MethodPost, func(req *resty.Request) {
_, err := d.request(UploadCompleteV2, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, nil)
return err
@ -83,7 +86,7 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
}
}
// complete s3 upload
return d.completeS3(ctx, upReq)
return d.completeS3(ctx, upReq, file, chunkCount > 1)
}
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader io.Reader, curSize int64, retry bool) error {

View File

@ -15,19 +15,24 @@ import (
// do others that not defined in Driver interface
const (
API = "https://www.123pan.com/b/api"
SignIn = API + "/user/sign_in"
UserInfo = API + "/user/info"
FileList = API + "/file/list/new"
DownloadInfo = "https://www.123pan.com/a/api/file/download_info"
Mkdir = API + "/file/upload_request"
Move = API + "/file/mod_pid"
Rename = API + "/file/rename"
Trash = API + "/file/trash"
UploadRequest = API + "/file/upload_request"
UploadComplete = API + "/file/upload_complete"
S3PreSignedUrls = API + "/file/s3_repare_upload_parts_batch"
S3Complete = API + "/file/s3_complete_multipart_upload"
AApi = "https://www.123pan.com/a/api"
BApi = "https://www.123pan.com/b/api"
MainApi = AApi
SignIn = MainApi + "/user/sign_in"
Logout = MainApi + "/user/logout"
UserInfo = MainApi + "/user/info"
FileList = MainApi + "/file/list/new"
DownloadInfo = MainApi + "/file/download_info"
Mkdir = MainApi + "/file/upload_request"
Move = MainApi + "/file/mod_pid"
Rename = MainApi + "/file/rename"
Trash = MainApi + "/file/trash"
UploadRequest = MainApi + "/file/upload_request"
UploadComplete = MainApi + "/file/upload_complete"
S3PreSignedUrls = MainApi + "/file/s3_repare_upload_parts_batch"
S3Auth = MainApi + "/file/s3_upload_object/auth"
UploadCompleteV2 = MainApi + "/file/upload_complete/v2"
S3Complete = MainApi + "/file/s3_complete_multipart_upload"
)
func (d *Pan123) login() error {
@ -42,9 +47,17 @@ func (d *Pan123) login() error {
body = base.Json{
"passport": d.Username,
"password": d.Password,
"remember": true,
}
}
res, err := base.RestyClient.R().
SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"platform": "web",
"app-version": "3",
"user-agent": base.UserAgent,
}).
SetBody(body).Post(SignIn)
if err != nil {
return err
@ -61,9 +74,11 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"authorization": "Bearer " + d.AccessToken,
"platform": "web",
"app-version": "1.2",
"app-version": "3",
"user-agent": base.UserAgent,
})
if callback != nil {
callback(req)

View File

@ -2,10 +2,12 @@ package _139
import (
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -18,6 +20,7 @@ import (
type Yun139 struct {
model.Storage
Addition
Account string
}
func (d *Yun139) Config() driver.Config {
@ -29,7 +32,20 @@ func (d *Yun139) GetAddition() driver.Additional {
}
func (d *Yun139) Init(ctx context.Context) error {
_, err := d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
if d.Authorization == "" {
return fmt.Errorf("authorization is empty")
}
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 2 {
return fmt.Errorf("authorization is invalid, splits < 2")
}
d.Account = splits[1]
_, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
"qryUserExternInfoReq": base.Json{
"commonAccountInfo": base.Json{
"account": d.Account,

View File

@ -6,8 +6,8 @@ import (
)
type Addition struct {
Account string `json:"account" required:"true"`
Cookie string `json:"cookie" type:"text" required:"true"`
//Account string `json:"account" required:"true"`
Authorization string `json:"authorization" type:"text" required:"true"`
driver.RootID
Type string `json:"type" type:"select" options:"personal,family" default:"personal"`
CloudID string `json:"cloud_id"`

View File

@ -42,8 +42,8 @@ func calSign(body, ts, randStr string) string {
sort.Strings(strs)
body = strings.Join(strs, "")
body = base64.StdEncoding.EncodeToString([]byte(body))
res := utils.GetMD5Encode(body) + utils.GetMD5Encode(ts+":"+randStr)
res = strings.ToUpper(utils.GetMD5Encode(res))
res := utils.GetMD5EncodeStr(body) + utils.GetMD5EncodeStr(ts+":"+randStr)
res = strings.ToUpper(utils.GetMD5EncodeStr(res))
return res
}
@ -72,7 +72,7 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Cookie": d.Cookie,
"Authorization": "Basic " + d.Authorization,
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",

View File

@ -30,12 +30,9 @@ func (d *Cloud189) GetAddition() driver.Additional {
}
func (d *Cloud189) Init(ctx context.Context) error {
d.client = resty.New().
SetTimeout(base.DefaultTimeout).
SetRetryCount(3).
SetHeader("Referer", "https://cloud.189.cn/").
SetHeader("User-Agent", base.UserAgent)
return d.login()
d.client = base.NewRestyClient().
SetHeader("Referer", "https://cloud.189.cn/")
return d.newLogin()
}
func (d *Cloud189) Drop(ctx context.Context) error {

126
drivers/189/login.go Normal file
View File

@ -0,0 +1,126 @@
package _189
import (
"errors"
"strconv"
"github.com/alist-org/alist/v3/pkg/utils"
log "github.com/sirupsen/logrus"
)
type AppConf struct {
Data struct {
AccountType string `json:"accountType"`
AgreementCheck string `json:"agreementCheck"`
AppKey string `json:"appKey"`
ClientType int `json:"clientType"`
IsOauth2 bool `json:"isOauth2"`
LoginSort string `json:"loginSort"`
MailSuffix string `json:"mailSuffix"`
PageKey string `json:"pageKey"`
ParamId string `json:"paramId"`
RegReturnUrl string `json:"regReturnUrl"`
ReqId string `json:"reqId"`
ReturnUrl string `json:"returnUrl"`
ShowFeedback string `json:"showFeedback"`
ShowPwSaveName string `json:"showPwSaveName"`
ShowQrSaveName string `json:"showQrSaveName"`
ShowSmsSaveName string `json:"showSmsSaveName"`
Sso string `json:"sso"`
} `json:"data"`
Msg string `json:"msg"`
Result string `json:"result"`
}
type EncryptConf struct {
Result int `json:"result"`
Data struct {
UpSmsOn string `json:"upSmsOn"`
Pre string `json:"pre"`
PreDomain string `json:"preDomain"`
PubKey string `json:"pubKey"`
} `json:"data"`
}
func (d *Cloud189) newLogin() error {
url := "https://cloud.189.cn/api/portal/loginUrl.action?redirectURL=https%3A%2F%2Fcloud.189.cn%2Fmain.action"
res, err := d.client.R().Get(url)
if err != nil {
return err
}
// Is logged in
redirectURL := res.RawResponse.Request.URL
if redirectURL.String() == "https://cloud.189.cn/web/main" {
return nil
}
lt := redirectURL.Query().Get("lt")
reqId := redirectURL.Query().Get("reqId")
appId := redirectURL.Query().Get("appId")
headers := map[string]string{
"lt": lt,
"reqid": reqId,
"referer": redirectURL.String(),
"origin": "https://open.e.189.cn",
}
// get app Conf
var appConf AppConf
res, err = d.client.R().SetHeaders(headers).SetFormData(map[string]string{
"version": "2.0",
"appKey": appId,
}).SetResult(&appConf).Post("https://open.e.189.cn/api/logbox/oauth2/appConf.do")
if err != nil {
return err
}
log.Debugf("189 AppConf resp body: %s", res.String())
if appConf.Result != "0" {
return errors.New(appConf.Msg)
}
// get encrypt conf
var encryptConf EncryptConf
res, err = d.client.R().SetHeaders(headers).SetFormData(map[string]string{
"appId": appId,
}).Post("https://open.e.189.cn/api/logbox/config/encryptConf.do")
if err != nil {
return err
}
err = utils.Json.Unmarshal(res.Body(), &encryptConf)
if err != nil {
return err
}
log.Debugf("189 EncryptConf resp body: %s\n%+v", res.String(), encryptConf)
if encryptConf.Result != 0 {
return errors.New("get EncryptConf error:" + res.String())
}
// TODO: getUUID? needcaptcha
// login
loginData := map[string]string{
"version": "v2.0",
"apToken": "",
"appKey": appId,
"accountType": appConf.Data.AccountType,
"userName": encryptConf.Data.Pre + RsaEncode([]byte(d.Username), encryptConf.Data.PubKey, true),
"epd": encryptConf.Data.Pre + RsaEncode([]byte(d.Password), encryptConf.Data.PubKey, true),
"captchaType": "",
"validateCode": "",
"smsValidateCode": "",
"captchaToken": "",
"returnUrl": appConf.Data.ReturnUrl,
"mailSuffix": appConf.Data.MailSuffix,
"dynamicCheck": "FALSE",
"clientType": strconv.Itoa(appConf.Data.ClientType),
"cb_SaveName": "3",
"isOauth2": strconv.FormatBool(appConf.Data.IsOauth2),
"state": "",
"paramId": appConf.Data.ParamId,
}
res, err = d.client.R().SetHeaders(headers).SetFormData(loginData).Post("https://open.e.189.cn/api/logbox/oauth2/loginSubmit.do")
if err != nil {
return err
}
log.Debugf("189 login resp body: %s", res.String())
loginResult := utils.Json.Get(res.Body(), "result").ToInt()
if loginResult != 0 {
return errors.New(utils.Json.Get(res.Body(), "msg").ToString())
}
return nil
}

View File

@ -8,6 +8,7 @@ import (
type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
Cookie string `json:"cookie" help:"Fill in the cookie if need captcha"`
driver.RootID
}
@ -15,6 +16,7 @@ var config = driver.Config{
Name: "189Cloud",
LocalSort: true,
DefaultRoot: "-11",
Alert: `info|You can try to use 189PC driver if this driver does not work.`,
}
func init() {

View File

@ -11,16 +11,13 @@ import (
"io"
"math"
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
myrand "github.com/alist-org/alist/v3/pkg/utils/random"
"github.com/go-resty/resty/v2"
@ -30,118 +27,118 @@ import (
// do others that not defined in Driver interface
func (d *Cloud189) login() error {
url := "https://cloud.189.cn/api/portal/loginUrl.action?redirectURL=https%3A%2F%2Fcloud.189.cn%2Fmain.action"
b := ""
lt := ""
ltText := regexp.MustCompile(`lt = "(.+?)"`)
var res *resty.Response
var err error
for i := 0; i < 3; i++ {
res, err = d.client.R().Get(url)
if err != nil {
return err
}
// 已经登陆
if res.RawResponse.Request.URL.String() == "https://cloud.189.cn/web/main" {
return nil
}
b = res.String()
ltTextArr := ltText.FindStringSubmatch(b)
if len(ltTextArr) > 0 {
lt = ltTextArr[1]
break
} else {
<-time.After(time.Second)
}
}
if lt == "" {
return fmt.Errorf("get page: %s \nstatus: %d \nrequest url: %s\nredirect url: %s",
b, res.StatusCode(), res.RawResponse.Request.URL.String(), res.Header().Get("location"))
}
captchaToken := regexp.MustCompile(`captchaToken' value='(.+?)'`).FindStringSubmatch(b)[1]
returnUrl := regexp.MustCompile(`returnUrl = '(.+?)'`).FindStringSubmatch(b)[1]
paramId := regexp.MustCompile(`paramId = "(.+?)"`).FindStringSubmatch(b)[1]
//reqId := regexp.MustCompile(`reqId = "(.+?)"`).FindStringSubmatch(b)[1]
jRsakey := regexp.MustCompile(`j_rsaKey" value="(\S+)"`).FindStringSubmatch(b)[1]
vCodeID := regexp.MustCompile(`picCaptcha\.do\?token\=([A-Za-z0-9\&\=]+)`).FindStringSubmatch(b)[1]
vCodeRS := ""
if vCodeID != "" {
// need ValidateCode
log.Debugf("try to identify verification codes")
timeStamp := strconv.FormatInt(time.Now().UnixNano()/1e6, 10)
u := "https://open.e.189.cn/api/logbox/oauth2/picCaptcha.do?token=" + vCodeID + timeStamp
imgRes, err := d.client.R().SetHeaders(map[string]string{
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/76.0",
"Referer": "https://open.e.189.cn/api/logbox/oauth2/unifyAccountLogin.do",
"Sec-Fetch-Dest": "image",
"Sec-Fetch-Mode": "no-cors",
"Sec-Fetch-Site": "same-origin",
}).Get(u)
if err != nil {
return err
}
// Enter the verification code manually
//err = message.GetMessenger().WaitSend(message.Message{
// Type: "image",
// Content: "data:image/png;base64," + base64.StdEncoding.EncodeToString(imgRes.Body()),
//}, 10)
//if err != nil {
// return err
//}
//vCodeRS, err = message.GetMessenger().WaitReceive(30)
// use ocr api
vRes, err := base.RestyClient.R().SetMultipartField(
"image", "validateCode.png", "image/png", bytes.NewReader(imgRes.Body())).
Post(setting.GetStr(conf.OcrApi))
if err != nil {
return err
}
if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
}
vCodeRS = jsoniter.Get(vRes.Body(), "result").ToString()
log.Debugln("code: ", vCodeRS)
}
userRsa := RsaEncode([]byte(d.Username), jRsakey, true)
passwordRsa := RsaEncode([]byte(d.Password), jRsakey, true)
url = "https://open.e.189.cn/api/logbox/oauth2/loginSubmit.do"
var loginResp LoginResp
res, err = d.client.R().
SetHeaders(map[string]string{
"lt": lt,
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
"Referer": "https://open.e.189.cn/",
"accept": "application/json;charset=UTF-8",
}).SetFormData(map[string]string{
"appKey": "cloud",
"accountType": "01",
"userName": "{RSA}" + userRsa,
"password": "{RSA}" + passwordRsa,
"validateCode": vCodeRS,
"captchaToken": captchaToken,
"returnUrl": returnUrl,
"mailSuffix": "@pan.cn",
"paramId": paramId,
"clientType": "10010",
"dynamicCheck": "FALSE",
"cb_SaveName": "1",
"isOauth2": "false",
}).Post(url)
if err != nil {
return err
}
err = utils.Json.Unmarshal(res.Body(), &loginResp)
if err != nil {
log.Error(err.Error())
return err
}
if loginResp.Result != 0 {
return fmt.Errorf(loginResp.Msg)
}
_, err = d.client.R().Get(loginResp.ToUrl)
return err
}
//func (d *Cloud189) login() error {
// url := "https://cloud.189.cn/api/portal/loginUrl.action?redirectURL=https%3A%2F%2Fcloud.189.cn%2Fmain.action"
// b := ""
// lt := ""
// ltText := regexp.MustCompile(`lt = "(.+?)"`)
// var res *resty.Response
// var err error
// for i := 0; i < 3; i++ {
// res, err = d.client.R().Get(url)
// if err != nil {
// return err
// }
// // 已经登陆
// if res.RawResponse.Request.URL.String() == "https://cloud.189.cn/web/main" {
// return nil
// }
// b = res.String()
// ltTextArr := ltText.FindStringSubmatch(b)
// if len(ltTextArr) > 0 {
// lt = ltTextArr[1]
// break
// } else {
// <-time.After(time.Second)
// }
// }
// if lt == "" {
// return fmt.Errorf("get page: %s \nstatus: %d \nrequest url: %s\nredirect url: %s",
// b, res.StatusCode(), res.RawResponse.Request.URL.String(), res.Header().Get("location"))
// }
// captchaToken := regexp.MustCompile(`captchaToken' value='(.+?)'`).FindStringSubmatch(b)[1]
// returnUrl := regexp.MustCompile(`returnUrl = '(.+?)'`).FindStringSubmatch(b)[1]
// paramId := regexp.MustCompile(`paramId = "(.+?)"`).FindStringSubmatch(b)[1]
// //reqId := regexp.MustCompile(`reqId = "(.+?)"`).FindStringSubmatch(b)[1]
// jRsakey := regexp.MustCompile(`j_rsaKey" value="(\S+)"`).FindStringSubmatch(b)[1]
// vCodeID := regexp.MustCompile(`picCaptcha\.do\?token\=([A-Za-z0-9\&\=]+)`).FindStringSubmatch(b)[1]
// vCodeRS := ""
// if vCodeID != "" {
// // need ValidateCode
// log.Debugf("try to identify verification codes")
// timeStamp := strconv.FormatInt(time.Now().UnixNano()/1e6, 10)
// u := "https://open.e.189.cn/api/logbox/oauth2/picCaptcha.do?token=" + vCodeID + timeStamp
// imgRes, err := d.client.R().SetHeaders(map[string]string{
// "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/76.0",
// "Referer": "https://open.e.189.cn/api/logbox/oauth2/unifyAccountLogin.do",
// "Sec-Fetch-Dest": "image",
// "Sec-Fetch-Mode": "no-cors",
// "Sec-Fetch-Site": "same-origin",
// }).Get(u)
// if err != nil {
// return err
// }
// // Enter the verification code manually
// //err = message.GetMessenger().WaitSend(message.Message{
// // Type: "image",
// // Content: "data:image/png;base64," + base64.StdEncoding.EncodeToString(imgRes.Body()),
// //}, 10)
// //if err != nil {
// // return err
// //}
// //vCodeRS, err = message.GetMessenger().WaitReceive(30)
// // use ocr api
// vRes, err := base.RestyClient.R().SetMultipartField(
// "image", "validateCode.png", "image/png", bytes.NewReader(imgRes.Body())).
// Post(setting.GetStr(conf.OcrApi))
// if err != nil {
// return err
// }
// if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
// return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
// }
// vCodeRS = jsoniter.Get(vRes.Body(), "result").ToString()
// log.Debugln("code: ", vCodeRS)
// }
// userRsa := RsaEncode([]byte(d.Username), jRsakey, true)
// passwordRsa := RsaEncode([]byte(d.Password), jRsakey, true)
// url = "https://open.e.189.cn/api/logbox/oauth2/loginSubmit.do"
// var loginResp LoginResp
// res, err = d.client.R().
// SetHeaders(map[string]string{
// "lt": lt,
// "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
// "Referer": "https://open.e.189.cn/",
// "accept": "application/json;charset=UTF-8",
// }).SetFormData(map[string]string{
// "appKey": "cloud",
// "accountType": "01",
// "userName": "{RSA}" + userRsa,
// "password": "{RSA}" + passwordRsa,
// "validateCode": vCodeRS,
// "captchaToken": captchaToken,
// "returnUrl": returnUrl,
// "mailSuffix": "@pan.cn",
// "paramId": paramId,
// "clientType": "10010",
// "dynamicCheck": "FALSE",
// "cb_SaveName": "1",
// "isOauth2": "false",
// }).Post(url)
// if err != nil {
// return err
// }
// err = utils.Json.Unmarshal(res.Body(), &loginResp)
// if err != nil {
// log.Error(err.Error())
// return err
// }
// if loginResp.Result != 0 {
// return fmt.Errorf(loginResp.Msg)
// }
// _, err = d.client.R().Get(loginResp.ToUrl)
// return err
//}
func (d *Cloud189) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
var e Error
@ -163,7 +160,7 @@ func (d *Cloud189) request(url string, method string, callback base.ReqCallback,
//log.Debug(res.String())
if e.ErrorCode != "" {
if e.ErrorCode == "InvalidSessionKey" {
err = d.login()
err = d.newLogin()
if err != nil {
return nil, err
}
@ -388,7 +385,7 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
fileMd5 := hex.EncodeToString(md5Sum.Sum(nil))
sliceMd5 := fileMd5
if file.GetSize() > DEFAULT {
sliceMd5 = utils.GetMD5Encode(strings.Join(md5s, "\n"))
sliceMd5 = utils.GetMD5EncodeStr(strings.Join(md5s, "\n"))
}
res, err = d.uploadRequest("/person/commitMultiUploadFile", map[string]string{
"uploadFileId": uploadFileId,

View File

@ -4,7 +4,6 @@ import (
"context"
"net/http"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -20,7 +19,6 @@ type Cloud189PC struct {
identity string
client *resty.Client
putClient *resty.Client
loginParam *LoginParam
tokenInfo *AppSessionResp
@ -51,12 +49,9 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
"Referer": WEB_URL,
})
}
if y.putClient == nil {
y.putClient = base.NewRestyClient().SetTimeout(120 * time.Second)
}
// 避免重复登陆
identity := utils.GetMD5Encode(y.Username + y.Password)
identity := utils.GetMD5EncodeStr(y.Username + y.Password)
if !y.isLogin() || y.identity != identity {
y.identity = identity
if err = y.login(); err != nil {
@ -266,8 +261,14 @@ func (y *Cloud189PC) Remove(ctx context.Context, obj model.Obj) error {
}
func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
if y.RapidUpload {
return y.FastUpload(ctx, dstDir, stream, up)
}
switch y.UploadMethod {
case "stream":
return y.CommonUpload(ctx, dstDir, stream, up)
case "old":
return y.OldUpload(ctx, dstDir, stream, up)
case "rapid":
return y.FastUpload(ctx, dstDir, stream, up)
default:
return y.CommonUpload(ctx, dstDir, stream, up)
}
}

View File

@ -11,6 +11,7 @@ import (
"encoding/hex"
"encoding/pem"
"fmt"
"math"
"net/http"
"regexp"
"strings"
@ -131,3 +132,18 @@ func BoolToNumber(b bool) int {
}
return 0
}
// 计算分片大小
// 对分片数量有限制
// 10MIB 20 MIB 999片
// 50MIB 60MIB 70MIB 80MIB ∞MIB 1999片
func partSize(size int64) int64 {
const DEFAULT = 1024 * 1024 * 10 // 10MIB
if size > DEFAULT*2*999 {
return int64(math.Max(math.Ceil((float64(size)/1999) /*=单个切片大小*/ /float64(DEFAULT)) /*=倍率*/, 5) * DEFAULT)
}
if size > DEFAULT*999 {
return DEFAULT * 2 // 20MIB
}
return DEFAULT
}
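
The partSize helper added above picks the upload chunk size so the slice count stays inside the limits noted in its comments (roughly: 10 MiB chunks up to 999 slices, 20 MiB up to 1999 slices, then scaled up further). A quick standalone check, reusing the function as added in the diff (comments translated), prints the chunk size chosen for a few illustrative file sizes.

package main

import (
    "fmt"
    "math"
)

// partSize copied from the diff above: choose a chunk size that keeps the
// slice count within the 999 / 1999 limits described in its comments.
func partSize(size int64) int64 {
    const DEFAULT = 1024 * 1024 * 10 // 10 MiB
    if size > DEFAULT*2*999 {
        // per-slice size = size/1999, rounded up to a multiple of 10 MiB, at least 50 MiB
        return int64(math.Max(math.Ceil((float64(size)/1999)/float64(DEFAULT)), 5) * DEFAULT)
    }
    if size > DEFAULT*999 {
        return DEFAULT * 2 // 20 MiB
    }
    return DEFAULT
}

func main() {
    const GiB = 1 << 30
    for _, size := range []int64{500 << 20, 15 * GiB, 40 * GiB} { // illustrative sizes
        fmt.Printf("file %6d MiB -> chunk %d MiB\n", size>>20, partSize(size)>>20)
    }
}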

View File

@ -14,13 +14,14 @@ type Addition struct {
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
Type string `json:"type" type:"select" options:"personal,family" default:"personal"`
FamilyID string `json:"family_id"`
RapidUpload bool `json:"rapid_upload"`
UploadMethod string `json:"upload_method" type:"select" options:"stream,rapid,old" default:"stream"`
NoUseOcr bool `json:"no_use_ocr"`
}
var config = driver.Config{
Name: "189CloudPC",
DefaultRoot: "-11",
CheckStatus: true,
}
func init() {

View File

@ -10,20 +10,62 @@ import (
// 居然有四种返回方式
type RespErr struct {
ResCode string `json:"res_code"`
ResCode any `json:"res_code"` // int or string
ResMessage string `json:"res_message"`
Error_ string `json:"error"`
XMLName xml.Name `xml:"error"`
Code string `json:"code" xml:"code"`
Message string `json:"message" xml:"message"`
// Code string `json:"code"`
Msg string `json:"msg"`
ErrorCode string `json:"errorCode"`
ErrorMsg string `json:"errorMsg"`
}
func (e *RespErr) HasError() bool {
switch v := e.ResCode.(type) {
case int, int64, int32:
return v != 0
case string:
return e.ResCode != ""
}
return (e.Code != "" && e.Code != "SUCCESS") || e.ErrorCode != "" || e.Error_ != ""
}
func (e *RespErr) Error() string {
switch v := e.ResCode.(type) {
case int, int64, int32:
if v != 0 {
return fmt.Sprintf("res_code: %d ,res_msg: %s", v, e.ResMessage)
}
case string:
if e.ResCode != "" {
return fmt.Sprintf("res_code: %s ,res_msg: %s", e.ResCode, e.ResMessage)
}
}
if e.Code != "" && e.Code != "SUCCESS" {
if e.Msg != "" {
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Msg)
}
if e.Message != "" {
return fmt.Sprintf("code: %s ,msg: %s", e.Code, e.Message)
}
return "code: " + e.Code
}
if e.ErrorCode != "" {
return fmt.Sprintf("err_code: %s ,err_msg: %s", e.ErrorCode, e.ErrorMsg)
}
if e.Error_ != "" {
return fmt.Sprintf("error: %s ,message: %s", e.ErrorCode, e.Message)
}
return ""
}
// 登陆需要的参数
type LoginParam struct {
// 加密后的用户名和密码
@ -218,6 +260,42 @@ type Part struct {
RequestHeader string `json:"requestHeader"`
}
/* 第二种上传方式 */
type CreateUploadFileResp struct {
// 上传文件请求ID
UploadFileId int64 `json:"uploadFileId"`
// 上传文件数据的URL路径
FileUploadUrl string `json:"fileUploadUrl"`
// 上传文件完成后确认路径
FileCommitUrl string `json:"fileCommitUrl"`
// 文件是否已存在云盘中0-未存在1-已存在
FileDataExists int `json:"fileDataExists"`
}
type GetUploadFileStatusResp struct {
CreateUploadFileResp
// 已上传的大小
DataSize int64 `json:"dataSize"`
Size int64 `json:"size"`
}
func (r *GetUploadFileStatusResp) GetSize() int64 {
return r.DataSize + r.Size
}
type CommitUploadFileResp struct {
XMLName xml.Name `xml:"file"`
Id string `xml:"id"`
Name string `xml:"name"`
Size string `xml:"size"`
Md5 string `xml:"md5"`
CreateDate string `xml:"createDate"`
Rev string `xml:"rev"`
UserId string `xml:"userId"`
}
/* query 加密参数*/
type Params map[string]string
func (p Params) Set(k, v string) {

View File

@ -6,6 +6,7 @@ import (
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"math"
@ -15,6 +16,7 @@ import (
"os"
"regexp"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
@ -23,9 +25,12 @@ import (
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
)
const (
@ -47,7 +52,7 @@ const (
CHANNEL_ID = "web_cloud.189.cn"
)
func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, params Params, resp interface{}) ([]byte, error) {
func (y *Cloud189PC) SignatureHeader(url, method, params string) map[string]string {
dateOfGmt := getHttpDateStr()
sessionKey := y.tokenInfo.SessionKey
sessionSecret := y.tokenInfo.SessionSecret
@ -56,19 +61,40 @@ func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, para
sessionSecret = y.tokenInfo.FamilySessionSecret
}
req := y.client.R().SetQueryParams(clientSuffix()).SetHeaders(map[string]string{
header := map[string]string{
"Date": dateOfGmt,
"SessionKey": sessionKey,
"X-Request-ID": uuid.NewString(),
})
"Signature": signatureOfHmac(sessionSecret, sessionKey, method, url, dateOfGmt, params),
}
return header
}
func (y *Cloud189PC) EncryptParams(params Params) string {
sessionSecret := y.tokenInfo.SessionSecret
if y.isFamily() {
sessionSecret = y.tokenInfo.FamilySessionSecret
}
if params != nil {
return AesECBEncrypt(params.Encode(), sessionSecret[:16])
}
return ""
}
func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, params Params, resp interface{}) ([]byte, error) {
req := y.client.R().SetQueryParams(clientSuffix())
// 设置params
var paramsData string
if params != nil {
paramsData = AesECBEncrypt(params.Encode(), sessionSecret[:16])
paramsData := y.EncryptParams(params)
if paramsData != "" {
req.SetQueryParam("params", paramsData)
}
req.SetHeader("Signature", signatureOfHmac(sessionSecret, sessionKey, method, url, dateOfGmt, paramsData))
// Signature
req.SetHeaders(y.SignatureHeader(url, method, paramsData))
var erron RespErr
req.SetError(&erron)
if callback != nil {
callback(req)
@ -80,32 +106,6 @@ func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, para
if err != nil {
return nil, err
}
var erron RespErr
utils.Json.Unmarshal(res.Body(), &erron)
if erron.ResCode != "" {
return nil, fmt.Errorf("res_code: %s ,res_msg: %s", erron.ResCode, erron.ResMessage)
}
if erron.Code != "" && erron.Code != "SUCCESS" {
if erron.Msg != "" {
return nil, fmt.Errorf("code: %s ,msg: %s", erron.Code, erron.Msg)
}
if erron.Message != "" {
return nil, fmt.Errorf("code: %s ,msg: %s", erron.Code, erron.Message)
}
return nil, fmt.Errorf(res.String())
}
switch erron.ErrorCode {
case "":
break
case "InvalidSessionKey":
if err = y.refreshSession(); err != nil {
return nil, err
}
return y.request(url, method, callback, params, resp)
default:
return nil, fmt.Errorf("err_code: %s ,err_msg: %s", erron.ErrorCode, erron.ErrorMsg)
}
if strings.Contains(res.String(), "userSessionBO is null") {
if err = y.refreshSession(); err != nil {
@ -114,14 +114,17 @@ func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, para
return y.request(url, method, callback, params, resp)
}
resCode := utils.Json.Get(res.Body(), "res_code").ToInt64()
message := utils.Json.Get(res.Body(), "res_message").ToString()
switch resCode {
case 0:
return res.Body(), nil
default:
return nil, fmt.Errorf("res_code: %d ,res_msg: %s", resCode, message)
// 处理错误
if erron.HasError() {
if erron.ErrorCode == "InvalidSessionKey" {
if err = y.refreshSession(); err != nil {
return nil, err
}
return y.request(url, method, callback, params, resp)
}
return nil, &erron
}
return res.Body(), nil
}
func (y *Cloud189PC) get(url string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
@ -132,6 +135,50 @@ func (y *Cloud189PC) post(url string, callback base.ReqCallback, resp interface{
return y.request(url, http.MethodPost, callback, nil, resp)
}
func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]string, sign bool, file io.Reader) ([]byte, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, file)
if err != nil {
return nil, err
}
query := req.URL.Query()
for key, value := range clientSuffix() {
query.Add(key, value)
}
req.URL.RawQuery = query.Encode()
for key, value := range headers {
req.Header.Add(key, value)
}
if sign {
for key, value := range y.SignatureHeader(url, http.MethodPut, "") {
req.Header.Add(key, value)
}
}
resp, err := base.HttpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
var erron RespErr
jsoniter.Unmarshal(body, &erron)
xml.Unmarshal(body, &erron)
if erron.HasError() {
return nil, &erron
}
if resp.StatusCode != http.StatusOK {
return nil, errors.Errorf("put fail,err:%s", string(body))
}
return body, nil
}
func (y *Cloud189PC) getFiles(ctx context.Context, fileId string) ([]model.Obj, error) {
fullUrl := API_URL
if y.isFamily() {
@ -186,7 +233,7 @@ func (y *Cloud189PC) getFiles(ctx context.Context, fileId string) ([]model.Obj,
func (y *Cloud189PC) login() (err error) {
// 初始化登陆所需参数
if y.loginParam == nil || !y.NoUseOcr {
if y.loginParam == nil {
if err = y.initLoginParam(); err != nil {
// 验证码也通过错误返回
return err
@ -197,7 +244,7 @@ func (y *Cloud189PC) login() (err error) {
y.VCode = ""
// 销毁登陆参数
y.loginParam = nil
// 遇到错误,重新加载登陆参数
// 遇到错误,重新加载登陆参数(刷新验证码)
if err != nil && y.NoUseOcr {
if err1 := y.initLoginParam(); err1 != nil {
err = fmt.Errorf("err1: %s \nerr2: %s", err, err1)
@ -249,9 +296,8 @@ func (y *Cloud189PC) login() (err error) {
return
}
if erron.ResCode != "" {
err = fmt.Errorf(erron.ResMessage)
return
if erron.HasError() {
return &erron
}
if tokenInfo.ResCode != 0 {
err = fmt.Errorf(tokenInfo.ResMessage)
@ -304,6 +350,22 @@ func (y *Cloud189PC) initLoginParam() error {
param.RsaPassword = encryptConf.Data.Pre + RsaEncrypt(param.jRsaKey, y.Password)
y.loginParam = &param
// 判断是否需要验证码
resp, err := y.client.R().
SetHeader("REQID", param.ReqId).
SetFormData(map[string]string{
"appKey": APP_ID,
"accountType": ACCOUNT_TYPE,
"userName": param.RsaUsername,
}).Post(AUTH_URL + "/api/logbox/oauth2/needcaptcha.do")
if err != nil {
return err
}
if resp.String() == "0" {
return nil
}
// 拉取验证码
imgRes, err := y.client.R().
SetQueryParams(map[string]string{
"token": param.CaptchaToken,
@ -359,33 +421,23 @@ func (y *Cloud189PC) refreshSession() (err error) {
}
}()
switch erron.ResCode {
case "":
break
case "UserInvalidOpenToken":
if erron.HasError() {
if erron.ResCode == "UserInvalidOpenToken" {
if err = y.login(); err != nil {
return err
}
default:
err = fmt.Errorf("res_code: %s ,res_msg: %s", erron.ResCode, erron.ResMessage)
return
}
switch userSessionResp.ResCode {
case 0:
return &erron
}
y.tokenInfo.UserSessionResp = userSessionResp
default:
err = fmt.Errorf("code: %d , msg: %s", userSessionResp.ResCode, userSessionResp.ResMessage)
}
return
}
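The error paths above now return &erron directly, which relies on RespErr satisfying the error interface. A minimal sketch of what that presumably looks like (the real definition lives in the driver's types file; the method bodies here are assumptions based on the fields used in this diff):
// Hypothetical sketch: RespErr doubles as an error so handlers can
// return &erron once HasError() reports a failure.
func (e *RespErr) HasError() bool {
	return e.ResCode != "" || e.ErrorCode != ""
}
func (e *RespErr) Error() string {
	if e.ErrorCode != "" {
		return fmt.Sprintf("err_code: %s, err_msg: %s", e.ErrorCode, e.ErrorMsg)
	}
	return fmt.Sprintf("res_code: %s, res_msg: %s", e.ResCode, e.ResMessage)
}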
// normal upload
func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (err error) {
const DEFAULT int64 = 10485760
var count = int64(math.Ceil(float64(file.GetSize()) / float64(DEFAULT)))
var DEFAULT = partSize(file.GetSize())
var count = int(math.Ceil(float64(file.GetSize()) / float64(DEFAULT)))
requestID := uuid.NewString()
params := Params{
"parentFolderId": dstDir.GetID(),
"fileName": url.QueryEscape(file.GetName()),
@ -407,7 +459,6 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
var initMultiUpload InitMultiUploadResp
_, err = y.request(fullUrl+"/initMultiUpload", http.MethodGet, func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, params, &initMultiUpload)
if err != nil {
return err
@ -417,7 +468,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
silceMd5 := md5.New()
silceMd5Hexs := make([]string, 0, count)
byteData := bytes.NewBuffer(make([]byte, DEFAULT))
for i := int64(1); i <= count; i++ {
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
@ -440,7 +491,6 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
_, err = y.request(fullUrl+"/getMultiUploadUrls", http.MethodGet,
func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, Params{
"partInfo": fmt.Sprintf("%d-%s", i, silceMd5Base64),
"uploadFileId": initMultiUpload.Data.UploadFileID,
@ -451,32 +501,31 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
// start uploading
uploadData := uploadUrl.UploadUrls[fmt.Sprint("partNumber_", i)]
res, err := y.putClient.R().
SetContext(ctx).
SetQueryParams(clientSuffix()).
SetHeaders(ParseHttpHeader(uploadData.RequestHeader)).
SetBody(byteData).
Put(uploadData.RequestURL)
err = retry.Do(func() error {
_, err := y.put(ctx, uploadData.RequestURL, ParseHttpHeader(uploadData.RequestHeader), false, bytes.NewReader(byteData.Bytes()))
return err
},
retry.Context(ctx),
retry.Attempts(3),
retry.Delay(time.Second),
retry.MaxDelay(5*time.Second))
if err != nil {
return err
}
if res.StatusCode() != http.StatusOK {
return fmt.Errorf("updload fail,msg: %s", res.String())
}
up(int(i * 100 / count))
}
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
sliceMd5Hex := fileMd5Hex
if file.GetSize() > DEFAULT {
sliceMd5Hex = strings.ToUpper(utils.GetMD5Encode(strings.Join(silceMd5Hexs, "\n")))
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
}
// commit the upload
_, err = y.request(fullUrl+"/commitMultiUploadFile", http.MethodGet,
func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, Params{
"uploadFileId": initMultiUpload.Data.UploadFileID,
"fileMd5": fileMd5Hex,
@ -500,7 +549,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
_ = os.Remove(tempFile.Name())
}()
const DEFAULT int64 = 10485760
var DEFAULT = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(DEFAULT)))
// compute the required information first
@ -528,10 +577,9 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
sliceMd5Hex := fileMd5Hex
if file.GetSize() > DEFAULT {
sliceMd5Hex = strings.ToUpper(utils.GetMD5Encode(strings.Join(silceMd5Hexs, "\n")))
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
}
requestID := uuid.NewString()
// check whether rapid upload is supported
params := Params{
"parentFolderId": dstDir.GetID(),
@ -554,7 +602,6 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
var uploadInfo InitMultiUploadResp
_, err = y.request(fullUrl+"/initMultiUpload", http.MethodGet, func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, params, &uploadInfo)
if err != nil {
return err
@ -566,7 +613,6 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
_, err = y.request(fullUrl+"/getMultiUploadUrls", http.MethodGet,
func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, Params{
"uploadFileId": uploadInfo.Data.UploadFileID,
"partInfo": strings.Join(silceMd5Base64s, ","),
@ -575,26 +621,29 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
return err
}
buf := make([]byte, DEFAULT)
for i := 1; i <= count; i++ {
select {
case <-ctx.Done():
if utils.IsCanceled(ctx) {
return ctx.Err()
default:
}
n, err := io.ReadFull(tempFile, buf)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return err
}
uploadData := uploadUrls.UploadUrls[fmt.Sprint("partNumber_", i)]
res, err := y.putClient.R().
SetContext(ctx).
SetQueryParams(clientSuffix()).
SetHeaders(ParseHttpHeader(uploadData.RequestHeader)).
SetBody(io.LimitReader(tempFile, DEFAULT)).
Put(uploadData.RequestURL)
err = retry.Do(func() error {
_, err := y.put(ctx, uploadData.RequestURL, ParseHttpHeader(uploadData.RequestHeader), false, bytes.NewReader(buf[:n]))
return err
},
retry.Context(ctx),
retry.Attempts(3),
retry.Delay(time.Second),
retry.MaxDelay(5*time.Second))
if err != nil {
return err
}
if res.StatusCode() != http.StatusOK {
return fmt.Errorf("updload fail,msg: %s", res.String())
}
up(int(i * 100 / count))
}
}
@ -603,7 +652,6 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
_, err = y.request(fullUrl+"/commitMultiUploadFile", http.MethodGet,
func(req *resty.Request) {
req.SetContext(ctx)
req.SetHeader("X-Request-ID", requestID)
}, Params{
"uploadFileId": uploadInfo.Data.UploadFileID,
"isLog": "0",
@ -612,6 +660,137 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
return err
}
func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (err error) {
// the full-file MD5 is required, so the reader must support io.Seek
tempFile, err := utils.CreateTempFile(file.GetReadCloser())
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
_ = os.Remove(tempFile.Name())
}()
// calculate the MD5
fileMd5 := md5.New()
if _, err := io.Copy(fileMd5, tempFile); err != nil {
return err
}
if _, err = tempFile.Seek(0, io.SeekStart); err != nil {
return err
}
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
// create an upload session
var uploadInfo CreateUploadFileResp
fullUrl := API_URL + "/createUploadFile.action"
if y.isFamily() {
fullUrl = API_URL + "/family/file/createFamilyFile.action"
}
_, err = y.post(fullUrl, func(req *resty.Request) {
req.SetContext(ctx)
if y.isFamily() {
req.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"fileMd5": fileMd5Hex,
"fileName": file.GetName(),
"fileSize": fmt.Sprint(file.GetSize()),
"parentId": dstDir.GetID(),
"resumePolicy": "1",
})
} else {
req.SetFormData(map[string]string{
"parentFolderId": dstDir.GetID(),
"fileName": file.GetName(),
"size": fmt.Sprint(file.GetSize()),
"md5": fileMd5Hex,
"opertype": "3",
"flag": "1",
"resumePolicy": "1",
"isLog": "0",
// "baseFileId": "",
// "lastWrite":"",
// "localPath": strings.ReplaceAll(param.LocalPath, "\\", "/"),
// "fileExt": "",
})
}
}, &uploadInfo)
if err != nil {
return err
}
// the file does not yet exist on the drive, start uploading
status := GetUploadFileStatusResp{CreateUploadFileResp: uploadInfo}
for status.Size < file.GetSize() && status.FileDataExists != 1 {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
header := map[string]string{
"ResumePolicy": "1",
"Expect": "100-continue",
}
if y.isFamily() {
header["FamilyId"] = fmt.Sprint(y.FamilyID)
header["UploadFileId"] = fmt.Sprint(status.UploadFileId)
} else {
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
}
_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile))
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return err
}
// get the resume (breakpoint) status
fullUrl := API_URL + "/getUploadFileStatus.action"
if y.isFamily() {
fullUrl = API_URL + "/family/file/getFamilyFileStatus.action"
}
_, err = y.get(fullUrl, func(req *resty.Request) {
req.SetContext(ctx).SetQueryParams(map[string]string{
"uploadFileId": fmt.Sprint(status.UploadFileId),
"resumePolicy": "1",
})
if y.isFamily() {
req.SetQueryParam("familyId", fmt.Sprint(y.FamilyID))
}
}, &status)
if err != nil {
return err
}
if _, err := tempFile.Seek(status.GetSize(), io.SeekStart); err != nil {
return err
}
up(int(status.Size * 100 / file.GetSize()))
}
// commit
var resp CommitUploadFileResp
_, err = y.post(status.FileCommitUrl, func(req *resty.Request) {
req.SetContext(ctx)
if y.isFamily() {
req.SetHeaders(map[string]string{
"ResumePolicy": "1",
"UploadFileId": fmt.Sprint(status.UploadFileId),
"FamilyId": fmt.Sprint(y.FamilyID),
})
} else {
req.SetFormData(map[string]string{
"opertype": "3",
"resumePolicy": "1",
"uploadFileId": fmt.Sprint(status.UploadFileId),
"isLog": "0",
})
}
}, &resp)
return err
}
func (y *Cloud189PC) isFamily() bool {
return y.Type == "family"
}

View File

@ -103,7 +103,8 @@ func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs)
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
return &model.Link{
URL: fmt.Sprintf("/p%s?sign=%s",
URL: fmt.Sprintf("%s/p%s?sign=%s",
common.GetApiUrl(args.HttpReq),
utils.EncodePath(reqPath, true),
sign.Sign(reqPath)),
}, nil

View File

@ -175,6 +175,7 @@ func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
req.SetHeader("File-Path", path.Join(dstDir.GetPath(), stream.GetName())).
SetHeader("Password", d.MetaPassword).
SetHeader("Content-Length", strconv.FormatInt(stream.GetSize(), 10)).
SetContentLength(true).
SetBody(stream.GetReadCloser())
})
return err

View File

@ -9,6 +9,7 @@ import (
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
func (d *AListV3) login() error {
@ -38,6 +39,7 @@ func (d *AListV3) request(api, method string, callback base.ReqCallback, retry .
if err != nil {
return nil, err
}
log.Debugf("[alist_v3] response body: %s", res.String())
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed, status: %s", res.Status())
}

View File

@ -67,7 +67,7 @@ func (d *AliDrive) Init(ctx context.Context) error {
return nil
}
// init deviceID
deviceID := utils.GetSHA256Encode(d.UserID)
deviceID := utils.GetSHA256Encode([]byte(d.UserID))
// init privateKey
privateKey, _ := NewPrivateKeyFromHex(deviceID)
state := State{
@ -193,7 +193,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
if d.RapidUpload {
buf := bytes.NewBuffer(make([]byte, 0, 1024))
io.CopyN(buf, file, 1024)
reqBody["pre_hash"] = utils.GetSHA1Encode(buf.String())
reqBody["pre_hash"] = utils.GetSHA1Encode(buf.Bytes())
if localFile != nil {
if _, err := localFile.Seek(0, io.SeekStart); err != nil {
return err
@ -259,7 +259,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
(t.file.slice(o.toNumber(), Math.min(o.plus(8).toNumber(), t.file.size)))
*/
buf := make([]byte, 8)
r, _ := new(big.Int).SetString(utils.GetMD5Encode(d.AccessToken)[:16], 16)
r, _ := new(big.Int).SetString(utils.GetMD5EncodeStr(d.AccessToken)[:16], 16)
i := new(big.Int).SetInt64(file.GetSize())
o := new(big.Int).SetInt64(0)
if file.GetSize() > 0 {

View File

@ -2,8 +2,7 @@ package aliyundrive_open
import (
"context"
"io"
"math"
"fmt"
"net/http"
"time"
@ -21,6 +20,9 @@ type AliyundriveOpen struct {
base string
DriveId string
limitList func(ctx context.Context, data base.Json) (*Files, error)
limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
}
func (d *AliyundriveOpen) Config() driver.Config {
@ -37,6 +39,8 @@ func (d *AliyundriveOpen) Init(ctx context.Context) error {
return err
}
d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
d.limitList = utils.LimitRateCtx(d.list, time.Second/4)
d.limitLink = utils.LimitRateCtx(d.link, time.Second)
return nil
}
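The list and link calls are now wrapped with a rate limiter: at most one list call every 250 ms (time.Second/4) and one link call per second. A rough sketch of a wrapper of that shape, assuming a token-bucket limiter from golang.org/x/time/rate (the actual utils.LimitRateCtx implementation may differ):
// Hypothetical sketch in the spirit of utils.LimitRateCtx: block until the
// limiter grants a token, then call fn. Requires the context, time and
// golang.org/x/time/rate imports.
func limitRateCtx[T, R any](fn func(context.Context, T) (R, error), interval time.Duration) func(context.Context, T) (R, error) {
	limiter := rate.NewLimiter(rate.Every(interval), 1)
	return func(ctx context.Context, arg T) (R, error) {
		if err := limiter.Wait(ctx); err != nil {
			var zero R
			return zero, err
		}
		return fn(ctx, arg)
	}
}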
@ -45,7 +49,10 @@ func (d *AliyundriveOpen) Drop(ctx context.Context) error {
}
func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(dir.GetID())
if d.limitList == nil {
return nil, fmt.Errorf("driver not init")
}
files, err := d.getFiles(ctx, dir.GetID())
if err != nil {
return nil, err
}
@ -54,7 +61,7 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
})
}
func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
res, err := d.request("/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
@ -66,11 +73,20 @@ func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.L
return nil, err
}
url := utils.Json.Get(res, "url").ToString()
exp := time.Hour
return &model.Link{
URL: url,
Expiration: &exp,
}, nil
}
func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.limitLink == nil {
return nil, fmt.Errorf("driver not init")
}
return d.limitLink(ctx, file)
}
func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
@ -135,59 +151,7 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *AliyundriveOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// rapid_upload is not currently supported
// 1. create
const DEFAULT int64 = 20971520
createData := base.Json{
"drive_id": d.DriveId,
"parent_file_id": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"check_name_mode": "ignore",
}
count := 1
if stream.GetSize() > DEFAULT {
count = int(math.Ceil(float64(stream.GetSize()) / float64(DEFAULT)))
createData["part_info_list"] = makePartInfos(count)
}
var createResp CreateResp
_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
if err != nil {
return err
}
// 2. upload
preTime := time.Now()
for i := 1; i <= len(createResp.PartInfoList); i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
err = d.uploadPart(ctx, i, count, utils.NewMultiReadable(io.LimitReader(stream, DEFAULT)), &createResp, true)
if err != nil {
return err
}
if count > 0 {
up(i * 100 / count)
}
// refresh upload url if 50 minutes passed
if time.Since(preTime) > 50*time.Minute {
createResp.PartInfoList, err = d.getUploadUrl(count, createResp.FileId, createResp.UploadId)
if err != nil {
return err
}
preTime = time.Now()
}
}
// 3. complete
_, err = d.request("/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": createResp.FileId,
"upload_id": createResp.UploadId,
})
})
return err
return d.upload(ctx, dstDir, stream, up)
}
func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {

View File

@ -10,10 +10,11 @@ type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.nn.ci/alist/ali_open/token"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.xhofe.top/alist/ali_open/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
RapidUpload bool `json:"rapid_upload" help:"If you enable this option, the file will be uploaded to the server first, so the progress will be incorrect"`
InternalUpload bool `json:"internal_upload" help:"If you are using an Aliyun ECS instance located in Beijing, you can turn this on to boost the upload speed"`
AccessToken string
}

View File

@ -0,0 +1,267 @@
package aliyundrive_open
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/hex"
"fmt"
"io"
"math"
"net/http"
"os"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
func makePartInfos(size int) []base.Json {
partInfoList := make([]base.Json, size)
for i := 0; i < size; i++ {
partInfoList[i] = base.Json{"part_number": 1 + i}
}
return partInfoList
}
func calPartSize(fileSize int64) int64 {
var partSize int64 = 20 * 1024 * 1024
if fileSize > partSize {
if fileSize > 1*1024*1024*1024*1024 { // file Size over 1TB
partSize = 5 * 1024 * 1024 * 1024 // file part size 5GB
} else if fileSize > 768*1024*1024*1024 { // over 768GB
partSize = 109951163 // ≈ 104.8576MB, splits 1 TiB into 10,000 parts
} else if fileSize > 512*1024*1024*1024 { // over 512GB
partSize = 82463373 // ≈ 78.6432MB
} else if fileSize > 384*1024*1024*1024 { // over 384GB
partSize = 54975582 // ≈ 52.4288MB
} else if fileSize > 256*1024*1024*1024 { // over 256GB
partSize = 41231687 // ≈ 39.3216MB
} else if fileSize > 128*1024*1024*1024 { // over 128GB
partSize = 27487791 // ≈ 26.2144MB
}
}
return partSize
}
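The thresholds above grow the part size with the file size so that the slice count stays at or below the 10,000-part ceiling mentioned in the upload comments. A small illustration of the resulting part counts (not part of the driver; math is already imported in this file):
// Illustrative only: number of parts calPartSize yields for a given size.
// e.g. a 1 GiB file uses the default 20 MiB parts -> 52 parts,
// while a 1 TiB file uses ~104.86 MB parts -> 10,000 parts.
func estimatedParts(fileSize int64) int {
	return int(math.Ceil(float64(fileSize) / float64(calPartSize(fileSize))))
}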
func (d *AliyundriveOpen) getUploadUrl(count int, fileId, uploadId string) ([]PartInfo, error) {
partInfoList := makePartInfos(count)
var resp CreateResp
_, err := d.request("/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": fileId,
"part_info_list": partInfoList,
"upload_id": uploadId,
}).SetResult(&resp)
})
return resp.PartInfoList, err
}
func (d *AliyundriveOpen) uploadPart(ctx context.Context, i, count int, reader *utils.MultiReadable, resp *CreateResp, retry bool) error {
partInfo := resp.PartInfoList[i-1]
uploadUrl := partInfo.UploadUrl
if d.InternalUpload {
uploadUrl = strings.ReplaceAll(uploadUrl, "https://cn-beijing-data.aliyundrive.net/", "http://ccp-bj29-bj-1592982087.oss-cn-beijing-internal.aliyuncs.com/")
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
if err != nil {
return err
}
req = req.WithContext(ctx)
res, err := base.HttpClient.Do(req)
if err != nil {
if retry {
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
return err
}
res.Body.Close()
if retry && res.StatusCode == http.StatusForbidden {
resp.PartInfoList, err = d.getUploadUrl(count, resp.FileId, resp.UploadId)
if err != nil {
return err
}
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
return nil
}
func (d *AliyundriveOpen) normalUpload(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress, createResp CreateResp, count int, partSize int64) error {
log.Debugf("[aliyundive_open] normal upload")
// 2. upload
preTime := time.Now()
for i := 1; i <= len(createResp.PartInfoList); i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
err := d.uploadPart(ctx, i, count, utils.NewMultiReadable(io.LimitReader(stream, partSize)), &createResp, true)
if err != nil {
return err
}
if count > 0 {
up(i * 100 / count)
}
// refresh upload url if 50 minutes passed
if time.Since(preTime) > 50*time.Minute {
createResp.PartInfoList, err = d.getUploadUrl(count, createResp.FileId, createResp.UploadId)
if err != nil {
return err
}
preTime = time.Now()
}
}
// 3. complete
_, err := d.request("/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": createResp.FileId,
"upload_id": createResp.UploadId,
})
})
return err
}
type ProofRange struct {
Start int64
End int64
}
func getProofRange(input string, size int64) (*ProofRange, error) {
if size == 0 {
return &ProofRange{}, nil
}
tmpStr := utils.GetMD5EncodeStr(input)[0:16]
tmpInt, err := strconv.ParseUint(tmpStr, 16, 64)
if err != nil {
return nil, err
}
index := tmpInt % uint64(size)
pr := &ProofRange{
Start: int64(index),
End: int64(index) + 8,
}
if pr.End >= size {
pr.End = size
}
return pr, nil
}
func (d *AliyundriveOpen) calProofCode(file *os.File, fileSize int64) (string, error) {
proofRange, err := getProofRange(d.AccessToken, fileSize)
if err != nil {
return "", err
}
buf := make([]byte, proofRange.End-proofRange.Start)
_, err = file.ReadAt(buf, proofRange.Start)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf), nil
}
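getProofRange derives an 8-byte window from the first 16 hex characters of MD5(access_token), parsed as an integer and taken modulo the file size; calProofCode then base64-encodes the file bytes in that window. A quick usage sketch with made-up inputs:
// Illustrative only (hypothetical token): print where the 8-byte proof
// window lands for a 1 MiB file.
func exampleProofWindow() {
	pr, err := getProofRange("example-access-token", 1<<20)
	if err != nil {
		panic(err)
	}
	fmt.Printf("proof bytes: [%d, %d)\n", pr.Start, pr.End)
}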
func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// 1. create
// Part Size Unit: Bytes, Default: 20MB,
// Maximum number of slices 10,000, ≈195.3125GB
var partSize = calPartSize(stream.GetSize())
createData := base.Json{
"drive_id": d.DriveId,
"parent_file_id": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"check_name_mode": "ignore",
}
count := int(math.Ceil(float64(stream.GetSize()) / float64(partSize)))
createData["part_info_list"] = makePartInfos(count)
// rapid upload
rapidUpload := stream.GetSize() > 100*1024 && d.RapidUpload
if rapidUpload {
log.Debugf("[aliyundrive_open] start cal pre_hash")
// read 1024 bytes to calculate pre hash
buf := bytes.NewBuffer(make([]byte, 0, 1024))
_, err := io.CopyN(buf, stream, 1024)
if err != nil {
return err
}
createData["size"] = stream.GetSize()
createData["pre_hash"] = utils.GetSHA1Encode(buf.Bytes())
// if support seek, seek to start
if localFile, ok := stream.(io.Seeker); ok {
if _, err := localFile.Seek(0, io.SeekStart); err != nil {
return err
}
} else {
// put the bytes already read for the pre_hash back at the head of the stream
stream.SetReadCloser(struct {
io.Reader
io.Closer
}{
Reader: io.MultiReader(buf, stream.GetReadCloser()),
Closer: stream.GetReadCloser(),
})
}
}
var createResp CreateResp
_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
if err != nil {
if e.Code != "PreHashMatched" || !rapidUpload {
return err
}
log.Debugf("[aliyundrive_open] pre_hash matched, start rapid upload")
// convert to local file
file, err := utils.CreateTempFile(stream)
if err != nil {
return err
}
// calculate full hash
h := sha1.New()
_, err = io.Copy(h, file)
if err != nil {
return err
}
delete(createData, "pre_hash")
createData["proof_version"] = "v1"
createData["content_hash_name"] = "sha1"
createData["content_hash"] = hex.EncodeToString(h.Sum(nil))
// seek to start
if _, err = file.Seek(0, io.SeekStart); err != nil {
return err
}
createData["proof_code"], err = d.calProofCode(file, stream.GetSize())
if err != nil {
return fmt.Errorf("cal proof code error: %s", err.Error())
}
_, err = d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
if err != nil {
return err
}
if createResp.RapidUpload {
log.Debugf("[aliyundrive_open] rapid upload success, file id: %s", createResp.FileId)
return nil
}
// rapid upload failed, fall back to normal upload
if _, err = file.Seek(0, io.SeekStart); err != nil {
return err
}
stream.SetReadCloser(file)
}
log.Debugf("[aliyundrive_open] create file success, resp: %+v", createResp)
return d.normalUpload(ctx, stream, up, createResp, count, partSize)
}

View File

@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op"
@ -48,6 +47,11 @@ func (d *AliyundriveOpen) refreshToken() error {
}
func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
b, err, _ := d.requestReturnErrResp(uri, method, callback, retry...)
return b, err
}
func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
req := base.RestyClient.R()
// TODO check whether access_token is expired
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
@ -61,30 +65,40 @@ func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback,
req.SetError(&e)
res, err := req.Execute(method, d.base+uri)
if err != nil {
return nil, err
return nil, err, nil
}
isRetry := len(retry) > 0 && retry[0]
if e.Code != "" {
if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.AccessToken == "") {
err = d.refreshToken()
if err != nil {
return nil, err
return nil, err, nil
}
return d.request(uri, method, callback, true)
return d.requestReturnErrResp(uri, method, callback, true)
}
return nil, fmt.Errorf("%s:%s", e.Code, e.Message)
return nil, fmt.Errorf("%s:%s", e.Code, e.Message), &e
}
return res.Body(), nil
return res.Body(), nil, nil
}
func (d *AliyundriveOpen) getFiles(fileId string) ([]File, error) {
func (d *AliyundriveOpen) list(ctx context.Context, data base.Json) (*Files, error) {
var resp Files
_, err := d.request("/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *AliyundriveOpen) getFiles(ctx context.Context, fileId string) ([]File, error) {
marker := "first"
res := make([]File, 0)
for marker != "" {
if marker == "first" {
marker = ""
}
var resp Files
data := base.Json{
"drive_id": d.DriveId,
"limit": 200,
@ -98,9 +112,7 @@ func (d *AliyundriveOpen) getFiles(fileId string) ([]File, error) {
//"video_thumbnail_width": 480,
//"image_thumbnail_width": 480,
}
_, err := d.request("/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
})
resp, err := d.limitList(ctx, data)
if err != nil {
return nil, err
}
@ -109,59 +121,3 @@ func (d *AliyundriveOpen) getFiles(fileId string) ([]File, error) {
}
return res, nil
}
func makePartInfos(size int) []base.Json {
partInfoList := make([]base.Json, size)
for i := 0; i < size; i++ {
partInfoList[i] = base.Json{"part_number": 1 + i}
}
return partInfoList
}
func (d *AliyundriveOpen) getUploadUrl(count int, fileId, uploadId string) ([]PartInfo, error) {
partInfoList := makePartInfos(count)
var resp CreateResp
_, err := d.request("/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": fileId,
"part_info_list": partInfoList,
"upload_id": uploadId,
}).SetResult(&resp)
})
return resp.PartInfoList, err
}
func (d *AliyundriveOpen) uploadPart(ctx context.Context, i, count int, reader *utils.MultiReadable, resp *CreateResp, retry bool) error {
partInfo := resp.PartInfoList[i-1]
uploadUrl := partInfo.UploadUrl
if d.InternalUpload {
uploadUrl = strings.ReplaceAll(uploadUrl, "https://cn-beijing-data.aliyundrive.net/", "http://ccp-bj29-bj-1592982087.oss-cn-beijing-internal.aliyuncs.com/")
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
if err != nil {
return err
}
req = req.WithContext(ctx)
res, err := base.HttpClient.Do(req)
if err != nil {
if retry {
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
return err
}
res.Body.Close()
if retry && res.StatusCode == http.StatusForbidden {
resp.PartInfoList, err = d.getUploadUrl(count, resp.FileId, resp.UploadId)
if err != nil {
return err
}
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
return nil
}

View File

@ -2,6 +2,7 @@ package aliyundrive_share
import (
"context"
"fmt"
"net/http"
"time"
@ -22,6 +23,9 @@ type AliyundriveShare struct {
ShareToken string
DriveId string
cron *cron.Cron
limitList func(ctx context.Context, dir model.Obj) ([]model.Obj, error)
limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
}
func (d *AliyundriveShare) Config() driver.Config {
@ -48,6 +52,8 @@ func (d *AliyundriveShare) Init(ctx context.Context) error {
log.Errorf("%+v", err)
}
})
d.limitList = utils.LimitRateCtx(d.list, time.Second/4)
d.limitLink = utils.LimitRateCtx(d.link, time.Second)
return nil
}
@ -60,6 +66,13 @@ func (d *AliyundriveShare) Drop(ctx context.Context) error {
}
func (d *AliyundriveShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.limitList == nil {
return nil, fmt.Errorf("driver not init")
}
return d.limitList(ctx, dir)
}
func (d *AliyundriveShare) list(ctx context.Context, dir model.Obj) ([]model.Obj, error) {
files, err := d.getFiles(dir.GetID())
if err != nil {
return nil, err
@ -70,6 +83,13 @@ func (d *AliyundriveShare) List(ctx context.Context, dir model.Obj, args model.L
}
func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.limitLink == nil {
return nil, fmt.Errorf("driver not init")
}
return d.limitLink(ctx, file)
}
func (d *AliyundriveShare) link(ctx context.Context, file model.Obj) (*model.Link, error) {
data := base.Json{
"drive_id": d.DriveId,
"file_id": file.GetID(),
@ -79,7 +99,7 @@ func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.
}
var resp ShareLinkResp
_, err := d.request("https://api.aliyundrive.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
req.SetHeader(CanaryHeaderKey, CanaryHeaderValue).SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err

View File

@ -9,6 +9,12 @@ import (
log "github.com/sirupsen/logrus"
)
const (
// CanaryHeaderKey and CanaryHeaderValue are used to lift rate limit restrictions
CanaryHeaderKey = "X-Canary"
CanaryHeaderValue = "client=web,app=share,version=v2.3.1"
)
func (d *AliyundriveShare) refreshToken() error {
url := "https://auth.aliyundrive.com/v2/account/token"
var resp base.TokenResp
@ -58,6 +64,7 @@ func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback
SetError(&e).
SetHeader("content-type", "application/json").
SetHeader("Authorization", "Bearer\t"+d.AccessToken).
SetHeader(CanaryHeaderKey, CanaryHeaderValue).
SetHeader("x-share-token", d.ShareToken)
if callback != nil {
callback(req)
@ -91,7 +98,7 @@ func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
data := base.Json{
"image_thumbnail_process": "image/resize,w_160/format,jpeg",
"image_url_process": "image/resize,w_1920/format,jpeg",
"limit": 100,
"limit": 200,
"order_by": d.OrderBy,
"order_direction": d.OrderDirection,
"parent_file_id": fileId,
@ -107,6 +114,7 @@ func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
var resp ListResp
res, err := base.RestyClient.R().
SetHeader("x-share-token", d.ShareToken).
SetHeader(CanaryHeaderKey, CanaryHeaderValue).
SetResult(&resp).SetError(&e).SetBody(data).
Post("https://api.aliyundrive.com/adrive/v3/file/list")
if err != nil {

View File

@ -16,18 +16,21 @@ import (
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"
_ "github.com/alist-org/alist/v3/drivers/google_drive"
_ "github.com/alist-org/alist/v3/drivers/google_photo"
_ "github.com/alist-org/alist/v3/drivers/ipfs_api"
_ "github.com/alist-org/alist/v3/drivers/lanzou"
_ "github.com/alist-org/alist/v3/drivers/local"
_ "github.com/alist-org/alist/v3/drivers/mediatrack"
_ "github.com/alist-org/alist/v3/drivers/mega"
_ "github.com/alist-org/alist/v3/drivers/mopan"
_ "github.com/alist-org/alist/v3/drivers/onedrive"
_ "github.com/alist-org/alist/v3/drivers/onedrive_app"
_ "github.com/alist-org/alist/v3/drivers/pikpak"
_ "github.com/alist-org/alist/v3/drivers/pikpak_share"
_ "github.com/alist-org/alist/v3/drivers/quark"
_ "github.com/alist-org/alist/v3/drivers/quark_uc"
_ "github.com/alist-org/alist/v3/drivers/s3"
_ "github.com/alist-org/alist/v3/drivers/seafile"
_ "github.com/alist-org/alist/v3/drivers/sftp"
@ -40,6 +43,7 @@ import (
_ "github.com/alist-org/alist/v3/drivers/uss"
_ "github.com/alist-org/alist/v3/drivers/virtual"
_ "github.com/alist-org/alist/v3/drivers/webdav"
_ "github.com/alist-org/alist/v3/drivers/wopan"
_ "github.com/alist-org/alist/v3/drivers/yandex_disk"
)

View File

@ -23,7 +23,6 @@ import (
type BaiduNetdisk struct {
model.Storage
Addition
AccessToken string
}
func (d *BaiduNetdisk) Config() driver.Config {
@ -35,7 +34,11 @@ func (d *BaiduNetdisk) GetAddition() driver.Additional {
}
func (d *BaiduNetdisk) Init(ctx context.Context) error {
return d.refreshToken()
res, err := d.get("/xpan/nas", map[string]string{
"method": "uinfo",
}, nil)
log.Debugf("[baidu] get uinfo: %s", string(res))
return err
}
func (d *BaiduNetdisk) Drop(ctx context.Context) error {

View File

@ -13,6 +13,8 @@ type Addition struct {
DownloadAPI string `json:"download_api" type:"select" options:"official,crack" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
}
var config = driver.Config{

View File

@ -154,7 +154,7 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
"target": fmt.Sprintf("[\"%s\"]", file.GetPath()),
"dlink": "1",
"web": "5",
//"origin": "dlna",
"origin": "dlna",
}
_, err := d.request("https://pan.baidu.com/api/filemetas", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param)
@ -165,7 +165,7 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
return &model.Link{
URL: resp.Info[0].Dlink,
Header: http.Header{
"User-Agent": []string{"netdisk"},
"User-Agent": []string{d.CustomCrackUA},
},
}, nil
}

View File

@ -4,7 +4,6 @@ import (
"fmt"
"math"
"math/rand"
"regexp"
"strings"
"time"
@ -16,11 +15,6 @@ func getTid() string {
return fmt.Sprintf("3%d%.0f", time.Now().Unix(), math.Floor(9000000*rand.Float64()+1000000))
}
// check the name
func checkName(name string) bool {
return len(name) <= 20 && regexp.MustCompile("[\u4e00-\u9fa5A-Za-z0-9_-]").MatchString(name)
}
func toTime(t int64) *time.Time {
tm := time.Unix(t, 0)
return &tm

View File

@ -2,7 +2,6 @@ package baiduphoto
import (
"context"
"errors"
"fmt"
"net/http"
@ -22,10 +21,6 @@ const (
FILE_API_URL_V2 = API_URL + "/file/v2"
)
var (
ErrNotSupportName = errors.New("only chinese and english, numbers and underscores are supported, and the length is no more than 20")
)
func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R().
SetQueryParam("access_token", d.AccessToken)
@ -48,6 +43,8 @@ func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallba
return nil, fmt.Errorf("you have joined album")
case 50820:
return nil, fmt.Errorf("no shared albums found")
case 50100:
return nil, fmt.Errorf("illegal title, only supports 50 characters")
case -6:
if err = d.refreshToken(); err != nil {
return nil, err
@ -188,9 +185,6 @@ func (d *BaiduPhoto) GetAllAlbumFile(ctx context.Context, album *Album, passwd s
// 创建相册
func (d *BaiduPhoto) CreateAlbum(ctx context.Context, name string) (*Album, error) {
if !checkName(name) {
return nil, ErrNotSupportName
}
var resp JoinOrCreateAlbumResp
_, err := d.Post(ALBUM_API_URL+"/create", func(r *resty.Request) {
r.SetContext(ctx).SetResult(&resp)
@ -208,10 +202,6 @@ func (d *BaiduPhoto) CreateAlbum(ctx context.Context, name string) (*Album, erro
// 相册改名
func (d *BaiduPhoto) SetAlbumName(ctx context.Context, album *Album, name string) (*Album, error) {
if !checkName(name) {
return nil, ErrNotSupportName
}
_, err := d.Post(ALBUM_API_URL+"/settitle", func(r *resty.Request) {
r.SetContext(ctx)
r.SetFormData(map[string]string{

View File

@ -1,31 +1,49 @@
package base
import (
"crypto/tls"
"net/http"
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/go-resty/resty/v2"
)
var NoRedirectClient *resty.Client
var RestyClient = NewRestyClient()
var HttpClient = &http.Client{}
var (
NoRedirectClient *resty.Client
RestyClient *resty.Client
HttpClient *http.Client
)
var UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
var DefaultTimeout = time.Second * 30
func init() {
func InitClient() {
NoRedirectClient = resty.New().SetRedirectPolicy(
resty.RedirectPolicyFunc(func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}),
)
).SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
NoRedirectClient.SetHeader("user-agent", UserAgent)
RestyClient = NewRestyClient()
HttpClient = NewHttpClient()
}
func NewRestyClient() *resty.Client {
client := resty.New().
SetHeader("user-agent", UserAgent).
SetRetryCount(3).
SetTimeout(DefaultTimeout)
SetTimeout(DefaultTimeout).
SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
return client
}
func NewHttpClient() *http.Client {
return &http.Client{
Timeout: time.Hour * 48,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify},
},
}
}
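With the package-level init() removed, NoRedirectClient, RestyClient and HttpClient are nil until InitClient runs, so whatever bootstraps the application is assumed to call it after the config (and its TlsInsecureSkipVerify flag) has been loaded. A minimal sketch of that assumed order (loadConfig is hypothetical):
// Hypothetical bootstrap order: config first, then the shared clients.
func bootstrap() {
	loadConfig()      // hypothetical: populates conf.Conf, including TlsInsecureSkipVerify
	base.InitClient() // builds NoRedirectClient, RestyClient and HttpClient
}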

View File

@ -5,6 +5,7 @@ import (
"io"
"net/http"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -16,7 +17,6 @@ import (
type Cloudreve struct {
model.Storage
Addition
Cookie string
}
func (d *Cloudreve) Config() driver.Config {
@ -28,6 +28,11 @@ func (d *Cloudreve) GetAddition() driver.Additional {
}
func (d *Cloudreve) Init(ctx context.Context) error {
if d.Cookie != "" {
return nil
}
// remove the trailing slash
d.Address = strings.TrimSuffix(d.Address, "/")
return d.login()
}

View File

@ -10,8 +10,9 @@ type Addition struct {
driver.RootPath
// define other
Address string `json:"address" required:"true"`
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
Username string `json:"username"`
Password string `json:"password"`
Cookie string `json:"cookie"`
}
var config = driver.Config{

View File

@ -49,12 +49,14 @@ func (d *Cloudreve) request(method string, path string, callback base.ReqCallbac
// refresh the cookie
if r.Code == http.StatusUnauthorized && path != loginPath {
if d.Username != "" && d.Password != "" {
err = d.login()
if err != nil {
return err
}
return d.request(method, path, callback, out)
}
}
return errors.New(r.Msg)
}

222
drivers/dropbox/driver.go Normal file
View File

@ -0,0 +1,222 @@
package dropbox
import (
"context"
"fmt"
"io"
"math"
"net/http"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
type Dropbox struct {
model.Storage
Addition
base string
contentBase string
}
func (d *Dropbox) Config() driver.Config {
return config
}
func (d *Dropbox) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Dropbox) Init(ctx context.Context) error {
query := "foo"
res, err := d.request("/2/check/user", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"query": query,
})
})
if err != nil {
return err
}
result := utils.Json.Get(res, "result").ToString()
if result != query {
return fmt.Errorf("failed to check user: %s", string(res))
}
return nil
}
func (d *Dropbox) Drop(ctx context.Context) error {
return nil
}
func (d *Dropbox) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(ctx, dir.GetPath())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src), nil
})
}
func (d *Dropbox) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
res, err := d.request("/2/files/get_temporary_link", http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"path": file.GetPath(),
})
})
if err != nil {
return nil, err
}
url := utils.Json.Get(res, "link").ToString()
exp := time.Hour
return &model.Link{
URL: url,
Expiration: &exp,
}, nil
}
func (d *Dropbox) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/2/files/create_folder_v2", http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"autorename": false,
"path": parentDir.GetPath() + "/" + dirName,
})
})
return err
}
func (d *Dropbox) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
toPath := dstDir.GetPath() + "/" + srcObj.GetName()
_, err := d.request("/2/files/move_v2", http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"allow_ownership_transfer": false,
"allow_shared_folder": false,
"autorename": false,
"from_path": srcObj.GetID(),
"to_path": toPath,
})
})
return err
}
func (d *Dropbox) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
path := srcObj.GetPath()
fileName := srcObj.GetName()
toPath := path[:len(path)-len(fileName)] + newName
_, err := d.request("/2/files/move_v2", http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"allow_ownership_transfer": false,
"allow_shared_folder": false,
"autorename": false,
"from_path": srcObj.GetID(),
"to_path": toPath,
})
})
return err
}
func (d *Dropbox) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
toPath := dstDir.GetPath() + "/" + srcObj.GetName()
_, err := d.request("/2/files/copy_v2", http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"allow_ownership_transfer": false,
"allow_shared_folder": false,
"autorename": false,
"from_path": srcObj.GetID(),
"to_path": toPath,
})
})
return err
}
func (d *Dropbox) Remove(ctx context.Context, obj model.Obj) error {
uri := "/2/files/delete_v2"
_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(base.Json{
"path": obj.GetID(),
})
})
return err
}
func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// 1. start
sessionId, err := d.startUploadSession(ctx)
if err != nil {
return err
}
// 2.append
// A single request should not upload more than 150 MB, and each call must be a multiple of 4 MB (except for the last call)
const PartSize = 20971520
count := 1
if stream.GetSize() > PartSize {
count = int(math.Ceil(float64(stream.GetSize()) / float64(PartSize)))
}
offset := int64(0)
for i := 0; i < count; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * PartSize
byteSize := stream.GetSize() - int64(start)
if byteSize > PartSize {
byteSize = PartSize
}
url := d.contentBase + "/2/files/upload_session/append_v2"
reader := io.LimitReader(stream, PartSize)
req, err := http.NewRequest(http.MethodPost, url, reader)
if err != nil {
log.Errorf("failed to update file when append to upload session, err: %+v", err)
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Authorization", "Bearer "+d.AccessToken)
args := UploadAppendArgs{
Close: false,
Cursor: UploadCursor{
Offset: offset,
SessionID: sessionId,
},
}
argsJson, err := utils.Json.MarshalToString(args)
if err != nil {
return err
}
req.Header.Set("Dropbox-API-Arg", argsJson)
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
if count > 0 {
up((i + 1) * 100 / count)
}
offset += byteSize
}
// 3.finish
toPath := dstDir.GetPath() + "/" + stream.GetName()
err2 := d.finishUploadSession(ctx, toPath, offset, sessionId)
if err2 != nil {
return err2
}
return err
}
var _ driver.Driver = (*Dropbox)(nil)
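Each append call passes its cursor to Dropbox in the Dropbox-API-Arg header rather than in the request body. For reference, a sketch of what that header carries for the second 20 MB part of a session (the session ID is made up; the field order follows the UploadAppendArgs struct defined in this driver):
// Illustrative only: the Dropbox-API-Arg value for an append at offset
// 20971520 (one full part already uploaded) of a hypothetical session.
func exampleAppendHeader() string {
	args := UploadAppendArgs{
		Close:  false,
		Cursor: UploadCursor{Offset: 20971520, SessionID: "example-session-id"},
	}
	s, _ := utils.Json.MarshalToString(args)
	// s == {"close":false,"cursor":{"offset":20971520,"session_id":"example-session-id"}}
	return s
}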

42
drivers/dropbox/meta.go Normal file
View File

@ -0,0 +1,42 @@
package dropbox
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
const (
DefaultClientID = "76lrwrklhdn1icb"
)
type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
driver.RootPath
OauthTokenURL string `json:"oauth_token_url" default:"https://api.xhofe.top/alist/dropbox/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
AccessToken string
}
var config = driver.Config{
Name: "Dropbox",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Dropbox{
base: "https://api.dropboxapi.com",
contentBase: "https://content.dropboxapi.com",
}
})
}

79
drivers/dropbox/types.go Normal file
View File

@ -0,0 +1,79 @@
package dropbox
import (
"github.com/alist-org/alist/v3/internal/model"
"time"
)
type TokenResp struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
ExpiresIn int `json:"expires_in"`
}
type ErrorResp struct {
Error struct {
Tag string `json:".tag"`
} `json:"error"`
ErrorSummary string `json:"error_summary"`
}
type RefreshTokenErrorResp struct {
Error string `json:"error"`
ErrorDescription string `json:"error_description"`
}
type File struct {
Tag string `json:".tag"`
Name string `json:"name"`
PathLower string `json:"path_lower"`
PathDisplay string `json:"path_display"`
ID string `json:"id"`
ClientModified time.Time `json:"client_modified"`
ServerModified time.Time `json:"server_modified"`
Rev string `json:"rev"`
Size int `json:"size"`
IsDownloadable bool `json:"is_downloadable"`
ContentHash string `json:"content_hash"`
}
type ListResp struct {
Entries []File `json:"entries"`
Cursor string `json:"cursor"`
HasMore bool `json:"has_more"`
}
type UploadCursor struct {
Offset int64 `json:"offset"`
SessionID string `json:"session_id"`
}
type UploadAppendArgs struct {
Close bool `json:"close"`
Cursor UploadCursor `json:"cursor"`
}
type UploadFinishArgs struct {
Commit struct {
Autorename bool `json:"autorename"`
Mode string `json:"mode"`
Mute bool `json:"mute"`
Path string `json:"path"`
StrictConflict bool `json:"strict_conflict"`
} `json:"commit"`
Cursor UploadCursor `json:"cursor"`
}
func fileToObj(f File) *model.ObjThumb {
return &model.ObjThumb{
Object: model.Object{
ID: f.ID,
Path: f.PathDisplay,
Name: f.Name,
Size: int64(f.Size),
Modified: f.ServerModified,
IsFolder: f.Tag == "folder",
},
Thumbnail: model.Thumbnail{},
}
}

199
drivers/dropbox/util.go Normal file
View File

@ -0,0 +1,199 @@
package dropbox
import (
"context"
"fmt"
"io"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
func (d *Dropbox) refreshToken() error {
url := d.base + "/oauth2/token"
if utils.SliceContains([]string{"", DefaultClientID}, d.ClientID) {
url = d.OauthTokenURL
}
var tokenResp TokenResp
resp, err := base.RestyClient.R().
//ForceContentType("application/x-www-form-urlencoded").
//SetBasicAuth(d.ClientID, d.ClientSecret).
SetFormData(map[string]string{
"grant_type": "refresh_token",
"refresh_token": d.RefreshToken,
"client_id": d.ClientID,
"client_secret": d.ClientSecret,
}).
Post(url)
if err != nil {
return err
}
log.Debugf("[dropbox] refresh token response: %s", resp.String())
if resp.StatusCode() != 200 {
return fmt.Errorf("failed to refresh token: %s", resp.String())
}
_ = utils.Json.UnmarshalFromString(resp.String(), &tokenResp)
d.AccessToken = tokenResp.AccessToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *Dropbox) request(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
if method == http.MethodPost {
req.SetHeader("Content-Type", "application/json")
}
if callback != nil {
callback(req)
}
var e ErrorResp
req.SetError(&e)
res, err := req.Execute(method, d.base+uri)
if err != nil {
return nil, err
}
log.Debugf("[dropbox] request (%s) response: %s", uri, res.String())
isRetry := len(retry) > 0 && retry[0]
if res.StatusCode() != 200 {
body := res.String()
if !isRetry && (utils.SliceMeet([]string{"expired_access_token", "invalid_access_token", "authorization"}, body,
func(item string, v string) bool {
return strings.Contains(v, item)
}) || d.AccessToken == "") {
err = d.refreshToken()
if err != nil {
return nil, err
}
return d.request(uri, method, callback, true)
}
return nil, fmt.Errorf("%s:%s", e.Error, e.ErrorSummary)
}
return res.Body(), nil
}
func (d *Dropbox) list(ctx context.Context, data base.Json, isContinue bool) (*ListResp, error) {
var resp ListResp
uri := "/2/files/list_folder"
if isContinue {
uri += "/continue"
}
_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
req.SetContext(ctx).SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Dropbox) getFiles(ctx context.Context, path string) ([]File, error) {
hasMore := true
var marker string
res := make([]File, 0)
data := base.Json{
"include_deleted": false,
"include_has_explicit_shared_members": false,
"include_mounted_folders": false,
"include_non_downloadable_files": false,
"limit": 2000,
"path": path,
"recursive": false,
}
resp, err := d.list(ctx, data, false)
if err != nil {
return nil, err
}
marker = resp.Cursor
hasMore = resp.HasMore
res = append(res, resp.Entries...)
for hasMore {
data := base.Json{
"cursor": marker,
}
resp, err := d.list(ctx, data, true)
if err != nil {
return nil, err
}
marker = resp.Cursor
hasMore = resp.HasMore
res = append(res, resp.Entries...)
}
return res, nil
}
func (d *Dropbox) finishUploadSession(ctx context.Context, toPath string, offset int64, sessionId string) error {
url := d.contentBase + "/2/files/upload_session/finish"
req, err := http.NewRequest(http.MethodPost, url, nil)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Authorization", "Bearer "+d.AccessToken)
uploadFinishArgs := UploadFinishArgs{
Commit: struct {
Autorename bool `json:"autorename"`
Mode string `json:"mode"`
Mute bool `json:"mute"`
Path string `json:"path"`
StrictConflict bool `json:"strict_conflict"`
}{
Autorename: true,
Mode: "add",
Mute: false,
Path: toPath,
StrictConflict: false,
},
Cursor: UploadCursor{
Offset: offset,
SessionID: sessionId,
},
}
argsJson, err := utils.Json.MarshalToString(uploadFinishArgs)
if err != nil {
return err
}
req.Header.Set("Dropbox-API-Arg", argsJson)
res, err := base.HttpClient.Do(req)
if err != nil {
log.Errorf("failed to update file when finish session, err: %+v", err)
return err
}
_ = res.Body.Close()
return nil
}
func (d *Dropbox) startUploadSession(ctx context.Context) (string, error) {
url := d.contentBase + "/2/files/upload_session/start"
req, err := http.NewRequest(http.MethodPost, url, nil)
if err != nil {
return "", err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Authorization", "Bearer "+d.AccessToken)
req.Header.Set("Dropbox-API-Arg", "{\"close\":false}")
res, err := base.HttpClient.Do(req)
if err != nil {
log.Errorf("failed to update file when start session, err: %+v", err)
return "", err
}
body, err := io.ReadAll(res.Body)
sessionId := utils.Json.Get(body, "session_id").ToString()
_ = res.Body.Close()
return sessionId, nil
}

View File

@ -18,6 +18,8 @@ type GoogleDrive struct {
model.Storage
Addition
AccessToken string
ServiceAccountFile int
ServiceAccountFileList []string
}
func (d *GoogleDrive) Config() driver.Config {

View File

@ -2,21 +2,134 @@ package google_drive
import (
"context"
"crypto/x509"
"encoding/pem"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"regexp"
"strconv"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/golang-jwt/jwt/v4"
log "github.com/sirupsen/logrus"
)
// do others that not defined in Driver interface
type googleDriveServiceAccount struct {
//Type string `json:"type"`
//ProjectID string `json:"project_id"`
//PrivateKeyID string `json:"private_key_id"`
PrivateKey string `json:"private_key"`
ClientEMail string `json:"client_email"`
//ClientID string `json:"client_id"`
//AuthURI string `json:"auth_uri"`
TokenURI string `json:"token_uri"`
//AuthProviderX509CertURL string `json:"auth_provider_x509_cert_url"`
//ClientX509CertURL string `json:"client_x509_cert_url"`
}
func (d *GoogleDrive) refreshToken() error {
// googleDriveServiceAccountFile gdsaFile
gdsaFile, gdsaFileErr := os.Stat(d.RefreshToken)
if gdsaFileErr == nil {
gdsaFileThis := d.RefreshToken
if gdsaFile.IsDir() {
if len(d.ServiceAccountFileList) <= 0 {
gdsaReadDir, gdsaDirErr := ioutil.ReadDir(d.RefreshToken)
if gdsaDirErr != nil {
log.Error("read dir fail")
return gdsaDirErr
}
var gdsaFileList []string
for _, fi := range gdsaReadDir {
if !fi.IsDir() {
match, _ := regexp.MatchString("^.*\\.json$", fi.Name())
if !match {
continue
}
gdsaDirText := d.RefreshToken
if d.RefreshToken[len(d.RefreshToken)-1:] != "/" {
gdsaDirText = d.RefreshToken + "/"
}
gdsaFileList = append(gdsaFileList, gdsaDirText+fi.Name())
}
}
d.ServiceAccountFileList = gdsaFileList
gdsaFileThis = d.ServiceAccountFileList[d.ServiceAccountFile]
d.ServiceAccountFile++
} else {
if d.ServiceAccountFile < len(d.ServiceAccountFileList) {
d.ServiceAccountFile++
} else {
d.ServiceAccountFile = 0
}
gdsaFileThis = d.ServiceAccountFileList[d.ServiceAccountFile]
}
}
gdsaFileThisContent, err := ioutil.ReadFile(gdsaFileThis)
if err != nil {
return err
}
// unmarshal the data into jsonData
var jsonData googleDriveServiceAccount
err = utils.Json.Unmarshal(gdsaFileThisContent, &jsonData)
if err != nil {
return err
}
gdsaScope := "https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/drive.appdata https://www.googleapis.com/auth/drive.file https://www.googleapis.com/auth/drive.metadata https://www.googleapis.com/auth/drive.metadata.readonly https://www.googleapis.com/auth/drive.readonly https://www.googleapis.com/auth/drive.scripts"
timeNow := time.Now()
var timeStart int64 = timeNow.Unix()
var timeEnd int64 = timeNow.Add(time.Minute * 60).Unix()
// load private key from string
privateKeyPem, _ := pem.Decode([]byte(jsonData.PrivateKey))
privateKey, _ := x509.ParsePKCS8PrivateKey(privateKeyPem.Bytes)
jwtToken := jwt.NewWithClaims(jwt.SigningMethodRS256,
jwt.MapClaims{
"iss": jsonData.ClientEMail,
"scope": gdsaScope,
"aud": jsonData.TokenURI,
"exp": timeEnd,
"iat": timeStart,
})
assertion, err := jwtToken.SignedString(privateKey)
if err != nil {
return err
}
var resp base.TokenResp
var e TokenError
res, err := base.RestyClient.R().SetResult(&resp).SetError(&e).
SetFormData(map[string]string{
"assertion": assertion,
"grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
}).Post(jsonData.TokenURI)
if err != nil {
return err
}
log.Debug(res.String())
if e.Error != "" {
return fmt.Errorf(e.Error)
}
d.AccessToken = resp.AccessToken
return nil
}
if gdsaFileErr != nil && os.IsExist(gdsaFileErr) {
return gdsaFileErr
}
url := "https://www.googleapis.com/oauth2/v4/token"
var resp base.TokenResp
var e TokenError

128
drivers/ipfs_api/driver.go Normal file
View File

@ -0,0 +1,128 @@
package ipfs
import (
"context"
"fmt"
"net/url"
stdpath "path"
"path/filepath"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
shell "github.com/ipfs/go-ipfs-api"
)
type IPFS struct {
model.Storage
Addition
sh *shell.Shell
gateURL *url.URL
}
func (d *IPFS) Config() driver.Config {
return config
}
func (d *IPFS) GetAddition() driver.Additional {
return &d.Addition
}
func (d *IPFS) Init(ctx context.Context) error {
d.sh = shell.NewShell(d.Endpoint)
gateURL, err := url.Parse(d.Gateway)
if err != nil {
return err
}
d.gateURL = gateURL
return nil
}
func (d *IPFS) Drop(ctx context.Context) error {
return nil
}
func (d *IPFS) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
path := dir.GetPath()
// ensure the MFS path ends with a trailing slash
if !strings.HasSuffix(path, "/") {
path += "/"
}
path_cid, err := d.sh.FilesStat(ctx, path)
if err != nil {
return nil, err
}
dirs, err := d.sh.List(path_cid.Hash)
if err != nil {
return nil, err
}
objlist := []model.Obj{}
for _, file := range dirs {
gateurl := *d.gateURL
gateurl.Path = "ipfs/" + file.Hash
gateurl.RawQuery = "filename=" + file.Name
objlist = append(objlist, &model.ObjectURL{
Object: model.Object{ID: file.Hash, Name: file.Name, Size: int64(file.Size), IsFolder: file.Type == 1},
Url: model.Url{Url: gateurl.String()},
})
}
return objlist, nil
}
func (d *IPFS) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
link := d.Gateway + "/ipfs/" + file.GetID() + "/?filename=" + file.GetName()
return &model.Link{URL: link}, nil
}
func (d *IPFS) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
path := parentDir.GetPath()
// ensure the MFS path ends with a trailing slash
if !strings.HasSuffix(path, "/") {
path += "/"
}
return d.sh.FilesMkdir(ctx, path+dirName)
}
func (d *IPFS) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.sh.FilesMv(ctx, srcObj.GetPath(), dstDir.GetPath())
}
func (d *IPFS) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
newFileName := filepath.Dir(srcObj.GetPath()) + "/" + newName
return d.sh.FilesMv(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
}
func (d *IPFS) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// TODO copy obj, optional
fmt.Println(srcObj.GetPath())
fmt.Println(dstDir.GetPath())
newFileName := dstDir.GetPath() + "/" + filepath.Base(srcObj.GetPath())
fmt.Println(newFileName)
return d.sh.FilesCp(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
}
func (d *IPFS) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return d.sh.FilesRm(ctx, obj.GetPath(), true)
}
func (d *IPFS) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// TODO upload file, optional
_, err := d.sh.Add(stream, ToFiles(stdpath.Join(dstDir.GetPath(), stream.GetName())))
return err
}
func ToFiles(dstDir string) shell.AddOpts {
return func(rb *shell.RequestBuilder) error {
rb.Option("to-files", dstDir)
return nil
}
}
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*IPFS)(nil)
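For context, ToFiles wraps the go-ipfs-api "to-files" add option, so an uploaded object is also placed into MFS at the given path. A minimal sketch of using it outside the driver, assuming the default endpoint from meta.go and an illustrative MFS path (assumed imports: fmt, strings, shell "github.com/ipfs/go-ipfs-api"):
// Sketch only: add an in-memory reader to IPFS and pin it into MFS at /alist/example.txt.
func exampleAdd() error {
	sh := shell.NewShell("http://127.0.0.1:5001") // default endpoint from meta.go
	cid, err := sh.Add(strings.NewReader("hello alist"), ToFiles("/alist/example.txt"))
	if err != nil {
		return err
	}
	fmt.Println("added:", cid)
	return nil
}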

25
drivers/ipfs_api/meta.go Normal file
View File

@ -0,0 +1,25 @@
package ipfs
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
Endpoint string `json:"endpoint" default:"http://127.0.0.1:5001"`
Gateway string `json:"gateway" default:"https://ipfs.io"`
}
var config = driver.Config{
Name: "IPFS API",
DefaultRoot: "/",
LocalSort: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &IPFS{}
})
}

View File

@ -5,7 +5,6 @@ import (
"fmt"
"net/http"
"regexp"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -15,12 +14,11 @@ import (
"github.com/go-resty/resty/v2"
)
var upClient = base.NewRestyClient().SetTimeout(120 * time.Second)
type LanZou struct {
Addition
model.Storage
uid string
vei string
}
func (d *LanZou) Config() driver.Config {
@ -31,7 +29,7 @@ func (d *LanZou) GetAddition() driver.Additional {
return &d.Addition
}
func (d *LanZou) Init(ctx context.Context) error {
func (d *LanZou) Init(ctx context.Context) (err error) {
if d.IsCookie() {
if d.RootFolderID == "" {
d.RootFolderID = "-1"
@ -41,8 +39,9 @@ func (d *LanZou) Init(ctx context.Context) error {
return fmt.Errorf("cookie does not contain ylogin")
}
d.uid = ylogin[1]
d.vei, err = d.getVei()
}
return nil
return
}
func (d *LanZou) Drop(ctx context.Context) error {

View File

@ -7,6 +7,7 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/alist-org/alist/v3/drivers/base"
@ -16,14 +17,22 @@ import (
log "github.com/sirupsen/logrus"
)
var upClient *resty.Client
var once sync.Once
func (d *LanZou) doupload(callback base.ReqCallback, resp interface{}) ([]byte, error) {
return d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) {
req.SetQueryParam("uid", d.uid)
req.SetQueryParams(map[string]string{
"uid": d.uid,
"vei": d.vei,
})
if callback != nil {
callback(req)
}
}, resp)
}
func (d *LanZou) get(url string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
func (d *LanZou) get(url string, callback base.ReqCallback) ([]byte, error) {
return d.request(url, http.MethodGet, callback, false)
}
@ -64,6 +73,9 @@ func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{},
func (d *LanZou) request(url string, method string, callback base.ReqCallback, up bool) ([]byte, error) {
var req *resty.Request
if up {
once.Do(func() {
upClient = base.NewRestyClient().SetTimeout(120 * time.Second)
})
req = upClient.R()
} else {
req = base.RestyClient.R()
@ -217,7 +229,7 @@ func (d *LanZou) getShareUrlHtml(shareID string) (string, error) {
Value: vs,
})
}
}, nil)
})
if err != nil {
return "", err
}
@ -308,7 +320,7 @@ func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (
log.Errorf("lanzou: err => not find file page param ,data => %s\n", sharePageData)
return nil, fmt.Errorf("not find file page param")
}
data, err := d.get(fmt.Sprint(d.ShareUrl, urlpaths[1]), nil, nil)
data, err := d.get(fmt.Sprint(d.ShareUrl, urlpaths[1]), nil)
if err != nil {
return nil, err
}
@ -438,3 +450,22 @@ func (d *LanZou) getFileRealInfo(downURL string) (*int64, *time.Time) {
size, _ := strconv.ParseInt(res.Header().Get("Content-Length"), 10, 64)
return &size, &time
}
func (d *LanZou) getVei() (string, error) {
resp, err := d.get("https://pc.woozooo.com/mydisk.php", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"item": "files",
"action": "index",
"u": d.uid,
})
})
if err != nil {
return "", err
}
html := RemoveNotes(string(resp))
data, err := htmlJsonToMap(html)
if err != nil {
return "", err
}
return data["vei"], nil
}

View File

@ -1,7 +1,6 @@
package local
import (
"bytes"
"context"
"errors"
"fmt"
@ -20,7 +19,6 @@ import (
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
"github.com/disintegration/imaging"
_ "golang.org/x/image/webp"
)
@ -54,6 +52,12 @@ func (d *Local) Init(ctx context.Context) error {
}
d.Addition.RootFolderPath = abs
}
if d.ThumbCacheFolder != "" && !utils.Exists(d.ThumbCacheFolder) {
err := os.MkdirAll(d.ThumbCacheFolder, os.FileMode(d.mkdirPerm))
if err != nil {
return err
}
}
return nil
}
@ -135,36 +139,18 @@ func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
fullPath := file.GetPath()
var link model.Link
if args.Type == "thumb" && utils.Ext(file.GetName()) != "svg" {
var srcBuf *bytes.Buffer
if utils.GetFileType(file.GetName()) == conf.VIDEO {
videoBuf, err := GetSnapshot(fullPath, 10)
buf, thumbPath, err := d.getThumb(file)
if err != nil {
return nil, err
}
srcBuf = videoBuf
} else {
imgData, err := os.ReadFile(fullPath)
if err != nil {
return nil, err
}
imgBuf := bytes.NewBuffer(imgData)
srcBuf = imgBuf
}
image, err := imaging.Decode(srcBuf)
if err != nil {
return nil, err
}
thumbImg := imaging.Resize(image, 144, 0, imaging.Lanczos)
var buf bytes.Buffer
err = imaging.Encode(&buf, thumbImg, imaging.PNG)
if err != nil {
return nil, err
}
size := buf.Len()
link.Data = io.NopCloser(&buf)
link.Header = http.Header{
"Content-Length": []string{strconv.Itoa(size)},
"Content-Type": []string{"image/png"},
}
if thumbPath != nil {
link.FilePath = thumbPath
} else {
link.Data = io.NopCloser(buf)
link.Header.Set("Content-Length", strconv.Itoa(buf.Len()))
}
} else {
link.FilePath = &fullPath

View File

@ -8,6 +8,7 @@ import (
type Addition struct {
driver.RootPath
Thumbnail bool `json:"thumbnail" required:"true" help:"enable thumbnail"`
ThumbCacheFolder string `json:"thumb_cache_folder"`
ShowHidden bool `json:"show_hidden" default:"true" required:"false" help:"show hidden directories and files"`
MkdirPerm string `json:"mkdir_perm" default:"777"`
}

View File

@ -7,7 +7,12 @@ import (
"os"
"path/filepath"
"sort"
"strings"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/disintegration/imaging"
ffmpeg "github.com/u2takey/ffmpeg-go"
)
@ -55,3 +60,52 @@ func readDir(dirname string) ([]fs.FileInfo, error) {
sort.Slice(list, func(i, j int) bool { return list[i].Name() < list[j].Name() })
return list, nil
}
func (d *Local) getThumb(file model.Obj) (*bytes.Buffer, *string, error) {
fullPath := file.GetPath()
thumbPrefix := "alist_thumb_"
thumbName := thumbPrefix + utils.GetMD5EncodeStr(fullPath) + ".png"
if d.ThumbCacheFolder != "" {
// skip if the file is a thumbnail
if strings.HasPrefix(file.GetName(), thumbPrefix) {
return nil, &fullPath, nil
}
thumbPath := filepath.Join(d.ThumbCacheFolder, thumbName)
if utils.Exists(thumbPath) {
return nil, &thumbPath, nil
}
}
var srcBuf *bytes.Buffer
if utils.GetFileType(file.GetName()) == conf.VIDEO {
videoBuf, err := GetSnapshot(fullPath, 10)
if err != nil {
return nil, nil, err
}
srcBuf = videoBuf
} else {
imgData, err := os.ReadFile(fullPath)
if err != nil {
return nil, nil, err
}
imgBuf := bytes.NewBuffer(imgData)
srcBuf = imgBuf
}
image, err := imaging.Decode(srcBuf, imaging.AutoOrientation(true))
if err != nil {
return nil, nil, err
}
thumbImg := imaging.Resize(image, 144, 0, imaging.Lanczos)
var buf bytes.Buffer
err = imaging.Encode(&buf, thumbImg, imaging.PNG)
if err != nil {
return nil, nil, err
}
if d.ThumbCacheFolder != "" {
err = os.WriteFile(filepath.Join(d.ThumbCacheFolder, thumbName), buf.Bytes(), 0666)
if err != nil {
return nil, nil, err
}
}
return &buf, nil, nil
}

295
drivers/mopan/driver.go Normal file
View File

@ -0,0 +1,295 @@
package mopan
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"os"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
"github.com/foxxorcat/mopan-sdk-go"
)
type MoPan struct {
model.Storage
Addition
client *mopan.MoClient
userID string
}
func (d *MoPan) Config() driver.Config {
return config
}
func (d *MoPan) GetAddition() driver.Additional {
return &d.Addition
}
func (d *MoPan) Init(ctx context.Context) error {
login := func() error {
data, err := d.client.Login(d.Phone, d.Password)
if err != nil {
return err
}
d.client.SetAuthorization(data.Token)
info, err := d.client.GetUserInfo()
if err != nil {
return err
}
d.userID = info.UserID
return nil
}
d.client = mopan.NewMoClient().
SetRestyClient(base.RestyClient).
SetOnAuthorizationExpired(func(_ error) error {
err := login()
if err != nil {
d.Status = err.Error()
op.MustSaveDriverStorage(d)
}
return err
}).SetDeviceInfo(d.DeviceInfo)
d.DeviceInfo = d.client.GetDeviceInfo()
return login()
}
func (d *MoPan) Drop(ctx context.Context) error {
d.client = nil
d.userID = ""
return nil
}
func (d *MoPan) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var files []model.Obj
for page := 1; ; page++ {
data, err := d.client.QueryFiles(dir.GetID(), page, mopan.WarpParamOption(
func(j mopan.Json) {
j["orderBy"] = d.OrderBy
j["descending"] = d.OrderDirection == "desc"
},
mopan.ParamOptionShareFile(d.CloudID),
))
if err != nil {
return nil, err
}
if len(data.FileListAO.FileList)+len(data.FileListAO.FolderList) == 0 {
break
}
files = append(files, utils.MustSliceConvert(data.FileListAO.FolderList, folderToObj)...)
files = append(files, utils.MustSliceConvert(data.FileListAO.FileList, fileToObj)...)
}
return files, nil
}
func (d *MoPan) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
data, err := d.client.GetFileDownloadUrl(file.GetID(), mopan.WarpParamOption(mopan.ParamOptionShareFile(d.CloudID)))
if err != nil {
return nil, err
}
return &model.Link{
URL: data.DownloadUrl,
}, nil
}
func (d *MoPan) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
f, err := d.client.CreateFolder(dirName, parentDir.GetID(), mopan.WarpParamOption(
mopan.ParamOptionShareFile(d.CloudID),
))
if err != nil {
return nil, err
}
return folderToObj(*f), nil
}
func (d *MoPan) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.newTask(srcObj, dstDir, mopan.TASK_MOVE)
}
func (d *MoPan) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if srcObj.IsDir() {
_, err := d.client.RenameFolder(srcObj.GetID(), newName, mopan.WarpParamOption(
mopan.ParamOptionShareFile(d.CloudID),
))
if err != nil {
return nil, err
}
} else {
_, err := d.client.RenameFile(srcObj.GetID(), newName, mopan.WarpParamOption(
mopan.ParamOptionShareFile(d.CloudID),
))
if err != nil {
return nil, err
}
}
return CloneObj(srcObj, srcObj.GetID(), newName), nil
}
func (d *MoPan) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.newTask(srcObj, dstDir, mopan.TASK_COPY)
}
func (d *MoPan) newTask(srcObj, dstDir model.Obj, taskType mopan.TaskType) (model.Obj, error) {
param := mopan.TaskParam{
UserOrCloudID: d.userID,
Source: 1,
TaskType: taskType,
TargetSource: 1,
TargetUserOrCloudID: d.userID,
TargetType: 1,
TargetFolderID: dstDir.GetID(),
TaskStatusDetailDTOList: []mopan.TaskFileParam{
{
FileID: srcObj.GetID(),
IsFolder: srcObj.IsDir(),
FileName: srcObj.GetName(),
},
},
}
if d.CloudID != "" {
param.UserOrCloudID = d.CloudID
param.Source = 2
param.TargetSource = 2
param.TargetUserOrCloudID = d.CloudID
}
task, err := d.client.AddBatchTask(param)
if err != nil {
return nil, err
}
for count := 0; count < 5; count++ {
stat, err := d.client.CheckBatchTask(mopan.TaskCheckParam{
TaskId: task.TaskIDList[0],
TaskType: task.TaskType,
TargetType: 1,
TargetFolderID: task.TargetFolderID,
TargetSource: param.TargetSource,
TargetUserOrCloudID: param.TargetUserOrCloudID,
})
if err != nil {
return nil, err
}
switch stat.TaskStatus {
case 2:
if err := d.client.CancelBatchTask(stat.TaskID, task.TaskType); err != nil {
return nil, err
}
return nil, errors.New("file name conflict")
case 4:
if task.TaskType == mopan.TASK_MOVE {
return CloneObj(srcObj, srcObj.GetID(), srcObj.GetName()), nil
}
return CloneObj(srcObj, stat.SuccessedFileIDList[0], srcObj.GetName()), nil
}
time.Sleep(time.Second)
}
return nil, nil
}
func (d *MoPan) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.client.DeleteToRecycle([]mopan.TaskFileParam{
{
FileID: obj.GetID(),
IsFolder: obj.IsDir(),
FileName: obj.GetName(),
},
}, mopan.WarpParamOption(mopan.ParamOptionShareFile(d.CloudID)))
return err
}
func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
file, err := utils.CreateTempFile(stream)
if err != nil {
return nil, err
}
defer func() {
_ = file.Close()
_ = os.Remove(file.Name())
}()
initUpdload, err := d.client.InitMultiUpload(ctx, mopan.UpdloadFileParam{
ParentFolderId: dstDir.GetID(),
FileName: stream.GetName(),
FileSize: stream.GetSize(),
File: file,
}, mopan.WarpParamOption(
mopan.ParamOptionShareFile(d.CloudID),
))
if err != nil {
return nil, err
}
if !initUpdload.FileDataExists {
parts, err := d.client.GetAllMultiUploadUrls(initUpdload.UploadFileID, initUpdload.PartInfo)
if err != nil {
return nil, err
}
d.client.CloudDiskStartBusiness()
for i, part := range parts {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
}
err := retry.Do(func() error {
if _, err := file.Seek(int64(part.PartNumber-1)*int64(initUpdload.PartSize), io.SeekStart); err != nil {
return retry.Unrecoverable(err)
}
req, err := part.NewRequest(ctx, io.LimitReader(file, int64(initUpdload.PartSize)))
if err != nil {
return err
}
resp, err := base.HttpClient.Do(req)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("upload err,code=%d", resp.StatusCode)
}
return nil
},
retry.Context(ctx),
retry.Attempts(3),
retry.Delay(time.Second),
retry.MaxDelay(5*time.Second))
if err != nil {
return nil, err
}
up(100 * (i + 1) / len(parts))
}
}
uFile, err := d.client.CommitMultiUploadFile(initUpdload.UploadFileID, nil)
if err != nil {
return nil, err
}
return &model.Object{
ID: uFile.UserFileID,
Name: uFile.FileName,
Size: int64(uFile.FileSize),
Modified: time.Time(uFile.CreateDate),
}, nil
}
var _ driver.Driver = (*MoPan)(nil)
var _ driver.MkdirResult = (*MoPan)(nil)
var _ driver.MoveResult = (*MoPan)(nil)
var _ driver.RenameResult = (*MoPan)(nil)
var _ driver.Remove = (*MoPan)(nil)
var _ driver.CopyResult = (*MoPan)(nil)
var _ driver.PutResult = (*MoPan)(nil)

37
drivers/mopan/meta.go Normal file
View File

@ -0,0 +1,37 @@
package mopan
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Phone string `json:"phone" required:"true"`
Password string `json:"password" required:"true"`
RootFolderID string `json:"root_folder_id" default:"-11" required:"true" help:"be careful when using the -11 value, some operations may cause system errors"`
CloudID string `json:"cloud_id"`
OrderBy string `json:"order_by" type:"select" options:"filename,filesize,lastOpTime" default:"filename"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DeviceInfo string `json:"device_info"`
}
func (a *Addition) GetRootId() string {
return a.RootFolderID
}
var config = driver.Config{
Name: "MoPan",
// DefaultRoot: "root, / or other",
CheckStatus: true,
Alert: "warning|This network disk may store your password in clear text. Please set your password carefully",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &MoPan{}
})
}

1
drivers/mopan/types.go Normal file
View File

@ -0,0 +1 @@
package mopan

58
drivers/mopan/util.go Normal file
View File

@ -0,0 +1,58 @@
package mopan
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/foxxorcat/mopan-sdk-go"
)
func fileToObj(f mopan.File) model.Obj {
return &model.ObjThumb{
Object: model.Object{
ID: string(f.ID),
Name: f.Name,
Size: int64(f.Size),
Modified: time.Time(f.LastOpTime),
},
Thumbnail: model.Thumbnail{
Thumbnail: f.Icon.SmallURL,
},
}
}
func folderToObj(f mopan.Folder) model.Obj {
return &model.Object{
ID: string(f.ID),
Name: f.Name,
Modified: time.Time(f.LastOpTime),
IsFolder: true,
}
}
func CloneObj(o model.Obj, newID, newName string) model.Obj {
if o.IsDir() {
return &model.Object{
ID: newID,
Name: newName,
IsFolder: true,
Modified: o.ModTime(),
}
}
thumb := ""
if o, ok := o.(model.Thumb); ok {
thumb = o.Thumb()
}
return &model.ObjThumb{
Object: model.Object{
ID: newID,
Name: newName,
Size: o.GetSize(),
Modified: o.ModTime(),
},
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
}
}

View File

@ -2,8 +2,6 @@ package pikpak
import (
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
"io"
"net/http"
@ -19,7 +17,6 @@ import (
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
)
@ -66,7 +63,7 @@ func (d *PikPak) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
link := model.Link{
URL: resp.WebContentLink,
}
if len(resp.Medias) > 0 && resp.Medias[0].Link.Url != "" {
if !d.DisableMediaLink && len(resp.Medias) > 0 && resp.Medias[0].Link.Url != "" {
log.Debugln("use media link")
link.URL = resp.Medias[0].Link.Url
}
@ -135,9 +132,8 @@ func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
_ = tempFile.Close()
_ = os.Remove(tempFile.Name())
}()
// cal sha1
s := sha1.New()
_, err = io.Copy(s, tempFile)
// cal gcid
sha1Str, err := getGcid(tempFile, stream.GetSize())
if err != nil {
return err
}
@ -145,8 +141,9 @@ func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
sha1Str := hex.EncodeToString(s.Sum(nil))
data := base.Json{
var resp UploadTaskData
res, err := d.request("https://api-drive.mypikpak.com/drive/v1/files", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"kind": "drive#file",
"name": stream.GetName(),
"size": stream.GetSize(),
@ -154,28 +151,23 @@ func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"upload_type": "UPLOAD_TYPE_RESUMABLE",
"objProvider": base.Json{"provider": "UPLOAD_TYPE_UNKNOWN"},
"parent_id": dstDir.GetID(),
}
res, err := d.request("https://api-drive.mypikpak.com/drive/v1/files", http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
"folder_type": "NORMAL",
})
}, &resp)
if err != nil {
return err
}
if stream.GetSize() == 0 {
// rapid upload succeeded
if resp.Resumable == nil {
log.Debugln(string(res))
return nil
}
params := jsoniter.Get(res, "resumable").Get("params")
endpoint := params.Get("endpoint").ToString()
endpointS := strings.Split(endpoint, ".")
endpoint = strings.Join(endpointS[1:], ".")
accessKeyId := params.Get("access_key_id").ToString()
accessKeySecret := params.Get("access_key_secret").ToString()
securityToken := params.Get("security_token").ToString()
key := params.Get("key").ToString()
bucket := params.Get("bucket").ToString()
params := resp.Resumable.Params
endpoint := strings.Join(strings.Split(params.Endpoint, ".")[1:], ".")
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(accessKeyId, accessKeySecret, securityToken),
Credentials: credentials.NewStaticCredentials(params.AccessKeyID, params.AccessKeySecret, params.SecurityToken),
Region: aws.String("pikpak"),
Endpoint: &endpoint,
}
@ -185,8 +177,8 @@ func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
}
uploader := s3manager.NewUploader(ss)
input := &s3manager.UploadInput{
Bucket: &bucket,
Key: &key,
Bucket: &params.Bucket,
Key: &params.Key,
Body: tempFile,
}
_, err = uploader.UploadWithContext(ctx, input)

View File

@ -9,6 +9,7 @@ type Addition struct {
driver.RootID
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
DisableMediaLink bool `json:"disable_media_link"`
}
var config = driver.Config{

View File

@ -73,3 +73,23 @@ type Media struct {
IsVisible bool `json:"is_visible"`
Category string `json:"category"`
}
type UploadTaskData struct {
UploadType string `json:"upload_type"`
//UPLOAD_TYPE_RESUMABLE
Resumable *struct {
Kind string `json:"kind"`
Params struct {
AccessKeyID string `json:"access_key_id"`
AccessKeySecret string `json:"access_key_secret"`
Bucket string `json:"bucket"`
Endpoint string `json:"endpoint"`
Expiration time.Time `json:"expiration"`
Key string `json:"key"`
SecurityToken string `json:"security_token"`
} `json:"params"`
Provider string `json:"provider"`
} `json:"resumable"`
File File `json:"file"`
}

View File

@ -1,7 +1,10 @@
package pikpak
import (
"crypto/sha1"
"encoding/hex"
"errors"
"io"
"net/http"
"github.com/alist-org/alist/v3/drivers/base"
@ -123,3 +126,28 @@ func (d *PikPak) getFiles(id string) ([]File, error) {
}
return res, nil
}
func getGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize = psize << 1
}
return psize
}
hash1 := sha1.New()
hash2 := sha1.New()
readSize := calcBlockSize(size)
for {
hash2.Reset()
if n, err := io.CopyN(hash2, r, readSize); err != nil && n == 0 {
if err != io.EOF {
return "", err
}
break
}
hash1.Write(hash2.Sum(nil))
}
return hex.EncodeToString(hash1.Sum(nil)), nil
}
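For reference, the gcid above is a two-level SHA-1: the block size starts at 256 KiB and doubles (capped at 2 MiB) until the file spans at most 512 blocks, each block is hashed, and the per-block digests are hashed again. A minimal sketch of hashing a local file and rewinding the handle so it can be reused as the upload body, mirroring the Put flow above (the helper name is illustrative; assumed imports: io, os):
// Sketch only: compute the gcid of an open file, then seek back to the start
// so the same handle can be streamed as the upload body.
func gcidOfFile(f *os.File) (string, error) {
	stat, err := f.Stat()
	if err != nil {
		return "", err
	}
	gcid, err := getGcid(f, stat.Size())
	if err != nil {
		return "", err
	}
	// getGcid consumes the reader, so rewind before reuse
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return "", err
	}
	return gcid, nil
}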

View File

@ -1,26 +0,0 @@
package quark
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Cookie string `json:"cookie" required:"true"`
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"none,file_type,file_name,updated_at" default:"none"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
}
var config = driver.Config{
Name: "Quark",
OnlyLocal: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Quark{}
})
}

View File

@ -22,29 +22,31 @@ import (
log "github.com/sirupsen/logrus"
)
type Quark struct {
type QuarkOrUC struct {
model.Storage
Addition
config driver.Config
conf Conf
}
func (d *Quark) Config() driver.Config {
return config
func (d *QuarkOrUC) Config() driver.Config {
return d.config
}
func (d *Quark) GetAddition() driver.Additional {
func (d *QuarkOrUC) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Quark) Init(ctx context.Context) error {
func (d *QuarkOrUC) Init(ctx context.Context) error {
_, err := d.request("/config", http.MethodGet, nil, nil)
return err
}
func (d *Quark) Drop(ctx context.Context) error {
func (d *QuarkOrUC) Drop(ctx context.Context) error {
return nil
}
func (d *Quark) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
func (d *QuarkOrUC) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.GetFiles(dir.GetID())
if err != nil {
return nil, err
@ -54,12 +56,12 @@ func (d *Quark) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
})
}
func (d *Quark) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *QuarkOrUC) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
data := base.Json{
"fids": []string{file.GetID()},
}
var resp DownResp
ua := "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) quark-cloud-drive/2.5.20 Chrome/100.0.4896.160 Electron/18.3.5.4-b478491100 Safari/537.36 Channel/pckk_other_ch"
ua := d.conf.ua
_, err := d.request("/file/download", http.MethodPost, func(req *resty.Request) {
req.SetHeader("User-Agent", ua).
SetBody(data)
@ -69,21 +71,23 @@ func (d *Quark) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
}
u := resp.Data[0].DownloadUrl
start, end := int64(0), file.GetSize()
return &model.Link{
Handle: func(w http.ResponseWriter, r *http.Request) error {
if rg := r.Header.Get("Range"); rg != "" {
link := model.Link{
Header: http.Header{},
}
if rg := args.Header.Get("Range"); rg != "" {
parseRange, err := http_range.ParseRange(rg, file.GetSize())
if err != nil {
return err
return nil, err
}
start, end = parseRange[0].Start, parseRange[0].Start+parseRange[0].Length
w.Header().Set("Content-Range", parseRange[0].ContentRange(file.GetSize()))
w.Header().Set("Content-Length", strconv.FormatInt(parseRange[0].Length, 10))
w.WriteHeader(http.StatusPartialContent)
link.Header.Set("Content-Range", parseRange[0].ContentRange(file.GetSize()))
link.Header.Set("Content-Length", strconv.FormatInt(parseRange[0].Length, 10))
link.Status = http.StatusPartialContent
} else {
w.Header().Set("Content-Length", strconv.FormatInt(file.GetSize(), 10))
w.WriteHeader(http.StatusOK)
link.Header.Set("Content-Length", strconv.FormatInt(file.GetSize(), 10))
link.Status = http.StatusOK
}
link.Writer = func(w io.Writer) error {
// request 10 MB at a time
chunkSize := int64(10 * 1024 * 1024)
for start < end {
@ -94,15 +98,15 @@ func (d *Quark) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
_range := "bytes=" + strconv.FormatInt(start, 10) + "-" + strconv.FormatInt(_end-1, 10)
start = _end
err = func() error {
req, err := http.NewRequest(r.Method, u, nil)
req, err := http.NewRequest(http.MethodGet, u, nil)
if err != nil {
return err
}
req.Header.Set("Range", _range)
req.Header.Set("User-Agent", ua)
req.Header.Set("Cookie", d.Cookie)
req.Header.Set("Referer", "https://pan.quark.cn")
resp, err := http.DefaultClient.Do(req)
req.Header.Set("Referer", d.conf.referer)
resp, err := base.HttpClient.Do(req)
if err != nil {
return err
}
@ -119,11 +123,11 @@ func (d *Quark) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
}
return nil
},
}, nil
}
return &link, nil
}
func (d *Quark) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
func (d *QuarkOrUC) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
data := base.Json{
"dir_init_lock": false,
"dir_path": "",
@ -139,7 +143,7 @@ func (d *Quark) MakeDir(ctx context.Context, parentDir model.Obj, dirName string
return err
}
func (d *Quark) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
func (d *QuarkOrUC) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
data := base.Json{
"action_type": 1,
"exclude_fids": []string{},
@ -152,7 +156,7 @@ func (d *Quark) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return err
}
func (d *Quark) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
func (d *QuarkOrUC) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
data := base.Json{
"fid": srcObj.GetID(),
"file_name": newName,
@ -163,11 +167,11 @@ func (d *Quark) Rename(ctx context.Context, srcObj model.Obj, newName string) er
return err
}
func (d *Quark) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
func (d *QuarkOrUC) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return errs.NotSupport
}
func (d *Quark) Remove(ctx context.Context, obj model.Obj) error {
func (d *QuarkOrUC) Remove(ctx context.Context, obj model.Obj) error {
data := base.Json{
"action_type": 1,
"exclude_fids": []string{},
@ -179,7 +183,7 @@ func (d *Quark) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *Quark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
func (d *QuarkOrUC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
if err != nil {
return err
@ -264,4 +268,4 @@ func (d *Quark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
return d.upFinish(pre)
}
var _ driver.Driver = (*Quark)(nil)
var _ driver.Driver = (*QuarkOrUC)(nil)

55
drivers/quark_uc/meta.go Normal file
View File

@ -0,0 +1,55 @@
package quark
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Cookie string `json:"cookie" required:"true"`
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"none,file_type,file_name,updated_at" default:"none"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
}
type Conf struct {
ua string
referer string
api string
pr string
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &QuarkOrUC{
config: driver.Config{
Name: "Quark",
OnlyLocal: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
},
conf: Conf{
ua: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) quark-cloud-drive/2.5.20 Chrome/100.0.4896.160 Electron/18.3.5.4-b478491100 Safari/537.36 Channel/pckk_other_ch",
referer: "https://pan.quark.cn",
api: "https://drive.quark.cn/1/clouddrive",
pr: "ucpro",
},
}
})
op.RegisterDriver(func() driver.Driver {
return &QuarkOrUC{
config: driver.Config{
Name: "UC",
OnlyLocal: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
},
conf: Conf{
ua: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) uc-cloud-drive/2.5.20 Chrome/100.0.4896.160 Electron/18.3.5.4-b478491100 Safari/537.36 Channel/pckk_other_ch",
referer: "https://drive.uc.cn",
api: "https://pc-api.uc.cn/1/clouddrive",
pr: "UCBrowser",
},
}
})
}

View File

@ -22,15 +22,15 @@ import (
// do others that not defined in Driver interface
func (d *Quark) request(pathname string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
u := "https://drive.quark.cn/1/clouddrive" + pathname
func (d *QuarkOrUC) request(pathname string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
u := d.conf.api + pathname
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
"Accept": "application/json, text/plain, */*",
"Referer": "https://pan.quark.cn/",
"Referer": d.conf.referer,
})
req.SetQueryParam("pr", "ucpro")
req.SetQueryParam("pr", d.conf.pr)
req.SetQueryParam("fr", "pc")
if callback != nil {
callback(req)
@ -55,7 +55,7 @@ func (d *Quark) request(pathname string, method string, callback base.ReqCallbac
return res.Body(), nil
}
func (d *Quark) GetFiles(parent string) ([]File, error) {
func (d *QuarkOrUC) GetFiles(parent string) ([]File, error) {
files := make([]File, 0)
page := 1
size := 100
@ -85,7 +85,7 @@ func (d *Quark) GetFiles(parent string) ([]File, error) {
return files, nil
}
func (d *Quark) upPre(file model.FileStreamer, parentId string) (UpPreResp, error) {
func (d *QuarkOrUC) upPre(file model.FileStreamer, parentId string) (UpPreResp, error) {
now := time.Now()
data := base.Json{
"ccp_hash_update": true,
@ -105,7 +105,7 @@ func (d *Quark) upPre(file model.FileStreamer, parentId string) (UpPreResp, erro
return resp, err
}
func (d *Quark) upHash(md5, sha1, taskId string) (bool, error) {
func (d *QuarkOrUC) upHash(md5, sha1, taskId string) (bool, error) {
data := base.Json{
"md5": md5,
"sha1": sha1,
@ -119,8 +119,8 @@ func (d *Quark) upHash(md5, sha1, taskId string) (bool, error) {
return resp.Data.Finish, err
}
func (d *Quark) upPart(ctx context.Context, pre UpPreResp, mineType string, partNumber int, bytes []byte) (string, error) {
//func (driver Quark) UpPart(pre UpPreResp, mineType string, partNumber int, bytes []byte, account *model.Account, md5Str, sha1Str string) (string, error) {
func (d *QuarkOrUC) upPart(ctx context.Context, pre UpPreResp, mineType string, partNumber int, bytes []byte) (string, error) {
//func (driver QuarkOrUC) UpPart(pre UpPreResp, mineType string, partNumber int, bytes []byte, account *model.Account, md5Str, sha1Str string) (string, error) {
timeStr := time.Now().UTC().Format(http.TimeFormat)
data := base.Json{
"auth_info": pre.Data.AuthInfo,
@ -169,7 +169,7 @@ x-oss-user-agent:aliyun-sdk-js/6.6.1 Chrome 98.0.4758.80 on Windows 10 64-bit
return res.Header().Get("ETag"), nil
}
func (d *Quark) upCommit(pre UpPreResp, md5s []string) error {
func (d *QuarkOrUC) upCommit(pre UpPreResp, md5s []string) error {
timeStr := time.Now().UTC().Format(http.TimeFormat)
log.Debugf("md5s: %+v", md5s)
bodyBuilder := strings.Builder{}
@ -236,7 +236,7 @@ x-oss-user-agent:aliyun-sdk-js/6.6.1 Chrome 98.0.4758.80 on Windows 10 64-bit
return nil
}
func (d *Quark) upFinish(pre UpPreResp) error {
func (d *QuarkOrUC) upFinish(pre UpPreResp) error {
data := base.Json{
"obj_key": pre.Data.ObjKey,
"task_id": pre.Data.TaskId,

View File

@ -53,15 +53,18 @@ func (d *S3) Drop(ctx context.Context) error {
func (d *S3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.ListObjectVersion == "v2" {
return d.listV2(dir.GetPath())
return d.listV2(dir.GetPath(), args)
}
return d.listV1(dir.GetPath())
return d.listV1(dir.GetPath(), args)
}
func (d *S3) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
path := getKey(file.GetPath(), false)
filename := stdpath.Base(path)
disposition := fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, filename, url.PathEscape(filename))
disposition := fmt.Sprintf(`attachment; filename*=UTF-8''%s`, url.PathEscape(filename))
if d.AddFilenameToDisposition {
disposition = fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, filename, url.PathEscape(filename))
}
input := &s3.GetObjectInput{
Bucket: &d.Bucket,
Key: &path,
@ -136,11 +139,13 @@ func (d *S3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreame
uploader.PartSize = stream.GetSize() / (s3manager.MaxUploadParts - 1)
}
key := getKey(stdpath.Join(dstDir.GetPath(), stream.GetName()), false)
contentType := stream.GetMimetype()
log.Debugln("key:", key)
input := &s3manager.UploadInput{
Bucket: &d.Bucket,
Key: &key,
Body: stream,
ContentType: &contentType,
}
_, err := uploader.UploadWithContext(ctx, input)
return err

View File

@ -12,12 +12,14 @@ type Addition struct {
Region string `json:"region"`
AccessKeyID string `json:"access_key_id" required:"true"`
SecretAccessKey string `json:"secret_access_key" required:"true"`
SessionToken string `json:"session_token"`
CustomHost string `json:"custom_host"`
SignURLExpire int `json:"sign_url_expire" type:"number" default:"4"`
Placeholder string `json:"placeholder"`
ForcePathStyle bool `json:"force_path_style"`
ListObjectVersion string `json:"list_object_version" type:"select" options:"v1,v2" default:"v1"`
RemoveBucket bool `json:"remove_bucket" help:"Remove bucket name from path when using custom host."`
AddFilenameToDisposition bool `json:"add_filename_to_disposition" help:"Add filename to Content-Disposition header."`
}
var config = driver.Config{

View File

@ -22,7 +22,7 @@ import (
func (d *S3) initSession() error {
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(d.AccessKeyID, d.SecretAccessKey, ""),
Credentials: credentials.NewStaticCredentials(d.AccessKeyID, d.SecretAccessKey, d.SessionToken),
Region: &d.Region,
Endpoint: &d.Endpoint,
S3ForcePathStyle: aws.Bool(d.ForcePathStyle),
@ -69,7 +69,7 @@ func getPlaceholderName(placeholder string) string {
return placeholder
}
func (d *S3) listV1(prefix string) ([]model.Obj, error) {
func (d *S3) listV1(prefix string, args model.ListArgs) ([]model.Obj, error) {
prefix = getKey(prefix, true)
log.Debugf("list: %s", prefix)
files := make([]model.Obj, 0)
@ -97,7 +97,7 @@ func (d *S3) listV1(prefix string) ([]model.Obj, error) {
}
for _, object := range listObjectsResult.Contents {
name := path.Base(*object.Key)
if name == getPlaceholderName(d.Placeholder) || name == d.Placeholder {
if !args.S3ShowPlaceholder && (name == getPlaceholderName(d.Placeholder) || name == d.Placeholder) {
continue
}
file := model.Object{
@ -120,7 +120,7 @@ func (d *S3) listV1(prefix string) ([]model.Obj, error) {
return files, nil
}
func (d *S3) listV2(prefix string) ([]model.Obj, error) {
func (d *S3) listV2(prefix string, args model.ListArgs) ([]model.Obj, error) {
prefix = getKey(prefix, true)
files := make([]model.Obj, 0)
var continuationToken, startAfter *string
@ -152,7 +152,7 @@ func (d *S3) listV2(prefix string) ([]model.Obj, error) {
continue
}
name := path.Base(*object.Key)
if name == getPlaceholderName(d.Placeholder) || name == d.Placeholder {
if !args.S3ShowPlaceholder && (name == getPlaceholderName(d.Placeholder) || name == d.Placeholder) {
continue
}
file := model.Object{
@ -198,7 +198,7 @@ func (d *S3) copyFile(ctx context.Context, src string, dst string) error {
}
func (d *S3) copyDir(ctx context.Context, src string, dst string) error {
objs, err := op.List(ctx, d, src, model.ListArgs{})
objs, err := op.List(ctx, d, src, model.ListArgs{S3ShowPlaceholder: true})
if err != nil {
return err
}

View File

@ -36,11 +36,14 @@ func (d *Seafile) request(method string, pathname string, callback base.ReqCallb
if len(noRedirect) > 0 && noRedirect[0] {
req = base.NoRedirectClient.R()
}
var res resty.Response
for i := 0; i < 2; i++ {
req.SetHeader("Authorization", d.authorization)
callback(req)
res, err := req.Execute(method, full)
var (
res *resty.Response
err error
)
for i := 0; i < 2; i++ {
res, err = req.Execute(method, full)
if err != nil {
return nil, err
}

View File

@ -11,6 +11,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/pkg/sftp"
log "github.com/sirupsen/logrus"
)
type SFTP struct {
@ -39,13 +40,15 @@ func (d *SFTP) Drop(ctx context.Context) error {
}
func (d *SFTP) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
log.Debugf("[sftp] list dir: %s", dir.GetPath())
files, err := d.client.ReadDir(dir.GetPath())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src os.FileInfo) (model.Obj, error) {
return fileToObj(src), nil
objs, err := utils.SliceConvert(files, func(src os.FileInfo) (model.Obj, error) {
return d.fileToObj(src, dir.GetPath())
})
return objs, err
}
func (d *SFTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {

View File

@ -2,15 +2,44 @@ package sftp
import (
"os"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/model"
log "github.com/sirupsen/logrus"
)
func fileToObj(f os.FileInfo) model.Obj {
func (d *SFTP) fileToObj(f os.FileInfo, dir string) (model.Obj, error) {
symlink := f.Mode()&os.ModeSymlink != 0
if !symlink {
return &model.Object{
Name: f.Name(),
Size: f.Size(),
Modified: f.ModTime(),
IsFolder: f.IsDir(),
}, nil
}
path := stdpath.Join(dir, f.Name())
// set target path
target, err := d.client.ReadLink(path)
if err != nil {
return nil, err
}
if !strings.HasPrefix(target, "/") {
target = stdpath.Join(dir, target)
}
_f, err := d.client.Stat(target)
if err != nil {
return nil, err
}
// set basic info
obj := &model.Object{
Name: f.Name(),
Size: _f.Size(),
Modified: _f.ModTime(),
IsFolder: _f.IsDir(),
Path: target,
}
log.Debugf("[sftp] obj: %+v, is symlink: %v", obj, symlink)
return obj, nil
}

View File

@ -125,6 +125,9 @@ func (d *Teambition) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *Teambition) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
if d.UseS3UploadMethod {
return d.newUpload(ctx, dstDir, stream, up)
}
res, err := d.request("/api/v2/users/me", http.MethodGet, nil, nil)
if err != nil {
return err

View File

@ -12,6 +12,7 @@ type Addition struct {
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"fileName,fileSize,updated,created" default:"fileName"`
OrderDirection string `json:"order_direction" type:"select" options:"Asc,Desc" default:"Asc"`
UseS3UploadMethod bool `json:"use_s3_upload_method" default:"true"`
}
var config = driver.Config{

View File

@ -66,3 +66,24 @@ type ChunkUpload struct {
PreviewExt string `json:"previewExt"`
LastUploadTime interface{} `json:"lastUploadTime"`
}
type UploadToken struct {
Sdk struct {
Endpoint string `json:"endpoint"`
Region string `json:"region"`
S3ForcePathStyle bool `json:"s3ForcePathStyle"`
Credentials struct {
AccessKeyId string `json:"accessKeyId"`
SecretAccessKey string `json:"secretAccessKey"`
SessionToken string `json:"sessionToken"`
} `json:"credentials"`
} `json:"sdk"`
Upload struct {
Bucket string `json:"Bucket"`
Key string `json:"Key"`
ContentDisposition string `json:"ContentDisposition"`
ContentType string `json:"ContentType"`
} `json:"upload"`
Token string `json:"token"`
DownloadUrl string `json:"downloadUrl"`
}

View File

@ -7,13 +7,16 @@ import (
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
@ -210,17 +213,56 @@ func (d *Teambition) finishUpload(file *FileUpload, parentId string) error {
return err
}
func getBetweenStr(str, start, end string) string {
n := strings.Index(str, start)
if n == -1 {
return ""
func (d *Teambition) newUpload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
var uploadToken UploadToken
_, err := d.request("/api/awos/upload-token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"category": "work",
"fileName": stream.GetName(),
"fileSize": stream.GetSize(),
"fileType": stream.GetMimetype(),
"payload": base.Json{
"involveMembers": []struct{}{},
"visible": "members",
},
"scope": "project:" + d.ProjectID,
})
}, &uploadToken)
if err != nil {
return err
}
n = n + len(start)
str = string([]byte(str)[n:])
m := strings.Index(str, end)
if m == -1 {
return ""
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(
uploadToken.Sdk.Credentials.AccessKeyId, uploadToken.Sdk.Credentials.SecretAccessKey, uploadToken.Sdk.Credentials.SessionToken),
Region: &uploadToken.Sdk.Region,
Endpoint: &uploadToken.Sdk.Endpoint,
S3ForcePathStyle: &uploadToken.Sdk.S3ForcePathStyle,
}
str = string([]byte(str)[:m])
return str
ss, err := session.NewSession(cfg)
if err != nil {
return err
}
uploader := s3manager.NewUploader(ss)
input := &s3manager.UploadInput{
Bucket: &uploadToken.Upload.Bucket,
Key: &uploadToken.Upload.Key,
ContentDisposition: &uploadToken.Upload.ContentDisposition,
ContentType: &uploadToken.Upload.ContentType,
Body: stream,
}
_, err = uploader.UploadWithContext(ctx, input)
if err != nil {
return err
}
// finish upload
_, err = d.request("/api/works", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"fileTokens": []string{uploadToken.Token},
"involveMembers": []struct{}{},
"visible": "members",
"works": []struct{}{},
"_parentId": dstDir.GetID(),
})
}, nil)
return err
}

View File

@ -3,7 +3,9 @@ package thunder
import (
"context"
"fmt"
"io"
"net/http"
"os"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
@ -54,7 +56,7 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
"j",
"4scKJNdd7F27Hv7tbt",
},
DeviceID: utils.GetMD5Encode(x.Username + x.Password),
DeviceID: utils.GetMD5EncodeStr(x.Username + x.Password),
ClientID: "Xp6vsxz_7IYVw2BB",
ClientSecret: "Xp6vsy4tN9toTVdMSpomVdXpRmES",
ClientVersion: "7.51.0.8196",
@ -135,7 +137,7 @@ func (x *ThunderExpert) Init(ctx context.Context) (err error) {
DeviceID: func() string {
if len(x.DeviceID) != 32 {
return utils.GetMD5Encode(x.DeviceID)
return utils.GetMD5EncodeStr(x.DeviceID)
}
return x.DeviceID
}(),
@ -331,15 +333,32 @@ func (xc *XunLeiCommon) Remove(ctx context.Context, obj model.Obj) error {
}
func (xc *XunLeiCommon) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
_ = os.Remove(tempFile.Name())
}()
gcid, err := getGcid(tempFile, stream.GetSize())
if err != nil {
return err
}
if _, err := tempFile.Seek(0, io.SeekStart); err != nil {
return err
}
var resp UploadTaskResponse
_, err := xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
_, err = xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
r.SetContext(ctx)
r.SetBody(&base.Json{
"kind": FILE,
"parent_id": dstDir.GetID(),
"name": stream.GetName(),
"size": stream.GetSize(),
"hash": "1CF254FBC456E1B012CD45C546636AA62CF8350E",
"hash": gcid,
"upload_type": UPLOAD_TYPE_RESUMABLE,
})
}, &resp)
@ -362,7 +381,7 @@ func (xc *XunLeiCommon) Put(ctx context.Context, dstDir model.Obj, stream model.
Bucket: aws.String(param.Bucket),
Key: aws.String(param.Key),
Expires: aws.Time(param.Expiration),
Body: stream,
Body: tempFile,
})
return err
}

View File

@ -78,7 +78,7 @@ type Addition struct {
// Login signature, used to determine whether a re-login is needed
func (i *Addition) GetIdentity() string {
return utils.GetMD5Encode(i.Username + i.Password)
return utils.GetMD5EncodeStr(i.Username + i.Password)
}
var config = driver.Config{

View File

@ -1,7 +1,10 @@
package thunder
import (
"crypto/sha1"
"encoding/hex"
"fmt"
"io"
"net/http"
"regexp"
"time"
@ -97,7 +100,7 @@ func (c *Common) GetCaptchaSign() (timestamp, sign string) {
timestamp = fmt.Sprint(time.Now().UnixMilli())
str := fmt.Sprint(c.ClientID, c.ClientVersion, c.PackageName, c.DeviceID, timestamp)
for _, algorithm := range c.Algorithms {
str = utils.GetMD5Encode(str + algorithm)
str = utils.GetMD5EncodeStr(str + algorithm)
}
sign = "1." + str
return
@ -171,3 +174,29 @@ func (c *Common) Request(url, method string, callback base.ReqCallback, resp int
return res.Body(), nil
}
// calculate the gcid of a file
func getGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize = psize << 1
}
return psize
}
hash1 := sha1.New()
hash2 := sha1.New()
readSize := calcBlockSize(size)
for {
hash2.Reset()
if n, err := io.CopyN(hash2, r, readSize); err != nil && n == 0 {
if err != io.EOF {
return "", err
}
break
}
hash1.Write(hash2.Sum(nil))
}
return hex.EncodeToString(hash1.Sum(nil)), nil
}

View File

@ -10,6 +10,7 @@ import (
"net/url"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@ -31,7 +32,7 @@ func (d *Trainbit) GetAddition() driver.Additional {
}
func (d *Trainbit) Init(ctx context.Context) error {
http.DefaultClient.CheckRedirect = func(req *http.Request, via []*http.Request) error {
base.HttpClient.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
var err error
@ -119,7 +120,7 @@ func (d *Trainbit) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
query := &url.Values{}
query.Add("q", strings.Split(dstDir.GetID(), "_")[1])
query.Add("guid", guid)
query.Add("name", url.QueryEscape(local2provider(stream.GetName(), false) + "."))
query.Add("name", url.QueryEscape(local2provider(stream.GetName(), false)+"."))
endpoint.RawQuery = query.Encode()
var total int64
total = 0
@ -135,7 +136,7 @@ func (d *Trainbit) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
return err
}
req.Header.Set("Content-Type", "text/json; charset=UTF-8")
_, err = http.DefaultClient.Do(req)
_, err = base.HttpClient.Do(req)
return err
}

View File

@ -9,6 +9,7 @@ import (
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
)
@ -38,7 +39,7 @@ func get(url string, apiKey string, AUSHELLPORTAL string) (*http.Response, error
Value: apiKey,
MaxAge: 2 * 60,
})
res, err := http.DefaultClient.Do(req)
res, err := base.HttpClient.Do(req)
return res, err
}
@ -65,7 +66,7 @@ func postForm(endpoint string, data url.Values, apiExpiredate string, apiKey str
Value: apiKey,
MaxAge: 2 * 60,
})
res, err := http.DefaultClient.Do(req)
res, err := base.HttpClient.Do(req)
return res, err
}

View File

@ -81,7 +81,7 @@ func (d *USS) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*m
expireAt := time.Now().Add(downExp).Unix()
upd := url.QueryEscape(path.Base(file.GetPath()))
signStr := strings.Join([]string{d.OperatorPassword, fmt.Sprint(expireAt), fmt.Sprintf("/%s", key)}, "&")
upt := utils.GetMD5Encode(signStr)[12:20] + fmt.Sprint(expireAt)
upt := utils.GetMD5EncodeStr(signStr)[12:20] + fmt.Sprint(expireAt)
link := fmt.Sprintf("%s?_upd=%s&_upt=%s", u, upd, upt)
return &model.Link{URL: link}, nil
}

View File

@ -78,19 +78,19 @@ func (d *WebDav) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
}
func (d *WebDav) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.client.Rename(srcObj.GetPath(), path.Join(dstDir.GetPath(), srcObj.GetName()), true)
return d.client.Rename(getPath(srcObj), path.Join(dstDir.GetPath(), srcObj.GetName()), true)
}
func (d *WebDav) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
return d.client.Rename(srcObj.GetPath(), path.Join(path.Dir(srcObj.GetPath()), newName), true)
return d.client.Rename(getPath(srcObj), path.Join(path.Dir(srcObj.GetPath()), newName), true)
}
func (d *WebDav) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.client.Copy(srcObj.GetPath(), path.Join(dstDir.GetPath(), srcObj.GetName()), true)
return d.client.Copy(getPath(srcObj), path.Join(dstDir.GetPath(), srcObj.GetName()), true)
}
func (d *WebDav) Remove(ctx context.Context, obj model.Obj) error {
return d.client.RemoveAll(obj.GetPath())
return d.client.RemoveAll(getPath(obj))
}
func (d *WebDav) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {

View File

@ -12,6 +12,7 @@ import (
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"golang.org/x/net/publicsuffix"
)
@ -185,7 +186,7 @@ func (ca *CookieAuth) getSPToken() (*SuccessResponse, error) {
return nil, err
}
client := &http.Client{}
client := base.HttpClient
resp, err := client.Do(req)
if err != nil {
return nil, err

View File

@ -4,6 +4,7 @@ import (
"net/http"
"github.com/alist-org/alist/v3/drivers/webdav/odrvcookie"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/gowebdav"
)
@ -29,3 +30,10 @@ func (d *WebDav) setClient() error {
d.client = c
return nil
}
func getPath(obj model.Obj) string {
if obj.IsDir() {
return obj.GetPath() + "/"
}
return obj.GetPath()
}

161
drivers/wopan/driver.go Normal file
View File

@ -0,0 +1,161 @@
package template
import (
"context"
"fmt"
"github.com/Xhofe/wopan-sdk-go"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type Wopan struct {
model.Storage
Addition
client *wopan.WoClient
}
func (d *Wopan) Config() driver.Config {
return config
}
func (d *Wopan) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Wopan) Init(ctx context.Context) error {
d.client = wopan.DefaultWithRefreshToken(d.RefreshToken)
d.client.SetAccessToken(d.AccessToken)
d.client.OnRefreshToken(func(accessToken, refreshToken string) {
d.AccessToken = accessToken
d.RefreshToken = refreshToken
op.MustSaveDriverStorage(d)
})
return d.client.InitData()
}
func (d *Wopan) Drop(ctx context.Context) error {
return nil
}
func (d *Wopan) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var res []model.Obj
pageNum := 0
pageSize := 100
for {
data, err := d.client.QueryAllFiles(d.getSpaceType(), dir.GetID(), pageNum, pageSize, 0, d.FamilyID, func(req *resty.Request) {
req.SetContext(ctx)
})
if err != nil {
return nil, err
}
objs, err := utils.SliceConvert(data.Files, fileToObj)
if err != nil {
return nil, err
}
res = append(res, objs...)
if len(data.Files) < pageSize {
break
}
pageNum++
}
return res, nil
}
func (d *Wopan) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if f, ok := file.(*Object); ok {
res, err := d.client.GetDownloadUrlV2([]string{f.FID}, func(req *resty.Request) {
req.SetContext(ctx)
})
if err != nil {
return nil, err
}
return &model.Link{
URL: res.List[0].DownloadUrl,
}, nil
}
return nil, fmt.Errorf("unable to convert file to Object")
}
func (d *Wopan) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.client.CreateDirectory(d.getSpaceType(), parentDir.GetID(), dirName, d.FamilyID, func(req *resty.Request) {
req.SetContext(ctx)
})
return err
}
func (d *Wopan) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
dirList := make([]string, 0)
fileList := make([]string, 0)
if srcObj.IsDir() {
dirList = append(dirList, srcObj.GetID())
} else {
fileList = append(fileList, srcObj.GetID())
}
return d.client.MoveFile(dirList, fileList, dstDir.GetID(),
d.getSpaceType(), d.getSpaceType(),
d.FamilyID, d.FamilyID, func(req *resty.Request) {
req.SetContext(ctx)
})
}
func (d *Wopan) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_type := 1
if srcObj.IsDir() {
_type = 0
}
return d.client.RenameFileOrDirectory(d.getSpaceType(), _type, srcObj.GetID(), newName, d.FamilyID, func(req *resty.Request) {
req.SetContext(ctx)
})
}
func (d *Wopan) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
dirList := make([]string, 0)
fileList := make([]string, 0)
if srcObj.IsDir() {
dirList = append(dirList, srcObj.GetID())
} else {
fileList = append(fileList, srcObj.GetID())
}
return d.client.CopyFile(dirList, fileList, dstDir.GetID(),
d.getSpaceType(), d.getSpaceType(),
d.FamilyID, d.FamilyID, func(req *resty.Request) {
req.SetContext(ctx)
})
}
func (d *Wopan) Remove(ctx context.Context, obj model.Obj) error {
dirList := make([]string, 0)
fileList := make([]string, 0)
if obj.IsDir() {
dirList = append(dirList, obj.GetID())
} else {
fileList = append(fileList, obj.GetID())
}
return d.client.DeleteFile(d.getSpaceType(), dirList, fileList, func(req *resty.Request) {
req.SetContext(ctx)
})
}
func (d *Wopan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
_, err := d.client.Upload2C(d.getSpaceType(), wopan.Upload2CFile{
Name: stream.GetName(),
Size: stream.GetSize(),
Content: stream,
ContentType: stream.GetMimetype(),
}, dstDir.GetID(), d.FamilyID, wopan.Upload2COption{
OnProgress: func(current, total int64) {
up(int(100 * current / total))
},
})
return err
}
//func (d *Wopan) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Wopan)(nil)
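
List above pages through QueryAllFiles and stops as soon as a page comes back with fewer entries than pageSize. A minimal, self-contained sketch of that termination rule, using a hypothetical fetchPage helper rather than the wopan SDK:

package main

import "fmt"

// fetchPage is a stand-in for a paged API call such as QueryAllFiles.
// It returns at most pageSize items starting at pageNum*pageSize.
func fetchPage(all []string, pageNum, pageSize int) []string {
	start := pageNum * pageSize
	if start >= len(all) {
		return nil
	}
	end := start + pageSize
	if end > len(all) {
		end = len(all)
	}
	return all[start:end]
}

func main() {
	all := make([]string, 0, 230)
	for i := 0; i < 230; i++ {
		all = append(all, fmt.Sprintf("file-%03d", i))
	}

	var res []string
	pageNum, pageSize := 0, 100
	for {
		page := fetchPage(all, pageNum, pageSize)
		res = append(res, page...)
		// A short (or empty) page means the listing is exhausted.
		if len(page) < pageSize {
			break
		}
		pageNum++
	}
	fmt.Println(len(res)) // 230
}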

37  drivers/wopan/meta.go  Normal file
@@ -0,0 +1,37 @@
package template

import (
	"github.com/alist-org/alist/v3/internal/driver"
	"github.com/alist-org/alist/v3/internal/op"
)

type Addition struct {
	// Usually one of two
	driver.RootID
	// define other
	RefreshToken string `json:"refresh_token" required:"true"`
	FamilyID     string `json:"family_id" help:"Keep it empty if you want to use your personal drive"`
	SortRule     string `json:"sort_rule" type:"select" options:"name_asc,name_desc,time_asc,time_desc,size_asc,size_desc" default:"name_asc"`
	AccessToken  string `json:"access_token"`
}

var config = driver.Config{
	Name:              "WoPan",
	LocalSort:         false,
	OnlyLocal:         false,
	OnlyProxy:         false,
	NoCache:           false,
	NoUpload:          false,
	NeedMs:            false,
	DefaultRoot:       "0",
	CheckStatus:       false,
	Alert:             "",
	NoOverwriteUpload: true,
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Wopan{}
	})
}

34  drivers/wopan/types.go  Normal file
@@ -0,0 +1,34 @@
package template

import (
	"github.com/Xhofe/wopan-sdk-go"
	"github.com/alist-org/alist/v3/internal/model"
)

type Object struct {
	model.ObjThumb
	FID string
}

func fileToObj(file wopan.File) (model.Obj, error) {
	t, err := getTime(file.CreateTime)
	if err != nil {
		return nil, err
	}
	return &Object{
		ObjThumb: model.ObjThumb{
			Object: model.Object{
				ID: file.Id,
				//Path: "",
				Name:     file.Name,
				Size:     file.Size,
				Modified: t,
				IsFolder: file.Type == 0,
			},
			Thumbnail: model.Thumbnail{
				Thumbnail: file.ThumbUrl,
			},
		},
		FID: file.Fid,
	}, nil
}

40  drivers/wopan/util.go  Normal file
@@ -0,0 +1,40 @@
package template

import (
	"time"

	"github.com/Xhofe/wopan-sdk-go"
)

// helpers that are not part of the Driver interface

func (d *Wopan) getSortRule() int {
	switch d.SortRule {
	case "name_asc":
		return wopan.SortNameAsc
	case "name_desc":
		return wopan.SortNameDesc
	case "time_asc":
		return wopan.SortTimeAsc
	case "time_desc":
		return wopan.SortTimeDesc
	case "size_asc":
		return wopan.SortSizeAsc
	case "size_desc":
		return wopan.SortSizeDesc
	default:
		return wopan.SortNameAsc
	}
}

func (d *Wopan) getSpaceType() string {
	if d.FamilyID == "" {
		return wopan.SpaceTypePersonal
	}
	return wopan.SpaceTypeFamily
}

// timestamps look like "20230607214351"
func getTime(str string) (time.Time, error) {
	return time.Parse("20060102150405", str)
}
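
getTime relies on Go's reference-time layout: the literal "20060102150405" is the reference time 2006-01-02 15:04:05 written without separators, so a compact stamp such as "20230607214351" parses directly. A small standalone check (an illustration, not part of the driver):

package main

import (
	"fmt"
	"time"
)

func main() {
	// "20060102150405" is Go's reference time without separators,
	// matching the timestamp format used by the WoPan API.
	t, err := time.Parse("20060102150405", "20230607214351")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.Format(time.RFC3339)) // 2023-06-07T21:43:51Z
}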

79  go.mod
@@ -5,19 +5,24 @@ go 1.20
require (
github.com/SheltonZhu/115driver v1.0.14
github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a
github.com/aws/aws-sdk-go v1.44.194
github.com/blevesearch/bleve/v2 v2.3.7
github.com/caarlos0/env/v7 v7.1.0
github.com/Xhofe/wopan-sdk-go v0.1.1
github.com/avast/retry-go v3.0.0+incompatible
github.com/aws/aws-sdk-go v1.44.262
github.com/blevesearch/bleve/v2 v2.3.8
github.com/caarlos0/env/v9 v9.0.0
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/deckarep/golang-set/v2 v2.3.0
github.com/disintegration/imaging v1.6.2
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564
github.com/foxxorcat/mopan-sdk-go v0.1.1
github.com/gin-contrib/cors v1.4.0
github.com/gin-gonic/gin v1.9.0
github.com/gin-gonic/gin v1.9.1
github.com/go-resty/resty/v2 v2.7.0
github.com/golang-jwt/jwt/v4 v4.5.0
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.5.0
github.com/hirochachacha/go-smb2 v1.1.0
github.com/ipfs/go-ipfs-api v0.6.0
github.com/jlaffaye/ftp v0.1.0
github.com/json-iterator/go v1.1.12
github.com/maruel/natural v1.1.0
@@ -25,15 +30,16 @@ require (
github.com/pkg/errors v0.9.1
github.com/pkg/sftp v1.13.5
github.com/pquerna/otp v1.4.0
github.com/sirupsen/logrus v1.9.0
github.com/sirupsen/logrus v1.9.2
github.com/spf13/cobra v1.7.0
github.com/t3rm1n4l/go-mega v0.0.0-20230228171823-a01a2cda13ca
github.com/u2takey/ffmpeg-go v0.4.1
github.com/upyun/go-sdk/v3 v3.0.4
github.com/winfsp/cgofuse v1.5.0
golang.org/x/crypto v0.8.0
golang.org/x/image v0.7.0
golang.org/x/net v0.9.0
golang.org/x/crypto v0.11.0
golang.org/x/image v0.9.0
golang.org/x/net v0.12.0
golang.org/x/oauth2 v0.10.0
gorm.io/driver/mysql v1.4.7
gorm.io/driver/postgres v1.4.8
gorm.io/driver/sqlite v1.4.4
@@ -46,6 +52,7 @@ require (
github.com/aead/ecdh v0.2.0 // indirect
github.com/aliyun/aliyun-oss-go-sdk v2.2.5+incompatible // indirect
github.com/andreburgaud/crypt2go v1.1.0 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/bits-and-blooms/bitset v1.2.0 // indirect
github.com/blevesearch/bleve_index_api v1.0.5 // indirect
github.com/blevesearch/geo v0.1.17 // indirect
@@ -61,54 +68,78 @@
github.com/blevesearch/zapx/v12 v12.3.7 // indirect
github.com/blevesearch/zapx/v13 v13.3.7 // indirect
github.com/blevesearch/zapx/v14 v14.3.7 // indirect
github.com/blevesearch/zapx/v15 v15.3.9 // indirect
github.com/blevesearch/zapx/v15 v15.3.10 // indirect
github.com/bluele/gcache v0.0.2 // indirect
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect
github.com/bytedance/sonic v1.8.0 // indirect
github.com/bytedance/sonic v1.9.1 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
github.com/gaoyb7/115drive-webdav v0.1.8 // indirect
github.com/geoffgarside/ber v1.1.0 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.11.2 // indirect
github.com/go-playground/validator/v10 v10.14.0 // indirect
github.com/go-sql-driver/mysql v1.7.0 // indirect
github.com/goccy/go-json v0.10.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551 // indirect
github.com/golang/protobuf v1.5.0 // indirect
github.com/golang/snappy v0.0.3 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ipfs/boxo v0.8.0 // indirect
github.com/ipfs/go-cid v0.4.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
github.com/jackc/pgx/v5 v5.3.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/cpuid/v2 v2.0.9 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p v0.26.3 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-sqlite3 v1.14.15 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.8.0 // indirect
github.com/multiformats/go-multibase v0.1.1 // indirect
github.com/multiformats/go-multicodec v0.8.1 // indirect
github.com/multiformats/go-multihash v0.2.1 // indirect
github.com/multiformats/go-multistream v0.4.1 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/orzogc/fake115uploader v0.3.3-0.20221009101310-08b764073b77 // indirect
github.com/pelletier/go-toml/v2 v2.0.6 // indirect
github.com/pelletier/go-toml/v2 v2.0.8 // indirect
github.com/pierrec/lz4/v4 v4.1.17 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/u2takey/go-utils v0.3.1 // indirect
github.com/ugorji/go/codec v1.2.9 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
github.com/whyrusleeping/tar-utils v0.0.0-20180509141711-8c6c8ba81d5c // indirect
go.etcd.io/bbolt v1.3.5 // indirect
golang.org/x/arch v0.0.0-20210923205945-b76863e36670 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/arch v0.3.0 // indirect
golang.org/x/sys v0.10.0 // indirect
golang.org/x/text v0.11.0 // indirect
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/protobuf v1.28.1 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
)

Some files were not shown because too many files have changed in this diff.