Compare commits

...

111 Commits

Author SHA1 Message Date
xiaoQQya
3375c26c41
perf(quark_uc&quark_uc_tv): native proxy multithreading (#8287)
* perf(quark_uc): native proxy multithreading

* perf(quark_uc_tv): native proxy multithreading

* chore(fs): file query result add id
2025-04-03 20:50:29 +08:00
asdfghjkl
ab68faef44
fix(baidu_netdisk): add another video crack api (#8275)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-03 20:44:49 +08:00
New Future
2e21df0661
feat(driver): add Azure Blob Storage driver (#8261)
* add azure-blob driver

* fix nested folders copy

* feat(driver): add Azure Blob Storage driver

Implement an Azure Blob Storage driver with support for:
- Initializing the connection with shared key authentication
- Listing directories and files
- Generating temporary SAS URLs for file access
- Creating directories
- Moving and renaming files/folders
- Copying files/folders
- Deleting files/folders
- Uploading files with progress tracking

This driver lets users seamlessly access and manage data in Azure Blob Storage through the AList platform.

* feat(driver): update help doc for Azure Blob

* doc(readme): add new driver

* Update drivers/azure_blob/driver.go

fix(azure): fix name check

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

doc(readme): fix the link

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(azure): fix log and link

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:43:21 +08:00
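The shared-key init and temporary SAS URL generation this driver commit describes can be sketched with the official azure-sdk-for-go azblob packages. A minimal illustration, not the driver's actual code; the account, container, and blob names are placeholders:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
)

func main() {
	account, key := "myaccount", "<base64-account-key>" // placeholders
	cred, err := azblob.NewSharedKeyCredential(account, key)
	if err != nil {
		log.Fatal(err)
	}
	// Initialize the service client with shared key authentication;
	// list/mkdir/upload/delete operations would all go through it.
	client, err := azblob.NewClientWithSharedKeyCredential(
		fmt.Sprintf("https://%s.blob.core.windows.net/", account), cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	_ = client

	// Generate a temporary read-only SAS URL for a single blob.
	qp, err := sas.BlobSignatureValues{
		Protocol:      sas.ProtocolHTTPS,
		ExpiryTime:    time.Now().UTC().Add(time.Hour),
		ContainerName: "mycontainer",
		BlobName:      "dir/file.txt",
		Permissions:   (&sas.BlobPermissions{Read: true}).String(),
	}.SignWithSharedKey(cred)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("https://%s.blob.core.windows.net/mycontainer/dir/file.txt?%s\n",
		account, qp.Encode())
}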
MadDogOwner
af18cb138b
feat(139): add option ReportRealSize (#8244 close #8141)
* feat(139): handle family upload errors

* feat(139): add option `ReportRealSize`

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:41:59 +08:00
j2rong4cn
31c55a2adf
fix(archive): unable to preview (#8248)
* fix(archive): unable to preview

* fix bug
2025-04-03 20:41:05 +08:00
MadDogOwner
465dd1703d
feat(cloudreve): s3 policy support (#8245)
* feat(cloudreve): s3 policy support

* fix(cloudreve): correct potential off-by-one error in `etags` initialization
2025-04-03 20:40:19 +08:00
j2rong4cn
a6304285b6
fix: revert "refactor(net): pass request header" (#8269)
This reverts commit 5be50e77d9.
2025-04-03 20:35:52 +08:00
YangXu
affd0cecd1
fix(pikpak&pikpak_share): update algorithms (#8278) 2025-04-03 20:35:14 +08:00
MadDogOwner
37640221c0
fix(doubao): update file size type to int64 (#8289) 2025-04-03 20:34:27 +08:00
Andy Hsu
e4bd223d1c fix(deps): update 115-sdk-go to v0.1.5 2025-04-03 20:29:53 +08:00
jerry
0cde4e73d6
feat(ipfs): better ipfs support (#8225)
* feat:  better ipfs support

fixed mfs crud, added ipns support

* Update driver.go

clean up
2025-03-27 23:25:23 +08:00
Ljcbaby
7b62dcb88c
fix(baidu_netdisk): duplicate retry (#8210 redo #7972, link #8180) 2025-03-27 23:22:55 +08:00
never lee
c38dc6df7c
fix(115_open): support multipart upload (#8229)
Co-authored-by: neverlee <neverlea@formail.com>
2025-03-27 23:22:08 +08:00
MadDogOwner
5668e4a4ea
feat(doubao): add Doubao driver (#8232 closes #8020 #8206)
* feat(doubao): implement List()

* feat(doubao): implement Link()

* feat(doubao): implement MakeDir()

* refactor(doubao): add type Object to store key

* feat(doubao): implement Move()

* feat(doubao): implement Rename()

* feat(doubao): implement Remove()
2025-03-27 23:21:42 +08:00
KirCute
1335f80362
feat(archive): support multipart archives (#8184 close #8015)
* feat(archive): multipart support & sevenzip tool

* feat(archive): rardecode tool

* feat(archive): support decompress multi-selected

* fix(archive): decompress response filter internal

* feat(archive): support multipart zip

* fix: more applicable AcceptedMultipartExtensions interface
2025-03-27 23:20:44 +08:00
KirCute
704d3854df
feat(alist_v3): support forward archive requests (#8230)
* feat(alist_v3): support forward archive requests

* fix: encode all inner path
2025-03-27 23:18:34 +08:00
MadDogOwner
44cc71d354
fix(cloudreve): enable SetContentLength for uploading to local policy (#8228 close #8174)
* fix(cloudreve): upload failure to return error msg instead of deletion success

* fix(cloudreve): enable SetContentLength for uploading to local policy

* refactor(cloudreve): move local policy upload logic to utils for better error handling

* refactor(cloudreve): unified upload code style

* refactor(cloudreve): improve user agent handling
2025-03-27 23:18:15 +08:00
KirCute
9a9aee9ac6
feat(alias): support writing to non-ambiguous paths (#8216)
* feat(alias): support writing to non-ambiguous paths

* feat(alias): support extract concurrency

* fix(alias): extract url no pass query
2025-03-27 23:17:45 +08:00
KirCute
4fcc3a187e
fix(traffic): duplicate semaphore release when uploading (#8211 close #8180) 2025-03-27 23:15:47 +08:00
Ljcbaby
10a76c701d
fix(db): support postgres trust/peer mode (#8198 close #8066) 2025-03-27 23:15:04 +08:00
KirCute
6e13923225
fix(sftp-server): postgre cannot store control characters (#8188 close #8186) 2025-03-27 23:14:36 +08:00
Andy Hsu
32890da29f fix(115_open): upgrade 115-sdk-go dependency to v0.1.4 2025-03-21 19:06:09 +08:00
Andy Hsu
758554a40f fix(115_open): upgrade 115-sdk-go dependency to v0.1.3 (close #8169) 2025-03-19 21:47:42 +08:00
Andy Hsu
4563aea47e fix(115_open): rename delay to take effect (close #8156) 2025-03-18 22:25:04 +08:00
Andy Hsu
35d6f3b8fc fix(115_open): upgrade sdk (close #8151) 2025-03-18 22:21:50 +08:00
j2rong4cn
b4e6ab12d9
refactor: FilterReadMeScripts (#8154 close #8150)
* refactor: FilterReadMeScripts

* .
2025-03-18 22:02:33 +08:00
Andy Hsu
3499c4db87
feat: 115 open driver (#8139)
* wip: 115 open

* chore(go.mod): update 115-sdk-go dependency version

* feat(115_open): implement directory management and file operations

* chore(go.mod): update 115-sdk-go dependency to v0.1.1 and adjust callback handling in driver

* chore: rename driver
2025-03-17 00:52:09 +08:00
hshpy
d20f41d687
fix: missing handling of RangeReadCloser (#8146) 2025-03-16 22:14:44 +08:00
Andy Hsu
d16ba65f42 fix(lang): initialize configuration in LangCmd before generating language JSON file 2025-03-16 16:37:33 +08:00
hshpy
c82e632ee1
fix: potential XSS vulnerabilities (#7923)
* fix: potential XSS vulnerabilities

* feat: support filter and render for readme.md

* chore: set ReadMeAutoRender to true

* fix attachFileName undefined

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-03-15 23:28:40 +08:00
折纸飞机
04f5525f20
fix(s3): incorrectly added slash before the Bucket name (#8083 close #8001) 2025-03-15 00:21:24 +08:00
shniubobo
28b61a93fd
feat(webdav): support oc:checksums (#8064 close #7472)
Ref: #7472
2025-03-15 00:21:07 +08:00
j2rong4cn
0126af4de0
fix(crypt): premature close of MFile (#8132 close #8119)
* fix(crypt): premature close of MFile

* refactor
2025-03-15 00:13:30 +08:00
MadDogOwner
7579d44517
fix(onedrive): set req.ContentLength (#8081)
* fix(onedrive): set req.ContentLength

* fix(onedrive_app): set req.ContentLength

* fix(cloudreve): set req.ContentLength
2025-03-15 00:12:37 +08:00
MadDogOwner
5dfea714d8
fix(cloudreve): use milliseconds timestamp in last_modified (#8133) 2025-03-15 00:12:15 +08:00
Ljcbaby
370a6c15a9
fix(baidu_netdisk): remove duplicate retry (#7972) 2025-03-01 19:00:36 +08:00
Ljcbaby
2570707a06
feat(baidu_netdisk): support dynamic slice size for low-bandwidth upload case (#7965)
* Dynamic slice size

* Add rigorous test results
2025-03-01 18:46:05 +08:00
j2rong4cn
4145734c18
refactor(net): pass request header (#8031 close #8008)
* refactor(net): pass request header

* feat(proxy): add `Etag` to response header

* refactor
2025-03-01 18:35:34 +08:00
KirCute
646c7bcd21
fix(archive): use another sign for extraction (#7982) 2025-03-01 18:34:33 +08:00
KirCute
cdc41595bc
feat(github): support GPG verification (#7996 close #7986)
* feat(github): support GPG verification

* chore
2025-02-24 23:12:23 +08:00
KirCute_ECT
79bef0be9e
chore: fix build failed (#8005) 2025-02-16 15:11:48 +08:00
KirCute_ECT
c230f24ebe
fix(archive): decode filename when decompressing zips (#7998 close #7988) 2025-02-16 12:25:01 +08:00
KirCute_ECT
30d8c20756
feat(archive): support deprioritize previewing (#7984) 2025-02-16 12:24:10 +08:00
KirCute_ECT
3b71500f23
feat(traffic): support limit task worker count & file stream rate (#7948)
* feat: set task workers num & client stream rate limit

* feat: server stream rate limit

* upgrade xhofe/tache

* .
2025-02-16 12:22:11 +08:00
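The client/server stream rate limit in this commit boils down to a token-bucket gate in front of every Read. A minimal sketch with golang.org/x/time/rate, illustrating the technique rather than AList's actual implementation:

package main

import (
	"context"
	"fmt"
	"io"
	"strings"

	"golang.org/x/time/rate"
)

// rateLimitedReader charges one token per byte read, capping throughput.
type rateLimitedReader struct {
	ctx context.Context
	r   io.Reader
	lim *rate.Limiter // burst must be >= the largest single read
}

func (l *rateLimitedReader) Read(p []byte) (int, error) {
	n, err := l.r.Read(p)
	if n > 0 {
		// Block until n bytes' worth of tokens are available.
		if werr := l.lim.WaitN(l.ctx, n); werr != nil {
			return n, werr
		}
	}
	return n, err
}

func main() {
	src := strings.NewReader(strings.Repeat("x", 256<<10))
	r := &rateLimitedReader{
		ctx: context.Background(),
		r:   src,
		lim: rate.NewLimiter(128<<10, 32<<10), // ~128 KiB/s, 32 KiB burst
	}
	n, err := io.Copy(io.Discard, r)
	fmt.Println(n, err) // finishes in roughly two seconds at this rate
}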
foxxorcat
399336b33c
fix(189pc): transfer rename (#7958)
* fix(189pc): transfer rename

* fix: OverwriteUpload

* fix: change search method

* fix

* fix
2025-02-16 12:21:34 +08:00
KirCute_ECT
36b4204623
feat(github): support github proxy (#7979 close #7963) 2025-02-16 12:21:03 +08:00
YangRucheng
f25be154c6
fix(ilanzou): add header X-Forwarded-For to solve IP ban (#7977)
* fix: warning

* feat: ip header

* fix: ip header for fs link
2025-02-16 12:20:28 +08:00
Sakana
ec3fc945a3
fix(feiji): modify the request header (#7902 close #7890) 2025-02-09 18:35:39 +08:00
MadDogOwner
3f9bed3d5f
feat(bootstrap): add .url to proxy types (#7928) 2025-02-09 18:33:38 +08:00
Jealous
b9ad18bd0a
feat(recursive-move): Advanced conflict policy for preventing unintentional overwriting (#7906) 2025-02-09 18:32:57 +08:00
Jealous
0219c4e15a
fix(index): fix the issue where ignored paths are not updated (#7907) 2025-02-09 18:31:43 +08:00
Feng.YJ
d983a4ebcb
refactor(cmd): use std runtime package to get go version info (#7964)
* refactor(cmd): use std `runtime` package to get go version info

- Remove the `GoVersion` variable.
- Remove overriding `GoVersion` by ldflags in `build.sh`.
- Get go version, OS and arch from the constants in the std `runtime` package instead of compile time.

* chore(ci): remove `GoVersion` flag from workflows

Remove GoVersion flag from beta_release.yml and build.yml workflows.

> Reduce compile-time dependencies.
2025-02-09 18:30:56 +08:00
Sakana
f795807753
feat(github_releases): support dir size for show all version (#7938)
* refactor

* Change the default RepoStructure

* feat: support using gh-proxy
2025-02-09 18:30:38 +08:00
hshpy
6164e4577b
fix: missing args when using alias driver (#7941 close #7932) 2025-02-05 19:22:10 +08:00
Sakana
39bde328ee
fix(lenovonas_share): the size of the directory (#7914) 2025-02-01 17:32:58 +08:00
KirCute_ECT
779c293f04
fix(driver): implement canceling and updating progress for putting for some drivers (#7847)
* fix(driver): additionally implement canceling and updating progress for putting for some drivers

* refactor: add driver archive api into template

* fix(123): use built-in MD5 to avoid caching full

* .

* fix build failed
2025-02-01 17:29:55 +08:00
abc1763613206
b9f397d29f
fix(139): restore the Account handling, partially reverts #7850 (#7900 close #7784) 2025-01-30 11:25:41 +08:00
Jiang Xiang
d53eecc229
fix(febbox): panic due to slice out of range (#7898 close #7889) 2025-01-30 11:24:07 +08:00
Andy Hsu
f88fd83d4a feat(ci): use go-cross/cgo-actions for dev build 2025-01-28 18:57:09 +08:00
Andy Hsu
226c34929a feat(ci): add build info for beta release 2025-01-27 21:32:59 +08:00
j2rong4cn
027edcbe53
refactor(patch): execute all patches in dev version (#7807) 2025-01-27 20:49:24 +08:00
fd51f34efa
feat(misskey): add misskey driver (#7864) 2025-01-27 20:47:52 +08:00
Sakana
bdd9774aa7
feat(github_releases): add support for github_releases driver (#7844 close #7842)
* feat(github_releases): add support for GitHub Releases

* feat(github_releases): add directory size and update time, add request caching

* Feat(github_releases): optionally provide a GitHub token to raise the rate limit or access private repositories

* Fix(github_releases): fix the error when the repository is inaccessible or does not exist

* feat(github_releases): support showing all versions; folder sizes are not shown when this is enabled

* feat(github_releases): handle repositories without subdirectories
2025-01-27 20:28:44 +08:00
Jealous
258b8f520f
feat(recursive-move): add overwrite option to prevent unintentional overwriting (#7868 closes #7382, #7719)
* feat(recursive-move): add `overwrite` option to prevent unintentional overwriting

* chore: rearrange code order
2025-01-27 20:25:39 +08:00
Jiang Xiang
99f39410f2
fix(s3): escape CopySource request header when copying files (#7860 close #7858) 2025-01-27 20:23:13 +08:00
Shelton Zhu
267120a8c8
fix(115): fix offline download (#7845 close #7794)
* feat(115): use multi url for list files & change download url api

* fix(115): fix offline download. (close #7794)
2025-01-27 20:20:55 +08:00
KirCute_ECT
5eff8cc7bf
feat(upload): support rapid upload on web (#7851) 2025-01-27 20:20:09 +08:00
KirCute_ECT
d5ec998699
feat(task): allow retry canceled (#7852) 2025-01-27 20:18:10 +08:00
LaoShui
23f3178f39
chore(README): formatting spacing in README links (#7879) [skip ci] 2025-01-27 20:13:35 +08:00
MadDogOwner
cafdb4d407
fix(139): correct path handling in groupGetFiles (#7850 closes #7848, #7603)
* fix(139): correct path handling in groupGetFiles

* perf(139): reduce the number of requests in groupGetFiles

* refactor(139): check authorization expiration (#10)

* refactor(139): check authorization expiration

* fix bug

* chore(139): update api version to 7.14.0

---------

Co-authored-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
2025-01-27 20:11:21 +08:00
Jealous
0d4c63e9ff
feat(fs): display the existing filename in error message (#7877) 2025-01-27 20:09:17 +08:00
j2rong4cn
5c5d8378e5
fix(archive): unable to preview (#7843)
* fix(archive): unrecognized zip

* feat(archive): add tree for zip meta

* fix bug

* refactor(archive):  meta cache time use Link Expiration first

* feat(archive): return sort policy in meta (#2)

* refactor

* perf(archive): reduce new network requests

---------

Co-authored-by: KirCute_ECT <951206789@qq.com>
2025-01-27 20:08:56 +08:00
j2rong4cn
2be0c3d1a0
feat(alias): add DownloadConcurrency and DownloadPartSize option (#7829)
* fix(net): goroutine logic bug (AlistGo/alist#7215)

* Fix goroutine logic bug

* Fix bug

---------

Co-authored-by: hpy hs <hshpy.pengyu@gmail.com>

* perf(net): sequential and dynamic concurrency

* fix(net): incorrect error return

* feat(alias):  add `DownloadConcurrency` and `DownloadPartSize` option

* feat(net): add `ConcurrencyLimit`

* perf(net): create `chunk` on demand

* refactor

* refactor

* fix(net): `r.Closers.Add` has no effect

* refactor

---------

Co-authored-by: hpy hs <hshpy.pengyu@gmail.com>
2025-01-27 20:08:39 +08:00
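The `ConcurrencyLimit` / `DownloadConcurrency` options above amount to a bounded number of chunk requests in flight. A minimal sketch of that gate using a buffered-channel semaphore; the chunk type and fetch function are hypothetical stand-ins for ranged HTTP requests:

package main

import (
	"fmt"
	"sync"
	"time"
)

type chunk struct{ off, size int64 } // hypothetical byte range of a download part

// fetch stands in for an HTTP Range request for one chunk.
func fetch(c chunk) { time.Sleep(50 * time.Millisecond) }

func main() {
	const concurrency = 3 // e.g. the DownloadConcurrency option
	chunks := make([]chunk, 10)
	for i := range chunks {
		chunks[i] = chunk{off: int64(i) * (4 << 20), size: 4 << 20}
	}

	sem := make(chan struct{}, concurrency) // at most `concurrency` chunks in flight
	var wg sync.WaitGroup
	for _, c := range chunks {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot; blocks once the limit is reached
		go func(c chunk) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			fetch(c)
		}(c)
	}
	wg.Wait()
	fmt.Println("all chunks fetched")
}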
foxxorcat
bdcf450203
fix: resolve concurrent read/write issues in WrapObjName (#7865) 2025-01-27 20:06:18 +08:00
Jealous
c2633dd443
fix(workflow): use the dev version of the web for beta releases (#7862)
* fix(workflow): use dev version of the web for beta releases

* chore(config): check version string by prefix
2025-01-23 22:49:35 +08:00
KirCute_ECT
11b6a6012f
fix(copy): use Link and Put when the driver does not support copying (#7834) 2025-01-18 23:52:02 +08:00
Jealous
59e02287b2
feat(fs): add overwrite option to prevent unintentional overwriting (#7809) 2025-01-18 23:39:07 +08:00
KirCute_ECT
bb40e2e2cd
feat(archive): archive manage (#7817)
* feat(archive): archive management

* fix(ftp-server): remove duplicate ReadAtSeeker realization

* fix(archive): bad seeking of SeekableStream

* fix(archive): split internal and driver extraction api

* feat(archive): patch

* fix(shutdown): clear decompress upload tasks

* chore

* feat(archive): support .iso format

* chore
2025-01-18 23:28:12 +08:00
j2rong4cn
ab22cf8233
feat: add Reference interface to driver (#7805)
* feat: add `Reference` interface to driver

* feat(123_share): support reference 123pan
2025-01-18 23:26:58 +08:00
MadDogOwner
880cc7abca
fix(139): use personal_new by default (#7836) 2025-01-18 23:24:09 +08:00
Jealous
b60da9732f
feat(offline-download): allow using offline download tools in any storage (#7716)
* Feat(offline-download): allow using thunder offline download tool in any storage

* Feat(offline-download): allow using 115 offline download tool in any storage

* Feat(offline-download): allow using pikpak offline download tool in any storage

* style(offline-download): unify offline download tool names

* feat(offline-download): show available offline download tools only

* Fix(offline-download): update unmodified tool names.

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-01-10 21:24:44 +08:00
KirCute_ECT
e04114d102
feat(github): add github api driver (#7717)
* feat(github): add github api driver

* fix: filter submodule operation

* feat: rename, copy and move, but with bugs

* fix: move and copy returns 422

* fix: change TargetPath in rename msg from parent path to new self path

* fix: add non-commit mutex

* perf(github): use net/http to put blob

* chore: add a help message to `ref` addition
2025-01-10 20:59:58 +08:00
KirCute_ECT
51bcf83511
feat(url-tree): support url tree driver writing (#7779 close #5166)
* feat: support url tree writing

* fix: meta writable

* feat: disable writable via addition
2025-01-10 20:50:56 +08:00
KirCute_ECT
25b4b55ee1
feat(ftp-server): support resumable downloading (#7792) 2025-01-10 20:50:20 +08:00
Jiang Xiang
6812ec9a6d
fix(ilanzou): add accept-encoding request header (#7796 close #7759) 2025-01-10 20:49:50 +08:00
Lin Tianchuan
31a7470865
feat(local): support both time and percent for video thumbnail (#7802)
* feat(local): support percent for video thumbnail

The percentage determines the point in the video (as a percentage of the total duration) at which the thumbnail will be generated.

* feat(local): support both time and percent for video thumbnail
2025-01-10 20:48:45 +08:00
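A percent-based thumbnail position reduces to an absolute seek time once the video duration is known. A hedged sketch of the conversion plus the resulting ffmpeg invocation; the helper and file names are hypothetical, not the local driver's code:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// seekOffset turns either an absolute time ("15s") or a percentage ("20%")
// into an offset within a video of the given total duration.
func seekOffset(spec string, duration time.Duration) (time.Duration, error) {
	if strings.HasSuffix(spec, "%") {
		p, err := strconv.ParseFloat(strings.TrimSuffix(spec, "%"), 64)
		if err != nil {
			return 0, err
		}
		return time.Duration(float64(duration) * p / 100), nil
	}
	return time.ParseDuration(spec)
}

func main() {
	off, _ := seekOffset("20%", 90*time.Second) // -> 18s into a 90s video
	// -ss before -i seeks fast; grab exactly one frame as the thumbnail.
	cmd := exec.Command("ffmpeg",
		"-ss", fmt.Sprintf("%.2f", off.Seconds()),
		"-i", "input.mp4", "-vframes", "1", "thumb.jpg")
	fmt.Println(cmd.String())
}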
Mmx
687124c81d
ci(build_docker): merge build_docker into release_docker workflow (#7755)
* feat(ci): merge build_docker workflow into release_docker

* fix(ci): logics of docker meta
2025-01-01 21:29:59 +08:00
foxxorcat
e4439e66b9
fix(baidu_photo): upload error -6 (#7760 close #7744)
* fix(baidu_photo): upload error -6

* fix(baidu_photo): add bdstoken to the api
2025-01-01 21:13:34 +08:00
MadDogOwner
7fd4ac7851
fix(139): update familyGetFiles pagination logic (#7748 close #7711) 2024-12-30 22:55:47 +08:00
KirCute_ECT
6745dcc139
feat(task): attach creator to user of the context (#7729) 2024-12-30 22:55:09 +08:00
KirCute_ECT
aa1082a56c
feat(sftp-server): do not generate host key until first enabled (#7734) 2024-12-30 22:54:37 +08:00
Jealous
ed149be84b
feat(index): add disable index option for storages (#7730) 2024-12-30 22:52:55 +08:00
Sakana
040dc14ee6
fix(lenovonas_share): stoken expire (#7727) 2024-12-30 22:51:39 +08:00
Mmx
4dce53d72b
feat(docker release): improve aria2 image, add aio image (#7750)
* build: add argument INSTALL_ARIA2 to dockerfile

* feat: run aria2 in main entrypoint

* feat(ci): environment matrix for docker release

* improve(ci): allow overwrite artifacts in docker release

* fix(ci): permission of alist binary in docker; entrypoint logic

* improve(aria2): move aria2 data to /opt/aria2; fix permission issues

References:

https://github.com/AlistGo/with_aria2/pull/13

Co-authored-by: GoodbyeNJN <cc@fuckwall.cc>

* fix(ci): aio image is not taking effect

* fix(build): tar command in aria2 installation process

(cherry picked from commit 647285408354807bae64df6a20fefb696ff787de)

---------

Co-authored-by: GoodbyeNJN <cc@fuckwall.cc>
2024-12-30 22:51:05 +08:00
j2rong4cn
365fc40dfe
fix: static page to limit request method (#7745 close #7667) 2024-12-30 22:49:18 +08:00
KirCute_ECT
5994c17b4e
feat(patch): upgrade patch module (#7738)
* feat(patch): upgrade patch module

* chore(patch): add docs

* fix(patch): skip and rewrite invalid last launched version

* fix(patch): turn two functions into patches
2024-12-30 22:48:33 +08:00
Jealous
42243b1517
feat(thunder): add offline download tool (#7673)
* feat(thunder): add offline download tool

* fix(thunder): improve error handling and parse file size in status response

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2024-12-25 21:23:58 +08:00
KirCute_ECT
48916cdedf
fix(permission): enhance the strictness of permissions (#7705 close #7680)
* fix(permission): enhance the strictness of permissions

* fix: add initial permissions to admin
2024-12-25 21:17:58 +08:00
Feng.YJ
5ecf5e823c
fix(webauthn): handle error when removing webauthn credential (#7689) 2024-12-25 21:16:34 +08:00
KirCute_ECT
c218b5701e
fix(115): support float QPS (#7677) 2024-12-25 21:16:03 +08:00
KirCute_ECT
77d0c78bfd
feat(sftp-server): public key login (#7668) 2024-12-25 21:15:06 +08:00
j2rong4cn
db5c601cfe
fix(crypt): add sign to thumbnail (#6611) 2024-12-25 21:13:54 +08:00
KirCute_ECT
221cdf3611
feat(s3): support custom host presign (#7699 close #7696) 2024-12-25 21:13:23 +08:00
KirCute_ECT
40b0e66efe
feat(ftp-server): treat moving across file systems as copying (#7704 close #7701)
* feat(ftp-server): treat moving across file systems as copying

* fix: ensure compatibility across different fs on the same driver
2024-12-25 21:12:30 +08:00
KirCute_ECT
b72e85a73a
fix(ftp-server): rewrite download in a more appropriate method (#7656) 2024-12-25 21:11:45 +08:00
KirCute_ECT
6aaf5975c6
fix(ftp-server): works improperly when base url is not root (#7693)
* fix(ftp-server): works improperly when base url is not root

* fix: avoid merge conflict
2024-12-25 21:11:36 +08:00
MadDogOwner
bb2aec20e4
fix(139): handle upload file conflicts (#7692) 2024-12-25 21:11:05 +08:00
KirCute_ECT
d7aa1608ac
feat(task): add speed monitor (#7655) 2024-12-25 21:09:54 +08:00
j2rong4cn
db99224126
perf: speed up database initialization (#7694)
* perf: fix slow initialization on non-SQLite3 databases

* refactor
2024-12-25 21:08:22 +08:00
MadDogOwner
b8bd14f99b
fix(lanzou): missing parameter (#7678 close #7210) 2024-12-17 22:05:52 +08:00
hshpy
331885ed64
fix(net): close of closed channel (#7580) 2024-12-17 22:04:27 +08:00
267 changed files with 13506 additions and 2179 deletions

.github/workflows/beta_release.yml
@@ -87,12 +87,17 @@ jobs:
run: bash build.sh dev web
- name: Build
id: test-action
uses: go-cross/cgo-actions@v1
with:
targets: ${{ matrix.target }}
musl-target-format: $os-$musl-$arch
out-dir: build
x-flags: |
github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
github.com/alist-org/alist/v3/internal/conf.Version=$tag
github.com/alist-org/alist/v3/internal/conf.WebVersion=dev
- name: Compress
run: |

.github/workflows/build.yml
@@ -15,14 +15,17 @@ jobs:
strategy:
matrix:
platform: [ubuntu-latest]
go-version: [ '1.21' ]
target:
- darwin-amd64
- darwin-arm64
- windows-amd64
- linux-arm64-musl
- linux-amd64-musl
- windows-arm64
- android-arm64
name: Build
runs-on: ${{ matrix.platform }}
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
@@ -30,19 +33,29 @@ jobs:
- uses: benjlevesque/short-sha@v3.0
id: short-sha
- name: Install dependencies
run: |
sudo snap install zig --classic --beta
docker pull crazymax/xgo:latest
go install github.com/crazy-max/xgo@latest
sudo apt install upx
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Setup web
run: bash build.sh dev web
- name: Build
run: |
bash build.sh dev
uses: go-cross/cgo-actions@v1
with:
targets: ${{ matrix.target }}
musl-target-format: $os-$musl-$arch
out-dir: build
x-flags: |
github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
github.com/alist-org/alist/v3/internal/conf.Version=$tag
github.com/alist-org/alist/v3/internal/conf.WebVersion=dev
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: alist_${{ env.SHA }}
path: dist
name: alist_${{ env.SHA }}_${{ matrix.target }}
path: build/*

.github/workflows/build_docker.yml (deleted)
@@ -1,126 +0,0 @@
name: build_docker
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build_docker:
name: Build Docker
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: xhofe/alist
tags: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
- name: Docker meta with ffmpeg
id: meta-ffmpeg
uses: docker/metadata-action@v5
with:
images: xhofe/alist
flavor: |
suffix=-ffmpeg
tags: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: Cache Musl
id: cache-musl
uses: actions/cache@v4
with:
path: build/musl-libs
key: docker-musl-libs-v2
- name: Download Musl Library
if: steps.cache-musl.outputs.cache-hit != 'true'
run: bash build.sh prepare docker-multiplatform
- name: Build go binary
run: bash build.sh dev docker-multiplatform
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
if: github.event_name == 'push'
uses: docker/login-action@v3
with:
username: xhofe
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
- name: Build and push with ffmpeg
id: docker_build_ffmpeg
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta-ffmpeg.outputs.tags }}
labels: ${{ steps.meta-ffmpeg.outputs.labels }}
build-args: INSTALL_FFMPEG=true
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
build_docker_with_aria2:
needs: build_docker
name: Build docker with aria2
runs-on: ubuntu-latest
if: github.event_name == 'push'
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Commit
run: |
git config --local user.email "bot@nn.ci"
git config --local user.name "IlaBot"
git commit --allow-empty -m "Trigger build for ${{ github.sha }}"
- name: Push commit
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2

.github/workflows/release_docker.yml
@@ -4,10 +4,34 @@ on:
push:
tags:
- 'v*'
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
REGISTRY: 'xhofe/alist'
REGISTRY_USERNAME: 'xhofe'
REGISTRY_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
ARTIFACT_NAME: 'binaries_docker_release'
RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
IMAGE_PUSH: ${{ github.event_name == 'push' }}
IMAGE_IS_PROD: ${{ github.ref_type == 'tag' }}
IMAGE_TAGS_BETA: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
jobs:
release_docker:
name: Release Docker
build_binary:
name: Build Binaries for Docker Release
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -28,14 +52,53 @@ jobs:
if: steps.cache-musl.outputs.cache-hit != 'true'
run: bash build.sh prepare docker-multiplatform
- name: Build go binary
- name: Build go binary (beta)
if: env.IMAGE_IS_PROD != 'true'
run: bash build.sh beta docker-multiplatform
- name: Build go binary (release)
if: env.IMAGE_IS_PROD == 'true'
run: bash build.sh release docker-multiplatform
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
images: xhofe/alist
name: ${{ env.ARTIFACT_NAME }}
overwrite: true
path: |
build/
!build/*.tgz
!build/musl-libs/**
release_docker:
needs: build_binary
name: Release Docker image
runs-on: ubuntu-latest
strategy:
matrix:
image: ["latest", "ffmpeg", "aria2", "aio"]
include:
- image: "latest"
build_arg: ""
tag_favor: ""
- image: "ffmpeg"
build_arg: INSTALL_FFMPEG=true
tag_favor: "suffix=-ffmpeg,onlatest=true"
- image: "aria2"
build_arg: INSTALL_ARIA2=true
tag_favor: "suffix=-aria2,onlatest=true"
- image: "aio"
build_arg: |
INSTALL_FFMPEG=true
INSTALL_ARIA2=true
tag_favor: "suffix=-aio,onlatest=true"
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: ${{ env.ARTIFACT_NAME }}
path: 'build/'
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
@@ -44,10 +107,22 @@
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3
with:
username: xhofe
password: ${{ secrets.DOCKERHUB_TOKEN }}
logout: true
username: ${{ env.REGISTRY_USERNAME }}
password: ${{ env.REGISTRY_PASSWORD }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}
tags: ${{ env.IMAGE_IS_PROD == 'true' && '' || env.IMAGE_TAGS_BETA }}
flavor: |
${{ env.IMAGE_IS_PROD == 'true' && 'latest=true' || '' }}
${{ matrix.tag_favor }}
- name: Build and push
id: docker_build
@@ -55,54 +130,8 @@
with:
context: .
file: Dockerfile.ci
push: true
push: ${{ env.IMAGE_PUSH == 'true' }}
build-args: ${{ matrix.build_arg }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
- name: Docker meta with ffmpeg
id: meta-ffmpeg
uses: docker/metadata-action@v5
with:
images: xhofe/alist
flavor: |
latest=true
suffix=-ffmpeg,onlatest=true
- name: Build and push with ffmpeg
id: docker_build_ffmpeg
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: true
tags: ${{ steps.meta-ffmpeg.outputs.tags }}
labels: ${{ steps.meta-ffmpeg.outputs.labels }}
build-args: INSTALL_FFMPEG=true
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
release_docker_with_aria2:
needs: release_docker
name: Release docker with aria2
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Add tag
run: |
git config --local user.email "bot@nn.ci"
git config --local user.name "IlaBot"
git tag -a ${{ github.ref_name }} -m "release ${{ github.ref_name }}"
- name: Push tags
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2
platforms: ${{ env.RELEASE_PLATFORMS }}

Dockerfile
@@ -10,6 +10,7 @@ RUN bash build.sh release docker
FROM alpine:edge
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
LABEL MAINTAINER="i@nn.ci"
WORKDIR /opt/alist/
@@ -18,13 +19,25 @@ RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY --from=builder /app/bin/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && /entrypoint.sh version
RUN chmod +x /opt/alist/alist && \
chmod +x /entrypoint.sh && /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]

Dockerfile.ci
@@ -2,6 +2,7 @@ FROM alpine:edge
ARG TARGETPLATFORM
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
LABEL MAINTAINER="i@nn.ci"
WORKDIR /opt/alist/
@@ -10,13 +11,25 @@ RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY /build/${TARGETPLATFORM}/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && /entrypoint.sh version
RUN chmod +x /opt/alist/alist && \
chmod +x /entrypoint.sh && /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]

README.md
@@ -39,7 +39,7 @@
---
English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md)
English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md)
## Features
@@ -77,6 +77,7 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
- [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode

build.sh
@@ -1,12 +1,14 @@
appName="alist"
builtAt="$(date +'%F %T %z')"
goVersion=$(go version | sed 's/go version //')
gitAuthor="Xhofe <i@nn.ci>"
gitCommit=$(git log --pretty=format:"%h" -1)
if [ "$1" = "dev" ]; then
version="dev"
webVersion="dev"
elif [ "$1" = "beta" ]; then
version="beta"
webVersion="dev"
else
git tag -d beta
version=$(git describe --abbrev=0 --tags)
@@ -19,7 +21,6 @@ echo "frontend version: $webVersion"
ldflags="\
-w -s \
-X 'github.com/alist-org/alist/v3/internal/conf.BuiltAt=$builtAt' \
-X 'github.com/alist-org/alist/v3/internal/conf.GoVersion=$goVersion' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitAuthor=$gitAuthor' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitCommit=$gitCommit' \
-X 'github.com/alist-org/alist/v3/internal/conf.Version=$version' \
@@ -301,8 +302,12 @@ if [ "$1" = "dev" ]; then
else
BuildDev
fi
elif [ "$1" = "release" ]; then
FetchWebRelease
elif [ "$1" = "release" -o "$1" = "beta" ]; then
if [ "$1" = "beta" ]; then
FetchWebDev
else
FetchWebRelease
fi
if [ "$2" = "docker" ]; then
BuildDocker
elif [ "$2" = "docker-multiplatform" ]; then

cmd/init.go
@@ -15,10 +15,11 @@ import (
func Init() {
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitHostKey()
bootstrap.InitDB()
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
}
func Release() {

cmd/kill.go (new file)
@@ -0,0 +1,54 @@
package cmd
import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"os"
)
// KillCmd represents the kill command
var KillCmd = &cobra.Command{
Use: "kill",
Short: "Force kill alist server process by daemon/pid file",
Run: func(cmd *cobra.Command, args []string) {
kill()
},
}
func kill() {
initDaemon()
if pid == -1 {
log.Info("Seems not have been started. Try use `alist start` to start server.")
return
}
process, err := os.FindProcess(pid)
if err != nil {
log.Errorf("failed to find process by pid: %d, reason: %v", pid, process)
return
}
err = process.Kill()
if err != nil {
log.Errorf("failed to kill process %d: %v", pid, err)
} else {
log.Info("killed process: ", pid)
}
err = os.Remove(pidFile)
if err != nil {
log.Errorf("failed to remove pid file")
}
pid = -1
}
func init() {
RootCmd.AddCommand(KillCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// stopCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// stopCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

cmd/lang.go
@@ -12,6 +12,7 @@ import (
"strings"
_ "github.com/alist-org/alist/v3/drivers"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/bootstrap/data"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/op"
@@ -137,6 +138,7 @@ var LangCmd = &cobra.Command{
Use: "lang",
Short: "Generate language json file",
Run: func(cmd *cobra.Command, args []string) {
bootstrap.InitConfig()
err := os.MkdirAll("lang", 0777)
if err != nil {
utils.Log.Fatalf("failed create folder: %s", err.Error())

cmd/root.go
@@ -6,6 +6,7 @@ import (
"github.com/alist-org/alist/v3/cmd/flags"
_ "github.com/alist-org/alist/v3/drivers"
_ "github.com/alist-org/alist/v3/internal/archive"
_ "github.com/alist-org/alist/v3/internal/offline_download"
"github.com/spf13/cobra"
)

cmd/server.go
@@ -6,6 +6,7 @@ import (
"fmt"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/internal/fs"
"net"
"net/http"
"os"
@@ -159,6 +160,7 @@ the address is defined in config file`,
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
utils.Log.Println("Shutdown server...")
fs.ArchiveContentUploadTaskManager.RemoveAll()
Release()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

cmd/stop.go
@@ -1,10 +1,10 @@
/*
Copyright © 2022 NAME HERE <EMAIL ADDRESS>
*/
//go:build !windows
package cmd
import (
"os"
"syscall"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -30,11 +30,11 @@ func stop() {
log.Errorf("failed to find process by pid: %d, reason: %v", pid, process)
return
}
err = process.Kill()
err = process.Signal(syscall.SIGTERM)
if err != nil {
log.Errorf("failed to kill process %d: %v", pid, err)
log.Errorf("failed to terminate process %d: %v", pid, err)
} else {
log.Info("killed process: ", pid)
log.Info("terminated process: ", pid)
}
err = os.Remove(pidFile)
if err != nil {

cmd/stop_windows.go (new file)
@@ -0,0 +1,34 @@
//go:build windows
package cmd
import (
"github.com/spf13/cobra"
)
// StopCmd represents the stop command
var StopCmd = &cobra.Command{
Use: "stop",
Short: "Same as the kill command",
Run: func(cmd *cobra.Command, args []string) {
stop()
},
}
func stop() {
kill()
}
func init() {
RootCmd.AddCommand(StopCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// stopCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// stopCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

cmd/version.go
@@ -6,6 +6,7 @@ package cmd
import (
"fmt"
"os"
"runtime"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/spf13/cobra"
@@ -16,14 +17,15 @@ var VersionCmd = &cobra.Command{
Use: "version",
Short: "Show current version of AList",
Run: func(cmd *cobra.Command, args []string) {
goVersion := fmt.Sprintf("%s %s/%s", runtime.Version(), runtime.GOOS, runtime.GOARCH)
fmt.Printf(`Built At: %s
Go Version: %s
Author: %s
Commit ID: %s
Version: %s
WebVersion: %s
`,
conf.BuiltAt, conf.GoVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
`, conf.BuiltAt, goVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
os.Exit(0)
},
}

drivers/115/driver.go
@@ -215,12 +215,12 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
var uploadResult *UploadResult
// rapid upload failed; fall back to a normal upload
if stream.GetSize() <= 10*utils.MB { // files of 10 MB or less use the simple upload mode
if uploadResult, err = d.UploadByOSS(&fastInfo.UploadOSSParams, stream, dirID); err != nil {
if uploadResult, err = d.UploadByOSS(ctx, &fastInfo.UploadOSSParams, stream, dirID, up); err != nil {
return nil, err
}
} else {
// multipart upload
if uploadResult, err = d.UploadByMultipart(&fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID); err != nil {
if uploadResult, err = d.UploadByMultipart(ctx, &fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID, up); err != nil {
return nil, err
}
}
@@ -241,7 +241,7 @@ func (d *Pan115) OfflineList(ctx context.Context) ([]*driver115.OfflineTask, err
}
func (d *Pan115) OfflineDownload(ctx context.Context, uris []string, dstDir model.Obj) ([]string, error) {
return d.client.AddOfflineTaskURIs(uris, dstDir.GetID())
return d.client.AddOfflineTaskURIs(uris, dstDir.GetID(), driver115.WithAppVer(appVer))
}
func (d *Pan115) DeleteOfflineTasks(ctx context.Context, hashes []string, deleteFiles bool) error {

drivers/115/meta.go
@@ -10,7 +10,7 @@ type Addition struct {
QRCodeToken string `json:"qrcode_token" type:"text" help:"one of QR code token and cookie required"`
QRCodeSource string `json:"qrcode_source" type:"select" options:"web,android,ios,tv,alipaymini,wechatmini,qandroid" default:"linux" help:"select the QR code device, default linux"`
PageSize int64 `json:"page_size" type:"number" default:"1000" help:"list api per page size of 115 driver"`
LimitRate float64 `json:"limit_rate" type:"number" default:"2" help:"limit all api request rate ([limit]r/1s)"`
LimitRate float64 `json:"limit_rate" type:"float" default:"2" help:"limit all api request rate ([limit]r/1s)"`
driver.RootID
}

drivers/115/util.go
@@ -2,6 +2,7 @@ package _115
import (
"bytes"
"context"
"crypto/md5"
"crypto/tls"
"encoding/hex"
@@ -13,17 +14,19 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
cipher "github.com/SheltonZhu/115driver/pkg/crypto/ec115"
crypto "github.com/SheltonZhu/115driver/pkg/crypto/m115"
driver115 "github.com/SheltonZhu/115driver/pkg/driver"
crypto "github.com/gaoyb7/115drive-webdav/115"
"github.com/orzogc/fake115uploader/cipher"
"github.com/pkg/errors"
)
@@ -63,7 +66,7 @@ func (d *Pan115) getFiles(fileId string) ([]FileObj, error) {
if d.PageSize <= 0 {
d.PageSize = driver115.FileListLimit
}
files, err := d.client.ListWithLimit(fileId, d.PageSize)
files, err := d.client.ListWithLimit(fileId, d.PageSize, driver115.WithMultiUrls())
if err != nil {
return nil, err
}
@@ -108,7 +111,7 @@ func (d *Pan115) getUA() string {
func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, error) {
key := crypto.GenerateKey()
result := driver115.DownloadResp{}
params, err := utils.Json.Marshal(map[string]string{"pickcode": pickCode})
params, err := utils.Json.Marshal(map[string]string{"pick_code": pickCode})
if err != nil {
return nil, err
}
@@ -116,7 +119,7 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
data := crypto.Encode(params, key)
bodyReader := strings.NewReader(url.Values{"data": []string{data}}.Encode())
reqUrl := fmt.Sprintf("%s?t=%s", driver115.ApiDownloadGetUrl, driver115.Now().String())
reqUrl := fmt.Sprintf("%s?t=%s", driver115.AndroidApiDownloadGetUrl, driver115.Now().String())
req, _ := http.NewRequest(http.MethodPost, reqUrl, bodyReader)
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Cookie", d.Cookie)
@@ -140,24 +143,23 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
return nil, err
}
bytes, err := crypto.Decode(string(result.EncodedData), key)
b, err := crypto.Decode(string(result.EncodedData), key)
if err != nil {
return nil, err
}
downloadInfo := driver115.DownloadData{}
if err := utils.Json.Unmarshal(bytes, &downloadInfo); err != nil {
downloadInfo := struct {
Url string `json:"url"`
}{}
if err := utils.Json.Unmarshal(b, &downloadInfo); err != nil {
return nil, err
}
for _, info := range downloadInfo {
if info.FileSize < 0 {
return nil, driver115.ErrDownloadEmpty
}
info.Header = resp.Request.Header
return info, nil
}
return nil, driver115.ErrUnexpected
info := &driver115.DownloadInfo{}
info.PickCode = pickCode
info.Header = resp.Request.Header
info.Url.Url = downloadInfo.Url
return info, nil
}
func (c *Pan115) GenerateToken(fileID, preID, timeStamp, fileSize, signKey, signVal string) string {
@@ -272,7 +274,7 @@ func UploadDigestRange(stream model.FileStreamer, rangeSpec string) (result stri
}
// UploadByOSS use aliyun sdk to upload
func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dirID string) (*UploadResult, error) {
func (c *Pan115) UploadByOSS(ctx context.Context, params *driver115.UploadOSSParams, s model.FileStreamer, dirID string, up driver.UpdateProgress) (*UploadResult, error) {
ossToken, err := c.client.GetOSSToken()
if err != nil {
return nil, err
@@ -287,6 +289,10 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
var bodyBytes []byte
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
if err = bucket.PutObject(params.Object, r, append(
driver115.OssOption(params, ossToken),
oss.CallbackResult(&bodyBytes),
@@ -302,7 +308,8 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
// UploadByMultipart uploads in multipart blocks
func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize int64, stream model.FileStreamer, dirID string, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.UploadOSSParams, fileSize int64, s model.FileStreamer,
dirID string, up driver.UpdateProgress, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
var (
chunks []oss.FileChunk
parts []oss.UploadPart
@@ -314,7 +321,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
err error
)
tmpF, err := stream.CacheFullInTempFile()
tmpF, err := s.CacheFullInTempFile()
if err != nil {
return nil, err
}
@@ -373,6 +380,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
quit <- struct{}{}
}()
completedNum := atomic.Int32{}
// consumers
for i := 0; i < options.ThreadsNum; i++ {
go func(threadId int) {
@@ -385,24 +393,28 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
var part oss.UploadPart // keep retrying on error, up to 3 attempts in total
for retry := 0; retry < 3; retry++ {
select {
case <-ctx.Done():
break
case <-ticker.C:
if ossToken, err = d.client.GetOSSToken(); err != nil { // refresh the ossToken when the ticker fires
errCh <- errors.Wrap(err, "刷新token时出现错误")
}
default:
}
buf := make([]byte, chunk.Size)
if _, err = tmpF.ReadAt(buf, chunk.Offset); err != nil && !errors.Is(err, io.EOF) {
continue
}
if part, err = bucket.UploadPart(imur, bytes.NewBuffer(buf), chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
if part, err = bucket.UploadPart(imur, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(buf)),
chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
break
}
}
if err != nil {
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", stream.GetName(), chunk.Number, err))
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", s.GetName(), chunk.Number, err))
} else {
num := completedNum.Add(1)
up(float64(num) * 100.0 / float64(len(chunks)))
}
UploadedPartsCh <- part
}

drivers/115_open/driver.go (new file)
@@ -0,0 +1,299 @@
package _115_open
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
)
type Open115 struct {
model.Storage
Addition
client *sdk.Client
}
func (d *Open115) Config() driver.Config {
return config
}
func (d *Open115) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open115) Init(ctx context.Context) error {
d.client = sdk.New(sdk.WithRefreshToken(d.Addition.RefreshToken),
sdk.WithAccessToken(d.Addition.AccessToken),
sdk.WithOnRefreshToken(func(s1, s2 string) {
d.Addition.AccessToken = s1
d.Addition.RefreshToken = s2
op.MustSaveDriverStorage(d)
}))
if flags.Debug || flags.Dev {
d.client.SetDebug(true)
}
_, err := d.client.UserInfo(ctx)
if err != nil {
return err
}
return nil
}
func (d *Open115) Drop(ctx context.Context) error {
return nil
}
func (d *Open115) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var res []model.Obj
pageSize := int64(200)
offset := int64(0)
for {
resp, err := d.client.GetFiles(ctx, &sdk.GetFilesReq{
CID: dir.GetID(),
Limit: pageSize,
Offset: offset,
ASC: d.Addition.OrderDirection == "asc",
O: d.Addition.OrderBy,
// Cur: 1,
ShowDir: true,
})
if err != nil {
return nil, err
}
res = append(res, utils.MustSliceConvert(resp.Data, func(src sdk.GetFilesResp_File) model.Obj {
obj := Obj(src)
return &obj
})...)
if len(res) >= int(resp.Count) {
break
}
offset += pageSize
}
return res, nil
}
func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var ua string
if args.Header != nil {
ua = args.Header.Get("User-Agent")
}
if ua == "" {
ua = base.UserAgent
}
obj, ok := file.(*Obj)
if !ok {
return nil, fmt.Errorf("can't convert obj")
}
pc := obj.Pc
resp, err := d.client.DownURL(ctx, pc, ua)
if err != nil {
return nil, err
}
u, ok := resp[obj.GetID()]
if !ok {
return nil, fmt.Errorf("can't get link")
}
return &model.Link{
URL: u.URL.URL,
Header: http.Header{
"User-Agent": []string{ua},
},
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
resp, err := d.client.Mkdir(ctx, parentDir.GetID(), dirName)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Pid: parentDir.GetID(),
Fn: dirName,
Fc: "0",
Upt: time.Now().Unix(),
Uet: time.Now().Unix(),
UpPt: time.Now().Unix(),
}, nil
}
func (d *Open115) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
_, err := d.client.Move(ctx, &sdk.MoveReq{
FileIDs: srcObj.GetID(),
ToCid: dstDir.GetID(),
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
_, err := d.client.UpdateFile(ctx, &sdk.UpdateFileReq{
FileID: srcObj.GetID(),
FileNma: newName,
})
if err != nil {
return nil, err
}
obj, ok := srcObj.(*Obj)
if ok {
obj.Fn = newName
}
return srcObj, nil
}
func (d *Open115) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
_, err := d.client.Copy(ctx, &sdk.CopyReq{
PID: dstDir.GetID(),
FileID: srcObj.GetID(),
NoDupli: "1",
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Remove(ctx context.Context, obj model.Obj) error {
_obj, ok := obj.(*Obj)
if !ok {
return fmt.Errorf("can't convert obj")
}
_, err := d.client.DelFile(ctx, &sdk.DelFileReq{
FileIDs: _obj.GetID(),
ParentID: _obj.Pid,
})
if err != nil {
return err
}
return nil
}
func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
tempF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// cal full sha1
sha1, err := utils.HashReader(utils.SHA1, tempF)
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// pre 128k sha1
sha1128k, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, 128*1024))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// 1. Init
resp, err := d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
// 2. two way verify
if utils.SliceContains([]int{6, 7, 8}, resp.Status) {
signCheck := strings.Split(resp.SignCheck, "-") // "sign_check": "2392148-2392298" means: sha1 of the bytes from offset 2392148 through 2392298, inclusive
start, err := strconv.ParseInt(signCheck[0], 10, 64)
if err != nil {
return err
}
end, err := strconv.ParseInt(signCheck[1], 10, 64)
if err != nil {
return err
}
_, err = tempF.Seek(start, io.SeekStart)
if err != nil {
return err
}
signVal, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, end-start+1))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
resp, err = d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
SignKey: resp.SignKey,
SignVal: strings.ToUpper(signVal),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
}
// 3. get upload token
tokenResp, err := d.client.UploadGetToken(ctx)
if err != nil {
return err
}
// 4. upload
err = d.multpartUpload(ctx, tempF, file, up, tokenResp, resp)
if err != nil {
return err
}
return nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// // TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// // TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// // TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// // a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// // return errs.NotImplement to use an internal archive tool
// return nil, errs.NotImplement
// }
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open115)(nil)

drivers/115_open/meta.go (new file)
@@ -0,0 +1,36 @@
package _115_open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootID
// define other
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
AccessToken string
}
var config = driver.Config{
Name: "115 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open115{}
})
}

drivers/115_open/types.go (new file)
@@ -0,0 +1,59 @@
package _115_open
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
)
type Obj sdk.GetFilesResp_File
// Thumb implements model.Thumb.
func (o *Obj) Thumb() string {
return o.Thumbnail
}
// CreateTime implements model.Obj.
func (o *Obj) CreateTime() time.Time {
return time.Unix(o.UpPt, 0)
}
// GetHash implements model.Obj.
func (o *Obj) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.SHA1, o.Sha1)
}
// GetID implements model.Obj.
func (o *Obj) GetID() string {
return o.Fid
}
// GetName implements model.Obj.
func (o *Obj) GetName() string {
return o.Fn
}
// GetPath implements model.Obj.
func (o *Obj) GetPath() string {
return ""
}
// GetSize implements model.Obj.
func (o *Obj) GetSize() int64 {
return o.FS
}
// IsDir implements model.Obj.
func (o *Obj) IsDir() bool {
return o.Fc == "0"
}
// ModTime implements model.Obj.
func (o *Obj) ModTime() time.Time {
return time.Unix(o.Upt, 0)
}
var _ model.Obj = (*Obj)(nil)
var _ model.Thumb = (*Obj)(nil)
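The trailing `var _` lines are Go's compile-time interface assertions: assigning a typed nil pointer to a blank interface-typed variable makes the build fail as soon as *Obj stops satisfying model.Obj or model.Thumb. A minimal standalone illustration of the idiom (all names here are invented for the sketch):

package main

import "fmt"

type Sizer interface {
    GetSize() int64
}

type file struct{ size int64 }

func (f *file) GetSize() int64 { return f.size }

// compile-time assertion: the build breaks here if *file ever loses GetSize
var _ Sizer = (*file)(nil)

func main() {
    fmt.Println((&file{size: 42}).GetSize())
}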

drivers/115_open/upload.go Normal file
@ -0,0 +1,140 @@
package _115_open
import (
"context"
"encoding/base64"
"io"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go"
sdk "github.com/xhofe/115-sdk-go"
)
func calPartSize(fileSize int64) int64 {
var partSize int64 = 20 * utils.MB
if fileSize > partSize {
if fileSize > 1*utils.TB { // file size over 1 TB
partSize = 5 * utils.GB // part size 5 GB
} else if fileSize > 768*utils.GB { // over 768GB
partSize = 109951163 // ≈ 104.8576 MB, splits 1 TB into 10,000 parts
} else if fileSize > 512*utils.GB { // over 512GB
partSize = 82463373 // ≈ 78.6432MB
} else if fileSize > 384*utils.GB { // over 384GB
partSize = 54975582 // ≈ 52.4288MB
} else if fileSize > 256*utils.GB { // over 256GB
partSize = 41231687 // ≈ 39.3216MB
} else if fileSize > 128*utils.GB { // over 128GB
partSize = 27487791 // ≈ 26.2144MB
}
}
return partSize
}
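The tiers in calPartSize track OSS's 10,000-parts-per-upload ceiling: each threshold is roughly the smallest part size that still fits a file at the top of its tier into 10,000 parts (109951163 bytes × 10,000 ≈ 1 TB, and so on down the ladder). A quick standalone check of the 1 TB boundary (a sketch, not part of the driver):

package main

import "fmt"

func main() {
    const TB = int64(1) << 40
    const partSize = int64(109951163) // tier for files between 768 GB and 1 TB
    parts := (TB + partSize - 1) / partSize
    fmt.Println(parts) // 10000 — exactly the OSS multipart limit
}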
func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
err = bucket.PutObject(initResp.Object, tempF,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
)
return err
}
// type CallbackResult struct {
// State bool `json:"state"`
// Code int `json:"code"`
// Message string `json:"message"`
// Data struct {
// PickCode string `json:"pick_code"`
// FileName string `json:"file_name"`
// FileSize int64 `json:"file_size"`
// FileID string `json:"file_id"`
// ThumbURL string `json:"thumb_url"`
// Sha1 string `json:"sha1"`
// Aid int `json:"aid"`
// Cid string `json:"cid"`
// } `json:"data"`
// }
func (d *Open115) multipartUpload(ctx context.Context, tempF model.File, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
imur, err := bucket.InitiateMultipartUpload(initResp.Object, oss.Sequential())
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum)
offset := int64(0)
for i := int64(1); i <= partNum; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
partSize := chunkSize
if i == partNum {
partSize = fileSize - (i-1)*chunkSize
}
rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
err = retry.Do(func() error {
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil {
return err
}
parts[i-1] = part
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second))
if err != nil {
return err
}
if i == partNum {
offset = fileSize
} else {
offset += partSize
}
up(float64(offset) / float64(fileSize))
}
// callbackRespBytes := make([]byte, 1024)
_, err = bucket.CompleteMultipartUpload(
imur,
parts,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
// oss.CallbackResult(&callbackRespBytes),
)
if err != nil {
return err
}
return nil
}
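Each part is wrapped in a resettable reader so a failed attempt can replay exactly the same bytes: retry-go re-runs the closure, and rd.Reset() rewinds the reader before the next try. The same replay-on-retry shape using only the standard library (an illustrative sketch; the driver itself relies on utils.NewMultiReadable and retry-go):

package main

import (
    "bytes"
    "errors"
    "fmt"
    "io"
)

func uploadWithRetry(data []byte, attempts int, do func(io.Reader) error) error {
    rd := bytes.NewReader(data)
    var err error
    for i := 0; i < attempts; i++ {
        // rewind before every attempt so the server sees the whole part again
        if _, err = rd.Seek(0, io.SeekStart); err != nil {
            return err
        }
        if err = do(rd); err == nil {
            return nil
        }
    }
    return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
    calls := 0
    err := uploadWithRetry([]byte("part-1"), 3, func(r io.Reader) error {
        calls++
        if calls < 2 {
            return errors.New("transient network error")
        }
        b, _ := io.ReadAll(r)
        fmt.Printf("uploaded %d bytes on attempt %d\n", len(b), calls)
        return nil
    })
    if err != nil {
        fmt.Println(err)
    }
}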

drivers/115_open/util.go Normal file
@ -0,0 +1,3 @@
package _115_open
// do other things not defined in the Driver interface

@ -10,7 +10,7 @@ type Addition struct {
QRCodeToken string `json:"qrcode_token" type:"text" help:"one of QR code token and cookie required"`
QRCodeSource string `json:"qrcode_source" type:"select" options:"web,android,ios,tv,alipaymini,wechatmini,qandroid" default:"linux" help:"select the QR code device, default linux"`
PageSize int64 `json:"page_size" type:"number" default:"1000" help:"list api per page size of 115 driver"`
LimitRate float64 `json:"limit_rate" type:"number" default:"2" help:"limit all api request rate (1r/[limit_rate]s)"`
LimitRate float64 `json:"limit_rate" type:"float" default:"2" help:"limit all api request rate (1r/[limit_rate]s)"`
ShareCode string `json:"share_code" type:"text" required:"true" help:"share code of 115 share link"`
ReceiveCode string `json:"receive_code" type:"text" required:"true" help:"receive code of 115 share link"`
driver.RootID
@ -18,7 +18,7 @@ type Addition struct {
var config = driver.Config{
Name: "115 Share",
DefaultRoot: "",
DefaultRoot: "0",
// OnlyProxy: true,
// OnlyLocal: true,
CheckStatus: false,

@ -6,13 +6,14 @@ import (
"encoding/base64"
"encoding/hex"
"fmt"
"golang.org/x/time/rate"
"io"
"net/http"
"net/url"
"sync"
"time"
"golang.org/x/time/rate"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@ -41,12 +42,12 @@ func (d *Pan123) GetAddition() driver.Additional {
}
func (d *Pan123) Init(ctx context.Context) error {
_, err := d.request(UserInfo, http.MethodGet, nil, nil)
_, err := d.Request(UserInfo, http.MethodGet, nil, nil)
return err
}
func (d *Pan123) Drop(ctx context.Context) error {
_, _ = d.request(Logout, http.MethodPost, func(req *resty.Request) {
_, _ = d.Request(Logout, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, nil)
return nil
@ -81,8 +82,8 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
"size": f.Size,
"type": f.Type,
}
resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetHeaders(headers)
}, nil)
if err != nil {
@ -135,7 +136,7 @@ func (d *Pan123) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"size": 0,
"type": 1,
}
_, err := d.request(Mkdir, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@ -146,7 +147,7 @@ func (d *Pan123) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIdList": []base.Json{{"FileId": srcObj.GetID()}},
"parentFileId": dstDir.GetID(),
}
_, err := d.request(Move, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Move, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@ -158,7 +159,7 @@ func (d *Pan123) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"fileId": srcObj.GetID(),
"fileName": newName,
}
_, err := d.request(Rename, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Rename, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@ -175,7 +176,7 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
"operation": true,
"fileTrashInfoList": []File{f},
}
_, err := d.request(Trash, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Trash, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@ -184,36 +185,39 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
}
}
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// const DEFAULT int64 = 10485760
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return err
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
// const DEFAULT int64 = 10485760
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
etag = hex.EncodeToString(h.Sum(nil))
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
etag := hex.EncodeToString(h.Sum(nil))
data := base.Json{
"driveId": 0,
"duplicate": 2, // 2->覆盖 1->重命名 0->默认
"etag": etag,
"fileName": stream.GetName(),
"fileName": file.GetName(),
"parentFileId": dstDir.GetID(),
"size": stream.GetSize(),
"size": file.GetSize(),
"type": 0,
}
var resp UploadResp
res, err := d.request(UploadRequest, http.MethodPost, func(req *resty.Request) {
res, err := d.Request(UploadRequest, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &resp)
if err != nil {
@ -224,7 +228,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return nil
}
if resp.Data.AccessKeyId == "" || resp.Data.SecretAccessKey == "" || resp.Data.SessionToken == "" {
err = d.newUpload(ctx, &resp, stream, tempFile, up)
err = d.newUpload(ctx, &resp, file, up)
return err
} else {
cfg := &aws.Config{
@ -238,17 +242,23 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return err
}
uploader := s3manager.NewUploader(s)
if stream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = stream.GetSize() / (s3manager.MaxUploadParts - 1)
if file.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = file.GetSize() / (s3manager.MaxUploadParts - 1)
}
input := &s3manager.UploadInput{
Bucket: &resp.Data.Bucket,
Key: &resp.Data.Key,
Body: tempFile,
Body: driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: file,
UpdateProgress: up,
}),
}
_, err = uploader.UploadWithContext(ctx, input)
if err != nil {
return err
}
}
_, err = d.request(UploadComplete, http.MethodPost, func(req *resty.Request) {
_, err = d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"fileId": resp.Data.FileId,
}).SetContext(ctx)

@ -25,7 +25,7 @@ func (d *Pan123) getS3PreSignedUrls(ctx context.Context, upReq *UploadResp, star
"StorageNode": upReq.Data.StorageNode,
}
var s3PreSignedUrls S3PreSignedURLs
_, err := d.request(S3PreSignedUrls, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(S3PreSignedUrls, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &s3PreSignedUrls)
if err != nil {
@ -44,7 +44,7 @@ func (d *Pan123) getS3Auth(ctx context.Context, upReq *UploadResp, start, end in
"uploadId": upReq.Data.UploadId,
}
var s3PreSignedUrls S3PreSignedURLs
_, err := d.request(S3Auth, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(S3Auth, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &s3PreSignedUrls)
if err != nil {
@ -63,13 +63,13 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
"key": upReq.Data.Key,
"uploadId": upReq.Data.UploadId,
}
_, err := d.request(UploadCompleteV2, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(UploadCompleteV2, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, nil)
return err
}
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, reader io.Reader, up driver.UpdateProgress) error {
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
chunkSize := int64(1024 * 1024 * 16)
// fetch s3 pre signed urls
chunkCount := int(math.Ceil(float64(file.GetSize()) / float64(chunkSize)))
@ -81,6 +81,7 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls
}
limited := driver.NewLimitedUploadStream(ctx, file)
for i := 1; i <= chunkCount; i += batchSize {
if utils.IsCanceled(ctx) {
return ctx.Err()
@ -103,7 +104,7 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
if j == chunkCount {
curSize = file.GetSize() - (int64(chunkCount)-1)*chunkSize
}
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.LimitReader(reader, chunkSize), curSize, false, getS3UploadUrl)
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.LimitReader(limited, chunkSize), curSize, false, getS3UploadUrl)
if err != nil {
return err
}

@ -194,7 +194,9 @@ func (d *Pan123) login() error {
// return &authKey, nil
//}
func (d *Pan123) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
func (d *Pan123) Request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
isRetry := false
do:
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
@ -223,12 +225,13 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
body := res.Body()
code := utils.Json.Get(body, "code").ToInt()
if code != 0 {
if code == 401 {
if !isRetry && code == 401 {
err := d.login()
if err != nil {
return nil, err
}
return d.request(url, method, callback, resp)
isRetry = true
goto do
}
return nil, errors.New(jsoniter.Get(body, "message").ToString())
}
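The 401 handling is reworked from tail recursion (request re-invoking itself after login) into an isRetry flag plus a goto back to the do: label, which caps the retry at one even if re-login keeps succeeding while the API keeps rejecting. The control flow in isolation (sketch):

package main

import (
    "errors"
    "fmt"
)

func request(call func() (int, error), relogin func() error) error {
    isRetry := false
do:
    code, err := call()
    if err != nil {
        return err
    }
    if code == 401 {
        if !isRetry {
            if err := relogin(); err != nil {
                return err
            }
            isRetry = true
            goto do // retried at most once, unlike unbounded recursion
        }
        return errors.New("still unauthorized after relogin")
    }
    fmt.Println("ok:", code)
    return nil
}

func main() {
    codes := []int{401, 200}
    i := 0
    _ = request(
        func() (int, error) { c := codes[i]; i++; return c, nil },
        func() error { fmt.Println("relogin"); return nil },
    )
}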
@ -260,7 +263,7 @@ func (d *Pan123) getFiles(ctx context.Context, parentId string, name string) ([]
"operateType": "4",
"inDirectSpace": "false",
}
_res, err := d.request(FileList, http.MethodGet, func(req *resty.Request) {
_res, err := d.Request(FileList, http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(query)
}, &resp)
if err != nil {

@ -4,12 +4,14 @@ import (
"context"
"encoding/base64"
"fmt"
"golang.org/x/time/rate"
"net/http"
"net/url"
"sync"
"time"
"golang.org/x/time/rate"
_123 "github.com/alist-org/alist/v3/drivers/123"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@ -23,6 +25,7 @@ type Pan123Share struct {
model.Storage
Addition
apiRateLimit sync.Map
ref *_123.Pan123
}
func (d *Pan123Share) Config() driver.Config {
@ -39,7 +42,17 @@ func (d *Pan123Share) Init(ctx context.Context) error {
return nil
}
func (d *Pan123Share) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*_123.Pan123)
if ok {
d.ref = refStorage
return nil
}
return fmt.Errorf("ref: storage is not 123Pan")
}
func (d *Pan123Share) Drop(ctx context.Context) error {
d.ref = nil
return nil
}

@ -53,6 +53,9 @@ func GetApi(rawUrl string) string {
}
func (d *Pan123Share) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
if d.ref != nil {
return d.ref.Request(url, method, callback, resp)
}
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"origin": "https://www.123pan.com",

@ -3,6 +3,7 @@ package _139
import (
"context"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
@ -26,6 +27,7 @@ type Yun139 struct {
Addition
cron *cron.Cron
Account string
ref *Yun139
}
func (d *Yun139) Config() driver.Config {
@ -37,61 +39,77 @@ func (d *Yun139) GetAddition() driver.Additional {
}
func (d *Yun139) Init(ctx context.Context) error {
if d.Authorization == "" {
return fmt.Errorf("authorization is empty")
}
d.cron = cron.NewCron(time.Hour * 24 * 7)
d.cron.Do(func() {
if d.ref == nil {
if d.Authorization == "" {
return fmt.Errorf("authorization is empty")
}
err := d.refreshToken()
if err != nil {
log.Errorf("%+v", err)
return err
}
})
d.cron = cron.NewCron(time.Hour * 12)
d.cron.Do(func() {
err := d.refreshToken()
if err != nil {
log.Errorf("%+v", err)
}
})
}
switch d.Addition.Type {
case MetaPersonalNew:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = "/"
}
return nil
case MetaPersonal:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = "root"
}
fallthrough
case MetaGroup:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = d.CloudID
}
fallthrough
case MetaFamily:
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 2 {
return fmt.Errorf("authorization is invalid, splits < 2")
}
d.Account = splits[1]
_, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
"qryUserExternInfoReq": base.Json{
"commonAccountInfo": base.Json{
"account": d.Account,
"accountType": 1,
},
},
}, nil)
return err
default:
return errs.NotImplement
}
if d.ref != nil {
return nil
}
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 2 {
return fmt.Errorf("authorization is invalid, splits < 2")
}
d.Account = splits[1]
_, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
"qryUserExternInfoReq": base.Json{
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
},
}, nil)
return err
}
func (d *Yun139) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Yun139)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *Yun139) Drop(ctx context.Context) error {
if d.cron != nil {
d.cron.Stop()
}
d.ref = nil
return nil
}
@ -150,7 +168,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"parentCatalogID": parentDir.GetID(),
"newCatalogName": dirName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@ -161,7 +179,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
data := base.Json{
"cloudID": d.CloudID,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"docLibName": dirName,
@ -173,7 +191,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
data := base.Json{
"catalogName": dirName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"groupID": d.CloudID,
@ -219,7 +237,7 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"contentList": contentList,
"catalogList": catalogList,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -247,7 +265,7 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"newCatalogID": dstDir.GetID(),
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@ -282,7 +300,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"catalogID": srcObj.GetID(),
"catalogName": newName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -292,7 +310,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"contentID": srcObj.GetID(),
"contentName": newName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -309,7 +327,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"modifyCatalogName": newName,
"path": srcObj.GetPath(),
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -321,7 +339,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"contentName": newName,
"path": srcObj.GetPath(),
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -338,7 +356,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
// "catalogID": srcObj.GetID(),
// "catalogName": newName,
// "commonAccountInfo": base.Json{
// "account": d.Account,
// "account": d.getAccount(),
// "accountType": 1,
// },
// "path": srcObj.GetPath(),
@ -350,7 +368,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"contentID": srcObj.GetID(),
"contentName": newName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"path": srcObj.GetPath(),
@ -393,7 +411,7 @@ func (d *Yun139) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
"newCatalogID": dstDir.GetID(),
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@ -430,7 +448,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
"contentList": contentList,
"catalogList": catalogList,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -457,7 +475,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
"catalogInfoList": catalogInfoList,
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@ -468,7 +486,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
"catalogList": catalogInfoList,
"contentList": contentInfoList,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"sourceCloudID": d.CloudID,
@ -552,7 +570,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
firstPartInfos = firstPartInfos[:100]
}
// fetch the upload info and the upload URLs of the first 100 parts
// create the task, fetching the upload info and the upload URLs of the first 100 parts
data := base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
@ -572,101 +590,177 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return err
}
if resp.Data.Exist || resp.Data.RapidUpload {
// check whether the file already exists
// resp.Data.Exist: true means a file with the same name and checksum already exists; the cloud will not add a duplicate, so no manual conflict handling is needed
if resp.Data.Exist {
return nil
}
uploadPartInfos := resp.Data.PartInfos
// check whether the file qualifies for rapid upload
// resp.Data.RapidUpload: true means rapid upload is supported, but here we simply check whether part upload URLs were returned
// in the rapid-upload case conflicts still have to be handled manually
if resp.Data.PartInfos != nil {
// read the upload URLs of the first 100 parts
uploadPartInfos := resp.Data.PartInfos
// fetch the upload URLs of the remaining parts
for i := 101; i < len(partInfos); i += 100 {
end := i + 100
if end > len(partInfos) {
end = len(partInfos)
}
batchPartInfos := partInfos[i:end]
// fetch the upload URLs of the remaining parts
for i := 101; i < len(partInfos); i += 100 {
end := i + 100
if end > len(partInfos) {
end = len(partInfos)
}
batchPartInfos := partInfos[i:end]
moredata := base.Json{
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
"partInfos": batchPartInfos,
"commonAccountInfo": base.Json{
"account": d.Account,
"accountType": 1,
},
moredata := base.Json{
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
"partInfos": batchPartInfos,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname := "/hcy/file/getUploadUrl"
var moreresp PersonalUploadUrlResp
_, err = d.personalPost(pathname, moredata, &moreresp)
if err != nil {
return err
}
uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
}
pathname := "/hcy/file/getUploadUrl"
var moreresp PersonalUploadUrlResp
_, err = d.personalPost(pathname, moredata, &moreresp)
// Progress
p := driver.NewProgress(stream.GetSize(), up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// upload all parts
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(rateLimited, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
}
data = base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/hcy/file/complete", data, nil)
if err != nil {
return err
}
uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
}
// Progress
p := driver.NewProgress(stream.GetSize(), up)
// upload all parts
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(stream, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
// handle name conflicts
if resp.Data.FileName != stream.GetName() {
log.Debugf("[139] conflict detected: %s != %s", resp.Data.FileName, stream.GetName())
// give the server a moment to settle, otherwise the refreshed file list may be stale
time.Sleep(time.Millisecond * 500)
// refresh and fetch the file list
files, err := d.List(ctx, dstDir, model.ListArgs{Refresh: true})
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
// remove the old file
for _, file := range files {
if file.GetName() == stream.GetName() {
log.Debugf("[139] conflict: removing old: %s", file.GetName())
// rename the old file before deleting it, so the name no longer conflicts
err = d.Rename(ctx, file, stream.GetName()+random.String(4))
if err != nil {
return err
}
err = d.Remove(ctx, file)
if err != nil {
return err
}
break
}
}
_ = res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
// rename the new file
for _, file := range files {
if file.GetName() == resp.Data.FileName {
log.Debugf("[139] conflict: renaming new: %s => %s", file.GetName(), stream.GetName())
err = d.Rename(ctx, file, stream.GetName())
if err != nil {
return err
}
break
}
}
}
data = base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/hcy/file/complete", data, nil)
if err != nil {
return err
}
return nil
case MetaPersonal:
fallthrough
case MetaFamily:
// handle name conflicts
// fetch the file list
files, err := d.List(ctx, dstDir, model.ListArgs{})
if err != nil {
return err
}
// remove the old file
for _, file := range files {
if file.GetName() == stream.GetName() {
log.Debugf("[139] conflict: removing old: %s", file.GetName())
// rename the old file before deleting it, so the name no longer conflicts
err = d.Rename(ctx, file, stream.GetName()+random.String(4))
if err != nil {
return err
}
err = d.Remove(ctx, file)
if err != nil {
return err
}
break
}
}
var reportSize int64
if d.ReportRealSize {
reportSize = stream.GetSize()
} else {
reportSize = 0
}
data := base.Json{
"manualRename": 2,
"operation": 0,
"fileCount": 1,
"totalSize": 0, // 去除上传大小限制
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0, // 去除上传大小限制
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
"parentCatalogID": dstDir.GetID(),
"newCatalogName": "",
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -678,20 +772,23 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"operation": 0,
"path": path.Join(dstDir.GetPath(), dstDir.GetID()),
"seqNo": random.String(32), //序列号不能为空
"totalSize": 0,
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0,
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
})
pathname = "/orchestration/familyCloud-rebuild/content/v1.0/getFileUploadURL"
}
var resp UploadResp
_, err := d.post(pathname, data, &resp)
_, err = d.post(pathname, data, &resp)
if err != nil {
return err
}
if resp.Data.Result.ResultCode != "0" {
return fmt.Errorf("get file upload url failed with result code: %s, message: %s", resp.Data.Result.ResultCode, resp.Data.Result.ResultDesc)
}
// Progress
p := driver.NewProgress(stream.GetSize(), up)
@ -701,6 +798,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if part == 0 {
part = 1
}
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
@ -712,7 +810,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
byteSize = partSize
}
limitReader := io.LimitReader(stream, byteSize)
limitReader := io.LimitReader(rateLimited, byteSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
@ -732,13 +830,23 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
res.Body.Close()
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
bodyBytes, err := io.ReadAll(res.Body)
if err != nil {
return fmt.Errorf("error reading response body: %v", err)
}
var result InterLayerUploadResult
err = xml.Unmarshal(bodyBytes, &result)
if err != nil {
return fmt.Errorf("error parsing XML: %v", err)
}
if result.ResultCode != 0 {
return fmt.Errorf("upload failed with result code: %d, message: %s", result.ResultCode, result.Msg)
}
}
return nil
default:
return errs.NotImplement

@ -9,9 +9,10 @@ type Addition struct {
//Account string `json:"account" required:"true"`
Authorization string `json:"authorization" type:"text" required:"true"`
driver.RootID
Type string `json:"type" type:"select" options:"personal,family,personal_new" default:"personal"`
Type string `json:"type" type:"select" options:"personal_new,family,group,personal" default:"personal_new"`
CloudID string `json:"cloud_id"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
ReportRealSize bool `json:"report_real_size" type:"bool" default:"true" help:"Enable to report the real file size during upload"`
}
var config = driver.Config{

@ -143,6 +143,13 @@ type UploadResp struct {
} `json:"data"`
}
type InterLayerUploadResult struct {
XMLName xml.Name `xml:"result"`
Text string `xml:",chardata"`
ResultCode int `xml:"resultCode"`
Msg string `xml:"msg"`
}
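InterLayerUploadResult mirrors the small XML document the upload endpoint returns after each part, and a non-zero resultCode is now surfaced as an error instead of being ignored. Decoding works as below (the sample payload is a guess at the format implied by the struct tags):

package main

import (
    "encoding/xml"
    "fmt"
)

type InterLayerUploadResult struct {
    XMLName    xml.Name `xml:"result"`
    Text       string   `xml:",chardata"`
    ResultCode int      `xml:"resultCode"`
    Msg        string   `xml:"msg"`
}

func main() {
    body := []byte(`<result><resultCode>0</resultCode><msg>success</msg></result>`)
    var r InterLayerUploadResult
    if err := xml.Unmarshal(body, &r); err != nil {
        panic(err)
    }
    fmt.Println(r.ResultCode, r.Msg) // 0 success
}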
type CloudContent struct {
ContentID string `json:"contentID"`
//Modifier string `json:"modifier"`
@ -261,6 +268,7 @@ type PersonalUploadResp struct {
BaseResp
Data struct {
FileId string `json:"fileId"`
FileName string `json:"fileName"`
PartInfos []PersonalPartInfo `json:"partInfos"`
Exist bool `json:"exist"`
RapidUpload bool `json:"rapidUpload"`

@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"net/url"
"path"
"sort"
"strconv"
"strings"
@ -54,14 +55,37 @@ func getTime(t string) time.Time {
}
func (d *Yun139) refreshToken() error {
url := "https://aas.caiyun.feixin.10086.cn:443/tellin/authTokenRefresh.do"
var resp RefreshTokenResp
if d.ref != nil {
return d.ref.refreshToken()
}
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
return fmt.Errorf("authorization decode failed: %s", err)
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 3 {
return fmt.Errorf("authorization is invalid, splits < 3")
}
strs := strings.Split(splits[2], "|")
if len(strs) < 4 {
return fmt.Errorf("authorization is invalid, strs < 4")
}
expiration, err := strconv.ParseInt(strs[3], 10, 64)
if err != nil {
return fmt.Errorf("authorization is invalid")
}
expiration -= time.Now().UnixMilli()
if expiration > 1000*60*60*24*15 {
// no refresh needed while the authorization is valid for more than 15 days
return nil
}
if expiration < 0 {
return fmt.Errorf("authorization has expired")
}
url := "https://aas.caiyun.feixin.10086.cn:443/tellin/authTokenRefresh.do"
var resp RefreshTokenResp
reqBody := "<root><token>" + splits[2] + "</token><account>" + splits[1] + "</account><clienttype>656</clienttype></root>"
_, err = base.RestyClient.R().
ForceContentType("application/xml").
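The rewritten refreshToken inspects the Authorization value itself before touching the network: judging from the parsing above, it is base64 of a colon-separated triple whose third field is pipe-separated with the expiry timestamp (milliseconds) in the fourth slot, and the refresh call is skipped while more than 15 days of validity remain. The decision logic on its own (a hedged sketch; the token layout is inferred from the code, not documented):

package main

import (
    "encoding/base64"
    "fmt"
    "strconv"
    "strings"
    "time"
)

func needsRefresh(authorization string) (bool, error) {
    decoded, err := base64.StdEncoding.DecodeString(authorization)
    if err != nil {
        return false, fmt.Errorf("authorization decode failed: %w", err)
    }
    splits := strings.Split(string(decoded), ":")
    if len(splits) < 3 {
        return false, fmt.Errorf("authorization is invalid, splits < 3")
    }
    strs := strings.Split(splits[2], "|")
    if len(strs) < 4 {
        return false, fmt.Errorf("authorization is invalid, strs < 4")
    }
    expiration, err := strconv.ParseInt(strs[3], 10, 64)
    if err != nil {
        return false, fmt.Errorf("authorization is invalid")
    }
    remaining := expiration - time.Now().UnixMilli()
    if remaining < 0 {
        return false, fmt.Errorf("authorization has expired")
    }
    // refresh only when 15 days or less of validity remain
    return remaining <= 1000*60*60*24*15, nil
}

func main() {
    raw := fmt.Sprintf("app:account:token|x|y|%d", time.Now().Add(10*24*time.Hour).UnixMilli())
    auth := base64.StdEncoding.EncodeToString([]byte(raw))
    fmt.Println(needsRefresh(auth)) // true <nil> — inside the 15-day window
}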
@ -99,21 +123,22 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Authorization": "Basic " + d.Authorization,
"Authorization": "Basic " + d.getAuthorization(),
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",
"mcloud-sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
//"mcloud-skey":"",
"mcloud-version": "6.6.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|6.6.0|chrome|95.0.4638.69|uwIy75obnsRPIwlJSd7D9GhUvFwG96ce||macos 10.15.2||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"mcloud-version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"Inner-Hcy-Router-Https": "1",
})
var e BaseResp
@ -151,7 +176,7 @@ func (d *Yun139) getFiles(catalogID string) ([]model.Obj, error) {
"catalogSortType": 0,
"contentSortType": 0,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -199,7 +224,7 @@ func (d *Yun139) newJson(data map[string]interface{}) base.Json {
"cloudID": d.CloudID,
"cloudType": 1,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -252,7 +277,7 @@ func (d *Yun139) familyGetFiles(catalogID string) ([]model.Obj, error) {
}
files = append(files, &f)
}
if 100*pageNum > resp.Data.TotalCount {
if resp.Data.TotalCount == 0 {
break
}
pageNum++
@ -266,12 +291,12 @@ func (d *Yun139) groupGetFiles(catalogID string) ([]model.Obj, error) {
for {
data := d.newJson(base.Json{
"groupID": d.CloudID,
"catalogID": catalogID,
"catalogID": path.Base(catalogID),
"contentSortType": 0,
"sortDirection": 1,
"startNumber": pageNum,
"endNumber": pageNum + 99,
"path": catalogID,
"path": path.Join(d.RootFolderID, catalogID),
})
var resp QueryGroupContentListResp
@ -307,7 +332,7 @@ func (d *Yun139) groupGetFiles(catalogID string) ([]model.Obj, error) {
}
files = append(files, &f)
}
if pageNum > resp.Data.GetGroupContentResult.NodeCount {
if (pageNum + 99) > resp.Data.GetGroupContentResult.NodeCount {
break
}
pageNum = pageNum + 100
@ -320,7 +345,7 @@ func (d *Yun139) getLink(contentId string) (string, error) {
"appName": "",
"contentID": contentId,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@ -383,17 +408,17 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
}
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"Authorization": "Basic " + d.Authorization,
"Authorization": "Basic " + d.getAuthorization(),
"Caller": "web",
"Cms-Device": "default",
"Mcloud-Channel": "1000101",
"Mcloud-Client": "10701",
"Mcloud-Route": "001",
"Mcloud-Sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
"Mcloud-Version": "7.13.0",
"Mcloud-Version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.13.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
@ -402,7 +427,7 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
"X-Yun-Api-Version": "v1",
"X-Yun-App-Channel": "10000034",
"X-Yun-Channel-Source": "10000034",
"X-Yun-Client-Info": "||9|7.13.0|chrome|120.0.0.0|||windows 10||zh-CN|||dW5kZWZpbmVk||",
"X-Yun-Client-Info": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||dW5kZWZpbmVk||",
"X-Yun-Module-Type": "100",
"X-Yun-Svc-Type": "1",
})
@ -514,3 +539,16 @@ func (d *Yun139) personalGetLink(fileId string) (string, error) {
return jsoniter.Get(res, "data", "url").ToString(), nil
}
}
func (d *Yun139) getAuthorization() string {
if d.ref != nil {
return d.ref.getAuthorization()
}
return d.Authorization
}
func (d *Yun139) getAccount() string {
if d.ref != nil {
return d.ref.getAccount()
}
return d.Account
}
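getAuthorization and getAccount complete the reference mechanism: a storage wired up through InitReference keeps a pointer to the primary Yun139 instance and forwards every credential lookup to it, so the token is stored and refreshed in exactly one place. The pattern boiled down (sketch):

package main

import "fmt"

type storage struct {
    token string
    ref   *storage
}

func (s *storage) getToken() string {
    if s.ref != nil {
        return s.ref.getToken() // delegate to the referenced primary storage
    }
    return s.token
}

func main() {
    primary := &storage{token: "abc"}
    share := &storage{ref: primary}
    fmt.Println(share.getToken()) // abc — the share holds no token of its own
}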

@ -365,7 +365,7 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
log.Debugf("uploadData: %+v", uploadData)
requestURL := uploadData.RequestURL
uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
req, err := http.NewRequest(http.MethodPut, requestURL, bytes.NewReader(byteData))
req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@ -375,11 +375,11 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
req.Header.Set(v[0:i], v[i+1:])
}
r, err := base.HttpClient.Do(req)
log.Debugf("%+v %+v", r, r.Request.Header)
r.Body.Close()
if err != nil {
return err
}
log.Debugf("%+v %+v", r, r.Request.Header)
_ = r.Body.Close()
up(float64(i) * 100 / float64(count))
}
fileMd5 := hex.EncodeToString(md5Sum.Sum(nil))

@ -1,8 +1,8 @@
package _189pc
import (
"container/ring"
"context"
"fmt"
"net/http"
"strconv"
"strings"
@ -14,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Cloud189PC struct {
@ -29,10 +30,11 @@ type Cloud189PC struct {
uploadThread int
familyTransferFolder *ring.Ring
familyTransferFolder *Cloud189Folder
cleanFamilyTransferFile func()
storageConfig driver.Config
ref *Cloud189PC
}
func (y *Cloud189PC) Config() driver.Config {
@ -47,9 +49,18 @@ func (y *Cloud189PC) GetAddition() driver.Additional {
}
func (y *Cloud189PC) Init(ctx context.Context) (err error) {
// compatibility with the legacy upload API
y.storageConfig.NoOverwriteUpload = y.isFamily() && (y.Addition.RapidUpload || y.Addition.UploadMethod == "old")
y.storageConfig = config
if y.isFamily() {
// compatibility with the legacy upload API
if y.Addition.RapidUpload || y.Addition.UploadMethod == "old" {
y.storageConfig.NoOverwriteUpload = true
}
} else {
// family-cloud transfer does not support overwrite upload
if y.Addition.FamilyTransfer {
y.storageConfig.NoOverwriteUpload = true
}
}
// handle personal-cloud and family-cloud parameters
if y.isFamily() && y.RootFolderID == "-11" {
y.RootFolderID = ""
@ -64,20 +75,22 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
y.uploadThread, y.UploadThread = 3, "3"
}
// initialize the request client
if y.client == nil {
y.client = base.NewRestyClient().SetHeaders(map[string]string{
"Accept": "application/json;charset=UTF-8",
"Referer": WEB_URL,
})
}
if y.ref == nil {
// initialize the request client
if y.client == nil {
y.client = base.NewRestyClient().SetHeaders(map[string]string{
"Accept": "application/json;charset=UTF-8",
"Referer": WEB_URL,
})
}
// avoid logging in repeatedly
identity := utils.GetMD5EncodeStr(y.Username + y.Password)
if !y.isLogin() || y.identity != identity {
y.identity = identity
if err = y.login(); err != nil {
return
// avoid logging in repeatedly
identity := utils.GetMD5EncodeStr(y.Username + y.Password)
if !y.isLogin() || y.identity != identity {
y.identity = identity
if err = y.login(); err != nil {
return
}
}
}
@ -88,13 +101,14 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
}
}
// create the transfer folder to avoid file-name clashes
// create the transfer folder
if y.FamilyTransfer {
if y.familyTransferFolder, err = y.createFamilyTransferFolder(32); err != nil {
if err := y.createFamilyTransferFolder(); err != nil {
return err
}
}
// throttle the cleanup of transferred files
y.cleanFamilyTransferFile = utils.NewThrottle2(time.Minute, func() {
if err := y.cleanFamilyTransfer(context.TODO()); err != nil {
utils.Log.Errorf("cleanFamilyTransferFolderError:%s", err)
@ -103,7 +117,17 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
return
}
func (d *Cloud189PC) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Cloud189PC)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (y *Cloud189PC) Drop(ctx context.Context) error {
y.ref = nil
return nil
}
@ -314,35 +338,49 @@ func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if !isFamily && y.FamilyTransfer {
// redirect the upload target to the family-cloud folder
transferDstDir := dstDir
dstDir = (y.familyTransferFolder.Value).(*Cloud189Folder)
y.familyTransferFolder = y.familyTransferFolder.Next()
dstDir = y.familyTransferFolder
// use a temporary file name
srcName := stream.GetName()
stream = &WrapFileStreamer{
FileStreamer: stream,
Name: fmt.Sprintf("0%s.transfer", uuid.NewString()),
}
// upload through the family cloud
isFamily = true
overwrite = false
defer func() {
if newObj != nil {
// batch tasks occasionally fail to delete
y.cleanFamilyTransferFile()
// transfer the family-cloud file to the personal cloud
err = y.SaveFamilyFileToPersonCloud(context.TODO(), y.FamilyID, newObj, transferDstDir, true)
task := BatchTaskInfo{
FileId: newObj.GetID(),
FileName: newObj.GetName(),
IsFolder: BoolToNumber(newObj.IsDir()),
// delete the family-cloud source file
go y.Delete(context.TODO(), y.FamilyID, newObj)
// batch tasks occasionally fail to delete
go y.cleanFamilyTransferFile()
// return the error if the transfer failed
if err != nil {
return
}
// delete the source file
if resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
// delete permanently
if resp, err := y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
// locate the transferred file
var file *Cloud189File
file, err = y.findFileByName(context.TODO(), newObj.GetName(), transferDstDir.GetID(), false)
if err != nil {
if err == errs.ObjectNotFound {
err = fmt.Errorf("unknown error: No transfer file obtained %s", newObj.GetName())
}
return
}
newObj = nil
// rename the transferred file back to its original name
newObj, err = y.Rename(context.TODO(), file, srcName)
if err != nil {
// if renaming fails, delete the source file
_ = y.Delete(context.TODO(), "", file)
}
return
}
}()
}

@ -18,6 +18,7 @@ import (
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils/random"
)
@ -208,3 +209,12 @@ func IF[V any](o bool, t V, f V) V {
}
return f
}
type WrapFileStreamer struct {
model.FileStreamer
Name string
}
func (w *WrapFileStreamer) GetName() string {
return w.Name
}

@ -2,7 +2,6 @@ package _189pc
import (
"bytes"
"container/ring"
"context"
"crypto/md5"
"encoding/base64"
@ -20,9 +19,12 @@ import (
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
@ -57,11 +59,11 @@ const (
func (y *Cloud189PC) SignatureHeader(url, method, params string, isFamily bool) map[string]string {
dateOfGmt := getHttpDateStr()
sessionKey := y.tokenInfo.SessionKey
sessionSecret := y.tokenInfo.SessionSecret
sessionKey := y.getTokenInfo().SessionKey
sessionSecret := y.getTokenInfo().SessionSecret
if isFamily {
sessionKey = y.tokenInfo.FamilySessionKey
sessionSecret = y.tokenInfo.FamilySessionSecret
sessionKey = y.getTokenInfo().FamilySessionKey
sessionSecret = y.getTokenInfo().FamilySessionSecret
}
header := map[string]string{
@ -74,9 +76,9 @@ func (y *Cloud189PC) SignatureHeader(url, method, params string, isFamily bool)
}
func (y *Cloud189PC) EncryptParams(params Params, isFamily bool) string {
sessionSecret := y.tokenInfo.SessionSecret
sessionSecret := y.getTokenInfo().SessionSecret
if isFamily {
sessionSecret = y.tokenInfo.FamilySessionSecret
sessionSecret = y.getTokenInfo().FamilySessionSecret
}
if params != nil {
return AesECBEncrypt(params.Encode(), sessionSecret[:16])
@ -85,7 +87,7 @@ func (y *Cloud189PC) EncryptParams(params Params, isFamily bool) string {
}
func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, params Params, resp interface{}, isFamily ...bool) ([]byte, error) {
req := y.client.R().SetQueryParams(clientSuffix())
req := y.getClient().R().SetQueryParams(clientSuffix())
// set the params
paramsData := y.EncryptParams(params, isBool(isFamily...))
@ -174,8 +176,8 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
}
var erron RespErr
jsoniter.Unmarshal(body, &erron)
xml.Unmarshal(body, &erron)
_ = jsoniter.Unmarshal(body, &erron)
_ = xml.Unmarshal(body, &erron)
if erron.HasError() {
return nil, &erron
}
@ -185,39 +187,9 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
return body, nil
}
func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool) ([]model.Obj, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
res := make([]model.Obj, 0, 130)
res := make([]model.Obj, 0, 100)
for pageNum := 1; ; pageNum++ {
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": "130",
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(y.OrderBy),
"descending": toDesc(y.OrderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": y.OrderBy,
"descending": toDesc(y.OrderDirection),
})
}
}, &resp, isFamily)
resp, err := y.getFilesWithPage(ctx, fileId, isFamily, pageNum, 1000, y.OrderBy, y.OrderDirection)
if err != nil {
return nil, err
}
@ -236,6 +208,63 @@ func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool)
return res, nil
}
func (y *Cloud189PC) getFilesWithPage(ctx context.Context, fileId string, isFamily bool, pageNum int, pageSize int, orderBy string, orderDirection string) (*Cloud189FilesResp, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": fmt.Sprint(pageSize),
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(orderBy),
"descending": toDesc(orderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": orderBy,
"descending": toDesc(orderDirection),
})
}
}, &resp, isFamily)
if err != nil {
return nil, err
}
return &resp, nil
}
func (y *Cloud189PC) findFileByName(ctx context.Context, searchName string, folderId string, isFamily bool) (*Cloud189File, error) {
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, folderId, isFamily, pageNum, 10, "filename", "asc")
if err != nil {
return nil, err
}
// nothing left to fetch, stop
if resp.FileListAO.Count == 0 {
return nil, errs.ObjectNotFound
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
if file.Name == searchName {
return &file, nil
}
}
}
}
func (y *Cloud189PC) login() (err error) {
// initialize the parameters required for login
if y.loginParam == nil {
@ -403,6 +432,9 @@ func (y *Cloud189PC) initLoginParam() error {
// refresh the session
func (y *Cloud189PC) refreshSession() (err error) {
if y.ref != nil {
return y.ref.refreshSession()
}
var erron RespErr
var userSessionResp UserSessionResp
_, err = y.client.R().
@ -478,6 +510,7 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
fileMd5 := md5.New()
silceMd5 := md5.New()
@ -487,7 +520,6 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
if utils.IsCanceled(upCtx) {
break
}
byteData := make([]byte, sliceSize)
if i == count {
byteData = byteData[:lastPartSize]
@ -496,6 +528,7 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
// read the chunk
silceMd5.Reset()
if _, err := io.ReadFull(io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5)), byteData); err != io.EOF && err != nil {
sem.Release(1)
return nil, err
}
@ -505,6 +538,10 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
if err != nil {
return err
@ -512,7 +549,8 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
// step 4: upload the part
uploadUrl := uploadUrls[0]
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, bytes.NewReader(byteData), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
if err != nil {
return err
}
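The added semaphore caps concurrent part uploads at three: workers spawned by the errgroup block in Acquire until a slot frees, and the deferred Release returns it. The combination in isolation (sketch using golang.org/x/sync, which the diff itself imports):

package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/sync/errgroup"
    "golang.org/x/sync/semaphore"
)

func main() {
    ctx := context.Background()
    sem := semaphore.NewWeighted(3) // at most 3 parts in flight
    g, ctx := errgroup.WithContext(ctx)
    for i := 1; i <= 10; i++ {
        part := i
        g.Go(func() error {
            if err := sem.Acquire(ctx, 1); err != nil {
                return err
            }
            defer sem.Release(1)
            time.Sleep(50 * time.Millisecond) // stand-in for the PUT of one part
            fmt.Println("uploaded part", part)
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        fmt.Println(err)
    }
}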
@ -620,7 +658,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
}
// try to resume the upload progress
uploadProgress, ok := base.GetUploadProgress[*UploadProgress](y, y.tokenInfo.SessionKey, fileMd5Hex)
uploadProgress, ok := base.GetUploadProgress[*UploadProgress](y, y.getTokenInfo().SessionKey, fileMd5Hex)
if !ok {
// step 2: pre-upload
params := Params{
@ -687,7 +725,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
if err = threadG.Wait(); err != nil {
if errors.Is(err, context.Canceled) {
uploadProgress.UploadParts = utils.SliceFilter(uploadProgress.UploadParts, func(s string) bool { return s != "" })
base.SaveUploadProgress(y, uploadProgress, y.tokenInfo.SessionKey, fileMd5Hex)
base.SaveUploadProgress(y, uploadProgress, y.getTokenInfo().SessionKey, fileMd5Hex)
}
return nil, err
}
@ -764,6 +802,7 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
if err != nil {
return nil, err
}
rateLimited := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
// create the upload session
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
@ -790,7 +829,7 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
}
_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimited, isFamily)
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return nil, err
}
@ -899,8 +938,7 @@ func (y *Cloud189PC) isLogin() bool {
}
// create the family-cloud transfer folder
func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
folders := ring.New(count)
func (y *Cloud189PC) createFamilyTransferFolder() error {
var rootFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
@ -909,81 +947,61 @@ func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
})
}, &rootFolder, true)
if err != nil {
return nil, err
return err
}
folderCount := 0
// fetch existing folders
files, err := y.getFiles(context.TODO(), rootFolder.GetID(), true)
if err != nil {
return nil, err
}
for _, file := range files {
if folder, ok := file.(*Cloud189Folder); ok {
folders.Value = folder
folders = folders.Next()
folderCount++
}
}
// create new folders
for folderCount < count {
var newFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"folderName": uuid.NewString(),
"familyId": y.FamilyID,
"parentId": rootFolder.GetID(),
})
}, &newFolder, true)
if err != nil {
return nil, err
}
folders.Value = &newFolder
folders = folders.Next()
folderCount++
}
return folders, nil
y.familyTransferFolder = &rootFolder
return nil
}
// clean up the transfer folder
func (y *Cloud189PC) cleanFamilyTransfer(ctx context.Context) error {
var tasks []BatchTaskInfo
r := y.familyTransferFolder
for p := r.Next(); p != r; p = p.Next() {
folder := p.Value.(*Cloud189Folder)
files, err := y.getFiles(ctx, folder.GetID(), true)
transferFolderId := y.familyTransferFolder.GetID()
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, transferFolderId, true, pageNum, 100, "lastOpTime", "asc")
if err != nil {
return err
}
for _, file := range files {
// nothing left to fetch, stop
if resp.FileListAO.Count == 0 {
break
}
var tasks []BatchTaskInfo
for i := 0; i < len(resp.FileListAO.FolderList); i++ {
folder := resp.FileListAO.FolderList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: folder.GetID(),
FileName: folder.GetName(),
IsFolder: BoolToNumber(folder.IsDir()),
})
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: file.GetID(),
FileName: file.GetName(),
IsFolder: BoolToNumber(file.IsDir()),
})
}
}
if len(tasks) > 0 {
// delete
resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, tasks...)
if err != nil {
if len(tasks) > 0 {
// delete
resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// delete permanently
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// delete permanently
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
return err
}
return nil
}
@ -1008,7 +1026,7 @@ func (y *Cloud189PC) getFamilyID() (string, error) {
return "", fmt.Errorf("cannot get automatically,please input family_id")
}
for _, info := range infos {
if strings.Contains(y.tokenInfo.LoginName, info.RemarkName) {
if strings.Contains(y.getTokenInfo().LoginName, info.RemarkName) {
return fmt.Sprint(info.FamilyID), nil
}
}
@ -1060,6 +1078,34 @@ func (y *Cloud189PC) SaveFamilyFileToPersonCloud(ctx context.Context, familyId s
}
}
// permanently delete a file
func (y *Cloud189PC) Delete(ctx context.Context, familyId string, srcObj model.Obj) error {
task := BatchTaskInfo{
FileId: srcObj.GetID(),
FileName: srcObj.GetName(),
IsFolder: BoolToNumber(srcObj.IsDir()),
}
// delete the source file
resp, err := y.CreateBatchTask("DELETE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// empty the recycle bin
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
if err != nil {
return err
}
return nil
}
func (y *Cloud189PC) CreateBatchTask(aType string, familyID string, targetFolderId string, other map[string]string, taskInfos ...BatchTaskInfo) (*CreateBatchTaskResp, error) {
var resp CreateBatchTaskResp
_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {
@ -1142,3 +1188,17 @@ func (y *Cloud189PC) WaitBatchTask(aType string, taskID string, t time.Duration)
time.Sleep(t)
}
}
func (y *Cloud189PC) getTokenInfo() *AppSessionResp {
if y.ref != nil {
return y.ref.getTokenInfo()
}
return y.tokenInfo
}
func (y *Cloud189PC) getClient() *resty.Client {
if y.ref != nil {
return y.ref.getClient()
}
return y.client
}

@ -3,6 +3,7 @@ package alias
import (
"context"
"errors"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
@ -110,14 +111,62 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
for _, dst := range dsts {
link, err := d.link(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
// normally, concurrent chunked downloads are only supported by drivers that return a URL
// nesting an alias inside an alias lets drivers that do not return a URL (crypt, mega, ...) support concurrency
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.ObjectNotFound
}
func (d *Alias) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, parentDir, true)
if err == nil {
return fs.MakeDir(ctx, stdpath.Join(*reqPath, dirName))
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot make sub-dir")
}
return err
}
func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be moved")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be moved to")
}
if err != nil {
return err
}
return fs.Move(ctx, *srcPath, *dstPath)
}
func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
reqPath, err := d.getReqPath(ctx, srcObj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, srcObj, false)
if err == nil {
return fs.Rename(ctx, *reqPath, newName)
}
@ -127,8 +176,33 @@ func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) er
return err
}
func (d *Alias) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be copied")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be copied to")
}
if err != nil {
return err
}
_, err = fs.Copy(ctx, *srcPath, *dstPath)
return err
}
func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
reqPath, err := d.getReqPath(ctx, obj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, obj, false)
if err == nil {
return fs.Remove(ctx, *reqPath)
}
@ -138,4 +212,110 @@ func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutDirectly(ctx, *reqPath, s)
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be Put")
}
return err
}
func (d *Alias) PutURL(ctx context.Context, dstDir model.Obj, name, url string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutURL(ctx, *reqPath, name, url)
}
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot offline download")
}
return err
}
func (d *Alias) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
meta, err := d.getArchiveMeta(ctx, dst, sub, args)
if err == nil {
return meta, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
l, err := d.listArchive(ctx, dst, sub, args)
if err == nil {
return l, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// How can an alias stay compatible when one of its two underlying drivers supports driver-side extraction and the other does not?
// If the archive sits in a driver without driver extraction, GetArchiveMeta returns errs.NotImplement, the extraction URL is prefixed with /ae, and Extract is never called.
// If the archive sits in a driver with driver extraction, GetArchiveMeta returns a valid result, the extraction URL is prefixed with /ad, and Extract is called.
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
link, err := d.extract(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.NotImplement
}
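Concretely (URLs invented for illustration): an archive inside a driver without driver-side extraction is served through the generic archive-proxy route, e.g. /ae/disk/a.zip?inner=%2Fdoc.txt&pass=&sign=..., while one inside a driver that supports extraction uses the driver-extract route /ad/disk/a.zip?..., which is what causes Extract above to run; the /ap URL built in extract() later in this diff is the alias's own proxied variant.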
func (d *Alias) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be decompressed")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be decompressed to")
}
if err != nil {
return err
}
_, err = fs.ArchiveDecompress(ctx, *srcPath, *dstPath, args)
return err
}
var _ driver.Driver = (*Alias)(nil)

View File

@ -9,15 +9,18 @@ type Addition struct {
// Usually one of two
// driver.RootPath
// define other
Paths string `json:"paths" required:"true" type:"text"`
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
Paths string `json:"paths" required:"true" type:"text"`
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
DownloadConcurrency int `json:"download_concurrency" default:"0" required:"false" type:"number" help:"Need to enable proxy"`
DownloadPartSize int `json:"download_part_size" default:"0" type:"number" required:"false" help:"Need to enable proxy. Unit: KB"`
Writable bool `json:"writable" type:"bool" default:"false"`
}
var config = driver.Config{
Name: "Alias",
LocalSort: true,
NoCache: true,
NoUpload: true,
NoUpload: false,
DefaultRoot: "/",
ProxyRangeOption: true,
}
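For orientation, a hypothetical Addition using the new fields (values invented, not part of this PR):

// Example configuration for the new alias options.
addition := Addition{
    Paths:               "/mnt/a\n/mnt/b", // two roots merged under one alias
    ProtectSameName:     true,
    DownloadConcurrency: 4,    // only takes effect when the proxy is enabled
    DownloadPartSize:    1024, // KB, i.e. 1 MiB parts
    Writable:            true, // unlocks MakeDir/Move/Rename/Copy/Remove/Put
}
_ = addition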

View File

@ -3,12 +3,15 @@ package alias
import (
"context"
"fmt"
"net/url"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@ -62,6 +65,7 @@ func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Ob
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}, nil
}
@ -94,10 +98,15 @@ func (d *Alias) list(ctx context.Context, dst, sub string, args *fs.ListArgs) ([
func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, err := fs.GetStorage(reqPath, &fs.GetStoragesArgs{})
// Modeled on the crypt driver
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.Link(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
@ -114,13 +123,13 @@ func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs)
}
return link, nil
}
link, _, err := fs.Link(ctx, reqPath, args)
link, _, err := op.Link(ctx, storage, reqActualPath, args)
return link, err
}
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error) {
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj, isParent bool) (*string, error) {
root, sub := d.getRootAndPath(obj.GetPath())
if sub == "" {
if sub == "" && !isParent {
return nil, errs.NotSupport
}
dsts, ok := d.pathMap[root]
@ -149,3 +158,68 @@ func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error)
}
return reqPath, nil
}
func (d *Alias) getArchiveMeta(ctx context.Context, dst, sub string, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.GetArchiveMeta(ctx, storage, reqActualPath, model.ArchiveMetaArgs{
ArchiveArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) ([]model.Obj, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.ListArchive(ctx, storage, reqActualPath, model.ArchiveListArgs{
ArchiveInnerArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
link := &model.Link{
URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
common.GetApiUrl(args.HttpReq),
utils.EncodePath(reqPath, true),
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
sign.SignArchive(reqPath)),
}
if args.HttpReq != nil && d.ProxyRange {
link.RangeReadCloser = common.NoProxyRange
}
return link, nil
}
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
return nil, errs.NotImplement
}

View File

@ -5,12 +5,14 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@ -34,7 +36,7 @@ func (d *AListV3) GetAddition() driver.Additional {
func (d *AListV3) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var resp common.Resp[MeResp]
_, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
@ -48,15 +50,15 @@ func (d *AListV3) Init(ctx context.Context) error {
}
}
// re-get the user info
_, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
return err
}
if resp.Data.Role == model.GUEST {
url := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(url)
u := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(u)
if err != nil {
return err
}
@ -74,7 +76,7 @@ func (d *AListV3) Drop(ctx context.Context) error {
func (d *AListV3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
@ -116,7 +118,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
userAgent = base.UserAgent
}
}
_, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(FsGetReq{
Path: file.GetPath(),
Password: d.MetaPassword,
@ -131,7 +133,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}
func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
req.SetBody(MkdirOrLinkReq{
Path: path.Join(parentDir.GetPath(), dirName),
})
@ -140,7 +142,7 @@ func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName stri
}
func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@ -151,7 +153,7 @@ func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
req.SetBody(RenameReq{
Path: srcObj.GetPath(),
Name: newName,
@ -161,7 +163,7 @@ func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string)
}
func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@ -172,7 +174,7 @@ func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
req.SetBody(RemoveReq{
Dir: path.Dir(obj.GetPath()),
Names: []string{obj.GetName()},
@ -181,16 +183,29 @@ func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", stream)
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", reader)
if err != nil {
return err
}
req.Header.Set("Authorization", d.Token)
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), stream.GetName()))
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), s.GetName()))
req.Header.Set("Password", d.MetaPassword)
if md5 := s.GetHash().GetHash(utils.MD5); len(md5) > 0 {
req.Header.Set("X-File-Md5", md5)
}
if sha1 := s.GetHash().GetHash(utils.SHA1); len(sha1) > 0 {
req.Header.Set("X-File-Sha1", sha1)
}
if sha256 := s.GetHash().GetHash(utils.SHA256); len(sha256) > 0 {
req.Header.Set("X-File-Sha256", sha256)
}
req.ContentLength = stream.GetSize()
req.ContentLength = s.GetSize()
// client := base.NewHttpClient()
// client.Timeout = time.Hour * 6
res, err := base.HttpClient.Do(req)
@ -219,6 +234,127 @@ func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
return nil
}
func (d *AListV3) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *AListV3) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *AListV3) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
var resp common.Resp[ArchiveMetaResp]
_, _, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if err != nil {
return nil, err
}
return &model.Link{
URL: fmt.Sprintf("%s?inner=%s&pass=%s&sign=%s",
resp.Data.RawURL,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
resp.Data.Sign),
}, nil
}
func (d *AListV3) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.ForwardArchiveReq {
return errs.NotImplement
}
dir, name := path.Split(srcObj.GetPath())
_, _, err := d.request("/fs/archive/decompress", http.MethodPost, func(req *resty.Request) {
req.SetBody(DecompressReq{
ArchivePass: args.Password,
CacheFull: args.CacheFull,
DstDir: dstDir.GetPath(),
InnerPath: args.InnerPath,
Name: []string{name},
PutIntoNewDir: args.PutIntoNewDir,
SrcDir: dir,
})
})
return err
}
//func (d *AList) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}

View File

@ -7,12 +7,13 @@ import (
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{

View File

@ -4,6 +4,7 @@ import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type ListReq struct {
@ -81,3 +82,89 @@ type MeResp struct {
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}
type DecompressReq struct {
ArchivePass string `json:"archive_pass"`
CacheFull bool `json:"cache_full"`
DstDir string `json:"dst_dir"`
InnerPath string `json:"inner_path"`
Name []string `json:"name"`
PutIntoNewDir bool `json:"put_into_new_dir"`
SrcDir string `json:"src_dir"`
}
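ArchiveDecompress in driver.go fills this struct via path.Split(srcObj.GetPath()); a hypothetical instance for illustration (paths invented):

// Example decompress request body as sent to /fs/archive/decompress.
req := DecompressReq{
    ArchivePass:   "secret",
    CacheFull:     true,
    DstDir:        "/extracted",
    InnerPath:     "/",
    Name:          []string{"archive.zip"},
    PutIntoNewDir: true,
    SrcDir:        "/archives/",
}
_ = req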

View File

@ -17,7 +17,7 @@ func (d *AListV3) login() error {
return nil
}
var resp common.Resp[LoginResp]
_, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(base.Json{
"username": d.Username,
"password": d.Password,
@ -31,7 +31,7 @@ func (d *AListV3) login() error {
return nil
}
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
req.SetHeader("Authorization", d.Token)
@ -40,22 +40,26 @@ func (d *AListV3) request(api, method string, callback base.ReqCallback, retry .
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
log.Debugf("[alist_v3] response body: %s", res.String())
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed, status: %s", res.Status())
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
if (code == 401 || code == 403) && !utils.IsBool(retry...) {
err = d.login()
if err != nil {
return nil, err
return nil, code, err
}
return d.request(api, method, callback, true)
}
return nil, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
return nil, code, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), nil
return res.Body(), 200, nil
}
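The extra status-code return exists so call sites can branch on HTTP status without re-parsing the body; the archive endpoints earlier in this diff use it to map 202 to a wrong-password error. A sketch of the call-site pattern (callback is a placeholder):

_, code, err := d.request("/fs/archive/meta", http.MethodPost, callback)
if code == 202 { // upstream signals a wrong archive password with 202
    return nil, errs.WrongArchivePassword
}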

View File

@ -14,13 +14,12 @@ import (
"os"
"time"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@ -194,7 +193,10 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
}
if d.RapidUpload {
buf := bytes.NewBuffer(make([]byte, 0, 1024))
utils.CopyWithBufferN(buf, file, 1024)
_, err := utils.CopyWithBufferN(buf, file, 1024)
if err != nil {
return err
}
reqBody["pre_hash"] = utils.HashData(utils.SHA1, buf.Bytes())
if localFile != nil {
if _, err := localFile.Seek(0, io.SeekStart); err != nil {
@ -286,6 +288,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
file.Reader = localFile
}
rateLimited := driver.NewLimitedUploadStream(ctx, file)
for i, partInfo := range resp.PartInfoList {
if utils.IsCanceled(ctx) {
return ctx.Err()
@ -294,7 +297,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if d.InternalUpload {
url = partInfo.InternalUploadUrl
}
req, err := http.NewRequest("PUT", url, io.LimitReader(file, DEFAULT))
req, err := http.NewRequest("PUT", url, io.LimitReader(rateLimited, DEFAULT))
if err != nil {
return err
}
@ -303,7 +306,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if count > 0 {
up(float64(i) * 100 / float64(count))
}

View File

@ -19,12 +19,12 @@ import (
type AliyundriveOpen struct {
model.Storage
Addition
base string
DriveId string
limitList func(ctx context.Context, data base.Json) (*Files, error)
limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
ref *AliyundriveOpen
}
func (d *AliyundriveOpen) Config() driver.Config {
@ -58,7 +58,17 @@ func (d *AliyundriveOpen) Init(ctx context.Context) error {
return nil
}
func (d *AliyundriveOpen) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*AliyundriveOpen)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *AliyundriveOpen) Drop(ctx context.Context) error {
d.ref = nil
return nil
}

View File

@ -32,11 +32,10 @@ var config = driver.Config{
DefaultRoot: "root",
NoOverwriteUpload: true,
}
var API_URL = "https://openapi.alipan.com"
func init() {
op.RegisterDriver(func() driver.Driver {
return &AliyundriveOpen{
base: "https://openapi.alipan.com",
}
return &AliyundriveOpen{}
})
}

View File

@ -77,7 +77,7 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
@ -126,7 +126,7 @@ func getProofRange(input string, size int64) (*ProofRange, error) {
}
func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error) {
proofRange, err := getProofRange(d.AccessToken, stream.GetSize())
proofRange, err := getProofRange(d.getAccessToken(), stream.GetSize())
if err != nil {
return "", err
}
@ -251,8 +251,9 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
rd = utils.NewMultiReadable(srd)
}
err = retry.Do(func() error {
rd.Reset()
return d.uploadPart(ctx, rd, createResp.PartInfoList[i])
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
return d.uploadPart(ctx, rateLimitedRd, createResp.PartInfoList[i])
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),

View File

@ -19,7 +19,7 @@ import (
// do others that not defined in Driver interface
func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
url := d.base + "/oauth/access_token"
url := API_URL + "/oauth/access_token"
if d.OauthTokenURL != "" && d.ClientID == "" {
url = d.OauthTokenURL
}
@ -74,6 +74,9 @@ func getSub(token string) (string, error) {
}
func (d *AliyundriveOpen) refreshToken() error {
if d.ref != nil {
return d.ref.refreshToken()
}
refresh, access, err := d._refreshToken()
for i := 0; i < 3; i++ {
if err == nil {
@ -100,7 +103,7 @@ func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback,
func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
req := base.RestyClient.R()
// TODO check whether access_token is expired
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
req.SetHeader("Authorization", "Bearer "+d.getAccessToken())
if method == http.MethodPost {
req.SetHeader("Content-Type", "application/json")
}
@ -109,7 +112,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
}
var e ErrResp
req.SetError(&e)
res, err := req.Execute(method, d.base+uri)
res, err := req.Execute(method, API_URL+uri)
if err != nil {
if res != nil {
log.Errorf("[aliyundrive_open] request error: %s", res.String())
@ -118,7 +121,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
}
isRetry := len(retry) > 0 && retry[0]
if e.Code != "" {
if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.AccessToken == "") {
if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.getAccessToken() == "") {
err = d.refreshToken()
if err != nil {
return nil, err, nil
@ -176,3 +179,10 @@ func getNowTime() (time.Time, string) {
nowTimeStr := nowTime.Format("2006-01-02T15:04:05.000Z")
return nowTime, nowTimeStr
}
func (d *AliyundriveOpen) getAccessToken() string {
if d.ref != nil {
return d.ref.getAccessToken()
}
return d.AccessToken
}

View File

@ -2,6 +2,7 @@ package drivers
import (
_ "github.com/alist-org/alist/v3/drivers/115"
_ "github.com/alist-org/alist/v3/drivers/115_open"
_ "github.com/alist-org/alist/v3/drivers/115_share"
_ "github.com/alist-org/alist/v3/drivers/123"
_ "github.com/alist-org/alist/v3/drivers/123_link"
@ -15,15 +16,19 @@ import (
_ "github.com/alist-org/alist/v3/drivers/aliyundrive"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_open"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_share"
_ "github.com/alist-org/alist/v3/drivers/azure_blob"
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/chaoxing"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/crypt"
_ "github.com/alist-org/alist/v3/drivers/doubao"
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/febbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"
_ "github.com/alist-org/alist/v3/drivers/github"
_ "github.com/alist-org/alist/v3/drivers/github_releases"
_ "github.com/alist-org/alist/v3/drivers/google_drive"
_ "github.com/alist-org/alist/v3/drivers/google_photo"
_ "github.com/alist-org/alist/v3/drivers/halalcloud"
@ -35,6 +40,7 @@ import (
_ "github.com/alist-org/alist/v3/drivers/local"
_ "github.com/alist-org/alist/v3/drivers/mediatrack"
_ "github.com/alist-org/alist/v3/drivers/mega"
_ "github.com/alist-org/alist/v3/drivers/misskey"
_ "github.com/alist-org/alist/v3/drivers/mopan"
_ "github.com/alist-org/alist/v3/drivers/netease_music"
_ "github.com/alist-org/alist/v3/drivers/onedrive"

View File

@ -0,0 +1,313 @@
package azure_blob
import (
"context"
"fmt"
"io"
"path"
"regexp"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
)
// Azure Blob Storage based on the blob APIs
// Link: https://learn.microsoft.com/rest/api/storageservices/blob-service-rest-api
type AzureBlob struct {
model.Storage
Addition
client *azblob.Client
containerClient *container.Client
config driver.Config
}
// Config returns the driver configuration.
func (d *AzureBlob) Config() driver.Config {
return d.config
}
// GetAddition returns additional settings specific to Azure Blob Storage.
func (d *AzureBlob) GetAddition() driver.Additional {
return &d.Addition
}
// Init initializes the Azure Blob Storage client using shared key authentication.
func (d *AzureBlob) Init(ctx context.Context) error {
// Validate the endpoint URL
accountName := extractAccountName(d.Addition.Endpoint)
if !regexp.MustCompile(`^[a-z0-9]+$`).MatchString(accountName) {
return fmt.Errorf("invalid storage account name: must be chars of lowercase letters or numbers only")
}
credential, err := azblob.NewSharedKeyCredential(accountName, d.Addition.AccessKey)
if err != nil {
return fmt.Errorf("failed to create credential: %w", err)
}
// Check if Endpoint is just account name
endpoint := d.Addition.Endpoint
if accountName == endpoint {
endpoint = fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
}
// Initialize Azure Blob client with retry policy
client, err := azblob.NewClientWithSharedKeyCredential(endpoint, credential,
&azblob.ClientOptions{ClientOptions: azcore.ClientOptions{
Retry: policy.RetryOptions{
MaxRetries: MaxRetries,
RetryDelay: RetryDelay,
},
}})
if err != nil {
return fmt.Errorf("failed to create client: %w", err)
}
d.client = client
// Ensure container exists or create it
containerName := strings.Trim(d.Addition.ContainerName, "/ \\")
if containerName == "" {
return fmt.Errorf("container name cannot be empty")
}
return d.createContainerIfNotExists(ctx, containerName)
}
// Drop releases resources associated with the Azure Blob client.
func (d *AzureBlob) Drop(ctx context.Context) error {
d.client = nil
return nil
}
// List retrieves blobs and directories under the specified path.
func (d *AzureBlob) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
prefix := ensureTrailingSlash(dir.GetPath())
pager := d.containerClient.NewListBlobsHierarchyPager("/", &container.ListBlobsHierarchyOptions{
Prefix: &prefix,
})
var objs []model.Obj
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
// Process directories
for _, blobPrefix := range page.Segment.BlobPrefixes {
objs = append(objs, &model.Object{
Name: path.Base(strings.TrimSuffix(*blobPrefix.Name, "/")),
Path: *blobPrefix.Name,
Modified: *blobPrefix.Properties.LastModified,
Ctime: *blobPrefix.Properties.CreationTime,
IsFolder: true,
})
}
// Process files
for _, blob := range page.Segment.BlobItems {
if strings.HasSuffix(*blob.Name, "/") {
continue
}
objs = append(objs, &model.Object{
Name: path.Base(*blob.Name),
Path: *blob.Name,
Size: *blob.Properties.ContentLength,
Modified: *blob.Properties.LastModified,
Ctime: *blob.Properties.CreationTime,
IsFolder: false,
})
}
}
return objs, nil
}
// Link generates a temporary SAS URL for accessing a blob.
func (d *AzureBlob) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
blobClient := d.containerClient.NewBlobClient(file.GetPath())
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
sasURL, err := blobClient.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return nil, fmt.Errorf("failed to generate SAS URL: %w", err)
}
return &model.Link{URL: sasURL}, nil
}
// MakeDir creates a virtual directory by uploading an empty blob as a marker.
func (d *AzureBlob) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
dirPath := path.Join(parentDir.GetPath(), dirName)
if err := d.mkDir(ctx, dirPath); err != nil {
return nil, fmt.Errorf("failed to create directory marker: %w", err)
}
return &model.Object{
Path: dirPath,
Name: dirName,
IsFolder: true,
}, nil
}
// Move relocates an object (file or directory) to a new directory.
func (d *AzureBlob) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("move operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Rename changes the name of an existing object.
func (d *AzureBlob) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(path.Dir(srcPath), newName)
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("rename operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: newName,
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Copy duplicates an object (file or directory) to a specified destination directory.
func (d *AzureBlob) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
// Handle directory copying using flat listing
if srcObj.IsDir() {
srcPrefix := srcObj.GetPath()
srcPrefix = ensureTrailingSlash(srcPrefix)
// Get all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPrefix)
if err != nil {
return nil, fmt.Errorf("failed to list source directory contents: %w", err)
}
// Process each blob - copy to destination
for _, blob := range blobs {
// Skip the directory marker itself
if *blob.Name == srcPrefix {
continue
}
// Calculate relative path from source
relPath := strings.TrimPrefix(*blob.Name, srcPrefix)
itemDstPath := path.Join(dstPath, relPath)
if strings.HasSuffix(itemDstPath, "/") || (blob.Metadata["hdi_isfolder"] != nil && *blob.Metadata["hdi_isfolder"] == "true") {
// Create directory marker at destination
err := d.mkDir(ctx, itemDstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy the blob
if err := d.copyFile(ctx, *blob.Name, itemDstPath); err != nil {
return nil, fmt.Errorf("failed to copy %s: %w", *blob.Name, err)
}
}
}
// Create directory marker at destination if needed
if len(blobs) == 0 {
err := d.mkDir(ctx, dstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: true,
}, nil
}
// Copy a single file
if err := d.copyFile(ctx, srcObj.GetPath(), dstPath); err != nil {
return nil, fmt.Errorf("failed to copy blob: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// Remove deletes a specified blob or recursively deletes a directory and its contents.
func (d *AzureBlob) Remove(ctx context.Context, obj model.Obj) error {
path := obj.GetPath()
// Handle recursive directory deletion
if obj.IsDir() {
return d.deleteFolder(ctx, path)
}
// Delete single file
return d.deleteFile(ctx, path, false)
}
// Put uploads a file stream to Azure Blob Storage with progress tracking.
func (d *AzureBlob) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
blobPath := path.Join(dstDir.GetPath(), stream.GetName())
blobClient := d.containerClient.NewBlockBlobClient(blobPath)
// Determine optimal upload options based on file size
options := optimizedUploadOptions(stream.GetSize())
// Track upload progress
progressTracker := &progressTracker{
total: stream.GetSize(),
updateProgress: up,
}
// Wrap stream to handle context cancellation and progress tracking
limitedStream := driver.NewLimitedUploadStream(ctx, io.TeeReader(stream, progressTracker))
// Upload the stream to Azure Blob Storage
_, err := blobClient.UploadStream(ctx, limitedStream, options)
if err != nil {
return nil, fmt.Errorf("failed to upload file: %w", err)
}
return &model.Object{
Path: blobPath,
Name: stream.GetName(),
Size: stream.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// The following methods related to archive handling are not implemented yet.
// func (d *AzureBlob) GetArchiveMeta(...) {...}
// func (d *AzureBlob) ListArchive(...) {...}
// func (d *AzureBlob) Extract(...) {...}
// func (d *AzureBlob) ArchiveDecompress(...) {...}
// Ensure AzureBlob implements the driver.Driver interface.
var _ driver.Driver = (*AzureBlob)(nil)

View File

@ -0,0 +1,27 @@
package azure_blob
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Endpoint string `json:"endpoint" required:"true" default:"https://<accountname>.blob.core.windows.net/" help:"e.g. https://accountname.blob.core.windows.net/. The full endpoint URL for Azure Storage, including the unique storage account name (3 ~ 24 numbers and lowercase letters only)."`
AccessKey string `json:"access_key" required:"true" help:"The access key for Azure Storage, used for authentication. https://learn.microsoft.com/azure/storage/common/storage-account-keys-manage"`
ContainerName string `json:"container_name" required:"true" help:"The name of the container in Azure Storage (created in the Azure portal). https://learn.microsoft.com/azure/storage/blobs/blob-containers-portal"`
SignURLExpire int `json:"sign_url_expire" type:"number" default:"4" help:"The expiration time for SAS URLs, in hours."`
}
var config = driver.Config{
Name: "Azure Blob Storage",
LocalSort: true,
CheckStatus: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &AzureBlob{
config: config,
}
})
}

View File

@ -0,0 +1,20 @@
package azure_blob
import "github.com/alist-org/alist/v3/internal/driver"
// progressTracker is used to track upload progress
type progressTracker struct {
total int64
current int64
updateProgress driver.UpdateProgress
}
// Write implements io.Writer to track progress
func (pt *progressTracker) Write(p []byte) (n int, err error) {
n = len(p)
pt.current += int64(n)
if pt.updateProgress != nil && pt.total > 0 {
pt.updateProgress(float64(pt.current) * 100 / float64(pt.total))
}
return n, nil
}
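driver.go wires this tracker in through io.TeeReader so that every byte read from the upload stream also passes through Write. A standalone sketch of the same wiring (reader and callback invented):

// Each Read from src is mirrored into pt.Write, which turns the running
// byte count into a percentage.
src := strings.NewReader("payload")
pt := &progressTracker{
    total:          int64(src.Len()),
    updateProgress: func(p float64) { fmt.Printf("%.0f%%\n", p) },
}
_, _ = io.Copy(io.Discard, io.TeeReader(src, pt))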

drivers/azure_blob/util.go Normal file
View File

@ -0,0 +1,401 @@
package azure_blob
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"path"
"sort"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
log "github.com/sirupsen/logrus"
)
const (
// MaxRetries defines the maximum number of retry attempts for Azure operations
MaxRetries = 3
// RetryDelay defines the base delay between retries
RetryDelay = 3 * time.Second
// MaxBatchSize defines the maximum number of operations in a single batch request
MaxBatchSize = 128
)
// extractAccountName extracts the account name from an Azure Storage endpoint
func extractAccountName(endpoint string) string {
// Strip the protocol prefix
endpoint = strings.TrimPrefix(endpoint, "https://")
endpoint = strings.TrimPrefix(endpoint, "http://")
// Take the part before the first dot (i.e. the account name)
parts := strings.Split(endpoint, ".")
if len(parts) > 0 {
// to lower case
return strings.ToLower(parts[0])
}
return ""
}
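Illustrative behaviour (examples invented, not part of the PR):

// extractAccountName("https://myacct.blob.core.windows.net/") -> "myacct"
// extractAccountName("myacct")                                -> "myacct" (no dot: whole string)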
// isNotFoundError checks if the error is a "not found" type error
func isNotFoundError(err error) bool {
var storageErr *azcore.ResponseError
if errors.As(err, &storageErr) {
return storageErr.StatusCode == 404
}
// Fallback to string matching for backwards compatibility
return err != nil && strings.Contains(err.Error(), "BlobNotFound")
}
// flattenListBlobs - Optimize blob listing to handle pagination better
func (d *AzureBlob) flattenListBlobs(ctx context.Context, prefix string) ([]container.BlobItem, error) {
// Standardize prefix format
prefix = ensureTrailingSlash(prefix)
var blobItems []container.BlobItem
pager := d.containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
Prefix: &prefix,
Include: container.ListBlobsInclude{
Metadata: true,
},
})
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
for _, blob := range page.Segment.BlobItems {
blobItems = append(blobItems, *blob)
}
}
return blobItems, nil
}
// batchDeleteBlobs - Simplify batch deletion logic
func (d *AzureBlob) batchDeleteBlobs(ctx context.Context, blobPaths []string) error {
if len(blobPaths) == 0 {
return nil
}
// Process in batches of MaxBatchSize
for i := 0; i < len(blobPaths); i += MaxBatchSize {
end := min(i+MaxBatchSize, len(blobPaths))
currentBatch := blobPaths[i:end]
// Create batch builder
batchBuilder, err := d.containerClient.NewBatchBuilder()
if err != nil {
return fmt.Errorf("failed to create batch builder: %w", err)
}
// Add delete operations
for _, blobPath := range currentBatch {
if err := batchBuilder.Delete(blobPath, nil); err != nil {
return fmt.Errorf("failed to add delete operation for %s: %w", blobPath, err)
}
}
// Submit batch
responses, err := d.containerClient.SubmitBatch(ctx, batchBuilder, nil)
if err != nil {
return fmt.Errorf("batch delete request failed: %w", err)
}
// Check responses
for _, resp := range responses.Responses {
if resp.Error != nil && !isNotFoundError(resp.Error) {
// Get the blob name for a more informative error message
blobName := "unknown"
if resp.BlobName != nil {
blobName = *resp.BlobName
}
return fmt.Errorf("failed to delete blob %s: %v", blobName, resp.Error)
}
}
}
return nil
}
// deleteFolder recursively deletes a directory and all its contents
func (d *AzureBlob) deleteFolder(ctx context.Context, prefix string) error {
// Ensure directory path ends with slash
prefix = ensureTrailingSlash(prefix)
// Get all blobs under the directory using flattenListBlobs
globs, err := d.flattenListBlobs(ctx, prefix)
if err != nil {
return fmt.Errorf("failed to list blobs for deletion: %w", err)
}
// If there are blobs in the directory, delete them
if len(globs) > 0 {
// Separate files from directory markers
var filePaths []string
var dirPaths []string
for _, blob := range globs {
blobName := *blob.Name
if isDirectory(blob) {
// remove trailing slash for directory names
dirPaths = append(dirPaths, strings.TrimSuffix(blobName, "/"))
} else {
filePaths = append(filePaths, blobName)
}
}
// Delete files first, then directories
if len(filePaths) > 0 {
if err := d.batchDeleteBlobs(ctx, filePaths); err != nil {
return err
}
}
if len(dirPaths) > 0 {
// Group directories by path depth
depthMap := make(map[int][]string)
for _, dir := range dirPaths {
depth := strings.Count(dir, "/") // 计算目录深度
depthMap[depth] = append(depthMap[depth], dir)
}
// Sort depths in descending order
var depths []int
for depth := range depthMap {
depths = append(depths, depth)
}
sort.Sort(sort.Reverse(sort.IntSlice(depths)))
// Batch-delete layer by layer, deepest first
for _, depth := range depths {
batch := depthMap[depth]
if err := d.batchDeleteBlobs(ctx, batch); err != nil {
return err
}
}
}
}
// Finally, delete the directory marker itself
return d.deleteEmptyDirectory(ctx, prefix)
}
// deleteFile deletes a single file or blob with better error handling
func (d *AzureBlob) deleteFile(ctx context.Context, path string, isDir bool) error {
blobClient := d.containerClient.NewBlobClient(path)
_, err := blobClient.Delete(ctx, nil)
if err != nil && !(isDir && isNotFoundError(err)) {
return err
}
return nil
}
// copyFile copies a single blob from source path to destination path
func (d *AzureBlob) copyFile(ctx context.Context, srcPath, dstPath string) error {
srcBlob := d.containerClient.NewBlobClient(srcPath)
dstBlob := d.containerClient.NewBlobClient(dstPath)
// Use configured expiration time for SAS URL
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
srcURL, err := srcBlob.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return fmt.Errorf("failed to generate source SAS URL: %w", err)
}
_, err = dstBlob.StartCopyFromURL(ctx, srcURL, nil)
return err
}
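Note that StartCopyFromURL only initiates a copy, which the service may complete asynchronously for large blobs. A hypothetical follow-up (not in this PR) could poll the destination until the copy settles; the field and constant names assume the azblob blob subpackage's GetProperties response and CopyStatus constants:

// Hypothetical helper: block until a server-side copy leaves the pending state.
func waitForCopy(ctx context.Context, dstBlob *blob.Client) error {
    for {
        props, err := dstBlob.GetProperties(ctx, nil)
        if err != nil {
            return err
        }
        if props.CopyStatus == nil || *props.CopyStatus != blob.CopyStatusTypePending {
            return nil
        }
        time.Sleep(time.Second)
    }
}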
// createContainerIfNotExists - Create container if not exists
// Clean up commented code
func (d *AzureBlob) createContainerIfNotExists(ctx context.Context, containerName string) error {
serviceClient := d.client.ServiceClient()
containerClient := serviceClient.NewContainerClient(containerName)
var options = service.CreateContainerOptions{}
_, err := containerClient.Create(ctx, &options)
if err != nil {
var responseErr *azcore.ResponseError
if errors.As(err, &responseErr) && responseErr.ErrorCode != "ContainerAlreadyExists" {
return fmt.Errorf("failed to create or access container [%s]: %w", containerName, err)
}
}
d.containerClient = containerClient
return nil
}
// mkDir creates a virtual directory marker by uploading an empty blob with metadata.
func (d *AzureBlob) mkDir(ctx context.Context, fullDirName string) error {
dirPath := ensureTrailingSlash(fullDirName)
blobClient := d.containerClient.NewBlockBlobClient(dirPath)
// Upload an empty blob with metadata indicating it's a directory
_, err := blobClient.Upload(ctx, struct {
*bytes.Reader
io.Closer
}{
Reader: bytes.NewReader([]byte{}),
Closer: io.NopCloser(nil),
}, &blockblob.UploadOptions{
Metadata: map[string]*string{
"hdi_isfolder": to.Ptr("true"),
},
})
return err
}
// ensureTrailingSlash ensures the provided path ends with a trailing slash.
func ensureTrailingSlash(path string) string {
if !strings.HasSuffix(path, "/") {
return path + "/"
}
return path
}
// moveOrRename moves or renames blobs or directories from source to destination.
func (d *AzureBlob) moveOrRename(ctx context.Context, srcPath, dstPath string, isDir bool, srcSize int64) error {
if isDir {
// Normalize paths for directory operations
srcPath = ensureTrailingSlash(srcPath)
dstPath = ensureTrailingSlash(dstPath)
// List all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPath)
if err != nil {
return fmt.Errorf("failed to list blobs: %w", err)
}
// Iterate and copy each blob to the destination
for _, item := range blobs {
srcBlobName := *item.Name
relPath := strings.TrimPrefix(srcBlobName, srcPath)
itemDstPath := path.Join(dstPath, relPath)
if isDirectory(item) {
// Create directory marker at destination
if err := d.mkDir(ctx, itemDstPath); err != nil {
return fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy file blob to destination
if err := d.copyFile(ctx, srcBlobName, itemDstPath); err != nil {
return fmt.Errorf("failed to copy blob [%s]: %w", srcBlobName, err)
}
}
}
// Handle empty directories by creating a marker at destination
if len(blobs) == 0 {
if err := d.mkDir(ctx, dstPath); err != nil {
return fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
// Delete source directory and its contents
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Warnf("failed to delete source directory [%s]: %v\n, and try again", srcPath, err)
// Retry deletion once more and ignore the result
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Errorf("Retry deletion of source directory [%s] failed: %v", srcPath, err)
}
}
return nil
}
// Single file move or rename operation
if err := d.copyFile(ctx, srcPath, dstPath); err != nil {
return fmt.Errorf("failed to copy file: %w", err)
}
// Delete source file after successful copy
if err := d.deleteFile(ctx, srcPath, false); err != nil {
log.Errorf("Error deleting source file [%s]: %v", srcPath, err)
}
return nil
}
// optimizedUploadOptions returns the optimal upload options based on file size
func optimizedUploadOptions(fileSize int64) *azblob.UploadStreamOptions {
options := &azblob.UploadStreamOptions{
BlockSize: 4 * 1024 * 1024, // 4MB block size
Concurrency: 4, // Default concurrency
}
// For large files, increase block size and concurrency
if fileSize > 256*1024*1024 { // For files larger than 256MB
options.BlockSize = 8 * 1024 * 1024 // 8MB blocks
options.Concurrency = 8 // More concurrent uploads
}
// For very large files (>1GB)
if fileSize > 1024*1024*1024 {
options.BlockSize = 16 * 1024 * 1024 // 16MB blocks
options.Concurrency = 16 // Higher concurrency
}
return options
}
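A quick spot check of the tiers (sizes invented): 100 MiB keeps the 4 MiB/4-worker defaults, 512 MiB gets 8 MiB blocks with 8 workers, and 2 GiB gets 16 MiB blocks with 16 workers, i.e. 2048/16 = 128 blocks spread over 16 goroutines:

small := optimizedUploadOptions(100 << 20) // 100 MiB -> 4 MiB blocks, 4 workers
large := optimizedUploadOptions(2 << 30)   // 2 GiB   -> 16 MiB blocks, 16 workers
fmt.Println(small.BlockSize, small.Concurrency, large.BlockSize, large.Concurrency)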
// isDirectory determines if a blob represents a directory
// Checks multiple indicators: path suffix, metadata, and content type
func isDirectory(blob container.BlobItem) bool {
// Check path suffix
if strings.HasSuffix(*blob.Name, "/") {
return true
}
// Check metadata for directory marker
if blob.Metadata != nil {
if val, ok := blob.Metadata["hdi_isfolder"]; ok && val != nil && *val == "true" {
return true
}
// Azure Storage Explorer and other tools may use different metadata keys
if val, ok := blob.Metadata["is_directory"]; ok && val != nil && strings.ToLower(*val) == "true" {
return true
}
}
// Check content type (some tools mark directories with specific content types)
if blob.Properties != nil && blob.Properties.ContentType != nil {
contentType := strings.ToLower(*blob.Properties.ContentType)
if blob.Properties.ContentLength != nil && *blob.Properties.ContentLength == 0 && (contentType == "application/directory" || contentType == "directory") {
return true
}
}
return false
}
// deleteEmptyDirectory deletes a directory only if it's empty
func (d *AzureBlob) deleteEmptyDirectory(ctx context.Context, dirPath string) error {
// Directory is empty, delete the directory marker
blobClient := d.containerClient.NewBlobClient(strings.TrimSuffix(dirPath, "/"))
_, err := blobClient.Delete(ctx, nil)
// Also try deleting with trailing slash (for different directory marker formats)
if err != nil && isNotFoundError(err) {
blobClient = d.containerClient.NewBlobClient(dirPath)
_, err = blobClient.Delete(ctx, nil)
}
// Ignore not found errors
if err != nil && isNotFoundError(err) {
log.Infof("Directory [%s] not found during deletion: %v", dirPath, err)
return nil
}
return err
}

View File

@ -12,6 +12,8 @@ import (
"strconv"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@ -76,6 +78,8 @@ func (d *BaiduNetdisk) List(ctx context.Context, dir model.Obj, args model.ListA
func (d *BaiduNetdisk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.DownloadAPI == "crack" {
return d.linkCrack(file, args)
} else if d.DownloadAPI == "crack_video" {
return d.linkCrackVideo(file, args)
}
return d.linkOfficial(file, args)
}
@ -187,7 +191,7 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
streamSize := stream.GetSize()
sliceSize := d.getSliceSize()
sliceSize := d.getSliceSize(streamSize)
count := int(math.Max(math.Ceil(float64(streamSize)/float64(sliceSize)), 1))
lastBlockSize := streamSize % sliceSize
if streamSize > 0 && lastBlockSize == 0 {
@ -195,7 +199,7 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
//cal md5 for first 256k data
const SliceSize int64 = 256 * 1024
const SliceSize int64 = 256 * utils.KB
// cal md5
blockList := make([]string, 0, count)
byteSize := sliceSize
@ -260,9 +264,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
// Step 2: upload the slices
threadG, upCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(3),
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@ -273,6 +278,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
byteSize = lastBlockSize
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
params := map[string]string{
"method": "upload",
"access_token": d.AccessToken,
@ -281,7 +290,8 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
"uploadid": precreateResp.Uploadid,
"partseq": strconv.Itoa(partseq),
}
err := d.uploadSlice(ctx, params, stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
err := d.uploadSlice(ctx, params, stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, byteSize)))
if err != nil {
return err
}
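The pattern above (AList's errgroup wrapper fanning out slice uploads while a weighted semaphore caps concurrency at 3) mirrors the stock x/sync combination; a generic sketch with invented helpers (parts, uploadPart), not AList APIs:

g, gctx := errgroup.WithContext(ctx)
sem := semaphore.NewWeighted(3)
for _, part := range parts {
    part := part // pre-Go 1.22 loop-variable capture
    g.Go(func() error {
        if err := sem.Acquire(gctx, 1); err != nil {
            return err
        }
        defer sem.Release(1)
        return uploadPart(gctx, part)
    })
}
err := g.Wait()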

View File

@ -8,16 +8,18 @@ import (
type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
driver.RootPath
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack,crack_video" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
LowBandwithUploadMode bool `json:"low_bandwith_upload_mode" default:"false"`
OnlyListVideoFile bool `json:"only_list_video_file" default:"false"`
}
var config = driver.Config{

View File

@ -17,7 +17,7 @@ type TokenErrResp struct {
type File struct {
//TkbindId int `json:"tkbind_id"`
//OwnerType int `json:"owner_type"`
//Category int `json:"category"`
Category int `json:"category"`
//RealCategory string `json:"real_category"`
FsId int64 `json:"fs_id"`
//OperId int `json:"oper_id"`

View File

@ -79,6 +79,12 @@ func (d *BaiduNetdisk) request(furl string, method string, callback base.ReqCall
return retry.Unrecoverable(err2)
}
}
if 31023 == errno && d.DownloadAPI == "crack_video" {
result = res.Body()
return nil
}
return fmt.Errorf("req: [%s] ,errno: %d, refer to https://pan.baidu.com/union/doc/", furl, errno)
}
result = res.Body()
@ -131,12 +137,21 @@ func (d *BaiduNetdisk) getFiles(dir string) ([]File, error) {
if len(resp.List) == 0 {
break
}
res = append(res, resp.List...)
if d.OnlyListVideoFile {
for _, file := range resp.List {
if file.Isdir == 1 || file.Category == 1 {
res = append(res, file)
}
}
} else {
res = append(res, resp.List...)
}
}
return res, nil
}
func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkOfficial(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp
params := map[string]string{
"method": "filemetas",
@ -164,7 +179,7 @@ func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model
}, nil
}
func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkCrack(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp2
param := map[string]string{
"target": fmt.Sprintf("[\"%s\"]", file.GetPath()),
@ -187,6 +202,34 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
}, nil
}
func (d *BaiduNetdisk) linkCrackVideo(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
param := map[string]string{
"type": "VideoURL",
"path": fmt.Sprintf("%s", file.GetPath()),
"fs_id": file.GetID(),
"devuid": "0%1",
"clienttype": "1",
"channel": "android_15_25010PN30C_bd-netdisk_1523a",
"nom3u8": "1",
"dlink": "1",
"media": "1",
"origin": "dlna",
}
resp, err := d.request("https://pan.baidu.com/api/mediainfo", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param)
}, nil)
if err != nil {
return nil, err
}
return &model.Link{
URL: utils.Json.Get(resp, "info", "dlink").ToString(),
Header: http.Header{
"User-Agent": []string{d.CustomCrackUA},
},
}, nil
}
func (d *BaiduNetdisk) manage(opera string, filelist any) ([]byte, error) {
params := map[string]string{
"method": "filemanager",
@ -230,22 +273,72 @@ func joinTime(form map[string]string, ctime, mtime int64) {
const (
DefaultSliceSize int64 = 4 * utils.MB
VipSliceSize = 16 * utils.MB
SVipSliceSize = 32 * utils.MB
VipSliceSize int64 = 16 * utils.MB
SVipSliceSize int64 = 32 * utils.MB
MaxSliceNum = 2048 // the docs say 1024 (or nothing at all), but testing shows 2048 works
SliceStep int64 = 1 * utils.MB
)
func (d *BaiduNetdisk) getSliceSize() int64 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
}
switch d.vipType {
case 1:
return VipSliceSize
case 2:
return SVipSliceSize
default:
func (d *BaiduNetdisk) getSliceSize(filesize int64) int64 {
// non-VIP accounts are fixed at 4MB
if d.vipType == 0 {
if d.CustomUploadPartSize != 0 {
log.Warnf("CustomUploadPartSize is not supported for non-vip user, use DefaultSliceSize")
}
if filesize > MaxSliceNum*DefaultSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return DefaultSliceSize
}
if d.CustomUploadPartSize != 0 {
if d.CustomUploadPartSize < DefaultSliceSize {
log.Warnf("CustomUploadPartSize(%d) is less than DefaultSliceSize(%d), use DefaultSliceSize", d.CustomUploadPartSize, DefaultSliceSize)
return DefaultSliceSize
}
if d.vipType == 1 && d.CustomUploadPartSize > VipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than VipSliceSize(%d), use VipSliceSize", d.CustomUploadPartSize, VipSliceSize)
return VipSliceSize
}
if d.vipType == 2 && d.CustomUploadPartSize > SVipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than SVipSliceSize(%d), use SVipSliceSize", d.CustomUploadPartSize, SVipSliceSize)
return SVipSliceSize
}
return d.CustomUploadPartSize
}
maxSliceSize := DefaultSliceSize
switch d.vipType {
case 1:
maxSliceSize = VipSliceSize
case 2:
maxSliceSize = SVipSliceSize
}
// upload on low bandwidth
if d.LowBandwithUploadMode {
size := DefaultSliceSize
for size <= maxSliceSize {
if filesize <= MaxSliceNum*size {
return size
}
size += SliceStep
}
}
if filesize > MaxSliceNum*maxSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return maxSliceSize
}
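getSliceSize now takes the file size into account. In low-bandwidth mode it searches for the smallest slice size whose MaxSliceNum multiple still covers the file, stepping up 1MB at a time from the 4MB default. A sketch of just that search, as a hypothetical standalone helper with constants mirroring the driver's:

    const (
        mb               int64 = 1024 * 1024
        defaultSliceSize       = 4 * mb
        sliceStep              = 1 * mb
        maxSliceNum      int64 = 2048
    )

    // pickSliceSize returns the smallest slice size, stepping up from the
    // default, that lets maxSliceNum slices cover the whole file.
    func pickSliceSize(filesize, maxSliceSize int64) int64 {
        for size := defaultSliceSize; size <= maxSliceSize; size += sliceStep {
            if filesize <= maxSliceNum*size {
                return size
            }
        }
        return maxSliceSize
    }

For a 10GiB file under the 16MB VIP cap this returns 5MB, since 2048 × 5MB is exactly 10GiB; without low-bandwidth mode the same upload would use full 16MB slices.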
// func encodeURIComponent(str string) string {

View File

@ -13,6 +13,8 @@ import (
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@ -28,8 +30,9 @@ type BaiduPhoto struct {
Addition
// AccessToken string
Uk int64
root model.Obj
Uk int64
bdstoken string
root model.Obj
uploadThread int
}
@ -73,6 +76,10 @@ func (d *BaiduPhoto) Init(ctx context.Context) error {
if err != nil {
return err
}
d.bdstoken, err = d.getBDStoken()
if err != nil {
return err
}
d.Uk, err = strconv.ParseInt(info.YouaID, 10, 64)
return err
}
@ -296,6 +303,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
_, err = d.Post(FILE_API_URL_V1+"/precreate", func(r *resty.Request) {
r.SetContext(ctx)
r.SetFormData(params)
r.SetQueryParam("bdstoken", d.bdstoken)
}, &precreateResp)
if err != nil {
return nil, err
@ -308,6 +316,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@ -319,17 +328,22 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadParams := map[string]string{
"method": "upload",
"path": params["path"],
"partseq": fmt.Sprint(partseq),
"uploadid": precreateResp.UploadID,
"app_id": "16051585",
}
_, err = d.Post("https://c3.pcs.baidu.com/rest/2.0/pcs/superfile2", func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(uploadParams)
r.SetFileReader("file", stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
r.SetFileReader("file", stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, byteSize)))
}, nil)
if err != nil {
return err
@ -352,6 +366,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
_, err = d.Post(FILE_API_URL_V1+"/create", func(r *resty.Request) {
r.SetContext(ctx)
r.SetFormData(params)
r.SetQueryParam("bdstoken", d.bdstoken)
}, &precreateResp)
if err != nil {
return nil, err

View File

@ -476,6 +476,21 @@ func (d *BaiduPhoto) uInfo() (*UInfo, error) {
return &info, nil
}
func (d *BaiduPhoto) getBDStoken() (string, error) {
var info struct {
Result struct {
Bdstoken string `json:"bdstoken"`
Token string `json:"token"`
Uk int64 `json:"uk"`
} `json:"result"`
}
_, err := d.Get("https://pan.baidu.com/api/gettemplatevariable?fields=[%22bdstoken%22,%22token%22,%22uk%22]", nil, &info)
if err != nil {
return "", err
}
return info.Result.Bdstoken, nil
}
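(The %22 escapes in the query string decode to double quotes, so the request asks gettemplatevariable for the JSON field list ["bdstoken","token","uk"]; only bdstoken is kept, to accompany the precreate and create calls above.)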
func DecryptMd5(encryptMd5 string) string {
if _, err := hex.DecodeString(encryptMd5); err == nil {
return encryptMd5

View File

@ -6,6 +6,7 @@ import (
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/net"
"github.com/go-resty/resty/v2"
)
@ -26,7 +27,7 @@ func InitClient() {
NoRedirectClient.SetHeader("user-agent", UserAgent)
RestyClient = NewRestyClient()
HttpClient = NewHttpClient()
HttpClient = net.NewHttpClient()
}
func NewRestyClient() *resty.Client {
@ -38,13 +39,3 @@ func NewRestyClient() *resty.Client {
SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
return client
}
func NewHttpClient() *http.Client {
return &http.Client{
Timeout: time.Hour * 48,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify},
},
}
}

View File

@ -215,7 +215,7 @@ func (d *ChaoXing) Remove(ctx context.Context, obj model.Obj) error {
return nil
}
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
var resp UploadDataRsp
_, err := d.request("https://noteyd.chaoxing.com/pc/files/getUploadConfig", http.MethodGet, func(req *resty.Request) {
}, &resp)
@ -227,11 +227,11 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
}
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
filePart, err := writer.CreateFormFile("file", stream.GetName())
filePart, err := writer.CreateFormFile("file", file.GetName())
if err != nil {
return err
}
_, err = utils.CopyWithBuffer(filePart, stream)
_, err = utils.CopyWithBuffer(filePart, file)
if err != nil {
return err
}
@ -248,7 +248,14 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
if err != nil {
return err
}
req, err := http.NewRequest("POST", "https://pan-yz.chaoxing.com/upload", body)
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: body,
Size: int64(body.Len()),
},
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, "POST", "https://pan-yz.chaoxing.com/upload", r)
if err != nil {
return err
}
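The rewritten ChaoXing Put wraps the assembled multipart body before sending it: judging by the wrapper names, SimpleReaderWithSize supplies the total length (a bytes.Buffer is a plain reader, not a FileStreamer), ReaderUpdatingProgress turns bytes read into percentage callbacks, and NewLimitedUploadStream applies the global upload rate limit. Switching to http.NewRequestWithContext also makes the upload cancellable.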

View File

@ -5,7 +5,6 @@ import (
"io"
"net/http"
"path"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
@ -147,7 +146,7 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
"size": stream.GetSize(),
"name": stream.GetName(),
"policy_id": r.Policy.Id,
"last_modified": stream.ModTime().Unix(),
"last_modified": stream.ModTime().UnixMilli(),
}
// fetch the upload session info
@ -163,42 +162,18 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
switch r.Policy.Type {
case "onedrive":
err = d.upOneDrive(ctx, stream, u, up)
case "s3":
err = d.upS3(ctx, stream, u, up)
case "remote": // 从机存储
err = d.upRemote(ctx, stream, u, up)
case "local": // 本机存储
var chunkSize = u.ChunkSize
var buf []byte
var chunk int
for {
var n int
buf = make([]byte, chunkSize)
n, err = io.ReadAtLeast(stream, buf, chunkSize)
if err != nil && err != io.ErrUnexpectedEOF {
if err == io.EOF {
return nil
}
return err
}
if n == 0 {
break
}
buf = buf[:n]
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetHeader("Content-Length", strconv.Itoa(n))
req.SetBody(buf)
}, nil)
if err != nil {
break
}
chunk++
}
err = d.upLocal(ctx, stream, u, up)
default:
err = errs.NotImplement
}
if err != nil {
// delete the failed upload session
err = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
_ = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
return err
}
return nil

View File

@ -21,11 +21,12 @@ type Policy struct {
}
type UploadInfo struct {
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"`
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"` // local
CompleteURL string `json:"completeURL,omitempty"` // s3
}
type DirectoryResp struct {

View File

@ -27,17 +27,20 @@ import (
const loginPath = "/user/session"
func (d *Cloudreve) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
u := d.Address + "/api/v3" + path
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
}
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "application/json, text/plain, */*",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
var r Resp
@ -100,7 +103,7 @@ func (d *Cloudreve) login() error {
if err == nil {
break
}
if err != nil && err.Error() != "CAPTCHA not match." {
if err.Error() != "CAPTCHA not match." {
break
}
}
@ -161,15 +164,11 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
if !d.Addition.EnableThumbAndFolderSize {
return model.Thumbnail{}, nil
}
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
}
req := base.NoRedirectClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
resp, err := req.Execute(http.MethodGet, d.Address+"/api/v3/file/thumb/"+file.Id)
if err != nil {
@ -180,6 +179,43 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
}, nil
}
func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-Local] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetHeader("User-Agent", d.getUA())
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
}, nil)
if err != nil {
break
}
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
return nil
}
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
@ -202,19 +238,22 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk), bytes.NewBuffer(byteData))
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
@ -241,13 +280,15 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, bytes.NewBuffer(byteData))
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
@ -256,10 +297,10 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
if res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200 {
data, _ := io.ReadAll(res.Body)
res.Body.Close()
_ = res.Body.Close()
return errors.New(string(data))
}
res.Body.Close()
_ = res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
}
// send the callback request after a successful upload
@ -271,3 +312,82 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
}
return nil
}
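One detail in the two uploaders above: for outgoing requests Go's net/http ignores a Content-Length value set in the header map and uses req.ContentLength instead, so assigning the struct field (and leaving the old header write only as a comment) is the reliable way to send a sized chunk.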
func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-S3] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", u.UploadURLs[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
etags = append(etags, res.Header.Get("ETag"))
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
// s3LikeFinishUpload
// https://github.com/cloudreve/frontend/blob/b485bf297974cbe4834d2e8e744ae7b7e5b2ad39/src/component/Uploader/core/api/index.ts#L204-L252
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts at 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// send the callback request after a successful upload
err = d.request(http.MethodGet, "/callback/s3/"+u.SessionID, nil, nil)
if err != nil {
return err
}
return nil
}
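For two parts with ETags e1 and e2, the completion body assembled above is <CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>e1</ETag></Part><Part><PartNumber>2</PartNumber><ETag>e2</ETag></Part></CompleteMultipartUpload> — the standard S3 CompleteMultipartUpload payload, which is why part numbers start at 1 while the chunk index starts at 0.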

View File

@ -13,6 +13,7 @@ import (
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
@ -160,7 +161,11 @@ func (d *Crypt) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
// discarding hash as it's encrypted
}
if d.Thumbnail && thumb == "" {
thumb = utils.EncodePath(common.GetApiUrl(nil)+stdpath.Join("/d", args.ReqPath, ".thumbnails", name+".webp"), true)
thumbPath := stdpath.Join(args.ReqPath, ".thumbnails", name+".webp")
thumb = fmt.Sprintf("%s/d%s?sign=%s",
common.GetApiUrl(common.GetHttpReq(ctx)),
utils.EncodePath(thumbPath, true),
sign.Sign(thumbPath))
}
if !ok && !d.Thumbnail {
result = append(result, &objRes)
@ -258,19 +263,13 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
}
rrc := remoteLink.RangeReadCloser
if len(remoteLink.URL) > 0 {
rangedRemoteLink := &model.Link{
URL: remoteLink.URL,
Header: remoteLink.Header,
}
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, rangedRemoteLink)
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, remoteLink)
if err != nil {
return nil, err
}
rrc = converted
}
if rrc != nil {
//remoteRangeReader, err :=
remoteReader, err := rrc.RangeRead(ctx, http_range.Range{Start: underlyingOffset, Length: length})
remoteClosers.AddClosers(rrc.GetClosers())
if err != nil {
@ -283,7 +282,6 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
if err != nil {
return nil, err
}
//remoteClosers.Add(remoteLink.MFile)
//keep reuse same MFile and close at last.
remoteClosers.Add(remoteLink.MFile)
return io.NopCloser(remoteLink.MFile), nil
@ -302,7 +300,6 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers}
resultLink := &model.Link{
Header: remoteLink.Header,
RangeReadCloser: resultRangeReadCloser,
Expiration: remoteLink.Expiration,
}
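Two Crypt fixes land here: thumbnail links are now built with a sign.Sign signature over the thumbnail path, so the generated /d URL passes AList's link verification rather than being a bare encoded path, and Link feeds remoteLink directly to stream.GetRangeReadCloserFromLink instead of copying it into a throwaway model.Link first.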

drivers/doubao/driver.go Normal file (174 lines)
View File

@ -0,0 +1,174 @@
package doubao
import (
"context"
"errors"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Doubao struct {
model.Storage
Addition
}
func (d *Doubao) Config() driver.Config {
return config
}
func (d *Doubao) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Doubao) Init(ctx context.Context) error {
// TODO login / refresh token
//op.MustSaveDriverStorage(d)
return nil
}
func (d *Doubao) Drop(ctx context.Context) error {
return nil
}
func (d *Doubao) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var files []model.Obj
var r NodeInfoResp
_, err := d.request("/samantha/aispace/node_info", "POST", func(req *resty.Request) {
req.SetBody(base.Json{
"node_id": dir.GetID(),
"need_full_path": false,
})
}, &r)
if err != nil {
return nil, err
}
for _, child := range r.Data.Children {
files = append(files, &Object{
Object: model.Object{
ID: child.ID,
Path: child.ParentID,
Name: child.Name,
Size: child.Size,
Modified: time.Unix(child.UpdateTime, 0),
Ctime: time.Unix(child.CreateTime, 0),
IsFolder: child.NodeType == 1,
},
Key: child.Key,
})
}
return files, nil
}
func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if u, ok := file.(*Object); ok {
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", "POST", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": "file",
})
}, &r)
if err != nil {
return nil, err
}
return &model.Link{
URL: r.Data.FileUrls[0].MainURL,
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *Doubao) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", "POST", func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": uuid.New().String(),
"name": dirName,
"parent_id": parentDir.GetID(),
"node_type": 1,
},
},
})
}, &r)
return err
}
func (d *Doubao) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/move_node", "POST", func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{"id": srcObj.GetID()},
},
"current_parent_id": srcObj.GetPath(),
"target_parent_id": dstDir.GetID(),
})
}, &r)
return err
}
func (d *Doubao) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
var r BaseResp
_, err := d.request("/samantha/aispace/rename_node", "POST", func(req *resty.Request) {
req.SetBody(base.Json{
"node_id": srcObj.GetID(),
"node_name": newName,
})
}, &r)
return err
}
func (d *Doubao) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *Doubao) Remove(ctx context.Context, obj model.Obj) error {
var r BaseResp
_, err := d.request("/samantha/aispace/delete_node", "POST", func(req *resty.Request) {
req.SetBody(base.Json{"node_list": []base.Json{{"id": obj.GetID()}}})
}, &r)
return err
}
func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// TODO upload file, optional
return nil, errs.NotImplement
}
func (d *Doubao) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Doubao) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Doubao)(nil)
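The closing var _ driver.Driver = (*Doubao)(nil) line is a compile-time assertion: the build breaks if *Doubao ever stops satisfying driver.Driver, which is useful while Put, Copy, and the archive methods are still errs.NotImplement stubs.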

drivers/doubao/meta.go Normal file (34 lines)
View File

@ -0,0 +1,34 @@
package doubao
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
// driver.RootPath
driver.RootID
// define other
Cookie string `json:"cookie" type:"text"`
}
var config = driver.Config{
Name: "Doubao",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Doubao{}
})
}

drivers/doubao/types.go Normal file (64 lines)
View File

@ -0,0 +1,64 @@
package doubao
import "github.com/alist-org/alist/v3/internal/model"
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoResp struct {
BaseResp
Data struct {
NodeInfo NodeInfo `json:"node_info"`
Children []NodeInfo `json:"children"`
NextCursor string `json:"next_cursor"`
HasMore bool `json:"has_more"`
} `json:"data"`
}
type NodeInfo struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type UploadNodeResp struct {
BaseResp
Data struct {
NodeList []struct {
LocalID string `json:"local_id"`
ID string `json:"id"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
} `json:"node_list"`
} `json:"data"`
}
type Object struct {
model.Object
Key string
}

drivers/doubao/util.go Normal file (38 lines)
View File

@ -0,0 +1,38 @@
package doubao
import (
"errors"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/pkg/utils"
log "github.com/sirupsen/logrus"
)
// do others that not defined in Driver interface
func (d *Doubao) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
url := "https://www.doubao.com" + path
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
var r BaseResp
req.SetResult(&r)
res, err := req.Execute(method, url)
log.Debugln(res.String())
if err != nil {
return nil, err
}
// check the business status code before the HTTP status code
if r.Code != 0 {
return res.Body(), errors.New(r.Msg)
}
if resp != nil {
err = utils.Json.Unmarshal(res.Body(), resp)
if err != nil {
return nil, err
}
}
return res.Body(), nil
}

View File

@ -191,7 +191,7 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
}
url := d.contentBase + "/2/files/upload_session/append_v2"
reader := io.LimitReader(stream, PartSize)
reader := driver.NewLimitedUploadStream(ctx, io.LimitReader(stream, PartSize))
req, err := http.NewRequest(http.MethodPost, url, reader)
if err != nil {
log.Errorf("failed to update file when append to upload session, err: %+v", err)
@ -219,13 +219,8 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
return err
}
_ = res.Body.Close()
if count > 0 {
up(float64(i+1) * 100 / float64(count))
}
up(float64(i+1) * 100 / float64(count))
offset += byteSize
}
// 3.finish
toPath := dstDir.GetPath() + "/" + stream.GetName()

View File

@ -3,6 +3,7 @@ package febbox
import (
"encoding/json"
"errors"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op"
"github.com/go-resty/resty/v2"
@ -135,6 +136,9 @@ func (d *FebBox) getDownloadLink(id string, ip string) (string, error) {
if err = json.Unmarshal(res, &fileDownloadResp); err != nil {
return "", err
}
if len(fileDownloadResp.Data) == 0 {
return "", fmt.Errorf("can not get download link, code:%d, msg:%s", fileDownloadResp.Code, fileDownloadResp.Msg)
}
return fileDownloadResp.Data[0].DownloadURL, nil
}
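The added length check matters because the old code indexed Data[0] unconditionally: an empty Data array — for example when the API rejects the request — would panic instead of returning the response's code and msg as an error.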

View File

@ -114,13 +114,15 @@ func (d *FTP) Remove(ctx context.Context, obj model.Obj) error {
}
}
func (d *FTP) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
func (d *FTP) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if err := d.login(); err != nil {
return err
}
// TODO: support cancel
path := stdpath.Join(dstDir.GetPath(), stream.GetName())
return d.conn.Stor(encode(path, d.Encoding), stream)
path := stdpath.Join(dstDir.GetPath(), s.GetName())
return d.conn.Stor(encode(path, d.Encoding), driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
}))
}
var _ driver.Driver = (*FTP)(nil)

drivers/github/driver.go Normal file (975 lines)
View File

@ -0,0 +1,975 @@
package github
import (
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
stdpath "path"
"strings"
"sync"
"text/template"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
type Github struct {
model.Storage
Addition
client *resty.Client
mkdirMsgTmpl *template.Template
deleteMsgTmpl *template.Template
putMsgTmpl *template.Template
renameMsgTmpl *template.Template
copyMsgTmpl *template.Template
moveMsgTmpl *template.Template
isOnBranch bool
commitMutex sync.Mutex
pgpEntity *openpgp.Entity
}
func (d *Github) Config() driver.Config {
return config
}
func (d *Github) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Github) Init(ctx context.Context) error {
d.RootFolderPath = utils.FixAndCleanPath(d.RootFolderPath)
if d.CommitterName != "" && d.CommitterEmail == "" {
return errors.New("committer email is required")
}
if d.CommitterName == "" && d.CommitterEmail != "" {
return errors.New("committer name is required")
}
if d.AuthorName != "" && d.AuthorEmail == "" {
return errors.New("author email is required")
}
if d.AuthorName == "" && d.AuthorEmail != "" {
return errors.New("author name is required")
}
var err error
d.mkdirMsgTmpl, err = template.New("mkdirCommitMsgTemplate").Parse(d.MkdirCommitMsg)
if err != nil {
return err
}
d.deleteMsgTmpl, err = template.New("deleteCommitMsgTemplate").Parse(d.DeleteCommitMsg)
if err != nil {
return err
}
d.putMsgTmpl, err = template.New("putCommitMsgTemplate").Parse(d.PutCommitMsg)
if err != nil {
return err
}
d.renameMsgTmpl, err = template.New("renameCommitMsgTemplate").Parse(d.RenameCommitMsg)
if err != nil {
return err
}
d.copyMsgTmpl, err = template.New("copyCommitMsgTemplate").Parse(d.CopyCommitMsg)
if err != nil {
return err
}
d.moveMsgTmpl, err = template.New("moveCommitMsgTemplate").Parse(d.MoveCommitMsg)
if err != nil {
return err
}
d.client = base.NewRestyClient().
SetHeader("Accept", "application/vnd.github.object+json").
SetHeader("X-GitHub-Api-Version", "2022-11-28").
SetLogger(log.StandardLogger()).
SetDebug(false)
token := strings.TrimSpace(d.Token)
if token != "" {
d.client = d.client.SetHeader("Authorization", "Bearer "+token)
}
if d.Ref == "" {
repo, err := d.getRepo()
if err != nil {
return err
}
d.Ref = repo.DefaultBranch
d.isOnBranch = true
} else {
_, err = d.getBranchHead()
d.isOnBranch = err == nil
}
if d.GPGPrivateKey != "" {
if d.CommitterName == "" || d.AuthorName == "" {
user, e := d.getAuthenticatedUser()
if e != nil {
return e
}
if d.CommitterName == "" {
d.CommitterName = user.Name
d.CommitterEmail = user.Email
}
if d.AuthorName == "" {
d.AuthorName = user.Name
d.AuthorEmail = user.Email
}
}
d.pgpEntity, err = loadPrivateKey(d.GPGPrivateKey, d.GPGKeyPassphrase)
if err != nil {
return err
}
}
return nil
}
func (d *Github) Drop(ctx context.Context) error {
return nil
}
func (d *Github) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
obj, err := d.get(dir.GetPath())
if err != nil {
return nil, err
}
if obj.Entries == nil {
return nil, errs.NotFolder
}
if len(obj.Entries) >= 1000 {
tree, err := d.getTree(obj.Sha)
if err != nil {
return nil, err
}
if tree.Truncated {
return nil, fmt.Errorf("tree %s is truncated", dir.GetPath())
}
ret := make([]model.Obj, 0, len(tree.Trees))
for _, t := range tree.Trees {
if t.Path != ".gitkeep" {
ret = append(ret, t.toModelObj())
}
}
return ret, nil
} else {
ret := make([]model.Obj, 0, len(obj.Entries))
for _, entry := range obj.Entries {
if entry.Name != ".gitkeep" {
ret = append(ret, entry.toModelObj())
}
}
return ret, nil
}
}
func (d *Github) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
obj, err := d.get(file.GetPath())
if err != nil {
return nil, err
}
if obj.Type == "submodule" {
return nil, errors.New("cannot download a submodule")
}
url := obj.DownloadURL
ghProxy := strings.TrimSpace(d.Addition.GitHubProxy)
if ghProxy != "" {
url = strings.Replace(url, "https://raw.githubusercontent.com", ghProxy, 1)
}
return &model.Link{
URL: url,
}, nil
}
func (d *Github) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
parent, err := d.get(parentDir.GetPath())
if err != nil {
return err
}
if parent.Entries == nil {
return errs.NotFolder
}
subDirSha, err := d.newTree("", []interface{}{
map[string]string{
"path": ".gitkeep",
"mode": "100644",
"type": "blob",
"content": "",
},
})
if err != nil {
return err
}
newTree := make([]interface{}, 0, 2)
newTree = append(newTree, TreeObjReq{
Path: dirName,
Mode: "040000",
Type: "tree",
Sha: subDirSha,
})
if len(parent.Entries) == 1 && parent.Entries[0].Name == ".gitkeep" {
newTree = append(newTree, TreeObjReq{
Path: ".gitkeep",
Mode: "100644",
Type: "blob",
Sha: nil,
})
}
newSha, err := d.newTree(parent.Sha, newTree)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(parentDir.GetPath(), parent.Sha, newSha, "/")
if err != nil {
return err
}
commitMessage, err := getMessage(d.mkdirMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: dirName,
ObjPath: stdpath.Join(parentDir.GetPath(), dirName),
ParentName: parentDir.GetName(),
ParentPath: parentDir.GetPath(),
}, "mkdir")
if err != nil {
return err
}
return d.commit(commitMessage, rootSha)
}
func (d *Github) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
if strings.HasPrefix(dstDir.GetPath(), srcObj.GetPath()) {
return errors.New("cannot move parent dir to child")
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
var rootSha string
if strings.HasPrefix(dstDir.GetPath(), stdpath.Dir(srcObj.GetPath())) { // /aa/1 -> /aa/bb/
dstOldSha, dstNewSha, ancestorOldSha, srcParentTree, err := d.copyWithoutRenewTree(srcObj, dstDir)
if err != nil {
return err
}
srcParentPath := stdpath.Dir(srcObj.GetPath())
dstRest := dstDir.GetPath()[len(srcParentPath):]
if dstRest[0] == '/' {
dstRest = dstRest[1:]
}
dstNextName, _, _ := strings.Cut(dstRest, "/")
dstNextPath := stdpath.Join(srcParentPath, dstNextName)
dstNextTreeSha, err := d.renewParentTrees(dstDir.GetPath(), dstOldSha, dstNewSha, dstNextPath)
if err != nil {
return err
}
var delSrc, dstNextTree *TreeObjReq = nil, nil
for _, t := range srcParentTree.Trees {
if t.Path == dstNextName {
dstNextTree = &t.TreeObjReq
dstNextTree.Sha = dstNextTreeSha
}
if t.Path == srcObj.GetName() {
delSrc = &t.TreeObjReq
delSrc.Sha = nil
}
if delSrc != nil && dstNextTree != nil {
break
}
}
if delSrc == nil || dstNextTree == nil {
return errs.ObjectNotFound
}
ancestorNewSha, err := d.newTree(ancestorOldSha, []interface{}{*delSrc, *dstNextTree})
if err != nil {
return err
}
rootSha, err = d.renewParentTrees(srcParentPath, ancestorOldSha, ancestorNewSha, "/")
if err != nil {
return err
}
} else if strings.HasPrefix(srcObj.GetPath(), dstDir.GetPath()) { // /aa/bb/1 -> /aa/
srcParentPath := stdpath.Dir(srcObj.GetPath())
srcParentTree, srcParentOldSha, err := d.getTreeDirectly(srcParentPath)
if err != nil {
return err
}
var src *TreeObjReq = nil
for _, t := range srcParentTree.Trees {
if t.Path == srcObj.GetName() {
if t.Type == "commit" {
return errors.New("cannot move a submodule")
}
src = &t.TreeObjReq
break
}
}
if src == nil {
return errs.ObjectNotFound
}
delSrc := *src
delSrc.Sha = nil
delSrcTree := make([]interface{}, 0, 2)
delSrcTree = append(delSrcTree, delSrc)
if len(srcParentTree.Trees) == 1 {
delSrcTree = append(delSrcTree, map[string]string{
"path": ".gitkeep",
"mode": "100644",
"type": "blob",
"content": "",
})
}
srcParentNewSha, err := d.newTree(srcParentOldSha, delSrcTree)
if err != nil {
return err
}
srcRest := srcObj.GetPath()[len(dstDir.GetPath()):]
if srcRest[0] == '/' {
srcRest = srcRest[1:]
}
srcNextName, _, ok := strings.Cut(srcRest, "/")
if !ok { // /aa/1 -> /aa/
return errors.New("cannot move in place")
}
srcNextPath := stdpath.Join(dstDir.GetPath(), srcNextName)
srcNextTreeSha, err := d.renewParentTrees(srcParentPath, srcParentOldSha, srcParentNewSha, srcNextPath)
if err != nil {
return err
}
ancestorTree, ancestorOldSha, err := d.getTreeDirectly(dstDir.GetPath())
if err != nil {
return err
}
var srcNextTree *TreeObjReq = nil
for _, t := range ancestorTree.Trees {
if t.Path == srcNextName {
srcNextTree = &t.TreeObjReq
srcNextTree.Sha = srcNextTreeSha
break
}
}
if srcNextTree == nil {
return errs.ObjectNotFound
}
ancestorNewSha, err := d.newTree(ancestorOldSha, []interface{}{*srcNextTree, *src})
if err != nil {
return err
}
rootSha, err = d.renewParentTrees(dstDir.GetPath(), ancestorOldSha, ancestorNewSha, "/")
if err != nil {
return err
}
} else { // /aa/1 -> /bb/
// do copy
dstOldSha, dstNewSha, srcParentOldSha, srcParentTree, err := d.copyWithoutRenewTree(srcObj, dstDir)
if err != nil {
return err
}
// delete src object and create new tree
var srcNewTree *TreeObjReq = nil
for _, t := range srcParentTree.Trees {
if t.Path == srcObj.GetName() {
srcNewTree = &t.TreeObjReq
srcNewTree.Sha = nil
break
}
}
if srcNewTree == nil {
return errs.ObjectNotFound
}
delSrcTree := make([]interface{}, 0, 2)
delSrcTree = append(delSrcTree, *srcNewTree)
if len(srcParentTree.Trees) == 1 {
delSrcTree = append(delSrcTree, map[string]string{
"path": ".gitkeep",
"mode": "100644",
"type": "blob",
"content": "",
})
}
srcParentNewSha, err := d.newTree(srcParentOldSha, delSrcTree)
if err != nil {
return err
}
// renew but the common ancestor of srcPath and dstPath
ancestor, srcChildName, dstChildName, _, _ := getPathCommonAncestor(srcObj.GetPath(), dstDir.GetPath())
dstNextTreeSha, err := d.renewParentTrees(dstDir.GetPath(), dstOldSha, dstNewSha, stdpath.Join(ancestor, dstChildName))
if err != nil {
return err
}
srcNextTreeSha, err := d.renewParentTrees(stdpath.Dir(srcObj.GetPath()), srcParentOldSha, srcParentNewSha, stdpath.Join(ancestor, srcChildName))
if err != nil {
return err
}
// renew the tree of the last common ancestor
ancestorTree, ancestorOldSha, err := d.getTreeDirectly(ancestor)
if err != nil {
return err
}
newTree := make([]interface{}, 2)
srcBind := false
dstBind := false
for _, t := range ancestorTree.Trees {
if t.Path == srcChildName {
t.Sha = srcNextTreeSha
newTree[0] = t.TreeObjReq
srcBind = true
}
if t.Path == dstChildName {
t.Sha = dstNextTreeSha
newTree[1] = t.TreeObjReq
dstBind = true
}
if srcBind && dstBind {
break
}
}
if !srcBind || !dstBind {
return errs.ObjectNotFound
}
ancestorNewSha, err := d.newTree(ancestorOldSha, newTree)
if err != nil {
return err
}
// renew until root
rootSha, err = d.renewParentTrees(ancestor, ancestorOldSha, ancestorNewSha, "/")
if err != nil {
return err
}
}
// commit
message, err := getMessage(d.moveMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: srcObj.GetName(),
ObjPath: srcObj.GetPath(),
ParentName: stdpath.Base(stdpath.Dir(srcObj.GetPath())),
ParentPath: stdpath.Dir(srcObj.GetPath()),
TargetName: stdpath.Base(dstDir.GetPath()),
TargetPath: dstDir.GetPath(),
}, "move")
if err != nil {
return err
}
return d.commit(message, rootSha)
}
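Move is the most involved operation because Git trees are content-addressed: changing any entry yields a new tree SHA, which invalidates every ancestor tree up to the root. The three branches cover moving down into a descendant directory, up into an ancestor, and sideways across a common ancestor; each renews the two affected subtrees first and then rewrites the shared ancestors once, so a single commit records both the deletion and the insertion.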
func (d *Github) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
parentDir := stdpath.Dir(srcObj.GetPath())
tree, _, err := d.getTreeDirectly(parentDir)
if err != nil {
return err
}
newTree := make([]interface{}, 2)
operated := false
for _, t := range tree.Trees {
if t.Path == srcObj.GetName() {
if t.Type == "commit" {
return errors.New("cannot rename a submodule")
}
delCopy := t.TreeObjReq
delCopy.Sha = nil
newTree[0] = delCopy
t.Path = newName
newTree[1] = t.TreeObjReq
operated = true
break
}
}
if !operated {
return errs.ObjectNotFound
}
newSha, err := d.newTree(tree.Sha, newTree)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(parentDir, tree.Sha, newSha, "/")
if err != nil {
return err
}
message, err := getMessage(d.renameMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: srcObj.GetName(),
ObjPath: srcObj.GetPath(),
ParentName: stdpath.Base(parentDir),
ParentPath: parentDir,
TargetName: newName,
TargetPath: stdpath.Join(parentDir, newName),
}, "rename")
if err != nil {
return err
}
return d.commit(message, rootSha)
}
func (d *Github) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
if strings.HasPrefix(dstDir.GetPath(), srcObj.GetPath()) {
return errors.New("cannot copy parent dir to child")
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
dstSha, newSha, _, _, err := d.copyWithoutRenewTree(srcObj, dstDir)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(dstDir.GetPath(), dstSha, newSha, "/")
if err != nil {
return err
}
message, err := getMessage(d.copyMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: srcObj.GetName(),
ObjPath: srcObj.GetPath(),
ParentName: stdpath.Base(stdpath.Dir(srcObj.GetPath())),
ParentPath: stdpath.Dir(srcObj.GetPath()),
TargetName: stdpath.Base(dstDir.GetPath()),
TargetPath: dstDir.GetPath(),
}, "copy")
if err != nil {
return err
}
return d.commit(message, rootSha)
}
func (d *Github) Remove(ctx context.Context, obj model.Obj) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
parentDir := stdpath.Dir(obj.GetPath())
tree, treeSha, err := d.getTreeDirectly(parentDir)
if err != nil {
return err
}
var del *TreeObjReq = nil
for _, t := range tree.Trees {
if t.Path == obj.GetName() {
if t.Type == "commit" {
return errors.New("cannot remove a submodule")
}
del = &t.TreeObjReq
del.Sha = nil
break
}
}
if del == nil {
return errs.ObjectNotFound
}
newTree := make([]interface{}, 0, 2)
newTree = append(newTree, *del)
if len(tree.Trees) == 1 { // completely emptying the repository will get a 404
newTree = append(newTree, map[string]string{
"path": ".gitkeep",
"mode": "100644",
"type": "blob",
"content": "",
})
}
newSha, err := d.newTree(treeSha, newTree)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(parentDir, treeSha, newSha, "/")
if err != nil {
return err
}
commitMessage, err := getMessage(d.deleteMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: obj.GetName(),
ObjPath: obj.GetPath(),
ParentName: stdpath.Base(parentDir),
ParentPath: parentDir,
}, "remove")
if err != nil {
return err
}
return d.commit(commitMessage, rootSha)
}
func (d *Github) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
if !d.isOnBranch {
return errors.New("cannot write to non-branch reference")
}
blob, err := d.putBlob(ctx, stream, up)
if err != nil {
return err
}
d.commitMutex.Lock()
defer d.commitMutex.Unlock()
parent, err := d.get(dstDir.GetPath())
if err != nil {
return err
}
if parent.Entries == nil {
return errs.NotFolder
}
newTree := make([]interface{}, 0, 2)
newTree = append(newTree, TreeObjReq{
Path: stream.GetName(),
Mode: "100644",
Type: "blob",
Sha: blob,
})
if len(parent.Entries) == 1 && parent.Entries[0].Name == ".gitkeep" {
newTree = append(newTree, TreeObjReq{
Path: ".gitkeep",
Mode: "100644",
Type: "blob",
Sha: nil,
})
}
newSha, err := d.newTree(parent.Sha, newTree)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(dstDir.GetPath(), parent.Sha, newSha, "/")
if err != nil {
return err
}
commitMessage, err := getMessage(d.putMsgTmpl, &MessageTemplateVars{
UserName: getUsername(ctx),
ObjName: stream.GetName(),
ObjPath: stdpath.Join(dstDir.GetPath(), stream.GetName()),
ParentName: dstDir.GetName(),
ParentPath: dstDir.GetPath(),
}, "upload")
if err != nil {
return err
}
return d.commit(commitMessage, rootSha)
}
var _ driver.Driver = (*Github)(nil)
func (d *Github) getContentApiUrl(path string) string {
path = utils.FixAndCleanPath(path)
return fmt.Sprintf("https://api.github.com/repos/%s/%s/contents%s", d.Owner, d.Repo, path)
}
func (d *Github) get(path string) (*Object, error) {
res, err := d.client.R().SetQueryParam("ref", d.Ref).Get(d.getContentApiUrl(path))
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
return nil, toErr(res)
}
var resp Object
err = utils.Json.Unmarshal(res.Body(), &resp)
return &resp, err
}
func (d *Github) putBlob(ctx context.Context, s model.FileStreamer, up driver.UpdateProgress) (string, error) {
beforeContent := "{\"encoding\":\"base64\",\"content\":\""
afterContent := "\"}"
length := int64(len(beforeContent)) + calculateBase64Length(s.GetSize()) + int64(len(afterContent))
beforeContentReader := strings.NewReader(beforeContent)
contentReader, contentWriter := io.Pipe()
go func() {
encoder := base64.NewEncoder(base64.StdEncoding, contentWriter)
if _, err := utils.CopyWithBuffer(encoder, s); err != nil {
_ = contentWriter.CloseWithError(err)
return
}
_ = encoder.Close()
_ = contentWriter.Close()
}()
afterContentReader := strings.NewReader(afterContent)
req, err := http.NewRequestWithContext(ctx, http.MethodPost,
fmt.Sprintf("https://api.github.com/repos/%s/%s/git/blobs", d.Owner, d.Repo),
driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: io.MultiReader(beforeContentReader, contentReader, afterContentReader),
Size: length,
},
UpdateProgress: up,
}))
if err != nil {
return "", err
}
req.Header.Set("Accept", "application/vnd.github+json")
req.Header.Set("X-GitHub-Api-Version", "2022-11-28")
token := strings.TrimSpace(d.Token)
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
req.ContentLength = length
res, err := base.HttpClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
resBody, err := io.ReadAll(res.Body)
if err != nil {
return "", err
}
if res.StatusCode != 201 {
var errMsg ErrResp
if err = utils.Json.Unmarshal(resBody, &errMsg); err != nil {
return "", errors.New(res.Status)
} else {
return "", fmt.Errorf("%s: %s", res.Status, errMsg.Message)
}
}
var resp PutBlobResp
if err = utils.Json.Unmarshal(resBody, &resp); err != nil {
return "", err
}
return resp.Sha, nil
}
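putBlob streams the upload instead of buffering it: an io.Pipe feeds a base64 encoder in a goroutine, and io.MultiReader frames the encoded bytes with the literal JSON prefix and suffix, so the total length can be computed up front via calculateBase64Length without ever holding the file in memory. A minimal sketch of the same trick, assuming file is any io.Reader:

    pr, pw := io.Pipe()
    go func() {
        enc := base64.NewEncoder(base64.StdEncoding, pw)
        _, err := io.Copy(enc, file)
        _ = enc.Close()            // flush the final base64 padding block
        _ = pw.CloseWithError(err) // propagate read errors to the consumer
    }()
    body := io.MultiReader(
        strings.NewReader(`{"encoding":"base64","content":"`),
        pr,
        strings.NewReader(`"}`),
    )
    // body now yields valid JSON as it is read; pass it to an HTTP request.

The encoder must be closed before the pipe writer so the padding is flushed; closing the writer with the copy error lets the reading side see why the stream ended early.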
func (d *Github) renewParentTrees(path, prevSha, curSha, until string) (string, error) {
for path != until {
path = stdpath.Dir(path)
tree, sha, err := d.getTreeDirectly(path)
if err != nil {
return "", err
}
var newTree *TreeObjReq = nil
for _, t := range tree.Trees {
if t.Sha == prevSha {
newTree = &t.TreeObjReq
newTree.Sha = curSha
break
}
}
if newTree == nil {
return "", errs.ObjectNotFound
}
curSha, err = d.newTree(sha, []interface{}{*newTree})
if err != nil {
return "", err
}
prevSha = sha
}
return curSha, nil
}
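renewParentTrees is the bottom-up half of that content-addressed update: starting from the modified directory it walks toward until (normally the root "/"), and at each level builds a replacement tree in which the child whose SHA equals prevSha is swapped for curSha, finally returning the new root SHA for the commit.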
func (d *Github) getTree(sha string) (*TreeResp, error) {
res, err := d.client.R().Get(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/trees/%s", d.Owner, d.Repo, sha))
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
return nil, toErr(res)
}
var resp TreeResp
if err = utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return nil, err
}
return &resp, nil
}
func (d *Github) getTreeDirectly(path string) (*TreeResp, string, error) {
p, err := d.get(path)
if err != nil {
return nil, "", err
}
if p.Entries == nil {
return nil, "", fmt.Errorf("%s is not a folder", path)
}
tree, err := d.getTree(p.Sha)
if err != nil {
return nil, "", err
}
if tree.Truncated {
return nil, "", fmt.Errorf("tree %s is truncated", path)
}
return tree, p.Sha, nil
}
func (d *Github) newTree(baseSha string, tree []interface{}) (string, error) {
body := &TreeReq{Trees: tree}
if baseSha != "" {
body.BaseTree = baseSha
}
res, err := d.client.R().SetBody(body).
Post(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/trees", d.Owner, d.Repo))
if err != nil {
return "", err
}
if res.StatusCode() != 201 {
return "", toErr(res)
}
var resp TreeResp
if err = utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return "", err
}
return resp.Sha, nil
}
func (d *Github) commit(message, treeSha string) error {
oldCommit, err := d.getBranchHead()
body := map[string]interface{}{
"message": message,
"tree": treeSha,
"parents": []string{oldCommit},
}
d.addCommitterAndAuthor(&body)
if d.pgpEntity != nil {
signature, e := signCommit(&body, d.pgpEntity)
if e != nil {
return e
}
body["signature"] = signature
}
res, err := d.client.R().SetBody(body).Post(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/commits", d.Owner, d.Repo))
if err != nil {
return err
}
if res.StatusCode() != 201 {
return toErr(res)
}
var resp CommitResp
if err = utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return err
}
// update branch head
res, err = d.client.R().
SetBody(&UpdateRefReq{
Sha: resp.Sha,
Force: false,
}).
Patch(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/refs/heads/%s", d.Owner, d.Repo, d.Ref))
if err != nil {
return err
}
if res.StatusCode() != 200 {
return toErr(res)
}
return nil
}
func (d *Github) getBranchHead() (string, error) {
res, err := d.client.R().Get(fmt.Sprintf("https://api.github.com/repos/%s/%s/branches/%s", d.Owner, d.Repo, d.Ref))
if err != nil {
return "", err
}
if res.StatusCode() != 200 {
return "", toErr(res)
}
var resp BranchResp
if err = utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return "", err
}
return resp.Commit.Sha, nil
}
func (d *Github) copyWithoutRenewTree(srcObj, dstDir model.Obj) (dstSha, newSha, srcParentSha string, srcParentTree *TreeResp, err error) {
dst, err := d.get(dstDir.GetPath())
if err != nil {
return "", "", "", nil, err
}
if dst.Entries == nil {
return "", "", "", nil, errs.NotFolder
}
dstSha = dst.Sha
srcParentPath := stdpath.Dir(srcObj.GetPath())
srcParentTree, srcParentSha, err = d.getTreeDirectly(srcParentPath)
if err != nil {
return "", "", "", nil, err
}
var src *TreeObjReq = nil
for _, t := range srcParentTree.Trees {
if t.Path == srcObj.GetName() {
if t.Type == "commit" {
return "", "", "", nil, errors.New("cannot copy a submodule")
}
src = &t.TreeObjReq
break
}
}
if src == nil {
return "", "", "", nil, errs.ObjectNotFound
}
newTree := make([]interface{}, 0, 2)
newTree = append(newTree, *src)
if len(dst.Entries) == 1 && dst.Entries[0].Name == ".gitkeep" {
newTree = append(newTree, TreeObjReq{
Path: ".gitkeep",
Mode: "100644",
Type: "blob",
Sha: nil,
})
}
newSha, err = d.newTree(dstSha, newTree)
if err != nil {
return "", "", "", nil, err
}
return dstSha, newSha, srcParentSha, srcParentTree, nil
}
func (d *Github) getRepo() (*RepoResp, error) {
res, err := d.client.R().Get(fmt.Sprintf("https://api.github.com/repos/%s/%s", d.Owner, d.Repo))
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
return nil, toErr(res)
}
var resp RepoResp
if err = utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return nil, err
}
return &resp, nil
}
func (d *Github) getAuthenticatedUser() (*UserResp, error) {
res, err := d.client.R().Get("https://api.github.com/user")
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
return nil, toErr(res)
}
resp := &UserResp{}
if err = utils.Json.Unmarshal(res.Body(), resp); err != nil {
return nil, err
}
return resp, nil
}
func (d *Github) addCommitterAndAuthor(m *map[string]interface{}) {
if d.CommitterName != "" {
committer := map[string]string{
"name": d.CommitterName,
"email": d.CommitterEmail,
}
(*m)["committer"] = committer
}
if d.AuthorName != "" {
author := map[string]string{
"name": d.AuthorName,
"email": d.AuthorEmail,
}
(*m)["author"] = author
}
}

drivers/github/meta.go Normal file (39 lines)
View File

@ -0,0 +1,39 @@
package github
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Token string `json:"token" type:"string" required:"true"`
Owner string `json:"owner" type:"string" required:"true"`
Repo string `json:"repo" type:"string" required:"true"`
Ref string `json:"ref" type:"string" help:"A branch, a tag or a commit SHA, main branch by default."`
GitHubProxy string `json:"gh_proxy" type:"string" help:"GitHub proxy, e.g. https://ghproxy.net/raw.githubusercontent.com or https://gh-proxy.com/raw.githubusercontent.com"`
GPGPrivateKey string `json:"gpg_private_key" type:"text"`
GPGKeyPassphrase string `json:"gpg_key_passphrase" type:"string"`
CommitterName string `json:"committer_name" type:"string"`
CommitterEmail string `json:"committer_email" type:"string"`
AuthorName string `json:"author_name" type:"string"`
AuthorEmail string `json:"author_email" type:"string"`
MkdirCommitMsg string `json:"mkdir_commit_message" type:"text" default:"{{.UserName}} mkdir {{.ObjPath}}"`
DeleteCommitMsg string `json:"delete_commit_message" type:"text" default:"{{.UserName}} remove {{.ObjPath}}"`
PutCommitMsg string `json:"put_commit_message" type:"text" default:"{{.UserName}} upload {{.ObjPath}}"`
RenameCommitMsg string `json:"rename_commit_message" type:"text" default:"{{.UserName}} rename {{.ObjPath}} to {{.TargetName}}"`
CopyCommitMsg string `json:"copy_commit_message" type:"text" default:"{{.UserName}} copy {{.ObjPath}} to {{.TargetPath}}"`
MoveCommitMsg string `json:"move_commit_message" type:"text" default:"{{.UserName}} move {{.ObjPath}} to {{.TargetPath}}"`
}
var config = driver.Config{
Name: "GitHub API",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Github{}
})
}

drivers/github/types.go (new file)
@@ -0,0 +1,107 @@
package github
import (
"github.com/alist-org/alist/v3/internal/model"
"time"
)
type Links struct {
Git string `json:"git"`
Html string `json:"html"`
Self string `json:"self"`
}
type Object struct {
Type string `json:"type"`
Encoding string `json:"encoding" required:"false"`
Size int64 `json:"size"`
Name string `json:"name"`
Path string `json:"path"`
Content string `json:"Content" required:"false"`
Sha string `json:"sha"`
URL string `json:"url"`
GitURL string `json:"git_url"`
HtmlURL string `json:"html_url"`
DownloadURL string `json:"download_url"`
Entries []Object `json:"entries" required:"false"`
Links Links `json:"_links"`
SubmoduleGitURL string `json:"submodule_git_url" required:"false"`
Target string `json:"target" required:"false"`
}
func (o *Object) toModelObj() *model.Object {
return &model.Object{
Name: o.Name,
Size: o.Size,
Modified: time.Unix(0, 0),
IsFolder: o.Type == "dir",
}
}
type PutBlobResp struct {
URL string `json:"url"`
Sha string `json:"sha"`
}
type ErrResp struct {
Message string `json:"message"`
DocumentationURL string `json:"documentation_url"`
Status string `json:"status"`
}
type TreeObjReq struct {
Path string `json:"path"`
Mode string `json:"mode"`
Type string `json:"type"`
Sha interface{} `json:"sha"`
}
type TreeObjResp struct {
TreeObjReq
Size int64 `json:"size" required:"false"`
URL string `json:"url"`
}
func (o *TreeObjResp) toModelObj() *model.Object {
return &model.Object{
Name: o.Path,
Size: o.Size,
Modified: time.Unix(0, 0),
IsFolder: o.Type == "tree",
}
}
type TreeResp struct {
Sha string `json:"sha"`
URL string `json:"url"`
Trees []TreeObjResp `json:"tree"`
Truncated bool `json:"truncated"`
}
type TreeReq struct {
BaseTree interface{} `json:"base_tree,omitempty"`
Trees []interface{} `json:"tree"`
}
type CommitResp struct {
Sha string `json:"sha"`
}
type BranchResp struct {
Name string `json:"name"`
Commit CommitResp `json:"commit"`
}
type UpdateRefReq struct {
Sha string `json:"sha"`
Force bool `json:"force"`
}
type RepoResp struct {
DefaultBranch string `json:"default_branch"`
}
type UserResp struct {
Name string `json:"name"`
Email string `json:"email"`
}

drivers/github/util.go (new file)
@@ -0,0 +1,167 @@
package github
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"strings"
"text/template"
"time"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/ProtonMail/go-crypto/openpgp/armor"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type MessageTemplateVars struct {
UserName string
ObjName string
ObjPath string
ParentName string
ParentPath string
TargetName string
TargetPath string
}
func getMessage(tmpl *template.Template, vars *MessageTemplateVars, defaultOpStr string) (string, error) {
sb := strings.Builder{}
if err := tmpl.Execute(&sb, vars); err != nil {
return fmt.Sprintf("%s %s %s", vars.UserName, defaultOpStr, vars.ObjPath), err
}
return sb.String(), nil
}
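// A quick sketch (not from this file) of how the commit-message templates declared in
// meta.go flow through getMessage; template text and values are illustrative:
//
//	tmpl := template.Must(template.New("msg").Parse("{{.UserName}} mkdir {{.ObjPath}}"))
//	msg, err := getMessage(tmpl, &MessageTemplateVars{UserName: "admin", ObjPath: "/docs/new"}, "mkdir")
//	// msg == "admin mkdir /docs/new", err == nil; if the template fails to execute,
//	// getMessage still returns the "<user> <op> <path>" fallback string plus the error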
func calculateBase64Length(inputLength int64) int64 {
return 4 * ((inputLength + 2) / 3)
}
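// quick check of the padding math: base64 emits four output characters per started
// three-byte group, so calculateBase64Length(3) == 4 and calculateBase64Length(5) == 8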
func toErr(res *resty.Response) error {
var errMsg ErrResp
if err := utils.Json.Unmarshal(res.Body(), &errMsg); err != nil {
return errors.New(res.Status())
} else {
return fmt.Errorf("%s: %s", res.Status(), errMsg.Message)
}
}
// Example input:
// a = /aaa/bbb/ccc
// b = /aaa/b11/ddd/ccc
//
// Output:
// ancestor = /aaa
// aChildName = bbb
// bChildName = b11
// aRest = bbb/ccc
// bRest = b11/ddd/ccc
func getPathCommonAncestor(a, b string) (ancestor, aChildName, bChildName, aRest, bRest string) {
a = utils.FixAndCleanPath(a)
b = utils.FixAndCleanPath(b)
idx := 1
for idx < len(a) && idx < len(b) {
if a[idx] != b[idx] {
break
}
idx++
}
aNextIdx := idx
for aNextIdx < len(a) {
if a[aNextIdx] == '/' {
break
}
aNextIdx++
}
bNextIdx := idx
for bNextIdx < len(b) {
if b[bNextIdx] == '/' {
break
}
bNextIdx++
}
for idx > 0 {
if a[idx] == '/' {
break
}
idx--
}
ancestor = utils.FixAndCleanPath(a[:idx])
aChildName = a[idx+1 : aNextIdx]
bChildName = b[idx+1 : bNextIdx]
aRest = a[idx+1:]
bRest = b[idx+1:]
return ancestor, aChildName, bChildName, aRest, bRest
}
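// e.g. getPathCommonAncestor("/aaa/bbb/ccc", "/aaa/b11/ddd/ccc")
// returns ("/aaa", "bbb", "b11", "bbb/ccc", "b11/ddd/ccc"), matching the example above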
func getUsername(ctx context.Context) string {
user, ok := ctx.Value("user").(*model.User)
if !ok {
return "<system>"
}
return user.Username
}
func loadPrivateKey(key, passphrase string) (*openpgp.Entity, error) {
entityList, err := openpgp.ReadArmoredKeyRing(strings.NewReader(key))
if err != nil {
return nil, err
}
if len(entityList) < 1 {
return nil, fmt.Errorf("no keys found in key ring")
}
entity := entityList[0]
pass := []byte(passphrase)
if entity.PrivateKey != nil && entity.PrivateKey.Encrypted {
if err = entity.PrivateKey.Decrypt(pass); err != nil {
return nil, fmt.Errorf("password incorrect: %+v", err)
}
}
for _, subKey := range entity.Subkeys {
if subKey.PrivateKey != nil && subKey.PrivateKey.Encrypted {
if err = subKey.PrivateKey.Decrypt(pass); err != nil {
return nil, fmt.Errorf("password incorrect: %+v", err)
}
}
}
return entity, nil
}
func signCommit(m *map[string]interface{}, entity *openpgp.Entity) (string, error) {
var commit strings.Builder
commit.WriteString(fmt.Sprintf("tree %s\n", (*m)["tree"].(string)))
parents := (*m)["parents"].([]string)
for _, p := range parents {
commit.WriteString(fmt.Sprintf("parent %s\n", p))
}
now := time.Now()
_, offset := now.Zone()
hour := offset / 3600
author := (*m)["author"].(map[string]string)
commit.WriteString(fmt.Sprintf("author %s <%s> %d %+03d00\n", author["name"], author["email"], now.Unix(), hour))
author["date"] = now.Format(time.RFC3339)
committer := (*m)["committer"].(map[string]string)
commit.WriteString(fmt.Sprintf("committer %s <%s> %d %+03d00\n", committer["name"], committer["email"], now.Unix(), hour))
committer["date"] = now.Format(time.RFC3339)
commit.WriteString(fmt.Sprintf("\n%s", (*m)["message"].(string)))
data := commit.String()
var sigBuffer bytes.Buffer
err := openpgp.DetachSign(&sigBuffer, entity, strings.NewReader(data), nil)
if err != nil {
return "", fmt.Errorf("signing failed: %v", err)
}
var armoredSig bytes.Buffer
armorWriter, err := armor.Encode(&armoredSig, "PGP SIGNATURE", nil)
if err != nil {
return "", err
}
if _, err = io.Copy(armorWriter, &sigBuffer); err != nil {
return "", err
}
_ = armorWriter.Close()
return armoredSig.String(), nil
}
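// The byte sequence assembled above follows git's raw commit object layout, e.g.
// (SHAs and identities are placeholders):
//
//	tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
//	parent 7c4a013e5a2dcbe6bd32cd3bfe4d31c4158cba71
//	author Alice <alice@example.com> 1712130000 +0800
//	committer Alice <alice@example.com> 1712130000 +0800
//
//	upload foo.txt
//
// The armored detached signature is verified against exactly these bytes. Note that
// the %+03d00 offset format only covers whole-hour timezones; a +05:30 zone would be
// rendered as +0500.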

(new file under drivers/github_releases)
@@ -0,0 +1,167 @@
package github_releases
import (
"context"
"fmt"
"net/http"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type GithubReleases struct {
model.Storage
Addition
points []MountPoint
}
func (d *GithubReleases) Config() driver.Config {
return config
}
func (d *GithubReleases) GetAddition() driver.Additional {
return &d.Addition
}
func (d *GithubReleases) Init(ctx context.Context) error {
d.ParseRepos(d.Addition.RepoStructure)
return nil
}
func (d *GithubReleases) Drop(ctx context.Context) error {
return nil
}
func (d *GithubReleases) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files := make([]File, 0)
path := fmt.Sprintf("/%s", strings.Trim(dir.GetPath(), "/"))
for i := range d.points {
point := &d.points[i]
if !d.Addition.ShowAllVersion { // latest
point.RequestRelease(d.GetRequest, args.Refresh)
if point.Point == path { // path equals the mount point
files = append(files, point.GetLatestRelease()...)
if d.Addition.ShowReadme {
files = append(files, point.GetOtherFile(d.GetRequest, args.Refresh)...)
}
} else if strings.HasPrefix(point.Point, path) { // path is a parent of the mount point
nextDir := GetNextDir(point.Point, path)
if nextDir == "" {
continue
}
hasSameDir := false
for index := range files {
if files[index].GetName() == nextDir {
hasSameDir = true
files[index].Size += point.GetLatestSize()
break
}
}
if !hasSameDir {
files = append(files, File{
Path: path + "/" + nextDir,
FileName: nextDir,
Size: point.GetLatestSize(),
UpdateAt: point.Release.PublishedAt,
CreateAt: point.Release.CreatedAt,
Type: "dir",
Url: "",
})
}
}
} else { // all version
point.RequestReleases(d.GetRequest, args.Refresh)
if point.Point == path { // path equals the mount point
files = append(files, point.GetAllVersion()...)
if d.Addition.ShowReadme {
files = append(files, point.GetOtherFile(d.GetRequest, args.Refresh)...)
}
} else if strings.HasPrefix(point.Point, path) { // path is a parent of the mount point
nextDir := GetNextDir(point.Point, path)
if nextDir == "" {
continue
}
hasSameDir := false
for index := range files {
if files[index].GetName() == nextDir {
hasSameDir = true
files[index].Size += point.GetAllVersionSize()
break
}
}
if !hasSameDir {
files = append(files, File{
FileName: nextDir,
Path: path + "/" + nextDir,
Size: point.GetAllVersionSize(),
UpdateAt: (*point.Releases)[0].PublishedAt,
CreateAt: (*point.Releases)[0].CreatedAt,
Type: "dir",
Url: "",
})
}
} else if strings.HasPrefix(path, point.Point) { // path is inside the mount point
tagName := GetNextDir(path, point.Point)
if tagName == "" {
continue
}
files = append(files, point.GetReleaseByTagName(tagName)...)
}
}
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return src, nil
})
}
func (d *GithubReleases) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
url := file.GetID()
gh_proxy := strings.TrimSpace(d.Addition.GitHubProxy)
if gh_proxy != "" {
url = strings.Replace(url, "https://github.com", gh_proxy, 1)
}
link := model.Link{
URL: url,
Header: http.Header{},
}
return &link, nil
}
func (d *GithubReleases) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
// TODO create folder, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO move obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// TODO rename obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotImplement
}

(new file under drivers/github_releases)
@@ -0,0 +1,35 @@
package github_releases
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
RepoStructure string `json:"repo_structure" type:"text" required:"true" default:"alistGo/alist" help:"structure:[path:]org/repo"`
ShowReadme bool `json:"show_readme" type:"bool" default:"true" help:"show README and LICENSE files"`
Token string `json:"token" type:"string" required:"false" help:"GitHub token, if you want to access private repositories or increase the rate limit"`
ShowAllVersion bool `json:"show_all_version" type:"bool" default:"false" help:"show all versions"`
GitHubProxy string `json:"gh_proxy" type:"string" default:"" help:"GitHub proxy, e.g. https://ghproxy.net/github.com or https://gh-proxy.com/github.com "`
}
var config = driver.Config{
Name: "GitHub Releases",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &GithubReleases{}
})
}

(new file under drivers/github_releases)
@@ -0,0 +1,86 @@
package github_releases
type Release struct {
Url string `json:"url"`
AssetsUrl string `json:"assets_url"`
UploadUrl string `json:"upload_url"`
HtmlUrl string `json:"html_url"`
Id int `json:"id"`
Author User `json:"author"`
NodeId string `json:"node_id"`
TagName string `json:"tag_name"`
TargetCommitish string `json:"target_commitish"`
Name string `json:"name"`
Draft bool `json:"draft"`
Prerelease bool `json:"prerelease"`
CreatedAt string `json:"created_at"`
PublishedAt string `json:"published_at"`
Assets []Asset `json:"assets"`
TarballUrl string `json:"tarball_url"`
ZipballUrl string `json:"zipball_url"`
Body string `json:"body"`
Reactions Reactions `json:"reactions"`
}
type User struct {
Login string `json:"login"`
Id int `json:"id"`
NodeId string `json:"node_id"`
AvatarUrl string `json:"avatar_url"`
GravatarId string `json:"gravatar_id"`
Url string `json:"url"`
HtmlUrl string `json:"html_url"`
FollowersUrl string `json:"followers_url"`
FollowingUrl string `json:"following_url"`
GistsUrl string `json:"gists_url"`
StarredUrl string `json:"starred_url"`
SubscriptionsUrl string `json:"subscriptions_url"`
OrganizationsUrl string `json:"organizations_url"`
ReposUrl string `json:"repos_url"`
EventsUrl string `json:"events_url"`
ReceivedEventsUrl string `json:"received_events_url"`
Type string `json:"type"`
UserViewType string `json:"user_view_type"`
SiteAdmin bool `json:"site_admin"`
}
type Asset struct {
Url string `json:"url"`
Id int `json:"id"`
NodeId string `json:"node_id"`
Name string `json:"name"`
Label string `json:"label"`
Uploader User `json:"uploader"`
ContentType string `json:"content_type"`
State string `json:"state"`
Size int64 `json:"size"`
DownloadCount int `json:"download_count"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
BrowserDownloadUrl string `json:"browser_download_url"`
}
type Reactions struct {
Url string `json:"url"`
TotalCount int `json:"total_count"`
PlusOne int `json:"+1"`
MinusOne int `json:"-1"`
Laugh int `json:"laugh"`
Hooray int `json:"hooray"`
Confused int `json:"confused"`
Heart int `json:"heart"`
Rocket int `json:"rocket"`
Eyes int `json:"eyes"`
}
type FileInfo struct {
Name string `json:"name"`
Path string `json:"path"`
Sha string `json:"sha"`
Size int64 `json:"size"`
Url string `json:"url"`
HtmlUrl string `json:"html_url"`
GitUrl string `json:"git_url"`
DownloadUrl string `json:"download_url"`
Type string `json:"type"`
}

(new file under drivers/github_releases)
@@ -0,0 +1,213 @@
package github_releases
import (
"encoding/json"
"strings"
"time"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type MountPoint struct {
Point string // mount point
Repo string // repo name, "owner/repo"
Release *Release // latest release
Releases *[]Release // all releases
OtherFile *[]FileInfo // other files in the repo root
}
// fetch the latest release
func (m *MountPoint) RequestRelease(get func(url string) (*resty.Response, error), refresh bool) {
if m.Repo == "" {
return
}
if m.Release == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/releases/latest")
m.Release = new(Release)
json.Unmarshal(resp.Body(), m.Release)
}
}
// fetch all releases
func (m *MountPoint) RequestReleases(get func(url string) (*resty.Response, error), refresh bool) {
if m.Repo == "" {
return
}
if m.Releases == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/releases")
m.Releases = new([]Release)
json.Unmarshal(resp.Body(), m.Releases)
}
}
// assets of the latest release
func (m *MountPoint) GetLatestRelease() []File {
files := make([]File, 0)
for _, asset := range m.Release.Assets {
files = append(files, File{
Path: m.Point + "/" + asset.Name,
FileName: asset.Name,
Size: asset.Size,
Type: "file",
UpdateAt: asset.UpdatedAt,
CreateAt: asset.CreatedAt,
Url: asset.BrowserDownloadUrl,
})
}
return files
}
// total asset size of the latest release
func (m *MountPoint) GetLatestSize() int64 {
size := int64(0)
for _, asset := range m.Release.Assets {
size += asset.Size
}
return size
}
// all versions, one directory per release tag
func (m *MountPoint) GetAllVersion() []File {
files := make([]File, 0)
for _, release := range *m.Releases {
file := File{
Path: m.Point + "/" + release.TagName,
FileName: release.TagName,
Size: m.GetSizeByTagName(release.TagName),
Type: "dir",
UpdateAt: release.PublishedAt,
CreateAt: release.CreatedAt,
Url: release.HtmlUrl,
}
for _, asset := range release.Assets {
file.Size += asset.Size
}
files = append(files, file)
}
return files
}
// assets of the release with the given tag
func (m *MountPoint) GetReleaseByTagName(tagName string) []File {
for _, item := range *m.Releases {
if item.TagName == tagName {
files := make([]File, 0)
for _, asset := range item.Assets {
files = append(files, File{
Path: m.Point + "/" + tagName + "/" + asset.Name,
FileName: asset.Name,
Size: asset.Size,
Type: "file",
UpdateAt: asset.UpdatedAt,
CreateAt: asset.CreatedAt,
Url: asset.BrowserDownloadUrl,
})
}
return files
}
}
return nil
}
// total asset size of the release with the given tag
func (m *MountPoint) GetSizeByTagName(tagName string) int64 {
if m.Releases == nil {
return 0
}
for _, item := range *m.Releases {
if item.TagName == tagName {
size := int64(0)
for _, asset := range item.Assets {
size += asset.Size
}
return size
}
}
return 0
}
// total asset size across all releases
func (m *MountPoint) GetAllVersionSize() int64 {
if m.Releases == nil {
return 0
}
size := int64(0)
for _, release := range *m.Releases {
for _, asset := range release.Assets {
size += asset.Size
}
}
return size
}
func (m *MountPoint) GetOtherFile(get func(url string) (*resty.Response, error), refresh bool) []File {
if m.OtherFile == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/contents")
m.OtherFile = new([]FileInfo)
json.Unmarshal(resp.Body(), m.OtherFile)
}
files := make([]File, 0)
defaultTime := "1970-01-01T00:00:00Z"
for _, file := range *m.OtherFile {
if strings.HasSuffix(file.Name, ".md") || strings.HasPrefix(file.Name, "LICENSE") {
files = append(files, File{
Path: m.Point + "/" + file.Name,
FileName: file.Name,
Size: file.Size,
Type: "file",
UpdateAt: defaultTime,
CreateAt: defaultTime,
Url: file.DownloadUrl,
})
}
}
return files
}
type File struct {
Path string // file path
FileName string // file name
Size int64 // file size
Type string // "file" or "dir"
UpdateAt string // update time, e.g. "2025-01-27T16:10:16Z"
CreateAt string // creation time
Url string // download URL
}
func (f File) GetHash() utils.HashInfo {
return utils.HashInfo{}
}
func (f File) GetPath() string {
return f.Path
}
func (f File) GetSize() int64 {
return f.Size
}
func (f File) GetName() string {
return f.FileName
}
func (f File) ModTime() time.Time {
t, _ := time.Parse(time.RFC3339, f.CreateAt)
return t
}
func (f File) CreateTime() time.Time {
t, _ := time.Parse(time.RFC3339, f.CreateAt)
return t
}
func (f File) IsDir() bool {
return f.Type == "dir"
}
func (f File) GetID() string {
return f.Url
}

(new file under drivers/github_releases)
@@ -0,0 +1,85 @@
package github_releases
import (
"fmt"
"path/filepath"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
// send a GET request to the GitHub API
func (d *GithubReleases) GetRequest(url string) (*resty.Response, error) {
req := base.RestyClient.R()
req.SetHeader("Accept", "application/vnd.github+json")
req.SetHeader("X-GitHub-Api-Version", "2022-11-28")
if d.Addition.Token != "" {
req.SetHeader("Authorization", fmt.Sprintf("Bearer %s", d.Addition.Token))
}
res, err := req.Get(url)
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
log.Warn("failed to get request: ", res.StatusCode(), res.String())
}
return res, nil
}
// parse the repo_structure text into mount points
func (d *GithubReleases) ParseRepos(text string) ([]MountPoint, error) {
lines := strings.Split(text, "\n")
points := make([]MountPoint, 0)
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
parts := strings.Split(line, ":")
path, repo := "", ""
if len(parts) == 1 {
path = "/"
repo = parts[0]
} else if len(parts) == 2 {
path = fmt.Sprintf("/%s", strings.Trim(parts[0], "/"))
repo = parts[1]
} else {
return nil, fmt.Errorf("invalid format: %s", line)
}
points = append(points, MountPoint{
Point: path,
Repo: repo,
Release: nil,
Releases: nil,
})
}
d.points = points
return points, nil
}
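// e.g. a repo_structure value of
//
//	alistGo/alist
//	/mirrors/nginx:nginx/nginx
//
// mounts alistGo/alist at "/" and nginx/nginx at "/mirrors/nginx"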
// next path element of wholePath directly below basePath
func GetNextDir(wholePath string, basePath string) string {
basePath = fmt.Sprintf("%s/", strings.TrimRight(basePath, "/"))
if !strings.HasPrefix(wholePath, basePath) {
return ""
}
remainingPath := strings.TrimLeft(strings.TrimPrefix(wholePath, basePath), "/")
if remainingPath != "" {
parts := strings.Split(remainingPath, "/")
nextDir := parts[0]
if strings.HasPrefix(wholePath, strings.TrimRight(basePath, "/")+"/"+nextDir) {
return nextDir
}
}
return ""
}
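// e.g. GetNextDir("/mirrors/nginx/v1.0.0", "/mirrors") == "nginx"
//      GetNextDir("/other/repo", "/mirrors") == ""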
// report whether parentDir is an ancestor of targetDir
func IsAncestorDir(parentDir string, targetDir string) bool {
absTargetDir, _ := filepath.Abs(targetDir)
absParentDir, _ := filepath.Abs(parentDir)
return strings.HasPrefix(absTargetDir, absParentDir)
}
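// note: plain prefix matching also reports "/a/bc" as a descendant of "/a/b";
// comparing whole path elements would avoid that false positive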

@@ -158,7 +158,8 @@ func (d *GoogleDrive) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
putUrl := res.Header().Get("location")
if stream.GetSize() < d.ChunkSize*1024*1024 {
_, err = d.request(putUrl, http.MethodPut, func(req *resty.Request) {
req.SetHeader("Content-Length", strconv.FormatInt(stream.GetSize(), 10)).SetBody(stream)
req.SetHeader("Content-Length", strconv.FormatInt(stream.GetSize(), 10)).
SetBody(driver.NewLimitedUploadStream(ctx, stream))
}, nil)
} else {
err = d.chunkUpload(ctx, stream, putUrl)

@@ -11,10 +11,10 @@ import (
"strconv"
"time"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/golang-jwt/jwt/v4"
@@ -126,8 +126,7 @@ func (d *GoogleDrive) refreshToken() error {
}
d.AccessToken = resp.AccessToken
return nil
}
if gdsaFileErr != nil && os.IsExist(gdsaFileErr) {
} else if os.IsExist(gdsaFileErr) {
return gdsaFileErr
}
url := "https://www.googleapis.com/oauth2/v4/token"
@@ -229,6 +228,7 @@ func (d *GoogleDrive) chunkUpload(ctx context.Context, stream model.FileStreamer
if err != nil {
return err
}
reader = driver.NewLimitedUploadStream(ctx, reader)
_, err = d.request(url, http.MethodPut, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Length": strconv.FormatInt(chunkSize, 10),

@@ -124,7 +124,7 @@ func (d *GooglePhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
}
resp, err := d.request(postUrl, http.MethodPost, func(req *resty.Request) {
req.SetBody(stream).SetContext(ctx)
req.SetBody(driver.NewLimitedUploadStream(ctx, stream)).SetContext(ctx)
}, nil, postHeaders)
if err != nil {

@@ -4,12 +4,17 @@ import (
"context"
"crypto/sha1"
"fmt"
"io"
"net/url"
"path"
"strconv"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
@@ -19,11 +24,6 @@ import (
pubUserFile "github.com/city404/v6-public-rpc-proto/go/v6/userfile"
"github.com/rclone/rclone/lib/readers"
"github.com/zzzhr1990/go-common-entity/userfile"
"io"
"net/url"
"path"
"strconv"
"time"
)
type HalalCloud struct {
@@ -251,7 +251,6 @@ func (d *HalalCloud) getLink(ctx context.Context, file model.Obj, args model.Lin
size := result.FileSize
chunks := getChunkSizes(result.Sizes)
var finalClosers utils.Closers
resultRangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
length := httpRange.Length
if httpRange.Length >= 0 && httpRange.Start+httpRange.Length >= size {
@@ -269,7 +268,6 @@ func (d *HalalCloud) getLink(ctx context.Context, file model.Obj, args model.Lin
sha: result.Sha1,
shaTemp: sha1.New(),
}
finalClosers.Add(oo)
return readers.NewLimitedReadCloser(oo, length), nil
}
@@ -281,7 +279,7 @@ func (d *HalalCloud) getLink(ctx context.Context, file model.Obj, args model.Lin
duration = time.Until(time.Now().Add(time.Hour))
}
resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: finalClosers}
resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader}
return &model.Link{
RangeReadCloser: resultRangeReadCloser,
Expiration: &duration,
@@ -394,10 +392,11 @@ func (d *HalalCloud) put(ctx context.Context, dstDir model.Obj, fileStream model
if fileStream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = fileStream.GetSize() / (s3manager.MaxUploadParts - 1)
}
reader := driver.NewLimitedUploadStream(ctx, fileStream)
_, err = uploader.UploadWithContext(ctx, &s3manager.UploadInput{
Bucket: aws.String(result.Bucket),
Key: aws.String(result.Key),
Body: io.TeeReader(fileStream, driver.NewProgress(fileStream.GetSize(), up)),
Body: io.TeeReader(reader, driver.NewProgress(fileStream.GetSize(), up)),
})
return nil, err

@@ -120,7 +120,7 @@ func (d *ILanZou) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
if err != nil {
return nil, err
}
ts, ts_str, err := getTimestamp(d.conf.secret)
ts, ts_str, _ := getTimestamp(d.conf.secret)
params := []string{
"uuid=" + url.QueryEscape(d.UUID),
@@ -149,11 +149,17 @@ func (d *ILanZou) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
u.RawQuery = strings.Join(params, "&")
realURL := u.String()
// get the url after redirect
res, err := base.NoRedirectClient.R().SetHeaders(map[string]string{
//"Origin": d.conf.site,
req := base.NoRedirectClient.R()
req.SetHeaders(map[string]string{
"Referer": d.conf.site + "/",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0",
}).Get(realURL)
})
if d.Addition.Ip != "" {
req.SetHeader("X-Forwarded-For", d.Addition.Ip)
}
res, err := req.Get(realURL)
if err != nil {
return nil, err
}
@@ -266,10 +272,10 @@ func (d *ILanZou) Remove(ctx context.Context, obj model.Obj) error {
const DefaultPartSize = 1024 * 1024 * 8
func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := stream.CacheFullInTempFile()
tempFile, err := s.CacheFullInTempFile()
if err != nil {
return nil, err
}
@@ -288,8 +294,8 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
res, err := d.proved("/7n/getUpToken", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"fileId": "",
"fileName": stream.GetName(),
"fileSize": stream.GetSize()/1024 + 1,
"fileName": s.GetName(),
"fileSize": s.GetSize()/1024 + 1,
"folderId": dstDir.GetID(),
"md5": etag,
"type": 1,
@@ -301,13 +307,20 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
upToken := utils.Json.Get(res, "upToken").ToString()
now := time.Now()
key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: tempFile,
Size: s.GetSize(),
},
UpdateProgress: up,
})
var token string
if stream.GetSize() <= DefaultPartSize {
res, err := d.upClient.R().SetMultipartFormData(map[string]string{
if s.GetSize() <= DefaultPartSize {
res, err := d.upClient.R().SetContext(ctx).SetMultipartFormData(map[string]string{
"token": upToken,
"key": key,
"fname": stream.GetName(),
}).SetMultipartField("file", stream.GetName(), stream.GetMimetype(), tempFile).
"fname": s.GetName(),
}).SetMultipartField("file", s.GetName(), s.GetMimetype(), reader).
Post("https://upload.qiniup.com/")
if err != nil {
return nil, err
@@ -321,10 +334,10 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
}
uploadId := utils.Json.Get(res.Body(), "uploadId").ToString()
parts := make([]Part, 0)
partNum := (stream.GetSize() + DefaultPartSize - 1) / DefaultPartSize
partNum := (s.GetSize() + DefaultPartSize - 1) / DefaultPartSize
for i := 1; i <= int(partNum); i++ {
u := fmt.Sprintf("https://upload.qiniup.com/buckets/%s/objects/%s/uploads/%s/%d", d.conf.bucket, keyBase64, uploadId, i)
res, err = d.upClient.R().SetHeader("Authorization", "UpToken "+upToken).SetBody(io.LimitReader(tempFile, DefaultPartSize)).Put(u)
res, err = d.upClient.R().SetContext(ctx).SetHeader("Authorization", "UpToken "+upToken).SetBody(io.LimitReader(reader, DefaultPartSize)).Put(u)
if err != nil {
return nil, err
}
@@ -335,7 +348,7 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
})
}
res, err = d.upClient.R().SetHeader("Authorization", "UpToken "+upToken).SetBody(base.Json{
"fnmae": stream.GetName(),
"fnmae": s.GetName(),
"parts": parts,
}).Post(fmt.Sprintf("https://upload.qiniup.com/buckets/%s/objects/%s/uploads/%s", d.conf.bucket, keyBase64, uploadId))
if err != nil {
@@ -373,9 +386,9 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
ID: strconv.FormatInt(file.FileId, 10),
//Path: ,
Name: file.FileName,
Size: stream.GetSize(),
Modified: stream.ModTime(),
Ctime: stream.CreateTime(),
Size: s.GetSize(),
Modified: s.ModTime(),
Ctime: s.CreateTime(),
IsFolder: false,
HashInfo: utils.NewHashInfo(utils.MD5, etag),
}, nil

@@ -9,6 +9,7 @@ type Addition struct {
driver.RootID
Username string `json:"username" type:"string" required:"true"`
Password string `json:"password" type:"string" required:"true"`
Ip string `json:"ip" type:"string"`
Token string
UUID string

@@ -69,11 +69,17 @@ func (d *ILanZou) request(pathname, method string, callback base.ReqCallback, pr
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Origin": d.conf.site,
"Referer": d.conf.site + "/",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0",
"Origin": d.conf.site,
"Referer": d.conf.site + "/",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0",
"Accept-Encoding": "gzip, deflate, br, zstd",
"Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6,mt;q=0.5",
})
if d.Addition.Ip != "" {
req.SetHeader("X-Forwarded-For", d.Addition.Ip)
}
if callback != nil {
callback(req)
}

@@ -4,13 +4,13 @@ import (
"context"
"fmt"
"net/url"
stdpath "path"
"path/filepath"
"strings"
shell "github.com/ipfs/go-ipfs-api"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
shell "github.com/ipfs/go-ipfs-api"
)
type IPFS struct {
@@ -44,27 +44,32 @@ func (d *IPFS) Drop(ctx context.Context) error {
func (d *IPFS) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
path := dir.GetPath()
if path[len(path):] != "/" {
path += "/"
switch d.Mode {
case "ipfs":
path, _ = url.JoinPath("/ipfs", path)
case "ipns":
path, _ = url.JoinPath("/ipns", path)
case "mfs":
fileStat, err := d.sh.FilesStat(ctx, path)
if err != nil {
return nil, err
}
path, _ = url.JoinPath("/ipfs", fileStat.Hash)
default:
return nil, fmt.Errorf("mode error")
}
path_cid, err := d.sh.FilesStat(ctx, path)
if err != nil {
return nil, err
}
dirs, err := d.sh.List(path_cid.Hash)
dirs, err := d.sh.List(path)
if err != nil {
return nil, err
}
objlist := []model.Obj{}
for _, file := range dirs {
gateurl := *d.gateURL
gateurl.Path = "ipfs/" + file.Hash
gateurl := *d.gateURL.JoinPath("/ipfs/" + file.Hash)
gateurl.RawQuery = "filename=" + url.PathEscape(file.Name)
objlist = append(objlist, &model.ObjectURL{
Object: model.Object{ID: file.Hash, Name: file.Name, Size: int64(file.Size), IsFolder: file.Type == 1},
Object: model.Object{ID: "/ipfs/" + file.Hash, Name: file.Name, Size: int64(file.Size), IsFolder: file.Type == 1},
Url: model.Url{Url: gateurl.String()},
})
}
@@ -73,11 +78,15 @@ func (d *IPFS) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]
}
func (d *IPFS) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
link := d.Gateway + "/ipfs/" + file.GetID() + "/?filename=" + url.PathEscape(file.GetName())
return &model.Link{URL: link}, nil
gateurl := d.gateURL.JoinPath(file.GetID())
gateurl.RawQuery = "filename=" + url.PathEscape(file.GetName())
return &model.Link{URL: gateurl.String()}, nil
}
func (d *IPFS) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
path := parentDir.GetPath()
if path[len(path):] != "/" {
path += "/"
@@ -86,39 +95,48 @@ func (d *IPFS) MakeDir(ctx context.Context, parentDir model.Obj, dirName string)
}
func (d *IPFS) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
return d.sh.FilesMv(ctx, srcObj.GetPath(), dstDir.GetPath())
}
func (d *IPFS) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
newFileName := filepath.Dir(srcObj.GetPath()) + "/" + newName
return d.sh.FilesMv(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
}
func (d *IPFS) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// TODO copy obj, optional
fmt.Println(srcObj.GetPath())
fmt.Println(dstDir.GetPath())
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
newFileName := dstDir.GetPath() + "/" + filepath.Base(srcObj.GetPath())
fmt.Println(newFileName)
return d.sh.FilesCp(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
}
func (d *IPFS) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
return d.sh.FilesRm(ctx, obj.GetPath(), true)
}
func (d *IPFS) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// TODO upload file, optional
_, err := d.sh.Add(stream, ToFiles(stdpath.Join(dstDir.GetPath(), stream.GetName())))
return err
}
func ToFiles(dstDir string) shell.AddOpts {
return func(rb *shell.RequestBuilder) error {
rb.Option("to-files", dstDir)
return nil
func (d *IPFS) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
outHash, err := d.sh.Add(driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
}))
if err != nil {
return err
}
err = d.sh.FilesCp(ctx, "/ipfs/"+outHash, dstDir.GetPath()+"/"+strings.ReplaceAll(s.GetName(), "\\", "/"))
return err
}
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {

@@ -8,14 +8,16 @@ import (
type Addition struct {
// Usually one of two
driver.RootPath
Mode string `json:"mode" options:"ipfs,ipns,mfs" type:"select" required:"true"`
Endpoint string `json:"endpoint" default:"http://127.0.0.1:5001"`
Gateway string `json:"gateway" default:"https://ipfs.io"`
Gateway string `json:"gateway" default:"http://127.0.0.1:8080"`
}
var config = driver.Config{
Name: "IPFS API",
DefaultRoot: "/",
LocalSort: true,
OnlyProxy: false,
}
func init() {

@@ -3,8 +3,6 @@ package kodbox
import (
"context"
"fmt"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"net/http"
"path/filepath"
"strings"
@@ -12,6 +10,8 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type KodBox struct {
@@ -225,14 +225,19 @@ func (d *KodBox) Remove(ctx context.Context, obj model.Obj) error {
return nil
}
func (d *KodBox) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
func (d *KodBox) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
var resp *CommonResp
_, err := d.request(http.MethodPost, "/?explorer/upload/fileUpload", func(req *resty.Request) {
req.SetFileReader("file", stream.GetName(), stream).
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
req.SetFileReader("file", s.GetName(), r).
SetResult(&resp).
SetFormData(map[string]string{
"path": dstDir.GetPath(),
})
}).
SetContext(ctx)
})
if err != nil {
return nil, err
@@ -244,8 +249,8 @@ func (d *KodBox) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return &model.ObjThumb{
Object: model.Object{
Path: resp.Info.(string),
Name: stream.GetName(),
Size: stream.GetSize(),
Name: s.GetName(),
Size: s.GetSize(),
IsFolder: false,
Modified: time.Now(),
Ctime: time.Now(),

@@ -208,18 +208,22 @@ func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error {
return errs.NotSupport
}
func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
if d.IsCookie() || d.IsAccount() {
var resp RespText[[]FileOrFolder]
_, err := d._post(d.BaseUrl+"/html5up.php", func(req *resty.Request) {
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
req.SetFormData(map[string]string{
"task": "1",
"vie": "2",
"ve": "2",
"id": "WU_FILE_0",
"name": stream.GetName(),
"name": s.GetName(),
"folder_id_bb_n": dstDir.GetID(),
}).SetFileReader("upload_file", stream.GetName(), stream).SetContext(ctx)
}).SetFileReader("upload_file", s.GetName(), reader).SetContext(ctx)
}, &resp, true)
if err != nil {
return nil, err

@@ -120,9 +120,9 @@ var findKVReg = regexp.MustCompile(`'(.+?)':('?([^' },]*)'?)`) // split key-value pairs
func findJSVarFunc(key, data string) string {
var values []string
if key != "sasign" {
values = regexp.MustCompile(`var ` + key + ` = '(.+?)';`).FindStringSubmatch(data)
values = regexp.MustCompile(`var ` + key + `\s*=\s*['"]?(.+?)['"]?;`).FindStringSubmatch(data)
} else {
matches := regexp.MustCompile(`var `+key+` = '(.+?)';`).FindAllStringSubmatch(data, -1)
matches := regexp.MustCompile(`var `+key+`\s*=\s*['"]?(.+?)['"]?;`).FindAllStringSubmatch(data, -1)
if len(matches) == 3 {
values = matches[1]
} else {

@@ -264,6 +264,9 @@ var findSubFolderReg = regexp.MustCompile(`(?i)(?:folderlink|mbxfolder).+href="/
// extract the download page link
var findDownPageParamReg = regexp.MustCompile(`<iframe.*?src="(.+?)"`)
// extract the file ID
var findFileIDReg = regexp.MustCompile(`'/ajaxm\.php\?file=(\d+)'`)
// fetch the share link landing page
func (d *LanZou) getShareUrlHtml(shareID string) (string, error) {
var vs string
@@ -356,8 +359,16 @@ func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (
return nil, err
}
param["p"] = pwd
fileIDs := findFileIDReg.FindStringSubmatch(sharePageData)
var fileID string
if len(fileIDs) > 1 {
fileID = fileIDs[1]
} else {
return nil, fmt.Errorf("not find file id")
}
var resp FileShareInfoAndUrlResp[string]
_, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param) }, &resp)
_, err = d.post(d.ShareUrl+"/ajaxm.php?file="+fileID, func(req *resty.Request) { req.SetFormData(param) }, &resp)
if err != nil {
return nil, err
}
@@ -381,8 +392,15 @@ func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (
return nil, err
}
fileIDs := findFileIDReg.FindStringSubmatch(nextPageData)
var fileID string
if len(fileIDs) > 1 {
fileID = fileIDs[1]
} else {
return nil, fmt.Errorf("not find file id")
}
var resp FileShareInfoAndUrlResp[int]
_, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param) }, &resp)
_, err = d.post(d.ShareUrl+"/ajaxm.php?file="+fileID, func(req *resty.Request) { req.SetFormData(param) }, &resp)
if err != nil {
return nil, err
}

@@ -320,7 +320,10 @@ func (c *Lark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStrea
Build()
// send the request
uploadLimit.Wait(ctx)
err := uploadLimit.Wait(ctx)
if err != nil {
return nil, err
}
resp, err := c.client.Drive.File.UploadPrepare(ctx, req)
if err != nil {
return nil, err
@@ -341,7 +344,7 @@ func (c *Lark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStrea
length = stream.GetSize() - int64(i*blockSize)
}
reader := io.LimitReader(stream, length)
reader := driver.NewLimitedUploadStream(ctx, io.LimitReader(stream, length))
req := larkdrive.NewUploadPartFileReqBuilder().
Body(larkdrive.NewUploadPartFileReqBodyBuilder().
@@ -353,7 +356,10 @@ func (c *Lark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStrea
Build()
// send the request
uploadLimit.Wait(ctx)
err = uploadLimit.Wait(ctx)
if err != nil {
return nil, err
}
resp, err := c.client.Drive.File.UploadPart(ctx, req)
if err != nil {

@@ -3,6 +3,7 @@ package LenovoNasShare
import (
"context"
"net/http"
"time"
"github.com/go-resty/resty/v2"
@@ -15,7 +16,8 @@ import (
type LenovoNasShare struct {
model.Storage
Addition
stoken string
stoken string
expireAt int64
}
func (d *LenovoNasShare) Config() driver.Config {
@ -27,20 +29,9 @@ func (d *LenovoNasShare) GetAddition() driver.Additional {
}
func (d *LenovoNasShare) Init(ctx context.Context) error {
if d.Host == "" {
d.Host = "https://siot-share.lenovo.com.cn"
}
query := map[string]string{
"code": d.ShareId,
"password": d.SharePwd,
}
resp, err := d.request(d.Host+"/oneproxy/api/share/v1/access", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(query)
}, nil)
if err != nil {
if err := d.getStoken(); err != nil {
return err
}
d.stoken = utils.Json.Get(resp, "data", "stoken").ToString()
return nil
}
@@ -49,6 +40,7 @@ func (d *LenovoNasShare) Drop(ctx context.Context) error {
}
func (d *LenovoNasShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
d.checkStoken() // refresh the stoken if it has expired
files := make([]File, 0)
var resp Files
@@ -71,7 +63,33 @@ func (d *LenovoNasShare) List(ctx context.Context, dir model.Obj, args model.Lis
})
}
func (d *LenovoNasShare) checkStoken() { // refresh the stoken if it has expired
if d.expireAt < time.Now().Unix() {
d.getStoken()
}
}
func (d *LenovoNasShare) getStoken() error { // fetch a fresh stoken
if d.Host == "" {
d.Host = "https://siot-share.lenovo.com.cn"
}
query := map[string]string{
"code": d.ShareId,
"password": d.SharePwd,
}
resp, err := d.request(d.Host+"/oneproxy/api/share/v1/access", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(query)
}, nil)
if err != nil {
return err
}
d.stoken = utils.Json.Get(resp, "data", "stoken").ToString()
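// treat the token as expired one minute early so List/Link never race the real expiry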
d.expireAt = utils.Json.Get(resp, "data", "expires_in").ToInt64() + time.Now().Unix() - 60
return nil
}
func (d *LenovoNasShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
d.checkStoken() // refresh the stoken if it has expired
query := map[string]string{
"code": d.ShareId,
"stoken": d.stoken,

@@ -47,7 +47,11 @@ func (f File) GetPath() string {
}
func (f File) GetSize() int64 {
return f.Size
if f.IsDir() {
return 0
} else {
return f.Size
}
}
func (f File) GetName() string {
@@ -70,10 +74,6 @@ func (f File) GetID() string {
return f.GetPath()
}
func (f File) Thumb() string {
return ""
}
type Files struct {
Data struct {
List []File `json:"list"`

@@ -79,6 +79,28 @@ func (d *Local) Init(ctx context.Context) error {
} else {
d.thumbTokenBucket = NewStaticTokenBucketWithMigration(d.thumbTokenBucket, d.thumbConcurrency)
}
// Check the VideoThumbPos value
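// accepted forms, per the validation below: a percentage of the video duration
// (e.g. "20%") or a positive number of seconds (e.g. "30")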
if d.VideoThumbPos == "" {
d.VideoThumbPos = "20%"
}
if strings.HasSuffix(d.VideoThumbPos, "%") {
percentage := strings.TrimSuffix(d.VideoThumbPos, "%")
val, err := strconv.ParseFloat(percentage, 64)
if err != nil {
return fmt.Errorf("invalid video_thumb_pos value: %s, err: %s", d.VideoThumbPos, err)
}
if val < 0 || val > 100 {
return fmt.Errorf("invalid video_thumb_pos value: %s, the precentage must be a number between 0 and 100", d.VideoThumbPos)
}
} else {
val, err := strconv.ParseFloat(d.VideoThumbPos, 64)
if err != nil {
return fmt.Errorf("invalid video_thumb_pos value: %s, err: %s", d.VideoThumbPos, err)
}
if val < 0 {
return fmt.Errorf("invalid video_thumb_pos value: %s, the time must be a positive number", d.VideoThumbPos)
}
}
return nil
}
@@ -101,17 +123,17 @@ func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
if !d.ShowHidden && strings.HasPrefix(f.Name(), ".") {
continue
}
file := d.FileInfoToObj(f, args.ReqPath, fullPath)
file := d.FileInfoToObj(ctx, f, args.ReqPath, fullPath)
files = append(files, file)
}
return files, nil
}
func (d *Local) FileInfoToObj(f fs.FileInfo, reqPath string, fullPath string) model.Obj {
func (d *Local) FileInfoToObj(ctx context.Context, f fs.FileInfo, reqPath string, fullPath string) model.Obj {
thumb := ""
if d.Thumbnail {
typeName := utils.GetFileType(f.Name())
if typeName == conf.IMAGE || typeName == conf.VIDEO {
thumb = common.GetApiUrl(nil) + stdpath.Join("/d", reqPath, f.Name())
thumb = common.GetApiUrl(common.GetHttpReq(ctx)) + stdpath.Join("/d", reqPath, f.Name())
thumb = utils.EncodePath(thumb, true)
thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(reqPath, f.Name()))
}
@@ -149,7 +171,7 @@ func (d *Local) GetMeta(ctx context.Context, path string) (model.Obj, error) {
if err != nil {
return nil, err
}
file := d.FileInfoToObj(f, path, path)
file := d.FileInfoToObj(ctx, f, path, path)
//h := "123123"
//if s, ok := f.(model.SetHash); ok && file.GetHash() == ("","") {
// s.SetHash(h,"SHA1")

Some files were not shown because too many files have changed in this diff.