Compare commits

...

198 Commits

Author SHA1 Message Date
0f8a84f67e perf(alias): disabled log on fs call (close #4054) 2023-04-07 00:02:07 +08:00
a475783b00 fix(deps): update module github.com/spf13/cobra to v1.7.0 [skip ci] (#4041)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-04-06 21:41:41 +08:00
67413015e8 ci: use non-upx prebuilt for windows by default 2023-04-06 21:38:57 +08:00
3a311a47af fix(deps): update module github.com/upyun/go-sdk/v3 to v3.0.4 (#4039)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-04-04 17:10:31 +08:00
9ccd802126 fix(123): api prefix changed (close #4038) 2023-04-04 16:39:56 +08:00
0acba7cd22 perf(123): reduce login count 2023-04-03 11:24:29 +08:00
3cdb8e7a81 fix(trainbit): incorrect filename display (#4027) 2023-04-02 21:13:20 +08:00
d3efee2ea1 fix(s3): increase PartSize if filesize > 50000MB (close #4017) 2023-04-02 16:09:27 +08:00
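A plausible reading of the S3 fix above: multipart uploads are capped at 10,000 parts and the minimum part size is 5 MiB, so a file over ~50000MB needs a larger part size. A minimal Go sketch of that calculation (the helper name and default are assumptions, not the driver's actual code):

```go
package main

import "fmt"

const (
	maxParts        = 10000           // S3 multipart upload part limit
	defaultPartSize = 5 * 1024 * 1024 // 5 MiB, S3's minimum part size
)

// choosePartSize grows the part size so the whole file fits within 10,000 parts.
// Hypothetical helper for illustration only.
func choosePartSize(fileSize int64) int64 {
	partSize := int64(defaultPartSize)
	if fileSize/partSize >= maxParts {
		// round up so partSize*maxParts >= fileSize
		partSize = (fileSize + maxParts - 1) / maxParts
	}
	return partSize
}

func main() {
	// a ~100000MB file needs ~10 MiB parts to stay under 10,000 parts
	fmt.Println(choosePartSize(100_000 * 1024 * 1024))
}
```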
4ec274e748 fix(aliyundrive_open): refresh upload url if expired (#3999 close #3823)
* fix(aliyundrive_open): refresh upload url for large files

* fix(aliyundrive_open): retry upload on url expiry

* fix(aliyundrive_open): ignore 409 error

* feat(aliyundrive): cleanup upload retry logic

* feat(util): add multireadable io utility

* feat(aliyundrive_open): make upload fully stream

* feat(aliyundrive_open): refresh upload url every 20 puts

* fix(aliyundrive_open): part info panic

* chore: change refresh upload url strategy

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-04-01 14:54:29 +08:00
3b07c72f88 fix(proxy): ignore Referer if got redirect (close #3996) 2023-03-31 20:29:55 +08:00
0c5820a98f docs(aliyundrive_open): revised the sentence that may cause ambiguity (#3989) 2023-03-29 20:26:21 +08:00
86beadc0ed fix: missed sign with enable sign_all (close #3957) 2023-03-26 16:19:01 +08:00
be62d64dba chore: cancel 2fa succeed tips 2023-03-25 18:36:13 +08:00
112363031a feat: add fine-grained control for link signing (#3924)
* Determine whether the URL requires Sign

* Add File and Mem based KV

NOT TESTED: TokenKV Function

* Change Token KV func to common func.

Add File based KV func

* Remove KV, Remove Token

I found that the original Sign function is enough to produce the link signature; only a few simple configuration items need to be added to meet the requirements.

* Add IsStorageSigned func to judge if Signing is enabled in the storage settings.

It should be working now.

* Add a SIGN button to the management panel.

* Add enable_sign to the basic storage struct.

Can enable sign for every driver now.

Bug: When sign enabled, in download page, Copy link doesn't contain a sign.

(Not done yet)

* Fix a bug from commit 8f6c25f.

Response of fsread function does not contain sign.

* Optimize code and follow advices.

- Add back public/dist/README.md

- Enable sign when DownProxyUrl is enabled

- Merge needSign() to isEncrypt() in fsread.go

* simplify code

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-03-24 22:44:33 +08:00
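The signing commit above reuses an existing Sign function plus a per-storage enable_sign switch. As a rough illustration only — a generic HMAC-with-expiry scheme, not AList's actual format or field names:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
	"time"
)

var secret = []byte("server-side secret") // assumption: any server-side secret

// sign returns "mac:expireUnix" for a path. Generic sketch, not AList's exact scheme.
func sign(path string, expire int64) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(path + ":" + strconv.FormatInt(expire, 10)))
	return base64.URLEncoding.EncodeToString(mac.Sum(nil)) + ":" + strconv.FormatInt(expire, 10)
}

// verify recomputes the MAC for the claimed expiry and rejects expired links.
func verify(path, s string) bool {
	i := strings.LastIndex(s, ":")
	if i < 0 {
		return false
	}
	expire, err := strconv.ParseInt(s[i+1:], 10, 64)
	if err != nil || time.Now().Unix() > expire {
		return false
	}
	return hmac.Equal([]byte(s), []byte(sign(path, expire)))
}

func main() {
	s := sign("/movies/a.mkv", time.Now().Add(time.Hour).Unix())
	fmt.Println(verify("/movies/a.mkv", s)) // true until the link expires
}
```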
48dc3552a6 fix(url_tree): incorrect tree structure 2023-03-24 20:34:03 +08:00
663814c9ef fix(url_tree): fix test url [skip ci] (#3940) 2023-03-24 20:26:00 +08:00
bd892e6a63 feat(drivers): new driver UrlTree (close #3268 in #3933)
* feat(drivers): new driver `urls` (close #3268)

* chore: rename

* support customize basic info or get from url

* dfs tree to calculate folder size

* go mod tidy

* add help message
2023-03-24 15:13:54 +08:00
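The "dfs tree to calculate folder size" step in the UrlTree commit above boils down to summing child sizes bottom-up. A small self-contained sketch (the node type is hypothetical, not the driver's):

```go
package main

import "fmt"

// node is a hypothetical tree node; the UrlTree driver's real types differ.
type node struct {
	name     string
	size     int64   // file size; folders are filled in by calcSize
	children []*node // empty for files
}

// calcSize walks the tree depth-first and fills in folder sizes.
func calcSize(n *node) int64 {
	if len(n.children) == 0 {
		return n.size
	}
	var total int64
	for _, c := range n.children {
		total += calcSize(c)
	}
	n.size = total
	return total
}

func main() {
	root := &node{name: "root", children: []*node{
		{name: "a.txt", size: 100},
		{name: "sub", children: []*node{{name: "b.txt", size: 50}}},
	}}
	fmt.Println(calcSize(root)) // 150
}
```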
4fd2c09845 fix(115): download issue due to ua (close #3931 in #3932) 2023-03-23 22:57:44 +08:00
0eab31bdf5 fix(local): filename with whitespace issue (#3928)
* fix(local): filename whitespace problem

* fix(deps): remove deprecated package io/ioutil

---------

Co-authored-by: XZB <i@1248.ink>
2023-03-23 15:18:37 +08:00
c6af22b97e feat: add thumbnail to fs/get api (#3927) 2023-03-23 13:59:39 +08:00
b2a5110672 feat(onedrive): support application authorization method (#3906) 2023-03-23 13:26:03 +08:00
c628992ea6 ci: add log required on question label [skip ci] 2023-03-22 14:03:04 +08:00
c65d868e09 fix(baidu_share): large file download (#3887 close #3876)
* fix(baidushare): large file download

* refactor: optimize client
2023-03-20 17:46:15 +08:00
aeb48b2ecc perf(aliyundrive_open): don't refresh token on init if token valid 2023-03-20 15:00:02 +08:00
cefec1a663 style: sort imports 2023-03-20 14:59:01 +08:00
e7ad830aa8 fix(cloudreve): captcha code ocr (#3889 close #3662) 2023-03-19 20:30:39 +08:00
b27eed265a fix(deps): update module github.com/blevesearch/bleve/v2 to v2.3.7 [skip ci] (#3874)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-19 20:26:48 +08:00
3abe26473c fix(trainbit): decode html code (#3883) 2023-03-19 15:25:06 +08:00
023107226c fix(trainbit): remove unnecessary operation (#3881) 2023-03-18 13:52:36 +08:00
8b109cfe40 fix(smb): byte alignment (close #3868) 2023-03-17 16:32:34 +08:00
b48e97d406 chore: fix release name [skip ci] 2023-03-16 22:47:01 +08:00
6c91cfeb90 chore(deps): update actions/setup-go action to v4 (#3858)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-16 18:28:51 +08:00
bfd1f25972 fix(deps): update module github.com/deckarep/golang-set/v2 to v2.3.0 [skip ci] (#3852)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-16 15:58:02 +08:00
8c0defce09 feat(task): add clear succeeded and retry (#3856 close #3776) 2023-03-16 15:56:27 +08:00
a1e88cfa05 fix(teambition): empty token for upload (close #3854) 2023-03-15 14:56:41 +08:00
443f5ffbcc feat(alias): auto flatten if only one root 2023-03-14 20:25:52 +08:00
b8bc94306d fix(alias): check obj exist for every storage (fix d9795ff) 2023-03-14 20:11:25 +08:00
d9795ff22f feat(alias): support proxy and direct together 2023-03-14 13:46:27 +08:00
c4108007cd fix: spaces in filename will be replaced with plus sign (#3841)
Co-authored-by: XZB <i@1248.ink>
2023-03-14 12:27:42 +08:00
f3db23a41e feat(qbittorrent): add offline download seed time (#3842 close #3588) 2023-03-14 12:13:23 +08:00
4741a75c92 feat(115): update upload api to v4.0 add pagesize option (#3840 close #3753) 2023-03-13 20:02:52 +08:00
301756ba03 feat(drivers): alias a new storage with multi path (close #3248) 2023-03-13 15:35:37 +08:00
3b2703a5e5 feat(drivers): add the support for Trainbit (#3813)
* feat: add the support for Trainbit
read only

* feat: add the support for Trainbit
modify the structure of code
allow to create folder, move, rename and remove

* feat: add the support for Trainbit
allow to upload file

* feat: add the support for Trainbit
get token from page

* feat: add the support for Trainbit
display progress of updating

* feat: add the support for Trainbit
fix bug of time zone

* feat: add the support for Trainbit
fix the bug of filename
2023-03-12 22:18:55 +08:00
2a601f06cb feat(drivers): add BaiduYun share link support (#3801)
Add support for mounting Baidu Netdisk share links
2023-03-12 14:00:11 +08:00
adc3a56552 feat(aliyundrive): make checksum cancellable (#3814) 2023-03-12 13:59:40 +08:00
4d9a29bddd feat(ftp): support seek/range request (#3811) 2023-03-11 21:02:47 +08:00
666e02f0c3 fix(storage): explicitly set storages' status to disabled (#3810) 2023-03-11 20:45:35 +08:00
6aaec19c1c feat: allow override startup command for Docker image (#3800)
This is to enable the use case where the stock Docker image is used with
different flags. E.g. `docker run xhofe/alist:latest ./alist server --data=mydata`

This was the behavior until PR#2818 changed it. This would make the image more usable.
2023-03-11 15:33:59 +08:00
1091e1b740 feat: file aggregation and regular rename api (#3788)
* Add a file aggregation API that moves all files under a given folder into a target folder.

* Add a regex-based file rename API.

---------

Co-authored-by: varg247 <varg247@qq.com>
2023-03-10 19:01:49 +08:00
d06c605421 fix: smb drive lastConnTime data race (#3787 close #3782) 2023-03-10 15:59:53 +08:00
43de823058 fix: path IsApply check (close #3784) 2023-03-09 21:03:56 +08:00
02d0aef611 feat(aliyundrive_open): add internal upload (aliyun ECS for Beijing area only) (#3775) 2023-03-09 20:48:30 +08:00
5596661ce8 feat(aliyundrive_open): optional delete file directly (close #3769) 2023-03-08 19:19:13 +08:00
2379cb8d67 style: go mod tidy 2023-03-08 19:08:11 +08:00
8c0ebe0841 revert: "fix(deps): update module gorm.io/gorm to v1.24.6 (#3684)" (close #3746)
This reverts commit c595fd7f94.
2023-03-08 19:07:04 +08:00
fd868bac84 fix(deps): update module github.com/caarlos0/env/v7 to v7.1.0 (#3763)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-08 16:21:43 +08:00
ebcbb29a0f feat: ping api (close #3752) 2023-03-07 19:05:52 +08:00
00ff0a43a7 feat(cmd): disable a storage with specific mountPath (close #3564) 2023-03-07 19:01:40 +08:00
3d3f23ec9e fix: upload check if disable sub folder (close #3741) 2023-03-07 14:13:39 +08:00
d484219c48 fix(security): compare auth token in constant time (#3740 close #3739) 2023-03-06 23:41:06 +08:00
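The security fix above is about avoiding timing side channels when comparing secrets. In Go this is typically done with crypto/subtle; a minimal sketch (the helper below is illustrative, not AList's handler):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// tokenEqual compares two tokens in constant time, so the comparison's
// duration doesn't reveal how many leading bytes matched.
func tokenEqual(got, want string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}

func main() {
	fmt.Println(tokenEqual("alist-abc123", "alist-abc123")) // true
	fmt.Println(tokenEqual("alist-abc124", "alist-abc123")) // false
}
```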
dd4c97393e feat: show sso settings at a more reasonable sort (#3735) 2023-03-06 20:59:45 +08:00
07b8ff25a7 ci: auto release desktop 2023-03-06 18:05:57 +08:00
0d5c3c5080 fix(deps): update module github.com/deckarep/golang-set/v2 to v2.2.0 [skip ci] (#3727)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-06 17:54:17 +08:00
75b4429f73 feat(quark): enable NoOverwriteUpload (#3720) 2023-03-05 18:00:00 +08:00
34ef6bd18d feat(115): enable NoOverwriteUpload [skip ci] (close #3669) 2023-03-05 17:59:19 +08:00
c915313ec9 feat: rename then delete if storage doesn't support overwrite upload (close #3643) 2023-03-05 15:36:12 +08:00
12a095a1d6 fix: slice bounds out of range on CanAccess check 2023-03-05 15:29:53 +08:00
dc000f640a feat: optional log to std 2023-03-05 15:07:06 +08:00
aa1c5b2be3 fix(deps): update module golang.org/x/crypto to v0.7.0 [skip ci] (#3717)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-05 14:32:41 +08:00
1d4ec3c50d fix(deps): update module golang.org/x/net to v0.8.0 [skip ci] (#3715)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-05 14:32:26 +08:00
ebfeef52f4 fix(deps): update module golang.org/x/image to v0.6.0 [skip ci] (#3714)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-05 13:52:53 +08:00
c595fd7f94 fix(deps): update module gorm.io/gorm to v1.24.6 (#3684)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-02 19:15:50 +08:00
421052f88a fix(deps): update github.com/t3rm1n4l/go-mega digest to a01a2cd (#3665)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-03-02 19:03:38 +08:00
603681fbe6 feat: rebuild Single sign-on system (#3649 close #3571)
* rebuild single sign on system

* perf: use cache

* fix: codefactor check

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-03-02 17:55:33 +08:00
f442185aa5 perf(123): optimize login error 2023-02-28 21:17:15 +08:00
ca9e739465 fix: hide apply to sub path without enable (close #3661) 2023-02-28 18:43:52 +08:00
53a1c4283b fix(baidu_netdisk): maybe optimize crack api (#3652)
User-Agent to netdisk and remove origin=dlna(is baned)
2023-02-28 18:27:07 +08:00
93dd768234 fix(webdav): disabled is not working in webdav (#3659)
A disabled user with webdav permission can use webdav normally, which is not allowed.
2023-02-28 18:26:13 +08:00
c9c4d6bc7e fix!(local): perm on mkdir (close #3626) 2023-02-26 21:25:32 +08:00
81e10f8939 ci: set prerelease before the build completes 2023-02-25 18:06:35 +08:00
4dd753de52 fix(aliyundrive_open): missed expire_sec while get link (close #3610) 2023-02-25 17:54:36 +08:00
79df63d319 chore(aliyundrive): change alert info 2023-02-25 14:28:27 +08:00
ec54831162 fix: only refresh token while do request (close #3591) 2023-02-24 20:31:12 +08:00
c8f3e8ab4d feat!: skip tls insecure verify by default 2023-02-23 22:33:54 +08:00
4be8524d80 feat: add alert for driver 2023-02-23 22:03:11 +08:00
0d3146b51d fix(webdav): disable put with empty path (close #3569) 2023-02-23 21:19:50 +08:00
f95d843969 feat(aliyundrive): add url_expire_sec for video preview (close #3522) 2023-02-23 20:50:31 +08:00
28aee8c493 feat: add aliyundrive open driver (#3437)
close #3533 
close #3521 
close #3459 
close #3375 

* feat: add aliyundrive open driver

* feat: adapt alist api

* fix: trailing spaces

* feat(aliyundrive_open): video preview api
2023-02-23 20:45:57 +08:00
de3ea82eb9 ci: add closeComment for stale 2023-02-22 22:17:33 +08:00
268ba3d069 fix(deps): update module github.com/gin-gonic/gin to v1.9.0 [skip ci] (#3551)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-22 21:24:35 +08:00
309d6558fb feat(local): add thumbnail for video with ffmpeg (#3556)
* feat(local): add ffmpeg

* fix: missed `+`

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-02-22 21:19:42 +08:00
c08fdfc868 fix: missed assignment [skip ci] 2023-02-22 20:20:28 +08:00
1b28e6af3e ci: replace issues-helper with stale for inactive check 2023-02-22 20:07:18 +08:00
8655e33e60 fix: incorrect api if not set site_url (6c2f348) 2023-02-21 19:57:50 +08:00
50579fef84 fix: cancel api replace to avoid missing host 2023-02-21 19:45:09 +08:00
e39299bfe2 fix(local): missed type of MkdirPerm (923937b) 2023-02-21 17:45:15 +08:00
d1ab2443f1 feat(qbittorrent): delete tags when deleting qbittorrent tasks (#3546)
* feat & refactor(qbittorrent/client): support `deleteFiles` arg for `Client.Delete()` method

* feat(qbittorrent/client): also delete tags in `Client.Delete()`
2023-02-21 16:45:41 +08:00
658cf368bb fix(deps): update github.com/t3rm1n4l/go-mega digest to b87ebf5 (#3539)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-21 16:43:37 +08:00
fd36ce59f6 fix(onedrive): either id or path in parentReference must be specified (close #3028) 2023-02-21 16:19:46 +08:00
95b3b87672 feat(sftp): support range header 2023-02-20 16:57:52 +08:00
0d07d81802 feat(smb): support range header (close #3192) 2023-02-20 16:46:38 +08:00
923937b530 feat(local): custom mkdir perm (close #3196) 2023-02-20 16:20:36 +08:00
09492193c4 fix(alist_v3): api error pass (close #3326) 2023-02-20 16:15:52 +08:00
40b26a81a0 fix!: change default epub viewer (close #3519) 2023-02-20 16:08:10 +08:00
4293a0ba8c fix(deps): update module github.com/golang-jwt/jwt/v4 to v4.5.0 [skip ci] (#3525)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-20 16:06:35 +08:00
6c2f3486fc fix!: reverse proxy to sub-directory (#3483)
From this commit on, if you want to reverse proxy to a sub-directory such as `alist` with `nginx`, you need a config like:

```nginx
location /alist/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    proxy_redirect off;
    proxy_pass http://127.0.0.1:5244/alist/;
    # the max size of file to upload
    client_max_body_size 20000m;
}
```
2023-02-18 19:03:07 +08:00
3c7512f64a fix(qbittorrent): fix two file transferring related bugs [skip ci] (#3501)
* fix(qbittorrent): delete qbittorrent task before transferring

* fix(qbittorrent): parse the path correctly when the torrent contains folders
2023-02-18 18:54:51 +08:00
84219d3d70 fix(deps): update module gorm.io/driver/mysql to v1.4.7 [skip ci] (#3495)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-18 18:13:41 +08:00
05d3727335 fix(deps): update module golang.org/x/image to v0.5.0 [security skip ci] (#3489)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-18 18:13:22 +08:00
ee77c3b113 fix: friendly tip for initial logging in [skip ci] (#3406)
* refactor: friendly tip for initial logging in

* fix CodeFactor issue

more info pls refer to: https://segmentfault.com/a/1190000043031147
2023-02-18 17:53:11 +08:00
fcaf485e0b fix(deps): update module gorm.io/driver/postgres to v1.4.8 [skip ci] (#3496)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-18 17:52:03 +08:00
bd83469bb1 fix(deps): update module golang.org/x/net to v0.7.0 [security skip ci] (#3502)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-18 17:51:33 +08:00
90f111b24f docs: translate title [skip ci] (#3498)
* Update README_cn.md

* Update README_cn.md

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-02-18 17:50:42 +08:00
7d1034c569 fix(aliyundrive): error occurred when running multiple instances at the same time (#3448)
* fix(aliyundrive):an error occurred when running multiple instances at the same time

* Update util.go

fix(aliyunpan):clear retry count
2023-02-16 22:12:19 +08:00
236c17176c fix(123): adapt new file list api (close #3464) 2023-02-16 22:09:45 +08:00
6ee4c10e8f chore(onedrive)!: change default redirect_uri [skip ci] 2023-02-16 21:37:20 +08:00
3798634028 fix(pikpak_share): change media url to content url (close #3273) (#3441) 2023-02-16 15:42:11 +08:00
567ba5ccd4 feat(aliyundrive_share): aliyun office preview (close #3408) 2023-02-15 16:52:24 +08:00
ae2ee1821a chore: change qBittorrent setting [skip ci] 2023-02-15 16:51:29 +08:00
805b1e4fa3 fix: different url encoding (close #3423) 2023-02-15 16:20:30 +08:00
d92c10da56 fix(qbittorrent): fix multiple bugs for qbittorrent download (close #3413 in #3427)
* fix(qbittorrent): wait for qbittorrent to parse torrent and create task

#3413

* fix(qbittorrent): check task state correctly

* fix(qbittorrent): fix path sent to `op.Put()`
2023-02-15 15:58:31 +08:00
6659f6d367 fix: windows arm64 build [skip ci] 2023-02-14 20:28:05 +08:00
fe416ba15c feat!: close sign_all by default 2023-02-14 19:20:15 +08:00
de66708b24 fix(aliyundrive): device session signature error (#3398)
* fix signature

* fix: indent-error-flow [skip ci]
2023-02-14 19:17:21 +08:00
2ca3e0b8bc fix(123): incorrect download url (close #3385) 2023-02-14 15:47:41 +08:00
ae04a0a760 chore: go mod tidy 2023-02-14 15:30:33 +08:00
c28168c970 feat: support qbittorrent (close #3087 in #3333)
* feat(qbittorrent): authorization and logging in support

* feat(qbittorrent/client): support `AddFromLink`

* refactor(qbittorrent/client): check authorization when getting a new client

* feat(qbittorrent/client): support `GetInfo`

* test(qbittorrent/client): update test cases

* feat(qbittorrent): init qbittorrent client on bootstrap

* feat(qbittorrent): support setting webui url via gin

* feat(qbittorrent/client): support deleting

* feat(qbittorrent/client): parse `TorrentStatus` enum when unmarshalling json in `GetInfo()`

* feat(qbittorrent/client): support getting files by id

* feat(qbittorrent): support adding qbittorrent tasks via gin

* refactor(qbittorrent/client): return a `Client` interface in `New()` instead of `*client`

* refactor: task handle

* chore: fix typo

* chore: change path

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-02-14 15:20:45 +08:00
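Taken together, the bullets above imply a small client abstraction around the qBittorrent WebUI API. A hedged sketch of its shape — method names come from the commit messages, but the signatures and field sets are assumptions:

```go
package qbittorrent

// TorrentStatus is the state enum parsed from the WebUI API (variants omitted).
type TorrentStatus string

// TorrentInfo and TorrentFile stand in for the WebUI API's JSON payloads;
// the field sets here are assumptions.
type TorrentInfo struct {
	Hash  string        `json:"hash"`
	State TorrentStatus `json:"state"`
}

type TorrentFile struct {
	Name string `json:"name"`
	Size int64  `json:"size"`
}

// Client mirrors the operations mentioned in the commit messages; the exact
// signatures in the repository may differ.
type Client interface {
	AddFromLink(link, savePath, id string) error
	GetInfo(id string) (TorrentInfo, error)
	GetFiles(id string) ([]TorrentFile, error)
	Delete(id string, deleteFiles bool) error
}

// New would authenticate against the WebUI and return a Client
// ("return a Client interface in New() instead of *client").
func New(webuiURL, username, password string) (Client, error) {
	// implementation elided in this sketch
	return nil, nil
}
```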
46b2ed2507 fix(aliyundriver):x-device-id error code (#3390)
* fix(aliyundriver):x-drvice-id error code

* fix(aliyunpan):session signature error

* fix typo

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2023-02-14 14:11:07 +08:00
22843ffc70 fix(fs): copy file if symlink failed (#3368) 2023-02-13 14:41:35 +08:00
e1b6368343 feat(aliyundrive): zero copy for local file uploads (#3359) 2023-02-12 16:13:57 +08:00
62dae50d70 feat(fs): create symbolic link instead of copy local files (close #2186 in #3354) 2023-02-12 16:03:11 +08:00
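The two fs entries above (create a symbolic link instead of copying local files, and fall back to copying when the symlink fails) compose into a pattern like the following; a generic sketch, not the repository's exact code:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// linkOrCopy tries a symlink first and falls back to a full copy,
// e.g. when the destination filesystem doesn't support symlinks.
func linkOrCopy(src, dst string) error {
	if err := os.Symlink(src, dst); err == nil {
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(linkOrCopy("/tmp/src.bin", "/tmp/dst.bin"))
}
```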
43a8ed472b fix: can't login by github after disable guest (close #3314) 2023-02-09 20:12:04 +08:00
d87878c232 ci: cancel win/arm64 on dev build [skip ci] 2023-02-09 20:05:00 +08:00
ab7dee49b0 feat: add windows/arm64 target (close #3308) 2023-02-09 19:52:40 +08:00
dca115506d fix(deps): update module golang.org/x/crypto to v0.6.0 [skip ci] (#3315)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-09 16:17:10 +08:00
be17fba0c6 fix(deps): update module golang.org/x/net to v0.6.0 [skip ci] (#3316)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-09 16:16:43 +08:00
cd58aa5efe fix(deps): update module gorm.io/driver/mysql to v1.4.6 (#3311) [skip ci]
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-09 16:00:08 +08:00
946833d2cc fix(deps): update module golang.org/x/image to v0.4.0 (#3323) [skip ci]
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-09 15:59:31 +08:00
eb42d09849 chore(deps): update docker/build-push-action action to v4 (#3200)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-08 22:22:33 +08:00
9d00492750 fix(deps): update module gorm.io/driver/postgres to v1.4.7 (#3312) [skip ci]
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-08 22:20:04 +08:00
b6711d6ab9 chore(deps): update actions-cool/issues-helper action to v3.4.0 (#3279) [skip ci]
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-08 22:12:02 +08:00
7bc46de8aa feat: settings for tls insecure skip verify (close #3306 in #3307) 2023-02-08 22:01:26 +08:00
a4f4fb2d73 chore(deps): upgrade github.com/caarlos0/env 2023-02-07 19:55:55 +08:00
a181b56ea7 feat: optional forward direct link params (close #3123) 2023-02-07 16:39:14 +08:00
d0b743d955 fix(onedrive): downloadUrl missed on personal account (close #3276) 2023-02-07 16:16:29 +08:00
a985b748e9 fix: allow_indexed check (close #3291) 2023-02-07 15:14:39 +08:00
44cb8aaafe feat: only log to std on debug/dev mode 2023-02-05 09:17:37 +08:00
51f5d1b3c4 fix(local): set perm 0777 for folder (close #2996) 2023-02-04 12:11:13 +08:00
36e0d6f787 perf(onedrive): optimize request parameter (close #3178) 2023-02-04 11:53:13 +08:00
3d0065bdcf feat!: allow disable user (close #3241)
From this commit, the guest user will be disabled by default
2023-02-04 11:44:17 +08:00
7bf8071095 fix(deps): update module github.com/aws/aws-sdk-go to v1.44.194 (#2940)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-04 11:24:47 +08:00
30d39f8e10 fix(deps): update module gorm.io/gorm to v1.24.5 (#3231)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-02-04 11:22:39 +08:00
20d3ef7de6 fix(139): check http code & increase chunk size (#3224)
* fixed: large file uploads caused connection resets

Signed-off-by: aimuz <mr.imuz@gmail.com>

* revert Dockerfile

---------

Signed-off-by: aimuz <mr.imuz@gmail.com>
Co-authored-by: Andy Hsu <i@nn.ci>
2023-02-04 11:20:13 +08:00
86e5dae4d1 fix(aliyundrive_share): no permission after share_id change (#3246) 2023-02-04 11:10:28 +08:00
d89b1d4871 fix(baidu_baidu_netdisk): override for create (close #3242) 2023-02-03 18:10:39 +08:00
080e6fb22a fix(google_drive): allow download abuse file (#3217)
Adds the acknowledgeAbuse=true parameter so files flagged as potentially risky can be downloaded directly
2023-02-01 19:43:36 +08:00
e1cd71616d feat(aliyundrive): internal upload (aliyun ECS for Beijing area only) (#3188)
Co-authored-by: wangwuxuan2011 <git@wangwuxuan.cn>
2023-01-30 11:18:08 +08:00
c92e11dad5 ci: auto build docker with aria2 2023-01-27 15:16:00 +08:00
b52e8747fa fix(alist_v3): incorrect dir on remove (close #3154) 2023-01-27 14:51:56 +08:00
14305748f0 fix(lanzou): files cannot be uploaded to the specified directory (#3157)
* Update driver.go

* fix(Lanzou):files cannot be uploaded to the specified directory

Solve the problem that files cannot be uploaded to the specified directory
2023-01-27 14:46:54 +08:00
44f8112e53 fix(s3): ignore current folder in contents (close #3137) 2023-01-25 19:58:00 +08:00
6a90b1d40a fix(deps): update module github.com/caarlos0/env/v6 to v7 (#3117)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-01-23 20:02:09 +08:00
b42ec3e810 fix: relative path judgment (close #3130) 2023-01-23 15:52:46 +08:00
28875ce304 fix(alist_v3): incorrect src_dir on move and copy (close #3121 pr #3124)
* fix(alist_v3):add dir check(close #3121)

* Update driver.go

Co-authored-by: Noah Hsu <i@nn.ci>
2023-01-22 18:52:54 +08:00
9b99e8ab70 fix(search): allow indexed check (close #3103) 2023-01-19 17:00:49 +08:00
98872a8fdb fix: cancel EXCLUSIVE mode on sqlite3
because it will result in failure to get admin's info
2023-01-19 16:49:43 +08:00
ce4a295008 fix!: check https with X-Forwarded-Proto
From this commit on, the old settings `api_url` and `base_path` are no longer read.
2023-01-19 12:16:42 +08:00
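The X-Forwarded-Proto change above switches HTTPS detection from the old settings to the standard proxy header; conceptually the check is just the following (a hedged sketch, not the actual handler):

```go
package main

import (
	"fmt"
	"net/http"
)

// isHTTPS reports whether the original client connection used HTTPS,
// trusting the X-Forwarded-Proto header set by a reverse proxy.
func isHTTPS(r *http.Request) bool {
	if r.TLS != nil {
		return true
	}
	return r.Header.Get("X-Forwarded-Proto") == "https"
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		scheme := "http"
		if isHTTPS(r) {
			scheme = "https"
		}
		fmt.Fprintf(w, "site url scheme: %s\n", scheme)
	})
	_ = http.ListenAndServe(":5244", nil)
}
```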
bc1babb5b5 fix(lanzou): shortened filename when uploading files (#3099) 2023-01-19 12:05:14 +08:00
d61242d85d feat: add wma to default audio types (close #3088) 2023-01-18 10:50:28 +08:00
99d7105357 fix: move virtual files to end (close #3052) 2023-01-18 10:23:54 +08:00
be8a9c5f07 fix: mark progress as done after clear (#3086) 2023-01-18 09:39:32 +08:00
530e74c70b fix: avoid regular expression match current directory (#3078)
* fix: avoid regular expression match current directory

* fix: optimize and regexp exclude slash

Co-authored-by: wuxuan <refused@wuxuan.eu.org>
2023-01-17 21:54:25 +08:00
0a337756ba fix(quark): upload file integer divide by zero panic. (close #3076 pr #3077) 2023-01-17 18:02:06 +08:00
26fe0a7684 feat: customize index max depth
Because issues in some drivers may cause an infinite loop
2023-01-17 17:33:18 +08:00
9c7e451c03 perf: optimize sqlite3 (#3074)
- use journal mode to WAL
- set locking mode to EXCLUSIVE
- set auto vacuum

ref:
 - https://www.sqlite.org/pragma.html#pragma_journal_mode
 - https://www.sqlite.org/pragma.html#pragma_locking_mode
 - https://www.sqlite.org/pragma.html#pragma_auto_vacuum
2023-01-17 17:06:11 +08:00
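The three pragmas listed above can be applied on any SQLite connection; a minimal sketch using database/sql (the driver choice and auto_vacuum value are assumptions — the commit doesn't say which vacuum mode was picked, and a later entry above cancels the EXCLUSIVE locking mode):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver; the project may use another
)

func main() {
	db, err := sql.Open("sqlite3", "data.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pragmas referenced by the commit; see the sqlite.org links above.
	for _, pragma := range []string{
		"PRAGMA journal_mode=WAL;",        // write-ahead log for better concurrency
		"PRAGMA locking_mode=EXCLUSIVE;",  // keep the file lock between transactions
		"PRAGMA auto_vacuum=INCREMENTAL;", // assumed value; FULL is also possible
	} {
		if _, err := db.Exec(pragma); err != nil {
			log.Fatalf("%s: %v", pragma, err)
		}
	}
}
```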
8df1455f25 workflow: add tips for Reproduction 2023-01-17 16:34:56 +08:00
9d9377f65d fix(local): incorrect path of thumbnail (for 6453ae0) 2023-01-16 20:02:30 +08:00
8b523fab8b revert: add Getter interface back 2023-01-16 19:55:43 +08:00
6453ae0968 fix(search): empty parent where update (close #2810) 2023-01-16 17:33:24 +08:00
1cfd47a258 feat: install tzdata in the docker image (#3056)
* install tzdata without caching repository metadata (apk --no-cache)

* add TZ variable example
2023-01-16 13:43:15 +08:00
8e2069c554 fix: db non full-text import error (#3055) 2023-01-15 23:49:23 +08:00
6b8778a63c fix: don't save if refresh token is empty (close #2957) 2023-01-14 20:33:07 +08:00
aaa8c440fe fix(seafile): token refresh (#3010)
* docs: add Seafile support

* fix: Seafile token refresh
2023-01-13 21:20:21 +08:00
2dc5dec83c feat: add Cloudreve driver (close #2658 in #2997)
* feat: add cloudreve support

add cloudreve support

(#2658)

* docs(README): add suppuort cloudreve

* fix(cloudreve): add cookie refresh

Co-authored-by: panici <zhangjun@zjdeMacBook-Pro.local>
2023-01-12 19:57:43 +08:00
1eca2b83ed perf(terabox): optimize prompt message (#3002)
* perf(terabox):prompt login status when init the driver

* docs:add Terabox

* perf(terabox):prompt area is not available

* style(terabox): del else
2023-01-12 19:40:38 +08:00
48e6f3bb23 feat: add Seafile driver (#2964)
* feat: add Seafile driver

* docs: add Seafile support

* refactor: optimization

* fix: close redirect on `move` and `rename`

Co-authored-by: Noah Hsu <i@nn.ci>
2023-01-10 20:51:42 +08:00
0ad9e17196 feat: lazy index creation on searcher init (#2962) 2023-01-09 14:09:21 +08:00
9398cdaac1 fix(s3): allow http/https headers to be attached from CustomHost (#2959)
* add(s3):Allow http/https headers to be attached to CustomHost

* optimize

Co-authored-by: wangwuxuan <wangwuxuan@163.com>
Co-authored-by: Noah Hsu <i@nn.ci>
2023-01-08 21:47:45 +08:00
2f19d4a834 perf(lanzou): optimize the use of list cache (#2956)
* fix:local sort not cache

* perf(lanzou): Optimize the use of list cache
2023-01-08 21:31:35 +08:00
99a186d01b fix(139): upload failed (#2950)
fix: files exceeding the size limit could not be uploaded
fix: signing failed when the file name contained special characters
improve: optimize memory usage
Signed-off-by: aimuz <mr.imuz@gmail.com>

Signed-off-by: aimuz <mr.imuz@gmail.com>
2023-01-08 16:31:00 +08:00
40ef233d24 fix(USS): resolve driver problem (#2942)
* remove:"Endpoint" and "CustomHost" are the same thing, remove "CustomHost"

* fix: file download url error

* fix: too many file get list error

Co-authored-by: wangwuxuan <wangwuxuan@163.com>
2023-01-08 16:30:05 +08:00
7c3ea193ff fix(lanzou):webdav unable to download and upload (close #2700)
* fix(lanzou):Unable to get folder

* fix(lanzou):webdav unable to download and upload. (close 2700)
2023-01-08 15:37:39 +08:00
7902b646ff feat: add database non full text index (close #2916) 2023-01-07 01:40:49 +08:00
1c453ae147 feat: add a switch to enable auto update index (close #2930) 2023-01-07 00:59:30 +08:00
cf5714ba73 fix(smb): use correct path (#2933)
There is no need to add a `.` prefix as there is no leading `/` in paths
2023-01-07 00:47:08 +08:00
d655340634 fix(lanzou): cookie type failed to get file (#2926) 2023-01-06 18:08:40 +08:00
8d4ac031c3 chore(deps): update module github.com/aws/aws-sdk-go to v1.44.174 [skip ci] (#2920)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-01-06 15:36:33 +08:00
a1ded3a339 refactor(baidu_photo): optimize code (close #2911 pr #2924) 2023-01-06 15:36:05 +08:00
196 changed files with 7001 additions and 1570 deletions

View File

@ -43,8 +43,8 @@ body:
attributes: attributes:
label: Reproduction / 复现链接 label: Reproduction / 复现链接
description: | description: |
Please provide a link to a repo that can reproduce the problem you ran into. Please provide a link to a repo that can reproduce the problem you ran into. Please be aware that your issue may be closed directly if you don't provide it.
请提供能复现此问题的链接 请提供能复现此问题的链接请知悉如果不提供它你的issue可能会被直接关闭。
validations: validations:
required: true required: true
- type: textarea - type: textarea

19
.github/stale.yml vendored Normal file
View File

@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 44
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 8
# Issues with these labels will never be considered stale
exemptLabels:
- accepted
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
This issue was closed due to inactive more than 52 days. You can reopen or
recreate it if you think it should continue. Thank you for your contributions again.

View File

@ -21,7 +21,7 @@ jobs:
runs-on: ${{ matrix.platform }} runs-on: ${{ matrix.platform }}
steps: steps:
- name: Setup go - name: Setup go
uses: actions/setup-go@v3 uses: actions/setup-go@v4
with: with:
go-version: ${{ matrix.go-version }} go-version: ${{ matrix.go-version }}
@ -54,7 +54,7 @@ jobs:
cd alist-web cd alist-web
git add . git add .
git config --local user.email "i@nn.ci" git config --local user.email "i@nn.ci"
git config --local user.name "Noah Hsu" git config --local user.name "Andy Hsu"
git commit -m "chore: auto update i18n file" -a 2>/dev/null || : git commit -m "chore: auto update i18n file" -a 2>/dev/null || :
cd .. cd ..

View File

@ -16,7 +16,7 @@ jobs:
runs-on: ${{ matrix.platform }} runs-on: ${{ matrix.platform }}
steps: steps:
- name: Setup Go - name: Setup Go
uses: actions/setup-go@v3 uses: actions/setup-go@v4
with: with:
go-version: ${{ matrix.go-version }} go-version: ${{ matrix.go-version }}
@ -25,6 +25,7 @@ jobs:
- name: Install dependencies - name: Install dependencies
run: | run: |
sudo snap install zig --classic --beta
docker pull crazymax/xgo:latest docker pull crazymax/xgo:latest
go install github.com/crazy-max/xgo@latest go install github.com/crazy-max/xgo@latest
sudo apt install upx sudo apt install upx

View File

@ -6,7 +6,7 @@ on:
jobs: jobs:
build_docker: build_docker:
name: Docker name: Build docker
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
@ -30,10 +30,36 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }} password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push - name: Build and push
id: docker_build id: docker_build
uses: docker/build-push-action@v3 uses: docker/build-push-action@v4
with: with:
context: . context: .
push: true push: true
tags: ${{ steps.meta.outputs.tags }} tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }} labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64 platforms: linux/amd64,linux/arm64
build_docker_with_aria2:
needs: build_docker
name: Build docker with aria2
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v3
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Commit
run: |
git config --local user.email "i@nn.ci"
git config --local user.name "Noah Hsu"
git commit --allow-empty -m "Trigger build for ${{ github.sha }}"
- name: Push commit
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2

19
.github/workflows/changelog.yml vendored Normal file
View File

@ -0,0 +1,19 @@
name: auto changelog
on:
push:
tags:
- '*'
jobs:
changelog:
name: Create Release
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: 0
- run: npx changelogithub # or changelogithub@0.12 if ensure the stable result
env:
GITHUB_TOKEN: ${{secrets.MY_TOKEN}}

View File

@ -1,17 +0,0 @@
name: Check inactive
on:
schedule:
- cron: "0 0 1 * *"
jobs:
check-inactive:
runs-on: ubuntu-latest
steps:
- name: check-inactive
uses: actions-cool/issues-helper@v3
with:
actions: 'check-inactive'
token: ${{ secrets.GITHUB_TOKEN }}
inactive-day: 30
body: Hello, this issue has been inactive for more than 30 days and will be closed if inactive for another 30 days.

View File

@ -1,21 +0,0 @@
name: Close inactive
on:
schedule:
- cron: "0 0 */7 * *"
workflow_dispatch:
jobs:
close-inactive:
runs-on: ubuntu-latest
steps:
- name: close-issues
uses: actions-cool/issues-helper@v3
with:
actions: 'close-issues'
token: ${{ secrets.GITHUB_TOKEN }}
labels: 'inactive'
inactive-day: 30
close-reason: 'not_planned'
body: |
Hello @${{ github.event.issue.user.login }}, this issue was closed due to inactive more than 60 days. You can reopen or recreate it if you think it should continue.

View File

@ -10,11 +10,11 @@ jobs:
if: github.event.label.name == 'question' if: github.event.label.name == 'question'
steps: steps:
- name: Create comment - name: Create comment
uses: actions-cool/issues-helper@v3.3.3 uses: actions-cool/issues-helper@v3.4.0
with: with:
actions: 'create-comment' actions: 'create-comment'
token: ${{ secrets.GITHUB_TOKEN }} token: ${{ secrets.GITHUB_TOKEN }}
issue-number: ${{ github.event.issue.number }} issue-number: ${{ github.event.issue.number }}
body: | body: |
Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 7 days. Hello @${{ github.event.issue.user.login }}, please input issue by template and add detail. Issues labeled by `question` will be closed if no activities in 7 days.
你好 @${{ github.event.issue.user.login }}请按照issue模板填写, 并详细说明问题/复现步骤/复现链接/实现思路或提供更多信息等, 7天内未回复issue自动关闭。 你好 @${{ github.event.issue.user.login }}请按照issue模板填写, 并详细说明问题/日志记录/复现步骤/复现链接/实现思路或提供更多信息等, 7天内未回复issue自动关闭。

View File

@ -1,33 +1,27 @@
name: release name: release
on: on:
push: release:
tags: types: [ published ]
- '*'
jobs: jobs:
changelog:
name: Create Release
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: 0
- run: npx changelogithub # or changelogithub@0.12 if ensure the stable result
env:
GITHUB_TOKEN: ${{secrets.MY_TOKEN}}
release: release:
needs: changelog
strategy: strategy:
matrix: matrix:
platform: [ubuntu-latest] platform: [ ubuntu-latest ]
go-version: [1.19] go-version: [ 1.19 ]
name: Release name: Release
runs-on: ${{ matrix.platform }} runs-on: ${{ matrix.platform }}
steps: steps:
- name: Prerelease
uses: irongut/EditRelease@v1.2.0
with:
token: ${{ secrets.MY_TOKEN }}
id: ${{ github.event.release.id }}
prerelease: true
- name: Setup Go - name: Setup Go
uses: actions/setup-go@v3 uses: actions/setup-go@v4
with: with:
go-version: ${{ matrix.go-version }} go-version: ${{ matrix.go-version }}
@ -38,6 +32,7 @@ jobs:
- name: Install dependencies - name: Install dependencies
run: | run: |
sudo snap install zig --classic --beta
docker pull crazymax/xgo:latest docker pull crazymax/xgo:latest
go install github.com/crazy-max/xgo@latest go install github.com/crazy-max/xgo@latest
sudo apt install upx sudo apt install upx
@ -46,7 +41,41 @@ jobs:
run: | run: |
bash build.sh release bash build.sh release
- name: Release - name: Release latest
uses: irongut/EditRelease@v1.2.0
with:
token: ${{ secrets.MY_TOKEN }}
id: ${{ github.event.release.id }}
prerelease: false
- name: Upload assets
uses: softprops/action-gh-release@v1 uses: softprops/action-gh-release@v1
with: with:
files: build/compress/* files: build/compress/*
release_desktop:
needs: release
name: Release desktop
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v3
with:
repository: alist-org/desktop-release
ref: main
persist-credentials: false
fetch-depth: 0
- name: Add tag
run: |
git config --local user.email "i@nn.ci"
git config --local user.name "Andy Hsu"
version=$(wget -qO- -t1 -T2 "https://api.github.com/repos/alist-org/alist/releases/latest" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
git tag -a $version -m "release $version"
- name: Push tags
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/desktop-release

View File

@ -7,7 +7,7 @@ on:
jobs: jobs:
release_docker: release_docker:
name: Docker name: Release Docker
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
@ -33,10 +33,36 @@ jobs:
- name: Build and push - name: Build and push
id: docker_build id: docker_build
uses: docker/build-push-action@v3 uses: docker/build-push-action@v4
with: with:
context: . context: .
push: true push: true
tags: ${{ steps.meta.outputs.tags }} tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }} labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x
release_docker_with_aria2:
needs: release_docker
name: Release docker with aria2
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v3
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Add tag
run: |
git config --local user.email "i@nn.ci"
git config --local user.name "Andy Hsu"
git tag -a ${{ github.ref_name }} -m "release ${{ github.ref_name }}"
- name: Push tags
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2

View File

@ -2,7 +2,7 @@ FROM alpine:edge as builder
LABEL stage=go-builder LABEL stage=go-builder
WORKDIR /app/ WORKDIR /app/
COPY ./ ./ COPY ./ ./
RUN apk add --no-cache bash git go gcc musl-dev curl; \ RUN apk add --no-cache bash curl gcc git go musl-dev; \
bash build.sh release docker bash build.sh release docker
FROM alpine:edge FROM alpine:edge
@ -11,8 +11,8 @@ VOLUME /opt/alist/data/
WORKDIR /opt/alist/ WORKDIR /opt/alist/
COPY --from=builder /app/bin/alist ./ COPY --from=builder /app/bin/alist ./
COPY entrypoint.sh /entrypoint.sh COPY entrypoint.sh /entrypoint.sh
RUN apk add ca-certificates bash su-exec; \ RUN apk add --no-cache bash ca-certificates su-exec tzdata; \
chmod +x /entrypoint.sh chmod +x /entrypoint.sh
ENV PUID=0 PGID=0 UMASK=022 ENV PUID=0 PGID=0 UMASK=022
EXPOSE 5244 EXPOSE 5244
ENTRYPOINT [ "/entrypoint.sh" ] CMD [ "/entrypoint.sh" ]

View File

@ -53,6 +53,7 @@ English | [中文](./README_cn.md) | [Contributing](./CONTRIBUTING.md) | [CODE_O
- [x] FTP / SFTP - [x] FTP / SFTP
- [x] [PikPak](https://www.mypikpak.com/) - [x] [PikPak](https://www.mypikpak.com/)
- [x] [S3](https://aws.amazon.com/s3/) - [x] [S3](https://aws.amazon.com/s3/)
- [x] [Seafile](https://seafile.com/)
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage) - [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] WebDav(Support OneDrive/SharePoint without API) - [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ )) - [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
@ -60,6 +61,7 @@ English | [中文](./README_cn.md) | [Contributing](./CONTRIBUTING.md) | [CODE_O
- [x] [139yun](https://yun.139.com/) (Personal, Family) - [x] [139yun](https://yun.139.com/) (Personal, Family)
- [x] [YandexDisk](https://disk.yandex.com/) - [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/) - [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main)
- [x] [Quark](https://pan.quark.cn) - [x] [Quark](https://pan.quark.cn)
- [x] [Thunder](https://pan.xunlei.com) - [x] [Thunder](https://pan.xunlei.com)
- [x] [Lanzou](https://www.lanzou.com/) - [x] [Lanzou](https://www.lanzou.com/)
@ -69,6 +71,7 @@ English | [中文](./README_cn.md) | [Contributing](./CONTRIBUTING.md) | [CODE_O
- [x] [Baidu photo](https://photo.baidu.com/) - [x] [Baidu photo](https://photo.baidu.com/)
- [x] SMB - [x] SMB
- [x] [115](https://115.com/) - [x] [115](https://115.com/)
- [X] Cloudreve
- [x] Easy to deploy and out-of-the-box - [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...) - [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode - [x] Image preview in gallery mode

View File

@ -41,7 +41,7 @@
[English](./README.md) | 中文 | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md) [English](./README.md) | 中文 | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md)
## Features ## 功能
- [x] 多种存储 - [x] 多种存储
- [x] 本地存储 - [x] 本地存储
@ -53,6 +53,7 @@
- [x] FTP / SFTP - [x] FTP / SFTP
- [x] [PikPak](https://www.mypikpak.com/) - [x] [PikPak](https://www.mypikpak.com/)
- [x] [S3](https://aws.amazon.com/cn/s3/) - [x] [S3](https://aws.amazon.com/cn/s3/)
- [x] [Seafile](https://seafile.com/)
- [x] [又拍云对象存储](https://www.upyun.com/products/file-storage) - [x] [又拍云对象存储](https://www.upyun.com/products/file-storage)
- [x] WebDav(支持无API的OneDrive/SharePoint) - [x] WebDav(支持无API的OneDrive/SharePoint)
- [x] Teambition[中国](https://www.teambition.com/ )[国际](https://us.teambition.com/ ) - [x] Teambition[中国](https://www.teambition.com/ )[国际](https://us.teambition.com/ )
@ -69,6 +70,7 @@
- [x] [一刻相册](https://photo.baidu.com/) - [x] [一刻相册](https://photo.baidu.com/)
- [x] SMB - [x] SMB
- [x] [115](https://115.com/) - [x] [115](https://115.com/)
- [X] Cloudreve
- [x] 部署方便,开箱即用 - [x] 部署方便,开箱即用
- [x] 文件预览PDF、markdown、代码、纯文本…… - [x] 文件预览PDF、markdown、代码、纯文本……
- [x] 画廊模式下的图像预览 - [x] 画廊模式下的图像预览
@ -87,7 +89,7 @@
- [x] 离线下载 - [x] 离线下载
- [x] 跨存储复制文件 - [x] 跨存储复制文件
## Document ## 文档
<https://alist.nn.ci/zh/> <https://alist.nn.ci/zh/>
@ -95,21 +97,21 @@
<https://al.nn.ci> <https://al.nn.ci>
## Discussion ## 讨论
一般问题请到[讨论论坛](https://github.com/Xhofe/alist/discussions) **issue仅针对错误报告和功能请求。** 一般问题请到[讨论论坛](https://github.com/Xhofe/alist/discussions) **issue仅针对错误报告和功能请求。**
## Sponsor ## 赞助
AList 是一个开源软件如果你碰巧喜欢这个项目并希望我继续下去请考虑赞助我或提供一个单一的捐款感谢所有的爱和支持https://alist.nn.ci/zh/guide/sponsor.html AList 是一个开源软件如果你碰巧喜欢这个项目并希望我继续下去请考虑赞助我或提供一个单一的捐款感谢所有的爱和支持https://alist.nn.ci/zh/guide/sponsor.html
### Special sponsors ### 特别赞助
- [找资源 - 阿里云盘资源搜索引擎](https://zhaoziyuan.la/) - [找资源 - 阿里云盘资源搜索引擎](https://zhaoziyuan.la/)
- [KinhDown 百度云盘不限速下载永久免费已稳定运行3年非常可靠Q群 -> 786799372](https://kinhdown.com) - [KinhDown 百度云盘不限速下载永久免费已稳定运行3年非常可靠Q群 -> 786799372](https://kinhdown.com)
- [JetBrains: Essential tools for software developers and teams](https://www.jetbrains.com/) - [JetBrains: Essential tools for software developers and teams](https://www.jetbrains.com/)
## Contributors ## 贡献者
Thanks goes to these wonderful people: Thanks goes to these wonderful people:
@ -128,4 +130,4 @@ Thanks goes to these wonderful people:
--- ---
> [@博客](https://nn.ci/) · [@GitHub](https://github.com/Xhofe) · [@Telegram群](https://t.me/alist_chat) · [@Discord](https://discord.gg/F4ymsH4xv2) > [@博客](https://nn.ci/) · [@GitHub](https://github.com/Xhofe) · [@Telegram群](https://t.me/alist_chat) · [@Discord](https://discord.gg/F4ymsH4xv2)

View File

@ -1,7 +1,7 @@
appName="alist" appName="alist"
builtAt="$(date +'%F %T %z')" builtAt="$(date +'%F %T %z')"
goVersion=$(go version | sed 's/go version //') goVersion=$(go version | sed 's/go version //')
gitAuthor=$(git show -s --format='format:%aN <%ae>' HEAD) gitAuthor="Xhofe <i@nn.ci>"
gitCommit=$(git log --pretty=format:"%h" -1) gitCommit=$(git log --pretty=format:"%h" -1)
if [ "$1" = "dev" ]; then if [ "$1" = "dev" ]; then
@ -41,6 +41,17 @@ FetchWebRelease() {
rm -rf dist.tar.gz rm -rf dist.tar.gz
} }
BuildWinArm64() {
echo building for windows-arm64
chmod +x ./wrapper/zcc-arm64
chmod +x ./wrapper/zcxx-arm64
export GOOS=windows
export GOARCH=arm64
export CC=$(pwd)/wrapper/zcc-arm64
export CXX=$(pwd)/wrapper/zcxx-arm64
go build -o "$1" -ldflags="$ldflags" -tags=jsoniter .
}
BuildDev() { BuildDev() {
rm -rf .git/ rm -rf .git/
xgo -targets=linux/amd64,windows/amd64,darwin/amd64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter . xgo -targets=linux/amd64,windows/amd64,darwin/amd64 -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
@ -48,7 +59,8 @@ BuildDev() {
mv alist-* dist mv alist-* dist
cd dist cd dist
upx -9 ./alist-linux* upx -9 ./alist-linux*
upx -9 ./alist-windows* cp ./alist-windows-amd64.exe ./alist-windows-amd64-upx.exe
upx -9 ./alist-windows-amd64-upx.exe
find . -type f -print0 | xargs -0 md5sum >md5.txt find . -type f -print0 | xargs -0 md5sum >md5.txt
cat md5.txt cat md5.txt
} }
@ -80,10 +92,12 @@ BuildRelease() {
export CGO_ENABLED=1 export CGO_ENABLED=1
go build -o ./build/$appName-$os_arch -ldflags="$muslflags" -tags=jsoniter . go build -o ./build/$appName-$os_arch -ldflags="$muslflags" -tags=jsoniter .
done done
BuildWinArm64 ./build/alist-windows-arm64.exe
xgo -out "$appName" -ldflags="$ldflags" -tags=jsoniter . xgo -out "$appName" -ldflags="$ldflags" -tags=jsoniter .
# why? Because some target platforms seem to have issues with upx compression # why? Because some target platforms seem to have issues with upx compression
upx -9 ./alist-linux-amd64 upx -9 ./alist-linux-amd64
upx -9 ./alist-windows* cp ./alist-windows-amd64.exe ./alist-windows-amd64-upx.exe
upx -9 ./alist-windows-amd64-upx.exe
mv alist-* build mv alist-* build
} }

View File

@ -22,6 +22,8 @@ var Cancel2FACmd = &cobra.Command{
err := op.Cancel2FAByUser(admin) err := op.Cancel2FAByUser(admin)
if err != nil { if err != nil {
utils.Log.Errorf("failed to cancel 2FA: %+v", err) utils.Log.Errorf("failed to cancel 2FA: %+v", err)
} else {
utils.Log.Info("2FA canceled")
} }
} }
}, },

View File

@ -6,4 +6,5 @@ var (
NoPrefix bool NoPrefix bool
Dev bool Dev bool
ForceBinDir bool ForceBinDir bool
LogStd bool
) )

View File

@ -78,10 +78,19 @@ func writeFile(name string, data interface{}) {
func generateDriversJson() { func generateDriversJson() {
drivers := make(Drivers) drivers := make(Drivers)
drivers["drivers"] = make(KV[interface{}]) drivers["drivers"] = make(KV[interface{}])
drivers["config"] = make(KV[interface{}])
driverInfoMap := op.GetDriverInfoMap() driverInfoMap := op.GetDriverInfoMap()
for k, v := range driverInfoMap { for k, v := range driverInfoMap {
drivers["drivers"][k] = convert(k) drivers["drivers"][k] = convert(k)
items := make(KV[interface{}]) items := make(KV[interface{}])
config := map[string]string{}
if v.Config.Alert != "" {
alert := strings.SplitN(v.Config.Alert, "|", 2)
if len(alert) > 1 {
config["alert"] = alert[1]
}
}
drivers["config"][k] = config
for i := range v.Additional { for i := range v.Additional {
item := v.Additional[i] item := v.Additional[i]
items[item.Name] = convert(item.Name) items[item.Name] = convert(item.Name)

View File

@ -29,4 +29,5 @@ func init() {
RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix") RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix")
RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode") RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode")
RootCmd.PersistentFlags().BoolVar(&flags.ForceBinDir, "force-bin-dir", false, "Force to use the directory where the binary file is located as data directory") RootCmd.PersistentFlags().BoolVar(&flags.ForceBinDir, "force-bin-dir", false, "Force to use the directory where the binary file is located as data directory")
RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "Force to log to std")
} }

View File

@ -29,6 +29,7 @@ the address is defined in config file`,
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
Init() Init()
bootstrap.InitAria2() bootstrap.InitAria2()
bootstrap.InitQbittorrent()
bootstrap.LoadStorages() bootstrap.LoadStorages()
if !flags.Debug && !flags.Dev { if !flags.Debug && !flags.Dev {
gin.SetMode(gin.ReleaseMode) gin.SetMode(gin.ReleaseMode)

52
cmd/storage.go Normal file
View File

@ -0,0 +1,52 @@
/*
Copyright © 2023 NAME HERE <EMAIL ADDRESS>
*/
package cmd
import (
"github.com/alist-org/alist/v3/internal/db"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/spf13/cobra"
)
// storageCmd represents the storage command
var storageCmd = &cobra.Command{
Use: "storage",
Short: "Manage storage",
}
func init() {
var mountPath string
var disable = &cobra.Command{
Use: "disable",
Short: "Disable a storage",
Run: func(cmd *cobra.Command, args []string) {
Init()
storage, err := db.GetStorageByMountPath(mountPath)
if err != nil {
utils.Log.Errorf("failed to query storage: %+v", err)
} else {
storage.Disabled = true
err = db.UpdateStorage(storage)
if err != nil {
utils.Log.Errorf("failed to update storage: %+v", err)
} else {
utils.Log.Infof("Storage with mount path [%s] have been disabled", mountPath)
}
}
},
}
disable.Flags().StringVarP(&mountPath, "mount-path", "m", "", "The mountPath of storage")
RootCmd.AddCommand(storageCmd)
storageCmd.AddCommand(disable)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// storageCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// storageCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

View File

@ -10,5 +10,6 @@ services:
- PUID=0 - PUID=0
- PGID=0 - PGID=0
- UMASK=022 - UMASK=022
- TZ=UTC
container_name: alist container_name: alist
image: 'xhofe/alist:latest' image: 'xhofe/alist:latest'

View File

@ -44,7 +44,11 @@ func (d *Pan115) List(ctx context.Context, dir model.Obj, args model.ListArgs) (
} }
func (d *Pan115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Pan115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
downloadInfo, err := d.client.Download(file.(driver115.File).PickCode) downloadInfo, err := d.client.
SetUserAgent(driver115.UA115Browser).
Download(file.(driver115.File).PickCode)
// recover for upload
d.client.SetUserAgent(driver115.UA115Desktop)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -6,16 +6,18 @@ import (
) )
type Addition struct { type Addition struct {
Cookie string `json:"cookie"` Cookie string `json:"cookie" type:"text" help:"one of QR code token and cookie required"`
QRCodeToken string `json:"qrcode_token"` QRCodeToken string `json:"qrcode_token" type:"text" help:"one of QR code token and cookie required"`
PageSize int64 `json:"page_size" type:"number" default:"56" help:"list api per page size of 115 driver"`
driver.RootID driver.RootID
} }
var config = driver.Config{ var config = driver.Config{
Name: "115 Cloud", Name: "115 Cloud",
DefaultRoot: "0", DefaultRoot: "0",
OnlyProxy: true, OnlyProxy: true,
OnlyLocal: true, OnlyLocal: true,
NoOverwriteUpload: true,
} }
func init() { func init() {

View File

@ -7,7 +7,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
) )
var UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36 115Browser/23.9.3.2 115disk/30.1.0" var UserAgent = driver.UA115Desktop
func (d *Pan115) login() error { func (d *Pan115) login() error {
var err error var err error
@ -38,7 +38,10 @@ func (d *Pan115) login() error {
func (d *Pan115) getFiles(fileId string) ([]driver.File, error) { func (d *Pan115) getFiles(fileId string) ([]driver.File, error) {
res := make([]driver.File, 0) res := make([]driver.File, 0)
files, err := d.client.List(fileId) if d.PageSize <= 0 {
d.PageSize = driver.FileListLimit
}
files, err := d.client.ListWithLimit(fileId, d.PageSize)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -29,7 +29,6 @@ import (
type Pan123 struct { type Pan123 struct {
model.Storage model.Storage
Addition Addition
AccessToken string
} }
func (d *Pan123) Config() driver.Config { func (d *Pan123) Config() driver.Config {
@ -41,7 +40,8 @@ func (d *Pan123) GetAddition() driver.Additional {
} }
func (d *Pan123) Init(ctx context.Context) error { func (d *Pan123) Init(ctx context.Context) error {
return d.login() _, err := d.request(UserInfo, http.MethodGet, nil, nil)
return err
} }
func (d *Pan123) Drop(ctx context.Context) error { func (d *Pan123) Drop(ctx context.Context) error {
@ -77,7 +77,7 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
"size": f.Size, "size": f.Size,
"type": f.Type, "type": f.Type,
} }
resp, err := d.request("https://www.123pan.com/api/file/download_info", http.MethodPost, func(req *resty.Request) { resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetHeaders(headers) req.SetBody(data).SetHeaders(headers)
}, nil) }, nil)
if err != nil { if err != nil {
@ -96,14 +96,14 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
return nil, err return nil, err
} }
} }
u_ := fmt.Sprintf("https://%s%s", u.Host, u.Path) u_ := u.String()
res, err := base.NoRedirectClient.R().SetQueryParamsFromValues(u.Query()).Head(u_) res, err := base.NoRedirectClient.R().SetQueryParamsFromValues(u.Query()).Head(u_)
if err != nil { if err != nil {
return nil, err return nil, err
} }
log.Debug(res.String()) log.Debug(res.String())
link := model.Link{ link := model.Link{
URL: downloadUrl, URL: u_,
} }
log.Debugln("res code: ", res.StatusCode()) log.Debugln("res code: ", res.StatusCode())
if res.StatusCode() == 302 { if res.StatusCode() == 302 {
@ -124,7 +124,7 @@ func (d *Pan123) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"size": 0, "size": 0,
"type": 1, "type": 1,
} }
_, err := d.request("https://www.123pan.com/api/file/upload_request", http.MethodPost, func(req *resty.Request) { _, err := d.request(Mkdir, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data)
}, nil) }, nil)
return err return err
@ -135,7 +135,7 @@ func (d *Pan123) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIdList": []base.Json{{"FileId": srcObj.GetID()}}, "fileIdList": []base.Json{{"FileId": srcObj.GetID()}},
"parentFileId": dstDir.GetID(), "parentFileId": dstDir.GetID(),
} }
_, err := d.request("https://www.123pan.com/api/file/mod_pid", http.MethodPost, func(req *resty.Request) { _, err := d.request(Move, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data)
}, nil) }, nil)
return err return err
@ -147,7 +147,7 @@ func (d *Pan123) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"fileId": srcObj.GetID(), "fileId": srcObj.GetID(),
"fileName": newName, "fileName": newName,
} }
_, err := d.request("https://www.123pan.com/api/file/rename", http.MethodPost, func(req *resty.Request) { _, err := d.request(Rename, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data)
}, nil) }, nil)
return err return err
@ -164,7 +164,7 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
"operation": true, "operation": true,
"fileTrashInfoList": []File{f}, "fileTrashInfoList": []File{f},
} }
_, err := d.request("https://www.123pan.com/b/api/file/trash", http.MethodPost, func(req *resty.Request) { _, err := d.request(Trash, http.MethodPost, func(req *resty.Request) {
req.SetBody(data) req.SetBody(data)
}, nil) }, nil)
return err return err
@ -220,7 +220,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"type": 0, "type": 0,
} }
var resp UploadResp var resp UploadResp
_, err := d.request("https://www.123pan.com/a/api/file/upload_request", http.MethodPost, func(req *resty.Request) { _, err := d.request(UploadRequest, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx) req.SetBody(data).SetContext(ctx)
}, &resp) }, &resp)
if err != nil { if err != nil {
@ -249,7 +249,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil { if err != nil {
return err return err
} }
_, err = d.request("https://www.123pan.com/api/file/upload_complete", http.MethodPost, func(req *resty.Request) { _, err = d.request(UploadComplete, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{ req.SetBody(base.Json{
"fileId": resp.Data.FileId, "fileId": resp.Data.FileId,
}).SetContext(ctx) }).SetContext(ctx)

View File

@ -6,14 +6,13 @@ import (
) )
type Addition struct { type Addition struct {
Username string `json:"username" required:"true"` Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"` Password string `json:"password" required:"true"`
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"file_name,size,update_at" default:"file_name"` OrderBy string `json:"order_by" type:"select" options:"file_name,size,update_at" default:"file_name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"` OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
driver.RootID StreamUpload bool `json:"stream_upload"`
// define other AccessToken string
StreamUpload bool `json:"stream_upload"`
//Field string `json:"field" type:"select" required:"true" options:"a,b,c" default:"a"`
} }
var config = driver.Config{ var config = driver.Config{

View File

@ -7,18 +7,6 @@ import (
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
) )
//type BaseResp struct {
// Code interface{} `json:"code"`
// Message string `json:"message"`
//}
type TokenResp struct {
//BaseResp
Data struct {
Token string `json:"token"`
} `json:"data"`
}
type File struct { type File struct {
FileName string `json:"FileName"` FileName string `json:"FileName"`
Size int64 `json:"Size"` Size int64 `json:"Size"`

View File

@ -4,6 +4,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"net/http" "net/http"
"strconv"
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
@ -13,9 +14,22 @@ import (
// do other things that are not defined in the Driver interface // do other things that are not defined in the Driver interface
const (
API = "https://www.123pan.com/b/api"
SignIn = API + "/user/sign_in"
UserInfo = API + "/user/info"
FileList = API + "/file/list/new"
DownloadInfo = API + "/file/download_info"
Mkdir = API + "/file/upload_request"
Move = API + "/file/mod_pid"
Rename = API + "/file/rename"
Trash = API + "/file/trash"
UploadRequest = API + "/file/upload_request"
UploadComplete = API + "/file/upload_complete"
)
func (d *Pan123) login() error { func (d *Pan123) login() error {
var body base.Json var body base.Json
url := "https://www.123pan.com/a/api/user/sign_in"
if utils.IsEmailFormat(d.Username) { if utils.IsEmailFormat(d.Username) {
body = base.Json{ body = base.Json{
"mail": d.Username, "mail": d.Username,
@ -28,17 +42,15 @@ func (d *Pan123) login() error {
"password": d.Password, "password": d.Password,
} }
} }
var resp TokenResp
res, err := base.RestyClient.R(). res, err := base.RestyClient.R().
SetResult(&resp). SetBody(body).Post(SignIn)
SetBody(body).Post(url)
if err != nil { if err != nil {
return err return err
} }
if utils.Json.Get(res.Body(), "code").ToInt() != 200 { if utils.Json.Get(res.Body(), "code").ToInt() != 200 {
err = fmt.Errorf(utils.Json.Get(res.Body(), "message").ToString()) err = fmt.Errorf(utils.Json.Get(res.Body(), "message").ToString())
} else { } else {
d.AccessToken = resp.Data.Token d.AccessToken = utils.Json.Get(res.Body(), "data", "token").ToString()
} }
return err return err
} }
@ -77,27 +89,31 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
} }
func (d *Pan123) getFiles(parentId string) ([]File, error) { func (d *Pan123) getFiles(parentId string) ([]File, error) {
next := "0" page := 1
res := make([]File, 0) res := make([]File, 0)
for next != "-1" { for {
var resp Files var resp Files
query := map[string]string{ query := map[string]string{
"driveId": "0", "driveId": "0",
"limit": "100", "limit": "100",
"next": next, "next": "0",
"orderBy": d.OrderBy, "orderBy": d.OrderBy,
"orderDirection": d.OrderDirection, "orderDirection": d.OrderDirection,
"parentFileId": parentId, "parentFileId": parentId,
"trashed": "false", "trashed": "false",
"Page": strconv.Itoa(page),
} }
_, err := d.request("https://www.123pan.com/api/file/list/new", http.MethodGet, func(req *resty.Request) { _, err := d.request(FileList, http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(query) req.SetQueryParams(query)
}, &resp) }, &resp)
if err != nil { if err != nil {
return nil, err return nil, err
} }
next = resp.Data.Next page++
res = append(res, resp.Data.InfoList...) res = append(res, resp.Data.InfoList...)
if len(resp.Data.InfoList) == 0 || resp.Data.Next == "-1" {
break
}
} }
return res, nil return res, nil
} }
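The rewritten getFiles switches 123pan listing from the old next cursor to page numbers: the Page query parameter is incremented each round, and the loop stops once a page comes back empty or Next is "-1". A standalone sketch of that loop shape, with a fake pager in place of the FileList request:

package main

import "fmt"

// fetchPage stands in for the FileList request: it returns the items on the
// given page and whether more pages remain.
func fetchPage(page int) (items []int, more bool) {
	if page > 3 {
		return nil, false
	}
	return []int{page}, page < 3
}

func main() {
	page := 1
	var all []int
	for {
		items, more := fetchPage(page)
		all = append(all, items...)
		// stop once a page is empty or the server signals the end
		if len(items) == 0 || !more {
			break
		}
		page++
	}
	fmt.Println(all) // [1 2 3]
}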

View File

@ -1,11 +1,9 @@
package _139 package _139
import ( import (
"bytes"
"context" "context"
"fmt" "fmt"
"io" "io"
"math"
"net/http" "net/http"
"strconv" "strconv"
@ -85,8 +83,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
} }
pathname = "/orchestration/familyCloud/cloudCatalog/v1.0/createCloudDoc" pathname = "/orchestration/familyCloud/cloudCatalog/v1.0/createCloudDoc"
} }
_, err := d.post(pathname, _, err := d.post(pathname, data, nil)
data, nil)
return err return err
} }
@ -224,15 +221,31 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
return err return err
} }
const (
_ = iota //ignore first value by assigning to blank identifier
KB = 1 << (10 * iota)
MB
GB
TB
)
func getPartSize(size int64) int64 {
// the cloud drive caps the number of parts per upload

if size/GB > 30 {
return 512 * MB
}
return 100 * MB
}
func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error { func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
data := base.Json{ data := base.Json{
"manualRename": 2, "manualRename": 2,
"operation": 0, "operation": 0,
"fileCount": 1, "fileCount": 1,
"totalSize": stream.GetSize(), "totalSize": 0, // 去除上传大小限制
"uploadContentList": []base.Json{{ "uploadContentList": []base.Json{{
"contentName": stream.GetName(), "contentName": stream.GetName(),
"contentSize": stream.GetSize(), "contentSize": 0, // 去除上传大小限制
// "digest": "5a3231986ce7a6b46e408612d385bafa" // "digest": "5a3231986ce7a6b46e408612d385bafa"
}}, }},
"parentCatalogID": dstDir.GetID(), "parentCatalogID": dstDir.GetID(),
@ -250,10 +263,10 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"operation": 0, "operation": 0,
"path": "", "path": "",
"seqNo": "", "seqNo": "",
"totalSize": stream.GetSize(), "totalSize": 0,
"uploadContentList": []base.Json{{ "uploadContentList": []base.Json{{
"contentName": stream.GetName(), "contentName": stream.GetName(),
"contentSize": stream.GetSize(), "contentSize": 0,
// "digest": "5a3231986ce7a6b46e408612d385bafa" // "digest": "5a3231986ce7a6b46e408612d385bafa"
}}, }},
}) })
@ -265,51 +278,52 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil { if err != nil {
return err return err
} }
var Default int64 = 104857600
part := int(math.Ceil(float64(stream.GetSize()) / float64(Default))) // Progress
var start int64 = 0 p := driver.NewProgress(stream.GetSize(), up)
for i := 0; i < part; i++ {
var partSize = getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return ctx.Err()
} }
start := i * partSize
byteSize := stream.GetSize() - start byteSize := stream.GetSize() - start
if byteSize > Default { if byteSize > partSize {
byteSize = Default byteSize = partSize
} }
byteData := make([]byte, byteSize)
_, err = io.ReadFull(stream, byteData) limitReader := io.LimitReader(stream, byteSize)
if err != nil { // Update Progress
return err r := io.TeeReader(limitReader, p)
} req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, bytes.NewBuffer(byteData))
if err != nil { if err != nil {
return err return err
} }
req = req.WithContext(ctx) req = req.WithContext(ctx)
headers := map[string]string{ req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
"Accept": "*/*", req.Header.Set("contentSize", strconv.FormatInt(stream.GetSize(), 10))
"Content-Type": "text/plain;name=" + unicode(stream.GetName()), req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))
"contentSize": strconv.FormatInt(stream.GetSize(), 10), req.Header.Set("uploadtaskID", resp.Data.UploadResult.UploadTaskID)
"range": fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1), req.Header.Set("rangeType", "0")
"content-length": strconv.FormatInt(byteSize, 10), req.ContentLength = byteSize
"uploadtaskID": resp.Data.UploadResult.UploadTaskID,
"rangeType": "0",
"Referer": "https://yun.139.com/",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44",
"x-SvcType": "1",
}
for k, v := range headers {
req.Header.Set(k, v)
}
res, err := base.HttpClient.Do(req) res, err := base.HttpClient.Do(req)
if err != nil { if err != nil {
return err return err
} }
log.Debugf("%+v", res) log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
res.Body.Close() res.Body.Close()
start += byteSize
up(i * 100 / part)
} }
return nil return nil
} }
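The new Put streams each chunk straight from the source with io.LimitReader and mirrors the bytes through io.TeeReader into a progress counter, instead of buffering a whole part in memory; getPartSize grows the part size for very large files so the part count stays under the provider's limit. A self-contained sketch of that chunking pattern, with io.Discard standing in for the upload request and a tiny demo part size:

package main

import (
	"fmt"
	"io"
	"strings"
)

const (
	_  = iota
	KB = 1 << (10 * iota)
	MB
	GB
)

// partSize mirrors getPartSize above: larger parts for files over 30 GB.
func partSize(size int64) int64 {
	if size/GB > 30 {
		return 512 * MB
	}
	return 100 * MB
}

// countWriter tallies bytes as they stream past, like driver.NewProgress.
type countWriter struct{ n *int64 }

func (c countWriter) Write(p []byte) (int, error) { *c.n += int64(len(p)); return len(p), nil }

func main() {
	fmt.Println("part size for a 40 GB file:", partSize(40*GB)/MB, "MB") // 512 MB

	data := strings.NewReader(strings.Repeat("x", 2500))
	total, chunk := int64(2500), int64(1000) // tiny part size, demo only
	var uploaded int64

	parts := (total + chunk - 1) / chunk
	for i := int64(0); i < parts; i++ {
		start := i * chunk
		n := total - start
		if n > chunk {
			n = chunk
		}
		// stream the next chunk and count its bytes on the way out
		r := io.TeeReader(io.LimitReader(data, n), countWriter{&uploaded})
		io.Copy(io.Discard, r)
		fmt.Printf("part %d: bytes %d-%d, progress %d%%\n", i+1, start, start+n-1, uploaded*100/total)
	}
}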

View File

@ -28,12 +28,15 @@ func (d *Yun139) isFamily() bool {
func encodeURIComponent(str string) string { func encodeURIComponent(str string) string {
r := url.QueryEscape(str) r := url.QueryEscape(str)
r = strings.Replace(r, "+", "%20", -1) r = strings.Replace(r, "+", "%20", -1)
r = strings.Replace(r, "%21", "!", -1)
r = strings.Replace(r, "%27", "'", -1)
r = strings.Replace(r, "%28", "(", -1)
r = strings.Replace(r, "%29", ")", -1)
r = strings.Replace(r, "%2A", "*", -1)
return r return r
} }
func calSign(body, ts, randStr string) string { func calSign(body, ts, randStr string) string {
body = strings.ReplaceAll(body, "\n", "")
body = strings.ReplaceAll(body, " ", "")
body = encodeURIComponent(body) body = encodeURIComponent(body)
strs := strings.Split(body, "") strs := strings.Split(body, "")
sort.Strings(strs) sort.Strings(strs)
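calSign now strips newlines and spaces from the request body before encoding, and encodeURIComponent is extended so its output matches JavaScript's encodeURIComponent (space as %20, with !'()* left unescaped). A small self-contained check of that encoding; the sample body is arbitrary:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// encodeURIComponent mirrors the helper above: QueryEscape, then undo the
// escaping of the characters JavaScript leaves alone and use %20 for spaces.
func encodeURIComponent(s string) string {
	r := url.QueryEscape(s)
	r = strings.ReplaceAll(r, "+", "%20")
	for escaped, plain := range map[string]string{
		"%21": "!", "%27": "'", "%28": "(", "%29": ")", "%2A": "*",
	} {
		r = strings.ReplaceAll(r, escaped, plain)
	}
	return r
}

func main() {
	// calSign first removes newlines and spaces from the JSON body
	body := strings.NewReplacer("\n", "", " ", "").Replace(`{"k": "v 1"}` + "\n")
	fmt.Println(encodeURIComponent(body)) // %7B%22k%22%3A%22v1%22%7D
}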

114
drivers/alias/driver.go Normal file
View File

@ -0,0 +1,114 @@
package alias
import (
"context"
"errors"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type Alias struct {
model.Storage
Addition
pathMap map[string][]string
autoFlatten bool
oneKey string
}
func (d *Alias) Config() driver.Config {
return config
}
func (d *Alias) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Alias) Init(ctx context.Context) error {
if d.Paths == "" {
return errors.New("paths is required")
}
d.pathMap = make(map[string][]string)
for _, path := range strings.Split(d.Paths, "\n") {
path = strings.TrimSpace(path)
if path == "" {
continue
}
k, v := getPair(path)
d.pathMap[k] = append(d.pathMap[k], v)
}
if len(d.pathMap) == 1 {
for k := range d.pathMap {
d.oneKey = k
}
d.autoFlatten = true
}
return nil
}
func (d *Alias) Drop(ctx context.Context) error {
d.pathMap = nil
return nil
}
func (d *Alias) Get(ctx context.Context, path string) (model.Obj, error) {
if utils.PathEqual(path, "/") {
return &model.Object{
Name: "Root",
IsFolder: true,
Path: "/",
}, nil
}
root, sub := d.getRootAndPath(path)
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
obj, err := d.get(ctx, path, dst, sub)
if err == nil {
return obj, nil
}
}
return nil, errs.ObjectNotFound
}
func (d *Alias) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
path := dir.GetPath()
if utils.PathEqual(path, "/") && !d.autoFlatten {
return d.listRoot(), nil
}
root, sub := d.getRootAndPath(path)
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
var objs []model.Obj
for _, dst := range dsts {
tmp, err := d.list(ctx, dst, sub)
if err == nil {
objs = append(objs, tmp...)
}
}
return objs, nil
}
func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
root, sub := d.getRootAndPath(file.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
link, err := d.link(ctx, dst, sub, args)
if err == nil {
return link, nil
}
}
return nil, errs.ObjectNotFound
}
var _ driver.Driver = (*Alias)(nil)
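Init parses the multi-line Paths setting into pathMap, a map from display name to one or more destination paths, and enables auto-flatten when only a single name is configured. A standalone sketch of that parsing; getPair is reproduced from util.go below, and the sample paths are made up:

package main

import (
	"fmt"
	stdpath "path"
	"strings"
)

// getPair splits "name:/real/path" into a display name and a destination;
// a bare path falls back to its base name.
func getPair(line string) (string, string) {
	if strings.Contains(line, ":") {
		pair := strings.SplitN(line, ":", 2)
		if !strings.Contains(pair[0], "/") {
			return pair[0], pair[1]
		}
	}
	return stdpath.Base(line), line
}

func main() {
	paths := "movies:/storage/media/movies\n/storage/backup\n"
	pathMap := map[string][]string{}
	for _, line := range strings.Split(paths, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		k, v := getPair(line)
		pathMap[k] = append(pathMap[k], v)
	}
	fmt.Println(pathMap) // map[backup:[/storage/backup] movies:[/storage/media/movies]]
}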

27
drivers/alias/meta.go Normal file
View File

@ -0,0 +1,27 @@
package alias
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
// driver.RootPath
// define other
Paths string `json:"paths" required:"true" type:"text"`
}
var config = driver.Config{
Name: "Alias",
LocalSort: true,
NoCache: true,
NoUpload: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Alias{}
})
}

1
drivers/alias/types.go Normal file
View File

@ -0,0 +1 @@
package alias

103
drivers/alias/util.go Normal file
View File

@ -0,0 +1,103 @@
package alias
import (
"context"
"fmt"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
)
func (d *Alias) listRoot() []model.Obj {
var objs []model.Obj
for k := range d.pathMap {
obj := model.Object{
Name: k,
IsFolder: true,
Modified: d.Modified,
}
objs = append(objs, &obj)
}
return objs
}
// do other things that are not defined in the Driver interface
func getPair(path string) (string, string) {
//path = strings.TrimSpace(path)
if strings.Contains(path, ":") {
pair := strings.SplitN(path, ":", 2)
if !strings.Contains(pair[0], "/") {
return pair[0], pair[1]
}
}
return stdpath.Base(path), path
}
func (d *Alias) getRootAndPath(path string) (string, string) {
if d.autoFlatten {
return d.oneKey, path
}
path = strings.TrimPrefix(path, "/")
parts := strings.SplitN(path, "/", 2)
if len(parts) == 1 {
return parts[0], ""
}
return parts[0], parts[1]
}
func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Obj, error) {
obj, err := fs.Get(ctx, stdpath.Join(dst, sub), &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
return &model.Object{
Path: path,
Name: obj.GetName(),
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
}, nil
}
func (d *Alias) list(ctx context.Context, dst, sub string) ([]model.Obj, error) {
objs, err := fs.List(ctx, stdpath.Join(dst, sub), &fs.ListArgs{NoLog: true})
// the obj must implement the model.SetPath interface
// return objs, err
if err != nil {
return nil, err
}
return utils.SliceConvert(objs, func(obj model.Obj) (model.Obj, error) {
return &model.Object{
Name: obj.GetName(),
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
}, nil
})
}
func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, err := fs.GetStorage(reqPath, &fs.GetStoragesArgs{NoLog: true})
if err != nil {
return nil, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
return &model.Link{
URL: fmt.Sprintf("/p%s?sign=%s",
utils.EncodePath(reqPath, true),
sign.Sign(reqPath)),
}, nil
}
link, _, err := fs.Link(ctx, reqPath, args)
return link, err
}
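getRootAndPath picks the mapped destination by peeling off the first path segment, unless auto-flatten is active, in which case the whole request path is routed to the single configured key. A quick illustration as a free function rather than the driver method:

package main

import (
	"fmt"
	"strings"
)

// rootAndPath splits "/movies/2023/a.mkv" into ("movies", "2023/a.mkv");
// with autoFlatten the whole path belongs to the one configured key.
func rootAndPath(p string, autoFlatten bool, oneKey string) (string, string) {
	if autoFlatten {
		return oneKey, p
	}
	p = strings.TrimPrefix(p, "/")
	parts := strings.SplitN(p, "/", 2)
	if len(parts) == 1 {
		return parts[0], ""
	}
	return parts[0], parts[1]
}

func main() {
	fmt.Println(rootAndPath("/movies/2023/a.mkv", false, "")) // movies 2023/a.mkv
	fmt.Println(rootAndPath("/2023/a.mkv", true, "movies"))   // movies /2023/a.mkv
}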

View File

@ -2,6 +2,7 @@ package alist_v3
import ( import (
"context" "context"
"errors"
"io" "io"
"path" "path"
"strconv" "strconv"
@ -55,6 +56,9 @@ func (d *AListV3) List(ctx context.Context, dir model.Obj, args model.ListArgs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
if resp.Code != 200 {
return nil, errors.New(resp.Message)
}
var files []model.Obj var files []model.Obj
for _, f := range resp.Data.Content { for _, f := range resp.Data.Content {
file := model.ObjThumb{ file := model.ObjThumb{
@ -84,6 +88,9 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
if resp.Code != 200 {
return nil, errors.New(resp.Message)
}
return &model.Link{ return &model.Link{
URL: resp.Data.RawURL, URL: resp.Data.RawURL,
}, nil }, nil
@ -108,7 +115,7 @@ func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
SetResult(&resp). SetResult(&resp).
SetHeader("Authorization", d.AccessToken). SetHeader("Authorization", d.AccessToken).
SetBody(MoveCopyReq{ SetBody(MoveCopyReq{
SrcDir: srcObj.GetPath(), SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(), DstDir: dstDir.GetPath(),
Names: []string{srcObj.GetName()}, Names: []string{srcObj.GetName()},
}).Post(url) }).Post(url)
@ -135,7 +142,7 @@ func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
SetResult(&resp). SetResult(&resp).
SetHeader("Authorization", d.AccessToken). SetHeader("Authorization", d.AccessToken).
SetBody(MoveCopyReq{ SetBody(MoveCopyReq{
SrcDir: srcObj.GetPath(), SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(), DstDir: dstDir.GetPath(),
Names: []string{srcObj.GetName()}, Names: []string{srcObj.GetName()},
}).Post(url) }).Post(url)
@ -149,7 +156,7 @@ func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
SetResult(&resp). SetResult(&resp).
SetHeader("Authorization", d.AccessToken). SetHeader("Authorization", d.AccessToken).
SetBody(RemoveReq{ SetBody(RemoveReq{
Dir: obj.GetPath(), Dir: path.Dir(obj.GetPath()),
Names: []string{obj.GetName()}, Names: []string{obj.GetName()},
}).Post(url) }).Post(url)
return checkResp(resp, err) return checkResp(resp, err)
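The move, copy and remove payloads now send the object's parent directory instead of its own path, so SrcDir/Dir holds the containing directory while Names carries the object name; path.Dir does that trimming. For reference:

package main

import (
	"fmt"
	"path"
)

func main() {
	p := "/movies/2023/a.mkv"
	// the request carries the containing directory plus the object name
	fmt.Println(path.Dir(p), path.Base(p)) // /movies/2023 a.mkv
}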

View File

@ -10,7 +10,7 @@ func checkResp(resp common.Resp[interface{}], err error) error {
if err != nil { if err != nil {
return err return err
} }
if resp.Message == "success" { if resp.Code == 200 {
return nil return nil
} }
return errors.New(resp.Message) return errors.New(resp.Message)

View File

@ -31,6 +31,7 @@ type AliDrive struct {
AccessToken string AccessToken string
cron *cron.Cron cron *cron.Cron
DriveId string DriveId string
UserID string
} }
func (d *AliDrive) Config() driver.Config { func (d *AliDrive) Config() driver.Config {
@ -54,6 +55,7 @@ func (d *AliDrive) Init(ctx context.Context) error {
return err return err
} }
d.DriveId = utils.Json.Get(res, "default_drive_id").ToString() d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
d.UserID = utils.Json.Get(res, "user_id").ToString()
d.cron = cron.NewCron(time.Hour * 2) d.cron = cron.NewCron(time.Hour * 2)
d.cron.Do(func() { d.cron.Do(func() {
err := d.refreshToken() err := d.refreshToken()
@ -61,7 +63,22 @@ func (d *AliDrive) Init(ctx context.Context) error {
log.Errorf("%+v", err) log.Errorf("%+v", err)
} }
}) })
return err if global.Has(d.UserID) {
return nil
}
// init deviceID
deviceID := utils.GetSHA256Encode(d.UserID)
// init privateKey
privateKey, _ := NewPrivateKeyFromHex(deviceID)
state := State{
privateKey: privateKey,
deviceID: deviceID,
}
// store state
global.Store(d.UserID, &state)
// init signature
d.sign()
return nil
} }
func (d *AliDrive) Drop(ctx context.Context) error { func (d *AliDrive) Drop(ctx context.Context) error {
@ -169,17 +186,27 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
"type": "file", "type": "file",
} }
var localFile *os.File
if fileStream, ok := file.ReadCloser.(*model.FileStream); ok {
localFile, _ = fileStream.ReadCloser.(*os.File)
}
if d.RapidUpload { if d.RapidUpload {
buf := bytes.NewBuffer(make([]byte, 0, 1024)) buf := bytes.NewBuffer(make([]byte, 0, 1024))
io.CopyN(buf, file, 1024) io.CopyN(buf, file, 1024)
reqBody["pre_hash"] = utils.GetSHA1Encode(buf.String()) reqBody["pre_hash"] = utils.GetSHA1Encode(buf.String())
// splice the consumed header back on if localFile != nil {
file.ReadCloser = struct { if _, err := localFile.Seek(0, io.SeekStart); err != nil {
io.Reader return err
io.Closer }
}{ } else {
Reader: io.MultiReader(buf, file), // splice the consumed header back on
Closer: file, file.ReadCloser = struct {
io.Reader
io.Closer
}{
Reader: io.MultiReader(buf, file),
Closer: file,
}
} }
} else { } else {
reqBody["content_hash_name"] = "none" reqBody["content_hash_name"] = "none"
@ -196,18 +223,28 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
} }
if d.RapidUpload && e.Code == "PreHashMatched" { if d.RapidUpload && e.Code == "PreHashMatched" {
tempFile, err := os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
_ = os.Remove(tempFile.Name())
}()
delete(reqBody, "pre_hash") delete(reqBody, "pre_hash")
h := sha1.New() h := sha1.New()
if _, err = io.Copy(io.MultiWriter(tempFile, h), file); err != nil { if localFile != nil {
return err if err = utils.CopyWithCtx(ctx, h, localFile, 0, nil); err != nil {
return err
}
if _, err = localFile.Seek(0, io.SeekStart); err != nil {
return err
}
} else {
tempFile, err := os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
_ = os.Remove(tempFile.Name())
}()
if err = utils.CopyWithCtx(ctx, io.MultiWriter(tempFile, h), file, 0, nil); err != nil {
return err
}
localFile = tempFile
} }
reqBody["content_hash"] = hex.EncodeToString(h.Sum(nil)) reqBody["content_hash"] = hex.EncodeToString(h.Sum(nil))
reqBody["content_hash_name"] = "sha1" reqBody["content_hash_name"] = "sha1"
@ -228,7 +265,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
if file.GetSize() > 0 { if file.GetSize() > 0 {
o = r.Mod(r, i) o = r.Mod(r, i)
} }
n, _ := io.NewSectionReader(tempFile, o.Int64(), 8).Read(buf[:8]) n, _ := io.NewSectionReader(localFile, o.Int64(), 8).Read(buf[:8])
reqBody["proof_code"] = base64.StdEncoding.EncodeToString(buf[:n]) reqBody["proof_code"] = base64.StdEncoding.EncodeToString(buf[:n])
_, err, e := d.request("https://api.aliyundrive.com/adrive/v2/file/createWithFolders", http.MethodPost, func(req *resty.Request) { _, err, e := d.request("https://api.aliyundrive.com/adrive/v2/file/createWithFolders", http.MethodPost, func(req *resty.Request) {
@ -241,17 +278,21 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
return nil return nil
} }
// rapid upload failed // rapid upload failed
if _, err = tempFile.Seek(0, io.SeekStart); err != nil { if _, err = localFile.Seek(0, io.SeekStart); err != nil {
return err return err
} }
file.ReadCloser = tempFile file.ReadCloser = localFile
} }
for i, partInfo := range resp.PartInfoList { for i, partInfo := range resp.PartInfoList {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return ctx.Err()
} }
req, err := http.NewRequest("PUT", partInfo.UploadUrl, io.LimitReader(file, DEFAULT)) url := partInfo.UploadUrl
if d.InternalUpload {
url = partInfo.InternalUploadUrl
}
req, err := http.NewRequest("PUT", url, io.LimitReader(file, DEFAULT))
if err != nil { if err != nil {
return err return err
} }
@ -296,6 +337,7 @@ func (d *AliDrive) Other(ctx context.Context, args model.OtherArgs) (interface{}
case "video_preview": case "video_preview":
url = "https://api.aliyundrive.com/v2/file/get_video_preview_play_info" url = "https://api.aliyundrive.com/v2/file/get_video_preview_play_info"
data["category"] = "live_transcoding" data["category"] = "live_transcoding"
data["url_expire_sec"] = 14400
default: default:
return nil, errs.NotSupport return nil, errs.NotSupport
} }
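The rapid-upload path now checks whether the incoming stream is backed by a real *os.File: the 1 KB pre_hash read can then be undone with a Seek, and the later SHA-1 and proof_code reads reuse that same file; only non-seekable streams still fall back to splicing the prefix back on and copying to a temp file. A compact sketch of the seek-or-splice idea, with stand-in data:

package main

import (
	"bytes"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// preHash reads the first n bytes for a quick hash, then either rewinds the
// file or splices the consumed prefix back so the full body is still readable.
func preHash(r io.Reader, n int64) (string, io.Reader, error) {
	buf := bytes.NewBuffer(make([]byte, 0, n))
	if _, err := io.CopyN(buf, r, n); err != nil && err != io.EOF {
		return "", nil, err
	}
	sum := sha1.Sum(buf.Bytes())
	if f, ok := r.(*os.File); ok {
		// cheap path: the source is a real file, just rewind it
		if _, err := f.Seek(0, io.SeekStart); err != nil {
			return "", nil, err
		}
		return hex.EncodeToString(sum[:]), f, nil
	}
	// generic path: put the consumed prefix back in front of the reader
	return hex.EncodeToString(sum[:]), io.MultiReader(buf, r), nil
}

func main() {
	f, _ := os.CreateTemp("", "prehash-*")
	defer os.Remove(f.Name())
	f.WriteString("hello, pre-hash demo")
	f.Seek(0, io.SeekStart)

	sum, body, _ := preHash(f, 5) // hash of "hello"
	rest, _ := io.ReadAll(body)
	fmt.Println(sum[:8], string(rest)) // short hash prefix, then the full content
}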

View File

@ -0,0 +1,16 @@
package aliyundrive
import (
"crypto/ecdsa"
"github.com/alist-org/alist/v3/pkg/generic_sync"
)
type State struct {
deviceID string
signature string
retry int
privateKey *ecdsa.PrivateKey
}
var global = generic_sync.MapOf[string, *State]{}

View File

@ -0,0 +1,66 @@
package aliyundrive
import (
"crypto/ecdsa"
"crypto/rand"
"encoding/hex"
"math/big"
"github.com/dustinxie/ecc"
)
func NewPrivateKey() (*ecdsa.PrivateKey, error) {
p256k1 := ecc.P256k1()
return ecdsa.GenerateKey(p256k1, rand.Reader)
}
func NewPrivateKeyFromHex(hex_ string) (*ecdsa.PrivateKey, error) {
data, err := hex.DecodeString(hex_)
if err != nil {
return nil, err
}
return NewPrivateKeyFromBytes(data), nil
}
func NewPrivateKeyFromBytes(priv []byte) *ecdsa.PrivateKey {
p256k1 := ecc.P256k1()
x, y := p256k1.ScalarBaseMult(priv)
return &ecdsa.PrivateKey{
PublicKey: ecdsa.PublicKey{
Curve: p256k1,
X: x,
Y: y,
},
D: new(big.Int).SetBytes(priv),
}
}
func PrivateKeyToHex(private *ecdsa.PrivateKey) string {
return hex.EncodeToString(PrivateKeyToBytes(private))
}
func PrivateKeyToBytes(private *ecdsa.PrivateKey) []byte {
return private.D.Bytes()
}
func PublicKeyToHex(public *ecdsa.PublicKey) string {
return hex.EncodeToString(PublicKeyToBytes(public))
}
func PublicKeyToBytes(public *ecdsa.PublicKey) []byte {
x := public.X.Bytes()
if len(x) < 32 {
for i := 0; i < 32-len(x); i++ {
x = append([]byte{0}, x...)
}
}
y := public.Y.Bytes()
if len(y) < 32 {
for i := 0; i < 32-len(y); i++ {
y = append([]byte{0}, y...)
}
}
return append(x, y...)
}
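These helpers derive a deterministic secp256k1 key pair from a hex seed; Init uses the SHA-256 of the user ID as that seed, so one account always maps to one device key. A small usage sketch against the same github.com/dustinxie/ecc curve; the user ID is a placeholder:

package main

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"math/big"

	"github.com/dustinxie/ecc"
)

// keyFromHex rebuilds a private key from a hex seed, the same way
// NewPrivateKeyFromHex and NewPrivateKeyFromBytes do above.
func keyFromHex(seed string) (*ecdsa.PrivateKey, error) {
	priv, err := hex.DecodeString(seed)
	if err != nil {
		return nil, err
	}
	curve := ecc.P256k1()
	x, y := curve.ScalarBaseMult(priv)
	return &ecdsa.PrivateKey{
		PublicKey: ecdsa.PublicKey{Curve: curve, X: x, Y: y},
		D:         new(big.Int).SetBytes(priv),
	}, nil
}

func main() {
	// the device ID is the SHA-256 of the user ID, so the key is reproducible
	deviceID := sha256.Sum256([]byte("example-user-id"))
	key, err := keyFromHex(hex.EncodeToString(deviceID[:]))
	if err != nil {
		panic(err)
	}
	fmt.Printf("deviceID: %x\npublic X: %x\n", deviceID[:], key.PublicKey.X.Bytes())
}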

View File

@ -7,15 +7,20 @@ import (
type Addition struct { type Addition struct {
driver.RootID driver.RootID
RefreshToken string `json:"refresh_token" required:"true"` RefreshToken string `json:"refresh_token" required:"true"`
//DeviceID string `json:"device_id" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"` OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"` OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
RapidUpload bool `json:"rapid_upload"` RapidUpload bool `json:"rapid_upload"`
InternalUpload bool `json:"internal_upload"`
} }
var config = driver.Config{ var config = driver.Config{
Name: "Aliyundrive", Name: "Aliyundrive",
DefaultRoot: "root", DefaultRoot: "root",
Alert: `warning|There may be an infinite loop bug in this driver.
Deprecated, no longer maintained and will be removed in a future version.
We recommend using the official driver AliyundriveOpen.`,
} }
func init() { func init() {

View File

@ -48,7 +48,8 @@ type UploadResp struct {
FileId string `json:"file_id"` FileId string `json:"file_id"`
UploadId string `json:"upload_id"` UploadId string `json:"upload_id"`
PartInfoList []struct { PartInfoList []struct {
UploadUrl string `json:"upload_url"` UploadUrl string `json:"upload_url"`
InternalUploadUrl string `json:"internal_upload_url"`
} `json:"part_info_list"` } `json:"part_info_list"`
RapidUpload bool `json:"rapid_upload"` RapidUpload bool `json:"rapid_upload"`

View File

@ -1,6 +1,8 @@
package aliyundrive package aliyundrive
import ( import (
"crypto/sha256"
"encoding/hex"
"errors" "errors"
"fmt" "fmt"
"net/http" "net/http"
@ -8,9 +10,51 @@ import (
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op" "github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
"github.com/dustinxie/ecc"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
"github.com/google/uuid"
) )
func (d *AliDrive) createSession() error {
state, ok := global.Load(d.UserID)
if !ok {
return fmt.Errorf("can't load user state, user_id: %s", d.UserID)
}
d.sign()
state.retry++
if state.retry > 3 {
state.retry = 0
return fmt.Errorf("createSession failed after three retries")
}
_, err, _ := d.request("https://api.aliyundrive.com/users/v1/users/device/create_session", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"deviceName": "samsung",
"modelName": "SM-G9810",
"nonce": 0,
"pubKey": PublicKeyToHex(&state.privateKey.PublicKey),
"refreshToken": d.RefreshToken,
})
}, nil)
if err == nil {
state.retry = 0
}
return err
}
// func (d *AliDrive) renewSession() error {
// _, err, _ := d.request("https://api.aliyundrive.com/users/v1/users/device/renew_session", http.MethodPost, nil, nil)
// return err
// }
func (d *AliDrive) sign() {
state, _ := global.Load(d.UserID)
secpAppID := "5dde4e1bdf9e4966b387ba58f4b3fdc3"
signData := fmt.Sprintf("%s:%s:%s:%d", secpAppID, state.deviceID, d.UserID, 0)
hash := sha256.Sum256([]byte(signData))
data, _ := ecc.SignBytes(state.privateKey, hash[:], ecc.RecID|ecc.LowerS)
state.signature = hex.EncodeToString(data) //strconv.Itoa(state.nonce)
}
// do other things that are not defined in the Driver interface // do other things that are not defined in the Driver interface
func (d *AliDrive) refreshToken() error { func (d *AliDrive) refreshToken() error {
@ -29,6 +73,9 @@ func (d *AliDrive) refreshToken() error {
if e.Code != "" { if e.Code != "" {
return fmt.Errorf("failed to refresh token: %s", e.Message) return fmt.Errorf("failed to refresh token: %s", e.Message)
} }
if resp.RefreshToken == "" {
return errors.New("failed to refresh token: refresh token is empty")
}
d.RefreshToken, d.AccessToken = resp.RefreshToken, resp.AccessToken d.RefreshToken, d.AccessToken = resp.RefreshToken, resp.AccessToken
op.MustSaveDriverStorage(d) op.MustSaveDriverStorage(d)
return nil return nil
@ -36,9 +83,24 @@ func (d *AliDrive) refreshToken() error {
func (d *AliDrive) request(url, method string, callback base.ReqCallback, resp interface{}) ([]byte, error, RespErr) { func (d *AliDrive) request(url, method string, callback base.ReqCallback, resp interface{}) ([]byte, error, RespErr) {
req := base.RestyClient.R() req := base.RestyClient.R()
req.SetHeader("Authorization", "Bearer\t"+d.AccessToken) state, ok := global.Load(d.UserID)
req.SetHeader("content-type", "application/json") if !ok {
req.SetHeader("origin", "https://www.aliyundrive.com") if url == "https://api.aliyundrive.com/v2/user/get" {
state = &State{}
} else {
return nil, fmt.Errorf("can't load user state, user_id: %s", d.UserID), RespErr{}
}
}
req.SetHeaders(map[string]string{
"Authorization": "Bearer\t" + d.AccessToken,
"content-type": "application/json",
"origin": "https://www.aliyundrive.com",
"Referer": "https://aliyundrive.com/",
"X-Signature": state.signature,
"x-request-id": uuid.NewString(),
"X-Canary": "client=Android,app=adrive,version=v4.1.0",
"X-Device-Id": state.deviceID,
})
if callback != nil { if callback != nil {
callback(req) callback(req)
} else { } else {
@ -54,14 +116,21 @@ func (d *AliDrive) request(url, method string, callback base.ReqCallback, resp i
return nil, err, e return nil, err, e
} }
if e.Code != "" { if e.Code != "" {
if e.Code == "AccessTokenInvalid" { switch e.Code {
case "AccessTokenInvalid":
err = d.refreshToken() err = d.refreshToken()
if err != nil { if err != nil {
return nil, err, e return nil, err, e
} }
return d.request(url, method, callback, resp) case "DeviceSessionSignatureInvalid":
err = d.createSession()
if err != nil {
return nil, err, e
}
default:
return nil, errors.New(e.Message), e
} }
return nil, errors.New(e.Message), e return d.request(url, method, callback, resp)
} else if res.IsError() { } else if res.IsError() {
return nil, errors.New("bad status code " + res.Status()), e return nil, errors.New("bad status code " + res.Status()), e
} }
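sign() above hashes "appId:deviceID:userID:nonce" with SHA-256 and signs the digest with the per-user secp256k1 key; the hex-encoded result travels in the X-Signature header, and a DeviceSessionSignatureInvalid reply makes request() call createSession and retry. A sketch of just the signing step, reusing the ecc calls shown in the diff; the IDs are placeholders and the key is freshly generated here instead of derived from the device ID:

package main

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"github.com/dustinxie/ecc"
)

func main() {
	// a throwaway key stands in for the key derived from the device ID
	key, err := ecdsa.GenerateKey(ecc.P256k1(), rand.Reader)
	if err != nil {
		panic(err)
	}

	appID := "5dde4e1bdf9e4966b387ba58f4b3fdc3" // app id used by sign() above
	deviceID := "device-id-placeholder"
	userID := "user-id-placeholder"
	nonce := 0

	// same payload shape as sign(): appId:deviceID:userID:nonce
	payload := fmt.Sprintf("%s:%s:%s:%d", appID, deviceID, userID, nonce)
	digest := sha256.Sum256([]byte(payload))

	// recoverable, low-S signature, hex-encoded for the X-Signature header
	sig, err := ecc.SignBytes(key, digest[:], ecc.RecID|ecc.LowerS)
	if err != nil {
		panic(err)
	}
	fmt.Println("X-Signature:", hex.EncodeToString(sig))
}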

View File

@ -0,0 +1,217 @@
package aliyundrive_open
import (
"context"
"io"
"math"
"net/http"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type AliyundriveOpen struct {
model.Storage
Addition
base string
DriveId string
}
func (d *AliyundriveOpen) Config() driver.Config {
return config
}
func (d *AliyundriveOpen) GetAddition() driver.Additional {
return &d.Addition
}
func (d *AliyundriveOpen) Init(ctx context.Context) error {
res, err := d.request("/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
if err != nil {
return err
}
d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
return nil
}
func (d *AliyundriveOpen) Drop(ctx context.Context) error {
return nil
}
func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(dir.GetID())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src), nil
})
}
func (d *AliyundriveOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
res, err := d.request("/adrive/v1.0/openFile/getDownloadUrl", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": file.GetID(),
"expire_sec": 14400,
})
})
if err != nil {
return nil, err
}
url := utils.Json.Get(res, "url").ToString()
return &model.Link{
URL: url,
}, nil
}
func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"parent_file_id": parentDir.GetID(),
"name": dirName,
"type": "folder",
"check_name_mode": "refuse",
})
})
return err
}
func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/adrive/v1.0/openFile/move", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"check_name_mode": "refuse", // optional:ignore,auto_rename,refuse
//"new_name": "newName", // The new name to use when a file of the same name exists
})
})
return err
}
func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request("/adrive/v1.0/openFile/update", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"name": newName,
})
})
return err
}
func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"auto_rename": true,
})
})
return err
}
func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
uri := "/adrive/v1.0/openFile/recyclebin/trash"
if d.RemoveWay == "delete" {
uri = "/adrive/v1.0/openFile/delete"
}
_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": obj.GetID(),
})
})
return err
}
func (d *AliyundriveOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// rapid_upload is not currently supported
// 1. create
const DEFAULT int64 = 20971520
createData := base.Json{
"drive_id": d.DriveId,
"parent_file_id": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"check_name_mode": "ignore",
}
count := 1
if stream.GetSize() > DEFAULT {
count = int(math.Ceil(float64(stream.GetSize()) / float64(DEFAULT)))
createData["part_info_list"] = makePartInfos(count)
}
var createResp CreateResp
_, err := d.request("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
if err != nil {
return err
}
// 2. upload
preTime := time.Now()
for i := 1; i <= len(createResp.PartInfoList); i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
err = d.uploadPart(ctx, i, count, utils.NewMultiReadable(io.LimitReader(stream, DEFAULT)), &createResp, true)
if err != nil {
return err
}
if count > 0 {
up(i * 100 / count)
}
// refresh upload url if 50 minutes passed
if time.Since(preTime) > 50*time.Minute {
createResp.PartInfoList, err = d.getUploadUrl(count, createResp.FileId, createResp.UploadId)
if err != nil {
return err
}
preTime = time.Now()
}
}
// 3. complete
_, err = d.request("/adrive/v1.0/openFile/complete", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": createResp.FileId,
"upload_id": createResp.UploadId,
})
})
return err
}
func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
var resp base.Json
var uri string
data := base.Json{
"drive_id": d.DriveId,
"file_id": args.Obj.GetID(),
}
switch args.Method {
case "video_preview":
uri = "/adrive/v1.0/openFile/getVideoPreviewPlayInfo"
data["category"] = "live_transcoding"
data["url_expire_sec"] = 14400
default:
return nil, errs.NotSupport
}
_, err := d.request(uri, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err
}
return resp, nil
}
var _ driver.Driver = (*AliyundriveOpen)(nil)
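Put computes the part count from a fixed 20 MiB part size, uploads the parts in order, and re-requests the pre-signed upload URLs whenever more than 50 minutes have passed since they were issued. A standalone sketch of that arithmetic and the refresh condition; the actual upload call is elided:

package main

import (
	"fmt"
	"math"
	"time"
)

const partSize int64 = 20971520 // 20 MiB, matching DEFAULT above

func main() {
	fileSize := int64(75 << 20) // hypothetical 75 MiB file

	count := 1
	if fileSize > partSize {
		count = int(math.Ceil(float64(fileSize) / float64(partSize)))
	}

	issued := time.Now()
	for i := 1; i <= count; i++ {
		// ... PUT part i to its pre-signed URL here ...
		fmt.Printf("uploaded part %d/%d (%d%%)\n", i, count, i*100/count)

		// pre-signed URLs expire, so fetch fresh ones if these are getting old
		if time.Since(issued) > 50*time.Minute {
			// call getUploadUrl again, swap in the new part list
			issued = time.Now()
		}
	}
}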

View File

@ -0,0 +1,39 @@
package aliyundrive_open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.nn.ci/alist/ali_open/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
InternalUpload bool `json:"internal_upload" help:"If you are using an Aliyun ECS instance located in Beijing, you can turn this on to boost the upload speed"`
AccessToken string
}
var config = driver.Config{
Name: "AliyundriveOpen",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "root",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &AliyundriveOpen{
base: "https://open.aliyundrive.com",
}
})
}

View File

@ -0,0 +1,69 @@
package aliyundrive_open
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type ErrResp struct {
Code string `json:"code"`
Message string `json:"message"`
}
type Files struct {
Items []File `json:"items"`
NextMarker string `json:"next_marker"`
}
type File struct {
DriveId string `json:"drive_id"`
FileId string `json:"file_id"`
ParentFileId string `json:"parent_file_id"`
Name string `json:"name"`
Size int64 `json:"size"`
FileExtension string `json:"file_extension"`
ContentHash string `json:"content_hash"`
Category string `json:"category"`
Type string `json:"type"`
Thumbnail string `json:"thumbnail"`
Url string `json:"url"`
CreatedAt *time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
func fileToObj(f File) *model.ObjThumb {
return &model.ObjThumb{
Object: model.Object{
ID: f.FileId,
Name: f.Name,
Size: f.Size,
Modified: f.UpdatedAt,
IsFolder: f.Type == "folder",
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumbnail},
}
}
type PartInfo struct {
Etag interface{} `json:"etag"`
PartNumber int `json:"part_number"`
PartSize interface{} `json:"part_size"`
UploadUrl string `json:"upload_url"`
ContentType string `json:"content_type"`
}
type CreateResp struct {
//Type string `json:"type"`
//ParentFileId string `json:"parent_file_id"`
//DriveId string `json:"drive_id"`
FileId string `json:"file_id"`
//RevisionId string `json:"revision_id"`
//EncryptMode string `json:"encrypt_mode"`
//DomainId string `json:"domain_id"`
//FileName string `json:"file_name"`
UploadId string `json:"upload_id"`
//Location string `json:"location"`
RapidUpload bool `json:"rapid_upload"`
PartInfoList []PartInfo `json:"part_info_list"`
}

View File

@ -0,0 +1,167 @@
package aliyundrive_open
import (
"context"
"errors"
"fmt"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
// do other things that are not defined in the Driver interface
func (d *AliyundriveOpen) refreshToken() error {
url := d.base + "/oauth/access_token"
if d.OauthTokenURL != "" && d.ClientID == "" {
url = d.OauthTokenURL
}
var resp base.TokenResp
var e ErrResp
_, err := base.RestyClient.R().
ForceContentType("application/json").
SetBody(base.Json{
"client_id": d.ClientID,
"client_secret": d.ClientSecret,
"grant_type": "refresh_token",
"refresh_token": d.RefreshToken,
}).
SetResult(&resp).
SetError(&e).
Post(url)
if err != nil {
return err
}
if e.Code != "" {
return fmt.Errorf("failed to refresh token: %s", e.Message)
}
if resp.RefreshToken == "" {
return errors.New("failed to refresh token: refresh token is empty")
}
d.RefreshToken, d.AccessToken = resp.RefreshToken, resp.AccessToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
req := base.RestyClient.R()
// TODO check whether access_token is expired
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
if method == http.MethodPost {
req.SetHeader("Content-Type", "application/json")
}
if callback != nil {
callback(req)
}
var e ErrResp
req.SetError(&e)
res, err := req.Execute(method, d.base+uri)
if err != nil {
return nil, err
}
isRetry := len(retry) > 0 && retry[0]
if e.Code != "" {
if !isRetry && e.Code == "AccessTokenInvalid" {
err = d.refreshToken()
if err != nil {
return nil, err
}
return d.request(uri, method, callback, true)
}
return nil, fmt.Errorf("%s:%s", e.Code, e.Message)
}
return res.Body(), nil
}
func (d *AliyundriveOpen) getFiles(fileId string) ([]File, error) {
marker := "first"
res := make([]File, 0)
for marker != "" {
if marker == "first" {
marker = ""
}
var resp Files
data := base.Json{
"drive_id": d.DriveId,
"limit": 200,
"marker": marker,
"order_by": d.OrderBy,
"order_direction": d.OrderDirection,
"parent_file_id": fileId,
//"category": "",
//"type": "",
//"video_thumbnail_time": 120000,
//"video_thumbnail_width": 480,
//"image_thumbnail_width": 480,
}
_, err := d.request("/adrive/v1.0/openFile/list", http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err
}
marker = resp.NextMarker
res = append(res, resp.Items...)
}
return res, nil
}
func makePartInfos(size int) []base.Json {
partInfoList := make([]base.Json, size)
for i := 0; i < size; i++ {
partInfoList[i] = base.Json{"part_number": 1 + i}
}
return partInfoList
}
func (d *AliyundriveOpen) getUploadUrl(count int, fileId, uploadId string) ([]PartInfo, error) {
partInfoList := makePartInfos(count)
var resp CreateResp
_, err := d.request("/adrive/v1.0/openFile/getUploadUrl", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": fileId,
"part_info_list": partInfoList,
"upload_id": uploadId,
}).SetResult(&resp)
})
return resp.PartInfoList, err
}
func (d *AliyundriveOpen) uploadPart(ctx context.Context, i, count int, reader *utils.MultiReadable, resp *CreateResp, retry bool) error {
partInfo := resp.PartInfoList[i-1]
uploadUrl := partInfo.UploadUrl
if d.InternalUpload {
uploadUrl = strings.ReplaceAll(uploadUrl, "https://cn-beijing-data.aliyundrive.net/", "http://ccp-bj29-bj-1592982087.oss-cn-beijing-internal.aliyuncs.com/")
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
if err != nil {
return err
}
req = req.WithContext(ctx)
res, err := base.HttpClient.Do(req)
if err != nil {
if retry {
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
return err
}
res.Body.Close()
if retry && res.StatusCode == http.StatusForbidden {
resp.PartInfoList, err = d.getUploadUrl(count, resp.FileId, resp.UploadId)
if err != nil {
return err
}
reader.Reset()
return d.uploadPart(ctx, i, count, reader, resp, false)
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
return nil
}
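uploadPart retries a failed part exactly once: the MultiReadable body is reset before the second attempt, and a 403 (an expired pre-signed URL) additionally refreshes the upload URLs via getUploadUrl first. A generic sketch of that rewind-and-retry-once pattern, with a seekable reader standing in for utils.MultiReadable and a stubbed upload that fails on its first call:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
)

var attempts int

// putOnce simulates uploading one part; it fails the first time so the retry shows.
func putOnce(r io.Reader) error {
	attempts++
	io.Copy(io.Discard, r)
	if attempts == 1 {
		return errors.New("403: upload url expired")
	}
	return nil
}

// uploadPart tries once and, on failure, rewinds the body and retries exactly once
// (refreshing the upload URL would happen just before the second attempt).
func uploadPart(body io.ReadSeeker) error {
	if err := putOnce(body); err == nil {
		return nil
	}
	if _, err := body.Seek(0, io.SeekStart); err != nil {
		return err
	}
	return putOnce(body)
}

func main() {
	err := uploadPart(bytes.NewReader([]byte("part payload")))
	fmt.Println("attempts:", attempts, "err:", err) // attempts: 2 err: <nil>
}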

View File

@ -2,15 +2,16 @@ package aliyundrive_share
import ( import (
"context" "context"
"errors"
"net/http" "net/http"
"time" "time"
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/cron" "github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
) )
@ -54,6 +55,7 @@ func (d *AliyundriveShare) Drop(ctx context.Context) error {
if d.cron != nil { if d.cron != nil {
d.cron.Stop() d.cron.Stop()
} }
d.DriveId = ""
return nil return nil
} }
@ -76,40 +78,43 @@ func (d *AliyundriveShare) Link(ctx context.Context, file model.Obj, args model.
"share_id": d.ShareId, "share_id": d.ShareId,
} }
var resp ShareLinkResp var resp ShareLinkResp
var e ErrorResp _, err := d.request("https://api.aliyundrive.com/v2/file/get_share_link_download_url", http.MethodPost, func(req *resty.Request) {
_, err := base.RestyClient.R(). req.SetBody(data).SetResult(&resp)
SetError(&e).SetBody(data).SetResult(&resp). })
SetHeader("content-type", "application/json").
SetHeader("Authorization", "Bearer\t"+d.AccessToken).
SetHeader("x-share-token", d.ShareToken).
Post("https://api.aliyundrive.com/v2/file/get_share_link_download_url")
if err != nil { if err != nil {
return nil, err return nil, err
} }
var u string
if e.Code != "" {
if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
if e.Code == "AccessTokenInvalid" {
err = d.refreshToken()
} else {
err = d.getShareToken()
}
if err != nil {
return nil, err
}
return d.Link(ctx, file, args)
} else {
return nil, errors.New(e.Code + ": " + e.Message)
}
} else {
u = resp.DownloadUrl
}
return &model.Link{ return &model.Link{
Header: http.Header{ Header: http.Header{
"Referer": []string{"https://www.aliyundrive.com/"}, "Referer": []string{"https://www.aliyundrive.com/"},
}, },
URL: u, URL: resp.DownloadUrl,
}, nil }, nil
} }
func (d *AliyundriveShare) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
var resp base.Json
var url string
data := base.Json{
"share_id": d.ShareId,
"file_id": args.Obj.GetID(),
}
switch args.Method {
case "doc_preview":
url = "https://api.aliyundrive.com/v2/file/get_office_preview_url"
case "video_preview":
url = "https://api.aliyundrive.com/v2/file/get_video_preview_play_info"
data["category"] = "live_transcoding"
default:
return nil, errs.NotSupport
}
_, err := d.request(url, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetResult(&resp)
})
if err != nil {
return nil, err
}
return resp, nil
}
var _ driver.Driver = (*AliyundriveShare)(nil) var _ driver.Driver = (*AliyundriveShare)(nil)

View File

@ -52,6 +52,40 @@ func (d *AliyundriveShare) getShareToken() error {
return nil return nil
} }
func (d *AliyundriveShare) request(url, method string, callback base.ReqCallback) ([]byte, error) {
var e ErrorResp
req := base.RestyClient.R().
SetError(&e).
SetHeader("content-type", "application/json").
SetHeader("Authorization", "Bearer\t"+d.AccessToken).
SetHeader("x-share-token", d.ShareToken)
if callback != nil {
callback(req)
} else {
req.SetBody("{}")
}
resp, err := req.Execute(method, url)
if err != nil {
return nil, err
}
if e.Code != "" {
if e.Code == "AccessTokenInvalid" || e.Code == "ShareLinkTokenInvalid" {
if e.Code == "AccessTokenInvalid" {
err = d.refreshToken()
} else {
err = d.getShareToken()
}
if err != nil {
return nil, err
}
return d.request(url, method, callback)
} else {
return nil, errors.New(e.Code + ": " + e.Message)
}
}
return resp.Body(), nil
}
func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) { func (d *AliyundriveShare) getFiles(fileId string) ([]File, error) {
files := make([]File, 0) files := make([]File, 0)
data := base.Json{ data := base.Json{

View File

@ -6,12 +6,16 @@ import (
_ "github.com/alist-org/alist/v3/drivers/139" _ "github.com/alist-org/alist/v3/drivers/139"
_ "github.com/alist-org/alist/v3/drivers/189" _ "github.com/alist-org/alist/v3/drivers/189"
_ "github.com/alist-org/alist/v3/drivers/189pc" _ "github.com/alist-org/alist/v3/drivers/189pc"
_ "github.com/alist-org/alist/v3/drivers/alias"
_ "github.com/alist-org/alist/v3/drivers/alist_v2" _ "github.com/alist-org/alist/v3/drivers/alist_v2"
_ "github.com/alist-org/alist/v3/drivers/alist_v3" _ "github.com/alist-org/alist/v3/drivers/alist_v3"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive" _ "github.com/alist-org/alist/v3/drivers/aliyundrive"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_open"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_share" _ "github.com/alist-org/alist/v3/drivers/aliyundrive_share"
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk" _ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo" _ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/ftp" _ "github.com/alist-org/alist/v3/drivers/ftp"
_ "github.com/alist-org/alist/v3/drivers/google_drive" _ "github.com/alist-org/alist/v3/drivers/google_drive"
_ "github.com/alist-org/alist/v3/drivers/google_photo" _ "github.com/alist-org/alist/v3/drivers/google_photo"
@ -20,15 +24,19 @@ import (
_ "github.com/alist-org/alist/v3/drivers/mediatrack" _ "github.com/alist-org/alist/v3/drivers/mediatrack"
_ "github.com/alist-org/alist/v3/drivers/mega" _ "github.com/alist-org/alist/v3/drivers/mega"
_ "github.com/alist-org/alist/v3/drivers/onedrive" _ "github.com/alist-org/alist/v3/drivers/onedrive"
_ "github.com/alist-org/alist/v3/drivers/onedrive_app"
_ "github.com/alist-org/alist/v3/drivers/pikpak" _ "github.com/alist-org/alist/v3/drivers/pikpak"
_ "github.com/alist-org/alist/v3/drivers/pikpak_share" _ "github.com/alist-org/alist/v3/drivers/pikpak_share"
_ "github.com/alist-org/alist/v3/drivers/quark" _ "github.com/alist-org/alist/v3/drivers/quark"
_ "github.com/alist-org/alist/v3/drivers/s3" _ "github.com/alist-org/alist/v3/drivers/s3"
_ "github.com/alist-org/alist/v3/drivers/seafile"
_ "github.com/alist-org/alist/v3/drivers/sftp" _ "github.com/alist-org/alist/v3/drivers/sftp"
_ "github.com/alist-org/alist/v3/drivers/smb" _ "github.com/alist-org/alist/v3/drivers/smb"
_ "github.com/alist-org/alist/v3/drivers/teambition" _ "github.com/alist-org/alist/v3/drivers/teambition"
_ "github.com/alist-org/alist/v3/drivers/terabox" _ "github.com/alist-org/alist/v3/drivers/terabox"
_ "github.com/alist-org/alist/v3/drivers/thunder" _ "github.com/alist-org/alist/v3/drivers/thunder"
_ "github.com/alist-org/alist/v3/drivers/trainbit"
_ "github.com/alist-org/alist/v3/drivers/url_tree"
_ "github.com/alist-org/alist/v3/drivers/uss" _ "github.com/alist-org/alist/v3/drivers/uss"
_ "github.com/alist-org/alist/v3/drivers/virtual" _ "github.com/alist-org/alist/v3/drivers/virtual"
_ "github.com/alist-org/alist/v3/drivers/webdav" _ "github.com/alist-org/alist/v3/drivers/webdav"

View File

@ -154,7 +154,7 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
"target": fmt.Sprintf("[\"%s\"]", file.GetPath()), "target": fmt.Sprintf("[\"%s\"]", file.GetPath()),
"dlink": "1", "dlink": "1",
"web": "5", "web": "5",
"origin": "dlna", //"origin": "dlna",
} }
_, err := d.request("https://pan.baidu.com/api/filemetas", http.MethodGet, func(req *resty.Request) { _, err := d.request("https://pan.baidu.com/api/filemetas", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param) req.SetQueryParams(param)
@ -165,7 +165,7 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
return &model.Link{ return &model.Link{
URL: resp.Info[0].Dlink, URL: resp.Info[0].Dlink,
Header: http.Header{ Header: http.Header{
"User-Agent": []string{"pan.baidu.com"}, "User-Agent": []string{"netdisk"},
}, },
}, nil }, nil
} }
@ -187,7 +187,7 @@ func (d *BaiduNetdisk) create(path string, size int64, isdir int, uploadid, bloc
params := map[string]string{ params := map[string]string{
"method": "create", "method": "create",
} }
data := fmt.Sprintf("path=%s&size=%d&isdir=%d", encodeURIComponent(path), size, isdir) data := fmt.Sprintf("path=%s&size=%d&isdir=%d&rtype=3", encodeURIComponent(path), size, isdir)
if uploadid != "" { if uploadid != "" {
data += fmt.Sprintf("&uploadid=%s&block_list=%s", uploadid, block_list) data += fmt.Sprintf("&uploadid=%s&block_list=%s", uploadid, block_list)
} }

View File

@ -9,6 +9,8 @@ import (
"math" "math"
"os" "os"
"regexp" "regexp"
"strconv"
"strings"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs" "github.com/alist-org/alist/v3/internal/errs"
@ -22,6 +24,8 @@ type BaiduPhoto struct {
Addition Addition
AccessToken string AccessToken string
Uk int64
root model.Obj
} }
func (d *BaiduPhoto) Config() driver.Config { func (d *BaiduPhoto) Config() driver.Config {
@ -33,146 +37,178 @@ func (d *BaiduPhoto) GetAddition() driver.Additional {
} }
func (d *BaiduPhoto) Init(ctx context.Context) error { func (d *BaiduPhoto) Init(ctx context.Context) error {
return d.refreshToken() if err := d.refreshToken(); err != nil {
return err
}
// root
if d.AlbumID != "" {
albumID := strings.Split(d.AlbumID, "|")[0]
album, err := d.GetAlbumDetail(ctx, albumID)
if err != nil {
return err
}
d.root = album
} else {
d.root = &Root{
Name: "root",
Modified: d.Modified,
IsFolder: true,
}
}
// uk
info, err := d.uInfo()
if err != nil {
return err
}
d.Uk, err = strconv.ParseInt(info.YouaID, 10, 64)
return err
}
func (d *BaiduPhoto) GetRoot(ctx context.Context) (model.Obj, error) {
return d.root, nil
} }
func (d *BaiduPhoto) Drop(ctx context.Context) error { func (d *BaiduPhoto) Drop(ctx context.Context) error {
d.AccessToken = ""
d.Uk = 0
d.root = nil
return nil return nil
} }
func (d *BaiduPhoto) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) { func (d *BaiduPhoto) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var objs []model.Obj
var err error var err error
if IsRoot(dir) {
var albums []Album
if d.ShowType != "root_only_file" {
albums, err = d.GetAllAlbum(ctx)
if err != nil {
return nil, err
}
}
var files []File /* album */
if d.ShowType != "root_only_album" { if album, ok := dir.(*Album); ok {
files, err = d.GetAllFile(ctx)
if err != nil {
return nil, err
}
}
alubmName := make(map[string]int)
objs, _ = utils.SliceConvert(albums, func(album Album) (model.Obj, error) {
i := alubmName[album.GetName()]
if i != 0 {
alubmName[album.GetName()]++
album.Title = fmt.Sprintf("%s(%d)", album.Title, i)
}
alubmName[album.GetName()]++
return &album, nil
})
for i := 0; i < len(files); i++ {
objs = append(objs, &files[i])
}
} else if IsAlbum(dir) || IsAlbumRoot(dir) {
var files []AlbumFile var files []AlbumFile
files, err = d.GetAllAlbumFile(ctx, splitID(dir.GetID())[0], "") files, err = d.GetAllAlbumFile(ctx, album, "")
if err != nil { if err != nil {
return nil, err return nil, err
} }
objs = make([]model.Obj, 0, len(files))
for i := 0; i < len(files); i++ { return utils.MustSliceConvert(files, func(file AlbumFile) model.Obj {
objs = append(objs, &files[i]) return &file
}), nil
}
/* root */
var albums []Album
if d.ShowType != "root_only_file" {
albums, err = d.GetAllAlbum(ctx)
if err != nil {
return nil, err
} }
} }
return objs, nil
var files []File
if d.ShowType != "root_only_album" {
files, err = d.GetAllFile(ctx)
if err != nil {
return nil, err
}
}
return append(
utils.MustSliceConvert(albums, func(album Album) model.Obj {
return &album
}),
utils.MustSliceConvert(files, func(album File) model.Obj {
return &album
})...,
), nil
} }
func (d *BaiduPhoto) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *BaiduPhoto) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if IsAlbumFile(file) { switch file := file.(type) {
return d.linkAlbum(ctx, file, args) case *File:
} else if IsFile(file) {
return d.linkFile(ctx, file, args) return d.linkFile(ctx, file, args)
case *AlbumFile:
return d.linkAlbum(ctx, file, args)
} }
return nil, errs.NotFile return nil, errs.NotFile
} }
func (d *BaiduPhoto) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error { var joinReg = regexp.MustCompile(`(?i)join:([\S]*)`)
if IsRoot(parentDir) {
code := regexp.MustCompile(`(?i)join:([\S]*)`).FindStringSubmatch(dirName) func (d *BaiduPhoto) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if _, ok := parentDir.(*Root); ok {
code := joinReg.FindStringSubmatch(dirName)
if len(code) > 1 { if len(code) > 1 {
return d.JoinAlbum(ctx, code[1]) return d.JoinAlbum(ctx, code[1])
} }
return d.CreateAlbum(ctx, dirName) return d.CreateAlbum(ctx, dirName)
} }
return errs.NotSupport return nil, errs.NotSupport
} }
func (d *BaiduPhoto) Copy(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *BaiduPhoto) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if IsFile(srcObj) { switch file := srcObj.(type) {
if IsAlbum(dstDir) { case *File:
if album, ok := dstDir.(*Album); ok {
//rootfile -> album //rootfile -> album
e := splitID(dstDir.GetID()) return d.AddAlbumFile(ctx, album, file)
return d.AddAlbumFile(ctx, e[0], e[1], srcObj.GetID())
} }
} else if IsAlbumFile(srcObj) { case *AlbumFile:
if IsRoot(dstDir) { switch album := dstDir.(type) {
case *Root:
//albumfile -> root //albumfile -> root
e := splitID(srcObj.GetID()) return d.CopyAlbumFile(ctx, file)
_, err := d.CopyAlbumFile(ctx, e[1], e[2], e[3], srcObj.GetID()) case *Album:
return err
} else if IsAlbum(dstDir) {
// albumfile -> root -> album // albumfile -> root -> album
e := splitID(srcObj.GetID()) rootfile, err := d.CopyAlbumFile(ctx, file)
file, err := d.CopyAlbumFile(ctx, e[1], e[2], e[3], srcObj.GetID())
if err != nil { if err != nil {
return err return nil, err
} }
e = splitID(dstDir.GetID()) return d.AddAlbumFile(ctx, album, rootfile)
return d.AddAlbumFile(ctx, e[0], e[1], fmt.Sprint(file.Fsid))
} }
} }
return errs.NotSupport return nil, errs.NotSupport
} }
func (d *BaiduPhoto) Move(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *BaiduPhoto) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// only moving between albums is supported // only moving between albums is supported
if IsAlbumFile(srcObj) && IsAlbum(dstDir) { if file, ok := srcObj.(*AlbumFile); ok {
err := d.Copy(ctx, srcObj, dstDir) if _, ok := dstDir.(*Album); ok {
if err != nil { newObj, err := d.Copy(ctx, srcObj, dstDir)
return err if err != nil {
return nil, err
}
// delete the original album file
_ = d.DeleteAlbumFile(ctx, file)
return newObj, nil
} }
e := splitID(srcObj.GetID())
return d.DeleteAlbumFile(ctx, e[1], e[2], srcObj.GetID())
} }
return errs.NotSupport return nil, errs.NotSupport
} }
func (d *BaiduPhoto) Rename(ctx context.Context, srcObj model.Obj, newName string) error { func (d *BaiduPhoto) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// only renaming albums is supported // only renaming albums is supported
if IsAlbum(srcObj) { if album, ok := srcObj.(*Album); ok {
e := splitID(srcObj.GetID()) return d.SetAlbumName(ctx, album, newName)
return d.SetAlbumName(ctx, e[0], e[1], newName)
} }
return errs.NotSupport return nil, errs.NotSupport
} }
func (d *BaiduPhoto) Remove(ctx context.Context, obj model.Obj) error { func (d *BaiduPhoto) Remove(ctx context.Context, obj model.Obj) error {
e := splitID(obj.GetID()) switch obj := obj.(type) {
if IsFile(obj) { case *File:
return d.DeleteFile(ctx, e[0]) return d.DeleteFile(ctx, obj)
} else if IsAlbum(obj) { case *AlbumFile:
return d.DeleteAlbum(ctx, e[0], e[1]) return d.DeleteAlbumFile(ctx, obj)
} else if IsAlbumFile(obj) { case *Album:
return d.DeleteAlbumFile(ctx, e[1], e[2], obj.GetID()) return d.DeleteAlbum(ctx, obj)
} }
return errs.NotSupport return errs.NotSupport
} }
func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error { func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// the full file md5 is required, so the stream must support io.Seek // the full file md5 is required, so the stream must support io.Seek
tempFile, err := utils.CreateTempFile(stream.GetReadCloser()) tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
if err != nil { if err != nil {
return err return nil, err
} }
defer func() { defer func() {
_ = tempFile.Close() _ = tempFile.Close()
@ -190,20 +226,19 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
sliceMd52 := md5.New() sliceMd52 := md5.New()
slicemd52Write := utils.LimitWriter(sliceMd52, SliceSize) slicemd52Write := utils.LimitWriter(sliceMd52, SliceSize)
for i := 1; i <= count; i++ { for i := 1; i <= count; i++ {
select { if utils.IsCanceled(ctx) {
case <-ctx.Done(): return nil, ctx.Err()
return ctx.Err()
default:
} }
_, err := io.CopyN(io.MultiWriter(fileMd5, sliceMd5, slicemd52Write), tempFile, DEFAULT) _, err := io.CopyN(io.MultiWriter(fileMd5, sliceMd5, slicemd52Write), tempFile, DEFAULT)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF { if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return err return nil, err
} }
sliceMD5List = append(sliceMD5List, hex.EncodeToString(sliceMd5.Sum(nil))) sliceMD5List = append(sliceMD5List, hex.EncodeToString(sliceMd5.Sum(nil)))
sliceMd5.Reset() sliceMd5.Reset()
} }
if _, err = tempFile.Seek(0, io.SeekStart); err != nil { if _, err = tempFile.Seek(0, io.SeekStart); err != nil {
return err return nil, err
} }
content_md5 := hex.EncodeToString(fileMd5.Sum(nil)) content_md5 := hex.EncodeToString(fileMd5.Sum(nil))
slice_md5 := hex.EncodeToString(sliceMd52.Sum(nil)) slice_md5 := hex.EncodeToString(sliceMd52.Sum(nil))
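
Aside: the precreate step above derives three digests from a single pass over the temp file: the whole-file MD5 (content_md5), an MD5 per fixed-size slice (the block list), and the MD5 of only the first SliceSize bytes (slice_md5) via utils.LimitWriter. The standalone sketch below mirrors that idea with a tiny slice size and a stand-in limitWriter; it is an illustration, not the driver's code, and it loops until EOF instead of precomputing the slice count.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

// limitWriter forwards at most n bytes to w and drops the rest,
// standing in for utils.LimitWriter used in the diff above.
type limitWriter struct {
	w io.Writer
	n int64
}

func (lw *limitWriter) Write(p []byte) (int, error) {
	if lw.n > 0 {
		take := int64(len(p))
		if take > lw.n {
			take = lw.n
		}
		if _, err := lw.w.Write(p[:take]); err != nil {
			return 0, err
		}
		lw.n -= take
	}
	return len(p), nil
}

const sliceSize = 4 // tiny stand-in for the driver's slice size

func main() {
	src := strings.NewReader("abcdefghij") // 10 bytes -> slices of 4, 4, 2

	fileMd5 := md5.New()
	firstSliceMd5 := md5.New()
	firstSlice := &limitWriter{w: firstSliceMd5, n: sliceSize}

	var blockList []string
	for {
		sliceMd5 := md5.New()
		n, err := io.CopyN(io.MultiWriter(fileMd5, sliceMd5, firstSlice), src, sliceSize)
		if n > 0 {
			blockList = append(blockList, hex.EncodeToString(sliceMd5.Sum(nil)))
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
	fmt.Println("content_md5:", hex.EncodeToString(fileMd5.Sum(nil)))
	fmt.Println("slice_md5  :", hex.EncodeToString(firstSliceMd5.Sum(nil)))
	fmt.Println("block_list :", blockList)
}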
@ -228,7 +263,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
r.SetFormData(params) r.SetFormData(params)
}, &precreateResp) }, &precreateResp)
if err != nil { if err != nil {
return err return nil, err
} }
switch precreateResp.ReturnType { switch precreateResp.ReturnType {
@ -241,7 +276,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return nil, ctx.Err()
} }
uploadParams["partseq"] = fmt.Sprint(i) uploadParams["partseq"] = fmt.Sprint(i)
_, err = d.Post("https://c3.pcs.baidu.com/rest/2.0/pcs/superfile2", func(r *resty.Request) { _, err = d.Post("https://c3.pcs.baidu.com/rest/2.0/pcs/superfile2", func(r *resty.Request) {
@ -250,7 +285,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
r.SetFileReader("file", stream.GetName(), io.LimitReader(tempFile, DEFAULT)) r.SetFileReader("file", stream.GetName(), io.LimitReader(tempFile, DEFAULT))
}, nil) }, nil)
if err != nil { if err != nil {
return err return nil, err
} }
up(i * 100 / count) up(i * 100 / count)
} }
@ -262,19 +297,24 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
r.SetFormData(params) r.SetFormData(params)
}, &precreateResp) }, &precreateResp)
if err != nil { if err != nil {
return err return nil, err
} }
fallthrough fallthrough
case 3: // add to album case 3: // add to album
if IsAlbum(dstDir) || IsAlbumRoot(dstDir) { rootfile := precreateResp.Data.toFile()
e := splitID(dstDir.GetID()) if album, ok := dstDir.(*Album); ok {
err = d.AddAlbumFile(ctx, e[0], e[1], fmt.Sprint(precreateResp.Data.FsID)) return d.AddAlbumFile(ctx, album, rootfile)
if err != nil {
return err
}
} }
return rootfile, nil
} }
return nil return nil, errs.NotSupport
} }
var _ driver.Driver = (*BaiduPhoto)(nil) var _ driver.Driver = (*BaiduPhoto)(nil)
var _ driver.GetRooter = (*BaiduPhoto)(nil)
var _ driver.MkdirResult = (*BaiduPhoto)(nil)
var _ driver.CopyResult = (*BaiduPhoto)(nil)
var _ driver.MoveResult = (*BaiduPhoto)(nil)
var _ driver.Remove = (*BaiduPhoto)(nil)
var _ driver.PutResult = (*BaiduPhoto)(nil)
var _ driver.RenameResult = (*BaiduPhoto)(nil)
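
Aside: the driver rewrite above drops the joinID/splitID string encoding ("{fsid}|{album_id}|{tid}|{uk}") in favour of concrete object types and type switches, plus the *Result driver interfaces so each operation can return the object it produced. A minimal, self-contained sketch of that dispatch pattern (types and messages here are made up for illustration, not AList code):

package main

import "fmt"

// Simplified stand-ins for the driver's object types.
type File struct{ Fsid int64 }
type Album struct {
	AlbumID string
	Tid     int64
}
type AlbumFile struct {
	File
	AlbumID string
	Tid     int64
	Uk      int64
}

// remove dispatches on the concrete type instead of splitting a packed string ID,
// mirroring the shape of the rewritten Remove above.
func remove(obj interface{}) string {
	switch o := obj.(type) {
	case *File:
		return fmt.Sprintf("delete root file %d", o.Fsid)
	case *AlbumFile:
		return fmt.Sprintf("delete file %d from album %s (uk %d)", o.Fsid, o.AlbumID, o.Uk)
	case *Album:
		return fmt.Sprintf("delete album %s (tid %d)", o.AlbumID, o.Tid)
	}
	return "unsupported object"
}

func main() {
	fmt.Println(remove(&File{Fsid: 1}))
	fmt.Println(remove(&AlbumFile{File: File{Fsid: 2}, AlbumID: "a1", Tid: 3, Uk: 9}))
	fmt.Println(remove(&Album{AlbumID: "a1", Tid: 3}))
}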


@ -8,10 +8,10 @@ import (
"strings" "strings"
"time" "time"
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/pkg/utils"
) )
//Tid generation // Tid generation
func getTid() string { func getTid() string {
return fmt.Sprintf("3%d%.0f", time.Now().Unix(), math.Floor(9000000*rand.Float64()+1000000)) return fmt.Sprintf("3%d%.0f", time.Now().Unix(), math.Floor(9000000*rand.Float64()+1000000))
} }
@ -26,82 +26,52 @@ func toTime(t int64) *time.Time {
return &tm return &tm
} }
func fsidsFormat(ids ...string) string { func fsidsFormatNotUk(ids ...int64) string {
var buf []string buf := utils.MustSliceConvert(ids, func(id int64) string {
for _, id := range ids { return fmt.Sprintf(`{"fsid":%d}`, id)
e := splitID(id) })
buf = append(buf, fmt.Sprintf(`{"fsid":%s,"uk":%s}`, e[0], e[3]))
}
return fmt.Sprintf("[%s]", strings.Join(buf, ",")) return fmt.Sprintf("[%s]", strings.Join(buf, ","))
} }
func fsidsFormatNotUk(ids ...string) string {
var buf []string
for _, id := range ids {
buf = append(buf, fmt.Sprintf(`{"fsid":%s}`, splitID(id)[0]))
}
return fmt.Sprintf("[%s]", strings.Join(buf, ","))
}
/*
structure
{fsid} file
{album_id}|{tid} album
{fsid}|{album_id}|{tid}|{uk} album file
*/
func splitID(id string) []string {
return strings.SplitN(id, "|", 4)[:4]
}
/*
structure
{fsid} file
{album_id}|{tid} album
{fsid}|{album_id}|{tid}|{uk} album file
*/
func joinID(ids ...interface{}) string {
idsStr := make([]string, 0, len(ids))
for _, id := range ids {
idsStr = append(idsStr, fmt.Sprint(id))
}
return strings.Join(idsStr, "|")
}
func getFileName(path string) string { func getFileName(path string) string {
return path[strings.LastIndex(path, "/")+1:] return path[strings.LastIndex(path, "/")+1:]
} }
// album
func IsAlbum(obj model.Obj) bool {
return obj.IsDir() && obj.GetPath() == "album"
}
// root directory
func IsRoot(obj model.Obj) bool {
return obj.IsDir() && obj.GetPath() == "" && obj.GetID() == ""
}
// album as root directory
func IsAlbumRoot(obj model.Obj) bool {
return obj.IsDir() && obj.GetPath() == "" && obj.GetID() != ""
}
// root file
func IsFile(obj model.Obj) bool {
return !obj.IsDir() && obj.GetPath() == "file"
}
// album file
func IsAlbumFile(obj model.Obj) bool {
return !obj.IsDir() && obj.GetPath() == "albumfile"
}
func MustString(str string, err error) string { func MustString(str string, err error) string {
return str return str
} }
/*
* handle file changes
* reuse duplicated data as much as possible
**/
func copyFile(file *AlbumFile, cf *CopyFile) *File {
return &File{
Fsid: cf.Fsid,
Path: cf.Path,
Ctime: cf.Ctime,
Mtime: cf.Ctime,
Size: file.Size,
Thumburl: file.Thumburl,
}
}
func moveFileToAlbumFile(file *File, album *Album, uk int64) *AlbumFile {
return &AlbumFile{
File: *file,
AlbumID: album.AlbumID,
Tid: album.Tid,
Uk: uk,
}
}
func renameAlbum(album *Album, newName string) *Album {
return &Album{
AlbumID: album.AlbumID,
Tid: album.Tid,
JoinTime: album.JoinTime,
CreateTime: album.CreateTime,
Title: newName,
Mtime: time.Now().Unix(),
}
}
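
Aside: a quick standalone check of the fsid-list format the album addfile/copyfile endpoints expect, as built by fsidsFormatNotUk above (plain loops instead of utils.MustSliceConvert so the snippet runs on its own):

package main

import (
	"fmt"
	"strings"
)

// fsidsFormatNotUk builds the JSON array of {"fsid":N} objects sent to the
// addfile/copyfile endpoints; same output as the helper in the diff above.
func fsidsFormatNotUk(ids ...int64) string {
	parts := make([]string, 0, len(ids))
	for _, id := range ids {
		parts = append(parts, fmt.Sprintf(`{"fsid":%d}`, id))
	}
	return fmt.Sprintf("[%s]", strings.Join(parts, ","))
}

func main() {
	fmt.Println(fsidsFormatNotUk(100, 200)) // [{"fsid":100},{"fsid":200}]
}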


@ -14,10 +14,6 @@ type Addition struct {
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"` ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
} }
func (a Addition) GetRootId() string {
return a.AlbumID
}
var config = driver.Config{ var config = driver.Config{
Name: "BaiduPhoto", Name: "BaiduPhoto",
LocalSort: true, LocalSort: true,


@ -3,6 +3,8 @@ package baiduphoto
import ( import (
"fmt" "fmt"
"time" "time"
"github.com/alist-org/alist/v3/internal/model"
) )
type TokenErrResp struct { type TokenErrResp struct {
@ -19,6 +21,12 @@ type Erron struct {
RequestID int `json:"request_id"` RequestID int `json:"request_id"`
} }
// user info
type UInfo struct {
// uk
YouaID string `json:"youa_id"`
}
type Page struct { type Page struct {
HasMore int `json:"has_more"` HasMore int `json:"has_more"`
Cursor string `json:"cursor"` Cursor string `json:"cursor"`
@ -28,6 +36,8 @@ func (p Page) HasNextPage() bool {
return p.HasMore == 1 return p.HasMore == 1
} }
type Root = model.Object
type ( type (
FileListResp struct { FileListResp struct {
Page Page
@ -55,8 +65,8 @@ func (c *File) ModTime() time.Time {
return *c.parseTime return *c.parseTime
} }
func (c *File) IsDir() bool { return false } func (c *File) IsDir() bool { return false }
func (c *File) GetID() string { return joinID(c.Fsid) } func (c *File) GetID() string { return "" }
func (c *File) GetPath() string { return "file" } func (c *File) GetPath() string { return "" }
func (c *File) Thumb() string { func (c *File) Thumb() string {
if len(c.Thumburl) > 0 { if len(c.Thumburl) > 0 {
return c.Thumburl[0] return c.Thumburl[0]
@ -108,11 +118,8 @@ func (a *Album) ModTime() time.Time {
return *a.parseTime return *a.parseTime
} }
func (a *Album) IsDir() bool { return true } func (a *Album) IsDir() bool { return true }
func (a *Album) GetID() string { return joinID(a.AlbumID, a.Tid) } func (a *Album) GetID() string { return "" }
func (a *Album) GetPath() string { return "album" } func (a *Album) GetPath() string { return "" }
func (af *AlbumFile) GetID() string { return joinID(af.Fsid, af.AlbumID, af.Tid, af.Uk) }
func (c *AlbumFile) GetPath() string { return "albumfile" }
type ( type (
CopyFileResp struct { CopyFileResp struct {
@ -120,7 +127,8 @@ type (
} }
CopyFile struct { CopyFile struct {
FromFsid int64 `json:"from_fsid"` // source ID FromFsid int64 `json:"from_fsid"` // source ID
Fsid int64 `json:"fsid"` // destination ID Ctime int64 `json:"ctime"`
Fsid int64 `json:"fsid"` // destination ID
Path string `json:"path"` Path string `json:"path"`
ShootTime int `json:"shoot_time"` ShootTime int `json:"shoot_time"`
} }
@ -134,8 +142,8 @@ type (
Md5 string `json:"md5"` Md5 string `json:"md5"`
ServerFilename string `json:"server_filename"` ServerFilename string `json:"server_filename"`
Path string `json:"path"` Path string `json:"path"`
Ctime int `json:"ctime"` Ctime int64 `json:"ctime"`
Mtime int `json:"mtime"` Mtime int64 `json:"mtime"`
Isdir int `json:"isdir"` Isdir int `json:"isdir"`
Category int `json:"category"` Category int `json:"category"`
ServerMd5 string `json:"server_md5"` ServerMd5 string `json:"server_md5"`
@ -158,6 +166,18 @@ type (
} }
) )
func (f *UploadFile) toFile() *File {
return &File{
Fsid: f.FsID,
Path: f.Path,
Size: f.Size,
Ctime: f.Ctime,
Mtime: f.Mtime,
Thumburl: nil,
}
}
/* shared album section */
type InviteResp struct { type InviteResp struct {
Pdata struct { Pdata struct {
// invite code // invite code
@ -167,3 +187,9 @@ type InviteResp struct {
ShareID string `json:"share_id"` ShareID string `json:"share_id"`
} `json:"pdata"` } `json:"pdata"`
} }
/* join album section */
type JoinOrCreateAlbumResp struct {
AlbumID string `json:"album_id"`
AlreadyExists int `json:"already_exists"`
}


@ -5,7 +5,6 @@ import (
"errors" "errors"
"fmt" "fmt"
"net/http" "net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/errs" "github.com/alist-org/alist/v3/internal/errs"
@ -17,6 +16,7 @@ import (
const ( const (
API_URL = "https://photo.baidu.com/youai" API_URL = "https://photo.baidu.com/youai"
USER_API_URL = API_URL + "/user/v1"
ALBUM_API_URL = API_URL + "/album/v1" ALBUM_API_URL = API_URL + "/album/v1"
FILE_API_URL_V1 = API_URL + "/file/v1" FILE_API_URL_V1 = API_URL + "/file/v1"
FILE_API_URL_V2 = API_URL + "/file/v2" FILE_API_URL_V2 = API_URL + "/file/v2"
@ -26,9 +26,9 @@ var (
ErrNotSupportName = errors.New("only chinese and english, numbers and underscores are supported, and the length is no more than 20") ErrNotSupportName = errors.New("only chinese and english, numbers and underscores are supported, and the length is no more than 20")
) )
func (p *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) { func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R(). req := base.RestyClient.R().
SetQueryParam("access_token", p.AccessToken) SetQueryParam("access_token", d.AccessToken)
if callback != nil { if callback != nil {
callback(req) callback(req)
} }
@ -49,7 +49,7 @@ func (p *BaiduPhoto) Request(furl string, method string, callback base.ReqCallba
case 50820: case 50820:
return nil, fmt.Errorf("no shared albums found") return nil, fmt.Errorf("no shared albums found")
case -6: case -6:
if err = p.refreshToken(); err != nil { if err = d.refreshToken(); err != nil {
return nil, err return nil, err
} }
default: default:
@ -58,15 +58,15 @@ func (p *BaiduPhoto) Request(furl string, method string, callback base.ReqCallba
return res.Body(), nil return res.Body(), nil
} }
func (p *BaiduPhoto) refreshToken() error { func (d *BaiduPhoto) refreshToken() error {
u := "https://openapi.baidu.com/oauth/2.0/token" u := "https://openapi.baidu.com/oauth/2.0/token"
var resp base.TokenResp var resp base.TokenResp
var e TokenErrResp var e TokenErrResp
_, err := base.RestyClient.R().SetResult(&resp).SetError(&e).SetQueryParams(map[string]string{ _, err := base.RestyClient.R().SetResult(&resp).SetError(&e).SetQueryParams(map[string]string{
"grant_type": "refresh_token", "grant_type": "refresh_token",
"refresh_token": p.RefreshToken, "refresh_token": d.RefreshToken,
"client_id": p.ClientID, "client_id": d.ClientID,
"client_secret": p.ClientSecret, "client_secret": d.ClientSecret,
}).Get(u) }).Get(u)
if err != nil { if err != nil {
return err return err
@ -77,25 +77,25 @@ func (p *BaiduPhoto) refreshToken() error {
if resp.RefreshToken == "" { if resp.RefreshToken == "" {
return errs.EmptyToken return errs.EmptyToken
} }
p.AccessToken, p.RefreshToken = resp.AccessToken, resp.RefreshToken d.AccessToken, d.RefreshToken = resp.AccessToken, resp.RefreshToken
op.MustSaveDriverStorage(p) op.MustSaveDriverStorage(d)
return nil return nil
} }
func (p *BaiduPhoto) Get(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) { func (d *BaiduPhoto) Get(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
return p.Request(furl, http.MethodGet, callback, resp) return d.Request(furl, http.MethodGet, callback, resp)
} }
func (p *BaiduPhoto) Post(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) { func (d *BaiduPhoto) Post(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
return p.Request(furl, http.MethodPost, callback, resp) return d.Request(furl, http.MethodPost, callback, resp)
} }
// get all files // get all files
func (p *BaiduPhoto) GetAllFile(ctx context.Context) (files []File, err error) { func (d *BaiduPhoto) GetAllFile(ctx context.Context) (files []File, err error) {
var cursor string var cursor string
for { for {
var resp FileListResp var resp FileListResp
_, err = p.Get(FILE_API_URL_V1+"/list", func(r *resty.Request) { _, err = d.Get(FILE_API_URL_V1+"/list", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"need_thumbnail": "1", "need_thumbnail": "1",
@ -116,22 +116,22 @@ func (p *BaiduPhoto) GetAllFile(ctx context.Context) (files []File, err error) {
} }
// delete a root file // delete a root file
func (p *BaiduPhoto) DeleteFile(ctx context.Context, fileIDs ...string) error { func (d *BaiduPhoto) DeleteFile(ctx context.Context, file *File) error {
_, err := p.Get(FILE_API_URL_V1+"/delete", func(req *resty.Request) { _, err := d.Get(FILE_API_URL_V1+"/delete", func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetQueryParams(map[string]string{ req.SetQueryParams(map[string]string{
"fsid_list": fmt.Sprintf("[%s]", strings.Join(fileIDs, ",")), "fsid_list": fmt.Sprintf("[%d]", file.Fsid),
}) })
}, nil) }, nil)
return err return err
} }
// get all albums // get all albums
func (p *BaiduPhoto) GetAllAlbum(ctx context.Context) (albums []Album, err error) { func (d *BaiduPhoto) GetAllAlbum(ctx context.Context) (albums []Album, err error) {
var cursor string var cursor string
for { for {
var resp AlbumListResp var resp AlbumListResp
_, err = p.Get(ALBUM_API_URL+"/list", func(r *resty.Request) { _, err = d.Get(ALBUM_API_URL+"/list", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"need_amount": "1", "need_amount": "1",
@ -156,14 +156,14 @@ func (p *BaiduPhoto) GetAllAlbum(ctx context.Context) (albums []Album, err error
} }
// get all files in an album // get all files in an album
func (p *BaiduPhoto) GetAllAlbumFile(ctx context.Context, albumID, passwd string) (files []AlbumFile, err error) { func (d *BaiduPhoto) GetAllAlbumFile(ctx context.Context, album *Album, passwd string) (files []AlbumFile, err error) {
var cursor string var cursor string
for { for {
var resp AlbumFileListResp var resp AlbumFileListResp
_, err = p.Get(ALBUM_API_URL+"/listfile", func(r *resty.Request) { _, err = d.Get(ALBUM_API_URL+"/listfile", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"album_id": albumID, "album_id": album.AlbumID,
"need_amount": "1", "need_amount": "1",
"limit": "1000", "limit": "1000",
"passwd": passwd, "passwd": passwd,
@ -187,45 +187,52 @@ func (p *BaiduPhoto) GetAllAlbumFile(ctx context.Context, albumID, passwd string
} }
// create an album // create an album
func (p *BaiduPhoto) CreateAlbum(ctx context.Context, name string) error { func (d *BaiduPhoto) CreateAlbum(ctx context.Context, name string) (*Album, error) {
if !checkName(name) { if !checkName(name) {
return ErrNotSupportName return nil, ErrNotSupportName
} }
_, err := p.Post(ALBUM_API_URL+"/create", func(r *resty.Request) { var resp JoinOrCreateAlbumResp
r.SetContext(ctx) _, err := d.Post(ALBUM_API_URL+"/create", func(r *resty.Request) {
r.SetContext(ctx).SetResult(&resp)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"title": name, "title": name,
"tid": getTid(), "tid": getTid(),
"source": "0", "source": "0",
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
return d.GetAlbumDetail(ctx, resp.AlbumID)
} }
// rename an album // rename an album
func (p *BaiduPhoto) SetAlbumName(ctx context.Context, albumID, tID, name string) error { func (d *BaiduPhoto) SetAlbumName(ctx context.Context, album *Album, name string) (*Album, error) {
if !checkName(name) { if !checkName(name) {
return ErrNotSupportName return nil, ErrNotSupportName
} }
_, err := p.Post(ALBUM_API_URL+"/settitle", func(r *resty.Request) { _, err := d.Post(ALBUM_API_URL+"/settitle", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetFormData(map[string]string{ r.SetFormData(map[string]string{
"title": name, "title": name,
"album_id": albumID, "album_id": album.AlbumID,
"tid": tID, "tid": fmt.Sprint(album.Tid),
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
return renameAlbum(album, name), nil
} }
// delete an album // delete an album
func (p *BaiduPhoto) DeleteAlbum(ctx context.Context, albumID, tID string) error { func (d *BaiduPhoto) DeleteAlbum(ctx context.Context, album *Album) error {
_, err := p.Post(ALBUM_API_URL+"/delete", func(r *resty.Request) { _, err := d.Post(ALBUM_API_URL+"/delete", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetFormData(map[string]string{ r.SetFormData(map[string]string{
"album_id": albumID, "album_id": album.AlbumID,
"tid": tID, "tid": fmt.Sprint(album.Tid),
"delete_origin_image": "0", // 是否删除原图 0 不删除 1 删除 "delete_origin_image": "0", // 是否删除原图 0 不删除 1 删除
}) })
}, nil) }, nil)
@ -233,13 +240,13 @@ func (p *BaiduPhoto) DeleteAlbum(ctx context.Context, albumID, tID string) error
} }
// delete an album file // delete an album file
func (p *BaiduPhoto) DeleteAlbumFile(ctx context.Context, albumID, tID string, fileIDs ...string) error { func (d *BaiduPhoto) DeleteAlbumFile(ctx context.Context, file *AlbumFile) error {
_, err := p.Post(ALBUM_API_URL+"/delfile", func(r *resty.Request) { _, err := d.Post(ALBUM_API_URL+"/delfile", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetFormData(map[string]string{ r.SetFormData(map[string]string{
"album_id": albumID, "album_id": fmt.Sprint(file.AlbumID),
"tid": tID, "tid": fmt.Sprint(file.Tid),
"list": fsidsFormat(fileIDs...), "list": fmt.Sprintf(`[{"fsid":%d,"uk":%d}]`, file.Fsid, file.Uk),
"del_origin": "0", // 是否删除原图 0 不删除 1 删除 "del_origin": "0", // 是否删除原图 0 不删除 1 删除
}) })
}, nil) }, nil)
@ -247,41 +254,44 @@ func (p *BaiduPhoto) DeleteAlbumFile(ctx context.Context, albumID, tID string, f
} }
// add a file to an album // add a file to an album
func (p *BaiduPhoto) AddAlbumFile(ctx context.Context, albumID, tID string, fileIDs ...string) error { func (d *BaiduPhoto) AddAlbumFile(ctx context.Context, album *Album, file *File) (*AlbumFile, error) {
_, err := p.Get(ALBUM_API_URL+"/addfile", func(r *resty.Request) { _, err := d.Get(ALBUM_API_URL+"/addfile", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"album_id": albumID, "album_id": fmt.Sprint(album.AlbumID),
"tid": tID, "tid": fmt.Sprint(album.Tid),
"list": fsidsFormatNotUk(fileIDs...), "list": fsidsFormatNotUk(file.Fsid),
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
return moveFileToAlbumFile(file, album, d.Uk), nil
} }
// save an album file as a root file // save an album file as a root file
func (p *BaiduPhoto) CopyAlbumFile(ctx context.Context, albumID, tID, uk string, fileID ...string) (*CopyFile, error) { func (d *BaiduPhoto) CopyAlbumFile(ctx context.Context, file *AlbumFile) (*File, error) {
var resp CopyFileResp var resp CopyFileResp
_, err := p.Post(ALBUM_API_URL+"/copyfile", func(r *resty.Request) { _, err := d.Post(ALBUM_API_URL+"/copyfile", func(r *resty.Request) {
r.SetContext(ctx) r.SetContext(ctx)
r.SetFormData(map[string]string{ r.SetFormData(map[string]string{
"album_id": albumID, "album_id": file.AlbumID,
"tid": tID, "tid": fmt.Sprint(file.Tid),
"uk": uk, "uk": fmt.Sprint(file.Uk),
"list": fsidsFormatNotUk(fileID...), "list": fsidsFormatNotUk(file.Fsid),
}) })
r.SetResult(&resp) r.SetResult(&resp)
}, nil) }, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return &resp.List[0], nil return copyFile(file, &resp.List[0]), nil
} }
// join an album // join an album
func (p *BaiduPhoto) JoinAlbum(ctx context.Context, code string) error { func (d *BaiduPhoto) JoinAlbum(ctx context.Context, code string) (*Album, error) {
var resp InviteResp var resp InviteResp
_, err := p.Get(ALBUM_API_URL+"/querypcode", func(req *resty.Request) { _, err := d.Get(ALBUM_API_URL+"/querypcode", func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetQueryParams(map[string]string{ req.SetQueryParams(map[string]string{
"pcode": code, "pcode": code,
@ -289,18 +299,37 @@ func (p *BaiduPhoto) JoinAlbum(ctx context.Context, code string) error {
}) })
}, &resp) }, &resp)
if err != nil { if err != nil {
return err return nil, err
} }
_, err = p.Get(ALBUM_API_URL+"/join", func(req *resty.Request) { var resp2 JoinOrCreateAlbumResp
_, err = d.Get(ALBUM_API_URL+"/join", func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetQueryParams(map[string]string{ req.SetQueryParams(map[string]string{
"invite_code": resp.Pdata.InviteCode, "invite_code": resp.Pdata.InviteCode,
}) })
}, nil) }, &resp2)
return err if err != nil {
return nil, err
}
return d.GetAlbumDetail(ctx, resp2.AlbumID)
} }
func (d *BaiduPhoto) linkAlbum(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { // get album details
func (d *BaiduPhoto) GetAlbumDetail(ctx context.Context, albumID string) (*Album, error) {
var album Album
_, err := d.Get(ALBUM_API_URL+"/detail", func(req *resty.Request) {
req.SetContext(ctx).SetResult(&album)
req.SetQueryParams(map[string]string{
"album_id": albumID,
})
}, &album)
if err != nil {
return nil, err
}
return &album, nil
}
func (d *BaiduPhoto) linkAlbum(ctx context.Context, file *AlbumFile, args model.LinkArgs) (*model.Link, error) {
headers := map[string]string{ headers := map[string]string{
"User-Agent": base.UserAgent, "User-Agent": base.UserAgent,
} }
@ -311,16 +340,15 @@ func (d *BaiduPhoto) linkAlbum(ctx context.Context, file model.Obj, args model.L
headers["X-Forwarded-For"] = args.IP headers["X-Forwarded-For"] = args.IP
} }
e := splitID(file.GetID())
res, err := base.NoRedirectClient.R(). res, err := base.NoRedirectClient.R().
SetContext(ctx). SetContext(ctx).
SetHeaders(headers). SetHeaders(headers).
SetQueryParams(map[string]string{ SetQueryParams(map[string]string{
"access_token": d.AccessToken, "access_token": d.AccessToken,
"fsid": e[0], "fsid": fmt.Sprint(file.Fsid),
"album_id": e[1], "album_id": file.AlbumID,
"tid": e[2], "tid": fmt.Sprint(file.Tid),
"uk": e[3], "uk": fmt.Sprint(file.Uk),
}). }).
Head(ALBUM_API_URL + "/download") Head(ALBUM_API_URL + "/download")
@ -328,19 +356,17 @@ func (d *BaiduPhoto) linkAlbum(ctx context.Context, file model.Obj, args model.L
return nil, err return nil, err
} }
//exp := 8 * time.Hour
link := &model.Link{ link := &model.Link{
URL: res.Header().Get("location"), URL: res.Header().Get("location"),
Header: http.Header{ Header: http.Header{
"User-Agent": []string{headers["User-Agent"]}, "User-Agent": []string{headers["User-Agent"]},
"Referer": []string{"https://photo.baidu.com/"}, "Referer": []string{"https://photo.baidu.com/"},
}, },
//Expiration: &exp,
} }
return link, nil return link, nil
} }
func (d *BaiduPhoto) linkFile(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *BaiduPhoto) linkFile(ctx context.Context, file *File, args model.LinkArgs) (*model.Link, error) {
headers := map[string]string{ headers := map[string]string{
"User-Agent": base.UserAgent, "User-Agent": base.UserAgent,
} }
@ -358,21 +384,31 @@ func (d *BaiduPhoto) linkFile(ctx context.Context, file model.Obj, args model.Li
r.SetContext(ctx) r.SetContext(ctx)
r.SetHeaders(headers) r.SetHeaders(headers)
r.SetQueryParams(map[string]string{ r.SetQueryParams(map[string]string{
"fsid": splitID(file.GetID())[0], "fsid": fmt.Sprint(file.Fsid),
}) })
}, &downloadUrl) }, &downloadUrl)
if err != nil { if err != nil {
return nil, err return nil, err
} }
//exp := 8 * time.Hour
link := &model.Link{ link := &model.Link{
URL: downloadUrl.Dlink, URL: downloadUrl.Dlink,
Header: http.Header{ Header: http.Header{
"User-Agent": []string{headers["User-Agent"]}, "User-Agent": []string{headers["User-Agent"]},
"Referer": []string{"https://photo.baidu.com/"}, "Referer": []string{"https://photo.baidu.com/"},
}, },
//Expiration: &exp,
} }
return link, nil return link, nil
} }
// get uk
func (d *BaiduPhoto) uInfo() (*UInfo, error) {
var info UInfo
_, err := d.Get(USER_API_URL+"/getuinfo", func(req *resty.Request) {
}, &info)
if err != nil {
return nil, err
}
return &info, nil
}


@ -0,0 +1,251 @@
package baidu_share
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"path"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
)
type BaiduShare struct {
model.Storage
Addition
client *resty.Client
info struct {
Root string
Seckey string
Shareid string
Uk string
}
}
func (d *BaiduShare) Config() driver.Config {
return config
}
func (d *BaiduShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *BaiduShare) Init(ctx context.Context) error {
// TODO login / refresh token
//op.MustSaveDriverStorage(d)
d.client = resty.New().
SetBaseURL("https://pan.baidu.com").
SetHeader("User-Agent", "netdisk").
SetCookie(&http.Cookie{Name: "BDUSS", Value: d.BDUSS}).
SetCookie(&http.Cookie{Name: "ndut_fmt"})
respJson := struct {
Errno int64 `json:"errno"`
Data struct {
List [1]struct {
Path string `json:"path"`
} `json:"list"`
Uk json.Number `json:"uk"`
Shareid json.Number `json:"shareid"`
Seckey string `json:"seckey"`
} `json:"data"`
}{}
resp, err := d.client.R().
SetBody(url.Values{
"pwd": {d.Pwd},
"root": {"1"},
"shorturl": {d.Surl},
}.Encode()).
SetResult(&respJson).
Post("share/wxlist?channel=weixin&version=2.2.2&clienttype=25&web=1")
if err == nil {
if resp.IsSuccess() && respJson.Errno == 0 {
d.info.Root = path.Dir(respJson.Data.List[0].Path)
d.info.Seckey = respJson.Data.Seckey
d.info.Shareid = respJson.Data.Shareid.String()
d.info.Uk = respJson.Data.Uk.String()
} else {
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
}
}
return err
}
func (d *BaiduShare) Drop(ctx context.Context) error {
return nil
}
func (d *BaiduShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
// TODO return the files list, required
reqDir := dir.GetPath()
isRoot := "0"
if reqDir == d.RootFolderPath {
reqDir = path.Join(d.info.Root, reqDir)
}
if reqDir == d.info.Root {
isRoot = "1"
}
objs := []model.Obj{}
var err error
var page uint64 = 1
more := true
for more && err == nil {
respJson := struct {
Errno int64 `json:"errno"`
Data struct {
More bool `json:"has_more"`
List []struct {
Fsid json.Number `json:"fs_id"`
Isdir json.Number `json:"isdir"`
Path string `json:"path"`
Name string `json:"server_filename"`
Mtime json.Number `json:"server_mtime"`
Size json.Number `json:"size"`
} `json:"list"`
} `json:"data"`
}{}
resp, e := d.client.R().
SetBody(url.Values{
"dir": {reqDir},
"num": {"1000"},
"order": {"time"},
"page": {fmt.Sprint(page)},
"pwd": {d.Pwd},
"root": {isRoot},
"shorturl": {d.Surl},
}.Encode()).
SetResult(&respJson).
Post("share/wxlist?channel=weixin&version=2.2.2&clienttype=25&web=1")
err = e
if err == nil {
if resp.IsSuccess() && respJson.Errno == 0 {
page++
more = respJson.Data.More
for _, v := range respJson.Data.List {
size, _ := v.Size.Int64()
mtime, _ := v.Mtime.Int64()
objs = append(objs, &model.Object{
ID: v.Fsid.String(),
Path: v.Path,
Name: v.Name,
Size: size,
Modified: time.Unix(mtime, 0),
IsFolder: v.Isdir.String() == "1",
})
}
} else {
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
}
}
}
return objs, err
}
func (d *BaiduShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
// TODO return link of file, required
link := model.Link{Header: d.client.Header}
sign := ""
stamp := ""
signJson := struct {
Errno int64 `json:"errno"`
Data struct {
Stamp json.Number `json:"timestamp"`
Sign string `json:"sign"`
} `json:"data"`
}{}
resp, err := d.client.R().
SetQueryParam("surl", d.Surl).
SetResult(&signJson).
Get("share/tplconfig?fields=sign,timestamp&channel=chunlei&web=1&app_id=250528&clienttype=0")
if err == nil {
if resp.IsSuccess() && signJson.Errno == 0 {
stamp = signJson.Data.Stamp.String()
sign = signJson.Data.Sign
} else {
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
}
}
if err == nil {
respJson := struct {
Errno int64 `json:"errno"`
List [1]struct {
Dlink string `json:"dlink"`
} `json:"list"`
}{}
resp, err = d.client.R().
SetQueryParam("sign", sign).
SetQueryParam("timestamp", stamp).
SetBody(url.Values{
"encrypt": {"0"},
"extra": {fmt.Sprintf(`{"sekey":"%s"}`, d.info.Seckey)},
"fid_list": {fmt.Sprintf("[%s]", file.GetID())},
"primaryid": {d.info.Shareid},
"product": {"share"},
"type": {"nolimit"},
"uk": {d.info.Uk},
}.Encode()).
SetResult(&respJson).
Post("api/sharedownload?app_id=250528&channel=chunlei&clienttype=12&web=1")
if err == nil {
if resp.IsSuccess() && respJson.Errno == 0 && respJson.List[0].Dlink != "" {
link.URL = respJson.List[0].Dlink
} else {
err = fmt.Errorf(" %s; %s; ", resp.Status(), resp.Body())
}
}
if err == nil {
resp, err = d.client.R().
SetDoNotParseResponse(true).
Get(link.URL)
if err == nil {
defer resp.RawBody().Close()
if resp.IsError() {
byt, _ := io.ReadAll(resp.RawBody())
err = fmt.Errorf(" %s; %s; ", resp.Status(), byt)
}
}
}
}
return &link, err
}
func (d *BaiduShare) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
// TODO create folder, optional
return errs.NotSupport
}
func (d *BaiduShare) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
// TODO move obj, optional
return errs.NotSupport
}
func (d *BaiduShare) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
// TODO rename obj, optional
return errs.NotSupport
}
func (d *BaiduShare) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// TODO copy obj, optional
return errs.NotSupport
}
func (d *BaiduShare) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotSupport
}
func (d *BaiduShare) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// TODO upload file, optional
return errs.NotSupport
}
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*BaiduShare)(nil)


@ -0,0 +1,37 @@
package baidu_share
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// driver.RootID
// define other
// Field string `json:"field" type:"select" required:"true" options:"a,b,c" default:"a"`
Surl string `json:"surl"`
Pwd string `json:"pwd"`
BDUSS string `json:"BDUSS"`
}
var config = driver.Config{
Name: "BaiduShare",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &BaiduShare{}
})
}


@ -0,0 +1 @@
package baidu_share


@ -0,0 +1,3 @@
package baidu_share
// do others that not defined in Driver interface


@ -23,8 +23,9 @@ func init() {
} }
func NewRestyClient() *resty.Client { func NewRestyClient() *resty.Client {
return resty.New(). client := resty.New().
SetHeader("user-agent", UserAgent). SetHeader("user-agent", UserAgent).
SetRetryCount(3). SetRetryCount(3).
SetTimeout(DefaultTimeout) SetTimeout(DefaultTimeout)
return client
} }

drivers/base/util.go (new file, 30 lines)

@ -0,0 +1,30 @@
package base
import (
"io"
"net/http"
"strconv"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
)
func HandleRange(link *model.Link, file io.ReadSeekCloser, header http.Header, size int64) {
if header.Get("Range") != "" {
r, err := http_range.ParseRange(header.Get("Range"), size)
if err == nil && len(r) > 0 {
_, err := file.Seek(r[0].Start, io.SeekStart)
if err == nil {
link.Data = utils.NewLimitReadCloser(file, func() error {
return file.Close()
}, r[0].Length)
link.Status = http.StatusPartialContent
link.Header = http.Header{
"Content-Range": []string{r[0].ContentRange(size)},
"Content-Length": []string{strconv.FormatInt(r[0].Length, 10)},
}
}
}
}
}
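
Aside: HandleRange above parses the request's Range header, seeks the ReadSeekCloser to the first range's start and exposes only its length, setting 206 plus Content-Range/Content-Length on the link. A stdlib-only illustration of that seek-and-limit core (it skips the header plumbing and the http_range parser):

package main

import (
	"fmt"
	"io"
	"strings"
)

// nopCloser turns a ReadSeeker into a ReadSeekCloser for the demo.
type nopCloser struct{ io.ReadSeeker }

func (nopCloser) Close() error { return nil }

// serveRange seeks to the range start and limits the reader to the range length,
// which is what HandleRange does with the first parsed range.
func serveRange(f io.ReadSeekCloser, start, length int64) (io.Reader, error) {
	if _, err := f.Seek(start, io.SeekStart); err != nil {
		return nil, err
	}
	return io.LimitReader(f, length), nil
}

func main() {
	f := nopCloser{strings.NewReader("hello, range requests")}
	r, err := serveRange(f, 7, 5) // Range: bytes=7-11
	if err != nil {
		panic(err)
	}
	b, _ := io.ReadAll(r)
	fmt.Printf("%s\n", b) // prints "range"
}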

drivers/cloudreve/driver.go (new file, 184 lines)

@ -0,0 +1,184 @@
package cloudreve
import (
"context"
"io"
"net/http"
"strconv"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type Cloudreve struct {
model.Storage
Addition
Cookie string
}
func (d *Cloudreve) Config() driver.Config {
return config
}
func (d *Cloudreve) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Cloudreve) Init(ctx context.Context) error {
return d.login()
}
func (d *Cloudreve) Drop(ctx context.Context) error {
d.Cookie = ""
return nil
}
func (d *Cloudreve) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var r DirectoryResp
err := d.request(http.MethodGet, "/directory"+dir.GetPath(), nil, &r)
if err != nil {
return nil, err
}
return utils.SliceConvert(r.Objects, func(src Object) (model.Obj, error) {
return objectToObj(src), nil
})
}
func (d *Cloudreve) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var dUrl string
err := d.request(http.MethodPut, "/file/download/"+file.GetID(), nil, &dUrl)
if err != nil {
return nil, err
}
return &model.Link{
URL: dUrl,
}, nil
}
func (d *Cloudreve) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPut, "/directory", func(req *resty.Request) {
req.SetBody(base.Json{
"path": parentDir.GetPath() + "/" + dirName,
})
}, nil)
}
func (d *Cloudreve) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
body := base.Json{
"action": "move",
"src_dir": srcObj.GetPath(),
"dst": dstDir.GetPath(),
"src": convertSrc(srcObj),
}
return d.request(http.MethodPatch, "/object", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Cloudreve) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
body := base.Json{
"action": "rename",
"new_name": newName,
"src": convertSrc(srcObj),
}
return d.request(http.MethodPatch, "/object/rename", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Cloudreve) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
body := base.Json{
"src_dir": srcObj.GetPath(),
"dst": dstDir.GetPath(),
"src": convertSrc(srcObj),
}
return d.request(http.MethodPost, "/object/copy", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Cloudreve) Remove(ctx context.Context, obj model.Obj) error {
body := convertSrc(obj)
err := d.request(http.MethodDelete, "/object", func(req *resty.Request) {
req.SetBody(body)
}, nil)
return err
}
func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
if stream.GetReadCloser() == http.NoBody {
return d.create(ctx, dstDir, stream)
}
var r DirectoryResp
err := d.request(http.MethodGet, "/directory"+dstDir.GetPath(), nil, &r)
if err != nil {
return err
}
uploadBody := base.Json{
"path": dstDir.GetPath(),
"size": stream.GetSize(),
"name": stream.GetName(),
"policy_id": r.Policy.Id,
"last_modified": stream.ModTime().Unix(),
}
var u UploadInfo
err = d.request(http.MethodPut, "/file/upload", func(req *resty.Request) {
req.SetBody(uploadBody)
}, &u)
if err != nil {
return err
}
var chunkSize = u.ChunkSize
var buf []byte
var chunk int
for {
var n int
buf = make([]byte, chunkSize)
n, err = io.ReadAtLeast(stream, buf, chunkSize)
if err != nil && err != io.ErrUnexpectedEOF {
if err == io.EOF {
return nil
}
return err
}
if n == 0 {
break
}
buf = buf[:n]
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetHeader("Content-Length", strconv.Itoa(n))
req.SetBody(buf)
}, nil)
if err != nil {
break
}
chunk++
}
return err
}
func (d *Cloudreve) create(ctx context.Context, dir model.Obj, file model.Obj) error {
body := base.Json{"path": dir.GetPath() + "/" + file.GetName()}
if file.IsDir() {
err := d.request(http.MethodPut, "directory", func(req *resty.Request) {
req.SetBody(body)
}, nil)
return err
}
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
//func (d *Cloudreve) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Cloudreve)(nil)
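
Aside: the Put method above streams the file to Cloudreve in chunkSize pieces, reading each with io.ReadAtLeast and POSTing it under its sequence number. A self-contained sketch of that loop with the HTTP call swapped for a callback (function and variable names here are illustrative, not the driver's):

package main

import (
	"bytes"
	"fmt"
	"io"
)

// uploadChunks reads r in fixed-size chunks and hands each chunk plus its index
// to upload, stopping cleanly when the stream ends on a chunk boundary or with a
// short final chunk -- the same shape as the loop in Put above.
func uploadChunks(r io.Reader, chunkSize int, upload func(chunk int, data []byte) error) error {
	for chunk := 0; ; chunk++ {
		buf := make([]byte, chunkSize)
		n, err := io.ReadAtLeast(r, buf, chunkSize)
		if err != nil && err != io.ErrUnexpectedEOF {
			if err == io.EOF {
				return nil // nothing left to send
			}
			return err
		}
		if n == 0 {
			return nil
		}
		if err := upload(chunk, buf[:n]); err != nil {
			return err
		}
	}
}

func main() {
	data := bytes.NewReader([]byte("0123456789abcdef0")) // 17 bytes
	_ = uploadChunks(data, 8, func(chunk int, b []byte) error {
		fmt.Printf("chunk %d: %q\n", chunk, b)
		return nil
	})
}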

drivers/cloudreve/meta.go (new file, 26 lines)

@ -0,0 +1,26 @@
package cloudreve
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// define other
Address string `json:"address" required:"true"`
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
}
var config = driver.Config{
Name: "Cloudreve",
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Cloudreve{}
})
}


@ -0,0 +1,60 @@
package cloudreve
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Resp struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data interface{} `json:"data"`
}
type Policy struct {
Id string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MaxSize int `json:"max_size"`
FileType []string `json:"file_type"`
}
type UploadInfo struct {
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
}
type DirectoryResp struct {
Parent string `json:"parent"`
Objects []Object `json:"objects"`
Policy Policy `json:"policy"`
}
type Object struct {
Id string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Pic string `json:"pic"`
Size int `json:"size"`
Type string `json:"type"`
Date time.Time `json:"date"`
CreateDate time.Time `json:"create_date"`
SourceEnabled bool `json:"source_enabled"`
}
func objectToObj(f Object) *model.Object {
return &model.Object{
ID: f.Id,
Name: f.Name,
Size: int64(f.Size),
Modified: f.Date,
IsFolder: f.Type == "dir",
}
}
type Config struct {
LoginCaptcha bool `json:"loginCaptcha"`
CaptchaType string `json:"captcha_type"`
}

drivers/cloudreve/util.go (new file, 146 lines)

@ -0,0 +1,146 @@
package cloudreve
import (
"encoding/base64"
"errors"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/cookie"
"github.com/go-resty/resty/v2"
json "github.com/json-iterator/go"
jsoniter "github.com/json-iterator/go"
)
// do others that not defined in Driver interface
const loginPath = "/user/session"
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
u := d.Address + "/api/v3" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "application/json, text/plain, */*",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
})
var r Resp
req.SetResult(&r)
if callback != nil {
callback(req)
}
resp, err := req.Execute(method, u)
if err != nil {
return err
}
if !resp.IsSuccess() {
return errors.New(resp.String())
}
if r.Code != 0 {
// refresh the cookie
if r.Code == http.StatusUnauthorized && path != loginPath {
err = d.login()
if err != nil {
return err
}
return d.request(method, path, callback, out)
}
return errors.New(r.Msg)
}
sess := cookie.GetCookie(resp.Cookies(), "cloudreve-session")
if sess != nil {
d.Cookie = sess.Value
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
if err != nil {
return err
}
}
return nil
}
func (d *Cloudreve) login() error {
var siteConfig Config
err := d.request(http.MethodGet, "/site/config", nil, &siteConfig)
if err != nil {
return err
}
for i := 0; i < 5; i++ {
err = d.doLogin(siteConfig.LoginCaptcha)
if err == nil {
break
}
if err != nil && err.Error() != "CAPTCHA not match." {
break
}
}
return err
}
func (d *Cloudreve) doLogin(needCaptcha bool) error {
var captchaCode string
var err error
if needCaptcha {
var captcha string
err = d.request(http.MethodGet, "/site/captcha", nil, &captcha)
if err != nil {
return err
}
if len(captcha) == 0 {
return errors.New("can not get captcha")
}
i := strings.Index(captcha, ",")
dec := base64.NewDecoder(base64.StdEncoding, strings.NewReader(captcha[i+1:]))
vRes, err := base.RestyClient.R().SetMultipartField(
"image", "validateCode.png", "image/png", dec).
Post(setting.GetStr(conf.OcrApi))
if err != nil {
return err
}
if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
}
captchaCode = jsoniter.Get(vRes.Body(), "result").ToString()
}
var resp Resp
err = d.request(http.MethodPost, loginPath, func(req *resty.Request) {
req.SetBody(base.Json{
"username": d.Addition.Username,
"Password": d.Addition.Password,
"captchaCode": captchaCode,
})
}, &resp)
return err
}
func convertSrc(obj model.Obj) map[string]interface{} {
m := make(map[string]interface{})
var dirs []string
var items []string
if obj.IsDir() {
dirs = append(dirs, obj.GetID())
} else {
items = append(items, obj.GetID())
}
m["dirs"] = dirs
m["items"] = items
return m
}
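
Aside: convertSrc above packs the object to act on into the {"dirs":[...],"items":[...]} body that Cloudreve's object endpoints expect. A tiny standalone version showing the resulting JSON (empty slices are used here so both keys serialize as arrays; the driver's version leaves the unused one nil):

package main

import (
	"encoding/json"
	"fmt"
)

func convertSrc(id string, isDir bool) map[string]interface{} {
	dirs, items := []string{}, []string{}
	if isDir {
		dirs = append(dirs, id)
	} else {
		items = append(items, id)
	}
	return map[string]interface{}{"dirs": dirs, "items": items}
}

func main() {
	b, _ := json.Marshal(convertSrc("Abc123", false))
	fmt.Println(string(b)) // {"dirs":[],"items":["Abc123"]}
}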


@ -4,6 +4,7 @@ import (
"context" "context"
stdpath "path" stdpath "path"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs" "github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
@ -44,8 +45,7 @@ func (d *FTP) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]m
return nil, err return nil, err
} }
res := make([]model.Obj, 0) res := make([]model.Obj, 0)
for i, _ := range entries { for _, entry := range entries {
entry := entries[i]
if entry.Name == "." || entry.Name == ".." { if entry.Name == "." || entry.Name == ".." {
continue continue
} }
@ -64,13 +64,13 @@ func (d *FTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*m
if err := d.login(); err != nil { if err := d.login(); err != nil {
return nil, err return nil, err
} }
resp, err := d.conn.Retr(file.GetPath())
if err != nil { r := NewFTPFileReader(d.conn, file.GetPath())
return nil, err link := &model.Link{
Data: r,
} }
return &model.Link{ base.HandleRange(link, r, args.Header, file.GetSize())
Data: resp, return link, nil
}, nil
} }
func (d *FTP) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error { func (d *FTP) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {


@ -1,6 +1,13 @@
package ftp package ftp
import "github.com/jlaffaye/ftp" import (
"io"
"os"
"sync"
"time"
"github.com/jlaffaye/ftp"
)
// do others that not defined in Driver interface // do others that not defined in Driver interface
@ -11,7 +18,7 @@ func (d *FTP) login() error {
return nil return nil
} }
} }
conn, err := ftp.Dial(d.Address) conn, err := ftp.Dial(d.Address, ftp.DialWithShutTimeout(10*time.Second))
if err != nil { if err != nil {
return err return err
} }
@ -22,3 +29,81 @@ func (d *FTP) login() error {
d.conn = conn d.conn = conn
return nil return nil
} }
// An FTP file reader that implements io.ReadSeekCloser for seeking.
type FTPFileReader struct {
conn *ftp.ServerConn
resp *ftp.Response
offset int64
mu sync.Mutex
path string
}
func NewFTPFileReader(conn *ftp.ServerConn, path string) *FTPFileReader {
return &FTPFileReader{
conn: conn,
path: path,
}
}
func (r *FTPFileReader) Read(buf []byte) (n int, err error) {
r.mu.Lock()
defer r.mu.Unlock()
if r.resp == nil {
r.resp, err = r.conn.RetrFrom(r.path, uint64(r.offset))
if err != nil {
return 0, err
}
}
n, err = r.resp.Read(buf)
r.offset += int64(n)
return
}
func (r *FTPFileReader) Seek(offset int64, whence int) (int64, error) {
r.mu.Lock()
defer r.mu.Unlock()
oldOffset := r.offset
var newOffset int64
switch whence {
case io.SeekStart:
newOffset = offset
case io.SeekCurrent:
newOffset = oldOffset + offset
case io.SeekEnd:
size, err := r.conn.FileSize(r.path)
if err != nil {
return oldOffset, err
}
newOffset = offset + int64(size)
default:
return -1, os.ErrInvalid
}
if newOffset < 0 {
// offset out of range
return oldOffset, os.ErrInvalid
}
if newOffset == oldOffset {
// offset not changed, so return directly
return oldOffset, nil
}
r.offset = newOffset
if r.resp != nil {
// close the existing ftp data connection, otherwise the next read will be blocked
_ = r.resp.Close() // we do not care about whether it returns an error
r.resp = nil
}
return newOffset, nil
}
func (r *FTPFileReader) Close() error {
if r.resp != nil {
return r.resp.Close()
}
return nil
}
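
Aside: FTPFileReader above implements io.ReadSeekCloser by remembering only an offset: the data connection is opened lazily with RetrFrom on the first Read and dropped on Seek, so the next Read reopens at the new position. That is what lets base.HandleRange serve Range requests over FTP. A self-contained sketch of the lazy-reopen pattern, with an in-memory opener standing in for RetrFrom:

package main

import (
	"bytes"
	"fmt"
	"io"
)

// lazyReader keeps an offset, opens the underlying stream on the first Read and
// drops it on Seek, so the next Read reopens at the new offset.
type lazyReader struct {
	openAt func(offset int64) (io.ReadCloser, error)
	rc     io.ReadCloser
	offset int64
}

func (r *lazyReader) Read(p []byte) (int, error) {
	if r.rc == nil {
		rc, err := r.openAt(r.offset)
		if err != nil {
			return 0, err
		}
		r.rc = rc
	}
	n, err := r.rc.Read(p)
	r.offset += int64(n)
	return n, err
}

func (r *lazyReader) Seek(offset int64, whence int) (int64, error) {
	if whence != io.SeekStart { // the real reader also handles SeekCurrent/SeekEnd
		return r.offset, fmt.Errorf("only SeekStart in this sketch")
	}
	if offset != r.offset && r.rc != nil {
		_ = r.rc.Close() // close the stale connection, as the driver does
		r.rc = nil
	}
	r.offset = offset
	return offset, nil
}

func main() {
	data := []byte("0123456789")
	r := &lazyReader{openAt: func(off int64) (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(data[off:])), nil
	}}
	if _, err := r.Seek(6, io.SeekStart); err != nil {
		panic(err)
	}
	out, _ := io.ReadAll(r)
	fmt.Printf("%s\n", out) // prints "6789"
}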


@ -56,7 +56,7 @@ func (d *GoogleDrive) Link(ctx context.Context, file model.Obj, args model.LinkA
return nil, err return nil, err
} }
link := model.Link{ link := model.Link{
URL: url + "&alt=media", URL: url + "&alt=media&acknowledgeAbuse=true",
Header: http.Header{ Header: http.Header{
"Authorization": []string{"Bearer " + d.AccessToken}, "Authorization": []string{"Bearer " + d.AccessToken},
}, },


@ -2,13 +2,16 @@ package lanzou
import ( import (
"context" "context"
"fmt"
"net/http" "net/http"
"regexp"
"time" "time"
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs" "github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
) )
@ -17,6 +20,7 @@ var upClient = base.NewRestyClient().SetTimeout(120 * time.Second)
type LanZou struct { type LanZou struct {
Addition Addition
model.Storage model.Storage
uid string
} }
func (d *LanZou) Config() driver.Config { func (d *LanZou) Config() driver.Config {
@ -32,50 +36,92 @@ func (d *LanZou) Init(ctx context.Context) error {
if d.RootFolderID == "" { if d.RootFolderID == "" {
d.RootFolderID = "-1" d.RootFolderID = "-1"
} }
ylogin := regexp.MustCompile("ylogin=(.*?);").FindStringSubmatch(d.Cookie)
if len(ylogin) < 2 {
return fmt.Errorf("cookie does not contain ylogin")
}
d.uid = ylogin[1]
} }
return nil return nil
} }
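
Aside: Init above now requires the account cookie to carry a ylogin field and stores it as the driver's uid. A standalone check of that extraction with the same regular expression (the cookie value below is made up):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cookie := "phpdisk_info=xxx; ylogin=1234567; folder_id_c=-1" // made-up cookie
	m := regexp.MustCompile("ylogin=(.*?);").FindStringSubmatch(cookie)
	if len(m) < 2 {
		fmt.Println("cookie does not contain ylogin")
		return
	}
	fmt.Println("uid:", m[1]) // uid: 1234567
}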
func (d *LanZou) Drop(ctx context.Context) error { func (d *LanZou) Drop(ctx context.Context) error {
d.uid = ""
return nil return nil
} }
// the size and time returned here are not accurate // the size and time returned here are not accurate
func (d *LanZou) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) { func (d *LanZou) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.IsCookie() { if d.IsCookie() {
return d.GetFiles(ctx, dir.GetID()) return d.GetAllFiles(dir.GetID())
} else { } else {
return d.GetFileOrFolderByShareUrl(ctx, dir.GetID(), d.SharePassword) return d.GetFileOrFolderByShareUrl(dir.GetID(), d.SharePassword)
} }
} }
func (d *LanZou) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *LanZou) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
downID := file.GetID() var (
pwd := d.SharePassword err error
if d.IsCookie() { dfile *FileOrFolderByShareUrl
share, err := d.getFileShareUrlByID(ctx, file.GetID()) )
switch file := file.(type) {
case *FileOrFolder:
// first get the share link
sfile := file.GetShareInfo()
if sfile == nil {
sfile, err = d.getFileShareUrlByID(file.GetID())
if err != nil {
return nil, err
}
file.SetShareInfo(sfile)
}
// then get the download link
dfile, err = d.GetFilesByShareUrl(sfile.FID, sfile.Pwd)
if err != nil { if err != nil {
return nil, err return nil, err
} }
downID = share.FID // fix the file size
pwd = share.Pwd if d.RepairFileInfo && !file.repairFlag {
size, time := d.getFileRealInfo(dfile.Url)
if size != nil {
file.size = size
file.repairFlag = true
}
if file.time != nil {
file.time = time
}
}
case *FileOrFolderByShareUrl:
dfile, err = d.GetFilesByShareUrl(file.GetID(), file.Pwd)
if err != nil {
return nil, err
}
// fix the file size
if d.RepairFileInfo && !file.repairFlag {
size, time := d.getFileRealInfo(dfile.Url)
if size != nil {
file.size = size
file.repairFlag = true
}
if file.time != nil {
file.time = time
}
}
} }
fileInfo, err := d.getFilesByShareUrl(ctx, downID, pwd, nil) exp := GetExpirationTime(dfile.Url)
if err != nil {
return nil, err
}
return &model.Link{ return &model.Link{
URL: fileInfo.Url, URL: dfile.Url,
Header: http.Header{ Header: http.Header{
"User-Agent": []string{base.UserAgent}, "User-Agent": []string{base.UserAgent},
}, },
Expiration: &exp,
}, nil }, nil
} }
func (d *LanZou) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error { func (d *LanZou) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if d.IsCookie() { if d.IsCookie() {
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { data, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "2", "task": "2",
@ -84,15 +130,21 @@ func (d *LanZou) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"folder_description": "", "folder_description": "",
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
return &FileOrFolder{
Name: dirName,
FolID: utils.Json.Get(data, "text").ToString(),
}, nil
} }
return errs.NotImplement return nil, errs.NotImplement
} }
func (d *LanZou) Move(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *LanZou) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.IsCookie() { if d.IsCookie() {
if !srcObj.IsDir() { if !srcObj.IsDir() {
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "20", "task": "20",
@ -100,16 +152,19 @@ func (d *LanZou) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
"file_id": srcObj.GetID(), "file_id": srcObj.GetID(),
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
return srcObj, nil
} }
} }
return errs.NotImplement return nil, errs.NotImplement
} }
func (d *LanZou) Rename(ctx context.Context, srcObj model.Obj, newName string) error { func (d *LanZou) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.IsCookie() { if d.IsCookie() {
if !srcObj.IsDir() { if !srcObj.IsDir() {
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "46", "task": "46",
@ -118,19 +173,19 @@ func (d *LanZou) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"type": "2", "type": "2",
}) })
}, nil) }, nil)
return err if err != nil {
return nil, err
}
srcObj.(*FileOrFolder).NameAll = newName
return srcObj, nil
} }
} }
return errs.NotImplement return nil, errs.NotImplement
}
func (d *LanZou) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return errs.NotImplement
} }
func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error { func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error {
if d.IsCookie() { if d.IsCookie() {
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx) req.SetContext(ctx)
if obj.IsDir() { if obj.IsDir() {
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
@ -149,17 +204,23 @@ func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error {
return errs.NotImplement return errs.NotImplement
} }
func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error { func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
if d.IsCookie() { if d.IsCookie() {
var resp RespText[[]FileOrFolder]
_, err := d._post(d.BaseUrl+"/fileup.php", func(req *resty.Request) { _, err := d._post(d.BaseUrl+"/fileup.php", func(req *resty.Request) {
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "1", "task": "1",
"vie": "2",
"ve": "2",
"id": "WU_FILE_0", "id": "WU_FILE_0",
"name": stream.GetName(), "name": stream.GetName(),
"folder_id": dstDir.GetID(), "folder_id_bb_n": dstDir.GetID(),
}).SetFileReader("upload_file", stream.GetName(), stream).SetContext(ctx) }).SetFileReader("upload_file", stream.GetName(), stream).SetContext(ctx)
}, nil, true) }, &resp, true)
return err if err != nil {
return nil, err
}
return &resp.Text[0], nil
} }
return errs.NotImplement return nil, errs.NotImplement
} }


@ -8,12 +8,16 @@ import (
"strings" "strings"
"time" "time"
"unicode" "unicode"
log "github.com/sirupsen/logrus"
) )
const DAY time.Duration = 84600000000000 const DAY time.Duration = 84600000000000
// Parse time strings
var timeSplitReg = regexp.MustCompile("([0-9.]*)\\s*([\u4e00-\u9fa5]+)") var timeSplitReg = regexp.MustCompile("([0-9.]*)\\s*([\u4e00-\u9fa5]+)")
// If parsing fails, the current time is returned
func MustParseTime(str string) time.Time { func MustParseTime(str string) time.Time {
lastOpTime, err := time.ParseInLocation("2006-01-02 -07", str+" +08", time.Local) lastOpTime, err := time.ParseInLocation("2006-01-02 -07", str+" +08", time.Local)
if err != nil { if err != nil {
@ -41,8 +45,10 @@ func MustParseTime(str string) time.Time {
return lastOpTime return lastOpTime
} }
// Parse size strings
var sizeSplitReg = regexp.MustCompile(`(?i)([0-9.]+)\s*([bkm]+)`) var sizeSplitReg = regexp.MustCompile(`(?i)([0-9.]+)\s*([bkm]+)`)
// Returns 0 if parsing fails
func SizeStrToInt64(size string) int64 { func SizeStrToInt64(size string) int64 {
strs := sizeSplitReg.FindStringSubmatch(size) strs := sizeSplitReg.FindStringSubmatch(size)
if len(strs) < 3 { if len(strs) < 3 {
@ -62,8 +68,13 @@ func SizeStrToInt64(size string) int64 {
} }
// Remove comments // Remove comments
func RemoveNotes(html []byte) []byte { func RemoveNotes(html string) string {
return regexp.MustCompile(`<!--.*?-->|//.*|/\*.*?\*/`).ReplaceAll(html, []byte{}) return regexp.MustCompile(`<!--.*?-->|[^:]//.*|/\*.*?\*/`).ReplaceAllStringFunc(html, func(b string) string {
if b[1:3] == "//" {
return b[:1]
}
return "\n"
})
} }
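The rewritten RemoveNotes works on a string and guards the line-comment branch with `[^:]`, so `//` inside URL schemes such as `https://` is left alone while real comments are dropped. A self-contained sketch of the same replacement logic on a made-up page fragment:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same idea as the driver's RemoveNotes: strip HTML/JS comments, but keep the
// character in front of "//" so "https://..." is not mistaken for a comment.
func removeNotes(html string) string {
	return regexp.MustCompile(`<!--.*?-->|[^:]//.*|/\*.*?\*/`).ReplaceAllStringFunc(html, func(b string) string {
		if b[1:3] == "//" {
			return b[:1] // keep the character preceding the line comment
		}
		return "\n"
	})
}

func main() {
	in := `<a href="https://example.com">x</a> <!-- note --> var a = 1; // trailing comment`
	fmt.Println(removeNotes(in))
	// The URL survives; the HTML comment and the trailing // comment are removed.
}
```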
var findAcwScV2Reg = regexp.MustCompile(`arg1='([0-9A-Z]+)'`) var findAcwScV2Reg = regexp.MustCompile(`arg1='([0-9A-Z]+)'`)
@ -71,6 +82,7 @@ var findAcwScV2Reg = regexp.MustCompile(`arg1='([0-9A-Z]+)'`)
// When the page is accessed too frequently (or in some other cases) an obfuscated page is returned first; its script computes an acw_sc__v2 value which must be set before revisiting the page to get the normal content // When the page is accessed too frequently (or in some other cases) an obfuscated page is returned first; its script computes an acw_sc__v2 value which must be set before revisiting the page to get the normal content
// If the page is JS-obfuscated, decode it, compute acw_sc__v2 and add it to the cookie // If the page is JS-obfuscated, decode it, compute acw_sc__v2 and add it to the cookie
func CalcAcwScV2(html string) (string, error) { func CalcAcwScV2(html string) (string, error) {
log.Debugln("acw_sc__v2", html)
acwScV2s := findAcwScV2Reg.FindStringSubmatch(html) acwScV2s := findAcwScV2Reg.FindStringSubmatch(html)
if len(acwScV2s) != 2 { if len(acwScV2s) != 2 {
return "", fmt.Errorf("无法匹配acw_sc__v2") return "", fmt.Errorf("无法匹配acw_sc__v2")
@ -163,3 +175,18 @@ func formToMap(from string) map[string]string {
} }
return param return param
} }
var regExpirationTime = regexp.MustCompile(`e=(\d+)`)
func GetExpirationTime(url string) (etime time.Duration) {
exps := regExpirationTime.FindStringSubmatch(url)
if len(exps) < 2 {
return
}
timestamp, err := strconv.ParseInt(exps[1], 10, 64)
if err != nil {
return
}
etime = time.Duration(timestamp-time.Now().Unix()) * time.Second
return
}
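LanZou download URLs carry an `e=<unix timestamp>` query parameter; GetExpirationTime converts it into the remaining validity that is attached to the returned link. A usage sketch with a made-up URL:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

var regExpirationTime = regexp.MustCompile(`e=(\d+)`)

// Same logic as the driver: read the "e" timestamp and return how long the
// link stays valid; zero if the parameter is missing or malformed.
func getExpirationTime(url string) time.Duration {
	exps := regExpirationTime.FindStringSubmatch(url)
	if len(exps) < 2 {
		return 0
	}
	ts, err := strconv.ParseInt(exps[1], 10, 64)
	if err != nil {
		return 0
	}
	return time.Duration(ts-time.Now().Unix()) * time.Second
}

func main() {
	// Hypothetical direct link; real ones come from the LanZou redirect.
	u := fmt.Sprintf("https://example-cdn.invalid/file?e=%d&token=abc", time.Now().Add(2*time.Hour).Unix())
	fmt.Println(getExpirationTime(u).Round(time.Second)) // ~2h0m0s
}
```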


@ -7,11 +7,12 @@ import (
type Addition struct { type Addition struct {
Type string `json:"type" type:"select" options:"cookie,url" default:"cookie"` Type string `json:"type" type:"select" options:"cookie,url" default:"cookie"`
Cookie string `json:"cookie" required:"true" help:"about 15 days valid"` Cookie string `json:"cookie" required:"true" help:"about 15 days valid, ignore if shareUrl is used"`
driver.RootID driver.RootID
SharePassword string `json:"share_password"` SharePassword string `json:"share_password"`
BaseUrl string `json:"baseUrl" required:"true" default:"https://pc.woozooo.com"` BaseUrl string `json:"baseUrl" required:"true" default:"https://pc.woozooo.com" help:"basic URL for file operation"`
ShareUrl string `json:"shareUrl" required:"true" default:"https://pan.lanzouo.com"` ShareUrl string `json:"shareUrl" required:"true" default:"https://pan.lanzouo.com" help:"used to get the sharing page"`
RepairFileInfo bool `json:"repair_file_info" help:"To use webdav, you need to enable it"`
} }
func (a *Addition) IsCookie() bool { func (a *Addition) IsCookie() bool {


@ -1,14 +1,20 @@
package lanzou package lanzou
import ( import (
"errors"
"fmt" "fmt"
"time" "time"
"github.com/alist-org/alist/v3/internal/model"
) )
type FilesOrFoldersResp struct { var ErrFileShareCancel = errors.New("file sharing cancellation")
Text []FileOrFolder `json:"text"` var ErrFileNotExist = errors.New("file does not exist")
type RespText[T any] struct {
Text T `json:"text"`
}
type RespInfo[T any] struct {
Info T `json:"info"`
} }
type FileOrFolder struct { type FileOrFolder struct {
@ -34,30 +40,51 @@ type FileOrFolder struct {
FolID string `json:"fol_id"` FolID string `json:"fol_id"`
//Folderlock string `json:"folderlock"` //Folderlock string `json:"folderlock"`
//FolderDes string `json:"folder_des"` //FolderDes string `json:"folder_des"`
// cache fields
size *int64 `json:"-"`
time *time.Time `json:"-"`
repairFlag bool `json:"-"`
shareInfo *FileShare `json:"-"`
} }
func (f *FileOrFolder) isFloder() bool { func (f *FileOrFolder) GetID() string {
return f.FolID != "" if f.IsDir() {
} return f.FolID
func (f *FileOrFolder) ToObj() model.Obj {
obj := &model.Object{}
if f.isFloder() {
obj.ID = f.FolID
obj.Name = f.Name
obj.Modified = time.Now()
obj.IsFolder = true
} else {
obj.ID = f.ID
obj.Name = f.NameAll
obj.Modified = MustParseTime(f.Time)
obj.Size = SizeStrToInt64(f.Size)
} }
return obj return f.ID
}
func (f *FileOrFolder) GetName() string {
if f.IsDir() {
return f.Name
}
return f.NameAll
}
func (f *FileOrFolder) GetPath() string { return "" }
func (f *FileOrFolder) GetSize() int64 {
if f.size == nil {
size := SizeStrToInt64(f.Size)
f.size = &size
}
return *f.size
}
func (f *FileOrFolder) IsDir() bool { return f.FolID != "" }
func (f *FileOrFolder) ModTime() time.Time {
if f.time == nil {
time := MustParseTime(f.Time)
f.time = &time
}
return *f.time
} }
type FileShareResp struct { func (f *FileOrFolder) SetShareInfo(fs *FileShare) {
Info FileShare `json:"info"` f.shareInfo = fs
} }
func (f *FileOrFolder) GetShareInfo() *FileShare {
return f.shareInfo
}
/* Get file/folder share info by ID */
type FileShare struct { type FileShare struct {
Pwd string `json:"pwd"` Pwd string `json:"pwd"`
Onof string `json:"onof"` Onof string `json:"onof"`
@ -73,31 +100,55 @@ type FileShare struct {
Des string `json:"des"` Des string `json:"des"`
} }
/* Share type: folder */
type FileOrFolderByShareUrlResp struct { type FileOrFolderByShareUrlResp struct {
Text []FileOrFolderByShareUrl `json:"text"` Text []FileOrFolderByShareUrl `json:"text"`
} }
type FileOrFolderByShareUrl struct { type FileOrFolderByShareUrl struct {
ID string `json:"id"` ID string `json:"id"`
NameAll string `json:"name_all"` NameAll string `json:"name_all"`
Size string `json:"size"`
Time string `json:"time"` // file-specific fields
Duan string `json:"duan"` Duan string `json:"duan"`
Size string `json:"size"`
Time string `json:"time"`
//Icon string `json:"icon"` //Icon string `json:"icon"`
//PIco int `json:"p_ico"` //PIco int `json:"p_ico"`
//T int `json:"t"` //T int `json:"t"`
IsFloder bool
// folder-specific fields
IsFloder bool `json:"-"`
//
Url string `json:"-"`
Pwd string `json:"-"`
// cache fields
size *int64 `json:"-"`
time *time.Time `json:"-"`
repairFlag bool `json:"-"`
} }
func (f *FileOrFolderByShareUrl) ToObj() model.Obj { func (f *FileOrFolderByShareUrl) GetID() string { return f.ID }
return &model.Object{ func (f *FileOrFolderByShareUrl) GetName() string { return f.NameAll }
ID: f.ID, func (f *FileOrFolderByShareUrl) GetPath() string { return "" }
Name: f.NameAll, func (f *FileOrFolderByShareUrl) GetSize() int64 {
Size: SizeStrToInt64(f.Size), if f.size == nil {
Modified: MustParseTime(f.Time), size := SizeStrToInt64(f.Size)
IsFolder: f.IsFloder, f.size = &size
} }
return *f.size
}
func (f *FileOrFolderByShareUrl) IsDir() bool { return f.IsFloder }
func (f *FileOrFolderByShareUrl) ModTime() time.Time {
if f.time == nil {
time := MustParseTime(f.Time)
f.time = &time
}
return *f.time
} }
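Both obj types now implement model.Obj themselves and memoize the parsed size/time in unexported pointer fields, so repeated GetSize/ModTime calls do not re-run the string parsing. A simplified sketch of that lazy-parse-and-cache pattern (toy struct and parser, not the driver's):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Toy version of the memoization used by GetSize/ModTime: parse the raw
// listing string once, then reuse the cached value.
type file struct {
	RawSize string // e.g. "1.5 M" as returned by the listing API
	size    *int64 // nil until the first GetSize call
}

func (f *file) GetSize() int64 {
	if f.size == nil {
		s := parseSize(f.RawSize)
		f.size = &s
	}
	return *f.size
}

// Stand-in for SizeStrToInt64.
func parseSize(s string) int64 {
	v, _ := strconv.ParseFloat(strings.Fields(s)[0], 64)
	return int64(v * 1024 * 1024)
}

func main() {
	f := &file{RawSize: "1.5 M"}
	fmt.Println(f.GetSize(), f.GetSize()) // parsed once, returned twice
}
```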
// Response for getting the download link
type FileShareInfoAndUrlResp[T string | int] struct { type FileShareInfoAndUrlResp[T string | int] struct {
Dom string `json:"dom"` Dom string `json:"dom"`
URL string `json:"url"` URL string `json:"url"`
@ -111,21 +162,3 @@ func (u *FileShareInfoAndUrlResp[T]) GetBaseUrl() string {
func (u *FileShareInfoAndUrlResp[T]) GetDownloadUrl() string { func (u *FileShareInfoAndUrlResp[T]) GetDownloadUrl() string {
return fmt.Sprint(u.GetBaseUrl(), "/", u.URL) return fmt.Sprint(u.GetBaseUrl(), "/", u.URL)
} }
// Get file info and download link via the share link
type FileInfoAndUrlByShareUrl struct {
ID string
Name string
Size string
Time string
Url string
}
func (f *FileInfoAndUrlByShareUrl) ToObj() model.Obj {
return &model.Object{
ID: f.ID,
Name: f.Name,
Size: SizeStrToInt64(f.Size),
Modified: MustParseTime(f.Time),
}
}


@ -1,7 +1,7 @@
package lanzou package lanzou
import ( import (
"context" "errors"
"fmt" "fmt"
"net/http" "net/http"
"regexp" "regexp"
@ -13,8 +13,16 @@ import (
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
) )
func (d *LanZou) doupload(callback base.ReqCallback, resp interface{}) ([]byte, error) {
return d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) {
req.SetQueryParam("uid", d.uid)
callback(req)
}, resp)
}
func (d *LanZou) get(url string, callback base.ReqCallback, resp interface{}) ([]byte, error) { func (d *LanZou) get(url string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
return d.request(url, http.MethodGet, callback, false) return d.request(url, http.MethodGet, callback, false)
} }
@ -24,7 +32,16 @@ func (d *LanZou) post(url string, callback base.ReqCallback, resp interface{}) (
} }
func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{}, up bool) ([]byte, error) { func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{}, up bool) ([]byte, error) {
data, err := d.request(url, http.MethodPost, callback, up) data, err := d.request(url, http.MethodPost, func(req *resty.Request) {
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if utils.Json.Get(r.Body(), "zt").ToInt() == 4 {
time.Sleep(time.Second)
return true
}
return false
})
callback(req)
}, up)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -68,7 +85,7 @@ func (d *LanZou) request(url string, method string, callback base.ReqCallback, u
if err != nil { if err != nil {
return nil, err return nil, err
} }
log.Debugf("lanzou request: url=>%s ,stats=>%d ,body => %s\n", res.Request.URL, res.StatusCode(), res.String())
return res.Body(), err return res.Body(), err
} }
@ -77,31 +94,28 @@ func (d *LanZou) request(url string, method string, callback base.ReqCallback, u
*/ */
// Get files and folders; the returned size and modified time are unreliable // Get files and folders; the returned size and modified time are unreliable
func (d *LanZou) GetFiles(ctx context.Context, folderID string) ([]model.Obj, error) { func (d *LanZou) GetAllFiles(folderID string) ([]model.Obj, error) {
folders, err := d.getFolders(ctx, folderID) folders, err := d.GetFolders(folderID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
files, err := d.getFiles(ctx, folderID) files, err := d.GetFiles(folderID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
objs := make([]model.Obj, 0, len(folders)+len(files)) return append(
for _, folder := range folders { utils.MustSliceConvert(folders, func(folder FileOrFolder) model.Obj {
objs = append(objs, folder.ToObj()) return &folder
} }), utils.MustSliceConvert(files, func(file FileOrFolder) model.Obj {
return &file
for _, file := range files { })...,
objs = append(objs, file.ToObj()) ), nil
}
return objs, nil
} }
// Get folders by ID // Get folders by ID
func (d *LanZou) getFolders(ctx context.Context, folderID string) ([]FileOrFolder, error) { func (d *LanZou) GetFolders(folderID string) ([]FileOrFolder, error) {
var resp FilesOrFoldersResp var resp RespText[[]FileOrFolder]
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "47", "task": "47",
"folder_id": folderID, "folder_id": folderID,
@ -114,12 +128,11 @@ func (d *LanZou) getFolders(ctx context.Context, folderID string) ([]FileOrFolde
} }
// Get files by ID // Get files by ID
func (d *LanZou) getFiles(ctx context.Context, folderID string) ([]FileOrFolder, error) { func (d *LanZou) GetFiles(folderID string) ([]FileOrFolder, error) {
files := make([]FileOrFolder, 0) files := make([]FileOrFolder, 0)
for pg := 1; ; pg++ { for pg := 1; ; pg++ {
var resp FilesOrFoldersResp var resp RespText[[]FileOrFolder]
_, err := d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "5", "task": "5",
"folder_id": folderID, "folder_id": folderID,
@ -138,37 +151,33 @@ func (d *LanZou) getFiles(ctx context.Context, folderID string) ([]FileOrFolder,
} }
// Get the folder share URL by ID // Get the folder share URL by ID
func (d *LanZou) getFolderShareUrlByID(ctx context.Context, fileID string) (share FileShare, err error) { func (d *LanZou) getFolderShareUrlByID(fileID string) (*FileShare, error) {
var resp FileShareResp var resp RespInfo[FileShare]
_, err = d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "18", "task": "18",
"file_id": fileID, "file_id": fileID,
}) })
}, &resp) }, &resp)
if err != nil { if err != nil {
return return nil, err
} }
share = resp.Info return &resp.Info, nil
return
} }
// Get the file share URL by ID // Get the file share URL by ID
func (d *LanZou) getFileShareUrlByID(ctx context.Context, fileID string) (share FileShare, err error) { func (d *LanZou) getFileShareUrlByID(fileID string) (*FileShare, error) {
var resp FileShareResp var resp RespInfo[FileShare]
_, err = d.post(d.BaseUrl+"/doupload.php", func(req *resty.Request) { _, err := d.doupload(func(req *resty.Request) {
req.SetContext(ctx)
req.SetFormData(map[string]string{ req.SetFormData(map[string]string{
"task": "22", "task": "22",
"file_id": fileID, "file_id": fileID,
}) })
}, &resp) }, &resp)
if err != nil { if err != nil {
return return nil, err
} }
share = resp.Info return &resp.Info, nil
return
} }
/* /*
@ -180,237 +189,252 @@ var isFileReg = regexp.MustCompile(`class="fileinfo"|id="file"|文件描述`)
var isFolderReg = regexp.MustCompile(`id="infos"`) var isFolderReg = regexp.MustCompile(`id="infos"`)
// Get basic file/folder info // Get basic file/folder info
// Get the file name
var nameFindReg = regexp.MustCompile(`<title>(.+?) - 蓝奏云</title>|id="filenajax">(.+?)</div>|var filename = '(.+?)';|<div style="font-size.+?>([^<>].+?)</div>|<div class="filethetext".+?>([^<>]+?)</div>`) var nameFindReg = regexp.MustCompile(`<title>(.+?) - 蓝奏云</title>|id="filenajax">(.+?)</div>|var filename = '(.+?)';|<div style="font-size.+?>([^<>].+?)</div>|<div class="filethetext".+?>([^<>]+?)</div>`)
// Get the file size
var sizeFindReg = regexp.MustCompile(`(?i)大小\W*([0-9.]+\s*[bkm]+)`) var sizeFindReg = regexp.MustCompile(`(?i)大小\W*([0-9.]+\s*[bkm]+)`)
// Get the file time
var timeFindReg = regexp.MustCompile(`\d+\s*[秒天分小][钟时]?前|[昨前]天|\d{4}-\d{2}-\d{2}`) var timeFindReg = regexp.MustCompile(`\d+\s*[秒天分小][钟时]?前|[昨前]天|\d{4}-\d{2}-\d{2}`)
var findSubFolaerReg = regexp.MustCompile(`(folderlink|mbxfolder).+href="/(.+?)"(.+filename")?>(.+?)<`) // Find the IDs and names of sub-folders in a shared folder // Find the IDs and names of sub-folders in a shared folder
var findSubFolaerReg = regexp.MustCompile(`(?i)(?:folderlink|mbxfolder).+href="/(.+?)"(?:.+filename")?>(.+?)<`)
// Get key data // Get the download page link
var findDownPageParamReg = regexp.MustCompile(`<iframe.*?src="(.+?)"`) var findDownPageParamReg = regexp.MustCompile(`<iframe.*?src="(.+?)"`)
// Get a file or folder via the share link; a file returns its download link // Get the main page of the share link
func (d *LanZou) GetFileOrFolderByShareUrl(ctx context.Context, downID, pwd string) ([]model.Obj, error) { func (d *LanZou) getShareUrlHtml(shareID string) (string, error) {
pageData, err := d.get(fmt.Sprint(d.ShareUrl, "/", downID), func(req *resty.Request) { req.SetContext(ctx) }, nil) var vs string
for i := 0; i < 3; i++ {
firstPageData, err := d.get(fmt.Sprint(d.ShareUrl, "/", shareID),
func(req *resty.Request) {
if vs != "" {
req.SetCookie(&http.Cookie{
Name: "acw_sc__v2",
Value: vs,
})
}
}, nil)
if err != nil {
return "", err
}
firstPageDataStr := RemoveNotes(string(firstPageData))
if strings.Contains(firstPageDataStr, "取消分享") {
return "", ErrFileShareCancel
}
if strings.Contains(firstPageDataStr, "文件不存在") {
return "", ErrFileNotExist
}
// acw_sc__v2
if strings.Contains(firstPageDataStr, "acw_sc__v2") {
if vs, err = CalcAcwScV2(firstPageDataStr); err != nil {
log.Errorf("lanzou: err => acw_sc__v2 validation error ,data => %s\n", firstPageDataStr)
return "", err
}
continue
}
return firstPageDataStr, nil
}
return "", errors.New("acw_sc__v2 validation error")
}
// Get a file or folder via the share link
func (d *LanZou) GetFileOrFolderByShareUrl(shareID, pwd string) ([]model.Obj, error) {
pageData, err := d.getShareUrlHtml(shareID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
pageData = RemoveNotes(pageData)
var objs []model.Obj if !isFileReg.MatchString(pageData) {
if !isFileReg.Match(pageData) { files, err := d.getFolderByShareUrl(pwd, pageData)
files, err := d.getFolderByShareUrl(ctx, downID, pwd, pageData)
if err != nil { if err != nil {
return nil, err return nil, err
} }
objs = make([]model.Obj, 0, len(files)) return utils.MustSliceConvert(files, func(file FileOrFolderByShareUrl) model.Obj {
for _, file := range files { return &file
objs = append(objs, file.ToObj()) }), nil
}
} else { } else {
file, err := d.getFilesByShareUrl(ctx, downID, pwd, pageData) file, err := d.getFilesByShareUrl(shareID, pwd, pageData)
if err != nil { if err != nil {
return nil, err return nil, err
} }
objs = []model.Obj{file.ToObj()} return []model.Obj{file}, nil
} }
return objs, nil
} }
// Get a file via the share link (also used to obtain the download link) // Get a file via the share link (also used to obtain the download link)
// FileOrFolderByShareUrl carries the pwd and url fields
// Reference: https://github.com/zaxtyson/LanZouCloud-API/blob/ab2e9ec715d1919bf432210fc16b91c6775fbb99/lanzou/api/core.py#L440 // Reference: https://github.com/zaxtyson/LanZouCloud-API/blob/ab2e9ec715d1919bf432210fc16b91c6775fbb99/lanzou/api/core.py#L440
func (d *LanZou) getFilesByShareUrl(ctx context.Context, downID, pwd string, firstPageData []byte) (file FileInfoAndUrlByShareUrl, err error) { func (d *LanZou) GetFilesByShareUrl(shareID, pwd string) (file *FileOrFolderByShareUrl, err error) {
if firstPageData == nil { pageData, err := d.getShareUrlHtml(shareID)
firstPageData, err = d.get(fmt.Sprint(d.ShareUrl, "/", downID), func(req *resty.Request) { req.SetContext(ctx) }, nil) if err != nil {
if err != nil { return nil, err
return
}
firstPageData = RemoveNotes(firstPageData)
}
firstPageDataStr := string(firstPageData)
if strings.Contains(firstPageDataStr, "acw_sc__v2") {
var vs string
if vs, err = CalcAcwScV2(firstPageDataStr); err != nil {
return
}
firstPageData, err = d.get(fmt.Sprint(d.ShareUrl, "/", downID), func(req *resty.Request) {
req.SetCookie(&http.Cookie{
Name: "acw_sc__v2",
Value: vs,
})
req.SetContext(ctx)
}, nil)
if err != nil {
return
}
firstPageData = RemoveNotes(firstPageData)
firstPageDataStr = string(firstPageData)
} }
return d.getFilesByShareUrl(shareID, pwd, pageData)
}
func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (*FileOrFolderByShareUrl, error) {
var ( var (
param map[string]string param map[string]string
downloadUrl string downloadUrl string
baseUrl string baseUrl string
file FileOrFolderByShareUrl
) )
// Password required // Password required
if strings.Contains(firstPageDataStr, "pwdload") || strings.Contains(firstPageDataStr, "passwddiv") { if strings.Contains(sharePageData, "pwdload") || strings.Contains(sharePageData, "passwddiv") {
param, err = htmlFormToMap(firstPageDataStr) param, err := htmlFormToMap(sharePageData)
if err != nil { if err != nil {
return return nil, err
} }
param["p"] = pwd param["p"] = pwd
var resp FileShareInfoAndUrlResp[string] var resp FileShareInfoAndUrlResp[string]
_, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param).SetContext(ctx) }, &resp) _, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param) }, &resp)
if err != nil { if err != nil {
return return nil, err
} }
file.Name = resp.Inf file.NameAll = resp.Inf
file.Pwd = pwd
baseUrl = resp.GetBaseUrl() baseUrl = resp.GetBaseUrl()
downloadUrl = resp.GetDownloadUrl() downloadUrl = resp.GetDownloadUrl()
} else { } else {
urlpaths := findDownPageParamReg.FindStringSubmatch(firstPageDataStr) urlpaths := findDownPageParamReg.FindStringSubmatch(sharePageData)
if len(urlpaths) != 2 { if len(urlpaths) != 2 {
err = fmt.Errorf("not find file page param") log.Errorf("lanzou: err => not find file page param ,data => %s\n", sharePageData)
return return nil, fmt.Errorf("not find file page param")
} }
var nextPageData []byte data, err := d.get(fmt.Sprint(d.ShareUrl, urlpaths[1]), nil, nil)
nextPageData, err = d.get(fmt.Sprint(d.ShareUrl, urlpaths[1]), func(req *resty.Request) { req.SetContext(ctx) }, nil)
if err != nil { if err != nil {
return return nil, err
} }
nextPageData = RemoveNotes(nextPageData) nextPageData := RemoveNotes(string(data))
nextPageDataStr := string(nextPageData)
param, err = htmlJsonToMap(nextPageDataStr) param, err = htmlJsonToMap(nextPageData)
if err != nil { if err != nil {
return return nil, err
} }
var resp FileShareInfoAndUrlResp[int] var resp FileShareInfoAndUrlResp[int]
_, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param).SetContext(ctx) }, &resp) _, err = d.post(d.ShareUrl+"/ajaxm.php", func(req *resty.Request) { req.SetFormData(param) }, &resp)
if err != nil { if err != nil {
return return nil, err
} }
baseUrl = resp.GetBaseUrl() baseUrl = resp.GetBaseUrl()
downloadUrl = resp.GetDownloadUrl() downloadUrl = resp.GetDownloadUrl()
names := nameFindReg.FindStringSubmatch(firstPageDataStr) names := nameFindReg.FindStringSubmatch(sharePageData)
if len(names) > 1 { if len(names) > 1 {
for _, name := range names[1:] { for _, name := range names[1:] {
if name != "" { if name != "" {
file.Name = name file.NameAll = name
break break
} }
} }
} }
} }
sizes := sizeFindReg.FindStringSubmatch(firstPageDataStr) sizes := sizeFindReg.FindStringSubmatch(sharePageData)
if len(sizes) == 2 { if len(sizes) == 2 {
file.Size = sizes[1] file.Size = sizes[1]
} }
file.ID = downID file.ID = shareID
file.Time = timeFindReg.FindString(firstPageDataStr) file.Time = timeFindReg.FindString(sharePageData)
// Follow the redirect to get the real link // Follow the redirect to get the real link
res, err := base.NoRedirectClient.R().SetHeaders(map[string]string{ res, err := base.NoRedirectClient.R().SetHeaders(map[string]string{
"accept-language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6", "accept-language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6",
}).SetContext(ctx).Get(downloadUrl) }).Get(downloadUrl)
if err != nil { if err != nil {
return return nil, err
} }
file.Url = res.Header().Get("location") file.Url = res.Header().Get("location")
// Verification triggered // Verification triggered
rPageDataStr := res.String() rPageData := res.String()
if res.StatusCode() != 302 && strings.Contains(rPageDataStr, "网络异常") { if res.StatusCode() != 302 {
param, err = htmlJsonToMap(rPageDataStr) param, err = htmlJsonToMap(rPageData)
if err != nil { if err != nil {
return return nil, err
} }
param["el"] = "2" param["el"] = "2"
time.Sleep(time.Second * 2) time.Sleep(time.Second * 2)
// Pass the verification to get the direct link // Pass the verification to get the direct link
var rUrl struct { data, err := d.post(fmt.Sprint(baseUrl, "/ajax.php"), func(req *resty.Request) { req.SetFormData(param) }, nil)
Url string `json:"url"`
}
_, err = d.post(fmt.Sprint(baseUrl, "/ajax.php"), func(req *resty.Request) { req.SetContext(ctx).SetFormData(param) }, &rUrl)
if err != nil { if err != nil {
return return nil, err
} }
file.Url = rUrl.Url file.Url = utils.Json.Get(data, "url").ToString()
} }
return return &file, nil
} }
// Get a folder via the share link // Get a folder via the share link
// Sub-folders and files do not seem to be password-protected
// Reference: https://github.com/zaxtyson/LanZouCloud-API/blob/ab2e9ec715d1919bf432210fc16b91c6775fbb99/lanzou/api/core.py#L1089 // Reference: https://github.com/zaxtyson/LanZouCloud-API/blob/ab2e9ec715d1919bf432210fc16b91c6775fbb99/lanzou/api/core.py#L1089
func (d *LanZou) getFolderByShareUrl(ctx context.Context, downID, pwd string, firstPageData []byte) ([]FileOrFolderByShareUrl, error) { func (d *LanZou) GetFolderByShareUrl(shareID, pwd string) ([]FileOrFolderByShareUrl, error) {
if firstPageData == nil { pageData, err := d.getShareUrlHtml(shareID)
var err error if err != nil {
firstPageData, err = d.get(fmt.Sprint(d.ShareUrl, "/", downID), func(req *resty.Request) { req.SetContext(ctx) }, nil) return nil, err
if err != nil { }
return nil, err return d.getFolderByShareUrl(pwd, pageData)
} }
firstPageData = RemoveNotes(firstPageData)
} func (d *LanZou) getFolderByShareUrl(pwd string, sharePageData string) ([]FileOrFolderByShareUrl, error) {
firstPageDataStr := string(firstPageData) from, err := htmlJsonToMap(sharePageData)
//
if strings.Contains(firstPageDataStr, "acw_sc__v2") {
vs, err := CalcAcwScV2(firstPageDataStr)
if err != nil {
return nil, err
}
firstPageData, err = d.get(fmt.Sprint(d.ShareUrl, "/", downID), func(req *resty.Request) {
req.SetCookie(&http.Cookie{
Name: "acw_sc__v2",
Value: vs,
})
req.SetContext(ctx)
}, nil)
if err != nil {
return nil, err
}
firstPageData = RemoveNotes(firstPageData)
firstPageDataStr = string(firstPageData)
}
from, err := htmlJsonToMap(firstPageDataStr)
if err != nil { if err != nil {
return nil, err return nil, err
} }
from["pwd"] = pwd
files := make([]FileOrFolderByShareUrl, 0) files := make([]FileOrFolderByShareUrl, 0)
// VIP: get sub-folders // VIP: get sub-folders
floders := findSubFolaerReg.FindAllStringSubmatch(firstPageDataStr, -1) floders := findSubFolaerReg.FindAllStringSubmatch(sharePageData, -1)
for _, floder := range floders { for _, floder := range floders {
if len(floder) == 5 { if len(floder) == 3 {
files = append(files, FileOrFolderByShareUrl{ files = append(files, FileOrFolderByShareUrl{
ID: floder[2], // Pwd: pwd, // sub-folders are not password-protected
NameAll: floder[4], ID: floder[1],
NameAll: floder[2],
IsFloder: true, IsFloder: true,
}) })
} }
} }
// Get files
from["pwd"] = pwd
for page := 1; ; page++ { for page := 1; ; page++ {
from["pg"] = strconv.Itoa(page) from["pg"] = strconv.Itoa(page)
var resp FileOrFolderByShareUrlResp var resp FileOrFolderByShareUrlResp
_, err := d.post(d.ShareUrl+"/filemoreajax.php", func(req *resty.Request) { req.SetFormData(from).SetContext(ctx) }, &resp) _, err := d.post(d.ShareUrl+"/filemoreajax.php", func(req *resty.Request) { req.SetFormData(from) }, &resp)
if err != nil { if err != nil {
return nil, err return nil, err
} }
files = append(files, resp.Text...) /*// files inside the folder are not password-protected either
for i := 0; i < len(resp.Text); i++ {
resp.Text[i].Pwd = pwd
}*/
if len(resp.Text) == 0 { if len(resp.Text) == 0 {
break break
} }
time.Sleep(time.Millisecond * 600) files = append(files, resp.Text...)
time.Sleep(time.Second)
} }
return files, nil return files, nil
} }
// Get the real file info from the download response headers
func (d *LanZou) getFileRealInfo(downURL string) (*int64, *time.Time) {
res, _ := base.RestyClient.R().Head(downURL)
if res == nil {
return nil, nil
}
time, _ := http.ParseTime(res.Header().Get("Last-Modified"))
size, _ := strconv.ParseInt(res.Header().Get("Content-Length"), 10, 64)
return &size, &time
}
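Since the listing and share pages only expose rounded sizes and fuzzy times, the new RepairFileInfo path probes the resolved download URL with a HEAD request and reads Content-Length / Last-Modified. A standalone sketch of that probe using net/http (the driver goes through its resty client):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// Same idea as getFileRealInfo: HEAD the direct link and trust its headers.
func fileRealInfo(url string) (int64, time.Time, error) {
	res, err := http.Head(url)
	if err != nil {
		return 0, time.Time{}, err
	}
	defer res.Body.Close()
	size, _ := strconv.ParseInt(res.Header.Get("Content-Length"), 10, 64)
	mod, _ := http.ParseTime(res.Header.Get("Last-Modified"))
	return size, mod, nil
}

func main() {
	// Placeholder URL; any server that answers HEAD with the two headers works.
	size, mod, err := fileRealInfo("https://example.com/file.bin")
	fmt.Println(size, mod, err)
}
```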


@ -6,7 +6,6 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"net/http" "net/http"
"os" "os"
stdpath "path" stdpath "path"
@ -16,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/conf" "github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/sign" "github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
@ -27,6 +27,7 @@ import (
type Local struct { type Local struct {
model.Storage model.Storage
Addition Addition
mkdirPerm int32
} }
func (d *Local) Config() driver.Config { func (d *Local) Config() driver.Config {
@ -34,6 +35,15 @@ func (d *Local) Config() driver.Config {
} }
func (d *Local) Init(ctx context.Context) error { func (d *Local) Init(ctx context.Context) error {
if d.MkdirPerm == "" {
d.mkdirPerm = 0777
} else {
v, err := strconv.ParseUint(d.MkdirPerm, 8, 32)
if err != nil {
return err
}
d.mkdirPerm = int32(v)
}
if !utils.Exists(d.GetRootPath()) { if !utils.Exists(d.GetRootPath()) {
return fmt.Errorf("root folder %s not exists", d.GetRootPath()) return fmt.Errorf("root folder %s not exists", d.GetRootPath())
} }
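The local driver now treats MkdirPerm as an octal string ("777" by default) and converts it with strconv.ParseUint in base 8 before using it as an os.FileMode. A quick sketch of that conversion:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	perm := "755" // hypothetical value of the mkdir_perm setting
	v, err := strconv.ParseUint(perm, 8, 32) // base-8 parse, as in Init
	if err != nil {
		panic(err)
	}
	fmt.Printf("%o -> %v\n", v, os.FileMode(v)) // 755 -> -rwxr-xr-x
}
```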
@ -57,7 +67,7 @@ func (d *Local) GetAddition() driver.Additional {
func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) { func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
fullPath := dir.GetPath() fullPath := dir.GetPath()
rawFiles, err := ioutil.ReadDir(fullPath) rawFiles, err := readDir(fullPath)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -67,10 +77,13 @@ func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
continue continue
} }
thumb := "" thumb := ""
if d.Thumbnail && utils.GetFileType(f.Name()) == conf.IMAGE { if d.Thumbnail {
thumb = common.GetApiUrl(nil) + stdpath.Join("/d", args.ReqPath, f.Name()) typeName := utils.GetFileType(f.Name())
thumb = utils.EncodePath(thumb, true) if typeName == conf.IMAGE || typeName == conf.VIDEO {
thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(args.ReqPath, f.Name())) thumb = common.GetApiUrl(nil) + stdpath.Join("/d", args.ReqPath, f.Name())
thumb = utils.EncodePath(thumb, true)
thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(args.ReqPath, f.Name()))
}
} }
isFolder := f.IsDir() || isSymlinkDir(f, fullPath) isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
var size int64 var size int64
@ -94,15 +107,50 @@ func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
return files, nil return files, nil
} }
func (d *Local) Get(ctx context.Context, path string) (model.Obj, error) {
path = filepath.Join(d.GetRootPath(), path)
f, err := os.Stat(path)
if err != nil {
if strings.Contains(err.Error(), "cannot find the file") {
return nil, errs.ObjectNotFound
}
return nil, err
}
isFolder := f.IsDir() || isSymlinkDir(f, path)
size := f.Size()
if isFolder {
size = 0
}
file := model.Object{
Path: path,
Name: f.Name(),
Modified: f.ModTime(),
Size: size,
IsFolder: isFolder,
}
return &file, nil
}
func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
fullPath := file.GetPath() fullPath := file.GetPath()
var link model.Link var link model.Link
if args.Type == "thumb" && utils.Ext(file.GetName()) != "svg" { if args.Type == "thumb" && utils.Ext(file.GetName()) != "svg" {
imgData, err := ioutil.ReadFile(fullPath) var srcBuf *bytes.Buffer
if err != nil { if utils.GetFileType(file.GetName()) == conf.VIDEO {
return nil, err videoBuf, err := GetSnapshot(fullPath, 10)
if err != nil {
return nil, err
}
srcBuf = videoBuf
} else {
imgData, err := os.ReadFile(fullPath)
if err != nil {
return nil, err
}
imgBuf := bytes.NewBuffer(imgData)
srcBuf = imgBuf
} }
srcBuf := bytes.NewBuffer(imgData)
image, err := imaging.Decode(srcBuf) image, err := imaging.Decode(srcBuf)
if err != nil { if err != nil {
return nil, err return nil, err
@ -126,7 +174,7 @@ func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
func (d *Local) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error { func (d *Local) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
fullPath := filepath.Join(parentDir.GetPath(), dirName) fullPath := filepath.Join(parentDir.GetPath(), dirName)
err := os.MkdirAll(fullPath, 0700) err := os.MkdirAll(fullPath, os.FileMode(d.mkdirPerm))
if err != nil { if err != nil {
return err return err
} }


@ -7,8 +7,9 @@ import (
type Addition struct { type Addition struct {
driver.RootPath driver.RootPath
Thumbnail bool `json:"thumbnail" required:"true" help:"enable thumbnail"` Thumbnail bool `json:"thumbnail" required:"true" help:"enable thumbnail"`
ShowHidden bool `json:"show_hidden" default:"true" required:"false" help:"show hidden directories and files"` ShowHidden bool `json:"show_hidden" default:"true" required:"false" help:"show hidden directories and files"`
MkdirPerm string `json:"mkdir_perm" default:"777"`
} }
var config = driver.Config{ var config = driver.Config{


@ -1,9 +1,14 @@
package local package local
import ( import (
"bytes"
"fmt"
"io/fs" "io/fs"
"os" "os"
"path/filepath" "path/filepath"
"sort"
ffmpeg "github.com/u2takey/ffmpeg-go"
) )
func isSymlinkDir(f fs.FileInfo, path string) bool { func isSymlinkDir(f fs.FileInfo, path string) bool {
@ -23,3 +28,30 @@ func isSymlinkDir(f fs.FileInfo, path string) bool {
} }
return false return false
} }
func GetSnapshot(videoPath string, frameNum int) (imgData *bytes.Buffer, err error) {
srcBuf := bytes.NewBuffer(nil)
err = ffmpeg.Input(videoPath).Filter("select", ffmpeg.Args{fmt.Sprintf("gte(n,%d)", frameNum)}).
Output("pipe:", ffmpeg.KwArgs{"vframes": 1, "format": "image2", "vcodec": "mjpeg"}).
WithOutput(srcBuf, os.Stdout).
Run()
if err != nil {
return nil, err
}
return srcBuf, nil
}
func readDir(dirname string) ([]fs.FileInfo, error) {
f, err := os.Open(dirname)
if err != nil {
return nil, err
}
list, err := f.Readdir(-1)
f.Close()
if err != nil {
return nil, err
}
sort.Slice(list, func(i, j int) bool { return list[i].Name() < list[j].Name() })
return list, nil
}
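Video thumbnails are produced by piping a single frame out of ffmpeg through the u2takey/ffmpeg-go bindings and decoding it like any other image. A hedged usage sketch that feeds GetSnapshot-style output into imaging.Decode, assuming an ffmpeg binary on PATH and a made-up local test file:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"github.com/disintegration/imaging"
	ffmpeg "github.com/u2takey/ffmpeg-go"
)

// Grab frame n of a video as JPEG bytes, mirroring the driver's GetSnapshot.
func snapshot(videoPath string, frameNum int) (*bytes.Buffer, error) {
	buf := bytes.NewBuffer(nil)
	err := ffmpeg.Input(videoPath).
		Filter("select", ffmpeg.Args{fmt.Sprintf("gte(n,%d)", frameNum)}).
		Output("pipe:", ffmpeg.KwArgs{"vframes": 1, "format": "image2", "vcodec": "mjpeg"}).
		WithOutput(buf, os.Stdout).
		Run()
	if err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	buf, err := snapshot("./sample.mp4", 10) // hypothetical local file
	if err != nil {
		panic(err)
	}
	img, err := imaging.Decode(buf)
	if err != nil {
		panic(err)
	}
	fmt.Println("frame size:", img.Bounds().Dx(), "x", img.Bounds().Dy())
}
```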


@ -4,6 +4,7 @@ import (
"context" "context"
"fmt" "fmt"
"net/http" "net/http"
"path"
"github.com/alist-org/alist/v3/drivers/base" "github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver" "github.com/alist-org/alist/v3/internal/driver"
@ -75,9 +76,19 @@ func (d *Onedrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName str
} }
func (d *Onedrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error { func (d *Onedrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
parentPath := ""
if dstDir.GetID() == "" {
parentPath = dstDir.GetPath()
if utils.PathEqual(parentPath, "/") {
parentPath = path.Join("/drive/root", parentPath)
} else {
parentPath = path.Join("/drive/root:/", parentPath)
}
}
data := base.Json{ data := base.Json{
"parentReference": base.Json{ "parentReference": base.Json{
"id": dstDir.GetID(), "id": dstDir.GetID(),
"path": parentPath,
}, },
"name": srcObj.GetName(), "name": srcObj.GetName(),
} }
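When the destination directory has no ID (path-addressed storage), the Graph PATCH body now references the parent by path under /drive/root instead. A small sketch of the two path forms the new branch produces (simplified "/" check, made-up paths):

```go
package main

import (
	"fmt"
	"path"
)

// Mirror of the new destination handling in Move: if the target dir has no
// ID, address it by Graph path under /drive/root.
func parentPath(id, dirPath string) string {
	if id != "" {
		return "" // the id wins, path stays empty
	}
	if dirPath == "/" {
		return path.Join("/drive/root", dirPath) // "/drive/root"
	}
	return path.Join("/drive/root:/", dirPath) // e.g. "/drive/root:/docs/sub"
}

func main() {
	fmt.Println(parentPath("", "/"))         // /drive/root
	fmt.Println(parentPath("", "/docs/sub")) // /drive/root:/docs/sub
	fmt.Println(parentPath("ABC123", "/x"))  // "" (the id is used instead)
}
```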
@ -89,13 +100,15 @@ func (d *Onedrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
} }
func (d *Onedrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error { func (d *Onedrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
//dstDir, err := op.GetUnwrap(ctx, d, stdpath.Dir(srcObj.GetPath()))
var parentID string var parentID string
if o, ok := srcObj.(*Object); ok { if o, ok := srcObj.(*Object); ok {
parentID = o.ParentID parentID = o.ParentID
} else { } else {
return fmt.Errorf("srcObj is not Object") return fmt.Errorf("srcObj is not Object")
} }
if parentID == "" {
parentID = "root"
}
data := base.Json{ data := base.Json{
"parentReference": base.Json{ "parentReference": base.Json{
"id": parentID, "id": parentID,


@ -11,7 +11,7 @@ type Addition struct {
IsSharepoint bool `json:"is_sharepoint"` IsSharepoint bool `json:"is_sharepoint"`
ClientID string `json:"client_id" required:"true"` ClientID string `json:"client_id" required:"true"`
ClientSecret string `json:"client_secret" required:"true"` ClientSecret string `json:"client_secret" required:"true"`
RedirectUri string `json:"redirect_uri" required:"true" default:"https://tool.nn.ci/onedrive/callback"` RedirectUri string `json:"redirect_uri" required:"true" default:"https://alist.nn.ci/tool/onedrive/callback"`
RefreshToken string `json:"refresh_token" required:"true"` RefreshToken string `json:"refresh_token" required:"true"`
SiteId string `json:"site_id"` SiteId string `json:"site_id"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"5"` ChunkSize int64 `json:"chunk_size" type:"number" default:"5"`


@ -43,7 +43,7 @@ type File struct {
} }
type Object struct { type Object struct {
model.ObjThumbURL model.ObjThumb
ParentID string ParentID string
} }
@ -53,7 +53,7 @@ func fileToObj(f File, parentID string) *Object {
thumb = f.Thumbnails[0].Medium.Url thumb = f.Thumbnails[0].Medium.Url
} }
return &Object{ return &Object{
ObjThumbURL: model.ObjThumbURL{ ObjThumb: model.ObjThumb{
Object: model.Object{ Object: model.Object{
ID: f.Id, ID: f.Id,
Name: f.Name, Name: f.Name,
@ -62,7 +62,7 @@ func fileToObj(f File, parentID string) *Object {
IsFolder: f.File == nil, IsFolder: f.File == nil,
}, },
Thumbnail: model.Thumbnail{Thumbnail: thumb}, Thumbnail: model.Thumbnail{Thumbnail: thumb},
Url: model.Url{Url: f.Url}, //Url: model.Url{Url: f.Url},
}, },
ParentID: parentID, ParentID: parentID,
} }


@ -127,7 +127,7 @@ func (d *Onedrive) Request(url string, method string, callback base.ReqCallback,
func (d *Onedrive) getFiles(path string) ([]File, error) { func (d *Onedrive) getFiles(path string) ([]File, error) {
var res []File var res []File
nextLink := d.GetMetaUrl(false, path) + "/children?$expand=thumbnails" nextLink := d.GetMetaUrl(false, path) + "/children?$top=5000&$expand=thumbnails($select=medium)&$select=id,name,size,lastModifiedDateTime,content.downloadUrl,file,parentReference"
for nextLink != "" { for nextLink != "" {
var files Files var files Files
_, err := d.Request(nextLink, http.MethodGet, nil, &files) _, err := d.Request(nextLink, http.MethodGet, nil, &files)


@ -0,0 +1,160 @@
package onedrive_app
import (
"context"
"fmt"
"net/http"
"path"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type OnedriveAPP struct {
model.Storage
Addition
AccessToken string
}
func (d *OnedriveAPP) Config() driver.Config {
return config
}
func (d *OnedriveAPP) GetAddition() driver.Additional {
return &d.Addition
}
func (d *OnedriveAPP) Init(ctx context.Context) error {
if d.ChunkSize < 1 {
d.ChunkSize = 5
}
return d.accessToken()
}
func (d *OnedriveAPP) Drop(ctx context.Context) error {
return nil
}
func (d *OnedriveAPP) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(dir.GetPath())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src, dir.GetID()), nil
})
}
func (d *OnedriveAPP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
f, err := d.GetFile(file.GetPath())
if err != nil {
return nil, err
}
if f.File == nil {
return nil, errs.NotFile
}
return &model.Link{
URL: f.Url,
}, nil
}
func (d *OnedriveAPP) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
url := d.GetMetaUrl(false, parentDir.GetPath()) + "/children"
data := base.Json{
"name": dirName,
"folder": base.Json{},
"@microsoft.graph.conflictBehavior": "rename",
}
_, err := d.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
}
func (d *OnedriveAPP) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
parentPath := ""
if dstDir.GetID() == "" {
parentPath = dstDir.GetPath()
if utils.PathEqual(parentPath, "/") {
parentPath = path.Join("/drive/root", parentPath)
} else {
parentPath = path.Join("/drive/root:/", parentPath)
}
}
data := base.Json{
"parentReference": base.Json{
"id": dstDir.GetID(),
"path": parentPath,
},
"name": srcObj.GetName(),
}
url := d.GetMetaUrl(false, srcObj.GetPath())
_, err := d.Request(url, http.MethodPatch, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
}
func (d *OnedriveAPP) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
var parentID string
if o, ok := srcObj.(*Object); ok {
parentID = o.ParentID
} else {
return fmt.Errorf("srcObj is not Object")
}
if parentID == "" {
parentID = "root"
}
data := base.Json{
"parentReference": base.Json{
"id": parentID,
},
"name": newName,
}
url := d.GetMetaUrl(false, srcObj.GetPath())
_, err := d.Request(url, http.MethodPatch, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
}
func (d *OnedriveAPP) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
dst, err := d.GetFile(dstDir.GetPath())
if err != nil {
return err
}
data := base.Json{
"parentReference": base.Json{
"driveId": dst.ParentReference.DriveId,
"id": dst.Id,
},
"name": srcObj.GetName(),
}
url := d.GetMetaUrl(false, srcObj.GetPath()) + "/copy"
_, err = d.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
}
func (d *OnedriveAPP) Remove(ctx context.Context, obj model.Obj) error {
url := d.GetMetaUrl(false, obj.GetPath())
_, err := d.Request(url, http.MethodDelete, nil, nil)
return err
}
func (d *OnedriveAPP) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
var err error
if stream.GetSize() <= 4*1024*1024 {
err = d.upSmall(ctx, dstDir, stream)
} else {
err = d.upBig(ctx, dstDir, stream, up)
}
return err
}
var _ driver.Driver = (*OnedriveAPP)(nil)


@ -0,0 +1,28 @@
package onedrive_app
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Region string `json:"region" type:"select" required:"true" options:"global,cn,us,de" default:"global"`
ClientID string `json:"client_id" required:"true"`
ClientSecret string `json:"client_secret" required:"true"`
TenantID string `json:"tenant_id"`
Email string `json:"email"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"5"`
}
var config = driver.Config{
Name: "OnedriveAPP",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &OnedriveAPP{}
})
}


@ -0,0 +1,74 @@
package onedrive_app
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Host struct {
Oauth string
Api string
}
type TokenErr struct {
Error string `json:"error"`
ErrorDescription string `json:"error_description"`
}
type RespErr struct {
Error struct {
Code string `json:"code"`
Message string `json:"message"`
} `json:"error"`
}
type File struct {
Id string `json:"id"`
Name string `json:"name"`
Size int64 `json:"size"`
LastModifiedDateTime time.Time `json:"lastModifiedDateTime"`
Url string `json:"@microsoft.graph.downloadUrl"`
File *struct {
MimeType string `json:"mimeType"`
} `json:"file"`
Thumbnails []struct {
Medium struct {
Url string `json:"url"`
} `json:"medium"`
} `json:"thumbnails"`
ParentReference struct {
DriveId string `json:"driveId"`
} `json:"parentReference"`
}
type Object struct {
model.ObjThumb
ParentID string
}
func fileToObj(f File, parentID string) *Object {
thumb := ""
if len(f.Thumbnails) > 0 {
thumb = f.Thumbnails[0].Medium.Url
}
return &Object{
ObjThumb: model.ObjThumb{
Object: model.Object{
ID: f.Id,
Name: f.Name,
Size: f.Size,
Modified: f.LastModifiedDateTime,
IsFolder: f.File == nil,
},
Thumbnail: model.Thumbnail{Thumbnail: thumb},
//Url: model.Url{Url: f.Url},
},
ParentID: parentID,
}
}
type Files struct {
Value []File `json:"value"`
NextLink string `json:"@odata.nextLink"`
}


@ -0,0 +1,196 @@
package onedrive_app
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"net/http"
stdpath "path"
"strconv"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
)
var onedriveHostMap = map[string]Host{
"global": {
Oauth: "https://login.microsoftonline.com",
Api: "https://graph.microsoft.com",
},
"cn": {
Oauth: "https://login.chinacloudapi.cn",
Api: "https://microsoftgraph.chinacloudapi.cn",
},
"us": {
Oauth: "https://login.microsoftonline.us",
Api: "https://graph.microsoft.us",
},
"de": {
Oauth: "https://login.microsoftonline.de",
Api: "https://graph.microsoft.de",
},
}
func (d *OnedriveAPP) GetMetaUrl(auth bool, path string) string {
host, _ := onedriveHostMap[d.Region]
path = utils.EncodePath(path, true)
if auth {
return host.Oauth
}
if path == "/" || path == "\\" {
return fmt.Sprintf("%s/v1.0/users/%s/drive/root", host.Api, d.Email)
}
return fmt.Sprintf("%s/v1.0/users/%s/drive/root:%s:", host.Api, d.Email, path)
}
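Because this driver authenticates app-only, it addresses the drive as /users/{email}/drive rather than /me/drive; GetMetaUrl assembles the metadata URL from the region host, the configured email, and the item path. A sketch of the resulting URL shapes for the global cloud (sample email and path, and without the URL-encoding the real helper applies):

```go
package main

import "fmt"

// Sketch of the URL shapes produced by OnedriveAPP.GetMetaUrl for the global
// cloud; email and path below are made-up examples.
func metaURL(api, email, p string) string {
	if p == "/" || p == "\\" {
		return fmt.Sprintf("%s/v1.0/users/%s/drive/root", api, email)
	}
	return fmt.Sprintf("%s/v1.0/users/%s/drive/root:%s:", api, email, p)
}

func main() {
	api := "https://graph.microsoft.com"
	fmt.Println(metaURL(api, "admin@contoso.com", "/"))
	// https://graph.microsoft.com/v1.0/users/admin@contoso.com/drive/root
	fmt.Println(metaURL(api, "admin@contoso.com", "/docs/report.docx"))
	// https://graph.microsoft.com/v1.0/users/admin@contoso.com/drive/root:/docs/report.docx:
}
```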
func (d *OnedriveAPP) accessToken() error {
var err error
for i := 0; i < 3; i++ {
err = d._accessToken()
if err == nil {
break
}
}
return err
}
func (d *OnedriveAPP) _accessToken() error {
url := d.GetMetaUrl(true, "") + "/" + d.TenantID + "/oauth2/token"
var resp base.TokenResp
var e TokenErr
_, err := base.RestyClient.R().SetResult(&resp).SetError(&e).SetFormData(map[string]string{
"grant_type": "client_credentials",
"client_id": d.ClientID,
"client_secret": d.ClientSecret,
"resource": "https://graph.microsoft.com/",
"scope": "https://graph.microsoft.com/.default",
}).Post(url)
if err != nil {
return err
}
if e.Error != "" {
return fmt.Errorf("%s", e.ErrorDescription)
}
if resp.AccessToken == "" {
return errs.EmptyToken
}
d.AccessToken = resp.AccessToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *OnedriveAPP) Request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e RespErr
req.SetError(&e)
res, err := req.Execute(method, url)
if err != nil {
return nil, err
}
if e.Error.Code != "" {
if e.Error.Code == "InvalidAuthenticationToken" {
err = d.accessToken()
if err != nil {
return nil, err
}
return d.Request(url, method, callback, resp)
}
return nil, errors.New(e.Error.Message)
}
return res.Body(), nil
}
func (d *OnedriveAPP) getFiles(path string) ([]File, error) {
var res []File
nextLink := d.GetMetaUrl(false, path) + "/children?$top=5000&$expand=thumbnails($select=medium)&$select=id,name,size,lastModifiedDateTime,content.downloadUrl,file,parentReference"
for nextLink != "" {
var files Files
_, err := d.Request(nextLink, http.MethodGet, nil, &files)
if err != nil {
return nil, err
}
res = append(res, files.Value...)
nextLink = files.NextLink
}
return res, nil
}
func (d *OnedriveAPP) GetFile(path string) (*File, error) {
var file File
u := d.GetMetaUrl(false, path)
_, err := d.Request(u, http.MethodGet, nil, &file)
return &file, err
}
func (d *OnedriveAPP) upSmall(ctx context.Context, dstDir model.Obj, stream model.FileStreamer) error {
url := d.GetMetaUrl(false, stdpath.Join(dstDir.GetPath(), stream.GetName())) + "/content"
data, err := io.ReadAll(stream)
if err != nil {
return err
}
_, err = d.Request(url, http.MethodPut, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, nil)
return err
}
func (d *OnedriveAPP) upBig(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
url := d.GetMetaUrl(false, stdpath.Join(dstDir.GetPath(), stream.GetName())) + "/createUploadSession"
res, err := d.Request(url, http.MethodPost, nil, nil)
if err != nil {
return err
}
uploadUrl := jsoniter.Get(res, "uploadUrl").ToString()
var finish int64 = 0
DEFAULT := d.ChunkSize * 1024 * 1024
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
log.Debugf("upload: %d", finish)
var byteSize int64 = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, bytes.NewBuffer(byteData))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
finish += byteSize
res, err := base.HttpClient.Do(req)
if res.StatusCode != 201 && res.StatusCode != 202 {
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
}
res.Body.Close()
up(int(finish * 100 / stream.GetSize()))
}
return nil
}
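upBig creates an upload session and then PUTs ChunkSize-MB slices, each tagged with a Content-Range header. A quick sketch of how those ranges line up for a hypothetical 12 MiB file with 5 MiB chunks:

```go
package main

import "fmt"

func main() {
	const chunk = 5 * 1024 * 1024      // 5 MiB, the default ChunkSize
	var total int64 = 12 * 1024 * 1024 // hypothetical 12 MiB upload

	var finish int64
	for finish < total {
		size := int64(chunk)
		if left := total - finish; left < size {
			size = left // the last piece is smaller
		}
		// Same header the driver sends with each PUT to the upload session URL.
		fmt.Printf("Content-Range: bytes %d-%d/%d\n", finish, finish+size-1, total)
		finish += size
	}
	// Content-Range: bytes 0-5242879/12582912
	// Content-Range: bytes 5242880-10485759/12582912
	// Content-Range: bytes 10485760-12582911/12582912
}
```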


@ -8,7 +8,6 @@ import (
"github.com/alist-org/alist/v3/internal/model" "github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils" "github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2" "github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
) )
type PikPakShare struct { type PikPakShare struct {
@ -71,10 +70,6 @@ func (d *PikPakShare) Link(ctx context.Context, file model.Obj, args model.LinkA
link := model.Link{ link := model.Link{
URL: resp.FileInfo.WebContentLink, URL: resp.FileInfo.WebContentLink,
} }
if len(resp.FileInfo.Medias) > 0 && resp.FileInfo.Medias[0].Link.Url != "" {
log.Debugln("use media link")
link.URL = resp.FileInfo.Medias[0].Link.Url
}
return &link, nil return &link, nil
} }


@ -175,9 +175,9 @@ func (d *Quark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
var bytes []byte var bytes []byte
md5s := make([]string, 0) md5s := make([]string, 0)
defaultBytes := make([]byte, partSize) defaultBytes := make([]byte, partSize)
left := stream.GetSize() total := stream.GetSize()
left := total
partNumber := 1 partNumber := 1
sizeDivide100 := stream.GetSize() / 100
for left > 0 { for left > 0 {
if utils.IsCanceled(ctx) { if utils.IsCanceled(ctx) {
return ctx.Err() return ctx.Err()
@ -191,7 +191,7 @@ func (d *Quark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
if err != nil { if err != nil {
return err return err
} }
left -= int64(partSize) left -= int64(len(bytes))
log.Debugf("left: %d", left) log.Debugf("left: %d", left)
m, err := d.upPart(ctx, pre, stream.GetMimetype(), partNumber, bytes) m, err := d.upPart(ctx, pre, stream.GetMimetype(), partNumber, bytes)
//m, err := driver.UpPart(pre, file.GetMIMEType(), partNumber, bytes, account, md5Str, sha1Str) //m, err := driver.UpPart(pre, file.GetMIMEType(), partNumber, bytes, account, md5Str, sha1Str)
@ -203,7 +203,7 @@ func (d *Quark) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
} }
md5s = append(md5s, m) md5s = append(md5s, m)
partNumber++ partNumber++
up(100 - int(left/sizeDivide100)) up(int(100 * (total - left) / total))
} }
err = d.upCommit(pre, md5s) err = d.upCommit(pre, md5s)
if err != nil { if err != nil {
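The progress change replaces `100 - left/(total/100)` with `100*(total-left)/total` and decrements `left` by the bytes actually read. A small numeric check of the new formula (hypothetical 7 MiB file, 4 MiB parts):

```go
package main

import "fmt"

func main() {
	var total int64 = 7 * 1024 * 1024 // hypothetical 7 MiB file
	part := int64(4 * 1024 * 1024)    // 4 MiB parts

	left := total
	for left > 0 {
		read := part
		if left < part {
			read = left // the final read returns fewer bytes
		}
		left -= read
		// New formula: percentage of bytes actually uploaded so far.
		fmt.Printf("progress: %d%%\n", 100*(total-left)/total)
	}
	// progress: 57%
	// progress: 100%
	// The old form divides by zero for files under 100 bytes and could exceed
	// 100% because left was decremented by a full part size on the short last read.
}
```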


@ -16,6 +16,7 @@ var config = driver.Config{
Name: "Quark", Name: "Quark",
OnlyProxy: true, OnlyProxy: true,
DefaultRoot: "0", DefaultRoot: "0",
NoOverwriteUpload: true,
} }
func init() { func init() {


@ -59,7 +59,8 @@ func (d *S3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]mo
func (d *S3) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) { func (d *S3) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
path := getKey(file.GetPath(), false) path := getKey(file.GetPath(), false)
disposition := fmt.Sprintf(`attachment;filename="%s"`, url.QueryEscape(stdpath.Base(path))) filename := stdpath.Base(path)
disposition := fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`, filename, url.PathEscape(filename))
input := &s3.GetObjectInput{ input := &s3.GetObjectInput{
Bucket: &d.Bucket, Bucket: &d.Bucket,
Key: &path, Key: &path,
@ -127,6 +128,9 @@ func (d *S3) Remove(ctx context.Context, obj model.Obj) error {
func (d *S3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error { func (d *S3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
uploader := s3manager.NewUploader(d.Session) uploader := s3manager.NewUploader(d.Session)
if stream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = stream.GetSize() / (s3manager.MaxUploadParts - 1)
}
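The aws-sdk-go uploader allows at most s3manager.MaxUploadParts (10000) parts of s3manager.DefaultUploadPartSize (5 MiB), roughly 50000 MiB in total, which is why larger streams previously failed; the new branch raises PartSize so the part count stays under the cap. A quick arithmetic sketch with a made-up 60 GB stream:

```go
package main

import "fmt"

const (
	maxUploadParts        = 10000           // s3manager.MaxUploadParts
	defaultUploadPartSize = 5 * 1024 * 1024 // s3manager.DefaultUploadPartSize (5 MiB)
)

func main() {
	var size int64 = 60_000 * 1024 * 1024 // hypothetical ~60 GB upload
	limit := int64(maxUploadParts) * defaultUploadPartSize
	fmt.Println("default limit bytes:", limit) // 52428800000 (~50000 MiB)

	partSize := int64(defaultUploadPartSize)
	if size > limit {
		partSize = size / (maxUploadParts - 1) // same adjustment as the driver
	}
	fmt.Println("chosen part size:", partSize, "parts:", size/partSize+1)
}
```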
key := getKey(stdpath.Join(dstDir.GetPath(), stream.GetName()), false) key := getKey(stdpath.Join(dstDir.GetPath(), stream.GetName()), false)
log.Debugln("key:", key) log.Debugln("key:", key)
input := &s3manager.UploadInput{ input := &s3manager.UploadInput{


@@ -9,6 +9,7 @@ import (

 	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/alist-org/alist/v3/internal/op"
+	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/aws/credentials"
 	"github.com/aws/aws-sdk-go/aws/request"
@@ -38,7 +39,14 @@ func (d *S3) getClient(link bool) *s3.S3 {
 			if r.HTTPRequest.Method != http.MethodGet {
 				return
 			}
-			r.HTTPRequest.URL.Host = d.CustomHost
+			// check whether CustomHost starts with http:// or https://
+			split := strings.SplitN(d.CustomHost, "://", 2)
+			if utils.SliceContains([]string{"http", "https"}, split[0]) {
+				r.HTTPRequest.URL.Scheme = split[0]
+				r.HTTPRequest.URL.Host = split[1]
+			} else {
+				r.HTTPRequest.URL.Host = d.CustomHost
+			}
 		})
 	}
 	return client
@@ -140,6 +148,9 @@ func (d *S3) listV2(prefix string) ([]model.Obj, error) {
 		files = append(files, &file)
 	}
 	for _, object := range listObjectsResult.Contents {
+		if strings.HasSuffix(*object.Key, "/") {
+			continue
+		}
 		name := path.Base(*object.Key)
 		if name == getPlaceholderName(d.Placeholder) || name == d.Placeholder {
 			continue
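With this change `CustomHost` may carry an explicit scheme: the value is split on `://`, and when the prefix is `http` or `https` both the scheme and the host of the rewritten request URL are overridden; otherwise only the host is replaced, as before. A small standalone illustration of that parsing (hypothetical function and host):

package main

import (
	"fmt"
	"strings"
)

// splitCustomHost mimics the parsing above: "https://cdn.example.com" yields
// ("https", "cdn.example.com"), while a bare "cdn.example.com" keeps the
// request's original scheme and only swaps the host.
func splitCustomHost(customHost string) (scheme, host string) {
	parts := strings.SplitN(customHost, "://", 2)
	if len(parts) == 2 && (parts[0] == "http" || parts[0] == "https") {
		return parts[0], parts[1]
	}
	return "", customHost
}

func main() {
	fmt.Println(splitCustomHost("https://cdn.example.com"))
	fmt.Println(splitCustomHost("cdn.example.com"))
}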

drivers/seafile/driver.go (new file)
View File

@@ -0,0 +1,160 @@
package seafile
import (
"context"
"fmt"
"net/http"
"path/filepath"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type Seafile struct {
model.Storage
Addition
authorization string
}
func (d *Seafile) Config() driver.Config {
return config
}
func (d *Seafile) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Seafile) Init(ctx context.Context) error {
d.Address = strings.TrimSuffix(d.Address, "/")
return d.getToken()
}
func (d *Seafile) Drop(ctx context.Context) error {
return nil
}
func (d *Seafile) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
path := dir.GetPath()
var resp []RepoDirItemResp
_, err := d.request(http.MethodGet, fmt.Sprintf("/api2/repos/%s/dir/", d.Addition.RepoId), func(req *resty.Request) {
req.SetResult(&resp).SetQueryParams(map[string]string{
"p": path,
})
})
if err != nil {
return nil, err
}
return utils.SliceConvert(resp, func(f RepoDirItemResp) (model.Obj, error) {
return &model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: time.Unix(f.Modified, 0),
Size: f.Size,
IsFolder: f.Type == "dir",
},
// Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}, nil
})
}
func (d *Seafile) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
res, err := d.request(http.MethodGet, fmt.Sprintf("/api2/repos/%s/file/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": file.GetPath(),
"reuse": "1",
})
})
if err != nil {
return nil, err
}
u := string(res)
u = u[1 : len(u)-1] // remove quotes
return &model.Link{URL: u}, nil
}
func (d *Seafile) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request(http.MethodPost, fmt.Sprintf("/api2/repos/%s/dir/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": filepath.Join(parentDir.GetPath(), dirName),
}).SetFormData(map[string]string{
"operation": "mkdir",
})
})
return err
}
func (d *Seafile) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request(http.MethodPost, fmt.Sprintf("/api2/repos/%s/file/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": srcObj.GetPath(),
}).SetFormData(map[string]string{
"operation": "move",
"dst_repo": d.Addition.RepoId,
"dst_dir": dstDir.GetPath(),
})
}, true)
return err
}
func (d *Seafile) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request(http.MethodPost, fmt.Sprintf("/api2/repos/%s/file/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": srcObj.GetPath(),
}).SetFormData(map[string]string{
"operation": "rename",
"newname": newName,
})
}, true)
return err
}
func (d *Seafile) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request(http.MethodPost, fmt.Sprintf("/api2/repos/%s/file/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": srcObj.GetPath(),
}).SetFormData(map[string]string{
"operation": "copy",
"dst_repo": d.Addition.RepoId,
"dst_dir": dstDir.GetPath(),
})
})
return err
}
func (d *Seafile) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.request(http.MethodDelete, fmt.Sprintf("/api2/repos/%s/file/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": obj.GetPath(),
})
})
return err
}
func (d *Seafile) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
res, err := d.request(http.MethodGet, fmt.Sprintf("/api2/repos/%s/upload-link/", d.Addition.RepoId), func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"p": dstDir.GetPath(),
})
})
if err != nil {
return err
}
u := string(res)
u = u[1 : len(u)-1] // remove quotes
_, err = d.request(http.MethodPost, u, func(req *resty.Request) {
req.SetFileReader("file", stream.GetName(), stream).
SetFormData(map[string]string{
"parent_dir": dstDir.GetPath(),
"replace": "1",
})
})
return err
}
var _ driver.Driver = (*Seafile)(nil)

drivers/seafile/meta.go (new file)
View File

@@ -0,0 +1,26 @@
package seafile
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"address" required:"true"`
UserName string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
RepoId string `json:"repoId" required:"true"`
}
var config = driver.Config{
Name: "Seafile",
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Seafile{}
})
}

drivers/seafile/types.go (new file)
View File

@@ -0,0 +1,14 @@
package seafile
type AuthTokenResp struct {
Token string `json:"token"`
}
type RepoDirItemResp struct {
Id string `json:"id"`
Type string `json:"type"` // dir, file
Name string `json:"name"`
Size int64 `json:"size"`
Modified int64 `json:"mtime"`
Permission string `json:"permission"`
}

drivers/seafile/util.go (new file)
View File

@@ -0,0 +1,59 @@
package seafile
import (
"fmt"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/go-resty/resty/v2"
)
func (d *Seafile) getToken() error {
var authResp AuthTokenResp
res, err := base.RestyClient.R().
SetResult(&authResp).
SetFormData(map[string]string{
"username": d.UserName,
"password": d.Password,
}).
Post(d.Address + "/api2/auth-token/")
if err != nil {
return err
}
if res.StatusCode() >= 400 {
return fmt.Errorf("get token failed: %s", res.String())
}
d.authorization = fmt.Sprintf("Token %s", authResp.Token)
return nil
}
func (d *Seafile) request(method string, pathname string, callback base.ReqCallback, noRedirect ...bool) ([]byte, error) {
full := pathname
if !strings.HasPrefix(pathname, "http") {
full = d.Address + pathname
}
req := base.RestyClient.R()
if len(noRedirect) > 0 && noRedirect[0] {
req = base.NoRedirectClient.R()
}
	var err error
	var res *resty.Response
for i := 0; i < 2; i++ {
req.SetHeader("Authorization", d.authorization)
callback(req)
		res, err = req.Execute(method, full)
if err != nil {
return nil, err
}
if res.StatusCode() != 401 { // Unauthorized
break
}
err = d.getToken()
if err != nil {
return nil, err
}
}
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed: %s", res.String())
}
return res.Body(), nil
}
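Note that `res` must be declared as a `*resty.Response` before the loop and assigned with `=`; a short-variable declaration (`:=`) inside the loop body would create a new, loop-scoped variable and leave the final status check reading a nil response. A tiny illustration of that shadowing pitfall, independent of resty:

package main

import "fmt"

func main() {
	var result string
	for i := 0; i < 2; i++ {
		result := fmt.Sprintf("attempt %d", i) // ":=" declares a new, loop-scoped result
		_ = result
	}
	fmt.Printf("after := loop: %q\n", result) // still "": the outer variable was never set

	for i := 0; i < 2; i++ {
		result = fmt.Sprintf("attempt %d", i) // "=" assigns to the outer variable
	}
	fmt.Printf("after = loop:  %q\n", result) // "attempt 1"
}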

View File

@@ -5,6 +5,7 @@ import (
 	"os"
 	"path"

+	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
@@ -52,9 +53,11 @@ func (d *SFTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	if err != nil {
 		return nil, err
 	}
-	return &model.Link{
+	link := &model.Link{
 		Data: remoteFile,
-	}, nil
+	}
+	base.HandleRange(link, remoteFile, args.Header, file.GetSize())
+	return link, nil
 }

 func (d *SFTP) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
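`base.HandleRange` receives the link, the opened remote file, the request headers, and the file size, which suggests it is what layers HTTP Range support onto the seekable remote file so clients can resume or seek. A generic sketch of the idea, handling only the simple `bytes=N-` form and not alist's actual helper:

package main

import (
	"fmt"
	"io"
	"strconv"
	"strings"
)

// seekForRange handles only the simple "bytes=N-" form: seek to N and report
// how many bytes remain. Real Range handling also covers "N-M" and suffix ranges.
func seekForRange(rs io.ReadSeeker, rangeHeader string, size int64) (int64, error) {
	if !strings.HasPrefix(rangeHeader, "bytes=") || !strings.HasSuffix(rangeHeader, "-") {
		return size, nil // no (or unsupported) Range header: serve the whole file
	}
	start, err := strconv.ParseInt(strings.TrimSuffix(strings.TrimPrefix(rangeHeader, "bytes="), "-"), 10, 64)
	if err != nil || start < 0 || start > size {
		return 0, fmt.Errorf("invalid range %q", rangeHeader)
	}
	if _, err := rs.Seek(start, io.SeekStart); err != nil {
		return 0, err
	}
	return size - start, nil
}

func main() {
	r := strings.NewReader("0123456789")
	n, _ := seekForRange(r, "bytes=4-", int64(r.Len()))
	rest, _ := io.ReadAll(r)
	fmt.Println(n, string(rest)) // 6 456789
}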

View File

@@ -5,8 +5,8 @@ import (
 	"errors"
 	"path/filepath"
 	"strings"
-	"time"

+	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/alist-org/alist/v3/pkg/utils"
@@ -15,10 +15,10 @@ import (
 )

 type SMB struct {
+	lastConnTime int64
 	model.Storage
 	Addition
 	fs *smb2.Share
-	lastConnTime time.Time
 }

 func (d *SMB) Config() driver.Config {
@@ -47,7 +47,7 @@ func (d *SMB) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
 	if err := d.checkConn(); err != nil {
 		return nil, err
 	}
-	fullPath := d.getSMBPath(dir)
+	fullPath := dir.GetPath()
 	rawFiles, err := d.fs.ReadDir(fullPath)
 	if err != nil {
 		d.cleanLastConnTime()
@@ -73,23 +73,25 @@ func (d *SMB) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	if err := d.checkConn(); err != nil {
 		return nil, err
 	}
-	fullPath := d.getSMBPath(file)
+	fullPath := file.GetPath()
 	remoteFile, err := d.fs.Open(fullPath)
 	if err != nil {
 		d.cleanLastConnTime()
 		return nil, err
 	}
-	d.updateLastConnTime()
-	return &model.Link{
+	link := &model.Link{
 		Data: remoteFile,
-	}, nil
+	}
+	base.HandleRange(link, remoteFile, args.Header, file.GetSize())
+	d.updateLastConnTime()
+	return link, nil
 }

 func (d *SMB) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
 	if err := d.checkConn(); err != nil {
 		return err
 	}
-	fullPath := filepath.Join(d.getSMBPath(parentDir), dirName)
+	fullPath := filepath.Join(parentDir.GetPath(), dirName)
 	err := d.fs.MkdirAll(fullPath, 0700)
 	if err != nil {
 		d.cleanLastConnTime()
@@ -103,8 +105,8 @@ func (d *SMB) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
 	if err := d.checkConn(); err != nil {
 		return err
 	}
-	srcPath := d.getSMBPath(srcObj)
-	dstPath := filepath.Join(d.getSMBPath(dstDir), srcObj.GetName())
+	srcPath := srcObj.GetPath()
+	dstPath := filepath.Join(dstDir.GetPath(), srcObj.GetName())
 	err := d.fs.Rename(srcPath, dstPath)
 	if err != nil {
 		d.cleanLastConnTime()
@@ -118,7 +120,7 @@ func (d *SMB) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
 	if err := d.checkConn(); err != nil {
 		return err
 	}
-	srcPath := d.getSMBPath(srcObj)
+	srcPath := srcObj.GetPath()
 	dstPath := filepath.Join(filepath.Dir(srcPath), newName)
 	err := d.fs.Rename(srcPath, dstPath)
 	if err != nil {
@@ -133,8 +135,8 @@ func (d *SMB) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
 	if err := d.checkConn(); err != nil {
 		return err
 	}
-	srcPath := d.getSMBPath(srcObj)
-	dstPath := filepath.Join(d.getSMBPath(dstDir), srcObj.GetName())
+	srcPath := srcObj.GetPath()
+	dstPath := filepath.Join(dstDir.GetPath(), srcObj.GetName())
 	var err error
 	if srcObj.IsDir() {
 		err = d.CopyDir(srcPath, dstPath)
@@ -154,7 +156,7 @@ func (d *SMB) Remove(ctx context.Context, obj model.Obj) error {
 		return err
 	}
 	var err error
-	fullPath := d.getSMBPath(obj)
+	fullPath := obj.GetPath()
 	if obj.IsDir() {
 		err = d.fs.RemoveAll(fullPath)
 	} else {
@@ -172,7 +174,7 @@ func (d *SMB) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
 	if err := d.checkConn(); err != nil {
 		return err
 	}
-	fullPath := filepath.Join(d.getSMBPath(dstDir), stream.GetName())
+	fullPath := filepath.Join(dstDir.GetPath(), stream.GetName())
 	out, err := d.fs.Create(fullPath)
 	if err != nil {
 		d.cleanLastConnTime()

View File

@@ -6,18 +6,22 @@ import (
 	"net"
 	"os"
 	"path/filepath"
+	"sync/atomic"
 	"time"

-	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/hirochachacha/go-smb2"
 )

 func (d *SMB) updateLastConnTime() {
-	d.lastConnTime = time.Now()
+	atomic.StoreInt64(&d.lastConnTime, time.Now().Unix())
 }

 func (d *SMB) cleanLastConnTime() {
-	d.lastConnTime = time.Now().AddDate(0, 0, -1)
+	atomic.StoreInt64(&d.lastConnTime, 0)
+}
+
+func (d *SMB) getLastConnTime() time.Time {
+	return time.Unix(atomic.LoadInt64(&d.lastConnTime), 0)
 }

 func (d *SMB) initFS() error {
@@ -44,7 +48,7 @@ func (d *SMB) initFS() error {
 }

 func (d *SMB) checkConn() error {
-	if time.Since(d.lastConnTime) < 5*time.Minute {
+	if time.Since(d.getLastConnTime()) < 5*time.Minute {
 		return nil
 	}
 	if d.fs != nil {
@@ -53,14 +57,6 @@ func (d *SMB) checkConn() error {
 	return d.initFS()
 }

-func (d *SMB) getSMBPath(dir model.Obj) string {
-	fullPath := dir.GetPath()
-	if fullPath[0:1] != "." {
-		fullPath = "." + fullPath
-	}
-	return fullPath
-}
-
 // CopyFile File copies a single file from src to dst
 func (d *SMB) CopyFile(src, dst string) error {
 	var err error
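Keeping `lastConnTime` as Unix seconds in an `int64` and touching it only through `sync/atomic` makes the five-minute keep-alive check safe under concurrent requests; the old `time.Time` field was read and written with no synchronization. A minimal sketch of the same pattern (hypothetical type, not the SMB driver):

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// conn is a stand-in for the driver struct: the timestamp lives in an int64
// (Unix seconds) and is only accessed through sync/atomic.
type conn struct {
	lastConnTime int64
}

func (c *conn) touch()  { atomic.StoreInt64(&c.lastConnTime, time.Now().Unix()) }
func (c *conn) expire() { atomic.StoreInt64(&c.lastConnTime, 0) }

func (c *conn) isFresh() bool {
	last := time.Unix(atomic.LoadInt64(&c.lastConnTime), 0)
	return time.Since(last) < 5*time.Minute
}

func main() {
	c := &conn{}
	fmt.Println(c.isFresh()) // false: the zero value means "never connected"
	c.touch()
	fmt.Println(c.isFresh()) // true
	c.expire()
	fmt.Println(c.isFresh()) // false again, so the next call reconnects
}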

View File

@@ -8,6 +8,7 @@ import (
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/model"
+	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
 )

@@ -124,11 +125,11 @@ func (d *Teambition) Remove(ctx context.Context, obj model.Obj) error {
 }

 func (d *Teambition) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	res, err := d.request("/projects", http.MethodGet, nil, nil)
+	res, err := d.request("/api/v2/users/me", http.MethodGet, nil, nil)
 	if err != nil {
 		return err
 	}
-	token := GetBetweenStr(string(res), "strikerAuth\":\"", "\",\"phoneForLogin")
+	token := utils.Json.Get(res, "strikerAuth").ToString()
 	var newFile *FileUpload
 	if stream.GetSize() <= 20971520 {
 		// post upload
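The upload token is now taken from the `/api/v2/users/me` response as a JSON field instead of being scraped out of an HTML-escaped page; `utils.Json` looks like a jsoniter-style accessor. A standard-library equivalent, with a made-up response body:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	body := []byte(`{"name":"someone","strikerAuth":"Bearer abc123"}`) // hypothetical example payload
	var me struct {
		StrikerAuth string `json:"strikerAuth"`
	}
	if err := json.Unmarshal(body, &me); err != nil {
		panic(err)
	}
	fmt.Println(me.StrikerAuth) // Bearer abc123
}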

View File

@@ -210,7 +210,7 @@ func (d *Teambition) finishUpload(file *FileUpload, parentId string) error {
 	return err
 }

-func GetBetweenStr(str, start, end string) string {
+func getBetweenStr(str, start, end string) string {
 	n := strings.Index(str, start)
 	if n == -1 {
 		return ""

View File

@@ -32,42 +32,42 @@ func (d *Template) Drop(ctx context.Context) error {
 }

 func (d *Template) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
-	// TODO return the files list
+	// TODO return the files list, required
 	return nil, errs.NotImplement
 }

 func (d *Template) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
-	// TODO return link of file
+	// TODO return link of file, required
 	return nil, errs.NotImplement
 }

 func (d *Template) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
-	// TODO create folder
+	// TODO create folder, optional
 	return errs.NotImplement
 }

 func (d *Template) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
-	// TODO move obj
+	// TODO move obj, optional
 	return errs.NotImplement
 }

 func (d *Template) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
-	// TODO rename obj
+	// TODO rename obj, optional
 	return errs.NotImplement
 }

 func (d *Template) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
-	// TODO copy obj
+	// TODO copy obj, optional
 	return errs.NotImplement
 }

 func (d *Template) Remove(ctx context.Context, obj model.Obj) error {
-	// TODO remove obj
+	// TODO remove obj, optional
 	return errs.NotImplement
 }

 func (d *Template) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	// TODO upload file
+	// TODO upload file, optional
 	return errs.NotImplement
 }

Some files were not shown because too many files have changed in this diff.