Compare commits
47 Commits
a797494aa3
353dd7f796
1c00d64952
ff5cf3f4fa
5b6b2f427a
7877184bee
e9cb37122e
a425392a2b
75acbcc115
30415cefbe
1d06a0019f
3686075a7f
6c1c7e5cc0
c4f901b201
4b7acb1389
15b7169df4
861948bcf3
e5ffd39cf2
8b353da0d2
49bde82426
3e285aaec4
355fc576b1
a69d72aa20
e5d123c5d3
220eb33f88
5238850036
81ac963567
3c21a9a520
1dc1dd1f07
c9ea9bce81
9f08353d31
ce0c3626c2
06f46206db
579f0c06af
b12d92acc9
e700ce15e5
7dbef7d559
7e9cdd8b07
cee6bc6b5d
cfd23c05b4
0c1acd72ca
e2ca06dcca
0828fd787d
2e23ea68d4
4afa822bec
f2ca9b40db
4c2535cb22
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 43 changes)

@@ -7,28 +7,44 @@ body:
       value: |
         Thanks for taking the time to fill out this bug report, please **confirm that your issue is not a duplicate issue and not because of your operation or version issues**
         感谢您花时间填写此错误报告,请**务必确认您的issue不是重复的且不是因为您的操作或版本问题**

   - type: checkboxes
     attributes:
       label: Please make sure of the following things
-      description: You may select more than one, even select all.
+      description: |
+        You must check all the following, otherwise your issue may be closed directly. Or you can go to the [discussions](https://github.com/alist-org/alist/discussions)
+        您必须勾选以下所有内容,否则您的issue可能会被直接关闭。或者您可以去[讨论区](https://github.com/alist-org/alist/discussions)
       options:
-        - label: I have read the [documentation](https://alist.nn.ci).
-        - label: I'm sure there are no duplicate issues or discussions.
-        - label: I'm sure it's due to `alist` and not something else(such as `Dependencies` or `Operational`).
-        - label: I'm sure I'm using the latest version
+        - label: |
+            I have read the [documentation](https://alist.nn.ci).
+            我已经阅读了[文档](https://alist.nn.ci)。
+        - label: |
+            I'm sure there are no duplicate issues or discussions.
+            我确定没有重复的issue或讨论。
+        - label: |
+            I'm sure it's due to `AList` and not something else(such as `Dependencies` or `Operational`).
+            我确定是`AList`的问题,而不是其他原因(例如`依赖`或`操作`)。
+        - label: |
+            I'm sure this issue is not fixed in the latest version.
+            我确定这个问题在最新版本中没有被修复。

   - type: input
     id: version
     attributes:
-      label: Alist Version / Alist 版本
-      description: What version of our software are you running?
-      placeholder: v2.0.0
+      label: AList Version / AList 版本
+      description: |
+        What version of our software are you running? Do not use `latest` or `master` as an answer.
+        您使用的是哪个版本的软件?请不要使用`latest`或`master`作为答案。
+      placeholder: v3.xx.xx
     validations:
       required: true
   - type: input
     id: driver
     attributes:
       label: Driver used / 使用的存储驱动
-      description: What storage driver are you using?
+      description: |
+        What storage driver are you using?
+        您使用的是哪个存储驱动?
       placeholder: "for example: Onedrive"
     validations:
       required: true

@@ -47,6 +63,15 @@ body:
         请提供能复现此问题的链接,请知悉如果不提供它你的issue可能会被直接关闭。
     validations:
       required: true
+  - type: textarea
+    id: config
+    attributes:
+      label: Config / 配置
+      description: |
+        Please provide the configuration file of your `AList` application and take a screenshot of the relevant storage configuration. (hide privacy field)
+        请提供您的`AList`应用的配置文件,并截图相关存储配置。(隐藏隐私字段)
+    validations:
+      required: true
   - type: textarea
     id: logs
     attributes:
.github/workflows/auto_lang.yml (vendored, 4 changes)

@@ -53,8 +53,8 @@ jobs:
        run: |
          cd alist-web
          git add .
-          git config --local user.email "i@nn.ci"
-          git config --local user.name "Andy Hsu"
+          git config --local user.email "bot@nn.ci"
+          git config --local user.name "IlaBot"
          git commit -m "chore: auto update i18n file" -a 2>/dev/null || :
          cd ..

.github/workflows/build_docker.yml (vendored, 4 changes)

@@ -53,8 +53,8 @@ jobs:

      - name: Commit
        run: |
-          git config --local user.email "i@nn.ci"
-          git config --local user.name "Noah Hsu"
+          git config --local user.email "bot@nn.ci"
+          git config --local user.name "IlaBot"
          git commit --allow-empty -m "Trigger build for ${{ github.sha }}"

      - name: Push commit
.github/workflows/issue_question.yml (vendored, 2 changes)

@@ -10,7 +10,7 @@ jobs:
    if: github.event.label.name == 'question'
    steps:
      - name: Create comment
-        uses: actions-cool/issues-helper@v3.5.0
+        uses: actions-cool/issues-helper@v3.5.1
        with:
          actions: 'create-comment'
          token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/issue_rm_working.yml (vendored, new file, 17 changes)

@@ -0,0 +1,17 @@
+name: Remove working label when issue closed
+
+on:
+  issues:
+    types: [closed]
+
+jobs:
+  rm-working:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Remove working label
+        uses: actions-cool/issues-helper@v3
+        with:
+          actions: 'remove-labels'
+          token: ${{ secrets.GITHUB_TOKEN }}
+          issue-number: ${{ github.event.issue.number }}
+          labels: 'working'
.github/workflows/release.yml (vendored, 12 changes)

@@ -41,17 +41,11 @@ jobs:
        run: |
          bash build.sh release

-      - name: Release latest
-        uses: irongut/EditRelease@v1.2.0
-        with:
-          token: ${{ secrets.MY_TOKEN }}
-          id: ${{ github.event.release.id }}
-          prerelease: false
-
      - name: Upload assets
        uses: softprops/action-gh-release@v1
        with:
          files: build/compress/*
+          prerelease: false

  release_desktop:
    needs: release

@@ -68,8 +62,8 @@ jobs:

      - name: Add tag
        run: |
-          git config --local user.email "i@nn.ci"
-          git config --local user.name "Andy Hsu"
+          git config --local user.email "bot@nn.ci"
+          git config --local user.name "IlaBot"
          version=$(wget -qO- -t1 -T2 "https://api.github.com/repos/alist-org/alist/releases/latest" | grep "tag_name" | head -n 1 | awk -F ":" '{print $2}' | sed 's/\"//g;s/,//g;s/ //g')
          git tag -a $version -m "release $version"

.github/workflows/release_docker.yml (vendored, 4 changes)

@@ -56,8 +56,8 @@ jobs:

      - name: Add tag
        run: |
-          git config --local user.email "i@nn.ci"
-          git config --local user.name "Andy Hsu"
+          git config --local user.email "bot@nn.ci"
+          git config --local user.name "IlaBot"
          git tag -a ${{ github.ref_name }} -m "release ${{ github.ref_name }}"

      - name: Push tags
.github/workflows/release_linux_musl_arm.yml (vendored, new file, 34 changes)

@@ -0,0 +1,34 @@
+name: release_linux_musl_arm
+
+on:
+  release:
+    types: [ published ]
+
+jobs:
+  release_arm:
+    strategy:
+      matrix:
+        platform: [ ubuntu-latest ]
+        go-version: [ '1.20' ]
+    name: Release
+    runs-on: ${{ matrix.platform }}
+    steps:
+
+      - name: Setup Go
+        uses: actions/setup-go@v4
+        with:
+          go-version: ${{ matrix.go-version }}
+
+      - name: Checkout
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+
+      - name: Build
+        run: |
+          bash build.sh release linux_musl_arm
+
+      - name: Upload assets
+        uses: softprops/action-gh-release@v1
+        with:
+          files: build/compress/*
.gitignore (vendored, 2 changes)

@@ -29,3 +29,5 @@ output/
 /daemon/
 /public/dist/*
 /!public/dist/README.md
+
+.VSCodeCounter
@@ -7,7 +7,7 @@
 Prerequisites:

 - [git](https://git-scm.com)
-- [Go 1.19+](https://golang.org/doc/install)
+- [Go 1.20+](https://golang.org/doc/install)
 - [gcc](https://gcc.gnu.org/)
 - [nodejs](https://nodejs.org/)

@@ -91,6 +91,7 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
 - [x] Web upload(Can allow visitors to upload), delete, mkdir, rename, move and copy
 - [x] Offline download
 - [x] Copy files between two storage
+- [x] Multi-thread downloading acceleration for single-thread download/stream

 ## Document

@@ -90,6 +90,7 @@
 - [x] 网页上传(可以允许访客上传),删除,新建文件夹,重命名,移动,复制
 - [x] 离线下载
 - [x] 跨存储复制文件
+- [x] 单线程下载/串流的多线程下载加速

 ## 文档

@@ -91,6 +91,7 @@
 - [x] ウェブアップロード(訪問者にアップロードを許可できる), 削除, mkdir, 名前変更, 移動, コピー
 - [x] オフラインダウンロード
 - [x] 二つのストレージ間でファイルをコピー
+- [x] シングルスレッドのダウンロード/ストリーム向けのマルチスレッド ダウンロード アクセラレーション

 ## ドキュメント

build.sh (49 changes)

@@ -93,14 +93,15 @@ BuildRelease() {
   mkdir -p "build"
   muslflags="--extldflags '-static -fpic' $ldflags"
   BASE="https://musl.nn.ci/"
-  FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross arm-linux-musleabihf-cross mips-linux-musl-cross mips64-linux-musl-cross mips64el-linux-musl-cross mipsel-linux-musl-cross powerpc64le-linux-musl-cross s390x-linux-musl-cross)
+  FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross mips-linux-musl-cross mips64-linux-musl-cross mips64el-linux-musl-cross mipsel-linux-musl-cross powerpc64le-linux-musl-cross s390x-linux-musl-cross)
   for i in "${FILES[@]}"; do
     url="${BASE}${i}.tgz"
     curl -L -o "${i}.tgz" "${url}"
     sudo tar xf "${i}.tgz" --strip-components 1 -C /usr/local
+    rm -f "${i}.tgz"
   done
-  OS_ARCHES=(linux-musl-amd64 linux-musl-arm64 linux-musl-arm linux-musl-mips linux-musl-mips64 linux-musl-mips64le linux-musl-mipsle linux-musl-ppc64le linux-musl-s390x)
-  CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc arm-linux-musleabihf-gcc mips-linux-musl-gcc mips64-linux-musl-gcc mips64el-linux-musl-gcc mipsel-linux-musl-gcc powerpc64le-linux-musl-gcc s390x-linux-musl-gcc)
+  OS_ARCHES=(linux-musl-amd64 linux-musl-arm64 linux-musl-mips linux-musl-mips64 linux-musl-mips64le linux-musl-mipsle linux-musl-ppc64le linux-musl-s390x)
+  CGO_ARGS=(x86_64-linux-musl-gcc aarch64-linux-musl-gcc mips-linux-musl-gcc mips64-linux-musl-gcc mips64el-linux-musl-gcc mipsel-linux-musl-gcc powerpc64le-linux-musl-gcc s390x-linux-musl-gcc)
   for i in "${!OS_ARCHES[@]}"; do
     os_arch=${OS_ARCHES[$i]}
     cgo_cc=${CGO_ARGS[$i]}

@@ -120,6 +121,39 @@ BuildRelease() {
   mv alist-* build
 }

+BuildReleaseLinuxMuslArm() {
+  rm -rf .git/
+  mkdir -p "build"
+  muslflags="--extldflags '-static -fpic' $ldflags"
+  BASE="https://musl.nn.ci/"
+  # FILES=(arm-linux-musleabi-cross arm-linux-musleabihf-cross armeb-linux-musleabi-cross armeb-linux-musleabihf-cross armel-linux-musleabi-cross armel-linux-musleabihf-cross armv5l-linux-musleabi-cross armv5l-linux-musleabihf-cross armv6-linux-musleabi-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross armv7m-linux-musleabi-cross armv7r-linux-musleabihf-cross)
+  FILES=(arm-linux-musleabi-cross arm-linux-musleabihf-cross armel-linux-musleabi-cross armel-linux-musleabihf-cross armv5l-linux-musleabi-cross armv5l-linux-musleabihf-cross armv6-linux-musleabi-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross armv7m-linux-musleabi-cross armv7r-linux-musleabihf-cross)
+  for i in "${FILES[@]}"; do
+    url="${BASE}${i}.tgz"
+    curl -L -o "${i}.tgz" "${url}"
+    sudo tar xf "${i}.tgz" --strip-components 1 -C /usr/local
+    rm -f "${i}.tgz"
+  done
+  # OS_ARCHES=(linux-musleabi-arm linux-musleabihf-arm linux-musleabi-armeb linux-musleabihf-armeb linux-musleabi-armel linux-musleabihf-armel linux-musleabi-armv5l linux-musleabihf-armv5l linux-musleabi-armv6 linux-musleabihf-armv6 linux-musleabihf-armv7l linux-musleabi-armv7m linux-musleabihf-armv7r)
+  # CGO_ARGS=(arm-linux-musleabi-gcc arm-linux-musleabihf-gcc armeb-linux-musleabi-gcc armeb-linux-musleabihf-gcc armel-linux-musleabi-gcc armel-linux-musleabihf-gcc armv5l-linux-musleabi-gcc armv5l-linux-musleabihf-gcc armv6-linux-musleabi-gcc armv6-linux-musleabihf-gcc armv7l-linux-musleabihf-gcc armv7m-linux-musleabi-gcc armv7r-linux-musleabihf-gcc)
+  # GOARMS=('' '' '' '' '' '' '5' '5' '6' '6' '7' '7' '7')
+  OS_ARCHES=(linux-musleabi-arm linux-musleabihf-arm linux-musleabi-armel linux-musleabihf-armel linux-musleabi-armv5l linux-musleabihf-armv5l linux-musleabi-armv6 linux-musleabihf-armv6 linux-musleabihf-armv7l linux-musleabi-armv7m linux-musleabihf-armv7r)
+  CGO_ARGS=(arm-linux-musleabi-gcc arm-linux-musleabihf-gcc armel-linux-musleabi-gcc armel-linux-musleabihf-gcc armv5l-linux-musleabi-gcc armv5l-linux-musleabihf-gcc armv6-linux-musleabi-gcc armv6-linux-musleabihf-gcc armv7l-linux-musleabihf-gcc armv7m-linux-musleabi-gcc armv7r-linux-musleabihf-gcc)
+  GOARMS=('' '' '' '' '5' '5' '6' '6' '7' '7' '7')
+  for i in "${!OS_ARCHES[@]}"; do
+    os_arch=${OS_ARCHES[$i]}
+    cgo_cc=${CGO_ARGS[$i]}
+    arm=${GOARMS[$i]}
+    echo building for ${os_arch}
+    export GOOS=linux
+    export GOARCH=arm
+    export CC=${cgo_cc}
+    export CGO_ENABLED=1
+    export GOARM=${arm}
+    go build -o ./build/$appName-$os_arch -ldflags="$muslflags" -tags=jsoniter .
+  done
+}
+
 MakeRelease() {
   cd build
   mkdir compress

@@ -139,8 +173,8 @@ MakeRelease() {
     rm -f alist.exe
   done
   cd compress
-  find . -type f -print0 | xargs -0 md5sum >md5.txt
-  cat md5.txt
+  find . -type f -print0 | xargs -0 md5sum >"$1"
+  cat "$1"
   cd ../..
 }

@@ -155,9 +189,12 @@ elif [ "$1" = "release" ]; then
   FetchWebRelease
   if [ "$2" = "docker" ]; then
     BuildDocker
+  elif [ "$2" = "linux_musl_arm" ]; then
+    BuildReleaseLinuxMuslArm
+    MakeRelease "md5-linux-musl-arm.txt"
   else
     BuildRelease
-    MakeRelease
+    MakeRelease "md5.txt"
   fi
 else
   echo -e "Parameter error"
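The new `BuildReleaseLinuxMuslArm` function drives three parallel arrays (`OS_ARCHES`, `CGO_ARGS`, `GOARMS`) by shared index, so each output name, cross compiler, and `GOARM` value line up. A hypothetical Go sketch of that index-pairing with a shortened target list (names taken from the script; the struct and function are illustrative, not part of the repository):

```go
package main

import "fmt"

// armTarget pairs one output suffix with its musl cross compiler and GOARM
// value, mirroring how the shell script indexes its three parallel arrays.
type armTarget struct {
	OsArch string // suffix of the built binary name, e.g. alist-linux-musleabihf-armv7l
	CC     string // cross compiler exported as CC for cgo
	GOARM  string // empty string means "toolchain default"
}

func armTargets() []armTarget {
	osArches := []string{"linux-musleabi-arm", "linux-musleabihf-arm", "linux-musleabi-armv5l", "linux-musleabihf-armv6", "linux-musleabihf-armv7l"}
	cgoArgs := []string{"arm-linux-musleabi-gcc", "arm-linux-musleabihf-gcc", "armv5l-linux-musleabi-gcc", "armv6-linux-musleabihf-gcc", "armv7l-linux-musleabihf-gcc"}
	goArms := []string{"", "", "5", "6", "7"}
	targets := make([]armTarget, len(osArches))
	for i := range osArches {
		// index i ties the three arrays together, exactly like ${OS_ARCHES[$i]} etc.
		targets[i] = armTarget{OsArch: osArches[i], CC: cgoArgs[i], GOARM: goArms[i]}
	}
	return targets
}

func main() {
	for _, t := range armTargets() {
		// In the real script these become per-iteration GOOS/GOARCH/CC/GOARM exports.
		fmt.Printf("build alist-%s with CC=%s GOARM=%q\n", t.OsArch, t.CC, t.GOARM)
	}
}
```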
cmd/admin.go (69 changes)

@@ -4,30 +4,87 @@ Copyright © 2022 NAME HERE <EMAIL ADDRESS>
 package cmd

 import (
+	"github.com/alist-org/alist/v3/internal/conf"
 	"github.com/alist-org/alist/v3/internal/op"
+	"github.com/alist-org/alist/v3/internal/setting"
 	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/alist-org/alist/v3/pkg/utils/random"
 	"github.com/spf13/cobra"
 )

-// PasswordCmd represents the password command
-var PasswordCmd = &cobra.Command{
+// AdminCmd represents the password command
+var AdminCmd = &cobra.Command{
 	Use:     "admin",
 	Aliases: []string{"password"},
-	Short:   "Show admin user's info",
+	Short:   "Show admin user's info and some operations about admin user's password",
 	Run: func(cmd *cobra.Command, args []string) {
 		Init()
 		admin, err := op.GetAdmin()
 		if err != nil {
 			utils.Log.Errorf("failed get admin user: %+v", err)
 		} else {
-			utils.Log.Infof("admin user's info: \nusername: %s\npassword: %s", admin.Username, admin.Password)
+			utils.Log.Infof("Admin user's username: %s", admin.Username)
+			utils.Log.Infof("The password can only be output at the first startup, and then stored as a hash value, which cannot be reversed")
+			utils.Log.Infof("You can reset the password with a random string by running [alist admin random]")
+			utils.Log.Infof("You can also set a new password by running [alist admin set NEW_PASSWORD]")
 		}
 	},
 }

+var RandomPasswordCmd = &cobra.Command{
+	Use:   "random",
+	Short: "Reset admin user's password to a random string",
+	Run: func(cmd *cobra.Command, args []string) {
+		newPwd := random.String(8)
+		setAdminPassword(newPwd)
+	},
+}
+
+var SetPasswordCmd = &cobra.Command{
+	Use:   "set",
+	Short: "Set admin user's password",
+	Run: func(cmd *cobra.Command, args []string) {
+		if len(args) == 0 {
+			utils.Log.Errorf("Please enter the new password")
+			return
+		}
+		setAdminPassword(args[0])
+	},
+}
+
+var ShowTokenCmd = &cobra.Command{
+	Use:   "token",
+	Short: "Show admin token",
+	Run: func(cmd *cobra.Command, args []string) {
+		Init()
+		token := setting.GetStr(conf.Token)
+		utils.Log.Infof("Admin token: %s", token)
+	},
+}
+
+func setAdminPassword(pwd string) {
+	Init()
+	admin, err := op.GetAdmin()
+	if err != nil {
+		utils.Log.Errorf("failed get admin user: %+v", err)
+		return
+	}
+	admin.SetPassword(pwd)
+	if err := op.UpdateUser(admin); err != nil {
+		utils.Log.Errorf("failed update admin user: %+v", err)
+		return
+	}
+	utils.Log.Infof("admin user has been updated:")
+	utils.Log.Infof("username: %s", admin.Username)
+	utils.Log.Infof("password: %s", pwd)
+	DelAdminCacheOnline()
+}
+
 func init() {
-	RootCmd.AddCommand(PasswordCmd)
+	RootCmd.AddCommand(AdminCmd)
+	AdminCmd.AddCommand(RandomPasswordCmd)
+	AdminCmd.AddCommand(SetPasswordCmd)
+	AdminCmd.AddCommand(ShowTokenCmd)
 	// Here you will define your flags and configuration settings.

 	// Cobra supports Persistent Flags which will work for this command

@@ -24,6 +24,7 @@ var Cancel2FACmd = &cobra.Command{
 			utils.Log.Errorf("failed to cancel 2FA: %+v", err)
 		} else {
 			utils.Log.Info("2FA canceled")
+			DelAdminCacheOnline()
 		}
 	}
 },
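The rewritten cmd/admin.go turns a single `PasswordCmd` into an `AdminCmd` parent with `random`, `set`, and `token` subcommands wired up via `AddCommand`. A minimal stdlib sketch of that parent/subcommand dispatch, using hypothetical types rather than cobra itself:

```go
package main

import (
	"fmt"
	"os"
)

// command is a hypothetical stand-in for cobra.Command: a parent routes the
// first argument to a registered subcommand, otherwise runs its own handler.
type command struct {
	use  string
	run  func(args []string) string
	subs map[string]*command
}

func (c *command) dispatch(args []string) string {
	if len(args) > 0 {
		if sub, ok := c.subs[args[0]]; ok {
			return sub.dispatch(args[1:]) // descend, consuming one argument
		}
	}
	return c.run(args)
}

// newAdminCmd mirrors the layout of `alist admin {random,set NEW_PASSWORD}`.
func newAdminCmd() *command {
	admin := &command{
		use:  "admin",
		run:  func([]string) string { return "admin info" },
		subs: map[string]*command{},
	}
	admin.subs["random"] = &command{use: "random", run: func([]string) string { return "reset to random" }}
	admin.subs["set"] = &command{use: "set", run: func(args []string) string {
		if len(args) == 0 {
			return "Please enter the new password"
		}
		return "set to " + args[0]
	}}
	return admin
}

func main() {
	fmt.Println(newAdminCmd().dispatch(os.Args[1:]))
}
```

cobra adds help text, flags, and aliases on top, but the routing is this same tree walk.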
cmd/user.go (new file, 52 changes)

@@ -0,0 +1,52 @@
+package cmd
+
+import (
+	"crypto/tls"
+	"fmt"
+	"time"
+
+	"github.com/alist-org/alist/v3/internal/conf"
+	"github.com/alist-org/alist/v3/internal/op"
+	"github.com/alist-org/alist/v3/internal/setting"
+	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/go-resty/resty/v2"
+)
+
+func DelAdminCacheOnline() {
+	admin, err := op.GetAdmin()
+	if err != nil {
+		utils.Log.Errorf("[del_admin_cache] get admin error: %+v", err)
+		return
+	}
+	DelUserCacheOnline(admin.Username)
+}
+
+func DelUserCacheOnline(username string) {
+	client := resty.New().SetTimeout(1 * time.Second).SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
+	token := setting.GetStr(conf.Token)
+	port := conf.Conf.Scheme.HttpPort
+	u := fmt.Sprintf("http://localhost:%d/api/admin/user/del_cache", port)
+	if port == -1 {
+		if conf.Conf.Scheme.HttpsPort == -1 {
+			utils.Log.Warnf("[del_user_cache] no open port")
+			return
+		}
+		u = fmt.Sprintf("https://localhost:%d/api/admin/user/del_cache", conf.Conf.Scheme.HttpsPort)
+	}
+	res, err := client.R().SetHeader("Authorization", token).SetQueryParam("username", username).Post(u)
+	if err != nil {
+		utils.Log.Warnf("[del_user_cache_online] failed: %+v", err)
+		return
+	}
+	if res.StatusCode() != 200 {
+		utils.Log.Warnf("[del_user_cache_online] failed: %+v", res.String())
+		return
+	}
+	code := utils.Json.Get(res.Body(), "code").ToInt()
+	msg := utils.Json.Get(res.Body(), "message").ToString()
+	if code != 200 {
+		utils.Log.Errorf("[del_user_cache_online] error: %s", msg)
+		return
+	}
+	utils.Log.Debugf("[del_user_cache_online] del user [%s] cache success", username)
+}
@@ -83,7 +83,7 @@ func (d *Pan115) Remove(ctx context.Context, obj model.Obj) error {
 }

 func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -1,10 +1,11 @@
 package _115

 import (
+	"crypto/tls"
 	"fmt"

 	"github.com/SheltonZhu/115driver/pkg/driver"
-	"github.com/alist-org/alist/v3/drivers/base"
+	"github.com/alist-org/alist/v3/internal/conf"
 	"github.com/pkg/errors"
 )

@@ -14,9 +15,11 @@ func (d *Pan115) login() error {
 	var err error
 	opts := []driver.Option{
 		driver.UA(UserAgent),
+		func(c *driver.Pan115Client) {
+			c.Client.SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
+		},
 	}
 	d.client = driver.New(opts...)
-	d.client.SetHttpClient(base.HttpClient)
 	cr := &driver.Credential{}
 	if d.Addition.QRCodeToken != "" {
 		s := &driver.QRCodeSession{
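The 115 login change injects a TLS config by passing a plain function as a `driver.Option`, i.e. the functional-options pattern: `New` builds a client and lets each option mutate it in turn. A minimal sketch with hypothetical `Client`/`Option` names using only the standard library:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// Client is a hypothetical stand-in for driver.Pan115Client.
type Client struct {
	UserAgent string
	TLS       *tls.Config
}

// Option mutates a Client under construction, like driver.Option.
type Option func(*Client)

func WithUA(ua string) Option { return func(c *Client) { c.UserAgent = ua } }

// WithInsecureTLS mirrors the inline option added in the diff.
func WithInsecureTLS(skip bool) Option {
	return func(c *Client) { c.TLS = &tls.Config{InsecureSkipVerify: skip} }
}

func New(opts ...Option) *Client {
	c := &Client{UserAgent: "default"}
	for _, opt := range opts {
		opt(c) // each option is applied in order
	}
	return c
}

func main() {
	c := New(WithUA("Mozilla/5.0"), WithInsecureTLS(true))
	fmt.Println(c.UserAgent, c.TLS.InsecureSkipVerify)
}
```

The advantage over a config struct is that new knobs (like the TLS one here) can be added without touching existing call sites.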
@@ -184,7 +184,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	// const DEFAULT int64 = 10485760
 	h := md5.New()
 	// need to calculate md5 of the full content
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -1,14 +1,10 @@
 package _123

 import (
-	"crypto/md5"
 	"errors"
 	"fmt"
-	"math/rand"
 	"net/http"
-	"net/url"
 	"strconv"
-	"time"

 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/pkg/utils"

@@ -19,9 +15,10 @@ import (
 // do others that not defined in Driver interface

 const (
+	Api              = "https://www.123pan.com/api"
 	AApi             = "https://www.123pan.com/a/api"
 	BApi             = "https://www.123pan.com/b/api"
-	MainApi          = BApi
+	MainApi          = Api
 	SignIn           = MainApi + "/user/sign_in"
 	Logout           = MainApi + "/user/logout"
 	UserInfo         = MainApi + "/user/info"

@@ -37,7 +34,7 @@ const (
 	S3Auth           = MainApi + "/file/s3_upload_object/auth"
 	UploadCompleteV2 = MainApi + "/file/upload_complete/v2"
 	S3Complete       = MainApi + "/file/s3_complete_multipart_upload"
-	AuthKeySalt      = "8-8D$sL8gPjom7bk#cY"
+	//AuthKeySalt      = "8-8D$sL8gPjom7bk#cY"
 )

 func (d *Pan123) login() error {

@@ -59,9 +56,10 @@ func (d *Pan123) login() error {
 		SetHeaders(map[string]string{
 			"origin":      "https://www.123pan.com",
 			"referer":     "https://www.123pan.com/",
-			"platform":    "web",
-			"app-version": "3",
-			"user-agent":  base.UserAgent,
+			"user-agent":  "Dart/2.19(dart:io)",
+			"platform":    "android",
+			"app-version": "36",
+			//"user-agent":  base.UserAgent,
 		}).
 		SetBody(body).Post(SignIn)
 	if err != nil {

@@ -75,19 +73,19 @@ func (d *Pan123) login() error {
 	return err
 }

-func authKey(reqUrl string) (*string, error) {
-	reqURL, err := url.Parse(reqUrl)
-	if err != nil {
-		return nil, err
-	}
-
-	nowUnix := time.Now().Unix()
-	random := rand.Intn(0x989680)
-
-	p4 := fmt.Sprintf("%d|%d|%s|%s|%s|%s", nowUnix, random, reqURL.Path, "web", "3", AuthKeySalt)
-	authKey := fmt.Sprintf("%d-%d-%x", nowUnix, random, md5.Sum([]byte(p4)))
-	return &authKey, nil
-}
+//func authKey(reqUrl string) (*string, error) {
+//	reqURL, err := url.Parse(reqUrl)
+//	if err != nil {
+//		return nil, err
+//	}
+//
+//	nowUnix := time.Now().Unix()
+//	random := rand.Intn(0x989680)
+//
+//	p4 := fmt.Sprintf("%d|%d|%s|%s|%s|%s", nowUnix, random, reqURL.Path, "web", "3", AuthKeySalt)
+//	authKey := fmt.Sprintf("%d-%d-%x", nowUnix, random, md5.Sum([]byte(p4)))
+//	return &authKey, nil
+//}

 func (d *Pan123) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
 	req := base.RestyClient.R()

@@ -95,9 +93,10 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
 		"origin":        "https://www.123pan.com",
 		"referer":       "https://www.123pan.com/",
 		"authorization": "Bearer " + d.AccessToken,
-		"platform":      "web",
-		"app-version":   "3",
-		"user-agent":    base.UserAgent,
+		"user-agent":    "Dart/2.19(dart:io)",
+		"platform":      "android",
+		"app-version":   "36",
+		//"user-agent":    base.UserAgent,
 	})
 	if callback != nil {
 		callback(req)

@@ -105,11 +104,11 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
 	if resp != nil {
 		req.SetResult(resp)
 	}
-	authKey, err := authKey(url)
-	if err != nil {
-		return nil, err
-	}
-	req.SetQueryParam("auth-key", *authKey)
+	//authKey, err := authKey(url)
+	//if err != nil {
+	//	return nil, err
+	//}
+	//req.SetQueryParam("auth-key", *authKey)
 	res, err := req.Execute(method, url)
 	if err != nil {
 		return nil, err
 	}
|
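The commented-out authKey helper signed web requests as `<unix>-<random>-<md5 of "unix|random|path|web|3|salt">`. A standalone sketch of that scheme, using the salt and field order from the commented code above (the function name `signPath` is ours):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"math/rand"
	"net/url"
	"time"
)

// authKeySalt is the (now unused) salt from the commented-out driver code.
const authKeySalt = "8-8D$sL8gPjom7bk#cY"

// signPath reproduces the old auth-key format:
// "<unix>-<random>-<md5 of "unix|random|path|web|3|salt">".
func signPath(rawURL string, nowUnix int64, random int) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	p4 := fmt.Sprintf("%d|%d|%s|%s|%s|%s", nowUnix, random, u.Path, "web", "3", authKeySalt)
	return fmt.Sprintf("%d-%d-%x", nowUnix, random, md5.Sum([]byte(p4))), nil
}

func main() {
	key, err := signPath("https://www.123pan.com/api/share/get", time.Now().Unix(), rand.Intn(0x989680))
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
}
```

The change above drops this signature entirely in favor of the Android client headers, so the helper survives only as a comment.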
drivers/123_share/driver.go (new file, 149 lines)
@@ -0,0 +1,149 @@
package _123Share

import (
	"context"
	"encoding/base64"
	"fmt"
	"net/http"
	"net/url"

	"github.com/alist-org/alist/v3/drivers/base"
	"github.com/alist-org/alist/v3/internal/driver"
	"github.com/alist-org/alist/v3/internal/errs"
	"github.com/alist-org/alist/v3/internal/model"
	"github.com/alist-org/alist/v3/pkg/utils"
	"github.com/go-resty/resty/v2"
	log "github.com/sirupsen/logrus"
)

type Pan123Share struct {
	model.Storage
	Addition
}

func (d *Pan123Share) Config() driver.Config {
	return config
}

func (d *Pan123Share) GetAddition() driver.Additional {
	return &d.Addition
}

func (d *Pan123Share) Init(ctx context.Context) error {
	// TODO login / refresh token
	//op.MustSaveDriverStorage(d)
	return nil
}

func (d *Pan123Share) Drop(ctx context.Context) error {
	return nil
}

func (d *Pan123Share) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	// TODO return the files list, required
	files, err := d.getFiles(dir.GetID())
	if err != nil {
		return nil, err
	}
	return utils.SliceConvert(files, func(src File) (model.Obj, error) {
		return src, nil
	})
}

func (d *Pan123Share) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	// TODO return link of file, required
	if f, ok := file.(File); ok {
		//var resp DownResp
		var headers map[string]string
		if !utils.IsLocalIPAddr(args.IP) {
			headers = map[string]string{
				//"X-Real-IP":       "1.1.1.1",
				"X-Forwarded-For": args.IP,
			}
		}
		data := base.Json{
			"shareKey":  d.ShareKey,
			"SharePwd":  d.SharePwd,
			"etag":      f.Etag,
			"fileId":    f.FileId,
			"s3keyFlag": f.S3KeyFlag,
			"size":      f.Size,
		}
		resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
			req.SetBody(data).SetHeaders(headers)
		}, nil)
		if err != nil {
			return nil, err
		}
		downloadUrl := utils.Json.Get(resp, "data", "DownloadURL").ToString()
		u, err := url.Parse(downloadUrl)
		if err != nil {
			return nil, err
		}
		nu := u.Query().Get("params")
		if nu != "" {
			du, _ := base64.StdEncoding.DecodeString(nu)
			u, err = url.Parse(string(du))
			if err != nil {
				return nil, err
			}
		}
		u_ := u.String()
		log.Debug("download url: ", u_)
		res, err := base.NoRedirectClient.R().SetHeader("Referer", "https://www.123pan.com/").Get(u_)
		if err != nil {
			return nil, err
		}
		log.Debug(res.String())
		link := model.Link{
			URL: u_,
		}
		log.Debugln("res code: ", res.StatusCode())
		if res.StatusCode() == 302 {
			link.URL = res.Header().Get("location")
		} else if res.StatusCode() < 300 {
			link.URL = utils.Json.Get(res.Body(), "data", "redirect_url").ToString()
		}
		link.Header = http.Header{
			"Referer": []string{"https://www.123pan.com/"},
		}
		return &link, nil
	}
	return nil, fmt.Errorf("can't convert obj")
}

func (d *Pan123Share) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	// TODO create folder, optional
	return errs.NotSupport
}

func (d *Pan123Share) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
	// TODO move obj, optional
	return errs.NotSupport
}

func (d *Pan123Share) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	// TODO rename obj, optional
	return errs.NotSupport
}

func (d *Pan123Share) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
	// TODO copy obj, optional
	return errs.NotSupport
}

func (d *Pan123Share) Remove(ctx context.Context, obj model.Obj) error {
	// TODO remove obj, optional
	return errs.NotSupport
}

func (d *Pan123Share) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
	// TODO upload file, optional
	return errs.NotSupport
}

//func (d *Pan123Share) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
//	return nil, errs.NotSupport
//}

var _ driver.Driver = (*Pan123Share)(nil)
drivers/123_share/meta.go (new file, 34 lines)
@@ -0,0 +1,34 @@
package _123Share

import (
	"github.com/alist-org/alist/v3/internal/driver"
	"github.com/alist-org/alist/v3/internal/op"
)

type Addition struct {
	ShareKey string `json:"sharekey" required:"true"`
	SharePwd string `json:"sharepassword" required:"true"`
	driver.RootID
	OrderBy        string `json:"order_by" type:"select" options:"file_name,size,update_at" default:"file_name"`
	OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
}

var config = driver.Config{
	Name:              "123PanShare",
	LocalSort:         true,
	OnlyLocal:         false,
	OnlyProxy:         false,
	NoCache:           false,
	NoUpload:          true,
	NeedMs:            false,
	DefaultRoot:       "0",
	CheckStatus:       false,
	Alert:             "",
	NoOverwriteUpload: false,
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Pan123Share{}
	})
}
drivers/123_share/types.go (new file, 91 lines)
@@ -0,0 +1,91 @@
package _123Share

import (
	"net/url"
	"path"
	"strconv"
	"strings"
	"time"

	"github.com/alist-org/alist/v3/internal/model"
)

type File struct {
	FileName    string    `json:"FileName"`
	Size        int64     `json:"Size"`
	UpdateAt    time.Time `json:"UpdateAt"`
	FileId      int64     `json:"FileId"`
	Type        int       `json:"Type"`
	Etag        string    `json:"Etag"`
	S3KeyFlag   string    `json:"S3KeyFlag"`
	DownloadUrl string    `json:"DownloadUrl"`
}

func (f File) GetPath() string {
	return ""
}

func (f File) GetSize() int64 {
	return f.Size
}

func (f File) GetName() string {
	return f.FileName
}

func (f File) ModTime() time.Time {
	return f.UpdateAt
}

func (f File) IsDir() bool {
	return f.Type == 1
}

func (f File) GetID() string {
	return strconv.FormatInt(f.FileId, 10)
}

func (f File) Thumb() string {
	if f.DownloadUrl == "" {
		return ""
	}
	du, err := url.Parse(f.DownloadUrl)
	if err != nil {
		return ""
	}
	du.Path = strings.TrimSuffix(du.Path, "_24_24") + "_70_70"
	query := du.Query()
	query.Set("w", "70")
	query.Set("h", "70")
	if !query.Has("type") {
		query.Set("type", strings.TrimPrefix(path.Base(f.FileName), "."))
	}
	if !query.Has("trade_key") {
		query.Set("trade_key", "123pan-thumbnail")
	}
	du.RawQuery = query.Encode()
	return du.String()
}

var _ model.Obj = (*File)(nil)
var _ model.Thumb = (*File)(nil)

//func (f File) Thumb() string {
//
//}
//var _ model.Thumb = (*File)(nil)

type Files struct {
	//BaseResp
	Data struct {
		InfoList []File `json:"InfoList"`
		Next     string `json:"Next"`
	} `json:"data"`
}

//type DownResp struct {
//	//BaseResp
//	Data struct {
//		DownloadUrl string `json:"DownloadUrl"`
//	} `json:"data"`
//}
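File.Thumb above rewrites the 24x24 thumbnail URL into a 70x70 one by patching the path suffix and filling in missing query parameters. The same transformation as a standalone function (the name `thumbURL` is ours; it mirrors the original, including its use of `path.Base` for the `type` parameter):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// thumbURL swaps the "_24_24" size suffix for "_70_70" and ensures the
// w/h/type/trade_key query parameters are present, as File.Thumb does.
func thumbURL(downloadUrl, fileName string) string {
	du, err := url.Parse(downloadUrl)
	if err != nil {
		return ""
	}
	du.Path = strings.TrimSuffix(du.Path, "_24_24") + "_70_70"
	q := du.Query()
	q.Set("w", "70")
	q.Set("h", "70")
	if !q.Has("type") {
		q.Set("type", strings.TrimPrefix(path.Base(fileName), "."))
	}
	if !q.Has("trade_key") {
		q.Set("trade_key", "123pan-thumbnail")
	}
	du.RawQuery = q.Encode()
	return du.String()
}

func main() {
	fmt.Println(thumbURL("https://img.example.com/abc_24_24?type=jpg", "a.jpg"))
}
```

Note that `url.Values.Encode` sorts query keys alphabetically, so the rewritten URL is deterministic.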
drivers/123_share/util.go (new file, 81 lines)
@@ -0,0 +1,81 @@
package _123Share

import (
	"errors"
	"net/http"
	"strconv"

	"github.com/alist-org/alist/v3/drivers/base"
	"github.com/alist-org/alist/v3/pkg/utils"
	"github.com/go-resty/resty/v2"
	jsoniter "github.com/json-iterator/go"
)

const (
	Api          = "https://www.123pan.com/api"
	AApi         = "https://www.123pan.com/a/api"
	BApi         = "https://www.123pan.com/b/api"
	MainApi      = Api
	FileList     = MainApi + "/share/get"
	DownloadInfo = MainApi + "/share/download/info"
	//AuthKeySalt = "8-8D$sL8gPjom7bk#cY"
)

func (d *Pan123Share) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
	req := base.RestyClient.R()
	req.SetHeaders(map[string]string{
		"origin":      "https://www.123pan.com",
		"referer":     "https://www.123pan.com/",
		"user-agent":  "Dart/2.19(dart:io)",
		"platform":    "android",
		"app-version": "36",
	})
	if callback != nil {
		callback(req)
	}
	if resp != nil {
		req.SetResult(resp)
	}
	res, err := req.Execute(method, url)
	if err != nil {
		return nil, err
	}
	body := res.Body()
	code := utils.Json.Get(body, "code").ToInt()
	if code != 0 {
		return nil, errors.New(jsoniter.Get(body, "message").ToString())
	}
	return body, nil
}

func (d *Pan123Share) getFiles(parentId string) ([]File, error) {
	page := 1
	res := make([]File, 0)
	for {
		var resp Files
		query := map[string]string{
			"limit":          "100",
			"next":           "0",
			"orderBy":        d.OrderBy,
			"orderDirection": d.OrderDirection,
			"parentFileId":   parentId,
			"Page":           strconv.Itoa(page),
			"shareKey":       d.ShareKey,
			"SharePwd":       d.SharePwd,
		}
		_, err := d.request(FileList, http.MethodGet, func(req *resty.Request) {
			req.SetQueryParams(query)
		}, &resp)
		if err != nil {
			return nil, err
		}
		page++
		res = append(res, resp.Data.InfoList...)
		if len(resp.Data.InfoList) == 0 || resp.Data.Next == "-1" {
			break
		}
	}
	return res, nil
}

// do others that not defined in Driver interface
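getFiles above is a standard cursor-style pagination loop: request page after page until the server returns an empty list or a `Next` cursor of "-1". The same pattern with the HTTP call abstracted behind a fetch function (all names here are ours):

```go
package main

import "fmt"

type page struct {
	items []string
	next  string
}

// fetchAll mirrors the getFiles loop: accumulate items page by page until
// an empty page or a terminal "-1" cursor is returned.
func fetchAll(fetch func(pageNo int) page) []string {
	n := 1
	var out []string
	for {
		p := fetch(n)
		n++
		out = append(out, p.items...)
		if len(p.items) == 0 || p.next == "-1" {
			break
		}
	}
	return out
}

func main() {
	data := map[int]page{
		1: {items: []string{"a", "b"}, next: "2"},
		2: {items: []string{"c"}, next: "-1"},
	}
	all := fetchAll(func(n int) page { return data[n] })
	fmt.Println(len(all), all) // 3 [a b c]
}
```

Checking both termination conditions matters: without the empty-page check a server that keeps echoing a non-terminal cursor would loop forever.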
@@ -300,6 +300,9 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 	var partSize = getPartSize(stream.GetSize())
 	part := (stream.GetSize() + partSize - 1) / partSize
+	if part == 0 {
+		part = 1
+	}
 	for i := int64(0); i < part; i++ {
 		if utils.IsCanceled(ctx) {
 			return ctx.Err()
@@ -331,13 +334,11 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 		if err != nil {
 			return err
 		}
+		_ = res.Body.Close()
 		log.Debugf("%+v", res)
 		if res.StatusCode != http.StatusOK {
 			return fmt.Errorf("unexpected status code: %d", res.StatusCode)
 		}
-		res.Body.Close()
 	}
 	return nil
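The first hunk fixes a zero-size edge case: ceiling division gives zero parts for an empty file, so the upload loop never runs. The fix forces at least one part. As a standalone sketch (the function name `partCount` is ours):

```go
package main

import "fmt"

// partCount is ceiling division of size by partSize, forcing a minimum of
// one part so zero-byte files still produce an upload request.
func partCount(size, partSize int64) int64 {
	part := (size + partSize - 1) / partSize
	if part == 0 {
		part = 1
	}
	return part
}

func main() {
	fmt.Println(partCount(0, 1024), partCount(1, 1024), partCount(2048, 1024), partCount(2049, 1024))
	// 1 1 2 3
}
```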
@@ -4,9 +4,11 @@ import (
 	"context"
 	"net/http"
 	"strings"
+	"time"

 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
+	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
@@ -135,13 +137,14 @@ func (y *Cloud189PC) Link(ctx context.Context, file model.Obj, args model.LinkAr
 	return like, nil
 }

-func (y *Cloud189PC) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
+func (y *Cloud189PC) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
 	fullUrl := API_URL
 	if y.isFamily() {
 		fullUrl += "/family/file"
 	}
 	fullUrl += "/createFolder.action"
+
+	var newFolder Cloud189Folder
 	_, err := y.post(fullUrl, func(req *resty.Request) {
 		req.SetContext(ctx)
 		req.SetQueryParams(map[string]string{
@@ -158,11 +161,15 @@ func (y *Cloud189PC) MakeDir(ctx context.Context, parentDir model.Obj, dirName s
 			"parentFolderId": parentDir.GetID(),
 		})
 	}
-	}, nil)
-	return err
+	}, &newFolder)
+	if err != nil {
+		return nil, err
+	}
+	return &newFolder, nil
 }

-func (y *Cloud189PC) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
+func (y *Cloud189PC) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
+	var resp CreateBatchTaskResp
 	_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {
 		req.SetContext(ctx)
 		req.SetFormData(map[string]string{
@@ -182,11 +189,17 @@ func (y *Cloud189PC) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
 			"familyId": y.FamilyID,
 		})
 	}
-	}, nil)
-	return err
+	}, &resp)
+	if err != nil {
+		return nil, err
+	}
+	if err = y.WaitBatchTask("MOVE", resp.TaskID, time.Millisecond*400); err != nil {
+		return nil, err
+	}
+	return srcObj, nil
 }

-func (y *Cloud189PC) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
+func (y *Cloud189PC) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
 	queryParam := make(map[string]string)
 	fullUrl := API_URL
 	method := http.MethodPost
@@ -195,23 +208,34 @@ func (y *Cloud189PC) Rename(ctx context.Context, srcObj model.Obj, newName strin
 		method = http.MethodGet
 		queryParam["familyId"] = y.FamilyID
 	}
-	if srcObj.IsDir() {
-		fullUrl += "/renameFolder.action"
-		queryParam["folderId"] = srcObj.GetID()
-		queryParam["destFolderName"] = newName
-	} else {
-		fullUrl += "/renameFile.action"
-		queryParam["fileId"] = srcObj.GetID()
-		queryParam["destFileName"] = newName
-	}
+
+	var newObj model.Obj
+	switch f := srcObj.(type) {
+	case *Cloud189File:
+		fullUrl += "/renameFile.action"
+		queryParam["fileId"] = srcObj.GetID()
+		queryParam["destFileName"] = newName
+		newObj = &Cloud189File{Icon: f.Icon} // reuse the preview icon
+	case *Cloud189Folder:
+		fullUrl += "/renameFolder.action"
+		queryParam["folderId"] = srcObj.GetID()
+		queryParam["destFolderName"] = newName
+		newObj = &Cloud189Folder{}
+	default:
+		return nil, errs.NotSupport
+	}
+
 	_, err := y.request(fullUrl, method, func(req *resty.Request) {
-		req.SetContext(ctx)
-		req.SetQueryParams(queryParam)
-	}, nil, nil)
-	return err
+		req.SetContext(ctx).SetQueryParams(queryParam)
+	}, nil, newObj)
+	if err != nil {
+		return nil, err
+	}
+	return newObj, nil
 }

 func (y *Cloud189PC) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
+	var resp CreateBatchTaskResp
 	_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {
 		req.SetContext(ctx)
 		req.SetFormData(map[string]string{
@@ -232,11 +256,15 @@ func (y *Cloud189PC) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
 			"familyId": y.FamilyID,
 		})
 	}
-	}, nil)
-	return err
+	}, &resp)
+	if err != nil {
+		return err
+	}
+	return y.WaitBatchTask("COPY", resp.TaskID, time.Second)
 }

 func (y *Cloud189PC) Remove(ctx context.Context, obj model.Obj) error {
+	var resp CreateBatchTaskResp
 	_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {
 		req.SetContext(ctx)
 		req.SetFormData(map[string]string{
@@ -256,19 +284,26 @@ func (y *Cloud189PC) Remove(ctx context.Context, obj model.Obj) error {
 			"familyId": y.FamilyID,
 		})
 	}
-	}, nil)
-	return err
+	}, &resp)
+	if err != nil {
+		return err
+	}
+	// the batch-task queue is rate limited; deleting too quickly fails
+	return y.WaitBatchTask("DELETE", resp.TaskID, time.Millisecond*200)
 }

-func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
+func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
 	switch y.UploadMethod {
-	case "stream":
-		return y.CommonUpload(ctx, dstDir, stream, up)
 	case "old":
 		return y.OldUpload(ctx, dstDir, stream, up)
 	case "rapid":
 		return y.FastUpload(ctx, dstDir, stream, up)
+	case "stream":
+		if stream.GetSize() == 0 {
+			return y.FastUpload(ctx, dstDir, stream, up)
+		}
+		fallthrough
 	default:
-		return y.CommonUpload(ctx, dstDir, stream, up)
+		return y.StreamUpload(ctx, dstDir, stream, up)
 	}
 }
@@ -10,6 +10,7 @@ import (
 	"crypto/x509"
 	"encoding/hex"
 	"encoding/pem"
+	"encoding/xml"
 	"fmt"
 	"math"
 	"net/http"
@@ -83,6 +84,55 @@ func MustParseTime(str string) *time.Time {
 	return &lastOpTime
 }

+type Time time.Time
+
+func (t *Time) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
+func (t *Time) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
+	b, err := e.Token()
+	if err != nil {
+		return err
+	}
+	if b, ok := b.(xml.CharData); ok {
+		if err = t.Unmarshal(b); err != nil {
+			return err
+		}
+	}
+	return e.Skip()
+}
+func (t *Time) Unmarshal(b []byte) error {
+	bs := strings.Trim(string(b), "\"")
+	var v time.Time
+	var err error
+	for _, f := range []string{"2006-01-02 15:04:05 -07", "Jan 2, 2006 15:04:05 PM -07"} {
+		v, err = time.ParseInLocation(f, bs+" +08", time.Local)
+		if err == nil {
+			break
+		}
+	}
+	*t = Time(v)
+	return err
+}
+
+type String string
+
+func (t *String) UnmarshalJSON(b []byte) error { return t.Unmarshal(b) }
+func (t *String) UnmarshalXML(e *xml.Decoder, ee xml.StartElement) error {
+	b, err := e.Token()
+	if err != nil {
+		return err
+	}
+	if b, ok := b.(xml.CharData); ok {
+		if err = t.Unmarshal(b); err != nil {
+			return err
+		}
+	}
+	return e.Skip()
+}
+func (s *String) Unmarshal(b []byte) error {
+	*s = String(bytes.Trim(b, "\""))
+	return nil
+}
+
 func toFamilyOrderBy(o string) string {
 	switch o {
 	case "filename":
@@ -122,10 +172,6 @@ func MustString(str string, err error) string {
 	return str
 }

-func MustToBytes(b []byte, err error) []byte {
-	return b
-}
-
 func BoolToNumber(b bool) int {
 	if b {
 		return 1
@@ -151,8 +151,13 @@ type FamilyInfoResp struct {
 /* file section */
 // file
 type Cloud189File struct {
-	CreateDate string `json:"createDate"`
-	FileCata   int64  `json:"fileCata"`
+	ID   String `json:"id"`
+	Name string `json:"name"`
+	Size int64  `json:"size"`
+	Md5  string `json:"md5"`
+
+	LastOpTime Time `json:"lastOpTime"`
+	CreateDate Time `json:"createDate"`
 	Icon struct {
 		//iconOption 5
 		SmallUrl string `json:"smallUrl"`
@@ -162,62 +167,44 @@ type Cloud189File struct {
 		Max600    string `json:"max600"`
 		MediumURL string `json:"mediumUrl"`
 	} `json:"icon"`
-	ID          int64  `json:"id"`
-	LastOpTime  string `json:"lastOpTime"`
-	Md5         string `json:"md5"`
-	MediaType   int    `json:"mediaType"`
-	Name        string `json:"name"`
-	Orientation int64  `json:"orientation"`
-	Rev         string `json:"rev"`
-	Size        int64  `json:"size"`
-	StarLabel   int64  `json:"starLabel"`
-
-	parseTime *time.Time
+	// Orientation int64  `json:"orientation"`
+	// FileCata    int64  `json:"fileCata"`
+	// MediaType   int    `json:"mediaType"`
+	// Rev         string `json:"rev"`
+	// StarLabel   int64  `json:"starLabel"`
 }

 func (c *Cloud189File) GetSize() int64  { return c.Size }
 func (c *Cloud189File) GetName() string { return c.Name }
-func (c *Cloud189File) ModTime() time.Time {
-	if c.parseTime == nil {
-		c.parseTime = MustParseTime(c.LastOpTime)
-	}
-	return *c.parseTime
-}
-func (c *Cloud189File) IsDir() bool     { return false }
-func (c *Cloud189File) GetID() string   { return fmt.Sprint(c.ID) }
-func (c *Cloud189File) GetPath() string { return "" }
-func (c *Cloud189File) Thumb() string   { return c.Icon.SmallUrl }
+func (c *Cloud189File) ModTime() time.Time { return time.Time(c.LastOpTime) }
+func (c *Cloud189File) IsDir() bool        { return false }
+func (c *Cloud189File) GetID() string      { return string(c.ID) }
+func (c *Cloud189File) GetPath() string    { return "" }
+func (c *Cloud189File) Thumb() string      { return c.Icon.SmallUrl }

 // folder
 type Cloud189Folder struct {
-	ID       int64  `json:"id"`
+	ID       String `json:"id"`
 	ParentID int64  `json:"parentId"`
 	Name     string `json:"name"`
-	FileCata  int64  `json:"fileCata"`
-	FileCount int64  `json:"fileCount"`
-	LastOpTime string `json:"lastOpTime"`
-	CreateDate string `json:"createDate"`
-	FileListSize int64  `json:"fileListSize"`
-	Rev          string `json:"rev"`
-	StarLabel    int64  `json:"starLabel"`
-
-	parseTime *time.Time
+	LastOpTime Time `json:"lastOpTime"`
+	CreateDate Time `json:"createDate"`
+
+	// FileListSize int64  `json:"fileListSize"`
+	// FileCount    int64  `json:"fileCount"`
+	// FileCata     int64  `json:"fileCata"`
+	// Rev          string `json:"rev"`
+	// StarLabel    int64  `json:"starLabel"`
 }

 func (c *Cloud189Folder) GetSize() int64  { return 0 }
 func (c *Cloud189Folder) GetName() string { return c.Name }
-func (c *Cloud189Folder) ModTime() time.Time {
-	if c.parseTime == nil {
-		c.parseTime = MustParseTime(c.LastOpTime)
-	}
-	return *c.parseTime
-}
-func (c *Cloud189Folder) IsDir() bool     { return true }
-func (c *Cloud189Folder) GetID() string   { return fmt.Sprint(c.ID) }
-func (c *Cloud189Folder) GetPath() string { return "" }
+func (c *Cloud189Folder) ModTime() time.Time { return time.Time(c.LastOpTime) }
+func (c *Cloud189Folder) IsDir() bool        { return true }
+func (c *Cloud189Folder) GetID() string      { return string(c.ID) }
+func (c *Cloud189Folder) GetPath() string    { return "" }

 type Cloud189FilesResp struct {
 	//ResCode int `json:"res_code"`
@@ -284,15 +271,60 @@ func (r *GetUploadFileStatusResp) GetSize() int64 {
 	return r.DataSize + r.Size
 }

-type CommitUploadFileResp struct {
+type CommitMultiUploadFileResp struct {
+	File struct {
+		UserFileID String `json:"userFileId"`
+		FileName   string `json:"fileName"`
+		FileSize   int64  `json:"fileSize"`
+		FileMd5    string `json:"fileMd5"`
+		CreateDate Time   `json:"createDate"`
+	} `json:"file"`
+}
+
+func (f *CommitMultiUploadFileResp) toFile() *Cloud189File {
+	return &Cloud189File{
+		ID:         f.File.UserFileID,
+		Name:       f.File.FileName,
+		Size:       f.File.FileSize,
+		Md5:        f.File.FileMd5,
+		LastOpTime: f.File.CreateDate,
+		CreateDate: f.File.CreateDate,
+	}
+}
+
+type OldCommitUploadFileResp struct {
 	XMLName    xml.Name `xml:"file"`
-	Id         string   `xml:"id"`
+	ID         String   `xml:"id"`
 	Name       string   `xml:"name"`
-	Size       string   `xml:"size"`
+	Size       int64    `xml:"size"`
 	Md5        string   `xml:"md5"`
-	CreateDate string   `xml:"createDate"`
-	Rev        string   `xml:"rev"`
-	UserId     string   `xml:"userId"`
+	CreateDate Time     `xml:"createDate"`
+}
+
+func (f *OldCommitUploadFileResp) toFile() *Cloud189File {
+	return &Cloud189File{
+		ID:         f.ID,
+		Name:       f.Name,
+		Size:       f.Size,
+		Md5:        f.Md5,
+		CreateDate: f.CreateDate,
+		LastOpTime: f.CreateDate,
+	}
+}
+
+type CreateBatchTaskResp struct {
+	TaskID string `json:"taskId"`
+}
+
+type BatchTaskStateResp struct {
+	FailedCount         int     `json:"failedCount"`
+	Process             int     `json:"process"`
+	SkipCount           int     `json:"skipCount"`
+	SubTaskCount        int     `json:"subTaskCount"`
+	SuccessedCount      int     `json:"successedCount"`
+	SuccessedFileIDList []int64 `json:"successedFileIdList"`
+	TaskID              string  `json:"taskId"`
+	TaskStatus          int     `json:"taskStatus"` // 1 init, 2 conflict, 3 running, 4 done
+}

 /* query encryption parameters */
@ -268,7 +268,7 @@ func (y *Cloud189PC) login() (err error) {
|
|||||||
"validateCode": y.VCode,
|
"validateCode": y.VCode,
|
||||||
"captchaToken": param.CaptchaToken,
|
"captchaToken": param.CaptchaToken,
|
||||||
"returnUrl": RETURN_URL,
|
"returnUrl": RETURN_URL,
|
||||||
"mailSuffix": "@189.cn",
|
// "mailSuffix": "@189.cn",
|
||||||
"dynamicCheck": "FALSE",
|
"dynamicCheck": "FALSE",
|
||||||
"clientType": CLIENT_TYPE,
|
"clientType": CLIENT_TYPE,
|
||||||
"cb_SaveName": "1",
|
"cb_SaveName": "1",
|
||||||
@@ -434,7 +434,8 @@ func (y *Cloud189PC) refreshSession() (err error) {
 }
 
 // common upload
-func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (err error) {
+// files of size 0 cannot be uploaded
+func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
 	var DEFAULT = partSize(file.GetSize())
 	var count = int(math.Ceil(float64(file.GetSize()) / float64(DEFAULT)))
 
@@ -457,11 +458,11 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 
 	// initialize the upload
 	var initMultiUpload InitMultiUploadResp
-	_, err = y.request(fullUrl+"/initMultiUpload", http.MethodGet, func(req *resty.Request) {
+	_, err := y.request(fullUrl+"/initMultiUpload", http.MethodGet, func(req *resty.Request) {
 		req.SetContext(ctx)
 	}, params, &initMultiUpload)
 	if err != nil {
-		return err
+		return nil, err
 	}
 
 	fileMd5 := md5.New()
@@ -470,7 +471,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 	byteData := bytes.NewBuffer(make([]byte, DEFAULT))
 	for i := 1; i <= count; i++ {
 		if utils.IsCanceled(ctx) {
-			return ctx.Err()
+			return nil, ctx.Err()
 		}
 
 		// read a block
@@ -478,7 +479,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 		silceMd5.Reset()
 		_, err := io.CopyN(io.MultiWriter(fileMd5, silceMd5, byteData), file, DEFAULT)
 		if err != io.EOF && err != io.ErrUnexpectedEOF && err != nil {
-			return err
+			return nil, err
 		}
 
 		// compute the block md5 and hex/base64 encode it
@@ -496,7 +497,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 			"uploadFileId": initMultiUpload.Data.UploadFileID,
 		}, &uploadUrl)
 		if err != nil {
-			return err
+			return nil, err
 		}
 
 		// start uploading
@@ -511,7 +512,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 			retry.Delay(time.Second),
 			retry.MaxDelay(5*time.Second))
 		if err != nil {
-			return err
+			return nil, err
 		}
 		up(int(i * 100 / count))
 	}
@@ -523,6 +524,7 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 	}
 
 	// commit the upload
+	var resp CommitMultiUploadFileResp
 	_, err = y.request(fullUrl+"/commitMultiUploadFile", http.MethodGet,
 		func(req *resty.Request) {
 			req.SetContext(ctx)
@@ -533,16 +535,19 @@ func (y *Cloud189PC) CommonUpload(ctx context.Context, dstDir model.Obj, file mo
 			"lazyCheck": "1",
 			"isLog":     "0",
 			"opertype":  "3",
-		}, nil)
-	return err
+		}, &resp)
+	if err != nil {
+		return nil, err
+	}
+	return resp.toFile(), nil
 }
 
 // rapid upload
-func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (err error) {
+func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
 	// the full file md5 is required, so io.Seek must be supported
-	tempFile, err := utils.CreateTempFile(file.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(file.GetReadCloser(), file.GetSize())
 	if err != nil {
-		return err
+		return nil, err
 	}
 	defer func() {
 		_ = tempFile.Close()
@@ -559,19 +564,19 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 	silceMd5Base64s := make([]string, 0, count)
 	for i := 1; i <= count; i++ {
 		if utils.IsCanceled(ctx) {
-			return ctx.Err()
+			return nil, ctx.Err()
 		}
 
 		silceMd5.Reset()
 		if _, err := io.CopyN(io.MultiWriter(fileMd5, silceMd5), tempFile, DEFAULT); err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
-			return err
+			return nil, err
 		}
 		md5Byte := silceMd5.Sum(nil)
 		silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
 		silceMd5Base64s = append(silceMd5Base64s, fmt.Sprint(i, "-", base64.StdEncoding.EncodeToString(md5Byte)))
 	}
 	if _, err = tempFile.Seek(0, io.SeekStart); err != nil {
-		return err
+		return nil, err
 	}
 
 	fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
@@ -604,7 +609,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 			req.SetContext(ctx)
 		}, params, &uploadInfo)
 	if err != nil {
-		return err
+		return nil, err
 	}
 
 	// the file does not exist in the cloud drive yet, start uploading
@@ -618,18 +623,18 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 				"partInfo": strings.Join(silceMd5Base64s, ","),
 			}, &uploadUrls)
 		if err != nil {
-			return err
+			return nil, err
 		}
 
 		buf := make([]byte, DEFAULT)
 		for i := 1; i <= count; i++ {
 			if utils.IsCanceled(ctx) {
-				return ctx.Err()
+				return nil, ctx.Err()
 			}
 
 			n, err := io.ReadFull(tempFile, buf)
 			if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
-				return err
+				return nil, err
 			}
 			uploadData := uploadUrls.UploadUrls[fmt.Sprint("partNumber_", i)]
 			err = retry.Do(func() error {
@@ -641,7 +646,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 				retry.Delay(time.Second),
 				retry.MaxDelay(5*time.Second))
 			if err != nil {
-				return err
+				return nil, err
 			}
 
 			up(int(i * 100 / count))
@@ -649,6 +654,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 		}
 
 		// commit
+		var resp CommitMultiUploadFileResp
 		_, err = y.request(fullUrl+"/commitMultiUploadFile", http.MethodGet,
 			func(req *resty.Request) {
 				req.SetContext(ctx)
@@ -656,15 +662,19 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
 				"uploadFileId": uploadInfo.Data.UploadFileID,
 				"isLog":        "0",
 				"opertype":     "3",
-			}, nil)
-	return err
+			}, &resp)
+	if err != nil {
+		return nil, err
+	}
+	return resp.toFile(), nil
 }
 
-func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (err error) {
+// legacy upload; the family cloud does not support overwriting
+func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
 	// the full file md5 is required, so io.Seek must be supported
-	tempFile, err := utils.CreateTempFile(file.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(file.GetReadCloser(), file.GetSize())
 	if err != nil {
-		return err
+		return nil, err
 	}
 	defer func() {
 		_ = tempFile.Close()
@@ -674,10 +684,10 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
 	// compute md5
 	fileMd5 := md5.New()
 	if _, err := io.Copy(fileMd5, tempFile); err != nil {
-		return err
+		return nil, err
 	}
 	if _, err = tempFile.Seek(0, io.SeekStart); err != nil {
-		return err
+		return nil, err
 	}
 	fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
 
@@ -718,14 +728,14 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
 		}, &uploadInfo)
 
 	if err != nil {
-		return err
+		return nil, err
 	}
 
 	// the file does not exist in the cloud drive yet, start uploading
 	status := GetUploadFileStatusResp{CreateUploadFileResp: uploadInfo}
 	for status.Size < file.GetSize() && status.FileDataExists != 1 {
 		if utils.IsCanceled(ctx) {
-			return ctx.Err()
+			return nil, ctx.Err()
 		}
 
 		header := map[string]string{
@@ -742,7 +752,7 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
 
 		_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile))
 		if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
-			return err
+			return nil, err
 		}
 
 		// fetch the resume (checkpoint) status
@@ -760,17 +770,17 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
 			}
 		}, &status)
 		if err != nil {
-			return err
+			return nil, err
 		}
 
 		if _, err := tempFile.Seek(status.GetSize(), io.SeekStart); err != nil {
-			return err
+			return nil, err
 		}
 		up(int(status.Size / file.GetSize()))
 	}
 
 	// commit
-	var resp CommitUploadFileResp
+	var resp OldCommitUploadFileResp
 	_, err = y.post(status.FileCommitUrl, func(req *resty.Request) {
 		req.SetContext(ctx)
 		if y.isFamily() {
@@ -788,7 +798,10 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
 			})
 		}
 	}, &resp)
-	return err
+	if err != nil {
+		return nil, err
+	}
+	return resp.toFile(), nil
 }
 
 func (y *Cloud189PC) isFamily() bool {
@@ -829,3 +842,33 @@ func (y *Cloud189PC) getFamilyID() (string, error) {
 	}
 	return fmt.Sprint(infos[0].FamilyID), nil
 }
+
+func (y *Cloud189PC) CheckBatchTask(aType string, taskID string) (*BatchTaskStateResp, error) {
+	var resp BatchTaskStateResp
+	_, err := y.post(API_URL+"/batch/checkBatchTask.action", func(req *resty.Request) {
+		req.SetFormData(map[string]string{
+			"type":   aType,
+			"taskId": taskID,
+		})
+	}, &resp)
+	if err != nil {
+		return nil, err
+	}
+	return &resp, nil
+}
+
+func (y *Cloud189PC) WaitBatchTask(aType string, taskID string, t time.Duration) error {
+	for {
+		state, err := y.CheckBatchTask(aType, taskID)
+		if err != nil {
+			return err
+		}
+		switch state.TaskStatus {
+		case 2:
+			return errors.New("there is a conflict with the target object")
+		case 4:
+			return nil
+		}
+		time.Sleep(t)
+	}
+}
@@ -2,10 +2,12 @@ package aliyundrive_open
 
 import (
 	"context"
+	"errors"
 	"fmt"
 	"net/http"
 	"time"
 
+	"github.com/Xhofe/rateg"
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
@@ -34,13 +36,25 @@ func (d *AliyundriveOpen) GetAddition() driver.Additional {
 }
 
 func (d *AliyundriveOpen) Init(ctx context.Context) error {
+	if d.LIVPDownloadFormat == "" {
+		d.LIVPDownloadFormat = "jpeg"
+	}
+	if d.DriveType == "" {
+		d.DriveType = "default"
+	}
 	res, err := d.request("/adrive/v1.0/user/getDriveInfo", http.MethodPost, nil)
 	if err != nil {
 		return err
 	}
-	d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
-	d.limitList = utils.LimitRateCtx(d.list, time.Second/4)
-	d.limitLink = utils.LimitRateCtx(d.link, time.Second)
+	d.DriveId = utils.Json.Get(res, d.DriveType+"_drive_id").ToString()
+	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
+		Limit:  4,
+		Bucket: 1,
+	})
+	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
+		Limit:  1,
+		Bucket: 1,
+	})
 	return nil
 }
@@ -73,6 +87,12 @@ func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link
 		return nil, err
 	}
 	url := utils.Json.Get(res, "url").ToString()
+	if url == "" {
+		if utils.Ext(file.GetName()) != "livp" {
+			return nil, errors.New("get download url failed: " + string(res))
+		}
+		url = utils.Json.Get(res, "streamsUrl", d.LIVPDownloadFormat).ToString()
+	}
 	exp := time.Hour
 	return &model.Link{
 		URL: url,
@@ -6,17 +6,19 @@ import (
 )
 
 type Addition struct {
+	DriveType string `json:"drive_type" type:"select" options:"default,resource,backup" default:"default"`
 	driver.RootID
 	RefreshToken   string `json:"refresh_token" required:"true"`
 	OrderBy        string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
 	OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
 	OauthTokenURL  string `json:"oauth_token_url" default:"https://api.xhofe.top/alist/ali_open/token"`
 	ClientID       string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
 	ClientSecret   string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
 	RemoveWay      string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
 	RapidUpload    bool   `json:"rapid_upload" help:"If you enable this option, the file will be uploaded to the server first, so the progress will be incorrect"`
 	InternalUpload bool   `json:"internal_upload" help:"If you are using Aliyun ECS is located in Beijing, you can turn it on to boost the upload speed"`
-	AccessToken    string
+	LIVPDownloadFormat string `json:"livp_download_format" type:"select" options:"jpeg,mov" default:"jpeg"`
+	AccessToken        string
 }
 
 var config = driver.Config{
@@ -224,7 +224,7 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
 	}
 	log.Debugf("[aliyundrive_open] pre_hash matched, start rapid upload")
 	// convert to local file
-	file, err := utils.CreateTempFile(stream)
+	file, err := utils.CreateTempFile(stream, stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -10,6 +10,7 @@ import (
 	"github.com/alist-org/alist/v3/internal/op"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
+	log "github.com/sirupsen/logrus"
 )
 
 // do others that not defined in Driver interface
@@ -19,9 +20,9 @@ func (d *AliyundriveOpen) refreshToken() error {
 	if d.OauthTokenURL != "" && d.ClientID == "" {
 		url = d.OauthTokenURL
 	}
-	var resp base.TokenResp
+	//var resp base.TokenResp
 	var e ErrResp
-	_, err := base.RestyClient.R().
+	res, err := base.RestyClient.R().
 		ForceContentType("application/json").
 		SetBody(base.Json{
 			"client_id": d.ClientID,
@@ -29,19 +30,21 @@ func (d *AliyundriveOpen) refreshToken() error {
 			"grant_type":    "refresh_token",
 			"refresh_token": d.RefreshToken,
 		}).
-		SetResult(&resp).
+		//SetResult(&resp).
 		SetError(&e).
 		Post(url)
 	if err != nil {
 		return err
 	}
+	log.Debugf("[ali_open] refresh token response: %s", res.String())
 	if e.Code != "" {
 		return fmt.Errorf("failed to refresh token: %s", e.Message)
 	}
-	if resp.RefreshToken == "" {
+	refresh, access := utils.Json.Get(res.Body(), "refresh_token").ToString(), utils.Json.Get(res.Body(), "access_token").ToString()
+	if refresh == "" {
 		return errors.New("failed to refresh token: refresh token is empty")
 	}
-	d.RefreshToken, d.AccessToken = resp.RefreshToken, resp.AccessToken
+	d.RefreshToken, d.AccessToken = refresh, access
 	op.MustSaveDriverStorage(d)
 	return nil
 }
@@ -65,6 +68,9 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
 	req.SetError(&e)
 	res, err := req.Execute(method, d.base+uri)
 	if err != nil {
+		if res != nil {
+			log.Errorf("[aliyundrive_open] request error: %s", res.String())
+		}
 		return nil, err, nil
 	}
 	isRetry := len(retry) > 0 && retry[0]
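The rewritten `refreshToken` stops binding the whole response to a typed struct and instead pulls just the two token fields out of the raw body. A standalone sketch of that extraction, using the standard `encoding/json` in place of the json-iterator helper (`utils.Json`) from the diff, with an invented sample response:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// extractTokens pulls refresh_token and access_token out of a raw JSON
// body, returning an error when the refresh token is missing, mirroring
// the check in the rewritten refreshToken.
func extractTokens(body []byte) (refresh, access string, err error) {
	var m struct {
		RefreshToken string `json:"refresh_token"`
		AccessToken  string `json:"access_token"`
	}
	if err := json.Unmarshal(body, &m); err != nil {
		return "", "", err
	}
	if m.RefreshToken == "" {
		return "", "", errors.New("failed to refresh token: refresh token is empty")
	}
	return m.RefreshToken, m.AccessToken, nil
}

func main() {
	// Hypothetical token response, for illustration only.
	body := []byte(`{"refresh_token":"r1","access_token":"a1","token_type":"Bearer"}`)
	r, a, err := extractTokens(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(r, a) // r1 a1
}
```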
@@ -6,6 +6,7 @@ import (
 	"net/http"
 	"time"
 
+	"github.com/Xhofe/rateg"
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
@@ -52,8 +53,14 @@ func (d *AliyundriveShare) Init(ctx context.Context) error {
 			log.Errorf("%+v", err)
 		}
 	})
-	d.limitList = utils.LimitRateCtx(d.list, time.Second/4)
-	d.limitLink = utils.LimitRateCtx(d.link, time.Second)
+	d.limitList = rateg.LimitFnCtx(d.list, rateg.LimitFnOption{
+		Limit:  4,
+		Bucket: 1,
+	})
+	d.limitLink = rateg.LimitFnCtx(d.link, rateg.LimitFnOption{
+		Limit:  1,
+		Bucket: 1,
+	})
 	return nil
}
@@ -3,6 +3,7 @@ package drivers
 import (
 	_ "github.com/alist-org/alist/v3/drivers/115"
 	_ "github.com/alist-org/alist/v3/drivers/123"
+	_ "github.com/alist-org/alist/v3/drivers/123_share"
 	_ "github.com/alist-org/alist/v3/drivers/139"
 	_ "github.com/alist-org/alist/v3/drivers/189"
 	_ "github.com/alist-org/alist/v3/drivers/189pc"
@@ -16,6 +17,7 @@ import (
 	_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
 	_ "github.com/alist-org/alist/v3/drivers/baidu_share"
 	_ "github.com/alist-org/alist/v3/drivers/cloudreve"
+	_ "github.com/alist-org/alist/v3/drivers/crypt"
 	_ "github.com/alist-org/alist/v3/drivers/dropbox"
 	_ "github.com/alist-org/alist/v3/drivers/ftp"
 	_ "github.com/alist-org/alist/v3/drivers/google_drive"
@@ -43,6 +45,7 @@ import (
 	_ "github.com/alist-org/alist/v3/drivers/uss"
 	_ "github.com/alist-org/alist/v3/drivers/virtual"
 	_ "github.com/alist-org/alist/v3/drivers/webdav"
+	_ "github.com/alist-org/alist/v3/drivers/weiyun"
 	_ "github.com/alist-org/alist/v3/drivers/wopan"
 	_ "github.com/alist-org/alist/v3/drivers/yandex_disk"
 )
@@ -1,23 +1,23 @@
 package baidu_netdisk
 
 import (
-	"bytes"
 	"context"
 	"crypto/md5"
 	"encoding/hex"
 	"fmt"
+	"github.com/alist-org/alist/v3/drivers/base"
+	"github.com/alist-org/alist/v3/internal/driver"
+	"github.com/alist-org/alist/v3/internal/errs"
+	"github.com/alist-org/alist/v3/internal/model"
+	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/avast/retry-go"
+	log "github.com/sirupsen/logrus"
 	"io"
 	"math"
 	"os"
 	stdpath "path"
 	"strconv"
 	"strings"
-
-	"github.com/alist-org/alist/v3/drivers/base"
-	"github.com/alist-org/alist/v3/internal/driver"
-	"github.com/alist-org/alist/v3/internal/model"
-	"github.com/alist-org/alist/v3/pkg/utils"
-	log "github.com/sirupsen/logrus"
 )
@@ -25,6 +25,9 @@ type BaiduNetdisk struct {
 	Addition
 }
 
+const BaiduFileAPI = "https://d.pcs.baidu.com/rest/2.0/pcs/superfile2"
+const DefaultSliceSize int64 = 4 * 1024 * 1024
+
 func (d *BaiduNetdisk) Config() driver.Config {
 	return config
 }
@@ -109,7 +112,9 @@ func (d *BaiduNetdisk) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	streamSize := stream.GetSize()
+
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -117,43 +122,37 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 		_ = tempFile.Close()
 		_ = os.Remove(tempFile.Name())
 	}()
-	var Default int64 = 4 * 1024 * 1024
-	defaultByteData := make([]byte, Default)
-	count := int(math.Ceil(float64(stream.GetSize()) / float64(Default)))
-	var SliceSize int64 = 256 * 1024
+	count := int(math.Ceil(float64(streamSize) / float64(DefaultSliceSize)))
+	//cal md5 for first 256k data
+	const SliceSize int64 = 256 * 1024
 	// cal md5
 	h1 := md5.New()
 	h2 := md5.New()
-	block_list := make([]string, 0)
-	content_md5 := ""
-	slice_md5 := ""
-	left := stream.GetSize()
+	blockList := make([]string, 0)
+	contentMd5 := ""
+	sliceMd5 := ""
+	left := streamSize
 	for i := 0; i < count; i++ {
-		byteSize := Default
-		var byteData []byte
-		if left < Default {
+		byteSize := DefaultSliceSize
+		if left < DefaultSliceSize {
 			byteSize = left
-			byteData = make([]byte, byteSize)
-		} else {
-			byteData = defaultByteData
 		}
 		left -= byteSize
-		_, err = io.ReadFull(tempFile, byteData)
+		_, err = io.Copy(io.MultiWriter(h1, h2), io.LimitReader(tempFile, byteSize))
 		if err != nil {
 			return err
 		}
-		h1.Write(byteData)
-		h2.Write(byteData)
-		block_list = append(block_list, fmt.Sprintf("\"%s\"", hex.EncodeToString(h2.Sum(nil))))
+		blockList = append(blockList, fmt.Sprintf("\"%s\"", hex.EncodeToString(h2.Sum(nil))))
		h2.Reset()
 	}
-	content_md5 = hex.EncodeToString(h1.Sum(nil))
+	contentMd5 = hex.EncodeToString(h1.Sum(nil))
 	_, err = tempFile.Seek(0, io.SeekStart)
 	if err != nil {
 		return err
 	}
-	if stream.GetSize() <= SliceSize {
-		slice_md5 = content_md5
+	if streamSize <= SliceSize {
+		sliceMd5 = contentMd5
 	} else {
 		sliceData := make([]byte, SliceSize)
 		_, err = io.ReadFull(tempFile, sliceData)
@@ -161,22 +160,19 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 			return err
 		}
 		h2.Write(sliceData)
-		slice_md5 = hex.EncodeToString(h2.Sum(nil))
-		_, err = tempFile.Seek(0, io.SeekStart)
-		if err != nil {
-			return err
-		}
+		sliceMd5 = hex.EncodeToString(h2.Sum(nil))
 	}
 	rawPath := stdpath.Join(dstDir.GetPath(), stream.GetName())
 	path := encodeURIComponent(rawPath)
-	block_list_str := fmt.Sprintf("[%s]", strings.Join(block_list, ","))
+	block_list_str := fmt.Sprintf("[%s]", strings.Join(blockList, ","))
 	data := fmt.Sprintf("path=%s&size=%d&isdir=0&autoinit=1&block_list=%s&content-md5=%s&slice-md5=%s",
-		path, stream.GetSize(),
+		path, streamSize,
 		block_list_str,
-		content_md5, slice_md5)
+		contentMd5, sliceMd5)
 	params := map[string]string{
 		"method": "precreate",
 	}
+	log.Debugf("[baidu_netdisk] precreate data: %s", data)
 	var precreateResp PrecreateResp
 	_, err = d.post("/xpan/file", params, data, &precreateResp)
 	if err != nil {
@@ -184,6 +180,7 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 	}
 	log.Debugf("%+v", precreateResp)
 	if precreateResp.ReturnType == 2 {
+		//rapid upload, since we got an md5 match from the baidu server
 		return nil
 	}
 	params = map[string]string{
@@ -193,41 +190,49 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
 		"path":     path,
 		"uploadid": precreateResp.Uploadid,
 	}
-	left = stream.GetSize()
+
+	var offset int64 = 0
 	for i, partseq := range precreateResp.BlockList {
-		if utils.IsCanceled(ctx) {
-			return ctx.Err()
-		}
-		byteSize := Default
-		var byteData []byte
-		if left < Default {
-			byteSize = left
-			byteData = make([]byte, byteSize)
-		} else {
-			byteData = defaultByteData
-		}
-		left -= byteSize
-		_, err = io.ReadFull(tempFile, byteData)
-		if err != nil {
-			return err
-		}
-		u := "https://d.pcs.baidu.com/rest/2.0/pcs/superfile2"
|
|
||||||
params["partseq"] = strconv.Itoa(partseq)
|
params["partseq"] = strconv.Itoa(partseq)
|
||||||
res, err := base.RestyClient.R().
|
byteSize := int64(math.Min(float64(streamSize-offset), float64(DefaultSliceSize)))
|
||||||
SetContext(ctx).
|
err := retry.Do(func() error {
|
||||||
SetQueryParams(params).
|
return d.uploadSlice(ctx, ¶ms, stream.GetName(), tempFile, offset, byteSize)
|
||||||
SetFileReader("file", stream.GetName(), bytes.NewReader(byteData)).
|
},
|
||||||
Post(u)
|
retry.Context(ctx),
|
||||||
|
retry.Attempts(3))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
log.Debugln(res.String())
|
offset += byteSize
|
||||||
|
|
||||||
if len(precreateResp.BlockList) > 0 {
|
if len(precreateResp.BlockList) > 0 {
|
||||||
up(i * 100 / len(precreateResp.BlockList))
|
up(i * 100 / len(precreateResp.BlockList))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
_, err = d.create(rawPath, stream.GetSize(), 0, precreateResp.Uploadid, block_list_str)
|
_, err = d.create(rawPath, streamSize, 0, precreateResp.Uploadid, block_list_str)
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
func (d *BaiduNetdisk) uploadSlice(ctx context.Context, params *map[string]string, fileName string, file *os.File, offset int64, byteSize int64) error {
|
||||||
|
_, err := file.Seek(offset, io.SeekStart)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
res, err := base.RestyClient.R().
|
||||||
|
SetContext(ctx).
|
||||||
|
SetQueryParams(*params).
|
||||||
|
SetFileReader("file", fileName, io.LimitReader(file, byteSize)).
|
||||||
|
Post(BaiduFileAPI)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
log.Debugln(res.RawResponse.Status + res.String())
|
||||||
|
errCode := utils.Json.Get(res.Body(), "error_code").ToInt()
|
||||||
|
errNo := utils.Json.Get(res.Body(), "errno").ToInt()
|
||||||
|
if errCode != 0 || errNo != 0 {
|
||||||
|
return errs.NewErr(errs.StreamIncomplete, "error in uploading to baidu, will retry. response=%s", res.String())
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
var _ driver.Driver = (*BaiduNetdisk)(nil)
|
var _ driver.Driver = (*BaiduNetdisk)(nil)
|
||||||
|
@@ -2,6 +2,7 @@ package baidu_netdisk
 
 import (
 	"fmt"
+	"github.com/avast/retry-go"
 	"net/http"
 	"net/url"
 	"strconv"
@@ -13,6 +14,7 @@ import (
 	"github.com/alist-org/alist/v3/internal/op"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
+	log "github.com/sirupsen/logrus"
 )
 
 // do others that not defined in Driver interface
@@ -50,30 +52,37 @@ func (d *BaiduNetdisk) _refreshToken() error {
 }
 
 func (d *BaiduNetdisk) request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
-	req := base.RestyClient.R()
-	req.SetQueryParam("access_token", d.AccessToken)
-	if callback != nil {
-		callback(req)
-	}
-	if resp != nil {
-		req.SetResult(resp)
-	}
-	res, err := req.Execute(method, furl)
-	if err != nil {
-		return nil, err
-	}
-	errno := utils.Json.Get(res.Body(), "errno").ToInt()
-	if errno != 0 {
-		if errno == -6 {
-			err = d.refreshToken()
-			if err != nil {
-				return nil, err
-			}
-			return d.request(furl, method, callback, resp)
-		}
-		return nil, fmt.Errorf("errno: %d, refer to https://pan.baidu.com/union/doc/", errno)
-	}
-	return res.Body(), nil
+	var result []byte
+	err := retry.Do(func() error {
+		req := base.RestyClient.R()
+		req.SetQueryParam("access_token", d.AccessToken)
+		if callback != nil {
+			callback(req)
+		}
+		if resp != nil {
+			req.SetResult(resp)
+		}
+		res, err := req.Execute(method, furl)
+		if err != nil {
+			return err
+		}
+		log.Debugf("[baidu_netdisk] req: %s, resp: %s", furl, res.String())
+		errno := utils.Json.Get(res.Body(), "errno").ToInt()
+		if errno != 0 {
+			if utils.SliceContains([]int{111, -6}, errno) {
+				log.Info("refreshing baidu_netdisk token.")
+				err2 := d.refreshToken()
+				if err2 != nil {
+					return err2
+				}
+			}
+			return fmt.Errorf("req: [%s], errno: %d, refer to https://pan.baidu.com/union/doc/", furl, errno)
+		}
+		result = res.Body()
+		return nil
+	},
+		retry.Attempts(3))
+	return result, err
 }
 
 func (d *BaiduNetdisk) get(pathname string, params map[string]string, resp interface{}) ([]byte, error) {
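The rewritten request() above replaces ad-hoc recursion with a retry.Do loop that refreshes the access token when Baidu reports errno 111 or -6, then lets the next attempt run with the new token. The control flow can be sketched without resty — the sentinel error and helper names here are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

var errExpired = errors.New("token expired")

// withTokenRetry mirrors the retry loop in request(): attempt the call up
// to `attempts` times, refreshing the token whenever it reports expiry,
// and return the last error if all attempts fail.
func withTokenRetry(attempts int, call func() error, refresh func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = call()
		if err == nil {
			return nil
		}
		if errors.Is(err, errExpired) {
			if rerr := refresh(); rerr != nil {
				return rerr
			}
		}
	}
	return err
}

func main() {
	calls, token := 0, "stale"
	err := withTokenRetry(3,
		func() error {
			calls++
			if token != "fresh" {
				return errExpired // first attempt fails, triggering a refresh
			}
			return nil
		},
		func() error { token = "fresh"; return nil },
	)
	fmt.Println(err, calls) // <nil> 2
}
```

Compared with the old recursive retry, the bounded loop cannot recurse forever if the refreshed token is still rejected.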
@@ -126,7 +126,13 @@ func (d *BaiduPhoto) Link(ctx context.Context, file model.Obj, args model.LinkAr
 	case *File:
 		return d.linkFile(ctx, file, args)
 	case *AlbumFile:
-		return d.linkAlbum(ctx, file, args)
+		f, err := d.CopyAlbumFile(ctx, file)
+		if err != nil {
+			return nil, err
+		}
+		return d.linkFile(ctx, f, args)
+		// the album link sometimes cannot be fetched
+		//return d.linkAlbum(ctx, file, args)
 	}
 	return nil, errs.NotFile
 }
@@ -169,9 +175,9 @@ func (d *BaiduPhoto) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.
 }
 
 func (d *BaiduPhoto) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
-	// only moving between albums is supported
 	if file, ok := srcObj.(*AlbumFile); ok {
-		if _, ok := dstDir.(*Album); ok {
+		switch dstDir.(type) {
+		case *Album, *Root: // albumfile -> root -> album or albumfile -> root
 			newObj, err := d.Copy(ctx, srcObj, dstDir)
 			if err != nil {
 				return nil, err
@@ -206,7 +212,7 @@ func (d *BaiduPhoto) Remove(ctx context.Context, obj model.Obj) error {
 
 func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
 	// the full-file md5 is required, so io.Seek must be supported
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return nil, err
 	}
@@ -69,3 +69,10 @@ func renameAlbum(album *Album, newName string) *Album {
 		Mtime: time.Now().Unix(),
 	}
 }
+
+func BoolToIntStr(b bool) string {
+	if b {
+		return "1"
+	}
+	return "0"
+}
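BoolToIntStr exists because Baidu's form-encoded endpoints take the delete-origin switch as the strings "0"/"1" rather than a boolean. A self-contained usage sketch (the form-field map is illustrative):

```go
package main

import "fmt"

// BoolToIntStr converts a boolean option to the "0"/"1" string form that
// Baidu's form-encoded endpoints expect.
func BoolToIntStr(b bool) string {
	if b {
		return "1"
	}
	return "0"
}

func main() {
	// e.g. driving the delete_origin_image form field from a config flag
	form := map[string]string{"delete_origin_image": BoolToIntStr(true)}
	fmt.Println(form["delete_origin_image"]) // 1
}
```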
@@ -10,6 +10,7 @@ type Addition struct {
 	ShowType string `json:"show_type" type:"select" options:"root,root_only_album,root_only_file" default:"root"`
 	AlbumID  string `json:"album_id"`
 	//AlbumPassword string `json:"album_password"`
+	DeleteOrigin bool   `json:"delete_origin"`
 	ClientID     string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
 	ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
 }
@@ -21,7 +21,7 @@ const (
 	FILE_API_URL_V2 = API_URL + "/file/v2"
 )
 
-func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
+func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
 	req := base.RestyClient.R().
 		SetQueryParam("access_token", d.AccessToken)
 	if callback != nil {
@@ -52,9 +52,17 @@ func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallba
 	default:
 		return nil, fmt.Errorf("errno: %d, refer to https://photo.baidu.com/union/doc", erron)
 	}
-	return res.Body(), nil
+	return res, nil
 }
 
+//func (d *BaiduPhoto) Request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
+//	res, err := d.request(furl, method, callback, resp)
+//	if err != nil {
+//		return nil, err
+//	}
+//	return res.Body(), nil
+//}
+
 func (d *BaiduPhoto) refreshToken() error {
 	u := "https://openapi.baidu.com/oauth/2.0/token"
 	var resp base.TokenResp
@@ -79,11 +87,11 @@ func (d *BaiduPhoto) refreshToken() error {
 	return nil
 }
 
-func (d *BaiduPhoto) Get(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
+func (d *BaiduPhoto) Get(furl string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
 	return d.Request(furl, http.MethodGet, callback, resp)
 }
 
-func (d *BaiduPhoto) Post(furl string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
+func (d *BaiduPhoto) Post(furl string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
 	return d.Request(furl, http.MethodPost, callback, resp)
 }
 
@@ -223,7 +231,7 @@ func (d *BaiduPhoto) DeleteAlbum(ctx context.Context, album *Album) error {
 		r.SetFormData(map[string]string{
 			"album_id": album.AlbumID,
 			"tid":      fmt.Sprint(album.Tid),
-			"delete_origin_image": "0", // whether to delete the original image: 0 = keep, 1 = delete
+			"delete_origin_image": BoolToIntStr(d.DeleteOrigin), // whether to delete the original image: 0 = keep, 1 = delete
 		})
 	}, nil)
 	return err
@@ -237,7 +245,7 @@ func (d *BaiduPhoto) DeleteAlbumFile(ctx context.Context, file *AlbumFile) error
 			"album_id": fmt.Sprint(file.AlbumID),
 			"tid":      fmt.Sprint(file.Tid),
 			"list":     fmt.Sprintf(`[{"fsid":%d,"uk":%d}]`, file.Fsid, file.Uk),
-			"del_origin": "0", // whether to delete the original image: 0 = keep, 1 = delete
+			"del_origin": BoolToIntStr(d.DeleteOrigin), // whether to delete the original image: 0 = keep, 1 = delete
 		})
 	}, nil)
 	return err
@@ -391,6 +399,49 @@ func (d *BaiduPhoto) linkFile(ctx context.Context, file *File, args model.LinkAr
 	return link, nil
 }
 
+/*func (d *BaiduPhoto) linkStreamAlbum(ctx context.Context, file *AlbumFile) (*model.Link, error) {
+	return &model.Link{
+		Header: http.Header{},
+		Writer: func(w io.Writer) error {
+			res, err := d.Get(ALBUM_API_URL+"/streaming", func(r *resty.Request) {
+				r.SetContext(ctx)
+				r.SetQueryParams(map[string]string{
+					"fsid":     fmt.Sprint(file.Fsid),
+					"album_id": file.AlbumID,
+					"tid":      fmt.Sprint(file.Tid),
+					"uk":       fmt.Sprint(file.Uk),
+				}).SetDoNotParseResponse(true)
+			}, nil)
+			if err != nil {
+				return err
+			}
+			defer res.RawBody().Close()
+			_, err = io.Copy(w, res.RawBody())
+			return err
+		},
+	}, nil
+}*/
+
+/*func (d *BaiduPhoto) linkStream(ctx context.Context, file *File) (*model.Link, error) {
+	return &model.Link{
+		Header: http.Header{},
+		Writer: func(w io.Writer) error {
+			res, err := d.Get(FILE_API_URL_V1+"/streaming", func(r *resty.Request) {
+				r.SetContext(ctx)
+				r.SetQueryParams(map[string]string{
+					"fsid": fmt.Sprint(file.Fsid),
+				}).SetDoNotParseResponse(true)
+			}, nil)
+			if err != nil {
+				return err
+			}
+			defer res.RawBody().Close()
+			_, err = io.Copy(w, res.RawBody())
+			return err
+		},
+	}, nil
+}*/
+
 // get uk
 func (d *BaiduPhoto) uInfo() (*UInfo, error) {
 	var info UInfo
@@ -1,30 +1 @@
 package base
-
-import (
-	"io"
-	"net/http"
-	"strconv"
-
-	"github.com/alist-org/alist/v3/internal/model"
-	"github.com/alist-org/alist/v3/pkg/http_range"
-	"github.com/alist-org/alist/v3/pkg/utils"
-)
-
-func HandleRange(link *model.Link, file io.ReadSeekCloser, header http.Header, size int64) {
-	if header.Get("Range") != "" {
-		r, err := http_range.ParseRange(header.Get("Range"), size)
-		if err == nil && len(r) > 0 {
-			_, err := file.Seek(r[0].Start, io.SeekStart)
-			if err == nil {
-				link.Data = utils.NewLimitReadCloser(file, func() error {
-					return file.Close()
-				}, r[0].Length)
-				link.Status = http.StatusPartialContent
-				link.Header = http.Header{
-					"Content-Range":  []string{r[0].ContentRange(size)},
-					"Content-Length": []string{strconv.FormatInt(r[0].Length, 10)},
-				}
-			}
-		}
-	}
-}
411 drivers/crypt/driver.go Normal file
@@ -0,0 +1,411 @@
package crypt

import (
	"context"
	"fmt"
	"io"
	"net/http"
	stdpath "path"
	"regexp"
	"strings"

	"github.com/alist-org/alist/v3/internal/driver"
	"github.com/alist-org/alist/v3/internal/errs"
	"github.com/alist-org/alist/v3/internal/fs"
	"github.com/alist-org/alist/v3/internal/model"
	"github.com/alist-org/alist/v3/internal/net"
	"github.com/alist-org/alist/v3/internal/op"
	"github.com/alist-org/alist/v3/pkg/http_range"
	"github.com/alist-org/alist/v3/pkg/utils"
	rcCrypt "github.com/rclone/rclone/backend/crypt"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/obscure"
	log "github.com/sirupsen/logrus"
)

type Crypt struct {
	model.Storage
	Addition
	cipher        *rcCrypt.Cipher
	remoteStorage driver.Driver
}

const obfuscatedPrefix = "___Obfuscated___"

func (d *Crypt) Config() driver.Config {
	return config
}

func (d *Crypt) GetAddition() driver.Additional {
	return &d.Addition
}

func (d *Crypt) Init(ctx context.Context) error {
	//obfuscate credentials if it's updated or just created
	err := d.updateObfusParm(&d.Password)
	if err != nil {
		return fmt.Errorf("failed to obfuscate password: %w", err)
	}
	err = d.updateObfusParm(&d.Salt)
	if err != nil {
		return fmt.Errorf("failed to obfuscate salt: %w", err)
	}

	isCryptExt := regexp.MustCompile(`^[.][A-Za-z0-9-_]{2,}$`).MatchString
	if !isCryptExt(d.EncryptedSuffix) {
		return fmt.Errorf("EncryptedSuffix is illegal")
	}

	op.MustSaveDriverStorage(d)

	//need remote storage exist
	storage, err := fs.GetStorage(d.RemotePath, &fs.GetStoragesArgs{})
	if err != nil {
		return fmt.Errorf("can't find remote storage: %w", err)
	}
	d.remoteStorage = storage

	p, _ := strings.CutPrefix(d.Password, obfuscatedPrefix)
	p2, _ := strings.CutPrefix(d.Salt, obfuscatedPrefix)
	config := configmap.Simple{
		"password":                  p,
		"password2":                 p2,
		"filename_encryption":       d.FileNameEnc,
		"directory_name_encryption": d.DirNameEnc,
		"filename_encoding":         "base64",
		"suffix":                    d.EncryptedSuffix,
		"pass_bad_blocks":           "",
	}
	c, err := rcCrypt.NewCipher(config)
	if err != nil {
		return fmt.Errorf("failed to create Cipher: %w", err)
	}
	d.cipher = c

	//c, err := rcCrypt.newCipher(rcCrypt.NameEncryptionStandard, "", "", true, nil)
	return nil
}

func (d *Crypt) updateObfusParm(str *string) error {
	temp := *str
	if !strings.HasPrefix(temp, obfuscatedPrefix) {
		temp, err := obscure.Obscure(temp)
		if err != nil {
			return err
		}
		temp = obfuscatedPrefix + temp
		*str = temp
	}
	return nil
}

func (d *Crypt) Drop(ctx context.Context) error {
	return nil
}

func (d *Crypt) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	path := dir.GetPath()
	//return d.list(ctx, d.RemotePath, path)
	//remoteFull

	objs, err := fs.List(ctx, d.getPathForRemote(path, true), &fs.ListArgs{NoLog: true})
	// the obj must implement the model.SetPath interface
	// return objs, err
	if err != nil {
		return nil, err
	}

	var result []model.Obj
	for _, obj := range objs {
		if obj.IsDir() {
			name, err := d.cipher.DecryptDirName(obj.GetName())
			if err != nil {
				//filter illegal files
				continue
			}
			objRes := model.Object{
				Name:     name,
				Size:     0,
				Modified: obj.ModTime(),
				IsFolder: obj.IsDir(),
			}
			result = append(result, &objRes)
		} else {
			thumb, ok := model.GetThumb(obj)
			size, err := d.cipher.DecryptedSize(obj.GetSize())
			if err != nil {
				//filter illegal files
				continue
			}
			name, err := d.cipher.DecryptFileName(obj.GetName())
			if err != nil {
				//filter illegal files
				continue
			}
			objRes := model.Object{
				Name:     name,
				Size:     size,
				Modified: obj.ModTime(),
				IsFolder: obj.IsDir(),
			}
			if !ok {
				result = append(result, &objRes)
			} else {
				objWithThumb := model.ObjThumb{
					Object: objRes,
					Thumbnail: model.Thumbnail{
						Thumbnail: thumb,
					},
				}
				result = append(result, &objWithThumb)
			}
		}
	}

	return result, nil
}

func (d *Crypt) Get(ctx context.Context, path string) (model.Obj, error) {
	if utils.PathEqual(path, "/") {
		return &model.Object{
			Name:     "Root",
			IsFolder: true,
			Path:     "/",
		}, nil
	}
	remoteFullPath := ""
	var remoteObj model.Obj
	var err, err2 error
	firstTryIsFolder, secondTry := guessPath(path)
	remoteFullPath = d.getPathForRemote(path, firstTryIsFolder)
	remoteObj, err = fs.Get(ctx, remoteFullPath, &fs.GetArgs{NoLog: true})
	if err != nil {
		if errs.IsObjectNotFound(err) && secondTry {
			//try the opposite
			remoteFullPath = d.getPathForRemote(path, !firstTryIsFolder)
			remoteObj, err2 = fs.Get(ctx, remoteFullPath, &fs.GetArgs{NoLog: true})
			if err2 != nil {
				return nil, err2
			}
		} else {
			return nil, err
		}
	}
	var size int64 = 0
	name := ""
	if !remoteObj.IsDir() {
		size, err = d.cipher.DecryptedSize(remoteObj.GetSize())
		if err != nil {
			log.Warnf("DecryptedSize failed for %s, will use original size, err: %s", path, err)
			size = remoteObj.GetSize()
		}
		name, err = d.cipher.DecryptFileName(remoteObj.GetName())
		if err != nil {
			log.Warnf("DecryptFileName failed for %s, will use original name, err: %s", path, err)
			name = remoteObj.GetName()
		}
	} else {
		name, err = d.cipher.DecryptDirName(remoteObj.GetName())
		if err != nil {
			log.Warnf("DecryptDirName failed for %s, will use original name, err: %s", path, err)
			name = remoteObj.GetName()
		}
	}
	obj := &model.Object{
		Path:     path,
		Name:     name,
		Size:     size,
		Modified: remoteObj.ModTime(),
		IsFolder: remoteObj.IsDir(),
	}
	return obj, nil
	//return nil, errs.ObjectNotFound
}

func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	dstDirActualPath, err := d.getActualPathForRemote(file.GetPath(), false)
	if err != nil {
		return nil, fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	remoteLink, remoteFile, err := op.Link(ctx, d.remoteStorage, dstDirActualPath, args)
	if err != nil {
		return nil, err
	}

	if remoteLink.RangeReadCloser.RangeReader == nil && remoteLink.ReadSeekCloser == nil && len(remoteLink.URL) == 0 {
		return nil, fmt.Errorf("the remote storage driver need to be enhanced to support encryption")
	}
	remoteFileSize := remoteFile.GetSize()
	remoteClosers := utils.NewClosers()
	rangeReaderFunc := func(ctx context.Context, underlyingOffset, underlyingLength int64) (io.ReadCloser, error) {
		length := underlyingLength
		if underlyingLength >= 0 && underlyingOffset+underlyingLength >= remoteFileSize {
			length = -1
		}
		if remoteLink.RangeReadCloser.RangeReader != nil {
			//remoteRangeReader, err :=
			remoteReader, err := remoteLink.RangeReadCloser.RangeReader(http_range.Range{Start: underlyingOffset, Length: length})
			remoteClosers.Add(remoteLink.RangeReadCloser.Closers)
			if err != nil {
				return nil, err
			}
			return remoteReader, nil
		}
		if remoteLink.ReadSeekCloser != nil {
			_, err := remoteLink.ReadSeekCloser.Seek(underlyingOffset, io.SeekStart)
			if err != nil {
				return nil, err
			}
			//remoteClosers.Add(remoteLink.ReadSeekCloser)
			//keep reuse same ReadSeekCloser and close at last.
			return io.NopCloser(remoteLink.ReadSeekCloser), nil
		}
		if len(remoteLink.URL) > 0 {
			rangedRemoteLink := &model.Link{
				URL:    remoteLink.URL,
				Header: remoteLink.Header,
			}
			response, err := RequestRangedHttp(args.HttpReq, rangedRemoteLink, underlyingOffset, length)
			//remoteClosers.Add(response.Body)
			if err != nil {
				return nil, fmt.Errorf("remote storage http request failure, status: %d err: %s", response.StatusCode, err)
			}
			if underlyingOffset == 0 && length == -1 || response.StatusCode == http.StatusPartialContent {
				return response.Body, nil
			} else if response.StatusCode == http.StatusOK {
				log.Warnf("remote http server not supporting range request, expect low performance!")
				readCloser, err := net.GetRangedHttpReader(response.Body, underlyingOffset, length)
				if err != nil {
					return nil, err
				}
				return readCloser, nil
			}

			return response.Body, nil
		}
		//if remoteLink.Data != nil {
		//	log.Warnf("remote storage not supporting range request, expect low performance!")
		//	readCloser, err := net.GetRangedHttpReader(remoteLink.Data, underlyingOffset, length)
		//	remoteCloser = remoteLink.Data
		//	if err != nil {
		//		return nil, err
		//	}
		//	return readCloser, nil
		//}
		return nil, errs.NotSupport
	}
	resultRangeReader := func(httpRange http_range.Range) (io.ReadCloser, error) {
		readSeeker, err := d.cipher.DecryptDataSeek(ctx, rangeReaderFunc, httpRange.Start, httpRange.Length)
		if err != nil {
			return nil, err
		}
		return readSeeker, nil
	}

	resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers}
	resultLink := &model.Link{
		Header:          remoteLink.Header,
		RangeReadCloser: *resultRangeReadCloser,
		Expiration:      remoteLink.Expiration,
	}

	return resultLink, nil
}

func (d *Crypt) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	dstDirActualPath, err := d.getActualPathForRemote(parentDir.GetPath(), true)
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	dir := d.cipher.EncryptDirName(dirName)
	return op.MakeDir(ctx, d.remoteStorage, stdpath.Join(dstDirActualPath, dir))
}

func (d *Crypt) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
	srcRemoteActualPath, err := d.getActualPathForRemote(srcObj.GetPath(), srcObj.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	dstRemoteActualPath, err := d.getActualPathForRemote(dstDir.GetPath(), dstDir.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	return op.Move(ctx, d.remoteStorage, srcRemoteActualPath, dstRemoteActualPath)
}

func (d *Crypt) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	remoteActualPath, err := d.getActualPathForRemote(srcObj.GetPath(), srcObj.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	var newEncryptedName string
	if srcObj.IsDir() {
		newEncryptedName = d.cipher.EncryptDirName(newName)
	} else {
		newEncryptedName = d.cipher.EncryptFileName(newName)
	}
	return op.Rename(ctx, d.remoteStorage, remoteActualPath, newEncryptedName)
}

func (d *Crypt) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
	srcRemoteActualPath, err := d.getActualPathForRemote(srcObj.GetPath(), srcObj.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	dstRemoteActualPath, err := d.getActualPathForRemote(dstDir.GetPath(), dstDir.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	return op.Copy(ctx, d.remoteStorage, srcRemoteActualPath, dstRemoteActualPath)
}

func (d *Crypt) Remove(ctx context.Context, obj model.Obj) error {
	remoteActualPath, err := d.getActualPathForRemote(obj.GetPath(), obj.IsDir())
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}
	return op.Remove(ctx, d.remoteStorage, remoteActualPath)
}

func (d *Crypt) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
	dstDirActualPath, err := d.getActualPathForRemote(dstDir.GetPath(), true)
	if err != nil {
		return fmt.Errorf("failed to convert path to remote path: %w", err)
	}

	in := stream.GetReadCloser()
	// Encrypt the data into wrappedIn
	wrappedIn, err := d.cipher.EncryptData(in)
	if err != nil {
		return fmt.Errorf("failed to EncryptData: %w", err)
	}

	streamOut := &model.FileStream{
		Obj: &model.Object{
			ID:       stream.GetID(),
			Path:     stream.GetPath(),
			Name:     d.cipher.EncryptFileName(stream.GetName()),
			Size:     d.cipher.EncryptedSize(stream.GetSize()),
			Modified: stream.ModTime(),
			IsFolder: stream.IsDir(),
		},
		ReadCloser:   io.NopCloser(wrappedIn),
		Mimetype:     "application/octet-stream",
		WebPutAsTask: stream.NeedStore(),
		Old:          stream.GetOld(),
	}
	err = op.Put(ctx, d.remoteStorage, dstDirActualPath, streamOut, up, false)
	if err != nil {
		return err
	}
	return nil
}

//func (d *Safe) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
//	return nil, errs.NotSupport
//}

var _ driver.Driver = (*Crypt)(nil)
47 drivers/crypt/meta.go Normal file
@ -0,0 +1,47 @@
package crypt

import (
	"github.com/alist-org/alist/v3/internal/driver"
	"github.com/alist-org/alist/v3/internal/op"
)

type Addition struct {
	// Usually one of two
	//driver.RootPath
	//driver.RootID
	// define other

	FileNameEnc string `json:"filename_encryption" type:"select" required:"true" options:"off,standard,obfuscate" default:"off"`
	DirNameEnc  string `json:"directory_name_encryption" type:"select" required:"true" options:"false,true" default:"false"`
	RemotePath  string `json:"remote_path" required:"true" help:"This is where the encrypted data is stored"`

	Password        string `json:"password" required:"true" confidential:"true" help:"the main password"`
	Salt            string `json:"salt" confidential:"true" help:"If you don't know what salt is, treat it as a second password. Optional but recommended"`
	EncryptedSuffix string `json:"encrypted_suffix" required:"true" default:".bin" help:"encrypted files will have this suffix"`
}

/*// inMemory contains decrypted confidential info and other temp data. will not persist these info anywhere
type inMemory struct {
	password string
	salt     string
}*/

var config = driver.Config{
	Name:              "Crypt",
	LocalSort:         true,
	OnlyLocal:         false,
	OnlyProxy:         true,
	NoCache:           true,
	NoUpload:          false,
	NeedMs:            false,
	DefaultRoot:       "/",
	CheckStatus:       false,
	Alert:             "",
	NoOverwriteUpload: false,
}

func init() {
	op.RegisterDriver(func() driver.Driver {
		return &Crypt{}
	})
}
1 drivers/crypt/types.go Normal file
@ -0,0 +1 @@
package crypt
55 drivers/crypt/util.go Normal file
@ -0,0 +1,55 @@
package crypt

import (
	"net/http"
	stdpath "path"
	"path/filepath"
	"strings"

	"github.com/alist-org/alist/v3/internal/model"
	"github.com/alist-org/alist/v3/internal/net"
	"github.com/alist-org/alist/v3/internal/op"
	"github.com/alist-org/alist/v3/pkg/http_range"
)

func RequestRangedHttp(r *http.Request, link *model.Link, offset, length int64) (*http.Response, error) {
	header := net.ProcessHeader(&http.Header{}, &link.Header)
	header = http_range.ApplyRangeToHttpHeader(http_range.Range{Start: offset, Length: length}, header)

	return net.RequestHttp("GET", header, link.URL)
}

// guessPath gives the best guess based on the path
func guessPath(path string) (isFolder, secondTry bool) {
	if strings.HasSuffix(path, "/") {
		// confirmed a folder
		return true, false
	}
	lastSlash := strings.LastIndex(path, "/")
	if strings.Index(path[lastSlash:], ".") < 0 {
		// no dot: try folder first, then file
		return true, true
	}
	return false, true
}

func (d *Crypt) getPathForRemote(path string, isFolder bool) (remoteFullPath string) {
	if isFolder && !strings.HasSuffix(path, "/") {
		path = path + "/"
	}
	dir, fileName := filepath.Split(path)

	remoteDir := d.cipher.EncryptDirName(dir)
	remoteFileName := ""
	if len(strings.TrimSpace(fileName)) > 0 {
		remoteFileName = d.cipher.EncryptFileName(fileName)
	}
	return stdpath.Join(d.RemotePath, remoteDir, remoteFileName)
}

// the actual path is used internally only; any link for the user should come from remoteFullPath
func (d *Crypt) getActualPathForRemote(path string, isFolder bool) (string, error) {
	_, remoteActualPath, err := op.GetStorageAndActualPath(d.getPathForRemote(path, isFolder))
	return remoteActualPath, err
}
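The `guessPath` heuristic above can be exercised in isolation. This standalone sketch mirrors its body outside the driver (the `guessPath` copy here is lifted from util.go; `main` and the sample paths are illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// guessPath mirrors the heuristic in drivers/crypt/util.go:
// a trailing slash confirms a folder; otherwise a dot in the
// last path segment suggests a file. secondTry signals that
// the other interpretation should also be attempted.
func guessPath(path string) (isFolder, secondTry bool) {
	if strings.HasSuffix(path, "/") {
		// confirmed a folder
		return true, false
	}
	lastSlash := strings.LastIndex(path, "/")
	if strings.Index(path[lastSlash:], ".") < 0 {
		// no dot: try folder first, then file
		return true, true
	}
	return false, true
}

func main() {
	for _, p := range []string{"/docs/", "/docs/readme", "/docs/readme.md"} {
		isFolder, secondTry := guessPath(p)
		fmt.Printf("%s -> folder=%v secondTry=%v\n", p, isFolder, secondTry)
	}
}
```

Only a trailing slash is definitive; every other input forces a second lookup with the opposite guess, which is why callers pair this with a retry.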
@ -4,7 +4,6 @@ import (
 	"context"
 	stdpath "path"
 
-	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
@ -67,9 +66,8 @@ func (d *FTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*m
 	r := NewFTPFileReader(d.conn, file.GetPath())
 	link := &model.Link{
-		Data: r,
+		ReadSeekCloser: r,
 	}
-	base.HandleRange(link, r, args.Header, file.GetSize())
 	return link, nil
 }
@ -2,9 +2,7 @@ package lanzou
 import (
 	"context"
-	"fmt"
 	"net/http"
-	"regexp"
 
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
@ -19,6 +17,8 @@ type LanZou struct {
 	model.Storage
 	uid string
 	vei string
+
+	flag int32
 }
 
 func (d *LanZou) Config() driver.Config {
@ -30,16 +30,18 @@ func (d *LanZou) GetAddition() driver.Additional {
 }
 
 func (d *LanZou) Init(ctx context.Context) (err error) {
-	if d.IsCookie() {
+	switch d.Type {
+	case "account":
+		_, err := d.Login()
+		if err != nil {
+			return err
+		}
+		fallthrough
+	case "cookie":
 		if d.RootFolderID == "" {
 			d.RootFolderID = "-1"
 		}
-		ylogin := regexp.MustCompile("ylogin=(.*?);").FindStringSubmatch(d.Cookie)
-		if len(ylogin) < 2 {
-			return fmt.Errorf("cookie does not contain ylogin")
-		}
-		d.uid = ylogin[1]
-		d.vei, err = d.getVei()
+		d.vei, d.uid, err = d.getVeiAndUid()
 	}
 	return
 }
@ -51,7 +53,7 @@ func (d *LanZou) Drop(ctx context.Context) error {
 
 // The size and time obtained here are inaccurate
 func (d *LanZou) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		return d.GetAllFiles(dir.GetID())
 	} else {
 		return d.GetFileOrFolderByShareUrl(dir.GetID(), d.SharePassword)
@ -119,7 +121,7 @@ func (d *LanZou) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
 }
 
 func (d *LanZou) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		data, err := d.doupload(func(req *resty.Request) {
 			req.SetContext(ctx)
 			req.SetFormData(map[string]string{
@ -137,11 +139,11 @@ func (d *LanZou) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
 			FolID: utils.Json.Get(data, "text").ToString(),
 		}, nil
 	}
-	return nil, errs.NotImplement
+	return nil, errs.NotSupport
 }
 
 func (d *LanZou) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		if !srcObj.IsDir() {
 			_, err := d.doupload(func(req *resty.Request) {
 				req.SetContext(ctx)
@ -157,11 +159,11 @@ func (d *LanZou) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
 			return srcObj, nil
 		}
 	}
-	return nil, errs.NotImplement
+	return nil, errs.NotSupport
 }
 
 func (d *LanZou) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		if !srcObj.IsDir() {
 			_, err := d.doupload(func(req *resty.Request) {
 				req.SetContext(ctx)
@ -179,11 +181,11 @@ func (d *LanZou) Rename(ctx context.Context, srcObj model.Obj, newName string) (
 			return srcObj, nil
 		}
 	}
-	return nil, errs.NotImplement
+	return nil, errs.NotSupport
 }
 
 func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		_, err := d.doupload(func(req *resty.Request) {
 			req.SetContext(ctx)
 			if obj.IsDir() {
@ -200,13 +202,13 @@ func (d *LanZou) Remove(ctx context.Context, obj model.Obj) error {
 		}, nil)
 		return err
 	}
-	return errs.NotImplement
+	return errs.NotSupport
 }
 
 func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
-	if d.IsCookie() {
+	if d.IsCookie() || d.IsAccount() {
 		var resp RespText[[]FileOrFolder]
-		_, err := d._post(d.BaseUrl+"/fileup.php", func(req *resty.Request) {
+		_, err := d._post(d.BaseUrl+"/html5up.php", func(req *resty.Request) {
 			req.SetFormData(map[string]string{
 				"task": "1",
 				"vie":  "2",
@ -221,5 +223,5 @@ func (d *LanZou) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
 		}
 		return &resp.Text[0], nil
 	}
-	return nil, errs.NotImplement
+	return nil, errs.NotSupport
 }
@ -3,6 +3,7 @@ package lanzou
 import (
 	"bytes"
 	"fmt"
+	"net/http"
 	"regexp"
 	"strconv"
 	"strings"
@ -190,3 +191,14 @@ func GetExpirationTime(url string) (etime time.Duration) {
 	etime = time.Duration(timestamp-time.Now().Unix()) * time.Second
 	return
 }
+
+func CookieToString(cookies []*http.Cookie) string {
+	if cookies == nil {
+		return ""
+	}
+	cookieStrings := make([]string, len(cookies))
+	for i, cookie := range cookies {
+		cookieStrings[i] = cookie.Name + "=" + cookie.Value
+	}
+	return strings.Join(cookieStrings, ";")
+}
@ -6,8 +6,13 @@ import (
 )
 
 type Addition struct {
-	Type   string `json:"type" type:"select" options:"cookie,url" default:"cookie"`
-	Cookie string `json:"cookie" required:"true" help:"about 15 days valid, ignore if shareUrl is used"`
+	Type     string `json:"type" type:"select" options:"account,cookie,url" default:"cookie"`
+	Account  string `json:"account"`
+	Password string `json:"password"`
+
+	Cookie string `json:"cookie" help:"about 15 days valid, ignore if shareUrl is used"`
+
 	driver.RootID
 	SharePassword string `json:"share_password"`
 	BaseUrl       string `json:"baseUrl" required:"true" default:"https://pc.woozooo.com" help:"basic URL for file operation"`
@ -19,6 +24,10 @@ func (a *Addition) IsCookie() bool {
 	return a.Type == "cookie"
 }
 
+func (a *Addition) IsAccount() bool {
+	return a.Type == "account"
+}
+
 var config = driver.Config{
 	Name:      "Lanzou",
 	LocalSort: true,
@ -8,6 +8,7 @@ import (
 
 var ErrFileShareCancel = errors.New("file sharing cancellation")
 var ErrFileNotExist = errors.New("file does not exist")
+var ErrCookieExpiration = errors.New("cookie expiration")
 
 type RespText[T any] struct {
 	Text T `json:"text"`
@ -5,13 +5,16 @@ import (
 	"fmt"
 	"net/http"
 	"regexp"
+	"runtime"
 	"strconv"
 	"strings"
 	"sync"
+	"sync/atomic"
 	"time"
 
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/model"
+	"github.com/alist-org/alist/v3/internal/op"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
@ -37,7 +40,24 @@ func (d *LanZou) get(url string, callback base.ReqCallback) ([]byte, error) {
 }
 
 func (d *LanZou) post(url string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
-	return d._post(url, callback, resp, false)
+	data, err := d._post(url, callback, resp, false)
+	if err == ErrCookieExpiration && d.IsAccount() {
+		if atomic.CompareAndSwapInt32(&d.flag, 0, 1) {
+			_, err2 := d.Login()
+			atomic.SwapInt32(&d.flag, 0)
+			if err2 != nil {
+				err = errors.Join(err, err2)
+				d.Status = err.Error()
+				op.MustSaveDriverStorage(d)
+				return data, err
+			}
+		}
+		for atomic.LoadInt32(&d.flag) != 0 {
+			runtime.Gosched()
+		}
+		return d._post(url, callback, resp, false)
+	}
+	return data, err
 }
 
 func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{}, up bool) ([]byte, error) {
@ -49,10 +69,12 @@ func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{},
 			}
 			return false
 		})
-		callback(req)
+		if callback != nil {
+			callback(req)
+		}
 	}, up)
 	if err != nil {
-		return nil, err
+		return data, err
 	}
 	switch utils.Json.Get(data, "zt").ToInt() {
 	case 1, 2, 4:
@ -61,12 +83,14 @@ func (d *LanZou) _post(url string, callback base.ReqCallback, resp interface{},
 			utils.Json.Unmarshal(data, resp)
 		}
 		return data, nil
+	case 9: // login expired
+		return data, ErrCookieExpiration
 	default:
 		info := utils.Json.Get(data, "inf").ToString()
 		if info == "" {
 			info = utils.Json.Get(data, "info").ToString()
 		}
-		return nil, fmt.Errorf(info)
+		return data, fmt.Errorf(info)
 	}
 }
 
@ -101,6 +125,28 @@ func (d *LanZou) request(url string, method string, callback base.ReqCallback, u
 	return res.Body(), err
 }
 
+func (d *LanZou) Login() ([]*http.Cookie, error) {
+	resp, err := base.NewRestyClient().SetRedirectPolicy(resty.NoRedirectPolicy()).
+		R().SetFormData(map[string]string{
+		"task":         "3",
+		"uid":          d.Account,
+		"pwd":          d.Password,
+		"setSessionId": "",
+		"setSig":       "",
+		"setScene":     "",
+		"setTocen":     "",
+		"formhash":     "",
+	}).Post("https://up.woozooo.com/mlogin.php")
+	if err != nil {
+		return nil, err
+	}
+	if utils.Json.Get(resp.Body(), "zt").ToInt() != 1 {
+		return nil, fmt.Errorf("login err: %s", resp.Body())
+	}
+	d.Cookie = CookieToString(resp.Cookies())
+	return resp.Cookies(), nil
+}
+
 /*
 Fetch data via cookie
 */
@ -451,21 +497,32 @@ func (d *LanZou) getFileRealInfo(downURL string) (*int64, *time.Time) {
 	return &size, &time
 }
 
-func (d *LanZou) getVei() (string, error) {
-	resp, err := d.get("https://pc.woozooo.com/mydisk.php", func(req *resty.Request) {
+func (d *LanZou) getVeiAndUid() (vei string, uid string, err error) {
+	var resp []byte
+	resp, err = d.get("https://pc.woozooo.com/mydisk.php", func(req *resty.Request) {
 		req.SetQueryParams(map[string]string{
 			"item":   "files",
 			"action": "index",
-			"u":      d.uid,
 		})
 	})
 	if err != nil {
-		return "", err
+		return
 	}
+	// uid
+	uids := regexp.MustCompile(`uid=([^'"&;]+)`).FindStringSubmatch(string(resp))
+	if len(uids) < 2 {
+		err = fmt.Errorf("uid variable not found")
+		return
+	}
+	uid = uids[1]
+
+	// vei
 	html := RemoveNotes(string(resp))
 	data, err := htmlJsonToMap(html)
 	if err != nil {
-		return "", err
+		return
 	}
-	return data["vei"], nil
+	vei = data["vei"]
+
+	return
 }
@ -1,10 +1,11 @@
 package local
 
 import (
+	"bytes"
 	"context"
 	"errors"
 	"fmt"
-	"io"
+	"io/fs"
 	"net/http"
 	"os"
 	stdpath "path"
@ -80,36 +81,54 @@ func (d *Local) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
 		if !d.ShowHidden && strings.HasPrefix(f.Name(), ".") {
 			continue
 		}
-		thumb := ""
-		if d.Thumbnail {
-			typeName := utils.GetFileType(f.Name())
-			if typeName == conf.IMAGE || typeName == conf.VIDEO {
-				thumb = common.GetApiUrl(nil) + stdpath.Join("/d", args.ReqPath, f.Name())
-				thumb = utils.EncodePath(thumb, true)
-				thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(args.ReqPath, f.Name()))
-			}
-		}
-		isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
-		var size int64
-		if !isFolder {
-			size = f.Size()
-		}
-		file := model.ObjThumb{
-			Object: model.Object{
-				Path:     filepath.Join(dir.GetPath(), f.Name()),
-				Name:     f.Name(),
-				Modified: f.ModTime(),
-				Size:     size,
-				IsFolder: isFolder,
-			},
-			Thumbnail: model.Thumbnail{
-				Thumbnail: thumb,
-			},
-		}
-		files = append(files, &file)
+		file := d.FileInfoToObj(f, args.ReqPath, fullPath)
+		files = append(files, file)
 	}
 	return files, nil
 }
+
+func (d *Local) FileInfoToObj(f fs.FileInfo, reqPath string, fullPath string) model.Obj {
+	thumb := ""
+	if d.Thumbnail {
+		typeName := utils.GetFileType(f.Name())
+		if typeName == conf.IMAGE || typeName == conf.VIDEO {
+			thumb = common.GetApiUrl(nil) + stdpath.Join("/d", reqPath, f.Name())
+			thumb = utils.EncodePath(thumb, true)
+			thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(reqPath, f.Name()))
+		}
+	}
+	isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
+	var size int64
+	if !isFolder {
+		size = f.Size()
+	}
+	file := model.ObjThumb{
+		Object: model.Object{
+			Path:     filepath.Join(fullPath, f.Name()),
+			Name:     f.Name(),
+			Modified: f.ModTime(),
+			Size:     size,
+			IsFolder: isFolder,
+		},
+		Thumbnail: model.Thumbnail{
+			Thumbnail: thumb,
+		},
+	}
+	return &file
+}
+
+func (d *Local) GetMeta(ctx context.Context, path string) (model.Obj, error) {
+	f, err := os.Stat(path)
+	if err != nil {
+		return nil, err
+	}
+	file := d.FileInfoToObj(f, path, path)
+	//h := "123123"
+	//if s, ok := f.(model.SetHash); ok && file.GetHash() == ("","") {
+	//	s.SetHash(h,"SHA1")
+	//}
+	return file, nil
+}
 
 func (d *Local) Get(ctx context.Context, path string) (model.Obj, error) {
 	path = filepath.Join(d.GetRootPath(), path)
@ -147,13 +166,21 @@ func (d *Local) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 			"Content-Type": []string{"image/png"},
 		}
 		if thumbPath != nil {
-			link.FilePath = thumbPath
+			open, err := os.Open(*thumbPath)
+			if err != nil {
+				return nil, err
+			}
+			link.ReadSeekCloser = open
 		} else {
-			link.Data = io.NopCloser(buf)
-			link.Header.Set("Content-Length", strconv.Itoa(buf.Len()))
+			link.ReadSeekCloser = utils.ReadSeekerNopCloser(bytes.NewReader(buf.Bytes()))
+			//link.Header.Set("Content-Length", strconv.Itoa(buf.Len()))
 		}
 	} else {
-		link.FilePath = &fullPath
+		open, err := os.Open(fullPath)
+		if err != nil {
+			return nil, err
+		}
+		link.ReadSeekCloser = open
 	}
 	return &link, nil
 }
@ -181,7 +181,7 @@ func (d *MediaTrack) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
 	if err != nil {
 		return err
 	}
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@ -4,7 +4,10 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"github.com/alist-org/alist/v3/pkg/http_range"
+	"github.com/rclone/rclone/lib/readers"
 	"io"
+	"time"
 
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
@ -64,51 +67,41 @@ func (d *Mega) GetRoot(ctx context.Context) (model.Obj, error) {
 
 func (d *Mega) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	if node, ok := file.(*MegaNode); ok {
-		//link, err := d.c.Link(node.Node, true)
+		//down, err := d.c.NewDownload(node.Node)
 		//if err != nil {
-		//	return nil, err
+		//	return nil, fmt.Errorf("open download file failed: %w", err)
 		//}
-		//return &model.Link{URL: link}, nil
-		down, err := d.c.NewDownload(node.Node)
-		if err != nil {
-			return nil, err
-		}
-		//u := down.GetResourceUrl()
-		//u = strings.Replace(u, "http", "https", 1)
-		//return &model.Link{URL: u}, nil
-		r, w := io.Pipe()
-		go func() {
-			defer func() {
-				_ = recover()
-			}()
-			log.Debugf("chunk size: %d", down.Chunks())
-			var (
-				chunk []byte
-				err   error
-			)
-			for id := 0; id < down.Chunks(); id++ {
-				chunk, err = down.DownloadChunk(id)
-				if err != nil {
-					log.Errorf("mega down: %+v", err)
-					break
-				}
-				log.Debugf("id: %d,len: %d", id, len(chunk))
-				//_, _, err = down.ChunkLocation(id)
-				//if err != nil {
-				//	log.Errorf("mega down: %+v", err)
-				//	return
-				//}
-				//_, err = c.Write(chunk)
-				if _, err = w.Write(chunk); err != nil {
-					break
-				}
-			}
-			err = w.CloseWithError(err)
-			if err != nil {
-				log.Errorf("mega down: %+v", err)
-			}
-		}()
-		return &model.Link{Data: r}, nil
+
+		size := file.GetSize()
+		var finalClosers utils.Closers
+		resultRangeReader := func(httpRange http_range.Range) (io.ReadCloser, error) {
+			length := httpRange.Length
+			if httpRange.Length >= 0 && httpRange.Start+httpRange.Length >= size {
+				length = -1
+			}
+			var down *mega.Download
+			err := utils.Retry(3, time.Second, func() (err error) {
+				down, err = d.c.NewDownload(node.Node)
+				return err
+			})
+			if err != nil {
+				return nil, fmt.Errorf("open download file failed: %w", err)
+			}
+			oo := &openObject{
+				ctx:  ctx,
+				d:    down,
+				skip: httpRange.Start,
+			}
+			finalClosers.Add(oo)
+
+			return readers.NewLimitedReadCloser(oo, length), nil
+		}
+		resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: &finalClosers}
+		resultLink := &model.Link{
+			RangeReadCloser: *resultRangeReadCloser,
+		}
+		return resultLink, nil
 	}
 	return nil, fmt.Errorf("unable to convert dir to mega node")
 }
@@ -1,3 +1,92 @@
 package mega
+
+import (
+	"context"
+	"fmt"
+	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/t3rm1n4l/go-mega"
+	"io"
+	"sync"
+	"time"
+)
 
 // do others that not defined in Driver interface
+
+// openObject represents a download in progress
+type openObject struct {
+	ctx    context.Context
+	mu     sync.Mutex
+	d      *mega.Download
+	id     int
+	skip   int64
+	chunk  []byte
+	closed bool
+}
+
+// get the next chunk
+func (oo *openObject) getChunk(ctx context.Context) (err error) {
+	if oo.id >= oo.d.Chunks() {
+		return io.EOF
+	}
+	var chunk []byte
+	err = utils.Retry(3, time.Second, func() (err error) {
+		chunk, err = oo.d.DownloadChunk(oo.id)
+		return err
+	})
+	if err != nil {
+		return err
+	}
+	oo.id++
+	oo.chunk = chunk
+	return nil
+}
+
+// Read reads up to len(p) bytes into p.
+func (oo *openObject) Read(p []byte) (n int, err error) {
+	oo.mu.Lock()
+	defer oo.mu.Unlock()
+	if oo.closed {
+		return 0, fmt.Errorf("read on closed file")
+	}
+	// Skip data at the start if requested
+	for oo.skip > 0 {
+		_, size, err := oo.d.ChunkLocation(oo.id)
+		if err != nil {
+			return 0, err
+		}
+		if oo.skip < int64(size) {
+			break
+		}
+		oo.id++
+		oo.skip -= int64(size)
+	}
+	if len(oo.chunk) == 0 {
+		err = oo.getChunk(oo.ctx)
+		if err != nil {
+			return 0, err
+		}
+		if oo.skip > 0 {
+			oo.chunk = oo.chunk[oo.skip:]
+			oo.skip = 0
+		}
+	}
+	n = copy(p, oo.chunk)
+	oo.chunk = oo.chunk[n:]
+	return n, nil
+}
+
+// Close closed the file - MAC errors are reported here
+func (oo *openObject) Close() (err error) {
+	oo.mu.Lock()
+	defer oo.mu.Unlock()
+	if oo.closed {
+		return nil
+	}
+	err = utils.Retry(3, 500*time.Millisecond, func() (err error) {
+		return oo.d.Finish()
+	})
+	if err != nil {
+		return fmt.Errorf("failed to finish download: %w", err)
+	}
+	oo.closed = true
+	return nil
+}
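The `openObject.Read` method above advances past whole chunks before serving the first byte of a requested range. That skip arithmetic can be exercised in isolation; in this standalone sketch, the `skipChunks` helper and the fixed chunk sizes are illustrative, not part of the driver:

```go
package main

import "fmt"

// skipChunks mimics the skip loop in openObject.Read: given per-chunk sizes
// and a byte offset to skip, it returns the index of the first chunk to read
// and the remaining offset inside that chunk.
func skipChunks(sizes []int64, skip int64) (id int, rem int64) {
	for skip > 0 && id < len(sizes) {
		if skip < sizes[id] {
			break
		}
		skip -= sizes[id]
		id++
	}
	return id, skip
}

func main() {
	// Three chunks of 128 KiB; a range starting at byte 300000 lands in chunk 2.
	sizes := []int64{131072, 131072, 131072}
	id, rem := skipChunks(sizes, 300000)
	fmt.Println(id, rem) // prints: 2 37856
}
```

After the loop, `Read` trims the remaining in-chunk offset off the front of the first downloaded chunk, which is exactly what the `oo.chunk = oo.chunk[oo.skip:]` branch does.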
@@ -212,7 +212,7 @@ func (d *MoPan) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
-	file, err := utils.CreateTempFile(stream)
+	file, err := utils.CreateTempFile(stream, stream.GetSize())
 	if err != nil {
 		return nil, err
 	}
@@ -124,7 +124,7 @@ func (d *PikPak) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *PikPak) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -5,18 +5,15 @@ import (
 	"crypto/md5"
 	"crypto/sha1"
 	"encoding/hex"
-	"fmt"
 	"io"
 	"net/http"
 	"os"
-	"strconv"
 	"time"
 
 	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
-	"github.com/alist-org/alist/v3/pkg/http_range"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/go-resty/resty/v2"
 	log "github.com/sirupsen/logrus"
@@ -69,62 +66,17 @@ func (d *QuarkOrUC) Link(ctx context.Context, file model.Obj, args model.LinkArg
 	if err != nil {
 		return nil, err
 	}
-	u := resp.Data[0].DownloadUrl
-	start, end := int64(0), file.GetSize()
-	link := model.Link{
-		Header: http.Header{},
-	}
-	if rg := args.Header.Get("Range"); rg != "" {
-		parseRange, err := http_range.ParseRange(rg, file.GetSize())
-		if err != nil {
-			return nil, err
-		}
-		start, end = parseRange[0].Start, parseRange[0].Start+parseRange[0].Length
-		link.Header.Set("Content-Range", parseRange[0].ContentRange(file.GetSize()))
-		link.Header.Set("Content-Length", strconv.FormatInt(parseRange[0].Length, 10))
-		link.Status = http.StatusPartialContent
-	} else {
-		link.Header.Set("Content-Length", strconv.FormatInt(file.GetSize(), 10))
-		link.Status = http.StatusOK
-	}
-	link.Writer = func(w io.Writer) error {
-		// request 10 MB at a time
-		chunkSize := int64(10 * 1024 * 1024)
-		for start < end {
-			_end := start + chunkSize
-			if _end > end {
-				_end = end
-			}
-			_range := "bytes=" + strconv.FormatInt(start, 10) + "-" + strconv.FormatInt(_end-1, 10)
-			start = _end
-			err = func() error {
-				req, err := http.NewRequest(http.MethodGet, u, nil)
-				if err != nil {
-					return err
-				}
-				req.Header.Set("Range", _range)
-				req.Header.Set("User-Agent", ua)
-				req.Header.Set("Cookie", d.Cookie)
-				req.Header.Set("Referer", d.conf.referer)
-				resp, err := base.HttpClient.Do(req)
-				if err != nil {
-					return err
-				}
-				defer resp.Body.Close()
-				if resp.StatusCode != http.StatusPartialContent {
-					return fmt.Errorf("unexpected status code: %d", resp.StatusCode)
-				}
-				_, err = io.Copy(w, resp.Body)
-				return err
-			}()
-			if err != nil {
-				return err
-			}
-		}
-		return nil
-	}
-	return &link, nil
+	return &model.Link{
+		URL: resp.Data[0].DownloadUrl,
+		Header: http.Header{
+			"Cookie":     []string{d.Cookie},
+			"Referer":    []string{d.conf.referer},
+			"User-Agent": []string{ua},
+		},
+		Concurrency: 2,
+		PartSize:    10 * 1024 * 1024,
+	}, nil
 }
 
 func (d *QuarkOrUC) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
@@ -184,7 +136,7 @@ func (d *QuarkOrUC) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *QuarkOrUC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
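The rewritten `Link` above no longer streams byte ranges itself; it just advertises the direct URL plus hints (`Concurrency: 2`, `PartSize` of 10 MiB) and leaves the ranged requests to a generic downloader. A minimal sketch of how a downloader could turn that `PartSize` hint into `Range` headers; `partRanges` and `byteRange` are hypothetical names, not alist's actual downloader:

```go
package main

import "fmt"

type byteRange struct{ Start, Length int64 }

// partRanges splits a file of the given size into fixed-size parts, the way a
// ranged downloader would consume the PartSize hint from model.Link. The last
// part is shortened so the parts exactly cover the file.
func partRanges(size, partSize int64) []byteRange {
	var parts []byteRange
	for off := int64(0); off < size; off += partSize {
		l := partSize
		if off+l > size {
			l = size - off
		}
		parts = append(parts, byteRange{off, l})
	}
	return parts
}

func main() {
	// A 25 MiB file with 10 MiB parts yields two full parts and one 5 MiB tail.
	for _, p := range partRanges(25<<20, 10<<20) {
		fmt.Printf("bytes=%d-%d\n", p.Start, p.Start+p.Length-1)
	}
}
```

Each printed value is what would go into a `Range` request header; with `Concurrency: 2`, two such parts would be fetched at a time.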
@@ -5,7 +5,6 @@ import (
 	"os"
 	"path"
 
-	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
@@ -57,9 +56,8 @@ func (d *SFTP) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*
 		return nil, err
 	}
 	link := &model.Link{
-		Data: remoteFile,
+		ReadSeekCloser: remoteFile,
 	}
-	base.HandleRange(link, remoteFile, args.Header, file.GetSize())
 	return link, nil
 }
 
@@ -6,7 +6,6 @@ import (
 	"path/filepath"
 	"strings"
 
-	"github.com/alist-org/alist/v3/drivers/base"
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/alist-org/alist/v3/pkg/utils"
@@ -80,9 +79,8 @@ func (d *SMB) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*m
 		return nil, err
 	}
 	link := &model.Link{
-		Data: remoteFile,
+		ReadSeekCloser: remoteFile,
 	}
-	base.HandleRange(link, remoteFile, args.Header, file.GetSize())
 	d.updateLastConnTime()
 	return link, nil
 }
@@ -116,7 +116,7 @@ func (d *Terabox) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (d *Terabox) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -3,15 +3,16 @@ package terbox
 import (
 	"encoding/base64"
 	"fmt"
-	"github.com/alist-org/alist/v3/drivers/base"
-	"github.com/alist-org/alist/v3/internal/model"
-	"github.com/alist-org/alist/v3/pkg/utils"
-	"github.com/go-resty/resty/v2"
 	"net/http"
 	"net/url"
 	"strconv"
 	"strings"
 	"time"
+
+	"github.com/alist-org/alist/v3/drivers/base"
+	"github.com/alist-org/alist/v3/internal/model"
+	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/go-resty/resty/v2"
 )
 
 func (d *Terabox) request(furl string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
@@ -139,6 +140,11 @@ func (d *Terabox) linkOfficial(file model.Obj, args model.LinkArgs) (*model.Link
 	if err != nil {
 		return nil, err
 	}
+
+	if len(resp.Dlink) == 0 {
+		return nil, fmt.Errorf("fid %s no dlink found, errno: %d", file.GetID(), resp.Errno)
+	}
+
 	res, err := base.NoRedirectClient.R().SetHeader("Cookie", d.Cookie).SetHeader("User-Agent", base.UserAgent).Get(resp.Dlink[0].Dlink)
 	if err != nil {
 		return nil, err
@@ -333,7 +333,7 @@ func (xc *XunLeiCommon) Remove(ctx context.Context, obj model.Obj) error {
 }
 
 func (xc *XunLeiCommon) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
-	tempFile, err := utils.CreateTempFile(stream.GetReadCloser())
+	tempFile, err := utils.CreateTempFile(stream.GetReadCloser(), stream.GetSize())
 	if err != nil {
 		return err
 	}
@@ -52,9 +52,18 @@ func (d *Virtual) List(ctx context.Context, dir model.Obj, args model.ListArgs)
 	return res, nil
 }
 
+type nopReadSeekCloser struct {
+	io.Reader
+}
+
+func (nopReadSeekCloser) Seek(offset int64, whence int) (int64, error) {
+	return offset, nil
+}
+func (nopReadSeekCloser) Close() error { return nil }
+
 func (d *Virtual) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
 	return &model.Link{
-		Data: io.NopCloser(io.LimitReader(random.Rand, file.GetSize())),
+		ReadSeekCloser: nopReadSeekCloser{io.LimitReader(random.Rand, file.GetSize())},
 	}, nil
 }
381 drivers/weiyun/driver.go Normal file
@@ -0,0 +1,381 @@
+package weiyun
+
+import (
+	"context"
+	"io"
+	"math"
+	"net/http"
+	"os"
+	"sync"
+	"time"
+
+	"github.com/alist-org/alist/v3/drivers/base"
+	"github.com/alist-org/alist/v3/internal/driver"
+	"github.com/alist-org/alist/v3/internal/errs"
+	"github.com/alist-org/alist/v3/internal/model"
+	"github.com/alist-org/alist/v3/internal/op"
+	"github.com/alist-org/alist/v3/pkg/cron"
+	"github.com/alist-org/alist/v3/pkg/utils"
+	weiyunsdkgo "github.com/foxxorcat/weiyun-sdk-go"
+)
+
+type WeiYun struct {
+	model.Storage
+	Addition
+
+	client     *weiyunsdkgo.WeiYunClient
+	cron       *cron.Cron
+	rootFolder *Folder
+}
+
+func (d *WeiYun) Config() driver.Config {
+	return config
+}
+
+func (d *WeiYun) GetAddition() driver.Additional {
+	return &d.Addition
+}
+
+func (d *WeiYun) Init(ctx context.Context) error {
+	d.client = weiyunsdkgo.NewWeiYunClientWithRestyClient(base.RestyClient)
+	err := d.client.SetCookiesStr(d.Cookies).RefreshCtoken()
+	if err != nil {
+		return err
+	}
+
+	// callback on cookie expiry
+	d.client.SetOnCookieExpired(func(err error) {
+		d.Status = err.Error()
+		op.MustSaveDriverStorage(d)
+	})
+
+	// callback on cookie refresh
+	d.client.SetOnCookieUpload(func(c []*http.Cookie) {
+		d.Cookies = weiyunsdkgo.CookieToString(weiyunsdkgo.ClearCookie(c))
+		op.MustSaveDriverStorage(d)
+	})
+
+	// keep the QQ cookie alive
+	if d.client.LoginType() == 1 {
+		d.cron = cron.NewCron(time.Minute * 5)
+		d.cron.Do(func() {
+			d.client.KeepAlive()
+		})
+	}
+
+	// fetch the default root dirKey
+	if d.RootFolderID == "" {
+		userInfo, err := d.client.DiskUserInfoGet()
+		if err != nil {
+			return err
+		}
+		d.RootFolderID = userInfo.MainDirKey
+	}
+
+	// resolve the folder ID to find its PdirKey
+	folders, err := d.client.LibDirPathGet(d.RootFolderID)
+	if err != nil {
+		return err
+	}
+	folder := folders[len(folders)-1]
+	d.rootFolder = &Folder{
+		PFolder: &Folder{
+			Folder: weiyunsdkgo.Folder{
+				DirKey: folder.PdirKey,
+			},
+		},
+		Folder: folder.Folder,
+	}
+	return nil
+}
+
+func (d *WeiYun) Drop(ctx context.Context) error {
+	d.client = nil
+	if d.cron != nil {
+		d.cron.Stop()
+		d.cron = nil
+	}
+	return nil
+}
+
+func (d *WeiYun) GetRoot(ctx context.Context) (model.Obj, error) {
+	return d.rootFolder, nil
+}
+
+func (d *WeiYun) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
+	if folder, ok := dir.(*Folder); ok {
+		var files []model.Obj
+		for {
+			data, err := d.client.DiskDirFileList(folder.GetID(), weiyunsdkgo.WarpParamOption(
+				weiyunsdkgo.QueryFileOptionOffest(int64(len(files))),
+				weiyunsdkgo.QueryFileOptionGetType(weiyunsdkgo.FileAndDir),
+				weiyunsdkgo.QueryFileOptionSort(func() weiyunsdkgo.OrderBy {
+					switch d.OrderBy {
+					case "name":
+						return weiyunsdkgo.FileName
+					case "size":
+						return weiyunsdkgo.FileSize
+					case "updated_at":
+						return weiyunsdkgo.FileMtime
+					default:
+						return weiyunsdkgo.FileName
+					}
+				}(), d.OrderDirection == "desc"),
+			))
+			if err != nil {
+				return nil, err
+			}
+
+			if files == nil {
+				files = make([]model.Obj, 0, data.TotalDirCount+data.TotalFileCount)
+			}
+
+			for _, dir := range data.DirList {
+				files = append(files, &Folder{
+					PFolder: folder,
+					Folder:  dir,
+				})
+			}
+
+			for _, file := range data.FileList {
+				files = append(files, &File{
+					PFolder: folder,
+					File:    file,
+				})
+			}
+
+			if data.FinishFlag || len(data.DirList)+len(data.FileList) == 0 {
+				return files, nil
+			}
+		}
+	}
+	return nil, errs.NotSupport
+}
+
+func (d *WeiYun) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
+	if file, ok := file.(*File); ok {
+		data, err := d.client.DiskFileDownload(weiyunsdkgo.FileParam{PdirKey: file.GetPKey(), FileID: file.GetID()})
+		if err != nil {
+			return nil, err
+		}
+		return &model.Link{
+			URL: data.DownloadUrl,
+			Header: http.Header{
+				"Cookie": []string{data.CookieName + "=" + data.CookieValue},
+			},
+		}, nil
+	}
+	return nil, errs.NotSupport
+}
+
+func (d *WeiYun) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
+	if folder, ok := parentDir.(*Folder); ok {
+		newFolder, err := d.client.DiskDirCreate(weiyunsdkgo.FolderParam{
+			PPdirKey: folder.GetPKey(),
+			PdirKey:  folder.DirKey,
+			DirName:  dirName,
+		})
+		if err != nil {
+			return nil, err
+		}
+		return &Folder{
+			PFolder: folder,
+			Folder:  *newFolder,
+		}, nil
+	}
+	return nil, errs.NotSupport
+}
+
+func (d *WeiYun) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
+	if dstDir, ok := dstDir.(*Folder); ok {
+		dstParam := weiyunsdkgo.FolderParam{
+			PdirKey: dstDir.GetPKey(),
+			DirKey:  dstDir.GetID(),
+			DirName: dstDir.GetName(),
+		}
+		switch srcObj := srcObj.(type) {
+		case *File:
+			err := d.client.DiskFileMove(weiyunsdkgo.FileParam{
+				PPdirKey: srcObj.PFolder.GetPKey(),
+				PdirKey:  srcObj.GetPKey(),
+				FileID:   srcObj.GetID(),
+				FileName: srcObj.GetName(),
+			}, dstParam)
+			if err != nil {
+				return nil, err
+			}
+
+			return &File{
+				PFolder: dstDir,
+				File:    srcObj.File,
+			}, nil
+		case *Folder:
+			err := d.client.DiskDirMove(weiyunsdkgo.FolderParam{
+				PPdirKey: srcObj.PFolder.GetPKey(),
+				PdirKey:  srcObj.GetPKey(),
+				DirKey:   srcObj.GetID(),
+				DirName:  srcObj.GetName(),
+			}, dstParam)
+			if err != nil {
+				return nil, err
+			}
+
+			return &Folder{
+				PFolder: dstDir,
+				Folder:  srcObj.Folder,
+			}, nil
+		}
+	}
+	return nil, errs.NotSupport
+}
+
+func (d *WeiYun) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
+	switch srcObj := srcObj.(type) {
+	case *File:
+		err := d.client.DiskFileRename(weiyunsdkgo.FileParam{
+			PPdirKey: srcObj.PFolder.GetPKey(),
+			PdirKey:  srcObj.GetPKey(),
+			FileID:   srcObj.GetID(),
+			FileName: srcObj.GetName(),
+		}, newName)
+		if err != nil {
+			return nil, err
+		}
+		newFile := srcObj.File
+		newFile.FileName = newName
+		newFile.FileCtime = weiyunsdkgo.TimeStamp(time.Now())
+		return &File{
+			PFolder: srcObj.PFolder,
+			File:    newFile,
+		}, nil
+	case *Folder:
+		err := d.client.DiskDirAttrModify(weiyunsdkgo.FolderParam{
+			PPdirKey: srcObj.PFolder.GetPKey(),
+			PdirKey:  srcObj.GetPKey(),
+			DirKey:   srcObj.GetID(),
+			DirName:  srcObj.GetName(),
+		}, newName)
+		if err != nil {
+			return nil, err
+		}
+
+		newFolder := srcObj.Folder
+		newFolder.DirName = newName
+		newFolder.DirCtime = weiyunsdkgo.TimeStamp(time.Now())
+		return &Folder{
+			PFolder: srcObj.PFolder,
+			Folder:  newFolder,
+		}, nil
+	}
+	return nil, errs.NotSupport
+}
+
+func (d *WeiYun) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
+	// TODO copy obj, optional
+	return errs.NotImplement
+}
+
+func (d *WeiYun) Remove(ctx context.Context, obj model.Obj) error {
+	switch obj := obj.(type) {
+	case *File:
+		return d.client.DiskFileDelete(weiyunsdkgo.FileParam{
+			PPdirKey: obj.PFolder.GetPKey(),
+			PdirKey:  obj.GetPKey(),
+			FileID:   obj.GetID(),
+			FileName: obj.GetName(),
+		})
+	case *Folder:
+		return d.client.DiskDirDelete(weiyunsdkgo.FolderParam{
+			PPdirKey: obj.PFolder.GetPKey(),
+			PdirKey:  obj.GetPKey(),
+			DirKey:   obj.GetID(),
+			DirName:  obj.GetName(),
+		})
+	}
+	// TODO remove obj, optional
+	return errs.NotSupport
+}
+
+func (d *WeiYun) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
+	if folder, ok := dstDir.(*Folder); ok {
+		file, err := utils.CreateTempFile(stream, stream.GetSize())
+		if err != nil {
+			return nil, err
+		}
+		defer func() {
+			_ = file.Close()
+			_ = os.Remove(file.Name())
+		}()
+
+		// step 1.
+		preData, err := d.client.PreUpload(ctx, weiyunsdkgo.UpdloadFileParam{
+			PdirKey: folder.GetPKey(),
+			DirKey:  folder.DirKey,
+
+			FileName: stream.GetName(),
+			FileSize: stream.GetSize(),
+			File:     file,
+
+			ChannelCount:    4,
+			FileExistOption: 1,
+		})
+		if err != nil {
+			return nil, err
+		}
+
+		// fast upload
+		if !preData.FileExist {
+			// step 2.
+			upCtx, cancel := context.WithCancelCause(ctx)
+			var wg sync.WaitGroup
+			for _, channel := range preData.ChannelList {
+				wg.Add(1)
+				go func(channel weiyunsdkgo.UploadChannelData) {
+					defer wg.Done()
+					if utils.IsCanceled(upCtx) {
+						return
+					}
+					for {
+						channel.Len = int(math.Min(float64(stream.GetSize()-channel.Offset), float64(channel.Len)))
+						upData, err := d.client.UploadFile(upCtx, channel, preData.UploadAuthData,
+							io.NewSectionReader(file, channel.Offset, int64(channel.Len)))
+						if err != nil {
+							cancel(err)
+							return
+						}
+						// upload finished
+						if upData.UploadState != 1 {
+							return
+						}
+						channel = upData.Channel
+					}
+				}(channel)
+			}
+			wg.Wait()
+			if utils.IsCanceled(upCtx) {
+				return nil, context.Cause(upCtx)
+			}
+		}
+
+		return &File{
+			PFolder: folder,
+			File:    preData.File,
+		}, nil
+	}
+	return nil, errs.NotSupport
+}
+
+// func (d *WeiYun) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
+//	return nil, errs.NotSupport
+// }
+
+var _ driver.Driver = (*WeiYun)(nil)
+var _ driver.GetRooter = (*WeiYun)(nil)
+var _ driver.MkdirResult = (*WeiYun)(nil)
+
+// var _ driver.CopyResult = (*WeiYun)(nil)
+var _ driver.MoveResult = (*WeiYun)(nil)
+var _ driver.Remove = (*WeiYun)(nil)
+
+var _ driver.PutResult = (*WeiYun)(nil)
+var _ driver.RenameResult = (*WeiYun)(nil)
28 drivers/weiyun/meta.go Normal file
@@ -0,0 +1,28 @@
+package weiyun
+
+import (
+	"github.com/alist-org/alist/v3/internal/driver"
+	"github.com/alist-org/alist/v3/internal/op"
+)
+
+type Addition struct {
+	RootFolderID   string `json:"root_folder_id"`
+	Cookies        string `json:"cookies" required:"true"`
+	OrderBy        string `json:"order_by" type:"select" options:"name,size,updated_at" default:"name"`
+	OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
+}
+
+var config = driver.Config{
+	Name:              "WeiYun",
+	LocalSort:         false,
+	OnlyProxy:         true,
+	CheckStatus:       true,
+	Alert:             "",
+	NoOverwriteUpload: false,
+}
+
+func init() {
+	op.RegisterDriver(func() driver.Driver {
+		return &WeiYun{}
+	})
+}
39 drivers/weiyun/types.go Normal file
@@ -0,0 +1,39 @@
+package weiyun
+
+import (
+	"time"
+
+	weiyunsdkgo "github.com/foxxorcat/weiyun-sdk-go"
+)
+
+type File struct {
+	PFolder *Folder
+	weiyunsdkgo.File
+}
+
+func (f *File) GetID() string      { return f.FileID }
+func (f *File) GetSize() int64     { return f.FileSize }
+func (f *File) GetName() string    { return f.FileName }
+func (f *File) ModTime() time.Time { return time.Time(f.FileMtime) }
+func (f *File) IsDir() bool        { return false }
+func (f *File) GetPath() string    { return "" }
+
+func (f *File) GetPKey() string {
+	return f.PFolder.DirKey
+}
+
+type Folder struct {
+	PFolder *Folder
+	weiyunsdkgo.Folder
+}
+
+func (f *Folder) GetID() string      { return f.DirKey }
+func (f *Folder) GetSize() int64     { return 0 }
+func (f *Folder) GetName() string    { return f.DirName }
+func (f *Folder) ModTime() time.Time { return time.Time(f.DirMtime) }
+func (f *Folder) IsDir() bool        { return true }
+func (f *Folder) GetPath() string    { return "" }
+
+func (f *Folder) GetPKey() string {
+	return f.PFolder.DirKey
+}
@@ -3,6 +3,7 @@ package template
 import (
 	"context"
 	"fmt"
+	"strconv"
 
 	"github.com/Xhofe/wopan-sdk-go"
 	"github.com/alist-org/alist/v3/internal/driver"
@@ -15,7 +16,8 @@ import (
 type Wopan struct {
 	model.Storage
 	Addition
 	client *wopan.WoClient
+	defaultFamilyID string
 }
 
 func (d *Wopan) Config() driver.Config {
@@ -34,6 +36,11 @@ func (d *Wopan) Init(ctx context.Context) error {
 		d.RefreshToken = refreshToken
 		op.MustSaveDriverStorage(d)
 	})
+	fml, err := d.client.FamilyUserCurrentEncode()
+	if err != nil {
+		return err
+	}
+	d.defaultFamilyID = strconv.Itoa(fml.DefaultHomeId)
 	return d.client.InitData()
 }
 
@@ -81,7 +88,11 @@ func (d *Wopan) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
 }
 
 func (d *Wopan) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
-	_, err := d.client.CreateDirectory(d.getSpaceType(), parentDir.GetID(), dirName, d.FamilyID, func(req *resty.Request) {
+	familyID := d.FamilyID
+	if familyID == "" {
+		familyID = d.defaultFamilyID
+	}
+	_, err := d.client.CreateDirectory(d.getSpaceType(), parentDir.GetID(), dirName, familyID, func(req *resty.Request) {
 		req.SetContext(ctx)
 	})
 	return err
50 go.mod

@@ -5,9 +5,10 @@ go 1.20

 require (
 	github.com/SheltonZhu/115driver v1.0.14
 	github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a
+	github.com/Xhofe/rateg v0.0.0-20230728072201-251a4e1adad4
 	github.com/Xhofe/wopan-sdk-go v0.1.1
 	github.com/avast/retry-go v3.0.0+incompatible
-	github.com/aws/aws-sdk-go v1.44.262
+	github.com/aws/aws-sdk-go v1.44.316
 	github.com/blevesearch/bleve/v2 v2.3.9
 	github.com/caarlos0/env/v9 v9.0.0
 	github.com/coreos/go-oidc v2.2.1+incompatible
@@ -15,6 +16,7 @@ require (
 	github.com/disintegration/imaging v1.6.2
 	github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564
 	github.com/foxxorcat/mopan-sdk-go v0.1.1
+	github.com/foxxorcat/weiyun-sdk-go v0.1.1
 	github.com/gin-contrib/cors v1.4.0
 	github.com/gin-gonic/gin v1.9.1
 	github.com/go-resty/resty/v2 v2.7.0
@@ -22,23 +24,25 @@ require (
 	github.com/google/uuid v1.3.0
 	github.com/gorilla/websocket v1.5.0
 	github.com/hirochachacha/go-smb2 v1.1.0
-	github.com/ipfs/go-ipfs-api v0.6.0
+	github.com/ipfs/go-ipfs-api v0.6.1
 	github.com/jlaffaye/ftp v0.2.0
 	github.com/json-iterator/go v1.1.12
 	github.com/maruel/natural v1.1.0
 	github.com/natefinch/lumberjack v2.0.0+incompatible
 	github.com/pkg/errors v0.9.1
-	github.com/pkg/sftp v1.13.5
+	github.com/pkg/sftp v1.13.6-0.20230213180117-971c283182b6
 	github.com/pquerna/otp v1.4.0
+	github.com/rclone/rclone v1.63.1
 	github.com/sirupsen/logrus v1.9.3
 	github.com/spf13/cobra v1.7.0
 	github.com/t3rm1n4l/go-mega v0.0.0-20230228171823-a01a2cda13ca
 	github.com/u2takey/ffmpeg-go v0.4.1
 	github.com/upyun/go-sdk/v3 v3.0.4
-	github.com/winfsp/cgofuse v1.5.0
+	github.com/winfsp/cgofuse v1.5.1-0.20221118130120-84c0898ad2e0
 	golang.org/x/crypto v0.11.0
-	golang.org/x/image v0.9.0
-	golang.org/x/net v0.12.0
+	golang.org/x/exp v0.0.0-20230801115018-d63ba01acd4b
+	golang.org/x/image v0.10.0
+	golang.org/x/net v0.13.0
 	golang.org/x/oauth2 v0.10.0
 	gorm.io/driver/mysql v1.4.7
 	gorm.io/driver/postgres v1.4.8
@@ -47,12 +51,16 @@ require (
 )

 require (
+	cloud.google.com/go/compute v1.23.0 // indirect
 	github.com/BurntSushi/toml v0.3.1 // indirect
+	github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd // indirect
 	github.com/RoaringBitmap/roaring v1.2.3 // indirect
+	github.com/abbot/go-http-auth v0.4.0 // indirect
 	github.com/aead/ecdh v0.2.0 // indirect
 	github.com/aliyun/aliyun-oss-go-sdk v2.2.5+incompatible // indirect
 	github.com/andreburgaud/crypt2go v1.1.0 // indirect
 	github.com/benbjohnson/clock v1.3.0 // indirect
+	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/bits-and-blooms/bitset v1.2.0 // indirect
 	github.com/blevesearch/bleve_index_api v1.0.5 // indirect
 	github.com/blevesearch/geo v0.1.17 // indirect
@@ -72,13 +80,17 @@ require (
 	github.com/bluele/gcache v0.0.2 // indirect
 	github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect
 	github.com/bytedance/sonic v1.9.1 // indirect
+	github.com/cespare/xxhash/v2 v2.2.0 // indirect
 	github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
+	github.com/coreos/go-semver v0.3.1 // indirect
 	github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 // indirect
 	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
 	github.com/gabriel-vasile/mimetype v1.4.2 // indirect
 	github.com/gaoyb7/115drive-webdav v0.1.8 // indirect
 	github.com/geoffgarside/ber v1.1.0 // indirect
 	github.com/gin-contrib/sse v0.1.0 // indirect
+	github.com/go-chi/chi/v5 v5.0.10 // indirect
+	github.com/go-ole/go-ole v1.2.6 // indirect
 	github.com/go-playground/locales v0.14.1 // indirect
 	github.com/go-playground/universal-translator v0.18.1 // indirect
 	github.com/go-playground/validator/v10 v10.14.0 // indirect
@@ -98,14 +110,18 @@ require (
 	github.com/jinzhu/inflection v1.0.0 // indirect
 	github.com/jinzhu/now v1.1.5 // indirect
 	github.com/jmespath/go-jmespath v0.4.0 // indirect
-	github.com/klauspost/cpuid/v2 v2.2.4 // indirect
+	github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004 // indirect
+	github.com/klauspost/cpuid/v2 v2.2.5 // indirect
 	github.com/kr/fs v0.1.0 // indirect
 	github.com/leodido/go-urn v1.2.4 // indirect
 	github.com/libp2p/go-buffer-pool v0.1.0 // indirect
 	github.com/libp2p/go-flow-metrics v0.1.0 // indirect
 	github.com/libp2p/go-libp2p v0.26.3 // indirect
+	github.com/lufia/plan9stats v0.0.0-20230326075908-cb1d2100619a // indirect
+	github.com/mattn/go-colorable v0.1.13 // indirect
 	github.com/mattn/go-isatty v0.0.19 // indirect
 	github.com/mattn/go-sqlite3 v1.14.15 // indirect
+	github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/mitchellh/go-homedir v1.1.0 // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -120,23 +136,39 @@ require (
 	github.com/multiformats/go-multihash v0.2.1 // indirect
 	github.com/multiformats/go-multistream v0.4.1 // indirect
 	github.com/multiformats/go-varint v0.0.7 // indirect
+	github.com/ncw/swift/v2 v2.0.2 // indirect
 	github.com/orzogc/fake115uploader v0.3.3-0.20221009101310-08b764073b77 // indirect
 	github.com/pelletier/go-toml/v2 v2.0.8 // indirect
 	github.com/pierrec/lz4/v4 v4.1.17 // indirect
+	github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b // indirect
 	github.com/pquerna/cachecontrol v0.1.0 // indirect
+	github.com/prometheus/client_golang v1.16.0 // indirect
+	github.com/prometheus/client_model v0.4.0 // indirect
+	github.com/prometheus/common v0.44.0 // indirect
+	github.com/prometheus/procfs v0.11.1 // indirect
+	github.com/rfjakob/eme v1.1.2 // indirect
+	github.com/shirou/gopsutil/v3 v3.23.7 // indirect
+	github.com/shoenig/go-m1cpu v0.1.6 // indirect
 	github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e // indirect
 	github.com/spaolacci/murmur3 v1.1.0 // indirect
 	github.com/spf13/pflag v1.0.5 // indirect
+	github.com/tklauser/go-sysconf v0.3.11 // indirect
+	github.com/tklauser/numcpus v0.6.1 // indirect
 	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
 	github.com/u2takey/go-utils v0.3.1 // indirect
 	github.com/ugorji/go/codec v1.2.11 // indirect
-	github.com/whyrusleeping/tar-utils v0.0.0-20180509141711-8c6c8ba81d5c // indirect
+	github.com/yusufpapurcu/wmi v1.2.3 // indirect
 	go.etcd.io/bbolt v1.3.7 // indirect
 	golang.org/x/arch v0.3.0 // indirect
+	golang.org/x/sync v0.3.0 // indirect
 	golang.org/x/sys v0.10.0 // indirect
+	golang.org/x/term v0.10.0 // indirect
 	golang.org/x/text v0.11.0 // indirect
-	golang.org/x/time v0.0.0-20220922220347-f3bd1da661af // indirect
+	golang.org/x/time v0.3.0 // indirect
+	google.golang.org/api v0.134.0 // indirect
 	google.golang.org/appengine v1.6.7 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5 // indirect
+	google.golang.org/grpc v1.57.0 // indirect
 	google.golang.org/protobuf v1.31.0 // indirect
 	gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
 	gopkg.in/square/go-jose.v2 v2.6.0 // indirect
133
go.sum
133
go.sum
@ -1,13 +1,24 @@
|
|||||||
|
cloud.google.com/go v0.110.2 h1:sdFPBr6xG9/wkBbfhmUz/JmZC7X6LavQgcrVINrKiVA=
|
||||||
|
cloud.google.com/go/compute v1.23.0 h1:tP41Zoavr8ptEqaW6j+LQOnyBBhO7OkOMAGrgLopTwY=
|
||||||
|
cloud.google.com/go/compute v1.23.0/go.mod h1:4tCnrn48xsqlwSAiLf1HXMQk8CONslYbdiEZc9FEIbM=
|
||||||
|
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
|
||||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||||
|
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd h1:nzE1YQBdx1bq9IlZinHa+HVffy+NmVRoKr+wHN8fpLE=
|
||||||
|
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd/go.mod h1:C8yoIfvESpM3GD07OCHU7fqI7lhwyZ2Td1rbNbTAhnc=
|
||||||
github.com/RoaringBitmap/roaring v1.2.3 h1:yqreLINqIrX22ErkKI0vY47/ivtJr6n+kMhVOVmhWBY=
|
github.com/RoaringBitmap/roaring v1.2.3 h1:yqreLINqIrX22ErkKI0vY47/ivtJr6n+kMhVOVmhWBY=
|
||||||
github.com/RoaringBitmap/roaring v1.2.3/go.mod h1:plvDsJQpxOC5bw8LRteu/MLWHsHez/3y6cubLI4/1yE=
|
github.com/RoaringBitmap/roaring v1.2.3/go.mod h1:plvDsJQpxOC5bw8LRteu/MLWHsHez/3y6cubLI4/1yE=
|
||||||
github.com/SheltonZhu/115driver v1.0.14 h1:uW3dl8J9KDMw+3gPxQdhTysoGhw0/uI1484GT9xhfU4=
|
github.com/SheltonZhu/115driver v1.0.14 h1:uW3dl8J9KDMw+3gPxQdhTysoGhw0/uI1484GT9xhfU4=
|
||||||
github.com/SheltonZhu/115driver v1.0.14/go.mod h1:00ixivHH5HqDj4S7kAWbkuUrjtsJTxc7cGv5RMw3RVs=
|
github.com/SheltonZhu/115driver v1.0.14/go.mod h1:00ixivHH5HqDj4S7kAWbkuUrjtsJTxc7cGv5RMw3RVs=
|
||||||
|
github.com/Unknwon/goconfig v1.0.0 h1:9IAu/BYbSLQi8puFjUQApZTxIHqSwrj5d8vpP8vTq4A=
|
||||||
github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a h1:RenIAa2q4H8UcS/cqmwdT1WCWIAH5aumP8m8RpbqVsE=
|
github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a h1:RenIAa2q4H8UcS/cqmwdT1WCWIAH5aumP8m8RpbqVsE=
|
||||||
github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a/go.mod h1:sSBbaOg90XwWKtpT56kVujF0bIeVITnPlssLclogS04=
|
github.com/Xhofe/go-cache v0.0.0-20220723083548-714439c8af9a/go.mod h1:sSBbaOg90XwWKtpT56kVujF0bIeVITnPlssLclogS04=
|
||||||
|
github.com/Xhofe/rateg v0.0.0-20230728072201-251a4e1adad4 h1:WnvifFgYyogPz2ZFvaVLk4gI/Co0paF92FmxSR6U1zY=
|
||||||
|
github.com/Xhofe/rateg v0.0.0-20230728072201-251a4e1adad4/go.mod h1:8pWlL2rpusvx7Xa6yYaIWOJ8bR3gPdFBUT7OystyGOY=
|
||||||
github.com/Xhofe/wopan-sdk-go v0.1.1 h1:dSrTxNYclqNuo9libjtC+R6C4RCen/inh/dUXd12vpM=
|
github.com/Xhofe/wopan-sdk-go v0.1.1 h1:dSrTxNYclqNuo9libjtC+R6C4RCen/inh/dUXd12vpM=
|
||||||
github.com/Xhofe/wopan-sdk-go v0.1.1/go.mod h1:xWcUS7PoFLDD9gy2BK2VQfilEsZngLMz2Vkx3oF2zJY=
|
github.com/Xhofe/wopan-sdk-go v0.1.1/go.mod h1:xWcUS7PoFLDD9gy2BK2VQfilEsZngLMz2Vkx3oF2zJY=
|
||||||
|
github.com/abbot/go-http-auth v0.4.0 h1:QjmvZ5gSC7jm3Zg54DqWE/T5m1t2AfDu6QlXJT0EVT0=
|
||||||
|
github.com/abbot/go-http-auth v0.4.0/go.mod h1:Cz6ARTIzApMJDzh5bRMSUou6UMSp0IEXg9km/ci7TJM=
|
||||||
github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ=
|
github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ=
|
||||||
github.com/aead/ecdh v0.2.0/go.mod h1:a9HHtXuSo8J1Js1MwLQx2mBhkXMT6YwUmVVEY4tTB8U=
|
github.com/aead/ecdh v0.2.0/go.mod h1:a9HHtXuSo8J1Js1MwLQx2mBhkXMT6YwUmVVEY4tTB8U=
|
||||||
github.com/aliyun/aliyun-oss-go-sdk v2.2.5+incompatible h1:QoRMR0TCctLDqBCMyOu1eXdZyMw3F7uGA9qPn2J4+R8=
|
github.com/aliyun/aliyun-oss-go-sdk v2.2.5+incompatible h1:QoRMR0TCctLDqBCMyOu1eXdZyMw3F7uGA9qPn2J4+R8=
|
||||||
@ -17,10 +28,12 @@ github.com/andreburgaud/crypt2go v1.1.0/go.mod h1:4qhZPzarj1dCIRmCkpdgCklwp+hBq9
|
|||||||
github.com/avast/retry-go v3.0.0+incompatible h1:4SOWQ7Qs+oroOTQOYnAHqelpCO0biHSxpiH9JdtuBj0=
|
github.com/avast/retry-go v3.0.0+incompatible h1:4SOWQ7Qs+oroOTQOYnAHqelpCO0biHSxpiH9JdtuBj0=
|
||||||
github.com/avast/retry-go v3.0.0+incompatible/go.mod h1:XtSnn+n/sHqQIpZ10K1qAevBhOOCWBLXXy3hyiqqBrY=
|
github.com/avast/retry-go v3.0.0+incompatible/go.mod h1:XtSnn+n/sHqQIpZ10K1qAevBhOOCWBLXXy3hyiqqBrY=
|
||||||
github.com/aws/aws-sdk-go v1.38.20/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
github.com/aws/aws-sdk-go v1.38.20/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
||||||
github.com/aws/aws-sdk-go v1.44.262 h1:gyXpcJptWoNkK+DiAiaBltlreoWKQXjAIh6FRh60F+I=
|
github.com/aws/aws-sdk-go v1.44.316 h1:UC3alCEyzj2XU13ZFGIOHW3yjCNLGTIGVauyetl9fwE=
|
||||||
github.com/aws/aws-sdk-go v1.44.262/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
|
github.com/aws/aws-sdk-go v1.44.316/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
|
||||||
github.com/benbjohnson/clock v1.3.0 h1:ip6w0uFQkncKQ979AypyG0ER7mqUSBdKLOgAle/AT8A=
|
github.com/benbjohnson/clock v1.3.0 h1:ip6w0uFQkncKQ979AypyG0ER7mqUSBdKLOgAle/AT8A=
|
||||||
github.com/benbjohnson/clock v1.3.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
|
github.com/benbjohnson/clock v1.3.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
|
||||||
|
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||||
|
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||||
github.com/bits-and-blooms/bitset v1.2.0 h1:Kn4yilvwNtMACtf1eYDlG8H77R07mZSPbMjLyS07ChA=
|
github.com/bits-and-blooms/bitset v1.2.0 h1:Kn4yilvwNtMACtf1eYDlG8H77R07mZSPbMjLyS07ChA=
|
||||||
github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
|
github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
|
||||||
github.com/blevesearch/bleve/v2 v2.3.9 h1:pUMvK0mxAexqasZcVj8lazmWnEW5XiV0tASIqANiNTQ=
|
github.com/blevesearch/bleve/v2 v2.3.9 h1:pUMvK0mxAexqasZcVj8lazmWnEW5XiV0tASIqANiNTQ=
|
||||||
@ -64,12 +77,16 @@ github.com/bytedance/sonic v1.9.1 h1:6iJ6NqdoxCDr6mbY8h18oSO+cShGSMRGCEo7F2h0x8s
|
|||||||
github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
|
github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
|
||||||
github.com/caarlos0/env/v9 v9.0.0 h1:SI6JNsOA+y5gj9njpgybykATIylrRMklbs5ch6wO6pc=
|
github.com/caarlos0/env/v9 v9.0.0 h1:SI6JNsOA+y5gj9njpgybykATIylrRMklbs5ch6wO6pc=
|
||||||
github.com/caarlos0/env/v9 v9.0.0/go.mod h1:ye5mlCVMYh6tZ+vCgrs/B95sj88cg5Tlnc0XIzgZ020=
|
github.com/caarlos0/env/v9 v9.0.0/go.mod h1:ye5mlCVMYh6tZ+vCgrs/B95sj88cg5Tlnc0XIzgZ020=
|
||||||
|
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
|
||||||
|
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||||
github.com/cheekybits/is v0.0.0-20150225183255-68e9c0620927 h1:SKI1/fuSdodxmNNyVBR8d7X/HuLnRpvvFO0AgyQk764=
|
github.com/cheekybits/is v0.0.0-20150225183255-68e9c0620927 h1:SKI1/fuSdodxmNNyVBR8d7X/HuLnRpvvFO0AgyQk764=
|
||||||
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
|
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
|
||||||
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
|
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
|
||||||
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
|
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
|
||||||
github.com/coreos/go-oidc v2.2.1+incompatible h1:mh48q/BqXqgjVHpy2ZY7WnWAbenxRjsz9N1i1YxjHAk=
|
github.com/coreos/go-oidc v2.2.1+incompatible h1:mh48q/BqXqgjVHpy2ZY7WnWAbenxRjsz9N1i1YxjHAk=
|
||||||
github.com/coreos/go-oidc v2.2.1+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
|
github.com/coreos/go-oidc v2.2.1+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
|
||||||
|
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
|
||||||
|
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
|
||||||
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
|
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
|
||||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 h1:HVTnpeuvF6Owjd5mniCL8DEXo7uYXdQEmOP4FJbV5tg=
|
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 h1:HVTnpeuvF6Owjd5mniCL8DEXo7uYXdQEmOP4FJbV5tg=
|
||||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3/go.mod h1:p1d6YEZWvFzEh4KLyvBcVSnrfNDDvK2zfK/4x2v/4pE=
|
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3/go.mod h1:p1d6YEZWvFzEh4KLyvBcVSnrfNDDvK2zfK/4x2v/4pE=
|
||||||
@ -88,6 +105,8 @@ github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564 h1:I6KUy4CI6hHjqnyJL
|
|||||||
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564/go.mod h1:yekO+3ZShy19S+bsmnERmznGy9Rfg6dWWWpiGJjNAz8=
|
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564/go.mod h1:yekO+3ZShy19S+bsmnERmznGy9Rfg6dWWWpiGJjNAz8=
|
||||||
github.com/foxxorcat/mopan-sdk-go v0.1.1 h1:JYMeCu4PFpqgHapvOz4jPMT7CxR6Yebu3aWkgGMDeIU=
|
github.com/foxxorcat/mopan-sdk-go v0.1.1 h1:JYMeCu4PFpqgHapvOz4jPMT7CxR6Yebu3aWkgGMDeIU=
|
||||||
github.com/foxxorcat/mopan-sdk-go v0.1.1/go.mod h1:LpBPmwezjQNyhaNo3HGzgFtQbhvxmF5ZybSVuKi7OVA=
|
github.com/foxxorcat/mopan-sdk-go v0.1.1/go.mod h1:LpBPmwezjQNyhaNo3HGzgFtQbhvxmF5ZybSVuKi7OVA=
|
||||||
|
github.com/foxxorcat/weiyun-sdk-go v0.1.1 h1:m4qcJk0adr+bpM4es2zCqP3jhMEwEPyTMGICsamygEQ=
|
||||||
|
github.com/foxxorcat/weiyun-sdk-go v0.1.1/go.mod h1:AKsLFuWhWlClpGrg1zxTdMejugZEZtmhIuElAk3W83s=
|
||||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||||
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
|
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
|
||||||
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
|
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
|
||||||
@ -102,7 +121,11 @@ github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm
|
|||||||
github.com/gin-gonic/gin v1.8.1/go.mod h1:ji8BvRH1azfM+SYow9zQ6SZMvR8qOMZHmsCuWR9tTTk=
|
github.com/gin-gonic/gin v1.8.1/go.mod h1:ji8BvRH1azfM+SYow9zQ6SZMvR8qOMZHmsCuWR9tTTk=
|
||||||
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
|
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
|
||||||
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
|
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
|
||||||
|
github.com/go-chi/chi/v5 v5.0.10 h1:rLz5avzKpjqxrYwXNfmjkrYYXOyLJd37pz53UFHC6vk=
|
||||||
|
github.com/go-chi/chi/v5 v5.0.10/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
|
||||||
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
|
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
|
||||||
|
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
|
||||||
|
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
|
||||||
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
|
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
|
||||||
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
|
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
|
||||||
github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
|
github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
|
||||||
@ -127,6 +150,8 @@ github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOW
|
|||||||
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
||||||
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551 h1:gtexQ/VGyN+VVFRXSFiguSNcXmS6rkKT+X7FdIrTtfo=
|
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551 h1:gtexQ/VGyN+VVFRXSFiguSNcXmS6rkKT+X7FdIrTtfo=
|
||||||
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
|
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
|
||||||
|
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
|
||||||
|
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
||||||
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
|
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
|
||||||
@ -134,11 +159,16 @@ github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiu
|
|||||||
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
|
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
|
||||||
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||||
|
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||||
|
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||||
|
github.com/google/s2a-go v0.1.4 h1:1kZ/sQM3srePvKs3tXAvQzo66XfcReoqFpIpIccE7Oc=
|
||||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||||
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
|
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
|
||||||
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||||
|
github.com/googleapis/enterprise-certificate-proxy v0.2.5 h1:UR4rDjcgpgEnqpIEvkiqTYKBCKLNmlge2eVjoZfySzM=
|
||||||
|
github.com/googleapis/gax-go/v2 v2.12.0 h1:A+gCJKdRfqXkr+BIRGtZLibNXf0m1f9E4HG56etFpas=
|
||||||
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
|
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
|
||||||
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||||
@ -155,8 +185,8 @@ github.com/ipfs/boxo v0.8.0 h1:UdjAJmHzQHo/j3g3b1bAcAXCj/GM6iTwvSlBDvPBNBs=
|
|||||||
github.com/ipfs/boxo v0.8.0/go.mod h1:RIsi4CnTyQ7AUsNn5gXljJYZlQrHBMnJp94p73liFiA=
|
github.com/ipfs/boxo v0.8.0/go.mod h1:RIsi4CnTyQ7AUsNn5gXljJYZlQrHBMnJp94p73liFiA=
|
||||||
github.com/ipfs/go-cid v0.4.0 h1:a4pdZq0sx6ZSxbCizebnKiMCx/xI/aBBFlB73IgH4rA=
|
github.com/ipfs/go-cid v0.4.0 h1:a4pdZq0sx6ZSxbCizebnKiMCx/xI/aBBFlB73IgH4rA=
|
||||||
github.com/ipfs/go-cid v0.4.0/go.mod h1:uQHwDeX4c6CtyrFwdqyhpNcxVewur1M7l7fNU7LKwZk=
|
github.com/ipfs/go-cid v0.4.0/go.mod h1:uQHwDeX4c6CtyrFwdqyhpNcxVewur1M7l7fNU7LKwZk=
|
||||||
github.com/ipfs/go-ipfs-api v0.6.0 h1:JARgG0VTbjyVhO5ZfesnbXv9wTcMvoKRBLF1SzJqzmg=
|
github.com/ipfs/go-ipfs-api v0.6.1 h1:nK5oeFOdMh1ogT+GCOcyBFOOcFGNuudSb1rg9YDyAKE=
|
||||||
github.com/ipfs/go-ipfs-api v0.6.0/go.mod h1:iDC2VMwN9LUpQV/GzEeZ2zNqd8NUdRmWcFM+K/6odf0=
|
github.com/ipfs/go-ipfs-api v0.6.1/go.mod h1:8pl+ZMF2LX42szbqGbpOBEiI1/rYaImvTvJtG0g+rL4=
|
||||||
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
|
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
|
||||||
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
|
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
|
||||||
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk=
|
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk=
|
||||||
@ -178,18 +208,20 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC
|
|||||||
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||||
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||||
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||||
|
github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004 h1:G+9t9cEtnC9jFiTxyptEKuNIAbiN5ZCQzX2a74lj3xg=
|
||||||
|
github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004/go.mod h1:KmHnJWQrgEvbuy0vcvj00gtMqbvNn1L+3YUZLK/B92c=
|
||||||
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||||
 github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
 github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
-github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
+github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
-github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
+github.com/klauspost/cpuid/v2 v2.2.5/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
 github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
 github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
 github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -203,13 +235,21 @@ github.com/libp2p/go-flow-metrics v0.1.0 h1:0iPhMI8PskQwzh57jB9WxIuIOQ0r+15PChFG
 github.com/libp2p/go-flow-metrics v0.1.0/go.mod h1:4Xi8MX8wj5aWNDAZttg6UPmc0ZrnFNsMtpsYUClFtro=
 github.com/libp2p/go-libp2p v0.26.3 h1:6g/psubqwdaBqNNoidbRKSTBEYgaOuKBhHl8Q5tO+PM=
 github.com/libp2p/go-libp2p v0.26.3/go.mod h1:x75BN32YbwuY0Awm2Uix4d4KOz+/4piInkp4Wr3yOo8=
+github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
+github.com/lufia/plan9stats v0.0.0-20230326075908-cb1d2100619a h1:N9zuLhTvBSRt0gWSiJswwQ2HqDmtX/ZCDJURnKUt1Ik=
+github.com/lufia/plan9stats v0.0.0-20230326075908-cb1d2100619a/go.mod h1:JKx41uQRwqlTZabZc+kILPrO/3jlKnQ2Z8b7YiVw5cE=
 github.com/maruel/natural v1.1.0 h1:2z1NgP/Vae+gYrtC0VuvrTJ6U35OuyUqDdfluLqMWuQ=
 github.com/maruel/natural v1.1.0/go.mod h1:eFVhYCcUOfZFxXoDZam8Ktya72wa79fNC3lc/leA0DQ=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
 github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
 github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-sqlite3 v1.14.15 h1:vfoHhTN1af61xCRSWzFIWzx2YskyMTwHLrExkBOjvxI=
 github.com/mattn/go-sqlite3 v1.14.15/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
+github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
+github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
 github.com/minio/sha256-simd v1.0.0 h1:v1ta+49hkWZyvaKwrQB8elexRqm6Y0aMLjCNsrYxo6g=
 github.com/minio/sha256-simd v1.0.0/go.mod h1:OuYzVNI5vcoYIAmbIvHPl3N3jUzVedXbKy5RFepssQM=
 github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
@@ -243,6 +283,8 @@ github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/n
 github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
 github.com/natefinch/lumberjack v2.0.0+incompatible h1:4QJd3OLAMgj7ph+yZTuX13Ld4UpgHp07nNdFX7mqFfM=
 github.com/natefinch/lumberjack v2.0.0+incompatible/go.mod h1:Wi9p2TTF5DG5oU+6YfsmYQpsTIOm0B1VNzQg9Mw6nPk=
+github.com/ncw/swift/v2 v2.0.2 h1:jx282pcAKFhmoZBSdMcCRFn9VWkoBIRsCpe+yZq7vEk=
+github.com/ncw/swift/v2 v2.0.2/go.mod h1:z0A9RVdYPjNjXVo2pDOPxZ4eu3oarO1P91fTItcb+Kg=
 github.com/orzogc/fake115uploader v0.3.3-0.20221009101310-08b764073b77 h1:dg/EaaJLPIg4xn2kaZil7Ax3wfoxcFXaBwyOTlcz5AI=
 github.com/orzogc/fake115uploader v0.3.3-0.20221009101310-08b764073b77/go.mod h1:FD9a09Vw07CSMTdT0Y7ttStOa1WZsnPBslliMw2DkeM=
 github.com/panjf2000/ants/v2 v2.4.2/go.mod h1:f6F0NZVFsGCp5A7QW/Zj/m92atWwOkY0OIhFxRNFr4A=
@@ -255,23 +297,46 @@ github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsK
 github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/sftp v1.13.5 h1:a3RLUqkyjYRtBTZJZ1VRrKbN3zhuPLlUc3sphVz81go=
+github.com/pkg/sftp v1.13.6-0.20230213180117-971c283182b6 h1:5TvW1dv00Y13njmQ1AWkxSWtPkwE7ZEF6yDuv9q+Als=
-github.com/pkg/sftp v1.13.5/go.mod h1:wHDZ0IZX6JcBYRK1TH9bcVq8G7TLpVHYIGJRFnmPfxg=
+github.com/pkg/sftp v1.13.6-0.20230213180117-971c283182b6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
+github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
+github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b h1:0LFwY6Q3gMACTjAbMZBjXAqTOzOwFaj2Ld6cjeQ7Rig=
+github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
 github.com/pquerna/cachecontrol v0.1.0 h1:yJMy84ti9h/+OEWa752kBTKv4XC30OtVVHYv/8cTqKc=
 github.com/pquerna/cachecontrol v0.1.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQnrHV5K9mBcUI=
 github.com/pquerna/otp v1.4.0 h1:wZvl1TIVxKRThZIBiwOOHOGP/1+nZyWBil9Y2XNEDzg=
 github.com/pquerna/otp v1.4.0/go.mod h1:dkJfzwRKNiegxyNb54X/3fLwhCynbMspSyWKnvi1AEg=
+github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
+github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
+github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
+github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
+github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
+github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
+github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
+github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
+github.com/rclone/rclone v1.63.1 h1:iITCUNBfAXnguHjRPFq+w/gGIW0L0las78h4H5CH2Ms=
+github.com/rclone/rclone v1.63.1/go.mod h1:eUQaKsf1wJfHKB0RDoM8RaPAeRB2eI/Qw+Vc9Ho5FGM=
+github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
+github.com/rfjakob/eme v1.1.2/go.mod h1:cVvpasglm/G3ngEfcfT/Wt0GwhkuO32pf/poW6Nyk1k=
 github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
-github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
 github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
+github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/shirou/gopsutil/v3 v3.23.7 h1:C+fHO8hfIppoJ1WdsVm1RoI0RwXoNdfTK7yWXV0wVj4=
+github.com/shirou/gopsutil/v3 v3.23.7/go.mod h1:c4gnmoRC0hQuaLqvxnx1//VXQ0Ms/X9UnJF8pddY5z4=
+github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
+github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
+github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
+github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
 github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
 github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
 github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
 github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e h1:MRM5ITcdelLK2j1vwZ3Je0FKVCfqOLp5zO6trqMLYs0=
 github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e/go.mod h1:XV66xRDqSt+GTGFMVlhk3ULuV0y9ZmzeVGR4mloJI3M=
+github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 h1:JIAuq3EEf9cgbU6AtGPK4CTG3Zf6CKMNqf0MHTggAUA=
 github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
 github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
 github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
@@ -293,10 +358,16 @@ github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1F
 github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
 github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
 github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
-github.com/stretchr/testify v1.8.3 h1:RP3t2pwF7cMEbC1dqtB6poj3niw/9gnV4Cjg5oW5gtY=
 github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/t3rm1n4l/go-mega v0.0.0-20230228171823-a01a2cda13ca h1:I9rVnNXdIkij4UvMT7OmKhH9sOIvS8iXkxfPdnn9wQA=
 github.com/t3rm1n4l/go-mega v0.0.0-20230228171823-a01a2cda13ca/go.mod h1:suDIky6yrK07NnaBadCB4sS0CqFOvUK91lH7CR+JlDA=
+github.com/tklauser/go-sysconf v0.3.11 h1:89WgdJhk5SNwJfu+GKyYveZ4IaJ7xAkecBo+KdJV0CM=
+github.com/tklauser/go-sysconf v0.3.11/go.mod h1:GqXfhXY3kiPa0nAXPDIQIWzJbMCB7AmcWpGR8lSZfqI=
+github.com/tklauser/numcpus v0.6.0/go.mod h1:FEZLMke0lhOUG6w2JadTzp0a+Nl8PF/GFkQ5UVIcaL4=
+github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
+github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
 github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
 github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
 github.com/u2takey/ffmpeg-go v0.4.1 h1:l5ClIwL3N2LaH1zF3xivb3kP2HW95eyG5xhHE1JdZ9Y=
@@ -310,13 +381,14 @@ github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZ
 github.com/upyun/go-sdk/v3 v3.0.4 h1:2DCJa/Yi7/3ZybT9UCPATSzvU3wpPPxhXinNlb1Hi8Q=
 github.com/upyun/go-sdk/v3 v3.0.4/go.mod h1:P/SnuuwhrIgAVRd/ZpzDWqCsBAf/oHg7UggbAxyZa0E=
 github.com/valyala/fastjson v1.6.3 h1:tAKFnnwmeMGPbwJ7IwxcTPCNr3uIzoIj3/Fh90ra4xc=
-github.com/whyrusleeping/tar-utils v0.0.0-20180509141711-8c6c8ba81d5c h1:GGsyl0dZ2jJgVT+VvWBf/cNijrHRhkrTjkmp5wg7li0=
+github.com/winfsp/cgofuse v1.5.1-0.20221118130120-84c0898ad2e0 h1:j3un8DqYvvAOqKI5OPz+/RRVhDFipbPKI4t2Uk5RBJw=
-github.com/whyrusleeping/tar-utils v0.0.0-20180509141711-8c6c8ba81d5c/go.mod h1:xxcJeBb7SIUl/Wzkz1eVKJE/CB34YNrqX2TQI6jY9zs=
+github.com/winfsp/cgofuse v1.5.1-0.20221118130120-84c0898ad2e0/go.mod h1:uxjoF2jEYT3+x+vC2KJddEGdk/LU8pRowXmyVMHSV5I=
-github.com/winfsp/cgofuse v1.5.0 h1:MsBP7Mi/LiJf/7/F3O/7HjjR009ds6KCdqXzKpZSWxI=
-github.com/winfsp/cgofuse v1.5.0/go.mod h1:h3awhoUOcn2VYVKCwDaYxSLlZwnyK+A8KaDoLUp2lbU=
 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+github.com/yusufpapurcu/wmi v1.2.3 h1:E1ctvB7uKFMOJw3fdOW32DwGE9I7t++CRUEMKvFoFiw=
+github.com/yusufpapurcu/wmi v1.2.3/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
 go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ=
 go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
+go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
 gocv.io/x/gocv v0.25.0/go.mod h1:Rar2PS6DV+T4FL+PM535EImD/h13hGVaHhnCu1xarBs=
 golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
 golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
@@ -333,9 +405,11 @@ golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw
 golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
 golang.org/x/crypto v0.11.0 h1:6Ewdq3tDic1mg5xRO4milcWCfMVQhI4NkqWWvqejpuA=
 golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
+golang.org/x/exp v0.0.0-20230801115018-d63ba01acd4b h1:r+vk0EmXNmekl0S0BascoeeoHk/L7wmaW2QF90K+kYI=
+golang.org/x/exp v0.0.0-20230801115018-d63ba01acd4b/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
 golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
-golang.org/x/image v0.9.0 h1:QrzfX26snvCM20hIhBwuHI/ThTg18b/+kcKdXHvnR+g=
+golang.org/x/image v0.10.0 h1:gXjUUtwtx5yOE0VKWq1CH4IJAClq4UGgUA3i+rpON9M=
-golang.org/x/image v0.9.0/go.mod h1:jtrku+n79PfroUbvDdeUWMAI+heR786BofxrbiSF+J0=
+golang.org/x/image v0.10.0/go.mod h1:jtrku+n79PfroUbvDdeUWMAI+heR786BofxrbiSF+J0=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
 golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -348,32 +422,38 @@ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
 golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
 golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.12.0 h1:cfawfvKITfUsFCeJIHJrbSxpeu/E81khclypR0GVT50=
+golang.org/x/net v0.13.0 h1:Nvo8UFsZ8X3BhAC9699Z1j7XQ3rsZnUUm7jfBEk1ueY=
-golang.org/x/net v0.12.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
+golang.org/x/net v0.13.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
 golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8=
 golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220702020025-31831981b65f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
 golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -381,6 +461,7 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
 golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
 golang.org/x/term v0.10.0 h1:3R7pNqamzBraeqj/Tj8qt1aQ2HpmlC+Cx/qL/7hn4/c=
+golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@@ -392,8 +473,8 @@ golang.org/x/text v0.11.0 h1:LAntKIrcmeSKERyiOh0XMV39LXS8IE9UL2yP7+f5ij4=
 golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20220722155302-e5dcc9cfc0b9/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
-golang.org/x/time v0.0.0-20220922220347-f3bd1da661af h1:Yx9k8YCG3dvF87UAn2tu2HQLf2dt/eR1bXxpLMWeH+Y=
+golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
-golang.org/x/time v0.0.0-20220922220347-f3bd1da661af/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -401,8 +482,14 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
 golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/api v0.134.0 h1:ktL4Goua+UBgoP1eL1/60LwZJqa1sIzkLmvoR3hR6Gw=
+google.golang.org/api v0.134.0/go.mod h1:sjRL3UnjTx5UqNQS9EWr9N8p7xbHpy1k0XGRLCf3Spk=
 google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
 google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5 h1:eSaPbMR4T7WfH9FvABk36NBMacoTUKdWCvV0dx+KfOg=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5/go.mod h1:zBEcrKX2ZOcEkHWxBPAIvYUWOKKMIhYcmNiUIu2ji3I=
+google.golang.org/grpc v1.57.0 h1:kfzNeI/klCGD2YPMUlaGNT3pxvYfga7smW3Vth8Zsiw=
+google.golang.org/grpc v1.57.0/go.mod h1:Sd+9RMTACXwmub0zcNY2c4arhtrbBYD1AUHI/dt16Mo=
 google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
 google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
 google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
@@ -24,37 +24,58 @@ func initUser() {
 	}
 	if err != nil {
 		if errors.Is(err, gorm.ErrRecordNotFound) {
+			salt := random.String(16)
 			admin = &model.User{
 				Username: "admin",
-				Password: adminPassword,
+				Salt:     salt,
+				PwdHash:  model.TwoHashPwd(adminPassword, salt),
 				Role:     model.ADMIN,
 				BasePath: "/",
 			}
 			if err := op.CreateUser(admin); err != nil {
 				panic(err)
 			} else {
-				utils.Log.Infof("Successfully created the admin user and the initial password is: %s", admin.Password)
+				utils.Log.Infof("Successfully created the admin user and the initial password is: %s", adminPassword)
 			}
 		} else {
-			panic(err)
+			utils.Log.Fatalf("[init user] Failed to get admin user: %v", err)
 		}
 	}
 	guest, err := op.GetGuest()
 	if err != nil {
 		if errors.Is(err, gorm.ErrRecordNotFound) {
+			salt := random.String(16)
 			guest = &model.User{
 				Username:   "guest",
-				Password:   "guest",
+				PwdHash:    model.TwoHashPwd("guest", salt),
 				Role:       model.GUEST,
 				BasePath:   "/",
 				Permission: 0,
 				Disabled:   true,
 			}
 			if err := db.CreateUser(guest); err != nil {
-				panic(err)
+				utils.Log.Fatalf("[init user] Failed to create guest user: %v", err)
 			}
 		} else {
-			panic(err)
+			utils.Log.Fatalf("[init user] Failed to get guest user: %v", err)
 		}
 	}
+	hashPwdForOldVersion()
 }
+
+func hashPwdForOldVersion() {
+	users, _, err := op.GetUsers(1, -1)
+	if err != nil {
+		utils.Log.Fatalf("[hash pwd for old version] failed get users: %v", err)
+	}
+	for i := range users {
+		user := users[i]
+		if user.PwdHash == "" {
+			user.SetPassword(user.Password)
+			user.Password = ""
+			if err := db.UpdateUser(&user); err != nil {
+				utils.Log.Fatalf("[hash pwd for old version] failed update user: %v", err)
+			}
+		}
+	}
+}
@@ -72,12 +72,19 @@ func SearchNode(req model.SearchReq, useFullText bool) ([]model.SearchNode, int64, error) {
 			Where("to_tsvector(name) @@ to_tsquery(?)", strings.Join(strings.Fields(req.Keywords), " & "))
 		}
 	}
+
+	if req.Scope != 0 {
+		isDir := req.Scope == 1
+		searchDB.Where(db.Where("is_dir = ?", isDir))
+	}
+
 	var count int64
 	if err := searchDB.Count(&count).Error; err != nil {
 		return nil, 0, errors.Wrapf(err, "failed get search items count")
 	}
 	var files []model.SearchNode
-	if err := searchDB.Offset((req.Page - 1) * req.PerPage).Limit(req.PerPage).Find(&files).Error; err != nil {
+	if err := searchDB.Order("name asc").Offset((req.Page - 1) * req.PerPage).Limit(req.PerPage).
+		Find(&files).Error; err != nil {
 		return nil, 0, err
 	}
 	return files, count, nil
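The `req.Scope` field added above uses the convention 0 = all entries, 1 = directories only, 2 = files only. A minimal standalone sketch of that predicate (illustrative only; `scopeFilter` is a hypothetical helper, not code from the repository):

```go
package main

import "fmt"

// scopeFilter mirrors the req.Scope convention used by SearchNode:
// 0 matches everything, 1 matches only directories, 2 matches only files.
func scopeFilter(scope int, isDir bool) bool {
	if scope == 0 {
		return true
	}
	// scope == 1 requires isDir == true; scope == 2 requires isDir == false.
	return (scope == 1) == isDir
}

func main() {
	fmt.Println(scopeFilter(0, false)) // all entries match
	fmt.Println(scopeFilter(1, true))  // dirs-only, entry is a dir
	fmt.Println(scopeFilter(2, true))  // files-only, entry is a dir
}
```

In the actual diff this predicate is pushed into SQL as `is_dir = ?`, so the database does the filtering rather than Go code.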
@@ -2,6 +2,8 @@ package errs
 
 import (
 	"errors"
+	"fmt"
+	pkgerr "github.com/pkg/errors"
 )
 
 var (
@@ -12,5 +14,17 @@ var (
 	MoveBetweenTwoStorages = errors.New("can't move files between two storages, try to copy")
 	UploadNotSupported     = errors.New("upload not supported")
 
 	MetaNotFound     = errors.New("meta not found")
+	StorageNotFound  = errors.New("storage not found")
+	StreamIncomplete = errors.New("upload/download stream incomplete, possible network issue")
 )
+
+// NewErr wrap constant error with an extra message
+// use errors.Is(err1, StorageNotFound) to check if err belongs to any internal error
+func NewErr(err error, format string, a ...any) error {
+	return fmt.Errorf("%w; %s", err, fmt.Sprintf(format, a...))
+}
+
+func IsNotFoundError(err error) bool {
+	return errors.Is(pkgerr.Cause(err), ObjectNotFound) || errors.Is(pkgerr.Cause(err), StorageNotFound)
+}
internal/errs/errors_test.go (new file, 27 lines)
@@ -0,0 +1,27 @@
+package errs
+
+import (
+	"errors"
+	pkgerr "github.com/pkg/errors"
+	"testing"
+)
+
+func TestErrs(t *testing.T) {
+	err1 := NewErr(StorageNotFound, "please add a storage first")
+	t.Logf("err1: %s", err1)
+	if !errors.Is(err1, StorageNotFound) {
+		t.Errorf("failed, expect %s is %s", err1, StorageNotFound)
+	}
+	if !errors.Is(pkgerr.Cause(err1), StorageNotFound) {
+		t.Errorf("failed, expect %s is %s", err1, StorageNotFound)
+	}
+	err2 := pkgerr.WithMessage(err1, "failed get storage")
+	t.Logf("err2: %s", err2)
+	if !errors.Is(err2, StorageNotFound) {
+		t.Errorf("failed, expect %s is %s", err2, StorageNotFound)
+	}
+	if !errors.Is(pkgerr.Cause(err2), StorageNotFound) {
+		t.Errorf("failed, expect %s is %s", err2, StorageNotFound)
+	}
+}
@@ -37,7 +37,7 @@ func Get(ctx context.Context, path string, args *GetArgs) (model.Obj, error) {
 	res, err := get(ctx, path)
 	if err != nil {
 		if !args.NoLog {
-			log.Errorf("failed get %s: %+v", path, err)
+			log.Warnf("failed get %s: %s", path, err)
 		}
 		return nil, err
 	}
@@ -27,7 +27,7 @@ func putAsTask(dstDirPath string, file *model.FileStream) error {
 		return errors.WithStack(errs.UploadNotSupported)
 	}
 	if file.NeedStore() {
-		tempFile, err := utils.CreateTempFile(file)
+		tempFile, err := utils.CreateTempFile(file, file.GetSize())
 		if err != nil {
 			return errors.Wrapf(err, "failed to create temp file")
 		}
@@ -36,7 +36,7 @@ func putAsTask(dstDirPath string, file *model.FileStream) error {
 	UploadTaskManager.Submit(task.WithCancelCtx(&task.Task[uint64]{
 		Name: fmt.Sprintf("upload %s to [%s](%s)", file.GetName(), storage.GetStorage().MountPath, dstDirActualPath),
 		Func: func(task *task.Task[uint64]) error {
-			return op.Put(task.Ctx, storage, dstDirActualPath, file, nil, true)
+			return op.Put(task.Ctx, storage, dstDirActualPath, file, task.SetProgress, true)
 		},
 	}))
 	return nil
@@ -1,50 +1,30 @@
 package fs
 
 import (
-	"fmt"
+	"github.com/alist-org/alist/v3/pkg/http_range"
 	"io"
 	"net/http"
-	"os"
-	stdpath "path"
 	"strings"
 
-	"github.com/alist-org/alist/v3/internal/conf"
 	"github.com/alist-org/alist/v3/internal/model"
 	"github.com/alist-org/alist/v3/pkg/utils"
 	"github.com/alist-org/alist/v3/server/common"
-	"github.com/google/uuid"
 	"github.com/pkg/errors"
-	log "github.com/sirupsen/logrus"
 )
 
 func getFileStreamFromLink(file model.Obj, link *model.Link) (*model.FileStream, error) {
 	var rc io.ReadCloser
+	var err error
 	mimetype := utils.GetMimeType(file.GetName())
-	if link.Data != nil {
-		rc = link.Data
-	} else if link.FilePath != nil {
-		// create a new temp symbolic link, because it will be deleted after upload
-		newFilePath := stdpath.Join(conf.Conf.TempDir, fmt.Sprintf("%s-%s", uuid.NewString(), file.GetName()))
-		err := utils.SymlinkOrCopyFile(*link.FilePath, newFilePath)
-		if err != nil {
-			return nil, err
-		}
-		f, err := os.Open(newFilePath)
-		if err != nil {
-			return nil, errors.Wrapf(err, "failed to open file %s", *link.FilePath)
-		}
-		rc = f
-	} else if link.Writer != nil {
-		r, w := io.Pipe()
-		go func() {
-			err := link.Writer(w)
-			err = w.CloseWithError(err)
-			if err != nil {
-				log.Errorf("[getFileStreamFromLink] failed to write: %v", err)
-			}
-		}()
-		rc = r
+	if link.RangeReadCloser.RangeReader != nil {
+		rc, err = link.RangeReadCloser.RangeReader(http_range.Range{Length: -1})
+		if err != nil {
+			return nil, err
+		}
+	} else if link.ReadSeekCloser != nil {
+		rc = link.ReadSeekCloser
 	} else {
+		//TODO: add accelerator
 		req, err := http.NewRequest(http.MethodGet, link.URL, nil)
 		if err != nil {
 			return nil, errors.Wrapf(err, "failed to create request for %s", link.URL)
@@ -1,6 +1,8 @@
 package model
 
 import (
+	"github.com/alist-org/alist/v3/pkg/http_range"
+	"github.com/alist-org/alist/v3/pkg/utils"
 	"io"
 	"net/http"
 	"time"
@@ -19,14 +21,16 @@ type LinkArgs struct {
 }
 
 type Link struct {
 	URL    string      `json:"url"`
 	Header http.Header `json:"header"` // needed header (for url) or response header(for data or writer)
-	Data       io.ReadCloser  // return file reader directly
-	Status     int            // status maybe 200 or 206, etc
-	FilePath   *string        // local file, return the filepath
-	Expiration *time.Duration // url expiration time
-	//Handle func(w http.ResponseWriter, r *http.Request) error `json:"-"` // custom handler
-	Writer WriterFunc `json:"-"` // custom writer
+	RangeReadCloser RangeReadCloser   // recommended way
+	ReadSeekCloser  io.ReadSeekCloser // best for local,smb.. file system, which exposes ReadSeekCloser
+	Expiration      *time.Duration    // local cache expire Duration
+	IPCacheKey      bool              // add ip to cache key
+	//for accelerating request, use multi-thread downloading
+	Concurrency int
+	PartSize    int
 }
 
 type OtherArgs struct {
@@ -40,5 +44,10 @@ type FsOtherArgs struct {
 	Method string      `json:"method" form:"method"`
 	Data   interface{} `json:"data" form:"data"`
 }
+
+type RangeReadCloser struct {
+	RangeReader RangeReaderFunc
+	Closers     *utils.Closers
+}
 
 type WriterFunc func(w io.Writer) error
+type RangeReaderFunc func(httpRange http_range.Range) (io.ReadCloser, error)
@@ -21,6 +21,7 @@ type Obj interface {
 	GetName() string
 	ModTime() time.Time
 	IsDir() bool
+	//GetHash() (string, string)
 
 	// The internal information of the driver.
 	// If you want to use it, please understand what it means
@@ -49,6 +50,9 @@ type Thumb interface {
 type SetPath interface {
 	SetPath(path string)
 }
+type SetHash interface {
+	SetHash(hash string, hashType string)
+}
 
 func SortFiles(objs []Obj, orderBy, orderDirection string) {
 	if orderBy == "" {
@@ -29,6 +29,8 @@ type Object struct {
 	Size     int64
 	Modified time.Time
 	IsFolder bool
+	Hash     string
+	HashType string
 }
 
 func (o *Object) GetName() string {
@@ -55,8 +57,17 @@ func (o *Object) GetPath() string {
 	return o.Path
 }
 
-func (o *Object) SetPath(id string) {
-	o.Path = id
+func (o *Object) SetPath(path string) {
+	o.Path = path
+}
+
+func (o *Object) SetHash(hash string, hashType string) {
+	o.Hash = hash
+	o.HashType = hashType
+}
+
+func (o *Object) GetHash() (string, string) {
+	return o.Hash, o.HashType
 }
 
 type Thumbnail struct {
|
|||||||
type SearchReq struct {
|
type SearchReq struct {
|
||||||
Parent string `json:"parent"`
|
Parent string `json:"parent"`
|
||||||
Keywords string `json:"keywords"`
|
Keywords string `json:"keywords"`
|
||||||
|
// 0 for all, 1 for dir, 2 for file
|
||||||
|
Scope int `json:"scope"`
|
||||||
PageReq
|
PageReq
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -1,8 +1,11 @@
 package model
 
 import (
+	"fmt"
+
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/pkg/utils"
+	"github.com/alist-org/alist/v3/pkg/utils/random"
 	"github.com/pkg/errors"
 )
 
@@ -12,12 +15,16 @@ const (
 	ADMIN
 )
 
+const StaticHashSalt = "https://github.com/alist-org/alist"
+
 type User struct {
 	ID       uint   `json:"id" gorm:"primaryKey"`                      // unique key
 	Username string `json:"username" gorm:"unique" binding:"required"` // username
-	Password string `json:"password"`  // password
+	PwdHash  string `json:"-"`        // password hash
+	Salt     string                   // unique salt
+	Password string `json:"password"` // password
 	BasePath string `json:"base_path"` // base path
 	Role     int    `json:"role"`      // user's role
 	Disabled bool   `json:"disabled"`
 	// Determine permissions by bit
 	// 0: can see hidden files
@@ -36,68 +43,90 @@ type User struct {
 	SsoID string `json:"sso_id"` // unique by sso platform
 }
 
-func (u User) IsGuest() bool {
+func (u *User) IsGuest() bool {
 	return u.Role == GUEST
 }
 
-func (u User) IsAdmin() bool {
+func (u *User) IsAdmin() bool {
 	return u.Role == ADMIN
 }
 
-func (u User) ValidatePassword(password string) error {
-	if password == "" {
+func (u *User) ValidateRawPassword(password string) error {
+	return u.ValidatePwdStaticHash(StaticHash(password))
+}
+
+func (u *User) ValidatePwdStaticHash(pwdStaticHash string) error {
+	if pwdStaticHash == "" {
 		return errors.WithStack(errs.EmptyPassword)
 	}
-	if u.Password != password {
+	if u.PwdHash != HashPwd(pwdStaticHash, u.Salt) {
 		return errors.WithStack(errs.WrongPassword)
 	}
 	return nil
 }
 
-func (u User) CanSeeHides() bool {
+func (u *User) SetPassword(pwd string) *User {
+	u.Salt = random.String(16)
+	u.PwdHash = TwoHashPwd(pwd, u.Salt)
+	return u
+}
+
+func (u *User) CanSeeHides() bool {
 	return u.IsAdmin() || u.Permission&1 == 1
 }
 
-func (u User) CanAccessWithoutPassword() bool {
+func (u *User) CanAccessWithoutPassword() bool {
 	return u.IsAdmin() || (u.Permission>>1)&1 == 1
 }
 
-func (u User) CanAddAria2Tasks() bool {
+func (u *User) CanAddAria2Tasks() bool {
 	return u.IsAdmin() || (u.Permission>>2)&1 == 1
 }
 
-func (u User) CanWrite() bool {
+func (u *User) CanWrite() bool {
 	return u.IsAdmin() || (u.Permission>>3)&1 == 1
 }
 
-func (u User) CanRename() bool {
+func (u *User) CanRename() bool {
 	return u.IsAdmin() || (u.Permission>>4)&1 == 1
 }
 
-func (u User) CanMove() bool {
+func (u *User) CanMove() bool {
 	return u.IsAdmin() || (u.Permission>>5)&1 == 1
 }
 
-func (u User) CanCopy() bool {
+func (u *User) CanCopy() bool {
 	return u.IsAdmin() || (u.Permission>>6)&1 == 1
 }
 
-func (u User) CanRemove() bool {
+func (u *User) CanRemove() bool {
 	return u.IsAdmin() || (u.Permission>>7)&1 == 1
 }
 
-func (u User) CanWebdavRead() bool {
+func (u *User) CanWebdavRead() bool {
 	return u.IsAdmin() || (u.Permission>>8)&1 == 1
 }
 
-func (u User) CanWebdavManage() bool {
+func (u *User) CanWebdavManage() bool {
 	return u.IsAdmin() || (u.Permission>>9)&1 == 1
 }
 
-func (u User) CanAddQbittorrentTasks() bool {
+func (u *User) CanAddQbittorrentTasks() bool {
 	return u.IsAdmin() || (u.Permission>>10)&1 == 1
 }
 
-func (u User) JoinPath(reqPath string) (string, error) {
+func (u *User) JoinPath(reqPath string) (string, error) {
 	return utils.JoinBasePath(u.BasePath, reqPath)
 }
+
+func StaticHash(password string) string {
+	return utils.GetSHA256Encode([]byte(fmt.Sprintf("%s-%s", password, StaticHashSalt)))
+}
+
+func HashPwd(static string, salt string) string {
+	return utils.GetSHA256Encode([]byte(fmt.Sprintf("%s-%s", static, salt)))
+}
+
+func TwoHashPwd(password string, salt string) string {
+	return HashPwd(StaticHash(password), salt)
+}
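The scheme above hashes twice: a "static" SHA-256 with a fixed project-wide salt (so the raw password never needs to cross the wire), then a second SHA-256 with the per-user random salt, and only the second result (`PwdHash`) is stored. A self-contained sketch of that flow, assuming `utils.GetSHA256Encode` returns a hex-encoded SHA-256 digest (a plausible reading of the name, not verified against the repository):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const StaticHashSalt = "https://github.com/alist-org/alist"

// sha256Hex stands in for utils.GetSHA256Encode (assumed hex-encoded SHA-256).
func sha256Hex(b []byte) string {
	h := sha256.Sum256(b)
	return hex.EncodeToString(h[:])
}

// StaticHash: stage one, fixed salt shared by all users.
func StaticHash(password string) string {
	return sha256Hex([]byte(fmt.Sprintf("%s-%s", password, StaticHashSalt)))
}

// HashPwd: stage two, per-user salt applied to the static hash.
func HashPwd(static, salt string) string {
	return sha256Hex([]byte(fmt.Sprintf("%s-%s", static, salt)))
}

// TwoHashPwd composes both stages, producing the stored PwdHash.
func TwoHashPwd(password, salt string) string {
	return HashPwd(StaticHash(password), salt)
}

func main() {
	salt := "0123456789abcdef" // per-user random salt (random.String(16) in the diff)
	pwdHash := TwoHashPwd("admin", salt)
	// Validation path: client sends StaticHash(password); server re-salts and compares.
	ok := HashPwd(StaticHash("admin"), salt) == pwdHash
	fmt.Println(ok) // true
}
```

This split is why the diff has both `ValidateRawPassword` (takes the raw password) and `ValidatePwdStaticHash` (takes the already-static-hashed value): both funnel into the same salted comparison against `PwdHash`.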
597
internal/net/request.go
Normal file
597
internal/net/request.go
Normal file
@ -0,0 +1,597 @@
|
|||||||
|
package net
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"github.com/alist-org/alist/v3/pkg/http_range"
|
||||||
|
"github.com/aws/aws-sdk-go/aws/awsutil"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
"io"
|
||||||
|
"math"
|
||||||
|
"net/http"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"sync"
|
||||||
|
"time"
|
||||||
|
)
|
||||||
|
|
||||||
|
// DefaultDownloadPartSize is the default range of bytes to get at a time when
|
||||||
|
// using Download().
|
||||||
|
const DefaultDownloadPartSize = 1024 * 1024 * 10
|
||||||
|
|
||||||
|
// DefaultDownloadConcurrency is the default number of goroutines to spin up
|
||||||
|
// when using Download().
|
||||||
|
const DefaultDownloadConcurrency = 2
|
||||||
|
|
||||||
|
// DefaultPartBodyMaxRetries is the default number of retries to make when a part fails to download.
|
||||||
|
const DefaultPartBodyMaxRetries = 3
|
||||||
|
|
||||||
|
type Downloader struct {
|
||||||
|
PartSize int
|
||||||
|
|
||||||
|
// PartBodyMaxRetries is the number of retry attempts to make for failed part downloads.
|
||||||
|
PartBodyMaxRetries int
|
||||||
|
|
||||||
|
// The number of goroutines to spin up in parallel when sending parts.
|
||||||
|
// If this is set to zero, the DefaultDownloadConcurrency value will be used.
|
||||||
|
//
|
||||||
|
// Concurrency of 1 will download the parts sequentially.
|
||||||
|
Concurrency int
|
||||||
|
|
||||||
|
//RequestParam HttpRequestParams
|
||||||
|
HttpClient HttpRequestFunc
|
||||||
|
}
|
||||||
|
type HttpRequestFunc func(params *HttpRequestParams) (*http.Response, error)
|
||||||
|
|
||||||
|
func NewDownloader(options ...func(*Downloader)) *Downloader {
|
||||||
|
d := &Downloader{
|
||||||
|
HttpClient: DefaultHttpRequestFunc,
|
||||||
|
PartSize: DefaultDownloadPartSize,
|
||||||
|
PartBodyMaxRetries: DefaultPartBodyMaxRetries,
|
||||||
|
Concurrency: DefaultDownloadConcurrency,
|
||||||
|
}
|
||||||
|
for _, option := range options {
|
||||||
|
option(d)
|
||||||
|
}
|
||||||
|
return d
|
||||||
|
}
|
||||||
|
|
||||||
|
// Download The Downloader makes multi-thread http requests to remote URL, each chunk(except last one) has PartSize,
|
||||||
|
// cache some data, then return Reader with assembled data
|
||||||
|
// Supports range, do not support unknown FileSize, and will fail if FileSize is incorrect
|
||||||
|
// memory usage is at about Concurrency*PartSize, use this wisely
|
||||||
|
func (d Downloader) Download(ctx context.Context, p *HttpRequestParams) (readCloser *io.ReadCloser, err error) {
|
||||||
|
|
||||||
|
var finalP HttpRequestParams
|
||||||
|
awsutil.Copy(&finalP, p)
|
||||||
|
if finalP.Range.Length == -1 {
|
||||||
|
finalP.Range.Length = finalP.Size - finalP.Range.Start
|
||||||
|
}
|
||||||
|
impl := downloader{params: &finalP, cfg: d, ctx: ctx}
|
||||||
|
|
||||||
|
// Ensures we don't need nil checks later on
|
||||||
|
|
||||||
|
impl.partBodyMaxRetries = d.PartBodyMaxRetries
|
||||||
|
|
||||||
|
if impl.cfg.Concurrency == 0 {
|
||||||
|
impl.cfg.Concurrency = DefaultDownloadConcurrency
|
||||||
|
}
|
||||||
|
|
||||||
|
if impl.cfg.PartSize == 0 {
|
||||||
|
impl.cfg.PartSize = DefaultDownloadPartSize
|
||||||
|
}
|
||||||
|
|
||||||
|
return impl.download()
|
||||||
|
}
|
||||||
|
|
||||||
|
// downloader is the implementation structure used internally by Downloader.
|
||||||
|
type downloader struct {
|
||||||
|
ctx context.Context
|
||||||
|
cancel context.CancelFunc
|
||||||
|
cfg Downloader
|
||||||
|
|
||||||
|
params *HttpRequestParams //http request params
|
||||||
|
chunkChannel chan chunk //chunk chanel
|
||||||
|
|
||||||
|
//wg sync.WaitGroup
|
||||||
|
m sync.Mutex
|
||||||
|
|
||||||
|
nextChunk int //next chunk id
|
||||||
|
chunks []chunk
|
||||||
|
bufs []*Buf
|
||||||
|
//totalBytes int64
|
||||||
|
written int64 //total bytes of file downloaded from remote
|
||||||
|
err error
|
||||||
|
|
||||||
|
partBodyMaxRetries int
|
||||||
|
}
|
||||||
|
|
||||||
|
// download performs the implementation of the object download across ranged GETs.
|
||||||
|
func (d *downloader) download() (*io.ReadCloser, error) {
|
||||||
|
d.ctx, d.cancel = context.WithCancel(d.ctx)
|
||||||
|
|
||||||
|
pos := d.params.Range.Start
|
||||||
|
maxPos := d.params.Range.Start + d.params.Range.Length
|
||||||
|
id := 0
|
||||||
|
for pos < maxPos {
|
||||||
|
finalSize := int64(d.cfg.PartSize)
|
||||||
|
//check boundary
|
||||||
|
if pos+finalSize > maxPos {
|
||||||
|
finalSize = maxPos - pos
|
||||||
|
}
|
||||||
|
c := chunk{start: pos, size: finalSize, id: id}
|
||||||
|
d.chunks = append(d.chunks, c)
|
||||||
|
pos += finalSize
|
||||||
|
id++
|
||||||
|
}
|
||||||
|
if len(d.chunks) < d.cfg.Concurrency {
|
||||||
|
d.cfg.Concurrency = len(d.chunks)
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.cfg.Concurrency == 1 {
|
||||||
|
resp, err := d.cfg.HttpClient(d.params)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return &resp.Body, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// workers
|
||||||
|
d.chunkChannel = make(chan chunk, d.cfg.Concurrency)
|
||||||
|
|
||||||
|
for i := 0; i < d.cfg.Concurrency; i++ {
|
||||||
|
buf := NewBuf(d.ctx, d.cfg.PartSize, i)
|
||||||
|
d.bufs = append(d.bufs, buf)
|
||||||
|
go d.downloadPart()
|
||||||
|
}
|
||||||
|
// initial tasks
|
||||||
|
for i := 0; i < d.cfg.Concurrency; i++ {
|
||||||
|
d.sendChunkTask()
|
||||||
|
}
|
||||||
|
|
||||||
|
var rc io.ReadCloser = NewMultiReadCloser(d.chunks[0].buf, d.interrupt, d.finishBuf)
|
||||||
|
|
||||||
|
// Return error
|
||||||
|
return &rc, d.err
|
||||||
|
}
|
||||||
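The chunk partitioning loop inside `download()` above can be sketched standalone: carve `[start, start+length)` into `partSize` pieces, with only the final piece allowed to be shorter (`splitRange` is an illustrative helper name, not from the repository):

```go
package main

import "fmt"

// chunk mirrors the downloader's chunk bookkeeping fields.
type chunk struct {
	start, size int64
	id          int
}

// splitRange mirrors the partitioning loop in (*downloader).download():
// walk pos from start to start+length, emitting partSize-sized chunks,
// clamping the last chunk at the range boundary.
func splitRange(start, length, partSize int64) []chunk {
	var chunks []chunk
	pos, maxPos, id := start, start+length, 0
	for pos < maxPos {
		size := partSize
		if pos+size > maxPos {
			size = maxPos - pos
		}
		chunks = append(chunks, chunk{start: pos, size: size, id: id})
		pos += size
		id++
	}
	return chunks
}

func main() {
	cs := splitRange(0, 25, 10)
	fmt.Println(len(cs))                 // 3
	fmt.Println(cs[2].start, cs[2].size) // 20 5
}
```

This is also why the code clamps `Concurrency` to `len(d.chunks)`: a 25-byte range with a 10-byte part size yields only three chunks, so more than three workers would sit idle.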
|
func (d *downloader) sendChunkTask() *chunk {
|
||||||
|
ch := &d.chunks[d.nextChunk]
|
||||||
|
ch.buf = d.getBuf(d.nextChunk)
|
||||||
|
ch.buf.Reset(int(ch.size))
|
||||||
|
d.chunkChannel <- *ch
|
||||||
|
d.nextChunk++
|
||||||
|
return ch
|
||||||
|
}
|
||||||
|
|
||||||
|
// when the final reader Close, we interrupt
|
||||||
|
func (d *downloader) interrupt() error {
|
||||||
|
d.cancel()
|
||||||
|
if d.written != d.params.Range.Length {
|
||||||
|
log.Debugf("Downloader interrupt before finish")
|
||||||
|
if d.getErr() == nil {
|
||||||
|
d.setErr(fmt.Errorf("interrupted"))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
defer func() {
|
||||||
|
close(d.chunkChannel)
|
||||||
|
for _, buf := range d.bufs {
|
||||||
|
buf.Close()
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
return d.err
|
||||||
|
}
|
||||||
|
func (d *downloader) getBuf(id int) (b *Buf) {
|
||||||
|
|
||||||
|
return d.bufs[id%d.cfg.Concurrency]
|
||||||
|
}
|
||||||
|
func (d *downloader) finishBuf(id int) (isLast bool, buf *Buf) {
|
||||||
|
if id >= len(d.chunks)-1 {
|
||||||
|
return true, nil
|
||||||
|
}
|
||||||
|
if d.nextChunk > id+1 {
|
||||||
|
return false, d.getBuf(id + 1)
|
||||||
|
}
|
||||||
|
ch := d.sendChunkTask()
|
||||||
|
return false, ch.buf
|
||||||
|
}
|
||||||
|
|
||||||
|
// downloadPart is an individual goroutine worker reading from the ch channel
|
||||||
|
// and performing Http request on the data with a given byte range.
|
||||||
|
func (d *downloader) downloadPart() {
|
||||||
|
//defer d.wg.Done()
|
||||||
|
for {
|
||||||
|
c, ok := <-d.chunkChannel
|
||||||
|
log.Debugf("downloadPart tried to get chunk")
|
||||||
|
if !ok {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
if d.getErr() != nil {
|
||||||
|
// Drain the channel if there is an error, to prevent deadlocking
|
||||||
|
// of download producer.
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := d.downloadChunk(&c); err != nil {
|
||||||
|
d.setErr(err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// downloadChunk downloads the chunk
|
||||||
|
func (d *downloader) downloadChunk(ch *chunk) error {
|
||||||
|
log.Debugf("start new chunk %+v buffer_id =%d", ch, ch.buf.buffer.id)
|
||||||
|
var n int64
|
||||||
|
var err error
|
||||||
|
params := d.getParamsFromChunk(ch)
|
||||||
|
for retry := 0; retry <= d.partBodyMaxRetries; retry++ {
|
||||||
|
if d.getErr() != nil {
|
||||||
|
return d.getErr()
|
||||||
|
}
|
||||||
|
n, err = d.tryDownloadChunk(params, ch)
|
||||||
|
if err == nil {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
// Check if the returned error is an errReadingBody.
|
||||||
|
// If err is errReadingBody this indicates that an error
|
||||||
|
// occurred while copying the http response body.
|
||||||
|
// If this occurs we unwrap the err to set the underlying error
|
||||||
|
// and attempt any remaining retries.
|
||||||
|
if bodyErr, ok := err.(*errReadingBody); ok {
|
||||||
|
err = bodyErr.Unwrap()
|
||||||
|
} else {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
//ch.cur = 0
|
||||||
|
|
||||||
|
log.Debugf("object part body download interrupted %s, err, %v, retrying attempt %d",
|
||||||
|
params.URL, err, retry)
|
||||||
|
}
|
||||||
|
|
||||||
|
d.incrWritten(n)
|
||||||
|
log.Debugf("down_%d downloaded chunk", ch.id)
|
||||||
|
//ch.buf.buffer.wg1.Wait()
|
||||||
|
//log.Debugf("down_%d downloaded chunk,wg wait passed", ch.id)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d *downloader) tryDownloadChunk(params *HttpRequestParams, ch *chunk) (int64, error) {
|
||||||
|
|
||||||
|
resp, err := d.cfg.HttpClient(params)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
//only check file size on the first task
|
||||||
|
if ch.id == 0 {
|
||||||
|
err = d.checkTotalBytes(resp)
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
n, err := io.Copy(ch.buf, resp.Body)
|
||||||
|
|
||||||
|
if err != nil {
|
||||||
|
return n, &errReadingBody{err: err}
|
||||||
|
}
|
||||||
|
if n != ch.size {
|
||||||
|
err = fmt.Errorf("chunk download size incorrect, expected=%d, got=%d", ch.size, n)
|
||||||
|
return n, &errReadingBody{err: err}
|
||||||
|
}
|
||||||
|
defer resp.Body.Close()
|
||||||
|
|
||||||
|
return n, nil
|
||||||
|
}
func (d *downloader) getParamsFromChunk(ch *chunk) *HttpRequestParams {
    var params HttpRequestParams
    awsutil.Copy(&params, d.params)

    // request only the byte range of data for this chunk
    params.Range = http_range.Range{Start: ch.start, Length: ch.size}
    return &params
}

func (d *downloader) checkTotalBytes(resp *http.Response) error {
    var err error
    var totalBytes int64 = math.MinInt64
    contentRange := resp.Header.Get("Content-Range")
    if len(contentRange) == 0 {
        // Content-Range is empty when the full file content is provided
        // un-chunked. Use Content-Length instead.
        if resp.ContentLength > 0 {
            totalBytes = resp.ContentLength
        }
    } else {
        parts := strings.Split(contentRange, "/")

        total := int64(-1)

        // Check whether a numeric total exists.
        // If one does not exist, we assume the total to be -1 (undefined)
        // and sequentially download each chunk until hitting a 416 error.
        totalStr := parts[len(parts)-1]
        if totalStr != "*" {
            total, err = strconv.ParseInt(totalStr, 10, 64)
            if err != nil {
                err = fmt.Errorf("failed extracting file size")
            }
        } else {
            err = fmt.Errorf("file size unknown")
        }

        totalBytes = total
    }
    if totalBytes != d.params.Size && err == nil {
        err = fmt.Errorf("expected file size=%d does not match remote reported size=%d, need to refresh cache", d.params.Size, totalBytes)
    }
    if err != nil {
        _ = d.interrupt()
        d.setErr(err)
    }
    return err
}
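checkTotalBytes pulls the total size out of a Content-Range value such as "bytes 0-499/1234" by splitting on "/". That parsing step in isolation looks like this (parseTotal is a hypothetical helper for the sketch, not part of the package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTotal extracts the total size from a Content-Range header value such
// as "bytes 0-499/1234". It returns -1 when the total is "*" (unknown).
func parseTotal(contentRange string) (int64, error) {
	parts := strings.Split(contentRange, "/")
	totalStr := parts[len(parts)-1]
	if totalStr == "*" {
		return -1, fmt.Errorf("file size unknown")
	}
	return strconv.ParseInt(totalStr, 10, 64)
}

func main() {
	n, err := parseTotal("bytes 0-499/1234")
	fmt.Println(n, err) // 1234 <nil>
	n, err = parseTotal("bytes 0-499/*")
	fmt.Println(n, err)
}
```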
func (d *downloader) incrWritten(n int64) {
    d.m.Lock()
    defer d.m.Unlock()

    d.written += n
}

// getErr is a thread-safe getter for the error object
func (d *downloader) getErr() error {
    d.m.Lock()
    defer d.m.Unlock()

    return d.err
}

// setErr is a thread-safe setter for the error object
func (d *downloader) setErr(e error) {
    d.m.Lock()
    defer d.m.Unlock()

    d.err = e
}

// chunk represents a single chunk of data to write by the worker routine.
// This structure also implements an io.SectionReader style interface for
// io.WriterAt, effectively making it an io.SectionWriter (which does not
// exist).
type chunk struct {
    start int64
    size  int64
    buf   *Buf
    id    int

    // Downloader takes a range (start, length), but this chunk requests an
    // equal or sub-range of it. To eventually convert the writer to a
    // reader, we need to write within that boundary.
}
func DefaultHttpRequestFunc(params *HttpRequestParams) (*http.Response, error) {
    header := http_range.ApplyRangeToHttpHeader(params.Range, params.HeaderRef)

    res, err := RequestHttp("GET", header, params.URL)
    if err != nil {
        return nil, err
    }
    return res, nil
}

type HttpRequestParams struct {
    URL string
    // Range requests only data within this byte range
    Range     http_range.Range
    HeaderRef *http.Header
    // Size is the total file size
    Size int64
}

type errReadingBody struct {
    err error
}

func (e *errReadingBody) Error() string {
    return fmt.Sprintf("failed to read part body: %v", e.err)
}

func (e *errReadingBody) Unwrap() error {
    return e.err
}
type MultiReadCloser struct {
    io.ReadCloser

    cfg    *cfg
    closer closerFunc
    finish finishBufFunc
}

type cfg struct {
    rPos   int // current reader position, starting from 0
    curBuf *Buf
}

type closerFunc func() error
type finishBufFunc func(id int) (isLast bool, buf *Buf)

// NewMultiReadCloser re-uses a limited set of Bufs to save memory and feeds
// their data to Read()
func NewMultiReadCloser(buf *Buf, c closerFunc, fb finishBufFunc) *MultiReadCloser {
    return &MultiReadCloser{closer: c, finish: fb, cfg: &cfg{curBuf: buf}}
}

func (mr MultiReadCloser) Read(p []byte) (n int, err error) {
    if mr.cfg.curBuf == nil {
        return 0, io.EOF
    }
    n, err = mr.cfg.curBuf.Read(p)
    if err == io.EOF {
        log.Debugf("read_%d finished current buffer", mr.cfg.rPos)

        isLast, next := mr.finish(mr.cfg.rPos)
        if isLast {
            return n, io.EOF
        }
        mr.cfg.curBuf = next
        mr.cfg.rPos++
        return n, nil
    }
    return n, err
}

func (mr MultiReadCloser) Close() error {
    return mr.closer()
}
type Buffer struct {
    data []byte
    wPos int // writer position
    id   int
    rPos int // reader position
    lock sync.Mutex

    once   bool     // used together with notify & lock, so the reader is notified once
    notify chan int // notifies new writes
}

func (buf *Buffer) Write(p []byte) (n int, err error) {
    inSize := len(p)
    if inSize == 0 {
        return 0, nil
    }

    if inSize > len(buf.data)-buf.wPos {
        return 0, fmt.Errorf("exceeding buffer max size, inSize=%d, buf.data.len=%d, buf.wPos=%d",
            inSize, len(buf.data), buf.wPos)
    }
    copy(buf.data[buf.wPos:], p)
    buf.wPos += inSize

    // notify the reader if it is waiting (once == true)
    buf.lock.Lock()
    if buf.once {
        buf.notify <- inSize
    }
    buf.once = false
    buf.lock.Unlock()

    return inSize, nil
}

func (buf *Buffer) getPos() (n int) {
    return buf.wPos
}

func (buf *Buffer) reset() {
    buf.wPos = 0
    buf.rPos = 0
}

// waitTillNewWrite blocks the caller until a new write happens
func (buf *Buffer) waitTillNewWrite(pos int) error {
    var err error

    if pos >= len(buf.data) {
        err = fmt.Errorf("there will not be any new write")
    } else if pos > buf.wPos {
        err = fmt.Errorf("illegal read position")
    } else if pos == buf.wPos {
        buf.lock.Lock()
        buf.once = true
        buf.lock.Unlock()
        // wait for a write
        log.Debugf("waitTillNewWrite waiting for notify")
        writes := <-buf.notify
        log.Debugf("waitTillNewWrite got new write from notify, last write size: %+v", writes)
        return nil
    }
    // only remaining case: pos < buf.wPos
    return err
}
type Buf struct {
    buffer *Buffer // Buffer we read from
    size   int     // expected size
    ctx    context.Context
}

// NewBuf returns a buffer that supports 1 concurrent reader and 1 concurrent
// writer: when the reader is faster than the writer, written data is fed to
// the reader immediately.
func NewBuf(ctx context.Context, maxSize int, id int) *Buf {
    d := make([]byte, maxSize)
    buffer := &Buffer{data: d, id: id, notify: make(chan int)}
    buffer.reset()
    return &Buf{ctx: ctx, buffer: buffer, size: maxSize}
}

func (br *Buf) Reset(size int) {
    br.buffer.reset()
    br.size = size
}

func (br *Buf) GetId() int {
    return br.buffer.id
}

func (br *Buf) Read(p []byte) (n int, err error) {
    if err := br.ctx.Err(); err != nil {
        return 0, err
    }
    if len(p) == 0 {
        return 0, nil
    }
    if br.buffer.rPos == br.size {
        return 0, io.EOF
    }
    // persist the buffer position, as another goroutine keeps increasing it
    bufPos := br.buffer.getPos()
    outSize := bufPos - br.buffer.rPos

    if outSize == 0 {
        err := br.waitTillNewWrite(br.buffer.rPos)
        if err != nil {
            return 0, err
        }
        bufPos = br.buffer.getPos()
        outSize = bufPos - br.buffer.rPos
    }

    if len(p) < outSize {
        // p is not big enough
        outSize = len(p)
    }
    copy(p, br.buffer.data[br.buffer.rPos:br.buffer.rPos+outSize])
    br.buffer.rPos += outSize
    if br.buffer.rPos == br.size {
        err = io.EOF
    }

    return outSize, err
}

// waitTillNewWrite is expensive; since we just confirmed there is no new
// data, wait 0.2s before blocking on the notify channel
func (br *Buf) waitTillNewWrite(pos int) error {
    time.Sleep(200 * time.Millisecond)
    return br.buffer.waitTillNewWrite(br.buffer.rPos)
}

func (br *Buf) Write(p []byte) (n int, err error) {
    if err := br.ctx.Err(); err != nil {
        return 0, err
    }
    return br.buffer.Write(p)
}

func (br *Buf) Close() {
    close(br.buffer.notify)
}
178	internal/net/request_test.go	Normal file
@@ -0,0 +1,178 @@
package net

import (
    "bytes"
    "context"
    "fmt"
    "io"
    "net/http"
    "sync"
    "testing"

    "github.com/alist-org/alist/v3/pkg/http_range"
    "github.com/sirupsen/logrus"
    "golang.org/x/exp/slices"
)

var buf22MB = make([]byte, 1024*1024*22)

func dummyHttpRequest(data []byte, p http_range.Range) io.ReadCloser {
    end := p.Start + p.Length
    if end > int64(len(data)) {
        end = int64(len(data))
    }

    bodyBytes := data[p.Start:end]
    return io.NopCloser(bytes.NewReader(bodyBytes))
}

func TestDownloadOrder(t *testing.T) {
    buff := []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
    downloader, invocations, ranges := newDownloadRangeClient(buff)
    con, partSize := 3, 3
    d := NewDownloader(func(d *Downloader) {
        d.Concurrency = con
        d.PartSize = partSize
        d.HttpClient = downloader.HttpRequest
    })

    var start, length int64 = 2, 10
    length2 := length
    if length2 == -1 {
        length2 = int64(len(buff)) - start
    }
    req := &HttpRequestParams{
        Range: http_range.Range{Start: start, Length: length},
        Size:  int64(len(buff)),
    }
    readCloser, err := d.Download(context.Background(), req)

    if err != nil {
        t.Fatalf("expect no error, got %v", err)
    }
    resultBuf, err := io.ReadAll(*readCloser)
    if err != nil {
        t.Fatalf("expect no error, got %v", err)
    }
    if exp, a := int(length), len(resultBuf); exp != a {
        t.Errorf("expect buffer length=%d, got %d", exp, a)
    }
    chunkSize := int(length)/partSize + 1
    if int(length)%partSize == 0 {
        chunkSize--
    }
    if e, a := chunkSize, *invocations; e != a {
        t.Errorf("expect %v API calls, got %v", e, a)
    }

    expectRngs := []string{"2-3", "5-3", "8-3", "11-1"}
    for _, rng := range expectRngs {
        if !slices.Contains(*ranges, rng) {
            t.Errorf("expect range %v, but absent in return", rng)
        }
    }
    if e, a := expectRngs, *ranges; len(e) != len(a) {
        t.Errorf("expect %v ranges, got %v", e, a)
    }
}

func init() {
    Formatter := new(logrus.TextFormatter)
    Formatter.TimestampFormat = "2006-01-02T15:04:05.999999999"
    Formatter.FullTimestamp = true
    Formatter.ForceColors = true
    logrus.SetFormatter(Formatter)
    logrus.SetLevel(logrus.DebugLevel)
    logrus.Debugf("Download start")
}

func TestDownloadSingle(t *testing.T) {
    buff := []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
    downloader, invocations, ranges := newDownloadRangeClient(buff)
    con, partSize := 1, 3
    d := NewDownloader(func(d *Downloader) {
        d.Concurrency = con
        d.PartSize = partSize
        d.HttpClient = downloader.HttpRequest
    })

    var start, length int64 = 2, 10
    req := &HttpRequestParams{
        Range: http_range.Range{Start: start, Length: length},
        Size:  int64(len(buff)),
    }

    readCloser, err := d.Download(context.Background(), req)

    if err != nil {
        t.Fatalf("expect no error, got %v", err)
    }
    resultBuf, err := io.ReadAll(*readCloser)
    if err != nil {
        t.Fatalf("expect no error, got %v", err)
    }
    if exp, a := int(length), len(resultBuf); exp != a {
        t.Errorf("expect buffer length=%d, got %d", exp, a)
    }
    if e, a := 1, *invocations; e != a {
        t.Errorf("expect %v API calls, got %v", e, a)
    }

    expectRngs := []string{"2-10"}
    for _, rng := range expectRngs {
        if !slices.Contains(*ranges, rng) {
            t.Errorf("expect range %v, but absent in return", rng)
        }
    }
    if e, a := expectRngs, *ranges; len(e) != len(a) {
        t.Errorf("expect %v ranges, got %v", e, a)
    }
}

type downloadCaptureClient struct {
    mockedHttpRequest    func(params *HttpRequestParams) (*http.Response, error)
    GetObjectInvocations int

    RetrievedRanges []string

    lock sync.Mutex
}

func (c *downloadCaptureClient) HttpRequest(params *HttpRequestParams) (*http.Response, error) {
    c.lock.Lock()
    defer c.lock.Unlock()

    c.GetObjectInvocations++

    c.RetrievedRanges = append(c.RetrievedRanges, fmt.Sprintf("%d-%d", params.Range.Start, params.Range.Length))

    return c.mockedHttpRequest(params)
}

func newDownloadRangeClient(data []byte) (*downloadCaptureClient, *int, *[]string) {
    capture := &downloadCaptureClient{}

    capture.mockedHttpRequest = func(params *HttpRequestParams) (*http.Response, error) {
        start, fin := params.Range.Start, params.Range.Start+params.Range.Length
        if params.Range.Length == -1 || fin >= int64(len(data)) {
            fin = int64(len(data))
        }
        bodyBytes := data[start:fin]

        header := &http.Header{}
        header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, fin-1, len(data)))
        return &http.Response{
            Body:          io.NopCloser(bytes.NewReader(bodyBytes)),
            Header:        *header,
            ContentLength: int64(len(bodyBytes)),
        }, nil
    }

    return capture, &capture.GetObjectInvocations, &capture.RetrievedRanges
}
|
252
internal/net/serve.go
Normal file
252
internal/net/serve.go
Normal file
@ -0,0 +1,252 @@
package net

import (
    "fmt"
    "io"
    "mime"
    "mime/multipart"
    "net/http"
    "path/filepath"
    "strconv"
    "strings"
    "sync"
    "time"

    "github.com/alist-org/alist/v3/drivers/base"
    "github.com/alist-org/alist/v3/internal/conf"
    "github.com/alist-org/alist/v3/internal/model"
    "github.com/alist-org/alist/v3/pkg/http_range"
    "github.com/alist-org/alist/v3/pkg/utils"
    "github.com/pkg/errors"
    log "github.com/sirupsen/logrus"
)

// this file is inspired by the Go SDK's net/http.ServeContent

// ServeHTTP replies to the request using the content in the
// provided RangeReadCloser. The main benefit of ServeHTTP over io.Copy
// is that it handles Range requests properly, sets the MIME type, and
// handles If-Match, If-Unmodified-Since, If-None-Match, If-Modified-Since,
// and If-Range requests.
//
// If the response's Content-Type header is not set, ServeHTTP
// first tries to deduce the type from name's file extension and,
// if that fails, falls back to reading the first block of the content
// and passing it to DetectContentType.
// The name is otherwise unused; in particular it can be empty and is
// never sent in the response.
//
// If modtime is not the zero time or Unix epoch, ServeHTTP
// includes it in a Last-Modified header in the response. If the
// request includes an If-Modified-Since header, ServeHTTP uses
// modtime to decide whether the content needs to be sent at all.
//
// The content's RangeReadCloser method must work: ServeHTTP gives a range,
// and the caller will give the reader for that range.
//
// If the caller has set w's ETag header formatted per RFC 7232, section 2.3,
// ServeHTTP uses it to handle requests using If-Match, If-None-Match, or If-Range.
func ServeHTTP(w http.ResponseWriter, r *http.Request, name string, modTime time.Time, size int64, RangeReaderFunc model.RangeReaderFunc) {
    setLastModified(w, modTime)
    done, rangeReq := checkPreconditions(w, r, modTime)
    if done {
        return
    }

    if size < 0 {
        // since too many functions need the file size to work,
        // support for unknown file size is not implemented here
        http.Error(w, "negative content size not supported", http.StatusInternalServerError)
        return
    }

    code := http.StatusOK

    // If Content-Type isn't set, use the file's extension to find it, but
    // if the Content-Type is unset explicitly, do not sniff the type.
    contentTypes, haveType := w.Header()["Content-Type"]
    var contentType string
    if !haveType {
        contentType = mime.TypeByExtension(filepath.Ext(name))
        if contentType == "" {
            // most modern applications can handle the default content type
            contentType = "application/octet-stream"
        }
        w.Header().Set("Content-Type", contentType)
    } else if len(contentTypes) > 0 {
        contentType = contentTypes[0]
    }

    // handle the Content-Range header.
    sendSize := size
    var sendContent io.ReadCloser
    ranges, err := http_range.ParseRange(rangeReq, size)
    switch err {
    case nil:
    case http_range.ErrNoOverlap:
        if size == 0 {
            // Some clients add a Range header to all requests to
            // limit the size of the response. If the file is empty,
            // ignore the range header and respond with a 200 rather
            // than a 416.
            ranges = nil
            break
        }
        w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", size))
        fallthrough
    default:
        http.Error(w, err.Error(), http.StatusRequestedRangeNotSatisfiable)
        return
    }

    if sumRangesSize(ranges) > size || size < 0 {
        // The total number of bytes in all the ranges is larger than the
        // size of the file, or the file size is unknown: ignore the range
        // request.
        ranges = nil
    }
    switch {
    case len(ranges) == 0:
        reader, err := RangeReaderFunc(http_range.Range{Start: 0, Length: -1})
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        sendContent = reader
    case len(ranges) == 1:
        // RFC 7233, Section 4.1:
        // "If a single part is being transferred, the server
        // generating the 206 response MUST generate a
        // Content-Range header field, describing what range
        // of the selected representation is enclosed, and a
        // payload consisting of the range.
        // ...
        // A server MUST NOT generate a multipart response to
        // a request for a single range, since a client that
        // does not request multiple parts might not support
        // multipart responses."
        ra := ranges[0]
        sendContent, err = RangeReaderFunc(ra)
        if err != nil {
            http.Error(w, err.Error(), http.StatusRequestedRangeNotSatisfiable)
            return
        }
        sendSize = ra.Length
        code = http.StatusPartialContent
        w.Header().Set("Content-Range", ra.ContentRange(size))
    case len(ranges) > 1:
        sendSize, err = rangesMIMESize(ranges, contentType, size)
        if err != nil {
            http.Error(w, err.Error(), http.StatusRequestedRangeNotSatisfiable)
            return
        }
        code = http.StatusPartialContent

        pr, pw := io.Pipe()
        mw := multipart.NewWriter(pw)
        w.Header().Set("Content-Type", "multipart/byteranges; boundary="+mw.Boundary())
        sendContent = pr
        defer pr.Close() // cause the writing goroutine to fail and exit if CopyN doesn't finish.
        go func() {
            for _, ra := range ranges {
                part, err := mw.CreatePart(ra.MimeHeader(contentType, size))
                if err != nil {
                    pw.CloseWithError(err)
                    return
                }
                reader, err := RangeReaderFunc(ra)
                if err != nil {
                    pw.CloseWithError(err)
                    return
                }
                if _, err := io.CopyN(part, reader, ra.Length); err != nil {
                    pw.CloseWithError(err)
                    return
                }
            }

            mw.Close()
            pw.Close()
        }()
    }

    w.Header().Set("Accept-Ranges", "bytes")
    if w.Header().Get("Content-Encoding") == "" {
        w.Header().Set("Content-Length", strconv.FormatInt(sendSize, 10))
    }

    w.WriteHeader(code)

    if r.Method != "HEAD" {
        written, err := io.CopyN(w, sendContent, sendSize)
        if err != nil {
            log.Warnf("ServeHttp error. err: %s ", err)
            if written != sendSize {
                log.Warnf("Maybe the size is incorrect, the reader is not giving correct/full data, or the connection closed before finish. written bytes: %d, sendSize: %d", written, sendSize)
            }
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }
}

func ProcessHeader(origin, override *http.Header) *http.Header {
    result := http.Header{}
    // client headers
    for h, val := range *origin {
        if utils.SliceContains(conf.SlicesMap[conf.ProxyIgnoreHeaders], strings.ToLower(h)) {
            continue
        }
        result[h] = val
    }
    // needed headers
    for h, val := range *override {
        result[h] = val
    }
    return &result
}
// RequestHttp deals with the Header properly, then sends the request
func RequestHttp(httpMethod string, headerOverride *http.Header, URL string) (*http.Response, error) {
    req, err := http.NewRequest(httpMethod, URL, nil)
    if err != nil {
        return nil, err
    }
    req.Header = *headerOverride
    log.Debugln("request Header: ", req.Header)
    log.Debugln("request URL: ", URL)
    res, err := HttpClient().Do(req)
    if err != nil {
        return nil, err
    }
    log.Debugf("response status: %d", res.StatusCode)
    log.Debugln("response Header: ", res.Header)
    // TODO clean header with blocklist or passlist
    res.Header.Del("set-cookie")
    if res.StatusCode >= 400 {
        all, _ := io.ReadAll(res.Body)
        msg := string(all)
        log.Debugln(msg)
        return res, errors.New(msg)
    }

    return res, nil
}

var once sync.Once
var httpClient *http.Client

func HttpClient() *http.Client {
    once.Do(func() {
        httpClient = base.NewHttpClient()
        httpClient.CheckRedirect = func(req *http.Request, via []*http.Request) error {
            if len(via) >= 10 {
                return errors.New("stopped after 10 redirects")
            }
            req.Header.Del("Referer")
            return nil
        }
    })
    return httpClient
}
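ServeHTTP above re-implements much of net/http.ServeContent for sources that are range-readable but not seekable. When the content is an io.ReadSeeker, the stdlib handler already covers Range, If-Range, preconditions and Content-Type, which this self-contained sketch demonstrates:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

// serveRange serves data through the stdlib range-aware handler and returns
// the response status, Content-Range header and body.
func serveRange(data, rangeHeader string) (int, string, string) {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// http.ServeContent handles Range/If-Range/preconditions for any
		// io.ReadSeeker; ServeHTTP in this package does the same for
		// non-seekable remote sources read through a RangeReaderFunc.
		http.ServeContent(w, r, "data.bin", time.Now(), strings.NewReader(data))
	})
	req := httptest.NewRequest("GET", "/", nil)
	req.Header.Set("Range", rangeHeader)
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	return rec.Code, rec.Header().Get("Content-Range"), rec.Body.String()
}

func main() {
	code, cr, body := serveRange("0123456789abcdef", "bytes=2-5")
	fmt.Println(code, cr, body) // 206 bytes 2-5/16 2345
}
```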
339	internal/net/util.go	Normal file
@@ -0,0 +1,339 @@
package net

import (
    "fmt"
    "io"
    "math"
    "mime/multipart"
    "net/http"
    "net/textproto"
    "strings"
    "time"

    "github.com/alist-org/alist/v3/pkg/http_range"
    log "github.com/sirupsen/logrus"
)

// scanETag determines if a syntactically valid ETag is present at s. If so,
// the ETag and remaining text after consuming ETag is returned. Otherwise,
// it returns "", "".
func scanETag(s string) (etag string, remain string) {
    s = textproto.TrimString(s)
    start := 0
    if strings.HasPrefix(s, "W/") {
        start = 2
    }
    if len(s[start:]) < 2 || s[start] != '"' {
        return "", ""
    }
    // ETag is either W/"text" or "text".
    // See RFC 7232 2.3.
    for i := start + 1; i < len(s); i++ {
        c := s[i]
        switch {
        // Character values allowed in ETags.
        case c == 0x21 || c >= 0x23 && c <= 0x7E || c >= 0x80:
        case c == '"':
            return s[:i+1], s[i+1:]
        default:
            return "", ""
        }
    }
    return "", ""
}

// etagStrongMatch reports whether a and b match using strong ETag comparison.
// Assumes a and b are valid ETags.
func etagStrongMatch(a, b string) bool {
    return a == b && a != "" && a[0] == '"'
}

// etagWeakMatch reports whether a and b match using weak ETag comparison.
// Assumes a and b are valid ETags.
func etagWeakMatch(a, b string) bool {
    return strings.TrimPrefix(a, "W/") == strings.TrimPrefix(b, "W/")
}
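Per RFC 7232 section 2.3, strong comparison requires byte-equal, non-weak ETags, while weak comparison ignores the W/ prefix. A standalone demo of the distinction, using local copies of the two helpers above:

```go
package main

import (
	"fmt"
	"strings"
)

// strongMatch: identical and not weak (must start with a quote).
func strongMatch(a, b string) bool {
	return a == b && a != "" && a[0] == '"'
}

// weakMatch: equal after stripping any W/ prefix.
func weakMatch(a, b string) bool {
	return strings.TrimPrefix(a, "W/") == strings.TrimPrefix(b, "W/")
}

func main() {
	fmt.Println(strongMatch(`"v1"`, `"v1"`))   // true
	fmt.Println(strongMatch(`W/"v1"`, `"v1"`)) // false: weak tags never match strongly
	fmt.Println(weakMatch(`W/"v1"`, `"v1"`))   // true
}
```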
// condResult is the result of an HTTP request precondition check.
// See https://tools.ietf.org/html/rfc7232 section 3.
type condResult int

const (
    condNone condResult = iota
    condTrue
    condFalse
)

func checkIfMatch(w http.ResponseWriter, r *http.Request) condResult {
    im := r.Header.Get("If-Match")
    if im == "" {
        return condNone
    }
    for {
        im = textproto.TrimString(im)
        if len(im) == 0 {
            break
        }
        if im[0] == ',' {
            im = im[1:]
            continue
        }
        if im[0] == '*' {
            return condTrue
        }
        etag, remain := scanETag(im)
        if etag == "" {
            break
        }
        if etagStrongMatch(etag, w.Header().Get("Etag")) {
            return condTrue
        }
        im = remain
    }

    return condFalse
}

func checkIfUnmodifiedSince(r *http.Request, modtime time.Time) condResult {
    ius := r.Header.Get("If-Unmodified-Since")
    if ius == "" || isZeroTime(modtime) {
        return condNone
    }
    t, err := http.ParseTime(ius)
    if err != nil {
        return condNone
    }

    // The Last-Modified header truncates sub-second precision, so
    // the modtime needs to be truncated too.
    modtime = modtime.Truncate(time.Second)
    if ret := modtime.Compare(t); ret <= 0 {
        return condTrue
    }
    return condFalse
}

func checkIfNoneMatch(w http.ResponseWriter, r *http.Request) condResult {
    inm := r.Header.Get("If-None-Match")
    if inm == "" {
        return condNone
    }
    buf := inm
    for {
        buf = textproto.TrimString(buf)
        if len(buf) == 0 {
            break
        }
        if buf[0] == ',' {
            buf = buf[1:]
            continue
        }
        if buf[0] == '*' {
            return condFalse
        }
        etag, remain := scanETag(buf)
        if etag == "" {
            break
        }
        if etagWeakMatch(etag, w.Header().Get("Etag")) {
            return condFalse
        }
        buf = remain
    }
    return condTrue
}

func checkIfModifiedSince(r *http.Request, modtime time.Time) condResult {
    if r.Method != "GET" && r.Method != "HEAD" {
        return condNone
    }
    ims := r.Header.Get("If-Modified-Since")
    if ims == "" || isZeroTime(modtime) {
        return condNone
    }
    t, err := http.ParseTime(ims)
    if err != nil {
        return condNone
    }
    // The Last-Modified header truncates sub-second precision, so
    // the modtime needs to be truncated too.
    modtime = modtime.Truncate(time.Second)
    if ret := modtime.Compare(t); ret <= 0 {
        return condFalse
    }
    return condTrue
}

func checkIfRange(w http.ResponseWriter, r *http.Request, modtime time.Time) condResult {
    if r.Method != "GET" && r.Method != "HEAD" {
        return condNone
    }
    ir := r.Header.Get("If-Range")
    if ir == "" {
        return condNone
    }
    etag, _ := scanETag(ir)
    if etag != "" {
        if etagStrongMatch(etag, w.Header().Get("Etag")) {
            return condTrue
        }
        return condFalse
    }
    // The If-Range value is typically the ETag value, but it may also be
    // the modtime date. See golang.org/issue/8367.
    if modtime.IsZero() {
        return condFalse
    }
    t, err := http.ParseTime(ir)
    if err != nil {
        return condFalse
    }
    if t.Unix() == modtime.Unix() {
        return condTrue
    }
    return condFalse
}

var unixEpochTime = time.Unix(0, 0)

// isZeroTime reports whether t is obviously unspecified (either zero or Unix()=0).
func isZeroTime(t time.Time) bool {
    return t.IsZero() || t.Equal(unixEpochTime)
}

func setLastModified(w http.ResponseWriter, modtime time.Time) {
    if !isZeroTime(modtime) {
        w.Header().Set("Last-Modified", modtime.UTC().Format(http.TimeFormat))
    }
}

func writeNotModified(w http.ResponseWriter) {
    // RFC 7232 section 4.1:
    // a sender SHOULD NOT generate representation metadata other than the
    // above listed fields unless said metadata exists for the purpose of
    // guiding cache updates (e.g., Last-Modified might be useful if the
|
||||||
|
// response does not have an ETag field).
|
||||||
|
h := w.Header()
|
||||||
|
delete(h, "Content-Type")
|
||||||
|
delete(h, "Content-Length")
|
||||||
|
delete(h, "Content-Encoding")
|
||||||
|
if h.Get("Etag") != "" {
|
||||||
|
delete(h, "Last-Modified")
|
||||||
|
}
|
||||||
|
w.WriteHeader(http.StatusNotModified)
|
||||||
|
}
|
||||||
|
|
||||||
|
// checkPreconditions evaluates request preconditions and reports whether a precondition
|
||||||
|
// resulted in sending StatusNotModified or StatusPreconditionFailed.
|
||||||
|
func checkPreconditions(w http.ResponseWriter, r *http.Request, modtime time.Time) (done bool, rangeHeader string) {
|
||||||
|
// This function carefully follows RFC 7232 section 6.
|
||||||
|
ch := checkIfMatch(w, r)
|
||||||
|
if ch == condNone {
|
||||||
|
ch = checkIfUnmodifiedSince(r, modtime)
|
||||||
|
}
|
||||||
|
if ch == condFalse {
|
||||||
|
w.WriteHeader(http.StatusPreconditionFailed)
|
||||||
|
return true, ""
|
||||||
|
}
|
||||||
|
switch checkIfNoneMatch(w, r) {
|
||||||
|
case condFalse:
|
||||||
|
if r.Method == "GET" || r.Method == "HEAD" {
|
||||||
|
writeNotModified(w)
|
||||||
|
return true, ""
|
||||||
|
}
|
||||||
|
w.WriteHeader(http.StatusPreconditionFailed)
|
||||||
|
return true, ""
|
||||||
|
case condNone:
|
||||||
|
if checkIfModifiedSince(r, modtime) == condFalse {
|
||||||
|
writeNotModified(w)
|
||||||
|
return true, ""
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
rangeHeader = r.Header.Get("Range")
|
||||||
|
if rangeHeader != "" && checkIfRange(w, r, modtime) == condFalse {
|
||||||
|
rangeHeader = ""
|
||||||
|
}
|
||||||
|
return false, rangeHeader
|
||||||
|
}
|
||||||
|
|
||||||
|
func sumRangesSize(ranges []http_range.Range) (size int64) {
|
||||||
|
for _, ra := range ranges {
|
||||||
|
size += ra.Length
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// countingWriter counts how many bytes have been written to it.
|
||||||
|
type countingWriter int64
|
||||||
|
|
||||||
|
func (w *countingWriter) Write(p []byte) (n int, err error) {
|
||||||
|
*w += countingWriter(len(p))
|
||||||
|
return len(p), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// rangesMIMESize returns the number of bytes it takes to encode the
|
||||||
|
// provided ranges as a multipart response.
|
||||||
|
func rangesMIMESize(ranges []http_range.Range, contentType string, contentSize int64) (encSize int64, err error) {
|
||||||
|
var w countingWriter
|
||||||
|
mw := multipart.NewWriter(&w)
|
||||||
|
for _, ra := range ranges {
|
||||||
|
_, err := mw.CreatePart(ra.MimeHeader(contentType, contentSize))
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
encSize += ra.Length
|
||||||
|
}
|
||||||
|
err = mw.Close()
|
||||||
|
if err != nil {
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
encSize += int64(w)
|
||||||
|
return encSize, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// LimitedReadCloser wraps a io.ReadCloser and limits the number of bytes that can be read from it.
|
||||||
|
type LimitedReadCloser struct {
|
||||||
|
rc io.ReadCloser
|
||||||
|
remaining int
|
||||||
|
}
|
||||||
|
|
||||||
|
func (l *LimitedReadCloser) Read(buf []byte) (int, error) {
|
||||||
|
if l.remaining <= 0 {
|
||||||
|
return 0, io.EOF
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(buf) > l.remaining {
|
||||||
|
buf = buf[0:l.remaining]
|
||||||
|
}
|
||||||
|
|
||||||
|
n, err := l.rc.Read(buf)
|
||||||
|
l.remaining -= n
|
||||||
|
|
||||||
|
return n, err
|
||||||
|
}
|
||||||
|
|
||||||
|
func (l *LimitedReadCloser) Close() error {
|
||||||
|
return l.rc.Close()
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetRangedHttpReader some http server doesn't support "Range" header,
|
||||||
|
// so this function read readCloser with whole data, skip offset, then return ReaderCloser.
|
||||||
|
func GetRangedHttpReader(readCloser io.ReadCloser, offset, length int64) (io.ReadCloser, error) {
|
||||||
|
var length_int int
|
||||||
|
if length > math.MaxInt {
|
||||||
|
return nil, fmt.Errorf("doesnot support length bigger than int32 max ")
|
||||||
|
}
|
||||||
|
length_int = int(length)
|
||||||
|
|
||||||
|
if offset > 100*1024*1024 {
|
||||||
|
log.Warnf("offset is more than 100MB, if loading data from internet, high-latency and wasting of bandwith is expected")
|
||||||
|
}
|
||||||
|
|
||||||
|
if _, err := io.Copy(io.Discard, io.LimitReader(readCloser, offset)); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// return an io.ReadCloser that is limited to `length` bytes.
|
||||||
|
return &LimitedReadCloser{readCloser, length_int}, nil
|
||||||
|
}
|
@@ -10,21 +10,21 @@ import (
 	"github.com/pkg/errors"
 )
 
-type New func() driver.Driver
+type DriverConstructor func() driver.Driver
 
-var driverNewMap = map[string]New{}
+var driverMap = map[string]DriverConstructor{}
 var driverInfoMap = map[string]driver.Info{}
 
-func RegisterDriver(driver New) {
+func RegisterDriver(driver DriverConstructor) {
 	// log.Infof("register driver: [%s]", config.Name)
 	tempDriver := driver()
 	tempConfig := tempDriver.Config()
 	registerDriverItems(tempConfig, tempDriver.GetAddition())
-	driverNewMap[tempConfig.Name] = driver
+	driverMap[tempConfig.Name] = driver
 }
 
-func GetDriverNew(name string) (New, error) {
-	n, ok := driverNewMap[name]
+func GetDriver(name string) (DriverConstructor, error) {
+	n, ok := driverMap[name]
 	if !ok {
 		return nil, errors.Errorf("no driver named: %s", name)
 	}
 
@@ -243,7 +243,7 @@ func Link(ctx context.Context, storage driver.Driver, path string, args model.Li
 	if file.IsDir() {
 		return nil, nil, errors.WithStack(errs.NotFile)
 	}
-	key := Key(storage, path) + ":" + args.IP
+	key := Key(storage, path)
 	if link, ok := linkCache.Get(key); ok {
 		return link, file, nil
 	}
@@ -253,6 +253,9 @@ func Link(ctx context.Context, storage driver.Driver, path string, args model.Li
 		return nil, errors.Wrapf(err, "failed get link")
 	}
 	if link.Expiration != nil {
+		if link.IPCacheKey {
+			key = key + ":" + args.IP
+		}
 		linkCache.Set(key, link, cache.WithEx[*model.Link](*link.Expiration))
 	}
 	return link, nil
@@ -563,6 +566,9 @@ func Put(ctx context.Context, storage driver.Driver, dstDirPath string, file *mo
 		err := Remove(ctx, storage, tempPath)
 		if err != nil {
 			return err
+		} else {
+			key := Key(storage, stdpath.Join(dstDirPath, file.GetName()))
+			linkCache.Del(key)
 		}
 	}
 }
@@ -1,11 +1,11 @@
 package op
 
 import (
+	"github.com/alist-org/alist/v3/internal/errs"
 	"strings"
 
 	"github.com/alist-org/alist/v3/internal/driver"
 	"github.com/alist-org/alist/v3/pkg/utils"
-	"github.com/pkg/errors"
 	log "github.com/sirupsen/logrus"
 )
 
@@ -16,10 +16,10 @@ func GetStorageAndActualPath(rawPath string) (storage driver.Driver, actualPath
 	storage = GetBalancedStorage(rawPath)
 	if storage == nil {
 		if rawPath == "/" {
-			err = errors.New("please add a storage first.")
+			err = errs.NewErr(errs.StorageNotFound, "please add a storage first")
 			return
 		}
-		err = errors.Errorf("can't find storage with rawPath: %s", rawPath)
+		err = errs.NewErr(errs.StorageNotFound, "rawPath: %s", rawPath)
 		return
 	}
 	log.Debugln("use storage: ", storage.GetStorage().MountPath)
 
@@ -46,7 +46,7 @@ func CreateStorage(ctx context.Context, storage model.Storage) (uint, error) {
 	var err error
 	// check driver first
 	driverName := storage.Driver
-	driverNew, err := GetDriverNew(driverName)
+	driverNew, err := GetDriver(driverName)
 	if err != nil {
 		return 0, errors.WithMessage(err, "failed get driver new")
 	}
@@ -71,7 +71,7 @@ func LoadStorage(ctx context.Context, storage model.Storage) error {
 	storage.MountPath = utils.FixAndCleanPath(storage.MountPath)
 	// check driver first
 	driverName := storage.Driver
-	driverNew, err := GetDriverNew(driverName)
+	driverNew, err := GetDriver(driverName)
 	if err != nil {
 		return errors.WithMessage(err, "failed get driver new")
 	}
 
@@ -113,3 +113,18 @@ func Cancel2FAById(id uint) error {
 	}
 	return Cancel2FAByUser(user)
 }
+
+func DelUserCache(username string) error {
+	user, err := GetUserByName(username)
+	if err != nil {
+		return err
+	}
+	if user.IsAdmin() {
+		adminUser = nil
+	}
+	if user.IsGuest() {
+		guestUser = nil
+	}
+	userCache.Del(username)
+	return nil
+}
 
@@ -3,12 +3,13 @@ package qbittorrent
 import (
 	"bytes"
 	"errors"
-	"github.com/alist-org/alist/v3/pkg/utils"
 	"io"
 	"mime/multipart"
 	"net/http"
 	"net/http/cookiejar"
 	"net/url"
+
+	"github.com/alist-org/alist/v3/pkg/utils"
 )
 
 type Client interface {
@@ -213,7 +214,7 @@ type TorrentInfo struct {
 	Hash           string  `json:"hash"`             //
 	LastActivity   int     `json:"last_activity"`    // time of last activity (Unix Epoch)
 	MagnetURI      string  `json:"magnet_uri"`       // Magnet URI corresponding to this torrent
-	MaxRatio       int     `json:"max_ratio"`        // maximum share ratio before seeding/uploading stops
+	MaxRatio       float64 `json:"max_ratio"`        // maximum share ratio before seeding/uploading stops
 	MaxSeedingTime int     `json:"max_seeding_time"` // maximum seeding time (seconds) before the torrent stops seeding
 	Name           string  `json:"name"`             //
 	NumComplete    int     `json:"num_complete"`     //
@@ -4,6 +4,8 @@ import (
 	"context"
 	"os"
 
+	query2 "github.com/blevesearch/bleve/v2/search/query"
+
 	"github.com/alist-org/alist/v3/internal/conf"
 	"github.com/alist-org/alist/v3/internal/errs"
 	"github.com/alist-org/alist/v3/internal/model"
 
@@ -24,9 +26,19 @@ func (b *Bleve) Config() searcher.Config {
 }
 
 func (b *Bleve) Search(ctx context.Context, req model.SearchReq) ([]model.SearchNode, int64, error) {
+	var queries []query2.Query
 	query := bleve.NewMatchQuery(req.Keywords)
 	query.SetField("name")
-	search := bleve.NewSearchRequest(query)
+	queries = append(queries, query)
+	if req.Scope != 0 {
+		isDir := req.Scope == 1
+		isDirQuery := bleve.NewBoolFieldQuery(isDir)
+		queries = append(queries, isDirQuery)
+	}
+	reqQuery := bleve.NewConjunctionQuery(queries...)
+	search := bleve.NewSearchRequest(reqQuery)
+	search.SortBy([]string{"name"})
+	search.From = (req.Page - 1) * req.PerPage
 	search.Size = req.PerPage
 	search.Fields = []string{"*"}
 	searchResults, err := b.BIndex.Search(search)
 
@@ -42,7 +54,7 @@ func (b *Bleve) Search(ctx context.Context, req model.SearchReq) ([]model.Search
 			Size: int64(src.Fields["size"].(float64)),
 		}, nil
 	})
-	return res, int64(len(res)), nil
+	return res, int64(searchResults.Total), nil
 }
 
 func (b *Bleve) Index(ctx context.Context, node model.SearchNode) error {
 
@@ -4,6 +4,7 @@ package http_range
 import (
 	"errors"
 	"fmt"
+	"net/http"
 	"net/textproto"
 	"strconv"
 	"strings"
 
@@ -12,7 +13,7 @@ import (
 // Range specifies the byte range to be sent to the client.
 type Range struct {
 	Start  int64
-	Length int64
+	Length int64 // limit of bytes to read, -1 for unlimited
 }
 
 // ContentRange returns Content-Range header value.
 
@@ -22,7 +23,7 @@ func (r Range) ContentRange(size int64) string {
 
 var (
 	// ErrNoOverlap is returned by ParseRange if first-byte-pos of
-	// all of the byte-range-spec values is greater than the content size.
+	// all the byte-range-spec values is greater than the content size.
 	ErrNoOverlap = errors.New("invalid range: failed to overlap")
 
 	// ErrInvalid is returned by ParseRange on invalid input.
 
@@ -105,3 +106,33 @@ func ParseRange(s string, size int64) ([]Range, error) { // nolint:gocognit
 	}
 	return ranges, nil
 }
+
+func (r Range) MimeHeader(contentType string, size int64) textproto.MIMEHeader {
+	return textproto.MIMEHeader{
+		"Content-Range": {r.contentRange(size)},
+		"Content-Type":  {contentType},
+	}
+}
+
+// for http response header
+func (r Range) contentRange(size int64) string {
+	return fmt.Sprintf("bytes %d-%d/%d", r.Start, r.Start+r.Length-1, size)
+}
+
+// ApplyRangeToHttpHeader for http request header
+func ApplyRangeToHttpHeader(p Range, headerRef *http.Header) *http.Header {
+	header := headerRef
+	if header == nil {
+		header = &http.Header{}
+	}
+	if p.Start == 0 && p.Length < 0 {
+		header.Del("Range")
+	} else {
+		end := ""
+		if p.Length >= 0 {
+			end = strconv.FormatInt(p.Start+p.Length-1, 10)
+		}
+		header.Set("Range", fmt.Sprintf("bytes=%v-%v", p.Start, end))
+	}
+	return header
+}
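The header arithmetic in the ApplyRangeToHttpHeader hunk above (last byte = Start+Length-1, with the end position left empty for an open-ended range) can be sketched independently. The helper name here is illustrative, not part of the project:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// rangeValue formats an HTTP Range request value the same way as the
// helper above: "bytes=start-end", with end omitted when length < 0.
func rangeValue(start, length int64) string {
	end := ""
	if length >= 0 {
		end = strconv.FormatInt(start+length-1, 10)
	}
	return fmt.Sprintf("bytes=%v-%v", start, end)
}

func main() {
	h := http.Header{}
	h.Set("Range", rangeValue(100, 50)) // bytes 100..149 inclusive
	fmt.Println(h.Get("Range"))         // prints "bytes=100-149"
	fmt.Println(rangeValue(200, -1))    // open-ended: prints "bytes=200-"
}
```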
@@ -2,6 +2,7 @@ package utils
 
 import (
 	"fmt"
+	"github.com/alist-org/alist/v3/internal/errs"
 	"io"
 	"mime"
 	"os"
 
@@ -111,7 +112,7 @@ func CreateNestedFile(path string) (*os.File, error) {
 }
 
 // CreateTempFile create temp file from io.ReadCloser, and seek to 0
-func CreateTempFile(r io.ReadCloser) (*os.File, error) {
+func CreateTempFile(r io.ReadCloser, size int64) (*os.File, error) {
 	if f, ok := r.(*os.File); ok {
 		return f, nil
 	}
 
@@ -119,15 +120,19 @@ func CreateTempFile(r io.ReadCloser, size int64) (*os.File, error) {
 	if err != nil {
 		return nil, err
 	}
-	_, err = io.Copy(f, r)
+	readBytes, err := io.Copy(f, r)
 	if err != nil {
 		_ = os.Remove(f.Name())
-		return nil, err
+		return nil, errs.NewErr(err, "CreateTempFile failed")
+	}
+	if size != 0 && readBytes != size {
+		_ = os.Remove(f.Name())
+		return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size = %d, expect = %d", readBytes, size)
 	}
 	_, err = f.Seek(0, io.SeekStart)
 	if err != nil {
 		_ = os.Remove(f.Name())
-		return nil, err
+		return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0")
 	}
 	return f, nil
 }
@@ -1,114 +0,0 @@
-package utils
-
-import (
-	"context"
-	"reflect"
-	"time"
-)
-
-func LimitRateReflect(f interface{}, interval time.Duration) func(...interface{}) []interface{} {
-	// Use closures to save the time of the last function call
-	var lastCall time.Time
-
-	fValue := reflect.ValueOf(f)
-	fType := fValue.Type()
-
-	if fType.Kind() != reflect.Func {
-		panic("f must be a function")
-	}
-
-	//if fType.NumOut() == 0 {
-	//	panic("f must have at least one output parameter")
-	//}
-
-	outCount := fType.NumOut()
-	outTypes := make([]reflect.Type, outCount)
-
-	for i := 0; i < outCount; i++ {
-		outTypes[i] = fType.Out(i)
-	}
-
-	// Returns a new function, which is used to limit the function to be called only once at a specified time interval
-	return func(args ...interface{}) []interface{} {
-		// Calculate the time interval since the last function call
-		elapsed := time.Since(lastCall)
-		// If the interval is less than the specified time, wait for the remaining time
-		if elapsed < interval {
-			time.Sleep(interval - elapsed)
-		}
-		// Update the time of the last function call
-		lastCall = time.Now()
-
-		inCount := fType.NumIn()
-		in := make([]reflect.Value, inCount)
-
-		if len(args) != inCount {
-			panic("wrong number of arguments")
-		}
-
-		for i := 0; i < inCount; i++ {
-			in[i] = reflect.ValueOf(args[i])
-		}
-
-		out := fValue.Call(in)
-
-		if len(out) != outCount {
-			panic("function returned wrong number of values")
-		}
-
-		result := make([]interface{}, outCount)
-
-		for i := 0; i < outCount; i++ {
-			result[i] = out[i].Interface()
-		}
-
-		return result
-	}
-}
-
-type Fn[T any, R any] func(T) (R, error)
-type FnCtx[T any, R any] func(context.Context, T) (R, error)
-
-func LimitRate[T any, R any](f Fn[T, R], interval time.Duration) Fn[T, R] {
-	// Use closures to save the time of the last function call
-	var lastCall time.Time
-	// Returns a new function, which is used to limit the function to be called only once at a specified time interval
-	return func(t T) (R, error) {
-		// Calculate the time interval since the last function call
-		elapsed := time.Since(lastCall)
-		// If the interval is less than the specified time, wait for the remaining time
-		if elapsed < interval {
-			time.Sleep(interval - elapsed)
-		}
-		// Update the time of the last function call
-		lastCall = time.Now()
-		// Execute the function that needs to be limited
-		return f(t)
-	}
-}
-
-func LimitRateCtx[T any, R any](f FnCtx[T, R], interval time.Duration) FnCtx[T, R] {
-	// Use closures to save the time of the last function call
-	var lastCall time.Time
-	// Returns a new function, which is used to limit the function to be called only once at a specified time interval
-	return func(ctx context.Context, t T) (R, error) {
-		// Calculate the time interval since the last function call
-		elapsed := time.Since(lastCall)
-		// If the interval is less than the specified time, wait for the remaining time
-		if elapsed < interval {
-			t := time.NewTimer(interval - elapsed)
-			select {
-			case <-ctx.Done():
-				t.Stop()
-				var zero R
-				return zero, ctx.Err()
-			case <-t.C:
-			}
-		}
-		// Update the time of the last function call
-		lastCall = time.Now()
-		// Execute the function that needs to be limited
-		return f(ctx, t)
-	}
-}
@@ -1,59 +0,0 @@
-package utils_test
-
-import (
-	"context"
-	"testing"
-	"time"
-
-	"github.com/alist-org/alist/v3/pkg/utils"
-)
-
-func myFunction(a int) (int, error) {
-	// do something
-	return a + 1, nil
-}
-
-func TestLimitRate(t *testing.T) {
-	myLimitedFunction := utils.LimitRate(myFunction, time.Second)
-	result, _ := myLimitedFunction(1)
-	t.Log(result) // Output: 2
-	result, _ = myLimitedFunction(2)
-	t.Log(result) // Output: 3
-}
-
-type Test struct {
-	limitFn func(string) (string, error)
-}
-
-func (t *Test) myFunction(a string) (string, error) {
-	// do something
-	return a + " world", nil
-}
-
-func TestLimitRateStruct(t *testing.T) {
-	test := &Test{}
-	test.limitFn = utils.LimitRate(test.myFunction, time.Second)
-	result, _ := test.limitFn("hello")
-	t.Log(result) // Output: hello world
-	result, _ = test.limitFn("hi")
-	t.Log(result) // Output: hi world
-}
-
-func myFunctionCtx(ctx context.Context, a int) (int, error) {
-	// do something
-	return a + 1, nil
-}
-func TestLimitRateCtx(t *testing.T) {
-	myLimitedFunction := utils.LimitRateCtx(myFunctionCtx, time.Second)
-	result, _ := myLimitedFunction(context.Background(), 1)
-	t.Log(result) // Output: 2
-	ctx, cancel := context.WithCancel(context.Background())
-	go func() {
-		time.Sleep(500 * time.Millisecond)
-		cancel()
-	}()
-	result, err := myLimitedFunction(ctx, 2)
-	t.Log(result, err) // Output: 0 context canceled
-	result, _ = myLimitedFunction(context.Background(), 3)
-	t.Log(result) // Output: 4
-}
@@ -3,7 +3,10 @@ package utils
 import (
 	"bytes"
 	"context"
+	"fmt"
+	log "github.com/sirupsen/logrus"
 	"io"
+	"time"
 )
 
 // here is some syntaxic sugar inspired by the Tomas Senart's video,
 
@@ -135,3 +138,50 @@ func (mr *MultiReadable) Close() error {
 	}
 	return nil
 }
+
+type nopCloser struct {
+	io.ReadSeeker
+}
+
+func (nopCloser) Close() error { return nil }
+
+func ReadSeekerNopCloser(r io.ReadSeeker) io.ReadSeekCloser {
+	return nopCloser{r}
+}
+
+func Retry(attempts int, sleep time.Duration, f func() error) (err error) {
+	for i := 0; i < attempts; i++ {
+		fmt.Println("This is attempt number", i)
+		if i > 0 {
+			log.Println("retrying after error:", err)
+			time.Sleep(sleep)
+			sleep *= 2
+		}
+		err = f()
+		if err == nil {
+			return nil
+		}
+	}
+	return fmt.Errorf("after %d attempts, last error: %s", attempts, err)
+}
+
+type Closers struct {
+	closers []*io.Closer
+}
+
+func (c *Closers) Close() (err error) {
+	for _, closer := range c.closers {
+		if closer != nil {
+			_ = (*closer).Close()
+		}
+	}
+	return nil
+}
+func (c *Closers) Add(closer io.Closer) {
+	if closer != nil {
+		c.closers = append(c.closers, &closer)
+	}
+}
+func NewClosers() *Closers {
+	return &Closers{[]*io.Closer{}}
+}
Some files were not shown because too many files have changed in this diff.