Mirror of https://github.com/meilisearch/meilisearch.git (synced 2025-12-06 12:45:42 +00:00)

Compare commits: openapi-co… → revert-554… (1 commit, 54bfe9255c)
Issue template (file name not preserved in this view)
@@ -1,34 +1,31 @@
 ---
-name: New feature issue
+name: New sprint issue
-about: ⚠️ Should only be used by the internal Meili team ⚠️
+about: ⚠️ Should only be used by the engine team ⚠️
 title: ''
-labels: 'impacts docs, impacts integrations'
+labels: 'missing usage in PRD, impacts docs'
 assignees: ''

 ---

 Related product team resources: [PRD]() (_internal only_)
+Related product discussion:

+## Motivation

+<!---Copy/paste the information in PRD or briefly detail the product motivation. Ask product team if any hesitation.-->

 ## Usage

 <!---Link to the public part of the PRD, or to the related product discussion for experimental features-->

-TBD

 ## TODO

 <!---If necessary, create a list with technical/product steps-->

 ### Are you modifying a database?

 - [ ] If not, add the `no db change` label to your PR, and you're good to merge.
 - [ ] If yes, add the `db change` label to your PR. You'll receive a message explaining you what to do.

-### Reminders when adding features

-- [ ] Write unit tests using insta
-- [ ] Write declarative integration tests in [workloads/tests](https://github.com/meilisearch/meilisearch/tree/main/workloads/test). Specify the routes to call and then call `cargo xtask test workloads/tests/YOUR_TEST.json --update-responses` so that responses are automatically filled.

 ### Reminders when modifying the API

 - [ ] Update the openAPI file with utoipa:
@@ -57,5 +54,5 @@ TBD

 ## Impacted teams

-<!---Ping the related teams. Ask on Slack if any hesitation-->
+<!---Ping the related teams. Ask for the engine manager if any hesitation-->
-<!---@meilisearch/docs-team and @meilisearch/integration-team when there is any API change, e.g. settings addition-->
+<!---@meilisearch/docs-team when there is any API change, e.g. settings addition-->
.github/dependabot.yml (1 change)
@@ -7,5 +7,6 @@ updates:
     schedule:
       interval: "monthly"
     labels:
+      - 'skip changelog'
       - 'dependencies'
     rebase-strategy: disabled
.github/pull_request_template.md (all 16 lines removed)
@@ -1,16 +0,0 @@
-## Related issue
-
-Fixes #...
-
-## Requirements
-
-⚠️ Ensure the following requirements before merging ⚠️
-- [ ] Automated tests have been added.
-- [ ] If some tests cannot be automated, manual rigorous tests should be applied.
-- [ ] ⚠️ If there is any change in the DB:
-  - [ ] Test that any impacted DB still works as expected after using `--experimental-dumpless-upgrade` on a DB created with the last released Meilisearch
-  - [ ] Test that during the upgrade, **search is still available** (artificially make the upgrade longer if needed)
-  - [ ] Set the `db change` label.
-- [ ] If necessary, the feature have been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
-- [ ] If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
-- [ ] If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.
.github/templates/dependency-issue.md (all 22 lines removed)
@@ -1,22 +0,0 @@
-This issue is about updating Meilisearch dependencies:
-- [ ] Update Meilisearch dependencies with the help of `cargo +nightly udeps --all-targets` (remove unused dependencies) and `cargo upgrade` (upgrade dependencies versions) - ⚠️ Some repositories may contain subdirectories (like heed, charabia, or deserr). Take care of updating these in the main crate as well. This won't be done automatically by `cargo upgrade`.
-  - [ ] [deserr](https://github.com/meilisearch/deserr)
-  - [ ] [charabia](https://github.com/meilisearch/charabia/)
-  - [ ] [heed](https://github.com/meilisearch/heed/)
-  - [ ] [roaring-rs](https://github.com/RoaringBitmap/roaring-rs/)
-  - [ ] [obkv](https://github.com/meilisearch/obkv)
-  - [ ] [grenad](https://github.com/meilisearch/grenad/)
-  - [ ] [arroy](https://github.com/meilisearch/arroy/)
-  - [ ] [segment](https://github.com/meilisearch/segment)
-  - [ ] [bumparaw-collections](https://github.com/meilisearch/bumparaw-collections)
-  - [ ] [bbqueue](https://github.com/meilisearch/bbqueue)
-  - [ ] Finally, [Meilisearch](https://github.com/meilisearch/MeiliSearch)
-- [ ] If new Rust versions have been released, update the minimal Rust version in use at Meilisearch:
-  - [ ] in this [GitHub Action file](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows/test-suite.yml), by changing the `toolchain` field of the `rustfmt` job to the latest available nightly (of the day before or the current day).
-  - [ ] in every [GitHub Action files](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows), by changing all the `dtolnay/rust-toolchain@` references to use the latest stable version.
-  - [ ] in this [`rust-toolchain.toml`](https://github.com/meilisearch/meilisearch/blob/main/rust-toolchain.toml), by changing the `channel` field to the latest stable version.
-  - [ ] in the [Dockerfile](https://github.com/meilisearch/meilisearch/blob/main/Dockerfile), by changing the base image to `rust:<target_rust_version>-alpine<alpine_version>`. Check that the image exists on [Dockerhub](https://hub.docker.com/_/rust/tags?page=1&name=alpine). Also, build and run the image to check everything still works!
-
-⚠️ This issue should be prioritized to avoid any deprecation and vulnerability issues.
-
-The GitHub action dependencies are managed by [Dependabot](https://github.com/meilisearch/meilisearch/blob/main/.github/dependabot.yml), so no need to update them when solving this issue.
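The removed template above is built around two external cargo plugins. As a hedged sketch of the flow it describes (assuming `cargo-udeps` and `cargo-edit` are installed; exact flags may vary by version):

```sh
# Install the helpers the template relies on (external cargo plugins).
cargo install cargo-udeps cargo-edit

# Find dependencies that are declared but never used, so they can be removed.
cargo +nightly udeps --all-targets

# Bump version requirements in Cargo.toml, then refresh Cargo.lock.
cargo upgrade
cargo update
```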
.github/workflows/bench-manual.yml (4 changes)
@@ -17,8 +17,8 @@ jobs:
     runs-on: benchmarks
     timeout-minutes: 180 # 3h
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal
.github/workflows/bench-pr.yml (6 changes)
@@ -60,13 +60,15 @@ jobs:
       with:
         repo_token: ${{ env.GH_TOKEN }}

-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         if: success()
         with:
           fetch-depth: 0 # fetch full history to be able to get main commit sha
           ref: ${{ steps.comment-branch.outputs.head_ref }}

-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: dtolnay/rust-toolchain@1.85
+        with:
+          profile: minimal

       - name: Run benchmarks on PR ${{ github.event.issue.id }}
         run: |
.github/workflows/bench-push-indexing.yml (6 changes)
@@ -11,8 +11,10 @@ jobs:
     runs-on: benchmarks
     timeout-minutes: 180 # 3h
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
+        with:
+          profile: minimal

       # Run benchmarks
       - name: Run benchmarks - Dataset ${BENCH_NAME} - Branch main - Commit ${{ github.sha }}
.github/workflows/benchmarks-manual.yml (4 changes)
@@ -17,8 +17,8 @@ jobs:
     runs-on: benchmarks
     timeout-minutes: 4320 # 72h
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal
.github/workflows/benchmarks-pr.yml (4 changes)
@@ -44,7 +44,7 @@ jobs:
           exit 1
         fi

-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: dtolnay/rust-toolchain@1.85
        with:
          profile: minimal

@@ -61,7 +61,7 @@ jobs:
       with:
         repo_token: ${{ env.GH_TOKEN }}

-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         if: success()
         with:
           fetch-depth: 0 # fetch full history to be able to get main commit sha
(Further benchmark workflow files — names not preserved in this view — receive the same checkout/toolchain change:)
@@ -15,8 +15,8 @@ jobs:
     runs-on: benchmarks
     timeout-minutes: 4320 # 72h
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal

@@ -14,8 +14,8 @@ jobs:
     name: Run and upload benchmarks
     runs-on: benchmarks
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal

@@ -14,8 +14,8 @@ jobs:
     name: Run and upload benchmarks
     runs-on: benchmarks
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal

@@ -14,8 +14,8 @@ jobs:
     name: Run and upload benchmarks
     runs-on: benchmarks
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
           profile: minimal
.github/workflows/check-valid-milestone.yml (new file, 100 lines added)

name: PR Milestone Check

on:
  pull_request:
    types: [opened, reopened, edited, synchronize, milestoned, demilestoned]
    branches:
      - "main"
      - "release-v*.*.*"

jobs:
  check-milestone:
    name: Check PR Milestone
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Validate PR milestone
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            // Get PR number directly from the event payload
            const prNumber = context.payload.pull_request.number;

            // Get PR details
            const { data: prData } = await github.rest.pulls.get({
              owner: 'meilisearch',
              repo: 'meilisearch',
              pull_number: prNumber
            });

            // Get base branch name
            const baseBranch = prData.base.ref;
            console.log(`Base branch: ${baseBranch}`);

            // Get PR milestone
            const prMilestone = prData.milestone;
            if (!prMilestone) {
              core.setFailed('PR must have a milestone assigned');
              return;
            }
            console.log(`PR milestone: ${prMilestone.title}`);

            // Validate milestone format: vx.y.z
            const milestoneRegex = /^v\d+\.\d+\.\d+$/;
            if (!milestoneRegex.test(prMilestone.title)) {
              core.setFailed(`Milestone "${prMilestone.title}" does not follow the required format vx.y.z`);
              return;
            }

            // For main branch PRs, check if the milestone is the highest one
            if (baseBranch === 'main') {
              // Get all milestones
              const { data: milestones } = await github.rest.issues.listMilestones({
                owner: 'meilisearch',
                repo: 'meilisearch',
                state: 'open',
                sort: 'due_on',
                direction: 'desc'
              });

              // Sort milestones by version number (vx.y.z)
              const sortedMilestones = milestones
                .filter(m => milestoneRegex.test(m.title))
                .sort((a, b) => {
                  const versionA = a.title.substring(1).split('.').map(Number);
                  const versionB = b.title.substring(1).split('.').map(Number);

                  // Compare major version
                  if (versionA[0] !== versionB[0]) return versionB[0] - versionA[0];
                  // Compare minor version
                  if (versionA[1] !== versionB[1]) return versionB[1] - versionA[1];
                  // Compare patch version
                  return versionB[2] - versionA[2];
                });

              if (sortedMilestones.length === 0) {
                core.setFailed('No valid milestones found in the repository. Please create at least one milestone with the format vx.y.z');
                return;
              }

              const highestMilestone = sortedMilestones[0];
              console.log(`Highest milestone: ${highestMilestone.title}`);

              if (prMilestone.title !== highestMilestone.title) {
                core.setFailed(`PRs targeting the main branch must use the highest milestone (${highestMilestone.title}), but this PR uses ${prMilestone.title}`);
                return;
              }
            } else {
              // For release branches, the milestone should match the branch version
              const branchVersion = baseBranch.substring(8); // remove 'release-'
              if (prMilestone.title !== branchVersion) {
                core.setFailed(`PRs targeting release branch "${baseBranch}" must use the matching milestone "${branchVersion}", but this PR uses "${prMilestone.title}"`);
                return;
              }
            }

            console.log('PR milestone validation passed!');
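The `github-script` step above enforces two rules: the milestone title must match `vX.Y.Z`, and PRs targeting `main` must use the highest open milestone. A rough local equivalent, assuming the GitHub CLI is available (`sort -V` reproduces the manual major/minor/patch comparison; `PR_MILESTONE` is a placeholder):

```sh
PR_MILESTONE="v1.2.3"  # placeholder: the milestone set on the PR

# Rule 1: the milestone title must follow the vX.Y.Z format.
[[ "$PR_MILESTONE" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] || { echo "bad milestone format"; exit 1; }

# Rule 2: for PRs against main, the milestone must be the highest open one.
highest=$(gh api "repos/meilisearch/meilisearch/milestones?state=open" --jq '.[].title' \
  | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | sort -V | tail -n 1)

if [[ "$PR_MILESTONE" != "$highest" ]]; then
  echo "PRs targeting main must use the highest milestone ($highest)"
  exit 1
fi
```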
.github/workflows/db-change-comments.yml (6 changes)
@@ -6,7 +6,7 @@ on:

 env:
   MESSAGE: |
     ### Hello, I'm a bot 🤖

     You are receiving this message because you declared that this PR make changes to the Meilisearch database.
     Depending on the nature of the change, additional actions might be required on your part. The following sections detail the additional actions depending on the nature of the change, please copy the relevant section in the description of your PR, and make sure to perform the required actions.
@@ -19,7 +19,6 @@ env:

     - [ ] Detail the change to the DB format and why they are forward compatible
     - [ ] Forward-compatibility: A database created before this PR and using the features touched by this PR was able to be opened by a Meilisearch produced by the code of this PR.
-    - [ ] Declarative test: add a [declarative test containing a dumpless upgrade](https://github.com/meilisearch/meilisearch/blob/main/TESTING.md#typical-usage)


     ## This PR makes breaking changes
@@ -36,7 +35,8 @@ env:
     - [ ] Write the code to go from the old database to the new one
       - If the change happened in milli, the upgrade function should be written and called [here](https://github.com/meilisearch/meilisearch/blob/3fd86e8d76d7d468b0095d679adb09211ca3b6c0/crates/milli/src/update/upgrade/mod.rs#L24-L47)
       - If the change happened in the index-scheduler, we've never done it yet, but the right place to do it should be [here](https://github.com/meilisearch/meilisearch/blob/3fd86e8d76d7d468b0095d679adb09211ca3b6c0/crates/index-scheduler/src/scheduler/process_upgrade/mod.rs#L13)
-    - [ ] Declarative test: add a [declarative test containing a dumpless upgrade](https://github.com/meilisearch/meilisearch/blob/main/TESTING.md#typical-usage)
+    - [ ] Write an integration test [here](https://github.com/meilisearch/meilisearch/blob/main/crates/meilisearch/tests/upgrade/mod.rs) ensuring you can read the old database, upgrade to the new database, and read the new database as expected


 jobs:
   add-comment:
.github/workflows/db-change-missing.yml (10 changes)
@@ -4,22 +4,22 @@ on:
   pull_request:
     types: [opened, synchronize, reopened, labeled, unlabeled]

+env:
+  GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}

 jobs:
   check-labels:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v3
       - name: Check db change labels
         id: check_labels
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
           URL=/repos/meilisearch/meilisearch/pulls/${{ github.event.pull_request.number }}/labels
           echo ${{ github.event.pull_request.number }}
           echo $URL
-          LABELS=$(gh api -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" /repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/labels -q .[].name)
+          LABELS=$(gh api -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" /repos/meilisearch/meilisearch/issues/${{ github.event.pull_request.number }}/labels -q .[].name)
-          echo "Labels: $LABELS"
           if [[ ! "$LABELS" =~ "db change" && ! "$LABELS" =~ "no db change" ]]; then
             echo "::error::Pull request must contain either the 'db change' or 'no db change' label."
             exit 1
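For reference, the label gate in this workflow can be reproduced locally with the GitHub CLI; this is a hedged sketch, with `PR_NUMBER` as a placeholder:

```sh
PR_NUMBER=1234  # placeholder: the pull request to inspect

# Fetch the PR's label names, exactly as the workflow step does.
LABELS=$(gh api \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  /repos/meilisearch/meilisearch/issues/$PR_NUMBER/labels -q .[].name)

# The PR must carry one of the two DB-impact labels, otherwise the check fails.
if [[ ! "$LABELS" =~ "db change" && ! "$LABELS" =~ "no db change" ]]; then
  echo "::error::Pull request must contain either the 'db change' or 'no db change' label."
  exit 1
fi
```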
.github/workflows/dependency-issue.yml (4 changes)
@@ -13,9 +13,9 @@ jobs:
       ISSUE_TEMPLATE: issue-template.md
       GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/meilisearch/main/.github/templates/dependency-issue.md > $ISSUE_TEMPLATE
+        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/dependency-issue.md > $ISSUE_TEMPLATE
       - name: Create issue
         run: |
           gh issue create \
.github/workflows/flaky-tests.yml (12 changes)
@@ -3,7 +3,7 @@ name: Look for flaky tests
 on:
   workflow_dispatch:
   schedule:
-    - cron: '0 4 * * *' # Every day at 4:00AM
+    - cron: "0 12 * * FRI" # Every Friday at 12:00PM

 jobs:
   flaky:
@@ -12,18 +12,12 @@ jobs:
       # Use ubuntu-22.04 to compile with glibc 2.35
       image: ubuntu:22.04
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
+      - uses: actions/checkout@v3
       - name: Install needed dependencies
         run: |
           apt-get update && apt-get install -y curl
           apt-get install build-essential -y
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: dtolnay/rust-toolchain@1.85
       - name: Install cargo-flaky
         run: cargo install cargo-flaky
       - name: Run cargo flaky in the dumps
.github/workflows/fuzzer-indexing.yml (6 changes)
@@ -11,8 +11,10 @@ jobs:
     runs-on: ubuntu-latest
     timeout-minutes: 4320 # 72h
     steps:
-      - uses: actions/checkout@v5
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
+        with:
+          profile: minimal

       # Run benchmarks
       - name: Run the fuzzer
.github/workflows/latest-git-tag.yml (4 changes)
@@ -10,7 +10,7 @@ jobs:
     name: Check the version validity
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Check release validity
         if: github.event_name == 'release'
         run: bash .github/scripts/check-release.sh
@@ -19,7 +19,7 @@ jobs:
     runs-on: ubuntu-latest
     needs: check-version
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - uses: rickstaa/action-create-tag@v1
         with:
           tag: "latest"
.github/workflows/milestone-workflow.yml (new file, 224 lines added)

name: Milestone's workflow

# /!\ No git flow are handled here

# For each Milestone created (not opened!), and if the release is NOT a patch release (only the patch changed)
# - the roadmap issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/roadmap-issue.md
# - the changelog issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/changelog-issue.md
# - update the ruleset to add the current release version to the list of allowed versions and be able to use the merge queue.

# For each Milestone closed
# - the `release_version` label is created
# - this label is applied to all issues/PRs in the Milestone

on:
  milestone:
    types: [created, closed]

env:
  MILESTONE_VERSION: ${{ github.event.milestone.title }}
  MILESTONE_URL: ${{ github.event.milestone.html_url }}
  MILESTONE_DUE_ON: ${{ github.event.milestone.due_on }}
  GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}

jobs:
  # -----------------
  # MILESTONE CREATED
  # -----------------

  get-release-version:
    if: github.event.action == 'created'
    runs-on: ubuntu-latest
    outputs:
      is-patch: ${{ steps.check-patch.outputs.is-patch }}
    steps:
      - uses: actions/checkout@v3
      - name: Check if this release is a patch release only
        id: check-patch
        run: |
          echo version: $MILESTONE_VERSION
          if [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
            echo 'This is NOT a patch release'
            echo "is-patch=false" >> $GITHUB_OUTPUT
          elif [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo 'This is a patch release'
            echo "is-patch=true" >> $GITHUB_OUTPUT
          else
            echo "Not a valid format of release, check the Milestone's title."
            echo 'Should be vX.Y.Z'
            exit 1
          fi

  create-roadmap-issue:
    needs: get-release-version
    # Create the roadmap issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/roadmap-issue.md > $ISSUE_TEMPLATE
      - name: Replace all empty occurrences in the templates
        run: |
          # Replace all <<version>> occurrences
          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE

          # Replace all <<milestone_id>> occurrences
          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE

          # Replace release date if exists
          if [[ ! -z $MILESTONE_DUE_ON ]]; then
            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
            sed -i "s/Release date\: 20XX-XX-XX/Release date\: $date/g" $ISSUE_TEMPLATE
          fi
      - name: Create the issue
        run: |
          gh issue create \
            --title "$MILESTONE_VERSION ROADMAP" \
            --label 'epic,impacts docs,impacts integrations,impacts cloud' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  create-changelog-issue:
    needs: get-release-version
    # Create the changelog issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/changelog-issue.md > $ISSUE_TEMPLATE
      - name: Replace all empty occurrences in the templates
        run: |
          # Replace all <<version>> occurrences
          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE

          # Replace all <<milestone_id>> occurrences
          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Create release changelogs for $MILESTONE_VERSION" \
            --label 'impacts docs,documentation' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION \
            --assignee curquiza

  create-update-version-issue:
    needs: get-release-version
    # Create the update-version issue even if the release is a patch release
    if: github.event.action == 'created'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-version-issue.md > $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Update version in Cargo.toml for $MILESTONE_VERSION" \
            --label 'maintenance' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  create-update-openapi-issue:
    needs: get-release-version
    # Create the openAPI issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-openapi-issue.md > $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Update Open API file for $MILESTONE_VERSION" \
            --label 'maintenance' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  update-ruleset:
    runs-on: ubuntu-latest
    if: github.event.action == 'created'
    steps:
      - uses: actions/checkout@v3
      - name: Install jq
        run: |
          sudo apt-get update
          sudo apt-get install -y jq
      - name: Update ruleset
        env:
          # gh api repos/meilisearch/meilisearch/rulesets --jq '.[] | {name: .name, id: .id}'
          RULESET_ID: 4253297
          BRANCH_NAME: ${{ github.event.inputs.branch_name }}
        run: |
          echo "RULESET_ID: ${{ env.RULESET_ID }}"
          echo "BRANCH_NAME: ${{ env.BRANCH_NAME }}"

          # Get current ruleset conditions
          CONDITIONS=$(gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} --jq '{ conditions: .conditions }')

          # Update the conditions by appending the milestone version
          UPDATED_CONDITIONS=$(echo $CONDITIONS | jq '.conditions.ref_name.include += ["refs/heads/release-'${{ env.MILESTONE_VERSION }}'"]')

          # Update the ruleset from stdin (-)
          echo $UPDATED_CONDITIONS |
            gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} \
              --method PUT \
              -H "Accept: application/vnd.github+json" \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              --input -

  # ----------------
  # MILESTONE CLOSED
  # ----------------

  create-release-label:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create the ${{ env.MILESTONE_VERSION }} label
        run: |
          label_description="PRs/issues solved in $MILESTONE_VERSION"
          if [[ ! -z $MILESTONE_DUE_ON ]]; then
            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
            label_description="$label_description released on $date"
          fi

          gh api repos/meilisearch/meilisearch/labels \
            --method POST \
            -H "Accept: application/vnd.github+json" \
            -f name="$MILESTONE_VERSION" \
            -f description="$label_description" \
            -f color='ff5ba3'

  labelize-all-milestone-content:
    if: github.event.action == 'closed'
    needs: create-release-label
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Add label ${{ env.MILESTONE_VERSION }} to all PRs in the Milestone
        run: |
          prs=$(gh pr list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
          for pr in $prs; do
            gh pr edit $pr --add-label $MILESTONE_VERSION
          done
      - name: Add label ${{ env.MILESTONE_VERSION }} to all issues in the Milestone
        run: |
          issues=$(gh issue list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
          for issue in $issues; do
            gh issue edit $issue --add-label $MILESTONE_VERSION
          done
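The `update-ruleset` job above reads the ruleset's conditions, appends the new release branch, and writes the result back. A hedged stand-alone sketch of that round trip (`RULESET_ID` and `MILESTONE_VERSION` are placeholders taken from the workflow):

```sh
RULESET_ID=4253297        # placeholder: ruleset to update
MILESTONE_VERSION=v1.2.3  # placeholder: milestone title

# Read only the conditions object of the current ruleset.
CONDITIONS=$(gh api repos/meilisearch/meilisearch/rulesets/$RULESET_ID \
  --jq '{ conditions: .conditions }')

# Append the release branch to the list of allowed ref names.
UPDATED=$(echo "$CONDITIONS" \
  | jq --arg ref "refs/heads/release-$MILESTONE_VERSION" \
       '.conditions.ref_name.include += [$ref]')

# PUT the updated conditions back, reading the body from stdin.
echo "$UPDATED" | gh api repos/meilisearch/meilisearch/rulesets/$RULESET_ID \
  --method PUT \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  --input -
```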
.github/workflows/publish-apt-brew-pkg.yml (14 changes)
@@ -9,7 +9,7 @@ jobs:
     name: Check the version validity
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Check release validity
         run: bash .github/scripts/check-release.sh

@@ -25,20 +25,14 @@ jobs:
         run: |
           apt-get update && apt-get install -y curl
           apt-get install build-essential -y
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: dtolnay/rust-toolchain@1.85
       - name: Install cargo-deb
         run: cargo install cargo-deb
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Build deb package
         run: cargo deb -p meilisearch -o target/debian/meilisearch.deb
       - name: Upload debian pkg to release
-        uses: svenstaro/upload-release-action@2.11.2
+        uses: svenstaro/upload-release-action@2.7.0
         with:
           repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
           file: target/debian/meilisearch.deb
.github/workflows/publish-binaries.yml (new file, 186 lines added)

name: Publish binaries to GitHub release

on:
  workflow_dispatch:
  schedule:
    - cron: "0 2 * * *" # Every day at 2:00am
  release:
    types: [published]

jobs:
  check-version:
    name: Check the version validity
    runs-on: ubuntu-latest
    # No need to check the version for dry run (cron)
    steps:
      - uses: actions/checkout@v3
      # Check if the tag has the v<nmumber>.<number>.<number> format.
      # If yes, it means we are publishing an official release.
      # If no, we are releasing a RC, so no need to check the version.
      - name: Check tag format
        if: github.event_name == 'release'
        id: check-tag-format
        run: |
          escaped_tag=$(printf "%q" ${{ github.ref_name }})

          if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo "stable=true" >> $GITHUB_OUTPUT
          else
            echo "stable=false" >> $GITHUB_OUTPUT
          fi
      - name: Check release validity
        if: github.event_name == 'release' && steps.check-tag-format.outputs.stable == 'true'
        run: bash .github/scripts/check-release.sh

  publish-linux:
    name: Publish binary for Linux
    runs-on: ubuntu-latest
    needs: check-version
    container:
      # Use ubuntu-22.04 to compile with glibc 2.35
      image: ubuntu:22.04
    steps:
      - uses: actions/checkout@v3
      - name: Install needed dependencies
        run: |
          apt-get update && apt-get install -y curl
          apt-get install build-essential -y
      - uses: dtolnay/rust-toolchain@1.85
      - name: Build
        run: cargo build --release --locked
      # No need to upload binaries for dry run (cron)
      - name: Upload binaries to release
        if: github.event_name == 'release'
        uses: svenstaro/upload-release-action@2.7.0
        with:
          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
          file: target/release/meilisearch
          asset_name: meilisearch-linux-amd64
          tag: ${{ github.ref }}

  publish-macos-windows:
    name: Publish binary for ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    needs: check-version
    strategy:
      fail-fast: false
      matrix:
        os: [macos-13, windows-2022]
        include:
          - os: macos-13
            artifact_name: meilisearch
            asset_name: meilisearch-macos-amd64
          - os: windows-2022
            artifact_name: meilisearch.exe
            asset_name: meilisearch-windows-amd64.exe
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@1.85
      - name: Build
        run: cargo build --release --locked
      # No need to upload binaries for dry run (cron)
      - name: Upload binaries to release
        if: github.event_name == 'release'
        uses: svenstaro/upload-release-action@2.7.0
        with:
          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
          file: target/release/${{ matrix.artifact_name }}
          asset_name: ${{ matrix.asset_name }}
          tag: ${{ github.ref }}

  publish-macos-apple-silicon:
    name: Publish binary for macOS silicon
    runs-on: macos-13
    needs: check-version
    strategy:
      matrix:
        include:
          - target: aarch64-apple-darwin
            asset_name: meilisearch-macos-apple-silicon
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Installing Rust toolchain
        uses: dtolnay/rust-toolchain@1.85
        with:
          profile: minimal
          target: ${{ matrix.target }}
      - name: Cargo build
        uses: actions-rs/cargo@v1
        with:
          command: build
          args: --release --target ${{ matrix.target }}
      - name: Upload the binary to release
        # No need to upload binaries for dry run (cron)
        if: github.event_name == 'release'
        uses: svenstaro/upload-release-action@2.7.0
        with:
          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
          file: target/${{ matrix.target }}/release/meilisearch
          asset_name: ${{ matrix.asset_name }}
          tag: ${{ github.ref }}

  publish-aarch64:
    name: Publish binary for aarch64
    runs-on: ubuntu-latest
    needs: check-version
    env:
      DEBIAN_FRONTEND: noninteractive
    container:
      # Use ubuntu-22.04 to compile with glibc 2.35
      image: ubuntu:22.04
    strategy:
      matrix:
        include:
          - target: aarch64-unknown-linux-gnu
            asset_name: meilisearch-linux-aarch64
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install needed dependencies
        run: |
          apt-get update -y && apt upgrade -y
          apt-get install -y curl build-essential gcc-aarch64-linux-gnu
      - name: Set up Docker for cross compilation
        run: |
          apt-get install -y curl apt-transport-https ca-certificates software-properties-common
          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
          add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
          apt-get update -y && apt-get install -y docker-ce
      - name: Installing Rust toolchain
        uses: dtolnay/rust-toolchain@1.85
        with:
          profile: minimal
          target: ${{ matrix.target }}
      - name: Configure target aarch64 GNU
        ## Environment variable is not passed using env:
        ## LD gold won't work with MUSL
        # env:
        #   JEMALLOC_SYS_WITH_LG_PAGE: 16
        #   RUSTFLAGS: '-Clink-arg=-fuse-ld=gold'
        run: |
          echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
          echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
          echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
      - name: Install a default toolchain that will be used to build cargo cross
        run: |
          rustup default stable
      - name: Cargo build
        uses: actions-rs/cargo@v1
        with:
          command: build
          use-cross: true
          args: --release --target ${{ matrix.target }}
        env:
          CROSS_DOCKER_IN_DOCKER: true
      - name: List target output files
        run: ls -lR ./target
      - name: Upload the binary to release
        # No need to upload binaries for dry run (cron)
        if: github.event_name == 'release'
        uses: svenstaro/upload-release-action@2.7.0
        with:
          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
          file: target/${{ matrix.target }}/release/meilisearch
          asset_name: ${{ matrix.asset_name }}
          tag: ${{ github.ref }}
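The `check-tag-format` step treats a tag as a stable release only when it matches `v<major>.<minor>.<patch>` exactly; anything else (an RC tag, for instance) skips the stable-only steps. A minimal sketch of that check, with `TAG` as a placeholder:

```sh
TAG="v1.2.3"  # placeholder: the pushed git tag

# Stable releases follow vX.Y.Z; anything else is treated as a release candidate.
if [[ "$TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  echo "stable=true"
else
  echo "stable=false"
fi
```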
.github/workflows/publish-docker-images.yml (167 changes)
@@ -14,107 +14,10 @@ on:
   workflow_dispatch:

 jobs:
-  build:
-    runs-on: ${{ matrix.runner }}
-    strategy:
-      matrix:
-        platform: [amd64, arm64]
-        edition: [community, enterprise]
-        include:
-          - platform: amd64
-            runner: ubuntu-24.04
-          - platform: arm64
-            runner: ubuntu-24.04-arm
-          - edition: community
-            registry: getmeili/meilisearch
-            feature-flag: ""
-          - edition: enterprise
-            registry: getmeili/meilisearch-enterprise
-            feature-flag: "--features enterprise"
-    permissions: {}
+  docker:
+    runs-on: docker
     steps:
-      - uses: actions/checkout@v5
-      - name: Prepare
-        run: |
-          platform=linux/${{ matrix.platform }}
-          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3
-        with:
-          platforms: linux/${{ matrix.platform }}
-          install: true
-      - name: Login to Docker Hub
-        uses: docker/login-action@v3
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Docker meta
-        id: meta
-        uses: docker/metadata-action@v5
-        with:
-          images: ${{ matrix.registry }}
-          # Prevent `latest` to be updated for each new tag pushed.
-          # We need latest and `vX.Y` tags to only be pushed for the stable Meilisearch releases.
-          flavor: latest=false
-          tags: |
-            type=ref,event=tag
-            type=raw,value=nightly,enable=${{ github.event_name != 'push' }}
-            type=semver,pattern=v{{major}}.{{minor}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
-            type=semver,pattern=v{{major}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
-            type=raw,value=latest,enable=${{ steps.check-tag-format.outputs.stable == 'true' && steps.check-tag-format.outputs.latest == 'true' }}
-      - name: Build and push by digest
-        uses: docker/build-push-action@v6
-        id: build-and-push
-        with:
-          platforms: linux/${{ matrix.platform }}
-          labels: ${{ steps.meta.outputs.labels }}
-          tags: ${{ matrix.registry }}
-          outputs: type=image,push-by-digest=true,name-canonical=true,push=true
-          build-args: |
-            COMMIT_SHA=${{ github.sha }}
-            COMMIT_DATE=${{ steps.build-metadata.outputs.date }}
-            GIT_TAG=${{ github.ref_name }}
-            EXTRA_ARGS=${{ matrix.feature-flag }}
-      - name: Export digest
-        run: |
-          mkdir -p ${{ runner.temp }}/digests
-          digest="${{ steps.build-and-push.outputs.digest }}"
-          touch "${{ runner.temp }}/digests/${digest#sha256:}"
-      - name: Upload digest
-        uses: actions/upload-artifact@v4
-        with:
-          name: digests-${{ matrix.edition }}-${{ env.PLATFORM_PAIR }}
-          path: ${{ runner.temp }}/digests/*
-          if-no-files-found: error
-          retention-days: 1
-
-  merge:
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        edition: [community, enterprise]
-        include:
-          - edition: community
-            registry: getmeili/meilisearch
-          - edition: enterprise
-            registry: getmeili/meilisearch-enterprise
-    needs:
-      - build
-    permissions:
-      id-token: write # This is needed to use Cosign in keyless mode
-    steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3

       # If we are running a cron or manual job ('schedule' or 'workflow_dispatch' event), it means we are publishing the `nightly` tag, so not considered stable.
       # If we have pushed a tag, and the tag has the v<nmumber>.<number>.<number> format, it means we are publishing an official release, so considered stable.
@@ -153,15 +56,11 @@ jobs:
           echo "date=$commit_date" >> $GITHUB_OUTPUT

-      - name: Install cosign
-        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # tag=v3.10.0
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v3

-      - name: Download digests
-        uses: actions/download-artifact@v4
-        with:
-          path: ${{ runner.temp }}/digests
-          pattern: digests-${{ matrix.edition }}-*
-          merge-multiple: true
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3

       - name: Login to Docker Hub
         uses: docker/login-action@v3
@@ -169,14 +68,11 @@ jobs:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}

-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v3

       - name: Docker meta
         id: meta
         uses: docker/metadata-action@v5
         with:
-          images: ${{ matrix.registry }}
+          images: getmeili/meilisearch
           # Prevent `latest` to be updated for each new tag pushed.
           # We need latest and `vX.Y` tags to only be pushed for the stable Meilisearch releases.
           flavor: latest=false
@@ -187,45 +83,24 @@ jobs:
             type=semver,pattern=v{{major}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
             type=raw,value=latest,enable=${{ steps.check-tag-format.outputs.stable == 'true' && steps.check-tag-format.outputs.latest == 'true' }}

-      - name: Create manifest list and push
-        working-directory: ${{ runner.temp }}/digests
-        run: |
-          docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
-            $(printf '${{ matrix.registry }}@sha256:%s ' *)
-
-      - name: Inspect image to fetch digest to sign
-        run: |
-          digest=$(docker buildx imagetools inspect --format='{{ json .Manifest }}' ${{ matrix.registry }}:${{ steps.meta.outputs.version }} | jq -r '.digest')
-          echo "DIGEST=${digest}" >> $GITHUB_ENV
-
-      - name: Sign the images with GitHub OIDC Token
-        env:
-          TAGS: ${{ steps.meta.outputs.tags }}
-        run: |
-          images=""
-          for tag in ${TAGS}; do
-            images+="${tag}@${{ env.DIGEST }} "
-          done
-          cosign sign --yes ${images}
-
-      # /!\ Don't touch this without checking with engineers working on the Cloud code base on #discussion-engineering Slack channel
-      - name: Notify meilisearch-cloud
+      - name: Build and push
+        uses: docker/build-push-action@v6
+        with:
+          push: true
+          platforms: linux/amd64,linux/arm64
+          tags: ${{ steps.meta.outputs.tags }}
+          build-args: |
+            COMMIT_SHA=${{ github.sha }}
+            COMMIT_DATE=${{ steps.build-metadata.outputs.date }}
+            GIT_TAG=${{ github.ref_name }}
+
+      # /!\ Don't touch this without checking with Cloud team
+      - name: Send CI information to Cloud team
         # Do not send if nightly build (i.e. 'schedule' or 'workflow_dispatch' event)
-        if: ${{ (github.event_name == 'push') && (matrix.edition == 'enterprise') }}
+        if: github.event_name == 'push'
         uses: peter-evans/repository-dispatch@v3
         with:
           token: ${{ secrets.MEILI_BOT_GH_PAT }}
           repository: meilisearch/meilisearch-cloud
           event-type: cloud-docker-build
           client-payload: '{ "meilisearch_version": "${{ github.ref_name }}", "stable": "${{ steps.check-tag-format.outputs.stable }}" }'
-
-      # /!\ Don't touch this without checking with integration team members on #discussion-integrations Slack channel
-      - name: Notify meilisearch-kubernetes
-        # Do not send if nightly build (i.e. 'schedule' or 'workflow_dispatch' event), or if not stable
-        if: ${{ github.event_name == 'push' && matrix.edition == 'community' && steps.check-tag-format.outputs.stable == 'true' }}
-        uses: peter-evans/repository-dispatch@v3
-        with:
-          token: ${{ secrets.MEILI_BOT_GH_PAT }}
-          repository: meilisearch/meilisearch-kubernetes
-          event-type: meilisearch-release
-          client-payload: '{ "version": "${{ github.ref_name }}" }'
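The digest-based flow on the removed side of these hunks builds each platform image separately, pushes it by digest, and then assembles a tagged manifest list from those digests. A hedged sketch of the assembly step, with the registry and digests as placeholders:

```sh
REGISTRY=getmeili/meilisearch
AMD64_DIGEST=sha256:aaaa...   # placeholder: digest produced by the amd64 build
ARM64_DIGEST=sha256:bbbb...   # placeholder: digest produced by the arm64 build

# Combine the per-architecture images into one multi-arch manifest list and tag it.
docker buildx imagetools create \
  -t "$REGISTRY:v1.2.3" -t "$REGISTRY:latest" \
  "$REGISTRY@$AMD64_DIGEST" "$REGISTRY@$ARM64_DIGEST"

# The signing step then read back the manifest digest before calling cosign.
docker buildx imagetools inspect "$REGISTRY:v1.2.3"
```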
.github/workflows/publish-release-assets.yml (all 116 lines removed)

name: Publish assets to GitHub release

on:
  workflow_dispatch:
  schedule:
    - cron: "0 2 * * *" # Every day at 2:00am
  release:
    types: [published]

jobs:
  check-version:
    name: Check the version validity
    runs-on: ubuntu-latest
    # No need to check the version for dry run (cron or workflow_dispatch)
    steps:
      - uses: actions/checkout@v5
      # Check if the tag has the v<nmumber>.<number>.<number> format.
      # If yes, it means we are publishing an official release.
      # If no, we are releasing a RC, so no need to check the version.
      - name: Check tag format
        if: github.event_name == 'release'
        id: check-tag-format
        run: |
          escaped_tag=$(printf "%q" ${{ github.ref_name }})

          if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo "stable=true" >> $GITHUB_OUTPUT
          else
            echo "stable=false" >> $GITHUB_OUTPUT
          fi
      - name: Check release validity
        if: github.event_name == 'release' && steps.check-tag-format.outputs.stable == 'true'
        run: bash .github/scripts/check-release.sh

  publish-binaries:
    name: Publish binary for ${{ matrix.release }} ${{ matrix.edition }} edition
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        edition: [community, enterprise]
        release:
          [macos-amd64, macos-aarch64, windows, linux-amd64, linux-aarch64]
        include:
          - edition: "community"
            feature-flag: ""
            edition-suffix: ""
          - edition: "enterprise"
            feature-flag: "--features enterprise"
            edition-suffix: "enterprise-"
          - release: macos-amd64
            os: macos-15-intel
            binary_path: release/meilisearch
            asset_name: macos-amd64
            extra-args: ""
          - release: macos-aarch64
            os: macos-14
            binary_path: aarch64-apple-darwin/release/meilisearch
            asset_name: macos-apple-silicon
            extra-args: "--target aarch64-apple-darwin"
          - release: windows
            os: windows-2022
            binary_path: release/meilisearch.exe
            asset_name: windows-amd64.exe
            extra-args: ""
          - release: linux-amd64
            os: ubuntu-22.04
            binary_path: x86_64-unknown-linux-gnu/release/meilisearch
            asset_name: linux-amd64
            extra-args: "--target x86_64-unknown-linux-gnu"
          - release: linux-aarch64
            os: ubuntu-22.04-arm
            binary_path: aarch64-unknown-linux-gnu/release/meilisearch
            asset_name: linux-aarch64
            extra-args: "--target aarch64-unknown-linux-gnu"
    needs: check-version
    steps:
      - uses: actions/checkout@v5
      - uses: dtolnay/rust-toolchain@1.89
      - name: Build
        run: cargo build --release --locked ${{ matrix.feature-flag }} ${{ matrix.extra-args }}
      # No need to upload binaries for dry run (cron or workflow_dispatch)
      - name: Upload binaries to release
        if: github.event_name == 'release'
        uses: svenstaro/upload-release-action@2.11.2
        with:
          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
          file: target/${{ matrix.binary_path }}
          asset_name: meilisearch-${{ matrix.edition-suffix }}${{ matrix.asset_name }}
          tag: ${{ github.ref }}

  publish-openapi-file:
    name: Publish OpenAPI file
    needs: check-version
    runs-on: ubuntu-latest
steps:
|
|
||||||
- name: Checkout code
|
|
||||||
uses: actions/checkout@v5
|
|
||||||
- name: Setup Rust
|
|
||||||
uses: actions-rs/toolchain@v1
|
|
||||||
with:
|
|
||||||
toolchain: stable
|
|
||||||
override: true
|
|
||||||
- name: Generate OpenAPI file
|
|
||||||
run: |
|
|
||||||
cd crates/openapi-generator
|
|
||||||
cargo run --release -- --pretty --output ../../meilisearch-openapi.json
|
|
||||||
- name: Upload OpenAPI to Release
|
|
||||||
# No need to upload for dry run (cron or workflow_dispatch)
|
|
||||||
if: github.event_name == 'release'
|
|
||||||
uses: svenstaro/upload-release-action@2.11.2
|
|
||||||
with:
|
|
||||||
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
|
|
||||||
file: ./meilisearch-openapi.json
|
|
||||||
asset_name: meilisearch-openapi.json
|
|
||||||
tag: ${{ github.ref }}
|
|
||||||
84 .github/workflows/sdks-tests.yml vendored
@@ -9,7 +9,7 @@ on:
         required: false
         default: nightly
   schedule:
-    - cron: '0 6 * * *' # Every day at 6:00am
+    - cron: "0 6 * * MON" # Every Monday at 6:00AM
 
 env:
   MEILI_MASTER_KEY: 'masterKey'
@@ -22,7 +22,7 @@ jobs:
     outputs:
       docker-image: ${{ steps.define-image.outputs.docker-image }}
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Define the Docker image we need to use
        id: define-image
        run: |
@@ -46,11 +46,11 @@ jobs:
       MEILISEARCH_VERSION: ${{ needs.define-docker-image.outputs.docker-image }}
 
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-dotnet
       - name: Setup .NET Core
-        uses: actions/setup-dotnet@v5
+        uses: actions/setup-dotnet@v4
         with:
          dotnet-version: "8.0.x"
       - name: Install dependencies
@@ -68,14 +68,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-dart
       - uses: dart-lang/setup-dart@v1
@@ -92,7 +92,7 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@@ -100,10 +100,10 @@ jobs:
          - '7700:7700'
     steps:
       - name: Set up Go
-        uses: actions/setup-go@v6
+        uses: actions/setup-go@v5
         with:
          go-version: stable
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-go
       - name: Get dependencies
@@ -114,7 +114,7 @@ jobs:
            dep ensure
          fi
       - name: Run integration tests
-        run: go test --race -v ./integration
+        run: go test -v ./...
 
   meilisearch-java-tests:
     needs: define-docker-image
@@ -122,26 +122,26 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-java
       - name: Set up Java
-        uses: actions/setup-java@v5
+        uses: actions/setup-java@v4
         with:
-          java-version: 17
-          distribution: 'temurin'
+          java-version: 8
+          distribution: 'zulu'
           cache: gradle
       - name: Grant execute permission for gradlew
         run: chmod +x gradlew
       - name: Build and run unit and integration tests
-        run: ./gradlew build integrationTest --info
+        run: ./gradlew build integrationTest
 
   meilisearch-js-tests:
     needs: define-docker-image
@@ -149,18 +149,18 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-js
       - name: Setup node
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v4
         with:
          cache: 'yarn'
       - name: Install dependencies
@@ -184,14 +184,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-php
       - name: Install PHP
@@ -213,18 +213,18 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-python
       - name: Set up Python
-        uses: actions/setup-python@v6
+        uses: actions/setup-python@v5
       - name: Install pipenv
         uses: dschep/install-pipenv-action@v1
       - name: Install dependencies
@@ -238,14 +238,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-ruby
       - name: Set up Ruby 3
@@ -263,14 +263,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-rust
       - name: Build
@@ -284,14 +284,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-swift
       - name: Run tests
@@ -307,18 +307,18 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-js-plugins
       - name: Setup node
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v4
         with:
          cache: yarn
       - name: Install dependencies
@@ -338,29 +338,21 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
-    env:
-      RAILS_VERSION: '7.0'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-rails
-      - name: Install SQLite dependencies
-        run: sudo apt-get update && sudo apt-get install -y libsqlite3-dev
-      - name: Set up Ruby
+      - name: Set up Ruby 3
         uses: ruby/setup-ruby@v1
         with:
          ruby-version: 3
          bundler-cache: true
-      - name: Start MongoDB
-        uses: supercharge/mongodb-github-action@1.12.0
-        with:
-          mongodb-version: 8.0
       - name: Run tests
         run: bundle exec rspec
 
@@ -370,14 +362,14 @@ jobs:
     runs-on: ubuntu-latest
     services:
       meilisearch:
-        image: getmeili/meilisearch-enterprise:${{ needs.define-docker-image.outputs.docker-image }}
+        image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
         env:
          MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
          MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
          - '7700:7700'
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
         with:
          repository: meilisearch/meilisearch-symfony
       - name: Install PHP
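All of the SDK jobs above talk to the same Meilisearch service container. To reproduce one of them locally, an equivalent container can be started by hand; this is a minimal sketch assuming the `nightly` tag and the `masterKey` value set in the workflow env, not a command taken from the workflow itself.

```sh
# Illustrative local equivalent of the `services.meilisearch` container above.
docker run --rm -p 7700:7700 \
  -e MEILI_MASTER_KEY=masterKey \
  -e MEILI_NO_ANALYTICS=true \
  getmeili/meilisearch:nightly
```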
192 .github/workflows/test-suite.yml vendored
@@ -3,7 +3,7 @@ name: Test suite
 on:
   workflow_dispatch:
   schedule:
-    # Every day at 5:00am
+    # Everyday at 5:00am
     - cron: "0 5 * * *"
   pull_request:
   merge_group:
@@ -15,40 +15,31 @@ env:
 
 jobs:
   test-linux:
-    name: Tests on Ubuntu
-    runs-on: ${{ matrix.runner }}
-    strategy:
-      matrix:
-        runner: [ubuntu-22.04, ubuntu-22.04-arm]
-        features: ["", "--features enterprise"]
+    name: Tests on ubuntu-22.04
+    runs-on: ubuntu-latest
+    container:
+      # Use ubuntu-22.04 to compile with glibc 2.35
+      image: ubuntu:22.04
     steps:
-      - uses: actions/checkout@v5
-      - name: check free space before
-        run: df -h
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
+      - uses: actions/checkout@v3
+      - name: Install needed dependencies
         run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - name: check free space after
-        run: df -h
+          apt-get update && apt-get install -y curl
+          apt-get install build-essential -y
       - name: Setup test with Rust stable
-        uses: dtolnay/rust-toolchain@1.89
+        uses: dtolnay/rust-toolchain@1.85
       - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
-        with:
-          key: ${{ matrix.features }}
-      - name: Run cargo build without any default features
+        uses: Swatinem/rust-cache@v2.7.8
+      - name: Run cargo check without any default features
         uses: actions-rs/cargo@v1
         with:
          command: build
-          args: --locked --no-default-features --all
+          args: --locked --release --no-default-features --all
       - name: Run cargo test
         uses: actions-rs/cargo@v1
         with:
          command: test
-          args: --locked --all ${{ matrix.features }}
+          args: --locked --release --all
 
   test-others:
     name: Tests on ${{ matrix.os }}
@@ -56,58 +47,51 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [macos-14, windows-2022]
-        features: ["", "--features enterprise"]
-    if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
+        os: [macos-13, windows-2022]
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v3
       - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
-      - uses: dtolnay/rust-toolchain@1.89
-      - name: Run cargo build without any default features
+        uses: Swatinem/rust-cache@v2.7.8
+      - uses: dtolnay/rust-toolchain@1.85
+      - name: Run cargo check without any default features
         uses: actions-rs/cargo@v1
         with:
          command: build
-          args: --locked --no-default-features --all
+          args: --locked --release --no-default-features --all
       - name: Run cargo test
         uses: actions-rs/cargo@v1
         with:
          command: test
-          args: --locked --all ${{ matrix.features }}
+          args: --locked --release --all
 
   test-all-features:
     name: Tests almost all features
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
+    container:
+      # Use ubuntu-22.04 to compile with glibc 2.35
+      image: ubuntu:22.04
     if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
+      - uses: actions/checkout@v3
+      - name: Install needed dependencies
         run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+          apt-get update
+          apt-get install --assume-yes build-essential curl
+      - uses: dtolnay/rust-toolchain@1.85
       - name: Run cargo build with almost all features
         run: |
-          cargo build --workspace --locked --features "$(cargo xtask list-features --exclude-feature cuda,test-ollama)"
+          cargo build --workspace --locked --release --features "$(cargo xtask list-features --exclude-feature cuda,test-ollama)"
       - name: Run cargo test with almost all features
         run: |
-          cargo test --workspace --locked --features "$(cargo xtask list-features --exclude-feature cuda,test-ollama)"
+          cargo test --workspace --locked --release --features "$(cargo xtask list-features --exclude-feature cuda,test-ollama)"
 
   ollama-ubuntu:
     name: Test with Ollama
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
     env:
       MEILI_TEST_OLLAMA_SERVER: "http://localhost:11434"
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
+      - uses: actions/checkout@v3
       - name: Install Ollama
         run: |
          curl -fsSL https://ollama.com/install.sh | sudo -E sh
@@ -131,21 +115,21 @@ jobs:
         uses: actions-rs/cargo@v1
         with:
          command: test
-          args: --locked -p meilisearch --features test-ollama ollama
+          args: --locked --release --all --features test-ollama ollama
 
   test-disabled-tokenization:
     name: Test disabled tokenization
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
+    container:
+      image: ubuntu:22.04
     if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
+      - uses: actions/checkout@v3
+      - name: Install needed dependencies
         run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+          apt-get update
+          apt-get install --assume-yes build-essential curl
+      - uses: dtolnay/rust-toolchain@1.85
       - name: Run cargo tree without default features and check lindera is not present
         run: |
          if cargo tree -f '{p} {f}' -e normal --no-default-features | grep -qz lindera; then
@@ -156,64 +140,58 @@ jobs:
         run: |
          cargo tree -f '{p} {f}' -e normal | grep lindera -qz
 
-  build:
-    name: Build in release
-    runs-on: ubuntu-22.04
+  # We run tests in debug also, to make sure that the debug_assertions are hit
+  test-debug:
+    name: Run tests in debug
+    runs-on: ubuntu-latest
+    container:
+      # Use ubuntu-22.04 to compile with glibc 2.35
+      image: ubuntu:22.04
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
+      - uses: actions/checkout@v3
+      - name: Install needed dependencies
         run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+          apt-get update && apt-get install -y curl
+          apt-get install build-essential -y
+      - uses: dtolnay/rust-toolchain@1.85
       - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
-      - name: Build
-        run: cargo build --release --locked --target x86_64-unknown-linux-gnu
+        uses: Swatinem/rust-cache@v2.7.8
+      - name: Run tests in debug
+        uses: actions-rs/cargo@v1
+        with:
+          command: test
+          args: --locked --all
 
   clippy:
     name: Run Clippy
-    runs-on: ubuntu-22.04
-    strategy:
-      matrix:
-        features: ["", "--features enterprise"]
+    runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
+          profile: minimal
          components: clippy
       - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
+        uses: Swatinem/rust-cache@v2.7.8
       - name: Run cargo clippy
         uses: actions-rs/cargo@v1
         with:
          command: clippy
-          args: --all-targets ${{ matrix.features }} -- --deny warnings
+          args: --all-targets -- --deny warnings
 
   fmt:
     name: Run Rustfmt
-    runs-on: ubuntu-22.04
+    runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
         with:
+          profile: minimal
+          toolchain: nightly-2024-07-09
+          override: true
          components: rustfmt
       - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
+        uses: Swatinem/rust-cache@v2.7.8
       - name: Run cargo fmt
         # Since we never ran the `build.rs` script in the benchmark directory we are missing one auto-generated import file.
         # Since we want to trigger (and fail) this action as fast as possible, instead of building the benchmark crate
@@ -221,23 +199,3 @@ jobs:
         run: |
          echo -ne "\n" > crates/benchmarks/benches/datasets_paths.rs
          cargo fmt --all -- --check
-
-  declarative-tests:
-    name: Run declarative tests
-    runs-on: ubuntu-22.04-arm
-    permissions:
-      contents: read
-    steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
-      - name: Cache dependencies
-        uses: Swatinem/rust-cache@v2.8.0
-      - name: Run declarative tests
-        run: |
-          cargo xtask test workloads/tests/*.json
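The removed matrix runs the suite once per feature set, and the removed `declarative-tests` job drives the workload-based tests. A sketch of the equivalent local invocations, built only from the arguments visible in this diff:

```sh
# Mirrors the `--locked --all ${{ matrix.features }}` invocation for the enterprise leg.
cargo test --locked --all --features enterprise

# Mirrors the removed declarative-tests job.
cargo xtask test workloads/tests/*.json
```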
13 .github/workflows/update-cargo-toml-version.yml vendored
@@ -17,14 +17,10 @@ jobs:
     name: Update version in Cargo.toml
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
-      - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
-        run: |
-          sudo rm -rf "/opt/ghc" || true
-          sudo rm -rf "/usr/share/dotnet" || true
-          sudo rm -rf "/usr/local/lib/android" || true
-          sudo rm -rf "/usr/local/share/boost" || true
-      - uses: dtolnay/rust-toolchain@1.89
+      - uses: actions/checkout@v3
+      - uses: dtolnay/rust-toolchain@1.85
+        with:
+          profile: minimal
       - name: Install sd
         run: cargo install sd
       - name: Update Cargo.toml file
@@ -45,4 +41,5 @@ jobs:
           --title "Update version for the next release ($NEW_VERSION) in Cargo.toml" \
           --body '⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.' \
           --label 'skip changelog' \
+          --milestone $NEW_VERSION \
           --base $GITHUB_REF_NAME
16 .gitignore vendored
@@ -5,30 +5,18 @@
 **/*.json_lines
 **/*.rs.bk
 /*.mdb
-/*.ms
+/data.ms
 /snapshots
 /dumps
 /bench
 /_xtask_benchmark.ms
 /benchmarks
-.DS_Store
 
 # Snapshots
 ## ... large
 *.full.snap
 ## ... unreviewed
 *.snap.new
-## ... pending
-*.pending-snap
-
-# Tmp files
-.tmp*
-
-# Database snapshot
-crates/meilisearch/db.snapshot
-
 # Fuzzcheck data for the facet indexing fuzz test
 crates/milli/fuzz/update::facet::incremental::fuzz::fuzz/
-
-# OpenAPI generator
-**/meilisearch-openapi.json
@@ -124,7 +124,6 @@ They are JSON files with the following structure (comments are not actually supported):
 {
   // Name of the workload. Must be unique to the workload, as it will be used to group results on the dashboard.
   "name": "hackernews.ndjson_1M,no-threads",
-  "type": "bench",
   // Number of consecutive runs of the commands that should be performed.
   // Each run uses a fresh instance of Meilisearch and a fresh database.
   // Each run produces its own report file.
@@ -57,17 +57,9 @@ This command will be triggered to each PR as a requirement for merging it.
 You can set the `LINDERA_CACHE` environment variable to speed up your successive builds by up to 2 minutes.
 It'll store some built artifacts in the directory of your choice.
 
-We recommend using the `$HOME/.cache/meili/lindera` directory:
+We recommend using the standard `$HOME/.cache/lindera` directory:
 ```sh
-export LINDERA_CACHE=$HOME/.cache/meili/lindera
-```
-
-You can set the `MILLI_BENCH_DATASETS_PATH` environment variable to further speed up your builds.
-It'll store some big files used for the benchmarks in the directory of your choice.
-
-We recommend using the `$HOME/.cache/meili/benches` directory:
-```sh
-export MILLI_BENCH_DATASETS_PATH=$HOME/.cache/meili/benches
+export LINDERA_CACHE=$HOME/.cache/lindera
 ```
 
 Furthermore, you can improve incremental compilation by setting the `MEILI_NO_VERGEN` environment variable.
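The surrounding context also mentions `MEILI_NO_VERGEN` for faster incremental compilation. A minimal sketch of a developer shell profile combining these variables, assuming any non-empty value is enough to enable `MEILI_NO_VERGEN` (an assumption, not something stated in this diff):

```sh
# Assumed developer setup; the exact value accepted by MEILI_NO_VERGEN is an assumption.
export LINDERA_CACHE=$HOME/.cache/meili/lindera
export MILLI_BENCH_DATASETS_PATH=$HOME/.cache/meili/benches
export MEILI_NO_VERGEN=1
```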
@@ -106,19 +98,7 @@ Run `cargo xtask --help` from the root of the repository to find out what is available.
 #### Update the openAPI file if the API changed
 
 To update the openAPI file in the code, see [sprint_issue.md](https://github.com/meilisearch/meilisearch/blob/main/.github/ISSUE_TEMPLATE/sprint_issue.md#reminders-when-modifying-the-api).
-If you want to generate OpenAPI file manually:
-
-With swagger:
-- Starts Meilisearch with the `swagger` feature flag: `cargo run --features swagger`
-- On a browser, open the following URL: http://localhost:7700/scalar
-- Click the « Download openAPI file »
-
-With the internal crate:
-```bash
-cd crates/openapi-generator
-cargo run --release -- --pretty
-```
+If you want to update the openAPI file on the [open-api repository](https://github.com/meilisearch/open-api), see [update-openapi-issue.md](https://github.com/meilisearch/engine-team/blob/main/issue-templates/update-openapi-issue.md).
 
 ### Logging
@@ -172,37 +152,25 @@ Some notes on GitHub PRs:
   The draft PRs are recommended when you want to show that you are working on something and make your work visible.
 - The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [GitHub Merge Queues](https://github.blog/news-insights/product-news/github-merge-queue-is-generally-available/) to automatically enforce this requirement without the PR author having to rebase manually.
 
-## Merging PRs
-
-This project uses GitHub Merge Queues that helps us manage pull requests merging.
-
-Before merging a PR, the maintainer should ensure the following requirements are met
-- Automated tests have been added.
-- If some tests cannot be automated, manual rigorous tests should be applied.
-- ⚠️ If there is an change in the DB: it's mandatory to manually test the `--experimental-dumpless-upgrade` on a DB of the previous Meilisearch minor version (e.g. v1.13 for the v1.14 release).
-- If necessary, the feature have been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
-- If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
-- If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.
-
-## Publish Process (for internal team only)
+## Release Process (for internal team only)
 
 Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
 
-### How to publish a new release
+### Automation to rebase and Merge the PRs
 
-The full Meilisearch release process is described in [this guide](./documentation/release.md).
+This project uses GitHub Merge Queues that helps us manage pull requests merging.
+
+### How to Publish a new Release
+
+The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/engine-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.
 
 ### How to publish a prototype
 
 Depending on the developed feature, you might need to provide a prototyped version of Meilisearch to make it easier to test by the users.
 
 This happens in two steps:
-- [Release the prototype](./documentation/prototypes.md#how-to-publish-a-prototype)
-- [Communicate about it](./documentation/prototypes.md#communication)
+- [Release the prototype](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#how-to-publish-a-prototype)
+- [Communicate about it](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#communication)
 
-### How to implement and publish an experimental feature
-
-Here is our [guidelines and process](./documentation/experimental-features.md) to implement and publish an experimental feature.
-
 ### Release assets
4694 Cargo.lock generated
File diff suppressed because it is too large.
@@ -19,11 +19,10 @@ members = [
   "crates/tracing-trace",
   "crates/xtask",
   "crates/build-info",
-  "crates/openapi-generator",
 ]
 
 [workspace.package]
-version = "1.28.2"
+version = "1.15.0"
 authors = [
   "Quentin de Quelen <quentin@dequelen.me>",
   "Clément Renault <clement@meilisearch.com>",
@@ -50,5 +49,3 @@ opt-level = 3
 opt-level = 3
 [profile.dev.package.roaring]
 opt-level = 3
-[profile.dev.package.gemm-f16]
-opt-level = 3
7 Cross.toml Normal file
@@ -0,0 +1,7 @@
+[build.env]
+passthrough = [
+    "RUST_BACKTRACE",
+    "CARGO_TERM_COLOR",
+    "RUSTFLAGS",
+    "JEMALLOC_SYS_WITH_LG_PAGE"
+]
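The added Cross.toml only controls which environment variables `cross` forwards into its build container; invoking the cross-compilation itself is left to the caller. A sketch of how this might be used, where the target triple and the page-size value are illustrative assumptions rather than part of this diff (16 matches the aarch64 value set in the Dockerfile below):

```sh
# Illustrative: the passthrough list above lets these variables reach the cross build container.
JEMALLOC_SYS_WITH_LG_PAGE=16 RUST_BACKTRACE=1 \
  cross build --release --target aarch64-unknown-linux-gnu -p meilisearch
```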
10 Dockerfile
@@ -1,5 +1,5 @@
 # Compile
-FROM rust:1.89-alpine3.22 AS compiler
+FROM rust:1.85-alpine3.20 AS compiler
 
 RUN apk add -q --no-cache build-base openssl-dev
 
@@ -8,17 +8,19 @@ WORKDIR /
 ARG COMMIT_SHA
 ARG COMMIT_DATE
 ARG GIT_TAG
-ARG EXTRA_ARGS
 ENV VERGEN_GIT_SHA=${COMMIT_SHA} VERGEN_GIT_COMMIT_TIMESTAMP=${COMMIT_DATE} VERGEN_GIT_DESCRIBE=${GIT_TAG}
 ENV RUSTFLAGS="-C target-feature=-crt-static"
 
 COPY . .
 RUN set -eux; \
     apkArch="$(apk --print-arch)"; \
-    cargo build --release -p meilisearch -p meilitool ${EXTRA_ARGS}
+    if [ "$apkArch" = "aarch64" ]; then \
+      export JEMALLOC_SYS_WITH_LG_PAGE=16; \
+    fi && \
+    cargo build --release -p meilisearch -p meilitool
 
 # Run
-FROM alpine:3.22
+FROM alpine:3.20
 LABEL org.opencontainers.image.source="https://github.com/meilisearch/meilisearch"
 
 ENV MEILI_HTTP_ADDR 0.0.0.0:7700
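The build arguments declared in the compile stage are what the publish workflow passes in; building the image by hand would look roughly like the sketch below, where the image tag and the way the values are derived are illustrative assumptions.

```sh
# Illustrative manual build; the tag and derived argument values are placeholders.
docker build \
  --build-arg COMMIT_SHA=$(git rev-parse HEAD) \
  --build-arg COMMIT_DATE=$(git show -s --format=%cI HEAD) \
  --build-arg GIT_TAG=$(git describe --tags --always) \
  -t meilisearch:local .
```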
20 LICENSE
@@ -1,9 +1,21 @@
-# License
+MIT License
 
 Copyright (c) 2019-2025 Meili SAS
 
-Part of this work fall under the Meilisearch Enterprise Edition (EE) and are licensed under the Business Source License 1.1, please refer to [LICENSE-EE](./LICENSE-EE) for details.
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
 
-The other parts of this work are licensed under the [MIT license](./LICENSE-MIT).
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
 
-`SPDX-License-Identifier: MIT AND BUSL-1.1`
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
67 LICENSE-EE
@@ -1,67 +0,0 @@
-Business Source License 1.1 – Adapted for Meili SAS
-This license is based on the Business Source License version 1.1, as published by MariaDB Corporation Ab.
-
-Parameters
-
-Licensor: Meili SAS
-
-Licensed Work: Any file explicitly marked as “Enterprise Edition (EE)” or “governed by the Business Source License” residing in enterprise_editions modules/folders.
-
-Additional Use Grant:
-You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.
-
-Production use of the Licensed Work requires a commercial license agreement with Meilisearch. Contact bonjour@meilisearch.com for licensing.
-
-Change License: MIT
-
-Change Date: Four years from the date the Licensed Work is published.
-
-This License does not apply to any code outside of the Licensed Work, which remains under the MIT license.
-
-For information about alternative licensing arrangements for the Licensed Work,
-please contact bonjour@meilisearch.com or sales@meilisearch.com.
-
-Notice
-
-Business Source License 1.1
-
-Terms
-
-The Licensor hereby grants you the right to copy, modify, create derivative
-works, redistribute, and make non-production use of the Licensed Work. The
-Licensor may make an Additional Use Grant, above, permitting limited production use.
-
-Effective on the Change Date, or the fourth anniversary of the first publicly
-available distribution of a specific version of the Licensed Work under this
-License, whichever comes first, the Licensor hereby grants you rights under
-the terms of the Change License, and the rights granted in the paragraph
-above terminate.
-
-If your use of the Licensed Work does not comply with the requirements
-currently in effect as described in this License, you must purchase a
-commercial license from the Licensor, its affiliated entities, or authorized
-resellers, or you must refrain from using the Licensed Work.
-
-All copies of the original and modified Licensed Work, and derivative works
-of the Licensed Work, are subject to this License. This License applies
-separately for each version of the Licensed Work and the Change Date may vary
-for each version of the Licensed Work released by Licensor.
-
-You must conspicuously display this License on each original or modified copy
-of the Licensed Work. If you receive the Licensed Work in original or
-modified form from a third party, the terms and conditions set forth in this
-License apply to your use of that work.
-
-Any use of the Licensed Work in violation of this License will automatically
-terminate your rights under this License for the current and all other
-versions of the Licensed Work.
-
-This License does not grant you any right in any trademark or logo of
-Licensor or its affiliates (provided that you may use a trademark or logo of
-Licensor as expressly required by this License).
-
-TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON
-AN "AS IS" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS,
-EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND
-TITLE.
21 LICENSE-MIT
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2019-2025 Meili SAS
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
25
README.md
25
README.md
@@ -39,7 +39,6 @@
|
|||||||
## 🖥 Examples
|
## 🖥 Examples
|
||||||
|
|
||||||
- [**Movies**](https://where2watch.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=organization) — An application to help you find streaming platforms to watch movies using [hybrid search](https://www.meilisearch.com/solutions/hybrid-search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos).
|
- [**Movies**](https://where2watch.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=organization) — An application to help you find streaming platforms to watch movies using [hybrid search](https://www.meilisearch.com/solutions/hybrid-search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos).
|
||||||
- [**Flickr**](https://flickr.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=organization) — Search and explore one hundred million Flickr images with semantic search.
|
|
||||||
- [**Ecommerce**](https://ecommerce.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Ecommerce website using disjunctive [facets](https://www.meilisearch.com/docs/learn/fine_tuning_results/faceted_search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos), range and rating filtering, and pagination.
|
- [**Ecommerce**](https://ecommerce.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Ecommerce website using disjunctive [facets](https://www.meilisearch.com/docs/learn/fine_tuning_results/faceted_search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos), range and rating filtering, and pagination.
|
||||||
- [**Songs**](https://music.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search through 47 million of songs.
|
- [**Songs**](https://music.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search through 47 million of songs.
|
||||||
- [**SaaS**](https://saas.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search for contacts, deals, and companies in this [multi-tenant](https://www.meilisearch.com/docs/learn/security/multitenancy_tenant_tokens?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) CRM application.
|
- [**SaaS**](https://saas.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search for contacts, deals, and companies in this [multi-tenant](https://www.meilisearch.com/docs/learn/security/multitenancy_tenant_tokens?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) CRM application.
|
||||||
@@ -90,26 +89,6 @@ We also offer a wide range of dedicated guides to all Meilisearch features, such

Finally, for more in-depth information, refer to our articles explaining fundamental Meilisearch concepts such as [documents](https://www.meilisearch.com/docs/learn/core_concepts/documents?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced) and [indexes](https://www.meilisearch.com/docs/learn/core_concepts/indexes?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced).

-## 🧾 Editions & Licensing
-
-Meilisearch is available in two editions:
-
-### 🧪 Community Edition (CE)
-
-- Fully open source under the [MIT license](./LICENSE)
-- Core search engine with fast and relevant full-text, semantic or hybrid search
-- Free to use for anyone, including commercial usage
-
-### 🏢 Enterprise Edition (EE)
-
-- Includes advanced features such as:
-  - Sharding
-- Governed by a [commercial license](./LICENSE-EE) or the [Business Source License 1.1](https://mariadb.com/bsl11)
-- Not allowed in production without a commercial agreement with Meilisearch.
-- You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.
-
-Want access to Enterprise features? → Contact us at [sales@meilisearch.com](mailto:sales@meilisearch.com).

## 📊 Telemetry

Meilisearch collects **anonymized** user data to help us improve our product. You can [deactivate this](https://www.meilisearch.com/docs/learn/what_is_meilisearch/telemetry?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=telemetry#how-to-disable-data-collection) whenever you want.

@@ -122,7 +101,7 @@ If you want to know more about the kind of data we collect and what we use it fo

Meilisearch is a search engine created by [Meili](https://www.meilisearch.com/careers), a software development company headquartered in France and with team members all over the world. Want to know more about us? [Check out our blog!](https://blog.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=contact)

-🗞 [Subscribe to our newsletter](https://share-eu1.hsforms.com/1LN5N0x_GQgq7ss7tXmSykwfg3aq) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.
+🗞 [Subscribe to our newsletter](https://meilisearch.us2.list-manage.com/subscribe?u=27870f7b71c908a8b359599fb&id=79582d828e) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.

💌 Want to make a suggestion or give feedback? Here are some of the channels where you can reach us:

@@ -140,6 +119,6 @@ Meilisearch is, and will always be, open-source! If you want to contribute to th

Meilisearch releases and their associated binaries are available on the project's [releases page](https://github.com/meilisearch/meilisearch/releases).

-The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](./documentation/versioning-policy.md).
+The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](https://github.com/meilisearch/engine-team/blob/main/resources/versioning-policy.md).

Differently from the binaries, crates in this repository are not currently available on [crates.io](https://crates.io/) and do not follow [SemVer conventions](https://semver.org).
TESTING.md
@@ -1,326 +0,0 @@
# Declarative tests

Declarative tests ensure that Meilisearch features remain stable across versions.

While we already have unit tests, those are run against **temporary databases** that are created fresh each time and therefore never risk corruption.

Declarative tests instead **simulate the lifetime of a database**: they chain together commands, API requests, and binary changes, verifying that database state and API responses remain consistent.

## Basic example

```jsonc
{
  "type": "test",
  "name": "api-keys",
  "binary": { // the first command will run on the binary following this specification.
    "source": "release", // get the binary as a release from GitHub
    "version": "1.19.0", // version to fetch
    "edition": "community" // edition to fetch
  },
  "commands": []
}
```

This example defines a no-op test (it does nothing).

If the file is saved at `workloads/tests/example.json`, you can run it with:

```bash
cargo xtask test workloads/tests/example.json
```
## Commands

Commands represent API requests sent to Meilisearch endpoints during a test.

They are executed sequentially, and their responses can be validated to ensure consistent behavior across upgrades.

```jsonc
{
  "route": "keys",
  "method": "POST",
  "body": {
    "inline": {
      "actions": [
        "search",
        "documents.add"
      ],
      "description": "Test API Key",
      "expiresAt": null,
      "indexes": [ "movies" ]
    }
  }
}
```

This command issues a `POST /keys` request, creating an API key with permissions to search and add documents in the `movies` index.

### Using assets in commands

To keep tests concise and reusable, you can define **assets** at the root of the workload file.

Assets are external data sources (such as datasets) that are cached between runs, making tests faster and easier to read.

```jsonc
{
  "type": "test",
  "name": "movies",
  "binary": {
    "source": "release",
    "version": "1.19.0",
    "edition": "community"
  },
  "assets": {
    "movies.json": {
      "local_location": null,
      "remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
      "sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
    }
  },
  "commands": [
    {
      "route": "indexes/movies/documents",
      "method": "POST",
      "body": {
        "asset": "movies.json"
      }
    }
  ]
}
```

In this example:

- The `movies.json` dataset is defined as an asset, pointing to a remote URL.
- The SHA-256 checksum ensures integrity.
- The `POST /indexes/movies/documents` command uses this asset as the request body.

This makes the test much cleaner than inlining a large dataset directly into the command.

For asset handling, please refer to the [declarative benchmarks documentation](/BENCHMARKS.md#adding-new-assets).
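
Because assets are declared once at the root of the workload, the same cached dataset can back several commands. A minimal sketch (the second index name is purely illustrative):

```jsonc
{
  "commands": [
    {
      "route": "indexes/movies/documents",
      "method": "POST",
      "body": { "asset": "movies.json" }
    },
    {
      // The same asset seeds a second index without being downloaded again.
      "route": "indexes/movies_backup/documents",
      "method": "POST",
      "body": { "asset": "movies.json" }
    }
  ]
}
```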
### Asserting responses

Commands can specify both the **expected status code** and the **expected response body**.

```jsonc
{
  "route": "indexes/movies/documents",
  "method": "POST",
  "body": {
    "asset": "movies.json"
  },
  "expectedStatus": 202,
  "expectedResponse": {
    "enqueuedAt": "[timestamp]", // Set to a bracketed string to ignore the value
    "indexUid": "movies",
    "status": "enqueued",
    "taskUid": 1,
    "type": "documentAdditionOrUpdate"
  },
  "synchronous": "WaitForTask"
}
```

Manually writing `expectedResponse` fields can be tedious.

Instead, you can let the test runner populate them automatically:

```bash
# Run the workload to populate expected fields. Only adds the missing ones, doesn't change existing data
cargo xtask test workloads/tests/example.json --add-missing-responses

# OR

# Run the workload to populate expected fields. Updates all fields including existing ones
cargo xtask test workloads/tests/example.json --update-responses
```

This workflow is recommended:

1. Write the test without expected fields (see the sketch below).
2. Run it with `--add-missing-responses` to capture the actual responses.
3. Review and commit the generated expectations.
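
For step 1, a command can start with just a route, a method, and a body; the expectations are added by the runner afterwards. A minimal sketch (the search query shown is only an example):

```jsonc
{
  "route": "indexes/movies/search",
  "method": "POST",
  "body": {
    "inline": { "q": "shazam" }
  }
  // No expectedStatus or expectedResponse yet: running the workload with
  // --add-missing-responses fills them in from the actual response.
}
```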
## Changing binary

It is possible to insert an instruction to change the current Meilisearch instance from one binary specification to another during a test.

When executed, such an instruction will:

1. Stop the current Meilisearch instance.
2. Fetch the binary specified by the instruction.
3. Restart the server with the specified binary on the same database.

```jsonc
{
  "type": "test",
  "name": "movies",
  "binary": {
    "source": "release",
    "version": "1.19.0", // start with version v1.19.0
    "edition": "community"
  },
  "assets": {
    "movies.json": {
      "local_location": null,
      "remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
      "sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
    }
  },
  "commands": [
    // setup some data
    {
      "route": "indexes/movies/documents",
      "method": "POST",
      "body": {
        "asset": "movies.json"
      }
    },
    // switch binary to v1.24.0
    {
      "binary": {
        "source": "release",
        "version": "1.24.0",
        "edition": "community"
      }
    }
  ]
}
```
### Typical Usage

In most cases, the change binary instruction will be used to upgrade a database.

- **Set up** some data using commands on an older version.
- **Upgrade** to the latest version.
- **Assert** that the data and API behavior remain correct after the upgrade.

To properly test the dumpless upgrade, one should typically:

1. Open the database without processing the update task: use a `binary` instruction to switch to the desired version, passing `--experimental-dumpless-upgrade` and `--experimental-max-number-of-batched-tasks=0` as extra CLI arguments.
2. Check that the search, stats, and task queue still work.
3. Open the database and process the update task: use a `binary` instruction to switch to the desired version, passing `--experimental-dumpless-upgrade` as the extra CLI argument. Use a `health` command to wait for the upgrade task to finish (see the sketch after the example below).
4. Check that the indexing, search, stats, and task queue still work.

```jsonc
{
  "type": "test",
  "name": "movies",
  "binary": {
    "source": "release",
    "version": "1.12.0",
    "edition": "community"
  },
  "commands": [
    // 0. Run commands to populate the database
    {
      // ..
    },
    // 1. Open the database with new MS without processing the update task
    {
      "binary": {
        "source": "build", // build the binary from the sources in the current git repository
        "edition": "community",
        "extraCliArgs": [
          "--experimental-dumpless-upgrade", // allows to open with a newer MS
          "--experimental-max-number-of-batched-tasks=0" // prevent processing of the update task
        ]
      }
    },
    // 2. Check the search etc.
    {
      // ..
    },
    // 3. Open the database with new MS and process the update task
    {
      "binary": {
        "source": "build", // build the binary from the sources in the current git repository
        "edition": "community",
        "extraCliArgs": [
          "--experimental-dumpless-upgrade" // allows to open with a newer MS
          // no `--experimental-max-number-of-batched-tasks=0`
        ]
      }
    },
    // 4. Check the indexing, search, etc.
    {
      // ..
    }
  ]
}
```
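
The `health` wait mentioned in step 3 is not spelled out above. Assuming the runner accepts the same command shape for the `/health` route, it could look like the following sketch (the exact waiting semantics are an assumption, not taken from this file):

```jsonc
{
  // Assumed shape: hit the health route and block until the pending upgrade task is done.
  "route": "health",
  "method": "GET",
  "expectedStatus": 200,
  "synchronous": "WaitForTask"
}
```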

This ensures backward compatibility: databases created with older Meilisearch versions should remain functional and consistent after an upgrade.
## Variables

Sometimes a command needs to use a value returned by a **previous response**.
These values can be captured and reused using the `register` field.

```jsonc
{
  "route": "keys",
  "method": "POST",
  "body": {
    "inline": {
      "actions": [
        "search",
        "documents.add"
      ],
      "description": "Test API Key",
      "expiresAt": null,
      "indexes": [ "movies" ]
    }
  },
  "expectedResponse": {
    "key": "c6f64630bad2996b1f675007c8800168e14adf5d6a7bb1a400a6d2b158050eaf",
    // ...
  },
  "register": {
    "key": "/key"
  },
  "synchronous": "WaitForResponse"
}
```

The `register` field captures the value at the JSON path `/key` from the response.
Paths follow the **JavaScript Object Notation Pointer (RFC 6901)** format.
Registered variables are available for all subsequent commands.

Registered variables can be referenced by wrapping their name in double curly braces:

In the route/path:

```jsonc
{
  "route": "tasks/{{ task_id }}",
  "method": "GET"
}
```

In the request body:

```jsonc
{
  "route": "indexes/movies/documents",
  "method": "PATCH",
  "body": {
    "inline": {
      "id": "{{ document_id }}",
      "overview": "Shazam turns evil and the world is in danger."
    }
  }
}
```

Or they can be referenced by their name (**without curly braces**) as an API key:

```jsonc
{
  "route": "indexes/movies/documents",
  "method": "POST",
  "body": { /* ... */ },
  "apiKeyVariable": "key" // The **content** of the key variable will be used as an API key
}
```
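
Putting these pieces together, a registered value can feed a later command. The sketch below registers the `taskUid` returned by a document addition and then fetches that task; the pointer `/taskUid` matches the response shape shown earlier, while the variable name itself is arbitrary:

```jsonc
[
  {
    "route": "indexes/movies/documents",
    "method": "POST",
    "body": { "asset": "movies.json" },
    "register": {
      "task_uid": "/taskUid"
    },
    "synchronous": "WaitForTask"
  },
  {
    // Reuse the registered value inside the route of a follow-up command.
    "route": "tasks/{{ task_uid }}",
    "method": "GET",
    "expectedStatus": 200
  }
]
```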
@@ -11,27 +11,27 @@ edition.workspace = true
license.workspace = true

[dependencies]
-anyhow = "1.0.100"
+anyhow = "1.0.95"
-bumpalo = "3.19.0"
+bumpalo = "3.16.0"
-csv = "1.4.0"
+csv = "1.3.1"
-memmap2 = "0.9.9"
+memmap2 = "0.9.5"
milli = { path = "../milli" }
-mimalloc = { version = "0.1.48", default-features = false }
+mimalloc = { version = "0.1.43", default-features = false }
-serde_json = { version = "1.0.145", features = ["preserve_order"] }
+serde_json = { version = "1.0.135", features = ["preserve_order"] }
-tempfile = "3.23.0"
+tempfile = "3.15.0"

[dev-dependencies]
-criterion = { version = "0.7.0", features = ["html_reports"] }
+criterion = { version = "0.5.1", features = ["html_reports"] }
rand = "0.8.5"
rand_chacha = "0.3.1"
-roaring = "0.10.12"
+roaring = "0.10.10"

[build-dependencies]
-anyhow = "1.0.100"
+anyhow = "1.0.95"
-bytes = "1.11.0"
+bytes = "1.9.0"
-convert_case = "0.9.0"
+convert_case = "0.6.0"
-flate2 = "1.1.5"
+flate2 = "1.0.35"
-reqwest = { version = "0.12.24", features = ["blocking", "rustls-tls"], default-features = false }
+reqwest = { version = "0.12.12", features = ["blocking", "rustls-tls"], default-features = false }

[features]
default = ["milli/all-tokenizations"]

@@ -51,11 +51,3 @@ harness = false
[[bench]]
name = "indexing"
harness = false

-[[bench]]
-name = "sort"
-harness = false
-
-[[bench]]
-name = "filter_starts_with"
-harness = false
@@ -1,66 +0,0 @@
mod datasets_paths;
mod utils;

use criterion::{criterion_group, criterion_main};
use milli::update::Settings;
use milli::FilterableAttributesRule;
use utils::Conf;

#[cfg(not(windows))]
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;

fn base_conf(builder: &mut Settings) {
    let displayed_fields = ["geonameid", "name"].iter().map(|s| s.to_string()).collect();
    builder.set_displayed_fields(displayed_fields);

    let filterable_fields =
        ["name"].iter().map(|s| FilterableAttributesRule::Field(s.to_string())).collect();
    builder.set_filterable_fields(filterable_fields);
}

#[rustfmt::skip]
const BASE_CONF: Conf = Conf {
    dataset: datasets_paths::SMOL_ALL_COUNTRIES,
    dataset_format: "jsonl",
    queries: &[
        "",
    ],
    configure: base_conf,
    primary_key: Some("geonameid"),
    ..Conf::BASE
};

fn filter_starts_with(c: &mut criterion::Criterion) {
    #[rustfmt::skip]
    let confs = &[
        utils::Conf {
            group_name: "1 letter",
            filter: Some("name STARTS WITH e"),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "2 letters",
            filter: Some("name STARTS WITH es"),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "3 letters",
            filter: Some("name STARTS WITH est"),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "6 letters",
            filter: Some("name STARTS WITH estoni"),
            ..BASE_CONF
        }
    ];

    utils::run_benches(c, confs);
}

criterion_group!(benches, filter_starts_with);
criterion_main!(benches);
@@ -11,7 +11,7 @@ use milli::heed::{EnvOpenOptions, RwTxn};
use milli::progress::Progress;
use milli::update::new::indexer;
use milli::update::{IndexerConfig, Settings};
-use milli::vector::RuntimeEmbedders;
+use milli::vector::EmbeddingConfigs;
use milli::{FilterableAttributesRule, Index};
use rand::seq::SliceRandom;
use rand_chacha::rand_core::SeedableRng;

@@ -21,10 +21,6 @@ use roaring::RoaringBitmap;
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;

-fn no_cancel() -> bool {
-    false
-}
-
const BENCHMARK_ITERATION: usize = 10;

fn setup_dir(path: impl AsRef<Path>) {

@@ -69,7 +65,7 @@ fn setup_settings<'t>(
    let sortable_fields = sortable_fields.iter().map(|s| s.to_string()).collect();
    builder.set_sortable_fields(sortable_fields);

-    builder.execute(&no_cancel, &Progress::default(), Default::default()).unwrap();
+    builder.execute(|_| (), || false).unwrap();
}

fn setup_index_with_settings(

@@ -156,9 +152,8 @@ fn indexing_songs_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -171,10 +166,9 @@ fn indexing_songs_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -224,9 +218,8 @@ fn reindexing_songs_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -239,10 +232,9 @@ fn reindexing_songs_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -270,9 +262,8 @@ fn reindexing_songs_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -285,10 +276,9 @@ fn reindexing_songs_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -340,9 +330,8 @@ fn deleting_songs_in_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -355,10 +344,9 @@ fn deleting_songs_in_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -418,9 +406,8 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -433,10 +420,9 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -464,9 +450,8 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -479,10 +464,9 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -506,9 +490,8 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -521,10 +504,9 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -575,9 +557,8 @@ fn indexing_songs_without_faceted_numbers(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -590,10 +571,9 @@ fn indexing_songs_without_faceted_numbers(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -643,9 +623,8 @@ fn indexing_songs_without_faceted_fields(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -658,10 +637,9 @@ fn indexing_songs_without_faceted_fields(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -711,9 +689,8 @@ fn indexing_wiki(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -726,10 +703,9 @@ fn indexing_wiki(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -778,9 +754,8 @@ fn reindexing_wiki(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -793,10 +768,9 @@ fn reindexing_wiki(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -824,9 +798,8 @@ fn reindexing_wiki(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -839,10 +812,9 @@ fn reindexing_wiki(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -893,9 +865,8 @@ fn deleting_wiki_in_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -908,10 +879,9 @@ fn deleting_wiki_in_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -971,9 +941,8 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -986,10 +955,9 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1018,9 +986,8 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1033,10 +1000,9 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1061,9 +1027,8 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1076,10 +1041,9 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1129,9 +1093,8 @@ fn indexing_movies_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1144,10 +1107,9 @@ fn indexing_movies_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1196,9 +1158,8 @@ fn reindexing_movies_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1211,10 +1172,9 @@ fn reindexing_movies_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1242,9 +1202,8 @@ fn reindexing_movies_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1257,10 +1216,9 @@ fn reindexing_movies_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1311,9 +1269,8 @@ fn deleting_movies_in_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1326,10 +1283,9 @@ fn deleting_movies_in_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1375,10 +1331,9 @@ fn delete_documents_from_ids(index: Index, document_ids_to_delete: Vec<RoaringBi
    new_fields_ids_map,
    Some(primary_key),
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1426,9 +1381,8 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1441,10 +1395,9 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1472,9 +1425,8 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1487,10 +1439,9 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1514,9 +1465,8 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1529,10 +1479,9 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1605,9 +1554,8 @@ fn indexing_nested_movies_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1620,10 +1568,9 @@ fn indexing_nested_movies_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1697,9 +1644,8 @@ fn deleting_nested_movies_in_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1712,10 +1658,9 @@ fn deleting_nested_movies_in_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1781,9 +1726,8 @@ fn indexing_nested_movies_without_faceted_fields(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1796,10 +1740,9 @@ fn indexing_nested_movies_without_faceted_fields(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1849,9 +1792,8 @@ fn indexing_geo(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1864,10 +1806,9 @@ fn indexing_geo(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1916,9 +1857,8 @@ fn reindexing_geo(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1931,10 +1871,9 @@ fn reindexing_geo(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -1962,9 +1901,8 @@ fn reindexing_geo(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -1977,10 +1915,9 @@ fn reindexing_geo(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -2031,9 +1968,8 @@ fn deleting_geo_in_batches_default(c: &mut Criterion) {
    &rtxn,
    None,
    &mut new_fields_ids_map,
-    &no_cancel,
+    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -2046,10 +1982,9 @@ fn deleting_geo_in_batches_default(c: &mut Criterion) {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
-    &no_cancel,
+    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();
@@ -2,8 +2,7 @@ mod datasets_paths;
mod utils;

use criterion::{criterion_group, criterion_main};
-use milli::update::Settings;
-use milli::FilterableAttributesRule;
+use milli::{update::Settings, FilterableAttributesRule};
use utils::Conf;

#[cfg(not(windows))]

@@ -2,8 +2,7 @@ mod datasets_paths;
mod utils;

use criterion::{criterion_group, criterion_main};
-use milli::update::Settings;
-use milli::FilterableAttributesRule;
+use milli::{update::Settings, FilterableAttributesRule};
use utils::Conf;

#[cfg(not(windows))]
@@ -1,114 +0,0 @@
//! This benchmark module is used to compare the performance of sorting documents in /search VS /documents
//!
//! The tests/benchmarks were designed in the context of a query returning only 20 documents.

mod datasets_paths;
mod utils;

use criterion::{criterion_group, criterion_main};
use milli::update::Settings;
use utils::Conf;

#[cfg(not(windows))]
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;

fn base_conf(builder: &mut Settings) {
    let displayed_fields =
        ["geonameid", "name", "asciiname", "alternatenames", "_geo", "population"]
            .iter()
            .map(|s| s.to_string())
            .collect();
    builder.set_displayed_fields(displayed_fields);

    let sortable_fields =
        ["_geo", "name", "population", "elevation", "timezone", "modification-date"]
            .iter()
            .map(|s| s.to_string())
            .collect();
    builder.set_sortable_fields(sortable_fields);
}

#[rustfmt::skip]
const BASE_CONF: Conf = Conf {
    dataset: datasets_paths::SMOL_ALL_COUNTRIES,
    dataset_format: "jsonl",
    configure: base_conf,
    primary_key: Some("geonameid"),
    queries: &[""],
    offsets: &[
        Some((0, 20)), // The most common query in the real world
        Some((0, 500)), // A query that ranges over many documents
        Some((980, 20)), // The worst query that could happen in the real world
        Some((800_000, 20)) // The worst query
    ],
    get_documents: true,
    ..Conf::BASE
};

fn bench_sort(c: &mut criterion::Criterion) {
    #[rustfmt::skip]
    let confs = &[
        utils::Conf {
            group_name: "without sort",
            sort: None,
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many different values",
            sort: Some(vec!["name:asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many similar values",
            sort: Some(vec!["timezone:desc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many similar then different values",
            sort: Some(vec!["timezone:desc", "name:asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many different then similar values",
            sort: Some(vec!["timezone:desc", "name:asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "geo sort",
            sample_size: Some(10),
            sort: Some(vec!["_geoPoint(45.4777599, 9.1967508):asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many similar values then geo sort",
            sample_size: Some(50),
            sort: Some(vec!["timezone:desc", "_geoPoint(45.4777599, 9.1967508):asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many different values then geo sort",
            sample_size: Some(50),
            sort: Some(vec!["name:desc", "_geoPoint(45.4777599, 9.1967508):asc"]),
            ..BASE_CONF
        },
        utils::Conf {
            group_name: "sort on many fields",
            sort: Some(vec!["population:asc", "name:asc", "elevation:asc", "timezone:asc"]),
            ..BASE_CONF
        },
    ];

    utils::run_benches(c, confs);
}

criterion_group!(benches, bench_sort);
criterion_main!(benches);
@@ -9,12 +9,11 @@ use anyhow::Context;
use bumpalo::Bump;
use criterion::BenchmarkId;
use memmap2::Mmap;
-use milli::documents::sort::recursive_sort;
use milli::heed::EnvOpenOptions;
use milli::progress::Progress;
use milli::update::new::indexer;
use milli::update::{IndexerConfig, Settings};
-use milli::vector::RuntimeEmbedders;
+use milli::vector::EmbeddingConfigs;
use milli::{Criterion, Filter, Index, Object, TermsMatchingStrategy};
use serde_json::Value;

@@ -36,12 +35,6 @@ pub struct Conf<'a> {
    pub configure: fn(&mut Settings),
    pub filter: Option<&'a str>,
    pub sort: Option<Vec<&'a str>>,
-    /// set to skip documents (offset, limit)
-    pub offsets: &'a [Option<(usize, usize)>],
-    /// enable if you want to bench getting documents without querying
-    pub get_documents: bool,
-    /// configure the benchmark sample size
-    pub sample_size: Option<usize>,
    /// enable or disable the optional words on the query
    pub optional_words: bool,
    /// primary key, if there is None we'll auto-generate docids for every documents

@@ -59,9 +52,6 @@ impl Conf<'_> {
        configure: |_| (),
        filter: None,
        sort: None,
-        offsets: &[None],
-        get_documents: false,
-        sample_size: None,
        optional_words: true,
        primary_key: None,
    };

@@ -100,7 +90,7 @@ pub fn base_setup(conf: &Conf) -> Index {

    (conf.configure)(&mut builder);

-    builder.execute(&|| false, &Progress::default(), Default::default()).unwrap();
+    builder.execute(|_| (), || false).unwrap();
    wtxn.commit().unwrap();

    let config = IndexerConfig::default();

@@ -123,7 +113,6 @@ pub fn base_setup(conf: &Conf) -> Index {
    &mut new_fields_ids_map,
    &|| false,
    Progress::default(),
-    None,
)
.unwrap();

@@ -136,10 +125,9 @@ pub fn base_setup(conf: &Conf) -> Index {
    new_fields_ids_map,
    primary_key,
    &document_changes,
-    RuntimeEmbedders::default(),
+    EmbeddingConfigs::default(),
    &|| false,
    &Progress::default(),
-    &Default::default(),
)
.unwrap();

@@ -156,79 +144,25 @@ pub fn run_benches(c: &mut criterion::Criterion, confs: &[Conf]) {
    let file_name = Path::new(conf.dataset).file_name().and_then(|f| f.to_str()).unwrap();
    let name = format!("{}: {}", file_name, conf.group_name);
    let mut group = c.benchmark_group(&name);
-    if let Some(sample_size) = conf.sample_size {
-        group.sample_size(sample_size);
-    }

    for &query in conf.queries {
-        for offset in conf.offsets {
-            let parameter = match offset {
-                None => query.to_string(),
-                Some((offset, limit)) => format!("{query}[{offset}:{limit}]"),
-            };
-            group.bench_with_input(
-                BenchmarkId::from_parameter(parameter),
-                &query,
-                |b, &query| {
-                    b.iter(|| {
-                        let rtxn = index.read_txn().unwrap();
-                        let mut search = index.search(&rtxn);
-                        search
-                            .query(query)
-                            .terms_matching_strategy(TermsMatchingStrategy::default());
-                        if let Some(filter) = conf.filter {
-                            let filter = Filter::from_str(filter).unwrap().unwrap();
-                            search.filter(filter);
-                        }
-                        if let Some(sort) = &conf.sort {
-                            let sort = sort.iter().map(|sort| sort.parse().unwrap()).collect();
-                            search.sort_criteria(sort);
-                        }
-                        if let Some((offset, limit)) = offset {
-                            search.offset(*offset).limit(*limit);
-                        }
-
-                        let _ids = search.execute().unwrap();
-                    });
-                },
-            );
-        }
-    }
-
-    if conf.get_documents {
-        for offset in conf.offsets {
-            let parameter = match offset {
-                None => String::from("get_documents"),
-                Some((offset, limit)) => format!("get_documents[{offset}:{limit}]"),
-            };
-            group.bench_with_input(BenchmarkId::from_parameter(parameter), &(), |b, &()| {
-                b.iter(|| {
-                    let rtxn = index.read_txn().unwrap();
-                    if let Some(sort) = &conf.sort {
-                        let sort = sort.iter().map(|sort| sort.parse().unwrap()).collect();
-                        let all_docs = index.documents_ids(&rtxn).unwrap();
-                        let facet_sort =
-                            recursive_sort(&index, &rtxn, sort, &all_docs).unwrap();
-                        let iter = facet_sort.iter().unwrap();
-                        if let Some((offset, limit)) = offset {
-                            let _results = iter.skip(*offset).take(*limit).collect::<Vec<_>>();
-                        } else {
-                            let _results = iter.collect::<Vec<_>>();
-                        }
-                    } else {
-                        let all_docs = index.documents_ids(&rtxn).unwrap();
-                        if let Some((offset, limit)) = offset {
-                            let _results =
-                                all_docs.iter().skip(*offset).take(*limit).collect::<Vec<_>>();
-                        } else {
-                            let _results = all_docs.iter().collect::<Vec<_>>();
-                        }
-                    }
-                });
-            });
-        }
-    }
+        group.bench_with_input(BenchmarkId::from_parameter(query), &query, |b, &query| {
+            b.iter(|| {
+                let rtxn = index.read_txn().unwrap();
+                let mut search = index.search(&rtxn);
+                search.query(query).terms_matching_strategy(TermsMatchingStrategy::default());
+                if let Some(filter) = conf.filter {
+                    let filter = Filter::from_str(filter).unwrap().unwrap();
+                    search.filter(filter);
+                }
+                if let Some(sort) = &conf.sort {
+                    let sort = sort.iter().map(|sort| sort.parse().unwrap()).collect();
+                    search.sort_criteria(sort);
+                }
+                let _ids = search.execute().unwrap();
+            });
+        });
+    }

    group.finish();

    index.prepare_for_closing().wait();
@@ -67,7 +67,7 @@ fn main() -> anyhow::Result<()> {
 writeln!(
 &mut manifest_paths_file,
 r#"pub const {}: &str = {:?};"#,
-dataset.to_case(Case::UpperSnake),
+dataset.to_case(Case::ScreamingSnake),
 out_file.display(),
 )?;

@@ -11,8 +11,8 @@ license.workspace = true
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

 [dependencies]
-time = { version = "0.3.44", features = ["parsing"] }
+time = { version = "0.3.37", features = ["parsing"] }

 [build-dependencies]
-anyhow = "1.0.100"
-vergen-gitcl = "1.0.8"
+anyhow = "1.0.95"
+vergen-git2 = "1.0.2"
@@ -15,7 +15,7 @@ fn emit_git_variables() -> anyhow::Result<()> {
 // Note: any code that needs VERGEN_ environment variables should take care to define them manually in the Dockerfile and pass them
 // in the corresponding GitHub workflow (publish_docker.yml).
 // This is due to the Dockerfile building the binary outside of the git directory.
-let mut builder = vergen_gitcl::GitclBuilder::default();
+let mut builder = vergen_git2::Git2Builder::default();

 builder.branch(true);
 builder.commit_timestamp(true);
@@ -25,5 +25,5 @@ fn emit_git_variables() -> anyhow::Result<()> {

 let git2 = builder.build()?;

-vergen_gitcl::Emitter::default().fail_on_error().add_instructions(&git2)?.emit()
+vergen_git2::Emitter::default().fail_on_error().add_instructions(&git2)?.emit()
 }
@@ -1,6 +0,0 @@
-use build_info::BuildInfo;
-
-fn main() {
-let info = BuildInfo::from_build();
-dbg!(info);
-}
@@ -11,27 +11,24 @@ readme.workspace = true
 license.workspace = true

 [dependencies]
-anyhow = "1.0.100"
-flate2 = "1.1.5"
-http = "1.3.1"
+anyhow = "1.0.95"
+flate2 = "1.0.35"
+http = "1.2.0"
 meilisearch-types = { path = "../meilisearch-types" }
-once_cell = "1.21.3"
-regex = "1.12.2"
-roaring = { version = "0.10.12", features = ["serde"] }
-serde = { version = "1.0.228", features = ["derive"] }
-serde_json = { version = "1.0.145", features = ["preserve_order"] }
-tar = "0.4.44"
-tempfile = "3.23.0"
-thiserror = "2.0.17"
-time = { version = "0.3.44", features = ["serde-well-known", "formatting", "parsing", "macros"] }
+once_cell = "1.20.2"
+regex = "1.11.1"
+roaring = { version = "0.10.10", features = ["serde"] }
+serde = { version = "1.0.217", features = ["derive"] }
+serde_json = { version = "1.0.135", features = ["preserve_order"] }
+tar = "0.4.43"
+tempfile = "3.15.0"
+thiserror = "2.0.9"
+time = { version = "0.3.37", features = ["serde-well-known", "formatting", "parsing", "macros"] }
 tracing = "0.1.41"
-uuid = { version = "1.18.1", features = ["serde", "v4"] }
+uuid = { version = "1.11.0", features = ["serde", "v4"] }

 [dev-dependencies]
 big_s = "1.0.2"
 maplit = "1.0.2"
 meili-snap = { path = "../meili-snap" }
 meilisearch-types = { path = "../meilisearch-types" }
-
-[features]
-enterprise = ["meilisearch-types/enterprise"]
@@ -1,17 +1,12 @@
 #![allow(clippy::type_complexity)]
 #![allow(clippy::wrong_self_convention)]

-use std::collections::BTreeMap;
-
 use meilisearch_types::batches::BatchId;
-use meilisearch_types::byte_unit::Byte;
 use meilisearch_types::error::ResponseError;
 use meilisearch_types::keys::Key;
 use meilisearch_types::milli::update::IndexDocumentsMethod;
 use meilisearch_types::settings::Unchecked;
-use meilisearch_types::tasks::{
-Details, ExportIndexSettings, IndexSwap, KindWithContent, Status, Task, TaskId, TaskNetwork,
-};
+use meilisearch_types::tasks::{Details, IndexSwap, KindWithContent, Status, Task, TaskId};
 use meilisearch_types::InstanceUid;
 use roaring::RoaringBitmap;
 use serde::{Deserialize, Serialize};
@@ -94,10 +89,6 @@ pub struct TaskDump {
 default
 )]
 pub finished_at: Option<OffsetDateTime>,
-#[serde(default, skip_serializing_if = "Option::is_none")]
-pub network: Option<TaskNetwork>,
-#[serde(default, skip_serializing_if = "Option::is_none")]
-pub custom_metadata: Option<String>,
 }

 // A `Kind` specific version made for the dump. If modified you may break the dump.
@@ -133,7 +124,6 @@ pub enum KindDump {
 },
 IndexUpdate {
 primary_key: Option<String>,
-uid: Option<String>,
 },
 IndexSwap {
 swaps: Vec<IndexSwap>,
@@ -151,18 +141,9 @@ pub enum KindDump {
 instance_uid: Option<InstanceUid>,
 },
 SnapshotCreation,
-Export {
-url: String,
-api_key: Option<String>,
-payload_size: Option<Byte>,
-indexes: BTreeMap<String, ExportIndexSettings>,
-},
 UpgradeDatabase {
 from: (u32, u32, u32),
 },
-IndexCompaction {
-index_uid: String,
-},
 }

 impl From<Task> for TaskDump {
@@ -179,8 +160,6 @@ impl From<Task> for TaskDump {
 enqueued_at: task.enqueued_at,
 started_at: task.started_at,
 finished_at: task.finished_at,
-network: task.network,
-custom_metadata: task.custom_metadata,
 }
 }
 }
@@ -220,8 +199,8 @@ impl From<KindWithContent> for KindDump {
 KindWithContent::IndexCreation { primary_key, .. } => {
 KindDump::IndexCreation { primary_key }
 }
-KindWithContent::IndexUpdate { primary_key, new_index_uid: uid, .. } => {
-KindDump::IndexUpdate { primary_key, uid }
+KindWithContent::IndexUpdate { primary_key, .. } => {
+KindDump::IndexUpdate { primary_key }
 }
 KindWithContent::IndexSwap { swaps } => KindDump::IndexSwap { swaps },
 KindWithContent::TaskCancelation { query, tasks } => {
@@ -234,21 +213,9 @@ impl From<KindWithContent> for KindDump {
 KindDump::DumpCreation { keys, instance_uid }
 }
 KindWithContent::SnapshotCreation => KindDump::SnapshotCreation,
-KindWithContent::Export { url, api_key, payload_size, indexes } => KindDump::Export {
-url,
-api_key,
-payload_size,
-indexes: indexes
-.into_iter()
-.map(|(pattern, settings)| (pattern.to_string(), settings))
-.collect(),
-},
 KindWithContent::UpgradeDatabase { from: version } => {
 KindDump::UpgradeDatabase { from: version }
 }
-KindWithContent::IndexCompaction { index_uid } => {
-KindDump::IndexCompaction { index_uid }
-}
 }
 }
 }
@@ -263,12 +230,11 @@ pub(crate) mod test {
 use maplit::{btreemap, btreeset};
 use meilisearch_types::batches::{Batch, BatchEnqueuedAt, BatchStats};
 use meilisearch_types::facet_values_sort::FacetValuesSort;
-use meilisearch_types::features::RuntimeTogglableFeatures;
+use meilisearch_types::features::{Network, Remote, RuntimeTogglableFeatures};
 use meilisearch_types::index_uid_pattern::IndexUidPattern;
 use meilisearch_types::keys::{Action, Key};
 use meilisearch_types::milli::update::Setting;
 use meilisearch_types::milli::{self, FilterableAttributesRule};
-use meilisearch_types::network::{Network, Remote};
 use meilisearch_types::settings::{Checked, FacetingSettings, Settings};
 use meilisearch_types::task_view::DetailsView;
 use meilisearch_types::tasks::{BatchStopReason, Details, Kind, Status};
@@ -339,8 +305,6 @@ pub(crate) mod test {
 localized_attributes: Setting::NotSet,
 facet_search: Setting::NotSet,
 prefix_search: Setting::NotSet,
-chat: Setting::NotSet,
-vector_store: Setting::NotSet,
 _kind: std::marker::PhantomData,
 };
 settings.check()
@@ -364,7 +328,6 @@ pub(crate) mod test {
 write_channel_congestion: None,
 internal_database_sizes: Default::default(),
 },
-embedder_stats: Default::default(),
 enqueued_at: Some(BatchEnqueuedAt {
 earliest: datetime!(2022-11-11 0:00 UTC),
 oldest: datetime!(2022-11-11 0:00 UTC),
@@ -398,8 +361,6 @@ pub(crate) mod test {
 enqueued_at: datetime!(2022-11-11 0:00 UTC),
 started_at: Some(datetime!(2022-11-20 0:00 UTC)),
 finished_at: Some(datetime!(2022-11-21 0:00 UTC)),
-network: None,
-custom_metadata: None,
 },
 None,
 ),
@@ -424,8 +385,6 @@ pub(crate) mod test {
 enqueued_at: datetime!(2022-11-11 0:00 UTC),
 started_at: None,
 finished_at: None,
-network: None,
-custom_metadata: None,
 },
 Some(vec![
 json!({ "id": 4, "race": "leonberg" }).as_object().unwrap().clone(),
@@ -445,8 +404,6 @@ pub(crate) mod test {
 enqueued_at: datetime!(2022-11-15 0:00 UTC),
 started_at: None,
 finished_at: None,
-network: None,
-custom_metadata: None,
 },
 None,
 ),
@@ -559,8 +516,7 @@ pub(crate) mod test {
 fn create_test_network() -> Network {
 Network {
 local: Some("myself".to_string()),
-remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()), write_api_key: Some("docApiKey".to_string()) }},
-sharding: false,
+remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()) }},
 }
 }

@@ -1,4 +1,3 @@
-use std::fs::File;
 use std::str::FromStr;

 use super::v2_to_v3::CompatV2ToV3;
@@ -95,10 +94,6 @@ impl CompatIndexV1ToV2 {
 self.from.documents().map(|it| Box::new(it) as Box<dyn Iterator<Item = _>>)
 }
-
-pub fn documents_file(&self) -> &File {
-self.from.documents_file()
-}

 pub fn settings(&mut self) -> Result<v2::settings::Settings<v2::settings::Checked>> {
 Ok(v2::settings::Settings::<v2::settings::Unchecked>::from(self.from.settings()?).check())
 }
@@ -1,4 +1,3 @@
-use std::fs::File;
 use std::str::FromStr;

 use time::OffsetDateTime;
@@ -97,7 +96,6 @@ impl CompatV2ToV3 {
 }
 }

-#[allow(clippy::large_enum_variant)]
 pub enum CompatIndexV2ToV3 {
 V2(v2::V2IndexReader),
 Compat(Box<CompatIndexV1ToV2>),
@@ -124,13 +122,6 @@ impl CompatIndexV2ToV3 {
 }
 }
-
-pub fn documents_file(&self) -> &File {
-match self {
-CompatIndexV2ToV3::V2(v2) => v2.documents_file(),
-CompatIndexV2ToV3::Compat(compat) => compat.documents_file(),
-}
-}

 pub fn settings(&mut self) -> Result<v3::Settings<v3::Checked>> {
 let settings = match self {
 CompatIndexV2ToV3::V2(from) => from.settings()?,
@@ -1,5 +1,3 @@
-use std::fs::File;
-
 use super::v2_to_v3::{CompatIndexV2ToV3, CompatV2ToV3};
 use super::v4_to_v5::CompatV4ToV5;
 use crate::reader::{v3, v4, UpdateFile};
@@ -254,13 +252,6 @@ impl CompatIndexV3ToV4 {
 }
 }
-
-pub fn documents_file(&self) -> &File {
-match self {
-CompatIndexV3ToV4::V3(v3) => v3.documents_file(),
-CompatIndexV3ToV4::Compat(compat) => compat.documents_file(),
-}
-}

 pub fn settings(&mut self) -> Result<v4::Settings<v4::Checked>> {
 Ok(match self {
 CompatIndexV3ToV4::V3(v3) => {
@@ -1,5 +1,3 @@
-use std::fs::File;
-
 use super::v3_to_v4::{CompatIndexV3ToV4, CompatV3ToV4};
 use super::v5_to_v6::CompatV5ToV6;
 use crate::reader::{v4, v5, Document};
@@ -243,13 +241,6 @@ impl CompatIndexV4ToV5 {
 }
 }
-
-pub fn documents_file(&self) -> &File {
-match self {
-CompatIndexV4ToV5::V4(v4) => v4.documents_file(),
-CompatIndexV4ToV5::Compat(compat) => compat.documents_file(),
-}
-}

 pub fn settings(&mut self) -> Result<v5::Settings<v5::Checked>> {
 match self {
 CompatIndexV4ToV5::V4(v4) => Ok(v5::Settings::from(v4.settings()?).check()),
@@ -1,5 +1,3 @@
-use std::fs::File;
-use std::num::NonZeroUsize;
 use std::str::FromStr;

 use super::v4_to_v5::{CompatIndexV4ToV5, CompatV4ToV5};
@@ -85,7 +83,7 @@ impl CompatV5ToV6 {
 v6::Kind::IndexCreation { primary_key }
 }
 v5::tasks::TaskContent::IndexUpdate { primary_key, .. } => {
-v6::Kind::IndexUpdate { primary_key, uid: None }
+v6::Kind::IndexUpdate { primary_key }
 }
 v5::tasks::TaskContent::IndexDeletion { .. } => v6::Kind::IndexDeletion,
 v5::tasks::TaskContent::DocumentAddition {
@@ -140,11 +138,9 @@ impl CompatV5ToV6 {
 v5::Details::Settings { settings } => {
 v6::Details::SettingsUpdate { settings: Box::new(settings.into()) }
 }
-v5::Details::IndexInfo { primary_key } => v6::Details::IndexInfo {
-primary_key,
-new_index_uid: None,
-old_index_uid: None,
-},
+v5::Details::IndexInfo { primary_key } => {
+v6::Details::IndexInfo { primary_key }
+}
 v5::Details::DocumentDeletion {
 received_document_ids,
 deleted_documents,
@@ -163,8 +159,6 @@ impl CompatV5ToV6 {
 enqueued_at: task_view.enqueued_at,
 started_at: task_view.started_at,
 finished_at: task_view.finished_at,
-network: None,
-custom_metadata: None,
 };

 (task, content_file)
@@ -206,10 +200,6 @@ impl CompatV5ToV6 {
 pub fn network(&self) -> Result<Option<&v6::Network>> {
 Ok(None)
 }
-
-pub fn webhooks(&self) -> Option<&v6::Webhooks> {
-None
-}
 }

 pub enum CompatIndexV5ToV6 {
@@ -252,13 +242,6 @@ impl CompatIndexV5ToV6 {
 }
 }
-
-pub fn documents_file(&self) -> &File {
-match self {
-CompatIndexV5ToV6::V5(v5) => v5.documents_file(),
-CompatIndexV5ToV6::Compat(compat) => compat.documents_file(),
-}
-}

 pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
 match self {
 CompatIndexV5ToV6::V5(v5) => Ok(v6::Settings::from(v5.settings()?).check()),
@@ -405,13 +388,7 @@ impl<T> From<v5::Settings<T>> for v6::Settings<v6::Unchecked> {
 },
 pagination: match settings.pagination {
 v5::Setting::Set(pagination) => v6::Setting::Set(v6::PaginationSettings {
-max_total_hits: match pagination.max_total_hits {
-v5::Setting::Set(max_total_hits) => v6::Setting::Set(
-max_total_hits.try_into().unwrap_or(NonZeroUsize::new(1).unwrap()),
-),
-v5::Setting::Reset => v6::Setting::Reset,
-v5::Setting::NotSet => v6::Setting::NotSet,
-},
+max_total_hits: pagination.max_total_hits.into(),
 }),
 v5::Setting::Reset => v6::Setting::Reset,
 v5::Setting::NotSet => v6::Setting::NotSet,
@@ -421,8 +398,6 @@ impl<T> From<v5::Settings<T>> for v6::Settings<v6::Unchecked> {
 search_cutoff_ms: v6::Setting::NotSet,
 facet_search: v6::Setting::NotSet,
 prefix_search: v6::Setting::NotSet,
-chat: v6::Setting::NotSet,
-vector_store: v6::Setting::NotSet,
 _kind: std::marker::PhantomData,
 }
 }
@@ -116,15 +116,6 @@ impl DumpReader {
 }
 }
-
-pub fn chat_completions_settings(
-&mut self,
-) -> Result<Box<dyn Iterator<Item = Result<(String, v6::ChatCompletionSettings)>> + '_>> {
-match self {
-DumpReader::Current(current) => current.chat_completions_settings(),
-DumpReader::Compat(_compat) => Ok(Box::new(std::iter::empty())),
-}
-}

 pub fn features(&self) -> Result<Option<v6::RuntimeTogglableFeatures>> {
 match self {
 DumpReader::Current(current) => Ok(current.features()),
@@ -138,13 +129,6 @@ impl DumpReader {
 DumpReader::Compat(compat) => compat.network(),
 }
 }
-
-pub fn webhooks(&self) -> Option<&v6::Webhooks> {
-match self {
-DumpReader::Current(current) => current.webhooks(),
-DumpReader::Compat(compat) => compat.webhooks(),
-}
-}
 }

 impl From<V6Reader> for DumpReader {
@@ -199,14 +183,6 @@ impl DumpIndexReader {
 }
 }
-
-/// A reference to a file in the NDJSON format containing all the documents of the index
-pub fn documents_file(&self) -> &File {
-match self {
-DumpIndexReader::Current(v6) => v6.documents_file(),
-DumpIndexReader::Compat(compat) => compat.documents_file(),
-}
-}

 pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
 match self {
 DumpIndexReader::Current(v6) => v6.settings(),
@@ -372,7 +348,6 @@ pub(crate) mod test {

 assert_eq!(dump.features().unwrap().unwrap(), RuntimeTogglableFeatures::default());
 assert_eq!(dump.network().unwrap(), None);
-assert_eq!(dump.webhooks(), None);
 }

 #[test]
@@ -443,43 +418,6 @@ pub(crate) mod test {
 insta::assert_snapshot!(network.remotes.get("ms-2").as_ref().unwrap().search_api_key.as_ref().unwrap(), @"foo");
 }
-
-#[test]
-fn import_dump_v6_webhooks() {
-let dump = File::open("tests/assets/v6-with-webhooks.dump").unwrap();
-let dump = DumpReader::open(dump).unwrap();
-
-// top level infos
-insta::assert_snapshot!(dump.date().unwrap(), @"2025-07-31 9:21:30.479544 +00:00:00");
-insta::assert_debug_snapshot!(dump.instance_uid().unwrap(), @r"
-Some(
-cb887dcc-34b3-48d1-addd-9815ae721a81,
-)
-");
-
-// webhooks
-let webhooks = dump.webhooks().unwrap();
-insta::assert_json_snapshot!(webhooks, @r#"
-{
-"webhooks": {
-"627ea538-733d-4545-8d2d-03526eb381ce": {
-"url": "https://example.com/authorization-less",
-"headers": {}
-},
-"771b0a28-ef28-4082-b984-536f82958c65": {
-"url": "https://example.com/hook",
-"headers": {
-"authorization": "TOKEN"
-}
-},
-"f3583083-f8a7-4cbf-a5e7-fb3f1e28a7e9": {
-"url": "https://third.com",
-"headers": {}
-}
-}
-}
-"#);
-}

 #[test]
 fn import_dump_v5() {
 let dump = File::open("tests/assets/v5.dump").unwrap();
@@ -72,10 +72,6 @@ impl V1IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<self::settings::Settings> {
 Ok(serde_json::from_reader(&mut self.settings)?)
 }
@@ -203,10 +203,6 @@ impl V2IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<Settings<Checked>> {
 Ok(self.settings.clone())
 }
@@ -215,10 +215,6 @@ impl V3IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<Settings<Checked>> {
 Ok(self.settings.clone())
 }
@@ -210,10 +210,6 @@ impl V4IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<Settings<Checked>> {
 Ok(self.settings.clone())
 }
@@ -247,10 +247,6 @@ impl V5IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<Settings<Checked>> {
 Ok(self.settings.clone())
 }
@@ -1,10 +1,9 @@
-use std::ffi::OsStr;
 use std::fs::{self, File};
 use std::io::{BufRead, BufReader, ErrorKind};
 use std::path::Path;

 pub use meilisearch_types::milli;
-use meilisearch_types::milli::vector::embedder::hf::OverridePooling;
+use meilisearch_types::milli::vector::hf::OverridePooling;
 use tempfile::TempDir;
 use time::OffsetDateTime;
 use tracing::debug;
@@ -22,10 +21,8 @@ pub type Unchecked = meilisearch_types::settings::Unchecked;
 pub type Task = crate::TaskDump;
 pub type Batch = meilisearch_types::batches::Batch;
 pub type Key = meilisearch_types::keys::Key;
-pub type ChatCompletionSettings = meilisearch_types::features::ChatCompletionSettings;
 pub type RuntimeTogglableFeatures = meilisearch_types::features::RuntimeTogglableFeatures;
-pub type Network = meilisearch_types::network::Network;
-pub type Webhooks = meilisearch_types::webhooks::WebhooksDumpView;
+pub type Network = meilisearch_types::features::Network;

 // ===== Other types to clarify the code of the compat module
 // everything related to the tasks
@@ -60,7 +57,6 @@ pub struct V6Reader {
 keys: BufReader<File>,
 features: Option<RuntimeTogglableFeatures>,
 network: Option<Network>,
-webhooks: Option<Webhooks>,
 }

 impl V6Reader {
@@ -95,8 +91,8 @@ impl V6Reader {
 Err(e) => return Err(e.into()),
 };

-let network = match fs::read(dump.path().join("network.json")) {
-Ok(network_file) => Some(serde_json::from_reader(&*network_file)?),
+let network_file = match fs::read(dump.path().join("network.json")) {
+Ok(network_file) => Some(network_file),
 Err(error) => match error.kind() {
 // Allows the file to be missing, this will only result in all experimental features disabled.
 ErrorKind::NotFound => {
@@ -106,16 +102,10 @@ impl V6Reader {
 _ => return Err(error.into()),
 },
 };
-
-let webhooks = match fs::read(dump.path().join("webhooks.json")) {
-Ok(webhooks_file) => Some(serde_json::from_reader(&*webhooks_file)?),
-Err(error) => match error.kind() {
-ErrorKind::NotFound => {
-debug!("`webhooks.json` not found in dump");
-None
-}
-_ => return Err(error.into()),
-},
+let network = if let Some(network_file) = network_file {
+Some(serde_json::from_reader(&*network_file)?)
+} else {
+None
 };

 Ok(V6Reader {
@@ -127,7 +117,6 @@ impl V6Reader {
 features,
 network,
 dump,
-webhooks,
 })
 }

@@ -203,34 +192,6 @@ impl V6Reader {
 )
 }
-
-pub fn chat_completions_settings(
-&mut self,
-) -> Result<Box<dyn Iterator<Item = Result<(String, ChatCompletionSettings)>> + '_>> {
-let entries = match fs::read_dir(self.dump.path().join("chat-completions-settings")) {
-Ok(entries) => entries,
-Err(e) if e.kind() == ErrorKind::NotFound => return Ok(Box::new(std::iter::empty())),
-Err(e) => return Err(e.into()),
-};
-Ok(Box::new(
-entries
-.map(|entry| -> Result<Option<_>> {
-let entry = entry?;
-let file_name = entry.file_name();
-let path = Path::new(&file_name);
-if entry.file_type()?.is_file() && path.extension() == Some(OsStr::new("json"))
-{
-let name = path.file_stem().unwrap().to_str().unwrap().to_string();
-let file = File::open(entry.path())?;
-let settings = serde_json::from_reader(file)?;
-Ok(Some((name, settings)))
-} else {
-Ok(None)
-}
-})
-.filter_map(|entry| entry.transpose()),
-))
-}

 pub fn features(&self) -> Option<RuntimeTogglableFeatures> {
 self.features
 }
@@ -238,10 +199,6 @@ impl V6Reader {
 pub fn network(&self) -> Option<&Network> {
 self.network.as_ref()
 }
-
-pub fn webhooks(&self) -> Option<&Webhooks> {
-self.webhooks.as_ref()
-}
 }

 pub struct UpdateFile {
@@ -297,10 +254,6 @@ impl V6IndexReader {
 .map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
 }
-
-pub fn documents_file(&self) -> &File {
-self.documents.get_ref()
-}

 pub fn settings(&mut self) -> Result<Settings<Checked>> {
 let mut settings: Settings<Unchecked> = serde_json::from_reader(&mut self.settings)?;
 patch_embedders(&mut settings);
@@ -5,11 +5,9 @@ use std::path::PathBuf;
 use flate2::write::GzEncoder;
 use flate2::Compression;
 use meilisearch_types::batches::Batch;
-use meilisearch_types::features::{ChatCompletionSettings, RuntimeTogglableFeatures};
+use meilisearch_types::features::{Network, RuntimeTogglableFeatures};
 use meilisearch_types::keys::Key;
-use meilisearch_types::network::Network;
 use meilisearch_types::settings::{Checked, Settings};
-use meilisearch_types::webhooks::WebhooksDumpView;
 use serde_json::{Map, Value};
 use tempfile::TempDir;
 use time::OffsetDateTime;
@@ -53,10 +51,6 @@ impl DumpWriter {
 KeyWriter::new(self.dir.path().to_path_buf())
 }
-
-pub fn create_chat_completions_settings(&self) -> Result<ChatCompletionsSettingsWriter> {
-ChatCompletionsSettingsWriter::new(self.dir.path().join("chat-completions-settings"))
-}

 pub fn create_tasks_queue(&self) -> Result<TaskWriter> {
 TaskWriter::new(self.dir.path().join("tasks"))
 }
@@ -76,13 +70,6 @@ impl DumpWriter {
 Ok(std::fs::write(self.dir.path().join("network.json"), serde_json::to_string(&network)?)?)
 }
-
-pub fn create_webhooks(&self, webhooks: WebhooksDumpView) -> Result<()> {
-Ok(std::fs::write(
-self.dir.path().join("webhooks.json"),
-serde_json::to_string(&webhooks)?,
-)?)
-}

 pub fn persist_to(self, mut writer: impl Write) -> Result<()> {
 let gz_encoder = GzEncoder::new(&mut writer, Compression::default());
 let mut tar_encoder = tar::Builder::new(gz_encoder);
@@ -117,24 +104,6 @@ impl KeyWriter {
 }
 }
-
-pub struct ChatCompletionsSettingsWriter {
-path: PathBuf,
-}
-
-impl ChatCompletionsSettingsWriter {
-pub(crate) fn new(path: PathBuf) -> Result<Self> {
-std::fs::create_dir(&path)?;
-Ok(ChatCompletionsSettingsWriter { path })
-}
-
-pub fn push_settings(&mut self, name: &str, settings: &ChatCompletionSettings) -> Result<()> {
-let mut settings_file = File::create(self.path.join(name).with_extension("json"))?;
-serde_json::to_writer(&mut settings_file, &settings)?;
-settings_file.flush()?;
-Ok(())
-}
-}

 pub struct TaskWriter {
 queue: BufWriter<File>,
 update_files: PathBuf,
Binary file not shown.
@@ -11,7 +11,7 @@ edition.workspace = true
 license.workspace = true

 [dependencies]
-tempfile = "3.23.0"
-thiserror = "2.0.17"
+tempfile = "3.15.0"
+thiserror = "2.0.9"
 tracing = "0.1.41"
-uuid = { version = "1.18.1", features = ["serde", "v4"] }
+uuid = { version = "1.11.0", features = ["serde", "v4"] }
@@ -60,7 +60,7 @@ impl FileStore {

 /// Returns the file corresponding to the requested uuid.
 pub fn get_update(&self, uuid: Uuid) -> Result<StdFile> {
-let path = self.update_path(uuid);
+let path = self.get_update_path(uuid);
 let file = match StdFile::open(path) {
 Ok(file) => file,
 Err(e) => {
@@ -72,7 +72,7 @@ impl FileStore {
 }

 /// Returns the path that correspond to this uuid, the path could not exists.
-pub fn update_path(&self, uuid: Uuid) -> PathBuf {
+pub fn get_update_path(&self, uuid: Uuid) -> PathBuf {
 self.path.join(uuid.to_string())
 }

@@ -148,10 +148,11 @@ impl File {
 Ok(Self { path: PathBuf::new(), file: None })
 }

-pub fn persist(self) -> Result<Option<StdFile>> {
-let Some(file) = self.file else { return Ok(None) };
-
-Ok(Some(file.persist(&self.path)?))
+pub fn persist(self) -> Result<()> {
+if let Some(file) = self.file {
+file.persist(&self.path)?;
+}
+Ok(())
 }
 }

@@ -14,8 +14,7 @@ license.workspace = true
 [dependencies]
 nom = "7.1.3"
 nom_locate = "4.2.0"
-unescaper = "0.1.6"
-levenshtein_automata = { version = "0.2.1", features = ["fst_automaton"] }
+unescaper = "0.1.5"

 [dev-dependencies]
 # fixed version due to format breakages in v1.40
@@ -7,14 +7,12 @@

 use nom::branch::alt;
 use nom::bytes::complete::tag;
-use nom::character::complete::{char, multispace0, multispace1};
-use nom::combinator::{cut, map, value};
-use nom::sequence::{preceded, terminated, tuple};
+use nom::character::complete::multispace1;
+use nom::combinator::cut;
+use nom::sequence::{terminated, tuple};
 use Condition::*;

-use crate::error::IResultExt;
-use crate::value::{parse_vector_value, parse_vector_value_cut};
-use crate::{parse_value, Error, ErrorKind, FilterCondition, IResult, Span, Token, VectorFilter};
+use crate::{parse_value, FilterCondition, IResult, Span, Token};

 #[derive(Debug, Clone, PartialEq, Eq)]
 pub enum Condition<'a> {
@@ -115,83 +113,6 @@ pub fn parse_not_exists(input: Span) -> IResult<FilterCondition> {
 Ok((input, FilterCondition::Not(Box::new(FilterCondition::Condition { fid: key, op: Exists }))))
 }
-
-fn parse_vectors(input: Span) -> IResult<(Token, Option<Token>, VectorFilter)> {
-let (input, _) = multispace0(input)?;
-let (input, fid) = tag("_vectors")(input)?;
-
-if let Ok((input, _)) = multispace1::<_, crate::Error>(input) {
-return Ok((input, (Token::from(fid), None, VectorFilter::None)));
-}
-
-let (input, _) = char('.')(input)?;
-
-// From this point, we are certain this is a vector filter, so our errors must be final.
-// We could use nom's `cut` but it's better to be explicit about the errors
-
-if let Ok((_, space)) = tag::<_, _, ()>(" ")(input) {
-return Err(crate::Error::failure_from_kind(space, ErrorKind::VectorFilterMissingEmbedder));
-}
-
-let (input, embedder_name) =
-parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidEmbedder)?;
-
-let (input, filter) = alt((
-map(
-preceded(tag(".fragments"), |input| {
-let (input, _) = tag(".")(input).map_cut(ErrorKind::VectorFilterMissingFragment)?;
-parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidFragment)
-}),
-VectorFilter::Fragment,
-),
-value(VectorFilter::UserProvided, tag(".userProvided")),
-value(VectorFilter::DocumentTemplate, tag(".documentTemplate")),
-value(VectorFilter::Regenerate, tag(".regenerate")),
-value(VectorFilter::None, nom::combinator::success("")),
-))(input)?;
-
-if let Ok((input, point)) = tag::<_, _, ()>(".")(input) {
-let opt_value = parse_vector_value(input).ok().map(|(_, v)| v);
-let value =
-opt_value.as_ref().map(|v| v.value().to_owned()).unwrap_or_else(|| point.to_string());
-let context = opt_value.map(|v| v.original_span()).unwrap_or(point);
-let previous_kind = match filter {
-VectorFilter::Fragment(_) => Some("fragments"),
-VectorFilter::DocumentTemplate => Some("documentTemplate"),
-VectorFilter::UserProvided => Some("userProvided"),
-VectorFilter::Regenerate => Some("regenerate"),
-VectorFilter::None => None,
-};
-return Err(Error::failure_from_kind(
-context,
-ErrorKind::VectorFilterUnknownSuffix(previous_kind, value),
-));
-}
-
-let (input, _) = multispace1(input).map_cut(ErrorKind::VectorFilterLeftover)?;
-
-Ok((input, (Token::from(fid), Some(embedder_name), filter)))
-}
-
-/// vectors_exists = vectors ("EXISTS" | ("NOT" WS+ "EXISTS"))
-pub fn parse_vectors_exists(input: Span) -> IResult<FilterCondition> {
-let (input, (fid, embedder, filter)) = parse_vectors(input)?;
-
-// Try parsing "EXISTS" first
-if let Ok((input, _)) = tag::<_, _, ()>("EXISTS")(input) {
-return Ok((input, FilterCondition::VectorExists { fid, embedder, filter }));
-}
-
-// Try parsing "NOT EXISTS"
-if let Ok((input, _)) = tuple::<_, _, (), _>((tag("NOT"), multispace1, tag("EXISTS")))(input) {
-return Ok((
-input,
-FilterCondition::Not(Box::new(FilterCondition::VectorExists { fid, embedder, filter })),
-));
-}
-
-Err(crate::Error::failure_from_kind(input, ErrorKind::VectorFilterOperation))
-}

 /// contains = value "CONTAINS" value
 pub fn parse_contains(input: Span) -> IResult<FilterCondition> {
 let (input, (fid, contains, value)) =
@@ -42,23 +42,6 @@ pub fn cut_with_err<'a, O>(
 }
 }
-
-pub trait IResultExt<'a> {
-fn map_cut(self, kind: ErrorKind<'a>) -> Self;
-}
-
-impl<'a, T> IResultExt<'a> for IResult<'a, T> {
-fn map_cut(self, kind: ErrorKind<'a>) -> Self {
-self.map_err(move |e: nom::Err<Error<'a>>| {
-let input = match e {
-nom::Err::Incomplete(_) => return e,
-nom::Err::Error(e) => *e.context(),
-nom::Err::Failure(e) => *e.context(),
-};
-Error::failure_from_kind(input, kind)
-})
-}
-}

 #[derive(Debug)]
 pub struct Error<'a> {
 context: Span<'a>,
@@ -75,21 +58,9 @@ pub enum ExpectedValueKind {
 pub enum ErrorKind<'a> {
 ReservedGeo(&'a str),
 GeoRadius,
-GeoRadiusArgumentCount(usize),
 GeoBoundingBox,
-GeoPolygon,
-GeoPolygonNotEnoughPoints(usize),
-GeoCoordinatesNotPair(usize),
 MisusedGeoRadius,
 MisusedGeoBoundingBox,
-VectorFilterLeftover,
-VectorFilterInvalidQuotes,
-VectorFilterMissingEmbedder,
-VectorFilterInvalidEmbedder,
-VectorFilterMissingFragment,
-VectorFilterInvalidFragment,
-VectorFilterUnknownSuffix(Option<&'static str>, String),
-VectorFilterOperation,
 InvalidPrimary,
 InvalidEscapedNumber,
 ExpectedEof,
@@ -120,10 +91,6 @@ impl<'a> Error<'a> {
 Self { context, kind }
 }
-
-pub fn failure_from_kind(context: Span<'a>, kind: ErrorKind<'a>) -> nom::Err<Self> {
-nom::Err::Failure(Self::new_from_kind(context, kind))
-}

 pub fn new_from_external(context: Span<'a>, error: impl std::error::Error) -> Self {
 Self::new_from_kind(context, ErrorKind::External(error.to_string()))
 }
@@ -161,20 +128,6 @@ impl Display for Error<'_> {
 // first line being the diagnostic and the second line being the incriminated filter.
 let escaped_input = input.escape_debug();

-fn key_suggestion<'a>(key: &str, keys: &[&'a str]) -> Option<&'a str> {
-let typos =
-levenshtein_automata::LevenshteinAutomatonBuilder::new(2, true).build_dfa(key);
-for key in keys.iter() {
-match typos.eval(key) {
-levenshtein_automata::Distance::Exact(_) => {
-return Some(key);
-}
-levenshtein_automata::Distance::AtLeast(_) => continue,
-}
-}
-None
-}
-
 match &self.kind {
 ErrorKind::ExpectedValue(_) if input.trim().is_empty() => {
 writeln!(f, "Was expecting a value but instead got nothing.")?
@@ -193,7 +146,7 @@ impl Display for Error<'_> {
 }
 ErrorKind::InvalidPrimary => {
 let text = if input.trim().is_empty() { "but instead got nothing.".to_string() } else { format!("at `{}`.", escaped_input) };
-writeln!(f, "Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` {text}")?
+writeln!(f, "Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` {}", text)?
 }
 ErrorKind::InvalidEscapedNumber => {
 writeln!(f, "Found an invalid escaped sequence number: `{}`.", escaped_input)?
@@ -202,23 +155,11 @@ impl Display for Error<'_> {
 writeln!(f, "Found unexpected characters at the end of the filter: `{}`. You probably forgot an `OR` or an `AND` rule.", escaped_input)?
 }
 ErrorKind::GeoRadius => {
-writeln!(f, "The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.")?
-}
-ErrorKind::GeoRadiusArgumentCount(count) => {
-writeln!(f, "Was expecting 3 or 4 arguments for `_geoRadius`, but instead found {count}.")?
+writeln!(f, "The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.")?
 }
 ErrorKind::GeoBoundingBox => {
 writeln!(f, "The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.")?
 }
-ErrorKind::GeoPolygon => {
-writeln!(f, "The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.")?
-}
-ErrorKind::GeoPolygonNotEnoughPoints(n) => {
-writeln!(f, "The `_geoPolygon` filter expects at least 3 points but only {n} were specified")?;
-}
-ErrorKind::GeoCoordinatesNotPair(number) => {
-writeln!(f, "Was expecting 2 coordinates but instead found {number}.")?
-}
 ErrorKind::ReservedGeo(name) => {
 writeln!(f, "`{}` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.", name.escape_debug())?
 }
@@ -228,44 +169,6 @@ impl Display for Error<'_> {
 ErrorKind::MisusedGeoBoundingBox => {
 writeln!(f, "The `_geoBoundingBox` filter is an operation and can't be used as a value.")?
 }
-ErrorKind::VectorFilterLeftover => {
-writeln!(f, "The vector filter has leftover tokens.")?
-}
-ErrorKind::VectorFilterUnknownSuffix(_, value) if value.as_str() == "." => {
-writeln!(f, "Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.")?;
-}
-ErrorKind::VectorFilterUnknownSuffix(None, value) if ["fragments", "userProvided", "documentTemplate", "regenerate"].contains(&value.as_str()) => {
-// This will happen with "_vectors.rest.\"userProvided\"" for instance
-writeln!(f, "Was expecting this part to be unquoted.")?
-}
-ErrorKind::VectorFilterUnknownSuffix(None, value) => {
-if let Some(suggestion) = key_suggestion(value, &["fragments", "userProvided", "documentTemplate", "regenerate"]) {
-writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`. Did you mean `{suggestion}`?")?;
-} else {
-writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`.")?;
-}
-}
-ErrorKind::VectorFilterUnknownSuffix(Some(previous_filter_kind), value) => {
-writeln!(f, "Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `{previous_filter_kind}` and `{value}`.")?
-},
-ErrorKind::VectorFilterInvalidFragment => {
-writeln!(f, "The vector filter's fragment name is invalid.")?
-}
-ErrorKind::VectorFilterMissingFragment => {
-writeln!(f, "The vector filter is missing a fragment name.")?
-}
-ErrorKind::VectorFilterMissingEmbedder => {
-writeln!(f, "Was expecting embedder name but found nothing.")?
||||||
}
|
|
||||||
ErrorKind::VectorFilterInvalidEmbedder => {
|
|
||||||
writeln!(f, "The vector filter's embedder name is invalid.")?
|
|
||||||
}
|
|
||||||
ErrorKind::VectorFilterOperation => {
|
|
||||||
writeln!(f, "Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.")?
|
|
||||||
}
|
|
||||||
ErrorKind::VectorFilterInvalidQuotes => {
|
|
||||||
writeln!(f, "The quotes in one of the values are inconsistent.")?
|
|
||||||
}
|
|
||||||
ErrorKind::ReservedKeyword(word) => {
|
ErrorKind::ReservedKeyword(word) => {
|
||||||
writeln!(f, "`{word}` is a reserved keyword and thus cannot be used as a field name unless it is put inside quotes. Use \"{word}\" or \'{word}\' instead.")?
|
writeln!(f, "`{word}` is a reserved keyword and thus cannot be used as a field name unless it is put inside quotes. Use \"{word}\" or \'{word}\' instead.")?
|
||||||
}
|
}
|
||||||
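The removed `key_suggestion` helper is what drives the "Did you mean `fragments`?" hints above. A minimal standalone sketch of the same idea, assuming the `levenshtein_automata` crate used in the removed code:

    // Suggest the closest known key within a Levenshtein distance of 2, mirroring the removed helper.
    fn suggest<'a>(key: &str, keys: &[&'a str]) -> Option<&'a str> {
        let dfa = levenshtein_automata::LevenshteinAutomatonBuilder::new(2, true).build_dfa(key);
        keys.iter().copied().find(|candidate| {
            matches!(dfa.eval(candidate), levenshtein_automata::Distance::Exact(_))
        })
    }

    fn main() {
        // "fargments" is within two edits of "fragments", so it is suggested.
        let keys = ["fragments", "userProvided", "documentTemplate", "regenerate"];
        println!("{:?}", suggest("fargments", &keys)); // Some("fragments")
    }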
@@ -19,7 +19,6 @@
//! word = (alphanumeric | _ | - | .)+
//! geoRadius = "_geoRadius(" WS* float WS* "," WS* float WS* "," float WS* ")"
//! geoBoundingBox = "_geoBoundingBox([" WS * float WS* "," WS* float WS* "], [" WS* float WS* "," WS* float WS* "]")
-//! geoPolygon = "_geoPolygon([[" WS* float WS* "," WS* float WS* "],+])"
//! ```
//!
//! Other BNF grammar used to handle some specific errors:
@@ -66,9 +65,6 @@ use nom_locate::LocatedSpan;
pub(crate) use value::parse_value;
use value::word_exact;

-use crate::condition::parse_vectors_exists;
-use crate::error::IResultExt;
-
pub type Span<'a> = LocatedSpan<&'a str, &'a str>;

type IResult<'a, Ret> = nom::IResult<Span<'a>, Ret, Error<'a>>;
@@ -117,7 +113,7 @@ impl<'a> Token<'a> {
        self.span
    }

-    pub fn parse_finite_float(&self) -> Result<f64, Error<'a>> {
+    pub fn parse_finite_float(&self) -> Result<f64, Error> {
        let value: f64 = self.value().parse().map_err(|e| self.as_external_error(e))?;
        if value.is_finite() {
            Ok(value)
@@ -140,15 +136,6 @@ impl<'a> From<&'a str> for Token<'a> {
    }
}

-#[derive(Debug, Clone, PartialEq, Eq)]
-pub enum VectorFilter<'a> {
-    Fragment(Token<'a>),
-    DocumentTemplate,
-    UserProvided,
-    Regenerate,
-    None,
-}
-
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum FilterCondition<'a> {
    Not(Box<Self>),
@@ -156,10 +143,8 @@ pub enum FilterCondition<'a> {
    In { fid: Token<'a>, els: Vec<Token<'a>> },
    Or(Vec<Self>),
    And(Vec<Self>),
-    VectorExists { fid: Token<'a>, embedder: Option<Token<'a>>, filter: VectorFilter<'a> },
-    GeoLowerThan { point: [Token<'a>; 2], radius: Token<'a>, resolution: Option<Token<'a>> },
+    GeoLowerThan { point: [Token<'a>; 2], radius: Token<'a> },
    GeoBoundingBox { top_right_point: [Token<'a>; 2], bottom_left_point: [Token<'a>; 2] },
-    GeoPolygon { points: Vec<[Token<'a>; 2]> },
}

pub enum TraversedElement<'a> {
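For reference, the `_vectors` filter syntax carried by the removed `VectorExists` variant takes the forms below; this is an illustrative sketch only, with `embedderName` and `fragmentName` as placeholder names:

    fn main() {
        // Each string corresponds to VectorExists with the matching VectorFilter variant.
        let examples = [
            "_vectors EXISTS",                                     // VectorFilter::None, no embedder
            "_vectors.embedderName EXISTS",                        // VectorFilter::None, embedder set
            "_vectors.embedderName.userProvided EXISTS",           // VectorFilter::UserProvided
            "_vectors.embedderName.documentTemplate EXISTS",       // VectorFilter::DocumentTemplate
            "_vectors.embedderName.regenerate EXISTS",             // VectorFilter::Regenerate
            "_vectors.embedderName.fragments.fragmentName EXISTS", // VectorFilter::Fragment(..)
        ];
        for filter in examples {
            println!("{filter}");
        }
    }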
@@ -168,7 +153,7 @@ pub enum TraversedElement<'a> {
}

impl<'a> FilterCondition<'a> {
-    pub fn use_contains_operator(&self) -> Option<&Token<'a>> {
+    pub fn use_contains_operator(&self) -> Option<&Token> {
        match self {
            FilterCondition::Condition { fid: _, op } => match op {
                Condition::GreaterThan(_)
@@ -180,38 +165,21 @@ impl<'a> FilterCondition<'a> {
                | Condition::Exists
                | Condition::LowerThan(_)
                | Condition::LowerThanOrEqual(_)
-                | Condition::Between { .. }
-                | Condition::StartsWith { .. } => None,
-                Condition::Contains { keyword, word: _ } => Some(keyword),
+                | Condition::Between { .. } => None,
+                Condition::Contains { keyword, word: _ }
+                | Condition::StartsWith { keyword, word: _ } => Some(keyword),
            },
            FilterCondition::Not(this) => this.use_contains_operator(),
            FilterCondition::Or(seq) | FilterCondition::And(seq) => {
                seq.iter().find_map(|filter| filter.use_contains_operator())
            }
-            FilterCondition::VectorExists { .. }
-            | FilterCondition::GeoLowerThan { .. }
-            | FilterCondition::GeoBoundingBox { .. }
-            | FilterCondition::GeoPolygon { .. }
-            | FilterCondition::In { .. } => None,
-        }
-    }
-
-    pub fn use_vector_filter(&self) -> Option<&Token<'a>> {
-        match self {
-            FilterCondition::Condition { .. } => None,
-            FilterCondition::Not(this) => this.use_vector_filter(),
-            FilterCondition::Or(seq) | FilterCondition::And(seq) => {
-                seq.iter().find_map(|filter| filter.use_vector_filter())
-            }
            FilterCondition::GeoLowerThan { .. }
            | FilterCondition::GeoBoundingBox { .. }
-            | FilterCondition::GeoPolygon { .. }
            | FilterCondition::In { .. } => None,
-            FilterCondition::VectorExists { fid, .. } => Some(fid),
        }
    }

-    pub fn fids(&self, depth: usize) -> Box<dyn Iterator<Item = &Token<'a>> + '_> {
+    pub fn fids(&self, depth: usize) -> Box<dyn Iterator<Item = &Token> + '_> {
        if depth == 0 {
            return Box::new(std::iter::empty());
        }
@@ -232,7 +200,7 @@ impl<'a> FilterCondition<'a> {
    }

    /// Returns the first token found at the specified depth, `None` if no token at this depth.
-    pub fn token_at_depth(&self, depth: usize) -> Option<&Token<'a>> {
+    pub fn token_at_depth(&self, depth: usize) -> Option<&Token> {
        match self {
            FilterCondition::Condition { fid, .. } if depth == 0 => Some(fid),
            FilterCondition::Or(subfilters) => {
@@ -295,7 +263,10 @@ fn parse_in_body(input: Span) -> IResult<Vec<Token>> {
    let (input, _) = ws(word_exact("IN"))(input)?;

    // everything after `IN` can be a failure
-    let (input, _) = tag("[")(input).map_cut(ErrorKind::InOpeningBracket)?;
+    let (input, _) =
+        cut_with_err(tag("["), |_| Error::new_from_kind(input, ErrorKind::InOpeningBracket))(
+            input,
+        )?;

    let (input, content) = cut(parse_value_list)(input)?;

@@ -400,27 +371,23 @@ fn parse_not(input: Span, depth: usize) -> IResult<FilterCondition> {
/// If we parse `_geoRadius` we MUST parse the rest of the expression.
fn parse_geo_radius(input: Span) -> IResult<FilterCondition> {
    // we want to allow space BEFORE the _geoRadius but not after
-    let (input, _) = tuple((multispace0, word_exact("_geoRadius")))(input)?;
-
-    // if we were able to parse `_geoRadius` and can't parse the rest of the input we return a failure
-    let parsed =
-        delimited(char('('), separated_list1(tag(","), ws(recognize_float)), char(')'))(input)
-            .map_cut(ErrorKind::GeoRadius);
+    let parsed = preceded(
+        tuple((multispace0, word_exact("_geoRadius"))),
+        // if we were able to parse `_geoRadius` and can't parse the rest of the input we return a failure
+        cut(delimited(char('('), separated_list1(tag(","), ws(recognize_float)), char(')'))),
+    )(input)
+    .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::GeoRadius)));

    let (input, args) = parsed?;

-    if !(3..=4).contains(&args.len()) {
-        return Err(Error::failure_from_kind(input, ErrorKind::GeoRadiusArgumentCount(args.len())));
+    if args.len() != 3 {
+        return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::GeoRadius)));
    }

    let res = FilterCondition::GeoLowerThan {
        point: [args[0].into(), args[1].into()],
        radius: args[2].into(),
-        resolution: args.get(3).cloned().map(Token::from),
    };

    Ok((input, res))
}

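The argument-count check above is the visible difference in accepted syntax: before the revert `_geoRadius` takes three arguments plus an optional resolution, after it takes exactly three. Illustrative inputs only, with arbitrary coordinate values:

    fn main() {
        let three_args = "_geoRadius(48.85, 2.35, 2000)";      // accepted by both versions
        let four_args = "_geoRadius(48.85, 2.35, 2000, 1000)"; // only accepted before the revert
        println!("{three_args}\n{four_args}");
    }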
@@ -428,31 +395,24 @@ fn parse_geo_radius(input: Span) -> IResult<FilterCondition> {
/// If we parse `_geoBoundingBox` we MUST parse the rest of the expression.
fn parse_geo_bounding_box(input: Span) -> IResult<FilterCondition> {
    // we want to allow space BEFORE the _geoBoundingBox but not after
-    let (input, _) = tuple((multispace0, word_exact("_geoBoundingBox")))(input)?;
-
-    // if we were able to parse `_geoBoundingBox` and can't parse the rest of the input we return a failure
-    let (input, args) = delimited(
-        char('('),
-        separated_list1(
-            tag(","),
-            ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
-        ),
-        char(')'),
+    let parsed = preceded(
+        tuple((multispace0, word_exact("_geoBoundingBox"))),
+        // if we were able to parse `_geoBoundingBox` and can't parse the rest of the input we return a failure
+        cut(delimited(
+            char('('),
+            separated_list1(
+                tag(","),
+                ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
+            ),
+            char(')'),
+        )),
    )(input)
-    .map_cut(ErrorKind::GeoBoundingBox)?;
+    .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::GeoBoundingBox)));

-    if args.len() != 2 {
-        return Err(Error::failure_from_kind(input, ErrorKind::GeoBoundingBox));
-    }
+    let (input, args) = parsed?;

-    if let Some(offending) = args.iter().find(|a| a.len() != 2) {
-        let context = offending.first().unwrap_or(&input);
-        return Err(Error::failure_from_kind(
-            *context,
-            ErrorKind::GeoCoordinatesNotPair(offending.len()),
-        ));
+    if args.len() != 2 || args[0].len() != 2 || args[1].len() != 2 {
+        return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::GeoBoundingBox)));
    }

    let res = FilterCondition::GeoBoundingBox {
@@ -462,47 +422,6 @@ fn parse_geo_bounding_box(input: Span) -> IResult<FilterCondition> {
    Ok((input, res))
}

-/// geoPolygon = "_geoPolygon([[" WS* float WS* "," WS* float WS* "],+])"
-/// If we parse `_geoPolygon` we MUST parse the rest of the expression.
-fn parse_geo_polygon(input: Span) -> IResult<FilterCondition> {
-    // we want to allow space BEFORE the _geoPolygon but not after
-
-    let (input, _) = tuple((multispace0, word_exact("_geoPolygon")))(input)?;
-
-    // if we were able to parse `_geoPolygon` and can't parse the rest of the input we return a failure
-
-    let (input, args): (_, Vec<Vec<LocatedSpan<_, _>>>) = delimited(
-        char('('),
-        separated_list1(
-            tag(","),
-            ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
-        ),
-        preceded(opt(ws(char(','))), char(')')), // Tolerate trailing comma
-    )(input)
-    .map_cut(ErrorKind::GeoPolygon)?;
-
-    if args.len() < 3 {
-        let context = args.last().and_then(|a| a.last()).unwrap_or(&input);
-        return Err(Error::failure_from_kind(
-            *context,
-            ErrorKind::GeoPolygonNotEnoughPoints(args.len()),
-        ));
-    }
-
-    if let Some(offending) = args.iter().find(|a| a.len() != 2) {
-        let context = offending.first().unwrap_or(&input);
-        return Err(Error::failure_from_kind(
-            *context,
-            ErrorKind::GeoCoordinatesNotPair(offending.len()),
-        ));
-    }
-
-    let res = FilterCondition::GeoPolygon {
-        points: args.into_iter().map(|a| [a[0].into(), a[1].into()]).collect(),
-    };
-    Ok((input, res))
-}
-
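For reference, the removed `_geoPolygon` rule expects at least three `[latitude, longitude]` pairs and tolerates a trailing comma. A sketch of inputs and the error kinds they would hit, with arbitrary coordinates:

    fn main() {
        let ok = "_geoPolygon([12, 13], [14, 15], [16, 17])";    // three pairs: parses
        let too_few = "_geoPolygon([12, 13], [14, 15])";         // GeoPolygonNotEnoughPoints(2)
        let not_a_pair = "_geoPolygon([12, 13], [14, 15], [1])"; // GeoCoordinatesNotPair(1)
        println!("{ok}\n{too_few}\n{not_a_pair}");
    }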
/// geoPoint = WS* "_geoPoint(float WS* "," WS* float WS* "," WS* float)
fn parse_geo_point(input: Span) -> IResult<FilterCondition> {
    // we want to forbid space BEFORE the _geoPoint but not after
@@ -514,7 +433,7 @@ fn parse_geo_point(input: Span) -> IResult<FilterCondition> {
    ))(input)
    .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))?;
    // if we succeeded we still return a `Failure` because geoPoints are not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoPoint")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))
}

/// geoPoint = WS* "_geoDistance(float WS* "," WS* float WS* "," WS* float)
@@ -528,7 +447,7 @@ fn parse_geo_distance(input: Span) -> IResult<FilterCondition> {
    ))(input)
    .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))?;
    // if we succeeded we still return a `Failure` because `geoDistance` filters are not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoDistance")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))
}

/// geo = WS* "_geo(float WS* "," WS* float WS* "," WS* float)
@@ -542,7 +461,7 @@ fn parse_geo(input: Span) -> IResult<FilterCondition> {
    ))(input)
    .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))?;
    // if we succeeded we still return a `Failure` because `_geo` filter is not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geo")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))
}

fn parse_error_reserved_keyword(input: Span) -> IResult<FilterCondition> {
@@ -572,8 +491,8 @@ fn parse_primary(input: Span, depth: usize) -> IResult<FilterCondition> {
            Error::new_from_kind(input, ErrorKind::MissingClosingDelimiter(c.char()))
        }),
    ),
-    // Made a random block of functions because we reached the maximum number of elements per alt
-    alt((parse_geo_radius, parse_geo_bounding_box, parse_geo_polygon)),
+    parse_geo_radius,
+    parse_geo_bounding_box,
    parse_in,
    parse_not_in,
    parse_condition,
@@ -581,7 +500,8 @@ fn parse_primary(input: Span, depth: usize) -> IResult<FilterCondition> {
    parse_is_not_null,
    parse_is_empty,
    parse_is_not_empty,
-    alt((parse_vectors_exists, parse_exists, parse_not_exists)),
+    parse_exists,
+    parse_not_exists,
    parse_to,
    parse_contains,
    parse_not_contains,
@@ -637,28 +557,9 @@ impl std::fmt::Display for FilterCondition<'_> {
            }
            write!(f, "]")
        }
-        FilterCondition::VectorExists { fid: _, embedder, filter: inner } => {
-            write!(f, "_vectors")?;
-            if let Some(embedder) = embedder {
-                write!(f, ".{:?}", embedder.value())?;
-            }
-            match inner {
-                VectorFilter::Fragment(fragment) => {
-                    write!(f, ".fragments.{:?}", fragment.value())?
-                }
-                VectorFilter::DocumentTemplate => write!(f, ".documentTemplate")?,
-                VectorFilter::UserProvided => write!(f, ".userProvided")?,
-                VectorFilter::Regenerate => write!(f, ".regenerate")?,
-                VectorFilter::None => (),
-            }
-            write!(f, " EXISTS")
-        }
-        FilterCondition::GeoLowerThan { point, radius, resolution: None } => {
+        FilterCondition::GeoLowerThan { point, radius } => {
            write!(f, "_geoRadius({}, {}, {})", point[0], point[1], radius)
        }
-        FilterCondition::GeoLowerThan { point, radius, resolution: Some(resolution) } => {
-            write!(f, "_geoRadius({}, {}, {}, {})", point[0], point[1], radius, resolution)
-        }
        FilterCondition::GeoBoundingBox {
            top_right_point: top_left_point,
            bottom_left_point: bottom_right_point,
@@ -672,13 +573,6 @@ impl std::fmt::Display for FilterCondition<'_> {
                bottom_right_point[1]
            )
        }
-        FilterCondition::GeoPolygon { points } => {
-            write!(f, "_geoPolygon([")?;
-            for point in points {
-                write!(f, "[{}, {}], ", point[0], point[1])?;
-            }
-            write!(f, "])")
-        }
    }
    }
}
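This `Display` impl is what the snapshot tests below compare against. A rough sketch of the two removed rendered forms, produced here as plain format strings rather than through the parser:

    fn main() {
        // Pre-revert rendering of a radius with the optional resolution, and of a polygon.
        println!("_geoRadius({}, {}, {}, {})", 12, 13, 14, 1000);
        println!("_geoPolygon([[{}, {}], [{}, {}], [{}, {}], ])", 12, 13, 14, 15, 16, 17);
    }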
@@ -717,7 +611,7 @@ pub mod tests {
    /// Create a raw [Token]. You must specify the string that appear BEFORE your element followed by your element
    pub fn rtok<'a>(before: &'a str, value: &'a str) -> Token<'a> {
        // if the string is empty we still need to return 1 for the line number
-        let lines = if before.is_empty() { 1 } else { before.lines().count() };
+        let lines = before.is_empty().then_some(1).unwrap_or_else(|| before.lines().count());
        let offset = before.chars().count();
        // the extra field is not checked in the tests so we can set it to nothing
        unsafe { Span::new_from_raw_offset(offset, lines as u32, value, "") }.into()
@@ -736,9 +630,6 @@ pub mod tests {
        insta::assert_snapshot!(p(r"title = 'foo\\\\\\\\'"), @r#"{title} = {foo\\\\}"#);
        // but it also works with other sequences
        insta::assert_snapshot!(p(r#"title = 'foo\x20\n\t\"\'"'"#), @"{title} = {foo \n\t\"\'\"}");
-
-        insta::assert_snapshot!(p(r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#), @r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.\"\n\t\r\\\"\" EXISTS"), @r#"_vectors."\n\t\r\"" EXISTS"#);
    }

    #[test]
@@ -801,18 +692,6 @@ pub mod tests {
        insta::assert_snapshot!(p("NOT subscribers IS NOT EMPTY"), @"{subscribers} IS EMPTY");
        insta::assert_snapshot!(p("subscribers IS NOT EMPTY"), @"NOT ({subscribers} IS EMPTY)");
-
-        // Test _vectors EXISTS + _vectors NOT EXITS
-        insta::assert_snapshot!(p("_vectors EXISTS"), @"_vectors EXISTS");
-        insta::assert_snapshot!(p("_vectors.embedderName EXISTS"), @r#"_vectors."embedderName" EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.documentTemplate EXISTS"), @r#"_vectors."embedderName".documentTemplate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
-        insta::assert_snapshot!(p("   _vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
-        insta::assert_snapshot!(p("NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
-        insta::assert_snapshot!(p("  NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
-        insta::assert_snapshot!(p("  _vectors NOT EXISTS"), @"NOT (_vectors EXISTS)");

        // Test EXISTS + NOT EXITS
        insta::assert_snapshot!(p("subscribers EXISTS"), @"{subscribers} EXISTS");
        insta::assert_snapshot!(p("NOT subscribers EXISTS"), @"NOT ({subscribers} EXISTS)");
@@ -842,17 +721,12 @@ pub mod tests {
        insta::assert_snapshot!(p("_geoRadius(12, 13, 14)"), @"_geoRadius({12}, {13}, {14})");
        insta::assert_snapshot!(p("NOT _geoRadius(12, 13, 14)"), @"NOT (_geoRadius({12}, {13}, {14}))");
        insta::assert_snapshot!(p("_geoRadius(12,13,14)"), @"_geoRadius({12}, {13}, {14})");
-        insta::assert_snapshot!(p("_geoRadius(12,13,14,1000)"), @"_geoRadius({12}, {13}, {14}, {1000})");

        // Test geo bounding box
        insta::assert_snapshot!(p("_geoBoundingBox([12, 13], [14, 15])"), @"_geoBoundingBox([{12}, {13}], [{14}, {15}])");
        insta::assert_snapshot!(p("NOT _geoBoundingBox([12, 13], [14, 15])"), @"NOT (_geoBoundingBox([{12}, {13}], [{14}, {15}]))");
        insta::assert_snapshot!(p("_geoBoundingBox([12,13],[14,15])"), @"_geoBoundingBox([{12}, {13}], [{14}, {15}])");
-
-        // Test geo polygon
-        insta::assert_snapshot!(p("_geoPolygon([12, 13], [14, 15], [16, 17])"), @"_geoPolygon([[{12}, {13}], [{14}, {15}], [{16}, {17}], ])");
-        insta::assert_snapshot!(p("_geoPolygon([12, 13], [14, 15], [-1.2,2939.2], [1,1])"), @"_geoPolygon([[{12}, {13}], [{14}, {15}], [{-1.2}, {2939.2}], [{1}, {1}], ])");

        // Test OR + AND
        insta::assert_snapshot!(p("channel = ponce AND 'dog race' != 'bernese mountain'"), @"AND[{channel} = {ponce}, {dog race} != {bernese mountain}, ]");
        insta::assert_snapshot!(p("channel = ponce OR 'dog race' != 'bernese mountain'"), @"OR[{channel} = {ponce}, {dog race} != {bernese mountain}, ]");
@@ -909,80 +783,50 @@ pub mod tests {
        11:12 channel = 🐻 AND followers < 100
        "###);

-        insta::assert_snapshot!(p("'OR'"), @r"
+        insta::assert_snapshot!(p("'OR'"), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `\'OR\'`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `\'OR\'`.
        1:5 'OR'
-        ");
+        "###);

        insta::assert_snapshot!(p("OR"), @r###"
        Was expecting a value but instead got `OR`, which is a reserved keyword. To use `OR` as a field name or a value, surround it by quotes.
        1:3 OR
        "###);

-        insta::assert_snapshot!(p("channel Ponce"), @r"
+        insta::assert_snapshot!(p("channel Ponce"), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `channel Ponce`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `channel Ponce`.
        1:14 channel Ponce
-        ");
+        "###);

-        insta::assert_snapshot!(p("channel = Ponce OR"), @r"
+        insta::assert_snapshot!(p("channel = Ponce OR"), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` but instead got nothing.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` but instead got nothing.
        19:19 channel = Ponce OR
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoRadius"), @r"
+        insta::assert_snapshot!(p("_geoRadius"), @r###"
-        The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.
+        The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.
-        11:11 _geoRadius
+        1:11 _geoRadius
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoRadius = 12"), @r"
+        insta::assert_snapshot!(p("_geoRadius = 12"), @r###"
-        The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.
+        The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.
-        11:16 _geoRadius = 12
+        1:16 _geoRadius = 12
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoBoundingBox"), @r"
+        insta::assert_snapshot!(p("_geoBoundingBox"), @r###"
        The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
-        16:16 _geoBoundingBox
+        1:16 _geoBoundingBox
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoBoundingBox = 12"), @r"
+        insta::assert_snapshot!(p("_geoBoundingBox = 12"), @r###"
        The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
-        16:21 _geoBoundingBox = 12
+        1:21 _geoBoundingBox = 12
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoBoundingBox(1.0, 1.0)"), @r"
+        insta::assert_snapshot!(p("_geoBoundingBox(1.0, 1.0)"), @r###"
        The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
-        17:26 _geoBoundingBox(1.0, 1.0)
+        1:26 _geoBoundingBox(1.0, 1.0)
-        ");
+        "###);

-        insta::assert_snapshot!(p("_geoPolygon([1,2,3])"), @r"
-        The `_geoPolygon` filter expects at least 3 points but only 1 were specified
-        18:19 _geoPolygon([1,2,3])
-        ");
-
-        insta::assert_snapshot!(p("_geoPolygon(1,2,3)"), @r"
-        The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
-        13:19 _geoPolygon(1,2,3)
-        ");
-
-        insta::assert_snapshot!(p("_geoPolygon([1,2],[1,2],[1,2,3])"), @r"
-        Was expecting 2 coordinates but instead found 3.
-        26:27 _geoPolygon([1,2],[1,2],[1,2,3])
-        ");
-
-        insta::assert_snapshot!(p("_geoPolygon([1,2],[1,2,3])"), @r"
-        The `_geoPolygon` filter expects at least 3 points but only 2 were specified
-        24:25 _geoPolygon([1,2],[1,2,3])
-        ");
-
-        insta::assert_snapshot!(p("_geoPolygon(1)"), @r"
-        The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
-        13:15 _geoPolygon(1)
-        ");
-
-        insta::assert_snapshot!(p("_geoPolygon([1,2)"), @r"
-        The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
-        17:18 _geoPolygon([1,2)
-        ");
-
        insta::assert_snapshot!(p("_geoPoint(12, 13, 14)"), @r###"
        `_geoPoint` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.
@@ -1039,15 +883,15 @@ pub mod tests {
        34:35 channel = mv OR followers >= 1000)
        "###);

-        insta::assert_snapshot!(p("colour NOT EXIST"), @r"
+        insta::assert_snapshot!(p("colour NOT EXIST"), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `colour NOT EXIST`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `colour NOT EXIST`.
        1:17 colour NOT EXIST
-        ");
+        "###);

-        insta::assert_snapshot!(p("subscribers 100 TO1000"), @r"
+        insta::assert_snapshot!(p("subscribers 100 TO1000"), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `subscribers 100 TO1000`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `subscribers 100 TO1000`.
        1:23 subscribers 100 TO1000
-        ");
+        "###);

        insta::assert_snapshot!(p("channel = ponce ORdog != 'bernese mountain'"), @r###"
        Found unexpected characters at the end of the filter: `ORdog != \'bernese mountain\'`. You probably forgot an `OR` or an `AND` rule.
@@ -1102,108 +946,43 @@ pub mod tests {
        "###
        );

-        insta::assert_snapshot!(p(r#"_vectors _vectors EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        10:25 _vectors _vectors EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors. embedderName EXISTS"#), @r"
-        Was expecting embedder name but found nothing.
-        10:11 _vectors. embedderName EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors .embedderName EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        10:30 _vectors .embedderName EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName. EXISTS"#), @r"
-        Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
-        22:23 _vectors.embedderName. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors."embedderName EXISTS"#), @r#"
-        The quotes in one of the values are inconsistent.
-        10:30 _vectors."embedderName EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors."embedderNam"e EXISTS"#), @r#"
-        The vector filter has leftover tokens.
-        23:31 _vectors."embedderNam"e EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.documentTemplate. EXISTS"#), @r"
-        Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
-        39:40 _vectors.embedderName.documentTemplate. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments EXISTS"#), @r"
-        The vector filter is missing a fragment name.
-        32:39 _vectors.embedderName.fragments EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. EXISTS"#), @r"
-        The vector filter's fragment name is invalid.
-        33:40 _vectors.embedderName.fragments. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments.test test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        38:49 _vectors.embedderName.fragments.test test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. test EXISTS"#), @r"
-        The vector filter's fragment name is invalid.
-        33:45 _vectors.embedderName.fragments. test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments. test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        23:46 _vectors.embedderName .fragments. test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments.test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        23:45 _vectors.embedderName .fragments.test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fargments.test EXISTS"#), @r"
-        Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `fargments`. Did you mean `fragments`?
-        23:32 _vectors.embedderName.fargments.test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName."userProvided" EXISTS"#), @r#"
-        Was expecting this part to be unquoted.
-        24:36 _vectors.embedderName."userProvided" EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.userProvided.fragments.test EXISTS"#), @r"
-        Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `userProvided` and `fragments`.
-        36:45 _vectors.embedderName.userProvided.fragments.test EXISTS
-        ");
-
        insta::assert_snapshot!(p(r#"NOT OR EXISTS AND EXISTS NOT EXISTS"#), @r###"
        Was expecting a value but instead got `OR`, which is a reserved keyword. To use `OR` as a field name or a value, surround it by quotes.
        5:7 NOT OR EXISTS AND EXISTS NOT EXISTS
        "###);

-        insta::assert_snapshot!(p(r#"value NULL"#), @r"
+        insta::assert_snapshot!(p(r#"value NULL"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NULL`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NULL`.
        1:11 value NULL
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value NOT NULL"#), @r"
+        insta::assert_snapshot!(p(r#"value NOT NULL"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NOT NULL`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NOT NULL`.
        1:15 value NOT NULL
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value EMPTY"#), @r"
+        insta::assert_snapshot!(p(r#"value EMPTY"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value EMPTY`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value EMPTY`.
        1:12 value EMPTY
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value NOT EMPTY"#), @r"
+        insta::assert_snapshot!(p(r#"value NOT EMPTY"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NOT EMPTY`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NOT EMPTY`.
        1:16 value NOT EMPTY
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value IS"#), @r"
+        insta::assert_snapshot!(p(r#"value IS"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS`.
        1:9 value IS
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value IS NOT"#), @r"
+        insta::assert_snapshot!(p(r#"value IS NOT"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS NOT`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS NOT`.
        1:13 value IS NOT
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value IS EXISTS"#), @r"
+        insta::assert_snapshot!(p(r#"value IS EXISTS"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS EXISTS`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS EXISTS`.
        1:16 value IS EXISTS
-        ");
+        "###);
-        insta::assert_snapshot!(p(r#"value IS NOT EXISTS"#), @r"
+        insta::assert_snapshot!(p(r#"value IS NOT EXISTS"#), @r###"
-        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS NOT EXISTS`.
+        Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS NOT EXISTS`.
        1:20 value IS NOT EXISTS
-        ");
+        "###);
    }

    #[test]
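The tests above all use the inline-snapshot pattern from the `insta` crate. A minimal self-contained example of the same pattern, independent of the filter parser and assuming `insta` is available as a dev-dependency:

    #[cfg(test)]
    mod snapshot_example {
        #[test]
        fn inline_snapshot() {
            // The `@"..."` literal holds the expected value inline;
            // `cargo insta review` rewrites it when the output changes.
            insta::assert_snapshot!(format!("{} EXISTS", "_vectors"), @"_vectors EXISTS");
        }
    }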
@@ -80,51 +80,6 @@ pub fn word_exact<'a, 'b: 'a>(tag: &'b str) -> impl Fn(Span<'a>) -> IResult<'a,
    }
}

-/// vector_value = ( non_dot_word | singleQuoted | doubleQuoted)
-pub fn parse_vector_value(input: Span) -> IResult<Token> {
-    pub fn non_dot_word(input: Span) -> IResult<Token> {
-        let (input, word) = take_while1(|c| is_value_component(c) && c != '.')(input)?;
-        Ok((input, word.into()))
-    }
-
-    let (input, value) = alt((
-        delimited(char('\''), cut(|input| quoted_by('\'', input)), cut(char('\''))),
-        delimited(char('"'), cut(|input| quoted_by('"', input)), cut(char('"'))),
-        non_dot_word,
-    ))(input)?;
-
-    match unescaper::unescape(value.value()) {
-        Ok(content) => {
-            if content.len() != value.value().len() {
-                Ok((input, Token::new(value.original_span(), Some(content))))
-            } else {
-                Ok((input, value))
-            }
-        }
-        Err(unescaper::Error::IncompleteStr(_)) => Err(nom::Err::Incomplete(nom::Needed::Unknown)),
-        Err(unescaper::Error::ParseIntError { .. }) => Err(nom::Err::Error(Error::new_from_kind(
-            value.original_span(),
-            ErrorKind::InvalidEscapedNumber,
-        ))),
-        Err(unescaper::Error::InvalidChar { .. }) => Err(nom::Err::Error(Error::new_from_kind(
-            value.original_span(),
-            ErrorKind::MalformedValue,
-        ))),
-    }
-}
-
-pub fn parse_vector_value_cut<'a>(input: Span<'a>, kind: ErrorKind<'a>) -> IResult<'a, Token<'a>> {
-    parse_vector_value(input).map_err(|e| match e {
-        nom::Err::Failure(e) => match e.kind() {
-            ErrorKind::Char(c) if *c == '"' || *c == '\'' => {
-                crate::Error::failure_from_kind(input, ErrorKind::VectorFilterInvalidQuotes)
-            }
-            _ => crate::Error::failure_from_kind(input, kind),
-        },
-        _ => crate::Error::failure_from_kind(input, kind),
-    })
-}
-
/// value = WS* ( word | singleQuoted | doubleQuoted) WS+
pub fn parse_value(input: Span) -> IResult<Token> {
    // to get better diagnostic message we are going to strip the left whitespaces from the input right now
@@ -144,21 +99,31 @@ pub fn parse_value(input: Span) -> IResult<Token> {
    }

    match parse_geo_radius(input) {
-        Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius)),
+        Ok(_) => {
+            return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
+        }
        // if we encountered a failure it means the user badly wrote a _geoRadius filter.
        // But instead of showing them how to fix his syntax we are going to tell them they should not use this filter as a value.
        Err(e) if e.is_failure() => {
-            return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius))
+            return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
        }
        _ => (),
    }

    match parse_geo_bounding_box(input) {
-        Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox)),
+        Ok(_) => {
+            return Err(nom::Err::Failure(Error::new_from_kind(
+                input,
+                ErrorKind::MisusedGeoBoundingBox,
+            )))
+        }
        // if we encountered a failure it means the user badly wrote a _geoBoundingBox filter.
        // But instead of showing them how to fix his syntax we are going to tell them they should not use this filter as a value.
        Err(e) if e.is_failure() => {
-            return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox))
+            return Err(nom::Err::Failure(Error::new_from_kind(
+                input,
+                ErrorKind::MisusedGeoBoundingBox,
+            )))
        }
        _ => (),
    }
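As the removed `parse_vector_value` shows, segments of a `_vectors` path may be bare words or single/double-quoted strings, with escape sequences unescaped. Illustrative inputs only, with placeholder names:

    fn main() {
        let unquoted = "_vectors.embedderName.fragments.fragmentName EXISTS";
        let quoted = r#"_vectors."embedder name".fragments."also.. valid! " EXISTS"#;
        println!("{unquoted}\n{quoted}");
    }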
|
|||||||
@@ -16,7 +16,7 @@ license.workspace = true
|
|||||||
serde_json = "1.0"
|
serde_json = "1.0"
|
||||||
|
|
||||||
[dev-dependencies]
|
[dev-dependencies]
|
||||||
criterion = { version = "0.7.0", features = ["html_reports"] }
|
criterion = { version = "0.5.1", features = ["html_reports"] }
|
||||||
|
|
||||||
[[bench]]
|
[[bench]]
|
||||||
name = "benchmarks"
|
name = "benchmarks"
|
||||||
|
|||||||
@@ -11,12 +11,12 @@ edition.workspace = true
license.workspace = true

[dependencies]
-arbitrary = { version = "1.4.2", features = ["derive"] }
+arbitrary = { version = "1.4.1", features = ["derive"] }
-bumpalo = "3.19.0"
+bumpalo = "3.16.0"
-clap = { version = "4.5.52", features = ["derive"] }
+clap = { version = "4.5.24", features = ["derive"] }
-either = "1.15.0"
+either = "1.13.0"
fastrand = "2.3.0"
milli = { path = "../milli" }
-serde = { version = "1.0.228", features = ["derive"] }
+serde = { version = "1.0.217", features = ["derive"] }
-serde_json = { version = "1.0.145", features = ["preserve_order"] }
+serde_json = { version = "1.0.135", features = ["preserve_order"] }
-tempfile = "3.23.0"
+tempfile = "3.15.0"
@@ -13,7 +13,7 @@ use milli::heed::EnvOpenOptions;
use milli::progress::Progress;
use milli::update::new::indexer;
use milli::update::IndexerConfig;
-use milli::vector::RuntimeEmbedders;
+use milli::vector::EmbeddingConfigs;
use milli::Index;
use serde_json::Value;
use tempfile::TempDir;

@@ -89,7 +89,7 @@ fn main() {
let mut new_fields_ids_map = db_fields_ids_map.clone();

let indexer_alloc = Bump::new();
-let embedders = RuntimeEmbedders::default();
+let embedders = EmbeddingConfigs::default();
let mut indexer = indexer::DocumentOperation::new();

let mut operations = Vec::new();

@@ -129,7 +129,6 @@ fn main() {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
-None,
)
.unwrap();

@@ -145,7 +144,6 @@ fn main() {
embedders,
&|| false,
&Progress::default(),
-&Default::default(),
)
.unwrap();
@@ -11,33 +11,31 @@ edition.workspace = true
license.workspace = true

[dependencies]
-anyhow = "1.0.100"
+anyhow = "1.0.95"
bincode = "1.3.3"
byte-unit = "5.1.6"
-bytes = "1.11.0"
+bumpalo = "3.16.0"
-bumpalo = "3.19.0"
bumparaw-collections = "0.1.4"
-convert_case = "0.9.0"
+convert_case = "0.6.0"
-csv = "1.4.0"
+csv = "1.3.1"
derive_builder = "0.20.2"
dump = { path = "../dump" }
-enum-iterator = "2.3.0"
+enum-iterator = "2.1.0"
file-store = { path = "../file-store" }
-flate2 = "1.1.5"
+flate2 = "1.0.35"
-indexmap = "2.12.0"
+indexmap = "2.7.0"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-types = { path = "../meilisearch-types" }
-memmap2 = "0.9.9"
+memmap2 = "0.9.5"
page_size = "0.6.0"
-rayon = "1.11.0"
+rayon = "1.10.0"
-roaring = { version = "0.10.12", features = ["serde"] }
+roaring = { version = "0.10.10", features = ["serde"] }
-serde = { version = "1.0.228", features = ["derive"] }
+serde = { version = "1.0.217", features = ["derive"] }
-serde_json = { version = "1.0.145", features = ["preserve_order"] }
+serde_json = { version = "1.0.138", features = ["preserve_order"] }
-tar = "0.4.44"
synchronoise = "1.0.1"
-tempfile = "3.23.0"
+tempfile = "3.15.0"
-thiserror = "2.0.17"
+thiserror = "2.0.9"
-time = { version = "0.3.44", features = [
+time = { version = "0.3.37", features = [
"serde-well-known",
"formatting",
"parsing",

@@ -45,11 +43,7 @@ time = { version = "0.3.44", features = [
] }
tracing = "0.1.41"
ureq = "2.12.1"
-uuid = { version = "1.18.1", features = ["serde", "v4"] }
+uuid = { version = "1.11.0", features = ["serde", "v4"] }
-backoff = "0.4.0"
-reqwest = { version = "0.12.24", features = ["rustls-tls", "http2"], default-features = false }
-rusty-s3 = "0.8.1"
-tokio = { version = "1.48.0", features = ["full"] }

[dev-dependencies]
big_s = "1.0.2"
@@ -1,12 +1,9 @@
-#![allow(clippy::result_large_err)]

use std::collections::HashMap;
use std::io;

use dump::{KindDump, TaskDump, UpdateFile};
use meilisearch_types::batches::{Batch, BatchId};
use meilisearch_types::heed::RwTxn;
-use meilisearch_types::index_uid_pattern::IndexUidPattern;
use meilisearch_types::milli;
use meilisearch_types::tasks::{Kind, KindWithContent, Status, Task};
use roaring::RoaringBitmap;

@@ -149,8 +146,6 @@ impl<'a> Dump<'a> {
canceled_by: task.canceled_by,
details: task.details,
status: task.status,
-network: task.network,
-custom_metadata: task.custom_metadata,
kind: match task.kind {
KindDump::DocumentImport {
primary_key,

@@ -201,10 +196,9 @@ impl<'a> Dump<'a> {
index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
primary_key,
},
-KindDump::IndexUpdate { primary_key, uid } => KindWithContent::IndexUpdate {
+KindDump::IndexUpdate { primary_key } => KindWithContent::IndexUpdate {
index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
primary_key,
-new_index_uid: uid,
},
KindDump::IndexSwap { swaps } => KindWithContent::IndexSwap { swaps },
KindDump::TaskCancelation { query, tasks } => {

@@ -217,27 +211,7 @@ impl<'a> Dump<'a> {
KindWithContent::DumpCreation { keys, instance_uid }
}
KindDump::SnapshotCreation => KindWithContent::SnapshotCreation,
-KindDump::Export { url, api_key, payload_size, indexes } => {
-KindWithContent::Export {
-url,
-api_key,
-payload_size,
-indexes: indexes
-.into_iter()
-.map(|(pattern, settings)| {
-Ok((
-IndexUidPattern::try_from(pattern)
-.map_err(|_| Error::CorruptedDump)?,
-settings,
-))
-})
-.collect::<Result<_, Error>>()?,
-}
-}
KindDump::UpgradeDatabase { from } => KindWithContent::UpgradeDatabase { from },
-KindDump::IndexCompaction { index_uid } => {
-KindWithContent::IndexCompaction { index_uid }
-}
},
};
@@ -5,7 +5,6 @@ use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::milli::index::RollbackOutcome;
use meilisearch_types::tasks::{Kind, Status};
use meilisearch_types::{heed, milli};
-use reqwest::StatusCode;
use thiserror::Error;

use crate::TaskId;

@@ -68,8 +67,6 @@ pub enum Error {
SwapDuplicateIndexesFound(Vec<String>),
#[error("Index `{0}` not found.")]
SwapIndexNotFound(String),
-#[error("Cannot rename `{0}` to `{1}` as the index already exists. Hint: You can remove `{1}` first and then do your remove.")]
-SwapIndexFoundDuringRename(String, String),
#[error("Meilisearch cannot receive write operations because the limit of the task database has been reached. Please delete tasks to continue performing write operations.")]
NoSpaceLeftInTaskQueue,
#[error(

@@ -77,10 +74,6 @@ pub enum Error {
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
SwapIndexesNotFound(Vec<String>),
-#[error("The following indexes are being renamed but cannot because their new name conflicts with an already existing index: {}. Renaming doesn't overwrite the other index name.",
-.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
-)]
-SwapIndexesFoundDuringRename(Vec<String>),
#[error("Corrupted dump.")]
CorruptedDump,
#[error(

@@ -128,14 +121,6 @@ pub enum Error {
#[error("Aborted task")]
AbortedTask,

-#[error("S3 error: status: {status}, body: {body}")]
-S3Error { status: StatusCode, body: String },
-#[error("S3 HTTP error: {0}")]
-S3HttpError(reqwest::Error),
-#[error("S3 XML error: {0}")]
-S3XmlError(Box<dyn std::error::Error + Send + Sync>),
-#[error("S3 bucket error: {0}")]
-S3BucketError(rusty_s3::BucketError),
#[error(transparent)]
Dump(#[from] dump::Error),
#[error(transparent)]

@@ -166,10 +151,6 @@ pub enum Error {
CorruptedTaskQueue,
#[error(transparent)]
DatabaseUpgrade(Box<Self>),
-#[error(transparent)]
-Export(Box<Self>),
-#[error("Failed to export documents to remote server {code} ({type}): {message} <{link}>")]
-FromRemoteWhenExporting { message: String, code: String, r#type: String, link: String },
#[error("Failed to rollback for index `{index}`: {rollback_outcome} ")]
RollbackFailed { index: String, rollback_outcome: RollbackOutcome },
#[error(transparent)]

@@ -218,8 +199,6 @@ impl Error {
| Error::SwapIndexNotFound(_)
| Error::NoSpaceLeftInTaskQueue
| Error::SwapIndexesNotFound(_)
-| Error::SwapIndexFoundDuringRename(_, _)
-| Error::SwapIndexesFoundDuringRename(_)
| Error::CorruptedDump
| Error::InvalidTaskDate { .. }
| Error::InvalidTaskUid { .. }

@@ -233,12 +212,7 @@ impl Error {
| Error::BatchNotFound(_)
| Error::TaskDeletionWithEmptyQuery
| Error::TaskCancelationWithEmptyQuery
-| Error::FromRemoteWhenExporting { .. }
| Error::AbortedTask
-| Error::S3Error { .. }
-| Error::S3HttpError(_)
-| Error::S3XmlError(_)
-| Error::S3BucketError(_)
| Error::Dump(_)
| Error::Heed(_)
| Error::Milli { .. }

@@ -247,7 +221,6 @@ impl Error {
| Error::IoError(_)
| Error::Persist(_)
| Error::FeatureNotEnabled(_)
-| Error::Export(_)
| Error::Anyhow(_) => true,
Error::CreateBatch(_)
| Error::CorruptedTaskQueue

@@ -292,8 +265,6 @@ impl ErrorCode for Error {
Error::SwapDuplicateIndexFound(_) => Code::InvalidSwapDuplicateIndexFound,
Error::SwapIndexNotFound(_) => Code::IndexNotFound,
Error::SwapIndexesNotFound(_) => Code::IndexNotFound,
-Error::SwapIndexFoundDuringRename(_, _) => Code::IndexAlreadyExists,
-Error::SwapIndexesFoundDuringRename(_) => Code::IndexAlreadyExists,
Error::InvalidTaskDate { field, .. } => (*field).into(),
Error::InvalidTaskUid { .. } => Code::InvalidTaskUids,
Error::InvalidBatchUid { .. } => Code::InvalidBatchUids,

@@ -306,18 +277,11 @@ impl ErrorCode for Error {
Error::BatchNotFound(_) => Code::BatchNotFound,
Error::TaskDeletionWithEmptyQuery => Code::MissingTaskFilters,
Error::TaskCancelationWithEmptyQuery => Code::MissingTaskFilters,
+// TODO: not sure of the Code to use
Error::NoSpaceLeftInTaskQueue => Code::NoSpaceLeftOnDevice,
-Error::S3Error { status, .. } if status.is_client_error() => {
-Code::InvalidS3SnapshotRequest
-}
-Error::S3Error { .. } => Code::S3SnapshotServerError,
-Error::S3HttpError(_) => Code::S3SnapshotServerError,
-Error::S3XmlError(_) => Code::S3SnapshotServerError,
-Error::S3BucketError(_) => Code::InvalidS3SnapshotParameters,
Error::Dump(e) => e.error_code(),
Error::Milli { error, .. } => error.error_code(),
Error::ProcessBatchPanicked(_) => Code::Internal,
-Error::FromRemoteWhenExporting { .. } => Code::Internal,
Error::Heed(e) => e.error_code(),
Error::HeedTransaction(e) => e.error_code(),
Error::FileStore(e) => e.error_code(),

@@ -330,7 +294,6 @@ impl ErrorCode for Error {
Error::CorruptedTaskQueue => Code::Internal,
Error::CorruptedDump => Code::Internal,
Error::DatabaseUpgrade(_) => Code::Internal,
-Error::Export(_) => Code::Internal,
Error::RollbackFailed { .. } => Code::Internal,
Error::UnrecoverableError(_) => Code::Internal,
Error::IndexSchedulerVersionMismatch { .. } => Code::Internal,
@@ -1,9 +1,8 @@
use std::sync::{Arc, RwLock};

-use meilisearch_types::features::{InstanceTogglableFeatures, RuntimeTogglableFeatures};
+use meilisearch_types::features::{InstanceTogglableFeatures, Network, RuntimeTogglableFeatures};
use meilisearch_types::heed::types::{SerdeJson, Str};
use meilisearch_types::heed::{Database, Env, RwTxn, WithoutTls};
-use meilisearch_types::network::Network;

use crate::error::FeatureNotEnabledError;
use crate::Result;

@@ -86,7 +85,7 @@ impl RoFeatures {
Ok(())
} else {
Err(FeatureNotEnabledError {
-disabled_action: "Using `CONTAINS` in a filter",
+disabled_action: "Using `CONTAINS` or `STARTS WITH` in a filter",
feature: "contains filter",
issue_link: "https://github.com/orgs/meilisearch/discussions/763",
}

@@ -132,45 +131,6 @@ impl RoFeatures {
.into())
}
}

-pub fn check_chat_completions(&self, disabled_action: &'static str) -> Result<()> {
-if self.runtime.chat_completions {
-Ok(())
-} else {
-Err(FeatureNotEnabledError {
-disabled_action,
-feature: "chat completions",
-issue_link: "https://github.com/orgs/meilisearch/discussions/835",
-}
-.into())
-}
-}

-pub fn check_multimodal(&self, disabled_action: &'static str) -> Result<()> {
-if self.runtime.multimodal {
-Ok(())
-} else {
-Err(FeatureNotEnabledError {
-disabled_action,
-feature: "multimodal",
-issue_link: "https://github.com/orgs/meilisearch/discussions/846",
-}
-.into())
-}
-}

-pub fn check_vector_store_setting(&self, disabled_action: &'static str) -> Result<()> {
-if self.runtime.vector_store_setting {
-Ok(())
-} else {
-Err(FeatureNotEnabledError {
-disabled_action,
-feature: "vector_store_setting",
-issue_link: "https://github.com/orgs/meilisearch/discussions/860",
-}
-.into())
-}
-}
}

impl FeatureData {

@@ -196,7 +156,6 @@ impl FeatureData {
..persisted_features
}));

-// Once this is stabilized, network should be stored along with webhooks in index-scheduler's persisted database
let network_db = runtime_features_db.remap_data_type::<SerdeJson<Network>>();
let network: Network = network_db.get(wtxn, db_keys::NETWORK)?.unwrap_or_default();
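The three check_* methods removed in the hunk above all share one gate shape: consult a runtime flag and either return Ok(()) or a FeatureNotEnabledError naming the blocked action, the feature, and a discussion link. A stand-alone sketch of that shape follows; the types here are simplified stand-ins (the real code returns the crate's own Result and converts the error with .into()).

// Illustrative stand-ins, not the crate's real types.
#[derive(Default, Clone, Copy)]
struct RuntimeTogglableFeatures {
    chat_completions: bool,
}

struct RoFeatures {
    runtime: RuntimeTogglableFeatures,
}

#[derive(Debug)]
struct FeatureNotEnabledError {
    disabled_action: &'static str,
    feature: &'static str,
    issue_link: &'static str,
}

impl RoFeatures {
    // Same gate shape as the removed check_chat_completions above.
    fn check_chat_completions(&self, disabled_action: &'static str) -> Result<(), FeatureNotEnabledError> {
        if self.runtime.chat_completions {
            Ok(())
        } else {
            Err(FeatureNotEnabledError {
                disabled_action,
                feature: "chat completions",
                issue_link: "https://github.com/orgs/meilisearch/discussions/835",
            })
        }
    }
}

fn main() {
    let features = RoFeatures { runtime: RuntimeTogglableFeatures::default() };
    // With the experimental flag off, callers get a descriptive refusal instead of a panic.
    match features.check_chat_completions("Using the /chats route") {
        Ok(()) => println!("feature enabled"),
        Err(e) => println!("refused: {e:?}"),
    }
}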
@@ -71,7 +71,7 @@ pub struct IndexMapper {
/// Path to the folder where the LMDB environments of each index are.
base_path: PathBuf,
/// The map size an index is opened with on the first time.
-pub(crate) index_base_map_size: usize,
+index_base_map_size: usize,
/// The quantity by which the map size of an index is incremented upon reopening, in bytes.
index_growth_amount: usize,
/// Whether we open a meilisearch index with the MDB_WRITEMAP option or not.

@@ -143,10 +143,10 @@ impl IndexStats {
///
/// - rtxn: a RO transaction for the index, obtained from `Index::read_txn()`.
pub fn new(index: &Index, rtxn: &RoTxn) -> milli::Result<Self> {
-let vector_store_stats = index.vector_store_stats(rtxn)?;
+let arroy_stats = index.arroy_stats(rtxn)?;
Ok(IndexStats {
-number_of_embeddings: Some(vector_store_stats.number_of_embeddings),
+number_of_embeddings: Some(arroy_stats.number_of_embeddings),
-number_of_embedded_documents: Some(vector_store_stats.documents.len()),
+number_of_embedded_documents: Some(arroy_stats.documents.len()),
documents_database_stats: index.documents_stats(rtxn)?.unwrap_or_default(),
number_of_documents: None,
database_size: index.on_disk_size()?,

@@ -199,7 +199,7 @@ impl IndexMapper {
let uuid = Uuid::new_v4();
self.index_mapping.put(&mut wtxn, name, &uuid)?;

-let index_path = self.index_path(uuid);
+let index_path = self.base_path.join(uuid.to_string());
fs::create_dir_all(&index_path)?;

// Error if the UUIDv4 somehow already exists in the map, since it should be fresh.

@@ -286,7 +286,7 @@ impl IndexMapper {
};

let index_map = self.index_map.clone();
-let index_path = self.index_path(uuid);
+let index_path = self.base_path.join(uuid.to_string());
let index_name = name.to_string();
thread::Builder::new()
.name(String::from("index_deleter"))

@@ -341,26 +341,6 @@ impl IndexMapper {
Ok(())
}

-/// Closes the specified index.
-///
-/// This operation involves closing the underlying environment and so can take a long time to complete.
-///
-/// # Panics
-///
-/// - If the Index corresponding to the passed name is concurrently being deleted/resized or cannot be found in the
-/// in memory hash map.
-pub fn close_index(&self, rtxn: &RoTxn, name: &str) -> Result<()> {
-let uuid = self
-.index_mapping
-.get(rtxn, name)?
-.ok_or_else(|| Error::IndexNotFound(name.to_string()))?;

-// We remove the index from the in-memory index map.
-self.index_map.write().unwrap().close_for_resize(&uuid, self.enable_mdb_writemap, 0);

-Ok(())
-}

/// Return an index, may open it if it wasn't already opened.
pub fn index(&self, rtxn: &RoTxn, name: &str) -> Result<Index> {
if let Some((current_name, current_index)) =

@@ -408,7 +388,7 @@ impl IndexMapper {
} else {
continue;
};
-let index_path = self.index_path(uuid);
+let index_path = self.base_path.join(uuid.to_string());
// take the lock to reopen the environment.
reopen
.reopen(&mut self.index_map.write().unwrap(), &index_path)

@@ -425,7 +405,7 @@ impl IndexMapper {
// if it's not already there.
match index_map.get(&uuid) {
Missing => {
-let index_path = self.index_path(uuid);
+let index_path = self.base_path.join(uuid.to_string());

break index_map
.create(

@@ -452,14 +432,6 @@ impl IndexMapper {
Ok(index)
}

-/// Returns the path of the index.
-///
-/// The folder located at this path is containing the data.mdb,
-/// the lock.mdb and an optional data.mdb.cpy file.
-pub fn index_path(&self, uuid: Uuid) -> PathBuf {
-self.base_path.join(uuid.to_string())
-}

pub fn rollback_index(
&self,
rtxn: &RoTxn,

@@ -500,7 +472,7 @@ impl IndexMapper {
};
}

-let index_path = self.index_path(uuid);
+let index_path = self.base_path.join(uuid.to_string());
Index::rollback(milli::heed::EnvOpenOptions::new().read_txn_without_tls(), index_path, to)
.map_err(|err| crate::Error::from_milli(err, Some(name.to_string())))
}

@@ -554,20 +526,6 @@ impl IndexMapper {
Ok(())
}

-/// Rename an index.
-pub fn rename(&self, wtxn: &mut RwTxn, current: &str, new: &str) -> Result<()> {
-let uuid = self
-.index_mapping
-.get(wtxn, current)?
-.ok_or_else(|| Error::IndexNotFound(current.to_string()))?;
-if self.index_mapping.get(wtxn, new)?.is_some() {
-return Err(Error::IndexAlreadyExists(new.to_string()));
-}
-self.index_mapping.delete(wtxn, current)?;
-self.index_mapping.put(wtxn, new, &uuid)?;
-Ok(())
-}

/// The stats of an index.
///
/// If available in the cache, they are directly returned.
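The repeated pairs in the IndexMapper hunks above all toggle between an index_path(uuid) helper and the inlined self.base_path.join(uuid.to_string()). A tiny, std-only sketch of what the helper buys; the struct is a stripped-down stand-in and the UUID literal is only a placeholder.

use std::path::{Path, PathBuf};

// Illustrative stand-in for the real IndexMapper; only the path logic is shown.
struct IndexMapper {
    base_path: PathBuf,
}

impl IndexMapper {
    /// Folder holding one index's LMDB files (data.mdb, lock.mdb, and an optional copy).
    fn index_path(&self, uuid: &str) -> PathBuf {
        self.base_path.join(uuid)
    }
}

fn main() {
    let mapper = IndexMapper { base_path: Path::new("indexes").to_path_buf() };
    // In the real code the uuid comes from Uuid::new_v4().to_string(); this one is a placeholder.
    println!("{}", mapper.index_path("3fa85f64-5717-4562-b3fc-2c963f66afa6").display());
}

Deriving the path in one place means a later change to the on-disk layout only touches index_path, which is presumably why the `-` side of the hunks introduces the helper.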
@@ -6,7 +6,7 @@ use meilisearch_types::heed::types::{SerdeBincode, SerdeJson, Str};
use meilisearch_types::heed::{Database, RoTxn};
use meilisearch_types::milli::{CboRoaringBitmapCodec, RoaringBitmapCodec, BEU32};
use meilisearch_types::tasks::{Details, Kind, Status, Task};
-use meilisearch_types::versioning::{self, VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH};
+use meilisearch_types::versioning;
use roaring::RoaringBitmap;

use crate::index_mapper::IndexMapper;

@@ -20,23 +20,20 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {

let IndexScheduler {
cleanup_enabled: _,
-experimental_no_edition_2024_for_dumps: _,
processing_tasks,
env,
version,
queue,
scheduler,
-persisted,

index_mapper,
features: _,
-webhooks: _,
+webhook_url: _,
+webhook_authorization_header: _,
test_breakpoint_sdr: _,
planned_failures: _,
run_loop_iteration: _,
embedders: _,
-chat_settings: _,
-runtime: _,
} = scheduler;

let rtxn = env.read_txn().unwrap();

@@ -63,13 +60,6 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
}
snap.push_str("\n----------------------------------------------------------------------\n");

-let persisted_db_snapshot = snapshot_persisted_db(&rtxn, persisted);
-if !persisted_db_snapshot.is_empty() {
-snap.push_str("### Persisted:\n");
-snap.push_str(&persisted_db_snapshot);
-snap.push_str("----------------------------------------------------------------------\n");
-}

snap.push_str("### All Tasks:\n");
snap.push_str(&snapshot_all_tasks(&rtxn, queue.tasks.all_tasks));
snap.push_str("----------------------------------------------------------------------\n");

@@ -208,16 +198,6 @@ pub fn snapshot_date_db(rtxn: &RoTxn, db: Database<BEI128, CboRoaringBitmapCodec
snap
}

-pub fn snapshot_persisted_db(rtxn: &RoTxn, db: &Database<Str, Str>) -> String {
-let mut snap = String::new();
-let iter = db.iter(rtxn).unwrap();
-for next in iter {
-let (key, value) = next.unwrap();
-snap.push_str(&format!("{key}: {value}\n"));
-}
-snap
-}

pub fn snapshot_task(task: &Task) -> String {
let mut snap = String::new();
let Task {

@@ -231,8 +211,6 @@ pub fn snapshot_task(task: &Task) -> String {
details,
status,
kind,
-network,
-custom_metadata,
} = task;
snap.push('{');
snap.push_str(&format!("uid: {uid}, "));

@@ -250,12 +228,6 @@ pub fn snapshot_task(task: &Task) -> String {
snap.push_str(&format!("details: {}, ", &snapshot_details(details)));
}
snap.push_str(&format!("kind: {kind:?}"));
-if let Some(network) = network {
-snap.push_str(&format!("network: {network:?}, "))
-}
-if let Some(custom_metadata) = custom_metadata {
-snap.push_str(&format!("custom_metadata: {custom_metadata:?}"))
-}

snap.push('}');
snap

@@ -283,8 +255,8 @@ fn snapshot_details(d: &Details) -> String {
Details::SettingsUpdate { settings } => {
format!("{{ settings: {settings:?} }}")
}
-Details::IndexInfo { primary_key, new_index_uid, old_index_uid } => {
+Details::IndexInfo { primary_key } => {
-format!("{{ primary_key: {primary_key:?}, old_new_uid: {old_index_uid:?}, new_index_uid: {new_index_uid:?} }}")
+format!("{{ primary_key: {primary_key:?} }}")
}
Details::DocumentDeletion {
provided_ids: received_document_ids,

@@ -316,18 +288,8 @@ fn snapshot_details(d: &Details) -> String {
Details::IndexSwap { swaps } => {
format!("{{ swaps: {swaps:?} }}")
}
-Details::Export { url, api_key, payload_size, indexes } => {
-format!("{{ url: {url:?}, api_key: {api_key:?}, payload_size: {payload_size:?}, indexes: {indexes:?} }}")
-}
Details::UpgradeDatabase { from, to } => {
-if to == &(VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH) {
+format!("{{ from: {from:?}, to: {to:?} }}")
-format!("{{ from: {from:?}, to: [current version] }}")
-} else {
-format!("{{ from: {from:?}, to: {to:?} }}")
-}
-}
-Details::IndexCompaction { index_uid, pre_compaction_size, post_compaction_size } => {
-format!("{{ index_uid: {index_uid:?}, pre_compaction_size: {pre_compaction_size:?}, post_compaction_size: {post_compaction_size:?} }}")
}
}
}

@@ -344,7 +306,6 @@ pub fn snapshot_status(
}
snap
}

pub fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();

@@ -365,7 +326,6 @@ pub fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>)
}
snap
}

pub fn snapshot_canceled_by(rtxn: &RoTxn, db: Database<BEU32, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();

@@ -382,7 +342,6 @@ pub fn snapshot_batch(batch: &Batch) -> String {
uid,
details,
stats,
-embedder_stats,
started_at,
finished_at,
progress: _,

@@ -404,28 +363,8 @@ pub fn snapshot_batch(batch: &Batch) -> String {

snap.push('{');
snap.push_str(&format!("uid: {uid}, "));
-let details = if let Some(upgrade_to) = &details.upgrade_to {
+snap.push_str(&format!("details: {}, ", serde_json::to_string(details).unwrap()));
-if upgrade_to.as_str()
-== format!("v{VERSION_MAJOR}.{VERSION_MINOR}.{VERSION_PATCH}").as_str()
-{
-let mut details = details.clone();

-details.upgrade_to = Some("[current version]".into());
-serde_json::to_string(&details).unwrap()
-} else {
-serde_json::to_string(details).unwrap()
-}
-} else {
-serde_json::to_string(details).unwrap()
-};
-snap.push_str(&format!("details: {details}, "));
snap.push_str(&format!("stats: {}, ", serde_json::to_string(&stats).unwrap()));
-if !embedder_stats.skip_serializing() {
-snap.push_str(&format!(
-"embedder stats: {}, ",
-serde_json::to_string(&embedder_stats).unwrap()
-));
-}
snap.push_str(&format!("stop reason: {}, ", serde_json::to_string(&stop_reason).unwrap()));
snap.push('}');
snap
@@ -1,6 +1,3 @@
|
|||||||
// The main Error type is large and boxing the large variant make the pattern matching fails
|
|
||||||
#![allow(clippy::result_large_err)]
|
|
||||||
|
|
||||||
/*!
|
/*!
|
||||||
This crate defines the index scheduler, which is responsible for:
|
This crate defines the index scheduler, which is responsible for:
|
||||||
1. Keeping references to meilisearch's indexes and mapping them to their
|
1. Keeping references to meilisearch's indexes and mapping them to their
|
||||||
@@ -54,31 +51,22 @@ pub use features::RoFeatures;
|
|||||||
use flate2::bufread::GzEncoder;
|
use flate2::bufread::GzEncoder;
|
||||||
use flate2::Compression;
|
use flate2::Compression;
|
||||||
use meilisearch_types::batches::Batch;
|
use meilisearch_types::batches::Batch;
|
||||||
use meilisearch_types::features::{
|
use meilisearch_types::features::{InstanceTogglableFeatures, Network, RuntimeTogglableFeatures};
|
||||||
ChatCompletionSettings, InstanceTogglableFeatures, RuntimeTogglableFeatures,
|
|
||||||
};
|
|
||||||
use meilisearch_types::heed::byteorder::BE;
|
use meilisearch_types::heed::byteorder::BE;
|
||||||
use meilisearch_types::heed::types::{DecodeIgnore, SerdeJson, Str, I128};
|
use meilisearch_types::heed::types::I128;
|
||||||
use meilisearch_types::heed::{self, Database, Env, RoTxn, WithoutTls};
|
use meilisearch_types::heed::{self, Env, RoTxn, WithoutTls};
|
||||||
|
use meilisearch_types::milli::index::IndexEmbeddingConfig;
|
||||||
use meilisearch_types::milli::update::IndexerConfig;
|
use meilisearch_types::milli::update::IndexerConfig;
|
||||||
use meilisearch_types::milli::vector::json_template::JsonTemplate;
|
use meilisearch_types::milli::vector::{Embedder, EmbedderOptions, EmbeddingConfigs};
|
||||||
use meilisearch_types::milli::vector::{
|
|
||||||
Embedder, EmbedderOptions, RuntimeEmbedder, RuntimeEmbedders, RuntimeFragment,
|
|
||||||
};
|
|
||||||
use meilisearch_types::milli::{self, Index};
|
use meilisearch_types::milli::{self, Index};
|
||||||
use meilisearch_types::network::Network;
|
|
||||||
use meilisearch_types::task_view::TaskView;
|
use meilisearch_types::task_view::TaskView;
|
||||||
use meilisearch_types::tasks::{KindWithContent, Task, TaskNetwork};
|
use meilisearch_types::tasks::{KindWithContent, Task};
|
||||||
use meilisearch_types::webhooks::{Webhook, WebhooksDumpView, WebhooksView};
|
|
||||||
use milli::vector::db::IndexEmbeddingConfig;
|
|
||||||
use processing::ProcessingTasks;
|
use processing::ProcessingTasks;
|
||||||
pub use queue::Query;
|
pub use queue::Query;
|
||||||
use queue::Queue;
|
use queue::Queue;
|
||||||
use roaring::RoaringBitmap;
|
use roaring::RoaringBitmap;
|
||||||
use scheduler::Scheduler;
|
use scheduler::Scheduler;
|
||||||
use serde::{Deserialize, Serialize};
|
|
||||||
use time::OffsetDateTime;
|
use time::OffsetDateTime;
|
||||||
use uuid::Uuid;
|
|
||||||
use versioning::Versioning;
|
use versioning::Versioning;
|
||||||
|
|
||||||
use crate::index_mapper::IndexMapper;
|
use crate::index_mapper::IndexMapper;
|
||||||
@@ -88,15 +76,6 @@ pub(crate) type BEI128 = I128<BE>;
|
|||||||
|
|
||||||
const TASK_SCHEDULER_SIZE_THRESHOLD_PERCENT_INT: u64 = 40;
|
const TASK_SCHEDULER_SIZE_THRESHOLD_PERCENT_INT: u64 = 40;
|
||||||
|
|
||||||
mod db_name {
|
|
||||||
pub const CHAT_SETTINGS: &str = "chat-settings";
|
|
||||||
pub const PERSISTED: &str = "persisted";
|
|
||||||
}
|
|
||||||
|
|
||||||
mod db_keys {
|
|
||||||
pub const WEBHOOKS: &str = "webhooks";
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug)]
|
#[derive(Debug)]
|
||||||
pub struct IndexSchedulerOptions {
|
pub struct IndexSchedulerOptions {
|
||||||
/// The path to the version file of Meilisearch.
|
/// The path to the version file of Meilisearch.
|
||||||
@@ -113,10 +92,10 @@ pub struct IndexSchedulerOptions {
|
|||||||
pub snapshots_path: PathBuf,
|
pub snapshots_path: PathBuf,
|
||||||
/// The path to the folder containing the dumps.
|
/// The path to the folder containing the dumps.
|
||||||
pub dumps_path: PathBuf,
|
pub dumps_path: PathBuf,
|
||||||
/// The webhook url that was set by the CLI.
|
/// The URL on which we must send the tasks statuses
|
||||||
pub cli_webhook_url: Option<String>,
|
pub webhook_url: Option<String>,
|
||||||
/// The Authorization header to send to the webhook URL that was set by the CLI.
|
/// The value we will send into the Authorization HTTP header on the webhook URL
|
||||||
pub cli_webhook_authorization: Option<String>,
|
pub webhook_authorization_header: Option<String>,
|
||||||
/// The maximum size, in bytes, of the task index.
|
/// The maximum size, in bytes, of the task index.
|
||||||
pub task_db_size: usize,
|
pub task_db_size: usize,
|
||||||
/// The size, in bytes, with which a meilisearch index is opened the first time of each meilisearch index.
|
/// The size, in bytes, with which a meilisearch index is opened the first time of each meilisearch index.
|
||||||
@@ -152,8 +131,6 @@ pub struct IndexSchedulerOptions {
|
|||||||
///
|
///
|
||||||
/// 0 disables the cache.
|
/// 0 disables the cache.
|
||||||
pub embedding_cache_cap: usize,
|
pub embedding_cache_cap: usize,
|
||||||
/// Snapshot compaction status.
|
|
||||||
pub experimental_no_snapshot_compaction: bool,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Structure which holds meilisearch's indexes and schedules the tasks
|
/// Structure which holds meilisearch's indexes and schedules the tasks
|
||||||
@@ -174,23 +151,16 @@ pub struct IndexScheduler {
|
|||||||
/// In charge of fetching and setting the status of experimental features.
|
/// In charge of fetching and setting the status of experimental features.
|
||||||
features: features::FeatureData,
|
features: features::FeatureData,
|
||||||
|
|
||||||
/// Stores the custom chat prompts and other settings of the indexes.
|
|
||||||
pub(crate) chat_settings: Database<Str, SerdeJson<ChatCompletionSettings>>,
|
|
||||||
|
|
||||||
/// Everything related to the processing of the tasks
|
/// Everything related to the processing of the tasks
|
||||||
pub scheduler: scheduler::Scheduler,
|
pub scheduler: scheduler::Scheduler,
|
||||||
|
|
||||||
/// Whether we should automatically cleanup the task queue or not.
|
/// Whether we should automatically cleanup the task queue or not.
|
||||||
pub(crate) cleanup_enabled: bool,
|
pub(crate) cleanup_enabled: bool,
|
||||||
|
|
||||||
/// Whether we should use the old document indexer or the new one.
|
/// The webhook url we should send tasks to after processing every batches.
|
||||||
pub(crate) experimental_no_edition_2024_for_dumps: bool,
|
pub(crate) webhook_url: Option<String>,
|
||||||
|
/// The Authorization header to send to the webhook URL.
|
||||||
/// A database to store single-keyed data that is persisted across restarts.
|
pub(crate) webhook_authorization_header: Option<String>,
|
||||||
persisted: Database<Str, Str>,
|
|
||||||
|
|
||||||
/// Webhook, loaded and stored in the `persisted` database
|
|
||||||
webhooks: Arc<Webhooks>,
|
|
||||||
|
|
||||||
/// A map to retrieve the runtime representation of an embedder depending on its configuration.
|
/// A map to retrieve the runtime representation of an embedder depending on its configuration.
|
||||||
///
|
///
|
||||||
@@ -216,9 +186,6 @@ pub struct IndexScheduler {
|
|||||||
/// A counter that is incremented before every call to [`tick`](IndexScheduler::tick)
|
/// A counter that is incremented before every call to [`tick`](IndexScheduler::tick)
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
run_loop_iteration: Arc<RwLock<usize>>,
|
run_loop_iteration: Arc<RwLock<usize>>,
|
||||||
|
|
||||||
/// The tokio runtime used for asynchronous tasks.
|
|
||||||
runtime: Option<tokio::runtime::Handle>,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
impl IndexScheduler {
|
impl IndexScheduler {
|
||||||
@@ -232,10 +199,8 @@ impl IndexScheduler {
|
|||||||
|
|
||||||
index_mapper: self.index_mapper.clone(),
|
index_mapper: self.index_mapper.clone(),
|
||||||
cleanup_enabled: self.cleanup_enabled,
|
cleanup_enabled: self.cleanup_enabled,
|
||||||
experimental_no_edition_2024_for_dumps: self.experimental_no_edition_2024_for_dumps,
|
webhook_url: self.webhook_url.clone(),
|
||||||
persisted: self.persisted,
|
webhook_authorization_header: self.webhook_authorization_header.clone(),
|
||||||
|
|
||||||
webhooks: self.webhooks.clone(),
|
|
||||||
embedders: self.embedders.clone(),
|
embedders: self.embedders.clone(),
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
test_breakpoint_sdr: self.test_breakpoint_sdr.clone(),
|
test_breakpoint_sdr: self.test_breakpoint_sdr.clone(),
|
||||||
@@ -244,38 +209,21 @@ impl IndexScheduler {
|
|||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
run_loop_iteration: self.run_loop_iteration.clone(),
|
run_loop_iteration: self.run_loop_iteration.clone(),
|
||||||
features: self.features.clone(),
|
features: self.features.clone(),
|
||||||
chat_settings: self.chat_settings,
|
|
||||||
runtime: self.runtime.clone(),
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
pub(crate) const fn nb_db() -> u32 {
|
pub(crate) const fn nb_db() -> u32 {
|
||||||
Versioning::nb_db()
|
Versioning::nb_db() + Queue::nb_db() + IndexMapper::nb_db() + features::FeatureData::nb_db()
|
||||||
+ Queue::nb_db()
|
|
||||||
+ IndexMapper::nb_db()
|
|
||||||
+ features::FeatureData::nb_db()
|
|
||||||
+ 1 // chat-prompts
|
|
||||||
+ 1 // persisted
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Create an index scheduler and start its run loop.
|
/// Create an index scheduler and start its run loop.
|
||||||
|
#[allow(private_interfaces)] // because test_utils is private
|
||||||
pub fn new(
|
pub fn new(
|
||||||
options: IndexSchedulerOptions,
|
options: IndexSchedulerOptions,
|
||||||
auth_env: Env<WithoutTls>,
|
auth_env: Env<WithoutTls>,
|
||||||
from_db_version: (u32, u32, u32),
|
from_db_version: (u32, u32, u32),
|
||||||
runtime: Option<tokio::runtime::Handle>,
|
#[cfg(test)] test_breakpoint_sdr: crossbeam_channel::Sender<(test_utils::Breakpoint, bool)>,
|
||||||
) -> Result<Self> {
|
#[cfg(test)] planned_failures: Vec<(usize, test_utils::FailureLocation)>,
|
||||||
let this = Self::new_without_run(options, auth_env, from_db_version, runtime)?;
|
|
||||||
|
|
||||||
this.run();
|
|
||||||
Ok(this)
|
|
||||||
}
|
|
||||||
|
|
||||||
fn new_without_run(
|
|
||||||
options: IndexSchedulerOptions,
|
|
||||||
auth_env: Env<WithoutTls>,
|
|
||||||
from_db_version: (u32, u32, u32),
|
|
||||||
runtime: Option<tokio::runtime::Handle>,
|
|
||||||
) -> Result<Self> {
|
) -> Result<Self> {
|
||||||
std::fs::create_dir_all(&options.tasks_path)?;
|
std::fs::create_dir_all(&options.tasks_path)?;
|
||||||
std::fs::create_dir_all(&options.update_file_path)?;
|
std::fs::create_dir_all(&options.update_file_path)?;
|
||||||
@@ -316,21 +264,13 @@ impl IndexScheduler {
|
|||||||
let version = versioning::Versioning::new(&env, from_db_version)?;
|
let version = versioning::Versioning::new(&env, from_db_version)?;
|
||||||
|
|
||||||
let mut wtxn = env.write_txn()?;
|
let mut wtxn = env.write_txn()?;
|
||||||
|
|
||||||
let features = features::FeatureData::new(&env, &mut wtxn, options.instance_features)?;
|
let features = features::FeatureData::new(&env, &mut wtxn, options.instance_features)?;
|
||||||
let queue = Queue::new(&env, &mut wtxn, &options)?;
|
let queue = Queue::new(&env, &mut wtxn, &options)?;
|
||||||
let index_mapper = IndexMapper::new(&env, &mut wtxn, &options, budget)?;
|
let index_mapper = IndexMapper::new(&env, &mut wtxn, &options, budget)?;
|
||||||
let chat_settings = env.create_database(&mut wtxn, Some(db_name::CHAT_SETTINGS))?;
|
|
||||||
|
|
||||||
let persisted = env.create_database(&mut wtxn, Some(db_name::PERSISTED))?;
|
|
||||||
let webhooks_db = persisted.remap_data_type::<SerdeJson<Webhooks>>();
|
|
||||||
let mut webhooks = webhooks_db.get(&wtxn, db_keys::WEBHOOKS)?.unwrap_or_default();
|
|
||||||
webhooks
|
|
||||||
.with_cli(options.cli_webhook_url.clone(), options.cli_webhook_authorization.clone());
|
|
||||||
|
|
||||||
wtxn.commit()?;
|
wtxn.commit()?;
|
||||||
|
|
||||||
Ok(Self {
|
// allow unreachable_code to get rids of the warning in the case of a test build.
|
||||||
|
let this = Self {
|
||||||
processing_tasks: Arc::new(RwLock::new(ProcessingTasks::new())),
|
processing_tasks: Arc::new(RwLock::new(ProcessingTasks::new())),
|
||||||
version,
|
version,
|
||||||
queue,
|
queue,
|
||||||
@@ -339,48 +279,23 @@ impl IndexScheduler {
|
|||||||
index_mapper,
|
index_mapper,
|
||||||
env,
|
env,
|
||||||
cleanup_enabled: options.cleanup_enabled,
|
cleanup_enabled: options.cleanup_enabled,
|
||||||
experimental_no_edition_2024_for_dumps: options
|
webhook_url: options.webhook_url,
|
||||||
.indexer_config
|
webhook_authorization_header: options.webhook_authorization_header,
|
||||||
.experimental_no_edition_2024_for_dumps,
|
|
||||||
persisted,
|
|
||||||
webhooks: Arc::new(webhooks),
|
|
||||||
embedders: Default::default(),
|
embedders: Default::default(),
|
||||||
|
|
||||||
#[cfg(test)] // Will be replaced in `new_tests` in test environments
|
#[cfg(test)]
|
||||||
test_breakpoint_sdr: crossbeam_channel::bounded(0).0,
|
test_breakpoint_sdr,
|
||||||
#[cfg(test)] // Will be replaced in `new_tests` in test environments
|
#[cfg(test)]
|
||||||
planned_failures: Default::default(),
|
planned_failures,
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
run_loop_iteration: Arc::new(RwLock::new(0)),
|
run_loop_iteration: Arc::new(RwLock::new(0)),
|
||||||
features,
|
features,
|
||||||
chat_settings,
|
};
|
||||||
runtime,
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Create an index scheduler and start its run loop.
|
|
||||||
#[cfg(test)]
|
|
||||||
fn new_test(
|
|
||||||
options: IndexSchedulerOptions,
|
|
||||||
auth_env: Env<WithoutTls>,
|
|
||||||
from_db_version: (u32, u32, u32),
|
|
||||||
runtime: Option<tokio::runtime::Handle>,
|
|
||||||
test_breakpoint_sdr: crossbeam_channel::Sender<(test_utils::Breakpoint, bool)>,
|
|
||||||
planned_failures: Vec<(usize, test_utils::FailureLocation)>,
|
|
||||||
) -> Result<Self> {
|
|
||||||
let mut this = Self::new_without_run(options, auth_env, from_db_version, runtime)?;
|
|
||||||
|
|
||||||
this.test_breakpoint_sdr = test_breakpoint_sdr;
|
|
||||||
this.planned_failures = planned_failures;
|
|
||||||
|
|
||||||
this.run();
|
this.run();
|
||||||
Ok(this)
|
Ok(this)
|
||||||
}
|
}
|
||||||
|
|
||||||
fn read_txn(&self) -> Result<RoTxn<'_, WithoutTls>> {
|
|
||||||
self.env.read_txn().map_err(|e| e.into())
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Return `Ok(())` if the index scheduler is able to access one of its database.
|
/// Return `Ok(())` if the index scheduler is able to access one of its database.
|
||||||
pub fn health(&self) -> Result<()> {
|
pub fn health(&self) -> Result<()> {
|
||||||
let rtxn = self.env.read_txn()?;
|
let rtxn = self.env.read_txn()?;
|
||||||
@@ -457,16 +372,15 @@ impl IndexScheduler {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pub fn read_txn(&self) -> Result<RoTxn<WithoutTls>> {
|
||||||
|
self.env.read_txn().map_err(|e| e.into())
|
||||||
|
}
|
||||||
|
|
||||||
/// Start the run loop for the given index scheduler.
|
/// Start the run loop for the given index scheduler.
|
||||||
///
|
///
|
||||||
/// This function will execute in a different thread and must be called
|
/// This function will execute in a different thread and must be called
|
||||||
/// only once per index scheduler.
|
/// only once per index scheduler.
|
||||||
fn run(&self) {
|
fn run(&self) {
|
||||||
// If the number of batched tasks is 0, we don't need to run the scheduler at all.
|
|
||||||
// It will never be able to process any tasks.
|
|
||||||
if self.scheduler.max_number_of_batched_tasks == 0 {
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
let run = self.private_clone();
|
let run = self.private_clone();
|
||||||
std::thread::Builder::new()
|
std::thread::Builder::new()
|
||||||
.name(String::from("scheduler"))
|
.name(String::from("scheduler"))
|
||||||
@@ -574,7 +488,7 @@ impl IndexScheduler {
|
|||||||
|
|
||||||
/// Returns the total number of indexes available for the specified filter.
|
/// Returns the total number of indexes available for the specified filter.
|
||||||
/// And a `Vec` of the index_uid + its stats
|
/// And a `Vec` of the index_uid + its stats
|
||||||
pub fn paginated_indexes_stats(
|
pub fn get_paginated_indexes_stats(
|
||||||
&self,
|
&self,
|
||||||
filters: &meilisearch_auth::AuthFilter,
|
filters: &meilisearch_auth::AuthFilter,
|
||||||
from: usize,
|
from: usize,
|
||||||
@@ -615,24 +529,6 @@ impl IndexScheduler {
|
|||||||
ret.map(|ret| (total, ret))
|
ret.map(|ret| (total, ret))
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Returns the total number of chat workspaces available ~~for the specified filter~~.
|
|
||||||
/// And a `Vec` of the workspace_uids
|
|
||||||
pub fn paginated_chat_workspace_uids(
|
|
||||||
&self,
|
|
||||||
from: usize,
|
|
||||||
limit: usize,
|
|
||||||
) -> Result<(usize, Vec<String>)> {
|
|
||||||
let rtxn = self.read_txn()?;
|
|
||||||
let total = self.chat_settings.len(&rtxn)?;
|
|
||||||
let mut iter = self.chat_settings.iter(&rtxn)?.skip(from);
|
|
||||||
iter.by_ref()
|
|
||||||
.take(limit)
|
|
||||||
.map(|ret| ret.map_err(Error::from))
|
|
||||||
.map(|ret| ret.map(|(uid, _)| uid.to_string()))
|
|
||||||
.collect::<Result<Vec<_>, Error>>()
|
|
||||||
.map(|ret| (total as usize, ret))
|
|
||||||
}
|
|
||||||
|
|
||||||
/// The returned structure contains:
|
/// The returned structure contains:
|
||||||
/// 1. The name of the property being observed can be `statuses`, `types`, or `indexes`.
|
/// 1. The name of the property being observed can be `statuses`, `types`, or `indexes`.
|
||||||
/// 2. The name of the specific data related to the property can be `enqueued` for the `statuses`, `settingsUpdate` for the `types`, or the name of the index for the `indexes`, for example.
|
/// 2. The name of the specific data related to the property can be `enqueued` for the `statuses`, `settingsUpdate` for the `types`, or the name of the index for the `indexes`, for example.
|
||||||
@@ -657,11 +553,6 @@ impl IndexScheduler {
|
|||||||
Ok(nbr_index_processing_tasks > 0)
|
Ok(nbr_index_processing_tasks > 0)
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Whether the index should use the old document indexer.
|
|
||||||
pub fn no_edition_2024_for_dumps(&self) -> bool {
|
|
||||||
self.experimental_no_edition_2024_for_dumps
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Return the tasks matching the query from the user's point of view along
|
/// Return the tasks matching the query from the user's point of view along
|
||||||
/// with the total number of tasks matching the query, ignoring from and limit.
|
/// with the total number of tasks matching the query, ignoring from and limit.
|
||||||
///
|
///
|
||||||
@@ -700,16 +591,6 @@ impl IndexScheduler
 self.queue.get_task_ids_from_authorized_indexes(&rtxn, query, filters, &processing)
 }

-pub fn set_task_network(&self, task_id: TaskId, network: TaskNetwork) -> Result<()> {
-let mut wtxn = self.env.write_txn()?;
-let mut task =
-self.queue.tasks.get_task(&wtxn, task_id)?.ok_or(Error::TaskNotFound(task_id))?;
-task.network = Some(network);
-self.queue.tasks.all_tasks.put(&mut wtxn, &task_id, &task)?;
-wtxn.commit()?;
-Ok(())
-}
-
 /// Return the batches matching the query from the user's point of view along
 /// with the total number of batches matching the query, ignoring from and limit.
 ///
@@ -756,19 +637,6 @@ impl IndexScheduler
 kind: KindWithContent,
 task_id: Option<TaskId>,
 dry_run: bool,
-) -> Result<Task> {
-self.register_with_custom_metadata(kind, task_id, None, dry_run)
-}
-
-/// Register a new task in the scheduler, with metadata.
-///
-/// If it fails and data was associated with the task, it tries to delete the associated data.
-pub fn register_with_custom_metadata(
-&self,
-kind: KindWithContent,
-task_id: Option<TaskId>,
-custom_metadata: Option<String>,
-dry_run: bool,
 ) -> Result<Task> {
 // if the task doesn't delete or cancel anything and 40% of the task queue is full, we must refuse to enqueue the incoming task
 if !matches!(&kind, KindWithContent::TaskDeletion { tasks, .. } | KindWithContent::TaskCancelation { tasks, .. } if !tasks.is_empty())
@@ -779,7 +647,7 @@ impl IndexScheduler
 }

 let mut wtxn = self.env.write_txn()?;
-let task = self.queue.register(&mut wtxn, &kind, task_id, custom_metadata, dry_run)?;
+let task = self.queue.register(&mut wtxn, &kind, task_id, dry_run)?;

 // If the registered task is a task cancelation
 // we inform the processing tasks to stop (if necessary).
@@ -803,7 +671,7 @@ impl IndexScheduler

 /// Register a new task coming from a dump in the scheduler.
 /// By taking a mutable ref we're pretty sure no one will ever import a dump while actix is running.
-pub fn register_dumped_task(&mut self) -> Result<Dump<'_>> {
+pub fn register_dumped_task(&mut self) -> Result<Dump> {
 Dump::new(self)
 }

@@ -831,90 +699,86 @@ impl IndexScheduler
 Ok(())
 }

-/// Once the tasks changes have been committed we must send all the tasks that were updated to our webhooks
-fn notify_webhooks(&self, updated: RoaringBitmap) {
-struct TaskReader<'a, 'b> {
-rtxn: &'a RoTxn<'a>,
-index_scheduler: &'a IndexScheduler,
-tasks: &'b mut roaring::bitmap::Iter<'b>,
-buffer: Vec<u8>,
-written: usize,
-}
+/// Once the tasks changes have been committed we must send all the tasks that were updated to our webhook if there is one.
+fn notify_webhook(&self, updated: &RoaringBitmap) -> Result<()> {
+if let Some(ref url) = self.webhook_url {
+struct TaskReader<'a, 'b> {
+rtxn: &'a RoTxn<'a>,
+index_scheduler: &'a IndexScheduler,
+tasks: &'b mut roaring::bitmap::Iter<'b>,
+buffer: Vec<u8>,
+written: usize,
+}

 impl Read for TaskReader<'_, '_> {
 fn read(&mut self, mut buf: &mut [u8]) -> std::io::Result<usize> {
 if self.buffer.is_empty() {
 match self.tasks.next() {
 None => return Ok(0),
 Some(task_id) => {
 let task = self
 .index_scheduler
 .queue
 .tasks
 .get_task(self.rtxn, task_id)
-.map_err(io::Error::other)?
-.ok_or_else(|| io::Error::other(Error::CorruptedTaskQueue))?;
+.map_err(|err| io::Error::new(io::ErrorKind::Other, err))?
+.ok_or_else(|| {
+io::Error::new(
+io::ErrorKind::Other,
+Error::CorruptedTaskQueue,
+)
+})?;

-serde_json::to_writer(&mut self.buffer, &TaskView::from_task(&task))?;
-self.buffer.push(b'\n');
+serde_json::to_writer(
+&mut self.buffer,
+&TaskView::from_task(&task),
+)?;
+self.buffer.push(b'\n');
+}
 }
 }

+let mut to_write = &self.buffer[self.written..];
+let wrote = io::copy(&mut to_write, &mut buf)?;
+self.written += wrote as usize;
+
+// we wrote everything and must refresh our buffer on the next call
+if self.written == self.buffer.len() {
+self.written = 0;
+self.buffer.clear();
+}
+
+Ok(wrote as usize)
 }
+}

-let mut to_write = &self.buffer[self.written..];
-let wrote = io::copy(&mut to_write, &mut buf)?;
-self.written += wrote as usize;
+let rtxn = self.env.read_txn()?;

-// we wrote everything and must refresh our buffer on the next call
-if self.written == self.buffer.len() {
-self.written = 0;
-self.buffer.clear();
-}
+let task_reader = TaskReader {
+rtxn: &rtxn,
+index_scheduler: self,
+tasks: &mut updated.into_iter(),
+buffer: Vec::with_capacity(50), // on average a task is around ~100 bytes
+written: 0,
+};

-Ok(wrote as usize)
+// let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
+let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
+let request = ureq::post(url)
+.timeout(Duration::from_secs(30))
+.set("Content-Encoding", "gzip")
+.set("Content-Type", "application/x-ndjson");
+let request = match &self.webhook_authorization_header {
+Some(header) => request.set("Authorization", header),
+None => request,
+};

+if let Err(e) = request.send(reader) {
+tracing::error!("While sending data to the webhook: {e}");
 }
 }

-let webhooks = self.webhooks.get_all();
-if webhooks.is_empty() {
-return;
-}
-let this = self.private_clone();
-// We must take the RoTxn before entering the thread::spawn otherwise another batch may be
-// processed before we had the time to take our txn.
-let rtxn = match self.env.clone().static_read_txn() {
-Ok(rtxn) => rtxn,
-Err(e) => {
-tracing::error!("Couldn't get an rtxn to notify the webhook: {e}");
-return;
-}
-};
-
-std::thread::spawn(move || {
-for (uuid, Webhook { url, headers }) in webhooks.iter() {
-let task_reader = TaskReader {
-rtxn: &rtxn,
-index_scheduler: &this,
-tasks: &mut updated.iter(),
-buffer: Vec::with_capacity(page_size::get()),
-written: 0,
-};
-
-let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
-
-let mut request = ureq::post(url)
-.timeout(Duration::from_secs(30))
-.set("Content-Encoding", "gzip")
-.set("Content-Type", "application/x-ndjson");
-for (header_name, header_value) in headers.iter() {
-request = request.set(header_name, header_value);
-}
-
-if let Err(e) = request.send(reader) {
-tracing::error!("While sending data to the webhook {uuid}: {e}");
-}
-}
-});
+Ok(())
 }

 pub fn index_stats(&self, index_uid: &str) -> Result<IndexStats> {
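
Both sides of the hunk above stream the updated tasks to the webhook through a type that implements `std::io::Read` and serializes tasks to NDJSON lazily, so the body can be gzipped and sent without ever materializing the whole payload. A minimal, self-contained sketch of that pattern (illustrative only — `NdjsonReader` and its fields are invented for this example and are not the `TaskReader` defined in the diff):

```rust
use std::io::{self, Read};

use serde::Serialize;

/// Streams any iterator of serializable items as NDJSON through `Read`,
/// serializing one item at a time into a small reusable buffer.
struct NdjsonReader<I> {
    items: I,
    buffer: Vec<u8>,
    written: usize,
}

impl<I, T> Read for NdjsonReader<I>
where
    I: Iterator<Item = T>,
    T: Serialize,
{
    fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {
        // Refill the buffer with the next serialized item when it is empty.
        if self.buffer.is_empty() {
            match self.items.next() {
                None => return Ok(0), // iterator exhausted => EOF
                Some(item) => {
                    serde_json::to_writer(&mut self.buffer, &item).map_err(io::Error::other)?;
                    self.buffer.push(b'\n');
                }
            }
        }
        // Copy as much of the pending buffer as fits into `buf`.
        let mut pending = &self.buffer[self.written..];
        let wrote = io::copy(&mut pending, &mut buf)?;
        self.written += wrote as usize;
        // Everything was consumed: reset so the next call serializes a new item.
        if self.written == self.buffer.len() {
            self.written = 0;
            self.buffer.clear();
        }
        Ok(wrote as usize)
    }
}
```

A reader like this can be wrapped in a gzip encoder and handed directly to an HTTP client as the request body, which is how both versions in the diff keep the webhook payload streaming.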
@@ -945,69 +809,33 @@ impl IndexScheduler
 self.features.network()
 }

-pub fn update_runtime_webhooks(&self, runtime: RuntimeWebhooks) -> Result<()> {
-let webhooks = Webhooks::from_runtime(runtime);
-let mut wtxn = self.env.write_txn()?;
-let webhooks_db = self.persisted.remap_data_type::<SerdeJson<Webhooks>>();
-webhooks_db.put(&mut wtxn, db_keys::WEBHOOKS, &webhooks)?;
-wtxn.commit()?;
-self.webhooks.update_runtime(webhooks.into_runtime());
-Ok(())
-}
-
-pub fn webhooks_dump_view(&self) -> WebhooksDumpView {
-// We must not dump the cli api key
-WebhooksDumpView { webhooks: self.webhooks.get_runtime() }
-}
-
-pub fn webhooks_view(&self) -> WebhooksView {
-WebhooksView { webhooks: self.webhooks.get_all() }
-}
-
-pub fn retrieve_runtime_webhooks(&self) -> RuntimeWebhooks {
-self.webhooks.get_runtime()
-}
-
 pub fn embedders(
 &self,
 index_uid: String,
 embedding_configs: Vec<IndexEmbeddingConfig>,
-) -> Result<RuntimeEmbedders> {
+) -> Result<EmbeddingConfigs> {
 let res: Result<_> = embedding_configs
 .into_iter()
 .map(
 |IndexEmbeddingConfig {
 name,
 config: milli::vector::EmbeddingConfig { embedder_options, prompt, quantized },
-fragments,
-}|
--> Result<(String, Arc<RuntimeEmbedder>)> {
-let document_template = prompt
+..
+}| {
+let prompt = Arc::new(
+prompt
 .try_into()
 .map_err(meilisearch_types::milli::Error::from)
-.map_err(|err| Error::from_milli(err, Some(index_uid.clone())))?;
-let fragments = fragments
-.into_inner()
-.into_iter()
-.map(|fragment| {
-let value = embedder_options.fragment(&fragment.name).unwrap();
-let template = JsonTemplate::new(value.clone()).unwrap();
-RuntimeFragment { name: fragment.name, id: fragment.id, template }
-})
-.collect();
+.map_err(|err| Error::from_milli(err, Some(index_uid.clone())))?,
+);
 // optimistically return existing embedder
 {
 let embedders = self.embedders.read().unwrap();
 if let Some(embedder) = embedders.get(&embedder_options) {
-let runtime = Arc::new(RuntimeEmbedder::new(
-embedder.clone(),
-document_template,
-fragments,
-quantized.unwrap_or_default(),
+return Ok((
+name,
+(embedder.clone(), prompt, quantized.unwrap_or_default()),
 ));
-
-return Ok((name, runtime));
 }
 }

@@ -1023,44 +851,11 @@ impl IndexScheduler
 let mut embedders = self.embedders.write().unwrap();
 embedders.insert(embedder_options, embedder.clone());
 }
-let runtime = Arc::new(RuntimeEmbedder::new(
-embedder.clone(),
-document_template,
-fragments,
-quantized.unwrap_or_default(),
-));
-
-Ok((name, runtime))
+Ok((name, (embedder, prompt, quantized.unwrap_or_default())))
 },
 )
 .collect();
-res.map(RuntimeEmbedders::new)
-}
-
-pub fn chat_settings(&self, uid: &str) -> Result<Option<ChatCompletionSettings>> {
-let rtxn = self.env.read_txn()?;
-self.chat_settings.get(&rtxn, uid).map_err(Into::into)
-}
-
-/// Return true if chat workspace exists.
-pub fn chat_workspace_exists(&self, name: &str) -> Result<bool> {
-let rtxn = self.env.read_txn()?;
-Ok(self.chat_settings.remap_data_type::<DecodeIgnore>().get(&rtxn, name)?.is_some())
-}
-
-pub fn put_chat_settings(&self, uid: &str, settings: &ChatCompletionSettings) -> Result<()> {
-let mut wtxn = self.env.write_txn()?;
-self.chat_settings.put(&mut wtxn, uid, settings)?;
-wtxn.commit()?;
-Ok(())
-}
-
-pub fn delete_chat_settings(&self, uid: &str) -> Result<bool> {
-let mut wtxn = self.env.write_txn()?;
-let deleted = self.chat_settings.delete(&mut wtxn, uid)?;
-wtxn.commit()?;
-Ok(deleted)
-}
+res.map(EmbeddingConfigs::new)
 }
 }

@@ -1096,72 +891,3 @@ pub struct IndexStats
 /// Internal stats computed from the index.
 pub inner_stats: index_mapper::IndexStats,
 }
-
-/// These structure are not meant to be exposed to the end user, if needed, use the meilisearch-types::webhooks structure instead.
-/// /!\ Everytime you deserialize this structure you should fill the cli_webhook later on with the `with_cli` method. /!\
-#[derive(Debug, Serialize, Deserialize, Default)]
-#[serde(rename_all = "camelCase")]
-struct Webhooks {
-// The cli webhook should *never* be stored in a database.
-// It represent a state that only exists for this execution of meilisearch
-#[serde(skip)]
-pub cli: Option<CliWebhook>,
-
-#[serde(default)]
-pub runtime: RwLock<RuntimeWebhooks>,
-}
-
-type RuntimeWebhooks = BTreeMap<Uuid, Webhook>;
-
-impl Webhooks {
-pub fn with_cli(&mut self, url: Option<String>, auth: Option<String>) {
-if let Some(url) = url {
-let webhook = CliWebhook { url, auth };
-self.cli = Some(webhook);
-}
-}
-
-pub fn from_runtime(webhooks: RuntimeWebhooks) -> Self {
-Self { cli: None, runtime: RwLock::new(webhooks) }
-}
-
-pub fn into_runtime(self) -> RuntimeWebhooks {
-// safe because we own self and it cannot be cloned
-self.runtime.into_inner().unwrap()
-}
-
-pub fn update_runtime(&self, webhooks: RuntimeWebhooks) {
-*self.runtime.write().unwrap() = webhooks;
-}
-
-/// Returns all the webhooks in an unified view. The cli webhook is represented with an uuid set to 0
-pub fn get_all(&self) -> BTreeMap<Uuid, Webhook> {
-self.cli
-.as_ref()
-.map(|wh| (Uuid::nil(), Webhook::from(wh)))
-.into_iter()
-.chain(self.runtime.read().unwrap().iter().map(|(uuid, wh)| (*uuid, wh.clone())))
-.collect()
-}
-
-/// Returns all the runtime webhooks.
-pub fn get_runtime(&self) -> BTreeMap<Uuid, Webhook> {
-self.runtime.read().unwrap().iter().map(|(uuid, wh)| (*uuid, wh.clone())).collect()
-}
-}
-
-#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq)]
-struct CliWebhook {
-pub url: String,
-pub auth: Option<String>,
-}
-
-impl From<&CliWebhook> for Webhook {
-fn from(webhook: &CliWebhook) -> Self {
-let mut headers = BTreeMap::new();
-if let Some(ref auth) = webhook.auth {
-headers.insert("Authorization".to_string(), auth.to_string());
-}
-Self { url: webhook.url.to_string(), headers }
-}
-}
@@ -75,7 +75,6 @@ make_enum_progress! {
 pub enum TaskCancelationProgress {
 RetrievingTasks,
 CancelingUpgrade,
-CleaningCompactionLeftover,
 UpdatingTasks,
 }
 }
@@ -104,12 +103,10 @@ make_enum_progress! {
 pub enum DumpCreationProgress {
 StartTheDumpCreation,
 DumpTheApiKeys,
-DumpTheChatCompletionSettings,
 DumpTheTasks,
 DumpTheBatches,
 DumpTheIndexes,
 DumpTheExperimentalFeatures,
-DumpTheWebhooks,
 CompressTheDump,
 }
 }
@@ -139,17 +136,6 @@ make_enum_progress! {
 }
 }

-make_enum_progress! {
-pub enum IndexCompaction {
-RetrieveTheIndex,
-CreateTemporaryFile,
-CopyAndCompactTheIndex,
-PersistTheCompactedIndex,
-CloseTheIndex,
-ReopenTheIndex,
-}
-}
-
 make_enum_progress! {
 pub enum InnerSwappingTwoIndexes {
 RetrieveTheTasks,
@@ -189,17 +175,8 @@ make_enum_progress! {
 }
 }

-make_enum_progress! {
-pub enum Export {
-EnsuringCorrectnessOfTheTarget,
-ExportingTheSettings,
-ExportingTheDocuments,
-}
-}
-
 make_atomic_progress!(Task alias AtomicTaskStep => "task" );
 make_atomic_progress!(Document alias AtomicDocumentStep => "document" );
-make_atomic_progress!(Index alias AtomicIndexStep => "index" );
 make_atomic_progress!(Batch alias AtomicBatchStep => "batch" );
 make_atomic_progress!(UpdateFile alias AtomicUpdateFileStep => "update file" );

@@ -179,7 +179,6 @@ impl BatchQueue
 progress: None,
 details: batch.details,
 stats: batch.stats,
-embedder_stats: batch.embedder_stats.as_ref().into(),
 started_at: batch.started_at,
 finished_at: batch.finished_at,
 enqueued_at: batch.enqueued_at,
@@ -275,27 +274,19 @@ impl BatchQueue
 pub(crate) fn get_existing_batches(
 &self,
 rtxn: &RoTxn,
-batches: impl IntoIterator<Item = BatchId>,
+tasks: impl IntoIterator<Item = BatchId>,
 processing: &ProcessingTasks,
 ) -> Result<Vec<Batch>> {
-batches
+tasks
 .into_iter()
 .map(|batch_id| {
 if Some(batch_id) == processing.batch.as_ref().map(|batch| batch.uid) {
 let mut batch = processing.batch.as_ref().unwrap().to_batch();
 batch.progress = processing.get_progress_view();
-// Add progress_trace from the current progress state
-if let Some(progress) = &processing.progress {
-batch.stats.progress_trace = progress
-.accumulated_durations()
-.into_iter()
-.map(|(k, v)| (k, v.into()))
-.collect();
-}
 Ok(batch)
 } else {
 self.get_batch(rtxn, batch_id)
-.and_then(|batch| batch.ok_or(Error::CorruptedTaskQueue))
+.and_then(|task| task.ok_or(Error::CorruptedTaskQueue))
 }
 })
 .collect::<Result<_>>()
@@ -104,15 +104,6 @@ fn query_batches_simple()
 batches[0].started_at = OffsetDateTime::UNIX_EPOCH;
 assert!(batches[0].enqueued_at.is_some());
 batches[0].enqueued_at = None;

-if !batches[0].stats.progress_trace.is_empty() {
-batches[0].stats.progress_trace.clear();
-batches[0]
-.stats
-.progress_trace
-.insert("processing tasks".to_string(), "deterministic_duration".into());
-}
-
 // Insta cannot snapshot our batches because the batch stats contains an enum as key: https://github.com/mitsuhiko/insta/issues/689
 let batch = serde_json::to_string_pretty(&batches[0]).unwrap();
 snapshot!(batch, @r###"
@@ -131,15 +122,12 @@ fn query_batches_simple()
 },
 "indexUids": {
 "catto": 1
-},
-"progressTrace": {
-"processing tasks": "deterministic_duration"
 }
 },
 "startedAt": "1970-01-01T00:00:00Z",
 "finishedAt": null,
 "enqueuedAt": null,
-"stopReason": "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task."
+"stopReason": "task with id 0 of type `indexCreation` cannot be batched"
 }
 "###);

@@ -346,11 +334,11 @@ fn query_batches_special_rules()
 let kind = index_creation_task("doggo", "sheep");
 let _task = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -454,7 +442,7 @@ fn query_batches_canceled_by()
 let kind = index_creation_task("doggo", "sheep");
 let _ = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -257,7 +257,6 @@ impl Queue
 wtxn: &mut RwTxn,
 kind: &KindWithContent,
 task_id: Option<TaskId>,
-custom_metadata: Option<String>,
 dry_run: bool,
 ) -> Result<Task> {
 let next_task_id = self.tasks.next_task_id(wtxn)?;
@@ -280,8 +279,6 @@ impl Queue
 details: kind.default_details(),
 status: Status::Enqueued,
 kind: kind.clone(),
-network: None,
-custom_metadata,
 };
 // For deletion and cancelation tasks, we want to make extra sure that they
 // don't attempt to delete/cancel tasks that are newer than themselves.
@@ -312,8 +309,7 @@ impl Queue
 | self.tasks.status.get(wtxn, &Status::Failed)?.unwrap_or_default()
 | self.tasks.status.get(wtxn, &Status::Canceled)?.unwrap_or_default();

-let to_delete =
-RoaringBitmap::from_sorted_iter(finished.into_iter().take(100_000)).unwrap();
+let to_delete = RoaringBitmap::from_iter(finished.into_iter().rev().take(100_000));

 // /!\ the len must be at least 2 or else we might enter an infinite loop where we only delete
 // the deletion tasks we enqueued ourselves.
@@ -329,7 +325,7 @@ impl Queue
 );

 // it's safe to unwrap here because we checked the len above
-let newest_task_id = to_delete.iter().next_back().unwrap();
+let newest_task_id = to_delete.iter().last().unwrap();
 let last_task_to_delete =
 self.tasks.get_task(wtxn, newest_task_id)?.ok_or(Error::CorruptedTaskQueue)?;

@@ -346,7 +342,6 @@ impl Queue
 tasks: to_delete,
 },
 None,
-None,
 false,
 )?;

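
A side note on the `to_delete` change in the hunk above: in the `roaring` crate, `RoaringBitmap::from_sorted_iter` only accepts strictly ascending input and returns a `Result` (hence the `.unwrap()` on the left-hand side, which is safe because a bitmap iterates in ascending order), while collecting through `FromIterator` accepts any order, which is why the right-hand side can take the newest ids with `.rev()`. A small standalone sketch of the difference (illustrative only, not scheduler code; assumes a recent `roaring` release where the bitmap iterator is double-ended):

```rust
use roaring::RoaringBitmap;

fn main() {
    // Stand-in for the `finished` set of task ids kept by the scheduler.
    let finished: RoaringBitmap = (0..10u32).collect();

    // Ascending input: `from_sorted_iter` succeeds and is the cheap path.
    let oldest_first =
        RoaringBitmap::from_sorted_iter(finished.iter().take(5)).expect("input is sorted");

    // Reversed input is not sorted, so it must go through `FromIterator` instead.
    let newest_first: RoaringBitmap = finished.iter().rev().take(5).collect();

    assert_eq!(oldest_first.iter().collect::<Vec<_>>(), vec![0, 1, 2, 3, 4]);
    assert_eq!(newest_first.iter().collect::<Vec<_>>(), vec![5, 6, 7, 8, 9]);
}
```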
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
 3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
 ----------------------------------------------------------------------
 ### Status:
@@ -48,8 +48,8 @@ catto: { number_of_documents: 0, field_distribution: {} }
 [timestamp] [1,2,3,]
 ----------------------------------------------------------------------
 ### All Batches:
-0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
-1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
+1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 3 of type `taskCancelation` cannot be batched", }
 ----------------------------------------------------------------------
 ### Batch to tasks mapping:
 0 [0,]
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
-1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
-2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
+1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
+2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued []
@@ -47,9 +47,9 @@ whalo: { number_of_documents: 0, field_distribution: {} }
 [timestamp] [2,]
 ----------------------------------------------------------------------
 ### All Batches:
-0 {uid: 0, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
-1 {uid: 1, details: {"primaryKey":"plankton"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
-2 {uid: 2, details: {"primaryKey":"his_own_vomit"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 2 of type `indexCreation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
+1 {uid: 1, details: {"primaryKey":"plankton"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "task with id 1 of type `indexCreation` cannot be batched", }
+2 {uid: 2, details: {"primaryKey":"his_own_vomit"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 2 of type `indexCreation` cannot be batched", }
 ----------------------------------------------------------------------
 ### Batch to tasks mapping:
 0 [0,]
@@ -1,12 +1,13 @@
 ---
 source: crates/index-scheduler/src/queue/batches_test.rs
+snapshot_kind: text
 ---
 ### Autobatching Enabled = true
 ### Processing batch None:
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
+0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [0,]
@@ -1,13 +1,14 @@
 ---
 source: crates/index-scheduler/src/queue/batches_test.rs
+snapshot_kind: text
 ---
 ### Autobatching Enabled = true
 ### Processing batch None:
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
-1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
+0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
+1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [0,1,]
@@ -1,14 +1,15 @@
 ---
 source: crates/index-scheduler/src/queue/batches_test.rs
+snapshot_kind: text
 ---
 ### Autobatching Enabled = true
 ### Processing batch None:
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
-1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
-2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
+0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
+1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
+2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [0,1,2,]
@@ -4,12 +4,12 @@ source: crates/index-scheduler/src/queue/batches_test.rs
 ### Autobatching Enabled = true
 ### Processing batch Some(1):
 [1,]
-{uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
+{uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 1 of type `indexCreation` cannot be batched", }
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [1,2,]
@@ -42,7 +42,7 @@ catto: { number_of_documents: 0, field_distribution: {} }
 [timestamp] [0,]
 ----------------------------------------------------------------------
 ### All Batches:
-0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
 ----------------------------------------------------------------------
 ### Batch to tasks mapping:
 0 [0,]
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued []
@@ -47,9 +47,9 @@ doggo: { number_of_documents: 0, field_distribution: {} }
 [timestamp] [2,]
 ----------------------------------------------------------------------
 ### All Batches:
-0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
-1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
-2 {uid: 2, details: {"primaryKey":"fish"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "created batch containing only task with id 2 of type `indexCreation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
+1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 1 of type `indexCreation` cannot be batched", }
+2 {uid: 2, details: {"primaryKey":"fish"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "task with id 2 of type `indexCreation` cannot be batched", }
 ----------------------------------------------------------------------
 ### Batch to tasks mapping:
 0 [0,]
@@ -1,14 +1,15 @@
 ---
 source: crates/index-scheduler/src/queue/batches_test.rs
+snapshot_kind: text
 ---
 ### Autobatching Enabled = true
 ### Processing batch None:
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
+0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [0,1,2,]
@@ -6,10 +6,10 @@ source: crates/index-scheduler/src/queue/batches_test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
-3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
+3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued []
@@ -52,10 +52,10 @@ doggo: { number_of_documents: 0, field_distribution: {} }
 [timestamp] [3,]
 ----------------------------------------------------------------------
 ### All Batches:
-0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
-1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
-2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 2 of type `indexSwap` that cannot be batched with any other task.", }
-3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 3 of type `indexSwap` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
+1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 1 of type `indexCreation` cannot be batched", }
+2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "task with id 2 of type `indexSwap` cannot be batched", }
+3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "task with id 3 of type `indexSwap` cannot be batched", }
 ----------------------------------------------------------------------
 ### Batch to tasks mapping:
 0 [0,]
@@ -1,15 +1,16 @@
 ---
 source: crates/index-scheduler/src/queue/batches_test.rs
+snapshot_kind: text
 ---
 ### Autobatching Enabled = true
 ### Processing batch None:
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
-3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
+0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
+3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
 ----------------------------------------------------------------------
 ### Status:
 enqueued [0,1,2,3,]
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
-1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
+1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
-2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
+2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
@@ -48,8 +48,8 @@ catto: { number_of_documents: 0, field_distribution: {} }
[timestamp] [1,2,3,]
----------------------------------------------------------------------
### All Batches:
-0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
-1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
+1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 3 of type `taskCancelation` cannot be batched", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
-1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
+1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
-2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
+2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued []
@@ -47,9 +47,9 @@ whalo: { number_of_documents: 0, field_distribution: {} }
[timestamp] [2,]
----------------------------------------------------------------------
### All Batches:
-0 {uid: 0, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
+0 {uid: 0, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "task with id 0 of type `indexCreation` cannot be batched", }
-1 {uid: 1, details: {"primaryKey":"plankton"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
+1 {uid: 1, details: {"primaryKey":"plankton"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"whalo":1}}, stop reason: "task with id 1 of type `indexCreation` cannot be batched", }
-2 {uid: 2, details: {"primaryKey":"his_own_vomit"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 2 of type `indexCreation` that cannot be batched with any other task.", }
+2 {uid: 2, details: {"primaryKey":"his_own_vomit"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "task with id 2 of type `indexCreation` cannot be batched", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]
Some files were not shown because too many files have changed in this diff.