Mirror of https://github.com/meilisearch/meilisearch.git (synced 2025-11-23 13:16:33 +00:00)

Compare commits: upgrade-te... ... dump-vecto... (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | f0b55e0349 |  |
@@ -1,34 +1,31 @@
 ---
-name: New feature issue
+name: New sprint issue
-about: ⚠️ Should only be used by the internal Meili team ⚠️
+about: ⚠️ Should only be used by the engine team ⚠️
 title: ''
-labels: 'impacts docs, impacts integrations'
+labels: 'missing usage in PRD, impacts docs'
 assignees: ''

 ---

 Related product team resources: [PRD]() (_internal only_)
+Related product discussion:

+## Motivation

+<!---Copy/paste the information in PRD or briefly detail the product motivation. Ask product team if any hesitation.-->

 ## Usage

 <!---Link to the public part of the PRD, or to the related product discussion for experimental features-->

-TBD

 ## TODO

 <!---If necessary, create a list with technical/product steps-->

 ### Are you modifying a database?

 - [ ] If not, add the `no db change` label to your PR, and you're good to merge.
 - [ ] If yes, add the `db change` label to your PR. You'll receive a message explaining you what to do.

-### Reminders when adding features

-- [ ] Write unit tests using insta
-- [ ] Write declarative integration tests in [workloads/tests](https://github.com/meilisearch/meilisearch/tree/main/workloads/test). Specify the routes to call and then call `cargo xtask test workloads/tests/YOUR_TEST.json --update-responses` so that responses are automatically filled.

 ### Reminders when modifying the API

 - [ ] Update the openAPI file with utoipa:

@@ -57,5 +54,5 @@ TBD

 ## Impacted teams

-<!---Ping the related teams. Ask on Slack if any hesitation-->
+<!---Ping the related teams. Ask for the engine manager if any hesitation-->
-<!---@meilisearch/docs-team and @meilisearch/integration-team when there is any API change, e.g. settings addition-->
+<!---@meilisearch/docs-team when there is any API change, e.g. settings addition-->
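The "Update the openAPI file with utoipa" reminder that both versions of the template keep amounts to annotating the handler and registering it in the top-level OpenAPI document. A minimal, self-contained sketch of that pattern is shown below; the `health` route and `HealthResponse` type are placeholder names used for illustration, not Meilisearch's actual code.

```rust
// Sketch only: how a route is typically documented with utoipa so that it
// ends up in the generated openAPI file. `health` and `HealthResponse` are
// hypothetical names, not taken from the Meilisearch codebase.
use serde::Serialize;
use utoipa::{OpenApi, ToSchema};

#[derive(Serialize, ToSchema)]
struct HealthResponse {
    // Reported status of the instance, e.g. "available".
    status: String,
}

// The `path` attribute records the route, its HTTP method, and its responses.
#[utoipa::path(
    get,
    path = "/health",
    responses((status = 200, description = "The instance is up", body = HealthResponse))
)]
async fn health() -> HealthResponse {
    HealthResponse { status: "available".into() }
}

// The top-level document aggregates every annotated path and schema;
// serializing it produces the JSON that becomes the openAPI file.
#[derive(OpenApi)]
#[openapi(paths(health), components(schemas(HealthResponse)))]
struct ApiDoc;

fn main() {
    println!("{}", ApiDoc::openapi().to_pretty_json().unwrap());
}
```

Redirecting that output to a JSON file is, in spirit, what the `--output meilisearch.json` invocations later in this diff do.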
.github/pull_request_template.md (vendored), 16 lines, file removed:

## Related issue

Fixes #...

## Requirements

⚠️ Ensure the following requirements before merging ⚠️
- [ ] Automated tests have been added.
- [ ] If some tests cannot be automated, manual rigorous tests should be applied.
- [ ] ⚠️ If there is any change in the DB:
  - [ ] Test that any impacted DB still works as expected after using `--experimental-dumpless-upgrade` on a DB created with the last released Meilisearch
  - [ ] Test that during the upgrade, **search is still available** (artificially make the upgrade longer if needed)
  - [ ] Set the `db change` label.
- [ ] If necessary, the feature have been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
- [ ] If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
- [ ] If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.
.github/release-draft-template.yml (vendored), 33 lines, file removed:

name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
exclude-labels:
  - 'skip changelog'
version-resolver:
  minor:
    labels:
      - 'enhancement'
  default: patch
categories:
  - title: '⚠️ Breaking changes'
    label: 'breaking-change'
  - title: '🚀 Enhancements'
    label: 'enhancement'
  - title: '🐛 Bug Fixes'
    label: 'bug'
  - title: '🔒 Security'
    label: 'security'
  - title: '⚙️ Maintenance/misc'
    label:
      - 'maintenance'
      - 'documentation'
template: |
  $CHANGES

  ❤️ Huge thanks to our contributors: $CONTRIBUTORS.
no-changes-template: 'Changes are coming soon 😎'
sort-direction: 'ascending'
replacers:
  - search: '/(?:and )?@dependabot-preview(?:\[bot\])?,?/g'
    replace: ''
  - search: '/(?:and )?@dependabot(?:\[bot\])?,?/g'
    replace: ''
.github/templates/dependency-issue.md (vendored), 22 lines, file removed:

This issue is about updating Meilisearch dependencies:
- [ ] Update Meilisearch dependencies with the help of `cargo +nightly udeps --all-targets` (remove unused dependencies) and `cargo upgrade` (upgrade dependencies versions) - ⚠️ Some repositories may contain subdirectories (like heed, charabia, or deserr). Take care of updating these in the main crate as well. This won't be done automatically by `cargo upgrade`.
  - [ ] [deserr](https://github.com/meilisearch/deserr)
  - [ ] [charabia](https://github.com/meilisearch/charabia/)
  - [ ] [heed](https://github.com/meilisearch/heed/)
  - [ ] [roaring-rs](https://github.com/RoaringBitmap/roaring-rs/)
  - [ ] [obkv](https://github.com/meilisearch/obkv)
  - [ ] [grenad](https://github.com/meilisearch/grenad/)
  - [ ] [arroy](https://github.com/meilisearch/arroy/)
  - [ ] [segment](https://github.com/meilisearch/segment)
  - [ ] [bumparaw-collections](https://github.com/meilisearch/bumparaw-collections)
  - [ ] [bbqueue](https://github.com/meilisearch/bbqueue)
  - [ ] Finally, [Meilisearch](https://github.com/meilisearch/MeiliSearch)
- [ ] If new Rust versions have been released, update the minimal Rust version in use at Meilisearch:
  - [ ] in this [GitHub Action file](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows/test-suite.yml), by changing the `toolchain` field of the `rustfmt` job to the latest available nightly (of the day before or the current day).
  - [ ] in every [GitHub Action files](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows), by changing all the `dtolnay/rust-toolchain@` references to use the latest stable version.
  - [ ] in this [`rust-toolchain.toml`](https://github.com/meilisearch/meilisearch/blob/main/rust-toolchain.toml), by changing the `channel` field to the latest stable version.
  - [ ] in the [Dockerfile](https://github.com/meilisearch/meilisearch/blob/main/Dockerfile), by changing the base image to `rust:<target_rust_version>-alpine<alpine_version>`. Check that the image exists on [Dockerhub](https://hub.docker.com/_/rust/tags?page=1&name=alpine). Also, build and run the image to check everything still works!

⚠️ This issue should be prioritized to avoid any deprecation and vulnerability issues.

The GitHub action dependencies are managed by [Dependabot](https://github.com/meilisearch/meilisearch/blob/main/.github/dependabot.yml), so no need to update them when solving this issue.
.github/workflows/check-valid-milestone.yml (vendored), new file, 100 lines:

name: PR Milestone Check

on:
  pull_request:
    types: [opened, reopened, edited, synchronize, milestoned, demilestoned]
    branches:
      - "main"
      - "release-v*.*.*"

jobs:
  check-milestone:
    name: Check PR Milestone
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Validate PR milestone
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            // Get PR number directly from the event payload
            const prNumber = context.payload.pull_request.number;

            // Get PR details
            const { data: prData } = await github.rest.pulls.get({
              owner: 'meilisearch',
              repo: 'meilisearch',
              pull_number: prNumber
            });

            // Get base branch name
            const baseBranch = prData.base.ref;
            console.log(`Base branch: ${baseBranch}`);

            // Get PR milestone
            const prMilestone = prData.milestone;
            if (!prMilestone) {
              core.setFailed('PR must have a milestone assigned');
              return;
            }
            console.log(`PR milestone: ${prMilestone.title}`);

            // Validate milestone format: vx.y.z
            const milestoneRegex = /^v\d+\.\d+\.\d+$/;
            if (!milestoneRegex.test(prMilestone.title)) {
              core.setFailed(`Milestone "${prMilestone.title}" does not follow the required format vx.y.z`);
              return;
            }

            // For main branch PRs, check if the milestone is the highest one
            if (baseBranch === 'main') {
              // Get all milestones
              const { data: milestones } = await github.rest.issues.listMilestones({
                owner: 'meilisearch',
                repo: 'meilisearch',
                state: 'open',
                sort: 'due_on',
                direction: 'desc'
              });

              // Sort milestones by version number (vx.y.z)
              const sortedMilestones = milestones
                .filter(m => milestoneRegex.test(m.title))
                .sort((a, b) => {
                  const versionA = a.title.substring(1).split('.').map(Number);
                  const versionB = b.title.substring(1).split('.').map(Number);

                  // Compare major version
                  if (versionA[0] !== versionB[0]) return versionB[0] - versionA[0];
                  // Compare minor version
                  if (versionA[1] !== versionB[1]) return versionB[1] - versionA[1];
                  // Compare patch version
                  return versionB[2] - versionA[2];
                });

              if (sortedMilestones.length === 0) {
                core.setFailed('No valid milestones found in the repository. Please create at least one milestone with the format vx.y.z');
                return;
              }

              const highestMilestone = sortedMilestones[0];
              console.log(`Highest milestone: ${highestMilestone.title}`);

              if (prMilestone.title !== highestMilestone.title) {
                core.setFailed(`PRs targeting the main branch must use the highest milestone (${highestMilestone.title}), but this PR uses ${prMilestone.title}`);
                return;
              }
            } else {
              // For release branches, the milestone should match the branch version
              const branchVersion = baseBranch.substring(8); // remove 'release-'
              if (prMilestone.title !== branchVersion) {
                core.setFailed(`PRs targeting release branch "${baseBranch}" must use the matching milestone "${branchVersion}", but this PR uses "${prMilestone.title}"`);
                return;
              }
            }

            console.log('PR milestone validation passed!');
.github/workflows/dependency-issue.yml (vendored), 2 changed lines:

@@ -15,7 +15,7 @@ jobs:
     steps:
       - uses: actions/checkout@v3
       - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/meilisearch/main/.github/templates/dependency-issue.md > $ISSUE_TEMPLATE
+        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/dependency-issue.md > $ISSUE_TEMPLATE
      - name: Create issue
        run: |
          gh issue create \
.github/workflows/flaky-tests.yml (vendored), 2 changed lines:

@@ -3,7 +3,7 @@ name: Look for flaky tests
 on:
   workflow_dispatch:
   schedule:
-    - cron: '0 4 * * *' # Every day at 4:00AM
+    - cron: "0 12 * * FRI" # Every Friday at 12:00PM

 jobs:
   flaky:
.github/workflows/milestone-workflow.yml (vendored), new file, 224 lines:

name: Milestone's workflow

# /!\ No git flow are handled here

# For each Milestone created (not opened!), and if the release is NOT a patch release (only the patch changed)
# - the roadmap issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/roadmap-issue.md
# - the changelog issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/changelog-issue.md
# - update the ruleset to add the current release version to the list of allowed versions and be able to use the merge queue.

# For each Milestone closed
# - the `release_version` label is created
# - this label is applied to all issues/PRs in the Milestone

on:
  milestone:
    types: [created, closed]

env:
  MILESTONE_VERSION: ${{ github.event.milestone.title }}
  MILESTONE_URL: ${{ github.event.milestone.html_url }}
  MILESTONE_DUE_ON: ${{ github.event.milestone.due_on }}
  GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}

jobs:
  # -----------------
  # MILESTONE CREATED
  # -----------------

  get-release-version:
    if: github.event.action == 'created'
    runs-on: ubuntu-latest
    outputs:
      is-patch: ${{ steps.check-patch.outputs.is-patch }}
    steps:
      - uses: actions/checkout@v3
      - name: Check if this release is a patch release only
        id: check-patch
        run: |
          echo version: $MILESTONE_VERSION
          if [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
            echo 'This is NOT a patch release'
            echo "is-patch=false" >> $GITHUB_OUTPUT
          elif [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo 'This is a patch release'
            echo "is-patch=true" >> $GITHUB_OUTPUT
          else
            echo "Not a valid format of release, check the Milestone's title."
            echo 'Should be vX.Y.Z'
            exit 1
          fi

  create-roadmap-issue:
    needs: get-release-version
    # Create the roadmap issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/roadmap-issue.md > $ISSUE_TEMPLATE
      - name: Replace all empty occurrences in the templates
        run: |
          # Replace all <<version>> occurrences
          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE

          # Replace all <<milestone_id>> occurrences
          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE

          # Replace release date if exists
          if [[ ! -z $MILESTONE_DUE_ON ]]; then
            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
            sed -i "s/Release date\: 20XX-XX-XX/Release date\: $date/g" $ISSUE_TEMPLATE
          fi
      - name: Create the issue
        run: |
          gh issue create \
            --title "$MILESTONE_VERSION ROADMAP" \
            --label 'epic,impacts docs,impacts integrations,impacts cloud' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  create-changelog-issue:
    needs: get-release-version
    # Create the changelog issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/changelog-issue.md > $ISSUE_TEMPLATE
      - name: Replace all empty occurrences in the templates
        run: |
          # Replace all <<version>> occurrences
          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE

          # Replace all <<milestone_id>> occurrences
          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Create release changelogs for $MILESTONE_VERSION" \
            --label 'impacts docs,documentation' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION \
            --assignee curquiza

  create-update-version-issue:
    needs: get-release-version
    # Create the update-version issue even if the release is a patch release
    if: github.event.action == 'created'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-version-issue.md > $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Update version in Cargo.toml for $MILESTONE_VERSION" \
            --label 'maintenance' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  create-update-openapi-issue:
    needs: get-release-version
    # Create the openAPI issue if the release is not only a patch release
    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
    runs-on: ubuntu-latest
    env:
      ISSUE_TEMPLATE: issue-template.md
    steps:
      - uses: actions/checkout@v3
      - name: Download the issue template
        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-openapi-issue.md > $ISSUE_TEMPLATE
      - name: Create the issue
        run: |
          gh issue create \
            --title "Update Open API file for $MILESTONE_VERSION" \
            --label 'maintenance' \
            --body-file $ISSUE_TEMPLATE \
            --milestone $MILESTONE_VERSION

  update-ruleset:
    runs-on: ubuntu-latest
    if: github.event.action == 'created'
    steps:
      - uses: actions/checkout@v3
      - name: Install jq
        run: |
          sudo apt-get update
          sudo apt-get install -y jq
      - name: Update ruleset
        env:
          # gh api repos/meilisearch/meilisearch/rulesets --jq '.[] | {name: .name, id: .id}'
          RULESET_ID: 4253297
          BRANCH_NAME: ${{ github.event.inputs.branch_name }}
        run: |
          echo "RULESET_ID: ${{ env.RULESET_ID }}"
          echo "BRANCH_NAME: ${{ env.BRANCH_NAME }}"

          # Get current ruleset conditions
          CONDITIONS=$(gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} --jq '{ conditions: .conditions }')

          # Update the conditions by appending the milestone version
          UPDATED_CONDITIONS=$(echo $CONDITIONS | jq '.conditions.ref_name.include += ["refs/heads/release-'${{ env.MILESTONE_VERSION }}'"]')

          # Update the ruleset from stdin (-)
          echo $UPDATED_CONDITIONS |
            gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} \
              --method PUT \
              -H "Accept: application/vnd.github+json" \
              -H "X-GitHub-Api-Version: 2022-11-28" \
              --input -

  # ----------------
  # MILESTONE CLOSED
  # ----------------

  create-release-label:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create the ${{ env.MILESTONE_VERSION }} label
        run: |
          label_description="PRs/issues solved in $MILESTONE_VERSION"
          if [[ ! -z $MILESTONE_DUE_ON ]]; then
            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
            label_description="$label_description released on $date"
          fi

          gh api repos/meilisearch/meilisearch/labels \
            --method POST \
            -H "Accept: application/vnd.github+json" \
            -f name="$MILESTONE_VERSION" \
            -f description="$label_description" \
            -f color='ff5ba3'

  labelize-all-milestone-content:
    if: github.event.action == 'closed'
    needs: create-release-label
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Add label ${{ env.MILESTONE_VERSION }} to all PRs in the Milestone
        run: |
          prs=$(gh pr list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
          for pr in $prs; do
            gh pr edit $pr --add-label $MILESTONE_VERSION
          done
      - name: Add label ${{ env.MILESTONE_VERSION }} to all issues in the Milestone
        run: |
          issues=$(gh issue list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
          for issue in $issues; do
            gh issue edit $issue --add-label $MILESTONE_VERSION
          done
.github/workflows/publish-apt-brew-pkg.yml (vendored), 2 changed lines:

@@ -32,7 +32,7 @@ jobs:
     - name: Build deb package
       run: cargo deb -p meilisearch -o target/debian/meilisearch.deb
     - name: Upload debian pkg to release
-      uses: svenstaro/upload-release-action@2.11.2
+      uses: svenstaro/upload-release-action@2.11.1
       with:
         repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
         file: target/debian/meilisearch.deb
@@ -1,4 +1,4 @@
-name: Publish assets to GitHub release
+name: Publish binaries to GitHub release

 on:
   workflow_dispatch:

@@ -51,7 +51,7 @@ jobs:
     # No need to upload binaries for dry run (cron)
     - name: Upload binaries to release
       if: github.event_name == 'release'
-      uses: svenstaro/upload-release-action@2.11.2
+      uses: svenstaro/upload-release-action@2.11.1
       with:
         repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
         file: target/release/meilisearch

@@ -81,7 +81,7 @@ jobs:
     # No need to upload binaries for dry run (cron)
     - name: Upload binaries to release
       if: github.event_name == 'release'
-      uses: svenstaro/upload-release-action@2.11.2
+      uses: svenstaro/upload-release-action@2.11.1
       with:
         repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
         file: target/release/${{ matrix.artifact_name }}

@@ -113,7 +113,7 @@ jobs:
     - name: Upload the binary to release
       # No need to upload binaries for dry run (cron)
       if: github.event_name == 'release'
-      uses: svenstaro/upload-release-action@2.11.2
+      uses: svenstaro/upload-release-action@2.11.1
       with:
         repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
         file: target/${{ matrix.target }}/release/meilisearch

@@ -178,34 +178,9 @@ jobs:
     - name: Upload the binary to release
       # No need to upload binaries for dry run (cron)
       if: github.event_name == 'release'
-      uses: svenstaro/upload-release-action@2.11.2
+      uses: svenstaro/upload-release-action@2.11.1
       with:
         repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
         file: target/${{ matrix.target }}/release/meilisearch
         asset_name: ${{ matrix.asset_name }}
         tag: ${{ github.ref }}

-  publish-openapi-file:
-    name: Publish OpenAPI file
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v4
-      - name: Setup Rust
-        uses: actions-rs/toolchain@v1
-        with:
-          toolchain: stable
-          override: true
-      - name: Generate OpenAPI file
-        run: |
-          cd crates/openapi-generator
-          cargo run --release -- --pretty --output ../../meilisearch.json
-      - name: Upload OpenAPI to Release
-        # No need to upload for dry run (cron)
-        if: github.event_name == 'release'
-        uses: svenstaro/upload-release-action@2.11.2
-        with:
-          repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
-          file: ./meilisearch.json
-          asset_name: meilisearch-openapi.json
-          tag: ${{ github.ref }}
.github/workflows/publish-docker-images.yml (vendored), 17 changed lines:

@@ -16,8 +16,6 @@ on:
 jobs:
   docker:
     runs-on: docker
-    permissions:
-      id-token: write # This is needed to use Cosign in keyless mode
     steps:
       - uses: actions/checkout@v3

@@ -64,9 +62,6 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3

-      - name: Install cosign
-        uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # tag=v3.9.2
-
       - name: Login to Docker Hub
         uses: docker/login-action@v3
         with:

@@ -90,7 +85,6 @@ jobs:

       - name: Build and push
         uses: docker/build-push-action@v6
-        id: build-and-push
         with:
           push: true
           platforms: linux/amd64,linux/arm64

@@ -100,17 +94,6 @@ jobs:
           COMMIT_DATE=${{ steps.build-metadata.outputs.date }}
           GIT_TAG=${{ github.ref_name }}

-      - name: Sign the images with GitHub OIDC Token
-        env:
-          DIGEST: ${{ steps.build-and-push.outputs.digest }}
-          TAGS: ${{ steps.meta.outputs.tags }}
-        run: |
-          images=""
-          for tag in ${TAGS}; do
-            images+="${tag}@${DIGEST} "
-          done
-          cosign sign --yes ${images}
-
       # /!\ Don't touch this without checking with Cloud team
       - name: Send CI information to Cloud team
         # Do not send if nightly build (i.e. 'schedule' or 'workflow_dispatch' event)
.github/workflows/release-drafter.yml (vendored), 20 lines, file removed:

name: Release Drafter

permissions:
  contents: read
  pull-requests: write

on:
  push:
    branches:
      - main

jobs:
  update_release_draft:
    runs-on: ubuntu-latest
    steps:
      - uses: release-drafter/release-drafter@v6
        with:
          config-name: release-draft-template.yml
        env:
          GITHUB_TOKEN: ${{ secrets.RELEASE_DRAFTER_TOKEN }}
.github/workflows/sdks-tests.yml (vendored), 14 changed lines:

@@ -9,7 +9,7 @@ on:
         required: false
         default: nightly
   schedule:
-    - cron: '0 6 * * *' # Every day at 6:00am
+    - cron: "0 6 * * MON" # Every Monday at 6:00AM

 env:
   MEILI_MASTER_KEY: 'masterKey'

@@ -114,7 +114,7 @@ jobs:
           dep ensure
         fi
       - name: Run integration tests
-        run: go test --race -v ./integration
+        run: go test -v ./...

   meilisearch-java-tests:
     needs: define-docker-image

@@ -344,23 +344,15 @@ jobs:
           MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
         ports:
           - '7700:7700'
-    env:
-      RAILS_VERSION: '7.0'
     steps:
       - uses: actions/checkout@v3
         with:
           repository: meilisearch/meilisearch-rails
-      - name: Install SQLite dependencies
+      - name: Set up Ruby 3
-        run: sudo apt-get update && sudo apt-get install -y libsqlite3-dev
-      - name: Set up Ruby
         uses: ruby/setup-ruby@v1
         with:
           ruby-version: 3
           bundler-cache: true
-      - name: Start MongoDB
-        uses: supercharge/mongodb-github-action@1.12.0
-        with:
-          mongodb-version: 8.0
       - name: Run tests
         run: bundle exec rspec
.github/workflows/test-suite.yml (vendored), 2 changed lines:

@@ -3,7 +3,7 @@ name: Test suite
 on:
   workflow_dispatch:
   schedule:
-    # Every day at 5:00am
+    # Everyday at 5:00am
     - cron: "0 5 * * *"
   pull_request:
   merge_group:

@@ -41,4 +41,5 @@ jobs:
         --title "Update version for the next release ($NEW_VERSION) in Cargo.toml" \
         --body '⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.' \
         --label 'skip changelog' \
+        --milestone $NEW_VERSION \
         --base $GITHUB_REF_NAME
.gitignore (vendored), 2 changed lines:

@@ -5,7 +5,7 @@
 **/*.json_lines
 **/*.rs.bk
 /*.mdb
-/*.ms
+/data.ms
 /snapshots
 /dumps
 /bench
@@ -124,7 +124,6 @@ They are JSON files with the following structure (comments are not actually supp
 {
     // Name of the workload. Must be unique to the workload, as it will be used to group results on the dashboard.
     "name": "hackernews.ndjson_1M,no-threads",
-    "type": "bench",
     // Number of consecutive runs of the commands that should be performed.
     // Each run uses a fresh instance of Meilisearch and a fresh database.
     // Each run produces its own report file.
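For orientation, a complete workload file following the structure quoted in the hunk above might begin like the sketch below. Only the `name` and `type` keys come from the excerpt itself; `run_count` and `commands` are hypothetical key names standing in for the run count and command list that the comments describe, not the verified schema.

```json
{
  "name": "hackernews.ndjson_1M,no-threads",
  "type": "bench",
  "run_count": 3,
  "commands": []
}
```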
@@ -106,19 +106,7 @@ Run `cargo xtask --help` from the root of the repository to find out what is ava
 #### Update the openAPI file if the API changed

 To update the openAPI file in the code, see [sprint_issue.md](https://github.com/meilisearch/meilisearch/blob/main/.github/ISSUE_TEMPLATE/sprint_issue.md#reminders-when-modifying-the-api).
+If you want to update the openAPI file on the [open-api repository](https://github.com/meilisearch/open-api), see [update-openapi-issue.md](https://github.com/meilisearch/engine-team/blob/main/issue-templates/update-openapi-issue.md).
-If you want to generate OpenAPI file manually:

-With swagger:
-- Starts Meilisearch with the `swagger` feature flag: `cargo run --features swagger`
-- On a browser, open the following URL: http://localhost:7700/scalar
-- Click the « Download openAPI file »

-With the internal crate:
-```bash
-cd crates/openapi-generator
-cargo run --release -- --pretty --output meilisearch.json
-```

 ### Logging

@@ -172,37 +160,25 @@ Some notes on GitHub PRs:
 The draft PRs are recommended when you want to show that you are working on something and make your work visible.
 - The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [GitHub Merge Queues](https://github.blog/news-insights/product-news/github-merge-queue-is-generally-available/) to automatically enforce this requirement without the PR author having to rebase manually.

-## Merging PRs
+## Release Process (for internal team only)

-This project uses GitHub Merge Queues that helps us manage pull requests merging.

-Before merging a PR, the maintainer should ensure the following requirements are met
-- Automated tests have been added.
-- If some tests cannot be automated, manual rigorous tests should be applied.
-- ⚠️ If there is an change in the DB: it's mandatory to manually test the `--experimental-dumpless-upgrade` on a DB of the previous Meilisearch minor version (e.g. v1.13 for the v1.14 release).
-- If necessary, the feature have been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
-- If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
-- If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.

-## Publish Process (for internal team only)

 Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).

-### How to publish a new release
+### Automation to rebase and Merge the PRs

-The full Meilisearch release process is described in [this guide](./documentation/release.md).
+This project uses GitHub Merge Queues that helps us manage pull requests merging.

+### How to Publish a new Release

+The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/engine-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.

 ### How to publish a prototype

 Depending on the developed feature, you might need to provide a prototyped version of Meilisearch to make it easier to test by the users.

 This happens in two steps:
-- [Release the prototype](./documentation/prototypes.md#how-to-publish-a-prototype)
+- [Release the prototype](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#how-to-publish-a-prototype)
-- [Communicate about it](./documentation/prototypes.md#communication)
+- [Communicate about it](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#communication)

-### How to implement and publish an experimental feature

-Here is our [guidelines and process](./documentation/experimental-features.md) to implement and publish an experimental feature.

 ### Release assets
Cargo.lock (generated), 131 changed lines:

@@ -350,21 +350,6 @@ version = "0.3.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "78200ac3468a57d333cd0ea5dd398e25111194dcacd49208afca95c629a6311d"

-[[package]]
-name = "android-tzdata"
-version = "0.1.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0"
-
-[[package]]
-name = "android_system_properties"
-version = "0.1.5"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
-dependencies = [
- "libc",
-]
-
 [[package]]
 name = "anes"
 version = "0.1.6"

@@ -595,7 +580,7 @@ source = "git+https://github.com/meilisearch/bbqueue#cbb87cc707b5af415ef203bdaf2

 [[package]]
 name = "benchmarks"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "bumpalo",

@@ -785,7 +770,7 @@ dependencies = [

 [[package]]
 name = "build-info"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "time",

@@ -1121,20 +1106,6 @@ dependencies = [
  "whatlang",
 ]

-[[package]]
-name = "chrono"
-version = "0.4.41"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c469d952047f47f91b68d1cba3f10d63c11d73e4636f24f08daf0278abf01c4d"
-dependencies = [
- "android-tzdata",
- "iana-time-zone",
- "js-sys",
- "num-traits",
- "wasm-bindgen",
- "windows-link",
-]
-
 [[package]]
 name = "ciborium"
 version = "0.2.2"

@@ -1803,16 +1774,19 @@ dependencies = [

 [[package]]
 name = "dump"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "big_s",
+ "bytemuck",
  "flate2",
  "http 1.3.1",
  "maplit",
  "meili-snap",
  "meilisearch-types",
+ "memmap2",
  "once_cell",
+ "rayon",
  "regex",
  "roaring",
  "serde",

@@ -2035,7 +2009,7 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"

 [[package]]
 name = "file-store"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "tempfile",
  "thiserror 2.0.12",

@@ -2057,10 +2031,9 @@ dependencies = [

 [[package]]
 name = "filter-parser"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "insta",
- "levenshtein_automata",
  "nom",
  "nom_locate",
  "unescaper",

@@ -2079,7 +2052,7 @@ dependencies = [

 [[package]]
 name = "flatten-serde-json"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "criterion",
  "serde_json",

@@ -2224,7 +2197,7 @@ dependencies = [

 [[package]]
 name = "fuzzers"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "arbitrary",
  "bumpalo",

@@ -2880,30 +2853,6 @@ dependencies = [
  "tracing",
 ]

-[[package]]
-name = "iana-time-zone"
-version = "0.1.63"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b0c919e5debc312ad217002b8048a17b7d83f80703865bbfcfebb0458b0b27d8"
-dependencies = [
- "android_system_properties",
- "core-foundation-sys",
- "iana-time-zone-haiku",
- "js-sys",
- "log",
- "wasm-bindgen",
- "windows-core",
-]
-
-[[package]]
-name = "iana-time-zone-haiku"
-version = "0.1.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
-dependencies = [
- "cc",
-]
-
 [[package]]
 name = "icu_collections"
 version = "2.0.0"

@@ -3048,7 +2997,7 @@ dependencies = [

 [[package]]
 name = "index-scheduler"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "backoff",

@@ -3284,7 +3233,7 @@ dependencies = [

 [[package]]
 name = "json-depth-checker"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "criterion",
  "serde_json",

@@ -3778,7 +3727,7 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"

 [[package]]
 name = "meili-snap"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "insta",
  "md5",

@@ -3789,7 +3738,7 @@ dependencies = [

 [[package]]
 name = "meilisearch"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "actix-cors",
  "actix-http",

@@ -3799,7 +3748,6 @@ dependencies = [
  "actix-web-lab",
  "anyhow",
  "async-openai",
- "backoff",
  "brotli",
  "bstr",
  "build-info",

@@ -3830,7 +3778,6 @@ dependencies = [
  "meili-snap",
  "meilisearch-auth",
  "meilisearch-types",
- "memmap2",
  "mimalloc",
  "mime",
  "mopa-maintained",

@@ -3886,7 +3833,7 @@ dependencies = [

 [[package]]
 name = "meilisearch-auth"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "base64 0.22.1",
  "enum-iterator",

@@ -3905,7 +3852,7 @@ dependencies = [

 [[package]]
 name = "meilisearch-types"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "actix-web",
  "anyhow",

@@ -3940,7 +3887,7 @@ dependencies = [

 [[package]]
 name = "meilitool"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "clap",

@@ -3964,9 +3911,9 @@ checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"

 [[package]]
 name = "memmap2"
-version = "0.9.7"
+version = "0.9.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "483758ad303d734cec05e5c12b41d7e93e6a6390c5e9dae6bdeb7c1259012d28"
+checksum = "fd3f7eed9d3848f8b98834af67102b720745c4ec028fcd0aa0239277e7de374f"
 dependencies = [
  "libc",
  "stable_deref_trait",

@@ -3974,7 +3921,7 @@ dependencies = [

 [[package]]
 name = "milli"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "allocator-api2 0.3.0",
  "arroy",

@@ -4043,7 +3990,6 @@ dependencies = [
  "time",
  "tokenizers",
  "tracing",
- "twox-hash",
  "ureq",
  "url",
  "utoipa",

@@ -4394,17 +4340,6 @@ version = "11.1.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d6790f58c7ff633d8771f42965289203411a5e5c68388703c06e14f24770b41e"

-[[package]]
-name = "openapi-generator"
-version = "0.1.0"
-dependencies = [
- "anyhow",
- "clap",
- "meilisearch",
- "serde_json",
- "utoipa",
-]
-
 [[package]]
 name = "openssl-probe"
 version = "0.1.6"

@@ -4538,7 +4473,7 @@ checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e"

 [[package]]
 name = "permissive-json-pointer"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "big_s",
  "serde_json",

@@ -5735,20 +5670,6 @@ name = "similar"
 version = "2.7.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "bbbb5d9659141646ae647b42fe094daf6c6192d1620870b449d9557f748b2daa"
-dependencies = [
- "bstr",
- "unicode-segmentation",
-]
-
-[[package]]
-name = "similar-asserts"
-version = "1.7.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b5b441962c817e33508847a22bd82f03a30cff43642dc2fae8b050566121eb9a"
-dependencies = [
- "console",
- "similar",
-]
-
 [[package]]
 name = "simple_asn1"

@@ -6511,12 +6432,6 @@ version = "0.2.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"

-[[package]]
-name = "twox-hash"
-version = "2.1.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8b907da542cbced5261bd3256de1b3a1bf340a3d37f93425a07362a1d687de56"
-
 [[package]]
 name = "typeid"
 version = "1.0.3"

@@ -7346,12 +7261,11 @@ dependencies = [

 [[package]]
 name = "xtask"
-version = "1.19.0"
+version = "1.16.0"
 dependencies = [
  "anyhow",
  "build-info",
  "cargo_metadata",
- "chrono",
  "clap",
  "futures-core",
  "futures-util",

@@ -7359,7 +7273,6 @@ dependencies = [
  "serde",
  "serde_json",
  "sha2",
- "similar-asserts",
  "sysinfo",
  "time",
  "tokio",
@@ -19,11 +19,10 @@ members = [
   "crates/tracing-trace",
   "crates/xtask",
   "crates/build-info",
-  "crates/openapi-generator",
 ]

 [workspace.package]
-version = "1.19.0"
+version = "1.16.0"
 authors = [
   "Quentin de Quelen <quentin@dequelen.me>",
   "Clément Renault <clement@meilisearch.com>",
LICENSE, 8 changed lines:

@@ -19,11 +19,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
-
----
-
-🔒 Meilisearch Enterprise Edition (EE)
-
-Certain parts of this codebase are not licensed under the MIT license and governed by the Business Source License 1.1.
-
-See the LICENSE-EE file for details.
|||||||
67
LICENSE-EE
67
LICENSE-EE
@@ -1,67 +0,0 @@
|
|||||||
Business Source License 1.1 – Adapted for Meili SAS
|
|
||||||
This license is based on the Business Source License version 1.1, as published by MariaDB Corporation Ab.
|
|
||||||
|
|
||||||
Parameters
|
|
||||||
|
|
||||||
Licensor: Meili SAS
|
|
||||||
|
|
||||||
Licensed Work: Any file explicitly marked as “Enterprise Edition (EE)” or “governed by the Business Source License” residing in enterprise_editions modules/folders.
|
|
||||||
|
|
||||||
Additional Use Grant:
|
|
||||||
You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.
|
|
||||||
|
|
||||||
Production use of the Licensed Work requires a commercial license agreement with Meilisearch. Contact bonjour@meilisearch.com for licensing.
|
|
||||||
|
|
||||||
Change License: MIT
|
|
||||||
|
|
||||||
Change Date: Four years from the date the Licensed Work is published.
|
|
||||||
|
|
||||||
This License does not apply to any code outside of the Licensed Work, which remains under the MIT license.
|
|
||||||
|
|
||||||
For information about alternative licensing arrangements for the Licensed Work,
|
|
||||||
please contact bonjour@meilisearch.com or sales@meilisearch.com.
|
|
||||||
|
|
||||||
Notice
|
|
||||||
|
|
||||||
Business Source License 1.1
|
|
||||||
|
|
||||||
Terms
|
|
||||||
|
|
||||||
The Licensor hereby grants you the right to copy, modify, create derivative
|
|
||||||
works, redistribute, and make non-production use of the Licensed Work. The
|
|
||||||
Licensor may make an Additional Use Grant, above, permitting limited production use.
|
|
||||||
|
|
||||||
Effective on the Change Date, or the fourth anniversary of the first publicly
|
|
||||||
available distribution of a specific version of the Licensed Work under this
|
|
||||||
License, whichever comes first, the Licensor hereby grants you rights under
|
|
||||||
the terms of the Change License, and the rights granted in the paragraph
|
|
||||||
above terminate.
|
|
||||||
|
|
||||||
If your use of the Licensed Work does not comply with the requirements
|
|
||||||
currently in effect as described in this License, you must purchase a
|
|
||||||
commercial license from the Licensor, its affiliated entities, or authorized
|
|
||||||
resellers, or you must refrain from using the Licensed Work.
|
|
||||||
|
|
||||||
All copies of the original and modified Licensed Work, and derivative works
|
|
||||||
of the Licensed Work, are subject to this License. This License applies
|
|
||||||
separately for each version of the Licensed Work and the Change Date may vary
|
|
||||||
for each version of the Licensed Work released by Licensor.
|
|
||||||
|
|
||||||
You must conspicuously display this License on each original or modified copy
|
|
||||||
of the Licensed Work. If you receive the Licensed Work in original or
|
|
||||||
modified form from a third party, the terms and conditions set forth in this
|
|
||||||
License apply to your use of that work.
|
|
||||||
|
|
||||||
Any use of the Licensed Work in violation of this License will automatically
|
|
||||||
terminate your rights under this License for the current and all other
|
|
||||||
versions of the Licensed Work.
|
|
||||||
|
|
||||||
This License does not grant you any right in any trademark or logo of
|
|
||||||
Licensor or its affiliates (provided that you may use a trademark or logo of
|
|
||||||
Licensor as expressly required by this License).
|
|
||||||
|
|
||||||
TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON
|
|
||||||
AN "AS IS" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS,
|
|
||||||
EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF
|
|
||||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND
|
|
||||||
TITLE.
|
|
||||||
22
README.md
22
README.md
@@ -89,26 +89,6 @@ We also offer a wide range of dedicated guides to all Meilisearch features, such

Finally, for more in-depth information, refer to our articles explaining fundamental Meilisearch concepts such as [documents](https://www.meilisearch.com/docs/learn/core_concepts/documents?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced) and [indexes](https://www.meilisearch.com/docs/learn/core_concepts/indexes?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced).

## 🧾 Editions & Licensing

Meilisearch is available in two editions:

### 🧪 Community Edition (CE)

- Fully open source under the [MIT license](./LICENSE)
- Core search engine with fast and relevant full-text, semantic or hybrid search
- Free to use for anyone, including commercial usage

### 🏢 Enterprise Edition (EE)

- Includes advanced features such as:
  - Sharding
- Governed by a [commercial license](./LICENSE-EE) or the [Business Source License 1.1](https://mariadb.com/bsl11)
- Not allowed in production without a commercial agreement with Meilisearch.
- You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.

Want access to Enterprise features? → Contact us at [sales@meilisearch.com](maito:sales@meilisearch.com).

## 📊 Telemetry

Meilisearch collects **anonymized** user data to help us improve our product. You can [deactivate this](https://www.meilisearch.com/docs/learn/what_is_meilisearch/telemetry?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=telemetry#how-to-disable-data-collection) whenever you want.
@@ -139,6 +119,6 @@ Meilisearch is, and will always be, open-source! If you want to contribute to th

Meilisearch releases and their associated binaries are available on the project's [releases page](https://github.com/meilisearch/meilisearch/releases).

The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](./documentation/versioning-policy.md).
The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](https://github.com/meilisearch/engine-team/blob/main/resources/versioning-policy.md).

Differently from the binaries, crates in this repository are not currently available on [crates.io](https://crates.io/) and do not follow [SemVer conventions](https://semver.org).
@@ -14,7 +14,7 @@ license.workspace = true
anyhow = "1.0.98"
bumpalo = "3.18.1"
csv = "1.3.1"
memmap2 = "0.9.7"
memmap2 = "0.9.5"
milli = { path = "../milli" }
mimalloc = { version = "0.1.47", default-features = false }
serde_json = { version = "1.0.140", features = ["preserve_order"] }
@@ -56,6 +56,3 @@ harness = false
name = "sort"
harness = false

[[bench]]
name = "filter_starts_with"
harness = false
@@ -1,66 +0,0 @@
mod datasets_paths;
mod utils;

use criterion::{criterion_group, criterion_main};
use milli::update::Settings;
use milli::FilterableAttributesRule;
use utils::Conf;

#[cfg(not(windows))]
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;

fn base_conf(builder: &mut Settings) {
let displayed_fields = ["geonameid", "name"].iter().map(|s| s.to_string()).collect();
builder.set_displayed_fields(displayed_fields);

let filterable_fields =
["name"].iter().map(|s| FilterableAttributesRule::Field(s.to_string())).collect();
builder.set_filterable_fields(filterable_fields);
}

#[rustfmt::skip]
const BASE_CONF: Conf = Conf {
dataset: datasets_paths::SMOL_ALL_COUNTRIES,
dataset_format: "jsonl",
queries: &[
"",
],
configure: base_conf,
primary_key: Some("geonameid"),
..Conf::BASE
};

fn filter_starts_with(c: &mut criterion::Criterion) {
#[rustfmt::skip]
let confs = &[
utils::Conf {
group_name: "1 letter",
filter: Some("name STARTS WITH e"),
..BASE_CONF
},

utils::Conf {
group_name: "2 letters",
filter: Some("name STARTS WITH es"),
..BASE_CONF
},

utils::Conf {
group_name: "3 letters",
filter: Some("name STARTS WITH est"),
..BASE_CONF
},

utils::Conf {
group_name: "6 letters",
filter: Some("name STARTS WITH estoni"),
..BASE_CONF
}
];

utils::run_benches(c, confs);
}

criterion_group!(benches, filter_starts_with);
criterion_main!(benches);
@@ -154,7 +154,6 @@ fn indexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -222,7 +221,6 @@ fn reindexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -268,7 +266,6 @@ fn reindexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -338,7 +335,6 @@ fn deleting_songs_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -416,7 +412,6 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -462,7 +457,6 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -504,7 +498,6 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -573,7 +566,6 @@ fn indexing_songs_without_faceted_numbers(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -641,7 +633,6 @@ fn indexing_songs_without_faceted_fields(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -709,7 +700,6 @@ fn indexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -776,7 +766,6 @@ fn reindexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -822,7 +811,6 @@ fn reindexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -891,7 +879,6 @@ fn deleting_wiki_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -969,7 +956,6 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1016,7 +1002,6 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1059,7 +1044,6 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1127,7 +1111,6 @@ fn indexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1194,7 +1177,6 @@ fn reindexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1240,7 +1222,6 @@ fn reindexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1309,7 +1290,6 @@ fn deleting_movies_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1424,7 +1404,6 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1470,7 +1449,6 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1512,7 +1490,6 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1603,7 +1580,6 @@ fn indexing_nested_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1695,7 +1671,6 @@ fn deleting_nested_movies_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1779,7 +1754,6 @@ fn indexing_nested_movies_without_faceted_fields(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1847,7 +1821,6 @@ fn indexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1914,7 +1887,6 @@ fn reindexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -1960,7 +1932,6 @@ fn reindexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -2029,7 +2000,6 @@ fn deleting_geo_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -123,7 +123,6 @@ pub fn base_setup(conf: &Conf) -> Index {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

@@ -12,18 +12,26 @@ license.workspace = true

[dependencies]
anyhow = "1.0.98"
bytemuck = { version = "1.23.1", features = ["extern_crate_alloc"] }
flate2 = "1.1.2"
http = "1.3.1"
meilisearch-types = { path = "../meilisearch-types" }
memmap2 = "0.9.5"
once_cell = "1.21.3"
regex = "1.11.1"
rayon = "1.10.0"
roaring = { version = "0.10.12", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["preserve_order"] }
tar = "0.4.44"
tempfile = "3.20.0"
thiserror = "2.0.12"
time = { version = "0.3.41", features = ["serde-well-known", "formatting", "parsing", "macros"] }
time = { version = "0.3.41", features = [
"serde-well-known",
"formatting",
"parsing",
"macros",
] }
tracing = "0.1.41"
uuid = { version = "1.17.0", features = ["serde", "v4"] }
@@ -10,7 +10,7 @@ use meilisearch_types::keys::Key;
use meilisearch_types::milli::update::IndexDocumentsMethod;
use meilisearch_types::settings::Unchecked;
use meilisearch_types::tasks::{
Details, ExportIndexSettings, IndexSwap, KindWithContent, Status, Task, TaskId, TaskNetwork,
Details, ExportIndexSettings, IndexSwap, KindWithContent, Status, Task, TaskId,
};
use meilisearch_types::InstanceUid;
use roaring::RoaringBitmap;
@@ -94,8 +94,6 @@ pub struct TaskDump {
default
)]
pub finished_at: Option<OffsetDateTime>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub network: Option<TaskNetwork>,
}

// A `Kind` specific version made for the dump. If modified you may break the dump.
@@ -131,7 +129,6 @@ pub enum KindDump {
},
IndexUpdate {
primary_key: Option<String>,
uid: Option<String>,
},
IndexSwap {
swaps: Vec<IndexSwap>,
@@ -174,7 +171,6 @@ impl From<Task> for TaskDump {
enqueued_at: task.enqueued_at,
started_at: task.started_at,
finished_at: task.finished_at,
network: task.network,
}
}
}
@@ -214,8 +210,8 @@ impl From<KindWithContent> for KindDump {
KindWithContent::IndexCreation { primary_key, .. } => {
KindDump::IndexCreation { primary_key }
}
KindWithContent::IndexUpdate { primary_key, new_index_uid: uid, .. } => {
KindWithContent::IndexUpdate { primary_key, .. } => {
KindDump::IndexUpdate { primary_key, uid }
KindDump::IndexUpdate { primary_key }
}
KindWithContent::IndexSwap { swaps } => KindDump::IndexSwap { swaps },
KindWithContent::TaskCancelation { query, tasks } => {
@@ -253,9 +249,8 @@ pub(crate) mod test {
use big_s::S;
use maplit::{btreemap, btreeset};
use meilisearch_types::batches::{Batch, BatchEnqueuedAt, BatchStats};
use meilisearch_types::enterprise_edition::network::{Network, Remote};
use meilisearch_types::facet_values_sort::FacetValuesSort;
use meilisearch_types::features::RuntimeTogglableFeatures;
use meilisearch_types::features::{Network, Remote, RuntimeTogglableFeatures};
use meilisearch_types::index_uid_pattern::IndexUidPattern;
use meilisearch_types::keys::{Action, Key};
use meilisearch_types::milli::update::Setting;
@@ -388,7 +383,6 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: Some(datetime!(2022-11-20 0:00 UTC)),
finished_at: Some(datetime!(2022-11-21 0:00 UTC)),
network: None,
},
None,
),
@@ -413,7 +407,6 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: None,
finished_at: None,
network: None,
},
Some(vec![
json!({ "id": 4, "race": "leonberg" }).as_object().unwrap().clone(),
@@ -433,7 +426,6 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-15 0:00 UTC),
started_at: None,
finished_at: None,
network: None,
},
None,
),
@@ -546,8 +538,7 @@ pub(crate) mod test {
fn create_test_network() -> Network {
Network {
local: Some("myself".to_string()),
remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()), write_api_key: Some("docApiKey".to_string()) }},
remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()) }},
sharding: false,
}
}

@@ -1,4 +1,3 @@
use std::fs::File;
use std::str::FromStr;

use super::v2_to_v3::CompatV2ToV3;
@@ -95,10 +94,6 @@ impl CompatIndexV1ToV2 {
self.from.documents().map(|it| Box::new(it) as Box<dyn Iterator<Item = _>>)
}

pub fn documents_file(&self) -> &File {
self.from.documents_file()
}

pub fn settings(&mut self) -> Result<v2::settings::Settings<v2::settings::Checked>> {
Ok(v2::settings::Settings::<v2::settings::Unchecked>::from(self.from.settings()?).check())
}
@@ -1,4 +1,3 @@
use std::fs::File;
use std::str::FromStr;

use time::OffsetDateTime;
@@ -123,13 +122,6 @@ impl CompatIndexV2ToV3 {
}
}

pub fn documents_file(&self) -> &File {
match self {
CompatIndexV2ToV3::V2(v2) => v2.documents_file(),
CompatIndexV2ToV3::Compat(compat) => compat.documents_file(),
}
}

pub fn settings(&mut self) -> Result<v3::Settings<v3::Checked>> {
let settings = match self {
CompatIndexV2ToV3::V2(from) => from.settings()?,
@@ -1,5 +1,3 @@
use std::fs::File;

use super::v2_to_v3::{CompatIndexV2ToV3, CompatV2ToV3};
use super::v4_to_v5::CompatV4ToV5;
use crate::reader::{v3, v4, UpdateFile};
@@ -254,13 +252,6 @@ impl CompatIndexV3ToV4 {
}
}

pub fn documents_file(&self) -> &File {
match self {
CompatIndexV3ToV4::V3(v3) => v3.documents_file(),
CompatIndexV3ToV4::Compat(compat) => compat.documents_file(),
}
}

pub fn settings(&mut self) -> Result<v4::Settings<v4::Checked>> {
Ok(match self {
CompatIndexV3ToV4::V3(v3) => {
@@ -1,5 +1,3 @@
use std::fs::File;

use super::v3_to_v4::{CompatIndexV3ToV4, CompatV3ToV4};
use super::v5_to_v6::CompatV5ToV6;
use crate::reader::{v4, v5, Document};
@@ -243,13 +241,6 @@ impl CompatIndexV4ToV5 {
}
}

pub fn documents_file(&self) -> &File {
match self {
CompatIndexV4ToV5::V4(v4) => v4.documents_file(),
CompatIndexV4ToV5::Compat(compat) => compat.documents_file(),
}
}

pub fn settings(&mut self) -> Result<v5::Settings<v5::Checked>> {
match self {
CompatIndexV4ToV5::V4(v4) => Ok(v5::Settings::from(v4.settings()?).check()),
@@ -1,4 +1,3 @@
use std::fs::File;
use std::num::NonZeroUsize;
use std::str::FromStr;

@@ -85,7 +84,7 @@ impl CompatV5ToV6 {
v6::Kind::IndexCreation { primary_key }
}
v5::tasks::TaskContent::IndexUpdate { primary_key, .. } => {
v6::Kind::IndexUpdate { primary_key, uid: None }
v6::Kind::IndexUpdate { primary_key }
}
v5::tasks::TaskContent::IndexDeletion { .. } => v6::Kind::IndexDeletion,
v5::tasks::TaskContent::DocumentAddition {
@@ -140,11 +139,9 @@ impl CompatV5ToV6 {
v5::Details::Settings { settings } => {
v6::Details::SettingsUpdate { settings: Box::new(settings.into()) }
}
v5::Details::IndexInfo { primary_key } => v6::Details::IndexInfo {
primary_key,
new_index_uid: None,
old_index_uid: None,
},
v5::Details::IndexInfo { primary_key } => {
v6::Details::IndexInfo { primary_key }
}
v5::Details::DocumentDeletion {
received_document_ids,
deleted_documents,
@@ -163,7 +160,6 @@ impl CompatV5ToV6 {
enqueued_at: task_view.enqueued_at,
started_at: task_view.started_at,
finished_at: task_view.finished_at,
network: None,
};

(task, content_file)
@@ -205,10 +201,6 @@ impl CompatV5ToV6 {
pub fn network(&self) -> Result<Option<&v6::Network>> {
Ok(None)
}

pub fn webhooks(&self) -> Option<&v6::Webhooks> {
None
}
}

pub enum CompatIndexV5ToV6 {
@@ -251,13 +243,6 @@ impl CompatIndexV5ToV6 {
}
}

pub fn documents_file(&self) -> &File {
match self {
CompatIndexV5ToV6::V5(v5) => v5.documents_file(),
CompatIndexV5ToV6::Compat(compat) => compat.documents_file(),
}
}

pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
match self {
CompatIndexV5ToV6::V5(v5) => Ok(v6::Settings::from(v5.settings()?).check()),
@@ -138,13 +138,6 @@ impl DumpReader {
DumpReader::Compat(compat) => compat.network(),
}
}

pub fn webhooks(&self) -> Option<&v6::Webhooks> {
match self {
DumpReader::Current(current) => current.webhooks(),
DumpReader::Compat(compat) => compat.webhooks(),
}
}
}

impl From<V6Reader> for DumpReader {
@@ -199,14 +192,6 @@ impl DumpIndexReader {
}
}

/// A reference to a file in the NDJSON format containing all the documents of the index
pub fn documents_file(&self) -> &File {
match self {
DumpIndexReader::Current(v6) => v6.documents_file(),
DumpIndexReader::Compat(compat) => compat.documents_file(),
}
}

pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
match self {
DumpIndexReader::Current(v6) => v6.settings(),
@@ -372,7 +357,6 @@ pub(crate) mod test {

assert_eq!(dump.features().unwrap().unwrap(), RuntimeTogglableFeatures::default());
assert_eq!(dump.network().unwrap(), None);
assert_eq!(dump.webhooks(), None);
}

#[test]
@@ -443,43 +427,6 @@ pub(crate) mod test {
insta::assert_snapshot!(network.remotes.get("ms-2").as_ref().unwrap().search_api_key.as_ref().unwrap(), @"foo");
}

#[test]
fn import_dump_v6_webhooks() {
let dump = File::open("tests/assets/v6-with-webhooks.dump").unwrap();
let dump = DumpReader::open(dump).unwrap();

// top level infos
insta::assert_snapshot!(dump.date().unwrap(), @"2025-07-31 9:21:30.479544 +00:00:00");
insta::assert_debug_snapshot!(dump.instance_uid().unwrap(), @r"
Some(
cb887dcc-34b3-48d1-addd-9815ae721a81,
)
");

// webhooks
let webhooks = dump.webhooks().unwrap();
insta::assert_json_snapshot!(webhooks, @r#"
{
"webhooks": {
"627ea538-733d-4545-8d2d-03526eb381ce": {
"url": "https://example.com/authorization-less",
"headers": {}
},
"771b0a28-ef28-4082-b984-536f82958c65": {
"url": "https://example.com/hook",
"headers": {
"authorization": "TOKEN"
}
},
"f3583083-f8a7-4cbf-a5e7-fb3f1e28a7e9": {
"url": "https://third.com",
"headers": {}
}
}
}
"#);
}

#[test]
fn import_dump_v5() {
let dump = File::open("tests/assets/v5.dump").unwrap();
@@ -72,10 +72,6 @@ impl V1IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<self::settings::Settings> {
Ok(serde_json::from_reader(&mut self.settings)?)
}
@@ -203,10 +203,6 @@ impl V2IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
@@ -215,10 +215,6 @@ impl V3IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
@@ -210,10 +210,6 @@ impl V4IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
@@ -247,10 +247,6 @@ impl V5IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
@@ -24,8 +24,7 @@ pub type Batch = meilisearch_types::batches::Batch;
pub type Key = meilisearch_types::keys::Key;
pub type ChatCompletionSettings = meilisearch_types::features::ChatCompletionSettings;
pub type RuntimeTogglableFeatures = meilisearch_types::features::RuntimeTogglableFeatures;
pub type Network = meilisearch_types::enterprise_edition::network::Network;
pub type Network = meilisearch_types::features::Network;
pub type Webhooks = meilisearch_types::webhooks::WebhooksDumpView;

// ===== Other types to clarify the code of the compat module
// everything related to the tasks
@@ -51,6 +50,8 @@ pub type RankingRuleView = meilisearch_types::settings::RankingRuleView;

pub type FilterableAttributesRule = meilisearch_types::milli::FilterableAttributesRule;

pub mod vector;

pub struct V6Reader {
dump: TempDir,
instance_uid: Option<Uuid>,
@@ -60,7 +61,6 @@ pub struct V6Reader {
keys: BufReader<File>,
features: Option<RuntimeTogglableFeatures>,
network: Option<Network>,
webhooks: Option<Webhooks>,
}

impl V6Reader {
@@ -95,8 +95,8 @@ impl V6Reader {
Err(e) => return Err(e.into()),
};

let network = match fs::read(dump.path().join("network.json")) {
let network_file = match fs::read(dump.path().join("network.json")) {
Ok(network_file) => Some(serde_json::from_reader(&*network_file)?),
Ok(network_file) => Some(network_file),
Err(error) => match error.kind() {
// Allows the file to be missing, this will only result in all experimental features disabled.
ErrorKind::NotFound => {
@@ -106,16 +106,10 @@ impl V6Reader {
_ => return Err(error.into()),
},
};

let webhooks = match fs::read(dump.path().join("webhooks.json")) {
Ok(webhooks_file) => Some(serde_json::from_reader(&*webhooks_file)?),
Err(error) => match error.kind() {
ErrorKind::NotFound => {
debug!("`webhooks.json` not found in dump");
None
}
_ => return Err(error.into()),
},
};
let network = if let Some(network_file) = network_file {
Some(serde_json::from_reader(&*network_file)?)
} else {
None
};

Ok(V6Reader {
@@ -127,7 +121,6 @@ impl V6Reader {
features,
network,
dump,
webhooks,
})
}

@@ -238,10 +231,6 @@ impl V6Reader {
pub fn network(&self) -> Option<&Network> {
self.network.as_ref()
}

pub fn webhooks(&self) -> Option<&Webhooks> {
self.webhooks.as_ref()
}
}

pub struct UpdateFile {
@@ -297,10 +286,6 @@ impl V6IndexReader {
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}

pub fn documents_file(&self) -> &File {
self.documents.get_ref()
}

pub fn settings(&mut self) -> Result<Settings<Checked>> {
let mut settings: Settings<Unchecked> = serde_json::from_reader(&mut self.settings)?;
patch_embedders(&mut settings);
154
crates/dump/src/reader/v6/vector.rs
Normal file
154
crates/dump/src/reader/v6/vector.rs
Normal file
@@ -0,0 +1,154 @@
//! Read vectors from a `vectors` directory for each index.
//!
//! The `vectors` directory is architected in the following way:
//! - `commands/` directory containing binary files that indicate which vectors should go into which embedder and fragment for which document
//! - `data/` directory containing the vector data.
//! - `status/` directory containing embedding metadata (`EmbeddingStatus`)

use std::fs::File;
use std::io::{BufReader, ErrorKind, Read};
use std::path::PathBuf;

use meilisearch_types::heed::byteorder::{BigEndian, ReadBytesExt};
use meilisearch_types::heed::RoTxn;
use meilisearch_types::milli::vector::RuntimeEmbedders;
use meilisearch_types::milli::DocumentId;
use meilisearch_types::Index;
use memmap2::Mmap;

use crate::Result;

pub struct VectorReader {
dir: PathBuf,
file_count: usize,
}

impl VectorReader {
pub fn new(dir: PathBuf) -> Result<Self> {
let commands = dir.join("commands");
let file_count = commands.read_dir()?.count();
Ok(Self { dir, file_count })
}

pub fn visit<V: Visitor>(
&self,
mut v: V,
index: usize,
) -> Result<std::result::Result<(), V::Error>> {
let filename = format!("{:04}.bin", index);
let commands = self.dir.join("commands").join(&filename);
let data = self.dir.join("data").join(&filename);
let mut commands = BufReader::new(File::open(commands)?);
let data = File::open(data)?;
let data = unsafe { Mmap::map(&data)? };
let mut buf = Vec::new();
let mut dimensions = None;
while let Some(command) = read_next_command(&mut buf, &mut commands)? {
let res = match command {
Command::ChangeCurrentEmbedder { name } => v
.on_current_embedder_change(name)
.map(|new_dimensions| dimensions = Some(new_dimensions)),
Command::ChangeCurrentStore { name } => v.on_current_store_change(name),
Command::ChangeDocid { external_docid } => {
v.on_current_docid_change(external_docid)
}
Command::SetVector { offset } => {
let dimensions = dimensions.unwrap();
let vec = &data[(offset as usize)
..(offset as usize + (dimensions * std::mem::size_of::<f32>()))];

v.on_set_vector(bytemuck::cast_slice(vec))
}
};
if let Err(err) = res {
return Ok(Err(err));
}
}
Ok(Ok(()))
}
}

fn read_next_command(buf: &mut Vec<u8>, mut commands: impl Read) -> Result<Option<Command>> {
let kind = match commands.read_u8() {
Ok(kind) => kind,
Err(err) if err.kind() == ErrorKind::UnexpectedEof => return Ok(None),
Err(err) => return Err(err.into()),
};
let s = if Command::has_len(kind) {
let len = commands.read_u32::<BigEndian>()?;
buf.resize(len as usize, 0);
if len != 0 {
commands.read_exact(buf)?;
std::str::from_utf8(buf).unwrap()
} else {
""
}
} else {
""
};
let offset = if Command::has_offset(kind) { commands.read_u64::<BigEndian>()? } else { 0 };
Ok(Some(Command::from_raw(kind, s, offset)))
}

#[repr(u8)]
pub enum Command<'pl> {
/// Tell the importer that the next embeddings are to be added in the context of the specified embedder.
///
/// Replaces the embedder specified by the previous such command.
///
/// Embedder is specified by its name.
ChangeCurrentEmbedder { name: &'pl str },
/// Tell the importer that the next embeddings are to be added in the context of the specified store.
///
/// Replaces the store specified by the previous such command.
///
/// The store is specified by an optional fragment name
ChangeCurrentStore { name: Option<&'pl str> },
/// Tell the importer that the next embeddings are to be added in the context of the specified document.
///
/// Replaces the store specified by the previous such command.
///
/// The document is specified by the external docid of the document.
ChangeDocid { external_docid: &'pl str },
/// Tell the importer where to find the next vector in the current data file.
SetVector { offset: u64 },
}

impl Command<'_> {
const CHANGE_CURRENT_EMBEDDER: Self = Self::ChangeCurrentEmbedder { name: "" };
const CHANGE_CURRENT_STORE: Self = Self::ChangeCurrentStore { name: Some("") };
const CHANGE_DOCID: Self = Self::ChangeDocid { external_docid: "" };
const SET_VECTOR: Self = Self::SetVector { offset: 0 };

fn has_len(kind: u8) -> bool {
kind == Self::CHANGE_CURRENT_EMBEDDER.discriminant()
|| kind == Self::CHANGE_CURRENT_STORE.discriminant()
|| kind == Self::CHANGE_DOCID.discriminant()
}

fn has_offset(kind: u8) -> bool {
kind == Self::SET_VECTOR.discriminant()
}

/// See <https://doc.rust-lang.org/std/mem/fn.discriminant.html#accessing-the-numeric-value-of-the-discriminant>
fn discriminant(&self) -> u8 {
// SAFETY: Because `Self` is marked `repr(u8)`, its layout is a `repr(C)` `union`
// between `repr(C)` structs, each of which has the `u8` discriminant as its first
// field, so we can read the discriminant without offsetting the pointer.
unsafe { *<*const _>::from(self).cast::<u8>() }
}

fn from_raw(kind: u8, s: &str, offset: u64) -> Command {
if kind == Self::CHANGE_CURRENT_EMBEDDER.discriminant() {
Command::ChangeCurrentEmbedder { name: s }
} else if kind == Self::CHANGE_CURRENT_STORE.discriminant() {
Command::ChangeCurrentStore { name: (!s.is_empty()).then_some(s) }
} else if kind == Self::CHANGE_DOCID.discriminant() {
Command::ChangeDocid { external_docid: s }
} else if kind == Self::SET_VECTOR.discriminant() {
Command::SetVector { offset }
} else {
panic!("unknown command")
}
}
}
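Note: the `Visitor` trait that `VectorReader::visit` is generic over is not defined anywhere in this excerpt. Purely as a reading aid, here is a minimal sketch of a shape consistent with how `visit` calls it; the signatures and the `CountingVisitor` example are assumptions, not code from either branch.

```rust
// Hypothetical sketch only: the exact trait is not part of this diff.
trait Visitor {
    type Error;
    /// Returns the dimension count of the embedder that becomes current.
    fn on_current_embedder_change(&mut self, name: &str) -> Result<usize, Self::Error>;
    fn on_current_store_change(&mut self, name: Option<&str>) -> Result<(), Self::Error>;
    fn on_current_docid_change(&mut self, external_docid: &str) -> Result<(), Self::Error>;
    fn on_set_vector(&mut self, vector: &[f32]) -> Result<(), Self::Error>;
}

/// A toy visitor that only counts how many vectors each document carries.
struct CountingVisitor {
    current_docid: String,
    counts: std::collections::BTreeMap<String, usize>,
    dimensions: usize, // fixed per-embedder dimension, assumed for the sketch
}

impl Visitor for CountingVisitor {
    type Error = std::convert::Infallible;

    fn on_current_embedder_change(&mut self, _name: &str) -> Result<usize, Self::Error> {
        Ok(self.dimensions)
    }
    fn on_current_store_change(&mut self, _name: Option<&str>) -> Result<(), Self::Error> {
        Ok(())
    }
    fn on_current_docid_change(&mut self, external_docid: &str) -> Result<(), Self::Error> {
        self.current_docid = external_docid.to_string();
        Ok(())
    }
    fn on_set_vector(&mut self, _vector: &[f32]) -> Result<(), Self::Error> {
        *self.counts.entry(self.current_docid.clone()).or_default() += 1;
        Ok(())
    }
}
```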
@@ -5,11 +5,9 @@ use std::path::PathBuf;
use flate2::write::GzEncoder;
use flate2::Compression;
use meilisearch_types::batches::Batch;
use meilisearch_types::enterprise_edition::network::Network;
use meilisearch_types::features::{ChatCompletionSettings, Network, RuntimeTogglableFeatures};
use meilisearch_types::features::{ChatCompletionSettings, RuntimeTogglableFeatures};
use meilisearch_types::keys::Key;
use meilisearch_types::settings::{Checked, Settings};
use meilisearch_types::webhooks::WebhooksDumpView;
use serde_json::{Map, Value};
use tempfile::TempDir;
use time::OffsetDateTime;
@@ -76,13 +74,6 @@ impl DumpWriter {
Ok(std::fs::write(self.dir.path().join("network.json"), serde_json::to_string(&network)?)?)
}

pub fn create_webhooks(&self, webhooks: WebhooksDumpView) -> Result<()> {
Ok(std::fs::write(
self.dir.path().join("webhooks.json"),
serde_json::to_string(&webhooks)?,
)?)
}

pub fn persist_to(self, mut writer: impl Write) -> Result<()> {
let gz_encoder = GzEncoder::new(&mut writer, Compression::default());
let mut tar_encoder = tar::Builder::new(gz_encoder);
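The writer-side counterpart of the new `vectors` dump format, the code that emits the `commands/` files consumed by `VectorReader` earlier in this diff, is not part of this excerpt. As a hedged sketch only, an encoder matching `read_next_command` would write a one-byte discriminant, then either a big-endian `u32` length plus UTF-8 bytes (string-carrying commands) or a big-endian `u64` offset (`SetVector`); the function name and the way the caller picks the payload are illustrative, not code from the branch.

```rust
use std::io::{self, Write};

// Illustrative encoder mirroring `read_next_command` above.
fn write_command(out: &mut impl Write, kind: u8, s: &str, offset: Option<u64>) -> io::Result<()> {
    out.write_all(&[kind])?;
    match offset {
        // SetVector: only a byte offset into the matching `data/` file.
        Some(offset) => out.write_all(&offset.to_be_bytes())?,
        // ChangeCurrentEmbedder / ChangeCurrentStore / ChangeDocid carry a string payload.
        None => {
            out.write_all(&(s.len() as u32).to_be_bytes())?;
            out.write_all(s.as_bytes())?;
        }
    }
    Ok(())
}
```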
Binary file not shown.
@@ -148,10 +148,11 @@ impl File {
Ok(Self { path: PathBuf::new(), file: None })
}

pub fn persist(self) -> Result<Option<StdFile>> {
let Some(file) = self.file else { return Ok(None) };
Ok(Some(file.persist(&self.path)?))
}
pub fn persist(self) -> Result<()> {
if let Some(file) = self.file {
file.persist(&self.path)?;
}
Ok(())
}
}

@@ -15,7 +15,6 @@ license.workspace = true
nom = "7.1.3"
nom_locate = "4.2.0"
unescaper = "0.1.6"
levenshtein_automata = { version = "0.2.1", features = ["fst_automaton"] }

[dev-dependencies]
# fixed version due to format breakages in v1.40
@@ -7,22 +7,11 @@
|
|||||||
|
|
||||||
use nom::branch::alt;
|
use nom::branch::alt;
|
||||||
use nom::bytes::complete::tag;
|
use nom::bytes::complete::tag;
|
||||||
use nom::character::complete::char;
|
|
||||||
use nom::character::complete::multispace0;
|
|
||||||
use nom::character::complete::multispace1;
|
use nom::character::complete::multispace1;
|
||||||
use nom::combinator::cut;
|
use nom::combinator::cut;
|
||||||
use nom::combinator::map;
|
|
||||||
use nom::combinator::value;
|
|
||||||
use nom::sequence::preceded;
|
|
||||||
use nom::sequence::{terminated, tuple};
|
use nom::sequence::{terminated, tuple};
|
||||||
use Condition::*;
|
use Condition::*;
|
||||||
|
|
||||||
use crate::error::IResultExt;
|
|
||||||
use crate::value::parse_vector_value;
|
|
||||||
use crate::value::parse_vector_value_cut;
|
|
||||||
use crate::Error;
|
|
||||||
use crate::ErrorKind;
|
|
||||||
use crate::VectorFilter;
|
|
||||||
use crate::{parse_value, FilterCondition, IResult, Span, Token};
|
use crate::{parse_value, FilterCondition, IResult, Span, Token};
|
||||||
|
|
||||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||||
@@ -124,83 +113,6 @@ pub fn parse_not_exists(input: Span) -> IResult<FilterCondition> {
     Ok((input, FilterCondition::Not(Box::new(FilterCondition::Condition { fid: key, op: Exists }))))
 }
 
-fn parse_vectors(input: Span) -> IResult<(Token, Option<Token>, VectorFilter<'_>)> {
-    let (input, _) = multispace0(input)?;
-    let (input, fid) = tag("_vectors")(input)?;
-
-    if let Ok((input, _)) = multispace1::<_, crate::Error>(input) {
-        return Ok((input, (Token::from(fid), None, VectorFilter::None)));
-    }
-
-    let (input, _) = char('.')(input)?;
-
-    // From this point, we are certain this is a vector filter, so our errors must be final.
-    // We could use nom's `cut` but it's better to be explicit about the errors
-
-    if let Ok((_, space)) = tag::<_, _, ()>(" ")(input) {
-        return Err(crate::Error::failure_from_kind(space, ErrorKind::VectorFilterMissingEmbedder));
-    }
-
-    let (input, embedder_name) =
-        parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidEmbedder)?;
-
-    let (input, filter) = alt((
-        map(
-            preceded(tag(".fragments"), |input| {
-                let (input, _) = tag(".")(input).map_cut(ErrorKind::VectorFilterMissingFragment)?;
-                parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidFragment)
-            }),
-            VectorFilter::Fragment,
-        ),
-        value(VectorFilter::UserProvided, tag(".userProvided")),
-        value(VectorFilter::DocumentTemplate, tag(".documentTemplate")),
-        value(VectorFilter::Regenerate, tag(".regenerate")),
-        value(VectorFilter::None, nom::combinator::success("")),
-    ))(input)?;
-
-    if let Ok((input, point)) = tag::<_, _, ()>(".")(input) {
-        let opt_value = parse_vector_value(input).ok().map(|(_, v)| v);
-        let value =
-            opt_value.as_ref().map(|v| v.value().to_owned()).unwrap_or_else(|| point.to_string());
-        let context = opt_value.map(|v| v.original_span()).unwrap_or(point);
-        let previous_kind = match filter {
-            VectorFilter::Fragment(_) => Some("fragments"),
-            VectorFilter::DocumentTemplate => Some("documentTemplate"),
-            VectorFilter::UserProvided => Some("userProvided"),
-            VectorFilter::Regenerate => Some("regenerate"),
-            VectorFilter::None => None,
-        };
-        return Err(Error::failure_from_kind(
-            context,
-            ErrorKind::VectorFilterUnknownSuffix(previous_kind, value),
-        ));
-    }
-
-    let (input, _) = multispace1(input).map_cut(ErrorKind::VectorFilterLeftover)?;
-
-    Ok((input, (Token::from(fid), Some(embedder_name), filter)))
-}
-
-/// vectors_exists = vectors ("EXISTS" | ("NOT" WS+ "EXISTS"))
-pub fn parse_vectors_exists(input: Span) -> IResult<FilterCondition> {
-    let (input, (fid, embedder, filter)) = parse_vectors(input)?;
-
-    // Try parsing "EXISTS" first
-    if let Ok((input, _)) = tag::<_, _, ()>("EXISTS")(input) {
-        return Ok((input, FilterCondition::VectorExists { fid, embedder, filter }));
-    }
-
-    // Try parsing "NOT EXISTS"
-    if let Ok((input, _)) = tuple::<_, _, (), _>((tag("NOT"), multispace1, tag("EXISTS")))(input) {
-        return Ok((
-            input,
-            FilterCondition::Not(Box::new(FilterCondition::VectorExists { fid, embedder, filter })),
-        ));
-    }
-
-    Err(crate::Error::failure_from_kind(input, ErrorKind::VectorFilterOperation))
-}
-
 /// contains = value "CONTAINS" value
 pub fn parse_contains(input: Span) -> IResult<FilterCondition> {
     let (input, (fid, contains, value)) =
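Note: the `parse_vectors`/`parse_vectors_exists` combinators removed in the hunk above are what accept the `_vectors` filter grammar. A minimal sketch of the shapes they parse, assuming the crate's `FilterCondition::parse` entry point and hypothetical embedder/fragment names (`myEmbedder`, `title`):

```rust
use filter_parser::FilterCondition;

fn main() {
    // Bare `_vectors`: matches documents that have vectors from any embedder.
    let any = FilterCondition::parse("_vectors EXISTS").unwrap();
    // Scoped to one embedder and restricted to user-provided vectors.
    let user = FilterCondition::parse("_vectors.myEmbedder.userProvided EXISTS").unwrap();
    // Scoped to one fragment of one embedder, negated with NOT.
    let not_frag =
        FilterCondition::parse("_vectors.myEmbedder.fragments.title NOT EXISTS").unwrap();
    println!("{any:?}\n{user:?}\n{not_frag:?}");
}
```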
@@ -42,23 +42,6 @@ pub fn cut_with_err<'a, O>(
     }
 }
 
-pub trait IResultExt<'a> {
-    fn map_cut(self, kind: ErrorKind<'a>) -> Self;
-}
-
-impl<'a, T> IResultExt<'a> for IResult<'a, T> {
-    fn map_cut(self, kind: ErrorKind<'a>) -> Self {
-        self.map_err(move |e: nom::Err<Error<'a>>| {
-            let input = match e {
-                nom::Err::Incomplete(_) => return e,
-                nom::Err::Error(e) => *e.context(),
-                nom::Err::Failure(e) => *e.context(),
-            };
-            Error::failure_from_kind(input, kind)
-        })
-    }
-}
-
 #[derive(Debug)]
 pub struct Error<'a> {
     context: Span<'a>,
@@ -78,14 +61,6 @@ pub enum ErrorKind<'a> {
     GeoBoundingBox,
     MisusedGeoRadius,
     MisusedGeoBoundingBox,
-    VectorFilterLeftover,
-    VectorFilterInvalidQuotes,
-    VectorFilterMissingEmbedder,
-    VectorFilterInvalidEmbedder,
-    VectorFilterMissingFragment,
-    VectorFilterInvalidFragment,
-    VectorFilterUnknownSuffix(Option<&'static str>, String),
-    VectorFilterOperation,
     InvalidPrimary,
     InvalidEscapedNumber,
     ExpectedEof,
@@ -116,10 +91,6 @@ impl<'a> Error<'a> {
         Self { context, kind }
     }
 
-    pub fn failure_from_kind(context: Span<'a>, kind: ErrorKind<'a>) -> nom::Err<Self> {
-        nom::Err::Failure(Self::new_from_kind(context, kind))
-    }
-
     pub fn new_from_external(context: Span<'a>, error: impl std::error::Error) -> Self {
         Self::new_from_kind(context, ErrorKind::External(error.to_string()))
     }
@@ -157,20 +128,6 @@ impl Display for Error<'_> {
         // first line being the diagnostic and the second line being the incriminated filter.
         let escaped_input = input.escape_debug();
 
-        fn key_suggestion<'a>(key: &str, keys: &[&'a str]) -> Option<&'a str> {
-            let typos =
-                levenshtein_automata::LevenshteinAutomatonBuilder::new(2, true).build_dfa(key);
-            for key in keys.iter() {
-                match typos.eval(key) {
-                    levenshtein_automata::Distance::Exact(_) => {
-                        return Some(key);
-                    }
-                    levenshtein_automata::Distance::AtLeast(_) => continue,
-                }
-            }
-            None
-        }
-
         match &self.kind {
             ErrorKind::ExpectedValue(_) if input.trim().is_empty() => {
                 writeln!(f, "Was expecting a value but instead got nothing.")?
@@ -212,44 +169,6 @@ impl Display for Error<'_> {
             ErrorKind::MisusedGeoBoundingBox => {
                 writeln!(f, "The `_geoBoundingBox` filter is an operation and can't be used as a value.")?
             }
-            ErrorKind::VectorFilterLeftover => {
-                writeln!(f, "The vector filter has leftover tokens.")?
-            }
-            ErrorKind::VectorFilterUnknownSuffix(_, value) if value.as_str() == "." => {
-                writeln!(f, "Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.")?;
-            }
-            ErrorKind::VectorFilterUnknownSuffix(None, value) if ["fragments", "userProvided", "documentTemplate", "regenerate"].contains(&value.as_str()) => {
-                // This will happen with "_vectors.rest.\"userProvided\"" for instance
-                writeln!(f, "Was expecting this part to be unquoted.")?
-            }
-            ErrorKind::VectorFilterUnknownSuffix(None, value) => {
-                if let Some(suggestion) = key_suggestion(value, &["fragments", "userProvided", "documentTemplate", "regenerate"]) {
-                    writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`. Did you mean `{suggestion}`?")?;
-                } else {
-                    writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`.")?;
-                }
-            }
-            ErrorKind::VectorFilterUnknownSuffix(Some(previous_filter_kind), value) => {
-                writeln!(f, "Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `{previous_filter_kind}` and `{value}`.")?
-            },
-            ErrorKind::VectorFilterInvalidFragment => {
-                writeln!(f, "The vector filter's fragment name is invalid.")?
-            }
-            ErrorKind::VectorFilterMissingFragment => {
-                writeln!(f, "The vector filter is missing a fragment name.")?
-            }
-            ErrorKind::VectorFilterMissingEmbedder => {
-                writeln!(f, "Was expecting embedder name but found nothing.")?
-            }
-            ErrorKind::VectorFilterInvalidEmbedder => {
-                writeln!(f, "The vector filter's embedder name is invalid.")?
-            }
-            ErrorKind::VectorFilterOperation => {
-                writeln!(f, "Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.")?
-            }
-            ErrorKind::VectorFilterInvalidQuotes => {
-                writeln!(f, "The quotes in one of the values are inconsistent.")?
-            }
             ErrorKind::ReservedKeyword(word) => {
                 writeln!(f, "`{word}` is a reserved keyword and thus cannot be used as a field name unless it is put inside quotes. Use \"{word}\" or \'{word}\' instead.")?
             }
@@ -65,9 +65,6 @@ use nom_locate::LocatedSpan;
 pub(crate) use value::parse_value;
 use value::word_exact;
 
-use crate::condition::parse_vectors_exists;
-use crate::error::IResultExt;
-
 pub type Span<'a> = LocatedSpan<&'a str, &'a str>;
 
 type IResult<'a, Ret> = nom::IResult<Span<'a>, Ret, Error<'a>>;
@@ -139,15 +136,6 @@ impl<'a> From<&'a str> for Token<'a> {
     }
 }
 
-#[derive(Debug, Clone, PartialEq, Eq)]
-pub enum VectorFilter<'a> {
-    Fragment(Token<'a>),
-    DocumentTemplate,
-    UserProvided,
-    Regenerate,
-    None,
-}
-
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub enum FilterCondition<'a> {
     Not(Box<Self>),
@@ -155,7 +143,6 @@ pub enum FilterCondition<'a> {
     In { fid: Token<'a>, els: Vec<Token<'a>> },
     Or(Vec<Self>),
     And(Vec<Self>),
-    VectorExists { fid: Token<'a>, embedder: Option<Token<'a>>, filter: VectorFilter<'a> },
     GeoLowerThan { point: [Token<'a>; 2], radius: Token<'a> },
     GeoBoundingBox { top_right_point: [Token<'a>; 2], bottom_left_point: [Token<'a>; 2] },
 }
@@ -178,32 +165,17 @@ impl<'a> FilterCondition<'a> {
                 | Condition::Exists
                 | Condition::LowerThan(_)
                 | Condition::LowerThanOrEqual(_)
-                | Condition::Between { .. }
-                | Condition::StartsWith { .. } => None,
-                Condition::Contains { keyword, word: _ } => Some(keyword),
+                | Condition::Between { .. } => None,
+                Condition::Contains { keyword, word: _ }
+                | Condition::StartsWith { keyword, word: _ } => Some(keyword),
             },
             FilterCondition::Not(this) => this.use_contains_operator(),
             FilterCondition::Or(seq) | FilterCondition::And(seq) => {
                 seq.iter().find_map(|filter| filter.use_contains_operator())
             }
-            FilterCondition::VectorExists { .. }
-            | FilterCondition::GeoLowerThan { .. }
-            | FilterCondition::GeoBoundingBox { .. }
-            | FilterCondition::In { .. } => None,
-        }
-    }
-
-    pub fn use_vector_filter(&self) -> Option<&Token> {
-        match self {
-            FilterCondition::Condition { .. } => None,
-            FilterCondition::Not(this) => this.use_vector_filter(),
-            FilterCondition::Or(seq) | FilterCondition::And(seq) => {
-                seq.iter().find_map(|filter| filter.use_vector_filter())
-            }
             FilterCondition::GeoLowerThan { .. }
             | FilterCondition::GeoBoundingBox { .. }
             | FilterCondition::In { .. } => None,
-            FilterCondition::VectorExists { fid, .. } => Some(fid),
         }
     }
 
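Note: the `use_vector_filter` helper removed above walks the whole condition tree and surfaces the `_vectors` token when a vector filter is present, mirroring `use_contains_operator`. A rough sketch of how a caller could use it to reject vector filters where they are unsupported (hypothetical function, not part of the diff):

```rust
use filter_parser::FilterCondition;

// Hypothetical helper: refuse any filter that contains a `_vectors` clause.
fn reject_vector_filters(condition: &FilterCondition) -> Result<(), String> {
    if let Some(fid) = condition.use_vector_filter() {
        // The returned token keeps its position in the original input,
        // which is what the error messages rely on for reporting.
        return Err(format!("the `{}` filter is not supported here", fid.value()));
    }
    Ok(())
}
```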
@@ -291,7 +263,10 @@ fn parse_in_body(input: Span) -> IResult<Vec<Token>> {
     let (input, _) = ws(word_exact("IN"))(input)?;
 
     // everything after `IN` can be a failure
-    let (input, _) = tag("[")(input).map_cut(ErrorKind::InOpeningBracket)?;
+    let (input, _) =
+        cut_with_err(tag("["), |_| Error::new_from_kind(input, ErrorKind::InOpeningBracket))(
+            input,
+        )?;
 
     let (input, content) = cut(parse_value_list)(input)?;
@@ -437,7 +412,7 @@ fn parse_geo_bounding_box(input: Span) -> IResult<FilterCondition> {
     let (input, args) = parsed?;
 
     if args.len() != 2 || args[0].len() != 2 || args[1].len() != 2 {
-        return Err(Error::failure_from_kind(input, ErrorKind::GeoBoundingBox));
+        return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::GeoBoundingBox)));
     }
 
     let res = FilterCondition::GeoBoundingBox {
@@ -458,7 +433,7 @@ fn parse_geo_point(input: Span) -> IResult<FilterCondition> {
     ))(input)
     .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))?;
     // if we succeeded we still return a `Failure` because geoPoints are not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoPoint")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))
 }
 
 /// geoPoint = WS* "_geoDistance(float WS* "," WS* float WS* "," WS* float)
@@ -472,7 +447,7 @@ fn parse_geo_distance(input: Span) -> IResult<FilterCondition> {
     ))(input)
     .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))?;
     // if we succeeded we still return a `Failure` because `geoDistance` filters are not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoDistance")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))
 }
 
 /// geo = WS* "_geo(float WS* "," WS* float WS* "," WS* float)
@@ -486,7 +461,7 @@ fn parse_geo(input: Span) -> IResult<FilterCondition> {
     ))(input)
     .map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))?;
     // if we succeeded we still return a `Failure` because `_geo` filter is not allowed
-    Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geo")))
+    Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))
 }
 
 fn parse_error_reserved_keyword(input: Span) -> IResult<FilterCondition> {
@@ -525,7 +500,8 @@ fn parse_primary(input: Span, depth: usize) -> IResult<FilterCondition> {
             parse_is_not_null,
             parse_is_empty,
             parse_is_not_empty,
-            alt((parse_vectors_exists, parse_exists, parse_not_exists)),
+            parse_exists,
+            parse_not_exists,
             parse_to,
             parse_contains,
             parse_not_contains,
@@ -581,22 +557,6 @@ impl std::fmt::Display for FilterCondition<'_> {
                 }
                 write!(f, "]")
             }
-            FilterCondition::VectorExists { fid: _, embedder, filter: inner } => {
-                write!(f, "_vectors")?;
-                if let Some(embedder) = embedder {
-                    write!(f, ".{:?}", embedder.value())?;
-                }
-                match inner {
-                    VectorFilter::Fragment(fragment) => {
-                        write!(f, ".fragments.{:?}", fragment.value())?
-                    }
-                    VectorFilter::DocumentTemplate => write!(f, ".documentTemplate")?,
-                    VectorFilter::UserProvided => write!(f, ".userProvided")?,
-                    VectorFilter::Regenerate => write!(f, ".regenerate")?,
-                    VectorFilter::None => (),
-                }
-                write!(f, " EXISTS")
-            }
             FilterCondition::GeoLowerThan { point, radius } => {
                 write!(f, "_geoRadius({}, {}, {})", point[0], point[1], radius)
             }
@@ -670,9 +630,6 @@ pub mod tests {
         insta::assert_snapshot!(p(r"title = 'foo\\\\\\\\'"), @r#"{title} = {foo\\\\}"#);
         // but it also works with other sequences
         insta::assert_snapshot!(p(r#"title = 'foo\x20\n\t\"\'"'"#), @"{title} = {foo \n\t\"\'\"}");
-
-        insta::assert_snapshot!(p(r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#), @r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.\"\n\t\r\\\"\" EXISTS"), @r#"_vectors."\n\t\r\"" EXISTS"#);
     }
 
     #[test]
@@ -735,18 +692,6 @@ pub mod tests {
         insta::assert_snapshot!(p("NOT subscribers IS NOT EMPTY"), @"{subscribers} IS EMPTY");
         insta::assert_snapshot!(p("subscribers IS NOT EMPTY"), @"NOT ({subscribers} IS EMPTY)");
 
-        // Test _vectors EXISTS + _vectors NOT EXITS
-        insta::assert_snapshot!(p("_vectors EXISTS"), @"_vectors EXISTS");
-        insta::assert_snapshot!(p("_vectors.embedderName EXISTS"), @r#"_vectors."embedderName" EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.documentTemplate EXISTS"), @r#"_vectors."embedderName".documentTemplate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
-        insta::assert_snapshot!(p("_vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
-        insta::assert_snapshot!(p(" _vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
-        insta::assert_snapshot!(p("NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
-        insta::assert_snapshot!(p(" NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
-        insta::assert_snapshot!(p(" _vectors NOT EXISTS"), @"NOT (_vectors EXISTS)");
-
         // Test EXISTS + NOT EXITS
         insta::assert_snapshot!(p("subscribers EXISTS"), @"{subscribers} EXISTS");
         insta::assert_snapshot!(p("NOT subscribers EXISTS"), @"NOT ({subscribers} EXISTS)");
@@ -1001,71 +946,6 @@ pub mod tests {
         "###
         );
 
-        insta::assert_snapshot!(p(r#"_vectors _vectors EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        10:25 _vectors _vectors EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors. embedderName EXISTS"#), @r"
-        Was expecting embedder name but found nothing.
-        10:11 _vectors. embedderName EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors .embedderName EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        10:30 _vectors .embedderName EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName. EXISTS"#), @r"
-        Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
-        22:23 _vectors.embedderName. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors."embedderName EXISTS"#), @r#"
-        The quotes in one of the values are inconsistent.
-        10:30 _vectors."embedderName EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors."embedderNam"e EXISTS"#), @r#"
-        The vector filter has leftover tokens.
-        23:31 _vectors."embedderNam"e EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.documentTemplate. EXISTS"#), @r"
-        Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
-        39:40 _vectors.embedderName.documentTemplate. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments EXISTS"#), @r"
-        The vector filter is missing a fragment name.
-        32:39 _vectors.embedderName.fragments EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. EXISTS"#), @r"
-        The vector filter's fragment name is invalid.
-        33:40 _vectors.embedderName.fragments. EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments.test test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        38:49 _vectors.embedderName.fragments.test test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. test EXISTS"#), @r"
-        The vector filter's fragment name is invalid.
-        33:45 _vectors.embedderName.fragments. test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments. test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        23:46 _vectors.embedderName .fragments. test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments.test EXISTS"#), @r"
-        Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
-        23:45 _vectors.embedderName .fragments.test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.fargments.test EXISTS"#), @r"
-        Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `fargments`. Did you mean `fragments`?
-        23:32 _vectors.embedderName.fargments.test EXISTS
-        ");
-        insta::assert_snapshot!(p(r#"_vectors.embedderName."userProvided" EXISTS"#), @r#"
-        Was expecting this part to be unquoted.
-        24:36 _vectors.embedderName."userProvided" EXISTS
-        "#);
-        insta::assert_snapshot!(p(r#"_vectors.embedderName.userProvided.fragments.test EXISTS"#), @r"
-        Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `userProvided` and `fragments`.
-        36:45 _vectors.embedderName.userProvided.fragments.test EXISTS
-        ");
-
         insta::assert_snapshot!(p(r#"NOT OR EXISTS AND EXISTS NOT EXISTS"#), @r###"
         Was expecting a value but instead got `OR`, which is a reserved keyword. To use `OR` as a field name or a value, surround it by quotes.
         5:7 NOT OR EXISTS AND EXISTS NOT EXISTS
@@ -80,51 +80,6 @@ pub fn word_exact<'a, 'b: 'a>(tag: &'b str) -> impl Fn(Span<'a>) -> IResult<'a,
     }
 }
 
-/// vector_value = ( non_dot_word | singleQuoted | doubleQuoted)
-pub fn parse_vector_value(input: Span) -> IResult<Token> {
-    pub fn non_dot_word(input: Span) -> IResult<Token> {
-        let (input, word) = take_while1(|c| is_value_component(c) && c != '.')(input)?;
-        Ok((input, word.into()))
-    }
-
-    let (input, value) = alt((
-        delimited(char('\''), cut(|input| quoted_by('\'', input)), cut(char('\''))),
-        delimited(char('"'), cut(|input| quoted_by('"', input)), cut(char('"'))),
-        non_dot_word,
-    ))(input)?;
-
-    match unescaper::unescape(value.value()) {
-        Ok(content) => {
-            if content.len() != value.value().len() {
-                Ok((input, Token::new(value.original_span(), Some(content))))
-            } else {
-                Ok((input, value))
-            }
-        }
-        Err(unescaper::Error::IncompleteStr(_)) => Err(nom::Err::Incomplete(nom::Needed::Unknown)),
-        Err(unescaper::Error::ParseIntError { .. }) => Err(nom::Err::Error(Error::new_from_kind(
-            value.original_span(),
-            ErrorKind::InvalidEscapedNumber,
-        ))),
-        Err(unescaper::Error::InvalidChar { .. }) => Err(nom::Err::Error(Error::new_from_kind(
-            value.original_span(),
-            ErrorKind::MalformedValue,
-        ))),
-    }
-}
-
-pub fn parse_vector_value_cut<'a>(input: Span<'a>, kind: ErrorKind<'a>) -> IResult<'a, Token<'a>> {
-    parse_vector_value(input).map_err(|e| match e {
-        nom::Err::Failure(e) => match e.kind() {
-            ErrorKind::Char(c) if *c == '"' || *c == '\'' => {
-                crate::Error::failure_from_kind(input, ErrorKind::VectorFilterInvalidQuotes)
-            }
-            _ => crate::Error::failure_from_kind(input, kind),
-        },
-        _ => crate::Error::failure_from_kind(input, kind),
-    })
-}
-
 /// value = WS* ( word | singleQuoted | doubleQuoted) WS+
 pub fn parse_value(input: Span) -> IResult<Token> {
     // to get better diagnostic message we are going to strip the left whitespaces from the input right now
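Note on the removed `parse_vector_value`: because the unquoted form stops at the first `.` (see `non_dot_word`), embedder or fragment names containing dots or spaces must be quoted, and an unbalanced quote is reported as `VectorFilterInvalidQuotes`. A small illustrative test, assuming the crate's `FilterCondition::parse` entry point; both inputs are taken from the removed snapshot tests:

```rust
use filter_parser::FilterCondition;

#[test]
fn vector_value_quoting_sketch() {
    // Dotted or spaced names are accepted once quoted.
    assert!(FilterCondition::parse(r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#)
        .is_ok());
    // A missing closing quote is rejected with an "inconsistent quotes" error.
    assert!(FilterCondition::parse(r#"_vectors."embedderName EXISTS"#).is_err());
}
```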
@@ -144,21 +99,31 @@ pub fn parse_value(input: Span) -> IResult<Token> {
     }
 
     match parse_geo_radius(input) {
-        Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius)),
+        Ok(_) => {
+            return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
+        }
         // if we encountered a failure it means the user badly wrote a _geoRadius filter.
         // But instead of showing them how to fix his syntax we are going to tell them they should not use this filter as a value.
         Err(e) if e.is_failure() => {
-            return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius))
+            return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
         }
         _ => (),
     }
 
     match parse_geo_bounding_box(input) {
-        Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox)),
+        Ok(_) => {
+            return Err(nom::Err::Failure(Error::new_from_kind(
+                input,
+                ErrorKind::MisusedGeoBoundingBox,
+            )))
+        }
         // if we encountered a failure it means the user badly wrote a _geoBoundingBox filter.
         // But instead of showing them how to fix his syntax we are going to tell them they should not use this filter as a value.
         Err(e) if e.is_failure() => {
-            return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox))
+            return Err(nom::Err::Failure(Error::new_from_kind(
+                input,
+                ErrorKind::MisusedGeoBoundingBox,
+            )))
         }
         _ => (),
     }
@@ -129,7 +129,6 @@ fn main() {
             &mut new_fields_ids_map,
             &|| false,
             Progress::default(),
-            None,
         )
         .unwrap();
 
@@ -26,7 +26,7 @@ flate2 = "1.1.2"
 indexmap = "2.9.0"
 meilisearch-auth = { path = "../meilisearch-auth" }
 meilisearch-types = { path = "../meilisearch-types" }
-memmap2 = "0.9.7"
+memmap2 = "0.9.5"
 page_size = "0.6.0"
 rayon = "1.10.0"
 roaring = { version = "0.10.12", features = ["serde"] }
@@ -147,7 +147,6 @@ impl<'a> Dump<'a> {
             canceled_by: task.canceled_by,
             details: task.details,
             status: task.status,
-            network: task.network,
             kind: match task.kind {
                 KindDump::DocumentImport {
                     primary_key,
@@ -198,10 +197,9 @@ impl<'a> Dump<'a> {
                     index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
                     primary_key,
                 },
-                KindDump::IndexUpdate { primary_key, uid } => KindWithContent::IndexUpdate {
+                KindDump::IndexUpdate { primary_key } => KindWithContent::IndexUpdate {
                     index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
                     primary_key,
-                    new_index_uid: uid,
                 },
                 KindDump::IndexSwap { swaps } => KindWithContent::IndexSwap { swaps },
                 KindDump::TaskCancelation { query, tasks } => {
@@ -67,8 +67,6 @@ pub enum Error {
     SwapDuplicateIndexesFound(Vec<String>),
     #[error("Index `{0}` not found.")]
     SwapIndexNotFound(String),
-    #[error("Cannot rename `{0}` to `{1}` as the index already exists. Hint: You can remove `{1}` first and then do your remove.")]
-    SwapIndexFoundDuringRename(String, String),
     #[error("Meilisearch cannot receive write operations because the limit of the task database has been reached. Please delete tasks to continue performing write operations.")]
     NoSpaceLeftInTaskQueue,
     #[error(
@@ -76,10 +74,6 @@ pub enum Error {
         .0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
     )]
     SwapIndexesNotFound(Vec<String>),
-    #[error("The following indexes are being renamed but cannot because their new name conflicts with an already existing index: {}. Renaming doesn't overwrite the other index name.",
-        .0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
-    )]
-    SwapIndexesFoundDuringRename(Vec<String>),
     #[error("Corrupted dump.")]
     CorruptedDump,
     #[error(
@@ -209,8 +203,6 @@ impl Error {
             | Error::SwapIndexNotFound(_)
             | Error::NoSpaceLeftInTaskQueue
             | Error::SwapIndexesNotFound(_)
-            | Error::SwapIndexFoundDuringRename(_, _)
-            | Error::SwapIndexesFoundDuringRename(_)
             | Error::CorruptedDump
             | Error::InvalidTaskDate { .. }
             | Error::InvalidTaskUid { .. }
@@ -279,8 +271,6 @@ impl ErrorCode for Error {
             Error::SwapDuplicateIndexFound(_) => Code::InvalidSwapDuplicateIndexFound,
             Error::SwapIndexNotFound(_) => Code::IndexNotFound,
             Error::SwapIndexesNotFound(_) => Code::IndexNotFound,
-            Error::SwapIndexFoundDuringRename(_, _) => Code::IndexAlreadyExists,
-            Error::SwapIndexesFoundDuringRename(_) => Code::IndexAlreadyExists,
             Error::InvalidTaskDate { field, .. } => (*field).into(),
             Error::InvalidTaskUid { .. } => Code::InvalidTaskUids,
             Error::InvalidBatchUid { .. } => Code::InvalidBatchUids,
@@ -1,7 +1,6 @@
 use std::sync::{Arc, RwLock};
 
-use meilisearch_types::enterprise_edition::network::Network;
-use meilisearch_types::features::{InstanceTogglableFeatures, RuntimeTogglableFeatures};
+use meilisearch_types::features::{InstanceTogglableFeatures, Network, RuntimeTogglableFeatures};
 use meilisearch_types::heed::types::{SerdeJson, Str};
 use meilisearch_types::heed::{Database, Env, RwTxn, WithoutTls};
 
@@ -86,7 +85,7 @@ impl RoFeatures {
             Ok(())
         } else {
             Err(FeatureNotEnabledError {
-                disabled_action: "Using `CONTAINS` in a filter",
+                disabled_action: "Using `CONTAINS` or `STARTS WITH` in a filter",
                 feature: "contains filter",
                 issue_link: "https://github.com/orgs/meilisearch/discussions/763",
             }
@@ -183,7 +182,6 @@ impl FeatureData {
             ..persisted_features
         }));
 
-        // Once this is stabilized, network should be stored along with webhooks in index-scheduler's persisted database
         let network_db = runtime_features_db.remap_data_type::<SerdeJson<Network>>();
         let network: Network = network_db.get(wtxn, db_keys::NETWORK)?.unwrap_or_default();
 
@@ -71,7 +71,7 @@ pub struct IndexMapper {
     /// Path to the folder where the LMDB environments of each index are.
     base_path: PathBuf,
     /// The map size an index is opened with on the first time.
-    pub(crate) index_base_map_size: usize,
+    index_base_map_size: usize,
     /// The quantity by which the map size of an index is incremented upon reopening, in bytes.
     index_growth_amount: usize,
     /// Whether we open a meilisearch index with the MDB_WRITEMAP option or not.
@@ -526,20 +526,6 @@ impl IndexMapper {
         Ok(())
     }
 
-    /// Rename an index.
-    pub fn rename(&self, wtxn: &mut RwTxn, current: &str, new: &str) -> Result<()> {
-        let uuid = self
-            .index_mapping
-            .get(wtxn, current)?
-            .ok_or_else(|| Error::IndexNotFound(current.to_string()))?;
-        if self.index_mapping.get(wtxn, new)?.is_some() {
-            return Err(Error::IndexAlreadyExists(new.to_string()));
-        }
-        self.index_mapping.delete(wtxn, current)?;
-        self.index_mapping.put(wtxn, new, &uuid)?;
-        Ok(())
-    }
-
     /// The stats of an index.
     ///
     /// If available in the cache, they are directly returned.
@@ -20,17 +20,16 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
 
     let IndexScheduler {
         cleanup_enabled: _,
-        experimental_no_edition_2024_for_dumps: _,
         processing_tasks,
         env,
         version,
         queue,
         scheduler,
-        persisted,
 
         index_mapper,
         features: _,
-        webhooks: _,
+        webhook_url: _,
+        webhook_authorization_header: _,
         test_breakpoint_sdr: _,
         planned_failures: _,
         run_loop_iteration: _,
@@ -62,13 +61,6 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
     }
     snap.push_str("\n----------------------------------------------------------------------\n");
 
-    let persisted_db_snapshot = snapshot_persisted_db(&rtxn, persisted);
-    if !persisted_db_snapshot.is_empty() {
-        snap.push_str("### Persisted:\n");
-        snap.push_str(&persisted_db_snapshot);
-        snap.push_str("----------------------------------------------------------------------\n");
-    }
-
     snap.push_str("### All Tasks:\n");
     snap.push_str(&snapshot_all_tasks(&rtxn, queue.tasks.all_tasks));
     snap.push_str("----------------------------------------------------------------------\n");
@@ -207,16 +199,6 @@ pub fn snapshot_date_db(rtxn: &RoTxn, db: Database<BEI128, CboRoaringBitmapCodec
     snap
 }
 
-pub fn snapshot_persisted_db(rtxn: &RoTxn, db: &Database<Str, Str>) -> String {
-    let mut snap = String::new();
-    let iter = db.iter(rtxn).unwrap();
-    for next in iter {
-        let (key, value) = next.unwrap();
-        snap.push_str(&format!("{key}: {value}\n"));
-    }
-    snap
-}
-
 pub fn snapshot_task(task: &Task) -> String {
     let mut snap = String::new();
     let Task {
@@ -230,7 +212,6 @@ pub fn snapshot_task(task: &Task) -> String {
         details,
         status,
         kind,
-        network,
     } = task;
     snap.push('{');
     snap.push_str(&format!("uid: {uid}, "));
@@ -248,9 +229,6 @@ pub fn snapshot_task(task: &Task) -> String {
         snap.push_str(&format!("details: {}, ", &snapshot_details(details)));
     }
     snap.push_str(&format!("kind: {kind:?}"));
-    if let Some(network) = network {
-        snap.push_str(&format!("network: {network:?}, "))
-    }
 
     snap.push('}');
     snap
@@ -278,8 +256,8 @@ fn snapshot_details(d: &Details) -> String {
         Details::SettingsUpdate { settings } => {
             format!("{{ settings: {settings:?} }}")
         }
-        Details::IndexInfo { primary_key, new_index_uid, old_index_uid } => {
-            format!("{{ primary_key: {primary_key:?}, old_new_uid: {old_index_uid:?}, new_index_uid: {new_index_uid:?} }}")
+        Details::IndexInfo { primary_key } => {
+            format!("{{ primary_key: {primary_key:?} }}")
         }
         Details::DocumentDeletion {
             provided_ids: received_document_ids,
@@ -332,7 +310,6 @@ pub fn snapshot_status(
     }
     snap
 }
-
 pub fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCodec>) -> String {
     let mut snap = String::new();
     let iter = db.iter(rtxn).unwrap();
@@ -353,7 +330,6 @@ pub fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>)
     }
     snap
 }
-
 pub fn snapshot_canceled_by(rtxn: &RoTxn, db: Database<BEU32, RoaringBitmapCodec>) -> String {
     let mut snap = String::new();
     let iter = db.iter(rtxn).unwrap();
@@ -51,9 +51,8 @@ pub use features::RoFeatures;
 use flate2::bufread::GzEncoder;
 use flate2::Compression;
 use meilisearch_types::batches::Batch;
-use meilisearch_types::enterprise_edition::network::Network;
 use meilisearch_types::features::{
-    ChatCompletionSettings, InstanceTogglableFeatures, RuntimeTogglableFeatures,
+    ChatCompletionSettings, InstanceTogglableFeatures, Network, RuntimeTogglableFeatures,
 };
 use meilisearch_types::heed::byteorder::BE;
 use meilisearch_types::heed::types::{DecodeIgnore, SerdeJson, Str, I128};
@@ -65,17 +64,14 @@ use meilisearch_types::milli::vector::{
 };
 use meilisearch_types::milli::{self, Index};
 use meilisearch_types::task_view::TaskView;
-use meilisearch_types::tasks::{KindWithContent, Task, TaskNetwork};
-use meilisearch_types::webhooks::{Webhook, WebhooksDumpView, WebhooksView};
+use meilisearch_types::tasks::{KindWithContent, Task};
 use milli::vector::db::IndexEmbeddingConfig;
 use processing::ProcessingTasks;
 pub use queue::Query;
 use queue::Queue;
 use roaring::RoaringBitmap;
 use scheduler::Scheduler;
-use serde::{Deserialize, Serialize};
 use time::OffsetDateTime;
-use uuid::Uuid;
 use versioning::Versioning;
 
 use crate::index_mapper::IndexMapper;
@@ -84,15 +80,7 @@ use crate::utils::clamp_to_page_size;
 pub(crate) type BEI128 = I128<BE>;
 
 const TASK_SCHEDULER_SIZE_THRESHOLD_PERCENT_INT: u64 = 40;
-
-mod db_name {
-    pub const CHAT_SETTINGS: &str = "chat-settings";
-    pub const PERSISTED: &str = "persisted";
-}
-
-mod db_keys {
-    pub const WEBHOOKS: &str = "webhooks";
-}
+const CHAT_SETTINGS_DB_NAME: &str = "chat-settings";
 
 #[derive(Debug)]
 pub struct IndexSchedulerOptions {
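Note: the `db_name`/`db_keys` constants removed above back a single `persisted` LMDB database in which heterogeneous values live under string keys; the webhooks are stored as JSON under the `webhooks` key and remapped on read. A simplified sketch of that pattern, assuming a hypothetical helper and the scheduler's internal `Webhooks` type (the real code lives in the scheduler constructor further down):

```rust
use meilisearch_types::heed::types::{SerdeJson, Str};
use meilisearch_types::heed::{Database, Env, RwTxn, WithoutTls};

// Hypothetical helper, not part of the diff: `Webhooks` stands for the
// scheduler's internal, serde-deserializable webhooks container.
fn load_webhooks(
    env: &Env<WithoutTls>,
    wtxn: &mut RwTxn,
) -> meilisearch_types::heed::Result<Webhooks> {
    // One database, string keys and string values by default.
    let persisted: Database<Str, Str> = env.create_database(wtxn, Some("persisted"))?;
    // Re-interpret the value type for the key we care about.
    let webhooks_db = persisted.remap_data_type::<SerdeJson<Webhooks>>();
    Ok(webhooks_db.get(wtxn, "webhooks")?.unwrap_or_default())
}
```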
@@ -110,10 +98,10 @@ pub struct IndexSchedulerOptions {
     pub snapshots_path: PathBuf,
     /// The path to the folder containing the dumps.
     pub dumps_path: PathBuf,
-    /// The webhook url that was set by the CLI.
-    pub cli_webhook_url: Option<String>,
-    /// The Authorization header to send to the webhook URL that was set by the CLI.
-    pub cli_webhook_authorization: Option<String>,
+    /// The URL on which we must send the tasks statuses
+    pub webhook_url: Option<String>,
+    /// The value we will send into the Authorization HTTP header on the webhook URL
+    pub webhook_authorization_header: Option<String>,
     /// The maximum size, in bytes, of the task index.
     pub task_db_size: usize,
     /// The size, in bytes, with which a meilisearch index is opened the first time of each meilisearch index.
@@ -180,14 +168,10 @@ pub struct IndexScheduler {
     /// Whether we should automatically cleanup the task queue or not.
     pub(crate) cleanup_enabled: bool,
 
-    /// Whether we should use the old document indexer or the new one.
-    pub(crate) experimental_no_edition_2024_for_dumps: bool,
-
-    /// A database to store single-keyed data that is persisted across restarts.
-    persisted: Database<Str, Str>,
-
-    /// Webhook, loaded and stored in the `persisted` database
-    webhooks: Arc<Webhooks>,
+    /// The webhook url we should send tasks to after processing every batches.
+    pub(crate) webhook_url: Option<String>,
+    /// The Authorization header to send to the webhook URL.
+    pub(crate) webhook_authorization_header: Option<String>,
 
     /// A map to retrieve the runtime representation of an embedder depending on its configuration.
     ///
@@ -226,10 +210,8 @@ impl IndexScheduler {
 
             index_mapper: self.index_mapper.clone(),
             cleanup_enabled: self.cleanup_enabled,
-            experimental_no_edition_2024_for_dumps: self.experimental_no_edition_2024_for_dumps,
-            persisted: self.persisted,
-
-            webhooks: self.webhooks.clone(),
+            webhook_url: self.webhook_url.clone(),
+            webhook_authorization_header: self.webhook_authorization_header.clone(),
             embedders: self.embedders.clone(),
             #[cfg(test)]
             test_breakpoint_sdr: self.test_breakpoint_sdr.clone(),
@@ -248,7 +230,6 @@ impl IndexScheduler {
             + IndexMapper::nb_db()
             + features::FeatureData::nb_db()
             + 1 // chat-prompts
-            + 1 // persisted
     }
 
     /// Create an index scheduler and start its run loop.
@@ -299,18 +280,10 @@ impl IndexScheduler {
         let version = versioning::Versioning::new(&env, from_db_version)?;
 
         let mut wtxn = env.write_txn()?;
 
         let features = features::FeatureData::new(&env, &mut wtxn, options.instance_features)?;
         let queue = Queue::new(&env, &mut wtxn, &options)?;
         let index_mapper = IndexMapper::new(&env, &mut wtxn, &options, budget)?;
-        let chat_settings = env.create_database(&mut wtxn, Some(db_name::CHAT_SETTINGS))?;
-
-        let persisted = env.create_database(&mut wtxn, Some(db_name::PERSISTED))?;
-        let webhooks_db = persisted.remap_data_type::<SerdeJson<Webhooks>>();
-        let mut webhooks = webhooks_db.get(&wtxn, db_keys::WEBHOOKS)?.unwrap_or_default();
-        webhooks
-            .with_cli(options.cli_webhook_url.clone(), options.cli_webhook_authorization.clone());
-
+        let chat_settings = env.create_database(&mut wtxn, Some(CHAT_SETTINGS_DB_NAME))?;
 
         wtxn.commit()?;
 
         // allow unreachable_code to get rids of the warning in the case of a test build.
@@ -323,11 +296,8 @@ impl IndexScheduler {
             index_mapper,
             env,
             cleanup_enabled: options.cleanup_enabled,
-            experimental_no_edition_2024_for_dumps: options
-                .indexer_config
-                .experimental_no_edition_2024_for_dumps,
-            persisted,
-            webhooks: Arc::new(webhooks),
+            webhook_url: options.webhook_url,
+            webhook_authorization_header: options.webhook_authorization_header,
             embedders: Default::default(),
 
             #[cfg(test)]
@@ -624,11 +594,6 @@ impl IndexScheduler {
         Ok(nbr_index_processing_tasks > 0)
     }
 
-    /// Whether the index should use the old document indexer.
-    pub fn no_edition_2024_for_dumps(&self) -> bool {
-        self.experimental_no_edition_2024_for_dumps
-    }
-
     /// Return the tasks matching the query from the user's point of view along
     /// with the total number of tasks matching the query, ignoring from and limit.
     ///
@@ -667,16 +632,6 @@ impl IndexScheduler {
         self.queue.get_task_ids_from_authorized_indexes(&rtxn, query, filters, &processing)
     }
 
-    pub fn set_task_network(&self, task_id: TaskId, network: TaskNetwork) -> Result<()> {
-        let mut wtxn = self.env.write_txn()?;
-        let mut task =
-            self.queue.tasks.get_task(&wtxn, task_id)?.ok_or(Error::TaskNotFound(task_id))?;
-        task.network = Some(network);
-        self.queue.tasks.all_tasks.put(&mut wtxn, &task_id, &task)?;
-        wtxn.commit()?;
-        Ok(())
-    }
-
     /// Return the batches matching the query from the user's point of view along
     /// with the total number of batches matching the query, ignoring from and limit.
    ///
@@ -785,92 +740,86 @@ impl IndexScheduler {
Ok(())
}

-/// Once the tasks changes have been committed we must send all the tasks that were updated to our webhooks
-fn notify_webhooks(&self, updated: RoaringBitmap) {
-struct TaskReader<'a, 'b> {
-rtxn: &'a RoTxn<'a>,
-index_scheduler: &'a IndexScheduler,
-tasks: &'b mut roaring::bitmap::Iter<'b>,
-buffer: Vec<u8>,
-written: usize,
-}
+/// Once the tasks changes have been committed we must send all the tasks that were updated to our webhook if there is one.
+fn notify_webhook(&self, updated: &RoaringBitmap) -> Result<()> {
+if let Some(ref url) = self.webhook_url {
+struct TaskReader<'a, 'b> {
+rtxn: &'a RoTxn<'a>,
+index_scheduler: &'a IndexScheduler,
+tasks: &'b mut roaring::bitmap::Iter<'b>,
+buffer: Vec<u8>,
+written: usize,
+}

impl Read for TaskReader<'_, '_> {
fn read(&mut self, mut buf: &mut [u8]) -> std::io::Result<usize> {
if self.buffer.is_empty() {
match self.tasks.next() {
None => return Ok(0),
Some(task_id) => {
let task = self
.index_scheduler
.queue
.tasks
.get_task(self.rtxn, task_id)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))?
.ok_or_else(|| {
-io::Error::new(io::ErrorKind::Other, Error::CorruptedTaskQueue)
-})?;
+io::Error::new(
+io::ErrorKind::Other,
+Error::CorruptedTaskQueue,
+)
+})?;

-serde_json::to_writer(&mut self.buffer, &TaskView::from_task(&task))?;
-self.buffer.push(b'\n');
+serde_json::to_writer(
+&mut self.buffer,
+&TaskView::from_task(&task),
+)?;
+self.buffer.push(b'\n');
+}
}
}

+let mut to_write = &self.buffer[self.written..];
+let wrote = io::copy(&mut to_write, &mut buf)?;
+self.written += wrote as usize;

+// we wrote everything and must refresh our buffer on the next call
+if self.written == self.buffer.len() {
+self.written = 0;
+self.buffer.clear();
+}

+Ok(wrote as usize)
}
+}

-let mut to_write = &self.buffer[self.written..];
-let wrote = io::copy(&mut to_write, &mut buf)?;
-self.written += wrote as usize;
+let rtxn = self.env.read_txn()?;

-// we wrote everything and must refresh our buffer on the next call
-if self.written == self.buffer.len() {
-self.written = 0;
-self.buffer.clear();
-}
+let task_reader = TaskReader {
+rtxn: &rtxn,
+index_scheduler: self,
+tasks: &mut updated.into_iter(),
+buffer: Vec::with_capacity(50), // on average a task is around ~100 bytes
+written: 0,
+};

-Ok(wrote as usize)
+// let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
+let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
+let request = ureq::post(url)
+.timeout(Duration::from_secs(30))
+.set("Content-Encoding", "gzip")
+.set("Content-Type", "application/x-ndjson");
+let request = match &self.webhook_authorization_header {
+Some(header) => request.set("Authorization", header),
+None => request,
+};

+if let Err(e) = request.send(reader) {
+tracing::error!("While sending data to the webhook: {e}");
}
}

-let webhooks = self.webhooks.get_all();
-if webhooks.is_empty() {
-return;
-}
-let this = self.private_clone();
-// We must take the RoTxn before entering the thread::spawn otherwise another batch may be
-// processed before we had the time to take our txn.
-let rtxn = match self.env.clone().static_read_txn() {
-Ok(rtxn) => rtxn,
-Err(e) => {
-tracing::error!("Couldn't get an rtxn to notify the webhook: {e}");
-return;
-}
-};
-
-std::thread::spawn(move || {
-for (uuid, Webhook { url, headers }) in webhooks.iter() {
-let task_reader = TaskReader {
-rtxn: &rtxn,
-index_scheduler: &this,
-tasks: &mut updated.iter(),
-buffer: Vec::with_capacity(page_size::get()),
-written: 0,
-};
-
-let reader = GzEncoder::new(BufReader::new(task_reader), Compression::default());
-
-let mut request = ureq::post(url)
-.timeout(Duration::from_secs(30))
-.set("Content-Encoding", "gzip")
-.set("Content-Type", "application/x-ndjson");
-for (header_name, header_value) in headers.iter() {
-request = request.set(header_name, header_value);
-}
-
-if let Err(e) = request.send(reader) {
-tracing::error!("While sending data to the webhook {uuid}: {e}");
-}
-}
-});
+Ok(())
}

pub fn index_stats(&self, index_uid: &str) -> Result<IndexStats> {
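Note: both versions of the notifier in the hunk above stream the updated tasks to the webhook as gzip-compressed NDJSON (one JSON task view per line) with `Content-Encoding: gzip`. The following is a minimal, hypothetical sketch of that payload shape only; it assumes the `flate2` and `serde_json` crates and uses plain `serde_json::Value` in place of the real `TaskView` type, so it is an illustration rather than the actual implementation.

use std::io::{self, Write};

use flate2::write::GzEncoder;
use flate2::Compression;

fn build_webhook_body(task_views: &[serde_json::Value]) -> io::Result<Vec<u8>> {
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    for view in task_views {
        // Mirrors the `serde_json::to_writer(...)` + `buffer.push(b'\n')` calls
        // in the hunk above: one task per line (NDJSON).
        serde_json::to_writer(&mut encoder, view)
            .map_err(|err| io::Error::new(io::ErrorKind::Other, err))?;
        encoder.write_all(b"\n")?;
    }
    // `finish` flushes the gzip stream and returns the compressed bytes.
    encoder.finish()
}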
@@ -901,29 +850,6 @@ impl IndexScheduler {
self.features.network()
}

-pub fn update_runtime_webhooks(&self, runtime: RuntimeWebhooks) -> Result<()> {
-let webhooks = Webhooks::from_runtime(runtime);
-let mut wtxn = self.env.write_txn()?;
-let webhooks_db = self.persisted.remap_data_type::<SerdeJson<Webhooks>>();
-webhooks_db.put(&mut wtxn, db_keys::WEBHOOKS, &webhooks)?;
-wtxn.commit()?;
-self.webhooks.update_runtime(webhooks.into_runtime());
-Ok(())
-}
-
-pub fn webhooks_dump_view(&self) -> WebhooksDumpView {
-// We must not dump the cli api key
-WebhooksDumpView { webhooks: self.webhooks.get_runtime() }
-}
-
-pub fn webhooks_view(&self) -> WebhooksView {
-WebhooksView { webhooks: self.webhooks.get_all() }
-}
-
-pub fn retrieve_runtime_webhooks(&self) -> RuntimeWebhooks {
-self.webhooks.get_runtime()
-}
-
pub fn embedders(
&self,
index_uid: String,
@@ -1052,72 +978,3 @@ pub struct IndexStats {
/// Internal stats computed from the index.
pub inner_stats: index_mapper::IndexStats,
}
-
-/// These structure are not meant to be exposed to the end user, if needed, use the meilisearch-types::webhooks structure instead.
-/// /!\ Everytime you deserialize this structure you should fill the cli_webhook later on with the `with_cli` method. /!\
-#[derive(Debug, Serialize, Deserialize, Default)]
-#[serde(rename_all = "camelCase")]
-struct Webhooks {
-// The cli webhook should *never* be stored in a database.
-// It represent a state that only exists for this execution of meilisearch
-#[serde(skip)]
-pub cli: Option<CliWebhook>,
-
-#[serde(default)]
-pub runtime: RwLock<RuntimeWebhooks>,
-}
-
-type RuntimeWebhooks = BTreeMap<Uuid, Webhook>;
-
-impl Webhooks {
-pub fn with_cli(&mut self, url: Option<String>, auth: Option<String>) {
-if let Some(url) = url {
-let webhook = CliWebhook { url, auth };
-self.cli = Some(webhook);
-}
-}
-
-pub fn from_runtime(webhooks: RuntimeWebhooks) -> Self {
-Self { cli: None, runtime: RwLock::new(webhooks) }
-}
-
-pub fn into_runtime(self) -> RuntimeWebhooks {
-// safe because we own self and it cannot be cloned
-self.runtime.into_inner().unwrap()
-}
-
-pub fn update_runtime(&self, webhooks: RuntimeWebhooks) {
-*self.runtime.write().unwrap() = webhooks;
-}
-
-/// Returns all the webhooks in an unified view. The cli webhook is represented with an uuid set to 0
-pub fn get_all(&self) -> BTreeMap<Uuid, Webhook> {
-self.cli
-.as_ref()
-.map(|wh| (Uuid::nil(), Webhook::from(wh)))
-.into_iter()
-.chain(self.runtime.read().unwrap().iter().map(|(uuid, wh)| (*uuid, wh.clone())))
-.collect()
-}
-
-/// Returns all the runtime webhooks.
-pub fn get_runtime(&self) -> BTreeMap<Uuid, Webhook> {
-self.runtime.read().unwrap().iter().map(|(uuid, wh)| (*uuid, wh.clone())).collect()
-}
-}
-
-#[derive(Debug, Serialize, Deserialize, Default, Clone, PartialEq)]
-struct CliWebhook {
-pub url: String,
-pub auth: Option<String>,
-}
-
-impl From<&CliWebhook> for Webhook {
-fn from(webhook: &CliWebhook) -> Self {
-let mut headers = BTreeMap::new();
-if let Some(ref auth) = webhook.auth {
-headers.insert("Authorization".to_string(), auth.to_string());
-}
-Self { url: webhook.url.to_string(), headers }
-}
-}
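Note: the removed `Webhooks::get_all` above merges the CLI-provided webhook, exposed under the nil UUID (all zeroes), with the runtime webhooks. The following is a simplified, hypothetical sketch of that merge with stand-in types; only the `uuid` crate and the standard library are assumed, and the `Webhook` struct here is not the real one.

use std::collections::BTreeMap;

use uuid::Uuid;

#[derive(Clone)]
struct Webhook {
    url: String,
    headers: BTreeMap<String, String>,
}

fn unified_view(
    cli: Option<&Webhook>,
    runtime: &BTreeMap<Uuid, Webhook>,
) -> BTreeMap<Uuid, Webhook> {
    // When present, the CLI webhook always sorts first, because Uuid::nil()
    // is the smallest possible key in the BTreeMap.
    cli.map(|wh| (Uuid::nil(), wh.clone()))
        .into_iter()
        .chain(runtime.iter().map(|(uuid, wh)| (*uuid, wh.clone())))
        .collect()
}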
@@ -108,7 +108,6 @@ make_enum_progress! {
DumpTheBatches,
DumpTheIndexes,
DumpTheExperimentalFeatures,
-DumpTheWebhooks,
CompressTheDump,
}
}
@@ -334,11 +334,11 @@ fn query_batches_special_rules() {
let kind = index_creation_task("doggo", "sheep");
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()) }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -442,7 +442,7 @@ fn query_batches_canceled_by() {
let kind = index_creation_task("doggo", "sheep");
let _ = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -279,7 +279,6 @@ impl Queue {
details: kind.default_details(),
status: Status::Enqueued,
kind: kind.clone(),
-network: None,
};
// For deletion and cancelation tasks, we want to make extra sure that they
// don't attempt to delete/cancel tasks that are newer than themselves.
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
|
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
|
||||||
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
|
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
@@ -49,7 +49,7 @@ catto: { number_of_documents: 0, field_distribution: {} }
|
|||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Batches:
|
### All Batches:
|
||||||
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
||||||
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
|
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Batch to tasks mapping:
|
### Batch to tasks mapping:
|
||||||
0 [0,]
|
0 [0,]
|
||||||
|
|||||||
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued []
|
enqueued []
|
||||||
|
|||||||
@@ -1,12 +1,13 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/batches_test.rs
|
source: crates/index-scheduler/src/queue/batches_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,]
|
enqueued [0,]
|
||||||
|
|||||||
@@ -1,13 +1,14 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/batches_test.rs
|
source: crates/index-scheduler/src/queue/batches_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,]
|
enqueued [0,1,]
|
||||||
|
|||||||
@@ -1,14 +1,15 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/batches_test.rs
|
source: crates/index-scheduler/src/queue/batches_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,]
|
enqueued [0,1,2,]
|
||||||
|
|||||||
@@ -7,9 +7,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
|
|||||||
{uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
|
{uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [1,2,]
|
enqueued [1,2,]
|
||||||
|
|||||||
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued []
|
enqueued []
|
||||||
|
|||||||
@@ -1,14 +1,15 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/batches_test.rs
|
source: crates/index-scheduler/src/queue/batches_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,]
|
enqueued [0,1,2,]
|
||||||
|
|||||||
@@ -6,10 +6,10 @@ source: crates/index-scheduler/src/queue/batches_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
|
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
|
||||||
3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
|
3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued []
|
enqueued []
|
||||||
@@ -54,8 +54,8 @@ doggo: { number_of_documents: 0, field_distribution: {} }
|
|||||||
### All Batches:
|
### All Batches:
|
||||||
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
||||||
1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
|
1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
|
||||||
2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 2 of type `indexSwap` that cannot be batched with any other task.", }
|
2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 2 of type `indexSwap` that cannot be batched with any other task.", }
|
||||||
3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 3 of type `indexSwap` that cannot be batched with any other task.", }
|
3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 3 of type `indexSwap` that cannot be batched with any other task.", }
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Batch to tasks mapping:
|
### Batch to tasks mapping:
|
||||||
0 [0,]
|
0 [0,]
|
||||||
|
|||||||
@@ -1,15 +1,16 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/batches_test.rs
|
source: crates/index-scheduler/src/queue/batches_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
|
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
|
||||||
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
|
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,3,]
|
enqueued [0,1,2,3,]
|
||||||
|
|||||||
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
|
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
|
||||||
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
|
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
@@ -49,7 +49,7 @@ catto: { number_of_documents: 0, field_distribution: {} }
|
|||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Batches:
|
### All Batches:
|
||||||
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
|
||||||
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
|
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Batch to tasks mapping:
|
### Batch to tasks mapping:
|
||||||
0 [0,]
|
0 [0,]
|
||||||
|
|||||||
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued []
|
enqueued []
|
||||||
|
|||||||
@@ -1,12 +1,13 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/tasks_test.rs
|
source: crates/index-scheduler/src/queue/tasks_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,]
|
enqueued [0,]
|
||||||
|
|||||||
@@ -1,13 +1,14 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/tasks_test.rs
|
source: crates/index-scheduler/src/queue/tasks_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,]
|
enqueued [0,1,]
|
||||||
|
|||||||
@@ -1,14 +1,15 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/tasks_test.rs
|
source: crates/index-scheduler/src/queue/tasks_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
|
||||||
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,]
|
enqueued [0,1,2,]
|
||||||
|
|||||||
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
|
|||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued []
|
enqueued []
|
||||||
|
|||||||
@@ -1,14 +1,15 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/tasks_test.rs
|
source: crates/index-scheduler/src/queue/tasks_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,]
|
enqueued [0,1,2,]
|
||||||
|
|||||||
@@ -1,15 +1,16 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/tasks_test.rs
|
source: crates/index-scheduler/src/queue/tasks_test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
|
||||||
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
|
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
|
||||||
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
|
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### Status:
|
### Status:
|
||||||
enqueued [0,1,2,3,]
|
enqueued [0,1,2,3,]
|
||||||
|
|||||||
@@ -1,12 +1,13 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/test.rs
|
source: crates/index-scheduler/src/queue/test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
### Autobatching Enabled = true
|
### Autobatching Enabled = true
|
||||||
### Processing batch None:
|
### Processing batch None:
|
||||||
[]
|
[]
|
||||||
----------------------------------------------------------------------
|
----------------------------------------------------------------------
|
||||||
### All Tasks:
|
### All Tasks:
|
||||||
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
|
||||||
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
|
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
|
||||||
2 {uid: 2, status: enqueued, details: { received_documents: 50, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 50, allow_index_creation: true }}
|
2 {uid: 2, status: enqueued, details: { received_documents: 50, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 50, allow_index_creation: true }}
|
||||||
3 {uid: 3, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 5000, allow_index_creation: true }}
|
3 {uid: 3, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 5000, allow_index_creation: true }}
|
||||||
|
|||||||
@@ -1,5 +1,6 @@
|
|||||||
---
|
---
|
||||||
source: crates/index-scheduler/src/queue/test.rs
|
source: crates/index-scheduler/src/queue/test.rs
|
||||||
|
snapshot_kind: text
|
||||||
---
|
---
|
||||||
[
|
[
|
||||||
{
|
{
|
||||||
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
|
|||||||
"canceledBy": null,
|
"canceledBy": null,
|
||||||
"details": {
|
"details": {
|
||||||
"IndexInfo": {
|
"IndexInfo": {
|
||||||
"primary_key": null,
|
"primary_key": null
|
||||||
"new_index_uid": null,
|
|
||||||
"old_index_uid": null
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"status": "enqueued",
|
"status": "enqueued",
|
||||||
|
|||||||
@@ -1,5 +1,6 @@
 ---
 source: crates/index-scheduler/src/queue/test.rs
+snapshot_kind: text
 ---
 [
 {
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "succeeded",
@@ -40,9 +39,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "failed",
@@ -63,9 +60,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -86,9 +81,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -1,5 +1,6 @@
 ---
 source: crates/index-scheduler/src/queue/test.rs
+snapshot_kind: text
 ---
 [
 {
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -35,9 +34,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -1,5 +1,6 @@
 ---
 source: crates/index-scheduler/src/queue/test.rs
+snapshot_kind: text
 ---
 [
 {
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "succeeded",
@@ -40,9 +39,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "failed",
@@ -63,9 +60,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -86,9 +81,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -1,5 +1,6 @@
 ---
 source: crates/index-scheduler/src/queue/test.rs
+snapshot_kind: text
 ---
 [
 {
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "succeeded",
@@ -40,9 +39,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "failed",
@@ -63,9 +60,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -86,9 +81,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -1,5 +1,6 @@
 ---
 source: crates/index-scheduler/src/queue/test.rs
+snapshot_kind: text
 ---
 [
 {
@@ -12,9 +13,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "succeeded",
@@ -40,9 +39,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "failed",
@@ -63,9 +60,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
@@ -86,9 +81,7 @@ source: crates/index-scheduler/src/queue/test.rs
 "canceledBy": null,
 "details": {
 "IndexInfo": {
-"primary_key": null,
-"new_index_uid": null,
-"old_index_uid": null
+"primary_key": null
 }
 },
 "status": "enqueued",
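The snapshot diffs above all show the same payload change in the task `details`. As a hedged illustration only (these are standalone structs written for this comparison, not the actual meilisearch-types `Details` enum), the two shapes differ as follows:

```rust
// Illustrative structs mirroring the two `IndexInfo` detail shapes seen in the
// snapshot diffs above: one branch serializes only `primary_key`, the other also
// carries the uid fields used to report an index rename.
#[derive(Debug)]
struct IndexInfoPlain {
    primary_key: Option<String>,
}

#[derive(Debug)]
struct IndexInfoWithRename {
    primary_key: Option<String>,
    new_index_uid: Option<String>,
    old_index_uid: Option<String>,
}

fn main() {
    let with_rename =
        IndexInfoWithRename { primary_key: None, new_index_uid: None, old_index_uid: None };
    let plain = IndexInfoPlain { primary_key: None };
    println!("{with_rename:?} vs {plain:?}");
}
```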
@@ -97,22 +97,7 @@ impl TaskQueue {
 Ok(self.all_tasks.get(rtxn, &task_id)?)
 }

-/// Update the inverted task indexes and write the new value of the task.
-///
-/// The passed `task` object typically comes from a previous transaction, so two kinds of modification might have occurred:
-/// 1. Modification to the `task` object after loading it from the DB (the purpose of this method is to persist these changes)
-/// 2. Modification to the task committed by another transaction in the DB (an annoying consequence of having lost the original
-/// transaction from which the `task` instance was deserialized)
-///
-/// When calling this function, this `task` is modified to take into account any existing `network`
-/// that can have been added since the task was loaded into memory.
-///
-/// Any other modification to the task that was committed from the DB since the parameter was pulled from the DB will be overwritten.
-///
-/// # Errors
-///
-/// - CorruptedTaskQueue: The task doesn't exist in the database
-pub(crate) fn update_task(&self, wtxn: &mut RwTxn, task: &mut Task) -> Result<()> {
+pub(crate) fn update_task(&self, wtxn: &mut RwTxn, task: &Task) -> Result<()> {
 let old_task = self.get_task(wtxn, task.uid)?.ok_or(Error::CorruptedTaskQueue)?;
 let reprocessing = old_task.status != Status::Enqueued;

@@ -172,12 +157,6 @@ impl TaskQueue {
 }
 }

-task.network = match (old_task.network, task.network.take()) {
-(None, None) => None,
-(None, Some(network)) | (Some(network), None) => Some(network),
-(Some(_), Some(network)) => Some(network),
-};
-
 self.all_tasks.put(wtxn, &task.uid, task)?;
 Ok(())
 }
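For readers comparing the two branches, here is a minimal, self-contained sketch of the merge rule that the `&mut Task` version of `update_task` applies before persisting a task: the value carried by the in-memory task wins, otherwise whatever another transaction already stored is kept. `Network` below is a stand-in type invented for the sketch, not the real meilisearch-types definition.

```rust
// Stand-in type for the network metadata attached to a task.
#[derive(Debug, Clone, PartialEq)]
struct Network(String);

// Merge rule from the hunk above: prefer the in-memory value, fall back to the
// value already persisted by another transaction.
fn merge_network(persisted: Option<Network>, in_memory: Option<Network>) -> Option<Network> {
    match (persisted, in_memory) {
        (None, None) => None,
        (None, Some(network)) | (Some(network), None) => Some(network),
        // Both sides are set: the in-memory value wins.
        (Some(_), Some(network)) => Some(network),
    }
}

fn main() {
    let stored = Some(Network("remote-a".into()));
    // Nothing was set in memory, so the persisted value is kept.
    assert_eq!(merge_network(stored.clone(), None), stored);
}
```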
@@ -304,11 +304,11 @@ fn query_tasks_special_rules() {
 let kind = index_creation_task("doggo", "sheep");
 let _task = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -399,7 +399,7 @@ fn query_tasks_canceled_by() {
 let kind = index_creation_task("doggo", "sheep");
 let _ = index_scheduler.register(kind, None, false).unwrap();
 let kind = KindWithContent::IndexSwap {
-swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
+swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
 };
 let _task = index_scheduler.register(kind, None, false).unwrap();

@@ -287,7 +287,7 @@ impl BatchKind {
 };

 match (self, autobatch_kind) {
 // We don't batch any of these operations
 (this, K::IndexCreation | K::IndexUpdate | K::IndexSwap | K::DocumentEdition) => Break((this, BatchStopReason::TaskCannotBeBatched { kind, id })),
 // We must not batch tasks that don't have the same index creation rights if the index doesn't already exists.
 (this, kind) if !index_already_exists && this.allow_index_creation() == Some(false) && kind.allow_index_creation() == Some(true) => {
@@ -75,11 +75,7 @@ fn idx_create() -> KindWithContent {
 }

 fn idx_update() -> KindWithContent {
-KindWithContent::IndexUpdate {
-index_uid: String::from("doggo"),
-primary_key: None,
-new_index_uid: None,
-}
+KindWithContent::IndexUpdate { index_uid: String::from("doggo"), primary_key: None }
 }

 fn idx_del() -> KindWithContent {
@@ -88,10 +84,7 @@ fn idx_del() -> KindWithContent {

 fn idx_swap() -> KindWithContent {
 KindWithContent::IndexSwap {
-swaps: vec![IndexSwap {
-indexes: (String::from("doggo"), String::from("catto")),
-rename: false,
-}],
+swaps: vec![IndexSwap { indexes: (String::from("doggo"), String::from("catto")) }],
 }
 }

@@ -38,7 +38,6 @@ pub(crate) enum Batch {
 IndexUpdate {
 index_uid: String,
 primary_key: Option<String>,
-new_index_uid: Option<String>,
 task: Task,
 },
 IndexDeletion {
@@ -406,13 +405,11 @@ impl IndexScheduler {
 let mut task =
 self.queue.tasks.get_task(rtxn, id)?.ok_or(Error::CorruptedTaskQueue)?;
 current_batch.processing(Some(&mut task));
-let (primary_key, new_index_uid) = match &task.kind {
-KindWithContent::IndexUpdate { primary_key, new_index_uid, .. } => {
-(primary_key.clone(), new_index_uid.clone())
-}
+let primary_key = match &task.kind {
+KindWithContent::IndexUpdate { primary_key, .. } => primary_key.clone(),
 _ => unreachable!(),
 };
-Ok(Some(Batch::IndexUpdate { index_uid, primary_key, new_index_uid, task }))
+Ok(Some(Batch::IndexUpdate { index_uid, primary_key, task }))
 }
 BatchKind::IndexDeletion { ids } => Ok(Some(Batch::IndexDeletion {
 index_uid,
@@ -268,7 +268,7 @@ impl IndexScheduler {

 self.queue
 .tasks
-.update_task(&mut wtxn, &mut task)
+.update_task(&mut wtxn, &task)
 .map_err(|e| Error::UnrecoverableError(Box::new(e)))?;
 }
 if let Some(canceled_by) = canceled_by {
@@ -349,7 +349,7 @@ impl IndexScheduler {

 self.queue
 .tasks
-.update_task(&mut wtxn, &mut task)
+.update_task(&mut wtxn, &task)
 .map_err(|e| Error::UnrecoverableError(Box::new(e)))?;
 }
 }
@@ -446,7 +446,8 @@ impl IndexScheduler {
 Ok(())
 })?;

-self.notify_webhooks(ids);
+// We shouldn't crash the tick function if we can't send data to the webhook.
+let _ = self.notify_webhook(&ids);

 #[cfg(test)]
 self.breakpoint(crate::test_utils::Breakpoint::AfterProcessing);
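The last hunk above swaps a plain call for one whose result is deliberately discarded. A tiny, self-contained illustration of that pattern follows; `notify` and `tick` are placeholder functions written for this sketch, not the scheduler's real methods.

```rust
// Placeholder for a webhook delivery that may fail.
fn notify(ids: &[u32]) -> Result<(), std::io::Error> {
    if ids.is_empty() {
        return Err(std::io::Error::new(std::io::ErrorKind::InvalidInput, "nothing to send"));
    }
    Ok(())
}

fn tick(ids: &[u32]) {
    // We shouldn't crash the tick function if we can't send data to the webhook,
    // so the delivery result is intentionally ignored.
    let _ = notify(ids);
    // ...the rest of the tick keeps running regardless of the webhook outcome.
}

fn main() {
    tick(&[]); // does not panic even though `notify` returns an error
}
```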
@@ -10,7 +10,6 @@ use meilisearch_types::tasks::{Details, IndexSwap, Kind, KindWithContent, Status
 use meilisearch_types::versioning::{VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH};
 use milli::update::Settings as MilliSettings;
 use roaring::RoaringBitmap;
-use time::OffsetDateTime;

 use super::create_batch::Batch;
 use crate::processing::{
@@ -225,46 +224,24 @@ impl IndexScheduler {
 self.index_mapper.create_index(wtxn, &index_uid, None)?;

 self.process_batch(
-Batch::IndexUpdate { index_uid, primary_key, new_index_uid: None, task },
+Batch::IndexUpdate { index_uid, primary_key, task },
 current_batch,
 progress,
 )
 }
-Batch::IndexUpdate { index_uid, primary_key, new_index_uid, mut task } => {
+Batch::IndexUpdate { index_uid, primary_key, mut task } => {
 progress.update_progress(UpdateIndexProgress::UpdatingTheIndex);

-// Get the index (renamed or not)
 let rtxn = self.env.read_txn()?;
 let index = self.index_mapper.index(&rtxn, &index_uid)?;
-let mut index_wtxn = index.write_txn()?;
-
-// Handle rename if new_index_uid is provided
-let final_index_uid = if let Some(new_uid) = &new_index_uid {
-if new_uid != &index_uid {
-index.set_updated_at(&mut index_wtxn, &OffsetDateTime::now_utc())?;
-
-let mut wtxn = self.env.write_txn()?;
-self.apply_index_swap(
-&mut wtxn, &progress, task.uid, &index_uid, new_uid, true,
-)?;
-wtxn.commit()?;
-
-new_uid.clone()
-} else {
-new_uid.clone()
-}
-} else {
-index_uid.clone()
-};
-
-// Handle primary key update if provided
-if let Some(ref primary_key) = primary_key {
+if let Some(primary_key) = primary_key.clone() {
+let mut index_wtxn = index.write_txn()?;
 let mut builder = MilliSettings::new(
 &mut index_wtxn,
 &index,
 self.index_mapper.indexer_config(),
 );
-builder.set_primary_key(primary_key.clone());
+builder.set_primary_key(primary_key);
 let must_stop_processing = self.scheduler.must_stop_processing.clone();

 builder
@@ -273,20 +250,15 @@ impl IndexScheduler {
 &progress,
 current_batch.embedder_stats.clone(),
 )
-.map_err(|e| Error::from_milli(e, Some(final_index_uid.to_string())))?;
+.map_err(|e| Error::from_milli(e, Some(index_uid.to_string())))?;
+index_wtxn.commit()?;
 }

-index_wtxn.commit()?;
 // drop rtxn before starting a new wtxn on the same db
 rtxn.commit()?;

 task.status = Status::Succeeded;
-task.details = Some(Details::IndexInfo {
-primary_key: primary_key.clone(),
-new_index_uid: new_index_uid.clone(),
-// we only display the old index uid if a rename happened => there is a new_index_uid
-old_index_uid: new_index_uid.map(|_| index_uid.clone()),
-});
+task.details = Some(Details::IndexInfo { primary_key });

 // if the update processed successfully, we're going to store the new
 // stats of the index. Since the tasks have already been processed and
@@ -296,8 +268,8 @@ impl IndexScheduler {
 let mut wtxn = self.env.write_txn()?;
 let index_rtxn = index.read_txn()?;
 let stats = crate::index_mapper::IndexStats::new(&index, &index_rtxn)
-.map_err(|e| Error::from_milli(e, Some(final_index_uid.clone())))?;
-self.index_mapper.store_stats_of(&mut wtxn, &final_index_uid, &stats)?;
+.map_err(|e| Error::from_milli(e, Some(index_uid.clone())))?;
+self.index_mapper.store_stats_of(&mut wtxn, &index_uid, &stats)?;
 wtxn.commit()?;
 Ok(())
 }();
@@ -358,18 +330,13 @@ impl IndexScheduler {
 unreachable!()
 };
 let mut not_found_indexes = BTreeSet::new();
-let mut found_indexes_but_should_not = BTreeSet::new();
-for IndexSwap { indexes: (lhs, rhs), rename } in swaps {
-let index_exists = self.index_mapper.index_exists(&wtxn, lhs)?;
+for IndexSwap { indexes: (lhs, rhs) } in swaps {
+for index in [lhs, rhs] {
+let index_exists = self.index_mapper.index_exists(&wtxn, index)?;
 if !index_exists {
-not_found_indexes.insert(lhs);
+not_found_indexes.insert(index);
+}
 }
-let index_exists = self.index_mapper.index_exists(&wtxn, rhs)?;
-match (index_exists, rename) {
-(true, true) => found_indexes_but_should_not.insert((lhs, rhs)),
-(false, false) => not_found_indexes.insert(rhs),
-(true, false) | (false, true) => true, // random value we don't read it anyway
-};
 }
 if !not_found_indexes.is_empty() {
 if not_found_indexes.len() == 1 {
@@ -382,23 +349,6 @@ impl IndexScheduler {
 ));
 }
 }
-if !found_indexes_but_should_not.is_empty() {
-if found_indexes_but_should_not.len() == 1 {
-let (lhs, rhs) = found_indexes_but_should_not
-.into_iter()
-.next()
-.map(|(lhs, rhs)| (lhs.clone(), rhs.clone()))
-.unwrap();
-return Err(Error::SwapIndexFoundDuringRename(lhs, rhs));
-} else {
-return Err(Error::SwapIndexesFoundDuringRename(
-found_indexes_but_should_not
-.into_iter()
-.map(|(_, rhs)| rhs.to_string())
-.collect(),
-));
-}
-}
 progress.update_progress(SwappingTheIndexes::SwappingTheIndexes);
 for (step, swap) in swaps.iter().enumerate() {
 progress.update_progress(VariableNameStep::<SwappingTheIndexes>::new(
@@ -412,7 +362,6 @@ impl IndexScheduler {
 task.uid,
 &swap.indexes.0,
 &swap.indexes.1,
-swap.rename,
 )?;
 }
 wtxn.commit()?;
@@ -502,7 +451,6 @@ impl IndexScheduler {
 task_id: u32,
 lhs: &str,
 rhs: &str,
-rename: bool,
 ) -> Result<()> {
 progress.update_progress(InnerSwappingTwoIndexes::RetrieveTheTasks);
 // 1. Verify that both lhs and rhs are existing indexes
@@ -510,23 +458,16 @@ impl IndexScheduler {
 if !index_lhs_exists {
 return Err(Error::IndexNotFound(lhs.to_owned()));
 }
-if !rename {
-let index_rhs_exists = self.index_mapper.index_exists(wtxn, rhs)?;
-if !index_rhs_exists {
-return Err(Error::IndexNotFound(rhs.to_owned()));
-}
+let index_rhs_exists = self.index_mapper.index_exists(wtxn, rhs)?;
+if !index_rhs_exists {
+return Err(Error::IndexNotFound(rhs.to_owned()));
 }

 // 2. Get the task set for index = name that appeared before the index swap task
 let mut index_lhs_task_ids = self.queue.tasks.index_tasks(wtxn, lhs)?;
 index_lhs_task_ids.remove_range(task_id..);
-let index_rhs_task_ids = if rename {
-let mut index_rhs_task_ids = self.queue.tasks.index_tasks(wtxn, rhs)?;
-index_rhs_task_ids.remove_range(task_id..);
-index_rhs_task_ids
-} else {
-RoaringBitmap::new()
-};
+let mut index_rhs_task_ids = self.queue.tasks.index_tasks(wtxn, rhs)?;
+index_rhs_task_ids.remove_range(task_id..);

 // 3. before_name -> new_name in the task's KindWithContent
 progress.update_progress(InnerSwappingTwoIndexes::UpdateTheTasks);
@@ -555,11 +496,7 @@ impl IndexScheduler {
 })?;

 // 6. Swap in the index mapper
-if rename {
-self.index_mapper.rename(wtxn, lhs, rhs)?;
-} else {
-self.index_mapper.swap(wtxn, lhs, rhs)?;
-}
+self.index_mapper.swap(wtxn, lhs, rhs)?;

 Ok(())
 }
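The rename-aware branch above validates each swap entry with a `match (index_exists, rename)` before touching the index mapper. As a hedged, simplified sketch of that precondition (error names and the set-collecting logic of the real code are omitted; everything here is written for illustration only): a plain swap requires both indexes to exist, while a rename requires the source to exist and the target to be absent.

```rust
// Simplified error type for the sketch; the real code uses dedicated
// SwapIndex*DuringRename / IndexNotFound variants of the scheduler Error.
#[derive(Debug, PartialEq)]
enum SwapError {
    IndexNotFound(String),
    TargetAlreadyExists(String, String),
}

// Precondition check for one swap entry, given a way to test index existence.
fn check_swap(
    exists: impl Fn(&str) -> bool,
    lhs: &str,
    rhs: &str,
    rename: bool,
) -> Result<(), SwapError> {
    if !exists(lhs) {
        return Err(SwapError::IndexNotFound(lhs.to_owned()));
    }
    match (exists(rhs), rename) {
        // Renaming onto an index that already exists is refused.
        (true, true) => Err(SwapError::TargetAlreadyExists(lhs.to_owned(), rhs.to_owned())),
        // Swapping with an index that does not exist is refused.
        (false, false) => Err(SwapError::IndexNotFound(rhs.to_owned())),
        _ => Ok(()),
    }
}

fn main() {
    let existing = ["catto"];
    let exists = |name: &str| existing.contains(&name);
    assert!(check_swap(&exists, "catto", "doggo", true).is_ok()); // rename to a fresh name
    assert!(check_swap(&exists, "catto", "doggo", false).is_err()); // swap with a missing index
}
```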
@@ -5,7 +5,6 @@ use std::sync::atomic::Ordering;

 use dump::IndexMetadata;
 use meilisearch_types::milli::constants::RESERVED_VECTORS_FIELD_NAME;
-use meilisearch_types::milli::index::EmbeddingsWithMetadata;
 use meilisearch_types::milli::progress::{Progress, VariableNameStep};
 use meilisearch_types::milli::vector::parsed_vectors::{ExplicitVectors, VectorOrArrayOfVectors};
 use meilisearch_types::milli::{self};
@@ -228,21 +227,12 @@ impl IndexScheduler {
 return Err(Error::from_milli(user_err, Some(uid.to_string())));
 };

-for (
-embedder_name,
-EmbeddingsWithMetadata { embeddings, regenerate, has_fragments },
-) in embeddings
-{
+for (embedder_name, (embeddings, regenerate)) in embeddings {
 let embeddings = ExplicitVectors {
 embeddings: Some(VectorOrArrayOfVectors::from_array_of_vectors(
 embeddings,
 )),
-regenerate: regenerate &&
-// Meilisearch does not handle well dumps with fragments, because as the fragments
-// are marked as user-provided,
-// all embeddings would be regenerated on any settings change or document update.
-// To prevent this, we mark embeddings has non regenerate in this case.
-!has_fragments,
+regenerate,
 };
 vectors.insert(embedder_name, serde_json::to_value(embeddings).unwrap());
 }
@@ -270,11 +260,6 @@ impl IndexScheduler {
 let network = self.network();
 dump.create_network(network)?;

-// 7. Dump the webhooks
-progress.update_progress(DumpCreationProgress::DumpTheWebhooks);
-let webhooks = self.webhooks_dump_view();
-dump.create_webhooks(webhooks)?;
-
 let dump_uid = started_at.format(format_description!(
 "[year repr:full][month repr:numerical][day padding:zero]-[hour padding:zero][minute padding:zero][second padding:zero][subsecond digits:3]"
 )).unwrap();
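The comment removed in the hunk above explains why the fragment-aware branch forces `regenerate` off when exporting vectors. A minimal sketch of that rule, with simplified names invented for the example:

```rust
// When an embedder uses fragments, its vectors are exported with
// `regenerate: false`: fragment vectors are marked as user-provided, so keeping
// `regenerate: true` would trigger a full re-embedding on any settings change
// or document update after the dump is imported.
fn regenerate_in_dump(regenerate: bool, has_fragments: bool) -> bool {
    regenerate && !has_fragments
}

fn main() {
    assert!(regenerate_in_dump(true, false)); // no fragments: keep the flag
    assert!(!regenerate_in_dump(true, true)); // fragments: force it off
    assert!(!regenerate_in_dump(false, false)); // already off: stays off
}
```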
@@ -9,7 +9,6 @@ use flate2::write::GzEncoder;
 use flate2::Compression;
 use meilisearch_types::index_uid_pattern::IndexUidPattern;
 use meilisearch_types::milli::constants::RESERVED_VECTORS_FIELD_NAME;
-use meilisearch_types::milli::index::EmbeddingsWithMetadata;
 use meilisearch_types::milli::progress::{Progress, VariableNameStep};
 use meilisearch_types::milli::update::{request_threads, Setting};
 use meilisearch_types::milli::vector::parsed_vectors::{ExplicitVectors, VectorOrArrayOfVectors};
@@ -230,21 +229,12 @@ impl IndexScheduler {
 ));
 };

-for (
-embedder_name,
-EmbeddingsWithMetadata { embeddings, regenerate, has_fragments },
-) in embeddings
-{
+for (embedder_name, (embeddings, regenerate)) in embeddings {
 let embeddings = ExplicitVectors {
 embeddings: Some(
 VectorOrArrayOfVectors::from_array_of_vectors(embeddings),
 ),
-regenerate: regenerate &&
-// Meilisearch does not handle well dumps with fragments, because as the fragments
-// are marked as user-provided,
-// all embeddings would be regenerated on any settings change or document update.
-// To prevent this, we mark embeddings has non regenerate in this case.
-!has_fragments,
+regenerate,
 };
 vectors.insert(
 embedder_name,
@@ -66,11 +66,6 @@ impl IndexScheduler {
 }
 IndexOperation::DocumentOperation { index_uid, primary_key, operations, mut tasks } => {
 progress.update_progress(DocumentOperationProgress::RetrievingConfig);
-
-let network = self.network();
-
-let shards = network.shards();
-
 // TODO: at some point, for better efficiency we might want to reuse the bumpalo for successive batches.
 // this is made difficult by the fact we're doing private clones of the index scheduler and sending it
 // to a fresh thread.
@@ -135,7 +130,6 @@ impl IndexScheduler {
 &mut new_fields_ids_map,
 &|| must_stop_processing.get(),
 progress.clone(),
-shards.as_ref(),
 )
 .map_err(|e| Error::from_milli(e, Some(index_uid.clone())))?;

@@ -7,73 +7,9 @@ use meilisearch_types::milli::progress::{Progress, VariableNameStep};
 use meilisearch_types::tasks::{Status, Task};
 use meilisearch_types::{compression, VERSION_FILE_NAME};

-use crate::heed::EnvOpenOptions;
 use crate::processing::{AtomicUpdateFileStep, SnapshotCreationProgress};
-use crate::queue::TaskQueue;
 use crate::{Error, IndexScheduler, Result};

-/// # Safety
-///
-/// See [`EnvOpenOptions::open`].
-unsafe fn remove_tasks(
-tasks: &[Task],
-dst: &std::path::Path,
-index_base_map_size: usize,
-) -> Result<()> {
-let env_options = EnvOpenOptions::new();
-let mut env_options = env_options.read_txn_without_tls();
-let env = env_options.max_dbs(TaskQueue::nb_db()).map_size(index_base_map_size).open(dst)?;
-let mut wtxn = env.write_txn()?;
-let task_queue = TaskQueue::new(&env, &mut wtxn)?;
-
-// Destructuring to ensure the code below gets updated if a database gets added in the future.
-let TaskQueue {
-all_tasks,
-status,
-kind,
-index_tasks: _, // snapshot creation tasks are not index tasks
-canceled_by,
-enqueued_at,
-started_at,
-finished_at,
-} = task_queue;
-
-for task in tasks {
-all_tasks.delete(&mut wtxn, &task.uid)?;
-
-let mut tasks = status.get(&wtxn, &task.status)?.unwrap_or_default();
-tasks.remove(task.uid);
-status.put(&mut wtxn, &task.status, &tasks)?;
-
-let mut tasks = kind.get(&wtxn, &task.kind.as_kind())?.unwrap_or_default();
-tasks.remove(task.uid);
-kind.put(&mut wtxn, &task.kind.as_kind(), &tasks)?;
-
-canceled_by.delete(&mut wtxn, &task.uid)?;
-
-let timestamp = task.enqueued_at.unix_timestamp_nanos();
-let mut tasks = enqueued_at.get(&wtxn, &timestamp)?.unwrap_or_default();
-tasks.remove(task.uid);
-enqueued_at.put(&mut wtxn, &timestamp, &tasks)?;
-
-if let Some(task_started_at) = task.started_at {
-let timestamp = task_started_at.unix_timestamp_nanos();
-let mut tasks = started_at.get(&wtxn, &timestamp)?.unwrap_or_default();
-tasks.remove(task.uid);
-started_at.put(&mut wtxn, &timestamp, &tasks)?;
-}
-
-if let Some(task_finished_at) = task.finished_at {
-let timestamp = task_finished_at.unix_timestamp_nanos();
-let mut tasks = finished_at.get(&wtxn, &timestamp)?.unwrap_or_default();
-tasks.remove(task.uid);
-finished_at.put(&mut wtxn, &timestamp, &tasks)?;
-}
-}
-wtxn.commit()?;
-Ok(())
-}
-
 impl IndexScheduler {
 pub(super) fn process_snapshot(
 &self,
@@ -112,26 +48,14 @@ impl IndexScheduler {
 };
 self.env.copy_to_path(dst.join("data.mdb"), compaction_option)?;

-// 2.2 Remove the current snapshot tasks
-//
-// This is done to ensure that the tasks are not processed again when the snapshot is imported
-//
-// # Safety
-//
-// This is safe because we open the env file we just created in a temporary directory.
-// We are sure it's not being used by any other process nor thread.
-unsafe {
-remove_tasks(&tasks, &dst, self.index_mapper.index_base_map_size)?;
-}
-
-// 2.3 Create a read transaction on the index-scheduler
+// 2.2 Create a read transaction on the index-scheduler
 let rtxn = self.env.read_txn()?;

-// 2.4 Create the update files directory
+// 2.3 Create the update files directory
 let update_files_dir = temp_snapshot_dir.path().join("update_files");
 fs::create_dir_all(&update_files_dir)?;

-// 2.5 Only copy the update files of the enqueued tasks
+// 2.4 Only copy the update files of the enqueued tasks
 progress.update_progress(SnapshotCreationProgress::SnapshotTheUpdateFiles);
 let enqueued = self.queue.tasks.get_status(&rtxn, Status::Enqueued)?;
 let (atomic, update_file_progress) = AtomicUpdateFileStep::new(enqueued.len() as u32);
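The removed `remove_tasks` helper above deletes the snapshot-creation tasks from every auxiliary index of the copied task database so they are not re-run after the snapshot is imported. A self-contained sketch of that bookkeeping follows; `BTreeMap` and `BTreeSet` stand in for the heed databases and RoaringBitmaps of the real code, and all names here are invented for the example.

```rust
use std::collections::{BTreeMap, BTreeSet};

type TaskId = u32;

// Stand-ins for the task queue's databases: one main store plus inverted indexes.
#[derive(Default)]
struct TaskIndexes {
    all_tasks: BTreeMap<TaskId, String>,           // uid -> serialized task
    status: BTreeMap<String, BTreeSet<TaskId>>,    // status -> task uids
    kind: BTreeMap<String, BTreeSet<TaskId>>,      // kind -> task uids
    enqueued_at: BTreeMap<i128, BTreeSet<TaskId>>, // timestamp -> task uids
}

impl TaskIndexes {
    // Drop a task from the main store and from each inverted index that mentions it.
    fn remove_task(&mut self, uid: TaskId, status: &str, kind: &str, enqueued_at: i128) {
        self.all_tasks.remove(&uid);
        for (map, key) in [(&mut self.status, status), (&mut self.kind, kind)] {
            if let Some(set) = map.get_mut(key) {
                set.remove(&uid);
            }
        }
        if let Some(set) = self.enqueued_at.get_mut(&enqueued_at) {
            set.remove(&uid);
        }
    }
}

fn main() {
    let mut indexes = TaskIndexes::default();
    indexes.all_tasks.insert(0, "snapshotCreation".into());
    indexes.status.entry("enqueued".into()).or_default().insert(0);
    indexes.kind.entry("snapshotCreation".into()).or_default().insert(0);
    indexes.enqueued_at.entry(42).or_default().insert(0);
    indexes.remove_task(0, "enqueued", "snapshotCreation", 42);
    assert!(indexes.all_tasks.is_empty());
}
```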
@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/scheduler/test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
-1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
-2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
+1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
+2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
 3 {uid: 3, batch_uid: 3, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
 4 {uid: 4, batch_uid: 4, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "cattos" }}
 5 {uid: 5, batch_uid: 5, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "girafos" }}
@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
 1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
 2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
 ----------------------------------------------------------------------
@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test.rs
 []
 ----------------------------------------------------------------------
 ### All Tasks:
-0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
+0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
 1 {uid: 1, batch_uid: 1, status: succeeded, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
 2 {uid: 2, batch_uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: IndexDeletion { index_uid: "doggos" }}
 ----------------------------------------------------------------------
Some files were not shown because too many files have changed in this diff