Mirror of https://github.com/meilisearch/meilisearch.git, synced 2025-07-21 13:51:05 +00:00

Compare commits: prototype-... → release-pr (38 commits)
f3b60a1dab
cd0523c3f1
7f318ee964
dc1656da8e
dc0bd9f25d
52d8007b12
4f8382b159
c2c82be556
421a23ee3d
191ea340ed
8d22972d84
8772b5af87
df2e7cde53
02b2ae6142
f813eb7ca4
d072edaa49
e3daa907c5
a39223822a
1eb6cd38ce
eb6ad3ef9c
3bef4f4413
9f89881b0d
126aefc207
e7a60555d6
ae912c4c3f
13ea29e511
5342df26fe
61bc95e8d6
074744b8a6
a8030850ee
4c7a6e5c1b
07bfed99e6
9e31d6ceff
139ec8c782
2691999bd3
48460678df
cb15e5c67e
7380808b26
.github/ISSUE_TEMPLATE/sprint_issue.md
@@ -1,28 +1,26 @@
 ---
-name: New sprint issue
+name: New feature issue
-about: ⚠️ Should only be used by the engine team ⚠️
+about: ⚠️ Should only be used by the internal Meili team ⚠️
 title: ''
-labels: 'missing usage in PRD, impacts docs'
+labels: 'impacts docs, impacts integrations'
 assignees: ''

 ---

 Related product team resources: [PRD]() (_internal only_)
 Related product discussion:

 ## Motivation

 <!---Copy/paste the information in PRD or briefly detail the product motivation. Ask product team if any hesitation.-->

 ## Usage

 <!---Link to the public part of the PRD, or to the related product discussion for experimental features-->

 TBD

 ## TODO

 <!---If necessary, create a list with technical/product steps-->

 ### Are you modifying a database?

 - [ ] If not, add the `no db change` label to your PR, and you're good to merge.
 - [ ] If yes, add the `db change` label to your PR. You'll receive a message explaining what to do.

@@ -54,5 +52,5 @@ Related product discussion:

 ## Impacted teams

-<!---Ping the related teams. Ask for the engine manager if any hesitation-->
-<!---@meilisearch/docs-team when there is any API change, e.g. settings addition-->
+<!---Ping the related teams. Ask on Slack if any hesitation-->
+<!---@meilisearch/docs-team and @meilisearch/integration-team when there is any API change, e.g. settings addition-->
13 .github/pull_request_template.md vendored (new file)
@@ -0,0 +1,13 @@
+## Related issue
+
+Fixes #...
+
+## Requirements
+
+⚠️ Ensure the following requirements before merging ⚠️
+- [ ] Automated tests have been added.
+- [ ] If some tests cannot be automated, manual rigorous tests should be applied.
+- [ ] ⚠️ If there is a change in the DB: it's mandatory to manually test the `--experimental-dumpless-upgrade` on a DB of the previous Meilisearch minor version (e.g. v1.13 for the v1.14 release).
+- [ ] If necessary, the feature has been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
+- [ ] If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
+- [ ] If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.
23 .github/release-draft-template.yml vendored (new file)
@@ -0,0 +1,23 @@
+name-template: 'v$RESOLVED_VERSION'
+tag-template: 'v$RESOLVED_VERSION'
+exclude-labels:
+  - 'skip changelog'
+version-resolver:
+  major:
+    labels:
+      - 'breaking-change'
+  minor:
+    labels:
+      - 'enhancement'
+  default: patch
+template: |
+  $CHANGES
+
+  Thanks again to $CONTRIBUTORS! 🎉
+no-changes-template: 'Changes are coming soon 😎'
+sort-direction: 'ascending'
+replacers:
+  - search: '/(?:and )?@dependabot-preview(?:\[bot\])?,?/g'
+    replace: ''
+  - search: '/(?:and )?@dependabot(?:\[bot\])?,?/g'
+    replace: ''
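The two `replacers` entries strip Dependabot mentions from the `$CONTRIBUTORS` credit line that release-drafter generates. A minimal Rust sketch of the same substitution, assuming the `regex` crate; the function name and example string are illustrative, not part of the repository:

```rust
use regex::Regex; // regex = "1" in Cargo.toml

/// Strip "@dependabot" / "@dependabot-preview" credits from a contributor
/// list, folding the two `replacers` patterns above into a single regex.
fn strip_dependabot(contributors: &str) -> String {
    let re = Regex::new(r"(?:and )?@dependabot(?:-preview)?(?:\[bot\])?,?").unwrap();
    re.replace_all(contributors, "").trim().to_string()
}

fn main() {
    let credits = "@Kerollmops, @irevoire and @dependabot[bot]";
    assert_eq!(strip_dependabot(credits), "@Kerollmops, @irevoire");
}
```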
22 .github/templates/dependency-issue.md vendored (new file)
@@ -0,0 +1,22 @@
+This issue is about updating Meilisearch dependencies:
+- [ ] Update Meilisearch dependencies with the help of `cargo +nightly udeps --all-targets` (remove unused dependencies) and `cargo upgrade` (upgrade dependencies versions) - ⚠️ Some repositories may contain subdirectories (like heed, charabia, or deserr). Take care of updating these in the main crate as well. This won't be done automatically by `cargo upgrade`.
+  - [ ] [deserr](https://github.com/meilisearch/deserr)
+  - [ ] [charabia](https://github.com/meilisearch/charabia/)
+  - [ ] [heed](https://github.com/meilisearch/heed/)
+  - [ ] [roaring-rs](https://github.com/RoaringBitmap/roaring-rs/)
+  - [ ] [obkv](https://github.com/meilisearch/obkv)
+  - [ ] [grenad](https://github.com/meilisearch/grenad/)
+  - [ ] [arroy](https://github.com/meilisearch/arroy/)
+  - [ ] [segment](https://github.com/meilisearch/segment)
+  - [ ] [bumparaw-collections](https://github.com/meilisearch/bumparaw-collections)
+  - [ ] [bbqueue](https://github.com/meilisearch/bbqueue)
+  - [ ] Finally, [Meilisearch](https://github.com/meilisearch/MeiliSearch)
+- [ ] If new Rust versions have been released, update the minimal Rust version in use at Meilisearch:
+  - [ ] in this [GitHub Action file](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows/test-suite.yml), by changing the `toolchain` field of the `rustfmt` job to the latest available nightly (of the day before or the current day).
+  - [ ] in every [GitHub Action file](https://github.com/meilisearch/meilisearch/blob/main/.github/workflows), by changing all the `dtolnay/rust-toolchain@` references to use the latest stable version.
+  - [ ] in this [`rust-toolchain.toml`](https://github.com/meilisearch/meilisearch/blob/main/rust-toolchain.toml), by changing the `channel` field to the latest stable version.
+  - [ ] in the [Dockerfile](https://github.com/meilisearch/meilisearch/blob/main/Dockerfile), by changing the base image to `rust:<target_rust_version>-alpine<alpine_version>`. Check that the image exists on [Dockerhub](https://hub.docker.com/_/rust/tags?page=1&name=alpine). Also, build and run the image to check everything still works!
+
+⚠️ This issue should be prioritized to avoid any deprecation and vulnerability issues.
+
+The GitHub action dependencies are managed by [Dependabot](https://github.com/meilisearch/meilisearch/blob/main/.github/dependabot.yml), so no need to update them when solving this issue.
100 .github/workflows/check-valid-milestone.yml vendored (file deleted)
@@ -1,100 +0,0 @@
-name: PR Milestone Check
-
-on:
-  pull_request:
-    types: [opened, reopened, edited, synchronize, milestoned, demilestoned]
-    branches:
-      - "main"
-      - "release-v*.*.*"
-
-jobs:
-  check-milestone:
-    name: Check PR Milestone
-    runs-on: ubuntu-latest
-
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v3
-
-      - name: Validate PR milestone
-        uses: actions/github-script@v7
-        with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          script: |
-            // Get PR number directly from the event payload
-            const prNumber = context.payload.pull_request.number;
-
-            // Get PR details
-            const { data: prData } = await github.rest.pulls.get({
-              owner: 'meilisearch',
-              repo: 'meilisearch',
-              pull_number: prNumber
-            });
-
-            // Get base branch name
-            const baseBranch = prData.base.ref;
-            console.log(`Base branch: ${baseBranch}`);
-
-            // Get PR milestone
-            const prMilestone = prData.milestone;
-            if (!prMilestone) {
-              core.setFailed('PR must have a milestone assigned');
-              return;
-            }
-            console.log(`PR milestone: ${prMilestone.title}`);
-
-            // Validate milestone format: vx.y.z
-            const milestoneRegex = /^v\d+\.\d+\.\d+$/;
-            if (!milestoneRegex.test(prMilestone.title)) {
-              core.setFailed(`Milestone "${prMilestone.title}" does not follow the required format vx.y.z`);
-              return;
-            }
-
-            // For main branch PRs, check if the milestone is the highest one
-            if (baseBranch === 'main') {
-              // Get all milestones
-              const { data: milestones } = await github.rest.issues.listMilestones({
-                owner: 'meilisearch',
-                repo: 'meilisearch',
-                state: 'open',
-                sort: 'due_on',
-                direction: 'desc'
-              });
-
-              // Sort milestones by version number (vx.y.z)
-              const sortedMilestones = milestones
-                .filter(m => milestoneRegex.test(m.title))
-                .sort((a, b) => {
-                  const versionA = a.title.substring(1).split('.').map(Number);
-                  const versionB = b.title.substring(1).split('.').map(Number);
-
-                  // Compare major version
-                  if (versionA[0] !== versionB[0]) return versionB[0] - versionA[0];
-                  // Compare minor version
-                  if (versionA[1] !== versionB[1]) return versionB[1] - versionA[1];
-                  // Compare patch version
-                  return versionB[2] - versionA[2];
-                });
-
-              if (sortedMilestones.length === 0) {
-                core.setFailed('No valid milestones found in the repository. Please create at least one milestone with the format vx.y.z');
-                return;
-              }
-
-              const highestMilestone = sortedMilestones[0];
-              console.log(`Highest milestone: ${highestMilestone.title}`);
-
-              if (prMilestone.title !== highestMilestone.title) {
-                core.setFailed(`PRs targeting the main branch must use the highest milestone (${highestMilestone.title}), but this PR uses ${prMilestone.title}`);
-                return;
-              }
-            } else {
-              // For release branches, the milestone should match the branch version
-              const branchVersion = baseBranch.substring(8); // remove 'release-'
-              if (prMilestone.title !== branchVersion) {
-                core.setFailed(`PRs targeting release branch "${baseBranch}" must use the matching milestone "${branchVersion}", but this PR uses "${prMilestone.title}"`);
-                return;
-              }
-            }
-
-            console.log('PR milestone validation passed!');
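For reference, the core of the deleted check: milestones are parsed as `vX.Y.Z` and compared numerically, major first, and a PR against `main` must carry the highest open milestone. A compact Rust sketch of that ordering logic (illustrative helper names, not code from the repository):

```rust
/// Parse a milestone title of the form `vX.Y.Z`, the format the deleted
/// workflow enforced with /^v\d+\.\d+\.\d+$/.
fn parse_milestone(title: &str) -> Option<(u64, u64, u64)> {
    let mut parts = title.strip_prefix('v')?.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None; // more than three components
    }
    Some((major, minor, patch))
}

/// Tuples compare lexicographically, so `max_by_key` reproduces the
/// major-then-minor-then-patch sort from the JavaScript above.
fn highest_milestone<'a>(titles: &[&'a str]) -> Option<&'a str> {
    titles
        .iter()
        .filter_map(|&t| parse_milestone(t).map(|v| (v, t)))
        .max_by_key(|&(v, _)| v)
        .map(|(_, t)| t)
}

fn main() {
    let open = ["v1.14.0", "v1.15.0", "v1.14.1"];
    assert_eq!(highest_milestone(&open), Some("v1.15.0"));
}
```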
2 .github/workflows/dependency-issue.yml vendored
@@ -15,7 +15,7 @@ jobs:
     steps:
       - uses: actions/checkout@v3
       - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/dependency-issue.md > $ISSUE_TEMPLATE
+        run: curl -s https://raw.githubusercontent.com/meilisearch/meilisearch/main/.github/templates/dependency-issue.md > $ISSUE_TEMPLATE
       - name: Create issue
         run: |
           gh issue create \
2 .github/workflows/flaky-tests.yml vendored
@@ -3,7 +3,7 @@ name: Look for flaky tests
 on:
   workflow_dispatch:
   schedule:
-    - cron: "0 12 * * FRI" # Every Friday at 12:00PM
+    - cron: '0 4 * * *' # Every day at 4:00AM

 jobs:
   flaky:
224 .github/workflows/milestone-workflow.yml vendored (file deleted)
@@ -1,224 +0,0 @@
-name: Milestone's workflow
-
-# /!\ No git flow are handled here
-
-# For each Milestone created (not opened!), and if the release is NOT a patch release (only the patch changed)
-# - the roadmap issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/roadmap-issue.md
-# - the changelog issue is created, see https://github.com/meilisearch/engine-team/blob/main/issue-templates/changelog-issue.md
-# - update the ruleset to add the current release version to the list of allowed versions and be able to use the merge queue.
-
-# For each Milestone closed
-# - the `release_version` label is created
-# - this label is applied to all issues/PRs in the Milestone
-
-on:
-  milestone:
-    types: [created, closed]
-
-env:
-  MILESTONE_VERSION: ${{ github.event.milestone.title }}
-  MILESTONE_URL: ${{ github.event.milestone.html_url }}
-  MILESTONE_DUE_ON: ${{ github.event.milestone.due_on }}
-  GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
-
-jobs:
-  # -----------------
-  # MILESTONE CREATED
-  # -----------------
-
-  get-release-version:
-    if: github.event.action == 'created'
-    runs-on: ubuntu-latest
-    outputs:
-      is-patch: ${{ steps.check-patch.outputs.is-patch }}
-    steps:
-      - uses: actions/checkout@v3
-      - name: Check if this release is a patch release only
-        id: check-patch
-        run: |
-          echo version: $MILESTONE_VERSION
-          if [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
-            echo 'This is NOT a patch release'
-            echo "is-patch=false" >> $GITHUB_OUTPUT
-          elif [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
-            echo 'This is a patch release'
-            echo "is-patch=true" >> $GITHUB_OUTPUT
-          else
-            echo "Not a valid format of release, check the Milestone's title."
-            echo 'Should be vX.Y.Z'
-            exit 1
-          fi
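The same classification as the `check-patch` step above, sketched in Rust for readers who don't parse bash regexes; `vX.Y.0` starts a release cycle, any other valid `vX.Y.Z` is a patch (illustrative code, not part of the repository):

```rust
/// Mirror of the `check-patch` step: Some(true) for a patch release,
/// Some(false) for a vX.Y.0 release, None when the title is malformed
/// (where the workflow instead fails with `exit 1`).
fn is_patch_release(title: &str) -> Option<bool> {
    let mut parts = title.strip_prefix('v')?.split('.');
    let _major: u64 = parts.next()?.parse().ok()?;
    let _minor: u64 = parts.next()?.parse().ok()?;
    let patch: u64 = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None;
    }
    Some(patch != 0)
}

fn main() {
    assert_eq!(is_patch_release("v1.15.0"), Some(false));
    assert_eq!(is_patch_release("v1.14.2"), Some(true));
    assert_eq!(is_patch_release("1.14"), None);
}
```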
-
-  create-roadmap-issue:
-    needs: get-release-version
-    # Create the roadmap issue if the release is not only a patch release
-    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
-    runs-on: ubuntu-latest
-    env:
-      ISSUE_TEMPLATE: issue-template.md
-    steps:
-      - uses: actions/checkout@v3
-      - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/roadmap-issue.md > $ISSUE_TEMPLATE
-      - name: Replace all empty occurrences in the templates
-        run: |
-          # Replace all <<version>> occurrences
-          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE
-
-          # Replace all <<milestone_id>> occurrences
-          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
-          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
-
-          # Replace release date if exists
-          if [[ ! -z $MILESTONE_DUE_ON ]]; then
-            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
-            sed -i "s/Release date\: 20XX-XX-XX/Release date\: $date/g" $ISSUE_TEMPLATE
-          fi
-      - name: Create the issue
-        run: |
-          gh issue create \
-            --title "$MILESTONE_VERSION ROADMAP" \
-            --label 'epic,impacts docs,impacts integrations,impacts cloud' \
-            --body-file $ISSUE_TEMPLATE \
-            --milestone $MILESTONE_VERSION
-
-  create-changelog-issue:
-    needs: get-release-version
-    # Create the changelog issue if the release is not only a patch release
-    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
-    runs-on: ubuntu-latest
-    env:
-      ISSUE_TEMPLATE: issue-template.md
-    steps:
-      - uses: actions/checkout@v3
-      - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/changelog-issue.md > $ISSUE_TEMPLATE
-      - name: Replace all empty occurrences in the templates
-        run: |
-          # Replace all <<version>> occurrences
-          sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE
-
-          # Replace all <<milestone_id>> occurrences
-          milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
-          sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
-      - name: Create the issue
-        run: |
-          gh issue create \
-            --title "Create release changelogs for $MILESTONE_VERSION" \
-            --label 'impacts docs,documentation' \
-            --body-file $ISSUE_TEMPLATE \
-            --milestone $MILESTONE_VERSION \
-            --assignee curquiza
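Both issue jobs fill the downloaded template the same way: every `<<version>>` and `<<milestone_id>>` placeholder is rewritten with sed. The equivalent substitution as a minimal Rust sketch (the function name is illustrative):

```rust
/// Equivalent of the two `sed -i "s/<<...>>/.../g"` calls above.
fn fill_template(template: &str, version: &str, milestone_id: &str) -> String {
    template
        .replace("<<version>>", version)
        .replace("<<milestone_id>>", milestone_id)
}

fn main() {
    let body =
        fill_template("Roadmap for <<version>> (milestone <<milestone_id>>)", "v1.15.0", "42");
    assert_eq!(body, "Roadmap for v1.15.0 (milestone 42)");
}
```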
-
-  create-update-version-issue:
-    needs: get-release-version
-    # Create the update-version issue even if the release is a patch release
-    if: github.event.action == 'created'
-    runs-on: ubuntu-latest
-    env:
-      ISSUE_TEMPLATE: issue-template.md
-    steps:
-      - uses: actions/checkout@v3
-      - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-version-issue.md > $ISSUE_TEMPLATE
-      - name: Create the issue
-        run: |
-          gh issue create \
-            --title "Update version in Cargo.toml for $MILESTONE_VERSION" \
-            --label 'maintenance' \
-            --body-file $ISSUE_TEMPLATE \
-            --milestone $MILESTONE_VERSION
-
-  create-update-openapi-issue:
-    needs: get-release-version
-    # Create the openAPI issue if the release is not only a patch release
-    if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
-    runs-on: ubuntu-latest
-    env:
-      ISSUE_TEMPLATE: issue-template.md
-    steps:
-      - uses: actions/checkout@v3
-      - name: Download the issue template
-        run: curl -s https://raw.githubusercontent.com/meilisearch/engine-team/main/issue-templates/update-openapi-issue.md > $ISSUE_TEMPLATE
-      - name: Create the issue
-        run: |
-          gh issue create \
-            --title "Update Open API file for $MILESTONE_VERSION" \
-            --label 'maintenance' \
-            --body-file $ISSUE_TEMPLATE \
-            --milestone $MILESTONE_VERSION
-
-  update-ruleset:
-    runs-on: ubuntu-latest
-    if: github.event.action == 'created'
-    steps:
-      - uses: actions/checkout@v3
-      - name: Install jq
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y jq
-      - name: Update ruleset
-        env:
-          # gh api repos/meilisearch/meilisearch/rulesets --jq '.[] | {name: .name, id: .id}'
-          RULESET_ID: 4253297
-          BRANCH_NAME: ${{ github.event.inputs.branch_name }}
-        run: |
-          echo "RULESET_ID: ${{ env.RULESET_ID }}"
-          echo "BRANCH_NAME: ${{ env.BRANCH_NAME }}"
-
-          # Get current ruleset conditions
-          CONDITIONS=$(gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} --jq '{ conditions: .conditions }')
-
-          # Update the conditions by appending the milestone version
-          UPDATED_CONDITIONS=$(echo $CONDITIONS | jq '.conditions.ref_name.include += ["refs/heads/release-'${{ env.MILESTONE_VERSION }}'"]')
-
-          # Update the ruleset from stdin (-)
-          echo $UPDATED_CONDITIONS |
-            gh api repos/meilisearch/meilisearch/rulesets/${{ env.RULESET_ID }} \
-              --method PUT \
-              -H "Accept: application/vnd.github+json" \
-              -H "X-GitHub-Api-Version: 2022-11-28" \
-              --input -
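The jq expression above appends the new release branch to the ruleset's allowed refs. The same mutation with `serde_json`, as a minimal sketch (function name illustrative):

```rust
use serde_json::{json, Value};

/// Rust equivalent of the jq program
/// `.conditions.ref_name.include += ["refs/heads/release-vX.Y.Z"]`.
fn add_release_branch(ruleset: &mut Value, milestone_version: &str) {
    if let Some(include) = ruleset
        .pointer_mut("/conditions/ref_name/include")
        .and_then(Value::as_array_mut)
    {
        include.push(json!(format!("refs/heads/release-{milestone_version}")));
    }
}

fn main() {
    let mut ruleset =
        json!({ "conditions": { "ref_name": { "include": ["refs/heads/main"] } } });
    add_release_branch(&mut ruleset, "v1.15.0");
    assert_eq!(
        ruleset["conditions"]["ref_name"]["include"][1],
        "refs/heads/release-v1.15.0"
    );
}
```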
-
-  # ----------------
-  # MILESTONE CLOSED
-  # ----------------
-
-  create-release-label:
-    if: github.event.action == 'closed'
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Create the ${{ env.MILESTONE_VERSION }} label
-        run: |
-          label_description="PRs/issues solved in $MILESTONE_VERSION"
-          if [[ ! -z $MILESTONE_DUE_ON ]]; then
-            date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
-            label_description="$label_description released on $date"
-          fi
-
-          gh api repos/meilisearch/meilisearch/labels \
-            --method POST \
-            -H "Accept: application/vnd.github+json" \
-            -f name="$MILESTONE_VERSION" \
-            -f description="$label_description" \
-            -f color='ff5ba3'
-
-  labelize-all-milestone-content:
-    if: github.event.action == 'closed'
-    needs: create-release-label
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Add label ${{ env.MILESTONE_VERSION }} to all PRs in the Milestone
-        run: |
-          prs=$(gh pr list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
-          for pr in $prs; do
-            gh pr edit $pr --add-label $MILESTONE_VERSION
-          done
-      - name: Add label ${{ env.MILESTONE_VERSION }} to all issues in the Milestone
-        run: |
-          issues=$(gh issue list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
-          for issue in $issues; do
-            gh issue edit $issue --add-label $MILESTONE_VERSION
-          done
17 .github/workflows/publish-docker-images.yml vendored
@@ -16,6 +16,8 @@ on:
 jobs:
   docker:
     runs-on: docker
+    permissions:
+      id-token: write # This is needed to use Cosign in keyless mode
     steps:
       - uses: actions/checkout@v3

@@ -62,6 +64,9 @@ jobs:
       - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

+      - name: Install cosign
+        uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # tag=v3.8.2
+
       - name: Login to Docker Hub
         uses: docker/login-action@v3
         with:

@@ -85,6 +90,7 @@ jobs:
       - name: Build and push
         uses: docker/build-push-action@v6
+        id: build-and-push
         with:
           push: true
           platforms: linux/amd64,linux/arm64

@@ -94,6 +100,17 @@ jobs:
             COMMIT_DATE=${{ steps.build-metadata.outputs.date }}
             GIT_TAG=${{ github.ref_name }}

+      - name: Sign the images with GitHub OIDC Token
+        env:
+          DIGEST: ${{ steps.build-and-push.outputs.digest }}
+          TAGS: ${{ steps.meta.outputs.tags }}
+        run: |
+          images=""
+          for tag in ${TAGS}; do
+            images+="${tag}@${DIGEST} "
+          done
+          cosign sign --yes ${images}
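The signing step is plain string concatenation: each pushed tag is paired with the image digest before being handed to `cosign sign --yes`. A small Rust sketch of that pairing (illustrative values; it mirrors the shell loop above):

```rust
/// Builds the `<tag>@<digest>` arguments the workflow passes to
/// `cosign sign --yes`, one per pushed tag.
fn signing_targets(tags: &[&str], digest: &str) -> Vec<String> {
    tags.iter().map(|tag| format!("{tag}@{digest}")).collect()
}

fn main() {
    // Example tags/digest, for illustration only.
    let targets = signing_targets(
        &["getmeili/meilisearch:v1.15.0", "getmeili/meilisearch:latest"],
        "sha256:abc123",
    );
    assert_eq!(targets[0], "getmeili/meilisearch:v1.15.0@sha256:abc123");
}
```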

       # /!\ Don't touch this without checking with Cloud team
       - name: Send CI information to Cloud team
         # Do not send if nightly build (i.e. 'schedule' or 'workflow_dispatch' event)
16 .github/workflows/release-drafter.yml vendored (new file)
@@ -0,0 +1,16 @@
+name: Release Drafter
+
+on:
+  push:
+    branches:
+      - main
+
+jobs:
+  update_release_draft:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: release-drafter/release-drafter@v6
+        with:
+          config-name: release-draft-template.yml
+        env:
+          GITHUB_TOKEN: ${{ secrets.RELEASE_DRAFTER_TOKEN }}
12 .github/workflows/sdks-tests.yml vendored
@@ -9,7 +9,7 @@ on:
       required: false
       default: nightly
   schedule:
-    - cron: "0 6 * * MON" # Every Monday at 6:00AM
+    - cron: '0 6 * * *' # Every day at 6:00am

 env:
   MEILI_MASTER_KEY: 'masterKey'

@@ -344,15 +344,23 @@ jobs:
         MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
       ports:
         - '7700:7700'
+    env:
+      RAILS_VERSION: '7.0'
     steps:
       - uses: actions/checkout@v3
         with:
           repository: meilisearch/meilisearch-rails
-      - name: Set up Ruby 3
+      - name: Install SQLite dependencies
+        run: sudo apt-get update && sudo apt-get install -y libsqlite3-dev
+      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: 3
          bundler-cache: true
+      - name: Start MongoDB
+        uses: supercharge/mongodb-github-action@1.12.0
+        with:
+          mongodb-version: 8.0
       - name: Run tests
         run: bundle exec rspec
2 .github/workflows/test-suite.yml vendored
@@ -3,7 +3,7 @@ name: Test suite
 on:
   workflow_dispatch:
   schedule:
-    # Everyday at 5:00am
+    # Every day at 5:00am
     - cron: "0 5 * * *"
   pull_request:
   merge_group:
CONTRIBUTING.md
@@ -106,7 +106,13 @@ Run `cargo xtask --help` from the root of the repository to find out what is available.

 #### Update the openAPI file if the API changed

 To update the openAPI file in the code, see [sprint_issue.md](https://github.com/meilisearch/meilisearch/blob/main/.github/ISSUE_TEMPLATE/sprint_issue.md#reminders-when-modifying-the-api).
-If you want to update the openAPI file on the [open-api repository](https://github.com/meilisearch/open-api), see [update-openapi-issue.md](https://github.com/meilisearch/engine-team/blob/main/issue-templates/update-openapi-issue.md).
+
+If you want to update the openAPI file on the [open-api repository](https://github.com/meilisearch/open-api):
+- Pull the latest version of the latest rc of Meilisearch: `git checkout release-vX.Y.Z; git pull`
+- Start Meilisearch with the `swagger` feature flag (see the sketch after this list): `cargo run --features swagger`
+- In a browser, open the following URL: http://localhost:7700/scalar
+- Click the « Download openAPI file » button
+- Open a PR replacing [this file](https://github.com/meilisearch/open-api/blob/main/open-api.json) with the one downloaded
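A minimal sketch of how such a Cargo feature gate looks, assuming a feature named `swagger` as in the command above (the module and function names are illustrative, not Meilisearch's actual layout):

```rust
// Cargo.toml:
// [features]
// swagger = []

/// Compiled in only when running `cargo run --features swagger`.
#[cfg(feature = "swagger")]
fn mount_openapi_explorer() {
    // In Meilisearch, this is roughly where the /scalar route serving
    // the generated openAPI document would be registered.
    println!("openAPI explorer available at http://localhost:7700/scalar");
}

fn main() {
    #[cfg(feature = "swagger")]
    mount_openapi_explorer();
}
```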
 ### Logging

@@ -160,25 +166,37 @@ Some notes on GitHub PRs:
   The draft PRs are recommended when you want to show that you are working on something and make your work visible.
 - The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [GitHub Merge Queues](https://github.blog/news-insights/product-news/github-merge-queue-is-generally-available/) to automatically enforce this requirement without the PR author having to rebase manually.

-## Release Process (for internal team only)
-
-Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
-
-### Automation to rebase and Merge the PRs
+## Merging PRs

 This project uses GitHub Merge Queues to help us manage pull request merging.

-### How to Publish a new Release
+Before merging a PR, the maintainer should ensure the following requirements are met:
+- Automated tests have been added.
+- If some tests cannot be automated, manual rigorous tests should be applied.
+- ⚠️ If there is a change in the DB: it's mandatory to manually test the `--experimental-dumpless-upgrade` on a DB of the previous Meilisearch minor version (e.g. v1.13 for the v1.14 release).
+- If necessary, the feature has been tested in the Cloud production environment (with [prototypes](./documentation/prototypes.md)) and the Cloud UI is ready.
+- If necessary, the [documentation](https://github.com/meilisearch/documentation) related to the implemented feature in the PR is ready.
+- If necessary, the [integrations](https://github.com/meilisearch/integration-guides) related to the implemented feature in the PR are ready.

-The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/engine-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.
+## Publish Process (for internal team only)
+
+Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
+
+### How to publish a new release
+
+The full Meilisearch release process is described in [this guide](./documentation/release.md).

 ### How to publish a prototype

 Depending on the developed feature, you might need to provide a prototyped version of Meilisearch to make it easier to test by the users.

 This happens in two steps:
-- [Release the prototype](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#how-to-publish-a-prototype)
-- [Communicate about it](https://github.com/meilisearch/engine-team/blob/main/resources/prototypes.md#communication)
+- [Release the prototype](./documentation/prototypes.md#how-to-publish-a-prototype)
+- [Communicate about it](./documentation/prototypes.md#communication)

+### How to implement and publish an experimental feature
+
+Here are our [guidelines and process](./documentation/experimental-features.md) to implement and publish an experimental feature.

 ### Release assets
README.md
@@ -119,6 +119,6 @@ Meilisearch is, and will always be, open-source! If you want to contribute to the

 Meilisearch releases and their associated binaries are available on the project's [releases page](https://github.com/meilisearch/meilisearch/releases).

-The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](https://github.com/meilisearch/engine-team/blob/main/resources/versioning-policy.md).
+The binaries are versioned following [SemVer conventions](https://semver.org/). To know more, read our [versioning policy](./documentation/versioning-policy.md).

 Differently from the binaries, crates in this repository are not currently available on [crates.io](https://crates.io/) and do not follow [SemVer conventions](https://semver.org).
@@ -1,3 +1,5 @@
+use url::Url;
+
 use crate::analytics::Aggregate;
 use crate::routes::export::Export;

@@ -5,6 +7,7 @@ use crate::routes::export::Export;
 pub struct ExportAnalytics {
     total_received: usize,
     has_api_key: bool,
+    sum_exports_meilisearch_cloud: usize,
     sum_index_patterns: usize,
     sum_patterns_with_filter: usize,
     sum_patterns_with_override_settings: usize,

@@ -13,8 +16,14 @@ pub struct ExportAnalytics {
 impl ExportAnalytics {
     pub fn from_export(export: &Export) -> Self {
-        let Export { url: _, api_key, payload_size, indexes } = export;
+        let Export { url, api_key, payload_size, indexes } = export;
+
+        let url = Url::parse(url).ok();
+        let is_meilisearch_cloud = url.as_ref().and_then(Url::host_str).is_some_and(|host| {
+            host.ends_with("meilisearch.dev")
+                || host.ends_with("meilisearch.com")
+                || host.ends_with("meilisearch.io")
+        });
         let has_api_key = api_key.is_some();
         let index_patterns_count = indexes.as_ref().map_or(0, |indexes| indexes.len());
         let patterns_with_filter_count = indexes.as_ref().map_or(0, |indexes| {

@@ -33,6 +42,7 @@ impl ExportAnalytics {
         Self {
             total_received: 1,
             has_api_key,
+            sum_exports_meilisearch_cloud: is_meilisearch_cloud as usize,
             sum_index_patterns: index_patterns_count,
             sum_patterns_with_filter: patterns_with_filter_count,
             sum_patterns_with_override_settings: patterns_with_override_settings_count,

@@ -49,6 +59,7 @@ impl Aggregate for ExportAnalytics {
     fn aggregate(mut self: Box<Self>, other: Box<Self>) -> Box<Self> {
         self.total_received += other.total_received;
         self.has_api_key |= other.has_api_key;
+        self.sum_exports_meilisearch_cloud += other.sum_exports_meilisearch_cloud;
         self.sum_index_patterns += other.sum_index_patterns;
         self.sum_patterns_with_filter += other.sum_patterns_with_filter;
         self.sum_patterns_with_override_settings += other.sum_patterns_with_override_settings;

@@ -63,6 +74,12 @@ impl Aggregate for ExportAnalytics {
             Some(self.payload_sizes.iter().sum::<u64>() / self.payload_sizes.len() as u64)
         };

+        let avg_exports_meilisearch_cloud = if self.total_received == 0 {
+            None
+        } else {
+            Some(self.sum_exports_meilisearch_cloud as f64 / self.total_received as f64)
+        };
+
         let avg_index_patterns = if self.total_received == 0 {
             None
         } else {

@@ -84,6 +101,7 @@ impl Aggregate for ExportAnalytics {
         serde_json::json!({
             "total_received": self.total_received,
             "has_api_key": self.has_api_key,
+            "avg_exports_meilisearch_cloud": avg_exports_meilisearch_cloud,
             "avg_index_patterns": avg_index_patterns,
             "avg_patterns_with_filter": avg_patterns_with_filter,
             "avg_patterns_with_override_settings": avg_patterns_with_override_settings,
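The new `sum_exports_meilisearch_cloud` counter hinges on one host check. Extracted into a standalone sketch (same `url` crate as the diff; only the function wrapper and the example URLs are added):

```rust
use url::Url; // url = "2"

/// True when the export target host belongs to Meilisearch Cloud,
/// using the exact suffix list from the diff above.
fn is_meilisearch_cloud(raw_url: &str) -> bool {
    Url::parse(raw_url)
        .ok()
        .as_ref()
        .and_then(Url::host_str)
        .is_some_and(|host| {
            host.ends_with("meilisearch.dev")
                || host.ends_with("meilisearch.com")
                || host.ends_with("meilisearch.io")
        })
}

fn main() {
    // Illustrative URLs only.
    assert!(is_meilisearch_cloud("https://edge.meilisearch.io/export"));
    assert!(!is_meilisearch_cloud("https://search.example.com"));
}
```

The aggregate then reports this counter as `avg_exports_meilisearch_cloud`, dividing by `total_received` only when at least one event was received, so the average stays `None` instead of dividing by zero.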
@@ -304,7 +304,7 @@ async fn access_authorized_stats_restricted_index() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on `products` index only.
     let content = json!({

@@ -344,7 +344,7 @@ async fn access_authorized_stats_no_index_restriction() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on all indexes.
     let content = json!({

@@ -384,7 +384,7 @@ async fn list_authorized_indexes_restricted_index() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on `products` index only.
     let content = json!({

@@ -425,7 +425,7 @@ async fn list_authorized_indexes_no_index_restriction() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on all indexes.
     let content = json!({

@@ -507,10 +507,10 @@ async fn access_authorized_index_patterns() {

     server.use_api_key(MASTER_KEY);

-    // refer to products_1 with modified api key.
+    // refer to products_1 with a modified api key.
     let index_1 = server.index("products_1");

-    index_1.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index_1.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -578,19 +578,19 @@ async fn raise_error_non_authorized_index_patterns() {
     assert_eq!(202, code, "{:?}", &response);
     let task2_id = response["taskUid"].as_u64().unwrap();

-    // Adding document to test index. Should Fail with 403 -- invalid_api_key
+    // Adding a document to test index. Should Fail with 403 -- invalid_api_key
     let (response, code) = test_index.add_documents(documents, None).await;
     assert_eq!(403, code, "{:?}", &response);

     server.use_api_key(MASTER_KEY);

-    // refer to products_1 with modified api key.
+    // refer to products_1 with a modified api key.
     let product_1_index = server.index("products_1");
-    // refer to products_2 with modified api key.
-    let product_2_index = server.index("products_2");
+    // refer to products_2 with a modified api key.
+    // let product_2_index = server.index("products_2");

-    product_1_index.wait_task(task1_id).await;
-    product_2_index.wait_task(task2_id).await;
+    server.wait_task(task1_id).await;
+    server.wait_task(task2_id).await;

     let (response, code) = product_1_index.get_task(task1_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -603,7 +603,7 @@ async fn raise_error_non_authorized_index_patterns() {

 #[actix_rt::test]
 async fn pattern_indexes() {
-    // Create server with master key
+    // Create a server with master key
     let mut server = Server::new_auth().await;
     server.use_admin_key(MASTER_KEY).await;

@@ -650,7 +650,7 @@ async fn list_authorized_tasks_restricted_index() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on `products` index only.
     let content = json!({

@@ -690,7 +690,7 @@ async fn list_authorized_tasks_no_index_restriction() {
     let (response, code) = index.create(Some("product_id")).await;
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();
-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     // create key with access on all indexes.
     let content = json!({

@@ -757,7 +757,7 @@ async fn error_creating_index_without_action() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    let response = index.wait_task(task_id).await;
+    let response = server.wait_task(task_id).await;
     assert_eq!(response["status"], "failed");
     assert_eq!(response["error"], expected_error.clone());

@@ -768,7 +768,7 @@ async fn error_creating_index_without_action() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    let response = index.wait_task(task_id).await;
+    let response = server.wait_task(task_id).await;

     assert_eq!(response["status"], "failed");
     assert_eq!(response["error"], expected_error.clone());

@@ -778,7 +778,7 @@ async fn error_creating_index_without_action() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    let response = index.wait_task(task_id).await;
+    let response = server.wait_task(task_id).await;

     assert_eq!(response["status"], "failed");
     assert_eq!(response["error"], expected_error.clone());

@@ -830,7 +830,7 @@ async fn lazy_create_index() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -844,7 +844,7 @@ async fn lazy_create_index() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -856,7 +856,7 @@ async fn lazy_create_index() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -911,7 +911,7 @@ async fn lazy_create_index_from_pattern() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -929,7 +929,7 @@ async fn lazy_create_index_from_pattern() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);

@@ -949,7 +949,7 @@ async fn lazy_create_index_from_pattern() {
     assert_eq!(202, code, "{:?}", &response);
     let task_id = response["taskUid"].as_u64().unwrap();

-    index.wait_task(task_id).await;
+    server.wait_task(task_id).await;

     let (response, code) = index.get_task(task_id).await;
     assert_eq!(200, code, "{:?}", &response);
@@ -100,11 +100,11 @@ macro_rules! compute_authorized_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (task1,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task1.uid()).await.succeeded();
+        server.wait_task(task1.uid()).await.succeeded();
         let (task2,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["color"]}))
             .await;
-        index.wait_task(task2.uid()).await.succeeded();
+        server.wait_task(task2.uid()).await.succeeded();
         drop(index);

         for key_content in ACCEPTED_KEYS.iter() {

@@ -147,7 +147,7 @@ macro_rules! compute_forbidden_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (task, _status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         for key_content in $parent_keys.iter() {

@@ -268,21 +268,21 @@ macro_rules! compute_authorized_single_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (add_task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(add_task.uid()).await.succeeded();
+        server.wait_task(add_task.uid()).await.succeeded();
         let (update_task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["color"]}))
             .await;
-        index.wait_task(update_task.uid()).await.succeeded();
+        server.wait_task(update_task.uid()).await.succeeded();
         drop(index);

         let index = server.index("products");
         let documents = NESTED_DOCUMENTS.clone();
         let (add_task2,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(add_task2.uid()).await.succeeded();
+        server.wait_task(add_task2.uid()).await.succeeded();
         let (update_task2,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["doggos"]}))
             .await;
-        index.wait_task(update_task2.uid()).await.succeeded();
+        server.wait_task(update_task2.uid()).await.succeeded();
         drop(index);


@@ -339,21 +339,21 @@ macro_rules! compute_authorized_multiple_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["color"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         let index = server.index("products");
         let documents = NESTED_DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["doggos"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);


@@ -423,21 +423,21 @@ macro_rules! compute_forbidden_single_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["color"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         let index = server.index("products");
         let documents = NESTED_DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["doggos"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         assert_eq!($parent_keys.len(), $failed_query_indexes.len(), "keys != query_indexes");

@@ -499,21 +499,21 @@ macro_rules! compute_forbidden_multiple_search {
         let index = server.index("sales");
         let documents = DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["color"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         let index = server.index("products");
         let documents = NESTED_DOCUMENTS.clone();
         let (task,_status_code) = index.add_documents(documents, None).await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         let (task,_status_code) = index
             .update_settings(json!({"filterableAttributes": ["doggos"]}))
             .await;
-        index.wait_task(task.uid()).await.succeeded();
+        server.wait_task(task.uid()).await.succeeded();
         drop(index);

         assert_eq!($parent_keys.len(), $failed_query_indexes.len(), "keys != query_indexes");
(File diff suppressed because it is too large)
@@ -1,15 +1,13 @@
 use std::fmt::Write;
 use std::marker::PhantomData;
 use std::panic::{catch_unwind, resume_unwind, UnwindSafe};
-use std::time::Duration;

 use actix_web::http::StatusCode;
-use tokio::time::sleep;
 use urlencoding::encode as urlencode;

 use super::encoder::Encoder;
 use super::service::Service;
-use super::{Owned, Shared, Value};
+use super::{Owned, Server, Shared, Value};
 use crate::json;

 pub struct Index<'a, State = Owned> {

@@ -33,7 +31,7 @@ impl<'a> Index<'a, Owned> {
         Index { uid: self.uid.clone(), service: self.service, encoder, marker: PhantomData }
     }

-    pub async fn load_test_set(&self) -> u64 {
+    pub async fn load_test_set<State>(&self, waiter: &Server<State>) -> u64 {
         let url = format!("/indexes/{}/documents", urlencode(self.uid.as_ref()));
         let (response, code) = self
             .service

@@ -44,12 +42,12 @@ impl<'a> Index<'a, Owned> {
         )
         .await;
         assert_eq!(code, 202);
-        let update_id = response["taskUid"].as_i64().unwrap();
-        self.wait_task(update_id as u64).await;
-        update_id as u64
+        let update_id = response["taskUid"].as_u64().unwrap();
+        waiter.wait_task(update_id).await;
+        update_id
     }

-    pub async fn load_test_set_ndjson(&self) -> u64 {
+    pub async fn load_test_set_ndjson<State>(&self, waiter: &Server<State>) -> u64 {
         let url = format!("/indexes/{}/documents", urlencode(self.uid.as_ref()));
         let (response, code) = self
             .service

@@ -60,9 +58,9 @@ impl<'a> Index<'a, Owned> {
         )
         .await;
         assert_eq!(code, 202);
-        let update_id = response["taskUid"].as_i64().unwrap();
-        self.wait_task(update_id as u64).await;
-        update_id as u64
+        let update_id = response["taskUid"].as_u64().unwrap();
+        waiter.wait_task(update_id).await;
+        update_id
     }

     pub async fn create(&self, primary_key: Option<&str>) -> (Value, StatusCode) {

@@ -267,10 +265,14 @@ impl Index<'_, Shared> {
     /// You cannot modify the content of a shared index, thus the delete_document_by_filter call
     /// must fail. If the task successfully enqueues itself, we'll wait for the task to finish,
     /// and if it succeeds the function will panic.
-    pub async fn delete_document_by_filter_fail(&self, body: Value) -> (Value, StatusCode) {
+    pub async fn delete_document_by_filter_fail<State>(
+        &self,
+        body: Value,
+        waiter: &Server<State>,
+    ) -> (Value, StatusCode) {
         let (mut task, code) = self._delete_document_by_filter(body).await;
         if code.is_success() {
-            task = self.wait_task(task.uid()).await;
+            task = waiter.wait_task(task.uid()).await;
             if task.is_success() {
                 panic!(
                     "`delete_document_by_filter_fail` succeeded: {}",

@@ -281,10 +283,10 @@ impl Index<'_, Shared> {
         (task, code)
     }

-    pub async fn delete_index_fail(&self) -> (Value, StatusCode) {
+    pub async fn delete_index_fail<State>(&self, waiter: &Server<State>) -> (Value, StatusCode) {
         let (mut task, code) = self._delete().await;
         if code.is_success() {
-            task = self.wait_task(task.uid()).await;
+            task = waiter.wait_task(task.uid()).await;
             if task.is_success() {
                 panic!(
                     "`delete_index_fail` succeeded: {}",

@@ -295,10 +297,14 @@ impl Index<'_, Shared> {
         (task, code)
     }

-    pub async fn update_index_fail(&self, primary_key: Option<&str>) -> (Value, StatusCode) {
+    pub async fn update_index_fail<State>(
+        &self,
+        primary_key: Option<&str>,
+        waiter: &Server<State>,
+    ) -> (Value, StatusCode) {
         let (mut task, code) = self._update(primary_key).await;
         if code.is_success() {
-            task = self.wait_task(task.uid()).await;
+            task = waiter.wait_task(task.uid()).await;
             if task.is_success() {
                 panic!(
                     "`update_index_fail` succeeded: {}",

@@ -364,23 +370,6 @@ impl<State> Index<'_, State> {
         self.service.delete(url).await
     }

-    pub async fn wait_task(&self, update_id: u64) -> Value {
-        // try several times to get status, or panic to not wait forever
-        let url = format!("/tasks/{}", update_id);
-        for _ in 0..100 {
-            let (response, status_code) = self.service.get(&url).await;
-            assert_eq!(200, status_code, "response: {}", response);
-
-            if response["status"] == "succeeded" || response["status"] == "failed" {
-                return response;
-            }
-
-            // wait 0.5 second.
-            sleep(Duration::from_millis(500)).await;
-        }
-        panic!("Timeout waiting for update id");
-    }
-
     pub async fn get_task(&self, update_id: u64) -> (Value, StatusCode) {
         let url = format!("/tasks/{}", update_id);
         self.service.get(url).await
@@ -38,6 +38,15 @@ impl Value {
         self["uid"].as_u64().is_some() || self["taskUid"].as_u64().is_some()
     }

+    #[track_caller]
+    pub fn batch_uid(&self) -> u32 {
+        if let Some(batch_uid) = self["batchUid"].as_u64() {
+            batch_uid as u32
+        } else {
+            panic!("Didn't find `batchUid` in: {self}");
+        }
+    }
+
     /// Return `true` if the `status` field is set to `succeeded`.
     /// Panic if the `status` field doesn't exist.
     #[track_caller]

@@ -181,7 +190,7 @@ pub async fn shared_empty_index() -> &'static Index<'static, Shared> {
         let server = Server::new_shared();
         let index = server._index("EMPTY_INDEX").to_shared();
         let (response, _code) = index._create(None).await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     })
     .await

@@ -229,13 +238,13 @@ pub async fn shared_index_with_documents() -> &'static Index<'static, Shared> {
         let index = server._index("SHARED_DOCUMENTS").to_shared();
         let documents = DOCUMENTS.clone();
         let (response, _code) = index._add_documents(documents, None).await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         let (response, _code) = index
             ._update_settings(
                 json!({"filterableAttributes": ["id", "title"], "sortableAttributes": ["id", "title"]}),
             )
             .await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     }).await
 }

@@ -272,13 +281,13 @@ pub async fn shared_index_with_score_documents() -> &'static Index<'static, Shared> {
         let index = server._index("SHARED_SCORE_DOCUMENTS").to_shared();
         let documents = SCORE_DOCUMENTS.clone();
         let (response, _code) = index._add_documents(documents, None).await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         let (response, _code) = index
             ._update_settings(
                 json!({"filterableAttributes": ["id", "title"], "sortableAttributes": ["id", "title"]}),
             )
             .await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     }).await
 }

@@ -349,13 +358,13 @@ pub async fn shared_index_with_nested_documents() -> &'static Index<'static, Shared> {
         let index = server._index("SHARED_NESTED_DOCUMENTS").to_shared();
         let documents = NESTED_DOCUMENTS.clone();
         let (response, _code) = index._add_documents(documents, None).await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         let (response, _code) = index
             ._update_settings(
                 json!({"filterableAttributes": ["father", "doggos", "cattos"], "sortableAttributes": ["doggos"]}),
             )
             .await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     }).await
 }

@@ -449,7 +458,7 @@ pub async fn shared_index_with_test_set() -> &'static Index<'static, Shared> {
         )
         .await;
         assert_eq!(code, 202);
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     })
     .await

@@ -496,14 +505,14 @@ pub async fn shared_index_with_geo_documents() -> &'static Index<'static, Shared> {
         let server = Server::new_shared();
         let index = server._index("SHARED_GEO_DOCUMENTS").to_shared();
         let (response, _code) = index._add_documents(GEO_DOCUMENTS.clone(), None).await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();

         let (response, _code) = index
             ._update_settings(
                 json!({"filterableAttributes": ["_geo"], "sortableAttributes": ["_geo"]}),
             )
             .await;
-        index.wait_task(response.uid()).await.succeeded();
+        server.wait_task(response.uid()).await.succeeded();
         index
     })
     .await
@ -408,12 +408,12 @@ impl<State> Server<State> {

pub async fn wait_task(&self, update_id: u64) -> Value {
// try several times to get status, or panic to not wait forever
let url = format!("/tasks/{}", update_id);
let max_attempts = 400; // 200 seconds total, 0.5s per attempt
let url = format!("/tasks/{update_id}");
let max_attempts = 400; // 200 seconds in total, 0.5secs per attempt

for i in 0..max_attempts {
let (response, status_code) = self.service.get(&url).await;
assert_eq!(200, status_code, "response: {}", response);
let (response, status_code) = self.service.get(url.clone()).await;
assert_eq!(200, status_code, "response: {response}");

if response["status"] == "succeeded" || response["status"] == "failed" {
return response;
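The helper above polls the public `GET /tasks/{uid}` route every half second until the task reaches a terminal status. As a rough sketch of the same wait done by hand (the local address and task uid `0` are illustrative):

```bash
# Poll a task until its status is "succeeded" or "failed",
# mirroring what Server::wait_task does in the test helper above.
until curl -s http://localhost:7700/tasks/0 | grep -qE '"status":"(succeeded|failed)"'; do
  sleep 0.5
done
```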
@ -1318,7 +1318,7 @@ async fn add_no_documents() {
async fn add_larger_dataset() {
let server = Server::new_shared();
let index = server.unique_index();
let update_id = index.load_test_set().await;
let update_id = index.load_test_set(server).await;
let (response, code) = index.get_task(update_id).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");
@ -1333,7 +1333,7 @@ async fn add_larger_dataset() {

// x-ndjson add large test
let index = server.unique_index();
let update_id = index.load_test_set_ndjson().await;
let update_id = index.load_test_set_ndjson(server).await;
let (response, code) = index.get_task(update_id).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");

@ -7,7 +7,8 @@ use crate::json;
async fn delete_one_document_unexisting_index() {
let server = Server::new_shared();
let index = shared_does_not_exists_index().await;
let (task, code) = index.delete_document_by_filter_fail(json!({"filter": "a = b"})).await;
let (task, code) =
index.delete_document_by_filter_fail(json!({"filter": "a = b"}), server).await;
assert_eq!(code, 202);

server.wait_task(task.uid()).await.failed();

@ -559,7 +559,7 @@ async fn delete_document_by_filter() {
let index = shared_does_not_exists_index().await;
// index does not exists
let (response, _code) =
index.delete_document_by_filter_fail(json!({ "filter": "doggo = bernese"})).await;
index.delete_document_by_filter_fail(json!({ "filter": "doggo = bernese"}), server).await;
snapshot!(response, @r###"
{
"uid": "[uid]",
@ -589,7 +589,7 @@ async fn delete_document_by_filter() {
// no filterable are set
let index = shared_empty_index().await;
let (response, _code) =
index.delete_document_by_filter_fail(json!({ "filter": "doggo = bernese"})).await;
index.delete_document_by_filter_fail(json!({ "filter": "doggo = bernese"}), server).await;
snapshot!(response, @r###"
{
"uid": "[uid]",
@ -619,7 +619,7 @@ async fn delete_document_by_filter() {
// not filterable while there is a filterable attribute
let index = shared_index_with_documents().await;
let (response, code) =
index.delete_document_by_filter_fail(json!({ "filter": "catto = jorts"})).await;
index.delete_document_by_filter_fail(json!({ "filter": "catto = jorts"}), server).await;
snapshot!(code, @"202 Accepted");
let response = server.wait_task(response.uid()).await.failed();
snapshot!(response, @r###"

@ -334,7 +334,7 @@ async fn get_document_s_nested_attributes_to_retrieve() {
async fn get_documents_displayed_attributes_is_ignored() {
let server = Server::new_shared();
let index = server.unique_index();
index.load_test_set().await;
index.load_test_set(server).await;
index.update_settings(json!({"displayedAttributes": ["gender"]})).await;

let (response, code) = index.get_all_documents(GetAllDocumentsOptions::default()).await;
@ -2366,7 +2366,7 @@ async fn generate_and_import_dump_containing_vectors() {
))
.await;
snapshot!(code, @"202 Accepted");
let response = index.wait_task(response.uid()).await;
let response = server.wait_task(response.uid()).await;
snapshot!(response);
let (response, code) = index
.add_documents(
@ -2381,12 +2381,12 @@ async fn generate_and_import_dump_containing_vectors() {
)
.await;
snapshot!(code, @"202 Accepted");
let response = index.wait_task(response.uid()).await;
let response = server.wait_task(response.uid()).await;
snapshot!(response);

let (response, code) = server.create_dump().await;
snapshot!(code, @"202 Accepted");
let response = index.wait_task(response.uid()).await;
let response = server.wait_task(response.uid()).await;
snapshot!(response["status"], @r###""succeeded""###);

// ========= We made a dump, now we should clear the DB and try to import our dump

@ -161,9 +161,9 @@ async fn test_create_multiple_indexes() {
let (task2, _) = index2.create(None).await;
let (task3, _) = index3.create(None).await;

index1.wait_task(task1.uid()).await.succeeded();
index2.wait_task(task2.uid()).await.succeeded();
index3.wait_task(task3.uid()).await.succeeded();
server.wait_task(task1.uid()).await.succeeded();
server.wait_task(task2.uid()).await.succeeded();
server.wait_task(task3.uid()).await.succeeded();

assert_eq!(index1.get().await.1, 200);
assert_eq!(index2.get().await.1, 200);

@ -26,7 +26,7 @@ async fn create_and_delete_index() {
async fn error_delete_unexisting_index() {
let server = Server::new_shared();
let index = shared_does_not_exists_index().await;
let (task, code) = index.delete_index_fail().await;
let (task, code) = index.delete_index_fail(server).await;

assert_eq!(code, 202);
server.wait_task(task.uid()).await.failed();

@ -60,8 +60,8 @@ async fn list_multiple_indexes() {
let index_with_key = server.unique_index();
let (response_with_key, _status_code) = index_with_key.create(Some("key")).await;

index_without_key.wait_task(response_without_key.uid()).await.succeeded();
index_with_key.wait_task(response_with_key.uid()).await.succeeded();
server.wait_task(response_without_key.uid()).await.succeeded();
server.wait_task(response_with_key.uid()).await.succeeded();

let (response, code) = server.list_indexes(None, Some(1000)).await;
assert_eq!(code, 200);
@ -81,8 +81,9 @@ async fn get_and_paginate_indexes() {
let server = Server::new().await;
const NB_INDEXES: usize = 50;
for i in 0..NB_INDEXES {
server.index(format!("test_{i:02}")).create(None).await;
server.index(format!("test_{i:02}")).wait_task(i as u64).await;
let (task, code) = server.index(format!("test_{i:02}")).create(None).await;
assert_eq!(code, 202);
server.wait_task(task.uid()).await;
}

// basic

@ -72,7 +72,7 @@ async fn error_update_existing_primary_key() {
let server = Server::new_shared();
let index = shared_index_with_documents().await;

let (update_task, code) = index.update_index_fail(Some("primary")).await;
let (update_task, code) = index.update_index_fail(Some("primary"), server).await;

assert_eq!(code, 202);
let response = server.wait_task(update_task.uid()).await.failed();
@ -91,7 +91,7 @@ async fn error_update_existing_primary_key() {
async fn error_update_unexisting_index() {
let server = Server::new_shared();
let index = shared_does_not_exists_index().await;
let (task, code) = index.update_index_fail(Some("my-primary-key")).await;
let (task, code) = index.update_index_fail(Some("my-primary-key"), server).await;

assert_eq!(code, 202);

@ -158,11 +158,11 @@ async fn remote_sharding() {
let index1 = ms1.index("test");
let index2 = ms2.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index2.add_documents(json!(documents[3..5]), None).await;
index2.wait_task(task.uid()).await.succeeded();
ms2.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -454,9 +454,9 @@ async fn error_unregistered_remote() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -572,9 +572,9 @@ async fn error_no_weighted_score() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -705,9 +705,9 @@ async fn error_bad_response() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -842,9 +842,9 @@ async fn error_bad_request() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -972,10 +972,10 @@ async fn error_bad_request_facets_by_index() {
let index0 = ms0.index("test0");
let index1 = ms1.index("test1");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();

let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -1113,13 +1113,13 @@ async fn error_bad_request_facets_by_index_facet() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();

let (task, _status_code) = index0.update_settings_filterable_attributes(json!(["id"])).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();

let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -1224,6 +1224,7 @@ async fn error_bad_request_facets_by_index_facet() {
}

#[actix_rt::test]
#[ignore]
async fn error_remote_does_not_answer() {
let ms0 = Server::new().await;
let ms1 = Server::new().await;
@ -1262,9 +1263,9 @@ async fn error_remote_does_not_answer() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -1463,9 +1464,9 @@ async fn error_remote_404() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -1658,9 +1659,9 @@ async fn error_remote_sharding_auth() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
ms1.clear_api_key();
@ -1818,9 +1819,9 @@ async fn remote_sharding_auth() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
ms1.clear_api_key();
@ -1973,9 +1974,9 @@ async fn error_remote_500() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -2152,9 +2153,9 @@ async fn error_remote_500_once() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -2335,9 +2336,9 @@ async fn error_remote_timeout() {
let index0 = ms0.index("test");
let index1 = ms1.index("test");
let (task, _status_code) = index0.add_documents(json!(documents[0..2]), None).await;
index0.wait_task(task.uid()).await.succeeded();
ms0.wait_task(task.uid()).await.succeeded();
let (task, _status_code) = index1.add_documents(json!(documents[2..3]), None).await;
index1.wait_task(task.uid()).await.succeeded();
ms1.wait_task(task.uid()).await.succeeded();

// wrap servers
let ms0 = Arc::new(ms0);
@ -298,7 +298,7 @@ async fn similar_bad_filter() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let (response, code) =
index.similar_post(json!({ "id": 287947, "filter": true, "embedder": "manual" })).await;
@ -335,7 +335,7 @@ async fn filter_invalid_syntax_object() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

index
.similar(json!({"id": 287947, "filter": "title & Glass", "embedder": "manual"}), |response, code| {
@ -373,7 +373,7 @@ async fn filter_invalid_syntax_array() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

index
.similar(json!({"id": 287947, "filter": ["title & Glass"], "embedder": "manual"}), |response, code| {
@ -411,7 +411,7 @@ async fn filter_invalid_syntax_string() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "Found unexpected characters at the end of the filter: `XOR title = Glass`. You probably forgot an `OR` or an `AND` rule.\n15:32 title = Glass XOR title = Glass",
@ -451,7 +451,7 @@ async fn filter_invalid_attribute_array() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

index
.similar(
@ -492,7 +492,7 @@ async fn filter_invalid_attribute_string() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

index
.similar(
@ -533,7 +533,7 @@ async fn filter_reserved_geo_attribute_array() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geo` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:13 _geo = Glass",
@ -573,7 +573,7 @@ async fn filter_reserved_geo_attribute_string() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geo` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:13 _geo = Glass",
@ -613,7 +613,7 @@ async fn filter_reserved_attribute_array() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geoDistance` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:21 _geoDistance = Glass",
@ -653,7 +653,7 @@ async fn filter_reserved_attribute_string() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geoDistance` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:21 _geoDistance = Glass",
@ -693,7 +693,7 @@ async fn filter_reserved_geo_point_array() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geoPoint` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:18 _geoPoint = Glass",
@ -733,7 +733,7 @@ async fn filter_reserved_geo_point_string() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let expected_response = json!({
"message": "`_geoPoint` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.\n1:18 _geoPoint = Glass",
@ -825,7 +825,7 @@ async fn similar_bad_embedder() {
let documents = DOCUMENTS.clone();
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await;
server.wait_task(value.uid()).await;

let expected_response = json!({
"message": "Cannot find embedder with name `auto`.",
@ -51,12 +51,12 @@ async fn perform_snapshot() {
}))
.await;

index.load_test_set().await;
index.load_test_set(&server).await;

let (task, code) = server.index("test1").create(Some("prim")).await;
meili_snap::snapshot!(code, @"202 Accepted");

index.wait_task(task.uid()).await.succeeded();
server.wait_task(task.uid()).await.succeeded();

// wait for the _next task_ to process, aka the snapshot that should be enqueued at some point

@ -128,13 +128,13 @@ async fn perform_on_demand_snapshot() {
}))
.await;

index.load_test_set().await;
index.load_test_set(&server).await;

let (task, _status_code) = server.index("doggo").create(Some("bone")).await;
index.wait_task(task.uid()).await.succeeded();
server.wait_task(task.uid()).await.succeeded();

let (task, _status_code) = server.index("doggo").create(Some("bone")).await;
index.wait_task(task.uid()).await.failed();
server.wait_task(task.uid()).await.failed();

let (task, code) = server.create_snapshot().await;
snapshot!(code, @"202 Accepted");
@ -147,7 +147,7 @@ async fn perform_on_demand_snapshot() {
"enqueuedAt": "[date]"
}
"###);
let task = index.wait_task(task.uid()).await;
let task = server.wait_task(task.uid()).await;
snapshot!(json_string!(task, { ".enqueuedAt" => "[date]", ".startedAt" => "[date]", ".finishedAt" => "[date]", ".duration" => "[duration]" }), @r###"
{
"uid": 4,
@ -32,7 +32,7 @@ async fn stats() {
let (task, code) = index.create(Some("id")).await;

assert_eq!(code, 202);
index.wait_task(task.uid()).await.succeeded();
server.wait_task(task.uid()).await.succeeded();

let (response, code) = server.stats().await;

@ -58,7 +58,7 @@ async fn stats() {
assert_eq!(code, 202, "{response}");
assert_eq!(response["taskUid"], 1);

index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let timestamp = OffsetDateTime::now_utc();
let (response, code) = server.stats().await;
@ -107,7 +107,7 @@ async fn add_remove_embeddings() {

let (response, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -135,7 +135,7 @@ async fn add_remove_embeddings() {

let (response, code) = index.update_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -163,7 +163,7 @@ async fn add_remove_embeddings() {

let (response, code) = index.update_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -192,7 +192,7 @@ async fn add_remove_embeddings() {

let (response, code) = index.update_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -245,7 +245,7 @@ async fn add_remove_embedded_documents() {

let (response, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -269,7 +269,7 @@ async fn add_remove_embedded_documents() {
// delete one embedded document, remaining 1 embedded documents for 3 embeddings in total
let (response, code) = index.delete_document(0).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -305,7 +305,7 @@ async fn update_embedder_settings() {

let (response, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(response.uid()).await.succeeded();
server.wait_task(response.uid()).await.succeeded();

let (stats, _code) = index.stats().await;
snapshot!(json_string!(stats, {
@ -88,7 +88,7 @@ async fn binary_quantize_before_sending_documents() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

// Make sure the documents are binary quantized
let (documents, _code) = index
@ -161,7 +161,7 @@ async fn binary_quantize_after_sending_documents() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let (response, code) = index
.update_settings(json!({
@ -305,7 +305,7 @@ async fn binary_quantize_clear_documents() {
server.wait_task(response.uid()).await.succeeded();

let (value, _code) = index.clear_all_documents().await;
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

// Make sure the documents DB has been cleared
let (documents, _code) = index
@ -42,7 +42,7 @@ async fn add_remove_user_provided() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let (documents, _code) = index
.get_all_documents(GetAllDocumentsOptions { retrieve_vectors: true, ..Default::default() })
@ -95,7 +95,7 @@ async fn add_remove_user_provided() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let (documents, _code) = index
.get_all_documents(GetAllDocumentsOptions { retrieve_vectors: true, ..Default::default() })
@ -138,7 +138,7 @@ async fn add_remove_user_provided() {

let (value, code) = index.delete_document(0).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

let (documents, _code) = index
.get_all_documents(GetAllDocumentsOptions { retrieve_vectors: true, ..Default::default() })
@ -187,7 +187,7 @@ async fn user_provide_mismatched_embedding_dimension() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -218,7 +218,7 @@ async fn user_provide_mismatched_embedding_dimension() {
]);
let (response, code) = index.add_documents(new_document, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(response.uid()).await;
let task = server.wait_task(response.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -270,7 +270,7 @@ async fn generate_default_user_provided_documents(server: &Server) -> Index {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

index
}
@ -285,7 +285,7 @@ async fn user_provided_embeddings_error() {
json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "embeddings": [0, 0, 0] }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -315,7 +315,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": {}}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -346,7 +346,7 @@ async fn user_provided_embeddings_error() {
json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "regenerate": "yes please" }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -375,7 +375,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "embeddings": true, "regenerate": true }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -404,7 +404,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "embeddings": [true], "regenerate": true }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -433,7 +433,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "embeddings": [[true]], "regenerate": false }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -462,20 +462,20 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "embeddings": [23, 0.1, -12], "regenerate": true }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task["status"], @r###""succeeded""###);

let documents =
json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "regenerate": false }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task["status"], @r###""succeeded""###);

let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "regenerate": false, "embeddings": [0.1, [0.2, 0.3]] }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -504,7 +504,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "regenerate": false, "embeddings": [[0.1, 0.2], 0.3] }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -533,7 +533,7 @@ async fn user_provided_embeddings_error() {
let documents = json!({"id": 0, "name": "kefir", "_vectors": { "manual": { "regenerate": false, "embeddings": [[0.1, true], 0.3] }}});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -574,7 +574,7 @@ async fn user_provided_vectors_error() {
let documents = json!([{"id": 40, "name": "kefir"}, {"id": 41, "name": "intel"}, {"id": 42, "name": "max"}, {"id": 43, "name": "venus"}, {"id": 44, "name": "eva"}]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -604,7 +604,7 @@ async fn user_provided_vectors_error() {
let documents = json!({"id": 42, "name": "kefir", "_vector": { "manaul": [0, 0, 0] }});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -634,7 +634,7 @@ async fn user_provided_vectors_error() {
let documents = json!({"id": 42, "name": "kefir", "_vectors": { "manaul": [0, 0, 0] }});
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -667,7 +667,7 @@ async fn clear_documents() {
let index = generate_default_user_provided_documents(&server).await;

let (value, _code) = index.clear_all_documents().await;
index.wait_task(value.uid()).await.succeeded();
server.wait_task(value.uid()).await.succeeded();

// Make sure the documents DB has been cleared
let (documents, _code) = index
@ -723,7 +723,7 @@ async fn add_remove_one_vector_4588() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, name: "document-added");

let documents = json!([
@ -731,7 +731,7 @@ async fn add_remove_one_vector_4588() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, name: "document-deleted");

let (documents, _code) = index
@ -117,7 +117,7 @@ async fn test_both_apis() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",

@ -370,7 +370,7 @@ async fn it_works() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -601,7 +601,7 @@ async fn tokenize_long_text() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -657,7 +657,7 @@ async fn bad_api_key() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;

snapshot!(task, @r###"
{
@ -805,7 +805,7 @@ async fn bad_model() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;

snapshot!(task, @r###"
{
@ -883,7 +883,7 @@ async fn bad_dimensions() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;

snapshot!(task, @r###"
{
@ -992,7 +992,7 @@ async fn smaller_dimensions() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1224,7 +1224,7 @@ async fn small_embedding_model() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1455,7 +1455,7 @@ async fn legacy_embedding_model() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1687,7 +1687,7 @@ async fn it_still_works() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1916,7 +1916,7 @@ async fn timeout() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",

@ -1099,7 +1099,7 @@ async fn add_vector_and_user_provided() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1616,7 +1616,7 @@ async fn server_returns_multiple() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1722,7 +1722,7 @@ async fn server_single_input_returns_in_array() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
@ -1828,7 +1828,7 @@ async fn server_raw() {
]);
let (value, code) = index.add_documents(documents, None).await;
snapshot!(code, @"202 Accepted");
let task = index.wait_task(value.uid()).await;
let task = server.wait_task(value.uid()).await;
snapshot!(task, @r###"
{
"uid": "[uid]",
83 documentation/experimental-features.md Normal file
@ -0,0 +1,83 @@
# Experimental features: description and process

## Quick definition of experimental features

An experimental feature is a feature present in the final Meilisearch binary that is not considered stable. This means its API might become incompatible between two Meilisearch releases.

Experimental features must be explicitly enabled by a user.

> ⚠️ Experimental features are NOT [prototypes](./prototypes.md). All experimental features are thoroughly tested before release and follow the same quality standards as other features.

## Motivation

Since the release of v1, Meilisearch has been considered a stable binary, and its API cannot break between minor and patch versions. This means it is impossible to make breaking changes to a feature without releasing a major version.

This limitation, which guarantees our users that Meilisearch is a stable and reliable product, also applies to new features. If we introduce a new feature in one release, any breaking change to it will require a new major release.

To avoid frequently releasing new major versions while still developing new features, we first provide these features as "experimental". This allows users to test them, report implementation issues, and give us important feedback.

## When is a feature considered experimental?

Not all new features need to go through the experimental feature process.

We treat features as experimental in cases such as:

- New features we are considering adding to the search engine, but that need user feedback before we make a final decision and/or commit to a specific implementation. Example: a new API route or CLI flag
- Improvements to existing functionality the engine team is not comfortable releasing as stable immediately. Example: changes to search relevancy or performance improvements
- New features that would introduce breaking changes and cannot be integrated as stable before a new major version
- New features that will NEVER be stable. These features are useful to provide quick temporary fixes to critical issues. Example: an option to disable auto-batching

## How to enable experimental features?

Users must explicitly enable experimental features with a CLI flag. Experimental features are always disabled by default.

Example CLI flags: `--experimental-disable-soft-delete`, `--experimental-multi-index-search`.

⚠️ To ensure users understand a feature is experimental, flags must contain the `experimental` prefix.
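For instance, a minimal launch sketch using one of the example flags above (the binary path is illustrative):

```bash
# Experimental features are opt-in and disabled by default;
# pass the flag explicitly when starting Meilisearch.
./meilisearch --experimental-disable-soft-delete
```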
## Rules and expectations

- The API and behavior of an experimental feature can break between two minor versions of Meilisearch
- The experimental feature process described here can change significantly between two minor versions of Meilisearch
- Providing a feature as "experimental" does not guarantee it will one day be stable: newly introduced experimental features or improvements may be removed in a future release
- While experimental features are expected to be unstable in terms of usage and compatibility between versions, users should not expect more bugs or issues than with any other Meilisearch feature. Experimental features should follow the same quality standards as stable features, including thorough test suites and in-depth code reviews. That said, certain experimental features might be inherently more prone to bugs and regressions

## Communication with users

For each new experimental feature, we must:
- GitHub: open a dedicated GitHub discussion in the [product repository](https://github.com/meilisearch/product/discussions). This discussion should never become stale and should be updated regularly. Users need to understand they can interact with us and get quick answers. The discussion should inform users about:
  - Our motivations: why is this feature unstable?
  - Usage: how do you activate this feature? Is a migration with a dump needed?
  - Planning: what are the conditions for making this feature stable? When do we expect it to become stable?
- Meilisearch CLI: update the `--help` command in the Meilisearch binary so it redirects users to the related GitHub discussion and warns them about the unstable state of the feature
- Documentation: create a small dedicated page about the purpose of the experimental feature. This page should contain no usage instructions and should redirect users to the related GitHub discussion for more information

## Usage warnings

- The API can break between two versions of Meilisearch. People using an experimental feature in production should pay extra attention to it.
- Some experimental features might require re-indexing. In these cases, users will have to use a dump to activate or deactivate the feature. Users will be clearly informed about this in the related GitHub discussion

> ⚠️ Since this process is not mature yet, users might experience issues with their DB when deactivating these features, even when using a dump.<br>
> We recommend users always save their data (with snapshots and/or dumps) before activating experimental features.

## Technical details

### Why does Meilisearch need to be restarted when activating an experimental feature?

Meilisearch uses LMDB to store both documents and internal application data, such as Meilisearch tasks. Altering these internal data structures requires closing and re-opening the LMDB environment.

If an experimental feature's implementation involves modifying internal data structures, users must restart Meilisearch. This cannot be done via HTTP routes.

Unfortunately, this might affect most experimental features. However, this might change in the future, or be adapted to the context of a specific new feature.

### Why will some features require migrating data with dumps?

Under some circumstances, Meilisearch might have issues reading a database generated by a different Meilisearch release. This might cause an instance to crash or work with faulty data.

This is already a possibility when migrating between minor Meilisearch versions, and it is more likely to happen when activating a new experimental feature. The opposite operation (migrating a database with experimental features activated to one where those features are not active) is currently riskier. As we improve the development of experimental features, this procedure will become safer and more reliable.

### Restarting Meilisearch and migrating databases with dumps to activate an experimental feature is inconvenient. Will this improve in the future?

We understand this situation is inconvenient and less than ideal. We will only ask users to use dumps to activate experimental features when it is strictly necessary.

Avoiding restarts is more difficult, especially for features that currently require database migrations with dumps. We are not currently working on this, but the situation might change in the future.
74 documentation/prototypes.md Normal file
@ -0,0 +1,74 @@
# Prototype process

## What is a prototype?

A prototype is an alternative version of Meilisearch (provided in a Docker image) containing a new feature or an improvement that the engine team provides to users.

## Why provide a prototype?

For some features or improvements we want to introduce in Meilisearch, we have users test them before release, for several reasons:
- to ensure we solve the initial use case defined during discovery
- to ensure the API does not have major usability issues
- to identify and remove concrete technical roadblocks, such as performance issues, by working on an implementation as soon as possible
- to get any other feedback from users regarding their usage

This lets us iterate quickly before stabilizing the feature for the current release.

> ⚠️ Prototypes are NOT [experimental features](./experimental-features.md). All experimental features are thoroughly tested before release and follow the same quality standards as other features. This is not the case with prototypes, which are the equivalent of a first draft of a new feature.

## How to publish a prototype?

### Release steps

The prototype name must follow this convention: `prototype-X-Y` where
- `X` is the feature name formatted in `kebab-case`. It should not end with a single number.
- `Y` is the version of the prototype, starting from `0`.

✅ Example: `prototype-auto-resize-0`. <br>
❌ Bad example: `auto-resize-0`: lacks the `prototype` prefix. <br>
❌ Bad example: `prototype-auto-resize`: lacks the version suffix. <br>
❌ Bad example: `prototype-auto-resize-0-0`: feature name ends with a single number.

Steps to create a prototype (a consolidated sketch follows the list):

1. In your terminal, go to the last commit of your branch (the one you want to provide as a prototype).
2. Create a tag following the convention: `git tag prototype-X-Y`
3. Run Meilisearch and check that its launch summary features a line reading `Prototype: prototype-X-Y` (you may need to switch branches and back after tagging for this to work).
4. Push the tag: `git push origin prototype-X-Y`
5. Check that the [Docker CI](https://github.com/meilisearch/meilisearch/actions/workflows/publish-docker-images.yml) is now running.
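Put together, and reusing the `prototype-auto-resize-0` example name from above, steps 2 and 4 might look like:

```bash
# Tag the commit to release as a prototype, then publish the tag
# so the Docker CI picks it up (the feature name is illustrative).
git tag prototype-auto-resize-0
git push origin prototype-auto-resize-0
```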
🐳 Once the CI has finished to run (~1h30), a Docker image named `prototype-X-Y` will be available on [DockerHub](https://hub.docker.com/repository/docker/getmeili/meilisearch/general). People can use it with the following command: `docker run -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-X-Y`. <br>
|
||||
More information about [how to run Meilisearch with Docker](https://docs.meilisearch.com/learn/cookbooks/docker.html#download-meilisearch-with-docker).
|
||||
|
||||
⚠️ However, no binaries will be created. If the users do not use Docker, they can go to the `prototype-X-Y` tag in the Meilisearch repository and compile it from the source code.
|
||||
|
||||
### Communication
|
||||
|
||||
When sharing a prototype with users, it's important to
|
||||
- remind them not to use it in production. Prototypes are solely for test purposes.
|
||||
- explain how to run the prototype
|
||||
- explain how to use the new feature
|
||||
- encourage users to let their feedback
|
||||
|
||||
The prototype should be shared at least in the related issue and/or the related product discussion. It's the developer and the PM to decide to add more communication, like sharing it on Discord or Twitter.
|
||||
|
||||

Here is an example of a message to share on GitHub:

> Hello everyone,
>
> Here is the current prototype you can use to test the new XXX feature:
>
> How to run the prototype?
> You need to start from a fresh database (remove the previously used `data.ms`) and use the following Docker image:
> ```bash
> docker run -it --rm -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-X-Y
> ```
>
> You can use the feature this way:
> ```bash
> ...
> ```
>
> ⚠️ We do NOT recommend using this prototype in production. This is only for test purposes.
>
> Everyone is more than welcome to give feedback and to report any issue or bug you might encounter when using this prototype. Thanks in advance for your involvement. It means a lot to us ❤️

75
documentation/release.md
Normal file
@ -0,0 +1,75 @@

# Meilisearch release process

This guide describes how to make releases for this repository.

## 📅 Weekly Meilisearch release

1. A weekly meeting is held every Monday to define the release and to run minimal checks before releasing.

<details>
<summary>Check out the TODO 👇👇👇</summary>

- [ ] Define the version of the release (`vX.Y.Z`)
- [ ] Manually test `--experimental-dumpless-upgrade` on a DB of the previous Meilisearch minor version (see the sketch below)
- [ ] Check recent <a href="https://github.com/meilisearch/meilisearch/actions">automated tests</a> on `main`
  - [ ] Scheduled test suite
  - [ ] Scheduled SDK tests
  - [ ] Scheduled flaky tests
  - [ ] Scheduled fuzzer tests
  - [ ] Scheduled Docker CI (dry run)
  - [ ] Scheduled GitHub binary release (dry run)
- [ ] <a href="https://github.com/meilisearch/meilisearch/actions/workflows/update-cargo-toml-version.yml">Create the PR updating the version</a> and merge it.
</details>
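
The dumpless-upgrade check is manual; here is a minimal sketch of one way to run it, assuming the release candidate binary is built locally (the image tag, version pairing, and paths are illustrative):

```bash
# Seed a data directory with the previous minor version (illustrative tag)
docker run --rm -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:v1.13.0
# Index a few documents, stop the container, then start the release candidate
# on the same database and let it migrate in place:
./meilisearch --db-path ./meili_data/data.ms --experimental-dumpless-upgrade
```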

2. Go to the GitHub interface, in the [`Release` section](https://github.com/meilisearch/meilisearch/releases).

3. Select the already drafted release, or click the `Draft a new release` button to start from a blank one, and fill in the form with the appropriate information.
⚠️ Publish on `main`.

⚙️ The CIs will be triggered to:

- [Upload binaries](https://github.com/meilisearch/meilisearch/actions/workflows/publish-binaries.yml) to the associated GitHub release.
- [Publish the Docker images](https://github.com/meilisearch/meilisearch/actions/workflows/publish-docker-images.yml) (`latest`, `vX`, `vX.Y` and `vX.Y.Z`) to DockerHub; check the "Docker meta" steps in the CI to verify the right tags are created.
- [Publish binaries for Homebrew and APT](https://github.com/meilisearch/meilisearch/actions/workflows/publish-apt-brew-pkg.yml).
- [Move the `latest` git tag to the release commit](https://github.com/meilisearch/meilisearch/actions/workflows/latest-git-tag.yml).
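
Once these CIs are green, a quick sanity check might look like this (placeholder version; assumes the image exposes the `meilisearch` binary on its PATH):

```bash
# Pull the freshly published image and confirm it reports the expected version
docker pull getmeili/meilisearch:vX.Y.Z
docker run --rm getmeili/meilisearch:vX.Y.Z meilisearch --version
```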
### 🔥 How to do a patch release for a hotfix

It happens that some releases ship with impactful bugs in production (e.g. indexation or search issues): we obviously don't wait for the next cycle to fix them; instead, we release a patched version of Meilisearch.

1. Create a new release branch starting from the latest stable Meilisearch release (the `latest` git tag or the corresponding `vX.Y.Z` tag).

```bash
# Ensure you get all the current tags of the repository
git fetch origin --tags --force

# Create the branch
git checkout vX.Y.Z                  # The latest release you want to patch
git checkout -b release-vX.Y.Z+1     # Increase the Z here
git push -u origin release-vX.Y.Z+1
```

2. Change the [version in the `Cargo.toml` file](https://github.com/meilisearch/meilisearch/blob/e9b62aacb38f2c7a777adfda55293d407e0d6254/Cargo.toml#L21). You can use [our automation](https://github.com/meilisearch/meilisearch/actions/workflows/update-cargo-toml-version.yml): click on `Run workflow`, fill in the appropriate version, select the newly created branch `release-vX.Y.Z+1`, and click on `Run workflow` again. A PR updating the version in the `Cargo.toml` and `Cargo.lock` files will be created. A manual alternative is sketched below.
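
If the automation is unavailable, a rough manual equivalent might be (the `sed` pattern is illustrative; replace the placeholder versions):

```bash
# Manual version bump on the release branch (illustrative commands)
git checkout release-vX.Y.Z+1
sed -i 's/^version = "X.Y.Z"$/version = "X.Y.Z+1"/' Cargo.toml
cargo check    # refreshes Cargo.lock with the new version
git commit -am "Update version for the next release (vX.Y.Z+1)"
git push
```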

3. Open and merge the PRs (fixing your bugs): they should point to the `release-vX.Y.Z+1` branch.

4. Go to the GitHub interface, in the [`Release` section](https://github.com/meilisearch/meilisearch/releases), and click on `Draft a new release`.
⚠️⚠️⚠️ Publish on the `release-vX.Y.Z+1` branch, not on `main`!

⚠️ <ins>If doing a patch release that should NOT be the `latest` release</ins>:

- Do NOT check `Set as the latest release` when creating the GitHub release. If you did, quickly interrupt all CIs and delete the GitHub release!
- Once the release is created, you don't have to worry about the Homebrew, APT and Docker CIs: they will not consider this new release as the latest; the CIs are already adapted to this situation.
- However, the [CI updating the `latest` git tag](https://github.com/meilisearch/meilisearch/actions/workflows/latest-git-tag.yml) does not currently handle this situation and will attach the `latest` git tag to the just-created release, which is something we don't want! If you don't manage to stop this CI in time, don't worry: just re-run the [same CI](https://github.com/meilisearch/meilisearch/actions/workflows/latest-git-tag.yml) on the real latest release, and the `latest` git tag will be attached back to the right commit.

5. Bring the new commits back from `release-vX.Y.Z+1` to `main` by merging a PR originating from `release-vX.Y.Z+1` and pointing to `main`.

⚠️ If you encounter any merge conflicts, please do NOT fix them directly on the `release-vX.Y.Z+1` branch. That would bring the changes present in `main` into `release-vX.Y.Z+1`, which would break a potential future patched release.



Instead (see the sketch after this list):

- Create a new branch originating from `release-vX.Y.Z+1`, like `tmp-release-vX.Y.Z+1`
- Create a PR from the `tmp-release-vX.Y.Z+1` branch pointing to `main`
- Fix the git conflicts on this new branch
  - either via the GitHub interface
  - or by pulling the `main` branch into `tmp-release-vX.Y.Z+1` and fixing the conflicts on your machine
- Merge this new PR into `main`
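
A minimal command-line sketch of that flow, using the placeholder branch names from above:

```bash
# Resolve the conflicts on a temporary branch, never on the release branch
git checkout release-vX.Y.Z+1
git checkout -b tmp-release-vX.Y.Z+1
git pull origin main                      # fix the merge conflicts locally here
git push -u origin tmp-release-vX.Y.Z+1
# Then open a PR from tmp-release-vX.Y.Z+1 to main and merge it.
```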

83
documentation/versioning-policy.md
Normal file
@ -0,0 +1,83 @@

# Versioning policy

This page describes the versioning rules Meilisearch will follow once v1.0.0 is released, and how/when we should increase the MAJOR, MINOR, and PATCH parts of a version.

## 🤖 Basic rules

Meilisearch engine releases follow the [SemVer rules](https://semver.org/), including the following basic ones:

> 🔥 Given a version number MAJOR.MINOR.PATCH, increment the:
>
> 1. MAJOR version when you make incompatible API changes
> 2. MINOR version when you add functionality in a backwards compatible
>    manner
> 3. PATCH version when you make backwards compatible bug fixes

**Changes that MAY lead Meilisearch users (developers) to change their code are considered API incompatibilities and will make us increase the MAJOR version of Meilisearch.**

**In other words, if users MAY have to do more than just download the new Meilisearch binary and run it, a new MAJOR is needed.**

Examples of changes that break user code and therefore require increasing the MAJOR:

- Renaming a route or a field in the request/response body.
- Changing the default value of a parameter or a setting.
- Any API behavior change: users' code expects the engine to behave a certain way, but it no longer does.
  Examples:
  - Making a synchronous error asynchronous, or the contrary.
  - `displayableAttributes` now impacts the `/documents` route: users expect to retrieve all the fields, or specific fields, in their code but cannot.
- Changing a final value type.
  Ex: `/stats` now returns floats instead of integers. This can impact strongly typed languages.

⚠️ This guide only applies to the Meilisearch binary. Additional tools like SDKs and Docker images are out of the scope of this guide. However, we will ensure the changelogs are clear enough to inform users of the changes and their impacts.

## ✋ Exceptions related to Meilisearch’s specificities

Meilisearch is a search engine working with an internal database. This means some parts of the project would be really problematic to consider as breaking (and thus leading to an increase of the MAJOR) without slowing down innovation.

Here is the list of exceptions: changes that will not lead to an increase of the MAJOR in a Meilisearch release.

### DB incompatibilities: force using a dump

A breaking DB change leads to a failure when starting Meilisearch: you need to use a dump.

We know this kind of failure, requiring an additional step, is the definition of “breaking” on the user side, but it’s really complicated to consider increasing the MAJOR for this. Indeed, since we don’t want to release a major version every two months but also want to keep innovating, increasing the MINOR is the best solution.

People will sometimes need to use a dump between two MAJOR versions; for instance, this is something [PostgreSQL does](https://www.postgresql.org/support/versioning/) by asking their users to perform some manual actions between two MINOR releases.

### Search relevancy and algorithm improvements

Relevancy is the engine team’s job; we need to improve it every day, like performance. It would be really hard to improve the engine without allowing the team to change the relevancy algorithm. As with breaking DB changes, considering relevancy changes as breaking could really slow down innovation.

So changing the search relevancy (not the API behavior or fields, but the final relevancy result: the cropping algorithm, search algorithm, placeholder behavior, highlight behavior…) is not considered a breaking change. Indeed, changing the relevancy behavior is not supposed to make user code fail, since the final results of Meilisearch are only displayed, no matter which documents are matched.

This kind of change will lead us to increase the MINOR, to let people know about the change and avoid unexpected behavior when pulling the latest version of Meilisearch. Indeed, increasing the MINOR (instead of the PATCH) prevents users from downloading the new version without noticing the changes.

🚨 Any change to relevancy that is related to API usage, and thus may require users to change their code (for instance, changing the default `matchingStrategy` value), is not covered by this section and would lead us to increase the MAJOR.
### New "variant" type addition
|
||||
|
||||
We don't consider breaking to add a new type to an already existing list of variant. For example, adding a new type of `task`, or a new type of error `code`.
|
||||
|
||||
We are aware some strongly typed language code bases could be impacted, and our recommendation is to handle the possibility of having an unknown type when deserializing Meilisearch's response.
|
||||
|
||||

### Human-readability purposes

- Changing the value of `message` or `link` in the error object will only increase the PATCH. Users should not refer to these fields in their code, since `code` and `type` exist in the same object.
- Any change to an error message sent to the terminal will only increase the PATCH. People should not rely on these messages since they are for human debugging.
- Updating the log format will increase the MINOR: logs are meant to be read by humans for debugging, but we are aware some people plug tools on top of them. Since that is not the main purpose of our logs, we don’t want to increase the MAJOR for a log format change. However, we will increase the MINOR to let people know about the change and avoid bad surprises when pulling the latest patched version of Meilisearch.

### Integrated web interface

Any changes made to the integrated web interface are not considered breaking. The interface is an additional tool for test purposes, not for production.

## 📝 About the Meilisearch changelogs

All changes, whether or not they are considered breaking and whether or not they relate to an algorithm change, will be announced in the changelogs.

The level of detail will depend on the impact on users. For instance, giving too many details about really deep technical improvements can lead to confusion on the user side.

## 👀 Some precisions

- Updating a dependency requirement of Meilisearch is NOT considered breaking by the SemVer guide and will lead, in our case, to increasing the MINOR. Indeed, increasing the MINOR (instead of the PATCH) prevents users from downloading the new version without noticing the changes.
  See the [related rule](https://semver.org/#what-should-i-do-if-i-update-my-own-dependencies-without-changing-the-public-api).
- Fixing a CVE (Common Vulnerabilities and Exposures) will not increase the MAJOR; depending on the CVE, it will be a PATCH or a MINOR upgrade.