Compare commits


29 Commits

Author SHA1 Message Date
Louis Dureuil
0fccd0ca1f Merge pull request #5883 from meilisearch/update-to-v1.20
Update to v1.20
2025-09-08 08:50:48 +00:00
Louis Dureuil
226c102bab Update snapshot and upgrade proc 2025-09-08 10:00:44 +02:00
Louis Dureuil
2940bbb75c Update version to v1.20.0 2025-09-08 09:20:25 +02:00
Clémentine
35b24a28aa Merge pull request #5873 from meilisearch/dependabot/github_actions/actions/checkout-5
Bump actions/checkout from 3 to 5
2025-09-03 13:18:51 +00:00
Tamo
0a3ab8e171 Merge pull request #5876 from meilisearch/specify-prometheus-protocol-version
Send the version when returning prometheus metrics
2025-09-02 13:24:36 +00:00
Tamo
b144d9ab2b fix warnings 2025-09-02 14:31:24 +02:00
Tamo
c3cefbc170 send the version when returning prometheus metrics 2025-09-02 12:40:18 +02:00
Clémentine
8e2aeb6739 Merge pull request #5874 from meilisearch/dependabot/github_actions/actions/setup-java-5
Bump actions/setup-java from 4 to 5
2025-09-02 09:11:19 +00:00
dependabot[bot]
9c06545ae3 Bump actions/setup-java from 4 to 5
Bumps [actions/setup-java](https://github.com/actions/setup-java) from 4 to 5.
- [Release notes](https://github.com/actions/setup-java/releases)
- [Commits](https://github.com/actions/setup-java/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-java
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-02 08:23:15 +00:00
dependabot[bot]
e1c859c0f7 Bump actions/checkout from 3 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-02 07:44:35 +00:00
Clémentine
5cad65cca5 Merge pull request #5869 from meilisearch/dependabot/cargo/tracing-subscriber-0.3.20
Bump tracing-subscriber from 0.3.19 to 0.3.20
2025-09-01 14:23:26 +00:00
Tamo
7fe9d07247 Merge pull request #5858 from shreeup/5835DispProgressTrace
Display the progressTrace in real time
2025-09-01 10:21:36 +00:00
dependabot[bot]
026b95afbb Bump tracing-subscriber from 0.3.19 to 0.3.20
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.19 to 0.3.20.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.19...tracing-subscriber-0.3.20)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.20
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-29 20:54:30 +00:00
Clémentine
210da70faf Merge pull request #5856 from arithmeticmean/main
Fix scheduled CI failure
2025-08-28 17:53:59 +00:00
Many the fish
1f0a6e8a44 Merge pull request #5862 from meilisearch/release-v1.19.1
Bring back v1.19.1 to main
2025-08-28 12:57:48 +00:00
Shree
952394710c Merge remote-tracking branch 'origin/main' into 5835DispProgressTrace 2025-08-26 14:03:09 -07:00
Many the fish
0fd66a5317 Merge pull request #5860 from meilisearch/update-version-v1.19.1
Update version for the next release (v1.19.1) in Cargo.toml
2025-08-26 11:43:23 +00:00
Many the fish
cb4dd3b88c Merge pull request #5846 from meilisearch/update-arroy-v0.6.2
Update Arroy v0.6.2
2025-08-26 12:01:06 +02:00
ManyTheFish
0ade376b00 update version tests 2025-08-26 11:57:27 +02:00
ManyTheFish
32785cb2d0 Update version for the next release (v1.19.1) in Cargo.toml 2025-08-26 08:39:16 +00:00
Louis Dureuil
5cf66856ae Merge pull request #5859 from meilisearch/revert-5857-license-detection
Revert "Fix license detection"
2025-08-26 07:53:17 +00:00
Clémentine
7acac2f560 Revert "Fix license detection" 2025-08-26 08:51:07 +02:00
Shree
b68431367f run cargo fmt 2025-08-25 23:47:24 -07:00
Shree
79d3d1606c Display the progressTrace in real time #5835 2025-08-25 23:33:26 -07:00
Louis Dureuil
580bfb06b4 Merge pull request #5857 from meilisearch/license-detection
Fix license detection
2025-08-25 18:28:55 +00:00
curquiza
062c9c6971 Fix links 2025-08-25 19:39:24 +02:00
curquiza
07ed5c57e4 Fix license detection 2025-08-25 19:12:28 +02:00
arithmeticmean
938ef77ee5 Fix scheduled CI failure
Disabled default features on the meilisearch dependency in one crate to
prevent lindera from being pulled in during the scheduled CI build
2025-08-23 19:30:26 +05:30
ManyTheFish
0a86b1e11e Update Arroy v0.6.2
The new version of arroy contains a search optimization when there are few input candidates compared to the number of documents in the database
2025-08-21 09:37:17 +02:00
90 changed files with 491 additions and 1984 deletions

View File

@@ -24,11 +24,6 @@ TBD
- [ ] If not, add the `no db change` label to your PR, and you're good to merge.
- [ ] If yes, add the `db change` label to your PR. You'll receive a message explaining what to do.
### Reminders when adding features
- [ ] Write unit tests using insta
- [ ] Write declarative integration tests in [workloads/tests](https://github.com/meilisearch/meilisearch/tree/main/workloads/test). Specify the routes to call and then call `cargo xtask test workloads/tests/YOUR_TEST.json --update-responses` so that responses are automatically filled.
### Reminders when modifying the API
- [ ] Update the openAPI file with utoipa:

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: benchmarks
timeout-minutes: 180 # 3h
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -60,7 +60,7 @@ jobs:
with:
repo_token: ${{ env.GH_TOKEN }}
- uses: actions/checkout@v3
- uses: actions/checkout@v5
if: success()
with:
fetch-depth: 0 # fetch full history to be able to get main commit sha

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: benchmarks
timeout-minutes: 180 # 3h
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: benchmarks
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -61,7 +61,7 @@ jobs:
with:
repo_token: ${{ env.GH_TOKEN }}
- uses: actions/checkout@v3
- uses: actions/checkout@v5
if: success()
with:
fetch-depth: 0 # fetch full history to be able to get main commit sha

View File

@@ -15,7 +15,7 @@ jobs:
runs-on: benchmarks
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -14,7 +14,7 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -14,7 +14,7 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -14,7 +14,7 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Check db change labels
id: check_labels
env:

View File

@@ -13,7 +13,7 @@ jobs:
ISSUE_TEMPLATE: issue-template.md
GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Download the issue template
run: curl -s https://raw.githubusercontent.com/meilisearch/meilisearch/main/.github/templates/dependency-issue.md > $ISSUE_TEMPLATE
- name: Create issue

View File

@@ -12,7 +12,7 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -10,7 +10,7 @@ jobs:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Check release validity
if: github.event_name == 'release'
run: bash .github/scripts/check-release.sh
@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
needs: check-version
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: rickstaa/action-create-tag@v1
with:
tag: "latest"

View File

@@ -9,7 +9,7 @@ jobs:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Check release validity
run: bash .github/scripts/check-release.sh
@@ -28,7 +28,7 @@ jobs:
- uses: dtolnay/rust-toolchain@1.85
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Build deb package
run: cargo deb -p meilisearch -o target/debian/meilisearch.deb
- name: Upload debian pkg to release

View File

@@ -19,7 +19,7 @@ jobs:
permissions:
id-token: write # This is needed to use Cosign in keyless mode
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
# If we are running a cron or manual job ('schedule' or 'workflow_dispatch' event), it means we are publishing the `nightly` tag, so not considered stable.
# If we have pushed a tag, and the tag has the v<number>.<number>.<number> format, it means we are publishing an official release, so considered stable.

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
# No need to check the version for dry run (cron)
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
# Check if the tag has the v<number>.<number>.<number> format.
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
@@ -40,7 +40,7 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
@@ -74,7 +74,7 @@ jobs:
artifact_name: meilisearch.exe
asset_name: meilisearch-windows-amd64.exe
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
- name: Build
run: cargo build --release --locked
@@ -99,7 +99,7 @@ jobs:
asset_name: meilisearch-macos-apple-silicon
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v5
- name: Installing Rust toolchain
uses: dtolnay/rust-toolchain@1.85
with:
@@ -136,7 +136,7 @@ jobs:
asset_name: meilisearch-linux-aarch64
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update -y && apt upgrade -y
@@ -190,7 +190,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Setup Rust
uses: actions-rs/toolchain@v1
with:

View File

@@ -22,7 +22,7 @@ jobs:
outputs:
docker-image: ${{ steps.define-image.outputs.docker-image }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Define the Docker image we need to use
id: define-image
run: |
@@ -46,7 +46,7 @@ jobs:
MEILISEARCH_VERSION: ${{ needs.define-docker-image.outputs.docker-image }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-dotnet
- name: Setup .NET Core
@@ -75,7 +75,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-dart
- uses: dart-lang/setup-dart@v1
@@ -103,7 +103,7 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: stable
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-go
- name: Get dependencies
@@ -129,11 +129,11 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-java
- name: Set up Java
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
java-version: 8
distribution: 'zulu'
@@ -156,7 +156,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-js
- name: Setup node
@@ -191,7 +191,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-php
- name: Install PHP
@@ -220,7 +220,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-python
- name: Set up Python
@@ -245,7 +245,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-ruby
- name: Set up Ruby 3
@@ -270,7 +270,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-rust
- name: Build
@@ -291,7 +291,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-swift
- name: Run tests
@@ -314,7 +314,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-js-plugins
- name: Setup node
@@ -347,7 +347,7 @@ jobs:
env:
RAILS_VERSION: '7.0'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-rails
- name: Install SQLite dependencies
@@ -377,7 +377,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-symfony
- name: Install PHP

View File

@@ -21,7 +21,7 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
@@ -49,7 +49,7 @@ jobs:
matrix:
os: [macos-13, windows-2022]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.8.0
- uses: dtolnay/rust-toolchain@1.85
@@ -72,7 +72,7 @@ jobs:
image: ubuntu:22.04
if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update
@@ -91,7 +91,7 @@ jobs:
env:
MEILI_TEST_OLLAMA_SERVER: "http://localhost:11434"
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install Ollama
run: |
curl -fsSL https://ollama.com/install.sh | sudo -E sh
@@ -124,7 +124,7 @@ jobs:
image: ubuntu:22.04
if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update
@@ -148,7 +148,7 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
@@ -166,7 +166,7 @@ jobs:
name: Run Clippy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal
@@ -183,7 +183,7 @@ jobs:
name: Run Rustfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -17,7 +17,7 @@ jobs:
name: Update version in Cargo.toml
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.85
with:
profile: minimal

View File

@@ -124,7 +124,6 @@ They are JSON files with the following structure (comments are not actually supp
{
// Name of the workload. Must be unique to the workload, as it will be used to group results on the dashboard.
"name": "hackernews.ndjson_1M,no-threads",
"type": "bench",
// Number of consecutive runs of the commands that should be performed.
// Each run uses a fresh instance of Meilisearch and a fresh database.
// Each run produces its own report file.

Cargo.lock (generated, 124 changed lines)
View File

@@ -350,21 +350,6 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78200ac3468a57d333cd0ea5dd398e25111194dcacd49208afca95c629a6311d"
[[package]]
name = "android-tzdata"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0"
[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
"libc",
]
[[package]]
name = "anes"
version = "0.1.6"
@@ -459,9 +444,9 @@ checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "arroy"
version = "0.6.1"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08e6111f351d004bd13e95ab540721272136fd3218b39d3ec95a2ea1c4e6a0a6"
checksum = "733ce4c7a5250d770985c56466fac41238ffdaec0502bee64a4289e300164c5e"
dependencies = [
"bytemuck",
"byteorder",
@@ -595,7 +580,7 @@ source = "git+https://github.com/meilisearch/bbqueue#cbb87cc707b5af415ef203bdaf2
[[package]]
name = "benchmarks"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"bumpalo",
@@ -785,7 +770,7 @@ dependencies = [
[[package]]
name = "build-info"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"time",
@@ -1121,20 +1106,6 @@ dependencies = [
"whatlang",
]
[[package]]
name = "chrono"
version = "0.4.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c469d952047f47f91b68d1cba3f10d63c11d73e4636f24f08daf0278abf01c4d"
dependencies = [
"android-tzdata",
"iana-time-zone",
"js-sys",
"num-traits",
"wasm-bindgen",
"windows-link",
]
[[package]]
name = "ciborium"
version = "0.2.2"
@@ -1803,7 +1774,7 @@ dependencies = [
[[package]]
name = "dump"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"big_s",
@@ -2035,7 +2006,7 @@ checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "file-store"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"tempfile",
"thiserror 2.0.12",
@@ -2057,7 +2028,7 @@ dependencies = [
[[package]]
name = "filter-parser"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"insta",
"levenshtein_automata",
@@ -2079,7 +2050,7 @@ dependencies = [
[[package]]
name = "flatten-serde-json"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"criterion",
"serde_json",
@@ -2224,7 +2195,7 @@ dependencies = [
[[package]]
name = "fuzzers"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"arbitrary",
"bumpalo",
@@ -2880,30 +2851,6 @@ dependencies = [
"tracing",
]
[[package]]
name = "iana-time-zone"
version = "0.1.63"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0c919e5debc312ad217002b8048a17b7d83f80703865bbfcfebb0458b0b27d8"
dependencies = [
"android_system_properties",
"core-foundation-sys",
"iana-time-zone-haiku",
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
]
[[package]]
name = "iana-time-zone-haiku"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
dependencies = [
"cc",
]
[[package]]
name = "icu_collections"
version = "2.0.0"
@@ -3048,7 +2995,7 @@ dependencies = [
[[package]]
name = "index-scheduler"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"backoff",
@@ -3284,7 +3231,7 @@ dependencies = [
[[package]]
name = "json-depth-checker"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"criterion",
"serde_json",
@@ -3778,7 +3725,7 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"
[[package]]
name = "meili-snap"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"insta",
"md5",
@@ -3789,7 +3736,7 @@ dependencies = [
[[package]]
name = "meilisearch"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"actix-cors",
"actix-http",
@@ -3886,7 +3833,7 @@ dependencies = [
[[package]]
name = "meilisearch-auth"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"base64 0.22.1",
"enum-iterator",
@@ -3905,7 +3852,7 @@ dependencies = [
[[package]]
name = "meilisearch-types"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"actix-web",
"anyhow",
@@ -3940,7 +3887,7 @@ dependencies = [
[[package]]
name = "meilitool"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"clap",
@@ -3974,7 +3921,7 @@ dependencies = [
[[package]]
name = "milli"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"allocator-api2 0.3.0",
"arroy",
@@ -4182,12 +4129,11 @@ dependencies = [
[[package]]
name = "nu-ansi-term"
version = "0.46.0"
version = "0.50.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84"
checksum = "d4a28e057d01f97e61255210fcff094d74ed0466038633e95017f5beb68e4399"
dependencies = [
"overload",
"winapi",
"windows-sys 0.52.0",
]
[[package]]
@@ -4444,12 +4390,6 @@ dependencies = [
"num-traits",
]
[[package]]
name = "overload"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]]
name = "owo-colors"
version = "4.2.1"
@@ -4538,7 +4478,7 @@ checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e"
[[package]]
name = "permissive-json-pointer"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"big_s",
"serde_json",
@@ -5735,20 +5675,6 @@ name = "similar"
version = "2.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbbb5d9659141646ae647b42fe094daf6c6192d1620870b449d9557f748b2daa"
dependencies = [
"bstr",
"unicode-segmentation",
]
[[package]]
name = "similar-asserts"
version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b441962c817e33508847a22bd82f03a30cff43642dc2fae8b050566121eb9a"
dependencies = [
"console",
"similar",
]
[[package]]
name = "simple_asn1"
@@ -6474,9 +6400,9 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.19"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8189decb5ac0fa7bc8b96b7cb9b2701d60d48805aca84a238004d665fcc4008"
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
dependencies = [
"nu-ansi-term",
"serde",
@@ -7346,12 +7272,11 @@ dependencies = [
[[package]]
name = "xtask"
version = "1.19.0"
version = "1.20.0"
dependencies = [
"anyhow",
"build-info",
"cargo_metadata",
"chrono",
"clap",
"futures-core",
"futures-util",
@@ -7359,7 +7284,6 @@ dependencies = [
"serde",
"serde_json",
"sha2",
"similar-asserts",
"sysinfo",
"time",
"tokio",

View File

@@ -23,7 +23,7 @@ members = [
]
[workspace.package]
version = "1.19.0"
version = "1.20.0"
authors = [
"Quentin de Quelen <quentin@dequelen.me>",
"Clément Renault <clement@meilisearch.com>",

View File

@@ -284,6 +284,14 @@ impl BatchQueue {
if Some(batch_id) == processing.batch.as_ref().map(|batch| batch.uid) {
let mut batch = processing.batch.as_ref().unwrap().to_batch();
batch.progress = processing.get_progress_view();
// Add progress_trace from the current progress state
if let Some(progress) = &processing.progress {
batch.stats.progress_trace = progress
.accumulated_durations()
.into_iter()
.map(|(k, v)| (k, v.into()))
.collect();
}
Ok(batch)
} else {
self.get_batch(rtxn, batch_id)
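For context, here is a minimal standalone sketch of the duration-to-string conversion this hunk performs, assuming `accumulated_durations()` yields step names paired with their accumulated `Duration`s (the step name, formatting, and map type below are illustrative, not the exact scheduler types):

```rust
use std::collections::BTreeMap;
use std::time::Duration;

// Hypothetical stand-in for the scheduler's progress state.
fn accumulated_durations() -> Vec<(String, Duration)> {
    vec![("processing tasks".to_string(), Duration::from_millis(1500))]
}

fn main() {
    // Convert each accumulated duration into the serializable form
    // surfaced as `progressTrace` in the batch stats.
    let progress_trace: BTreeMap<String, String> = accumulated_durations()
        .into_iter()
        .map(|(step, d)| (step, format!("{d:.2?}")))
        .collect();
    assert_eq!(progress_trace["processing tasks"], "1.50s");
}
```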

View File

@@ -104,6 +104,15 @@ fn query_batches_simple() {
batches[0].started_at = OffsetDateTime::UNIX_EPOCH;
assert!(batches[0].enqueued_at.is_some());
batches[0].enqueued_at = None;
if !batches[0].stats.progress_trace.is_empty() {
batches[0].stats.progress_trace.clear();
batches[0]
.stats
.progress_trace
.insert("processing tasks".to_string(), "deterministic_duration".into());
}
// Insta cannot snapshot our batches because the batch stats contains an enum as key: https://github.com/mitsuhiko/insta/issues/689
let batch = serde_json::to_string_pretty(&batches[0]).unwrap();
snapshot!(batch, @r###"
@@ -122,6 +131,9 @@ fn query_batches_simple() {
},
"indexUids": {
"catto": 1
},
"progressTrace": {
"processing tasks": "deterministic_duration"
}
},
"startedAt": "1970-01-01T00:00:00Z",

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `doggo` already exists.", error_code: "index_already_exists", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_already_exists" }, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
@@ -57,7 +57,7 @@ girafo: { number_of_documents: 0, field_distribution: {} }
[timestamp] [4,]
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.19.0"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.20.0"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
1 {uid: 1, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
2 {uid: 2, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 2 of type `indexCreation` that cannot be batched with any other task.", }
3 {uid: 3, details: {"primaryKey":"bone"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `indexCreation` that cannot be batched with any other task.", }

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, status: enqueued, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, status: enqueued, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, batch_uid: 0, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:
@@ -37,7 +37,7 @@ catto [1,]
[timestamp] [0,]
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.19.0"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.20.0"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, batch_uid: 0, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
----------------------------------------------------------------------
@@ -40,7 +40,7 @@ doggo [2,]
[timestamp] [0,]
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.19.0"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.20.0"}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test_failure.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { from: (1, 12, 0), to: (1, 19, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { from: (1, 12, 0), to: (1, 20, 0) }, kind: UpgradeDatabase { from: (1, 12, 0) }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
3 {uid: 3, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
@@ -43,7 +43,7 @@ doggo [2,3,]
[timestamp] [0,]
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.19.0"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
0 {uid: 0, details: {"upgradeFrom":"v1.12.0","upgradeTo":"v1.20.0"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"upgradeDatabase":1},"indexUids":{}}, stop reason: "stopped after the last task of type `upgradeDatabase` because they cannot be batched with tasks of any other type.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -43,6 +43,7 @@ pub fn upgrade_index_scheduler(
(1, 17, _) => 0,
(1, 18, _) => 0,
(1, 19, _) => 0,
(1, 20, _) => 0,
(major, minor, patch) => {
if major > current_major
|| (major == current_major && minor > current_minor)

View File

@@ -96,7 +96,7 @@ serde_urlencoded = "0.7.1"
termcolor = "1.4.1"
url = { version = "2.5.4", features = ["serde"] }
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.19", features = ["json"] }
tracing-subscriber = { version = "0.3.20", features = ["json"] }
tracing-trace = { version = "0.1.0", path = "../tracing-trace" }
tracing-actix-web = "0.7.18"
build-info = { version = "1.7.0", path = "../build-info" }

View File

@@ -1,4 +1,3 @@
use actix_web::http::header;
use actix_web::web::{self, Data};
use actix_web::HttpResponse;
use index_scheduler::{IndexScheduler, Query};
@@ -181,5 +180,12 @@ pub async fn get_metrics(
let response = String::from_utf8(buffer).expect("Failed to convert bytes to string");
Ok(HttpResponse::Ok().insert_header(header::ContentType(mime::TEXT_PLAIN)).body(response))
// We cannot specify the version with ContentType(TEXT_PLAIN_UTF_8) so we have to write everything by hand :(
// see the following for what should be returned: https://prometheus.io/docs/instrumenting/content_negotiation/#content-type-response
let content_type = ("content-type", "text/plain; version=0.0.4; charset=utf-8");
Ok(HttpResponse::Ok()
// .insert_header(header::ContentType(mime::TEXT_PLAIN_UTF_8))
.insert_header(content_type)
.body(response))
}
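Since the handler now sets the Prometheus exposition-format version explicitly, a quick external sanity check becomes possible. A sketch, assuming a local instance with the metrics route enabled and reachable without extra auth (otherwise add the usual `Authorization` header); `reqwest` here is illustrative, not part of the change:

```rust
// Sketch: check that /metrics advertises the text exposition format
// version that Prometheus uses for content negotiation.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let resp = reqwest::blocking::get("http://127.0.0.1:7700/metrics")?;
    let content_type = resp
        .headers()
        .get("content-type")
        .and_then(|v| v.to_str().ok())
        .unwrap_or_default()
        .to_owned();
    // Expected: "text/plain; version=0.0.4; charset=utf-8"
    assert!(content_type.contains("version=0.0.4"));
    println!("content-type: {content_type}");
    Ok(())
}
```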

View File

@@ -43,7 +43,7 @@ async fn version_too_old() {
std::fs::write(db_path.join("VERSION"), "1.11.9999").unwrap();
let options = Opt { experimental_dumpless_upgrade: true, ..default_settings };
let err = Server::new_with_options(options).await.map(|_| ()).unwrap_err();
snapshot!(err, @"Database version 1.11.9999 is too old for the experimental dumpless upgrade feature. Please generate a dump using the v1.11.9999 and import it in the v1.19.0");
snapshot!(err, @"Database version 1.11.9999 is too old for the experimental dumpless upgrade feature. Please generate a dump using the v1.11.9999 and import it in the v1.20.0");
}
#[actix_rt::test]
@@ -58,7 +58,7 @@ async fn version_requires_downgrade() {
std::fs::write(db_path.join("VERSION"), format!("{major}.{minor}.{patch}")).unwrap();
let options = Opt { experimental_dumpless_upgrade: true, ..default_settings };
let err = Server::new_with_options(options).await.map(|_| ()).unwrap_err();
snapshot!(err, @"Database version 1.19.1 is higher than the Meilisearch version 1.19.0. Downgrade is not supported");
snapshot!(err, @"Database version 1.20.1 is higher than the Meilisearch version 1.20.0. Downgrade is not supported");
}
#[actix_rt::test]

View File

@@ -8,7 +8,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"progress": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"stats": {
"totalNbTasks": 1,

View File

@@ -8,7 +8,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"progress": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"stats": {
"totalNbTasks": 1,

View File

@@ -8,7 +8,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"progress": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"stats": {
"totalNbTasks": 1,

View File

@@ -12,7 +12,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"error": null,
"duration": "[duration]",

View File

@@ -12,7 +12,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"error": null,
"duration": "[duration]",

View File

@@ -12,7 +12,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"error": null,
"duration": "[duration]",

View File

@@ -8,7 +8,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"progress": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"stats": {
"totalNbTasks": 1,

View File

@@ -12,7 +12,7 @@ source: crates/meilisearch/tests/upgrade/v1_12/v1_12_0.rs
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "v1.19.0"
"upgradeTo": "v1.20.0"
},
"error": null,
"duration": "[duration]",

View File

@@ -2184,6 +2184,7 @@ async fn last_error_stats() {
".progress" => "[ignored]",
".stats.embedderRequests.total" => "[ignored]",
".stats.embedderRequests.failed" => "[ignored]",
".stats.progressTrace" => "[ignored]",
".startedAt" => "[ignored]"
}), @r#"
{
@@ -2204,6 +2205,7 @@ async fn last_error_stats() {
"indexUids": {
"doggo": 1
},
"progressTrace": "[ignored]",
"embedderRequests": {
"total": "[ignored]",
"failed": "[ignored]",

View File

@@ -87,7 +87,7 @@ rhai = { version = "1.22.2", features = [
"no_time",
"sync",
] }
arroy = "0.6.1"
arroy = "0.6.2"
rand = "0.8.5"
tracing = "0.1.41"
ureq = { version = "2.12.1", features = ["json"] }

View File

@@ -65,6 +65,7 @@ const fn start(from: (u32, u32, u32)) -> Option<usize> {
(1, 17, _) => function_index!(7),
(1, 18, _) => function_index!(7),
(1, 19, _) => function_index!(7),
(1, 20, _) => function_index!(7),
// We deliberately don't add a placeholder with (VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH) here to force manually
// considering dumpless upgrade.
(_major, _minor, _patch) => return None,
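The pattern in this hunk: each released version maps to the index of the first upgrade function it still needs, and anything unrecognized returns `None`, so shipping a new release forces an explicit edit to the match. A minimal sketch of the same shape (the indices and versions are illustrative, not the real upgrade table):

```rust
// Sketch of the version-dispatch pattern; illustrative values only.
const fn start(from: (u32, u32, u32)) -> Option<usize> {
    Some(match from {
        (1, 19, _) => 7,
        (1, 20, _) => 7, // a new minor release reuses the same entry point
        // Deliberately no catch-all for the current version: adding a
        // release must be a conscious edit here.
        _ => return None,
    })
}

fn main() {
    assert_eq!(start((1, 20, 0)), Some(7));
    assert_eq!(start((2, 0, 0)), None);
}
```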

View File

@@ -5,7 +5,7 @@ edition = "2021"
publish = false
[dependencies]
meilisearch = { path = "../meilisearch" }
meilisearch = { path = "../meilisearch" , default-features = false}
serde_json = "1.0"
clap = { version = "4.5.40", features = ["derive"] }
anyhow = "1.0.98"

View File

@@ -12,7 +12,7 @@ serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
tracing = "0.1.41"
tracing-error = "0.2.1"
tracing-subscriber = "0.3.19"
tracing-subscriber = "0.3.20"
byte-unit = { version = "5.1.6", default-features = false, features = [
"std",
"byte",

View File

@@ -39,8 +39,6 @@ tokio = { version = "1.45.1", features = [
"signal",
] }
tracing = "0.1.41"
tracing-subscriber = "0.3.19"
tracing-subscriber = "0.3.20"
tracing-trace = { version = "0.1.0", path = "../tracing-trace" }
uuid = { version = "1.17.0", features = ["v7", "serde"] }
similar-asserts = "1.7.0"
chrono = "0.4"

View File

@@ -3,22 +3,21 @@ use std::io::{Read as _, Seek as _, Write as _};
use anyhow::{bail, Context};
use futures_util::TryStreamExt as _;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
use sha2::Digest;
use super::client::Client;
#[derive(Serialize, Deserialize, Clone, Debug)]
#[derive(Deserialize, Clone)]
pub struct Asset {
pub local_location: Option<String>,
pub remote_location: Option<String>,
#[serde(default, skip_serializing_if = "AssetFormat::is_default")]
#[serde(default)]
pub format: AssetFormat,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub sha256: Option<String>,
}
#[derive(Serialize, Deserialize, Default, Copy, Clone, Debug)]
#[derive(Deserialize, Default, Copy, Clone)]
pub enum AssetFormat {
#[default]
Auto,
@@ -28,10 +27,6 @@ pub enum AssetFormat {
}
impl AssetFormat {
fn is_default(&self) -> bool {
matches!(self, AssetFormat::Auto)
}
pub fn to_content_type(self, filename: &str) -> &'static str {
match self {
AssetFormat::Auto => Self::auto_detect(filename).to_content_type(filename),
@@ -171,14 +166,7 @@ fn check_sha256(name: &str, asset: &Asset, mut file: std::fs::File) -> anyhow::R
}
}
None => {
let msg = match name.starts_with("meilisearch-v") {
true => "Please add it to xtask/src/test/versions.rs",
false => "Please add it to workload file",
};
tracing::warn!(
sha256 = file_hash,
"Skipping hash for asset {name} that doesn't have one. {msg}"
);
tracing::warn!(sha256 = file_hash, "Skipping hash for asset {name} that doesn't have one. Please add it to workload file");
true
}
})

View File

@@ -1,5 +1,5 @@
use anyhow::Context;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
#[derive(Debug, Clone)]
pub struct Client {
@@ -61,7 +61,7 @@ impl Client {
}
}
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
pub enum Method {
Get,

View File

@@ -0,0 +1,194 @@
use std::collections::BTreeMap;
use std::fmt::Display;
use std::io::Read as _;
use anyhow::{bail, Context as _};
use serde::Deserialize;
use super::assets::{fetch_asset, Asset};
use super::client::{Client, Method};
#[derive(Clone, Deserialize)]
pub struct Command {
pub route: String,
pub method: Method,
#[serde(default)]
pub body: Body,
#[serde(default)]
pub synchronous: SyncMode,
}
#[derive(Default, Clone, Deserialize)]
#[serde(untagged)]
pub enum Body {
Inline {
inline: serde_json::Value,
},
Asset {
asset: String,
},
#[default]
Empty,
}
impl Body {
pub fn get(
self,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<Option<(Vec<u8>, &'static str)>> {
Ok(match self {
Body::Inline { inline: body } => Some((
serde_json::to_vec(&body)
.context("serializing to bytes")
.context("while getting inline body")?,
"application/json",
)),
Body::Asset { asset: name } => Some({
let context = || format!("while getting body from asset '{name}'");
let (mut file, format) =
fetch_asset(&name, assets, asset_folder).with_context(context)?;
let mut buf = Vec::new();
file.read_to_end(&mut buf).with_context(context)?;
(buf, format.to_content_type(&name))
}),
Body::Empty => None,
})
}
}
impl Display for Command {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?} {} ({:?})", self.method, self.route, self.synchronous)
}
}
#[derive(Default, Debug, Clone, Copy, Deserialize)]
pub enum SyncMode {
DontWait,
#[default]
WaitForResponse,
WaitForTask,
}
pub async fn run_batch(
client: &Client,
batch: &[Command],
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
let [.., last] = batch else { return Ok(()) };
let sync = last.synchronous;
let mut tasks = tokio::task::JoinSet::new();
for command in batch {
// FIXME: you probably don't want to copy assets every time here
tasks.spawn({
let client = client.clone();
let command = command.clone();
let assets = assets.clone();
let asset_folder = asset_folder.to_owned();
async move { run(client, command, &assets, &asset_folder).await }
});
}
while let Some(result) = tasks.join_next().await {
result
.context("panicked while executing command")?
.context("error while executing command")?;
}
match sync {
SyncMode::DontWait => {}
SyncMode::WaitForResponse => {}
SyncMode::WaitForTask => wait_for_tasks(client).await?,
}
Ok(())
}
async fn wait_for_tasks(client: &Client) -> anyhow::Result<()> {
loop {
let response = client
.get("tasks?statuses=enqueued,processing")
.send()
.await
.context("could not wait for tasks")?;
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response to JSON")
.context("could not wait for tasks")?;
match response.get("total") {
Some(serde_json::Value::Number(number)) => {
let number = number.as_u64().with_context(|| {
format!("waiting for tasks: could not parse 'total' as integer, got {}", number)
})?;
if number == 0 {
break;
} else {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
}
Some(thing_else) => {
bail!(format!(
"waiting for tasks: could not parse 'total' as a number, got '{thing_else}'"
))
}
None => {
bail!(format!(
"waiting for tasks: expected response to contain 'total', got '{response}'"
))
}
}
}
Ok(())
}
#[tracing::instrument(skip(client, command, assets, asset_folder), fields(command = %command))]
pub async fn run(
client: Client,
mut command: Command,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
// mem::take the body here to leave an empty body in its place, so that command is not partially moved out
let body = std::mem::take(&mut command.body)
.get(assets, asset_folder)
.with_context(|| format!("while getting body for command {command}"))?;
let request = client.request(command.method.into(), &command.route);
let request = if let Some((body, content_type)) = body {
request.body(body).header(reqwest::header::CONTENT_TYPE, content_type)
} else {
request
};
let response =
request.send().await.with_context(|| format!("error sending command: {}", command))?;
let code = response.status();
if code.is_client_error() {
tracing::error!(%command, %code, "error in workload file");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing error in workload file when sending command")?;
bail!("error in workload file: server responded with error code {code} and '{response}'")
} else if code.is_server_error() {
tracing::error!(%command, %code, "server error");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing server error when sending command")?;
bail!("server error: server responded with error code {code} and '{response}'")
}
Ok(())
}
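For orientation, a self-contained sketch of how one command from a workload file deserializes into the types defined above; the route, asset name, and JSON values are illustrative, while the serde attributes mirror the ones in this file:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
enum Method {
    Get,
    Post,
}

#[derive(Debug, Default, Deserialize)]
#[serde(untagged)]
enum Body {
    Inline { inline: serde_json::Value },
    Asset { asset: String },
    #[default]
    Empty,
}

#[derive(Debug, Default, Deserialize)]
enum SyncMode {
    DontWait,
    #[default]
    WaitForResponse,
    WaitForTask,
}

#[derive(Debug, Deserialize)]
struct Command {
    route: String,
    method: Method,
    #[serde(default)]
    body: Body,
    #[serde(default)]
    synchronous: SyncMode,
}

fn main() -> anyhow::Result<()> {
    // An illustrative workload entry: POST an asset, then wait for
    // the task queue to drain before the next batch runs.
    let json = r#"{
        "route": "indexes/movies/documents",
        "method": "POST",
        "body": { "asset": "movies.json" },
        "synchronous": "WaitForTask"
    }"#;
    let command: Command = serde_json::from_str(json)?;
    println!("{command:?}");
    Ok(())
}
```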

View File

@@ -7,9 +7,9 @@ use tokio::task::AbortHandle;
use tracing_trace::processor::span_stats::CallStats;
use uuid::Uuid;
use super::client::Client;
use super::env_info;
use super::workload::BenchWorkload;
use crate::common::client::Client;
use super::workload::Workload;
#[derive(Debug, Clone)]
pub enum DashboardClient {
@@ -89,7 +89,7 @@ impl DashboardClient {
pub async fn create_workload(
&self,
invocation_uuid: Uuid,
workload: &BenchWorkload,
workload: &Workload,
) -> anyhow::Result<Uuid> {
let Self::Client(dashboard_client) = self else { return Ok(Uuid::now_v7()) };

View File

@@ -1,18 +1,18 @@
use std::collections::{BTreeMap, HashMap};
use std::path::Path;
use std::collections::BTreeMap;
use std::time::Duration;
use anyhow::{bail, Context as _};
use tokio::process::Command as TokioCommand;
use tokio::process::Command;
use tokio::time;
use crate::common::client::Client;
use crate::common::command::{health_command, run as run_command};
use super::assets::Asset;
use super::client::Client;
use super::workload::Workload;
pub async fn kill_meili(mut meilisearch: tokio::process::Child) {
pub async fn kill(mut meilisearch: tokio::process::Child) {
let Some(id) = meilisearch.id() else { return };
match TokioCommand::new("kill").args(["--signal=TERM", &id.to_string()]).spawn() {
match Command::new("kill").args(["--signal=TERM", &id.to_string()]).spawn() {
Ok(mut cmd) => {
let Err(error) = cmd.wait().await else { return };
tracing::warn!(
@@ -49,8 +49,8 @@ pub async fn kill_meili(mut meilisearch: tokio::process::Child) {
}
#[tracing::instrument]
async fn build() -> anyhow::Result<()> {
let mut command = TokioCommand::new("cargo");
pub async fn build() -> anyhow::Result<()> {
let mut command = Command::new("cargo");
command.arg("build").arg("--release").arg("-p").arg("meilisearch");
command.kill_on_drop(true);
@@ -64,61 +64,29 @@ async fn build() -> anyhow::Result<()> {
Ok(())
}
#[tracing::instrument(skip(client, master_key), fields(workload = _workload))]
pub async fn start_meili(
#[tracing::instrument(skip(client, master_key, workload), fields(workload = workload.name))]
pub async fn start(
client: &Client,
master_key: Option<&str>,
extra_cli_args: &[String],
_workload: &str,
binary_path: Option<&Path>,
workload: &Workload,
asset_folder: &str,
mut command: Command,
) -> anyhow::Result<tokio::process::Child> {
let mut command = match binary_path {
Some(binary_path) => tokio::process::Command::new(binary_path),
None => {
build().await?;
let mut command = tokio::process::Command::new("cargo");
command
.arg("run")
.arg("--release")
.arg("-p")
.arg("meilisearch")
.arg("--bin")
.arg("meilisearch")
.arg("--");
command
}
};
command.arg("--db-path").arg("./_xtask_benchmark.ms");
if let Some(master_key) = master_key {
command.arg("--master-key").arg(master_key);
}
command.arg("--experimental-enable-logs-route");
for extra_arg in extra_cli_args.iter() {
for extra_arg in workload.extra_cli_args.iter() {
command.arg(extra_arg);
}
command.kill_on_drop(true);
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
if let Some(binary_path) = binary_path {
let mut perms = tokio::fs::metadata(binary_path)
.await
.with_context(|| format!("could not get metadata for {binary_path:?}"))?
.permissions();
perms.set_mode(perms.mode() | 0o111);
tokio::fs::set_permissions(binary_path, perms)
.await
.with_context(|| format!("could not set permissions for {binary_path:?}"))?;
}
}
let mut meilisearch = command.spawn().context("Error starting Meilisearch")?;
wait_for_health(client, &mut meilisearch).await?;
wait_for_health(client, &mut meilisearch, &workload.assets, asset_folder).await?;
Ok(meilisearch)
}
@@ -126,11 +94,11 @@ pub async fn start_meili(
async fn wait_for_health(
client: &Client,
meilisearch: &mut tokio::process::Child,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
for i in 0..100 {
let res =
run_command(client, &health_command(), &BTreeMap::new(), HashMap::new(), "", false)
.await;
let res = super::command::run(client.clone(), health_command(), assets, asset_folder).await;
if res.is_ok() {
// check that this is actually the current Meilisearch instance that answered us
if let Some(exit_code) =
@@ -154,6 +122,15 @@ async fn wait_for_health(
bail!("meilisearch is not responding")
}
pub async fn delete_db() {
let _ = tokio::fs::remove_dir_all("./_xtask_benchmark.ms").await;
fn health_command() -> super::command::Command {
super::command::Command {
route: "/health".into(),
method: super::client::Method::Get,
body: Default::default(),
synchronous: super::command::SyncMode::WaitForResponse,
}
}
pub fn delete_db() {
let _ = std::fs::remove_dir_all("./_xtask_benchmark.ms");
}

View File

@@ -1,22 +1,38 @@
mod assets;
mod client;
mod command;
mod dashboard;
mod env_info;
mod meili_process;
mod workload;
use crate::common::args::CommonArgs;
use crate::common::logs::setup_logs;
use crate::common::workload::Workload;
use std::{path::PathBuf, sync::Arc};
use std::io::LineWriter;
use std::path::PathBuf;
use anyhow::{bail, Context};
use anyhow::Context;
use clap::Parser;
use tracing_subscriber::fmt::format::FmtSpan;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::Layer;
use crate::common::client::Client;
pub use workload::BenchWorkload;
use self::client::Client;
use self::workload::Workload;
pub fn default_http_addr() -> String {
"127.0.0.1:7700".to_string()
}
pub fn default_report_folder() -> String {
"./bench/reports/".into()
}
pub fn default_asset_folder() -> String {
"./bench/assets/".into()
}
pub fn default_log_filter() -> String {
"info".into()
}
pub fn default_dashboard_url() -> String {
"http://localhost:9001".into()
}
@@ -24,13 +40,12 @@ pub fn default_dashboard_url() -> String {
/// Run benchmarks from a workload
#[derive(Parser, Debug)]
pub struct BenchDeriveArgs {
/// Common arguments shared with other commands
#[command(flatten)]
common: CommonArgs,
/// Meilisearch master keys
#[arg(long)]
pub master_key: Option<String>,
/// Filename of the workload file, pass multiple filenames
/// to run multiple workloads in the specified order.
///
/// Each workload run will get its own report file.
#[arg(value_name = "WORKLOAD_FILE", last = false)]
workload_file: Vec<PathBuf>,
/// URL of the dashboard.
#[arg(long, default_value_t = default_dashboard_url())]
@@ -44,14 +59,34 @@ pub struct BenchDeriveArgs {
#[arg(long, default_value_t = default_report_folder())]
report_folder: String,
/// Directory to store the remote assets.
#[arg(long, default_value_t = default_asset_folder())]
asset_folder: String,
/// Log directives
#[arg(short, long, default_value_t = default_log_filter())]
log_filter: String,
/// Benchmark dashboard API key
#[arg(long)]
api_key: Option<String>,
/// Meilisearch master keys
#[arg(long)]
master_key: Option<String>,
/// Authentication bearer for fetching assets
#[arg(long)]
assets_key: Option<String>,
/// Reason for the benchmark invocation
#[arg(short, long)]
reason: Option<String>,
/// The maximum time in seconds we allow for fetching the task queue before timing out.
#[arg(long, default_value_t = 60)]
tasks_queue_timeout_secs: u64,
/// The path to the binary to run.
///
/// If unspecified, runs `cargo run` after building Meilisearch with `cargo build`.
@@ -60,7 +95,17 @@ pub struct BenchDeriveArgs {
}
pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
setup_logs(&args.common.log_filter)?;
// setup logs
let filter: tracing_subscriber::filter::Targets =
args.log_filter.parse().context("invalid --log-filter")?;
let subscriber = tracing_subscriber::registry().with(
tracing_subscriber::fmt::layer()
.with_writer(|| LineWriter::new(std::io::stderr()))
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(filter),
);
tracing::subscriber::set_global_default(subscriber).context("could not setup logging")?;
// fetch environment and build info
let env = env_info::Environment::generate_from_current_config();
@@ -71,11 +116,8 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
let _scope = rt.enter();
// setup clients
let assets_client = Client::new(
None,
args.common.assets_key.as_deref(),
Some(std::time::Duration::from_secs(3600)), // 1h
)?;
let assets_client =
Client::new(None, args.assets_key.as_deref(), Some(std::time::Duration::from_secs(3600)))?; // 1h
let dashboard_client = if args.no_dashboard {
dashboard::DashboardClient::new_dry()
@@ -92,11 +134,11 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
None,
)?;
let meili_client = Arc::new(Client::new(
let meili_client = Client::new(
Some("http://127.0.0.1:7700".into()),
args.master_key.as_deref(),
Some(std::time::Duration::from_secs(args.common.tasks_queue_timeout_secs)),
)?);
Some(std::time::Duration::from_secs(args.tasks_queue_timeout_secs)),
)?;
// enter runtime
@@ -104,11 +146,11 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
dashboard_client.send_machine_info(&env).await?;
let commit_message = build_info.commit_msg.unwrap_or_default().split('\n').next().unwrap();
let max_workloads = args.common.workload_file.len();
let max_workloads = args.workload_file.len();
let reason: Option<&str> = args.reason.as_deref();
let invocation_uuid = dashboard_client.create_invocation(build_info.clone(), commit_message, env, max_workloads, reason).await?;
tracing::info!(workload_count = args.common.workload_file.len(), "handling workload files");
tracing::info!(workload_count = args.workload_file.len(), "handling workload files");
// main task
let workload_runs = tokio::spawn(
@@ -116,17 +158,13 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
let dashboard_client = dashboard_client.clone();
let mut dashboard_urls = Vec::new();
async move {
for workload_file in args.common.workload_file.iter() {
for workload_file in args.workload_file.iter() {
let workload: Workload = serde_json::from_reader(
std::fs::File::open(workload_file)
.with_context(|| format!("error opening {}", workload_file.display()))?,
)
.with_context(|| format!("error parsing {} as JSON", workload_file.display()))?;
let Workload::Bench(workload) = workload else {
bail!("workload file {} is not a bench workload", workload_file.display());
};
let workload_name = workload.name.clone();
workload::execute(

View File

@@ -1,27 +1,24 @@
use std::collections::{BTreeMap, HashMap};
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{Seek as _, Write as _};
use std::path::Path;
use std::sync::Arc;
use anyhow::{bail, Context as _};
use futures_util::TryStreamExt as _;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
use serde_json::json;
use tokio::task::JoinHandle;
use uuid::Uuid;
use super::assets::Asset;
use super::client::Client;
use super::command::SyncMode;
use super::dashboard::DashboardClient;
use super::BenchDeriveArgs;
use crate::common::assets::{self, Asset};
use crate::common::client::Client;
use crate::common::command::{run_commands, Command};
use crate::common::process::{self, delete_db, start_meili};
use crate::bench::{assets, meili_process};
/// A bench workload.
/// Not to be confused with [a test workload](crate::test::workload::Workload).
#[derive(Serialize, Deserialize, Debug)]
pub struct BenchWorkload {
#[derive(Deserialize)]
pub struct Workload {
pub name: String,
pub run_count: u16,
pub extra_cli_args: Vec<String>,
@@ -29,33 +26,30 @@ pub struct BenchWorkload {
#[serde(default)]
pub target: String,
#[serde(default)]
pub precommands: Vec<Command>,
pub commands: Vec<Command>,
pub precommands: Vec<super::command::Command>,
pub commands: Vec<super::command::Command>,
}
async fn run_workload_commands(
async fn run_commands(
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
workload_uuid: Uuid,
workload: &BenchWorkload,
workload: &Workload,
args: &BenchDeriveArgs,
run_number: u16,
) -> anyhow::Result<JoinHandle<anyhow::Result<File>>> {
let report_folder = &args.report_folder;
let workload_name = &workload.name;
let assets = Arc::new(workload.assets.clone());
let asset_folder = args.common.asset_folder.clone().leak();
run_commands(
meili_client,
&workload.precommands,
&assets,
asset_folder,
&mut HashMap::new(),
false,
)
.await?;
for batch in workload
.precommands
.as_slice()
.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
super::command::run_batch(meili_client, batch, &workload.assets, &args.asset_folder)
.await?;
}
std::fs::create_dir_all(report_folder)
.with_context(|| format!("could not create report directory at {report_folder}"))?;
@@ -65,15 +59,14 @@ async fn run_workload_commands(
let report_handle = start_report(logs_client, trace_filename, &workload.target).await?;
run_commands(
meili_client,
&workload.commands,
&assets,
asset_folder,
&mut HashMap::new(),
false,
)
.await?;
for batch in workload
.commands
.as_slice()
.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
super::command::run_batch(meili_client, batch, &workload.assets, &args.asset_folder)
.await?;
}
let processor =
stop_report(dashboard_client, logs_client, workload_uuid, report_filename, report_handle)
@@ -88,14 +81,14 @@ pub async fn execute(
assets_client: &Client,
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
invocation_uuid: Uuid,
master_key: Option<&str>,
workload: BenchWorkload,
workload: Workload,
args: &BenchDeriveArgs,
binary_path: Option<&Path>,
) -> anyhow::Result<()> {
assets::fetch_assets(assets_client, &workload.assets, &args.common.asset_folder).await?;
assets::fetch_assets(assets_client, &workload.assets, &args.asset_folder).await?;
let workload_uuid = dashboard_client.create_workload(invocation_uuid, &workload).await?;
@@ -136,26 +129,38 @@ pub async fn execute(
async fn execute_run(
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
workload_uuid: Uuid,
master_key: Option<&str>,
workload: &BenchWorkload,
workload: &Workload,
args: &BenchDeriveArgs,
binary_path: Option<&Path>,
run_number: u16,
) -> anyhow::Result<tokio::task::JoinHandle<anyhow::Result<std::fs::File>>> {
delete_db().await;
meili_process::delete_db();
let meilisearch = start_meili(
meili_client,
master_key,
&workload.extra_cli_args,
&workload.name,
binary_path,
)
.await?;
let run_command = match binary_path {
Some(binary_path) => tokio::process::Command::new(binary_path),
None => {
meili_process::build().await?;
let mut command = tokio::process::Command::new("cargo");
command
.arg("run")
.arg("--release")
.arg("-p")
.arg("meilisearch")
.arg("--bin")
.arg("meilisearch")
.arg("--");
command
}
};
let processor = run_workload_commands(
let meilisearch =
meili_process::start(meili_client, master_key, workload, &args.asset_folder, run_command)
.await?;
let processor = run_commands(
dashboard_client,
logs_client,
meili_client,
@@ -166,7 +171,7 @@ async fn execute_run(
)
.await?;
process::kill_meili(meilisearch).await;
meili_process::kill(meilisearch).await;
tracing::info!(run_number, "Successful run");

View File

@@ -1,36 +0,0 @@
use clap::Parser;
use std::path::PathBuf;
pub fn default_asset_folder() -> String {
"./bench/assets/".into()
}
pub fn default_log_filter() -> String {
"info".into()
}
#[derive(Parser, Debug, Clone)]
pub struct CommonArgs {
/// Filename of the workload file; pass multiple filenames
/// to run multiple workloads in the specified order.
///
/// For benches, each workload run will get its own report file.
#[arg(value_name = "WORKLOAD_FILE", last = false)]
pub workload_file: Vec<PathBuf>,
/// Directory to store the remote assets.
#[arg(long, default_value_t = default_asset_folder())]
pub asset_folder: String,
/// Log directives
#[arg(short, long, default_value_t = default_log_filter())]
pub log_filter: String,
/// Authentication bearer for fetching assets
#[arg(long)]
pub assets_key: Option<String>,
/// The maximum time in seconds we allow for fetching the task queue before timing out.
#[arg(long, default_value_t = 60)]
pub tasks_queue_timeout_secs: u64,
}

View File

@@ -1,398 +0,0 @@
use std::collections::{BTreeMap, HashMap};
use std::fmt::Display;
use std::io::Read as _;
use std::sync::Arc;
use anyhow::{bail, Context as _};
use reqwest::StatusCode;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use similar_asserts::SimpleDiff;
use crate::common::assets::{fetch_asset, Asset};
use crate::common::client::{Client, Method};
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Command {
pub route: String,
pub method: Method,
#[serde(default)]
pub body: Body,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expected_status: Option<u16>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expected_response: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "HashMap::is_empty")]
pub register: HashMap<String, String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub api_key_variable: Option<String>,
#[serde(default)]
pub synchronous: SyncMode,
}
#[derive(Default, Clone, Serialize, Deserialize, Debug)]
#[serde(untagged)]
pub enum Body {
Inline {
inline: serde_json::Value,
},
Asset {
asset: String,
},
#[default]
Empty,
}
impl Body {
pub fn get(
self,
assets: &BTreeMap<String, Asset>,
registered: &HashMap<String, Value>,
asset_folder: &str,
) -> anyhow::Result<Option<(Vec<u8>, &'static str)>> {
Ok(match self {
Body::Inline { inline: mut body } => {
fn insert_variables(value: &mut Value, registered: &HashMap<String, Value>) {
match value {
Value::Null | Value::Bool(_) | Value::Number(_) => (),
Value::String(s) => {
if s.starts_with("{{") && s.ends_with("}}") {
let name = s[2..s.len() - 2].trim();
if let Some(replacement) = registered.get(name) {
*value = replacement.clone();
}
}
}
Value::Array(values) => {
for value in values {
insert_variables(value, registered);
}
}
Value::Object(map) => {
for (_key, value) in map.iter_mut() {
insert_variables(value, registered);
}
}
}
}
if !registered.is_empty() {
insert_variables(&mut body, registered);
}
Some((
serde_json::to_vec(&body)
.context("serializing to bytes")
.context("while getting inline body")?,
"application/json",
))
}
Body::Asset { asset: name } => Some({
let context = || format!("while getting body from asset '{name}'");
let (mut file, format) =
fetch_asset(&name, assets, asset_folder).with_context(context)?;
let mut buf = Vec::new();
file.read_to_end(&mut buf).with_context(context)?;
(buf, format.to_content_type(&name))
}),
Body::Empty => None,
})
}
}
impl Display for Command {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?} {} ({:?})", self.method, self.route, self.synchronous)
}
}
#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
pub enum SyncMode {
DontWait,
#[default]
WaitForResponse,
WaitForTask,
}
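/// Spawns every command in `batch` concurrently and waits for them all;
/// the `synchronous` mode of the batch's last command decides whether to
/// also wait for the task queue to drain afterwards.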
async fn run_batch(
client: &Arc<Client>,
batch: &[Command],
assets: &Arc<BTreeMap<String, Asset>>,
asset_folder: &'static str,
registered: &mut HashMap<String, Value>,
return_response: bool,
) -> anyhow::Result<Vec<(Value, StatusCode)>> {
let [.., last] = batch else { return Ok(Vec::new()) };
let sync = last.synchronous;
let batch_len = batch.len();
let mut tasks = Vec::with_capacity(batch.len());
for command in batch.iter().cloned() {
let client2 = Arc::clone(client);
let assets2 = Arc::clone(assets);
let needs_response = return_response || !command.register.is_empty();
let registered2 = registered.clone(); // FIXME: cloning the whole map for each command is inefficient
tasks.push(tokio::spawn(async move {
run(&client2, &command, &assets2, registered2, asset_folder, needs_response).await
}));
}
let mut outputs = Vec::with_capacity(if return_response { batch_len } else { 0 });
for (task, command) in tasks.into_iter().zip(batch.iter()) {
let output = task.await.context("task panicked")??;
if let Some(output) = output {
for (name, path) in &command.register {
let value = output
.0
.pointer(path)
.with_context(|| format!("could not find path '{path}' in response (required to register '{name}')"))?
.clone();
registered.insert(name.clone(), value);
}
if return_response {
outputs.push(output);
}
}
}
match sync {
SyncMode::DontWait => {}
SyncMode::WaitForResponse => {}
SyncMode::WaitForTask => wait_for_tasks(client).await?,
}
Ok(outputs)
}
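/// Polls `GET /tasks?statuses=enqueued,processing` once per second until the
/// task queue reports no remaining tasks.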
async fn wait_for_tasks(client: &Client) -> anyhow::Result<()> {
loop {
let response = client
.get("tasks?statuses=enqueued,processing")
.send()
.await
.context("could not wait for tasks")?;
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response to JSON")
.context("could not wait for tasks")?;
match response.get("total") {
Some(serde_json::Value::Number(number)) => {
let number = number.as_u64().with_context(|| {
format!("waiting for tasks: could not parse 'total' as integer, got {}", number)
})?;
if number == 0 {
break;
} else {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
}
Some(thing_else) => {
bail!(format!(
"waiting for tasks: could not parse 'total' as a number, got '{thing_else}'"
))
}
None => {
bail!(format!(
"waiting for tasks: expected response to contain 'total', got '{response}'"
))
}
}
}
Ok(())
}
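/// Structural JSON equality, except that reference strings wrapped in square
/// brackets (e.g. "[timestamp]", "[uuid]") act as wildcards matching any value.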
fn json_eq_ignore(reference: &Value, value: &Value) -> bool {
match reference {
Value::Null | Value::Bool(_) | Value::Number(_) => reference == value,
Value::String(s) => (s.starts_with('[') && s.ends_with(']')) || reference == value,
Value::Array(values) => match value {
Value::Array(other_values) => {
if values.len() != other_values.len() {
return false;
}
for (value, other_value) in values.iter().zip(other_values.iter()) {
if !json_eq_ignore(value, other_value) {
return false;
}
}
true
}
_ => false,
},
Value::Object(map) => match value {
Value::Object(other_map) => {
if map.len() != other_map.len() {
return false;
}
for (key, value) in map.iter() {
match other_map.get(key) {
Some(other_value) => {
if !json_eq_ignore(value, other_value) {
return false;
}
}
None => return false,
}
}
true
}
_ => false,
},
}
}
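/// Executes a single command: substitutes registered variables into the route
/// and body, sends the request, then checks the status code and expected
/// response (or returns the raw response when `return_value` is set).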
#[tracing::instrument(skip(client, command, assets, registered, asset_folder), fields(command = %command))]
pub async fn run(
client: &Client,
command: &Command,
assets: &BTreeMap<String, Asset>,
registered: HashMap<String, Value>,
asset_folder: &str,
return_value: bool,
) -> anyhow::Result<Option<(Value, StatusCode)>> {
// Try to replace variables in the route
let mut route = &command.route;
let mut owned_route;
if !registered.is_empty() {
while let (Some(pos1), Some(pos2)) = (route.find("{{"), route.rfind("}}")) {
if pos2 > pos1 {
let name = route[pos1 + 2..pos2].trim();
if let Some(replacement) = registered.get(name).and_then(|r| r.as_str()) {
let mut new_route = String::new();
new_route.push_str(&route[..pos1]);
new_route.push_str(replacement);
new_route.push_str(&route[pos2 + 2..]);
owned_route = new_route;
route = &owned_route;
continue;
}
}
break;
}
}
// clone the body so that `command` is not partially moved out of
let body = command
.body
.clone()
.get(assets, &registered, asset_folder)
.with_context(|| format!("while getting body for command {command}"))?;
let mut request = client.request(command.method.into(), route);
// Replace the api key
if let Some(var_name) = &command.api_key_variable {
if let Some(api_key) = registered.get(var_name).and_then(|v| v.as_str()) {
request = request.header("Authorization", format!("Bearer {api_key}"));
} else {
bail!("could not find API key variable '{var_name}' in registered values");
}
}
let request = if let Some((body, content_type)) = body {
request.body(body).header(reqwest::header::CONTENT_TYPE, content_type)
} else {
request
};
let response =
request.send().await.with_context(|| format!("error sending command: {}", command))?;
let code = response.status();
if !return_value {
if let Some(expected_status) = command.expected_status {
if code.as_u16() != expected_status {
let response = response
.text()
.await
.context("could not read response body as text")
.context("reading response body when checking expected status")?;
bail!("unexpected status code: got {}, expected {expected_status}, response body: '{response}'", code.as_u16());
}
} else if code.is_client_error() {
tracing::error!(%command, %code, "error in workload file");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing error in workload file when sending command")?;
bail!(
"error in workload file: server responded with error code {code} and '{response}'"
)
} else if code.is_server_error() {
tracing::error!(%command, %code, "server error");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing server error when sending command")?;
bail!("server error: server responded with error code {code} and '{response}'")
}
}
if let Some(expected_response) = &command.expected_response {
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing response when checking expected response")?;
if return_value {
return Ok(Some((response, code)));
}
if !json_eq_ignore(expected_response, &response) {
let expected_pretty = serde_json::to_string_pretty(expected_response)
.context("serializing expected response as pretty JSON")?;
let response_pretty = serde_json::to_string_pretty(&response)
.context("serializing response as pretty JSON")?;
let diff = SimpleDiff::from_str(&expected_pretty, &response_pretty, "expected", "got");
bail!("unexpected response:\n{diff}");
}
} else if return_value {
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing response when recording expected response")?;
return Ok(Some((response, code)));
}
Ok(None)
}
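/// Splits `commands` into batches ending at each synchronous command and runs
/// the batches in order, accumulating responses when `return_response` is set.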
pub async fn run_commands(
client: &Arc<Client>,
commands: &[Command],
assets: &Arc<BTreeMap<String, Asset>>,
asset_folder: &'static str,
registered: &mut HashMap<String, Value>,
return_response: bool,
) -> anyhow::Result<Vec<(Value, StatusCode)>> {
let mut responses = Vec::new();
for batch in
commands.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
let mut new_responses =
run_batch(client, batch, assets, asset_folder, registered, return_response).await?;
responses.append(&mut new_responses);
}
Ok(responses)
}
pub fn health_command() -> Command {
Command {
route: "/health".into(),
method: crate::common::client::Method::Get,
body: Default::default(),
register: HashMap::new(),
synchronous: SyncMode::WaitForResponse,
expected_status: None,
expected_response: None,
api_key_variable: None,
}
}

View File

@@ -1,18 +0,0 @@
use anyhow::Context;
use std::io::LineWriter;
use tracing_subscriber::{fmt::format::FmtSpan, layer::SubscriberExt, Layer};
pub fn setup_logs(log_filter: &str) -> anyhow::Result<()> {
let filter: tracing_subscriber::filter::Targets =
log_filter.parse().context("invalid --log-filter")?;
let subscriber = tracing_subscriber::registry().with(
tracing_subscriber::fmt::layer()
.with_writer(|| LineWriter::new(std::io::stderr()))
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(filter),
);
tracing::subscriber::set_global_default(subscriber).context("could not setup logging")?;
Ok(())
}

View File

@@ -1,7 +0,0 @@
pub mod args;
pub mod assets;
pub mod client;
pub mod command;
pub mod logs;
pub mod process;
pub mod workload;

View File

@@ -1,11 +0,0 @@
use serde::{Deserialize, Serialize};
use crate::{bench::BenchWorkload, test::TestWorkload};
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
pub enum Workload {
Bench(BenchWorkload),
Test(TestWorkload),
}

View File

@@ -1,3 +1 @@
pub mod bench;
pub mod common;
pub mod test;

View File

@@ -1,7 +1,7 @@
use std::collections::HashSet;
use clap::Parser;
use xtask::{bench::BenchDeriveArgs, test::TestDeriveArgs};
use xtask::bench::BenchDeriveArgs;
/// List features available in the workspace
#[derive(Parser, Debug)]
@@ -20,7 +20,6 @@ struct ListFeaturesDeriveArgs {
enum Command {
ListFeatures(ListFeaturesDeriveArgs),
Bench(BenchDeriveArgs),
Test(TestDeriveArgs),
}
fn main() -> anyhow::Result<()> {
@@ -28,7 +27,6 @@ fn main() -> anyhow::Result<()> {
match args {
Command::ListFeatures(args) => list_features(args),
Command::Bench(args) => xtask::bench::run(args)?,
Command::Test(args) => xtask::test::run(args)?,
}
Ok(())
}

View File

@@ -1,100 +0,0 @@
use std::{sync::Arc, time::Duration};
use crate::{
common::{
args::CommonArgs, client::Client, command::SyncMode, logs::setup_logs, workload::Workload,
},
test::workload::CommandOrUpgrade,
};
use anyhow::{bail, Context};
use clap::Parser;
mod versions;
mod workload;
pub use workload::TestWorkload;
/// Run tests from a workload
#[derive(Parser, Debug)]
pub struct TestDeriveArgs {
/// Common arguments shared with other commands
#[command(flatten)]
common: CommonArgs,
/// Enables workloads to be rewritten in place to update expected responses.
#[arg(short, long, default_value_t = false)]
pub update_responses: bool,
/// Enables workloads to be rewritten in place to add missing expected responses.
#[arg(short, long, default_value_t = false)]
pub add_missing_responses: bool,
}
pub fn run(args: TestDeriveArgs) -> anyhow::Result<()> {
let rt = tokio::runtime::Builder::new_current_thread().enable_io().enable_time().build()?;
let _scope = rt.enter();
rt.block_on(async { run_inner(args).await })?;
Ok(())
}
async fn run_inner(args: TestDeriveArgs) -> anyhow::Result<()> {
setup_logs(&args.common.log_filter)?;
// setup clients
let assets_client = Arc::new(Client::new(
None,
args.common.assets_key.as_deref(),
Some(Duration::from_secs(3600)), // 1h
)?);
let meili_client = Arc::new(Client::new(
Some("http://127.0.0.1:7700".into()),
Some("masterKey"),
Some(Duration::from_secs(args.common.tasks_queue_timeout_secs)),
)?);
let asset_folder = args.common.asset_folder.clone().leak();
for workload_file in &args.common.workload_file {
let string = tokio::fs::read_to_string(workload_file)
.await
.with_context(|| format!("error reading {}", workload_file.display()))?;
let workload: Workload = serde_json::from_str(string.trim())
.with_context(|| format!("error parsing {} as JSON", workload_file.display()))?;
let Workload::Test(workload) = workload else {
bail!("workload file {} is not a test workload", workload_file.display());
};
let has_upgrade =
workload.commands.iter().any(|c| matches!(c, CommandOrUpgrade::Upgrade { .. }));
let has_faulty_register = workload.commands.iter().any(|c| {
matches!(c, CommandOrUpgrade::Command(cmd) if cmd.synchronous == SyncMode::DontWait && !cmd.register.is_empty())
});
if has_faulty_register {
bail!("workload {} contains commands that register values but are marked as --dont-wait. This is not supported because we cannot guarantee the value will be registered before the next command runs.", workload.name);
}
let name = workload.name.clone();
match workload.run(&args, &assets_client, &meili_client, asset_folder).await {
Ok(_) => {
match args.update_responses {
true => println!("🛠️ Workload {name} was updated"),
false => println!("âś… Workload {name} passed"),
}
if !has_upgrade {
println!("⚠️ Warning: this workload doesn't contain an upgrade. The whole point of these tests is to test upgrades! Please add one.");
}
}
Err(error) => {
println!("❌ Workload {name} failed: {error}");
println!("đź’ˇ Is this intentional? If so, rerun with --update-responses to update the workload files.");
return Err(error);
}
}
}
Ok(())
}

View File

@@ -1,197 +0,0 @@
use std::{collections::BTreeMap, fmt::Display, path::PathBuf};
use crate::common::assets::{Asset, AssetFormat};
use anyhow::Context;
use cargo_metadata::semver::Version;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug)]
pub enum VersionOrLatest {
Version(Version),
Latest,
}
impl<'a> Deserialize<'a> for VersionOrLatest {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'a>,
{
let s: &str = Deserialize::deserialize(deserializer)?;
if s.eq_ignore_ascii_case("latest") {
Ok(VersionOrLatest::Latest)
} else {
let version = Version::parse(s).map_err(serde::de::Error::custom)?;
Ok(VersionOrLatest::Version(version))
}
}
}
impl Serialize for VersionOrLatest {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
match self {
VersionOrLatest::Version(v) => serializer.serialize_str(&v.to_string()),
VersionOrLatest::Latest => serializer.serialize_str("latest"),
}
}
}
impl VersionOrLatest {
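/// Returns the path to the downloaded binary for a pinned version, or `None`
/// for `latest`, which is built from the local checkout instead.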
pub fn binary_path(&self, asset_folder: &str) -> anyhow::Result<Option<PathBuf>> {
match self {
VersionOrLatest::Version(version) => {
let mut asset_folder: PathBuf =
asset_folder.parse().context("parsing asset folder")?;
let arch = get_arch()?;
let local_filename = format!("meilisearch-{version}-{arch}");
asset_folder.push(local_filename);
Ok(Some(asset_folder))
}
VersionOrLatest::Latest => Ok(None),
}
}
}
impl Display for VersionOrLatest {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
VersionOrLatest::Version(v) => v.fmt(f),
VersionOrLatest::Latest => write!(f, "latest"),
}
}
}
async fn get_sha256(version: &Version, asset_name: &str) -> anyhow::Result<String> {
// If the version is lower than 1.15 there is no point in trying to get the sha256: GitHub didn't support digests back then
if *version < Version::parse("1.15.0")? {
anyhow::bail!("version is lower than 1.15, sha256 not available");
}
#[derive(Deserialize)]
struct GithubReleaseAsset {
name: String,
digest: Option<String>,
}
#[derive(Deserialize)]
struct GithubRelease {
assets: Vec<GithubReleaseAsset>,
}
let url =
format!("https://api.github.com/repos/meilisearch/meilisearch/releases/tags/v{version}");
let client = reqwest::Client::builder()
.user_agent("Meilisearch bench xtask")
.build()
.context("failed to build reqwest client")?;
let body = client.get(url).send().await?.text().await?;
let data: GithubRelease = serde_json::from_str(&body)?;
let digest = data
.assets
.into_iter()
.find(|asset| asset.name.as_str() == asset_name)
.with_context(|| format!("asset {asset_name} not found in release v{version}"))?
.digest
.with_context(|| format!("asset {asset_name} has no digest"))?;
let sha256 =
digest.strip_prefix("sha256:").map(|s| s.to_string()).context("invalid sha256 format")?;
Ok(sha256)
}
pub fn get_arch() -> anyhow::Result<&'static str> {
let arch;
// linux-aarch64
#[cfg(all(target_os = "linux", target_arch = "aarch64"))]
{
arch = "linux-aarch64";
}
// linux-amd64
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
{
arch = "linux-amd64";
}
// macos-amd64
#[cfg(all(target_os = "macos", target_arch = "x86_64"))]
{
arch = "macos-amd64";
}
// macos-apple-silicon
#[cfg(all(target_os = "macos", target_arch = "aarch64"))]
{
arch = "macos-apple-silicon";
}
// windows-amd64
#[cfg(all(target_os = "windows", target_arch = "x86_64"))]
{
arch = "windows-amd64";
}
if arch.is_empty() {
anyhow::bail!("unsupported platform");
}
Ok(arch)
}
async fn add_asset(assets: &mut BTreeMap<String, Asset>, version: &Version) -> anyhow::Result<()> {
let arch = get_arch()?;
let local_filename = format!("meilisearch-{version}-{arch}");
if assets.contains_key(&local_filename) {
return Ok(());
}
let filename = format!("meilisearch-{arch}");
// Try to get the sha256, but it may fail if GitHub is rate limiting us.
// We hardcode some values to speed up tests and avoid hitting GitHub.
// Also, versions prior to 1.15 don't have a sha256 available anyway.
let sha256 = match local_filename.as_str() {
"meilisearch-1.12.0-macos-apple-silicon" => {
Some(String::from("3b384707a5df9edf66f9157f0ddb70dcd3ac84d4887149169cf93067d06717b7"))
}
_ => match get_sha256(version, &filename).await {
Ok(sha256) => Some(sha256),
Err(err) => {
tracing::warn!("failed to get sha256 for version {version}: {err}");
None
}
},
};
let url = format!(
"https://github.com/meilisearch/meilisearch/releases/download/v{version}/{filename}"
);
let asset = Asset {
local_location: Some(local_filename.clone()),
remote_location: Some(url),
format: AssetFormat::Raw,
sha256,
};
assets.insert(local_filename, asset);
Ok(())
}
pub async fn expand_assets_with_versions(
assets: &mut BTreeMap<String, Asset>,
versions: &[Version],
) -> anyhow::Result<()> {
for version in versions {
add_asset(assets, version).await?;
}
Ok(())
}

View File

@@ -1,201 +0,0 @@
use anyhow::Context;
use cargo_metadata::semver::Version;
use chrono::DateTime;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::{
collections::{BTreeMap, HashMap},
io::Write,
sync::Arc,
};
use crate::{
common::{
assets::{fetch_assets, Asset},
client::Client,
command::{run_commands, Command},
process::{self, delete_db, kill_meili},
workload::Workload,
},
test::{
versions::{expand_assets_with_versions, VersionOrLatest},
TestDeriveArgs,
},
};
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
pub enum CommandOrUpgrade {
Command(Command),
Upgrade { upgrade: VersionOrLatest },
}
enum CommandOrUpgradeVec<'a> {
Commands(Vec<&'a mut Command>),
Upgrade(VersionOrLatest),
}
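/// Normalizes volatile values in a response so it can be compared across runs:
/// RFC 3339 timestamps become "[timestamp]", UUIDs become "[uuid]", and
/// `processingTimeMs` fields become "[duration]".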
fn produce_reference_value(value: &mut Value) {
match value {
Value::Null | Value::Bool(_) | Value::Number(_) => (),
Value::String(string) => {
if DateTime::parse_from_rfc3339(string.as_str()).is_ok() {
*string = String::from("[timestamp]");
} else if uuid::Uuid::parse_str(string).is_ok() {
*string = String::from("[uuid]");
}
}
Value::Array(values) => {
for value in values {
produce_reference_value(value);
}
}
Value::Object(map) => {
for (key, value) in map.iter_mut() {
match key.as_str() {
"processingTimeMs" => {
*value = Value::String(String::from("[duration]"));
continue;
}
_ => produce_reference_value(value),
}
}
}
}
}
/// A test workload.
/// Not to be confused with [a bench workload](crate::bench::workload::Workload).
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct TestWorkload {
pub name: String,
pub initial_version: Version,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub master_key: Option<String>,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub assets: BTreeMap<String, Asset>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub commands: Vec<CommandOrUpgrade>,
}
impl TestWorkload {
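/// Runs the workload end to end: groups commands between upgrades, fetches
/// assets (including the pinned Meilisearch binaries), starts the server at
/// the initial version, then alternates between running command batches and
/// upgrading the server, optionally writing updated expected responses back
/// to the workload file.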
pub async fn run(
mut self,
args: &TestDeriveArgs,
assets_client: &Client,
meili_client: &Arc<Client>,
asset_folder: &'static str,
) -> anyhow::Result<()> {
// Group commands between upgrades
let mut commands_or_upgrade = Vec::new();
let mut current_commands = Vec::new();
let mut all_versions = vec![self.initial_version.clone()];
for command_or_upgrade in &mut self.commands {
match command_or_upgrade {
CommandOrUpgrade::Command(command) => current_commands.push(command),
CommandOrUpgrade::Upgrade { upgrade } => {
if !current_commands.is_empty() {
commands_or_upgrade.push(CommandOrUpgradeVec::Commands(current_commands));
current_commands = Vec::new();
}
commands_or_upgrade.push(CommandOrUpgradeVec::Upgrade(upgrade.clone()));
if let VersionOrLatest::Version(upgrade) = upgrade {
all_versions.push(upgrade.clone());
}
}
}
}
if !current_commands.is_empty() {
commands_or_upgrade.push(CommandOrUpgradeVec::Commands(current_commands));
}
// Fetch assets
expand_assets_with_versions(&mut self.assets, &all_versions).await?;
fetch_assets(assets_client, &self.assets, &args.common.asset_folder).await?;
// Run server
delete_db().await;
let binary_path = VersionOrLatest::Version(self.initial_version.clone())
.binary_path(&args.common.asset_folder)?;
let mut process = process::start_meili(
meili_client,
Some("masterKey"),
&[],
&self.name,
binary_path.as_deref(),
)
.await?;
let assets = Arc::new(self.assets.clone());
let return_responses = args.add_missing_responses || args.update_responses;
let mut registered = HashMap::new();
for command_or_upgrade in commands_or_upgrade {
match command_or_upgrade {
CommandOrUpgradeVec::Commands(commands) => {
let cloned: Vec<_> = commands.iter().map(|c| (*c).clone()).collect();
let responses = run_commands(
meili_client,
&cloned,
&assets,
asset_folder,
&mut registered,
return_responses,
)
.await?;
if return_responses {
assert_eq!(responses.len(), cloned.len());
for (command, (mut response, status)) in commands.into_iter().zip(responses)
{
if args.update_responses
|| (args.add_missing_responses
&& command.expected_response.is_none())
{
produce_reference_value(&mut response);
command.expected_response = Some(response);
command.expected_status = Some(status.as_u16());
}
}
}
}
CommandOrUpgradeVec::Upgrade(version) => {
kill_meili(process).await;
let binary_path = version.binary_path(&args.common.asset_folder)?;
process = process::start_meili(
meili_client,
Some("masterKey"),
&[String::from("--experimental-dumpless-upgrade")],
&self.name,
binary_path.as_deref(),
)
.await?;
tracing::info!("Upgraded to {version}");
}
}
}
// Write back the workload if needed
if return_responses {
// Filter out the assets we added for the versions
self.assets.retain(|_, asset| {
asset.local_location.as_ref().is_none_or(|a| !a.starts_with("meilisearch-"))
});
let workload = Workload::Test(self);
let mut file =
std::fs::File::create(&args.common.workload_file[0]).with_context(|| {
format!("could not open {}", args.common.workload_file[0].display())
})?;
serde_json::to_writer_pretty(&file, &workload).with_context(|| {
format!("could not write to {}", args.common.workload_file[0].display())
})?;
file.write_all(b"\n").with_context(|| {
format!("could not write to {}", args.common.workload_file[0].display())
})?;
tracing::info!("Updated workload file {}", args.common.workload_file[0].display());
}
Ok(())
}
}

View File

@@ -1,6 +1,5 @@
{
"name": "movies-subset-hf-embeddings",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-add-embeddings-hf",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.add_new_documents",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.ndjson_1M_ignore_first_100k",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_facet_numbers",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_facet_strings",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_searchables",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.ndjson_1M",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "movies.json,no-threads",
"type": "bench",
"run_count": 2,
"extra_cli_args": [
"--max-indexing-threads=1"

View File

@@ -1,6 +1,5 @@
{
"name": "movies.json",
"type": "bench",
"run_count": 10,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "search-movies-subset-hf-embeddings",
"type": "bench",
"run_count": 2,
"target": "search::=trace",
"extra_cli_args": [

View File

@@ -1,6 +1,5 @@
{
"name": "search-filterable-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,7 +1,6 @@
{
"name": "search-geosort.jsonl_1M",
"type": "bench",
"run_count": 3,
"run_count": 3,
"target": "search::=trace",
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "search-hackernews.ndjson_1M",
"type": "bench",
"run_count": 3,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "search-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "search-sortable-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "settings-add-remove-filters.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-proximity-precision.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-remove-add-swap-searchable.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-typo.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,265 +0,0 @@
# Declarative upgrade tests
Declarative upgrade tests ensure that Meilisearch features remain stable across versions.
While we already have unit tests, those are run against **temporary databases** that are created fresh each time and therefore never risk corruption.
Upgrade tests instead **simulate the lifetime of a database**: they chain together commands and version upgrades, verifying that database state and API responses remain consistent.
## Basic example
```json
{
"type": "test",
"name": "api-keys",
"initialVersion": "1.19.0", // the first command will run on a brand new database of this version
"commands": []
}
```
This example defines a no-op test (it does nothing).
If the file is saved at `workloads/tests/example.json`, you can run it with:
```bash
cargo xtask test workloads/tests/example.json
```
## Commands
Commands represent API requests sent to Meilisearch endpoints during a test.
They are executed sequentially, and their responses can be validated to ensure consistent behavior across upgrades.
```json
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [ "movies" ]
}
}
}
```
This command issues a `POST /keys` request, creating an API key with permissions to search and add documents in the `movies` index.
### Using assets in commands
To keep tests concise and reusable, you can define **assets** at the root of the workload file.
Assets are external data sources (such as datasets) that are cached between runs, making tests faster and easier to read.
```json
{
"type": "test",
"name": "movies",
"initialVersion": "1.12.0",
"assets": {
"movies.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
"sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
}
},
"commands": [
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
}
}
]
}
```
In this example:
- The `movies.json` dataset is defined as an asset, pointing to a remote URL.
- The SHA-256 checksum ensures integrity.
- The `POST /indexes/movies/documents` command uses this asset as the request body.
This makes the test much cleaner than inlining a large dataset directly into the command.
### Asserting responses
Commands can specify both the **expected status code** and the **expected response body**.
```json
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]", // Set to a bracketed string to ignore the value
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"synchronous": "WaitForTask"
}
```
Manually writing `expectedResponse` fields can be tedious.
Instead, you can let the test runner populate them automatically:
```bash
# Run the workload to populate expected fields. Only adds the missing ones, doesn't change existing data
cargo xtask test workloads/tests/example.json --add-missing-responses
# OR
# Run the workload to populate expected fields. Updates all fields including existing ones
cargo xtask test workloads/tests/example.json --update-responses
```
This workflow is recommended:
1. Write the test without expected fields.
2. Run it with `--add-missing-responses` to capture the actual responses.
3. Review and commit the generated expectations.
## Upgrade commands
Upgrade commands allow you to switch the Meilisearch instance from one version to another during a test.
When executed, an upgrade command will:
1. Stop the current Meilisearch server.
2. Upgrade the database to the specified version.
3. Restart the server on the specified version.
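In a workload file, an upgrade command is a single-field object; for instance (the target version here is illustrative):
```json
{
    "upgrade": "1.17.0" // stop, migrate the database, restart on 1.17.0
}
```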
### Typical usage
In most cases, you will:
- **Set up** some data using commands on an older version.
- **Upgrade** to the latest version.
- **Assert** that the data and API behavior remain correct after the upgrade.
```json
{
"type": "test",
"name": "movies",
"initialVersion": "1.12.0", // An older version to start with
"commands": [
// Commands to populate the database
{
"upgrade": "latest" // Will build meilisearch locally and run it
},
// Commands to check the state of the database
]
}
```
This ensures backward compatibility: databases created with older Meilisearch versions should remain functional and consistent after an upgrade.
### Advanced usage
Tests naturally grow more complex over time as they evolve alongside new features and schema changes.
A single test can chain together multiple upgrades, interleaving data population, API checks, and version transitions.
For example:
```json
{
"type": "test",
"name": "movies",
"initialVersion": "1.12.0",
"commands": [
// Commands to populate the database
{
"upgrade": "1.17.0"
},
// Commands on endpoints that were removed after 1.17
{
"upgrade": "latest"
},
// Check the state
]
}
```
## Variables
Sometimes a command needs to use a value returned by a **previous response**.
These values can be captured and reused using the `register` field.
```json
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [ "movies" ]
}
},
"expectedResponse": {
"key": "c6f64630bad2996b1f675007c8800168e14adf5d6a7bb1a400a6d2b158050eaf",
// ...
},
"register": {
"key": "/key"
},
"synchronous": "WaitForResponse"
}
```
The `register` field captures the value at the JSON path `/key` from the response.
Paths follow the **JSON Pointer (RFC 6901)** format.
Registered variables are available for all subsequent commands.
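For instance, a command could register the `uid` of the first task returned by the tasks route; array elements are addressed by index in the pointer (the route and field names here are illustrative):
```json
{
    "route": "tasks?limit=1", // illustrative route
    "method": "GET",
    "register": {
        "task_id": "/results/0/uid" // first element of the "results" array
    },
    "synchronous": "WaitForResponse"
}
```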
Registered variables can be referenced by wrapping their name in double curly braces:
In the route/path:
```json
{
"route": "tasks/{{ task_id }}",
"method": "GET"
}
```
In the request body:
```json
{
"route": "indexes/movies/documents",
"method": "PATCH",
"body": {
"inline": {
"id": "{{ document_id }}",
"overview": "Shazam turns evil and the world is in danger.",
}
}
}
```
As an API-key:
```json
{
"route": "indexes/movies/documents",
"method": "POST",
"body": { /* ... */ },
"apiKeyVariable": "key" // The content of the key variable will be used as an API key
}
```

View File

@@ -1,221 +0,0 @@
{
"type": "test",
"name": "api-keys",
"initialVersion": "1.12.0",
"commands": [
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
]
}
},
"expectedStatus": 201,
"expectedResponse": {
"actions": [
"search",
"documents.add"
],
"createdAt": "[timestamp]",
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
],
"key": "c6f64630bad2996b1f675007c8800168e14adf5d6a7bb1a400a6d2b158050eaf",
"name": null,
"uid": "[uuid]",
"updatedAt": "[timestamp]"
},
"register": {
"key": "/key"
},
"synchronous": "WaitForResponse"
},
{
"route": "keys/{{ key }}",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"actions": [
"search",
"documents.add"
],
"createdAt": "[timestamp]",
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
],
"key": "c6f64630bad2996b1f675007c8800168e14adf5d6a7bb1a400a6d2b158050eaf",
"name": null,
"uid": "[uuid]",
"updatedAt": "[timestamp]"
},
"synchronous": "WaitForResponse"
},
{
"route": "/indexes",
"method": "POST",
"body": {
"inline": {
"primaryKey": "id",
"uid": "movies"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 0,
"type": "indexCreation"
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"inline": {
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/search?q=shazam",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 0,
"hits": [],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shazam"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"upgrade": "latest"
},
{
"route": "indexes/movies/search?q=shazam",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 1,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shazam"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents/287947",
"method": "DELETE",
"body": null,
"expectedStatus": 403,
"expectedResponse": {
"code": "invalid_api_key",
"link": "https://docs.meilisearch.com/errors#invalid_api_key",
"message": "The provided API key is invalid.",
"type": "auth"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"inline": {
"id": 287948,
"overview": "Shazam turns evil and the world is in danger.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2032-03-23",
"title": "Shazam 2"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 3,
"type": "documentAdditionOrUpdate"
},
"apiKeyVariable": "key",
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search?q=shaza",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 2,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
},
{
"id": 287948,
"overview": "Shazam turns evil and the world is in danger.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2032-03-23",
"title": "Shazam 2"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shaza"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
}
]
}

View File

@@ -1,163 +0,0 @@
{
"type": "test",
"name": "movies",
"initialVersion": "1.12.0",
"assets": {
"movies.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
"sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
}
},
"commands": [
{
"route": "indexes/movies/settings",
"method": "PATCH",
"body": {
"inline": {
"filterableAttributes": [
"genres",
"release_date"
],
"searchableAttributes": [
"title",
"overview"
],
"sortableAttributes": [
"release_date"
]
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 0,
"type": "settingsUpdate"
},
"synchronous": "DontWait"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"synchronous": "WaitForTask"
},
{
"upgrade": "latest"
},
{
"route": "indexes/movies/search?q=bitcoin",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 6,
"hits": [
{
"genres": [
"Documentary"
],
"id": 349086,
"overview": "A documentary exploring how money and the trading of value has evolved, culminating in Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/A82oxum0dTL71N0cjD0F66S9gdt.jpg",
"release_date": 1437177600,
"title": "Bitcoin: The End of Money as We Know It"
},
{
"genres": [
"Documentary",
"History"
],
"id": 427451,
"overview": "Not since the invention of the Internet has there been such a disruptive technology as Bitcoin. Bitcoin's early pioneers sought to blur the lines of sovereignty and the financial status quo. After years of underground development Bitcoin grabbed the attention of a curious public, and the ire of the regulators the technology had subverted. After landmark arrests of prominent cyber criminals Bitcoin faces its most severe adversary yet, the very banks it was built to destroy.",
"poster": "https://image.tmdb.org/t/p/w500/qW3vsno24UBawZjnrKfQ1qHRPD6.jpg",
"release_date": 1483056000,
"title": "Banking on Bitcoin"
},
{
"genres": [
"Documentary",
"History"
],
"id": 292607,
"overview": "A documentary about the development and spread of the virtual currency called Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/nUzeZupwmEOoddQIDAq10Gyifk0.jpg",
"release_date": 1412294400,
"title": "The Rise and Rise of Bitcoin"
},
{
"genres": [
"Documentary"
],
"id": 321769,
"overview": "Deep Web gives the inside story of one of the most important and riveting digital crime sagas of the century -- the arrest of Ross William Ulbricht, the 30-year-old entrepreneur convicted of being 'Dread Pirate Roberts,' creator and operator of online black market Silk Road. As the only film with exclusive access to the Ulbricht family, Deep Web explores how the brightest minds and thought leaders behind the Deep Web and Bitcoin are now caught in the crosshairs of the battle for control of a future inextricably linked to technology, with our digital rights hanging in the balance.",
"poster": "https://image.tmdb.org/t/p/w500/dtSOFZ7ioDSaJxPzORaplqo8QZ2.jpg",
"release_date": 1426377600,
"title": "Deep Web"
},
{
"genres": [
"Comedy",
"Horror"
],
"id": 179538,
"overview": "A gang of gold thieves lands in a coven of witches who are preparing for an ancient ritual... and in need of a sacrifice.",
"poster": "https://image.tmdb.org/t/p/w500/u7w6vghlbz8xDUZRayOXma3Ax96.jpg",
"release_date": 1379635200,
"title": "Witching & Bitching"
},
{
"genres": [
"Comedy"
],
"id": 70882,
"overview": "Roseanne Barr is back with an all-new HBO comedy special! Filmed live at the Comedy Store in Los Angeles, Roseanne returns to her stand-up roots for the first time in 14 years, as she tackles hot issues of today - from gay marriage to President Bush.",
"poster": "https://image.tmdb.org/t/p/w500/cUkQQnfPTonMXRroZzCyw11eKXr.jpg",
"release_date": 1162598400,
"title": "Roseanne Barr: Blonde and Bitchin'"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "bitcoin"
},
"synchronous": "DontWait"
},
{
"route": "indexes/movies/stats",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"avgDocumentSize": 499,
"fieldDistribution": {
"genres": 31944,
"id": 31944,
"overview": 31944,
"poster": 31944,
"release_date": 31944,
"title": 31944
},
"isIndexing": false,
"numberOfDocuments": 31944,
"numberOfEmbeddedDocuments": 0,
"numberOfEmbeddings": 0,
"rawDocumentDbSize": 16220160
},
"synchronous": "DontWait"
}
]
}