Compare commits

...

537 Commits

Author SHA1 Message Date
Clément Renault
ea70a7d1c9 Merge pull request #5969 from xuhongxu96/main
Remove unused dependency `allocator-api2`
2025-11-15 10:03:15 +00:00
Clément Renault
9304f8e586 Merge pull request #5991 from meilisearch/release-v1.26.0
Release v1.26.0
2025-11-13 17:54:01 +00:00
Louis Dureuil
495db080ec Upgrade snap 2025-11-13 17:52:34 +01:00
Louis Dureuil
d71341fa48 Support upgrade to v1.26.0 2025-11-13 17:52:02 +01:00
Louis Dureuil
5b3070d8c3 Update version in toml and lock 2025-11-13 17:35:26 +01:00
Louis Dureuil
89006fd4b3 Merge pull request #5980 from hayatosc/feat/hugging-face-modernbert
Support ModernBERT architecture on `huggingface` embedder
2025-11-10 18:03:35 +00:00
Louis Dureuil
49f50a0a21 Don't collect the views 2025-11-10 17:55:44 +01:00
Louis Dureuil
1104f00803 happy clippy 2025-11-10 16:59:12 +01:00
Louis Dureuil
33fa564a9c rustfmt 2025-11-10 16:56:13 +01:00
Clément Renault
a097b254f8 Merge pull request #5963 from meilisearch/engprod-2128-allow-attaching-user-defined-metadata-to-tasks-and-return
Allow to attach `customMetadata` in the document addition or update tasks
2025-11-10 15:48:46 +00:00
Clément Renault
54cb0ec437 Merge pull request #5984 from meilisearch/embedder-error-modes
Embedder failure modes
2025-11-10 15:34:01 +00:00
Louis Dureuil
38ed1f1dbb Change parsing of environment variable 2025-11-10 15:08:24 +01:00
Clément Renault
643dd33358 Merge pull request #5982 from meilisearch/bump-meilisearch-v1.25.0
Bump meilisearch v1.25.0
2025-11-10 14:04:17 +00:00
Louis Dureuil
32f9fb6ab2 fix environment variable values 2025-11-10 14:54:25 +01:00
Louis Dureuil
b5966f82e8 Make max retry duration configurable with MEILI_EXPERIMENTAL_REST_EMBEDDER_MAX_RETRY_DURATION_SECONDS 2025-11-10 14:29:27 +01:00
Louis Dureuil
5e54063aab Configurable timeout with MEILI_EXPERIMENTAL_REST_EMBEDDER_TIMEOUT_SECONDS 2025-11-10 14:29:20 +01:00
Louis Dureuil
40456795d0 Allow to customize failure modes with MEILI_EXPERIMENTAL_CONFIG_EMBEDDER_FAILURE_MODES 2025-11-10 14:23:51 +01:00
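The three commits above introduce experimental environment variables for the REST embedder. A minimal sketch of how such `*_SECONDS` variables can be read into a `Duration`; the helper name and the fallback defaults are illustrative, not Meilisearch's actual option handling:

```rust
use std::env;
use std::time::Duration;

// Hypothetical helper: read a `*_SECONDS` variable, fall back to a default on
// absence or parse error. Defaults of 30s/60s below are assumptions.
fn duration_from_env(var: &str, default_secs: u64) -> Duration {
    match env::var(var) {
        Ok(value) => match value.parse::<u64>() {
            Ok(secs) => Duration::from_secs(secs),
            Err(_) => {
                eprintln!("invalid value for {var}, falling back to {default_secs}s");
                Duration::from_secs(default_secs)
            }
        },
        Err(_) => Duration::from_secs(default_secs),
    }
}

fn main() {
    let timeout = duration_from_env("MEILI_EXPERIMENTAL_REST_EMBEDDER_TIMEOUT_SECONDS", 30);
    let max_retry = duration_from_env(
        "MEILI_EXPERIMENTAL_REST_EMBEDDER_MAX_RETRY_DURATION_SECONDS",
        60,
    );
    println!("timeout: {timeout:?}, max retry duration: {max_retry:?}");
}
```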
ManyTheFish
40e60c6f52 Fix dumpless upgrade 2025-11-10 14:03:17 +01:00
ManyTheFish
eeae6383d0 Bump Meilisearch version v1.25.0 2025-11-10 14:03:17 +01:00
Clément Renault
8cbcaeff56 Merge pull request #5981 from meilisearch/charabia-v0.9.8
Update Charabia v0.9.8
2025-11-10 09:45:30 +00:00
ManyTheFish
ce87d5a89e Update Charabia v0.9.8 2025-11-10 09:33:31 +01:00
Hayato Sakaguchi
9f7172f6ab change tensor names 2025-11-09 01:06:04 +09:00
Hayato Sakaguchi
d6eca83cfa Support modernbert architecture in hugging face embedder 2025-11-08 20:53:47 +09:00
Louis Dureuil
a9d6e86077 Merge pull request #5775 from meilisearch/experimental-search-personnalization
Experimental search personalization
2025-11-06 18:45:18 +00:00
Clément Renault
346f9efe3a Merge pull request #5977 from meilisearch/fix-rust-analyzer-false-positive
Fix error that rust-analyzer reports because it is compiling all code with the `test` cfg
2025-11-06 18:42:43 +00:00
Louis Dureuil
a987d698c1 Fix error that rust-analyzer reports because it is compiling all code with the test cfg 2025-11-06 18:10:59 +01:00
Louis Dureuil
fc3508c8c8 Fix route path 2025-11-06 18:08:50 +01:00
ManyTheFish
dbb45dec1a ignore flaky test 2025-11-06 17:55:27 +01:00
ManyTheFish
5f69a43846 Early return if time budget is already exceeded 2025-11-06 17:55:27 +01:00
ManyTheFish
fe1e4814fa Return an error when personalize is used in federated queries 2025-11-06 17:55:27 +01:00
ManyTheFish
c29749741b Use the time budget instead of defining a deadline outside the scope 2025-11-06 17:55:27 +01:00
ManyTheFish
3e47201365 Fix too many argument clippy warning 2025-11-06 17:55:27 +01:00
ManyTheFish
ec9719f3b1 Fix simple PR comments 2025-11-06 17:55:27 +01:00
ManyTheFish
b2cc9e4db8 remove useless clones 2025-11-06 17:55:27 +01:00
ManyTheFish
56198bae48 Initialize personalization API once 2025-11-06 17:55:27 +01:00
ManyTheFish
888059b2d0 Fix PR comments 2025-11-06 17:55:27 +01:00
ManyTheFish
410f2fc8c3 add some failure tests for personalization 2025-11-06 17:55:01 +01:00
ManyTheFish
54e244d2f3 Return an error when the feature is disabled 2025-11-06 17:55:01 +01:00
ManyTheFish
e0c36972fb remove deadcode 2025-11-06 17:55:01 +01:00
ManyTheFish
daadcddb5e Reduce personalization footprint on the codebase 2025-11-06 17:55:01 +01:00
ManyTheFish
7f92dafa02 User context is no longer optional 2025-11-06 17:55:01 +01:00
ManyTheFish
cc5d12a368 Fix padding 2025-11-06 17:55:01 +01:00
ManyTheFish
0f98b996b5 Fix PR comments 2025-11-06 17:55:01 +01:00
ManyTheFish
d005ca5bf7 remove irrelevant tests 2025-11-06 17:55:01 +01:00
ManyTheFish
7e65fb1d3e feat(metrics): add personalization count to metrics endpoint
- Add MEILISEARCH_PERSONALIZED_SEARCH_REQUESTS metric to track personalized searches
- Increment metric directly in search analytics when personalization is used
- Metric automatically exposed in /metrics endpoint for monitoring
2025-11-06 17:55:01 +01:00
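A hedged sketch of how such a counter is typically registered and incremented with the `prometheus` crate; the metric name follows the commit message, while the registration style and help text are assumptions:

```rust
use lazy_static::lazy_static;
use prometheus::{register_int_counter, IntCounter};

lazy_static! {
    // Registered once in the default registry, then exposed by /metrics.
    pub static ref MEILISEARCH_PERSONALIZED_SEARCH_REQUESTS: IntCounter = register_int_counter!(
        "meilisearch_personalized_search_requests",
        "Number of search requests that used personalization"
    )
    .expect("failed to register the counter");
}

fn record_personalized_search() {
    // Called from the search analytics path when `personalize` is set.
    MEILISEARCH_PERSONALIZED_SEARCH_REQUESTS.inc();
}
```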
ManyTheFish
cdefb3f665 feat(analytics): add personalization tracking to segment analytics
- Add total_personalized field to SearchAggregator to track personalization usage
- Track when search requests include personalization parameters
- Include personalization data in analytics JSON output
- Maintain clean personalization service interface
2025-11-06 17:55:01 +01:00
ManyTheFish
a91887221a refactor(personalization): improve Cohere reranking logic and error handling
- Replace and_then() with early return for missing personalization
- Simplify reranking by building new hits vector instead of swapping
- Add debug logging for reranked indices
- Fix potential index out-of-bounds issues in reranking
2025-11-06 17:55:01 +01:00
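A minimal sketch of the bounds-safe reranking described above: build a new hits vector from the reranked indices and skip any index that falls outside the original slice. Function and type names are illustrative:

```rust
/// Assumes the reranker returns indices into the original hits; out-of-range
/// indices are skipped instead of panicking.
fn apply_rerank<T: Clone>(hits: &[T], reranked_indices: &[usize]) -> Vec<T> {
    let mut reranked = Vec::with_capacity(hits.len());
    for &index in reranked_indices {
        // Guard against indices that point outside the hits slice.
        if let Some(hit) = hits.get(index) {
            reranked.push(hit.clone());
        }
    }
    reranked
}
```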
ManyTheFish
9c66b20a97 refactor: split PersonalizationService into enum with CohereService
- Refactor PersonalizationService as enum with Cohere and Uninitialized variants
- Create dedicated CohereService struct with rerank_search_results method
- Split constructor into cohere() and uninitialized() methods
- Move all Cohere logic into CohereService for better separation of concerns
- Update tests and lib.rs to use new API
- Improve code organization and maintainability
2025-11-06 17:55:01 +01:00
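A rough sketch of the enum split described in this commit, assuming a `Cohere`/`Uninitialized` dispatch; the real `CohereService` holds an HTTP client and calls Cohere's rerank endpoint, which is elided here:

```rust
// Illustrative only: types and signatures are simplified stand-ins.
enum PersonalizationService {
    Cohere(CohereService),
    Uninitialized,
}

struct CohereService {
    api_key: String,
}

impl PersonalizationService {
    fn cohere(api_key: String) -> Self {
        Self::Cohere(CohereService { api_key })
    }

    fn uninitialized() -> Self {
        Self::Uninitialized
    }

    fn rerank_search_results(&self, hits: Vec<String>, query: &str) -> Vec<String> {
        match self {
            // Delegate to Cohere when an API key was configured…
            Self::Cohere(service) => service.rerank_search_results(hits, query),
            // …and keep the original order otherwise.
            Self::Uninitialized => hits,
        }
    }
}

impl CohereService {
    fn rerank_search_results(&self, hits: Vec<String>, _query: &str) -> Vec<String> {
        // The real implementation calls Cohere's rerank endpoint here.
        let _ = &self.api_key;
        hits
    }
}
```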
ManyTheFish
a48283527e feat: refine personalization query by merging with user context
- Merge initial query with user context to create a comprehensive prompt
- Only skip reranking if both query and user_context are None
- Support reranking with query-only, user_context-only, or both
- Use 'let else' pattern for cleaner error handling
- Add comprehensive tests for different parameter combinations
- Improve prompt format for better reranking effectiveness
2025-11-06 17:55:01 +01:00
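A small sketch of the merging rule from this commit: rerank with query-only, user-context-only, or both, and skip only when both are `None`. The exact prompt format is an assumption:

```rust
fn build_rerank_prompt(query: Option<&str>, user_context: Option<&str>) -> Option<String> {
    match (query, user_context) {
        // Only skip reranking when there is neither a query nor a user context.
        (None, None) => None,
        (Some(q), Some(ctx)) => Some(format!("User context: {ctx}\nQuery: {q}")),
        (Some(q), None) => Some(format!("Query: {q}")),
        (None, Some(ctx)) => Some(format!("User context: {ctx}")),
    }
}
```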
ManyTheFish
73f78c19b0 refactor: rename personalization API fields and move checks inside service
- Rename 'personalization' field to 'personalize' in API
- Rename 'userProfile' to 'userContext' in personalization object
- Remove 'personalized' boolean field (activation now based on non-null 'personalize')
- Move personalization checks inside rerank_search_results function
- Use 'let else' pattern for better error handling
- Update error types and messages to reflect new field names
- Update all search routes and analytics to use new field names
2025-11-06 17:55:01 +01:00
ManyTheFish
34639e346e feat: add personalization service with EnglishV3-only reranking
- Add new personalization module with Cohere integration
- Implement rerank_search_results method using EnglishV3 model
- Remove fallback logic to EnglishV2 for simplified behavior
- Add comprehensive error handling and logging
- Include unit tests for service behavior
- Update search route to support personalization feature
2025-11-06 17:55:01 +01:00
ManyTheFish
7af2a254d6 feat: add personalization parameters to /search route
- Add Personalization struct with personalized boolean and user_profile string
- Add personalizationPersonalized and personalizationUserProfile query parameters to SearchQueryGet
- Follow same pattern as hybrid parameters (hybridEmbedder, hybridSemanticRatio)
- Add validation: personalizationUserProfile requires personalizationPersonalized
- Add error codes for personalization parameters
- Update analytics and facet search to handle new personalization field
- Remove serde dependencies from Personalization struct, use Deserr only
2025-11-06 17:55:01 +01:00
ManyTheFish
0f9d262a1c feat: add experimental_personalization_api_key feature to RoFeatures
- Add MEILI_EXPERIMENTAL_PERSONALIZATION_API_KEY environment variable
- Add experimental_personalization_api_key field to Opt struct with CLI and env support
- Add experimental_personalization_api_key field to InstanceTogglableFeatures
- Store personalization API key in FeatureData for access through IndexScheduler
- Add experimental_personalization_api_key() method to IndexScheduler
- Update analytics destructuring to include new field
- Maintain RoFeatures Copy trait while properly handling Option<String>
2025-11-06 17:55:01 +01:00
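A sketch of how such an option is usually wired with `clap` (derive and env features), combining a CLI flag with the `MEILI_EXPERIMENTAL_PERSONALIZATION_API_KEY` environment variable; the struct below is a stand-in for Meilisearch's real `Opt`:

```rust
use clap::Parser;

#[derive(Parser, Debug)]
struct Opt {
    /// API key used by the experimental search personalization feature.
    #[clap(long, env = "MEILI_EXPERIMENTAL_PERSONALIZATION_API_KEY")]
    experimental_personalization_api_key: Option<String>,
}

fn main() {
    let opt = Opt::parse();
    println!(
        "personalization configured: {}",
        opt.experimental_personalization_api_key.is_some()
    );
}
```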
Louis Dureuil
747476a225 Add customMetadata to all documents routes 2025-11-06 17:07:18 +01:00
Clément Renault
34765b556b Merge pull request #5948 from meilisearch/new-s3-snapshots
Upload snapshot tarballs to S3
2025-11-06 15:10:37 +00:00
Kerollmops
dfb4860578 Make some parameters experimental and link the dedicated issue 2025-11-06 12:43:12 +01:00
Kerollmops
ce62713f02 Always create the update_files directory 2025-11-06 12:43:12 +01:00
Kerollmops
8b5d04d60f Move the code to append a file to the tarball in a dedicated function 2025-11-06 12:07:08 +01:00
Kerollmops
1b74709b91 Extract more logic into dedicated functions 2025-11-06 12:00:25 +01:00
Kerollmops
a5c0a282c5 Better document safety problems and unwraps 2025-11-06 11:26:45 +01:00
Kerollmops
4fc048ff20 Better handle errors when trying to clone the handles 2025-11-06 11:15:34 +01:00
Kerollmops
375b5600cd Move the tarball streaming into a dedicated function 2025-11-06 11:11:26 +01:00
Kerollmops
32b997d817 Extract the streaming closure into an async function 2025-11-06 11:04:16 +01:00
Kerollmops
ff3090e3cc Remove the slash-path dependency 2025-11-06 10:53:07 +01:00
Kerollmops
6c6645f945 Better seeking-to-the-start explanation 2025-11-06 10:45:47 +01:00
Clément Renault
af6473d999 Improve the comments
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2025-11-06 10:43:14 +01:00
Kerollmops
11851f9701 Fix warnings on Windows 2025-11-05 16:40:33 +01:00
Kerollmops
cc4654eabd Remove useless clippy flag 2025-11-05 16:38:05 +01:00
Kerollmops
0bb91f4a77 Immediately panic when using S3 snapshot options on Windows 2025-11-05 15:16:43 +01:00
Kerollmops
f9d57f54df Keep the same naming behavior as before 2025-11-05 15:16:43 +01:00
Kerollmops
3ef1afc0f1 Introduce some backoff retries 2025-11-05 15:16:43 +01:00
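A generic exponential-backoff helper in the spirit of this commit; the delays, cap, and attempt count are placeholders rather than the retry policy actually used for the S3 uploads:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible operation, doubling the delay between attempts.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    mut operation: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(100);
    let mut attempt = 0;
    loop {
        match operation() {
            Ok(value) => return Ok(value),
            // Give up and surface the last error once the attempts are exhausted.
            Err(err) if attempt + 1 >= max_attempts => return Err(err),
            Err(_) => {
                attempt += 1;
                sleep(delay);
                // Double the delay, capped at a few seconds.
                delay = (delay * 2).min(Duration::from_secs(5));
            }
        }
    }
}
```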
Kerollmops
dbb5abebb6 Support cancelation 2025-11-05 15:16:42 +01:00
Kerollmops
700f33bd39 Clean up the code 2025-11-05 15:16:42 +01:00
Kerollmops
d01bbbccde Support clean CLI options 2025-11-05 15:16:42 +01:00
Kerollmops
4fc506f267 Remove unused imports/code on Windows 2025-11-05 15:16:42 +01:00
Kerollmops
dc456276e5 Make clippy happy 2025-11-05 15:16:42 +01:00
Kerollmops
b2ea50cb10 Make the compression level configurable 2025-11-05 15:16:42 +01:00
Kerollmops
5074cf92ab Disable compression entirely to avoid being CPU bound 2025-11-05 15:16:42 +01:00
Clément Renault
a92bc8d192 Improve the way we create the snapshot path 2025-11-05 15:16:42 +01:00
Clément Renault
ee538cf045 Remove useless dependencies 2025-11-05 15:16:42 +01:00
Kerollmops
2b05d63a0f Make it finally work, but without async on the write side 2025-11-05 15:16:42 +01:00
Kerollmops
104e8918ce Seeking the tasks/data.mdb file to the beginning did the trick 2025-11-05 15:16:42 +01:00
Kerollmops
d6ec4d4f4a Improve understanding of S3-related errors 2025-11-05 15:16:42 +01:00
Kerollmops
f0e7326b7a Retrieve the bytesMut only when released 2025-11-05 15:16:42 +01:00
Kerollmops
c8106a0006 Fix minimum part size 2025-11-05 15:16:42 +01:00
Kerollmops
c9ab5bc0b6 Improve error messaging when missing env var 2025-11-05 15:16:42 +01:00
Clément Renault
5e0f15fd43 WIP 2025-11-05 15:16:42 +01:00
Kerollmops
4c30f090c7 WIP Do more tests 2025-11-05 15:16:42 +01:00
Clément Renault
63f247cdda WIP sending multiparts of 250MiB 2025-11-05 15:16:42 +01:00
Clément Renault
e109fa9529 Rename the update_path function 2025-11-05 15:16:42 +01:00
Clément Renault
76e4ec2168 Generate an async tarball 2025-11-05 15:16:42 +01:00
Kerollmops
982babdb74 WIP 2025-11-05 15:16:42 +01:00
Kerollmops
7ae2ae33d9 Make max in-flight parts for upload configurable 2025-11-05 15:16:42 +01:00
Kerollmops
cb0788ae07 Use a good mem advice for uploads 2025-11-05 15:16:42 +01:00
Kerollmops
cb3e5dc234 Move the S3 snapshots to disk into a dedicated method 2025-11-05 15:16:41 +01:00
Clément Renault
59d40a2821 Upload ten parts at a time 2025-11-05 15:16:41 +01:00
Clément Renault
98a678e73d Use the Bytes crate to send the parts 2025-11-05 15:16:41 +01:00
Clément Renault
70292aae3c Upload indexes under their uuids 2025-11-05 15:16:41 +01:00
Clément Renault
73521f0069 Initial working S3 uploads to RustFS 2025-11-05 15:16:41 +01:00
Louis Dureuil
4533179604 Pass tokio handle to index-scheduler 2025-11-05 15:16:41 +01:00
Clément Renault
1a21cc1a17 Merge pull request #5959 from meilisearch/parallelize-word-prefix-docids
Parallelize the word prefix docids
2025-11-05 10:10:07 +00:00
Clément Renault
d08042f8a7 Merge pull request #5967 from meilisearch/bump-lmdb-version
Fix the LMDB fork memory leak
2025-11-05 09:33:59 +00:00
Louis Dureuil
77aadb5f22 Merge pull request #5968 from meilisearch/engprod-2116-privilege-escalation-from-webhook-api-key-to-master-key
Redact Authorization header in webhooks
2025-11-05 08:51:07 +00:00
Kerollmops
4fd913f7eb Use the latest version of heed 2025-11-05 09:42:03 +01:00
Louis Dureuil
4b72e54ca7 Test webhook with metadata 2025-11-04 17:41:29 +01:00
Louis Dureuil
adef2cc132 test remote auto sharding with and without metadata 2025-11-04 17:41:28 +01:00
Louis Dureuil
533b9951b1 Allow adding custom metadata in tests 2025-11-04 17:41:28 +01:00
Louis Dureuil
9103cbc9db Add custom metadata to payload 2025-11-04 17:41:28 +01:00
Louis Dureuil
083de2bfc1 Allow to attach customMetadata in the document addition or update tasks 2025-11-04 17:41:28 +01:00
Louis Dureuil
8618a4d2ba document hide_secret 2025-11-04 17:03:12 +01:00
Hongxu Xu
08bc982748 Remove unused dependency allocator-api2 2025-11-04 03:29:24 +00:00
Louis Dureuil
e9c5df7993 happy clippy 2025-11-03 17:23:27 +01:00
Louis Dureuil
8a28b3aa77 Update snap 2025-11-03 15:52:35 +01:00
Louis Dureuil
1a0b100ad9 rename webhook to highlight redaction 2025-11-03 15:52:22 +01:00
Louis Dureuil
ff93563f41 Redact webhook authorize header on display 2025-11-03 15:51:56 +01:00
Louis Dureuil
2f25258191 Extract crate::settings::hide_secret as a public function 2025-11-03 15:50:37 +01:00
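A sketch of a `hide_secret`-style redaction helper: keep a short recognizable prefix and mask the rest. The exact masking rules Meilisearch applies to the webhook Authorization header may differ:

```rust
fn hide_secret(secret: &str) -> String {
    // Number of leading characters left visible (an assumption for this sketch).
    const VISIBLE: usize = 4;
    let total = secret.chars().count();
    if total <= VISIBLE {
        "*".repeat(total)
    } else {
        let visible: String = secret.chars().take(VISIBLE).collect();
        format!("{visible}{}", "*".repeat(total - VISIBLE))
    }
}

fn main() {
    // The Authorization header value is redacted before being displayed or logged.
    println!("{}", hide_secret("Bearer 1234567890abcdef"));
}
```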
Clément Renault
2859079c32 Merge pull request #5961 from meilisearch/add-flickr-demo
Add Flickr example to README
2025-11-03 14:44:54 +00:00
Clément Renault
74b83d305f Add Flickr example to README
This PR adds the new Flickr demo to the README.
2025-10-29 18:27:36 +01:00
Clément Renault
70f6e4b828 Parallelize the word prefix docids 2025-10-27 17:20:12 +01:00
Clément Renault
6df196034e Merge pull request #5950 from meilisearch/update-version-v1.24.0
Update version to v1.24.0
2025-10-20 11:17:15 +00:00
Clément Renault
a63762737c Upgrade index scheduler 2025-10-20 12:22:27 +02:00
Clément Renault
77394bd4b9 Update insta tests 2025-10-20 10:54:16 +02:00
Clément Renault
cb87201c8b Fix dumpless upgrade and do nothing 2025-10-20 10:42:35 +02:00
Clément Renault
1a9c38794f Bump version to v1.24.0 2025-10-20 10:38:48 +02:00
Clément Renault
34233efb63 Merge pull request #5946 from meilisearch/fix-compaction-issues
Improve compaction behaviors
2025-10-16 15:42:38 +00:00
Clément Renault
af0608ebd6 Continue to the next index if the index doesn't exist 2025-10-16 16:39:51 +02:00
Clément Renault
8c7e5c094e Improve the task batch stopped message 2025-10-16 16:39:50 +02:00
Clément Renault
c064737137 Remove duplicated logic in auto batching of tasks 2025-10-16 16:33:20 +02:00
Clément Renault
1d188a7ad3 Make the compaction tasks a priority over the export ones 2025-10-16 13:01:23 +02:00
Clément Renault
66a6b65716 Merge pull request #5945 from meilisearch/search-cutoff-vector-store
Search cutoff vector store
2025-10-16 09:43:20 +00:00
Louis Dureuil
326652a399 Update hannoy 2025-10-16 10:34:54 +02:00
Louis Dureuil
59316e8d5a add unit test 2025-10-16 10:34:20 +02:00
Louis Dureuil
76d7f20c87 fix snap 2025-10-16 10:34:19 +02:00
Louis Dureuil
380b2797a5 Share the same budget for all queries of a given index in federated search 2025-10-16 10:34:19 +02:00
Clémentine
1dd58f9bec Merge pull request #5866 from PedroTroller/build/alpine3.22
Bump Dockerfile alpine version to 3.22
2025-10-16 07:22:43 +00:00
Kerollmops
ddc76ad0dc Delete the leftover compaction files from canceled operations 2025-10-15 16:49:25 +02:00
Kerollmops
ffacf1c002 Introduce the new IndexMapper index path method 2025-10-15 16:49:25 +02:00
Kerollmops
5a49b93b77 Use constant tempfile name to reuse tempfile 2025-10-15 16:49:25 +02:00
Louis Dureuil
918a6eaec9 Implement for vector store ranking rule 2025-10-15 16:31:47 +02:00
Louis Dureuil
1e6ce70e3e "Uninteresting" ranking rule implementations 2025-10-15 16:31:47 +02:00
Louis Dureuil
b418054ee4 Change bucket_sort logic to pass the time budget and allow for retrieving non-blocking buckets 2025-10-15 16:31:47 +02:00
Louis Dureuil
58f30e9d8a Change RankingRule trait to account for budget 2025-10-15 16:31:46 +02:00
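A minimal sketch of a shared time budget like the one these commits thread through `bucket_sort` and the `RankingRule` trait; the real `TimeBudget` type in Meilisearch has more to it than this:

```rust
use std::time::{Duration, Instant};

#[derive(Clone)]
struct TimeBudget {
    started_at: Instant,
    budget: Duration,
}

impl TimeBudget {
    fn new(budget: Duration) -> Self {
        Self { started_at: Instant::now(), budget }
    }

    fn exceeded(&self) -> bool {
        self.started_at.elapsed() >= self.budget
    }
}

fn bucket_sort_step(budget: &TimeBudget) {
    // Early return if the time budget is already exceeded, as in the commit above.
    if budget.exceeded() {
        return;
    }
    // …otherwise compute the next bucket of results.
}
```

Sharing one clone of such a budget across all queries of a given index is what lets federated search respect a single cutoff per index.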
Many the fish
c45172a4bf Merge pull request #5942 from meilisearch/meili-bot-patch-1
Adapt the standards of prototypes
2025-10-15 11:22:03 +00:00
meili-bot
221ba20083 Adapt the standards of prototypes 2025-10-15 10:47:23 +02:00
Many the fish
93c5fbbb8b Merge pull request #5926 from meilisearch/search-metadata
Search metadata
2025-10-14 14:13:42 +00:00
ManyTheFish
22d529523a refactor: extract query metadata building logic into separate function 2025-10-14 14:39:07 +02:00
ManyTheFish
ed6f479940 Remove irrelevant test index method 2025-10-14 12:10:17 +02:00
ManyTheFish
f19f712433 Add local remote name when a remote federated search is made 2025-10-14 12:10:17 +02:00
ManyTheFish
24a92c2809 move constant header into search/mod.rs 2025-10-14 12:10:17 +02:00
ManyTheFish
443cc24408 --amend 2025-10-14 12:10:17 +02:00
ManyTheFish
e8d5228250 factorize metadata header 2025-10-14 12:10:17 +02:00
ManyTheFish
5c33fb090c avoid opening each index twice and remove clones 2025-10-14 12:10:17 +02:00
ManyTheFish
48dd9146e7 Add comprehensive metadata tests with insta snapshots
- Add 9 test cases covering single search, multi-search, and federated search
- Test metadata header opt-in functionality with case insensitivity
- Test header false value handling
- Test UUID format validation and consistency
- Use insta snapshots for reliable, maintainable test assertions
- Fix header parsing to properly handle 'false' values
- Add helper methods for testing with custom headers
2025-10-14 12:10:17 +02:00
ManyTheFish
c1c42e818e refactor: group perform_search parameters into SearchParams struct
- Create SearchParams struct to group related parameters
- Update perform_search function to use SearchParams instead of 8 individual parameters
- Fix clippy warning about too many arguments
- Update all callers to use new SearchParams struct
2025-10-14 12:10:17 +02:00
ManyTheFish
519905ef9c Fix remote index collision with HashMap-based lookup
- Replace BTreeMap with HashMap for (remote, index_uid) -> primary_key lookup
- Prevents collisions when multiple remotes have same index_uid but different primary keys
2025-10-14 12:10:17 +02:00
ManyTheFish
f242377d2b Fix remote index collision in federated search metadata
- Use composite key (indexUid, remote) instead of indexUid only for remote metadata lookup
- Prevents collisions when multiple remotes have same indexUid but different primary keys
- Ensures each remote query gets correct primaryKey from its specific remote instance
2025-10-14 12:10:17 +02:00
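The two commits above address the same collision; the core idea is to key the primary-key lookup by both the remote and the index uid. A small illustration (remote and index names are made up, `None` stands for the local instance):

```rust
use std::collections::HashMap;

fn main() {
    // Keying by (remote, index_uid) prevents collisions when two remotes expose
    // an index with the same uid but different primary keys.
    let mut primary_keys: HashMap<(Option<String>, String), String> = HashMap::new();
    primary_keys.insert((Some("remote-a".into()), "movies".into()), "id".into());
    primary_keys.insert((Some("remote-b".into()), "movies".into()), "movie_id".into());
    primary_keys.insert((None, "movies".into()), "uid".into());

    let key = (Some("remote-b".to_string()), "movies".to_string());
    println!("primaryKey = {:?}", primary_keys.get(&key));
}
```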
ManyTheFish
da06306274 Add header-based metadata opt-in for search responses
- Add Meili-Include-Metadata header constant
- Modify perform_search to conditionally include metadata based on header
- Modify perform_federated_search to conditionally include metadata based on header
- Update all search routes to check for header and pass include_metadata parameter
- Forward Meili-Include-Metadata header to remote requests for federated search
- Ensure remote queries include primaryKey metadata when header is present
2025-10-14 12:10:17 +02:00
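A sketch of the opt-in header check, assuming an actix-web handler; the header name comes from the commit message, while the exact value parsing (case-insensitive "true", anything else treated as false) is an assumption based on the behavior described in the related test commit:

```rust
use actix_web::HttpRequest;

// Assumed constant name; the header itself is named in the commit message.
pub const MEILI_INCLUDE_METADATA: &str = "Meili-Include-Metadata";

fn include_metadata(req: &HttpRequest) -> bool {
    req.headers()
        .get(MEILI_INCLUDE_METADATA)
        .and_then(|value| value.to_str().ok())
        .map(|value| value.eq_ignore_ascii_case("true"))
        .unwrap_or(false)
}
```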
ManyTheFish
b93b803a2e WIP: Add metadata field with queryUid, indexUid, primaryKey, and remote
- Add SearchMetadata struct with queryUid, indexUid, primaryKey, and remote fields
- Update SearchResult to include metadata field
- Update FederatedSearchResult to include metadata array
- Refactor federated search metadata building to maintain query order
- Support primary key extraction from both local and remote results
- Add remote field to identify remote instance for federated queries
- Ensure metadata array matches query order in federated search

Features:
- queryUid: UUID v7 for each query
- indexUid: Index identifier
- primaryKey: Primary key field name (null if not available)
- remote: Remote instance name (null for local queries)

This provides complete traceability for search operations across local and remote instances.
2025-10-14 12:10:17 +02:00
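A sketch of the metadata object described here, assuming serde serialization and the `uuid` crate's v7 support; the field names follow the commit message, everything else is illustrative:

```rust
use serde::Serialize;
use uuid::Uuid;

#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct SearchMetadata {
    /// UUID v7, time-ordered, generated per query.
    query_uid: Uuid,
    index_uid: String,
    /// `null` when the index has no primary key yet.
    primary_key: Option<String>,
    /// `null` for local queries, the remote name for federated remote queries.
    remote: Option<String>,
}

fn main() {
    let metadata = SearchMetadata {
        query_uid: Uuid::now_v7(),
        index_uid: "movies".to_string(),
        primary_key: Some("id".to_string()),
        remote: None,
    };
    println!("{}", serde_json::to_string_pretty(&metadata).unwrap());
}
```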
ManyTheFish
cf43ec4aff feat: add indexUid to SearchMetadata
- Add indexUid field to SearchMetadata struct
- Update perform_search to include indexUid in metadata
- Update federated search to include indexUid for each query

The metadata field now contains both queryUid and indexUid:
- For /search: single object with queryUid and indexUid
- For /multi-search: each result has metadata with both fields
- For federated search: array of objects, each with queryUid and indexUid
2025-10-14 12:10:17 +02:00
ManyTheFish
9795d98e77 feat: add metadata field with queryUid to search responses
- Add SearchMetadata struct with queryUid field (UUID v7)
- Add metadata field to SearchResult for /search route
- Add metadata field to FederatedSearchResult for /multi-search route
- Update perform_search to generate queryUid and set metadata
- Update federated search to generate queryUid for each query
- Update multi-search non-federated path to include metadata
- Fix pattern matching in analytics and other code

The metadata field contains:
- For /search: single object with queryUid
- For /multi-search: array of objects, one per query
- For federated search: array of objects, one per query

All queryUid values are generated using Uuid::now_v7() for time-ordered uniqueness.
2025-10-14 12:10:17 +02:00
Clément Renault
316b4c047f Merge pull request #5940 from meilisearch/update-version-v1.23.0
Update version v1.23.0
2025-10-13 12:50:52 +00:00
Kerollmops
1d701c6980 Fix upgrade tests 2025-10-13 10:40:15 +02:00
Kerollmops
0203adb9cb Add a no-op when upgrading the index scheduler 2025-10-13 10:28:31 +02:00
Kerollmops
0d05c2ad6e Add a no-op when upgrading the index 2025-10-13 10:24:57 +02:00
Kerollmops
b3f44c4abd Bump the version to 1.23.0 2025-10-13 09:47:20 +02:00
Clémentine
62115f57b1 Merge pull request #5938 from meilisearch/attempt-license-fix-again
Try to fix GH license detection again
2025-10-09 16:32:40 +00:00
Louis Dureuil
9023172139 Add a dedicated LICENSE-MIT file containing the unmodified MIT license 2025-10-09 16:24:18 +02:00
Louis Dureuil
59631afd9a Merge pull request #5929 from meilisearch/compaction-task
Introduce a task to compact an index
2025-10-09 11:30:01 +00:00
Clément Renault
c2584c6edd Merge pull request #5936 from meilisearch/merge-v1.22.3-back
Merge v1.22.3 back into main
2025-10-09 08:45:33 +00:00
Louis Dureuil
685663af3c bump cellulite to address backcompat issue from #5307 2025-10-09 10:20:58 +02:00
Louis Dureuil
72b4b41443 Read MEILI_EXPERIMENTAL_REMOTE_SEARCH_TIMEOUT_SECONDS to override the default timeout in remote federated search 2025-10-09 09:34:49 +02:00
Louis Dureuil
70aa768d48 Update ignored test 2025-10-09 09:34:48 +02:00
Louis Dureuil
6029677eec Also raise the global deadline 2025-10-09 09:34:48 +02:00
Louis Dureuil
3c78f4121e Raise timeout to 30secs 2025-10-09 09:34:48 +02:00
Clémentine
89170dd78f Merge pull request #5935 from meilisearch/remove-release-drafter
Remove release-drafter and encourage usage of GitHub generated notes
2025-10-08 16:42:51 +00:00
Many the fish
6379a62d95 Merge pull request #5933 from meilisearch/fix-ranking-score-with-sort
Fix ranking score bug when sort is present
2025-10-08 16:23:12 +00:00
curquiza
4c05c0cf96 Remove release-drafter and encourage usage of GitHub generated notes 2025-10-08 17:35:33 +02:00
ManyTheFish
ce832da16c Add a function documentation 2025-10-08 17:19:40 +02:00
Louis Dureuil
14de657d36 Use the "currently_processing_index" to avoid potentially blocking the search during compaction 2025-10-08 15:45:38 +02:00
Kerollmops
9a36c090bf Do not return the EnvClosingEvent 2025-10-08 15:38:45 +02:00
Kerollmops
3aca010b42 Recompute the stats 2025-10-08 15:33:12 +02:00
Clément Renault
62c11ce3f3 Fix comments 2025-10-08 15:33:12 +02:00
Clément Renault
f358538f4f Improve the pre-compaction size information 2025-10-08 15:33:12 +02:00
Clément Renault
9068857ba1 Make the tests pass 2025-10-08 15:33:12 +02:00
Clément Renault
d241157084 Make Clippy happy 2025-10-08 15:33:12 +02:00
Clément Renault
69f73b1d74 Introduce a function to effectively close an index 2025-10-08 15:33:12 +02:00
Clément Renault
202794f620 Expose the env closing event so we can wait for the index to close 2025-10-08 15:33:12 +02:00
Kerollmops
38cbd54604 Implement the index compaction task 2025-10-08 15:33:12 +02:00
Kerollmops
3877e0043c Rename operation to IndexCompaction 2025-10-08 15:33:12 +02:00
Clément Renault
f95398420b Add the necessary batches and tasks in the process 2025-10-08 15:33:11 +02:00
Clément Renault
53905c1362 Add a new CompactIndex action 2025-10-08 15:33:11 +02:00
Clément Renault
113aac8815 Introduce a new /indexes/{indexUid}/compact route 2025-10-08 15:33:11 +02:00
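A small sketch of the "effectively close an index" step in this compaction work, assuming heed's `prepare_for_closing`/`EnvClosingEvent` API: the scheduler can block until the LMDB environment is really closed before compacting or swapping its files on disk:

```rust
use heed::Env;

/// Consumes the Env and waits until the environment is effectively closed.
fn close_index_and_wait(env: Env) {
    // Closing happens once every reference to the environment is dropped.
    let closing_event = env.prepare_for_closing();
    // Block until the environment is really closed.
    closing_event.wait();
}
```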
ManyTheFish
d2071dde1f Fix ranking score bug when sort is present
- Fix global_score function to properly handle semantic scores and ranking scores
- Prioritize semantic scores (vector/embedding) when available, fall back to ranking scores
- Exclude sort and geo sort details from relevance scoring
- Use Rank::global_score to properly merge ranking scores
- Add test case with insta snapshots to reproduce and verify the fix
- When sorting is present, ranking scores now properly reflect search relevance
- Previously all ranking scores were 1.0 when sort was present, now they show actual relevance scores
2025-10-08 11:23:43 +02:00
Many the fish
4502af5aed Merge pull request #5930 from meilisearch/synonym-performance-fix
Synonym performance fix
2025-10-07 15:17:34 +00:00
ManyTheFish
06af68aa07 Get rid of unwrap in get_synonym; we can't use get_or_insert_with because index.synonyms(..) returns a Result 2025-10-07 14:37:13 +02:00
ManyTheFish
6d378c6397 PERFORMANCE: Implement synonym caching to eliminate repeated database access
- Added SynonymCache to SearchContext to cache synonyms in memory
- Modified synonym retrieval to use cached synonyms after first load
- Eliminated redundant database calls for multi-word queries
- Performance improvement: 87% → 0ms for subsequent synonym processing
- Complex queries now process in 40ms vs 495ms (92% improvement)
2025-10-06 14:26:30 +02:00
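A sketch of the caching pattern described above: load the synonyms from the database once per search and reuse the in-memory map afterwards. `load_synonyms` stands in for `index.synonyms(rtxn)`, which returns a `Result` and therefore does not fit `get_or_insert_with` (see the follow-up commit above):

```rust
use std::collections::HashMap;

type Synonyms = HashMap<Vec<String>, Vec<Vec<String>>>;

struct SearchContext {
    synonym_cache: Option<Synonyms>,
}

impl SearchContext {
    fn synonyms(&mut self) -> Result<&Synonyms, std::io::Error> {
        if self.synonym_cache.is_none() {
            // First access: hit the database and remember the result.
            self.synonym_cache = Some(load_synonyms()?);
        }
        // Subsequent accesses reuse the in-memory map.
        Ok(self.synonym_cache.as_ref().unwrap())
    }
}

// Placeholder for the real database read.
fn load_synonyms() -> Result<Synonyms, std::io::Error> {
    Ok(HashMap::new())
}
```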
Clément Renault
ec0c0cf779 Merge pull request #5307 from meilisearch/parallel-bulk-facets
Parallelize bulk facets & word prefix fid/position docids
2025-10-06 12:08:52 +00:00
Kerollmops
851694e323 Fix a bug where prefixes were never deleted 2025-10-03 10:50:05 +02:00
Kerollmops
ea92c64fdc Fix a potential bug where prefixes were not deleted 2025-10-03 09:49:05 +02:00
Kerollmops
dc36f681be Fix the prefix post-processing algorithm 2025-10-03 09:42:29 +02:00
Clément Renault
48f1987a8d Improve facet post processing readability
Co-authored-by: Many the fish <many@meilisearch.com>
2025-10-03 09:42:29 +02:00
Many the fish
b98e2cef81 Merge pull request #5863 from meilisearch/add-request-uid-to-search-routes
Add request uid to search routes
2025-10-02 10:09:31 +00:00
Clément Renault
9f79ce82af Introduce new CLI arguments to deactivate experimental post processing 2025-10-02 12:06:33 +02:00
Clément Renault
5f18a9b2ee Move dependencies to actual versions 2025-10-02 11:00:48 +02:00
Clément Renault
7f8a1ac0be Remove useless heed path 2025-10-01 16:19:58 +02:00
Clément Renault
1a67163ee8 Use git cellulite in case 2025-10-01 16:02:07 +02:00
Clément Renault
38141de68d Use local heed in case 2025-10-01 16:01:58 +02:00
Clément Renault
7a98b80687 Use temporary git repo for hannoy and arroy in nested-rtxns pre-version 2025-10-01 15:28:36 +02:00
Kerollmops
229a12c8e6 Multithread word prefix position docids 2025-10-01 15:18:21 +02:00
Kerollmops
2fdfe79400 Make clippy happy 2025-10-01 15:09:59 +02:00
Kerollmops
9184b12a26 Fix the algorithm 2025-10-01 15:09:59 +02:00
Kerollmops
742378d8e1 Multi-thread the facet bulk processing 2025-10-01 15:09:59 +02:00
Kerollmops
6dcd739a8b Patch heed to create multiple nested RoTxns 2025-10-01 15:09:59 +02:00
ManyTheFish
f97384da6c Fix geo_json snapshots 2025-09-30 17:03:21 +02:00
ManyTheFish
6ea76f2771 Add uuid v7 feature 2025-09-30 15:42:03 +02:00
ManyTheFish
715b255371 fix tests 2025-09-30 15:42:03 +02:00
ManyTheFish
db094d3923 Add requestUid field in search response and add debug logs with requestUid 2025-09-30 15:42:03 +02:00
Many the fish
c29bdcae23 Merge pull request #5913 from meilisearch/dependabot/github_actions/actions/setup-python-6
Bump actions/setup-python from 5 to 6
2025-09-29 14:58:45 +00:00
Many the fish
75219181a3 Merge pull request #5834 from meilisearch/fix-openapi-ci
Minor improvement in OpenAPI CI
2025-09-29 13:55:12 +00:00
Many the fish
a5b5cf7cd1 Merge pull request #5916 from meilisearch/dependabot/github_actions/sigstore/cosign-installer-3.10.0
Bump sigstore/cosign-installer from 3.9.2 to 3.10.0
2025-09-29 13:52:31 +00:00
Many the fish
142ba8ea00 Merge pull request #5915 from meilisearch/dependabot/github_actions/actions/setup-node-5
Bump actions/setup-node from 4 to 5
2025-09-29 13:52:28 +00:00
Many the fish
4bc823e07c Merge pull request #5914 from meilisearch/dependabot/github_actions/actions/setup-dotnet-5
Bump actions/setup-dotnet from 4 to 5
2025-09-29 13:52:10 +00:00
Many the fish
db06ca7138 Merge pull request #5912 from meilisearch/dependabot/github_actions/actions/setup-go-6
Bump actions/setup-go from 5 to 6
2025-09-29 13:52:06 +00:00
Clément Renault
95595a768e Merge pull request #5911 from EazyAl/main
Update README.md to fix newsletter link
2025-09-29 13:10:16 +00:00
dependabot[bot]
36f649768e Bump sigstore/cosign-installer from 3.9.2 to 3.10.0
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.2 to 3.10.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d58896d6a1...d7543c93d8)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:01:14 +00:00
dependabot[bot]
0c6fc243f2 Bump actions/setup-node from 4 to 5
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4 to 5.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:01:11 +00:00
dependabot[bot]
dfc46d5627 Bump actions/setup-dotnet from 4 to 5
Bumps [actions/setup-dotnet](https://github.com/actions/setup-dotnet) from 4 to 5.
- [Release notes](https://github.com/actions/setup-dotnet/releases)
- [Commits](https://github.com/actions/setup-dotnet/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-dotnet
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:01:08 +00:00
dependabot[bot]
11d55f2121 Bump actions/setup-python from 5 to 6
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5 to 6.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:01:03 +00:00
dependabot[bot]
014da57cf6 Bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:01:00 +00:00
Clément Renault
70a0ff4a8f Merge pull request #5900 from meilisearch/show-dependencies
Show Dependabot dependency upgrade in the changelog
2025-09-25 16:04:03 +00:00
Clément Renault
dd0d5e4b90 Merge pull request #5910 from meilisearch/curquiza-patch-1
Change Java version in SDK CI
2025-09-25 14:32:16 +00:00
Ali Imran
15b3bb1700 Update README.md to fix newsletter link 2025-09-25 16:07:08 +02:00
Louis Dureuil
077ec2ab11 Merge pull request #5908 from meilisearch/update-version
Update version
2025-09-25 13:10:34 +00:00
Clémentine
f25db0795e Change Java version in SDK CI
Updated Java version and distribution in workflow.
2025-09-25 15:03:50 +02:00
Tamo
c50a337c29 bump version for 1.22.1 2025-09-25 13:44:44 +02:00
Tamo
efeae09ce1 Merge pull request #5906 from meilisearch/task-deletion-strategy
Delete oldest tasks first
2025-09-25 10:11:33 +00:00
Tamo
ad55b48664 Merge pull request #5907 from meilisearch/fix-geojson-bug
use the latest version of zerometry that supports collection, lines and multi-lines
2025-09-25 09:56:01 +00:00
Tamo
94eabd34e6 fmt 2025-09-25 11:01:53 +02:00
Tamo
6935589f74 use the latest version of zerometry that supports collection, lines and multi-lines 2025-09-25 10:31:07 +02:00
Louis Dureuil
4beb452027 Optimize by using from_sorted_iter
Co-authored-by: Tamo <tamo@meilisearch.com>
2025-09-25 10:16:30 +02:00
Louis Dureuil
b722da303a Do not start from the end of the finished tasks when selecting the tasks to delete 2025-09-25 09:54:58 +02:00
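Task ids are allocated in increasing order, so selecting the oldest tasks yields an already-sorted sequence; a small illustration of the `RoaringBitmap::from_sorted_iter` optimization mentioned above (the id range is made up):

```rust
use roaring::RoaringBitmap;

fn main() {
    // Building from a sorted iterator is cheaper than inserting ids one by one
    // in arbitrary order.
    let oldest_first = 0u32..10_000;
    let to_delete = RoaringBitmap::from_sorted_iter(oldest_first)
        .expect("the iterator is sorted");
    println!("{} tasks selected for deletion", to_delete.len());
}
```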
Louis Dureuil
ad39263b94 Merge pull request #5902 from meilisearch/bump-version
bump the version of meilisearch
2025-09-24 07:23:39 +00:00
Tamo
0ffb08b112 bump the version of meilisearch 2025-09-23 17:37:31 +02:00
Clément Renault
ff80b4d0ff Merge pull request #5891 from nnethercott/fix-hannoy-arroy-conversion
Bump `hannoy` to v0.0.8
2025-09-23 13:26:54 +00:00
Louis Dureuil
7fb4404928 Merge pull request #5758 from meilisearch/cellulite
Cellulite integration
2025-09-23 12:48:13 +00:00
Tamo
8405f0bf9c fmt 2025-09-23 13:55:36 +02:00
Tamo
3a7f9b56fe update cellulite 2025-09-23 13:55:36 +02:00
Louis Dureuil
61034e2e2e write geojson in obkv 2025-09-23 13:55:36 +02:00
Tamo
108d6d3344 remove a bunch of useless logs 2025-09-23 13:55:36 +02:00
Tamo
35bd00f6a1 continue previous commit 2025-09-23 13:55:36 +02:00
Tamo
69059d67ef stop returning the geojson field when iterating on the fields 2025-09-23 13:55:36 +02:00
Tamo
e13783103f use the CELLULITE constant 2025-09-23 13:55:36 +02:00
Tamo
f719665c4e update the filter-parser after updating its error messages 2025-09-23 13:55:36 +02:00
Tamo
638f284614 densify the shapes before storing them 2025-09-23 13:55:36 +02:00
Tamo
32ac98ed95 style improvement 2025-09-23 13:55:36 +02:00
Tamo
46aee695ca review the filters errors 2025-09-23 13:55:36 +02:00
Tamo
716c67f858 review and fix all error codes 2025-09-23 13:55:36 +02:00
Tamo
fec10bb2d6 update cellulite to the latest version 2025-09-23 13:55:36 +02:00
Mubelotix
3dac2cf73e Update tests 2025-09-23 13:55:36 +02:00
Mubelotix
03eca800e6 Support _geoRadius 2025-09-23 13:55:36 +02:00
Mubelotix
28fa2e960e Tolerate trailing comma 2025-09-23 13:55:36 +02:00
Mubelotix
a3b9220f84 Improve error message 2025-09-23 13:55:36 +02:00
Mubelotix
c09d48edf2 Fix coordinates order in filters 2025-09-23 13:55:36 +02:00
Mubelotix
ae4ab0ebbb Improve filter parser errors 2025-09-23 13:55:36 +02:00
Mubelotix
900a9a6d59 Reduce indentation 2025-09-23 13:55:36 +02:00
Mubelotix
fc560e6730 Improve geo polygon errors 2025-09-23 13:55:36 +02:00
Mubelotix
e2a06470b7 Update tests 2025-09-23 13:55:36 +02:00
Mubelotix
ada27323f2 Rename file 2025-09-23 13:55:36 +02:00
Mubelotix
607a1c2395 Add geo bounding box filter 2025-09-23 13:55:36 +02:00
Mubelotix
b56956ea0c Optimize geojson channels 2025-09-23 13:55:36 +02:00
Mubelotix
3d21290f7f Add cellulite database sizes 2025-09-23 13:55:36 +02:00
Mubelotix
4edd4c06bc Fix trivial clippy warnings 2025-09-23 13:55:36 +02:00
Mubelotix
566baddc6b Optimize points removed serialization 2025-09-23 13:55:36 +02:00
Tamo
febe3186ce improve deletion 2025-09-23 13:55:36 +02:00
Tamo
5dd42c1871 remove useless log 2025-09-23 13:55:36 +02:00
Tamo
8670793e6e fix the cellulite spilling bug 2025-09-23 13:55:36 +02:00
Tamo
41a04aa3ab fix the cellulite integration 2025-09-23 13:55:36 +02:00
Tamo
88f841bc05 plug in the document deletion in cellulite 2025-09-23 13:55:36 +02:00
Tamo
d19892d2ea update to the latest version of cellulite and steppe 2025-09-23 13:55:36 +02:00
Tamo
c0905d6650 add the deletion in the new indexer 2025-09-23 13:55:36 +02:00
Tamo
576d7d94b1 fix the old indexer 2025-09-23 13:55:36 +02:00
Tamo
f4f1334b62 add a new _geoPolygon filter to query the cellulite database 2025-09-23 13:55:36 +02:00
Tamo
aaff6c3685 fmt 2025-09-23 13:55:36 +02:00
Tamo
42d2af4c84 finish plugging cellulite into the new indexer 2025-09-23 13:55:36 +02:00
Tamo
6be91c824c Cellulite is almost in the new indexer. We must add the documentID to the geojson pipeline 2025-09-23 13:55:36 +02:00
Tamo
6ee0537db8 add an extractor for cellulite in the new pipeline 2025-09-23 13:55:36 +02:00
Tamo
3fbeff4308 add cellulite to the old pipeline, it probably doesn't work 2025-09-23 13:55:36 +02:00
Tamo
375546b61a add a few helpers 2025-09-23 13:55:36 +02:00
Tamo
25a1d50763 add cellulite to the index 2025-09-23 13:55:36 +02:00
curquiza
6f0d26c22c Show dependency upgrade in the changelog for full transparency 2025-09-22 18:30:34 +02:00
Louis Dureuil
4fe073cc1a Merge pull request #5896 from meilisearch/fix-doc-template
Document template: Correctly render when indexing first item in array
2025-09-22 07:20:38 +00:00
Clément Renault
5cd3d36d20 Merge pull request #5897 from meilisearch/improve-prom
improve the prometheus content type we return
2025-09-18 16:18:16 +00:00
PedroTroller
9f4dcd04e9 Bump alpine version to 3.22 2025-09-18 17:08:36 +02:00
Tamo
d7ad76ea1e improve the prometheus content type we return 2025-09-18 17:04:13 +02:00
Louis Dureuil
e82bb93221 Fix indexing bug 2025-09-18 16:57:20 +02:00
Clément Renault
000cb93aad Merge pull request #5895 from meilisearch/fix-ci
Update the dtolnay action to 1.89
2025-09-18 14:56:45 +00:00
Tamo
ad4f5514b9 update the dtolnay action to 1.89 2025-09-18 15:52:39 +02:00
Louis Dureuil
8d29a29867 Merge pull request #5894 from meilisearch/fix-hannoy-unreachable-items
Bump Hannoy to fix unreachable documents
2025-09-18 13:33:34 +00:00
Kerollmops
d7de819d11 Bump Hannoy to fix unreachable documents 2025-09-18 14:26:13 +02:00
nnethercott
7a6cf30cb2 bump hannoy to 0.0.8 2025-09-18 11:23:57 +02:00
Tamo
e43d67591c Merge pull request #5892 from meilisearch/increase-msrv
increase rust version from 1.85 to 1.89
2025-09-17 08:26:08 +00:00
Tamo
134237d1eb update the toolchain for rustfmt 2025-09-16 17:45:49 +02:00
Tamo
26d9070aa7 increase rust version from 1.85 to 1.89 2025-09-16 17:21:33 +02:00
nnethercott
f9ffb8ada5 bump from hannoy 0.0.6 to 0.0.7 2025-09-16 12:00:36 +02:00
nnethercott
a47888f02c bump hannoy to 0.6 2025-09-16 11:02:46 +02:00
nnethercott
5bef2f4d86 Update arroy-hannoy conversion internals 2025-09-15 16:10:56 +02:00
Louis Dureuil
06b3ca9eb5 Merge pull request #5890 from meilisearch/upgrade-dumpless-for-v1.21
Update dumpless upgrade for v1.21
2025-09-15 09:47:52 +00:00
Louis Dureuil
7dc1c03a36 Update dumpless upgrade for v1.21 2025-09-15 10:46:40 +02:00
Louis Dureuil
0b74722a73 Merge pull request #5848 from meilisearch/update-charabia-v0.9.7
Add Persian support (update charabia to v0.9.7)
2025-09-15 08:22:11 +00:00
ManyTheFish
0f80249b70 Update Charabia v0.9.7 2025-09-15 09:33:21 +02:00
Tamo
a9b8a60320 Merge pull request #5886 from meilisearch/fix-decoding-error
Allow missing `search_fragments` and `indexing_fragments`
2025-09-11 14:13:57 +00:00
Louis Dureuil
fd795c513b add documentation warnings 2025-09-10 09:44:41 +02:00
Louis Dureuil
ce136ec0c1 Support missing search_fragments and indexing_fragments 2025-09-10 09:43:39 +02:00
Tamo
4d4f6d2c20 Merge pull request #5767 from meilisearch/arroy-becomes-hannoy
Add index setting to switch from arroy to hannoy
2025-09-09 17:59:45 +00:00
Louis Dureuil
4cc8fb2c5c Add comment about upgrade procedure
Co-authored-by: Tamo <tamo@meilisearch.com>
2025-09-09 17:42:33 +02:00
Tamo
5d47590f3e Merge pull request #5884 from meilisearch/fix-the-progress-trace
Fix the quantic progress trace
2025-09-09 15:01:19 +00:00
Louis Dureuil
16461a9145 add unit test 2025-09-09 14:58:14 +02:00
Tamo
17810394b8 fix the quantic progress trace 2025-09-09 11:04:54 +02:00
Louis Dureuil
15690b9e22 Merge branch 'main' into arroy-becomes-hannoy 2025-09-08 17:05:05 +02:00
Louis Dureuil
a8cd81c7f4 get_vector_store returns an option, handles it in Index::settings 2025-09-08 16:53:57 +02:00
Louis Dureuil
6376571df0 Add VectorStoreBackend to the list of components 2025-09-08 16:44:16 +02:00
Louis Dureuil
cfb040e647 remove extraneous space 2025-09-08 16:41:48 +02:00
Louis Dureuil
f54773781a Revert the fake 1.22 in index-scheduler as well 2025-09-08 15:00:02 +02:00
Louis Dureuil
0fccd0ca1f Merge pull request #5883 from meilisearch/update-to-v1.20
Update to v1.20
2025-09-08 08:50:48 +00:00
Louis Dureuil
226c102bab Update snapshot and upgrade proc 2025-09-08 10:00:44 +02:00
Louis Dureuil
2940bbb75c Update version to v1.20.0 2025-09-08 09:20:25 +02:00
Louis Dureuil
13df964564 Adopt neutral terminology where arroy/hannoy would be confusing 2025-09-03 16:11:40 +02:00
Clémentine
35b24a28aa Merge pull request #5873 from meilisearch/dependabot/github_actions/actions/checkout-5
Bump actions/checkout from 3 to 5
2025-09-03 13:18:51 +00:00
Louis Dureuil
0faf495173 cargo fmt 2025-09-03 14:49:24 +02:00
Louis Dureuil
c32c74671d Rename HannoyStats to VectorStoreStats
The stats can be provided by any backend
2025-09-03 14:45:31 +02:00
Louis Dureuil
b05bcf2c13 happy clippy 2025-09-03 14:13:08 +02:00
Louis Dureuil
90cc5263f6 Remove MEILI_EMBEDDINGS_CHUNK_SIZE 2025-09-03 13:57:58 +02:00
Louis Dureuil
424d0e277e Merge branch 'main' into arroy-becomes-hannoy-with-sharding 2025-09-03 13:46:35 +02:00
Louis Dureuil
34eba61c0d Add new tests 2025-09-03 13:42:56 +02:00
Louis Dureuil
687260bc13 Change approach to arroy <-> migration after encountering multiple issues 2025-09-02 17:49:22 +02:00
Tamo
0a3ab8e171 Merge pull request #5876 from meilisearch/specify-prometheus-protocol-version
Send the version when returning prometheus metrics
2025-09-02 13:24:36 +00:00
Louis Dureuil
6b6e69b07a rename Arroy to "stable" and Hannoy to "experimental" in setting values 2025-09-02 14:52:43 +02:00
Louis Dureuil
a25111f32e get old backend before it mutates 2025-09-02 14:52:18 +02:00
Tamo
b144d9ab2b fix warnings 2025-09-02 14:31:24 +02:00
Tamo
c3cefbc170 send the version when returning prometheus metrics 2025-09-02 12:40:18 +02:00
Clémentine
8e2aeb6739 Merge pull request #5874 from meilisearch/dependabot/github_actions/actions/setup-java-5
Bump actions/setup-java from 4 to 5
2025-09-02 09:11:19 +00:00
dependabot[bot]
9c06545ae3 Bump actions/setup-java from 4 to 5
Bumps [actions/setup-java](https://github.com/actions/setup-java) from 4 to 5.
- [Release notes](https://github.com/actions/setup-java/releases)
- [Commits](https://github.com/actions/setup-java/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-java
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-02 08:23:15 +00:00
dependabot[bot]
e1c859c0f7 Bump actions/checkout from 3 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-02 07:44:35 +00:00
Louis Dureuil
c4848e6cc0 Set back snapshot to what it was 2025-09-01 17:56:20 +02:00
Louis Dureuil
454581dbc9 Support progress 2025-09-01 17:48:50 +02:00
Louis Dureuil
bc5100dddd Update snap 2025-09-01 17:01:01 +02:00
Louis Dureuil
118c6da64d Update hannoy to v0.0.5 2025-09-01 16:42:08 +02:00
Louis Dureuil
a989f52657 Fix signature of backend change function 2025-09-01 16:38:39 +02:00
Louis Dureuil
a8cc66899c Derive ToSchema for VectorStoreBackend 2025-09-01 16:38:18 +02:00
Louis Dureuil
c9cc748f42 Mark get_vector_store as public 2025-09-01 16:37:52 +02:00
Louis Dureuil
4ccce18d7b Add settings route 2025-09-01 16:36:24 +02:00
Louis Dureuil
00d1006cd9 add experimental feature 2025-09-01 16:35:48 +02:00
Clémentine
5cad65cca5 Merge pull request #5869 from meilisearch/dependabot/cargo/tracing-subscriber-0.3.20
Bump tracing-subscriber from 0.3.19 to 0.3.20
2025-09-01 14:23:26 +00:00
Tamo
7fe9d07247 Merge pull request #5858 from shreeup/5835DispProgressTrace
Display the progressTrace in real time
2025-09-01 10:21:36 +00:00
Louis Dureuil
8933d87031 Make backend change cancelable 2025-09-01 12:10:57 +02:00
Louis Dureuil
231f86decf Refer to v1.19 and remove arroy -> hannoy dumpless upgrade 2025-09-01 12:10:13 +02:00
Louis Dureuil
381de52fc5 Add setting to change backend 2025-09-01 12:09:18 +02:00
dependabot[bot]
026b95afbb Bump tracing-subscriber from 0.3.19 to 0.3.20
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.19 to 0.3.20.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.19...tracing-subscriber-0.3.20)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.20
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-29 20:54:30 +00:00
Clémentine
210da70faf Merge pull request #5856 from arithmeticmean/main
Fix scheduled CI failure
2025-08-28 17:53:59 +00:00
Louis Dureuil
5fc7872ab3 Make sure the vector store works with both arroy and hannoy 2025-08-28 16:32:47 +02:00
Many the fish
1f0a6e8a44 Merge pull request #5862 from meilisearch/release-v1.19.1
Bring back v1.19.1 to main
2025-08-28 12:57:48 +00:00
Louis Dureuil
b2f2807a94 Integrate arroy with conversion capabilities 2025-08-28 14:43:04 +02:00
Shree
952394710c Merge remote-tracking branch 'origin/main' into 5835DispProgressTrace 2025-08-26 14:03:09 -07:00
Louis Dureuil
da6fffdf6d Switch from version to backend selector 2025-08-26 17:49:56 +02:00
Louis Dureuil
b5f0c19406 Split the vector module in submodules 2025-08-26 16:32:17 +02:00
Many the fish
0fd66a5317 Merge pull request #5860 from meilisearch/update-version-v1.19.1
Update version for the next release (v1.19.1) in Cargo.toml
2025-08-26 11:43:23 +00:00
Many the fish
cb4dd3b88c Merge pull request #5846 from meilisearch/update-arroy-v0.6.2
Update Arroy v0.6.2
2025-08-26 12:01:06 +02:00
ManyTheFish
0ade376b00 update version tests 2025-08-26 11:57:27 +02:00
ManyTheFish
32785cb2d0 Update version for the next release (v1.19.1) in Cargo.toml 2025-08-26 08:39:16 +00:00
Louis Dureuil
fb7ccc0db3 remove v1_17 2025-08-26 10:25:20 +02:00
Louis Dureuil
69a84fbfe6 update to v1.22 2025-08-26 10:25:19 +02:00
Clément Renault
31cb960992 Make clippy happy 2025-08-26 10:19:55 +02:00
Clément Renault
6d9e0c4bce Switch to hannoy 0.0.4 2025-08-26 10:19:54 +02:00
Kerollmops
a8e9597f49 Make cargo insta happy 2025-08-26 10:19:54 +02:00
Kerollmops
f4147a60a3 Remove the vector_store reference 2025-08-26 10:19:54 +02:00
Clément Renault
5139dd273e Depend on Hannoy from crates.io 2025-08-26 10:19:54 +02:00
Mubelotix
72c63d3929 Move code to the right file 2025-08-26 10:19:54 +02:00
Clément Renault
97ea9e9937 Make cargo fmt happy 2025-08-26 10:19:54 +02:00
Clément Renault
4645813ea8 Make clippy happy 2025-08-26 10:19:54 +02:00
Clément Renault
fb68f1241c Dispatch the vector store based on the index version 2025-08-26 10:19:54 +02:00
Clément Renault
f5f2f7c6f2 Make the VectorStore aware of the index version 2025-08-26 10:19:53 +02:00
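A rough sketch of what dispatching on the backend looks like; the real `VectorStore` wraps arroy/hannoy readers and writers, and the variant names below only echo the later "stable"/"experimental" setting values rather than actual Meilisearch types:

```rust
#[derive(Clone, Copy, Debug)]
enum VectorStoreBackend {
    Stable,       // arroy
    Experimental, // hannoy
}

struct VectorStore {
    backend: VectorStoreBackend,
}

impl VectorStore {
    fn search(&self, _query: &[f32]) -> Vec<u32> {
        match self.backend {
            // Each arm would call into the matching backend implementation.
            VectorStoreBackend::Stable => Vec::new(),
            VectorStoreBackend::Experimental => Vec::new(),
        }
    }
}
```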
Clément Renault
6340412219 Expose Hannoy progress when upgrading 2025-08-26 10:19:53 +02:00
Louis Dureuil
6e4dfa0168 First version of Hannoy dumpless upgrade 2025-08-26 10:19:52 +02:00
Louis Dureuil
5cf66856ae Merge pull request #5859 from meilisearch/revert-5857-license-detection
Revert "Fix license detection"
2025-08-26 07:53:17 +00:00
Clément Renault
0d4b78a217 Integrate the hannoy progress 2025-08-26 09:44:23 +02:00
Kerollmops
aef07f4bfa wip: Use Hamming when binary quantized 2025-08-26 09:44:23 +02:00
Clément Renault
0b3f983d27 Always use at least an ef = 100 when searching 2025-08-26 09:44:23 +02:00
Clément Renault
52d55ccd8e Switch to hannoy with support for deletions 2025-08-26 09:44:23 +02:00
Kerollmops
6d92c94bb3 Add a missing cancelation call for hannoy 2025-08-26 09:44:23 +02:00
Kerollmops
30110a0488 Reintroduce changing the distance from Cosine to Cosine binary quantized 2025-08-26 09:44:22 +02:00
Kerollmops
47cee7e1ea Bump Hannoy's version 2025-08-26 09:44:22 +02:00
Clément Renault
493d67ffd4 Increase efSearch from x2 to x10 2025-08-26 09:44:22 +02:00
Clément Renault
2b2559016a Increase efConstruction from 48 to 125 2025-08-26 09:44:22 +02:00
Clément Renault
6176b143bb remove-me: Introduce an env var to change the embeddings chunk size 2025-08-26 09:44:22 +02:00
Kerollmops
f9d0d1ddd6 Bump hannoy 2025-08-26 09:44:22 +02:00
Kerollmops
e50f970ab8 Use a more feature-full Hannoy version 2025-08-26 09:44:21 +02:00
Clément Renault
27550dafad Reintroduce arroy and support for dumpless upgrade from previous versions 2025-08-26 09:44:21 +02:00
Clément Renault
a7cd6853db Rename the vector store const name and keep the vector-arroy db name 2025-08-26 09:44:21 +02:00
Clément Renault
f51f7832a7 Rename the ArroyWrapper/HannoyWrapper into VectorStore 2025-08-26 09:44:21 +02:00
Clément Renault
a38a57acb6 Use constants as the hannoy default parameters 2025-08-26 09:44:21 +02:00
Kerollmops
affcaef556 Use Hannoy instead of arroy 2025-08-26 09:44:21 +02:00
Clémentine
7acac2f560 Revert "Fix license detection" 2025-08-26 08:51:07 +02:00
Shree
b68431367f run cargo fmt 2025-08-25 23:47:24 -07:00
Shree
79d3d1606c Display the progressTrace in real time #5835 2025-08-25 23:33:26 -07:00
Louis Dureuil
580bfb06b4 Merge pull request #5857 from meilisearch/license-detection
Fix license detection
2025-08-25 18:28:55 +00:00
curquiza
062c9c6971 Fix links 2025-08-25 19:39:24 +02:00
curquiza
07ed5c57e4 Fix license detection 2025-08-25 19:12:28 +02:00
Louis Dureuil
a94a13c9b0 Merge pull request #5849 from meilisearch/tmp-v1.19
Prepare for v1.19 release
2025-08-25 07:03:27 +00:00
arithmeticmean
938ef77ee5 Fix scheduled CI failure
Disabled default features on the meilisearch dependency in one crate to
prevent lindera from being pulled in during the scheduled CI build
2025-08-23 19:30:26 +05:30
Clément Renault
9dcdde592c Merge pull request #5729 from martin-g/5616-max-memory-in-container
Take into account the allowed max memory of the container
2025-08-21 14:43:32 +00:00
Louis Dureuil
7de44ad2b7 Add v1.19 in index-scheduler and index upgrades 2025-08-21 16:37:35 +02:00
Louis Dureuil
820854ba5c Update snapshots 2025-08-21 16:37:23 +02:00
Louis Dureuil
496de5563a Update version in Cargo.toml 2025-08-21 16:36:56 +02:00
ManyTheFish
0a86b1e11e Update Arroy v0.6.2
The new version of arroy contains a search optimization when there is few input candidates compared to the number of documents in the database
2025-08-21 09:37:17 +02:00
Clément Renault
795045c03a Merge pull request #5784 from meilisearch/sharding-split-docs
Sharding and EE license
2025-08-19 14:17:37 +00:00
Louis Dureuil
b541b7bed3 Change license text to clarify that EE files are in EE modules 2025-08-19 14:50:42 +02:00
Louis Dureuil
6fb3cf95e4 Move EE files into EE modules 2025-08-19 14:50:42 +02:00
Louis Dureuil
cbd2bdf0fa Fix snapshots 2025-08-19 14:50:42 +02:00
Louis Dureuil
601785692f Remove erroneous untagged annotation 2025-08-19 14:50:42 +02:00
Louis Dureuil
65c212d1fd camel case the fields in "origin" 2025-08-19 14:50:42 +02:00
Louis Dureuil
85feb3a26c Rename Body::with_file 2025-08-19 14:50:42 +02:00
Louis Dureuil
d550b90c60 Adjust timeouts 2025-08-19 14:50:42 +02:00
Louis Dureuil
385acbbcd2 Don't always hardcode Content-Type in proxy 2025-08-19 14:50:41 +02:00
Louis Dureuil
484dbf8c06 Update snap 2025-08-19 14:50:41 +02:00
Louis Dureuil
9c6c0af076 Misc churn 2025-08-19 14:50:41 +02:00
Louis Dureuil
e33fbcf7b2 Move meilisearch_types::Network to its own module 2025-08-19 14:50:41 +02:00
Louis Dureuil
d352f33d16 Make types Serialize and Deserialize for proxying 2025-08-19 14:50:41 +02:00
Louis Dureuil
3682b92ee8 New errors 2025-08-19 14:50:41 +02:00
Louis Dureuil
ef10c1fb23 Dependency changes 2025-08-19 14:50:41 +02:00
Louis Dureuil
bd97a7cc19 IndexScheduler::update_task now merges the task.network and accepts &mut Task 2025-08-19 14:50:41 +02:00
Louis Dureuil
56c7f54804 IndexScheduler::set_task_network 2025-08-19 14:50:41 +02:00
Louis Dureuil
15d34c33e8 file-store: persist returns the persisted File object 2025-08-19 14:50:40 +02:00
Louis Dureuil
42ac869c5c Dump support for network 2025-08-19 14:50:40 +02:00
Louis Dureuil
6e0152921f Proxy all document tasks to the network when sharding is enabled 2025-08-19 14:50:40 +02:00
Louis Dureuil
069d25dce6 Shard documents 2025-08-19 14:50:40 +02:00
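An illustrative-only sketch of picking the shard that owns a document by hashing its id across the remotes of the network; the real sharding module in milli may use a different hash and layout, and `DefaultHasher` would not be a stable choice across processes:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Picks a remote deterministically from the document id (sketch only).
fn shard_for<'a>(doc_id: &str, remotes: &'a [String]) -> &'a str {
    let mut hasher = DefaultHasher::new();
    doc_id.hash(&mut hasher);
    // Panics on an empty remote list; a real implementation would handle it.
    let index = (hasher.finish() % remotes.len() as u64) as usize;
    &remotes[index]
}

fn main() {
    let remotes = vec!["ms-0".to_string(), "ms-1".to_string(), "ms-2".to_string()];
    println!("document 'doc-42' goes to {}", shard_for("doc-42", &remotes));
}
```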
Louis Dureuil
9929f798d3 network: add sharding to Network and writeApiKey to Remotes 2025-08-19 14:50:40 +02:00
Louis Dureuil
80ff438402 Add proxy module to proxy requests to members of a network 2025-08-19 14:50:40 +02:00
Louis Dureuil
e62a807b60 Add new milli::update::new::indexer::sharding module 2025-08-19 14:50:40 +02:00
Louis Dureuil
907055ed08 Add network to Task and TaskView 2025-08-19 14:50:39 +02:00
Louis Dureuil
8b18adee95 Add EE license 2025-08-19 14:50:39 +02:00
Clément Renault
53223ace47 Merge pull request #5844 from meilisearch/prepare-v1.18
Prepare v1.18.0
2025-08-18 11:34:53 +00:00
Mubelotix
a579ea2596 Remove useless code 2025-08-18 10:30:29 +02:00
Mubelotix
e13541818a Update upgrade tests 2025-08-18 09:48:44 +02:00
Mubelotix
c974f0ab0a Update dumpless upgrades 2025-08-18 09:44:55 +02:00
Mubelotix
36cac8acf7 Update package version 2025-08-18 09:44:40 +02:00
Tamo
5507a73b23 Merge pull request #5829 from meilisearch/index-rename
Index rename
2025-08-14 15:39:28 +00:00
Tamo
a608e57c3c Merge pull request #5741 from meilisearch/fragment-filters
Vector filters
2025-08-13 16:39:02 +00:00
Mubelotix
398efa3c55 Merge branch 'main' into fragment-filters 2025-08-13 17:27:09 +02:00
Mubelotix
307ea38c2a Remove old irrelevant tests 2025-08-13 17:19:37 +02:00
Mubelotix
cdeca59587 Add error message for quoting errors 2025-08-13 17:14:36 +02:00
Mubelotix
8529e2161a Clarify more errors 2025-08-13 13:37:19 +02:00
Mubelotix
b80869f2be Add two other "did you mean" messages 2025-08-13 13:16:25 +02:00
Mubelotix
666ae1a3e7 Add "did you mean" message 2025-08-13 13:00:38 +02:00
Mubelotix
f6559258ce Improve operation error on vector filters 2025-08-13 10:32:28 +02:00
Mubelotix
b5ba0e42b3 Add new error 2025-08-13 09:58:16 +02:00
Tamo
b0479eb996 make it work with the dump and dumpless upgrade 2025-08-13 09:54:34 +02:00
Tamo
2bab375001 update the task details again 2025-08-13 09:54:32 +02:00
Tamo
81020c7d6d remove a duplicated test 2025-08-13 09:51:51 +02:00
Tamo
4068c58417 change the details of the tasks 2025-08-13 09:51:49 +02:00
Tamo
a904ce109a fix error code and add a bunch of tests for the swap and index rename 2025-08-13 09:48:39 +02:00
Tamo
ecea247e5d Provide a rename argument to the swap 2025-08-13 09:48:39 +02:00
Tamo
ae5bd9d0e3 fix: updated_at was not 'updated' when updating the index name 2025-08-13 09:48:39 +02:00
Quentin de Quelen
ae2d0a67a4 Enhance index update functionality to support renaming by adding new_uid field. Update related structures and methods to handle the new index UID during updates, ensuring backward compatibility with existing index operations. 2025-08-13 09:48:39 +02:00
Quentin de Quelen
0f1c78b185 Add index rename feature 2025-08-13 09:48:39 +02:00
Mubelotix
300f5ce0f4 Merge pull request #5778 from meilisearch/retrieve-query-vectors
Return query vector
2025-08-13 07:39:15 +00:00
curquiza
d52c7dcc94 Add needs: check-version 2025-08-12 20:47:43 +02:00
Clémentine
3240d89e81 Merge pull request #5830 from meilisearch/v1-17-1
Update version for v1.17.1
2025-08-12 15:43:28 +00:00
Clément Renault
63a649fd7d Merge pull request #5831 from meilisearch/fix-ci
Fix update-cargo-version CI
2025-08-12 14:16:22 +00:00
curquiza
b9e014c044 Update snapshots 2025-08-12 15:36:28 +02:00
curquiza
9021cb4258 Fix update-cargo-version CI 2025-08-12 14:55:57 +02:00
curquiza
de52fe91f5 Update version 2025-08-12 14:52:50 +02:00
Tamo
97ecbb53ff Merge pull request #5823 from meilisearch/ci-open-api
Add CI to publish OpenAPI file
2025-08-12 12:38:05 +00:00
curquiza
14b1a3300b Fix indentation 2025-08-12 10:07:34 +02:00
curquiza
3c84010403 Minor change in CI manifest 2025-08-11 18:31:30 +02:00
curquiza
a69af611e3 Add documentation 2025-08-11 18:29:52 +02:00
curquiza
c5b325de30 Fix rustfmt 2025-08-11 18:15:42 +02:00
curquiza
100a6f96e4 Minor change 2025-08-11 18:11:23 +02:00
curquiza
3c583ce7a4 Fix linting 2025-08-11 18:10:39 +02:00
curquiza
0881810780 Add CI to publish OpenAPI file 2025-08-11 18:09:54 +02:00
Mubelotix
3f20c1aa5d Merge branch 'main' into retrieve-query-vectors 2025-08-11 13:01:27 +02:00
Mubelotix
5df125cbb7 Format 2025-08-07 09:31:05 +02:00
Mubelotix
74992560b0 Simplify conditions 2025-08-07 09:28:45 +02:00
Mubelotix
c385cf985b Fix tests 2025-08-05 15:55:31 +02:00
Mubelotix
2121819c66 Fix tests 2025-08-05 14:18:45 +02:00
Mubelotix
2f33cd5f0a Merge branch 'main' into fragment-filters 2025-08-05 14:05:15 +02:00
Mubelotix
2f5101a1e4 Merge branch 'main' into retrieve-query-vectors 2025-08-05 14:02:25 +02:00
Mubelotix
be045a7636 Merge branch 'release-v1.16.0' into fragment-filters 2025-08-01 09:04:12 +02:00
Mubelotix
60acdf8574 Fix grammar
Co-Authored-By: Louis Dureuil <louis.dureuil@xinra.net>
2025-07-29 11:05:16 +02:00
Mubelotix
93864009cc Rename variable with typo
Co-Authored-By: Louis Dureuil <louis.dureuil@xinra.net>
2025-07-29 11:04:08 +02:00
Mubelotix
223df5a433 Remove incorrect break 2025-07-29 11:02:59 +02:00
Mubelotix
3580b3a4ef Remove userProvided from fragments 2025-07-29 10:56:54 +02:00
Mubelotix
66b6e47494 Remove warning 2025-07-29 10:52:21 +02:00
Mubelotix
6c3dd83ae5 Fix old test 2025-07-29 09:03:48 +02:00
Mubelotix
10567b150c Continue updating tests 2025-07-25 14:25:35 +02:00
Mubelotix
a439f57d70 Update tests 2025-07-25 13:41:31 +02:00
Mubelotix
d243504296 Improve test 2025-07-25 11:58:34 +02:00
Mubelotix
a7fe2abca4 Implement for multi-search 2025-07-25 11:45:51 +02:00
Mubelotix
26da478b5b Add query vector to response 2025-07-24 17:27:49 +02:00
Mubelotix
13d38d59bf Remove useless import 2025-07-24 15:44:11 +02:00
Mubelotix
4264abda23 Remove debugs 2025-07-24 15:30:36 +02:00
Mubelotix
dbb670a9ee Remove old split function 2025-07-24 15:28:58 +02:00
Mubelotix
a92e36ab83 Small improvements 2025-07-24 15:28:17 +02:00
Mubelotix
ad06828685 Add tests on parser 2025-07-24 15:24:42 +02:00
Mubelotix
8f1b697b91 Format 2025-07-24 14:57:06 +02:00
Mubelotix
bb4d573862 Switch to a nom parser 2025-07-24 14:56:35 +02:00
Mubelotix
aa5a1f333a Refactor to support less combinations 2025-07-23 15:33:17 +02:00
Mubelotix
776e55d209 Improve code readability 2025-07-22 11:37:21 +02:00
Mubelotix
3362fb8476 Remove print 2025-07-22 11:21:06 +02:00
Mubelotix
6d93b36279 Format 2025-07-22 11:18:41 +02:00
Mubelotix
982e989886 Test regenerate filter 2025-07-22 11:10:05 +02:00
Mubelotix
0014ed3114 Apply review suggestions 2025-07-22 10:56:05 +02:00
Mubelotix
ab07e9480e Resolve post-merge issues 2025-07-21 18:22:10 +02:00
Mubelotix
00e957051e Merge remote-tracking branch 'origin/release-v1.16.0' into fragment-filters 2025-07-21 18:19:45 +02:00
Mubelotix
f244439b4f Revert "Format"
This reverts commit 30fd546c12.
2025-07-10 16:43:45 +02:00
Mubelotix
30fd546c12 Format 2025-07-10 16:43:10 +02:00
Mubelotix
a930977460 Fix test 2025-07-10 09:37:58 +02:00
Mubelotix
a3b8c2b71f Gate behind multimodal experimental feature 2025-07-09 18:21:52 +02:00
Mubelotix
39f808714d Implement a documentTemplate filter 2025-07-09 18:03:32 +02:00
Mubelotix
8adf6141e0 Fix old test 2025-07-08 16:55:43 +02:00
Mubelotix
df3f282e4d Merge branch 'request-fragments-test' into fragment-filters 2025-07-08 16:35:14 +02:00
Mubelotix
d81855015b Add test 2025-07-08 16:23:45 +02:00
Mubelotix
feb53104e5 Grammar 2025-07-08 16:19:55 +02:00
Mubelotix
881c37393f Add telemetry 2025-07-08 16:06:27 +02:00
Mubelotix
9e98a25e45 Fix clippy 2025-07-08 15:56:09 +02:00
Mubelotix
fb73b83abe Fix performance 2025-07-08 12:14:34 +02:00
Mubelotix
29b74424ad Clean code 2025-07-08 12:03:32 +02:00
Mubelotix
b4cafec8b3 Add tests for operators along vector filter 2025-07-08 11:56:19 +02:00
Mubelotix
d43cd40807 Split tests 2025-07-08 11:48:23 +02:00
Mubelotix
0301d8f239 Improve error handling 2025-07-08 11:39:10 +02:00
Mubelotix
2d45124d9b Fix parsing 2025-07-08 10:01:50 +02:00
Mubelotix
40e7284d70 Add tests 2025-07-08 10:01:35 +02:00
Mubelotix
4d8d34cc93 Merge branch 'request-fragments-test' into fragment-filters 2025-07-07 18:45:34 +02:00
Mubelotix
5cced0af02 Prevent having both a fragment name and userProvided 2025-07-07 18:41:03 +02:00
Mubelotix
9c60e9689f Support not specifying an embedder in the vector filter 2025-07-07 18:34:24 +02:00
Mubelotix
2052537681 Implement core filter logic 2025-07-07 15:28:35 +02:00
Mubelotix
a9bb64c55a Unrelated minor fixes 2025-07-07 15:28:10 +02:00
Martin Tzvetanov Grigorov
45da2257ec Take into account the allowed max memory of the container
When Meilisearch runs inside a container (e.g. Docker or Kubernetes), it
may run with less memory than is available on the host, e.g.
`docker run --memory 1G ...`

Fixes #5616

Signed-off-by: Martin Tzvetanov Grigorov <mgrigorov@apache.org>
2025-07-02 14:23:11 +03:00
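
As context for the commit above, the idea is to prefer the container's cgroup memory limit over the host's total memory when sizing the indexing budget. The following is a minimal, hypothetical sketch of that detection logic, assuming the usual cgroup v2 (`/sys/fs/cgroup/memory.max`) and cgroup v1 (`/sys/fs/cgroup/memory/memory.limit_in_bytes`) paths; it is not the code that was merged.

```rust
use std::fs;

/// Best-effort detection of a container memory limit, in bytes.
/// Returns `None` when no limit is set or when the process does not run under cgroups.
fn container_memory_limit() -> Option<u64> {
    // cgroup v2 exposes the limit in `memory.max`; the literal "max" means unlimited.
    if let Ok(raw) = fs::read_to_string("/sys/fs/cgroup/memory.max") {
        let raw = raw.trim();
        if raw != "max" {
            return raw.parse().ok();
        }
        return None;
    }
    // Fall back to the cgroup v1 location.
    fs::read_to_string("/sys/fs/cgroup/memory/memory.limit_in_bytes")
        .ok()
        .and_then(|raw| raw.trim().parse().ok())
}

fn main() {
    match container_memory_limit() {
        Some(bytes) => println!("container memory limit: {bytes} bytes"),
        None => println!("no container memory limit detected"),
    }
}
```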
386 changed files with 17107 additions and 4319 deletions

View File

@@ -7,6 +7,5 @@ updates:
schedule:
interval: "monthly"
labels:
- 'skip changelog'
- 'dependencies'
rebase-strategy: disabled

View File

@@ -1,33 +0,0 @@
name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
exclude-labels:
- 'skip changelog'
version-resolver:
minor:
labels:
- 'enhancement'
default: patch
categories:
- title: '⚠️ Breaking changes'
label: 'breaking-change'
- title: '🚀 Enhancements'
label: 'enhancement'
- title: '🐛 Bug Fixes'
label: 'bug'
- title: '🔒 Security'
label: 'security'
- title: '⚙️ Maintenance/misc'
label:
- 'maintenance'
- 'documentation'
template: |
$CHANGES
❤️ Huge thanks to our contributors: $CONTRIBUTORS.
no-changes-template: 'Changes are coming soon 😎'
sort-direction: 'ascending'
replacers:
- search: '/(?:and )?@dependabot-preview(?:\[bot\])?,?/g'
replace: ''
- search: '/(?:and )?@dependabot(?:\[bot\])?,?/g'
replace: ''

View File

@@ -17,8 +17,8 @@ jobs:
runs-on: benchmarks
timeout-minutes: 180 # 3h
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -60,13 +60,13 @@ jobs:
with:
repo_token: ${{ env.GH_TOKEN }}
- uses: actions/checkout@v3
- uses: actions/checkout@v5
if: success()
with:
fetch-depth: 0 # fetch full history to be able to get main commit sha
ref: ${{ steps.comment-branch.outputs.head_ref }}
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -11,8 +11,8 @@ jobs:
runs-on: benchmarks
timeout-minutes: 180 # 3h
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -17,8 +17,8 @@ jobs:
runs-on: benchmarks
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -44,7 +44,7 @@ jobs:
exit 1
fi
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
@@ -61,7 +61,7 @@ jobs:
with:
repo_token: ${{ env.GH_TOKEN }}
- uses: actions/checkout@v3
- uses: actions/checkout@v5
if: success()
with:
fetch-depth: 0 # fetch full history to be able to get main commit sha

View File

@@ -15,8 +15,8 @@ jobs:
runs-on: benchmarks
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -14,8 +14,8 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -14,8 +14,8 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -14,8 +14,8 @@ jobs:
name: Run and upload benchmarks
runs-on: benchmarks
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Check db change labels
id: check_labels
env:

View File

@@ -13,7 +13,7 @@ jobs:
ISSUE_TEMPLATE: issue-template.md
GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Download the issue template
run: curl -s https://raw.githubusercontent.com/meilisearch/meilisearch/main/.github/templates/dependency-issue.md > $ISSUE_TEMPLATE
- name: Create issue

View File

@@ -12,12 +12,12 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
apt-get install build-essential -y
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Install cargo-flaky
run: cargo install cargo-flaky
- name: Run cargo flaky in the dumps

View File

@@ -11,8 +11,8 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal

View File

@@ -10,7 +10,7 @@ jobs:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Check release validity
if: github.event_name == 'release'
run: bash .github/scripts/check-release.sh
@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
needs: check-version
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- uses: rickstaa/action-create-tag@v1
with:
tag: "latest"

View File

@@ -9,7 +9,7 @@ jobs:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Check release validity
run: bash .github/scripts/check-release.sh
@@ -25,10 +25,10 @@ jobs:
run: |
apt-get update && apt-get install -y curl
apt-get install build-essential -y
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Build deb package
run: cargo deb -p meilisearch -o target/debian/meilisearch.deb
- name: Upload debian pkg to release

View File

@@ -19,7 +19,7 @@ jobs:
permissions:
id-token: write # This is needed to use Cosign in keyless mode
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
# If we are running a cron or manual job ('schedule' or 'workflow_dispatch' event), it means we are publishing the `nightly` tag, so not considered stable.
# If we have pushed a tag, and the tag has the v<number>.<number>.<number> format, it means we are publishing an official release, so considered stable.
@@ -65,7 +65,7 @@ jobs:
uses: docker/setup-buildx-action@v3
- name: Install cosign
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # tag=v3.9.2
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # tag=v3.10.0
- name: Login to Docker Hub
uses: docker/login-action@v3

View File

@@ -1,4 +1,4 @@
name: Publish binaries to GitHub release
name: Publish assets to GitHub release
on:
workflow_dispatch:
@@ -11,9 +11,9 @@ jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
# No need to check the version for dry run (cron)
# No need to check the version for dry run (cron or workflow_dispatch)
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
# Check if the tag has the v<number>.<number>.<number> format.
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
@@ -40,15 +40,15 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
apt-get install build-essential -y
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Build
run: cargo build --release --locked
# No need to upload binaries for dry run (cron)
# No need to upload binaries for dry run (cron or workflow_dispatch)
- name: Upload binaries to release
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@2.11.2
@@ -74,11 +74,11 @@ jobs:
artifact_name: meilisearch.exe
asset_name: meilisearch-windows-amd64.exe
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
- name: Build
run: cargo build --release --locked
# No need to upload binaries for dry run (cron)
# No need to upload binaries for dry run (cron or workflow_dispatch)
- name: Upload binaries to release
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@2.11.2
@@ -99,9 +99,9 @@ jobs:
asset_name: meilisearch-macos-apple-silicon
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v5
- name: Installing Rust toolchain
uses: dtolnay/rust-toolchain@1.85
uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
target: ${{ matrix.target }}
@@ -111,7 +111,7 @@ jobs:
command: build
args: --release --target ${{ matrix.target }}
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
# No need to upload binaries for dry run (cron or workflow_dispatch)
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@2.11.2
with:
@@ -136,7 +136,7 @@ jobs:
asset_name: meilisearch-linux-aarch64
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update -y && apt upgrade -y
@@ -148,7 +148,7 @@ jobs:
add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update -y && apt-get install -y docker-ce
- name: Installing Rust toolchain
uses: dtolnay/rust-toolchain@1.85
uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
target: ${{ matrix.target }}
@@ -176,7 +176,7 @@ jobs:
- name: List target output files
run: ls -lR ./target
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
# No need to upload binaries for dry run (cron or workflow_dispatch)
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@2.11.2
with:
@@ -184,3 +184,29 @@ jobs:
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-openapi-file:
name: Publish OpenAPI file
needs: check-version
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Setup Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- name: Generate OpenAPI file
run: |
cd crates/openapi-generator
cargo run --release -- --pretty --output ../../meilisearch.json
- name: Upload OpenAPI to Release
# No need to upload for dry run (cron or workflow_dispatch)
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@2.11.2
with:
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: ./meilisearch.json
asset_name: meilisearch-openapi.json
tag: ${{ github.ref }}

View File

@@ -1,20 +0,0 @@
name: Release Drafter
permissions:
contents: read
pull-requests: write
on:
push:
branches:
- main
jobs:
update_release_draft:
runs-on: ubuntu-latest
steps:
- uses: release-drafter/release-drafter@v6
with:
config-name: release-draft-template.yml
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_DRAFTER_TOKEN }}

View File

@@ -22,7 +22,7 @@ jobs:
outputs:
docker-image: ${{ steps.define-image.outputs.docker-image }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Define the Docker image we need to use
id: define-image
run: |
@@ -46,11 +46,11 @@ jobs:
MEILISEARCH_VERSION: ${{ needs.define-docker-image.outputs.docker-image }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-dotnet
- name: Setup .NET Core
uses: actions/setup-dotnet@v4
uses: actions/setup-dotnet@v5
with:
dotnet-version: "8.0.x"
- name: Install dependencies
@@ -75,7 +75,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-dart
- uses: dart-lang/setup-dart@v1
@@ -100,10 +100,10 @@ jobs:
- '7700:7700'
steps:
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: stable
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-go
- name: Get dependencies
@@ -129,19 +129,19 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-java
- name: Set up Java
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
java-version: 8
distribution: 'zulu'
java-version: 17
distribution: 'temurin'
cache: gradle
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build and run unit and integration tests
run: ./gradlew build integrationTest
run: ./gradlew build integrationTest --info
meilisearch-js-tests:
needs: define-docker-image
@@ -156,11 +156,11 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-js
- name: Setup node
uses: actions/setup-node@v4
uses: actions/setup-node@v5
with:
cache: 'yarn'
- name: Install dependencies
@@ -191,7 +191,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-php
- name: Install PHP
@@ -220,11 +220,11 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-python
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
- name: Install pipenv
uses: dschep/install-pipenv-action@v1
- name: Install dependencies
@@ -245,7 +245,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-ruby
- name: Set up Ruby 3
@@ -270,7 +270,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-rust
- name: Build
@@ -291,7 +291,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-swift
- name: Run tests
@@ -314,11 +314,11 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-js-plugins
- name: Setup node
uses: actions/setup-node@v4
uses: actions/setup-node@v5
with:
cache: yarn
- name: Install dependencies
@@ -347,7 +347,7 @@ jobs:
env:
RAILS_VERSION: '7.0'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-rails
- name: Install SQLite dependencies
@@ -377,7 +377,7 @@ jobs:
ports:
- '7700:7700'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
with:
repository: meilisearch/meilisearch-symfony
- name: Install PHP

View File

@@ -21,13 +21,13 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
apt-get install build-essential -y
- name: Setup test with Rust stable
uses: dtolnay/rust-toolchain@1.85
uses: dtolnay/rust-toolchain@1.89
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.8.0
- name: Run cargo check without any default features
@@ -49,10 +49,10 @@ jobs:
matrix:
os: [macos-13, windows-2022]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.8.0
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Run cargo check without any default features
uses: actions-rs/cargo@v1
with:
@@ -72,12 +72,12 @@ jobs:
image: ubuntu:22.04
if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update
apt-get install --assume-yes build-essential curl
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Run cargo build with almost all features
run: |
cargo build --workspace --locked --release --features "$(cargo xtask list-features --exclude-feature cuda,test-ollama)"
@@ -91,7 +91,7 @@ jobs:
env:
MEILI_TEST_OLLAMA_SERVER: "http://localhost:11434"
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install Ollama
run: |
curl -fsSL https://ollama.com/install.sh | sudo -E sh
@@ -124,12 +124,12 @@ jobs:
image: ubuntu:22.04
if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update
apt-get install --assume-yes build-essential curl
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Run cargo tree without default features and check lindera is not present
run: |
if cargo tree -f '{p} {f}' -e normal --no-default-features | grep -qz lindera; then
@@ -148,12 +148,12 @@ jobs:
# Use ubuntu-22.04 to compile with glibc 2.35
image: ubuntu:22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v5
- name: Install needed dependencies
run: |
apt-get update && apt-get install -y curl
apt-get install build-essential -y
- uses: dtolnay/rust-toolchain@1.85
- uses: dtolnay/rust-toolchain@1.89
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.8.0
- name: Run tests in debug
@@ -166,8 +166,8 @@ jobs:
name: Run Clippy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
components: clippy
@@ -183,8 +183,8 @@ jobs:
name: Run Rustfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
toolchain: nightly-2024-07-09

View File

@@ -17,8 +17,8 @@ jobs:
name: Update version in Cargo.toml
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@1.85
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
with:
profile: minimal
- name: Install sd
@@ -41,5 +41,4 @@ jobs:
--title "Update version for the next release ($NEW_VERSION) in Cargo.toml" \
--body '⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.' \
--label 'skip changelog' \
--milestone $NEW_VERSION \
--base $GITHUB_REF_NAME

View File

@@ -107,12 +107,18 @@ Run `cargo xtask --help` from the root of the repository to find out what is ava
To update the openAPI file in the code, see [sprint_issue.md](https://github.com/meilisearch/meilisearch/blob/main/.github/ISSUE_TEMPLATE/sprint_issue.md#reminders-when-modifying-the-api).
If you want to update the openAPI file on the [open-api repository](https://github.com/meilisearch/open-api):
- Pull the latest version of the latest rc of Meilisearch `git checkout release-vX.Y.Z; git pull`
If you want to generate the OpenAPI file manually:
With swagger:
- Start Meilisearch with the `swagger` feature flag: `cargo run --features swagger`
- On a browser, open the following URL: http://localhost:7700/scalar
- Click the « Download openAPI file »
- Open a PR replacing [this file](https://github.com/meilisearch/open-api/blob/main/open-api.json) with the one downloaded
With the internal crate:
```bash
cd crates/openapi-generator
cargo run --release -- --pretty --output meilisearch.json
```
### Logging

2029
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -19,10 +19,11 @@ members = [
"crates/tracing-trace",
"crates/xtask",
"crates/build-info",
"crates/openapi-generator",
]
[workspace.package]
version = "1.17.0"
version = "1.26.0"
authors = [
"Quentin de Quelen <quentin@dequelen.me>",
"Clément Renault <clement@meilisearch.com>",

View File

@@ -1,5 +1,5 @@
# Compile
FROM rust:1.85-alpine3.20 AS compiler
FROM rust:1.89-alpine3.22 AS compiler
RUN apk add -q --no-cache build-base openssl-dev
@@ -20,7 +20,7 @@ RUN set -eux; \
cargo build --release -p meilisearch -p meilitool
# Run
FROM alpine:3.20
FROM alpine:3.22
LABEL org.opencontainers.image.source="https://github.com/meilisearch/meilisearch"
ENV MEILI_HTTP_ADDR 0.0.0.0:7700

20
LICENSE
View File

@@ -1,21 +1,9 @@
MIT License
# License
Copyright (c) 2019-2025 Meili SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
Parts of this work fall under the Meilisearch Enterprise Edition (EE) and are licensed under the Business Source License 1.1; please refer to [LICENSE-EE](./LICENSE-EE) for details.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
The other parts of this work are licensed under the [MIT license](./LICENSE-MIT).
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
`SPDX-License-Identifier: MIT AND BUSL-1.1`

67
LICENSE-EE Normal file
View File

@@ -0,0 +1,67 @@
Business Source License 1.1 Adapted for Meili SAS
This license is based on the Business Source License version 1.1, as published by MariaDB Corporation Ab.
Parameters
Licensor: Meili SAS
Licensed Work: Any file explicitly marked as “Enterprise Edition (EE)” or “governed by the Business Source License” residing in enterprise_editions modules/folders.
Additional Use Grant:
You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.
Production use of the Licensed Work requires a commercial license agreement with Meilisearch. Contact bonjour@meilisearch.com for licensing.
Change License: MIT
Change Date: Four years from the date the Licensed Work is published.
This License does not apply to any code outside of the Licensed Work, which remains under the MIT license.
For information about alternative licensing arrangements for the Licensed Work,
please contact bonjour@meilisearch.com or sales@meilisearch.com.
Notice
Business Source License 1.1
Terms
The Licensor hereby grants you the right to copy, modify, create derivative
works, redistribute, and make non-production use of the Licensed Work. The
Licensor may make an Additional Use Grant, above, permitting limited production use.
Effective on the Change Date, or the fourth anniversary of the first publicly
available distribution of a specific version of the Licensed Work under this
License, whichever comes first, the Licensor hereby grants you rights under
the terms of the Change License, and the rights granted in the paragraph
above terminate.
If your use of the Licensed Work does not comply with the requirements
currently in effect as described in this License, you must purchase a
commercial license from the Licensor, its affiliated entities, or authorized
resellers, or you must refrain from using the Licensed Work.
All copies of the original and modified Licensed Work, and derivative works
of the Licensed Work, are subject to this License. This License applies
separately for each version of the Licensed Work and the Change Date may vary
for each version of the Licensed Work released by Licensor.
You must conspicuously display this License on each original or modified copy
of the Licensed Work. If you receive the Licensed Work in original or
modified form from a third party, the terms and conditions set forth in this
License apply to your use of that work.
Any use of the Licensed Work in violation of this License will automatically
terminate your rights under this License for the current and all other
versions of the Licensed Work.
This License does not grant you any right in any trademark or logo of
Licensor or its affiliates (provided that you may use a trademark or logo of
Licensor as expressly required by this License).
TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON
AN "AS IS" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS,
EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND
TITLE.

21
LICENSE-MIT Normal file
View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019-2025 Meili SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -39,6 +39,7 @@
## 🖥 Examples
- [**Movies**](https://where2watch.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=organization) — An application to help you find streaming platforms to watch movies using [hybrid search](https://www.meilisearch.com/solutions/hybrid-search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos).
- [**Flickr**](https://flickr.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=organization) — Search and explore one hundred million Flickr images with semantic search.
- [**Ecommerce**](https://ecommerce.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Ecommerce website using disjunctive [facets](https://www.meilisearch.com/docs/learn/fine_tuning_results/faceted_search?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos), range and rating filtering, and pagination.
- [**Songs**](https://music.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search through 47 million songs.
- [**SaaS**](https://saas.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) — Search for contacts, deals, and companies in this [multi-tenant](https://www.meilisearch.com/docs/learn/security/multitenancy_tenant_tokens?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=demos) CRM application.
@@ -89,6 +90,26 @@ We also offer a wide range of dedicated guides to all Meilisearch features, such
Finally, for more in-depth information, refer to our articles explaining fundamental Meilisearch concepts such as [documents](https://www.meilisearch.com/docs/learn/core_concepts/documents?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced) and [indexes](https://www.meilisearch.com/docs/learn/core_concepts/indexes?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=advanced).
## 🧾 Editions & Licensing
Meilisearch is available in two editions:
### 🧪 Community Edition (CE)
- Fully open source under the [MIT license](./LICENSE)
- Core search engine with fast and relevant full-text, semantic or hybrid search
- Free to use for anyone, including commercial usage
### 🏢 Enterprise Edition (EE)
- Includes advanced features such as:
- Sharding
- Governed by a [commercial license](./LICENSE-EE) or the [Business Source License 1.1](https://mariadb.com/bsl11)
- Not allowed in production without a commercial agreement with Meilisearch.
- You may use, modify, and distribute the Licensed Work for non-production purposes only, such as testing, development, or evaluation.
Want access to Enterprise features? → Contact us at [sales@meilisearch.com](mailto:sales@meilisearch.com).
## 📊 Telemetry
Meilisearch collects **anonymized** user data to help us improve our product. You can [deactivate this](https://www.meilisearch.com/docs/learn/what_is_meilisearch/telemetry?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=telemetry#how-to-disable-data-collection) whenever you want.
@@ -101,7 +122,7 @@ If you want to know more about the kind of data we collect and what we use it fo
Meilisearch is a search engine created by [Meili](https://www.meilisearch.com/careers), a software development company headquartered in France and with team members all over the world. Want to know more about us? [Check out our blog!](https://blog.meilisearch.com/?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=contact)
🗞 [Subscribe to our newsletter](https://meilisearch.us2.list-manage.com/subscribe?u=27870f7b71c908a8b359599fb&id=79582d828e) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.
🗞 [Subscribe to our newsletter](https://share-eu1.hsforms.com/1LN5N0x_GQgq7ss7tXmSykwfg3aq) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.
💌 Want to make a suggestion or give feedback? Here are some of the channels where you can reach us:

View File

@@ -154,6 +154,7 @@ fn indexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -221,6 +222,7 @@ fn reindexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -266,6 +268,7 @@ fn reindexing_songs_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -335,6 +338,7 @@ fn deleting_songs_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -412,6 +416,7 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -457,6 +462,7 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -498,6 +504,7 @@ fn indexing_songs_in_three_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -566,6 +573,7 @@ fn indexing_songs_without_faceted_numbers(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -633,6 +641,7 @@ fn indexing_songs_without_faceted_fields(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -700,6 +709,7 @@ fn indexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -766,6 +776,7 @@ fn reindexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -811,6 +822,7 @@ fn reindexing_wiki(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -879,6 +891,7 @@ fn deleting_wiki_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -956,6 +969,7 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1002,6 +1016,7 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1044,6 +1059,7 @@ fn indexing_wiki_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1111,6 +1127,7 @@ fn indexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1177,6 +1194,7 @@ fn reindexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1222,6 +1240,7 @@ fn reindexing_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1290,6 +1309,7 @@ fn deleting_movies_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1404,6 +1424,7 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1449,6 +1470,7 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1490,6 +1512,7 @@ fn indexing_movies_in_three_batches(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1580,6 +1603,7 @@ fn indexing_nested_movies_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1671,6 +1695,7 @@ fn deleting_nested_movies_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1754,6 +1779,7 @@ fn indexing_nested_movies_without_faceted_fields(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1821,6 +1847,7 @@ fn indexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1887,6 +1914,7 @@ fn reindexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -1932,6 +1960,7 @@ fn reindexing_geo(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();
@@ -2000,6 +2029,7 @@ fn deleting_geo_in_batches_default(c: &mut Criterion) {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

View File

@@ -123,6 +123,7 @@ pub fn base_setup(conf: &Conf) -> Index {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

View File

@@ -10,7 +10,7 @@ use meilisearch_types::keys::Key;
use meilisearch_types::milli::update::IndexDocumentsMethod;
use meilisearch_types::settings::Unchecked;
use meilisearch_types::tasks::{
Details, ExportIndexSettings, IndexSwap, KindWithContent, Status, Task, TaskId,
Details, ExportIndexSettings, IndexSwap, KindWithContent, Status, Task, TaskId, TaskNetwork,
};
use meilisearch_types::InstanceUid;
use roaring::RoaringBitmap;
@@ -94,6 +94,10 @@ pub struct TaskDump {
default
)]
pub finished_at: Option<OffsetDateTime>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub network: Option<TaskNetwork>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub custom_metadata: Option<String>,
}
// A `Kind` specific version made for the dump. If modified you may break the dump.
@@ -129,6 +133,7 @@ pub enum KindDump {
},
IndexUpdate {
primary_key: Option<String>,
uid: Option<String>,
},
IndexSwap {
swaps: Vec<IndexSwap>,
@@ -155,6 +160,9 @@ pub enum KindDump {
UpgradeDatabase {
from: (u32, u32, u32),
},
IndexCompaction {
index_uid: String,
},
}
impl From<Task> for TaskDump {
@@ -171,6 +179,8 @@ impl From<Task> for TaskDump {
enqueued_at: task.enqueued_at,
started_at: task.started_at,
finished_at: task.finished_at,
network: task.network,
custom_metadata: task.custom_metadata,
}
}
}
@@ -210,8 +220,8 @@ impl From<KindWithContent> for KindDump {
KindWithContent::IndexCreation { primary_key, .. } => {
KindDump::IndexCreation { primary_key }
}
KindWithContent::IndexUpdate { primary_key, .. } => {
KindDump::IndexUpdate { primary_key }
KindWithContent::IndexUpdate { primary_key, new_index_uid: uid, .. } => {
KindDump::IndexUpdate { primary_key, uid }
}
KindWithContent::IndexSwap { swaps } => KindDump::IndexSwap { swaps },
KindWithContent::TaskCancelation { query, tasks } => {
@@ -236,6 +246,9 @@ impl From<KindWithContent> for KindDump {
KindWithContent::UpgradeDatabase { from: version } => {
KindDump::UpgradeDatabase { from: version }
}
KindWithContent::IndexCompaction { index_uid } => {
KindDump::IndexCompaction { index_uid }
}
}
}
}
@@ -249,8 +262,9 @@ pub(crate) mod test {
use big_s::S;
use maplit::{btreemap, btreeset};
use meilisearch_types::batches::{Batch, BatchEnqueuedAt, BatchStats};
use meilisearch_types::enterprise_edition::network::{Network, Remote};
use meilisearch_types::facet_values_sort::FacetValuesSort;
use meilisearch_types::features::{Network, Remote, RuntimeTogglableFeatures};
use meilisearch_types::features::RuntimeTogglableFeatures;
use meilisearch_types::index_uid_pattern::IndexUidPattern;
use meilisearch_types::keys::{Action, Key};
use meilisearch_types::milli::update::Setting;
@@ -326,6 +340,7 @@ pub(crate) mod test {
facet_search: Setting::NotSet,
prefix_search: Setting::NotSet,
chat: Setting::NotSet,
vector_store: Setting::NotSet,
_kind: std::marker::PhantomData,
};
settings.check()
@@ -383,6 +398,8 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: Some(datetime!(2022-11-20 0:00 UTC)),
finished_at: Some(datetime!(2022-11-21 0:00 UTC)),
network: None,
custom_metadata: None,
},
None,
),
@@ -407,6 +424,8 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: None,
finished_at: None,
network: None,
custom_metadata: None,
},
Some(vec![
json!({ "id": 4, "race": "leonberg" }).as_object().unwrap().clone(),
@@ -426,6 +445,8 @@ pub(crate) mod test {
enqueued_at: datetime!(2022-11-15 0:00 UTC),
started_at: None,
finished_at: None,
network: None,
custom_metadata: None,
},
None,
),
@@ -538,7 +559,8 @@ pub(crate) mod test {
fn create_test_network() -> Network {
Network {
local: Some("myself".to_string()),
remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()) }},
remotes: maplit::btreemap! {"other".to_string() => Remote { url: "http://test".to_string(), search_api_key: Some("apiKey".to_string()), write_api_key: Some("docApiKey".to_string()) }},
sharding: false,
}
}

View File

@@ -97,6 +97,7 @@ impl CompatV2ToV3 {
}
}
#[allow(clippy::large_enum_variant)]
pub enum CompatIndexV2ToV3 {
V2(v2::V2IndexReader),
Compat(Box<CompatIndexV1ToV2>),

View File

@@ -85,7 +85,7 @@ impl CompatV5ToV6 {
v6::Kind::IndexCreation { primary_key }
}
v5::tasks::TaskContent::IndexUpdate { primary_key, .. } => {
v6::Kind::IndexUpdate { primary_key }
v6::Kind::IndexUpdate { primary_key, uid: None }
}
v5::tasks::TaskContent::IndexDeletion { .. } => v6::Kind::IndexDeletion,
v5::tasks::TaskContent::DocumentAddition {
@@ -140,9 +140,11 @@ impl CompatV5ToV6 {
v5::Details::Settings { settings } => {
v6::Details::SettingsUpdate { settings: Box::new(settings.into()) }
}
v5::Details::IndexInfo { primary_key } => {
v6::Details::IndexInfo { primary_key }
}
v5::Details::IndexInfo { primary_key } => v6::Details::IndexInfo {
primary_key,
new_index_uid: None,
old_index_uid: None,
},
v5::Details::DocumentDeletion {
received_document_ids,
deleted_documents,
@@ -161,6 +163,8 @@ impl CompatV5ToV6 {
enqueued_at: task_view.enqueued_at,
started_at: task_view.started_at,
finished_at: task_view.finished_at,
network: None,
custom_metadata: None,
};
(task, content_file)
@@ -418,6 +422,7 @@ impl<T> From<v5::Settings<T>> for v6::Settings<v6::Unchecked> {
facet_search: v6::Setting::NotSet,
prefix_search: v6::Setting::NotSet,
chat: v6::Setting::NotSet,
vector_store: v6::Setting::NotSet,
_kind: std::marker::PhantomData,
}
}

View File

@@ -4,7 +4,7 @@ use std::io::{BufRead, BufReader, ErrorKind};
use std::path::Path;
pub use meilisearch_types::milli;
use meilisearch_types::milli::vector::hf::OverridePooling;
use meilisearch_types::milli::vector::embedder::hf::OverridePooling;
use tempfile::TempDir;
use time::OffsetDateTime;
use tracing::debug;
@@ -24,7 +24,7 @@ pub type Batch = meilisearch_types::batches::Batch;
pub type Key = meilisearch_types::keys::Key;
pub type ChatCompletionSettings = meilisearch_types::features::ChatCompletionSettings;
pub type RuntimeTogglableFeatures = meilisearch_types::features::RuntimeTogglableFeatures;
pub type Network = meilisearch_types::features::Network;
pub type Network = meilisearch_types::enterprise_edition::network::Network;
pub type Webhooks = meilisearch_types::webhooks::WebhooksDumpView;
// ===== Other types to clarify the code of the compat module

View File

@@ -5,7 +5,8 @@ use std::path::PathBuf;
use flate2::write::GzEncoder;
use flate2::Compression;
use meilisearch_types::batches::Batch;
use meilisearch_types::features::{ChatCompletionSettings, Network, RuntimeTogglableFeatures};
use meilisearch_types::enterprise_edition::network::Network;
use meilisearch_types::features::{ChatCompletionSettings, RuntimeTogglableFeatures};
use meilisearch_types::keys::Key;
use meilisearch_types::settings::{Checked, Settings};
use meilisearch_types::webhooks::WebhooksDumpView;

View File

@@ -60,7 +60,7 @@ impl FileStore {
/// Returns the file corresponding to the requested uuid.
pub fn get_update(&self, uuid: Uuid) -> Result<StdFile> {
let path = self.get_update_path(uuid);
let path = self.update_path(uuid);
let file = match StdFile::open(path) {
Ok(file) => file,
Err(e) => {
@@ -72,7 +72,7 @@ impl FileStore {
}
/// Returns the path that corresponds to this uuid; the path may not exist.
pub fn get_update_path(&self, uuid: Uuid) -> PathBuf {
pub fn update_path(&self, uuid: Uuid) -> PathBuf {
self.path.join(uuid.to_string())
}
@@ -148,11 +148,10 @@ impl File {
Ok(Self { path: PathBuf::new(), file: None })
}
pub fn persist(self) -> Result<()> {
if let Some(file) = self.file {
file.persist(&self.path)?;
}
Ok(())
pub fn persist(self) -> Result<Option<StdFile>> {
let Some(file) = self.file else { return Ok(None) };
Ok(Some(file.persist(&self.path)?))
}
}

View File

@@ -15,6 +15,7 @@ license.workspace = true
nom = "7.1.3"
nom_locate = "4.2.0"
unescaper = "0.1.6"
levenshtein_automata = { version = "0.2.1", features = ["fst_automaton"] }
[dev-dependencies]
# fixed version due to format breakages in v1.40

View File

@@ -7,12 +7,14 @@
use nom::branch::alt;
use nom::bytes::complete::tag;
use nom::character::complete::multispace1;
use nom::combinator::cut;
use nom::sequence::{terminated, tuple};
use nom::character::complete::{char, multispace0, multispace1};
use nom::combinator::{cut, map, value};
use nom::sequence::{preceded, terminated, tuple};
use Condition::*;
use crate::{parse_value, FilterCondition, IResult, Span, Token};
use crate::error::IResultExt;
use crate::value::{parse_vector_value, parse_vector_value_cut};
use crate::{parse_value, Error, ErrorKind, FilterCondition, IResult, Span, Token, VectorFilter};
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Condition<'a> {
@@ -113,6 +115,83 @@ pub fn parse_not_exists(input: Span) -> IResult<FilterCondition> {
Ok((input, FilterCondition::Not(Box::new(FilterCondition::Condition { fid: key, op: Exists }))))
}
fn parse_vectors(input: Span) -> IResult<(Token, Option<Token>, VectorFilter)> {
let (input, _) = multispace0(input)?;
let (input, fid) = tag("_vectors")(input)?;
if let Ok((input, _)) = multispace1::<_, crate::Error>(input) {
return Ok((input, (Token::from(fid), None, VectorFilter::None)));
}
let (input, _) = char('.')(input)?;
// From this point, we are certain this is a vector filter, so our errors must be final.
// We could use nom's `cut` but it's better to be explicit about the errors
if let Ok((_, space)) = tag::<_, _, ()>(" ")(input) {
return Err(crate::Error::failure_from_kind(space, ErrorKind::VectorFilterMissingEmbedder));
}
let (input, embedder_name) =
parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidEmbedder)?;
let (input, filter) = alt((
map(
preceded(tag(".fragments"), |input| {
let (input, _) = tag(".")(input).map_cut(ErrorKind::VectorFilterMissingFragment)?;
parse_vector_value_cut(input, ErrorKind::VectorFilterInvalidFragment)
}),
VectorFilter::Fragment,
),
value(VectorFilter::UserProvided, tag(".userProvided")),
value(VectorFilter::DocumentTemplate, tag(".documentTemplate")),
value(VectorFilter::Regenerate, tag(".regenerate")),
value(VectorFilter::None, nom::combinator::success("")),
))(input)?;
if let Ok((input, point)) = tag::<_, _, ()>(".")(input) {
let opt_value = parse_vector_value(input).ok().map(|(_, v)| v);
let value =
opt_value.as_ref().map(|v| v.value().to_owned()).unwrap_or_else(|| point.to_string());
let context = opt_value.map(|v| v.original_span()).unwrap_or(point);
let previous_kind = match filter {
VectorFilter::Fragment(_) => Some("fragments"),
VectorFilter::DocumentTemplate => Some("documentTemplate"),
VectorFilter::UserProvided => Some("userProvided"),
VectorFilter::Regenerate => Some("regenerate"),
VectorFilter::None => None,
};
return Err(Error::failure_from_kind(
context,
ErrorKind::VectorFilterUnknownSuffix(previous_kind, value),
));
}
let (input, _) = multispace1(input).map_cut(ErrorKind::VectorFilterLeftover)?;
Ok((input, (Token::from(fid), Some(embedder_name), filter)))
}
/// vectors_exists = vectors ("EXISTS" | ("NOT" WS+ "EXISTS"))
pub fn parse_vectors_exists(input: Span) -> IResult<FilterCondition> {
let (input, (fid, embedder, filter)) = parse_vectors(input)?;
// Try parsing "EXISTS" first
if let Ok((input, _)) = tag::<_, _, ()>("EXISTS")(input) {
return Ok((input, FilterCondition::VectorExists { fid, embedder, filter }));
}
// Try parsing "NOT EXISTS"
if let Ok((input, _)) = tuple::<_, _, (), _>((tag("NOT"), multispace1, tag("EXISTS")))(input) {
return Ok((
input,
FilterCondition::Not(Box::new(FilterCondition::VectorExists { fid, embedder, filter })),
));
}
Err(crate::Error::failure_from_kind(input, ErrorKind::VectorFilterOperation))
}
/// contains = value "CONTAINS" value
pub fn parse_contains(input: Span) -> IResult<FilterCondition> {
let (input, (fid, contains, value)) =

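For reference, the grammar implemented by `parse_vectors` and `parse_vectors_exists` above accepts vector filters of the following shapes; `default` and `title` are hypothetical embedder and fragment names used only for illustration:

```
_vectors EXISTS
_vectors.default EXISTS
_vectors.default.userProvided EXISTS
_vectors.default.regenerate EXISTS
_vectors.default.fragments.title NOT EXISTS
```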
View File

@@ -42,6 +42,23 @@ pub fn cut_with_err<'a, O>(
}
}
pub trait IResultExt<'a> {
fn map_cut(self, kind: ErrorKind<'a>) -> Self;
}
impl<'a, T> IResultExt<'a> for IResult<'a, T> {
fn map_cut(self, kind: ErrorKind<'a>) -> Self {
self.map_err(move |e: nom::Err<Error<'a>>| {
let input = match e {
nom::Err::Incomplete(_) => return e,
nom::Err::Error(e) => *e.context(),
nom::Err::Failure(e) => *e.context(),
};
Error::failure_from_kind(input, kind)
})
}
}
#[derive(Debug)]
pub struct Error<'a> {
context: Span<'a>,
@@ -58,9 +75,21 @@ pub enum ExpectedValueKind {
pub enum ErrorKind<'a> {
ReservedGeo(&'a str),
GeoRadius,
GeoRadiusArgumentCount(usize),
GeoBoundingBox,
GeoPolygon,
GeoPolygonNotEnoughPoints(usize),
GeoCoordinatesNotPair(usize),
MisusedGeoRadius,
MisusedGeoBoundingBox,
VectorFilterLeftover,
VectorFilterInvalidQuotes,
VectorFilterMissingEmbedder,
VectorFilterInvalidEmbedder,
VectorFilterMissingFragment,
VectorFilterInvalidFragment,
VectorFilterUnknownSuffix(Option<&'static str>, String),
VectorFilterOperation,
InvalidPrimary,
InvalidEscapedNumber,
ExpectedEof,
@@ -91,6 +120,10 @@ impl<'a> Error<'a> {
Self { context, kind }
}
pub fn failure_from_kind(context: Span<'a>, kind: ErrorKind<'a>) -> nom::Err<Self> {
nom::Err::Failure(Self::new_from_kind(context, kind))
}
pub fn new_from_external(context: Span<'a>, error: impl std::error::Error) -> Self {
Self::new_from_kind(context, ErrorKind::External(error.to_string()))
}
@@ -128,6 +161,20 @@ impl Display for Error<'_> {
// first line being the diagnostic and the second line being the incriminated filter.
let escaped_input = input.escape_debug();
fn key_suggestion<'a>(key: &str, keys: &[&'a str]) -> Option<&'a str> {
let typos =
levenshtein_automata::LevenshteinAutomatonBuilder::new(2, true).build_dfa(key);
for key in keys.iter() {
match typos.eval(key) {
levenshtein_automata::Distance::Exact(_) => {
return Some(key);
}
levenshtein_automata::Distance::AtLeast(_) => continue,
}
}
None
}
match &self.kind {
ErrorKind::ExpectedValue(_) if input.trim().is_empty() => {
writeln!(f, "Was expecting a value but instead got nothing.")?
@@ -146,7 +193,7 @@ impl Display for Error<'_> {
}
ErrorKind::InvalidPrimary => {
let text = if input.trim().is_empty() { "but instead got nothing.".to_string() } else { format!("at `{}`.", escaped_input) };
writeln!(f, "Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` {}", text)?
writeln!(f, "Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` {text}")?
}
ErrorKind::InvalidEscapedNumber => {
writeln!(f, "Found an invalid escaped sequence number: `{}`.", escaped_input)?
@@ -155,11 +202,23 @@ impl Display for Error<'_> {
writeln!(f, "Found unexpected characters at the end of the filter: `{}`. You probably forgot an `OR` or an `AND` rule.", escaped_input)?
}
ErrorKind::GeoRadius => {
writeln!(f, "The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.")?
writeln!(f, "The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.")?
}
ErrorKind::GeoRadiusArgumentCount(count) => {
writeln!(f, "Was expecting 3 or 4 arguments for `_geoRadius`, but instead found {count}.")?
}
ErrorKind::GeoBoundingBox => {
writeln!(f, "The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.")?
}
ErrorKind::GeoPolygon => {
writeln!(f, "The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.")?
}
ErrorKind::GeoPolygonNotEnoughPoints(n) => {
writeln!(f, "The `_geoPolygon` filter expects at least 3 points but only {n} were specified")?;
}
ErrorKind::GeoCoordinatesNotPair(number) => {
writeln!(f, "Was expecting 2 coordinates but instead found {number}.")?
}
ErrorKind::ReservedGeo(name) => {
writeln!(f, "`{}` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.", name.escape_debug())?
}
@@ -169,6 +228,44 @@ impl Display for Error<'_> {
ErrorKind::MisusedGeoBoundingBox => {
writeln!(f, "The `_geoBoundingBox` filter is an operation and can't be used as a value.")?
}
ErrorKind::VectorFilterLeftover => {
writeln!(f, "The vector filter has leftover tokens.")?
}
ErrorKind::VectorFilterUnknownSuffix(_, value) if value.as_str() == "." => {
writeln!(f, "Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.")?;
}
ErrorKind::VectorFilterUnknownSuffix(None, value) if ["fragments", "userProvided", "documentTemplate", "regenerate"].contains(&value.as_str()) => {
// This will happen with "_vectors.rest.\"userProvided\"" for instance
writeln!(f, "Was expecting this part to be unquoted.")?
}
ErrorKind::VectorFilterUnknownSuffix(None, value) => {
if let Some(suggestion) = key_suggestion(value, &["fragments", "userProvided", "documentTemplate", "regenerate"]) {
writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`. Did you mean `{suggestion}`?")?;
} else {
writeln!(f, "Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `{value}`.")?;
}
}
ErrorKind::VectorFilterUnknownSuffix(Some(previous_filter_kind), value) => {
writeln!(f, "Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `{previous_filter_kind}` and `{value}`.")?
},
ErrorKind::VectorFilterInvalidFragment => {
writeln!(f, "The vector filter's fragment name is invalid.")?
}
ErrorKind::VectorFilterMissingFragment => {
writeln!(f, "The vector filter is missing a fragment name.")?
}
ErrorKind::VectorFilterMissingEmbedder => {
writeln!(f, "Was expecting embedder name but found nothing.")?
}
ErrorKind::VectorFilterInvalidEmbedder => {
writeln!(f, "The vector filter's embedder name is invalid.")?
}
ErrorKind::VectorFilterOperation => {
writeln!(f, "Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.")?
}
ErrorKind::VectorFilterInvalidQuotes => {
writeln!(f, "The quotes in one of the values are inconsistent.")?
}
ErrorKind::ReservedKeyword(word) => {
writeln!(f, "`{word}` is a reserved keyword and thus cannot be used as a field name unless it is put inside quotes. Use \"{word}\" or \'{word}\' instead.")?
}
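// Editor's note: standalone sketch of the typo-suggestion behaviour used by
// `key_suggestion` above. It assumes nothing beyond the `levenshtein_automata`
// API already called in this file; the DFA accepts candidates within an edit
// distance of 2, counting a transposition as a single edit.
use levenshtein_automata::{Distance, LevenshteinAutomatonBuilder};

fn suggest<'a>(key: &str, candidates: &[&'a str]) -> Option<&'a str> {
    let dfa = LevenshteinAutomatonBuilder::new(2, true).build_dfa(key);
    candidates.iter().copied().find(|candidate| matches!(dfa.eval(candidate), Distance::Exact(_)))
}

fn main() {
    // "fargments" is one transposition away from "fragments", so it is suggested.
    assert_eq!(suggest("fargments", &["fragments", "userProvided"]), Some("fragments"));
    // "embedded" is more than two edits away from every candidate: no suggestion.
    assert_eq!(suggest("embedded", &["fragments", "userProvided"]), None);
}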

View File

@@ -19,6 +19,7 @@
//! word = (alphanumeric | _ | - | .)+
//! geoRadius = "_geoRadius(" WS* float WS* "," WS* float WS* "," float WS* ")"
//! geoBoundingBox = "_geoBoundingBox([" WS * float WS* "," WS* float WS* "], [" WS* float WS* "," WS* float WS* "]")
//! geoPolygon = "_geoPolygon([[" WS* float WS* "," WS* float WS* "],+])"
//! ```
//!
//! Other BNF grammar used to handle some specific errors:
@@ -65,6 +66,9 @@ use nom_locate::LocatedSpan;
pub(crate) use value::parse_value;
use value::word_exact;
use crate::condition::parse_vectors_exists;
use crate::error::IResultExt;
pub type Span<'a> = LocatedSpan<&'a str, &'a str>;
type IResult<'a, Ret> = nom::IResult<Span<'a>, Ret, Error<'a>>;
@@ -113,7 +117,7 @@ impl<'a> Token<'a> {
self.span
}
pub fn parse_finite_float(&self) -> Result<f64, Error> {
pub fn parse_finite_float(&self) -> Result<f64, Error<'a>> {
let value: f64 = self.value().parse().map_err(|e| self.as_external_error(e))?;
if value.is_finite() {
Ok(value)
@@ -136,6 +140,15 @@ impl<'a> From<&'a str> for Token<'a> {
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum VectorFilter<'a> {
Fragment(Token<'a>),
DocumentTemplate,
UserProvided,
Regenerate,
None,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum FilterCondition<'a> {
Not(Box<Self>),
@@ -143,8 +156,10 @@ pub enum FilterCondition<'a> {
In { fid: Token<'a>, els: Vec<Token<'a>> },
Or(Vec<Self>),
And(Vec<Self>),
GeoLowerThan { point: [Token<'a>; 2], radius: Token<'a> },
VectorExists { fid: Token<'a>, embedder: Option<Token<'a>>, filter: VectorFilter<'a> },
GeoLowerThan { point: [Token<'a>; 2], radius: Token<'a>, resolution: Option<Token<'a>> },
GeoBoundingBox { top_right_point: [Token<'a>; 2], bottom_left_point: [Token<'a>; 2] },
GeoPolygon { points: Vec<[Token<'a>; 2]> },
}
pub enum TraversedElement<'a> {
@@ -153,7 +168,7 @@ pub enum TraversedElement<'a> {
}
impl<'a> FilterCondition<'a> {
pub fn use_contains_operator(&self) -> Option<&Token> {
pub fn use_contains_operator(&self) -> Option<&Token<'a>> {
match self {
FilterCondition::Condition { fid: _, op } => match op {
Condition::GreaterThan(_)
@@ -173,13 +188,30 @@ impl<'a> FilterCondition<'a> {
FilterCondition::Or(seq) | FilterCondition::And(seq) => {
seq.iter().find_map(|filter| filter.use_contains_operator())
}
FilterCondition::GeoLowerThan { .. }
FilterCondition::VectorExists { .. }
| FilterCondition::GeoLowerThan { .. }
| FilterCondition::GeoBoundingBox { .. }
| FilterCondition::GeoPolygon { .. }
| FilterCondition::In { .. } => None,
}
}
pub fn fids(&self, depth: usize) -> Box<dyn Iterator<Item = &Token> + '_> {
pub fn use_vector_filter(&self) -> Option<&Token<'a>> {
match self {
FilterCondition::Condition { .. } => None,
FilterCondition::Not(this) => this.use_vector_filter(),
FilterCondition::Or(seq) | FilterCondition::And(seq) => {
seq.iter().find_map(|filter| filter.use_vector_filter())
}
FilterCondition::GeoLowerThan { .. }
| FilterCondition::GeoBoundingBox { .. }
| FilterCondition::GeoPolygon { .. }
| FilterCondition::In { .. } => None,
FilterCondition::VectorExists { fid, .. } => Some(fid),
}
}
pub fn fids(&self, depth: usize) -> Box<dyn Iterator<Item = &Token<'a>> + '_> {
if depth == 0 {
return Box::new(std::iter::empty());
}
@@ -200,7 +232,7 @@ impl<'a> FilterCondition<'a> {
}
/// Returns the first token found at the specified depth, `None` if no token at this depth.
pub fn token_at_depth(&self, depth: usize) -> Option<&Token> {
pub fn token_at_depth(&self, depth: usize) -> Option<&Token<'a>> {
match self {
FilterCondition::Condition { fid, .. } if depth == 0 => Some(fid),
FilterCondition::Or(subfilters) => {
@@ -263,10 +295,7 @@ fn parse_in_body(input: Span) -> IResult<Vec<Token>> {
let (input, _) = ws(word_exact("IN"))(input)?;
// everything after `IN` can be a failure
let (input, _) =
cut_with_err(tag("["), |_| Error::new_from_kind(input, ErrorKind::InOpeningBracket))(
input,
)?;
let (input, _) = tag("[")(input).map_cut(ErrorKind::InOpeningBracket)?;
let (input, content) = cut(parse_value_list)(input)?;
@@ -371,23 +400,27 @@ fn parse_not(input: Span, depth: usize) -> IResult<FilterCondition> {
/// If we parse `_geoRadius` we MUST parse the rest of the expression.
fn parse_geo_radius(input: Span) -> IResult<FilterCondition> {
// we want to allow space BEFORE the _geoRadius but not after
let parsed = preceded(
tuple((multispace0, word_exact("_geoRadius"))),
// if we were able to parse `_geoRadius` and can't parse the rest of the input we return a failure
cut(delimited(char('('), separated_list1(tag(","), ws(recognize_float)), char(')'))),
)(input)
.map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::GeoRadius)));
let (input, _) = tuple((multispace0, word_exact("_geoRadius")))(input)?;
// if we were able to parse `_geoRadius` and can't parse the rest of the input we return a failure
let parsed =
delimited(char('('), separated_list1(tag(","), ws(recognize_float)), char(')'))(input)
.map_cut(ErrorKind::GeoRadius);
let (input, args) = parsed?;
if args.len() != 3 {
return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::GeoRadius)));
if !(3..=4).contains(&args.len()) {
return Err(Error::failure_from_kind(input, ErrorKind::GeoRadiusArgumentCount(args.len())));
}
let res = FilterCondition::GeoLowerThan {
point: [args[0].into(), args[1].into()],
radius: args[2].into(),
resolution: args.get(3).cloned().map(Token::from),
};
Ok((input, res))
}
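// Editor's note: hedged usage sketch for the optional resolution argument above.
// It assumes the crate's public `FilterCondition::parse` entry point (returning
// `Result<Option<FilterCondition>, Error>`), which is not shown in this diff.
use filter_parser::FilterCondition;

fn main() {
    // Three arguments: latitude, longitude, radius.
    assert!(FilterCondition::parse("_geoRadius(45.47, 9.18, 2000)").is_ok());
    // Four arguments: the trailing value becomes the optional resolution token.
    assert!(FilterCondition::parse("_geoRadius(45.47, 9.18, 2000, 512)").is_ok());
    // Any other arity is rejected with `GeoRadiusArgumentCount`.
    assert!(FilterCondition::parse("_geoRadius(45.47, 9.18)").is_err());
}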
@@ -395,24 +428,31 @@ fn parse_geo_radius(input: Span) -> IResult<FilterCondition> {
/// If we parse `_geoBoundingBox` we MUST parse the rest of the expression.
fn parse_geo_bounding_box(input: Span) -> IResult<FilterCondition> {
// we want to allow space BEFORE the _geoBoundingBox but not after
let parsed = preceded(
tuple((multispace0, word_exact("_geoBoundingBox"))),
// if we were able to parse `_geoBoundingBox` and can't parse the rest of the input we return a failure
cut(delimited(
char('('),
separated_list1(
tag(","),
ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
),
char(')'),
)),
let (input, _) = tuple((multispace0, word_exact("_geoBoundingBox")))(input)?;
// if we were able to parse `_geoBoundingBox` and can't parse the rest of the input we return a failure
let (input, args) = delimited(
char('('),
separated_list1(
tag(","),
ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
),
char(')'),
)(input)
.map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::GeoBoundingBox)));
.map_cut(ErrorKind::GeoBoundingBox)?;
let (input, args) = parsed?;
if args.len() != 2 {
return Err(Error::failure_from_kind(input, ErrorKind::GeoBoundingBox));
}
if args.len() != 2 || args[0].len() != 2 || args[1].len() != 2 {
return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::GeoBoundingBox)));
if let Some(offending) = args.iter().find(|a| a.len() != 2) {
let context = offending.first().unwrap_or(&input);
return Err(Error::failure_from_kind(
*context,
ErrorKind::GeoCoordinatesNotPair(offending.len()),
));
}
let res = FilterCondition::GeoBoundingBox {
@@ -422,6 +462,47 @@ fn parse_geo_bounding_box(input: Span) -> IResult<FilterCondition> {
Ok((input, res))
}
/// geoPolygon = "_geoPolygon([[" WS* float WS* "," WS* float WS* "],+])"
/// If we parse `_geoPolygon` we MUST parse the rest of the expression.
fn parse_geo_polygon(input: Span) -> IResult<FilterCondition> {
// we want to allow space BEFORE the _geoPolygon but not after
let (input, _) = tuple((multispace0, word_exact("_geoPolygon")))(input)?;
// if we were able to parse `_geoPolygon` and can't parse the rest of the input we return a failure
let (input, args): (_, Vec<Vec<LocatedSpan<_, _>>>) = delimited(
char('('),
separated_list1(
tag(","),
ws(delimited(char('['), separated_list1(tag(","), ws(recognize_float)), char(']'))),
),
preceded(opt(ws(char(','))), char(')')), // Tolerate trailing comma
)(input)
.map_cut(ErrorKind::GeoPolygon)?;
if args.len() < 3 {
let context = args.last().and_then(|a| a.last()).unwrap_or(&input);
return Err(Error::failure_from_kind(
*context,
ErrorKind::GeoPolygonNotEnoughPoints(args.len()),
));
}
if let Some(offending) = args.iter().find(|a| a.len() != 2) {
let context = offending.first().unwrap_or(&input);
return Err(Error::failure_from_kind(
*context,
ErrorKind::GeoCoordinatesNotPair(offending.len()),
));
}
let res = FilterCondition::GeoPolygon {
points: args.into_iter().map(|a| [a[0].into(), a[1].into()]).collect(),
};
Ok((input, res))
}
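// Editor's note: hedged sketch of `_geoPolygon` inputs accepted and rejected by the
// parser above, assuming the same public `FilterCondition::parse` entry point
// (not part of this diff).
use filter_parser::FilterCondition;

fn main() {
    // At least three [latitude, longitude] pairs are required; a trailing comma is tolerated.
    assert!(FilterCondition::parse("_geoPolygon([12, 13], [14, 15], [16, 17],)").is_ok());
    // Fewer than three points fails with `GeoPolygonNotEnoughPoints`.
    assert!(FilterCondition::parse("_geoPolygon([12, 13], [14, 15])").is_err());
    // A point that is not a latitude/longitude pair fails with `GeoCoordinatesNotPair`.
    assert!(FilterCondition::parse("_geoPolygon([12, 13], [14, 15], [16, 17, 18])").is_err());
}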
/// geoPoint = WS* "_geoPoint(float WS* "," WS* float WS* "," WS* float)
fn parse_geo_point(input: Span) -> IResult<FilterCondition> {
// we want to forbid space BEFORE the _geoPoint but not after
@@ -433,7 +514,7 @@ fn parse_geo_point(input: Span) -> IResult<FilterCondition> {
))(input)
.map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))?;
// if we succeeded we still return a `Failure` because geoPoints are not allowed
Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoPoint"))))
Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoPoint")))
}
/// geoPoint = WS* "_geoDistance(float WS* "," WS* float WS* "," WS* float)
@@ -447,7 +528,7 @@ fn parse_geo_distance(input: Span) -> IResult<FilterCondition> {
))(input)
.map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))?;
// if we succeeded we still return a `Failure` because `geoDistance` filters are not allowed
Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geoDistance"))))
Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geoDistance")))
}
/// geo = WS* "_geo(float WS* "," WS* float WS* "," WS* float)
@@ -461,7 +542,7 @@ fn parse_geo(input: Span) -> IResult<FilterCondition> {
))(input)
.map_err(|e| e.map(|_| Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))?;
// if we succeeded we still return a `Failure` because `_geo` filter is not allowed
Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::ReservedGeo("_geo"))))
Err(Error::failure_from_kind(input, ErrorKind::ReservedGeo("_geo")))
}
fn parse_error_reserved_keyword(input: Span) -> IResult<FilterCondition> {
@@ -491,8 +572,8 @@ fn parse_primary(input: Span, depth: usize) -> IResult<FilterCondition> {
Error::new_from_kind(input, ErrorKind::MissingClosingDelimiter(c.char()))
}),
),
parse_geo_radius,
parse_geo_bounding_box,
// Group these parsers into a nested `alt` because we reached the maximum number of elements per `alt`
alt((parse_geo_radius, parse_geo_bounding_box, parse_geo_polygon)),
parse_in,
parse_not_in,
parse_condition,
@@ -500,8 +581,7 @@ fn parse_primary(input: Span, depth: usize) -> IResult<FilterCondition> {
parse_is_not_null,
parse_is_empty,
parse_is_not_empty,
parse_exists,
parse_not_exists,
alt((parse_vectors_exists, parse_exists, parse_not_exists)),
parse_to,
parse_contains,
parse_not_contains,
@@ -557,9 +637,28 @@ impl std::fmt::Display for FilterCondition<'_> {
}
write!(f, "]")
}
FilterCondition::GeoLowerThan { point, radius } => {
FilterCondition::VectorExists { fid: _, embedder, filter: inner } => {
write!(f, "_vectors")?;
if let Some(embedder) = embedder {
write!(f, ".{:?}", embedder.value())?;
}
match inner {
VectorFilter::Fragment(fragment) => {
write!(f, ".fragments.{:?}", fragment.value())?
}
VectorFilter::DocumentTemplate => write!(f, ".documentTemplate")?,
VectorFilter::UserProvided => write!(f, ".userProvided")?,
VectorFilter::Regenerate => write!(f, ".regenerate")?,
VectorFilter::None => (),
}
write!(f, " EXISTS")
}
FilterCondition::GeoLowerThan { point, radius, resolution: None } => {
write!(f, "_geoRadius({}, {}, {})", point[0], point[1], radius)
}
FilterCondition::GeoLowerThan { point, radius, resolution: Some(resolution) } => {
write!(f, "_geoRadius({}, {}, {}, {})", point[0], point[1], radius, resolution)
}
FilterCondition::GeoBoundingBox {
top_right_point: top_left_point,
bottom_left_point: bottom_right_point,
@@ -573,6 +672,13 @@ impl std::fmt::Display for FilterCondition<'_> {
bottom_right_point[1]
)
}
FilterCondition::GeoPolygon { points } => {
write!(f, "_geoPolygon([")?;
for point in points {
write!(f, "[{}, {}], ", point[0], point[1])?;
}
write!(f, "])")
}
}
}
}
@@ -611,7 +717,7 @@ pub mod tests {
/// Create a raw [Token]. You must specify the string that appears BEFORE your element, followed by your element.
pub fn rtok<'a>(before: &'a str, value: &'a str) -> Token<'a> {
// if the string is empty we still need to return 1 for the line number
let lines = before.is_empty().then_some(1).unwrap_or_else(|| before.lines().count());
let lines = if before.is_empty() { 1 } else { before.lines().count() };
let offset = before.chars().count();
// the extra field is not checked in the tests so we can set it to nothing
unsafe { Span::new_from_raw_offset(offset, lines as u32, value, "") }.into()
@@ -630,6 +736,9 @@ pub mod tests {
insta::assert_snapshot!(p(r"title = 'foo\\\\\\\\'"), @r#"{title} = {foo\\\\}"#);
// but it also works with other sequences
insta::assert_snapshot!(p(r#"title = 'foo\x20\n\t\"\'"'"#), @"{title} = {foo \n\t\"\'\"}");
insta::assert_snapshot!(p(r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#), @r#"_vectors." valid.name ".fragments."also.. valid! " EXISTS"#);
insta::assert_snapshot!(p("_vectors.\"\n\t\r\\\"\" EXISTS"), @r#"_vectors."\n\t\r\"" EXISTS"#);
}
#[test]
@@ -692,6 +801,18 @@ pub mod tests {
insta::assert_snapshot!(p("NOT subscribers IS NOT EMPTY"), @"{subscribers} IS EMPTY");
insta::assert_snapshot!(p("subscribers IS NOT EMPTY"), @"NOT ({subscribers} IS EMPTY)");
// Test _vectors EXISTS + _vectors NOT EXISTS
insta::assert_snapshot!(p("_vectors EXISTS"), @"_vectors EXISTS");
insta::assert_snapshot!(p("_vectors.embedderName EXISTS"), @r#"_vectors."embedderName" EXISTS"#);
insta::assert_snapshot!(p("_vectors.embedderName.documentTemplate EXISTS"), @r#"_vectors."embedderName".documentTemplate EXISTS"#);
insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
insta::assert_snapshot!(p("_vectors.embedderName.regenerate EXISTS"), @r#"_vectors."embedderName".regenerate EXISTS"#);
insta::assert_snapshot!(p("_vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
insta::assert_snapshot!(p(" _vectors.embedderName.fragments.fragmentName EXISTS"), @r#"_vectors."embedderName".fragments."fragmentName" EXISTS"#);
insta::assert_snapshot!(p("NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
insta::assert_snapshot!(p(" NOT _vectors EXISTS"), @"NOT (_vectors EXISTS)");
insta::assert_snapshot!(p(" _vectors NOT EXISTS"), @"NOT (_vectors EXISTS)");
// Test EXISTS + NOT EXISTS
insta::assert_snapshot!(p("subscribers EXISTS"), @"{subscribers} EXISTS");
insta::assert_snapshot!(p("NOT subscribers EXISTS"), @"NOT ({subscribers} EXISTS)");
@@ -721,12 +842,17 @@ pub mod tests {
insta::assert_snapshot!(p("_geoRadius(12, 13, 14)"), @"_geoRadius({12}, {13}, {14})");
insta::assert_snapshot!(p("NOT _geoRadius(12, 13, 14)"), @"NOT (_geoRadius({12}, {13}, {14}))");
insta::assert_snapshot!(p("_geoRadius(12,13,14)"), @"_geoRadius({12}, {13}, {14})");
insta::assert_snapshot!(p("_geoRadius(12,13,14,1000)"), @"_geoRadius({12}, {13}, {14}, {1000})");
// Test geo bounding box
insta::assert_snapshot!(p("_geoBoundingBox([12, 13], [14, 15])"), @"_geoBoundingBox([{12}, {13}], [{14}, {15}])");
insta::assert_snapshot!(p("NOT _geoBoundingBox([12, 13], [14, 15])"), @"NOT (_geoBoundingBox([{12}, {13}], [{14}, {15}]))");
insta::assert_snapshot!(p("_geoBoundingBox([12,13],[14,15])"), @"_geoBoundingBox([{12}, {13}], [{14}, {15}])");
// Test geo polygon
insta::assert_snapshot!(p("_geoPolygon([12, 13], [14, 15], [16, 17])"), @"_geoPolygon([[{12}, {13}], [{14}, {15}], [{16}, {17}], ])");
insta::assert_snapshot!(p("_geoPolygon([12, 13], [14, 15], [-1.2,2939.2], [1,1])"), @"_geoPolygon([[{12}, {13}], [{14}, {15}], [{-1.2}, {2939.2}], [{1}, {1}], ])");
// Test OR + AND
insta::assert_snapshot!(p("channel = ponce AND 'dog race' != 'bernese mountain'"), @"AND[{channel} = {ponce}, {dog race} != {bernese mountain}, ]");
insta::assert_snapshot!(p("channel = ponce OR 'dog race' != 'bernese mountain'"), @"OR[{channel} = {ponce}, {dog race} != {bernese mountain}, ]");
@@ -783,50 +909,80 @@ pub mod tests {
11:12 channel = 🐻 AND followers < 100
"###);
insta::assert_snapshot!(p("'OR'"), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `\'OR\'`.
insta::assert_snapshot!(p("'OR'"), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `\'OR\'`.
1:5 'OR'
"###);
");
insta::assert_snapshot!(p("OR"), @r###"
Was expecting a value but instead got `OR`, which is a reserved keyword. To use `OR` as a field name or a value, surround it by quotes.
1:3 OR
"###);
insta::assert_snapshot!(p("channel Ponce"), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `channel Ponce`.
insta::assert_snapshot!(p("channel Ponce"), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `channel Ponce`.
1:14 channel Ponce
"###);
");
insta::assert_snapshot!(p("channel = Ponce OR"), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` but instead got nothing.
insta::assert_snapshot!(p("channel = Ponce OR"), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` but instead got nothing.
19:19 channel = Ponce OR
"###);
");
insta::assert_snapshot!(p("_geoRadius"), @r###"
The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.
1:11 _geoRadius
"###);
insta::assert_snapshot!(p("_geoRadius"), @r"
The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.
11:11 _geoRadius
");
insta::assert_snapshot!(p("_geoRadius = 12"), @r###"
The `_geoRadius` filter expects three arguments: `_geoRadius(latitude, longitude, radius)`.
1:16 _geoRadius = 12
"###);
insta::assert_snapshot!(p("_geoRadius = 12"), @r"
The `_geoRadius` filter must be in the form: `_geoRadius(latitude, longitude, radius, optionalResolution)`.
11:16 _geoRadius = 12
");
insta::assert_snapshot!(p("_geoBoundingBox"), @r###"
insta::assert_snapshot!(p("_geoBoundingBox"), @r"
The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
1:16 _geoBoundingBox
"###);
16:16 _geoBoundingBox
");
insta::assert_snapshot!(p("_geoBoundingBox = 12"), @r###"
insta::assert_snapshot!(p("_geoBoundingBox = 12"), @r"
The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
1:21 _geoBoundingBox = 12
"###);
16:21 _geoBoundingBox = 12
");
insta::assert_snapshot!(p("_geoBoundingBox(1.0, 1.0)"), @r###"
insta::assert_snapshot!(p("_geoBoundingBox(1.0, 1.0)"), @r"
The `_geoBoundingBox` filter expects two pairs of arguments: `_geoBoundingBox([latitude, longitude], [latitude, longitude])`.
1:26 _geoBoundingBox(1.0, 1.0)
"###);
17:26 _geoBoundingBox(1.0, 1.0)
");
insta::assert_snapshot!(p("_geoPolygon([1,2,3])"), @r"
The `_geoPolygon` filter expects at least 3 points but only 1 were specified
18:19 _geoPolygon([1,2,3])
");
insta::assert_snapshot!(p("_geoPolygon(1,2,3)"), @r"
The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
13:19 _geoPolygon(1,2,3)
");
insta::assert_snapshot!(p("_geoPolygon([1,2],[1,2],[1,2,3])"), @r"
Was expecting 2 coordinates but instead found 3.
26:27 _geoPolygon([1,2],[1,2],[1,2,3])
");
insta::assert_snapshot!(p("_geoPolygon([1,2],[1,2,3])"), @r"
The `_geoPolygon` filter expects at least 3 points but only 2 were specified
24:25 _geoPolygon([1,2],[1,2,3])
");
insta::assert_snapshot!(p("_geoPolygon(1)"), @r"
The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
13:15 _geoPolygon(1)
");
insta::assert_snapshot!(p("_geoPolygon([1,2)"), @r"
The `_geoPolygon` filter doesn't match the expected format: `_geoPolygon([latitude, longitude], [latitude, longitude])`.
17:18 _geoPolygon([1,2)
");
insta::assert_snapshot!(p("_geoPoint(12, 13, 14)"), @r###"
`_geoPoint` is a reserved keyword and thus can't be used as a filter expression. Use the `_geoRadius(latitude, longitude, distance)` or `_geoBoundingBox([latitude, longitude], [latitude, longitude])` built-in rules to filter on `_geo` coordinates.
@@ -883,15 +1039,15 @@ pub mod tests {
34:35 channel = mv OR followers >= 1000)
"###);
insta::assert_snapshot!(p("colour NOT EXIST"), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `colour NOT EXIST`.
insta::assert_snapshot!(p("colour NOT EXIST"), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `colour NOT EXIST`.
1:17 colour NOT EXIST
"###);
");
insta::assert_snapshot!(p("subscribers 100 TO1000"), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `subscribers 100 TO1000`.
insta::assert_snapshot!(p("subscribers 100 TO1000"), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `subscribers 100 TO1000`.
1:23 subscribers 100 TO1000
"###);
");
insta::assert_snapshot!(p("channel = ponce ORdog != 'bernese mountain'"), @r###"
Found unexpected characters at the end of the filter: `ORdog != \'bernese mountain\'`. You probably forgot an `OR` or an `AND` rule.
@@ -946,43 +1102,108 @@ pub mod tests {
"###
);
insta::assert_snapshot!(p(r#"_vectors _vectors EXISTS"#), @r"
Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
10:25 _vectors _vectors EXISTS
");
insta::assert_snapshot!(p(r#"_vectors. embedderName EXISTS"#), @r"
Was expecting embedder name but found nothing.
10:11 _vectors. embedderName EXISTS
");
insta::assert_snapshot!(p(r#"_vectors .embedderName EXISTS"#), @r"
Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
10:30 _vectors .embedderName EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName. EXISTS"#), @r"
Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
22:23 _vectors.embedderName. EXISTS
");
insta::assert_snapshot!(p(r#"_vectors."embedderName EXISTS"#), @r#"
The quotes in one of the values are inconsistent.
10:30 _vectors."embedderName EXISTS
"#);
insta::assert_snapshot!(p(r#"_vectors."embedderNam"e EXISTS"#), @r#"
The vector filter has leftover tokens.
23:31 _vectors."embedderNam"e EXISTS
"#);
insta::assert_snapshot!(p(r#"_vectors.embedderName.documentTemplate. EXISTS"#), @r"
Was expecting one of `.fragments`, `.userProvided`, `.documentTemplate`, `.regenerate` or nothing, but instead found a point without a valid value.
39:40 _vectors.embedderName.documentTemplate. EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments EXISTS"#), @r"
The vector filter is missing a fragment name.
32:39 _vectors.embedderName.fragments EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. EXISTS"#), @r"
The vector filter's fragment name is invalid.
33:40 _vectors.embedderName.fragments. EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments.test test EXISTS"#), @r"
Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
38:49 _vectors.embedderName.fragments.test test EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName.fragments. test EXISTS"#), @r"
The vector filter's fragment name is invalid.
33:45 _vectors.embedderName.fragments. test EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments. test EXISTS"#), @r"
Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
23:46 _vectors.embedderName .fragments. test EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName .fragments.test EXISTS"#), @r"
Was expecting an operation like `EXISTS` or `NOT EXISTS` after the vector filter.
23:45 _vectors.embedderName .fragments.test EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName.fargments.test EXISTS"#), @r"
Was expecting one of `fragments`, `userProvided`, `documentTemplate`, `regenerate` or nothing, but instead found `fargments`. Did you mean `fragments`?
23:32 _vectors.embedderName.fargments.test EXISTS
");
insta::assert_snapshot!(p(r#"_vectors.embedderName."userProvided" EXISTS"#), @r#"
Was expecting this part to be unquoted.
24:36 _vectors.embedderName."userProvided" EXISTS
"#);
insta::assert_snapshot!(p(r#"_vectors.embedderName.userProvided.fragments.test EXISTS"#), @r"
Vector filter can only accept one of `fragments`, `userProvided`, `documentTemplate` or `regenerate`, but found both `userProvided` and `fragments`.
36:45 _vectors.embedderName.userProvided.fragments.test EXISTS
");
insta::assert_snapshot!(p(r#"NOT OR EXISTS AND EXISTS NOT EXISTS"#), @r###"
Was expecting a value but instead got `OR`, which is a reserved keyword. To use `OR` as a field name or a value, surround it by quotes.
5:7 NOT OR EXISTS AND EXISTS NOT EXISTS
"###);
insta::assert_snapshot!(p(r#"value NULL"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NULL`.
insta::assert_snapshot!(p(r#"value NULL"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NULL`.
1:11 value NULL
"###);
insta::assert_snapshot!(p(r#"value NOT NULL"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NOT NULL`.
");
insta::assert_snapshot!(p(r#"value NOT NULL"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NOT NULL`.
1:15 value NOT NULL
"###);
insta::assert_snapshot!(p(r#"value EMPTY"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value EMPTY`.
");
insta::assert_snapshot!(p(r#"value EMPTY"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value EMPTY`.
1:12 value EMPTY
"###);
insta::assert_snapshot!(p(r#"value NOT EMPTY"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value NOT EMPTY`.
");
insta::assert_snapshot!(p(r#"value NOT EMPTY"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value NOT EMPTY`.
1:16 value NOT EMPTY
"###);
insta::assert_snapshot!(p(r#"value IS"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS`.
");
insta::assert_snapshot!(p(r#"value IS"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS`.
1:9 value IS
"###);
insta::assert_snapshot!(p(r#"value IS NOT"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS NOT`.
");
insta::assert_snapshot!(p(r#"value IS NOT"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS NOT`.
1:13 value IS NOT
"###);
insta::assert_snapshot!(p(r#"value IS EXISTS"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS EXISTS`.
");
insta::assert_snapshot!(p(r#"value IS EXISTS"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS EXISTS`.
1:16 value IS EXISTS
"###);
insta::assert_snapshot!(p(r#"value IS NOT EXISTS"#), @r###"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, or `_geoBoundingBox` at `value IS NOT EXISTS`.
");
insta::assert_snapshot!(p(r#"value IS NOT EXISTS"#), @r"
Was expecting an operation `=`, `!=`, `>=`, `>`, `<=`, `<`, `IN`, `NOT IN`, `TO`, `EXISTS`, `NOT EXISTS`, `IS NULL`, `IS NOT NULL`, `IS EMPTY`, `IS NOT EMPTY`, `CONTAINS`, `NOT CONTAINS`, `STARTS WITH`, `NOT STARTS WITH`, `_geoRadius`, `_geoBoundingBox` or `_geoPolygon` at `value IS NOT EXISTS`.
1:20 value IS NOT EXISTS
"###);
");
}
#[test]

View File

@@ -80,6 +80,51 @@ pub fn word_exact<'a, 'b: 'a>(tag: &'b str) -> impl Fn(Span<'a>) -> IResult<'a,
}
}
/// vector_value = ( non_dot_word | singleQuoted | doubleQuoted)
pub fn parse_vector_value(input: Span) -> IResult<Token> {
pub fn non_dot_word(input: Span) -> IResult<Token> {
let (input, word) = take_while1(|c| is_value_component(c) && c != '.')(input)?;
Ok((input, word.into()))
}
let (input, value) = alt((
delimited(char('\''), cut(|input| quoted_by('\'', input)), cut(char('\''))),
delimited(char('"'), cut(|input| quoted_by('"', input)), cut(char('"'))),
non_dot_word,
))(input)?;
match unescaper::unescape(value.value()) {
Ok(content) => {
if content.len() != value.value().len() {
Ok((input, Token::new(value.original_span(), Some(content))))
} else {
Ok((input, value))
}
}
Err(unescaper::Error::IncompleteStr(_)) => Err(nom::Err::Incomplete(nom::Needed::Unknown)),
Err(unescaper::Error::ParseIntError { .. }) => Err(nom::Err::Error(Error::new_from_kind(
value.original_span(),
ErrorKind::InvalidEscapedNumber,
))),
Err(unescaper::Error::InvalidChar { .. }) => Err(nom::Err::Error(Error::new_from_kind(
value.original_span(),
ErrorKind::MalformedValue,
))),
}
}
pub fn parse_vector_value_cut<'a>(input: Span<'a>, kind: ErrorKind<'a>) -> IResult<'a, Token<'a>> {
parse_vector_value(input).map_err(|e| match e {
nom::Err::Failure(e) => match e.kind() {
ErrorKind::Char(c) if *c == '"' || *c == '\'' => {
crate::Error::failure_from_kind(input, ErrorKind::VectorFilterInvalidQuotes)
}
_ => crate::Error::failure_from_kind(input, kind),
},
_ => crate::Error::failure_from_kind(input, kind),
})
}
/// value = WS* ( word | singleQuoted | doubleQuoted) WS+
pub fn parse_value(input: Span) -> IResult<Token> {
// to get better diagnostic message we are going to strip the left whitespaces from the input right now
@@ -99,31 +144,21 @@ pub fn parse_value(input: Span) -> IResult<Token> {
}
match parse_geo_radius(input) {
Ok(_) => {
return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
}
Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius)),
// if we encountered a failure it means the user badly wrote a _geoRadius filter.
// But instead of showing them how to fix their syntax we are going to tell them they should not use this filter as a value.
Err(e) if e.is_failure() => {
return Err(nom::Err::Failure(Error::new_from_kind(input, ErrorKind::MisusedGeoRadius)))
return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoRadius))
}
_ => (),
}
match parse_geo_bounding_box(input) {
Ok(_) => {
return Err(nom::Err::Failure(Error::new_from_kind(
input,
ErrorKind::MisusedGeoBoundingBox,
)))
}
Ok(_) => return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox)),
// if we encountered a failure it means the user badly wrote a _geoBoundingBox filter.
// But instead of showing them how to fix their syntax we are going to tell them they should not use this filter as a value.
Err(e) if e.is_failure() => {
return Err(nom::Err::Failure(Error::new_from_kind(
input,
ErrorKind::MisusedGeoBoundingBox,
)))
return Err(Error::failure_from_kind(input, ErrorKind::MisusedGeoBoundingBox))
}
_ => (),
}
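// Editor's note: standalone sketch of the unescaping step performed by
// `parse_vector_value` above; it assumes only the `unescaper` crate API that the
// parser already calls. When the unescaped content differs in length from the raw
// slice, the parser allocates a new Token value; otherwise the original is kept.
fn main() {
    // Escape sequences inside quoted values are resolved.
    assert_eq!(unescaper::unescape(r"line\nbreak").unwrap(), "line\nbreak");
    // Content without escapes is returned unchanged.
    assert_eq!(unescaper::unescape("plain").unwrap(), "plain");
}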

View File

@@ -129,6 +129,7 @@ fn main() {
&mut new_fields_ids_map,
&|| false,
Progress::default(),
None,
)
.unwrap();

View File

@@ -14,6 +14,7 @@ license.workspace = true
anyhow = "1.0.98"
bincode = "1.3.3"
byte-unit = "5.1.6"
bytes = "1.10.1"
bumpalo = "3.18.1"
bumparaw-collections = "0.1.4"
convert_case = "0.8.0"
@@ -32,6 +33,7 @@ rayon = "1.10.0"
roaring = { version = "0.10.12", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["preserve_order"] }
tar = "0.4.44"
synchronoise = "1.0.1"
tempfile = "3.20.0"
thiserror = "2.0.12"
@@ -45,6 +47,9 @@ tracing = "0.1.41"
ureq = "2.12.1"
uuid = { version = "1.17.0", features = ["serde", "v4"] }
backoff = "0.4.0"
reqwest = { version = "0.12.23", features = ["rustls-tls", "http2"], default-features = false }
rusty-s3 = "0.8.1"
tokio = { version = "1.47.1", features = ["full"] }
[dev-dependencies]
big_s = "1.0.2"

View File

@@ -1,3 +1,5 @@
#![allow(clippy::result_large_err)]
use std::collections::HashMap;
use std::io;
@@ -147,6 +149,8 @@ impl<'a> Dump<'a> {
canceled_by: task.canceled_by,
details: task.details,
status: task.status,
network: task.network,
custom_metadata: task.custom_metadata,
kind: match task.kind {
KindDump::DocumentImport {
primary_key,
@@ -197,9 +201,10 @@ impl<'a> Dump<'a> {
index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
primary_key,
},
KindDump::IndexUpdate { primary_key } => KindWithContent::IndexUpdate {
KindDump::IndexUpdate { primary_key, uid } => KindWithContent::IndexUpdate {
index_uid: task.index_uid.ok_or(Error::CorruptedDump)?,
primary_key,
new_index_uid: uid,
},
KindDump::IndexSwap { swaps } => KindWithContent::IndexSwap { swaps },
KindDump::TaskCancelation { query, tasks } => {
@@ -230,6 +235,9 @@ impl<'a> Dump<'a> {
}
}
KindDump::UpgradeDatabase { from } => KindWithContent::UpgradeDatabase { from },
KindDump::IndexCompaction { index_uid } => {
KindWithContent::IndexCompaction { index_uid }
}
},
};

View File

@@ -5,6 +5,7 @@ use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::milli::index::RollbackOutcome;
use meilisearch_types::tasks::{Kind, Status};
use meilisearch_types::{heed, milli};
use reqwest::StatusCode;
use thiserror::Error;
use crate::TaskId;
@@ -67,6 +68,8 @@ pub enum Error {
SwapDuplicateIndexesFound(Vec<String>),
#[error("Index `{0}` not found.")]
SwapIndexNotFound(String),
#[error("Cannot rename `{0}` to `{1}` as the index already exists. Hint: You can remove `{1}` first and then do your remove.")]
SwapIndexFoundDuringRename(String, String),
#[error("Meilisearch cannot receive write operations because the limit of the task database has been reached. Please delete tasks to continue performing write operations.")]
NoSpaceLeftInTaskQueue,
#[error(
@@ -74,6 +77,10 @@ pub enum Error {
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
SwapIndexesNotFound(Vec<String>),
#[error("The following indexes are being renamed but cannot because their new name conflicts with an already existing index: {}. Renaming doesn't overwrite the other index name.",
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
SwapIndexesFoundDuringRename(Vec<String>),
#[error("Corrupted dump.")]
CorruptedDump,
#[error(
@@ -121,6 +128,14 @@ pub enum Error {
#[error("Aborted task")]
AbortedTask,
#[error("S3 error: status: {status}, body: {body}")]
S3Error { status: StatusCode, body: String },
#[error("S3 HTTP error: {0}")]
S3HttpError(reqwest::Error),
#[error("S3 XML error: {0}")]
S3XmlError(Box<dyn std::error::Error + Send + Sync>),
#[error("S3 bucket error: {0}")]
S3BucketError(rusty_s3::BucketError),
#[error(transparent)]
Dump(#[from] dump::Error),
#[error(transparent)]
@@ -203,6 +218,8 @@ impl Error {
| Error::SwapIndexNotFound(_)
| Error::NoSpaceLeftInTaskQueue
| Error::SwapIndexesNotFound(_)
| Error::SwapIndexFoundDuringRename(_, _)
| Error::SwapIndexesFoundDuringRename(_)
| Error::CorruptedDump
| Error::InvalidTaskDate { .. }
| Error::InvalidTaskUid { .. }
@@ -218,6 +235,10 @@ impl Error {
| Error::TaskCancelationWithEmptyQuery
| Error::FromRemoteWhenExporting { .. }
| Error::AbortedTask
| Error::S3Error { .. }
| Error::S3HttpError(_)
| Error::S3XmlError(_)
| Error::S3BucketError(_)
| Error::Dump(_)
| Error::Heed(_)
| Error::Milli { .. }
@@ -271,6 +292,8 @@ impl ErrorCode for Error {
Error::SwapDuplicateIndexFound(_) => Code::InvalidSwapDuplicateIndexFound,
Error::SwapIndexNotFound(_) => Code::IndexNotFound,
Error::SwapIndexesNotFound(_) => Code::IndexNotFound,
Error::SwapIndexFoundDuringRename(_, _) => Code::IndexAlreadyExists,
Error::SwapIndexesFoundDuringRename(_) => Code::IndexAlreadyExists,
Error::InvalidTaskDate { field, .. } => (*field).into(),
Error::InvalidTaskUid { .. } => Code::InvalidTaskUids,
Error::InvalidBatchUid { .. } => Code::InvalidBatchUids,
@@ -283,8 +306,14 @@ impl ErrorCode for Error {
Error::BatchNotFound(_) => Code::BatchNotFound,
Error::TaskDeletionWithEmptyQuery => Code::MissingTaskFilters,
Error::TaskCancelationWithEmptyQuery => Code::MissingTaskFilters,
// TODO: not sure of the Code to use
Error::NoSpaceLeftInTaskQueue => Code::NoSpaceLeftOnDevice,
Error::S3Error { status, .. } if status.is_client_error() => {
Code::InvalidS3SnapshotRequest
}
Error::S3Error { .. } => Code::S3SnapshotServerError,
Error::S3HttpError(_) => Code::S3SnapshotServerError,
Error::S3XmlError(_) => Code::S3SnapshotServerError,
Error::S3BucketError(_) => Code::InvalidS3SnapshotParameters,
Error::Dump(e) => e.error_code(),
Error::Milli { error, .. } => error.error_code(),
Error::ProcessBatchPanicked(_) => Code::Internal,

View File

@@ -1,6 +1,7 @@
use std::sync::{Arc, RwLock};
use meilisearch_types::features::{InstanceTogglableFeatures, Network, RuntimeTogglableFeatures};
use meilisearch_types::enterprise_edition::network::Network;
use meilisearch_types::features::{InstanceTogglableFeatures, RuntimeTogglableFeatures};
use meilisearch_types::heed::types::{SerdeJson, Str};
use meilisearch_types::heed::{Database, Env, RwTxn, WithoutTls};
@@ -157,6 +158,19 @@ impl RoFeatures {
.into())
}
}
pub fn check_vector_store_setting(&self, disabled_action: &'static str) -> Result<()> {
if self.runtime.vector_store_setting {
Ok(())
} else {
Err(FeatureNotEnabledError {
disabled_action,
feature: "vector_store_setting",
issue_link: "https://github.com/orgs/meilisearch/discussions/860",
}
.into())
}
}
}
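// Editor's note: self-contained mirror of the feature-gate pattern above, using
// hypothetical stand-in types; the real `RoFeatures` and `FeatureNotEnabledError`
// live in this crate and are only partially visible in this diff.
struct RuntimeFeatures {
    vector_store_setting: bool,
}

struct Gate {
    runtime: RuntimeFeatures,
}

impl Gate {
    fn check_vector_store_setting(&self, disabled_action: &'static str) -> Result<(), String> {
        if self.runtime.vector_store_setting {
            Ok(())
        } else {
            // Mirrors FeatureNotEnabledError: the action is refused until the
            // experimental feature is enabled at runtime.
            Err(format!("{disabled_action} requires the `vector_store_setting` experimental feature"))
        }
    }
}

fn main() {
    let gate = Gate { runtime: RuntimeFeatures { vector_store_setting: false } };
    assert!(gate.check_vector_store_setting("setting `vectorStore` in the index settings").is_err());
}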
impl FeatureData {

View File

@@ -143,10 +143,10 @@ impl IndexStats {
///
/// - rtxn: a RO transaction for the index, obtained from `Index::read_txn()`.
pub fn new(index: &Index, rtxn: &RoTxn) -> milli::Result<Self> {
let arroy_stats = index.arroy_stats(rtxn)?;
let vector_store_stats = index.vector_store_stats(rtxn)?;
Ok(IndexStats {
number_of_embeddings: Some(arroy_stats.number_of_embeddings),
number_of_embedded_documents: Some(arroy_stats.documents.len()),
number_of_embeddings: Some(vector_store_stats.number_of_embeddings),
number_of_embedded_documents: Some(vector_store_stats.documents.len()),
documents_database_stats: index.documents_stats(rtxn)?.unwrap_or_default(),
number_of_documents: None,
database_size: index.on_disk_size()?,
@@ -199,7 +199,7 @@ impl IndexMapper {
let uuid = Uuid::new_v4();
self.index_mapping.put(&mut wtxn, name, &uuid)?;
let index_path = self.base_path.join(uuid.to_string());
let index_path = self.index_path(uuid);
fs::create_dir_all(&index_path)?;
// Error if the UUIDv4 somehow already exists in the map, since it should be fresh.
@@ -286,7 +286,7 @@ impl IndexMapper {
};
let index_map = self.index_map.clone();
let index_path = self.base_path.join(uuid.to_string());
let index_path = self.index_path(uuid);
let index_name = name.to_string();
thread::Builder::new()
.name(String::from("index_deleter"))
@@ -341,6 +341,26 @@ impl IndexMapper {
Ok(())
}
/// Closes the specified index.
///
/// This operation involves closing the underlying environment and so can take a long time to complete.
///
/// # Panics
///
/// - If the Index corresponding to the passed name is concurrently being deleted/resized or cannot be found in the
/// in memory hash map.
pub fn close_index(&self, rtxn: &RoTxn, name: &str) -> Result<()> {
let uuid = self
.index_mapping
.get(rtxn, name)?
.ok_or_else(|| Error::IndexNotFound(name.to_string()))?;
// We remove the index from the in-memory index map.
self.index_map.write().unwrap().close_for_resize(&uuid, self.enable_mdb_writemap, 0);
Ok(())
}
/// Return an index, may open it if it wasn't already opened.
pub fn index(&self, rtxn: &RoTxn, name: &str) -> Result<Index> {
if let Some((current_name, current_index)) =
@@ -388,7 +408,7 @@ impl IndexMapper {
} else {
continue;
};
let index_path = self.base_path.join(uuid.to_string());
let index_path = self.index_path(uuid);
// take the lock to reopen the environment.
reopen
.reopen(&mut self.index_map.write().unwrap(), &index_path)
@@ -405,7 +425,7 @@ impl IndexMapper {
// if it's not already there.
match index_map.get(&uuid) {
Missing => {
let index_path = self.base_path.join(uuid.to_string());
let index_path = self.index_path(uuid);
break index_map
.create(
@@ -432,6 +452,14 @@ impl IndexMapper {
Ok(index)
}
/// Returns the path of the index.
///
/// The folder located at this path contains the data.mdb,
/// the lock.mdb, and an optional data.mdb.cpy file.
pub fn index_path(&self, uuid: Uuid) -> PathBuf {
self.base_path.join(uuid.to_string())
}
pub fn rollback_index(
&self,
rtxn: &RoTxn,
@@ -472,7 +500,7 @@ impl IndexMapper {
};
}
let index_path = self.base_path.join(uuid.to_string());
let index_path = self.index_path(uuid);
Index::rollback(milli::heed::EnvOpenOptions::new().read_txn_without_tls(), index_path, to)
.map_err(|err| crate::Error::from_milli(err, Some(name.to_string())))
}
@@ -526,6 +554,20 @@ impl IndexMapper {
Ok(())
}
/// Rename an index.
pub fn rename(&self, wtxn: &mut RwTxn, current: &str, new: &str) -> Result<()> {
let uuid = self
.index_mapping
.get(wtxn, current)?
.ok_or_else(|| Error::IndexNotFound(current.to_string()))?;
if self.index_mapping.get(wtxn, new)?.is_some() {
return Err(Error::IndexAlreadyExists(new.to_string()));
}
self.index_mapping.delete(wtxn, current)?;
self.index_mapping.put(wtxn, new, &uuid)?;
Ok(())
}
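// Editor's note: the rename above only moves the name-to-UUID entry; the UUID and
// the on-disk environment are untouched. A self-contained mirror of that invariant,
// with a HashMap standing in for the `index_mapping` database (hypothetical types):
use std::collections::HashMap;

fn rename(mapping: &mut HashMap<String, u128>, current: &str, new: &str) -> Result<(), String> {
    let uuid = *mapping.get(current).ok_or_else(|| format!("Index `{current}` not found."))?;
    if mapping.contains_key(new) {
        // Mirrors Error::IndexAlreadyExists: renaming never overwrites an existing index.
        return Err(format!("Index `{new}` already exists."));
    }
    mapping.remove(current);
    mapping.insert(new.to_string(), uuid);
    Ok(())
}

fn main() {
    let mut mapping = HashMap::from([("movies".to_string(), 1u128), ("books".to_string(), 2)]);
    assert!(rename(&mut mapping, "movies", "films").is_ok());
    assert!(rename(&mut mapping, "films", "books").is_err()); // the target name already exists
    assert_eq!(mapping.get("films"), Some(&1));
}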
/// The stats of an index.
///
/// If available in the cache, they are directly returned.

View File

@@ -36,6 +36,7 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
run_loop_iteration: _,
embedders: _,
chat_settings: _,
runtime: _,
} = scheduler;
let rtxn = env.read_txn().unwrap();
@@ -230,6 +231,8 @@ pub fn snapshot_task(task: &Task) -> String {
details,
status,
kind,
network,
custom_metadata,
} = task;
snap.push('{');
snap.push_str(&format!("uid: {uid}, "));
@@ -247,6 +250,12 @@ pub fn snapshot_task(task: &Task) -> String {
snap.push_str(&format!("details: {}, ", &snapshot_details(details)));
}
snap.push_str(&format!("kind: {kind:?}"));
if let Some(network) = network {
snap.push_str(&format!("network: {network:?}, "))
}
if let Some(custom_metadata) = custom_metadata {
snap.push_str(&format!("custom_metadata: {custom_metadata:?}"))
}
snap.push('}');
snap
@@ -274,8 +283,8 @@ fn snapshot_details(d: &Details) -> String {
Details::SettingsUpdate { settings } => {
format!("{{ settings: {settings:?} }}")
}
Details::IndexInfo { primary_key } => {
format!("{{ primary_key: {primary_key:?} }}")
Details::IndexInfo { primary_key, new_index_uid, old_index_uid } => {
format!("{{ primary_key: {primary_key:?}, old_new_uid: {old_index_uid:?}, new_index_uid: {new_index_uid:?} }}")
}
Details::DocumentDeletion {
provided_ids: received_document_ids,
@@ -313,6 +322,9 @@ fn snapshot_details(d: &Details) -> String {
Details::UpgradeDatabase { from, to } => {
format!("{{ from: {from:?}, to: {to:?} }}")
}
Details::IndexCompaction { index_uid, pre_compaction_size, post_compaction_size } => {
format!("{{ index_uid: {index_uid:?}, pre_compaction_size: {pre_compaction_size:?}, post_compaction_size: {post_compaction_size:?} }}")
}
}
}

View File

@@ -1,3 +1,6 @@
// The main Error type is large and boxing the large variant makes pattern matching fail
#![allow(clippy::result_large_err)]
/*!
This crate defines the index scheduler, which is responsible for:
1. Keeping references to meilisearch's indexes and mapping them to their
@@ -51,8 +54,9 @@ pub use features::RoFeatures;
use flate2::bufread::GzEncoder;
use flate2::Compression;
use meilisearch_types::batches::Batch;
use meilisearch_types::enterprise_edition::network::Network;
use meilisearch_types::features::{
ChatCompletionSettings, InstanceTogglableFeatures, Network, RuntimeTogglableFeatures,
ChatCompletionSettings, InstanceTogglableFeatures, RuntimeTogglableFeatures,
};
use meilisearch_types::heed::byteorder::BE;
use meilisearch_types::heed::types::{DecodeIgnore, SerdeJson, Str, I128};
@@ -64,7 +68,7 @@ use meilisearch_types::milli::vector::{
};
use meilisearch_types::milli::{self, Index};
use meilisearch_types::task_view::TaskView;
use meilisearch_types::tasks::{KindWithContent, Task};
use meilisearch_types::tasks::{KindWithContent, Task, TaskNetwork};
use meilisearch_types::webhooks::{Webhook, WebhooksDumpView, WebhooksView};
use milli::vector::db::IndexEmbeddingConfig;
use processing::ProcessingTasks;
@@ -212,6 +216,9 @@ pub struct IndexScheduler {
/// A counter that is incremented before every call to [`tick`](IndexScheduler::tick)
#[cfg(test)]
run_loop_iteration: Arc<RwLock<usize>>,
/// The tokio runtime used for asynchronous tasks.
runtime: Option<tokio::runtime::Handle>,
}
impl IndexScheduler {
@@ -238,6 +245,7 @@ impl IndexScheduler {
run_loop_iteration: self.run_loop_iteration.clone(),
features: self.features.clone(),
chat_settings: self.chat_settings,
runtime: self.runtime.clone(),
}
}
@@ -251,13 +259,23 @@ impl IndexScheduler {
}
/// Create an index scheduler and start its run loop.
#[allow(private_interfaces)] // because test_utils is private
pub fn new(
options: IndexSchedulerOptions,
auth_env: Env<WithoutTls>,
from_db_version: (u32, u32, u32),
#[cfg(test)] test_breakpoint_sdr: crossbeam_channel::Sender<(test_utils::Breakpoint, bool)>,
#[cfg(test)] planned_failures: Vec<(usize, test_utils::FailureLocation)>,
runtime: Option<tokio::runtime::Handle>,
) -> Result<Self> {
let this = Self::new_without_run(options, auth_env, from_db_version, runtime)?;
this.run();
Ok(this)
}
fn new_without_run(
options: IndexSchedulerOptions,
auth_env: Env<WithoutTls>,
from_db_version: (u32, u32, u32),
runtime: Option<tokio::runtime::Handle>,
) -> Result<Self> {
std::fs::create_dir_all(&options.tasks_path)?;
std::fs::create_dir_all(&options.update_file_path)?;
@@ -312,8 +330,7 @@ impl IndexScheduler {
wtxn.commit()?;
// allow unreachable_code to get rid of the warning in the case of a test build.
let this = Self {
Ok(Self {
processing_tasks: Arc::new(RwLock::new(ProcessingTasks::new())),
version,
queue,
@@ -329,21 +346,38 @@ impl IndexScheduler {
webhooks: Arc::new(webhooks),
embedders: Default::default(),
#[cfg(test)]
test_breakpoint_sdr,
#[cfg(test)]
planned_failures,
#[cfg(test)] // Will be replaced in `new_test` in test environments
test_breakpoint_sdr: crossbeam_channel::bounded(0).0,
#[cfg(test)] // Will be replaced in `new_test` in test environments
planned_failures: Default::default(),
#[cfg(test)]
run_loop_iteration: Arc::new(RwLock::new(0)),
features,
chat_settings,
};
runtime,
})
}
/// Create an index scheduler and start its run loop.
#[cfg(test)]
fn new_test(
options: IndexSchedulerOptions,
auth_env: Env<WithoutTls>,
from_db_version: (u32, u32, u32),
runtime: Option<tokio::runtime::Handle>,
test_breakpoint_sdr: crossbeam_channel::Sender<(test_utils::Breakpoint, bool)>,
planned_failures: Vec<(usize, test_utils::FailureLocation)>,
) -> Result<Self> {
let mut this = Self::new_without_run(options, auth_env, from_db_version, runtime)?;
this.test_breakpoint_sdr = test_breakpoint_sdr;
this.planned_failures = planned_failures;
this.run();
Ok(this)
}
fn read_txn(&self) -> Result<RoTxn<WithoutTls>> {
fn read_txn(&self) -> Result<RoTxn<'_, WithoutTls>> {
self.env.read_txn().map_err(|e| e.into())
}
@@ -666,6 +700,16 @@ impl IndexScheduler {
self.queue.get_task_ids_from_authorized_indexes(&rtxn, query, filters, &processing)
}
pub fn set_task_network(&self, task_id: TaskId, network: TaskNetwork) -> Result<()> {
let mut wtxn = self.env.write_txn()?;
let mut task =
self.queue.tasks.get_task(&wtxn, task_id)?.ok_or(Error::TaskNotFound(task_id))?;
task.network = Some(network);
self.queue.tasks.all_tasks.put(&mut wtxn, &task_id, &task)?;
wtxn.commit()?;
Ok(())
}
/// Return the batches matching the query from the user's point of view along
/// with the total number of batches matching the query, ignoring from and limit.
///
@@ -712,6 +756,19 @@ impl IndexScheduler {
kind: KindWithContent,
task_id: Option<TaskId>,
dry_run: bool,
) -> Result<Task> {
self.register_with_custom_metadata(kind, task_id, None, dry_run)
}
/// Register a new task in the scheduler, with metadata.
///
/// If it fails and data was associated with the task, it tries to delete the associated data.
pub fn register_with_custom_metadata(
&self,
kind: KindWithContent,
task_id: Option<TaskId>,
custom_metadata: Option<String>,
dry_run: bool,
) -> Result<Task> {
// if the task doesn't delete or cancel anything and 40% of the task queue is full, we must refuse to enqueue the incoming task
if !matches!(&kind, KindWithContent::TaskDeletion { tasks, .. } | KindWithContent::TaskCancelation { tasks, .. } if !tasks.is_empty())
@@ -722,7 +779,7 @@ impl IndexScheduler {
}
let mut wtxn = self.env.write_txn()?;
let task = self.queue.register(&mut wtxn, &kind, task_id, dry_run)?;
let task = self.queue.register(&mut wtxn, &kind, task_id, custom_metadata, dry_run)?;
// If the registered task is a task cancelation
// we inform the processing tasks to stop (if necessary).
@@ -746,7 +803,7 @@ impl IndexScheduler {
/// Register a new task coming from a dump in the scheduler.
/// By taking a mutable ref we're pretty sure no one will ever import a dump while actix is running.
pub fn register_dumped_task(&mut self) -> Result<Dump> {
pub fn register_dumped_task(&mut self) -> Result<Dump<'_>> {
Dump::new(self)
}
@@ -795,10 +852,8 @@ impl IndexScheduler {
.queue
.tasks
.get_task(self.rtxn, task_id)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))?
.ok_or_else(|| {
io::Error::new(io::ErrorKind::Other, Error::CorruptedTaskQueue)
})?;
.map_err(io::Error::other)?
.ok_or_else(|| io::Error::other(Error::CorruptedTaskQueue))?;
serde_json::to_writer(&mut self.buffer, &TaskView::from_task(&task))?;
self.buffer.push(b'\n');

View File

@@ -75,6 +75,7 @@ make_enum_progress! {
pub enum TaskCancelationProgress {
RetrievingTasks,
CancelingUpgrade,
CleaningCompactionLeftover,
UpdatingTasks,
}
}
@@ -138,6 +139,17 @@ make_enum_progress! {
}
}
make_enum_progress! {
pub enum IndexCompaction {
RetrieveTheIndex,
CreateTemporaryFile,
CopyAndCompactTheIndex,
PersistTheCompactedIndex,
CloseTheIndex,
ReopenTheIndex,
}
}
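The new `IndexCompaction` steps spell out the expected order of the compaction flow; a hedged sketch of how they would be reported through the usual `Progress` handle (the function below is illustrative only, the real flow lives in the batch processor):

```rust
use meilisearch_types::milli::progress::Progress;

// Sketch only: shows the order in which the steps declared above are reported.
fn report_compaction_steps(progress: &Progress) {
    progress.update_progress(IndexCompaction::RetrieveTheIndex);
    // ... fetch the index from the index mapper ...
    progress.update_progress(IndexCompaction::CreateTemporaryFile);
    // ... create the copy target next to the live index ...
    progress.update_progress(IndexCompaction::CopyAndCompactTheIndex);
    // ... copy the LMDB environment with compaction enabled ...
    progress.update_progress(IndexCompaction::PersistTheCompactedIndex);
    progress.update_progress(IndexCompaction::CloseTheIndex);
    progress.update_progress(IndexCompaction::ReopenTheIndex);
}
```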
make_enum_progress! {
pub enum InnerSwappingTwoIndexes {
RetrieveTheTasks,

View File

@@ -275,19 +275,27 @@ impl BatchQueue {
pub(crate) fn get_existing_batches(
&self,
rtxn: &RoTxn,
tasks: impl IntoIterator<Item = BatchId>,
batches: impl IntoIterator<Item = BatchId>,
processing: &ProcessingTasks,
) -> Result<Vec<Batch>> {
tasks
batches
.into_iter()
.map(|batch_id| {
if Some(batch_id) == processing.batch.as_ref().map(|batch| batch.uid) {
let mut batch = processing.batch.as_ref().unwrap().to_batch();
batch.progress = processing.get_progress_view();
// Add progress_trace from the current progress state
if let Some(progress) = &processing.progress {
batch.stats.progress_trace = progress
.accumulated_durations()
.into_iter()
.map(|(k, v)| (k, v.into()))
.collect();
}
Ok(batch)
} else {
self.get_batch(rtxn, batch_id)
.and_then(|task| task.ok_or(Error::CorruptedTaskQueue))
.and_then(|batch| batch.ok_or(Error::CorruptedTaskQueue))
}
})
.collect::<Result<_>>()

View File

@@ -104,6 +104,15 @@ fn query_batches_simple() {
batches[0].started_at = OffsetDateTime::UNIX_EPOCH;
assert!(batches[0].enqueued_at.is_some());
batches[0].enqueued_at = None;
if !batches[0].stats.progress_trace.is_empty() {
batches[0].stats.progress_trace.clear();
batches[0]
.stats
.progress_trace
.insert("processing tasks".to_string(), "deterministic_duration".into());
}
// Insta cannot snapshot our batches because the batch stats contains an enum as key: https://github.com/mitsuhiko/insta/issues/689
let batch = serde_json::to_string_pretty(&batches[0]).unwrap();
snapshot!(batch, @r###"
@@ -122,6 +131,9 @@ fn query_batches_simple() {
},
"indexUids": {
"catto": 1
},
"progressTrace": {
"processing tasks": "deterministic_duration"
}
},
"startedAt": "1970-01-01T00:00:00Z",
@@ -334,11 +346,11 @@ fn query_batches_special_rules() {
let kind = index_creation_task("doggo", "sheep");
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();
@@ -442,7 +454,7 @@ fn query_batches_canceled_by() {
let kind = index_creation_task("doggo", "sheep");
let _ = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();

View File

@@ -257,6 +257,7 @@ impl Queue {
wtxn: &mut RwTxn,
kind: &KindWithContent,
task_id: Option<TaskId>,
custom_metadata: Option<String>,
dry_run: bool,
) -> Result<Task> {
let next_task_id = self.tasks.next_task_id(wtxn)?;
@@ -279,6 +280,8 @@ impl Queue {
details: kind.default_details(),
status: Status::Enqueued,
kind: kind.clone(),
network: None,
custom_metadata,
};
// For deletion and cancelation tasks, we want to make extra sure that they
// don't attempt to delete/cancel tasks that are newer than themselves.
@@ -309,7 +312,8 @@ impl Queue {
| self.tasks.status.get(wtxn, &Status::Failed)?.unwrap_or_default()
| self.tasks.status.get(wtxn, &Status::Canceled)?.unwrap_or_default();
let to_delete = RoaringBitmap::from_iter(finished.into_iter().rev().take(100_000));
let to_delete =
RoaringBitmap::from_sorted_iter(finished.into_iter().take(100_000)).unwrap();
// /!\ the len must be at least 2 or else we might enter an infinite loop where we only delete
// the deletion tasks we enqueued ourselves.
@@ -325,7 +329,7 @@ impl Queue {
);
// it's safe to unwrap here because we checked the len above
let newest_task_id = to_delete.iter().last().unwrap();
let newest_task_id = to_delete.iter().next_back().unwrap();
let last_task_to_delete =
self.tasks.get_task(wtxn, newest_task_id)?.ok_or(Error::CorruptedTaskQueue)?;
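A small sketch of the two `RoaringBitmap` idioms touched above, assuming only the `roaring` crate: iteration over a bitmap is already ascending, so `from_sorted_iter` can rebuild the truncated set without re-sorting, and `next_back()` reaches the largest id without walking the whole iterator the way `.last()` does.

```rust
use roaring::RoaringBitmap;

// Keeps the first `limit` ids (the oldest tasks); bitmap iteration yields ids
// in ascending order, so the input to `from_sorted_iter` is already sorted.
fn truncate_oldest(finished: &RoaringBitmap, limit: usize) -> RoaringBitmap {
    RoaringBitmap::from_sorted_iter(finished.iter().take(limit))
        .expect("bitmap iteration is sorted")
}

// Same value as `.iter().last()`, but found from the back of the bitmap.
fn newest(to_delete: &RoaringBitmap) -> Option<u32> {
    to_delete.iter().next_back()
}
```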
@@ -342,6 +346,7 @@ impl Queue {
tasks: to_delete,
},
None,
None,
false,
)?;

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
@@ -49,7 +49,7 @@ catto: { number_of_documents: 0, field_distribution: {} }
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued []

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/queue/batches_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -1,14 +1,13 @@
---
source: crates/index-scheduler/src/queue/batches_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]

View File

@@ -1,15 +1,14 @@
---
source: crates/index-scheduler/src/queue/batches_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]

View File

@@ -7,9 +7,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
{uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,]

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/batches_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued []

View File

@@ -1,15 +1,14 @@
---
source: crates/index-scheduler/src/queue/batches_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]

View File

@@ -6,10 +6,10 @@ source: crates/index-scheduler/src/queue/batches_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
3 {uid: 3, batch_uid: 3, status: failed, error: ResponseError { code: 200, message: "Index `whalo` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
----------------------------------------------------------------------
### Status:
enqueued []
@@ -54,8 +54,8 @@ doggo: { number_of_documents: 0, field_distribution: {} }
### All Batches:
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
1 {uid: 1, details: {"primaryKey":"sheep"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 1 of type `indexCreation` that cannot be batched with any other task.", }
2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 2 of type `indexSwap` that cannot be batched with any other task.", }
3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"]}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 3 of type `indexSwap` that cannot be batched with any other task.", }
2 {uid: 2, details: {"swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 2 of type `indexSwap` that cannot be batched with any other task.", }
3 {uid: 3, details: {"swaps":[{"indexes":["catto","whalo"],"rename":false}]}, stats: {"totalNbTasks":1,"status":{"failed":1},"types":{"indexSwap":1},"indexUids":{}}, stop reason: "created batch containing only task with id 3 of type `indexSwap` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -1,16 +1,15 @@
---
source: crates/index-scheduler/src/queue/batches_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 1, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
3 {uid: 3, batch_uid: 1, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
@@ -49,7 +49,7 @@ catto: { number_of_documents: 0, field_distribution: {} }
----------------------------------------------------------------------
### All Batches:
0 {uid: 0, details: {"primaryKey":"mouse"}, stats: {"totalNbTasks":1,"status":{"succeeded":1},"types":{"indexCreation":1},"indexUids":{"catto":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"]}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
1 {uid: 1, details: {"primaryKey":"sheep","matchedTasks":3,"canceledTasks":2,"originalFilter":"test_query","swaps":[{"indexes":["catto","doggo"],"rename":false}]}, stats: {"totalNbTasks":3,"status":{"succeeded":1,"canceled":2},"types":{"indexCreation":1,"indexSwap":1,"taskCancelation":1},"indexUids":{"doggo":1}}, stop reason: "created batch containing only task with id 3 of type `taskCancelation` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### Batch to tasks mapping:
0 [0,]

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued []

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/queue/tasks_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -1,14 +1,13 @@
---
source: crates/index-scheduler/src/queue/tasks_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]

View File

@@ -1,15 +1,14 @@
---
source: crates/index-scheduler/src/queue/tasks_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/queue/tasks_test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, batch_uid: 2, status: failed, error: ResponseError { code: 200, message: "Planned failure for tests.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued []

View File

@@ -1,15 +1,14 @@
---
source: crates/index-scheduler/src/queue/tasks_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]

View File

@@ -1,16 +1,15 @@
---
source: crates/index-scheduler/src/queue/tasks_test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo"), rename: false }] }}
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo"), rename: false }] }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 50, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 50, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 5000, allow_index_creation: true }}

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "succeeded",
@@ -39,7 +40,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "failed",
@@ -60,7 +63,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",
@@ -81,7 +86,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",
@@ -34,7 +35,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "succeeded",
@@ -39,7 +40,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "failed",
@@ -60,7 +63,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",
@@ -81,7 +86,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "succeeded",
@@ -39,7 +40,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "failed",
@@ -60,7 +63,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",
@@ -81,7 +86,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -1,6 +1,5 @@
---
source: crates/index-scheduler/src/queue/test.rs
snapshot_kind: text
---
[
{
@@ -13,7 +12,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "succeeded",
@@ -39,7 +40,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "failed",
@@ -60,7 +63,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",
@@ -81,7 +86,9 @@ snapshot_kind: text
"canceledBy": null,
"details": {
"IndexInfo": {
"primary_key": null
"primary_key": null,
"new_index_uid": null,
"old_index_uid": null
}
},
"status": "enqueued",

View File

@@ -97,7 +97,22 @@ impl TaskQueue {
Ok(self.all_tasks.get(rtxn, &task_id)?)
}
pub(crate) fn update_task(&self, wtxn: &mut RwTxn, task: &Task) -> Result<()> {
/// Update the inverted task indexes and write the new value of the task.
///
/// The passed `task` object typically comes from a previous transaction, so two kinds of modification might have occurred:
/// 1. Modification to the `task` object after loading it from the DB (the purpose of this method is to persist these changes)
/// 2. Modification to the task committed by another transaction in the DB (an annoying consequence of having lost the original
/// transaction from which the `task` instance was deserialized)
///
/// When calling this function, the `task` is updated to take into account any `network`
/// that may have been committed for it since it was loaded into memory.
///
/// Any other modification committed to the DB since the `task` was loaded will be overwritten.
///
/// # Errors
///
/// - CorruptedTaskQueue: The task doesn't exist in the database
pub(crate) fn update_task(&self, wtxn: &mut RwTxn, task: &mut Task) -> Result<()> {
let old_task = self.get_task(wtxn, task.uid)?.ok_or(Error::CorruptedTaskQueue)?;
let reprocessing = old_task.status != Status::Enqueued;
@@ -157,6 +172,12 @@ impl TaskQueue {
}
}
task.network = match (old_task.network, task.network.take()) {
(None, None) => None,
(None, Some(network)) | (Some(network), None) => Some(network),
(Some(_), Some(network)) => Some(network),
};
self.all_tasks.put(wtxn, &task.uid, task)?;
Ok(())
}
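The `network` merge documented above boils down to "the in-memory value wins, otherwise keep whatever is already committed". A reduced sketch of the same rule (the free function and test below are hypothetical, not part of the codebase):

```rust
/// Equivalent to the match in `update_task`: prefer the network set on the
/// in-memory task, fall back to the one already stored in the DB, else None.
fn merge_network<T>(from_db: Option<T>, in_memory: Option<T>) -> Option<T> {
    in_memory.or(from_db)
}

#[test]
fn network_merge_rule() {
    assert_eq!(merge_network(Some("db"), None), Some("db")); // DB value preserved
    assert_eq!(merge_network(Some("db"), Some("mem")), Some("mem")); // memory wins
    assert_eq!(merge_network(None::<&str>, None), None);
}
```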

View File

@@ -304,11 +304,11 @@ fn query_tasks_special_rules() {
let kind = index_creation_task("doggo", "sheep");
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "whalo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();
@@ -399,7 +399,7 @@ fn query_tasks_canceled_by() {
let kind = index_creation_task("doggo", "sheep");
let _ = index_scheduler.register(kind, None, false).unwrap();
let kind = KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()) }],
swaps: vec![IndexSwap { indexes: ("catto".to_owned(), "doggo".to_owned()), rename: false }],
};
let _task = index_scheduler.register(kind, None, false).unwrap();

View File

@@ -68,13 +68,14 @@ impl From<KindWithContent> for AutobatchKind {
KindWithContent::IndexCreation { .. } => AutobatchKind::IndexCreation,
KindWithContent::IndexUpdate { .. } => AutobatchKind::IndexUpdate,
KindWithContent::IndexSwap { .. } => AutobatchKind::IndexSwap,
KindWithContent::TaskCancelation { .. }
KindWithContent::IndexCompaction { .. }
| KindWithContent::TaskCancelation { .. }
| KindWithContent::TaskDeletion { .. }
| KindWithContent::DumpCreation { .. }
| KindWithContent::Export { .. }
| KindWithContent::UpgradeDatabase { .. }
| KindWithContent::SnapshotCreation => {
panic!("The autobatcher should never be called with tasks that don't apply to an index.")
panic!("The autobatcher should never be called with tasks with special priority or that don't apply to an index.")
}
}
}
@@ -288,7 +289,9 @@ impl BatchKind {
match (self, autobatch_kind) {
// We don't batch any of these operations
(this, K::IndexCreation | K::IndexUpdate | K::IndexSwap | K::DocumentEdition) => Break((this, BatchStopReason::TaskCannotBeBatched { kind, id })),
(this, K::IndexCreation | K::IndexUpdate | K::IndexSwap | K::DocumentEdition) => {
Break((this, BatchStopReason::TaskCannotBeBatched { kind, id }))
},
// We must not batch tasks that don't have the same index creation rights if the index doesn't already exist.
(this, kind) if !index_already_exists && this.allow_index_creation() == Some(false) && kind.allow_index_creation() == Some(true) => {
Break((this, BatchStopReason::IndexCreationMismatch { id }))

View File

@@ -75,7 +75,11 @@ fn idx_create() -> KindWithContent {
}
fn idx_update() -> KindWithContent {
KindWithContent::IndexUpdate { index_uid: String::from("doggo"), primary_key: None }
KindWithContent::IndexUpdate {
index_uid: String::from("doggo"),
primary_key: None,
new_index_uid: None,
}
}
fn idx_del() -> KindWithContent {
@@ -84,7 +88,10 @@ fn idx_del() -> KindWithContent {
fn idx_swap() -> KindWithContent {
KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: (String::from("doggo"), String::from("catto")) }],
swaps: vec![IndexSwap {
indexes: (String::from("doggo"), String::from("catto")),
rename: false,
}],
}
}

View File

@@ -38,6 +38,7 @@ pub(crate) enum Batch {
IndexUpdate {
index_uid: String,
primary_key: Option<String>,
new_index_uid: Option<String>,
task: Task,
},
IndexDeletion {
@@ -54,6 +55,10 @@ pub(crate) enum Batch {
UpgradeDatabase {
tasks: Vec<Task>,
},
IndexCompaction {
index_uid: String,
task: Task,
},
}
#[derive(Debug)]
@@ -65,6 +70,7 @@ pub(crate) enum DocumentOperation {
/// A [batch](Batch) that combines multiple tasks operating on an index.
#[derive(Debug)]
#[allow(clippy::large_enum_variant)]
pub(crate) enum IndexOperation {
DocumentOperation {
index_uid: String,
@@ -108,7 +114,8 @@ impl Batch {
| Batch::Dump(task)
| Batch::IndexCreation { task, .. }
| Batch::Export { task }
| Batch::IndexUpdate { task, .. } => {
| Batch::IndexUpdate { task, .. }
| Batch::IndexCompaction { task, .. } => {
RoaringBitmap::from_sorted_iter(std::iter::once(task.uid)).unwrap()
}
Batch::SnapshotCreation(tasks)
@@ -153,7 +160,8 @@ impl Batch {
IndexOperation { op, .. } => Some(op.index_uid()),
IndexCreation { index_uid, .. }
| IndexUpdate { index_uid, .. }
| IndexDeletion { index_uid, .. } => Some(index_uid),
| IndexDeletion { index_uid, .. }
| IndexCompaction { index_uid, .. } => Some(index_uid),
}
}
}
@@ -173,6 +181,7 @@ impl fmt::Display for Batch {
Batch::IndexUpdate { .. } => f.write_str("IndexUpdate")?,
Batch::IndexDeletion { .. } => f.write_str("IndexDeletion")?,
Batch::IndexSwap { .. } => f.write_str("IndexSwap")?,
Batch::IndexCompaction { .. } => f.write_str("IndexCompaction")?,
Batch::Export { .. } => f.write_str("Export")?,
Batch::UpgradeDatabase { .. } => f.write_str("UpgradeDatabase")?,
};
@@ -405,11 +414,13 @@ impl IndexScheduler {
let mut task =
self.queue.tasks.get_task(rtxn, id)?.ok_or(Error::CorruptedTaskQueue)?;
current_batch.processing(Some(&mut task));
let primary_key = match &task.kind {
KindWithContent::IndexUpdate { primary_key, .. } => primary_key.clone(),
let (primary_key, new_index_uid) = match &task.kind {
KindWithContent::IndexUpdate { primary_key, new_index_uid, .. } => {
(primary_key.clone(), new_index_uid.clone())
}
_ => unreachable!(),
};
Ok(Some(Batch::IndexUpdate { index_uid, primary_key, task }))
Ok(Some(Batch::IndexUpdate { index_uid, primary_key, new_index_uid, task }))
}
BatchKind::IndexDeletion { ids } => Ok(Some(Batch::IndexDeletion {
index_uid,
@@ -508,17 +519,33 @@ impl IndexScheduler {
return Ok(Some((Batch::TaskDeletions(tasks), current_batch)));
}
// 3. we batch the export.
// 3. we get the next task to compact
let to_compact = self.queue.tasks.get_kind(rtxn, Kind::IndexCompaction)? & enqueued;
if let Some(task_id) = to_compact.min() {
let mut task =
self.queue.tasks.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
current_batch.processing(Some(&mut task));
current_batch.reason(BatchStopReason::TaskCannotBeBatched {
kind: Kind::IndexCompaction,
id: task_id,
});
let index_uid =
task.index_uid().expect("Compaction task must have an index uid").to_owned();
return Ok(Some((Batch::IndexCompaction { index_uid, task }, current_batch)));
}
// 4. we batch the export.
let to_export = self.queue.tasks.get_kind(rtxn, Kind::Export)? & enqueued;
if !to_export.is_empty() {
let task_id = to_export.iter().next().expect("There must be at least one export task");
let mut task = self.queue.tasks.get_task(rtxn, task_id)?.unwrap();
current_batch.processing([&mut task]);
current_batch.reason(BatchStopReason::TaskKindCannotBeBatched { kind: Kind::Export });
current_batch
.reason(BatchStopReason::TaskCannotBeBatched { kind: Kind::Export, id: task_id });
return Ok(Some((Batch::Export { task }, current_batch)));
}
// 4. we batch the snapshot.
// 5. we batch the snapshot.
let to_snapshot = self.queue.tasks.get_kind(rtxn, Kind::SnapshotCreation)? & enqueued;
if !to_snapshot.is_empty() {
let mut tasks = self.queue.tasks.get_existing_tasks(rtxn, to_snapshot)?;
@@ -528,7 +555,7 @@ impl IndexScheduler {
return Ok(Some((Batch::SnapshotCreation(tasks), current_batch)));
}
// 5. we batch the dumps.
// 6. we batch the dumps.
let to_dump = self.queue.tasks.get_kind(rtxn, Kind::DumpCreation)? & enqueued;
if let Some(to_dump) = to_dump.min() {
let mut task =
@@ -541,7 +568,7 @@ impl IndexScheduler {
return Ok(Some((Batch::Dump(task), current_batch)));
}
// 6. We make a batch from the unprioritised tasks. Start by taking the next enqueued task.
// 7. We make a batch from the unprioritised tasks. Start by taking the next enqueued task.
let task_id = if let Some(task_id) = enqueued.min() { task_id } else { return Ok(None) };
let mut task =
self.queue.tasks.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
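Reading the renumbered comments together, the batch-creation code drains work in a fixed priority order before falling back to per-index autobatching. A hedged summary using the `Kind` values named in the hunks above; the first two entries come from the earlier, unchanged steps and are inferred rather than shown in this diff:

```rust
use meilisearch_types::tasks::Kind;

// Illustrative only: the order in which the batcher looks for work;
// earlier kinds are picked before later ones and are batched on their own.
const PRIORITY_ORDER: &[Kind] = &[
    Kind::TaskCancelation,  // 1. (inferred, not shown in this diff)
    Kind::TaskDeletion,     // 2.
    Kind::IndexCompaction,  // 3. new in this diff
    Kind::Export,           // 4.
    Kind::SnapshotCreation, // 5.
    Kind::DumpCreation,     // 6.
    // 7. everything else, autobatched per index
];
```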

View File

@@ -25,6 +25,7 @@ use convert_case::{Case, Casing as _};
use meilisearch_types::error::ResponseError;
use meilisearch_types::heed::{Env, WithoutTls};
use meilisearch_types::milli;
use meilisearch_types::milli::update::S3SnapshotOptions;
use meilisearch_types::tasks::Status;
use process_batch::ProcessBatchInfo;
use rayon::current_num_threads;
@@ -87,11 +88,14 @@ pub struct Scheduler {
/// Snapshot compaction status.
pub(crate) experimental_no_snapshot_compaction: bool,
/// S3 Snapshot options.
pub(crate) s3_snapshot_options: Option<S3SnapshotOptions>,
}
impl Scheduler {
pub(crate) fn private_clone(&self) -> Scheduler {
Scheduler {
pub(crate) fn private_clone(&self) -> Self {
Self {
must_stop_processing: self.must_stop_processing.clone(),
wake_up: self.wake_up.clone(),
autobatching_enabled: self.autobatching_enabled,
@@ -103,23 +107,52 @@ impl Scheduler {
version_file_path: self.version_file_path.clone(),
embedding_cache_cap: self.embedding_cache_cap,
experimental_no_snapshot_compaction: self.experimental_no_snapshot_compaction,
s3_snapshot_options: self.s3_snapshot_options.clone(),
}
}
pub fn new(options: &IndexSchedulerOptions, auth_env: Env<WithoutTls>) -> Scheduler {
let IndexSchedulerOptions {
version_file_path,
auth_path: _,
tasks_path: _,
update_file_path: _,
indexes_path: _,
snapshots_path,
dumps_path,
cli_webhook_url: _,
cli_webhook_authorization: _,
task_db_size: _,
index_base_map_size: _,
enable_mdb_writemap: _,
index_growth_amount: _,
index_count: _,
indexer_config,
autobatching_enabled,
cleanup_enabled: _,
max_number_of_tasks: _,
max_number_of_batched_tasks,
batched_tasks_size_limit,
instance_features: _,
auto_upgrade: _,
embedding_cache_cap,
experimental_no_snapshot_compaction,
} = options;
Scheduler {
must_stop_processing: MustStopProcessing::default(),
// we want to start the loop right away in case meilisearch was ctrl+Ced while processing things
wake_up: Arc::new(SignalEvent::auto(true)),
autobatching_enabled: options.autobatching_enabled,
max_number_of_batched_tasks: options.max_number_of_batched_tasks,
batched_tasks_size_limit: options.batched_tasks_size_limit,
dumps_path: options.dumps_path.clone(),
snapshots_path: options.snapshots_path.clone(),
autobatching_enabled: *autobatching_enabled,
max_number_of_batched_tasks: *max_number_of_batched_tasks,
batched_tasks_size_limit: *batched_tasks_size_limit,
dumps_path: dumps_path.clone(),
snapshots_path: snapshots_path.clone(),
auth_env,
version_file_path: options.version_file_path.clone(),
embedding_cache_cap: options.embedding_cache_cap,
experimental_no_snapshot_compaction: options.experimental_no_snapshot_compaction,
version_file_path: version_file_path.clone(),
embedding_cache_cap: *embedding_cache_cap,
experimental_no_snapshot_compaction: *experimental_no_snapshot_compaction,
s3_snapshot_options: indexer_config.s3_snapshot_options.clone(),
}
}
}
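Destructuring the whole `IndexSchedulerOptions` (with explicit `: _` discards) instead of reading fields off `options` is a small idiom worth noting: adding a field to the struct now breaks this `let` with a missing-field error, so `Scheduler::new` cannot silently ignore a new option. A generic sketch of the pattern, using a made-up struct unrelated to the actual types:

```rust
// Hypothetical struct, only to illustrate the exhaustive-destructuring idiom.
struct Options {
    dumps_path: std::path::PathBuf,
    autobatching_enabled: bool,
    cleanup_enabled: bool,
}

fn build(options: &Options) -> bool {
    // Every field must be named; adding a field to `Options` without updating
    // this `let` is a compile error, unlike `options.autobatching_enabled`.
    let Options { dumps_path: _, autobatching_enabled, cleanup_enabled: _ } = options;
    *autobatching_enabled
}
```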
@@ -268,7 +301,7 @@ impl IndexScheduler {
self.queue
.tasks
.update_task(&mut wtxn, &task)
.update_task(&mut wtxn, &mut task)
.map_err(|e| Error::UnrecoverableError(Box::new(e)))?;
}
if let Some(canceled_by) = canceled_by {
@@ -349,7 +382,7 @@ impl IndexScheduler {
self.queue
.tasks
.update_task(&mut wtxn, &task)
.update_task(&mut wtxn, &mut task)
.map_err(|e| Error::UnrecoverableError(Box::new(e)))?;
}
}

View File

@@ -1,21 +1,27 @@
use std::collections::{BTreeSet, HashMap, HashSet};
use std::fs::{remove_file, File};
use std::io::{ErrorKind, Seek, SeekFrom};
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::atomic::Ordering;
use byte_unit::Byte;
use meilisearch_types::batches::{BatchEnqueuedAt, BatchId};
use meilisearch_types::heed::{RoTxn, RwTxn};
use meilisearch_types::milli::heed::CompactionOption;
use meilisearch_types::milli::progress::{Progress, VariableNameStep};
use meilisearch_types::milli::{self, ChannelCongestion};
use meilisearch_types::tasks::{Details, IndexSwap, Kind, KindWithContent, Status, Task};
use meilisearch_types::versioning::{VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH};
use milli::update::Settings as MilliSettings;
use roaring::RoaringBitmap;
use tempfile::{PersistError, TempPath};
use time::OffsetDateTime;
use super::create_batch::Batch;
use crate::processing::{
AtomicBatchStep, AtomicTaskStep, CreateIndexProgress, DeleteIndexProgress, FinalizingIndexStep,
InnerSwappingTwoIndexes, SwappingTheIndexes, TaskCancelationProgress, TaskDeletionProgress,
UpdateIndexProgress,
IndexCompaction, InnerSwappingTwoIndexes, SwappingTheIndexes, TaskCancelationProgress,
TaskDeletionProgress, UpdateIndexProgress,
};
use crate::utils::{
self, remove_n_tasks_datetime_earlier_than, remove_task_datetime, swap_index_uid_in_task,
@@ -23,6 +29,9 @@ use crate::utils::{
};
use crate::{Error, IndexScheduler, Result, TaskId};
/// The name of the copy of the data.mdb file used during compaction.
const DATA_MDB_COPY_NAME: &str = "data.mdb.cpy";
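Given the imports above (`CompactionOption`, `TempPath`, `PersistError`) and the `data.mdb.cpy` name, the copy step at the heart of compaction presumably looks something like the sketch below. This assumes heed exposes a copy-with-compaction entry point along the lines of `Env::copy_to_file(path, CompactionOption)`; the exact name, signature, and error handling in the real code may differ:

```rust
use std::path::{Path, PathBuf};

use meilisearch_types::heed::{Env, WithoutTls};
use meilisearch_types::milli::heed::CompactionOption;

/// Hypothetical helper, not the actual implementation: write a compacted copy
/// of the index environment next to the live `data.mdb`. The real code then
/// closes the index, swaps the copy in, and reopens it (the CloseTheIndex /
/// ReopenTheIndex steps of `IndexCompaction`).
fn write_compacted_copy(
    env: &Env<WithoutTls>,
    index_path: &Path,
) -> Result<PathBuf, Box<dyn std::error::Error>> {
    let dst = index_path.join(DATA_MDB_COPY_NAME);
    // Assumed heed API: LMDB rewrites the B-tree while copying, dropping free
    // pages on the way, which is what shrinks the file.
    env.copy_to_file(&dst, CompactionOption::Enabled)?;
    Ok(dst)
}
```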
#[derive(Debug, Default)]
pub struct ProcessBatchInfo {
/// The write channel congestion. None when unavailable: settings update.
@@ -146,7 +155,6 @@ impl IndexScheduler {
};
let mut index_wtxn = index.write_txn()?;
let index_version = index.get_version(&index_wtxn)?.unwrap_or((1, 12, 0));
let package_version = (VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH);
if index_version != package_version {
@@ -224,24 +232,46 @@ impl IndexScheduler {
self.index_mapper.create_index(wtxn, &index_uid, None)?;
self.process_batch(
Batch::IndexUpdate { index_uid, primary_key, task },
Batch::IndexUpdate { index_uid, primary_key, new_index_uid: None, task },
current_batch,
progress,
)
}
Batch::IndexUpdate { index_uid, primary_key, mut task } => {
Batch::IndexUpdate { index_uid, primary_key, new_index_uid, mut task } => {
progress.update_progress(UpdateIndexProgress::UpdatingTheIndex);
// Get the index (renamed or not)
let rtxn = self.env.read_txn()?;
let index = self.index_mapper.index(&rtxn, &index_uid)?;
let mut index_wtxn = index.write_txn()?;
if let Some(primary_key) = primary_key.clone() {
let mut index_wtxn = index.write_txn()?;
// Handle rename if new_index_uid is provided
let final_index_uid = if let Some(new_uid) = &new_index_uid {
if new_uid != &index_uid {
index.set_updated_at(&mut index_wtxn, &OffsetDateTime::now_utc())?;
let mut wtxn = self.env.write_txn()?;
self.apply_index_swap(
&mut wtxn, &progress, task.uid, &index_uid, new_uid, true,
)?;
wtxn.commit()?;
new_uid.clone()
} else {
new_uid.clone()
}
} else {
index_uid.clone()
};
// Handle primary key update if provided
if let Some(ref primary_key) = primary_key {
let mut builder = MilliSettings::new(
&mut index_wtxn,
&index,
self.index_mapper.indexer_config(),
);
builder.set_primary_key(primary_key);
builder.set_primary_key(primary_key.clone());
let must_stop_processing = self.scheduler.must_stop_processing.clone();
builder
@@ -250,15 +280,20 @@ impl IndexScheduler {
&progress,
current_batch.embedder_stats.clone(),
)
.map_err(|e| Error::from_milli(e, Some(index_uid.to_string())))?;
index_wtxn.commit()?;
.map_err(|e| Error::from_milli(e, Some(final_index_uid.to_string())))?;
}
index_wtxn.commit()?;
// drop rtxn before starting a new wtxn on the same db
rtxn.commit()?;
task.status = Status::Succeeded;
task.details = Some(Details::IndexInfo { primary_key });
task.details = Some(Details::IndexInfo {
primary_key: primary_key.clone(),
new_index_uid: new_index_uid.clone(),
// we only display the old index uid if a rename happened => there is a new_index_uid
old_index_uid: new_index_uid.map(|_| index_uid.clone()),
});
// if the update processed successfully, we're going to store the new
// stats of the index. Since the tasks have already been processed and
@@ -268,8 +303,8 @@ impl IndexScheduler {
let mut wtxn = self.env.write_txn()?;
let index_rtxn = index.read_txn()?;
let stats = crate::index_mapper::IndexStats::new(&index, &index_rtxn)
.map_err(|e| Error::from_milli(e, Some(index_uid.clone())))?;
self.index_mapper.store_stats_of(&mut wtxn, &index_uid, &stats)?;
.map_err(|e| Error::from_milli(e, Some(final_index_uid.clone())))?;
self.index_mapper.store_stats_of(&mut wtxn, &final_index_uid, &stats)?;
wtxn.commit()?;
Ok(())
}();
@@ -330,13 +365,18 @@ impl IndexScheduler {
unreachable!()
};
let mut not_found_indexes = BTreeSet::new();
for IndexSwap { indexes: (lhs, rhs) } in swaps {
for index in [lhs, rhs] {
let index_exists = self.index_mapper.index_exists(&wtxn, index)?;
if !index_exists {
not_found_indexes.insert(index);
}
let mut found_indexes_but_should_not = BTreeSet::new();
for IndexSwap { indexes: (lhs, rhs), rename } in swaps {
let index_exists = self.index_mapper.index_exists(&wtxn, lhs)?;
if !index_exists {
not_found_indexes.insert(lhs);
}
let index_exists = self.index_mapper.index_exists(&wtxn, rhs)?;
match (index_exists, rename) {
(true, true) => found_indexes_but_should_not.insert((lhs, rhs)),
(false, false) => not_found_indexes.insert(rhs),
(true, false) | (false, true) => true, // random value we don't read it anyway
};
}
if !not_found_indexes.is_empty() {
if not_found_indexes.len() == 1 {
@@ -349,6 +389,23 @@ impl IndexScheduler {
));
}
}
if !found_indexes_but_should_not.is_empty() {
if found_indexes_but_should_not.len() == 1 {
let (lhs, rhs) = found_indexes_but_should_not
.into_iter()
.next()
.map(|(lhs, rhs)| (lhs.clone(), rhs.clone()))
.unwrap();
return Err(Error::SwapIndexFoundDuringRename(lhs, rhs));
} else {
return Err(Error::SwapIndexesFoundDuringRename(
found_indexes_but_should_not
.into_iter()
.map(|(_, rhs)| rhs.to_string())
.collect(),
));
}
}
progress.update_progress(SwappingTheIndexes::SwappingTheIndexes);
for (step, swap) in swaps.iter().enumerate() {
progress.update_progress(VariableNameStep::<SwappingTheIndexes>::new(
@@ -362,12 +419,54 @@ impl IndexScheduler {
task.uid,
&swap.indexes.0,
&swap.indexes.1,
swap.rename,
)?;
}
wtxn.commit()?;
task.status = Status::Succeeded;
Ok((vec![task], ProcessBatchInfo::default()))
}
Batch::IndexCompaction { index_uid: _, mut task } => {
let KindWithContent::IndexCompaction { index_uid } = &task.kind else {
unreachable!()
};
let rtxn = self.env.read_txn()?;
let ret = catch_unwind(AssertUnwindSafe(|| {
self.apply_compaction(&rtxn, &progress, index_uid)
}));
let (pre_size, post_size) = match ret {
Ok(Ok(stats)) => stats,
Ok(Err(Error::AbortedTask)) => return Err(Error::AbortedTask),
Ok(Err(e)) => return Err(e),
Err(e) => {
let msg = match e.downcast_ref::<&'static str>() {
Some(s) => *s,
None => match e.downcast_ref::<String>() {
Some(s) => &s[..],
None => "Box<dyn Any>",
},
};
return Err(Error::Export(Box::new(Error::ProcessBatchPanicked(
msg.to_string(),
))));
}
};
task.status = Status::Succeeded;
if let Some(Details::IndexCompaction {
index_uid: _,
pre_compaction_size,
post_compaction_size,
}) = task.details.as_mut()
{
*pre_compaction_size = Some(Byte::from_u64(pre_size));
*post_compaction_size = Some(Byte::from_u64(post_size));
}
Ok((vec![task], ProcessBatchInfo::default()))
}
Batch::Export { mut task } => {
let KindWithContent::Export { url, api_key, payload_size, indexes } = &task.kind
else {
@@ -443,6 +542,92 @@ impl IndexScheduler {
}
}
fn apply_compaction(
&self,
rtxn: &RoTxn,
progress: &Progress,
index_uid: &str,
) -> Result<(u64, u64)> {
// 1. Verify that the index exists
if !self.index_mapper.index_exists(rtxn, index_uid)? {
return Err(Error::IndexNotFound(index_uid.to_owned()));
}
// 2. We retrieve the index and create a temporary file in the index directory
progress.update_progress(IndexCompaction::RetrieveTheIndex);
let index = self.index_mapper.index(rtxn, index_uid)?;
// the index operation can take a long time, so save this handle to make it available to the search for the duration of the tick
self.index_mapper
.set_currently_updating_index(Some((index_uid.to_string(), index.clone())));
progress.update_progress(IndexCompaction::CreateTemporaryFile);
let src_path = index.path().join("data.mdb");
let pre_size = std::fs::metadata(&src_path)?.len();
let dst_path = TempPath::from_path(index.path().join(DATA_MDB_COPY_NAME));
let file = File::create(&dst_path)?;
let mut file = tempfile::NamedTempFile::from_parts(file, dst_path);
// 3. We copy the index data to the temporary file
progress.update_progress(IndexCompaction::CopyAndCompactTheIndex);
index
.copy_to_file(file.as_file_mut(), CompactionOption::Enabled)
.map_err(|error| Error::Milli { error, index_uid: Some(index_uid.to_string()) })?;
// ...and reset the file position as specified in the documentation
file.seek(SeekFrom::Start(0))?;
// 4. We replace the index data file with the temporary file
progress.update_progress(IndexCompaction::PersistTheCompactedIndex);
match file.persist(src_path) {
Ok(file) => file.sync_all()?,
// TODO see if we have a _resource busy_ error and probably handle this by:
// 1. closing the index, 2. replacing and 3. reopening it
Err(PersistError { error, file: _ }) => return Err(Error::IoError(error)),
};
// 5. Prepare to close the index
progress.update_progress(IndexCompaction::CloseTheIndex);
// unmark the index as the one currently being processed so we don't keep a handle to it, which would prevent it from closing
self.index_mapper.set_currently_updating_index(None);
self.index_mapper.close_index(rtxn, index_uid)?;
drop(index);
progress.update_progress(IndexCompaction::ReopenTheIndex);
// 6. Reopen the index
// The index will use the compacted data file when being reopened
let index = self.index_mapper.index(rtxn, index_uid)?;
// if the update processed successfully, we're going to store the new
// stats of the index. Since the tasks have already been processed, this
// is a non-critical operation; if it fails, we should not fail the
// entire batch.
let res = || -> Result<_> {
let mut wtxn = self.env.write_txn()?;
let index_rtxn = index.read_txn()?;
let stats = crate::index_mapper::IndexStats::new(&index, &index_rtxn)
.map_err(|e| Error::from_milli(e, Some(index_uid.to_string())))?;
self.index_mapper.store_stats_of(&mut wtxn, index_uid, &stats)?;
wtxn.commit()?;
Ok(stats.database_size)
}();
let post_size = match res {
Ok(post_size) => post_size,
Err(e) => {
tracing::error!(
error = &e as &dyn std::error::Error,
"Could not write the stats of the index"
);
0
}
};
Ok((pre_size, post_size))
}
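Side note on the panic handling in the `Batch::IndexCompaction` arm above: the panic payload returned by `catch_unwind` is downcast to recover a printable message before it is turned into a `ProcessBatchPanicked` error. A minimal, self-contained sketch of that downcast chain (the `run_and_report` helper is purely illustrative and not part of this change):
use std::panic::{catch_unwind, AssertUnwindSafe};
fn run_and_report(f: impl FnOnce() -> u64) -> Result<u64, String> {
    match catch_unwind(AssertUnwindSafe(f)) {
        Ok(value) => Ok(value),
        Err(payload) => {
            // Panic payloads are almost always a `&'static str` or a `String`.
            let msg = match payload.downcast_ref::<&'static str>() {
                Some(s) => *s,
                None => match payload.downcast_ref::<String>() {
                    Some(s) => s.as_str(),
                    None => "Box<dyn Any>",
                },
            };
            Err(msg.to_string())
        }
    }
}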
/// Swap the index `lhs` with the index `rhs`.
fn apply_index_swap(
&self,
@@ -451,6 +636,7 @@ impl IndexScheduler {
task_id: u32,
lhs: &str,
rhs: &str,
rename: bool,
) -> Result<()> {
progress.update_progress(InnerSwappingTwoIndexes::RetrieveTheTasks);
// 1. Verify that both lhs and rhs are existing indexes
@@ -458,16 +644,23 @@ impl IndexScheduler {
if !index_lhs_exists {
return Err(Error::IndexNotFound(lhs.to_owned()));
}
let index_rhs_exists = self.index_mapper.index_exists(wtxn, rhs)?;
if !index_rhs_exists {
return Err(Error::IndexNotFound(rhs.to_owned()));
if !rename {
let index_rhs_exists = self.index_mapper.index_exists(wtxn, rhs)?;
if !index_rhs_exists {
return Err(Error::IndexNotFound(rhs.to_owned()));
}
}
// 2. Get the task set for index = name that appeared before the index swap task
let mut index_lhs_task_ids = self.queue.tasks.index_tasks(wtxn, lhs)?;
index_lhs_task_ids.remove_range(task_id..);
let mut index_rhs_task_ids = self.queue.tasks.index_tasks(wtxn, rhs)?;
index_rhs_task_ids.remove_range(task_id..);
let index_rhs_task_ids = if rename {
let mut index_rhs_task_ids = self.queue.tasks.index_tasks(wtxn, rhs)?;
index_rhs_task_ids.remove_range(task_id..);
index_rhs_task_ids
} else {
RoaringBitmap::new()
};
// 3. before_name -> new_name in the task's KindWithContent
progress.update_progress(InnerSwappingTwoIndexes::UpdateTheTasks);
@@ -496,7 +689,11 @@ impl IndexScheduler {
})?;
// 6. Swap in the index mapper
self.index_mapper.swap(wtxn, lhs, rhs)?;
if rename {
self.index_mapper.rename(wtxn, lhs, rhs)?;
} else {
self.index_mapper.swap(wtxn, lhs, rhs)?;
}
Ok(())
}
@@ -718,9 +915,10 @@ impl IndexScheduler {
let enqueued_tasks = &self.queue.tasks.get_status(rtxn, Status::Enqueued)?;
// 0. Check if any upgrade task was matched.
// 0. Check if any upgrade or compaction tasks were matched.
// If so, we cancel all the failed or enqueued upgrade tasks.
let upgrade_tasks = &self.queue.tasks.get_kind(rtxn, Kind::UpgradeDatabase)?;
let compaction_tasks = &self.queue.tasks.get_kind(rtxn, Kind::IndexCompaction)?;
let is_canceling_upgrade = !matched_tasks.is_disjoint(upgrade_tasks);
if is_canceling_upgrade {
let failed_tasks = self.queue.tasks.get_status(rtxn, Status::Failed)?;
@@ -785,7 +983,33 @@ impl IndexScheduler {
}
}
// 3. We now have a list of tasks to cancel, cancel them
// 3. If we are cancelling a compaction task, remove the tempfiles after incomplete compactions
for compaction_task in &tasks_to_cancel & compaction_tasks {
progress.update_progress(TaskCancelationProgress::CleaningCompactionLeftover);
let task = self.queue.tasks.get_task(rtxn, compaction_task)?.unwrap();
let Some(Details::IndexCompaction {
index_uid,
pre_compaction_size: _,
post_compaction_size: _,
}) = task.details
else {
unreachable!("wrong details for compaction task {compaction_task}")
};
let index_path = match self.index_mapper.index_mapping.get(rtxn, &index_uid)? {
Some(index_uuid) => self.index_mapper.index_path(index_uuid),
None => continue,
};
if let Err(e) = remove_file(index_path.join(DATA_MDB_COPY_NAME)) {
match e.kind() {
ErrorKind::NotFound => (),
_ => return Err(Error::IoError(e)),
}
}
}
// 4. We now have a list of tasks to cancel, cancel them
let (task_progress, progress_obj) = AtomicTaskStep::new(tasks_to_cancel.len() as u32);
progress.update_progress(progress_obj);

View File

@@ -370,7 +370,7 @@ fn ureq_error_into_error(error: ureq::Error) -> Error {
}
Err(e) => e.into(),
},
ureq::Error::Transport(transport) => io::Error::new(io::ErrorKind::Other, transport).into(),
ureq::Error::Transport(transport) => io::Error::other(transport).into(),
}
}

View File

@@ -66,6 +66,11 @@ impl IndexScheduler {
}
IndexOperation::DocumentOperation { index_uid, primary_key, operations, mut tasks } => {
progress.update_progress(DocumentOperationProgress::RetrievingConfig);
let network = self.network();
let shards = network.shards();
// TODO: at some point, for better efficiency we might want to reuse the bumpalo for successive batches.
// this is made difficult by the fact we're doing private clones of the index scheduler and sending it
// to a fresh thread.
@@ -130,6 +135,7 @@ impl IndexScheduler {
&mut new_fields_ids_map,
&|| must_stop_processing.get(),
progress.clone(),
shards.as_ref(),
)
.map_err(|e| Error::from_milli(e, Some(index_uid.clone())))?;

View File

@@ -12,6 +12,8 @@ use crate::processing::{AtomicUpdateFileStep, SnapshotCreationProgress};
use crate::queue::TaskQueue;
use crate::{Error, IndexScheduler, Result};
const UPDATE_FILES_DIR_NAME: &str = "update_files";
/// # Safety
///
/// See [`EnvOpenOptions::open`].
@@ -78,10 +80,32 @@ impl IndexScheduler {
pub(super) fn process_snapshot(
&self,
progress: Progress,
mut tasks: Vec<Task>,
tasks: Vec<Task>,
) -> Result<Vec<Task>> {
progress.update_progress(SnapshotCreationProgress::StartTheSnapshotCreation);
match self.scheduler.s3_snapshot_options.clone() {
Some(options) => {
#[cfg(not(unix))]
{
let _ = options;
panic!("Non-unix platform does not support S3 snapshotting");
}
#[cfg(unix)]
self.runtime
.as_ref()
.expect("Runtime not initialized")
.block_on(self.process_snapshot_to_s3(progress, options, tasks))
}
None => self.process_snapshots_to_disk(progress, tasks),
}
}
fn process_snapshots_to_disk(
&self,
progress: Progress,
mut tasks: Vec<Task>,
) -> Result<Vec<Task>, Error> {
fs::create_dir_all(&self.scheduler.snapshots_path)?;
let temp_snapshot_dir = tempfile::tempdir()?;
@@ -128,7 +152,7 @@ impl IndexScheduler {
let rtxn = self.env.read_txn()?;
// 2.4 Create the update files directory
let update_files_dir = temp_snapshot_dir.path().join("update_files");
let update_files_dir = temp_snapshot_dir.path().join(UPDATE_FILES_DIR_NAME);
fs::create_dir_all(&update_files_dir)?;
// 2.5 Only copy the update files of the enqueued tasks
@@ -140,7 +164,7 @@ impl IndexScheduler {
let task =
self.queue.tasks.get_task(&rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
if let Some(content_uuid) = task.content_uuid() {
let src = self.queue.file_store.get_update_path(content_uuid);
let src = self.queue.file_store.update_path(content_uuid);
let dst = update_files_dir.join(content_uuid.to_string());
fs::copy(src, dst)?;
}
@@ -206,4 +230,403 @@ impl IndexScheduler {
Ok(tasks)
}
#[cfg(unix)]
pub(super) async fn process_snapshot_to_s3(
&self,
progress: Progress,
opts: meilisearch_types::milli::update::S3SnapshotOptions,
mut tasks: Vec<Task>,
) -> Result<Vec<Task>> {
use meilisearch_types::milli::update::S3SnapshotOptions;
let S3SnapshotOptions {
s3_bucket_url,
s3_bucket_region,
s3_bucket_name,
s3_snapshot_prefix,
s3_access_key,
s3_secret_key,
s3_max_in_flight_parts,
s3_compression_level: level,
s3_signature_duration,
s3_multipart_part_size,
} = opts;
let must_stop_processing = self.scheduler.must_stop_processing.clone();
let retry_backoff = backoff::ExponentialBackoff::default();
let db_name = {
let mut base_path = self.env.path().to_owned();
base_path.pop();
base_path.file_name().and_then(OsStr::to_str).unwrap_or("data.ms").to_string()
};
let (reader, writer) = std::io::pipe()?;
let uploader_task = tokio::spawn(multipart_stream_to_s3(
s3_bucket_url,
s3_bucket_region,
s3_bucket_name,
s3_snapshot_prefix,
s3_access_key,
s3_secret_key,
s3_max_in_flight_parts,
s3_signature_duration,
s3_multipart_part_size,
must_stop_processing,
retry_backoff,
db_name,
reader,
));
let index_scheduler = IndexScheduler::private_clone(self);
let builder_task = tokio::task::spawn_blocking(move || {
stream_tarball_into_pipe(progress, level, writer, index_scheduler)
});
let (uploader_result, builder_result) = tokio::join!(uploader_task, builder_task);
// Check uploader result first to early return on task abortion.
// safety: JoinHandle can return an error if the task was aborted, cancelled, or panicked.
uploader_result.unwrap()?;
builder_result.unwrap()?;
for task in &mut tasks {
task.status = Status::Succeeded;
}
Ok(tasks)
}
}
/// Streams a tarball of the database content into a pipe.
#[cfg(unix)]
fn stream_tarball_into_pipe(
progress: Progress,
level: u32,
writer: std::io::PipeWriter,
index_scheduler: IndexScheduler,
) -> std::result::Result<(), Error> {
use std::io::Write as _;
use std::path::Path;
let writer = flate2::write::GzEncoder::new(writer, flate2::Compression::new(level));
let mut tarball = tar::Builder::new(writer);
// 1. Snapshot the version file
tarball
.append_path_with_name(&index_scheduler.scheduler.version_file_path, VERSION_FILE_NAME)?;
// 2. Snapshot the index scheduler LMDB env
progress.update_progress(SnapshotCreationProgress::SnapshotTheIndexScheduler);
let tasks_env_file = index_scheduler.env.try_clone_inner_file()?;
let path = Path::new("tasks").join("data.mdb");
append_file_to_tarball(&mut tarball, path, tasks_env_file)?;
// 2.3 Create a read transaction on the index-scheduler
let rtxn = index_scheduler.env.read_txn()?;
// 2.4 Create the update files directory
// And only copy the update files of the enqueued tasks
progress.update_progress(SnapshotCreationProgress::SnapshotTheUpdateFiles);
let enqueued = index_scheduler.queue.tasks.get_status(&rtxn, Status::Enqueued)?;
let (atomic, update_file_progress) = AtomicUpdateFileStep::new(enqueued.len() as u32);
progress.update_progress(update_file_progress);
// We create the update_files directory so that it
// always exists even if there are no update files
let update_files_dir = Path::new(UPDATE_FILES_DIR_NAME);
let src_update_files_dir = {
let mut path = index_scheduler.env.path().to_path_buf();
path.pop();
path.join(UPDATE_FILES_DIR_NAME)
};
tarball.append_dir(update_files_dir, src_update_files_dir)?;
for task_id in enqueued {
let task = index_scheduler
.queue
.tasks
.get_task(&rtxn, task_id)?
.ok_or(Error::CorruptedTaskQueue)?;
if let Some(content_uuid) = task.content_uuid() {
use std::fs::File;
let src = index_scheduler.queue.file_store.update_path(content_uuid);
let mut update_file = File::open(src)?;
let path = update_files_dir.join(content_uuid.to_string());
tarball.append_file(path, &mut update_file)?;
}
atomic.fetch_add(1, Ordering::Relaxed);
}
// 3. Snapshot every index
progress.update_progress(SnapshotCreationProgress::SnapshotTheIndexes);
let index_mapping = index_scheduler.index_mapper.index_mapping;
let nb_indexes = index_mapping.len(&rtxn)? as u32;
let indexes_dir = Path::new("indexes");
let indexes_references: Vec<_> = index_scheduler
.index_mapper
.index_mapping
.iter(&rtxn)?
.map(|res| res.map_err(Error::from).map(|(name, uuid)| (name.to_string(), uuid)))
.collect::<Result<_, Error>>()?;
// It's prettier to use a for loop instead of the IndexMapper::try_for_each_index
// method, especially when we need to access the UUID, local path and index number.
for (i, (name, uuid)) in indexes_references.into_iter().enumerate() {
progress.update_progress(VariableNameStep::<SnapshotCreationProgress>::new(
&name, i as u32, nb_indexes,
));
let path = indexes_dir.join(uuid.to_string()).join("data.mdb");
let index = index_scheduler.index_mapper.index(&rtxn, &name)?;
let index_file = index.try_clone_inner_file()?;
tracing::trace!("Appending index file for {name} in {}", path.display());
append_file_to_tarball(&mut tarball, path, index_file)?;
}
drop(rtxn);
// 4. Snapshot the auth LMDB env
progress.update_progress(SnapshotCreationProgress::SnapshotTheApiKeys);
let auth_env_file = index_scheduler.scheduler.auth_env.try_clone_inner_file()?;
let path = Path::new("auth").join("data.mdb");
append_file_to_tarball(&mut tarball, path, auth_env_file)?;
let mut gzencoder = tarball.into_inner()?;
gzencoder.flush()?;
gzencoder.try_finish()?;
let mut writer = gzencoder.finish()?;
writer.flush()?;
Result::<_, Error>::Ok(())
}
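For context, `stream_tarball_into_pipe` layers a `tar::Builder` on top of a `flate2` `GzEncoder` and finishes both streams explicitly before flushing. A minimal sketch of the same layering against a plain file instead of a pipe (the `tar_gz_dir` helper and the "data" entry name are illustrative assumptions, not part of this change):
use std::fs::File;
use std::io::Write as _;
use std::path::Path;
use flate2::write::GzEncoder;
use flate2::Compression;
fn tar_gz_dir(src_dir: &Path, dst: &Path, level: u32) -> std::io::Result<()> {
    let file = File::create(dst)?;
    let encoder = GzEncoder::new(file, Compression::new(level));
    let mut tarball = tar::Builder::new(encoder);
    // Store the whole directory under a stable name inside the archive.
    tarball.append_dir_all("data", src_dir)?;
    // Finish the tar stream, then the gzip stream, then flush to disk.
    let mut encoder = tarball.into_inner()?;
    encoder.try_finish()?;
    let mut file = encoder.finish()?;
    file.flush()?;
    Ok(())
}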
#[cfg(unix)]
fn append_file_to_tarball<W, P>(
tarball: &mut tar::Builder<W>,
path: P,
mut auth_env_file: fs::File,
) -> Result<(), Error>
where
W: std::io::Write,
P: AsRef<std::path::Path>,
{
use std::io::{Seek as _, SeekFrom};
// Note: A previous snapshot operation may have left the cursor
// at the end of the file so we need to seek to the start.
auth_env_file.seek(SeekFrom::Start(0))?;
tarball.append_file(path, &mut auth_env_file)?;
Ok(())
}
/// Streams the content read from the given reader to S3.
#[cfg(unix)]
#[allow(clippy::too_many_arguments)]
async fn multipart_stream_to_s3(
s3_bucket_url: String,
s3_bucket_region: String,
s3_bucket_name: String,
s3_snapshot_prefix: String,
s3_access_key: String,
s3_secret_key: String,
s3_max_in_flight_parts: std::num::NonZero<usize>,
s3_signature_duration: std::time::Duration,
s3_multipart_part_size: u64,
must_stop_processing: super::MustStopProcessing,
retry_backoff: backoff::exponential::ExponentialBackoff<backoff::SystemClock>,
db_name: String,
reader: std::io::PipeReader,
) -> Result<(), Error> {
use std::{collections::VecDeque, os::fd::OwnedFd, path::PathBuf};
use bytes::{Bytes, BytesMut};
use reqwest::{Client, Response};
use rusty_s3::S3Action as _;
use rusty_s3::{actions::CreateMultipartUpload, Bucket, BucketError, Credentials, UrlStyle};
use tokio::task::JoinHandle;
let reader = OwnedFd::from(reader);
let reader = tokio::net::unix::pipe::Receiver::from_owned_fd(reader)?;
let s3_snapshot_prefix = PathBuf::from(s3_snapshot_prefix);
let url =
s3_bucket_url.parse().map_err(BucketError::ParseError).map_err(Error::S3BucketError)?;
let bucket = Bucket::new(url, UrlStyle::Path, s3_bucket_name, s3_bucket_region)
.map_err(Error::S3BucketError)?;
let credential = Credentials::new(s3_access_key, s3_secret_key);
// Note for the future (rust 1.91+): use with_added_extension, it's prettier
let object_path = s3_snapshot_prefix.join(format!("{db_name}.snapshot"));
// Note: It doesn't work on Windows and if a port to this platform is needed,
// use the slash-path crate or similar to get the correct path separator.
let object = object_path.display().to_string();
let action = bucket.create_multipart_upload(Some(&credential), &object);
let url = action.sign(s3_signature_duration);
let client = Client::new();
let resp = client.post(url).send().await.map_err(Error::S3HttpError)?;
let status = resp.status();
let body = match resp.error_for_status_ref() {
Ok(_) => resp.text().await.map_err(Error::S3HttpError)?,
Err(_) => {
return Err(Error::S3Error { status, body: resp.text().await.unwrap_or_default() })
}
};
let multipart =
CreateMultipartUpload::parse_response(&body).map_err(|e| Error::S3XmlError(Box::new(e)))?;
tracing::debug!("Starting the upload of the snapshot to {object}");
// We use this bumpalo for etags strings.
let bump = bumpalo::Bump::new();
let mut etags = Vec::<&str>::new();
let mut in_flight = VecDeque::<(JoinHandle<reqwest::Result<Response>>, Bytes)>::with_capacity(
s3_max_in_flight_parts.get(),
);
// Part numbers start at 1 and cannot be larger than 10k
for part_number in 1u16.. {
if must_stop_processing.get() {
return Err(Error::AbortedTask);
}
let part_upload =
bucket.upload_part(Some(&credential), &object, part_number, multipart.upload_id());
let url = part_upload.sign(s3_signature_duration);
// Wait for a buffer to be ready if there are in-flight parts that landed
let mut buffer = if in_flight.len() >= s3_max_in_flight_parts.get() {
let (handle, buffer) = in_flight.pop_front().expect("At least one in flight request");
let resp = join_and_map_error(handle).await?;
extract_and_append_etag(&bump, &mut etags, resp.headers())?;
let mut buffer = match buffer.try_into_mut() {
Ok(buffer) => buffer,
Err(_) => unreachable!("All bytes references were consumed in the task"),
};
buffer.clear();
buffer
} else {
BytesMut::with_capacity(s3_multipart_part_size as usize)
};
// If we successfully read enough bytes,
// we can continue and send the buffer/part
while buffer.len() < (s3_multipart_part_size as usize / 2) {
// Wait for the pipe to be readable
use std::io;
reader.readable().await?;
match reader.try_read_buf(&mut buffer) {
Ok(0) => break,
// We read some bytes but maybe not enough
Ok(_) => continue,
// The readiness event is a false positive.
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => continue,
Err(e) => return Err(e.into()),
}
}
if buffer.is_empty() {
// Break the loop if the buffer is
// empty after we tried to read bytes
break;
}
let body = buffer.freeze();
tracing::trace!("Sending part {part_number}");
let task = tokio::spawn({
let client = client.clone();
let body = body.clone();
backoff::future::retry(retry_backoff.clone(), move || {
let client = client.clone();
let url = url.clone();
let body = body.clone();
async move {
match client.put(url).body(body).send().await {
Ok(resp) if resp.status().is_client_error() => {
resp.error_for_status().map_err(backoff::Error::Permanent)
}
Ok(resp) => Ok(resp),
Err(e) => Err(backoff::Error::transient(e)),
}
}
})
});
in_flight.push_back((task, body));
}
for (handle, _buffer) in in_flight {
let resp = join_and_map_error(handle).await?;
extract_and_append_etag(&bump, &mut etags, resp.headers())?;
}
tracing::debug!("Finalizing the multipart upload");
let action = bucket.complete_multipart_upload(
Some(&credential),
&object,
multipart.upload_id(),
etags.iter().map(AsRef::as_ref),
);
let url = action.sign(s3_signature_duration);
let body = action.body();
let resp = backoff::future::retry(retry_backoff, move || {
let client = client.clone();
let url = url.clone();
let body = body.clone();
async move {
match client.post(url).body(body).send().await {
Ok(resp) if resp.status().is_client_error() => {
resp.error_for_status().map_err(backoff::Error::Permanent)
}
Ok(resp) => Ok(resp),
Err(e) => Err(backoff::Error::transient(e)),
}
}
})
.await
.map_err(Error::S3HttpError)?;
let status = resp.status();
let body = resp.text().await.map_err(|e| Error::S3Error { status, body: e.to_string() })?;
if status.is_success() {
Ok(())
} else {
Err(Error::S3Error { status, body })
}
}
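One detail of the upload loop above worth calling out: each part body is a `Bytes` that is cloned into the retrying request, and once the in-flight request has landed the buffer is reclaimed with `try_into_mut` so the allocation can be reused for the next part. A minimal sketch of that reuse (assumes `bytes` 1.4+ for `Bytes::try_into_mut`; the `reuse_buffer` helper and its 8 MiB fallback are illustrative, and the code above instead treats the still-shared case as unreachable):
use bytes::{Bytes, BytesMut};
fn reuse_buffer(sent: Bytes) -> BytesMut {
    match sent.try_into_mut() {
        // We were the last owner of the bytes: clear and reuse the allocation.
        Ok(mut buf) => {
            buf.clear();
            buf
        }
        // Another clone is still alive somewhere: fall back to a fresh buffer.
        Err(_still_shared) => BytesMut::with_capacity(8 * 1024 * 1024),
    }
}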
#[cfg(unix)]
async fn join_and_map_error(
join_handle: tokio::task::JoinHandle<Result<reqwest::Response, reqwest::Error>>,
) -> Result<reqwest::Response> {
// safety: unwrapping only panics if the task (JoinHandle) was aborted, cancelled, or panicked
let request = join_handle.await.unwrap();
let resp = request.map_err(Error::S3HttpError)?;
match resp.error_for_status_ref() {
Ok(_) => Ok(resp),
Err(_) => Err(Error::S3Error {
status: resp.status(),
body: resp.text().await.unwrap_or_default(),
}),
}
}
#[cfg(unix)]
fn extract_and_append_etag<'b>(
bump: &'b bumpalo::Bump,
etags: &mut Vec<&'b str>,
headers: &reqwest::header::HeaderMap,
) -> Result<()> {
use reqwest::header::ETAG;
let etag = headers.get(ETAG).ok_or_else(|| Error::S3XmlError("Missing ETag header".into()))?;
let etag = etag.to_str().map_err(|e| Error::S3XmlError(Box::new(e)))?;
etags.push(bump.alloc_str(etag));
Ok(())
}
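The retry policy used for both the per-part uploads and the final `complete_multipart_upload` call treats 4xx responses as permanent failures (retrying a bad request or an expired signature will not help), retries transport errors as transient, and hands any other response back to the caller. A stand-alone sketch of that classification with the `backoff` crate, whose async `retry` is already used above (`put_with_retry` and its arguments are placeholders, not part of this change):
use backoff::ExponentialBackoff;
async fn put_with_retry(
    client: reqwest::Client,
    url: reqwest::Url,
    body: bytes::Bytes,
) -> Result<reqwest::Response, reqwest::Error> {
    backoff::future::retry(ExponentialBackoff::default(), move || {
        let client = client.clone();
        let url = url.clone();
        let body = body.clone();
        async move {
            match client.put(url).body(body).send().await {
                // A 4xx will not get better by retrying: fail permanently.
                Ok(resp) if resp.status().is_client_error() => {
                    resp.error_for_status().map_err(backoff::Error::Permanent)
                }
                // Any other response is handed back to the caller as-is.
                Ok(resp) => Ok(resp),
                // Transport/network errors are worth retrying.
                Err(e) => Err(backoff::Error::transient(e)),
            }
        }
    })
    .await
}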

View File

@@ -6,9 +6,9 @@ source: crates/index-scheduler/src/scheduler/test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, batch_uid: 2, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
3 {uid: 3, batch_uid: 3, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
4 {uid: 4, batch_uid: 4, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "cattos" }}
5 {uid: 5, batch_uid: 5, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "girafos" }}

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------

View File

@@ -6,7 +6,7 @@ source: crates/index-scheduler/src/scheduler/test.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
0 {uid: 0, batch_uid: 0, status: succeeded, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, batch_uid: 1, status: succeeded, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, batch_uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/scheduler/test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
0 {uid: 0, status: enqueued, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/scheduler/test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
0 {uid: 0, status: enqueued, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/scheduler/test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
0 {uid: 0, status: enqueued, details: { primary_key: None, old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------

View File

@@ -7,7 +7,7 @@ source: crates/index-scheduler/src/scheduler/test.rs
{uid: 0, details: {"primaryKey":"id"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"index_a":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("id"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -1,13 +1,12 @@
---
source: crates/index-scheduler/src/scheduler/test.rs
snapshot_kind: text
---
### Autobatching Enabled = true
### Processing batch None:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("id"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]

View File

@@ -7,8 +7,8 @@ source: crates/index-scheduler/src/scheduler/test.rs
{uid: 0, details: {"primaryKey":"id"}, stats: {"totalNbTasks":1,"status":{"processing":1},"types":{"indexCreation":1},"indexUids":{"index_a":1}}, stop reason: "created batch containing only task with id 0 of type `indexCreation` that cannot be batched with any other task.", }
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_b", primary_key: Some("id") }}
0 {uid: 0, status: enqueued, details: { primary_key: Some("id"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("id"), old_new_uid: None, new_index_uid: None }, kind: IndexCreation { index_uid: "index_b", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]

Some files were not shown because too many files have changed in this diff