Compare commits

...

1058 Commits

Author SHA1 Message Date
0aa3f667d4 Merge #3136
3136: Fix no master key error r=Kerollmops a=ManyTheFish


Fixes #3135

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-24 15:25:11 +00:00
1eba5d45ea Check if the master key is missing before returning an error 2022-11-24 16:02:14 +01:00
cfa78418f2 Update tests 2022-11-24 15:51:15 +01:00
aaf5abbf1c Merge #3085
3085: refactorize the whole test suite r=irevoire a=irevoire

1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It uses the same amount of lines and ensures we never change something without noticing in any tests ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv; it eases the process of writing tests. Now, when something fails, you get a failure instead of a deadlock.
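
A minimal sketch of the timeout pattern from point 5, assuming a plain `std::sync::mpsc` channel and a made-up `Breakpoint` type (the real test harness may use a different channel crate):

```rust
use std::sync::mpsc::Receiver;
use std::time::Duration;

/// Hypothetical breakpoint type, only for illustration.
#[derive(Debug, PartialEq)]
enum Breakpoint {
    Start,
    BatchCreated,
    AfterProcessing,
}

/// Wait for the scheduler to reach `expected`, failing the test instead of
/// deadlocking when the message never arrives.
fn wait_for(receiver: &Receiver<Breakpoint>, expected: Breakpoint) {
    match receiver.recv_timeout(Duration::from_secs(10)) {
        Ok(breakpoint) => assert_eq!(breakpoint, expected),
        Err(error) => panic!("timed out waiting for {expected:?}: {error}"),
    }
}
```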

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-11-23 19:55:08 +00:00
f724f8adfe Merge #3122
3122: Display the `dumpUid` as `null` until we create it r=irevoire a=Kerollmops

This PR fixes #3117 by displaying the `DumpCreation` `dumpUid` details field as `null` until we compute the dump and the task is finished.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-23 14:37:50 +00:00
cde2a96486 Display a null dumpUid until we have computed the dump itself on disk 2022-11-23 15:16:58 +01:00
ceca386dc0 Merge #3114
3114: Update the analytics on the ranking rules r=irevoire a=irevoire

fix #3113

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-11-23 14:03:57 +00:00
370a45a58b send the ranking rules as a string because amplitude is too dumb to process an array as a single value 2022-11-23 14:56:22 +01:00
7093bae131 Update the dump test to check for the dumpUid dumpCreation task details 2022-11-23 14:48:39 +01:00
fb785dc5ac Add more analytics on the ranking rules positions 2022-11-23 12:51:34 +01:00
3a0b1a0c0e try to remove the flakiness of the failing test 2022-11-23 11:22:24 +01:00
af808462b6 update the snapshots after a rebase 2022-11-23 11:16:59 +01:00
2999ae3da4 makes clippy happy 2022-11-23 11:13:54 +01:00
23ec7db3f9 rebase on release-v0.30 2022-11-23 11:13:54 +01:00
f02e5cfaa6 refactorize the whole test suite
1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It takes as many lines and ensures we never change something without noticing in any tests ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv; it eases the process of writing tests. Now, when something fails, you get a failure instead of a deadlock.
2022-11-23 11:13:54 +01:00
8443554b1f Merge #3110
3110: Always display `deletedDocuments` in the `IndexDeletion` details r=ManyTheFish a=Kerollmops

This PR fixes #3108 by always displaying a `deletedDocuments` details info.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-23 09:37:02 +00:00
84c782ce9a Fix the insta tests 2022-11-22 18:53:17 +01:00
1392a3b304 Merge #3106
3106: Fix linking error when building binaries for aarch64 r=curquiza a=Kerollmops

This PR tries to fix #3094. Please don't look too closely. It will be horrifying 😱
You can look at [the status of the fix in the CI](https://github.com/meilisearch/meilisearch/actions/runs/3523723323/jobs/5908229083).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-22 15:49:53 +00:00
2ec699a2e7 Always display details for the indexDeletion task 2022-11-22 15:14:28 +01:00
d02837f982 Don't use gold but the default linker 2022-11-22 14:48:57 +01:00
6784d17d0e Merge #3105
3105: Fix publish release CI r=Kerollmops a=curquiza

Fix the CI to avoid creating a release when manually triggering it with `workflow_dispatch`: only trigger the binary upload when the event is `release`, instead of merely different from `schedule`.

Co-authored-by: curquiza <clementine@meilisearch.com>
2022-11-22 10:49:52 +00:00
415977a41e Fix publish release CI 2022-11-22 11:38:45 +01:00
a07e1f7a00 Merge #3099
3099: Add a dispatch to the publish binaries workflow r=curquiza a=Kerollmops

This PR adds a dispatch to the publish binaries workflow to help us debug #3094.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-21 17:15:20 +00:00
b0460abf54 Add a dispatch to the publish binaries workflow 2022-11-21 18:07:47 +01:00
48e00cdf3f Merge #3096
3096: Fix total memory computation r=Kerollmops a=dureuill

# Pull Request

## Related issue
Fixes #3018 

## What does this PR do?
- Don't multiply the total memory value returned by sysinfo by 1024 anymore:
  According to the [changelog of sysinfo 0.26.0](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md#0260), units are now in bytes and not KBs.
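
A minimal sketch of the fix, assuming the sysinfo 0.26 API:

```rust
use sysinfo::{RefreshKind, System, SystemExt};

/// Since sysinfo 0.26, `total_memory()` already returns bytes,
/// so the old `* 1024` KB-to-bytes conversion must be dropped.
fn total_memory_bytes() -> u64 {
    let sys = System::new_with_specifics(RefreshKind::new().with_memory());
    sys.total_memory() // bytes, not KB
}
```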

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-21 15:08:51 +00:00
aaea5f87db Don't multiply total memory returned by sysinfo anymore
sysinfo now returns bytes rather than KB
2022-11-21 15:28:05 +01:00
44440bb8f4 Merge #3084
3084: Bump the `milli` and `grenad` dependencies r=irevoire a=Kerollmops

This PR bumps the direct `milli` dependency and the indirect `grenad` dependency.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 16:21:25 +00:00
ec74fd6b44 Bump milli to version v0.37.0 2022-11-17 17:02:15 +01:00
d3bc0c6e93 Bump grenad to 0.4.4 2022-11-17 16:59:13 +01:00
262c8bf68b Merge #3083
3083: Add `workflow_dispatch` to flaky.yml r=irevoire a=curquiza

Following this PR: bringing the change into `release-v0.30.0` as well, otherwise we cannot test it

<img width="346" alt="Capture d’écran 2022-11-17 à 16 55 12" src="https://user-images.githubusercontent.com/20380692/202494559-6032665c-e336-4d4b-8a84-8be5e2371370.png">


Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-11-17 15:57:26 +00:00
c4a669d056 Add workflow_dispatch to flaky.yml 2022-11-17 16:54:09 +01:00
dfaf845382 Merge #3080
3080: Rename `originalFilters` into `originalFilter` on the cancelation and deletion routes r=irevoire a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3079.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 13:49:50 +00:00
1fe9fccf6e Merge #3081
3081: Rename `matchedDocuments` into `providedIds` r=ManyTheFish a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3078.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 12:55:59 +00:00
9fe32e1e3b Rename matchedDocuments into providedIds 2022-11-17 13:48:23 +01:00
388305fcb6 Rename originalFilters into originalFilter 2022-11-17 13:44:00 +01:00
49bc45e0d4 Merge #3074
3074: add the analytics of the swap-indexes route r=irevoire a=irevoire

implements https://github.com/meilisearch/specifications/pull/192/files#diff-dbac052211a8ea4b2c5d068a6264380740b022efead552b4dfc9e4e8d961f0f4R276-R284

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 11:29:35 +00:00
b478b18218 Merge #3071
3071: Analytics on the tasks route r=Kerollmops a=irevoire

Implement the missing analytics on the delete and cancel task routes.
+ Batch the analytics on the `GET tasks` route to avoid flooding ourselves while polling meilisearch.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 11:05:56 +00:00
877d1735b1 Merge #3077
3077: Don't remove DB if unreadable r=irevoire a=dureuill

# Pull Request

## Related issue
Related to #3069

Will fix it after merging into `main`

## What does this PR do?

### User standpoint

- When the DB cannot be read after opening it, Meilisearch exits with an error message as before, but it now leaves the DB untouched.
- When the DB cannot be read after importing it from a snapshot or dump, it is still removed, as before.

### Dev standpoint

- Add a new local enum `OnFailure`, used as a parameter to the `meilisearch_builder` closure to control whether to keep or remove the DB in case of failure.
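
A hedged sketch of that pattern; apart from `OnFailure`, every name below is a stand-in rather than the real `meilisearch-http` code:

```rust
use std::path::Path;

/// Variant names are assumptions for the sketch.
#[derive(Clone, Copy)]
enum OnFailure {
    RemoveDb,
    KeepDb,
}

/// Stand-in for the real builder closure: on failure, the DB directory is
/// only wiped when the caller asked for it (e.g. snapshot/dump import).
fn build_meilisearch(db_path: &Path, on_failure: OnFailure) -> std::io::Result<()> {
    let result = open_database(db_path);
    if result.is_err() {
        if let OnFailure::RemoveDb = on_failure {
            std::fs::remove_dir_all(db_path)?;
        }
    }
    result
}

fn open_database(_db_path: &Path) -> std::io::Result<()> {
    Ok(()) // placeholder for the actual opening logic
}
```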

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-17 09:38:27 +00:00
a1a29e92fd Stop removing the DB when failing to read it 2022-11-17 09:33:48 +01:00
fea969d5e5 Merge #3072
3072: Update the finite pagination analytics r=ManyTheFish a=irevoire

Implement this spec: https://github.com/meilisearch/specifications/pull/196/files?show-viewed-files=true&file-filters%5B%5D=#diff-dbac052211a8ea4b2c5d068a6264380740b022efead552b4dfc9e4e8d961f0f4R111

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 08:04:58 +00:00
fcca7475fa add the analytics of the swap-indexes route 2022-11-16 19:10:01 +01:00
3fc1d7e67b Update the finite pagination analytics 2022-11-16 19:01:21 +01:00
f1884d6910 batch the tasks seen events 2022-11-16 18:45:19 +01:00
0e6394fafc add analytics on the task route
* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation
2022-11-16 18:45:19 +01:00
637ca7b9fa Merge #3067
3067: Fix task details serialization r=Kerollmops a=ManyTheFish

# Pull Request

- document addition task details always contain the field `indexedDocuments`
  - value is set to `null` when the task is enqueued or processing
  - value is set to `0` when the task is canceled or failed
- the field `deletedDocuments` of the document deletion task details is set to `0` when the task is canceled or failed
- the field `deletedDocuments` of the document clearAll task details is set to `0` when the task is canceled or failed
- the field `deletedTasks` of the task deletion task details is set to `0` when the task is canceled or failed
- the field `canceledTasks` of the task cancelation task details is set to `0` when the task is canceled or failed
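
A hedged serde sketch of the `indexedDocuments` rule; the struct and helper below are simplified stand-ins, not the exact meilisearch-types definitions:

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct DocumentAdditionDetails {
    received_documents: u64,
    /// No `skip_serializing_if`: the field is always present,
    /// serialized as `null` while it is `None`.
    indexed_documents: Option<u64>,
}

fn details_for(status: &str, received: u64, indexed: u64) -> DocumentAdditionDetails {
    let indexed_documents = match status {
        "enqueued" | "processing" => None, // -> "indexedDocuments": null
        "canceled" | "failed" => Some(0),  // -> "indexedDocuments": 0
        _ => Some(indexed),                // succeeded: the real count
    };
    DocumentAdditionDetails { received_documents: received, indexed_documents }
}
```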

## Related issue
Fixes #3057
Fixes #3058


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-16 16:15:03 +00:00
25e39edc7e Fix tests 2022-11-16 16:45:20 +01:00
f908ae2ef4 Merge #3068
3068: Prepend question mark to the `originalFilters` of the task deletion and cancelation r=irevoire a=Kerollmops

This pull request fixes #3064 by prepending [the HTTP query question mark](https://en.wikipedia.org/wiki/Question_mark#Computing) to the `originalFilters` of the task deletion and cancelation.
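
A one-function sketch of the change (illustrative, not the exact code):

```rust
/// Prepend the HTTP query question mark, e.g.
/// "statuses=succeeded,failed" -> "?statuses=succeeded,failed".
fn original_filters(filters: &str) -> String {
    format!("?{filters}")
}
```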

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-16 15:02:04 +00:00
8ddec58430 Merge #3061
3061: Name spawned threads r=irevoire a=dureuill

# Pull Request

## Related issue
None, this is to improve debuggability

## What does this PR do?
- This PR replaces the raw `thread::spawn(...)` calls with `thread::Builder::new().name(...).spawn(...).unwrap()` calls so that we can give meaningful names to threads.
- This PR also sets up the `rayon` thread pool to give a name to its threads (a sketch of both changes follows the screenshot below).
- This improves debuggability, as the thread names are reported by debuggers:

<img width="411" alt="Capture d’écran 2022-11-16 à 10 26 27" src="https://user-images.githubusercontent.com/41078892/202141870-a88663aa-d2f8-494f-b4da-709fdbd072ba.png">

(screenshot showing VS Code's debugger and its main/scheduler/indexing threads)
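
A short sketch of both changes; the thread names are illustrative:

```rust
use std::thread;

/// Spawn an OS thread that shows up as "scheduler" in a debugger.
fn spawn_scheduler_thread() -> thread::JoinHandle<()> {
    thread::Builder::new()
        .name("scheduler".to_string())
        .spawn(|| { /* scheduler loop */ })
        .unwrap()
}

/// Build a rayon pool whose workers are named "indexing-thread-0", etc.
fn build_indexing_pool() -> rayon::ThreadPool {
    rayon::ThreadPoolBuilder::new()
        .thread_name(|index| format!("indexing-thread-{index}"))
        .build()
        .unwrap()
}
```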

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-16 14:25:32 +00:00
b4d0403518 Merge #3065
3065: Implement the analytics on the health and version routes r=Kerollmops a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3063

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 14:01:43 +00:00
3525c964a7 Add the question mark to the task cancelation query filter 2022-11-16 14:30:35 +01:00
ed51df41e5 Add the question mark to the task deletion query filter 2022-11-16 14:28:30 +01:00
7f89e302a2 Merge #3049 #3063
3049: Plural on task query r=Kerollmops a=irevoire

Fixes #3045 and fixes #3046

3063: Update the analytics on the search r=Kerollmops a=irevoire

Partially fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3060

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 13:20:40 +00:00
c603e17b1d Merge #3060
3060: Add analytics on all the settings r=Kerollmops a=irevoire

Partially fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3059

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 12:48:23 +00:00
07b28ea8cf Fix task details serialization 2022-11-16 13:47:08 +01:00
10ab5f6a58 implements the analytics on the health and version routes 2022-11-16 13:06:10 +01:00
684b90066d update the analytics on the search route 2022-11-16 12:30:49 +01:00
93ab019304 update the distinct attributes to the spec update 2022-11-16 12:25:33 +01:00
1ef517f63d Merge #3059
3059: Create the analytics for the document deletion r=Kerollmops a=irevoire

Partially fix #2955

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 10:44:59 +00:00
1a1ede96de Spawn rayon threads with names 2022-11-16 10:28:25 +01:00
93afeedcea Spawn threads with names 2022-11-16 09:50:47 +01:00
25d057b75e Add analytics on all the settings 2022-11-15 19:09:02 +01:00
b44c381c2a Store analytics for the documents deletions 2022-11-15 19:08:45 +01:00
51be75a264 Merge #3056
3056: refactor the way we send the CLI information + add the analytics for the config file and SSL usage r=Kerollmops a=irevoire

Partially fix #2955

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-15 17:03:30 +00:00
4953b62712 reformat, sorry @kero 2022-11-15 17:57:27 +01:00
9473cccc27 add a comment over the new infos structure 2022-11-15 17:56:07 +01:00
9327db3e91 Apply suggestions from code review
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-15 17:53:23 +01:00
0fced6f270 refactor the way we send the CLI information + add the analytics for the config file and SSL usage 2022-11-15 16:32:31 +01:00
b4434dcad2 rename the error codes for the sake of consistency 2022-11-14 23:27:02 +01:00
d08d97bf43 fix the error messages and add tests 2022-11-14 23:27:02 +01:00
a8991ccb64 Merge #3036
3036: Bump milli to v0.35.1 r=irevoire a=Kerollmops

This PR bumps milli to v0.35.1 which brings some fixes. You can see [the changelog of milli on the release page](https://github.com/meilisearch/milli/releases/tag/v0.35.1).

Fixes #2905
Fixes #3004
Fixes #3000
Fixes #3021
Fixes #2945

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-10 14:25:54 +00:00
761bd3aca4 Fix the new error messages 2022-11-10 12:04:25 +01:00
26ab6ab0cc Bump milli version to 0.35.1 2022-11-10 12:04:25 +01:00
379522ace3 Merge #3023
3023: Update error codes related to tasks cancelation + add canceledBy filter r=Kerollmops a=Kerollmops

<details>This PR changes the error codes [to follow the specification](https://github.com/meilisearch/specifications/pull/195).

 - [x] The `missing_filters` error code is renamed `missing_task_filters` to be more accurate and follow the `invalid_task_*` convention.
 - [x] The error code `invalid_task_uids_filter` is added.
 - [x] The error code `invalid_task_canceled_by_filter` is added.
 - [x] The error code `invalid_task_date_filter` is added.
      -  The error message is the same as for expires_at in the API Key  EXCEPT that it does not explicitly mention that a date must be given in the future.
</details>

Edit by `@loiclec` :
I have added a few more changes into this PR. The related issues are:

- Fixes https://github.com/meilisearch/meilisearch/issues/3029
- Implements https://github.com/meilisearch/meilisearch/issues/3026
- Fixes https://github.com/meilisearch/meilisearch/issues/2940
- Fixes https://github.com/meilisearch/meilisearch/issues/2939

Additionally:
- Fixes a bug where global tasks were returned by `GET /tasks` queries even if the user did not have the `index.*` API key action.
- Rename `originalQuery` to `originalFilters`
- Display `error: null` and `canceledBy: null` in the task views
- Allow using the star operator in the task filters in the `DELETE /tasks` and `POST /tasks/cancel` routes
- Make sure that the index scheduler keeps making progress even when a grave error occurs.


Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-11-10 10:51:41 +00:00
1d5f17a9ea Merge #3032
3032: Fix Index name parsing error message to fit the specification r=Kerollmops a=ManyTheFish

fixes #2924


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-08 16:01:53 +00:00
8bb260bf3e Fix Index name parsing error message to fit the specification 2022-11-08 16:35:58 +01:00
52b38bee9d Make rustfmt happy
>:-(
2022-11-08 15:45:53 +01:00
f5454dfa60 Make clippy happy
They're a happy clip now.
2022-11-08 15:31:08 +01:00
1e464e87fc Update more insta-snap tests 2022-11-08 13:39:52 +01:00
6126fc8d98 Rename original_query to original_filters everywhere 2022-11-08 13:18:18 +01:00
2fdd814e57 Update tests following task API changes 2022-11-08 13:18:18 +01:00
20fa103992 Add canceledBy task filter 2022-11-08 13:18:18 +01:00
d5638d2c27 Use more precise error codes/message for the task routes
+ Allow star operator in delete/cancel tasks
+ rename originalQuery to originalFilters
+ Display error/canceled_by in task view even when they are = null
+ Rename task filter fields by using their plural forms
+ Prepare an error code for canceledBy filter
+ Only return global tasks if the API key action `index.*` is there
2022-11-08 13:18:17 +01:00
932414bf72 WIP Introduce the invalid_task_uid error code 2022-11-08 13:17:56 +01:00
b20025c01e Change the missing_filters error code into missing_task_filters 2022-11-08 13:17:29 +01:00
3999f74f78 Merge #3022
3022: Store the `started_at` for a task that is canceled when processing r=irevoire a=Kerollmops

This PR changes the current behavior of the displayed tasks. When a processing task is canceled, the `started_at` date time is kept and displayed to the user. Otherwise, if the task was just enqueued, the `started_at` remains `null`. If a task is processing when the engine is Ctrl-C'd and restarted, the task becomes enqueued again, so if it is then canceled, its `started_at` will be `null`.

You can read more [in this discussion](https://github.com/meilisearch/specifications/pull/195/files#r1009602335).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-03 20:48:24 +00:00
739b9f5505 Use the content of the ProcessingTasks in the tasks cancelation system 2022-11-03 11:09:59 +01:00
722a0da0c3 Merge #3019
3019: Fix error code of the "duplicate index found" error on the index swap route r=irevoire a=loiclec

According to the spec, the code of the error should be `duplicate_index_found`, but it was `bad_request` instead.

Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-11-03 09:49:55 +00:00
5704a1895d Fix error code of the "duplicate index found" error 2022-11-02 09:34:50 +01:00
2254bbf3bd Merge #3002
3002: Fix dump import without instance uid r=Kerollmops a=irevoire

When creating a dump without any instance-uid (that can happen if you’ve always run meilisearch with the `--no-analytics` flag), you could get an error when trying to load the dump.


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-31 12:58:37 +00:00
510afda590 remove unused import 2022-10-30 20:05:20 +01:00
fea9fdcd7e fix the dump reader process when no instance-uid was specified 2022-10-30 20:00:27 +01:00
dd1011ba76 Merge #2995
2995: merge the settings and do one indexation at the end r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 21:24:21 +00:00
20258461a8 Merge #2981 #2996
2981: Move index swap error handling from meilisearch-http to index-scheduler r=irevoire a=loiclec

And make the index_not_found error asynchronous, since we can't know whether the index will exist by the time the index swap task is processed.

Improve the index-swap test to verify that future tasks are not swapped and to test the new error messages that were introduced.

## Related issue
https://github.com/meilisearch/meilisearch/issues/2973


2996: Get rid of the unnecessary tasks when an index_uid is specified r=Kerollmops a=irevoire



Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 19:11:23 +00:00
87cac158c4 Update index-scheduler/src/batch.rs 2022-10-27 18:08:21 +02:00
c9f89d38e3 Merge branch 'main' into index-swap-error-handling 2022-10-27 18:06:45 +02:00
01687c87a2 Get rid of the unnecessary tasks when an index_uid is specified 2022-10-27 18:00:04 +02:00
313f204f39 merge the settings and do one indexation at the end 2022-10-27 16:38:21 +02:00
d16ea755d8 Merge #2982
2982: Adapt task queries to account for special index swap rules r=irevoire a=loiclec

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/2970 

## What does this PR do?
- Replace the `get_tasks` method with a `get_tasks_from_authorized_indexes` which returns the list of tasks matched by the query **from the point of view of the user**. That is, it takes into consideration the list of authorised indexes as well as the special case of `IndexSwap` which should not be returned if an index_uid is specified or if any of its associated indexes are not authorised.
- Adapt the code in other places following this change
- Add some tests
- Also the method `get_task_ids_from_authorized_indexes` now takes a read transaction as argument. This is because we want to make sure that the implementation of `get_tasks_from_authorized_indexes` only uses one read transaction. Otherwise, we could (1) get a list of task ids matching the query, then (2) one of these task ids is deleted by a taskDeletion task, and finally (3) we try to get the `Task`s associated with each returned task ids, and get a `CorruptedTaskQueue` error.
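
A self-contained sketch of the single-read-transaction rule from the last point; every type below is a stand-in, not the real index-scheduler API:

```rust
struct Rtxn; // stand-in for a heed read transaction
struct Query; // stand-in for the task query filters
struct Task {
    uid: u32,
}

struct Scheduler;

impl Scheduler {
    fn read_txn(&self) -> Rtxn {
        Rtxn
    }

    fn get_task_ids_from_authorized_indexes(&self, _rtxn: &Rtxn, _query: &Query) -> Vec<u32> {
        vec![0, 1, 2] // pretend these uids match the query
    }

    fn get_task(&self, _rtxn: &Rtxn, uid: u32) -> Task {
        Task { uid }
    }

    /// Steps (1) and (3) share one read transaction, so a task deletion
    /// committed in between cannot turn step (3) into CorruptedTaskQueue.
    fn get_tasks_from_authorized_indexes(&self, query: &Query) -> Vec<Task> {
        let rtxn = self.read_txn();
        self.get_task_ids_from_authorized_indexes(&rtxn, query)
            .into_iter()
            .map(|uid| self.get_task(&rtxn, uid))
            .collect()
    }
}
```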



Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-10-27 14:28:04 +00:00
8152ab5dfc Revert change in initialisation of TempDir for index scheduler tests 2022-10-27 16:26:17 +02:00
8dd7942656 Cargo fmt 2022-10-27 16:24:09 +02:00
2c31d7c50a Apply review suggestions 2022-10-27 16:24:08 +02:00
b76f0ace26 Merge #2993
2993: Reconsider the Windows tests r=irevoire a=Kerollmops

This PR removes the `ignore` cfg on top of a lot of our tests. Now that we reworked the index scheduler we can make them pass again!

Fixes #2038, fixes #1966.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 13:41:04 +00:00
5b535a82ea Merge #2991
2991: Update version for the next release (v0.30.0) in Cargo.toml files r=Kerollmops a=meili-bot

⚠️ This PR is automatically generated. Check the new version is the expected one before merging.

Co-authored-by: curquiza <curquiza@users.noreply.github.com>
2022-10-27 13:12:31 +00:00
e67673bd12 Ignore the dumps v1 test on Windows 2022-10-27 14:34:45 +02:00
44d6f3e7a0 Reconsider the Windows tests 2022-10-27 13:50:05 +02:00
68f80dbacf Update version for the next release (v0.30.0) in Cargo.toml files 2022-10-27 11:35:44 +00:00
08db774699 Merge #2990
2990: isolate the search in another task r=Kerollmops a=irevoire

In case there is a failure on milli's side, this should avoid blocking the tokio main thread
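
A hedged sketch of the idea; the real handler and error types differ:

```rust
/// Run the milli search on a blocking task: if it panics, the JoinError is
/// turned into an error instead of taking down a tokio worker thread.
async fn isolated_search() -> Result<String, String> {
    tokio::task::spawn_blocking(|| {
        // placeholder for the actual milli search call
        Ok::<String, String>("search results".to_string())
    })
    .await
    .map_err(|join_error| format!("search task panicked: {join_error}"))?
}
```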


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 11:29:22 +00:00
4d9e9f4a9d isolate the search in another task
In case there is a failure on milli's side, this should avoid blocking the tokio main thread
2022-10-27 13:12:42 +02:00
4f4fc20acf Make clippy happy 2022-10-27 13:00:30 +02:00
78ffa00f98 Move index swap error handling from meilisearch-http to index-scheduler
And make index_not_found error asynchronous, since we can't know
whether the index will exist by the time the index swap task is
processed.

Improve the index-swap test to verify that future tasks are not swapped
and to test the new error messages that were introduced.
2022-10-27 11:45:38 +02:00
7b93ba40bd Reimplement task queries to account for special index swap rules 2022-10-27 11:44:51 +02:00
b44cc62320 Merge #2763
2763: Index scheduler r=Kerollmops a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/2725

- [x] Durability of the tasks once an answer has been sent to the user.
- [x] Fix the analytics
- [x] Disable the auto-batching system.
- [x] Make sure the task scheduler runs if there are tasks to process.
- [x] Auto-batching of enqueued tasks:
    - [x] Do not batch operations from two different indexes.
    - [x] Document addition.
    - [x] Document updates.
    - [x] Settings.
    - [x] Document deletion.
    - [x] Make sure that we only merge batches with the same index-creation rights (see the sketch after this list):
        - [x] the batch either starts with a `yes`
        - [x] [we only batch `no`s together and stop batching when we encounter a `yes`](https://www.youtube.com/watch?v=O27mdRvR1GY)
        - [x] Unify the logic about `false` and `true` index creation rights.
- [ ] Execute all batch kind:
    - [x] Import dumps at startup time.
    - [x] Export dumps i.e. export the tasks queue.
    - [x] Document addition
    - [x] Document update
    - [x] Document deletion.
    - [x] Clear all documents.
    - [x] Update the settings of an index.
    - [ ] Merge multiple settings into a single one.
    - [x] Index update e.g. Create an Index, change an index primary key, delete an index.
    - [x] Cancel enqueued or processing tasks (with filters) (don't count tasks from forbidden indexes) (can't cancel a task with a higher or equal task_id than your own).
    - [x] Delete processed tasks from the task store (with filters) (don't count tasks from forbidden indexes) (can't flush a task with a higher or equal task_id than your own)
    - [x] Document addition + settings
    - [x] Document addition + settings + clear all documents
    - [x] anything + index deletion
    - [x] Snapshot
       - [x] Make the `SnapshotCreation` task visible.
       - [x] Snapshot tasks are scheduled by a detached thread.
       - [x] Only include update files that are useful.
    - [x] Check that statuses and details are correctly set. (i.e., if you enqueue a `documentAddition`, is the `documentReceived` set correctly?)
- [x] Prioritize and reorder tasks i.e. Index deletion, Delete all the documents.
- [x] Always accept new tasks without blocking.
- [x] Fairly share the loads over the different indexes e.g. Always process the index queue with the lowest id.
- [x] Easily testable.
- [x] Well tested i.e. tasks reordering, tasks prioritizing, use atomic barriers to block the tasks for tests.
- [x] Dump
    - [x] Serialize the uuid as string in the keys
    - [x] Create a dump crate with getters and setters
    - [x] Serialize the API key in the dump task
    - [x] Get the instance-uuid in the dump task
- [x] List and filter tasks:
    - [x] Paginate the tasks.
    - [x] Filter by index name.
    - [x] Filter on the status, the enqueued, processing, and finished tasks.
    - [x] Filter on the type of task.
    - [x] Check that it works in `meilisearch-http`.
- [x] Think about [the index wrapper](2c4c14caa8/index/src/updates.rs (L269)) and probably move or remove it.
- [x] Reduce the amount of copy/paste for the batched operations by creating a sub-enum for the `Batch` enum.
- [x] Move the `IndexScheduler` in the lib.rs file.
- [x] Think about the `MilliError` type and probably remove it.
- [x] Remove the `index` crate entirely
- [x] Remove the `Kind` type from the `TaskView` and introduce another type, remove the `<Kind as FromStr>`.
- [x] Once the point above is done; remove the unreachable variant from the autobatchingkind
- [x] Rename the `Settings` task `Kind` to `SettingsUpdate`
- [x] Rename the `DumpExport` task `Kind` to `DumpCreation`
- [x] Patch the error message when deserializing a `Kind` and `Status`.
- [x] Check the version file when starting.
- [x] Copy the version file when creating snapshots.

---------

Once everything above is done;
- [ ] Check what happens with the update files, i.e. when they are deleted.
    - [ ] When a TaskDeletion occurs
    - [ ] When a TaskCancelation occurs
    - [ ] When a task is finished
    - [ ] When a task fails
- [ ] When importing a dump forward the date to milli
- [ ] Add tests for the snapshots.
- [ ] Look at all the places where we put _TODOs_.
- [ ] Rename a bunch of things, see https://github.com/meilisearch/meilisearch/pull/2917
- [ ] Ensure that when compiling meilisearch-http with `no-default-features` it doesn’t pull in lindera etc.
- [ ] Run a bunch of operations in a `tokio::spawn_blocking`
    - [ ] The search requests
- [ ] Issue to create once this is merged:
    - [ ] Real-time progress status, e.g. WebSocket events (optional).
    - [ ] Implement an `Uuid` codec instead of using a `Bincode<Uuid>`.
    - [ ] Handle the dump-v1
    - [ ] When importing a dump v1 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v2 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v3 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v4 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v5 we could iterate over the whole task queue to find the creation and last update date
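
As referenced in the index-creation-rights item above, here is a hedged sketch of the unified batching rule; the real autobatcher tracks richer `BatchKind` state but, as a later commit notes, also signals its decision through a `ControlFlow`:

```rust
use std::ops::ControlFlow;

/// Decide whether a new task can join the current batch, based only on the
/// index-creation right (heavily simplified from the real autobatcher).
fn accumulate(batch_allows_creation: bool, task_allows_creation: bool) -> ControlFlow<(), ()> {
    if batch_allows_creation == task_allows_creation {
        ControlFlow::Continue(()) // same rights: keep batching
    } else {
        ControlFlow::Break(()) // rights differ: close the batch here
    }
}
```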

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-10-27 09:38:00 +00:00
fae17ed590 Enable the authentication tests on Windows again 2022-10-27 11:35:24 +02:00
7e355958e0 Await the last insert task 2022-10-27 11:35:24 +02:00
8bc602a7dd makes clippy happy 2022-10-27 11:35:23 +02:00
6c2ecec4d0 fix the return of the task cancelation and task deletion 2022-10-27 11:35:23 +02:00
6280bd51a9 actually fix the test and the swap_indexes name resolution 2022-10-27 11:35:23 +02:00
54d0aff4cf ignore a strange test that works on my machine but not on the ci 2022-10-27 11:35:23 +02:00
b804cba4ca try to debug the ci 2022-10-27 11:35:23 +02:00
3cf8aaa4d0 reformat 2022-10-27 11:35:23 +02:00
225405bb0d ignore the dump tests 2022-10-27 11:35:22 +02:00
0dd8e00929 Reapply #2601 2022-10-27 11:35:22 +02:00
a99ddf85f7 fix clippy once again 2022-10-27 11:35:22 +02:00
7307c4dacd fix clippy 2022-10-27 11:35:22 +02:00
6aa816d96a use meili-snap in the dump 2022-10-27 11:35:22 +02:00
866a3676eb reupload the test fix for the dump 2022-10-27 11:35:22 +02:00
fa84eae0f1 Insta review and fix insta snapshots 2022-10-27 11:35:21 +02:00
33996071ea fix clippy from the CI 2022-10-27 11:35:21 +02:00
953055e3d7 bump milli 2022-10-27 11:35:21 +02:00
7c908fadcf Remove a useless clippy silence 2022-10-27 11:35:21 +02:00
07d39776f9 fix clippy _once again_ 2022-10-27 11:35:21 +02:00
3979c9f02b fix all the dump snapshots 2022-10-27 11:35:20 +02:00
8ec3681cf8 fix clippy part1 2022-10-27 11:35:20 +02:00
2ba5e3b519 Clean up some code 2022-10-27 11:35:20 +02:00
861a07792e Remove useless task module 2022-10-27 11:35:20 +02:00
ee6597da60 Fix all the tests 2022-10-27 11:35:20 +02:00
314b89ca30 Fix insta snapshots 2022-10-27 11:35:20 +02:00
8ebb49d1b1 bump milli 2022-10-27 11:35:19 +02:00
0ba6253eed fix the sort 2022-10-27 11:35:19 +02:00
e8cd571820 try to convert the OsStr to a rust string to fix the sort 2022-10-27 11:35:19 +02:00
4f955e68b3 Apply suggestions from code review 2022-10-27 11:35:19 +02:00
6c98752922 move the commit before the insertion in the map 2022-10-27 11:35:19 +02:00
4e1b6b514e update reviewer change 2022-10-27 11:35:19 +02:00
64e55b4db9 fix the index creation. When an index is being created we insert it in the index_map straight away to prevent someone else from trying to re-open it. The definitive fix should be made on milli's side 2022-10-27 11:35:18 +02:00
9b43528bbb Update test after fixing bug in index swap 2022-10-27 11:35:18 +02:00
e641d08846 Cargo fmt 2022-10-27 11:35:18 +02:00
36c9f05998 Revert "Display more than one indexUid in a task view if necessary"
This reverts commit 1f2e253bb6.
2022-10-27 11:35:18 +02:00
3b158bb966 Return invalid API key error in /swap-indexes 2022-10-27 11:35:18 +02:00
08b5123380 Display more than one indexUid in a task view if necessary 2022-10-27 11:35:17 +02:00
1f75caae88 Fix a few index swap bugs.
1. Details of the indexSwap task
2. Query tasks with type=indexUid
3. Synchronous error message for multiple index not found
2022-10-27 11:35:17 +02:00
a16604af80 fix all the tests 2022-10-27 11:35:17 +02:00
1d014a538e comment out a test that makes the CI crash 2022-10-27 11:35:17 +02:00
29bdcb880c update the snapshot 2022-10-27 11:35:17 +02:00
a3fc0d3bd9 Fix the last regression 2022-10-27 11:35:17 +02:00
2de8a0711a Cargo insta test/review 2022-10-27 11:35:16 +02:00
2f577b6fcd Patch the IndexScheduler in meilisearch-http to use the options struct 2022-10-27 11:35:16 +02:00
eccbdb74cf remove useless print
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:35:16 +02:00
033794d209 add tests for the task deletion and task cancelation 2022-10-27 11:35:16 +02:00
a85d5b4981 test the details of all tasks type 2022-10-27 11:35:16 +02:00
71b50853dc Introduce an options struct to create the IndexScheduler 2022-10-27 11:35:16 +02:00
7074872a78 cargo insta accept 2022-10-27 11:35:15 +02:00
035e8eeff5 Clean-up some TODOs 2022-10-27 11:35:15 +02:00
e35fe33712 Fix some bugs with files 2022-10-27 11:35:15 +02:00
4736e00253 Handle the CLI options related to snapshots 2022-10-27 11:35:15 +02:00
942b7c338b Compress the snapshot in a tarball 2022-10-27 11:35:15 +02:00
4cafc63561 Reintroduce the versioning functions 2022-10-27 11:35:14 +02:00
89e127e4f4 Declare the auth path in the index scheduler 2022-10-27 11:35:14 +02:00
eec43ec953 Implement a first version of the snapshots 2022-10-27 11:35:14 +02:00
c063f154fb Add the snapshots directory path to the IndexScheduler 2022-10-27 11:35:14 +02:00
e0548e42e7 Rename the Snapshot task into SnapshotCreation 2022-10-27 11:35:14 +02:00
4d43a9f5b1 Rename the index-scheduler module into insta_snapshot 2022-10-27 11:35:14 +02:00
901c405919 Fix the inta-snapshot typos in the tests 2022-10-27 11:35:13 +02:00
c641888a23 Patch the delete and cancel tasks routes 2022-10-27 11:35:13 +02:00
6db90ba6cc Make sure that we don't delete or cancel future tasks
This should already have been the case before, but there is no harm
in adding another check.
2022-10-27 11:35:13 +02:00
e0821ad4b0 remove a useless dbg 2022-10-27 11:35:13 +02:00
61f0940f8c fix an issue with the dates 2022-10-27 11:35:13 +02:00
241300d2d8 add more naive tests around the document addition + remove the old unused snapshot files 2022-10-27 11:35:13 +02:00
570b2d1167 add some naive document addition tests 2022-10-27 11:35:12 +02:00
d92425658e Add index scheduler tests for task cancelation 2022-10-27 11:35:12 +02:00
12669bf07c rename received_documents_ids to matched_documents 2022-10-27 11:35:12 +02:00
16fac10074 Fix crash when batching an index swap task containing 0 swaps 2022-10-27 11:35:12 +02:00
0aca5e84b9 rename received_document_ids to matched_documents in the DocumentDeletion task type (reimplementation of #2826) 2022-10-27 11:35:12 +02:00
7ed3f00b1e reformat 2022-10-27 11:35:12 +02:00
9c00b159ba fix clippy 2022-10-27 11:35:11 +02:00
7e52f1effb remove a lot of unnecessary clones and refs 2022-10-27 11:35:11 +02:00
4d25c159e6 Apply code review suggestions 2022-10-27 11:35:11 +02:00
e9cd6cbbee Revert implementation of get_status to query only the database 2022-10-27 11:35:11 +02:00
424202d773 Pause the index scheduler for one second when a fatal error occurs 2022-10-27 11:35:11 +02:00
4a35eb9849 Fix (hopefully) queries that include processing tasks 2022-10-27 11:35:11 +02:00
493a8cff31 Adjust task details correctly following index swap 2022-10-27 11:35:10 +02:00
4de445d386 Start testing unexpected errors and panics in index scheduler 2022-10-27 11:35:10 +02:00
e3848b5f28 Add assert method to verify validity of index scheduler state 2022-10-27 11:35:10 +02:00
ecf4e43b3d rename the dumpExport to dumpCreation 2022-10-27 11:35:10 +02:00
3ea489421e move the error types to meilisearch-http 2022-10-27 11:35:10 +02:00
2808be9d45 Fix the /swap-indexes route API
1. payload
2. error messages
3. auth errors
2022-10-27 11:35:10 +02:00
92c41f0ef6 meili-snap: get the test name from the name of the function 2022-10-27 11:35:09 +02:00
1214a68a41 Only store full snapshots if env variable is set to true 2022-10-27 11:35:09 +02:00
8a23e707c1 fix the task view and forward the task db size 2022-10-27 11:35:09 +02:00
eb4bdde432 fix clippy 2022-10-27 11:35:09 +02:00
735a5da257 reformat 2022-10-27 11:35:09 +02:00
1d04ce611d remove ununsed function 2022-10-27 11:35:08 +02:00
e9055f5572 fix clippy 2022-10-27 11:35:08 +02:00
874499a2d2 fix all the snapshots 2022-10-27 11:35:08 +02:00
ecdcbf350f update all the snapshots with the new kind name 2022-10-27 11:35:08 +02:00
de7d4200d8 fix the snapshot tests of the dump after renaming a bunch of kinds 2022-10-27 11:35:08 +02:00
2c0fde4a1a ignore the snapshot test 2022-10-27 11:35:08 +02:00
e2ce8f2d32 remove the public DocumentClear variant 2022-10-27 11:35:07 +02:00
c8ee453b6c fix the autobatched document deletion 2022-10-27 11:35:07 +02:00
f6963f9662 ensure the indexUid is valid in most cases 2022-10-27 11:35:07 +02:00
a8de5368e5 fix the index creation in case an index already exists 2022-10-27 11:35:07 +02:00
b3265a8e1f ensure the index_uid is valid when creating an index 2022-10-27 11:35:07 +02:00
9bb2e3c790 fix the failed document addition with a primary key 2022-10-27 11:35:07 +02:00
cb48a02f94 fix the invalid index uid errors 2022-10-27 11:35:06 +02:00
99144b1419 fix most content file error 2022-10-27 11:35:06 +02:00
e107f1b282 fix the payload too large error 2022-10-27 11:35:06 +02:00
1bef5d119d fix the api keys for the tasks route 2022-10-27 11:35:06 +02:00
ca4234b445 fix the deletion of the data.ms in case of failure 2022-10-27 11:35:06 +02:00
8d1408c65e fix the import of the dump v4 & v5 when there is no instance-uid + rename the Kind+KindWithContent+Details variant for the DocumentImport and the Setting 2022-10-27 11:35:05 +02:00
131fe30934 fix the error messages and the index stats 2022-10-27 11:35:05 +02:00
50386921df fix the index creation 2022-10-27 11:35:05 +02:00
32cfac0cfd Sort the TOML dependencies 2022-10-27 11:35:05 +02:00
80b2e70ee7 Introduce a rustfmt file 2022-10-27 11:35:05 +02:00
52e858a588 Reapply #2890 2022-10-27 11:34:18 +02:00
8b0427f0c4 Reapply #2839 2022-10-27 11:34:18 +02:00
fce0996e17 Reapply #2819 2022-10-27 11:34:18 +02:00
2a7ef3b352 Reapply #2830 2022-10-27 11:34:18 +02:00
ce4dcf47f0 Reapply #2773 2022-10-27 11:34:18 +02:00
ca8c922f35 Reapply #2727 2022-10-27 11:34:18 +02:00
4c42130ec7 Remove once for all the meilisearch-lib crate 2022-10-27 11:34:17 +02:00
788262e588 Fix final compilation 2022-10-27 11:34:17 +02:00
61edcd585a Fix the new config file with the index scheduler 2022-10-27 11:34:17 +02:00
72ec4ce96b Fix allow_index_creation useless field 2022-10-27 11:34:17 +02:00
75857bf476 Fix the insta tests 2022-10-27 11:34:17 +02:00
0bbf80186f push the snapshot files 2022-10-27 11:34:17 +02:00
b6a0abea9f fix the index deletion when the index doesn’t exist but would be created by one of the autobatched tasks 2022-10-27 11:34:16 +02:00
5303bbffab fix the last rule about merging the allow_index_creation 2022-10-27 11:34:16 +02:00
fc944c39a5 simplify the code A LOT and create fewer false positives 2022-10-27 11:34:16 +02:00
a1d4cc673d add a whole new batch of tests around the index already exists / allow_index_creation 2022-10-27 11:34:16 +02:00
28d9f2c041 fix all the snapshot tests 2022-10-27 11:34:16 +02:00
d9218578e3 it probably works but it's also horrendous 2022-10-27 11:34:16 +02:00
d20b5ddda0 Don't return an error when swapping 0 indexes 2022-10-27 11:34:16 +02:00
11fee30f47 Apply review suggestions and stop using rtxn.commit 2022-10-27 11:34:15 +02:00
17cd2a4aa0 Implement POST /indexes-swap 2022-10-27 11:34:15 +02:00
28bd8b6c6b Remove key from index_tasks database when the value is empty 2022-10-27 11:34:15 +02:00
169f386418 Add some documentation to the index scheduler 2022-10-27 11:34:15 +02:00
66c3b93ef1 fix all the snapshot tests in the dump 2022-10-27 11:34:15 +02:00
bdb17954d2 Fix bug where assert used != instead of ==
And update snapshot tests.
2022-10-27 11:34:15 +02:00
23b01a58df cargo fmt 2022-10-27 11:34:14 +02:00
ec3391808d Fix date parsing for task queries
Use rfc3339 or YYYY-MM-DD.

Add a day to the parsed date when it is an excluded lower bound
and the YYYY-MM-DD was used.

Also the Query type does not need to be serialisable anymore
2022-10-27 11:34:14 +02:00
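
A hedged sketch of the rule in the commit above, using the `time` crate; the function name and the boolean flag are illustrative:

```rust
use time::format_description::well_known::Rfc3339;
use time::macros::format_description;
use time::{Date, Duration, OffsetDateTime, Time};

/// Accept RFC 3339 first, then plain YYYY-MM-DD. When the plain date is an
/// excluded lower bound, shift it one day forward so `afterEnqueuedAt=2022-10-27`
/// really excludes the whole day.
fn parse_task_date(input: &str, excluded_lower_bound: bool) -> Option<OffsetDateTime> {
    if let Ok(datetime) = OffsetDateTime::parse(input, &Rfc3339) {
        return Some(datetime);
    }
    let format = format_description!("[year]-[month]-[day]");
    let date = Date::parse(input, format).ok()?;
    let mut datetime = date.with_time(Time::MIDNIGHT).assume_utc();
    if excluded_lower_bound {
        datetime += Duration::days(1);
    }
    Some(datetime)
}
```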
10a547df4f Apply suggestions from code review
Co-authored-by: Clément Renault <clement@meilisearch.com>

Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>

Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>

Apply code review suggestion

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:34:14 +02:00
22cf0559fe Implement task date filters
before/after enqueued/started/finished at
2022-10-27 11:34:14 +02:00
5765883600 fix the auto-generated details 2022-10-27 11:34:14 +02:00
cff003c928 remove the unused variants from the autobatcher 2022-10-27 11:34:14 +02:00
ab8f1c2865 fix a bunch of snapshot tests 2022-10-27 11:34:13 +02:00
6730e190db fix the dumps tests since we added information to the DumpTask 2022-10-27 11:34:13 +02:00
50b8b9df6a Delete the tasks content file once the transaction has been successfully committed 2022-10-27 11:34:13 +02:00
ec0a5a9f01 Remove the useless r#union thing 2022-10-27 11:34:13 +02:00
6460b78e08 Clean up the delete_persisted_task_data function 2022-10-27 11:34:13 +02:00
d21651c968 Throw the error if we can't register the tasks in the store 2022-10-27 11:34:13 +02:00
6e904d0997 Introduce a ProcessingTasks constructor 2022-10-27 11:34:12 +02:00
b373d19831 Extract the must_stop flag out of the RwLock 2022-10-27 11:34:12 +02:00
3cbfacb616 Prefer using a u64 instead of a usize in some places 2022-10-27 11:34:12 +02:00
79c4275bfc Delete the persisted data when we cancel a task 2022-10-27 11:34:12 +02:00
f9c8fe5eaa Use a tokio block_in_place method for potentially blocking tasks 2022-10-27 11:34:12 +02:00
c2ec4a089b Put the original URL query in the tasks details 2022-10-27 11:34:12 +02:00
751e9bac3b Add the tasks cancel route to cancel tasks 2022-10-27 11:34:11 +02:00
290945e258 Update the canceledBy and finishedAt fields 2022-10-27 11:34:11 +02:00
725158b454 Introduce the core algorithm of task cancelation 2022-10-27 11:34:11 +02:00
b2c5bc67b7 Add more enum-iterator related stuff 2022-10-27 11:34:11 +02:00
591527a99d Prefer using TaskDeletion in the dumps 2022-10-27 11:34:11 +02:00
1ca9a67c49 Introduce the task cancelation task type 2022-10-27 11:34:11 +02:00
f177c97671 Add the canceled task status 2022-10-27 11:34:10 +02:00
703ba7a1fb Introduce the ProcessingTasks struct 2022-10-27 11:34:10 +02:00
c9523c6f39 Use the indexation-abortion milli's branch 2022-10-27 11:34:10 +02:00
e645c4c4d6 Remove the meilisearch-auth milli dependency 2022-10-27 11:34:10 +02:00
ea60d35c71 Delete a task's persisted data when appropriate 2022-10-27 11:34:10 +02:00
f7e546eea3 make the tests compile again 2022-10-27 11:34:10 +02:00
b45c430165 fix the analytics 2022-10-27 11:34:10 +02:00
634eb52926 extract the create_app function for the tests 2022-10-27 11:34:09 +02:00
d1a6fb2971 bump enum-iter and fix a bunch of error messages 2022-10-27 11:34:09 +02:00
bea81ae37b fix meilisearch-http 2022-10-27 11:34:09 +02:00
9e85f050b2 fix the tests 2022-10-27 11:34:09 +02:00
2f748480a1 share the rtxn between the access to the tasks and to the indexes 2022-10-27 11:34:09 +02:00
6bd6321226 dump the content of the dump tasks instead of recreating it at import time with the wrong API keys 2022-10-27 11:34:08 +02:00
655705eb2b remove useless todo 2022-10-27 11:34:08 +02:00
9fe24fbff2 get rid of the useless Seek before creating a grenad reader 2022-10-27 11:34:08 +02:00
83f3c5ec57 flush the dump-writer only once everything has been inserted 2022-10-27 11:34:08 +02:00
78ce29f461 apply most style comments of the review 2022-10-27 11:34:08 +02:00
dd70daaae3 Update dump/src/error.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:34:08 +02:00
d0e91555d1 rebase on index-scheduler 2022-10-27 11:34:08 +02:00
e0221fc0a3 fix a synchronization bug while importing tasks 2022-10-27 11:34:07 +02:00
a9eeb070b8 fix all the errors code and settings issues when importing a dump v2 2022-10-27 11:34:07 +02:00
3872a1b8d1 fix all the error codes 2022-10-27 11:34:07 +02:00
ba150f2127 commit after creating an index 2022-10-27 11:34:07 +02:00
554600dfd8 fix the deletion of the data.ms in case of errors 2022-10-27 11:34:07 +02:00
e9295c03ce the index-scheduler needs to wake-up after importing a dump 2022-10-27 11:34:06 +02:00
955d3339f0 remove the dbg 2022-10-27 11:34:06 +02:00
d481669b7e fix the content_file import 2022-10-27 11:34:06 +02:00
dd506e5d87 stop dumping the current dumping task as enqueued so it's not looping forever 2022-10-27 11:34:06 +02:00
208c785697 add a bufwriter on the documents 2022-10-27 11:34:06 +02:00
d976e680c5 first mostly working version 2022-10-27 11:34:06 +02:00
c051166bcc update the API a little bit 2022-10-27 11:34:05 +02:00
72a906ae75 fix the tests 2022-10-27 11:34:05 +02:00
b7f9c94f4a write the dump export 2022-10-27 11:34:05 +02:00
8954b1bd1d Fix number of deleted tasks details after duplicate task deletion 2022-10-27 11:34:05 +02:00
8defad6c38 Add task deletion tests where the same task is deleted twice 2022-10-27 11:34:05 +02:00
f32b973945 Return an error when calling DELETE /tasks with an empty query 2022-10-27 11:34:04 +02:00
fbd2be2ec8 Apply suggested changes from PR review 2022-10-27 11:34:04 +02:00
441417447e Avoid creating two read txn at the same time 2022-10-27 11:34:04 +02:00
8c6aeaada5 Update snapshot tests following git rebase that fixes a bug 2022-10-27 11:34:04 +02:00
8bb0fcd144 Finish first draft of the DELETE /tasks route 2022-10-27 11:34:04 +02:00
9522b75454 Continue implementation of task deletion
1. Matched tasks are a roaring bitmap
2. Start implementation in meilisearch-http
3. Snapshots use meili-snap
4. Rename to TaskDeletion
2022-10-27 11:34:03 +02:00
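
A tiny sketch of point 1 from the commit above, with made-up task uids:

```rust
use roaring::RoaringBitmap;

/// The uids of the tasks matched by a deletion query, kept as a roaring
/// bitmap so unions/intersections with status or kind sets stay cheap.
fn matched_tasks() -> RoaringBitmap {
    let mut matched = RoaringBitmap::new();
    matched.insert(2);
    matched.extend(10..20); // a contiguous run of task uids
    matched
}
```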
e4d461ecba Make sure that we do not batch tasks from different indexes 2022-10-27 11:34:03 +02:00
b029369653 Add a test to check different indexes autobatching 2022-10-27 11:34:03 +02:00
408d00136c Extract index creation rights and simplify the autobatcher rules 2022-10-27 11:34:03 +02:00
2c24c7d403 Fix invalid import of tasks types 2022-10-27 11:34:03 +02:00
7034803712 move the API key in meilisearch_types 2022-10-27 11:34:02 +02:00
c192146fbe remove an unused file 2022-10-27 11:34:02 +02:00
b6c84e53ba uncomment a task serialization test 2022-10-27 11:34:02 +02:00
2f1eb78b1d refactor the Task a little bit 2022-10-27 11:34:02 +02:00
510ce9fc51 start moving a lot of task types to meilisearch_types 2022-10-27 11:34:01 +02:00
fa4c1de019 store md5 instead of the whole snapshots 2022-10-27 11:34:01 +02:00
3e4337c91f Add meili-snap crate to make writing snapshot tests easier 2022-10-27 11:34:01 +02:00
0af00f6b32 fix all the import and comment most of the dump v6 2022-10-27 11:34:01 +02:00
141a1c9464 push the document_format and settings I forgot in the previous PR 2022-10-27 11:34:00 +02:00
667c282e19 get rid of the index crate + the document_types crate 2022-10-27 11:34:00 +02:00
9a74ea0943 Fix compiler errors related to the autobatching option of the index scheduler 2022-10-27 11:34:00 +02:00
eabac9676b Fix typo and remove useless code in tests 2022-10-27 11:34:00 +02:00
ab4e649221 Apply suggestions from code review
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-10-27 11:34:00 +02:00
568199fc0d Add more task deletion tests 2022-10-27 11:33:59 +02:00
13a72f8757 Use more complete snapshot tests for the index scheduler 2022-10-27 11:33:59 +02:00
4c55c30027 Add a DetailsView type and improve index scheduler snapshots
The DetailsView type is necessary because serde incorrectly
deserialises the `Details` type, so the database fails to correctly
decode Tasks
2022-10-27 11:33:59 +02:00
dc81992eb2 Implement TaskDeletion in the index scheduler 2022-10-27 11:33:59 +02:00
fe84f2648b Allow a user to disable the auto batching system 2022-10-27 11:33:59 +02:00
e2a766acb5 Add a test to check that it works without autobatching 2022-10-27 11:33:58 +02:00
db9d1b18ca Remove the IndexScheduler::notify method 2022-10-27 11:33:58 +02:00
19c6f8303f Make sure that the index-scheduler tick loop is rerun after processing 2022-10-27 11:33:58 +02:00
b311eb3bed Add a test that verifies that sending multiple tasks works 2022-10-27 11:33:58 +02:00
f026ac3115 remove unused files 2022-10-27 11:33:58 +02:00
f176382b34 fix the tests 2022-10-27 11:33:57 +02:00
4bd9e4d723 write a bunch of tests that goes through the whole compat layers 2022-10-27 11:33:57 +02:00
c6f4fb5f7d remove the warnings 2022-10-27 11:33:57 +02:00
2ae0806773 rewrite the update file API 2022-10-27 11:33:57 +02:00
7579a363ab finish the dump reader API, the dump Writer API now needs to be updated 2022-10-27 11:33:57 +02:00
0284764b5e start dumping the update files to a known format 2022-10-27 11:33:56 +02:00
9117fde712 fix the compat between v3 and v4 2022-10-27 11:33:56 +02:00
f622ef9836 remove the unused snapshot files 2022-10-27 11:33:56 +02:00
dc0f307d61 remove all warnings 2022-10-27 11:33:56 +02:00
06fadb3004 write the compat layer from v2 to v3 2022-10-27 11:33:55 +02:00
6107540ad4 remove old compat files 2022-10-27 11:33:55 +02:00
7e18f92635 write the dump v2 import 2022-10-27 11:33:55 +02:00
43496b97bd make the open function public 2022-10-27 11:33:55 +02:00
6f327a00c7 fix some warnings 2022-10-27 11:33:55 +02:00
58ef80a2a7 rebase on main 2022-10-27 11:33:54 +02:00
22ffbf3676 write and test the compat layer from v3 to v4 2022-10-27 11:33:54 +02:00
089106a970 write and test the dump v3 import 2022-10-27 11:33:54 +02:00
026f6fb06a fix the test once again 2022-10-27 11:33:54 +02:00
efe0a5f422 finish the test for the compatibility between v4 and v5 2022-10-27 11:33:53 +02:00
47e0288747 rewrite the compat API to something more generic 2022-10-27 11:33:53 +02:00
2f47443458 rename a few things for consistency 2022-10-27 11:33:53 +02:00
a8128678a4 implement the dump v4 import 2022-10-27 11:33:53 +02:00
c50b44039e add the compat layer between v5 and v6 2022-10-27 11:33:53 +02:00
6dcc5851b5 get rid of the trait in most places 2022-10-27 11:33:52 +02:00
0972587cfc start writing the compat layer between v5 and v6 2022-10-27 11:33:52 +02:00
afd5fe0783 test the dump v5 2022-10-27 11:33:52 +02:00
1473a71e33 write the v5 dump import 2022-10-27 11:33:52 +02:00
101f55ce8b introduce the index metadata 2022-10-27 11:33:52 +02:00
e845cc2b6f fix the tests 2022-10-27 11:33:51 +02:00
7bd6f63001 implement the dump reader v6 2022-10-27 11:33:51 +02:00
699ae1b190 start implementing a skeleton of the v1 dump reader 2022-10-27 11:33:51 +02:00
f041d474a5 move the DumpWriter and Error to their own module 2022-10-27 11:33:51 +02:00
ece6c3f6e7 fix the dump export 2022-10-27 11:33:51 +02:00
87a6a337aa write a dump exporter 2022-10-27 11:33:51 +02:00
123f47dbc4 Create the index only if the task has the rights to do so 2022-10-27 11:33:50 +02:00
068a4b2884 Correctly batch tasks with different index creation rights 2022-10-27 11:33:50 +02:00
87212cfd20 Use a ControlFlow in the autobatcher function 2022-10-27 11:33:50 +02:00
f1b1cfdbcc IndexDeletion operation have ClearAll details 2022-10-27 11:33:50 +02:00
a083c9e452 Only mark the first clear document with the amount of cleared documents 2022-10-27 11:33:50 +02:00
b24b13b036 Let the tick function set the Failed status itself 2022-10-27 11:33:50 +02:00
566c15fb74 Fill an IndexDeletion task with the number of documents removed 2022-10-27 11:33:49 +02:00
6b3b05fb73 Panic if we encounter a wrong KindWithContent type 2022-10-27 11:33:49 +02:00
36e5efde0d Update the tasks statuses 2022-10-27 11:33:49 +02:00
2fbdd104b8 Implement the IndexDeletion batch operation 2022-10-27 11:33:49 +02:00
da363a92ac Implement the IndexUpdate batch operation 2022-10-27 11:33:49 +02:00
0543cba6eb Implement the IndexCreate batch operation 2022-10-27 11:33:48 +02:00
cf6084151b Make sure that meilisearch-http works without index wrapper 2022-10-27 11:33:48 +02:00
c70f375669 Implement ErrorCode on the heed Error 2022-10-27 11:33:48 +02:00
91e13c2824 Implement ErrorCode on the milli::Error type 2022-10-27 11:33:48 +02:00
d76634a36c Remove the Index wrapper and use milli::Index directly 2022-10-27 11:33:48 +02:00
9e8242c57d Remove the IndexRename operation 2022-10-27 11:33:48 +02:00
5fa214abb1 Move the IndexScheduler to the root of the index-scheduler crate 2022-10-27 11:33:47 +02:00
9a9e98fb77 Add a TODO about the index creation 2022-10-27 11:33:47 +02:00
5d21c790ef Make clippy happy 2022-10-27 11:33:47 +02:00
31de33d5ee Implement a recursive indexation for the index-related operations 2022-10-27 11:33:47 +02:00
3b343a930e Fix meilisearch-http to use the new DocumentImport batch operation 2022-10-27 11:33:47 +02:00
07286fcc79 Implement the SettingsAndDocumentImport batch operation 2022-10-27 11:33:47 +02:00
f68906f5dc Merge both DocumentAddition/Update into one DocumentImport variant 2022-10-27 11:33:46 +02:00
5174c78f87 Implement the DocumentClear batch operation 2022-10-27 11:33:46 +02:00
025bb5f616 Implement the DocumentClearAndSettings batch operation 2022-10-27 11:33:46 +02:00
41ec737e73 Implement the Settings batch operation 2022-10-27 11:33:46 +02:00
7b4a913704 Implement the DocumentUpdate batch operation 2022-10-27 11:33:46 +02:00
a6a1043abb Implement the DocumentDeletion batch operation 2022-10-27 11:33:46 +02:00
7a0f17c912 remove an old unworking part of the batch execution 2022-10-27 11:33:45 +02:00
c2899fe9b2 bring back the IndexMeta and IndexStats in meilisearch-http 2022-10-27 11:33:45 +02:00
c759fd6924 fix import bug 2022-10-27 11:33:45 +02:00
fba9aa214a remove the create_app macro 2022-10-27 11:33:45 +02:00
2c8f1a43e9 get rid of meilisearch-lib 2022-10-27 11:33:44 +02:00
0ba1c46e19 fix a deadlock 2022-10-27 11:33:44 +02:00
22bfb5a7a0 remove Clone from the IndexScheduler 2022-10-27 11:33:44 +02:00
d8d3499aec remove a bunch of comments 2022-10-27 11:33:44 +02:00
64e132ce53 move as many fields as possible out of the IndexScheduler 2022-10-27 11:33:44 +02:00
9e1f38ec7c move the test function in the test module 2022-10-27 11:33:44 +02:00
6f4dcc0c38 start implementing some logic to test the internal states of the scheduler 2022-10-27 11:33:43 +02:00
84cd5cef0b fix the tests 2022-10-27 11:33:43 +02:00
ae86a8ccd6 slightly refactor the autobatching tests 2022-10-27 11:33:43 +02:00
ce2dfecc03 connect the new scheduler to meilisearch-http officially.
I can index documents and do search
2022-10-27 11:33:43 +02:00
cb4feabca2 implements the get_tasks 2022-10-27 11:33:43 +02:00
19154e48fe fix all compilation errors 2022-10-27 11:33:42 +02:00
8d51c1f389 wip integrating the scheduler in meilisearch-http 2022-10-27 11:33:42 +02:00
250410495c start integrating the index-scheduler in meilisearch-lib 2022-10-27 11:33:42 +02:00
8f0fd35358 add insta::json for later 2022-10-27 11:33:42 +02:00
8770e07397 I can index documents without meilisearch 2022-10-27 11:33:42 +02:00
edd8344dc9 wip 2022-10-27 11:33:42 +02:00
e547552702 create the end Batch type for all Index* operations 2022-10-27 11:33:41 +02:00
925971809a create the end Batch type for all Document* operation 2022-10-27 11:33:41 +02:00
1ea9c0b4c0 write most of the run loop 2022-10-27 11:33:41 +02:00
4846a7c501 use faux in the file-store 2022-10-27 11:33:41 +02:00
9ff0fe952e split the run function in two 2022-10-27 11:33:41 +02:00
a8b18b2c96 fix the register test 2022-10-27 11:33:40 +02:00
5436b996ab reduce the size of the snapshots 2022-10-27 11:33:40 +02:00
7d0c8a3379 test the register tasks 2022-10-27 11:33:40 +02:00
fc098022c7 start integrating the index-scheduler in the meilisearch codebase 2022-10-27 11:33:40 +02:00
b816535e33 greatly reduce the number of warnings 2022-10-27 11:33:40 +02:00
38e4ffe73c fix smol typo 2022-10-27 11:33:40 +02:00
366a344474 get rid of the horrendous spinlock in favor of synchronoise 2022-10-27 11:33:39 +02:00
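For context on the commit above, a small sketch assuming synchronoise's `SignalEvent` API (an auto-reset event): the scheduler thread parks instead of spinning, and the registering side wakes it up:

```rust
use std::sync::Arc;
use std::thread;
use synchronoise::SignalEvent;

fn main() {
    // Auto-reset event, initially unsignaled: wait() blocks until signal().
    let wake_up = Arc::new(SignalEvent::auto(false));
    let waker = Arc::clone(&wake_up);

    let scheduler = thread::spawn(move || {
        // Parks the thread; no CPU burned while idle, unlike a spinlock.
        wake_up.wait();
        println!("a task was registered, processing it");
    });

    waker.signal(); // e.g. called right after registering a task
    scheduler.join().unwrap();
}
```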
7b6673dc1d implement the index swap in the index mapper 2022-10-27 11:33:39 +02:00
03aca2e452 move the index mapping logic in another structure 2022-10-27 11:33:39 +02:00
4129783019 migrate the index handling code to a different file + implement the create index 2022-10-27 11:33:39 +02:00
1804416afa reintroduce the uuid mapping for the indexes 2022-10-27 11:33:39 +02:00
c97d51a624 add a bunch of tests 2022-10-27 11:33:39 +02:00
803f2157af split the DocumentAdditionOrUpdate into two tasks: DocumentAddition and DocumentUpdate 2022-10-27 11:33:38 +02:00
b7c5b71a53 starts importing the real tasks 2022-10-27 11:33:38 +02:00
5cc8f96237 get rid of the auto-generated mains 2022-10-27 11:33:38 +02:00
94e29a9f5f extract the index abstraction out of the index-scheduler in its own module 2022-10-27 11:33:38 +02:00
48138c21a9 rename the update-file-store to file-store since it can store any kind of file 2022-10-27 11:33:38 +02:00
76597fc382 import the update_file_store in the index-scheduler 2022-10-27 11:33:37 +02:00
2afb381f95 get rid of nelson 2022-10-27 11:33:37 +02:00
a9844bd4f6 move the update file store to another crate with as little dependencies as possible 2022-10-27 11:33:37 +02:00
a0588d6b94 finishes the global skeleton of the auto-batcher 2022-10-27 11:33:37 +02:00
b3c9b128d9 polish the global structure of the batch creation 2022-10-27 11:33:37 +02:00
448f44f631 move the autobatcher logic to another file 2022-10-27 11:33:36 +02:00
f638774764 add the document format file 2022-10-27 11:33:36 +02:00
516860f342 fix the create_new_batch method 2022-10-27 11:33:36 +02:00
6b9689a1c0 fix the whole batchKind thingy 2022-10-27 11:33:36 +02:00
af0f5d6c0c implements most operations 2022-10-27 11:33:36 +02:00
5a7fcf2688 fix a few typos 2022-10-27 11:33:35 +02:00
30d2b24689 implements the index deletion, creation and swap 2022-10-27 11:33:35 +02:00
72b2e68de4 makes the update getters smoother to use 2022-10-27 11:33:35 +02:00
7879189c6b make the project compile again 2022-10-27 11:33:35 +02:00
46b8ebcab4 fix the file store 2022-10-27 11:33:35 +02:00
fa742f60e8 make the file store entirely synchronous, including the file deletion 2022-10-27 11:33:35 +02:00
a7aa92df5f fix most of the index module 2022-10-27 11:33:34 +02:00
d8b8e04ad1 wip porting the index back in the scheduler 2022-10-27 11:33:34 +02:00
fe330e1be9 add a little bit of documentation 2022-10-27 11:33:34 +02:00
2c4e5ce8be implements the filter query 2022-10-27 11:33:34 +02:00
705af94fd7 add the task to the index db in the register task 2022-10-27 11:33:34 +02:00
ed745591e1 split the scheduler into multiples files 2022-10-27 11:33:34 +02:00
22d24dba56 implement the get_batch method 2022-10-27 11:33:33 +02:00
1a47949063 START THE REWRITE OF THE INDEX SCHEDULER: index & register have been implemented 2022-10-27 11:33:33 +02:00
ab1800551f Merge #2922
2922: Add new error when using /keys without masterkey set r=ManyTheFish a=vishalsodani

# Pull Request

## Related issue
Fixes #2918 


Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?




Co-authored-by: vishalsodani <vishalsodani@rediffmail.com>
2022-10-27 09:13:11 +00:00
689bef7ad2 fmt the code 2022-10-27 14:09:38 +05:30
89c40c83c3 refactor code to avoid cloning 2022-10-27 14:08:29 +05:30
03ba830ab2 uncomment tests 2022-10-27 12:59:28 +05:30
9cf3ff72a3 fix checking of master key as per review comment 2022-10-27 12:56:18 +05:30
25ec51e783 Merge #2601
2601: Ease search result pagination r=Kerollmops a=ManyTheFish

# Summary
This PR is a prototype enhancing search results pagination (#2577)

# Todo

- [x] Update the API to return the number of pages and allow users to directly choose a page instead of computing an offset
- [x] Change computation of `total_pages` in order to have an exact count
  - [x] compute query tree exhaustively
  - [x] compute distinct exhaustively

# Small Documentation

## Default search query

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman" }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":5,
  "hitsPerPage":20,
  "page":1,
  "totalPages":4,
  "totalHits":66
}
```

## Search query with offset parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman", "offset": 0 }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":3,
  "limit":20,
  "offset":0,
  "estimatedTotalHits":66
}
```

## Search query selecting page with page parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman", "page": 2 }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":5,
  "hitsPerPage":20,
  "page":2,
  "totalPages":4,
  "totalHits":66
}
```

# Related

fixes #2577

## In charge of the feature

Core: `@ManyTheFish` 
Docs: `@guimachiavelli` 
Integration: `@bidoubiwa` 


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-10-26 16:10:58 +00:00
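The `totalPages`/`totalHits` examples above imply a simple exact computation. A minimal sketch (not the actual Meilisearch code) of that ceiling division, with a guard against `hitsPerPage == 0`, a corner case addressed by later commits in this range:

```rust
// Ceiling division: 66 hits at 20 per page gives 4 pages, as in the example.
fn total_pages(total_hits: usize, hits_per_page: usize) -> usize {
    if hits_per_page == 0 {
        0 // avoid a division by zero; such a page simply has no hits
    } else {
        (total_hits + hits_per_page - 1) / hits_per_page
    }
}

fn main() {
    assert_eq!(total_pages(66, 20), 4);
    assert_eq!(total_pages(0, 20), 0);
    assert_eq!(total_pages(66, 0), 0);
}
```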
f4021273b8 Add is_finite_pagination method to SearchQuery 2022-10-26 18:08:29 +02:00
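A hedged guess at the shape of that method; the field names are assumptions based on the API shown in #2601:

```rust
#[allow(dead_code)]
struct SearchQuery {
    page: Option<usize>,
    hits_per_page: Option<usize>,
    offset: Option<usize>,
    limit: Option<usize>,
}

impl SearchQuery {
    // The query is "finite" when the client asked in terms of pages
    // rather than offset/limit.
    fn is_finite_pagination(&self) -> bool {
        self.page.or(self.hits_per_page).is_some()
    }
}

fn main() {
    let q = SearchQuery { page: Some(2), hits_per_page: None, offset: None, limit: None };
    assert!(q.is_finite_pagination());
}
```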
f0ecacb58d add implementation for no master key set and fix tests 2022-10-25 22:41:48 +05:30
68c9751d49 Fix clippy 2022-10-25 16:08:07 +02:00
9aef1031ca Merge #2961
2961: Changed error message for config file r=curquiza a=LunarMarathon

# Pull Request

## Related issue
Fixes #2959 

## What does this PR do?
- Changed the error message as required in the issue - changed config to configuration

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: LunarMarathon <lmaytan24@gmail.com>
2022-10-25 11:36:47 +00:00
bc2a161f62 Change err mess for config 2022-10-25 16:16:34 +05:30
cd3748d412 Merge #2951
2951: Update mini-dashboard to v0.2.3 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
- Update the mini-dashboard with its latest release (v0.2.3)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-10-24 14:30:26 +00:00
079cfc70ae Update mini-dashboard to v0.2.3 2022-10-24 15:20:59 +02:00
4afed4de4f stabilize milli 2022-10-24 14:16:41 +02:00
a2314cf436 Update analytics 2022-10-24 13:56:26 +02:00
1d85eeecef Merge #2930
2930: fix wrong variant returned for invalid_api_key_indexes error r=ManyTheFish a=vishalsodani

# Pull Request

## Related issue
Fixes #2924 

## What does this PR do?
- ...

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: vishalsodani <vishalsodani@rediffmail.com>
2022-10-24 11:43:03 +00:00
0578aff8c9 Fix the tests 2022-10-20 17:41:13 +02:00
1d217cef19 Add some tests 2022-10-20 17:03:07 +02:00
c02ae4dfc0 Update roaring 2022-10-19 14:25:43 +02:00
506d08a9f4 Update analytics version 2022-10-19 14:05:42 +02:00
b423ef72be PROTO: hardcode version and interval for prototype analytics 2022-10-19 14:05:42 +02:00
77e718214f Fix pagination analytics 2022-10-19 14:05:42 +02:00
e35ea2ad55 Make search returns 0 hits when pages is set to 0 2022-10-19 14:05:42 +02:00
dfa70e47f7 Change page and hitsPerPage corner cases 2022-10-19 14:05:42 +02:00
815fba9cc3 Fix zero division when computing pages 2022-10-19 14:05:42 +02:00
e9d493c052 Add a totalHits field on finite pagination return 2022-10-19 14:05:42 +02:00
0fa5c9b515 Fix tests 2022-10-19 14:05:42 +02:00
062d17fbc0 Use a milli version that computes the number of hits exhaustively 2022-10-19 14:05:42 +02:00
30410e870f Format all fields in camelCase 2022-10-19 13:58:03 +02:00
b1bf6722e8 Update API to fit the proto needs 2022-10-19 13:58:03 +02:00
f2279f4615 Merge #2928
2928: Improve default config file r=curquiza a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2745#pullrequestreview-1115361274

`@gmourier` `@maryamsulemani97,` here are the changes I applied
- [After discussing with the cloud team](https://meilisearch.slack.com/archives/C03T1T47TUG/p1666014688582379) (internal link only), I left the variables commented or uncommented depending on their role.
- The only boolean variable I left commented is the `no_analytics` one, and I wrote the non-default value (i.e. `true`) -> it will be easier for users to disable analytics by only uncommenting the line. Plus, `no_analytics = false` is a double negative and not really easy to understand.
- I removed the `INDEX` section that was not really related to indexes
- I kept only 4 sections: SSL, Dumps, Snapshots, and an untitled section with all the uncategorized variables. I gathered all the uncategorized variables at the top of the file, i.e. in the "untitled" section.
- I copied/pasted the really basic [documentation description](https://docs.meilisearch.com/learn/configuration/instance_options.html) of the variables
- I added the documentation link for each variable.

Let me know if you agree or not with my choices!
Thanks for your review 🙏 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-10-19 11:49:33 +00:00
1a61209596 fix wrong variant returned for invalid_api_key_indexes error 2022-10-18 19:41:06 +05:30
48058b5e56 Improve default config file 2022-10-18 15:28:32 +02:00
1cf6efa740 Add new error when using /keys without masterkey set 2022-10-18 10:48:45 +05:30
96acbf815d Merge #2913
2913: download-latest: some refactoring r=curquiza a=nfsec

# Pull Request

## What does this PR do?
- Mainly hoisting variables.

## PR checklist
Please check if your PR fulfills the following requirements:
- [X] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [X] Have you read the contributing guidelines?
- [X] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Patryk Krawaczyński <nfsec@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-10-17 13:21:36 +00:00
c3ecdb3d8b Update download-latest.sh 2022-10-17 15:20:00 +02:00
b0749407f3 Merge #2804
2804: Add environement variable `MEILI_CONFIG_FILE_PATH` to define the config file path r=Kerollmops a=choznerol

# Pull Request

## What does this PR do?
Fixes #2800

~This is a draft PR based on the code in #2745. I will `rebase` and mark it ready for review only after #2745 is merged.~ Done rebasing.


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


## Demo

With `config.toml`, `config_copy1.toml` and `config_copy2.toml` present:
> <img width="692" alt="image" src="https://user-images.githubusercontent.com/12410942/192566891-6b3c9d26-736f-4e23-a09b-687fca1cb50d.png">

`MEILI_CONFIG_FILE_PATH` works:
> <img width="773" alt="image" src="https://user-images.githubusercontent.com/12410942/192567023-f751536e-992a-4e90-a176-cb19122248be.png">

`--config-file-path` still works:
> <img width="768" alt="image" src="https://user-images.githubusercontent.com/12410942/192567318-88c80b24-7873-4cec-8d08-16fe4d228055.png">

When both are present, `--config-file-path` takes precedence:
> <img width="1214" alt="image" src="https://user-images.githubusercontent.com/12410942/192567477-8a7cffe1-96f0-42a9-a348-6dbec20dc1e7.png">



Co-authored-by: Lawrence Chou <choznerol@protonmail.com>
2022-10-17 10:40:05 +00:00
5d895dd7da Merge #2851
2851: Upgrade clap to 4.0 r=loiclec a=choznerol

# Pull Request

## Related issue
Fixes #2846 

This PR is draft based on #2847 to avoid conflict. I will rebase and mark as 'Ready for review' after #2847 is merged.

## What does this PR do?
1. Upgrade clap to the latest version or 4.0 (4.0.9 as of today) by following the [migrating instruction](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating) from [4.0 changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating)
2. Fix an `ArgGroup` typo that can only be caught after upgrading to 4.0 in 20a715e29ed17c5a76229c98fb31504ada873597

## Notable changes

### The `--help` message

The format, ordering and indentation of `--help` message was changed in 4.0. I recorded the output of `cargo run -- --help` before and after upgrade to 4.0 for reference.

<details>
<summary>diff</summary>

Output of `diff --ignore-all-space --text --unified --new-file help-message-before.txt help-message-after.txt`:

```diff
--- help-message-before.txt	2022-10-14 16:45:36.000000000 +0800
+++ help-message-after.txt	2022-10-14 16:36:53.000000000 +0800
`@@` -1,12 +1,8 `@@`
-meilisearch-http 0.29.1
+Usage: meilisearch [OPTIONS]
 
-USAGE:
-    meilisearch [OPTIONS]
-
-OPTIONS:
+Options:
         --config-file-path <CONFIG_FILE_PATH>
-            Set the path to a configuration file that should be used to setup the engine. Format
-            must be TOML
+          Set the path to a configuration file that should be used to setup the engine. Format must be TOML
 
         --db-path <DB_PATH>
             Designates the location where database files will be created and retrieved
`@@` -26,15 +22,14 `@@`
             [default: dumps/]
 
         --env <ENV>
-            Configures the instance's environment. Value must be either `production` or
-            `development`
+          Configures the instance's environment. Value must be either `production` or `development`
             
             [env: MEILI_ENV=]
             [default: development]
             [possible values: development, production]
 
     -h, --help
-            Print help information
+          Print help information (use `-h` for a summary)
 
         --http-addr <HTTP_ADDR>
             Sets the HTTP address and port Meilisearch will use
`@@` -43,63 +38,53 `@@`
             [default: 127.0.0.1:7700]
 
         --http-payload-size-limit <HTTP_PAYLOAD_SIZE_LIMIT>
-            Sets the maximum size of accepted payloads. Value must be given in bytes or explicitly
-            stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+          Sets the maximum size of accepted payloads. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
             
             [env: MEILI_HTTP_PAYLOAD_SIZE_LIMIT=]
             [default: 100000000]
 
         --ignore-dump-if-db-exists
-            Prevents a Meilisearch instance with an existing database from throwing an error when
-            using `--import-dump`. Instead, the dump will be ignored and Meilisearch will launch
-            using the existing database.
+          Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-dump`. Instead, the dump will be ignored and Meilisearch will launch using the existing database.
             
             This option will trigger an error if `--import-dump` is not defined.
             
             [env: MEILI_IGNORE_DUMP_IF_DB_EXISTS=]
 
         --ignore-missing-dump
-            Prevents Meilisearch from throwing an error when `--import-dump` does not point to a
-            valid dump file. Instead, Meilisearch will start normally without importing any dump.
+          Prevents Meilisearch from throwing an error when `--import-dump` does not point to a valid dump file. Instead, Meilisearch will start normally without importing any dump.
             
             This option will trigger an error if `--import-dump` is not defined.
             
             [env: MEILI_IGNORE_MISSING_DUMP=]
 
         --ignore-missing-snapshot
-            Prevents a Meilisearch instance from throwing an error when `--import-snapshot` does not
-            point to a valid snapshot file.
+          Prevents a Meilisearch instance from throwing an error when `--import-snapshot` does not point to a valid snapshot file.
             
             This command will throw an error if `--import-snapshot` is not defined.
             
             [env: MEILI_IGNORE_MISSING_SNAPSHOT=]
 
         --ignore-snapshot-if-db-exists
-            Prevents a Meilisearch instance with an existing database from throwing an error when
-            using `--import-snapshot`. Instead, the snapshot will be ignored and Meilisearch will
-            launch using the existing database.
+          Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-snapshot`. Instead, the snapshot will be ignored and Meilisearch will launch using the existing database.
             
             This command will throw an error if `--import-snapshot` is not defined.
             
             [env: MEILI_IGNORE_SNAPSHOT_IF_DB_EXISTS=]
 
         --import-dump <IMPORT_DUMP>
-            Imports the dump file located at the specified path. Path must point to a `.dump` file.
-            If a database already exists, Meilisearch will throw an error and abort launch
+          Imports the dump file located at the specified path. Path must point to a `.dump` file. If a database already exists, Meilisearch will throw an error and abort launch
             
             [env: MEILI_IMPORT_DUMP=]
 
         --import-snapshot <IMPORT_SNAPSHOT>
-            Launches Meilisearch after importing a previously-generated snapshot at the given
-            filepath
+          Launches Meilisearch after importing a previously-generated snapshot at the given filepath
             
             [env: MEILI_IMPORT_SNAPSHOT=]
 
         --log-level <LOG_LEVEL>
             Defines how much detail should be present in Meilisearch's logs.
             
-            Meilisearch currently supports five log levels, listed in order of increasing verbosity:
-            ERROR, WARN, INFO, DEBUG, TRACE.
+          Meilisearch currently supports five log levels, listed in order of increasing verbosity: ERROR, WARN, INFO, DEBUG, TRACE.
             
             [env: MEILI_LOG_LEVEL=]
             [default: INFO]
`@@` -110,31 +95,25 `@@`
             [env: MEILI_MASTER_KEY=]
 
         --max-index-size <MAX_INDEX_SIZE>
-            Sets the maximum size of the index. Value must be given in bytes or explicitly stating a
-            base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+          Sets the maximum size of the index. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
             
             [env: MEILI_MAX_INDEX_SIZE=]
             [default: 107374182400]
 
         --max-indexing-memory <MAX_INDEXING_MEMORY>
-            Sets the maximum amount of RAM Meilisearch can use when indexing. By default,
-            Meilisearch uses no more than two thirds of available memory
+          Sets the maximum amount of RAM Meilisearch can use when indexing. By default, Meilisearch uses no more than two thirds of available memory
             
             [env: MEILI_MAX_INDEXING_MEMORY=]
             [default: "21.33 TiB"]
 
         --max-indexing-threads <MAX_INDEXING_THREADS>
-            Sets the maximum number of threads Meilisearch can use during indexation. By default,
-            the indexer avoids using more than half of a machine's total processing units. This
-            ensures Meilisearch is always ready to perform searches, even while you are updating an
-            index
+          Sets the maximum number of threads Meilisearch can use during indexation. By default, the indexer avoids using more than half of a machine's total processing units. This ensures Meilisearch is always ready to perform searches, even while you are updating an index
             
             [env: MEILI_MAX_INDEXING_THREADS=]
             [default: 5]
 
         --max-task-db-size <MAX_TASK_DB_SIZE>
-            Sets the maximum size of the task database. Value must be given in bytes or explicitly
-            stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+          Sets the maximum size of the task database. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
             
             [env: MEILI_MAX_TASK_DB_SIZE=]
             [default: 107374182400]
```

- ~[help-message-before.txt](https://github.com/meilisearch/meilisearch/files/9715683/help-message-before.txt)~ [help-message-before.txt](https://github.com/meilisearch/meilisearch/files/9784156/help-message-before-2.txt)
- ~[help-message-after.txt](https://github.com/meilisearch/meilisearch/files/9715682/help-message-after.txt)~ [help-message-after.txt](https://github.com/meilisearch/meilisearch/files/9784091/help-message-after.txt)

</details>


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Lawrence Chou <choznerol@protonmail.com>
2022-10-17 10:08:27 +00:00
136a87f7bb Little code refactor
Mainly hoisting variables.
2022-10-14 15:32:59 +02:00
5dafdd9a23 Preserve --help output ordering after upgrade Clap to 4.0
From the [4.0 breaking change][1]:

  ...
  * (help) Make DeriveDisplayOrder the default and removed the setting. To sort help, set next_display_order(None) (#2808)
  ...

[1]: https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#breaking-changes
2022-10-14 16:43:03 +08:00
53e5229b4a Assert error message for Windows besides *nix
The 'Tests on windows-latest' now failed with error message below

---- option::test::test_meilli_config_file_path_invalid stdout ----
thread 'option::test::test_meilli_config_file_path_invalid' panicked at 'assertion failed:
  left: `"unable to open or read the \"../configgg.toml\" configuration file: The system cannot find the file specified. (os error 2)."`,
 right: `"unable to open or read the \"../configgg.toml\" configuration file: No such file or directory (os error 2)."`', meilisearch-http\src\option.rs:555:17

https://github.com/meilisearch/meilisearch/actions/runs/3231941308/jobs/5291998750
2022-10-14 14:49:40 +08:00
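A sketch of the portable-assertion approach described above (not the actual test): select the expected OS error string with `cfg!`, since "os error 2" renders differently per platform:

```rust
#[test]
fn config_file_not_found_message() {
    // Windows and *nix spell ENOENT differently.
    let expected = if cfg!(windows) {
        "The system cannot find the file specified. (os error 2)"
    } else {
        "No such file or directory (os error 2)"
    };
    let err = std::fs::read_to_string("../configgg.toml").unwrap_err();
    assert!(err.to_string().contains(expected));
}
```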
9ebc73e6ac Comply with Clippy rule single-match 2022-10-14 14:16:10 +08:00
fc5a7e376c Merge #2876
2876: Full support for compressed (Gzip, Brotli, Zlib) API requests r=Kerollmops a=mou

# Pull Request

## Related issue
Fixes #2802 

## What does this PR do?
- Adds missed content-encoding support for streamed requests (documents)
- Adds additional tests to validate content-encoding support is in place
- Adds new tests to validate content-encoding support for previously existing code (built-in actix functionality for unmarshaling JSON payloads)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!

Co-authored-by: Andrey "MOU" Larionov <anlarionov@gmail.com>
2022-10-13 15:32:21 +00:00
5a98ecaf2a Merge #2896
2896: Fix CI to send signal to Cloud team r=Kerollmops a=curquiza

Hello `@eskombro` 👋 

I realized the CI we recently created together was not right when releasing an official Meilisearch version. Indeed, in this case `steps.meta.outputs.tags` contains several tags, see: https://github.com/meilisearch/meilisearch/actions/runs/3197492898/jobs/5220776456

You might want to ask: why was the CI, including the Cloud team signal, available when releasing v0.29.1 and not v0.29.0? Good question, thanks for asking it Sam! It was a mistake on our side: it should not have been available for v0.29.1, and this is how I found out v0.29.1 was broken and contained commits it should not have. So I deleted everything and started the release process again for v0.29.1.
Anyway, since I had the chance to see the bug in this release mess, I want to take the opportunity to fix it. Now, we will only send the real tag. 
Here you have more documentation about what `github.ref_name` is: https://docs.github.com/en/actions/learn-github-actions/contexts

Bisous bisous!

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-10-13 14:17:07 +00:00
2dce44f4c1 Fix formatting and shaving lints 2022-10-13 15:49:23 +02:00
b69f8d67c3 Added test to verify response encoding
Alongside request encoding (compression) support, it is helpful to verify that the server respect `Accept-Encoding` headers and apply the corresponding compression to responses.
2022-10-13 00:56:57 +02:00
99e2788ee7 Fix Cargo.toml formatting 2022-10-12 21:12:18 +02:00
a9e6a8901b Fix CI to send signal to Cloud team 2022-10-12 09:36:01 +02:00
3c3ae3ff98 Improve invalid config_file_path handling
1. Besides opt.config_file_path, also consider MEILI_CONFIG_FILE_PATH in the Err path because they are both user input.
2. Print out the incorrect file path in the error message.
3. Add tests
https://github.com/meilisearch/meilisearch/pull/2804#discussion_r991999888
2022-10-12 12:04:48 +08:00
343828a76e Merge #2890
2890: fix: handle dumpCreation query on tasks request r=Kerollmops a=washbin

# Pull Request

## Related issue
Fixes #2874 

## What does this PR do?
- add missing `DumpCreation` type in tasks route

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Co-authored-by: washbin <76929116+washbin@users.noreply.github.com>
2022-10-11 19:09:34 +00:00
72c1aef1c4 fix: handle dumpCreation query on tasks request 2022-10-11 19:36:04 +05:45
91accc0194 Fix default config file path typo 2022-10-11 21:36:17 +08:00
0f024e7d97 Merge #2882
2882: Bring back v0.29.1 changes into `main` r=Kerollmops a=curquiza

Following v0.29.1 release

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
2022-10-10 16:27:22 +00:00
3ebd88c03b Revert "Comment cache steps in jobs"
This reverts commit f513ac1233.
2022-10-10 14:46:54 +02:00
c958097e99 Merge #2862
2862: Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04 r=curquiza a=loiclec

This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported glibc version to 2.29, which means it couldn't be run from Debian 10 (for example) anymore. By using Ubuntu 18.04, which uses glibc 2.27, we restore support for older Linux distros.

Fixes #2850

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-10-10 14:42:18 +02:00
f513ac1233 Comment cache steps in jobs 2022-10-10 14:25:24 +02:00
97c202db51 Update version for next release (v0.29.1) 2022-10-10 14:25:24 +02:00
a5e23aa6e4 Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04
This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported
glibc version to 2.29, which means it couldn't be run from Debian 10
(for example) anymore. By using Ubuntu 18.04, which uses glibc 2.27, we
restore support for older Linux distros.
2022-10-10 14:25:18 +02:00
c5cd743eb6 Merge #2868
2868: Uncomment cache steps in Github CI r=curquiza a=AM1TB

# Pull Request

Uncomment cache steps as they were previously affected by an issue with Github actions: https://www.githubstatus.com/incidents/gq1x0j8bv67v

## Related issue
Fixes #2864 

## What does this PR do?
- Reintroduce the rust-cache steps within Github CI.

## PR checklist
Please check if your PR fulfills the following requirements:
- [X] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [X] Have you read the contributing guidelines?
- [X] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Amit Banerjee <amit.banerjee.jsr@gmail.com>
2022-10-10 11:33:37 +00:00
343a677566 Fix formatting and apply clippy suggestion 2022-10-10 11:04:46 +02:00
9dbc71cb6d Added support for encoded payload
Actix provides different content encodings out of the box, but only if we use the built-in content wrappers and containers. This patch wraps our own Payload implementation with an actix decoder, which enables request compression support.
2022-10-09 22:09:30 +02:00
11b986a81d Added support for specifying compression in tests
Refactored tests code to allow to specify compression (content-encoding) algorithm.

Added tests to verify that actix actually handles different content encodings properly.
2022-10-09 22:09:29 +02:00
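A sketch of how such a test could build a compressed body, assuming the `flate2` crate; the request would then carry the `Content-Encoding: gzip` header that the server-side decoder keys on:

```rust
use flate2::write::GzEncoder;
use flate2::Compression;
use std::io::Write;

// Gzip-compress a JSON payload in memory, as a compression-aware test would.
fn gzip_body(json: &str) -> Vec<u8> {
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    encoder.write_all(json.as_bytes()).expect("in-memory write");
    encoder.finish().expect("finalize gzip stream")
}

fn main() {
    let body = gzip_body(r#"{ "q": "botman" }"#);
    // Send this body along with the header: Content-Encoding: gzip
    assert!(!body.is_empty());
}
```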
7607a62531 Split tests over two modules
Currently, `add_documents` contains a number of `update` tests. This change unifies the test structure with the `index` module.
2022-10-09 22:03:22 +02:00
c883b23cca Merge #2861
2861: Change default bind address to localhost r=Kerollmops a=Fall1ngStar

# Pull Request

## Related issue
Fixes #2782

## What does this PR do?
- Change the default bind address to `localhost` so that it can be accessed with IPv6

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Fall1ngStar <fall1ngstar.public@gmail.com>
2022-10-07 19:49:39 +00:00
d1c10d6d68 Fix segment_analytics default_http_addr import 2022-10-06 22:42:20 -04:00
1f40c3e48c Uncomment cache steps in jobs 2022-10-06 22:32:29 +05:30
2681e92d4e Support MEILI_CONFIG_FILE_PATH to define config file path
Close #2800

This is an alternative to the `--config-file-path` option. If both `--config-file-path` and `MEILI_CONFIG_FILE_PATH` are present, `--config-file-path` takes precedence according to the "Priority order" section of #2558.
2022-10-07 00:41:14 +08:00
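A minimal sketch of the precedence just described (the default path below is an assumption, used only for illustration):

```rust
use std::env;

// --config-file-path (CLI) wins over MEILI_CONFIG_FILE_PATH (env),
// which wins over the default.
fn resolve_config_path(cli_value: Option<String>) -> String {
    cli_value
        .or_else(|| env::var("MEILI_CONFIG_FILE_PATH").ok())
        .unwrap_or_else(|| "./config.toml".to_string())
}

fn main() {
    env::set_var("MEILI_CONFIG_FILE_PATH", "/etc/meili/config.toml");
    // CLI flag present: it takes precedence over the environment variable.
    assert_eq!(resolve_config_path(Some("custom.toml".into())), "custom.toml");
    // No CLI flag: fall back to the environment variable.
    assert_eq!(resolve_config_path(None), "/etc/meili/config.toml");
}
```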
da25328c2b Fix clap ArgGroup typos
Not sure why, but the compiler didn't catch this until clap was upgraded to v4.
Following is the error from 'cargo test':

running 2 tests
test routes::indexes::search::test::test_fix_sort_query_parameters ... ok
test option::test::test_valid_opt ... FAILED

failures:

---- option::test::test_valid_opt stdout ----
thread 'option::test::test_valid_opt' panicked at 'Command meilisearch-http: Argument or group 'import-snapshot' specified in 'requires*' for 'ignore_missing_snapshot' does not exist', /Users/ychou/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-4.0.9/src/builder/debug_asserts.rs:152:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

failures:
    option::test::test_valid_opt

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
2022-10-07 00:32:26 +08:00
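The gist of the fix, as a hedged sketch assuming clap 4 with the `derive` feature: `requires` must reference the argument's snake_case id, not its kebab-case flag name:

```rust
use clap::Parser;

#[derive(Parser)]
struct Opt {
    #[arg(long)]
    import_snapshot: Option<String>,

    // Before the fix this read requires = "import-snapshot", an id that does
    // not exist; clap v4's debug assertions panic on it at startup.
    #[arg(long, requires = "import_snapshot")]
    ignore_missing_snapshot: bool,
}

fn main() {
    let opt = Opt::parse();
    let _ = (opt.import_snapshot, opt.ignore_missing_snapshot);
}
```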
9e5ef8eb69 Upgrade clap to v4
Close #2846

4.0.0 changelog: 'https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#400---2022-09-28'

I followed the [Migrating steps](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating) and the only issues I encountered are:
1. The typo problem fixed in the previous commit "Fix clap ArgGroup typo"
2. I can't say I am 100% sure every [subtle change](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#breaking-changes) is fine for our use case, but at least after a quick read I didn't notice anything actionable.
2022-10-07 00:32:25 +08:00
6285c5949c Fix clap v4 deprecation warning
Following the 3rd step of the [clap v4 migration instructions](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating)
The result for 'cargo check --features clap/deprecated' is https://user-images.githubusercontent.com/12410942/193825216-ac680574-f53b-49c0-88c4-8bc42c4c6381.png
2022-10-07 00:15:53 +08:00
b55ec7db4d Upgrade clap to 3.2.8
Upgrade to the latest version of v3 before upgrading to v4
2022-10-07 00:04:21 +08:00
1b72eba1f3 Merge #2867
2867: Bring back `stable` into `main` r=Kerollmops a=curquiza

Following hotfix for v0.29.1

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
2022-10-06 14:14:23 +00:00
c98d3209ad Merge #2839
2839: Update internal CLI documentation r=curquiza a=jeertmans

# Pull Request

## Related issue
Fixes #2810

## What does this PR do?
- Make internal CLI documentation match the online one

## Remarks
- Scope is limited to `meilisearch/meilisearch-http/src/option.rs`, not the flattened structs: should I also take care of them?
- Could not find online docs for `enable_metrics_route`
- Max column width wrapping was done by hand, so may not be perfect
- I removed the links from the internal doc: should I put them back?

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Jérome Eertmans <jeertmans@icloud.com>
2022-10-06 13:32:32 +00:00
abb0233077 chore: add docs of flattened structs 2022-10-06 15:14:56 +02:00
8c526c31da Update meilisearch-http/src/option.rs
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-10-06 15:08:37 +02:00
ed6c2fb22c Merge #2862
2862: Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04 r=curquiza a=loiclec

This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported glibc version to 2.29, which means it couldn't be run from Debian 10 (for example) anymore. By using Ubuntu 18.04, which uses glibc 2.27, we restore support for older Linux distros.

Fixes #2850 

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-10-06 12:18:38 +00:00
ab17c0acd5 Comment cache steps in jobs 2022-10-06 14:03:04 +02:00
425692287d Merge #2841
2841: Bail if config file contains 'config_file_path' r=Kerollmops a=arriven

# Pull Request

## Related issue
Fixes #2801

## What does this PR do?
- Return an error if config file contains 'config_file_path'

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: arriven <20084245+Arriven@users.noreply.github.com>
2022-10-06 11:58:28 +00:00
05af8f0e46 Update version for next release (v0.29.1) 2022-10-06 10:27:11 +02:00
dc93853946 Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04
This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported
glibc version to 2.29, which means it couldn't be run from Debian 10
(for example) anymore. By using Ubuntu 18.04, which uses glibc 2.27, we
restore support for older Linux distros.
2022-10-06 10:13:50 +02:00
435778f328 Change default bind address to localhost 2022-10-05 22:23:29 -04:00
8134c26a6f Merge #2847
2847: Upgrade dependencies r=Kerollmops a=loiclec

# Pull Request

## Related issue
Partly fixes #2822 

## What does this PR do?
Upgrade Meilisearch's dependencies to their latest versions, except:
- clap stays on 3.0 because it is more complicated to upgrade ( see #2846  )
- `enum_iterator` goes up to 1.1.2 instead of 1.2 because of the vergen dependency

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-10-05 20:09:17 +00:00
6933ba54fb chore: add missing docs 2022-10-05 19:04:33 +02:00
4a022acd33 Update meilisearch-http/src/option.rs
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Update meilisearch-http/src/option.rs

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Apply suggestions from code review

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-10-05 18:54:38 +02:00
a2c91a87fe Upgrade dependencies
Except:
- clap stays on 3.0 because it is more complicated to upgrade
- enum_iterator goes up to 1.1.2 instead of 1.2 because of the vergen
dependency
2022-10-05 15:53:02 +02:00
59f1091c5e Bail if config file contains 'config_file_path' 2022-10-05 12:00:27 +03:00
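A sketch of that guard, assuming the file is first parsed into a `toml::Value`: a `config_file_path` key inside the config file can never take effect (the file is already located), so reject it early:

```rust
fn check_config(parsed: &toml::Value) -> Result<(), String> {
    if parsed.get("config_file_path").is_some() {
        // The config file is already located; a nested path is meaningless.
        return Err("`config_file_path` is not supported in the config file".to_string());
    }
    Ok(())
}

fn main() {
    let parsed: toml::Value = toml::from_str(r#"config_file_path = "other.toml""#).unwrap();
    assert!(check_config(&parsed).is_err());
}
```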
d1fdca2bce chore(fmt): cargo fmt 2022-10-05 10:31:29 +02:00
9da34dc55c Merge #2837
2837: chore: generate Apple Silicon binaries r=curquiza a=jeertmans

# Pull Request

This creates a new job in the "publish binaries" workflow.

User `@mohitsaxenaknoldus` did not seem to be active on this anymore, so I tried it myself ;-)

## Related issue
Fixes #2792

## What does this PR do?
- As titled

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Jérome Eertmans <jeertmans@icloud.com>
2022-10-05 07:42:02 +00:00
221e3edf48 Update publish-binaries.yml 2022-10-04 19:33:39 +02:00
add793462f Update milestone-workflow.yml (#2853) 2022-10-04 16:35:06 +02:00
cc2a271287 Update milestone-workflow.yml (#2852) 2022-10-04 16:26:15 +02:00
77cdecaf3a Merge #2848
2848: v0.29.0: bring `stable` changes into `main` r=curquiza a=curquiza

Bring `stable` changes into `main`. These changes correspond to changes we made during the v0.29.0 pre-release

Co-authored-by: evpeople <hangcaihui@gmail.com>
Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Many the fish <many@meilisearch.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-10-04 12:29:40 +00:00
a7d2c9572e Merge branch 'main' into stable 2022-10-04 14:20:10 +02:00
53c7c1b07f Merge #2845
2845: Fixed broken Link r=curquiza a=AnirudhDaya

# Pull Request

## Related issue
Fixes #2840 

## What does this PR do?
- Broken link is fixed. Now it redirects to the correct website.

## PR checklist
Please check if your PR fulfils the following requirements:
- [X] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [X] Have you read the contributing guidelines?
- [X] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Anirudh Dayanand <anirudhdaya@gmail.com>
2022-10-04 10:04:31 +00:00
a93e3dead3 Update meilisearch-http/src/option.rs
Co-authored-by: Tommy <68053732+dichotommy@users.noreply.github.com>
2022-10-04 10:30:08 +02:00
62b06e6088 Fixed broken Link 2022-10-04 13:39:40 +05:30
8ad5c3043c Merge #2819
2819: Replace a meaningless serde message r=irevoire a=onyxcherry

# Pull Request

## What does this PR do?
Fixes #2680
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

I've renamed the `serde_msg` variable to a `message` as _message_ does or does not include the serde error message --> is more generic.
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Tomasz Wiśniewski <tomasz@wisniewski.app>
2022-10-03 17:58:00 +00:00
8fa8eb9767 Merge #2830
2830: Increase max concurrent readers on indexes r=irevoire a=arriven

# Pull Request

## What does this PR do?
Fixes #2786 
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: arriven <20084245+Arriven@users.noreply.github.com>
2022-10-03 17:26:45 +00:00
069374c23a Merge #2817
2817: Update Hacktoberfest section in CONTRIBUTING.md r=curquiza a=meili-bot

_This PR is auto-generated._

Following: af850854e4

Update Hacktoberfest section in CONTRIBUTING.md with the global guideline information.


Co-authored-by: meili-bot <74670311+meili-bot@users.noreply.github.com>
2022-10-03 14:49:50 +00:00
458775c7ad docs: add missing options 2022-10-03 16:37:16 +02:00
23db798f45 docs: update CLI documentation
Closes #2810
2022-10-03 16:07:38 +02:00
6a67afa48b chore: generate Apple Silicon binaries
Closes #2792
2022-10-03 15:09:39 +02:00
4ccfc837c1 Merge #2827
2827: Upgrade to alpine 3.16 r=curquiza a=nontw

# Pull Request

## What does this PR do?
Just a simple version upgrade to the existing Dockerfile.  Tested building the image locally successfully.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: nont <nontkrub@gmail.com>
2022-10-03 12:28:16 +00:00
ce75bb5ef8 Merge #2834
2834: Deleted v1.rs because it is not included in Project r=irevoire a=Himanshu664

# Pull Request

## Related issue
Fixes #2833 

## What does this PR do?
- Deleted the v1.rs file because it is not included in the project

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Himanshu Malviya <gs2012023@sgsitsindore.in>
2022-10-03 11:10:05 +00:00
f7f34fb714 deleted v1.rs 2022-10-03 11:04:54 +00:00
fcedce8578 Merge #2745 #2789 #2814 #2826
2745: Config file support r=curquiza a=mlemesle

# Pull Request

## What does this PR do?
Fixes #2558

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


2789: Fix typos r=Kerollmops a=kianmeng

# Pull Request

## What does this PR do?

Found via `codespell -L crate,nam,hart`.

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


2814: Skip dashboard test if mini-dashboard feature is disabled r=Kerollmops a=jirutka

Fixes #2813

Fixes the following error:

    cargo test --no-default-features
    ...
    error: couldn't read target/debug/build/meilisearch-http-ec029d8c902cf2cb/out/generated.rs: No such file or directory (os error 2)
     --> meilisearch-http/tests/dashboard/mod.rs:8:9
      |
    8 |         include!(concat!(env!("OUT_DIR"), "/generated.rs"));
      |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      |
      = note: this error originates in the macro `include` (in Nightly builds, run with -Z macro-backtrace for more info)

    error: could not compile `meilisearch-http` due to previous error

2826: Rename receivedDocumentIds into matchedDocuments r=Kerollmops a=Ugzuzg

# Pull Request

## What does this PR do?

Fixes #2799 
Changes DocumentDeletion task details response.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Tested with curl:
```
curl \
  -X POST 'http://localhost:7700/indexes/movies/documents/delete-batch' \
  -H 'Content-Type: application/json' \
  --data-binary '[
    23488,
    153738,
    437035,
    363869
  ]'

{"taskUid":1,"indexUid":"movies","status":"enqueued","type":"documentDeletion","enqueuedAt":"2022-10-01T20:06:37.105416054Z"}%

curl \
  -X GET 'http://localhost:7700/tasks/1'

{"uid":1,"indexUid":"movies","status":"succeeded","type":"documentDeletion","details":{"matchedDocuments":4,"deletedDocuments":2},"duration":"PT0.005708322S","enqueuedAt":"2022-10-01T20:06:37.105416054Z","startedAt":"2022-10-01T20:06:37.115562733Z","finishedAt":"2022-10-01T20:06:37.121271055Z"}
```

Co-authored-by: mlemesle <lemesle.martin@hotmail.fr>
Co-authored-by: Kian-Meng Ang <kianmeng@cpan.org>
Co-authored-by: Jakub Jirutka <jakub@jirutka.cz>
Co-authored-by: Jarasłaŭ Viktorčyk <ugzuzg@gmail.com>
2022-10-03 09:48:34 +00:00
a93eec1d34 Merge #2831
2831: Make clippy happy r=curquiza a=Kerollmops



Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-10-03 08:48:07 +00:00
135f656e8f Make clippy happy 2022-10-03 10:39:42 +02:00
88e69f4302 Increase max concurrent readers on indexes 2022-10-02 18:25:32 +03:00
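For context, a hedged sketch with heed (the LMDB wrapper used under the hood): LMDB caps concurrent read transactions per environment, so the cap is raised when the environment is opened. The numbers below are illustrative, not the values actually chosen:

```rust
use heed::EnvOpenOptions;
use std::path::Path;

fn open_index_env(path: &Path) -> heed::Result<heed::Env> {
    EnvOpenOptions::new()
        .map_size(100 * 1024 * 1024) // 100 MiB, illustrative
        .max_readers(1024)           // raised above the LMDB default (126)
        .open(path)
}

fn main() {
    let _ = open_index_env(Path::new("./data.ms"));
}
```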
459829631f Upgrade to alpine 3.16 2022-10-01 18:06:09 -07:00
20589a41b5 Rename receivedDocumentIds into matchedDocuments
Changes DocumentDeletion task details response.
2022-10-01 21:59:20 +02:00
61a518a384 Fix #2680 - replace a meaningless serde message 2022-09-29 16:36:32 +02:00
7905dae7ad Update CONTRIBUTING.md 2022-09-29 16:00:08 +02:00
05f93541d8 Skip dashboard test if mini-dashboard feature is disabled
Fixes the following error:

    cargo test --no-default-features
    ...
    error: couldn't read target/debug/build/meilisearch-http-ec029d8c902cf2cb/out/generated.rs: No such file or directory (os error 2)
     --> meilisearch-http/tests/dashboard/mod.rs:8:9
      |
    8 |         include!(concat!(env!("OUT_DIR"), "/generated.rs"));
      |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      |
      = note: this error originates in the macro `include` (in Nightly builds, run with -Z macro-backtrace for more info)

    error: could not compile `meilisearch-http` due to previous error
2022-09-29 01:42:52 +02:00
2827ff7957 Update README.md with Hacktoberfest section (#2794)
* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>
2022-09-22 17:45:33 +02:00
d166a97d67 Update CONTRIBUTING.md for Hacktoberfest (#2793)
* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>

* Update CONTRIBUTING.md

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>
2022-09-22 17:27:42 +02:00
248d727e61 Add quotes around file name and change error message 2022-09-22 09:44:28 +02:00
56d72d4493 Uncomment static default values and fix typo 2022-09-21 16:31:16 +02:00
740926e747 Fix typos
Found via `codespell -L crate,nam,hart,succeded`.
2022-09-21 21:46:06 +08:00
f4b81fa0a1 Merge #2772 #2773
2772: Improve issue template display to avoid support in Meilisearch issues r=curquiza a=curquiza

Move the support line higher to make it more visible
Like this: https://github.com/meilisearch/meilisearch/discussions/2780

2773: Allow building without specialized tokenizations r=curquiza a=jirutka

Fixes #2774

(Some of) these specialized tokenizations include huge dictionaries that currently account for 90% (!) of the meilisearch binary size.

This commit adds `chinese`, `hebrew`, `japanese`, and `thai` feature flags that are propagated via `milli` down to the `charabia` crate. To keep it backwards compatible, they are enabled by default.

Related to meilisearch/milli#632

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Jakub Jirutka <jakub@jirutka.cz>
2022-09-21 10:56:12 +00:00
0fdbf128e9 Merge #2790
2790: Improve Docker CI for cloud team r=curquiza a=curquiza

Work with `@eskombro` 
Update CI to send a signal to Cloud team CI when Docker image is pushed

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-21 09:55:50 +00:00
d3b984d862 Update CI to send a signal to Cloud team when Docker image is pushed
Co-authored-by: Samuel Jimenez <sjimenezre@gmail.com>
2022-09-21 11:39:58 +02:00
d406fe901b Pass config.toml keys to snake_case 2022-09-21 10:55:16 +02:00
4dfae44478 Apply PR review comments 2022-09-19 18:16:28 +02:00
935f18efcf Allow building without specialized tokenizations
(Some of) these specialized tokenizations include huge dictionaries
that currently account for 90% (!) of the meilisearch binary size.

This commit adds chinese, hebrew, japanese, and thai feature flags
that are propagated via milli down to the charabia crate. To keep it
backward compatible, they are enabled by default.

Related to meilisearch/milli#632
2022-09-14 21:16:34 +02:00
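A hedged sketch of how such cargo feature flags typically gate the heavy, dictionary-backed code paths at compile time (the function below is illustrative, not charabia's real API):

```rust
// Each feature gates a specialized tokenization; disabling it drops the
// corresponding dictionary from the binary entirely.
#[allow(unused_mut)]
fn compiled_specialized_tokenizations() -> Vec<&'static str> {
    let mut langs: Vec<&'static str> = Vec::new();
    #[cfg(feature = "chinese")]
    langs.push("chinese");
    #[cfg(feature = "hebrew")]
    langs.push("hebrew");
    #[cfg(feature = "japanese")]
    langs.push("japanese");
    #[cfg(feature = "thai")]
    langs.push("thai");
    langs
}

fn main() {
    println!("specialized tokenizations: {:?}", compiled_specialized_tokenizations());
}
```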
5b57114771 Bump milli from 0.33.0 to 0.33.4 2022-09-14 20:52:11 +02:00
c2ab7a7939 Update config.yml 2022-09-14 14:40:36 +02:00
fa315352da Merge #2770
2770: Update milli 0.33.4 r=Kerollmops a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2764

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-13 16:07:06 +00:00
268d59ccb1 Update milli version to v0.33.4 2022-09-13 18:01:09 +02:00
5901d4e407 Merge #2768
2768: Update patch versions to remove CVE r=Kerollmops a=curquiza

Trying to fix CVE we have with [synchronoise](https://github.com/QuietMisdreavus/synchronoise) crate

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-12 12:47:59 +00:00
aabd67a9fa Update patch version to remove CVE 2022-09-12 14:36:45 +02:00
a690ace36e Add example config.toml with default values 2022-09-09 09:37:23 +02:00
579fa3f1ad Remove unnecessary println 2022-09-08 11:05:52 +02:00
3fd6af25f9 Merge #2759
2759: Bump milli to 0.33.3 r=Kerollmops a=Kerollmops

This PR fixes #2743.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-09-07 20:55:08 +00:00
7f267ec4be Fix clippy 2022-09-07 20:22:49 +02:00
441492f1c8 Bump milli to v0.33.3 2022-09-07 18:23:49 +02:00
528a4721c1 Merge #2727
2727: Don't panic when the error length is slightly over 100 r=Kerollmops a=onyxcherry

# Pull Request

## What does this PR do?
Fixes PR #2207: [the last commit](7ece7a9d9e) changed the number of characters to keep in place at the end from `50` to `85`, **but the lower limit of the string length wasn't changed**.
Therefore, some data (e.g. the example string from issue #2680) was causing `meilisearch` to **panic**.

So I simply raised the minimum value from `100` to `135` (`50 + 85`) to ensure that `replace_range()` won't panic due to an inverted range.
At the same time, I am in favor of the `85` value introduced in `@CNLHC`'s last commit.


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing ~issue~ pull request?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Tomasz Wiśniewski <tomasz@wisniewski.app>
2022-09-07 16:16:43 +00:00
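A minimal sketch of the invariant at stake (byte-indexed, so assuming ASCII content): with a 50-character head and an 85-character tail kept, the minimum length before eliding must be at least 135, otherwise the `replace_range` bounds invert and panic:

```rust
fn truncate_error(mut msg: String) -> String {
    const HEAD: usize = 50;
    const TAIL: usize = 85;
    // Only elide when the bounds cannot invert: len > HEAD + TAIL (135).
    if msg.len() > HEAD + TAIL {
        msg.replace_range(HEAD..msg.len() - TAIL, "...");
    }
    msg
}

fn main() {
    let out = truncate_error("x".repeat(140));
    assert_eq!(out.len(), 50 + 3 + 85);
}
```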
5a4f1508d0 Add documentation 2022-09-07 18:16:33 +02:00
d1a30df23d Remove unneeded prints, format 2022-09-07 18:05:55 +02:00
92b0c51bfe Merge #2755
2755: Update mini-dashboard to v0.2.2 r=Kerollmops a=mdubus

# Pull Request

## What does this PR do?
Fixes #2716

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-09-07 15:53:04 +00:00
135499f398 Extract new env vars to const 2022-09-07 17:47:15 +02:00
b3ffcb2d97 Merge #2758
2758: Update ubuntu-18.04 to 20.04 r=Kerollmops a=curquiza

Trying to avoid CI failure by updating ubuntu machines
Commit already available on main, so for v0.30.0
https://github.com/meilisearch/meilisearch/pull/2719

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-07 15:27:19 +00:00
5cbd047989 Update ubuntu-18.04 to 20.04 2022-09-07 17:24:35 +02:00
ef3fa92536 Refactor default values for clap and serde 2022-09-07 16:58:03 +02:00
6520d3c474 Refactor build method and flag 2022-09-07 16:09:00 +02:00
403226a029 Add support for config file 2022-09-07 16:09:00 +02:00
07f45251e9 Update mini-dashboard to v0.2.2 2022-09-07 11:09:12 +02:00
3a62e97c9f Merge #2744
2744: Minor fixes in the just added update-version CI r=ManyTheFish a=curquiza

These fixes do not prevent us from using the current CI

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-06 13:14:06 +00:00
37dc6537c3 Fix api keys bugs (#2734)
* Add some tests

* Disallow index creation when the API key doesn't explicitly have the right on the index being created

* Fix lazy index creation with `indexes.*` action
2022-09-06 15:13:09 +02:00
55aa83d75a Minor fixes in the just added update-version CI 2022-09-05 16:41:01 +02:00
0c962f6a76 Merge #2740
2740: Update checkout v2 to v3 in CI manifests and use a unique GitHub PAT r=Kerollmops a=curquiza

Upgrade the missing checkout v2 into v3
Probably a bad merge conflict that removed them when merging `stable` into `main` after the v0.28.0 release.

Also, use `MEILI_BOT_GH_PAT` instead of `PUBLISH_TOKEN` and the default GitHub token, which allows us to remove useless GitHub secrets (once this PR is merged and v0.29.0 is released, because `PUBLISH_TOKEN` is still used on `release-v0.29.0`)

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-05 12:14:40 +00:00
2e0221cfc6 Merge #2739
2739: Add CI manifest to automate some steps when closing/creating a Milestone r=Kerollmops a=curquiza

For each Milestone created (not opened!), and if the release is NOT a patch release (i.e. not one where only the patch version changed):
- the roadmap issue is created, see the [template](https://github.com/meilisearch/core-team/blob/main/issue-templates/roadmap-issue.md)
- the changelog issue is created, see the [template](https://github.com/meilisearch/core-team/blob/main/issue-templates/changelog-issue.md)

For each Milestone closed
- the `vX.Y.Z` label is created
- this label is applied to all issues/PRs in the Milestone

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-05 11:42:10 +00:00
2d34b239ab Merge #2738
2738: Add missing env vars for dumps and snapshots features r=irevoire a=gmourier

# Pull Request

## What does this PR do?
Fixes #2721


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-09-05 11:12:09 +00:00
55853815a5 Merge #2741
2741: Add CI to update the Meilisearch version in Cargo.toml files r=ManyTheFish a=curquiza

Add a CI we can trigger manually to create a PR updating the Meilisearch version
The next step is to create a Slack bot that will trigger this CI
In the meantime, we can trigger this CI manually in the [Actions tab](https://github.com/meilisearch/milli/actions)

The `MEILI_BOT_GH_PAT` secret has been added at the organization level, and is accessible for the following repositories (so far): Meilisearch, Milli and Charabia

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-05 08:32:43 +00:00
b897ce8dfa Add CI to update the Meilisearch version in Cargo.toml files 2022-09-04 11:57:23 +02:00
5f5b787483 Use meili-bot PAT everywhere 2022-09-04 11:32:22 +02:00
d0aa8042e2 Fix job names 2022-09-03 17:53:37 +02:00
9cb1e4af5c Rename workflow file 2022-09-03 17:50:24 +02:00
cc09aa8868 Refactor env var 2022-09-02 17:07:23 +02:00
2eca723a91 Update checkout v2 to v3 in CI manifests 2022-09-02 16:45:32 +02:00
ae14567f97 Add CI manifest to automate some step of the release management when creating/closing a Milestone 2022-09-02 16:28:58 +02:00
d0f1054f5c chore: cargo fmt 2022-09-01 22:37:07 +02:00
3878c289df feat: add missing env var for dumps and snapshots feature 2022-09-01 22:34:20 +02:00
d1b3642923 Extract input to trim lengths to variables 2022-09-01 20:50:11 +02:00
f45ddff312 Merge #2733
2733: Move if conditions to the steps and not to the whole job r=Kerollmops a=curquiza

Follows https://github.com/meilisearch/meilisearch/pull/2726
Fixes the CI. Moves the `if:` condition to the steps and not to the whole `check-version` job; otherwise, the following jobs checking the compilation cannot run.
See: https://github.com/meilisearch/meilisearch/actions/runs/2968543407

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-01 11:49:40 +00:00
8f3b590436 Move if conditions to the steps and not to the whole job 2022-09-01 13:34:34 +02:00
4e37427de8 Merge #2732
2732: Update milli v0.33.2 r=Kerollmops a=ManyTheFish

closes #2722


⚠️ : merging into [release-v0.29.0](https://github.com/meilisearch/meilisearch/tree/release-v0.29.0)

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-09-01 11:18:11 +00:00
50434d35d0 Update milli v0.33.2 2022-09-01 13:15:05 +02:00
60792eebcf Fix #2207 - do not panic when the error message length is between 100 and 135 2022-08-31 19:29:53 +02:00
4fbc8c705f Merge #2726
2726: Add dry run for publishing binaries: check the compilation works r=Kerollmops a=curquiza

To avoid discovering during a release that one type of binary does not compile, I created a dry run for binary compilation every day at 2am 😇 

See the problem we had recently because this CI was missing: https://github.com/meilisearch/meilisearch/issues/2718

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-08-31 15:35:05 +00:00
80b1f3e830 Add dry run for publishing binaries: check the compilation works 2022-08-31 17:28:42 +02:00
e315547ffc Merge #2724
2724: Make the "document addition done" log appear once indexing is over r=curquiza a=evpeople

# Pull Request

## What does this PR do?
Fixes #2703 

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: evpeople <hangcaihui@gmail.com>
2022-08-31 13:13:36 +00:00
833ade80a6 cargo update 2022-08-31 13:58:53 +08:00
f117c90c46 remove the intermediate addition variable 2022-08-30 21:49:34 +08:00
3efd06168f Merge #2719
2719: Update ubuntu-18.04 to 20.04 r=Kerollmops a=curquiza

ubuntu-18.04 is deprecated

<img width="1566" alt="Capture d’écran 2022-08-30 à 12 04 25" src="https://user-images.githubusercontent.com/20380692/187409661-5ac9c545-a737-47f5-ad2f-0369114be11a.png">

https://github.com/meilisearch/meilisearch/actions/runs/2949985452

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-08-30 13:42:09 +00:00
07bb32b622 Update ubuntu-18.04 to 20.04 2022-08-29 18:26:01 +02:00
1131400694 Merge #2717
2717: Disable LTO due to compilation failures on some platforms r=curquiza a=loiclec

Meilisearch fails to compile on aarch64 Linux due to a linker error ( https://github.com/meilisearch/meilisearch/runs/8072616457?check_suite_focus=true ). This is probably caused by link-time-optimisation (LTO). Since it is not possible to modify a profile based on the target triple, this PR deactivates LTO completely for all platforms.
In the future, we might want to create different custom profiles, such as:
```toml
[profile.release-lto]
inherits = "release"
lto = "thin"
```
and compile Meilisearch using `cargo build --profile release-lto` on the platforms that can support it.


Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-08-29 16:04:34 +00:00
9a789daa58 Disable LTO due to compilation failures on some platforms
(aarch64 linux)
2022-08-29 17:21:08 +02:00
0826aa35e2 Merge #2713
2713: Move prometheus behind a feature flag r=Kerollmops a=irevoire

We decided we wanted to continue working on this feature before making it public.
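Concretely, "behind a feature flag" means the metrics code is only compiled on demand; a minimal sketch, assuming a Cargo feature named `metrics`:

```rust
// Assuming Cargo.toml declares: [features] metrics = ["prometheus"]
// This module only exists when built with `--features metrics`.
#[cfg(feature = "metrics")]
mod metrics {
    pub fn register_routes() {
        // mount the /metrics route here
    }
}
```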

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-08-29 12:38:42 +00:00
6aa3ad6b6c move prometheus behind a feature flag 2022-08-29 14:36:59 +02:00
b774adfbf7 The "document addition done" log
now appears once the indexing is over.
2022-08-27 21:50:10 +08:00
47a1aa69f0 Merge #2702
2702: Add link to the main image r=curquiza a=brunoocasali

I have wrapped the image with an `<a>` link, and it seems to be working fine, WDYT?

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-08-24 16:54:08 +00:00
b3d89da74d Update README.md 2022-08-24 18:52:34 +02:00
0798170a64 Add link to the main image 2022-08-24 13:47:09 -03:00
446dfccc8c Merge #2504 #2697
2504: New README 🌟 r=curquiza a=curquiza

⚠️ Please do not only look at the Markdown but also at how GitHub renders the README 😇 

👉 👉 [Rendered](https://github.com/meilisearch/meilisearch/blob/new-readme/README.md) 👈 👈

2697: Accept an environment variable to enable the metrics route r=ManyTheFish a=Kerollmops

With this PR, Meilisearch accepts the `MEILI_ENABLE_METRICS_ROUTE` environment variable to enable the newly introduced metrics route.
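A minimal sketch of reading such a flag (the variable name comes from this PR; the accepted values are an assumption):

```rust
/// Returns true when the /metrics route should be mounted.
fn metrics_route_enabled() -> bool {
    std::env::var("MEILI_ENABLE_METRICS_ROUTE")
        .map(|v| v == "true" || v == "1")
        .unwrap_or(false)
}
```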

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-24 15:41:44 +00:00
43175cfb78 New README version
Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update according to guigui's request

Add demo link

Update README.md (multiple review iterations)

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>
Co-authored-by: CaroFG <48251481+CaroFG@users.noreply.github.com>

Put sentence in bold
2022-08-24 17:16:29 +02:00
a1bb49c351 Merge #2696
2696: Add the new `metrics.get` and `metrics.all` actions rights r=Kerollmops a=Kerollmops

Follow the specification and add the new `metrics.get` and `metrics.all` actions, making the `/metrics` route only accessible with those rights.
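As a sketch, the containment rule this implies (see also the "metrics.all must define metrics.get" commit below; names illustrative, not the actual auth code):

```rust
/// `metrics.all` must grant `metrics.get`, so either action opens the route.
fn can_access_metrics(key_actions: &[String]) -> bool {
    key_actions
        .iter()
        .any(|a| matches!(a.as_str(), "metrics.get" | "metrics.all"))
}
```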

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-24 15:08:53 +00:00
bebd76064a Add test for the rights of /metrics route 2022-08-24 17:03:43 +02:00
f0b2ac6efb metrics.all must define metrics.get 2022-08-24 17:03:30 +02:00
08d86e33ca Accept an env variable to enable the metrics route 2022-08-24 16:39:56 +02:00
2c2efc7ab6 Remove the hand written numbers of the actions rights 2022-08-24 16:33:12 +02:00
381df43be4 Change the metrics route API access rights 2022-08-24 16:28:33 +02:00
f87ebfe477 Merge #2692
2692: Slight changes for prometheus metrics r=Kerollmops a=gmourier

# Pull Request

## What does this PR do?

- Replace "MeiliSearch" with "Meilisearch"
- Brings some consistency between rust identifier and exposed metrics names
- Add a suffix describing the unit, in plural form, e.g. `MEILISEARCH_DB_SIZE_BYTES` (https://prometheus.io/docs/practices/naming/#metric-names); see the sketch after this list
- Update dashboard.json
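For illustration, a metric following that convention with the [prometheus crate](https://crates.io/crates/prometheus) might look like this (a sketch; only the metric name above comes from the PR):

```rust
use prometheus::{register_int_gauge, IntGauge};

fn db_size_gauge() -> IntGauge {
    // The `_bytes` suffix encodes the unit, per the Prometheus naming guide.
    register_int_gauge!(
        "meilisearch_db_size_bytes",
        "Size of the Meilisearch database in bytes"
    )
    .expect("metric can be registered")
}
```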

Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-08-24 10:12:24 +00:00
c445334070 Merge #2636
2636: Upgrade milli to v0.33.0 r=Kerollmops a=ManyTheFish

# Summary
- Update milli to v0.33.0
- Classify the new InvalidLmdbOpenOptions error as an Internal error
- Update filter error check in tests
- Introduce Terms Matching Policies

fixes #2479
fixes #2484
fixes #2486
fixes #2516
fixes #2578
fixes #2580
fixes #2583
fixes #2600
fixes #2640
fixes #2672
fixes #2679
fixes #2686

# Terms Matching Policies
This PR allows end users to customize term matching policies

## Todo

- [x] Update the API to return the number of pages and allow users to directly choose a page instead of computing an offset
- [x] Change generation of the query tree depending on the chosen settings https://github.com/meilisearch/milli/pull/598

## Small Documentation

### Default search query

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "doctor of tokio" }'
```

**result**:
```json
{
  "hits":[...],
  "estimatedTotalHits":32,
  "query":"doctor of tokio",
  "limit":20,
  "offset":0,
  "processingTimeMs":7
}
```

The default behavior is unchanged from the current Meilisearch behavior:
if we don't have enough documents to fit the requested limit, we remove query words from the last typed to the first.

### Search query with the `matchingStrategy` parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "doctor of tokio", "matchingStrategy": "all"}'
```

**result**:
```json
{
  "hits":[...],
  "estimatedTotalHits":1,
  "query":"doctor of tokio",
  "limit":20,
  "offset":0,
  "processingTimeMs":7
}
```

### allowed `matchingStrategy` values

#### `last`
The default behavior: if we don't have enough documents to fit the requested limit, we remove query words from the last typed to the first.

#### `all`
No word will be removed; if we don't have enough documents to fit the requested limit, we return only the documents we found.
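A rough sketch of the two policies in Rust (illustrative types, not milli's implementation):

```rust
#[derive(Clone, Copy)]
enum MatchingStrategy {
    Last, // drop query words from the last typed to the first until enough hits
    All,  // never drop a word; only documents containing every word match
}

fn candidate_queries<'a>(words: &[&'a str], strategy: MatchingStrategy) -> Vec<Vec<&'a str>> {
    match strategy {
        // "doctor of tokio" -> ["doctor of tokio", "doctor of", "doctor"]
        MatchingStrategy::Last => {
            (1..=words.len()).rev().map(|n| words[..n].to_vec()).collect()
        }
        MatchingStrategy::All => vec![words.to_vec()],
    }
}
```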

### In charge of the feature

Core: `@ManyTheFish` & `@curquiza`  
Docs: TBD
Integration: `@bidoubiwa` 


Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Many the fish <many@meilisearch.com>
2022-08-23 16:21:00 +00:00
651a22b1ed Enhance enum documentation
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-23 18:11:20 +02:00
ff59ae56f4 cargo fmt 2022-08-23 17:17:02 +02:00
b2577aac52 Add a suffix describing the unit when needed; Replace MeiliSearch by Meilisearch; Clarify some metric names 2022-08-23 17:09:27 +02:00
c9bb111ef3 Implement all and last matching strategy 2022-08-23 17:07:43 +02:00
e2af8dccb8 Fix filter tests 2022-08-23 16:39:39 +02:00
5e206ee84b Classify InvalidLmdbOpenOptions as an Internal error 2022-08-23 16:39:39 +02:00
aff4b64265 Update dependencies 2022-08-23 16:39:39 +02:00
0a2ef0037f Merge #2689 #2690
2689: Use mimalloc as the global allocator r=Kerollmops a=loiclec

milli has switched its global allocator to mimalloc already, and we have seen some performance gains as a result. Furthermore, we can use mimalloc as the global allocator on all platforms whereas jemalloc was only activated on Linux. 

This PR brings mimalloc to Meilisearch as well. 
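For reference, a sketch of the standard pattern with the `mimalloc` crate:

```rust
// Replace the default global allocator for the whole binary.
#[global_allocator]
static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
```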

2690: Add LTO and codegen-units=1 to release compile options r=Kerollmops a=loiclec

This PR brings Meilisearch's release compile options in line with milli (see https://github.com/meilisearch/milli/pull/606 ). 

Adding LTO and codegen-units=1 will make compile times longer, but they also speed up the final binary significantly.

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-08-23 12:05:02 +00:00
40ae26478a Merge #2691
2691: Update version for next release (v0.29.0) in Cargo.toml files r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-08-23 11:41:22 +00:00
6fe3f285ce Update version for next release (v0.29.0) 2022-08-23 13:39:56 +02:00
72f8adaa70 Add LTO and codegen-units=1 to release compile options 2022-08-23 13:03:57 +02:00
e659c08ac4 Use mimalloc as the global allocator 2022-08-23 12:58:10 +02:00
ea365126b4 Merge #2657
2657: prometheus and grafana dashboards implemented r=irevoire a=pavo-tusker

Implemented Basic Prometheus Metrics and Grafana Dashboard using this [Prometheus Crate](https://crates.io/crates/prometheus) [#496](https://github.com/meilisearch/product/issues/496)
![Screenshot from 2022-08-04 19-59-06](https://user-images.githubusercontent.com/43550760/182880420-71ec8591-a2cb-4fd5-b1c5-911a6dcbdaf9.png)
![Screenshot from 2022-08-04 19-58-56](https://user-images.githubusercontent.com/43550760/182880433-11727814-e230-44dd-89c9-fec3baa47b11.png)
![Screenshot from 2022-08-04 19-58-40](https://user-images.githubusercontent.com/43550760/182880436-73312a68-4f20-49f0-80e9-5e344f96db6f.png)


Co-authored-by: mohandasspat <mohan.s@pavo-tusker.com>
Co-authored-by: Pavo-Tusker <43550760+pavo-tusker@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-08-22 11:15:42 +00:00
a277cc9a18 Merge branch 'metrics/prometheus-setup' of https://github.com/pavo-tusker/meilisearch into metrics/prometheus-setup 2022-08-22 13:34:37 +05:30
a37c7ba1bb clippy & cargo fixed 2022-08-22 13:34:19 +05:30
ef1d6b1694 clippy & cargo fixed 2022-08-22 13:27:26 +05:30
099abefc6d Merge branch 'main' into metrics/prometheus-setup 2022-08-22 09:56:15 +02:00
a05101af4d clippy & fmt fixed 2022-08-22 13:21:22 +05:30
109540011a conflict fixes 2022-08-22 13:21:22 +05:30
2f92169e48 clippy issue in metrics fixed 2022-08-22 13:21:22 +05:30
a58b00d8f1 Update meilisearch-http/src/option.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-22 13:21:22 +05:30
2b8f3c26ec Changed prometheus metrics feature as optional 2022-08-22 13:21:22 +05:30
0b6ca73790 review fixes 2022-08-22 13:21:22 +05:30
1f1482e97c Update meilisearch-http/src/routes/mod.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-22 13:21:22 +05:30
25fecf9360 clippy & rustfmt fixed 2022-08-22 13:21:22 +05:30
4bee0565e8 prometheus and grafana dashboards implemented 2022-08-22 13:21:22 +05:30
d5da063666 clippy & fmt fixed 2022-08-22 10:52:09 +05:30
43bb5176a9 conflict fixes 2022-08-22 10:30:07 +05:30
a0734c991c Merge #2674
2674: Add analytics on the stats routes r=ManyTheFish a=irevoire

# Pull Request

## What does this PR do?
Implements https://github.com/meilisearch/specifications/pull/169

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-08-18 14:19:56 +00:00
cb29d7d124 Merge #2678
2678: Accept either an array of documents or a single document r=irevoire a=Kerollmops

# Pull Request

## What does this PR do?
Fixes #2671
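Accepting both payload shapes is what serde's untagged enums express; a sketch (not necessarily the route's actual types):

```rust
use serde::Deserialize;
use serde_json::{Map, Value};

#[derive(Deserialize)]
#[serde(untagged)]
enum DocumentsPayload {
    One(Map<String, Value>),       // { "id": 1, ... }
    Many(Vec<Map<String, Value>>), // [ { "id": 1, ... }, ... ]
}
```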

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-18 14:00:01 +00:00
e32d5ef2b3 Fix the test with an incomprehensible user error message 2022-08-18 14:37:44 +02:00
ee69ede1ce Merge #2677
2677: Hide the batch_uid field from the tasks route r=Kerollmops a=Kerollmops

# Pull Request

## What does this PR do?

Fixes #2676

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-18 10:01:09 +00:00
9b2036ac05 Accept either an array of documents or a single document 2022-08-18 11:55:14 +02:00
5c543f9d94 Add a test for single document upload 2022-08-18 11:33:22 +02:00
0c03ed3c1e Hide the batch_uid field from the tasks route 2022-08-18 11:15:21 +02:00
54a0b47c2b clippy issue in metrics fixed 2022-08-17 21:08:28 +05:30
947fb5c956 Update meilisearch-http/src/option.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-17 20:57:07 +05:30
cd18459484 Changed prometheus metrics feature as optional 2022-08-17 20:56:15 +05:30
225d9936ed review fixes 2022-08-17 20:55:29 +05:30
93daa4c464 Update meilisearch-http/src/routes/mod.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-17 20:55:29 +05:30
d08c77706c clippy & rustfmt fixed 2022-08-17 20:55:29 +05:30
de58ccd4ba prometheus and grafana dashboards implemented 2022-08-17 20:54:39 +05:30
62240b7e19 add analytics on the stats routes 2022-08-17 16:12:26 +02:00
22874ce300 Merge #2664
2664: 🐞 fix: Support https in print_launch_resume r=irevoire a=evpeople

fix #2660

# Pull Request

## What does this PR do?
Fixes #2660 
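The idea of the fix, as a hedged sketch (field names are assumptions, not the actual `Opt` fields): derive the printed scheme from whether TLS is configured instead of hard-coding `http`.

```rust
fn scheme(ssl_cert_path: Option<&std::path::Path>) -> &'static str {
    if ssl_cert_path.is_some() { "https" } else { "http" }
}

// print_launch_resume would then log e.g.
// "Web interface: {scheme}://{addr}" instead of always "http://{addr}".
```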

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: evpeople <hangcaihui@gmail.com>
2022-08-16 14:40:20 +00:00
b5f91b91c3 Merge #2523
2523: Improve the tasks error reporting when processed in batches r=irevoire a=Kerollmops

This fixes #2478 by changing the behavior of the task handler when there is an error in a batch of document addition or update.

What changes is that when a task in a batch hits a user error, we now report that task as failed with the right error message, but we continue to process the other tasks. A user error can be an invalid geo field, or a document id that is invalid or missing.
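A simplified sketch of that behavior (all types illustrative, not the actual scheduler code):

```rust
enum Status { Enqueued, Succeeded, Failed(String) }
struct Task { status: Status }
enum TaskError { User(String), Internal(String) }

// Stand-in for the real task processing.
fn apply(_task: &Task) -> Result<(), TaskError> { Ok(()) }

fn process_batch(tasks: &mut [Task]) -> Result<(), TaskError> {
    for task in tasks.iter_mut() {
        match apply(task) {
            Ok(()) => task.status = Status::Succeeded,
            // A user error (invalid geo field, invalid or missing document
            // id, ...) fails only this task; the batch keeps going.
            Err(TaskError::User(msg)) => task.status = Status::Failed(msg),
            // Internal errors still abort the whole batch.
            Err(other) => return Err(other),
        }
    }
    Ok(())
}
```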

fixes #2582, #2478

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-08-16 14:15:30 +00:00
b6e6a08f7d Fix CI test 2022-08-16 15:14:01 +02:00
8198bb9da2 Merge #2665
2665: 📎 makes clippy happy r=Kerollmops a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-08-16 12:01:56 +00:00
68a7d6bc61 reformat 2022-08-12 15:11:01 +02:00
83e20027fd 📎 makes clippy happy 2022-08-12 14:18:27 +02:00
f21a4d61da 🌈 style(http/main.rs): 2022-08-12 16:16:23 +08:00
12538d5a44 🐞 fix: Support https when print_launch_resume
fix #2660
2022-08-11 21:29:18 +08:00
ae174c2cca Fix task serialization 2022-08-11 13:35:35 +02:00
e6b806e0cf Merge #2662
2662: Fix(cli): Clamp databases max size to a multiple of system page size r=Kerollmops a=ManyTheFish

fix #2659
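A minimal sketch of the clamping the title describes (function name illustrative; LMDB map sizes must be a multiple of the OS page size):

```rust
/// Rounds `requested` down to the nearest multiple of the system page size,
/// so the database max size stays page-aligned.
fn clamp_to_page_size(requested: usize, page_size: usize) -> usize {
    requested - (requested % page_size)
}

// e.g. with 4096-byte pages, a 10_000-byte request becomes 8_192 bytes.
```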


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-08-11 08:49:24 +00:00
cf955a77db Fix(cli): Clamp databases max size to a multiple of system page size
fix #2659
2022-08-11 10:44:47 +02:00
3a48de136e Add autobatching test 2022-08-10 17:02:29 +02:00
dfbdc565f9 Merge #2653
2653: Bump Swatinem/rust-cache from 1.4.0 to 2.0.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
Release notes for v2.0.0 (from [Swatinem/rust-cache's releases](https://github.com/Swatinem/rust-cache/releases)):
- The action code was refactored to allow for caching multiple workspaces and different `target` directory layouts.
- The `working-directory` and `target-dir` input options were replaced by a single `workspaces` option that has the form of `$workspace -> $target`.
- Support for considering `env-vars` as part of the cache key.
- The `sharedKey` input option was renamed to `shared-key` for consistency.

Commits: https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0



Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-02 07:58:48 +00:00
1bb05f2716 Bump Swatinem/rust-cache from 1.4.0 to 2.0.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 17:07:30 +00:00
e6f03f82df Fix clippy warnings 2022-07-28 15:56:22 +02:00
58d2aad309 Change binary option and add env var support 2022-07-28 15:13:49 +02:00
e3426d5b7a Improve the tasks error reporting 2022-07-28 15:12:54 +02:00
73d4869e5e Make the changes to plug the new DocumentsBatch system 2022-07-28 14:45:33 +02:00
fe32097964 Update milli v0.32 2022-07-28 14:45:10 +02:00
dc21af46e5 Merge #2634
2634: Bring `stable` changes into `main` (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 13:22:57 +00:00
22aa349e31 Merge #2633
2633: Fix highlight issue by updating milli to v0.31.2 r=ManyTheFish a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2627

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-21 11:08:37 +00:00
9cf6acb671 Fix highlight issue by updating milli to v0.31.2 2022-07-21 14:11:24 +04:00
78c5826c57 Merge #2631
2631: Update mini-dashboard to v0.2.1 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2629

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 06:37:57 +00:00
7e6f3274fa Update mini-dashboard to v0.2.1 2022-07-21 09:47:07 +04:00
94ef326be3 Merge #2630
2630: Update version for next release (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-20 13:42:51 +00:00
d01a3ab889 Update version for next release (v0.28.1) 2022-07-20 15:46:53 +04:00
ad494b6f77 Merge #2625
2625: Update link to Cloud beta form r=curquiza a=davelarkan

# Pull Request

## What does this PR do?
Fixes #2624

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Dave Larkan <davelarkan@gmail.com>
2022-07-19 11:09:12 +00:00
2f11686c81 Update link to Cloud beta form 2022-07-19 11:10:16 +01:00
01a47e2db5 Merge #2598
2598: Bring `stable` into `main` (v0.28.0) r=curquiza a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-11 14:06:02 +00:00
cfacc79ad7 Remove duplicate step in CI 2022-07-11 15:18:15 +02:00
8e370ed9ab Merge branch 'main' into stable 2022-07-11 14:41:15 +02:00
32d6af6527 Merge remote-tracking branch 'origin/release-v0.28.0' into stable 2022-07-11 14:37:21 +02:00
be3240d2dd Merge #2592
2592: Chores: Add a dedicated section for Language Support in the issue template r=curquiza a=ManyTheFish



This new section is placed above the feature proposal one because language support is kind of a sub-category of it,
and so, in the reading order, you create a feature proposal only if your issue is not related to language support.


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-07 12:37:36 +00:00
2de6868858 Chores: Add a dedicated section for Language Support in the issue template
This new section is placed above the feature proposal one because language support is kind of a sub-category of it,
and so, in the reading order, you create a feature proposal only if your issue is not related to language support. 2022-07-07 13:48:47 +02:00
2022-07-07 13:48:47 +02:00
d419a91207 Merge #2591
2591: Introduce the Tasks Seen event when filtering r=Kerollmops a=Kerollmops

This PR fixes #2377 by introducing the Tasks Seen analytics events.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-07 11:41:01 +00:00
a9fb5a4d50 Introduce the Tasks Seen event when filtering 2022-07-07 11:39:23 +02:00
0353537fef Merge #2579
2579: API keys: adds action * for actions r=irevoire a=phdavis1027

# Pull Request
This PR builds on `@janithpet`'s addition to DocumentsAll; it's basically a copy-and-paste job, except I used `iter.filter()` to avoid the possibility of duplication that they mentioned. I'm not sure how much that matters.
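A rough sketch of that approach (illustrative variants; the real list lives in action.rs):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Action {
    All, // the new `*` action
    DocumentsAll,
    DocumentsGet,
    DocumentsAdd,
    // ... one variant per concrete action
}

const CONCRETE_ACTIONS: &[Action] =
    &[Action::DocumentsAll, Action::DocumentsGet, Action::DocumentsAdd];

/// Expand `*` into every concrete action, filtering out the ones the key
/// already lists so that no duplicates are introduced.
fn expand(mut actions: Vec<Action>) -> Vec<Action> {
    if actions.contains(&Action::All) {
        let missing: Vec<Action> = CONCRETE_ACTIONS
            .iter()
            .filter(|a| !actions.contains(a))
            .copied()
            .collect();
        actions.extend(missing);
    }
    actions
}
```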

Also, hi! This is my first open-source contribution and my first attempt to write Rust for anyone other than myself, so any feedback whatsoever is appreciated. 

## What does this PR do?
Fixes #2560

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Phillip Davis <phdavis1027@gmail.com>
2022-07-07 08:54:23 +00:00
da7729e4a8 Merge #2589
2589: Update create-issue-dependencies.yml r=ManyTheFish a=curquiza

Minor change to update the format in the description of the issue created: remove the useless newlines

See the format without this change: https://github.com/meilisearch/meilisearch/issues/2588

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-07 08:37:33 +00:00
074a6a0cce Fix typos in HeadAuthStore::put_api_key 2022-07-06 22:24:46 -04:00
755b1a59a2 Merge #2584
2584: Format API keys in hexa instead of base64 r=curquiza a=ManyTheFish

This PR:
- Changes API key generation and formatting to ease the generation of the key made by our users
- updates the `uuid` crate version

The API key can now be generated in bash as below:
```sh
echo -n $HYPHENATED_UUID | openssl dgst -sha256 -hmac $MASTER_KEY
```

Fixes the issue raised in [product/discussion#421](https://github.com/meilisearch/product/discussions/421#discussioncomment-3079410); this should not impact anything in documentation or integrations, but eases key generation on the user side.
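For reference, a hedged Rust equivalent of that command using the `hmac`, `sha2` and `hex` crates (an assumption; only the openssl one-liner above comes from the PR):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

fn generate_api_key(hyphenated_uuid: &str, master_key: &str) -> String {
    let mut mac = Hmac::<Sha256>::new_from_slice(master_key.as_bytes())
        .expect("HMAC accepts keys of any size");
    mac.update(hyphenated_uuid.as_bytes());
    // Hex formatting is the point of the PR: no base64 padding to escape.
    hex::encode(mac.finalize().into_bytes())
}
```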

poke `@gmourier` 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-06 12:49:04 +00:00
bb5b18b82c Update create-issue-dependencies.yml 2022-07-06 11:01:25 +02:00
01d9560318 Merge #2587
2587: Update mini-dashboard to v0.2.0 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2469

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-06 08:54:16 +00:00
719879d4d2 Update mini-dashboard to v0.2.0 2022-07-06 08:12:17 +02:00
fb9b298645 Leave actions as HashSet 2022-07-05 21:52:50 -04:00
23f02f241e Run the code formatter 2022-07-05 21:08:56 -04:00
5d80ff41a2 Clean up put_key impl 2022-07-05 21:06:58 -04:00
f4989590db Merge #2585
2585: Add CI creates issue updating dependencies r=curquiza a=VasiliySoldatkin

Adds a CI job that uses a GitHub Actions cron schedule to create an "Upgrade dependencies" issue every 3 months
Context: [#2569](https://github.com/meilisearch/meilisearch/issues/2569)


Co-authored-by: Vasiliy Soldatkin <vasiliy.soldatkin@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-05 19:05:43 +00:00
bba5fab5e5 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:03:08 +02:00
05ffe24d64 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:40 +02:00
6f95ae9879 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:32 +02:00
480b881e15 Remove template and add GHA from review 2022-07-05 21:08:59 +03:00
43fecbf382 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:35:09 +02:00
5588a6415a Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:57 +02:00
b0757e75c4 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:49 +02:00
106be03ba8 Merge #2539
2539: Update Docker credentials r=curquiza a=curquiza

This is to avoid using the tpayet credentials and use `@meili-bot` credentials instead.

The `DOCKER_USERNAME` and `DOCKER_PASSWORD` are still present as secrets; I will remove them once v0.28.0 is fully merged (they are still used on `release-v0.28.0`)

I tested by creating a tag on the branch, and it worked: the tag was pushed to Docker Hub by meili-bot

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-05 17:22:26 +00:00
70c55208f1 Merge #2586
2586: Fix Clippy r=irevoire a=Kerollmops

This PR fixes clippy on the `release-v0.28.0` branch.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-05 15:55:00 +00:00
d56bf66022 Make clippy happy 2022-07-05 17:48:44 +02:00
2c300c72c9 Add CI creates issue updating dependencies 2022-07-05 18:28:29 +03:00
a146fd45b9 Format API keys in hexa instead of base64 2022-07-05 16:14:18 +02:00
63e1fb4f96 Run the code formatter 2022-07-04 21:49:40 -04:00
be1c6f9dc4 Update tests to include .* permissions for tasks, indexes, dumps, stats, and settings 2022-07-04 21:38:54 -04:00
c251b527b0 Add iterators over * for stats, dumps, tasks, settings, and indexes; change documents impl to prevent duplication 2022-07-04 21:38:31 -04:00
1dc3724c1f Added [...]_ALL enum members in action.rs 2022-07-04 21:38:21 -04:00
c1ad56281d Merge #2545 #2556 #2568 #2573 #2574 #2575
2545: meilisearch-lib was missing a feature in its cargo.toml r=Kerollmops a=irevoire

Meilisearch-lib was missing a feature to compile on its own

2556: chore: `meilisearch-http` readability improvements r=curquiza a=ryanrussell

## What does this PR do?
Readability improvements in `meilisearch-http`. 

Believe these are pretty straightforward; let me know if anything needs adjusting or reverting :)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?



2568: Improve manifest for dependabot r=curquiza a=curquiza

Improve dependabot manifest
- `rebase-strategy: disabled` -> avoids the issues with bors we had in the past with the integration team; dependabot's automatic rebase does not fit with the rebase bors will try to do
- `labels` -> better triage for when I create the release changelogs

2573: Bump docker/login-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
Release notes for v2.0.0 (from [docker/login-action's releases](https://github.com/docker/login-action/releases)):
- Node 16 as default runtime by @crazy-max (#161); this requires a minimum [Actions Runner](https://github.com/actions/runner/releases/tag/v2.285.0) version of v2.285.0, available by default in GHES 3.4 or later.
- chore: update dev dependencies and workflow by @crazy-max (#170)
- Bump `@actions/exec` from 1.1.0 to 1.1.1 (#167)
- Bump `@actions/io` from 1.1.1 to 1.1.2 (#168)
- Bump minimist from 1.2.5 to 1.2.6 (#176)
- Bump https-proxy-agent from 5.0.0 to 5.0.1 (#182)

Full changelog: https://github.com/docker/login-action/compare/v1.14.1...v2.0.0 (release notes for v1.10.0 through v1.14.1 truncated)
Commits: https://github.com/docker/login-action/compare/v1...v2

2574: Bump Swatinem/rust-cache from 1.3.0 to 1.4.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
Release notes for v1.4.0 (from [Swatinem/rust-cache's releases](https://github.com/Swatinem/rust-cache/releases)):
- Clean both `debug` and `release` target directories.

Full diff: https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0

2575: Bump docker/build-push-action from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/build-push-action/releases">docker/build-push-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Standalone mode support by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a>)</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/571">#571</a>)</li>
<li>Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/573">#573</a>)</li>
<li>Bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/582">#582</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/584">#584</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/595">#595</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0">https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0</a></p>
<h2>v2.10.0</h2>
<ul>
<li>Add <code>imageid</code> output and use metadata to set <code>digest</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/569">#569</a>)</li>
<li>Add <code>build-contexts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/563">#563</a>)</li>
<li>Enhance outputs display (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/559">#559</a>)</li>
</ul>
<h2>v2.9.0</h2>
<ul>
<li><code>add-hosts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/553">#553</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/555">#555</a>)</li>
<li>Fix git context subdir example and improve README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/552">#552</a>)</li>
<li>Add e2e tests for ACR (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/548">#548</a>)</li>
<li>Add description on <code>github-token</code> option to README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/544">#544</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/549">#549</a>)</li>
</ul>
<h2>v2.8.0</h2>
<ul>
<li>Allow specifying subdirectory with default git context (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/531">#531</a>)</li>
<li>Add <code>cgroup-parent</code>, <code>shm-size</code>, <code>ulimit</code> inputs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/501">#501</a>)</li>
<li>Don't set outputs if empty or nil (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/470">#470</a>)</li>
<li>docs: example to sanitize tags with metadata-action (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/476">#476</a>)</li>
<li>docs: wrong syntax to sanitize repo slug (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/475">#475</a>)</li>
<li>docs: test before pushing your image (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/455">#455</a>)</li>
<li>readme: remove v1 section (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/500">#500</a>)</li>
<li>ci: virtual env file system info (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/510">#510</a>)</li>
<li>dev: update workflow (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/499">#499</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/160">#160</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/469">#469</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/465">#465</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/451">#451</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/459">#459</a>)</li>
</ul>
<h2>v2.7.0</h2>
<ul>
<li>Add <code>metadata</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/412">#412</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/439">#439</a>)</li>
<li>Add note to sanitize tags (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/426">#426</a>)</li>
<li>Cache backend API docs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/406">#406</a>)</li>
<li>Git context now supports subdir (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/407">#407</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/415">#415</a>)</li>
</ul>
<h2>v2.6.1</h2>
<ul>
<li>Small typo and ensure trimmed output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/400">#400</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="e551b19e49"><code>e551b19</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a> from crazy-max/node-16</li>
<li><a href="3554377aa3"><code>3554377</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a> from crazy-max/ci-fix-test</li>
<li><a href="a62bc1b22b"><code>a62bc1b</code></a> ci: fix standalone test</li>
<li><a href="c2085839e1"><code>c208583</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> from crazy-max/standalone-mode</li>
<li><a href="fcd91249e5"><code>fcd9124</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/607">#607</a> from docker/dependabot/github_actions/docker/metadata...</li>
<li><a href="0ebe720aed"><code>0ebe720</code></a> Bump docker/metadata-action from 3 to 4</li>
<li><a href="38b45804b5"><code>38b4580</code></a> Standalone mode support</li>
<li><a href="ba317382dc"><code>ba31738</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a> from docker/dependabot/npm_and_yarn/csv-parse-5.0.4</li>
<li><a href="43721d2346"><code>43721d2</code></a> Update generated content</li>
<li><a href="5ea21bf2ba"><code>5ea21bf</code></a> Fix csv-parse implementation since major update</li>
<li>Additional commits viewable in <a href="https://github.com/docker/build-push-action/compare/v2...v3">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/build-push-action&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-04 12:13:46 +00:00
83e5f45a91 Merge #2576
2576: Make clippy happy r=curquiza a=Kerollmops

This PR fixes clippy.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-04 11:53:27 +00:00
aff8cd1774 Make clippy happy 2022-07-04 13:36:56 +02:00
d1296d03ea Bump docker/build-push-action from 2 to 3
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:44 +00:00
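
In a consuming workflow, this bump is a one-line change to the `uses:` reference. A minimal sketch, assuming a typical build-and-push step (the `context` and `tags` values below are illustrative, not taken from the repository):

<pre lang="yaml"><code>- uses: docker/build-push-action@v3  # previously @v2
  with:
    context: .
    push: true
    tags: getmeili/meilisearch:latest  # illustrative tag
</code></pre>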
6f7a4d95d9 Bump Swatinem/rust-cache from 1.3.0 to 1.4.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:40 +00:00
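
The corresponding workflow change is equally small; a sketch, assuming the cache step runs right after checkout (step placement is an assumption):

<pre lang="yaml"><code>- uses: actions/checkout@v3
- uses: Swatinem/rust-cache@v1.4.0  # previously 1.3.0; caches ~/.cargo and target/
</code></pre>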
9ea96fa5c1 Bump docker/login-action from 1 to 2
Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:38 +00:00
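
As with the other action bumps, only the major tag changes; a sketch, assuming registry credentials stored as repository secrets (the secret names are assumptions):

<pre lang="yaml"><code>- uses: docker/login-action@v2  # previously @v1
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}  # assumed secret name
    password: ${{ secrets.DOCKERHUB_TOKEN }}     # assumed secret name
</code></pre>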
19f732e623 Update manifest for dependabot 2022-06-29 11:35:41 +02:00
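
For reference, a Dependabot manifest covering GitHub Actions updates like the ones in this series looks roughly like the sketch below; the schedule interval is an assumption, not taken from the commit:

<pre lang="yaml"><code># .github/dependabot.yml (sketch)
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"  # assumed interval
</code></pre>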
d833e62282 Merge #2564 #2565 #2566 #2567
2564: Bump codecov/codecov-action from 1 to 3 r=curquiza a=dependabot[bot]

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/releases">codecov/codecov-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>`@​actions/core</code>` from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>v2.1.0</h2>
<h2>2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.6.0 to 16.9.0</li>
</ul>
<h2>v2.0.3</h2>
<h2>2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/458">#458</a> build(deps-dev): bump eslint from 7.31.0 to 7.32.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/465">#465</a> build(deps-dev): bump <code>`@​typescript-eslint/eslint-plugin</code>` from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/466">#466</a> build(deps-dev): bump <code>`@​typescript-eslint/parser</code>` from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/468">#468</a> build(deps-dev): bump <code>`@​types/jest</code>` from 26.0.24 to 27.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/470">#470</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.4.0 to 16.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/472">#472</a> build(deps): bump path-parse from 1.0.6 to 1.0.7</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/473">#473</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.0.0 to 27.0.1</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md">codecov/codecov-action's changelog</a>.</em></p>
<blockquote>
<h2>3.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> Incorporate <code>xcode</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.33.3 to 0.33.4</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.23 to 17.0.25</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
</ul>
<h2>3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>`@​actions/core</code>` from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.6.0 to 16.9.0</li>
</ul>
<h2>2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="81cd2dc814"><code>81cd2dc</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> from codecov/feat-xcode</li>
<li><a href="a03184e530"><code>a03184e</code></a> feat: add xcode support</li>
<li><a href="6a6a9ae7b1"><code>6a6a9ae</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> from codecov/dependabot/npm_and_yarn/vercel/ncc-0.33.4</li>
<li><a href="92a872a5e7"><code>92a872a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> from codecov/dependabot/npm_and_yarn/types/node-17.0.25</li>
<li><a href="43a9c182dd"><code>43a9c18</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> from codecov/dependabot/npm_and_yarn/jest-junit-13.2.0</li>
<li><a href="13ce822ccd"><code>13ce822</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/690">#690</a> from codecov/ci-v3</li>
<li><a href="4d6dbaaea6"><code>4d6dbaa</code></a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
<li><a href="98f0f19300"><code>98f0f19</code></a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.23 to 17.0.25</li>
<li><a href="d3021d9910"><code>d3021d9</code></a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.33.3 to 0.33.4</li>
<li><a href="2c83f35c20"><code>2c83f35</code></a> Update makefile to v3</li>
<li>Additional commits viewable in <a href="https://github.com/codecov/codecov-action/compare/v1...v3">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=codecov/codecov-action&package-manager=github_actions&previous-version=1&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

2565: Bump docker/setup-qemu-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-qemu-action/releases">docker/setup-qemu-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/43">#43</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a>)</li>
<li>Bump <code>@actions/core</code> from 1.3.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/37">#37</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/39">#39</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/41">#41</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/38">#38</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0">https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0</a></p>
<h2>v1.2.0</h2>
<ul>
<li>Display image information (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/36">#36</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.3.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/35">#35</a>)</li>
</ul>
<h2>v1.1.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/30">#30</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.6 to 1.2.7 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/29">#29</a>)</li>
</ul>
<h2>v1.0.2</h2>
<ul>
<li>Enhance workflow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/26">#26</a>)</li>
<li>Container based developer flow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/19">#19</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/20">#20</a>)</li>
</ul>
<h2>v1.0.1</h2>
<ul>
<li>Fix CVE-2020-15228</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8b122486ce"><code>8b12248</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a> from crazy-max/node-16</li>
<li><a href="466d53193c"><code>466d531</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/50">#50</a> from crazy-max/update-readme</li>
<li><a href="607c1922b5"><code>607c192</code></a> simplify usage example</li>
<li><a href="d7849ecb9c"><code>d7849ec</code></a> Node 16 as default runtime</li>
<li><a href="2d4bfe71c9"><code>2d4bfe7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a> from crazy-max/update-dev</li>
<li><a href="224b802eb3"><code>224b802</code></a> chore: update dev dependencies and workflow</li>
<li><a href="95bd865778"><code>95bd865</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a> from docker/dependabot/npm_and_yarn/actions/exec-1.1.1</li>
<li><a href="cfd091faa1"><code>cfd091f</code></a> Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="d2a60302b8"><code>d2a6030</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/45">#45</a> from docker/dependabot/github_actions/actions/checkout-3</li>
<li><a href="97dc484a91"><code>97dc484</code></a> Bump actions/checkout from 2 to 3</li>
<li>Additional commits viewable in <a href="https://github.com/docker/setup-qemu-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-qemu-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

2566: Bump actions/checkout from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/releases">actions/checkout's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Updated to the node16 runtime by default
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0 to run, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<h2>v2.4.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>) by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/776">actions/checkout#776</a></li>
<li>Prepare changelog for v2.4.2. by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/778">actions/checkout#778</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/checkout/compare/v2...v2.4.2">https://github.com/actions/checkout/compare/v2...v2.4.2</a></p>
<h2>v2.4.1</h2>
<ul>
<li>Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></li>
</ul>
<h2>v2.4.0</h2>
<ul>
<li>Convert SSH URLs like <code>org-&lt;ORG_ID&gt;@github.com:</code> to <code>https://github.com/</code> - <a href="https://github-redirect.dependabot.com/actions/checkout/pull/621">pr</a></li>
</ul>
<h2>v2.3.5</h2>
<p>Update dependencies</p>
<h2>v2.3.4</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/379">Add missing <code>await</code>s</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/360">Swap to Environment Files</a></li>
</ul>
<h2>v2.3.3</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/345">Remove Unneeded commit information from build logs</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/326">Add Licensed to verify third party dependencies</a></li>
</ul>
<h2>v2.3.2</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/320">Add Third Party License Information to Dist Files</a></p>
<h2>v2.3.1</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></p>
<h2>v2.3.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></p>
<h2>v2.2.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></p>
<h2>v2.1.1</h2>
<p>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</p>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>v3.0.2</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/770">Add input <code>set-safe-directory</code></a></li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/762">Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/744">Bumped various npm package versions</a></li>
</ul>
<h2>v3.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/689">Update to node 16</a></li>
</ul>
<h2>v2.3.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></li>
</ul>
<h2>v2.3.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></li>
</ul>
<h2>v2.2.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></li>
</ul>
<h2>v2.1.1</h2>
<ul>
<li>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</li>
</ul>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/157">Add submodule support</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/144">Follow proxy settings</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/141">Fix ref for pr closed event when a pr is merged</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/128">Fix issue checking detached when git less than 2.22</a></li>
</ul>
<h2>v2.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/108">Do not pass cred on command line</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/107">Add input persist-credentials</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/104">Fallback to REST API to download repo</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="2541b1294d"><code>2541b12</code></a> Prepare changelog for v3.0.2. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/777">#777</a>)</li>
<li><a href="0ffe6f9c55"><code>0ffe6f9</code></a> Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>)</li>
<li><a href="dcd71f6466"><code>dcd71f6</code></a> Enforce safe directory (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/762">#762</a>)</li>
<li><a href="add3486cc3"><code>add3486</code></a> Patch to fix the dependbot alert. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/744">#744</a>)</li>
<li><a href="5126516654"><code>5126516</code></a> Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/741">#741</a>)</li>
<li><a href="d50f8ea767"><code>d50f8ea</code></a> Add v3.0 release information to changelog (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/740">#740</a>)</li>
<li><a href="2d1c1198e7"><code>2d1c119</code></a> update test workflows to checkout v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/709">#709</a>)</li>
<li><a href="a12a3943b4"><code>a12a394</code></a> update readme for v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/708">#708</a>)</li>
<li><a href="8f9e05e482"><code>8f9e05e</code></a> Update to node 16 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/689">#689</a>)</li>
<li>See full diff in <a href="https://github.com/actions/checkout/compare/v2...v3">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

2567: Bump docker/setup-buildx-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0">https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0</a></p>
<h2>v1.7.0</h2>
<ul>
<li>Standalone mode by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> in (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/119">#119</a>)</li>
<li>Update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/114">#114</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/130">#130</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/108">#108</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/109">#109</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/110">#110</a>)</li>
<li>Bump actions/checkout from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/126">#126</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.7.1 to 1.7.2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/128">#128</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/129">#129</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/132">#132</a>)</li>
<li>Bump codecov/codecov-action from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/133">#133</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/136">#136</a>)</li>
</ul>
<h2>v1.6.0</h2>
<ul>
<li>Add <code>config-inline</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/106">#106</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/104">#104</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/101">#101</a>)</li>
</ul>
<h2>v1.5.1</h2>
<ul>
<li>Explicit version spec for caching (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/100">#100</a>)</li>
</ul>
<h2>v1.5.0</h2>
<ul>
<li>Allow building buildx from source (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/99">#99</a>)</li>
</ul>
<h2>v1.4.1</h2>
<ul>
<li>Fix <code>docker: invalid reference format</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/97">#97</a>)</li>
</ul>
<h2>v1.4.0</h2>
<ul>
<li>Update dev deps (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/95">#95</a>)</li>
<li>Use built-in <code>getExecOutput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/94">#94</a>)</li>
<li>Use <code>core.getBooleanInput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/93">#93</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/85">#85</a>)</li>
<li>Bump y18n from 4.0.0 to 4.0.3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/91">#91</a>)</li>
<li>Bump hosted-git-info from 2.8.8 to 2.8.9 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/89">#89</a>)</li>
<li>Bump ws from 7.3.1 to 7.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/90">#90</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.6.1 to 1.7.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/82">#82</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/86">#86</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.4.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/80">#80</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/87">#87</a>)</li>
</ul>
<h2>v1.3.0</h2>
<ul>
<li>Display BuildKit version (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/72">#72</a>)</li>
</ul>
<h2>v1.2.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/71">#71</a>)</li>
<li>Add test job for <code>config</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/68">#68</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="dc7b9719a9"><code>dc7b971</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a> from crazy-max/node16</li>
<li><a href="f55bc08278"><code>f55bc08</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/141">#141</a> from crazy-max/fix-test</li>
<li><a href="aa877a9d36"><code>aa877a9</code></a> ci: fix standalone test</li>
<li><a href="130c56f342"><code>130c56f</code></a> Node 16 as default runtime</li>
<li>See full diff in <a href="https://github.com/docker/setup-buildx-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:34:57 +00:00
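
All four bumps in this merge are Node 16 runtime upgrades, so adopting them is again a matter of moving the major tags; a combined sketch of the affected steps (the job layout is illustrative):

<pre lang="yaml"><code>steps:
  - uses: actions/checkout@v3            # previously @v2
  - uses: docker/setup-qemu-action@v2    # previously @v1
  - uses: docker/setup-buildx-action@v2  # previously @v1
  # codecov/codecov-action@v3 is bumped the same way in the coverage job
</code></pre>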
a733271ced Merge #2563
2563: Bump docker/metadata-action from 3 to 4 r=curquiza a=dependabot[bot]

Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/releases">docker/metadata-action's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/176">#176</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Do not sanitize before pattern matching by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/201">#201</a>)
<ul>
<li>Breaking change with <code>type=match</code> pattern matching</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0">https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0</a></p>
<h2>v3.8.0</h2>
<ul>
<li>Add attribute to enable/disable images by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/193">#193</a>)</li>
<li>Add <code>is_default_branch</code> global expression by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/192">#192</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/197">#197</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/198">#198</a>)</li>
<li>Update fixtures (dev) by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/190">#190</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/185">#185</a>)</li>
<li>Bump moment from 2.29.2 to 2.29.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/187">#187</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/195">#195</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0">https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0</a></p>
<h2>v3.7.0</h2>
<ul>
<li>Handle comments for multi-line inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/172">#172</a>)</li>
<li>Missing <code>json</code> output in <code>action.yml</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/167">#167</a>)</li>
<li>Update dev dependencies and workflow (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/175">#175</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/182">#182</a>)</li>
<li>Bump moment from 2.29.1 to 2.29.2 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/180">#180</a>)</li>
<li>Bump <code>@actions/github</code> from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/179">#179</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/173">#173</a>)</li>
</ul>
<h2>v3.6.2</h2>
<ul>
<li>Handle raw statement for pre-release (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/155">#155</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/156">#156</a>)</li>
</ul>
<h2>v3.6.1</h2>
<ul>
<li>Preserve quotes inside unquoted field (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/153">#153</a>)</li>
</ul>
<h2>v3.6.0</h2>
<ul>
<li><code>base_ref</code> global expression (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/142">#142</a>)</li>
<li>Trim tags and flavor inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/143">#143</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/135">#135</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/134">#134</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/132">#132</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/131">#131</a>)</li>
</ul>
<h2>v3.5.0</h2>
<ul>
<li>Add global expression <code>date</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/121">#121</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/122">#122</a>)</li>
</ul>
<h2>v3.4.1</h2>
<ul>
<li>Only return edge if branch matches (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/115">#115</a>)</li>
</ul>
<h2>v3.4.0</h2>
<ul>
<li>PEP 440 support (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/108">#108</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Upgrade guide</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/blob/master/UPGRADE.md">docker/metadata-action's upgrade guide</a>.</em></p>
<blockquote>
<h1>Upgrade notes</h1>
<h2>v2 to v3</h2>
<ul>
<li>Repository has been moved to docker org. Replace <code>crazy-max/ghaction-docker-meta@v2</code> with <code>docker/metadata-action@v4</code></li>
<li>The default bake target has been changed: <code>ghaction-docker-meta</code> &gt; <code>docker-metadata-action</code></li>
</ul>
<h2>v1 to v2</h2>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#inputs">inputs</a>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-sha"><code>tag-sha</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-edge--tag-edge-branch"><code>tag-edge</code> / <code>tag-edge-branch</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-semver"><code>tag-semver</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-match--tag-match-group"><code>tag-match</code> / <code>tag-match-group</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-latest"><code>tag-latest</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-schedule"><code>tag-schedule</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-custom--tag-custom-only"><code>tag-custom</code> / <code>tag-custom-only</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#label-custom"><code>label-custom</code></a></li>
</ul>
</li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#basic-workflow">Basic workflow</a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#semver-workflow">Semver workflow</a></li>
</ul>
### inputs

| New      | Unchanged    | Removed           |
|----------|--------------|-------------------|
| `tags`   | `images`     | `tag-sha`         |
| `flavor` | `sep-tags`   | `tag-edge`        |
| `labels` | `sep-labels` | `tag-edge-branch` |
|          |              | `tag-semver`      |
|          |              | `tag-match`       |
|          |              | `tag-match-group` |
|          |              | `tag-latest`      |
|          |              | `tag-schedule`    |
|          |              | `tag-custom`      |
|          |              | `tag-custom-only` |
|          |              | `label-custom`    |
#### `tag-sha`

```yaml
tags: |
  type=sha
```

#### `tag-edge` / `tag-edge-branch`

```yaml
tags: |
  # default branch
  type=edge
```

... (truncated)
**Commits**

- [`69f6fc9`](69f6fc9d46) Merge pull request [#203](https://github-redirect.dependabot.com/docker/metadata-action/issues/203) from crazy-max/san-fix
- [`2f5b5ae`](2f5b5ae8bf) Sanitize tag earlier
- [`f206c36`](f206c36955) Merge pull request [#202](https://github-redirect.dependabot.com/docker/metadata-action/issues/202) from crazy-max/v4-prep
- [`a20adfa`](a20adfa74e) readme: set metadata-action to v4
- [`26b9439`](26b9439ce3) Merge pull request [#201](https://github-redirect.dependabot.com/docker/metadata-action/issues/201) from crazy-max/fix-sanitization
- [`467883f`](467883f452) Merge pull request [#176](https://github-redirect.dependabot.com/docker/metadata-action/issues/176) from crazy-max/node-16
- [`5edf56f`](5edf56f2c4) Node 16 as default runtime
- [`678218f`](678218f2be) Note about image name and tag sanitization
- [`e44c1fb`](e44c1fbe6e) Do not sanitize before pattern matching
- See full diff in [compare view](https://github.com/docker/metadata-action/compare/v3...v4)


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/metadata-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)



Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:07:10 +00:00
28609c4176 chore(auth test): Correct compute_authorized_search macro
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:26 -05:00
a626cf4c99 chore: test comment readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:25 -05:00
9b660e1058 chore(routes): correct typo
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:22 -05:00
38b85ec547 Bump docker/setup-buildx-action from 1 to 2
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:43 +00:00
ed185fb636 Bump actions/checkout from 2 to 3
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:41 +00:00
f7b47b43f4 Bump docker/setup-qemu-action from 1 to 2
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:38 +00:00
8e703fbabe Bump codecov/codecov-action from 1 to 3
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:35 +00:00
7ced5c2cc7 Bump docker/metadata-action from 3 to 4
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:32 +00:00
ef95d1d545 Merge #2561
2561: Add dependabot for GHA r=Kerollmops a=curquiza

No panic, I'm only adding dependabot to update our CI, not the Rust dependencies; indeed, no easy command exists for those.
If it generates too many notifications, as usual, I will remove it.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-28 20:02:30 +00:00
f98c8d7f8b Add dependabot for GHA 2022-06-28 18:58:23 +02:00
4862993482 Merge #2525
2525: Auth: Provide all document related permissions for action document.* r=Kerollmops a=janithpet

Added an `Action::DocumentsAll` identifier as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486), along with the other necessary changes in `action.rs`. 

Inside `store.rs`, added an extra condition in `HeedAuthStore::put_api_key` to append all document related permissions if `key.actions.contains(&DocumentsAll)`.

Updated the tests as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486).

I am quite new to Rust, so please let me know if I have made any mistakes; have I written the code in the most idiomatic/efficient way? I am aware that the way I append the document permissions could create duplicates in the `actions` vector, but I am not sure how to fix that in a simple way (other than using other dependencies like [itertools](https://github.com/rust-itertools/itertools), for example).

## What does this PR do?
Fixes #2080 

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: janithPet <jpetangoda@gmail.com>
2022-06-28 14:02:06 +00:00
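
For illustration, a minimal sketch of the append-if-`DocumentsAll` idea described in this PR. The types and action names here are hypothetical, not Meilisearch's actual definitions, and the duplicate check addresses the author's dedup concern without extra dependencies:

```rust
// Hypothetical stand-in for the real `Action` enum in `action.rs`.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action {
    DocumentsAll,
    DocumentsAdd,
    DocumentsGet,
    DocumentsDelete,
}

// Sketch of the expansion done when storing a key (cf. `HeedAuthStore::put_api_key`).
fn expand_document_actions(actions: &mut Vec<Action>) {
    if actions.contains(&Action::DocumentsAll) {
        for action in [Action::DocumentsAdd, Action::DocumentsGet, Action::DocumentsDelete] {
            // Only append when absent, so the `actions` vector stays duplicate-free.
            if !actions.contains(&action) {
                actions.push(action);
            }
        }
    }
}
```
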
8a64ee0c14 Merge #2557
2557: add more tests on the formatted route r=Kerollmops a=irevoire

We had a bunch of tests trying to send arrays of arrays in a GET request; this is actually not supported, thus I updated the tests to only send a single array or a direct string.

Also, the real tests that ensure arrays of arrays are well handled live in milli, so I don't think we should lose time trying to « improve » our test surface on this point.

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-28 13:41:53 +00:00
05ee2eff01 add more tests on the formatted route 2022-06-28 13:17:55 +02:00
7ee8855499 Merge #2553
2553: Fix ci binary push r=irevoire a=curquiza

Prevent the version check when releasing an RC for the binary publish.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 11:18:40 +00:00
7f4fab876d Add fix to publish-binaries.yml 2022-06-27 13:11:58 +02:00
688d6f704b Merge #2549
2549: Fix CI checking version compatibility r=irevoire a=curquiza

- Fix CI checks in `if` regarding the `stable` variable created
- Fix command to remove prefix `refs/tags/v` from `GITHUB_REF` in `check-release.sh` script
- Change from `sh` to `bash` for `check-release.sh` script
- Fix error message

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 08:28:31 +00:00
f83188fd60 Fix CI with check-release.sh script 2022-06-24 13:11:04 +02:00
9e261b996f Merge #2543
2543: fix all the array on the search get route and improve the tests r=curquiza a=irevoire

fix #2527

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-23 14:51:36 +00:00
90b0a4e99f Merge #2546
2546: Bump milli to 0.31.1 r=irevoire a=Kerollmops



Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-23 11:07:20 +00:00
dad86fc3d6 Make the changes necessary to use milli 0.31.1 2022-06-23 10:47:49 +02:00
7feb15df28 Bump milli to 0.31.1 2022-06-23 10:47:48 +02:00
bf865f51bb Merge #2544
2544: Fix content of dump/assets for testing r=curquiza a=loiclec

This is just a change to the content of two .dump files used in the integration tests.

Those files contained settings with the criterion `desc(fame)`, which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of `desc(field)` or `asc(field)` were changed to 
`field:desc` and `field:asc`.

The tests were (wrongly) passing because the ranking rules were never parsed.

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-06-22 15:12:19 +00:00
13f258513f meilisearch-lib was missing a feature in its cargo.toml 2022-06-22 16:44:16 +02:00
f8aa21bc16 Fix content of dump/assets for testing
Some contained settings with the criterion desc(fame), which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of desc(field) or asc(field) were changed to 
field:desc and field:asc
2022-06-22 14:51:52 +02:00
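
To make the syntax change concrete, a purely illustrative snippet (the `fame` field comes from the dump assets described above; the code itself is ours):

```rust
fn main() {
    // Legacy criteria syntax, rejected by v3+ dump imports:
    let legacy = ["asc(fame)", "desc(fame)"];
    // Current syntax the test assets were rewritten to:
    let current = ["fame:asc", "fame:desc"];
    println!("{legacy:?} -> {current:?}");
}
```
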
1ffe90bf15 Merge #2530
2530: Check the version in Cargo.toml before publishing r=irevoire a=curquiza

Fixes #2079 

Also
- improves the current docker CI for v0.28.0: the current implementation would run 2 CIs instead of just one for the official release
- moves the `is-latest-release.sh` script and updates the documentation comment
- fixes the version of permissive-json-pointer

How to test the script?

```
export GITHUB_REF='refs/tags/v0.28.0'
sh .github/scripts/check-release.sh
```

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-22 12:42:29 +00:00
c47369b502 fix all the array on the search get route and improve the tests 2022-06-22 14:33:44 +02:00
32c8846514 Rollback 0.0.0 versioning 2022-06-22 12:20:12 +02:00
7490383d4f Update the not-released version in Cargo.toml files 2022-06-21 19:17:33 +02:00
c6ed756dbc Update script after review 2022-06-21 10:46:32 +02:00
de16de20f4 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:24 +02:00
c484d28646 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:17 +02:00
5318e53248 Move is-latest-release.sh script into the scripts folder 2022-06-21 10:12:00 +02:00
06e05cc4f8 Update Docker credentials 2022-06-20 14:39:50 +02:00
ba839a909f Merge #2537
2537: Change information place regarding release assets r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-20 08:40:25 +00:00
8b98303191 Change information place regarding release assets 2022-06-20 10:36:34 +02:00
2dde6fadb4 Check the version in Cargo.toml before publishing 2022-06-20 10:22:01 +02:00
eb8d53a915 Merge #2529
2529: Improve docker CI: push vX.Y tag (without patch) to DockerHub (for v0.28.0) r=curquiza a=curquiza

Bringing a commit from main ([5ae5b06](5ae5b06018)) into release-v0.28.0 to have this change already applied for v0.28.0

Following the one merged on main: https://github.com/meilisearch/meilisearch/pull/2521

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 15:38:25 +00:00
10f3150150 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 17:14:09 +02:00
54cd9976d7 Merge #2521
2521: Improve docker CI: push `vX.Y` tag (without patch) to DockerHub (#2507) r=curquiza a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2507

Fixes #2497 

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 13:28:27 +00:00
5ae5b06018 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 09:51:20 +02:00
6f910f89eb Ran formatter on the code. 2022-06-15 22:23:38 +01:00
b455c3897f Merge #2524
2524: Add the specific routes for the pagination and faceting settings r=curquiza a=Kerollmops

Fixes #2522

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-15 17:01:52 +00:00
9a8fb6c55a Updated actions identifiers to be in a more pleasing order 2022-06-15 17:27:41 +01:00
4016161035 Provide all document related permissions for action document.* 2022-06-15 16:11:39 +01:00
9d692ba1c6 Add more tests for the pagination and faceting subsettings routes 2022-06-15 15:53:43 +02:00
22e1ac969a Add specific routes for the pagination and faceting settings 2022-06-15 15:27:06 +02:00
3340af1ba9 Merge #2517
2517: Fix typos in `tasks/.rs` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

## What does this PR do?
Readability in `tasks/.rs` files.

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-15 09:27:21 +00:00
49d509936b Merge #2520
2520: Use nightly for cargo fmt in CI (for `release-v0.28.0` branch) r=Kerollmops a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2519

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 09:08:40 +00:00
e46b853fbf Merge #2519
2519: Use nightly for cargo fmt in CI r=Kerollmops a=curquiza

Rustfmt CI currently does not pass on main with nightly, see https://github.com/meilisearch/meilisearch/pull/2517#issuecomment-1156138216

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 08:35:46 +00:00
44e004d895 Use nightly for cargo fmt in CI 2022-06-15 10:33:03 +02:00
71bf9b5b9b docs: Readability improvements in tasks/
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-14 20:38:45 -05:00
053071d866 Merge #2502
2502: test dump v5 r=ManyTheFish a=MarinPostma

this PR adds a test for dump v5


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-13 10:11:22 +00:00
0f6e650ba2 Merge #2508
2508: docs: Readability improvements in `download-latest.sh` and `src/analytics/.rs` r=Kerollmops a=ryanrussell

# Pull Request

## What does this PR do?
Improves readability in `download-latest.sh` and in the `src/analytics/.rs` files

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-13 08:18:17 +00:00
97daea5a66 ignore dump test on windows 2022-06-13 09:16:36 +01:00
8990b12609 docs: Readability fixes in src/analytics/.rs files
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:21:05 -05:00
07a35c6445 docs: Bash comment readability fixes
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:20:28 -05:00
4fc73195e6 Merge #2466
2466: index resolver tests r=MarinPostma a=MarinPostma

add more index resolver tests


depends on #2455 

followup #2453 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 17:50:01 +00:00
1425d62a31 test dump v5 2022-06-09 18:35:17 +02:00
87d4bf672c Merge #2500
2500: Bump milli r=Kerollmops a=irevoire

Fix #2380

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-09 16:35:00 +00:00
2063fbd985 chore: bump milli 2022-06-09 18:34:03 +02:00
de356061db Merge #2414
2414: Improve index uid validation upon API key creation r=Kerollmops a=pierre-l

- ~Use an IndexUid newtype to enforce stronger constraints~
- ~`cargo update -p vergen`~ (`rustup update` was the proper fix for this)
- Add a new `meilisearch_types` crate
- Move `meilisearch_error` to `meilisearch_types::error`
- Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
- Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
- Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`
- Use the `IndexUid` and `StarOr` in `meilisearch_auth::Key`

Fixes #2158


Co-authored-by: pierre-l <pierre.larger@gmail.com>
2022-06-09 15:41:51 +00:00
9a6841c7ce remove unused test 2022-06-09 17:10:56 +02:00
354f7fb2bf test index_update 2022-06-09 17:10:55 +02:00
0333bad057 delete documents test 2022-06-09 17:10:55 +02:00
0e416b4bcd delete index test 2022-06-09 17:10:55 +02:00
20dd259f23 Merge #2455
2455: refactor perform task r=curquiza a=MarinPostma

Refactor some index resolver functions to make duties more consistent and the code easier to test


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 14:49:59 +00:00
18095fa4e1 Merge #2498
2498: Update CONTRIBUTING.md to add the release process r=Kerollmops a=curquiza

Add release process to the contributing

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 14:23:49 +00:00
b8745420da Use the IndexUid and StarOr in meilisearch_auth::Key
Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`

Fixes #2158
2022-06-09 16:14:15 +02:00
36cb09eb25 Add a new meilisearch_types crate
Move `meilisearch_error` to `meilisearch_types::error`
Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
2022-06-09 16:14:13 +02:00
8fc3b7d3b0 refactor process_document_addition_batch 2022-06-09 14:59:20 +02:00
64e3096790 process_task updates task events 2022-06-09 14:59:20 +02:00
b594d49def add IndexResolver BatchHandler tests 2022-06-09 14:59:19 +02:00
fbba67fbe9 add mocker to IndexResolver 2022-06-09 14:59:19 +02:00
232b2baaa3 Update TOC 2022-06-09 14:49:49 +02:00
02c5c193a2 Update CONTRIBUTING.md to add the release process 2022-06-09 14:48:50 +02:00
b9b32d65a8 Merge #2494
2494: Introduce the new faceting and pagination settings r=ManyTheFish a=Kerollmops

This PR introduces two new settings following the newly created spec https://github.com/meilisearch/specifications/pull/157:
 - The `faceting.max_values_per_facet` one describes the maximum number of values (each with a count) associated with a value in a facet distribution query.
 - The `pagination.limited_to` one describes the maximum number of documents that a search query can ever return.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-09 12:09:21 +00:00
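
A hypothetical sketch of the two sub-setting payloads named in this PR, assuming serde; the field names follow the camelCase keys from the spec, while the struct names and shapes are ours:

```rust
#[derive(serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct FacetingSettings {
    // Maximum number of values (each with a count) returned per facet
    // in a facet distribution query.
    max_values_per_facet: Option<usize>,
}

#[derive(serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct PaginationSettings {
    // Maximum number of documents a search query can ever return.
    limited_to: Option<usize>,
}
```
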
680606fd82 Merge #2491
2491: Improve Docker CIs r=curquiza a=curquiza

Close #1901 

Continuing work of 

- [x] Merge the two docker CIs into one (#2477). The latest tag is still only pushed for the official release.
- [x] Add cron job in GHA to run the CI (without publishing the image) every day. This will check the docker build is working.

Co-authored-by: Lawrence Chou <lawrencechou1024@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 11:26:12 +00:00
4a494ad2fa Add schedule to the CI 2022-06-09 11:50:20 +02:00
2b2e571c76 Merge #2460
2460: Create custom error types for `TaskType`, `TaskStatus`, and `IndexUid` r=Kerollmops a=walterbm

# Pull Request

## What does this PR do?
Fixes #2443 by making the following changes:

- Add custom `TaskTypeError` for `TaskType::from_str` 
- Add custom `TaskStatusError` for `TaskStatus::from_str`
- Add custom `IndexUidFormatError` for `IndexUid::from_str`
- Implement `From<IndexUidFormatError> for IndexResolverError` to convert between errors
- Replace all usages of `IndexUid::new` with `IndexUid::from_str`
    - **NOTE** I am relatively new to Rust and I struggled a lot with this final part. This PR ended up with a messy error conversion which does not seem ideal. Please let me know if you have any suggestions for how to make this better and I'll be happy to make any updates!

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: walter <walter.beller.morales@gmail.com>
2022-06-09 09:10:28 +00:00
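
A minimal sketch of the custom `from_str` error pattern this PR describes; the variants and error message are illustrative, and the `type_` parameter name comes from a rename commit below:

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug)]
struct TaskTypeError(String);

impl fmt::Display for TaskTypeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid task type `{}`", self.0)
    }
}

#[derive(Debug)]
enum TaskType {
    IndexCreation,
    IndexDeletion,
}

impl FromStr for TaskType {
    type Err = TaskTypeError;

    fn from_str(type_: &str) -> Result<Self, Self::Err> {
        match type_ {
            "indexCreation" => Ok(TaskType::IndexCreation),
            "indexDeletion" => Ok(TaskType::IndexDeletion),
            // Keep the offending input so the error message can show it.
            other => Err(TaskTypeError(other.to_string())),
        }
    }
}
```
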
6f0d3472b1 Change the test for the new pagination.limited_to setting 2022-06-09 10:56:43 +02:00
5cd13cc303 Add a test to validate the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
1e3dcbea3f Plug the pagination.limited_to setting 2022-06-09 10:56:42 +02:00
b96399d24b Plug the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
5450b5ced3 Add the faceting.max_values_per_facet setting 2022-06-09 10:54:32 +02:00
c924614527 Bump milli to 0.29.2 2022-06-09 10:54:28 +02:00
96d4fd54bb Change the index uid format check for better legibility 2022-06-08 19:58:47 -04:00
3e5d6be86b Rename TaskType::from_str parameter to 'type_' 2022-06-08 19:57:45 -04:00
2b944ecd89 Remove IndexUid::new and replace with IndexUid::from_str 2022-06-08 19:56:01 -04:00
db42268888 Merge #2473
2473: fix blocking in dumps r=irevoire a=MarinPostma

This PR fixes two blocking calls in the dump process.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 17:14:46 +00:00
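
One way such a blocking call can be moved off the async executor, sketched under the assumption of tokio (the function name and signature are ours; cf. the "fix blocking writing of meta file in dump" commit below):

```rust
use std::path::PathBuf;

// Writing the dump meta file with `std::fs` would block the async runtime's
// worker thread, so hand the write to tokio's dedicated blocking thread pool.
async fn write_dump_meta(path: PathBuf, meta: Vec<u8>) -> std::io::Result<()> {
    tokio::task::spawn_blocking(move || std::fs::write(&path, &meta))
        .await
        .expect("the blocking write task panicked")
}
```
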
108b3520de fix blocking auth controller dump 2022-06-08 18:19:29 +02:00
f64b824c45 Merge #2493
2493: Update version for next release (v0.28.0) r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-08 16:02:03 +00:00
fc4990b968 Update version for next release (v0.28.0) 2022-06-08 17:59:18 +02:00
39a1dcb32c Update .github/workflows/publish-docker.yml 2022-06-08 17:27:03 +02:00
afcc493480 Merge publish-docker-latest.yml & publish-docker-tag.yml (#2477)
close #1901
2022-06-08 17:17:20 +02:00
6171f17f1d Merge #2468
2468: Update milli 0.29 r=Kerollmops a=ManyTheFish

- [x] Update milli to 0.29
- [x] Integrate charabia
- [x] Set disabled_words to default when Index::exact_words returns None
- [x] Fix ranking rules integration test

fixes #2375
fixes #2144
fixes #2417
fixes #2407

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 14:29:20 +00:00
55169ff914 Fix test get_document_s_nested_attributes_to_retrieve 2022-06-08 15:56:06 +02:00
0a16f71563 Increase wait_task waiting time 2022-06-08 15:56:03 +02:00
bd280f75e7 Merge #2487 #2489
2487: Feat(auth): Use hmac instead of sha256 to derive API keys from master key r=MarinPostma a=ManyTheFish

Wrap sha256 in HMAC to derive new API keys from the master key.


2489: Fix(auth): Authorization tests were not testing keys unrestricted on index r=Kerollmops a=ManyTheFish



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 13:48:07 +00:00
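
A sketch of the derivation scheme these PRs and the commits below describe, assuming the `hmac`, `sha2`, and `base64` crates; the function and its inputs are illustrative, not Meilisearch's actual code:

```rust
use hmac::{Hmac, Mac};
use sha2::{Digest, Sha256};

type HmacSha256 = Hmac<Sha256>;

// Derive an API key from the master key: hash the master key first
// (cf. "Hash master_key before passing it to HMAC" below), then HMAC a
// key-specific input and encode the tag in base64 instead of hex
// (cf. "Encode key in base64 instead of hexa" below).
fn derive_api_key(master_key: &[u8], key_uid: &[u8]) -> String {
    let hashed_master = Sha256::digest(master_key);
    let mut mac = HmacSha256::new_from_slice(&hashed_master)
        .expect("HMAC accepts keys of any size");
    mac.update(key_uid);
    base64::encode(mac.finalize().into_bytes())
}
```
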
617802bae7 Merge #2480
2480: Bring release v0.27.2 changes into stable r=Kerollmops a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-08 13:08:26 +00:00
1a7631c807 Hash master_key before passing it to HMAC 2022-06-08 14:54:47 +02:00
17f30c2b2d Fix(auth): Authorization tests were not testing keys unrestricted on index 2022-06-08 14:52:32 +02:00
09938c9b6f Patch ranking rules error test 2022-06-08 14:38:09 +02:00
f5306eb5b0 Set disabled_words to default when Index::exact_words returns None 2022-06-08 14:38:09 +02:00
173eea06e1 Replace old tokenizer by charabia 2022-06-08 14:38:09 +02:00
8d09772334 Update milli 2022-06-08 14:38:05 +02:00
987a7f8926 Wrap sha256 in HMAC instead of directly use sha256 2022-06-08 14:25:12 +02:00
0928f3d41c Merge #2453
2453: test index resolver r=MarinPostma a=MarinPostma

add some tests to the `IndexResolver` implementation of `BatchHandler`


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 11:05:30 +00:00
09ec8e9fca Merge #2471
2471: Remove the connection keep-alive timeout r=MarinPostma a=Thearas

# Pull Request

## What does this PR do?
Fixes <https://github.com/meilisearch/meilisearch-go/issues/221>.
Meilisearch has a default connection keep-alive timeout of 5s, which means it will close connections whose idle time is >= 5s.
This PR sets the actix-web keep-alive config to `KeepAlive::Os`, letting the client and the OS decide when to close the connection.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
2022-06-08 06:59:25 +00:00
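
A minimal sketch of the setting in question, assuming actix-web 4's API; the route and port are placeholders:

```rust
use actix_web::{http::KeepAlive, web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/health", web::get().to(|| async { "ok" })))
        // Defer to the OS keep-alive instead of closing idle connections
        // after a fixed 5s server-side timeout.
        .keep_alive(KeepAlive::Os)
        .bind(("127.0.0.1", 7700))?
        .run()
        .await
}
```
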
1968950b0f Merge #2475
2475: feat(auth): Paginate API keys listing r=irevoire a=ManyTheFish

- [x] Update tests
- [x] Use Pagination helpers to paginate API keys (thanks to `@irevoire`)

fixes #2442


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:42:02 +00:00
6ffa222218 feat(auth): Paginate API keys listing
- [x] Update tests
- [x] Use Pagination helpers to paginate API keys

fixes #2442
2022-06-07 17:37:48 +02:00
79e67df73d Merge #2474
2474: feat(auth): remove `dumps.get` action from keys r=ManyTheFish a=ManyTheFish

- [x] Update tests
- [x] Remove `dumps.get` action


related to: #2430

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:03:41 +00:00
7fd66b80ca bump meilisearch version 2022-06-07 16:25:47 +02:00
0cfad7eeac bump milli version 2022-06-07 16:23:23 +02:00
72be296852 Merge #2476
2476: fix(test): Windows index pagination test r=ManyTheFish a=ManyTheFish

The test `index::get_index::get_and_paginate_indexes` fails on every PR during the `Tests on windows-latest` CI job.
- reduce the default index size in tests.
- wait for each task instead of waiting for the last one at the end.

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 14:17:01 +00:00
a7bff35e49 fix(test): Reduce default index size in tests 2022-06-07 15:16:34 +02:00
3b01ed4fe8 feat(auth): remove dumps.get action from keys 2022-06-07 10:49:28 +02:00
cbd27d313c fix blocking writing of meta file in dump 2022-06-07 10:07:40 +02:00
6ac8675c6d add IndexResolver BatchHandler tests 2022-06-07 09:33:57 +02:00
df61ca9cae add mocker to IndexResolver 2022-06-07 09:33:57 +02:00
bbd685af5e move IndexResolver to real module 2022-06-07 09:33:56 +02:00
9b9cbc815b fmt 2022-06-07 03:50:39 +08:00
fd11903920 remove the connection timeout 2022-06-07 03:38:23 +08:00
c3003065e8 Merge #2464
2464: Simplify the `star_or` function usage r=ManyTheFish a=Kerollmops

This PR simplifies the usage of the `star_or` function that was originally introduced in #2399. The `serde-cs` PR https://github.com/naughie/serde-cs/pull/1 was merged; it implements the `IntoIterator` trait on the `CS` types, which makes it possible to collect directly without converting a `CS` into the inner type (vec).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-06 12:47:27 +00:00
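
In effect, the change allows something like the following sketch, written under the assumption of serde-cs's `CS` type; the helper itself is ours:

```rust
use std::collections::BTreeSet;

use serde_cs::vec::CS;

// With `IntoIterator` implemented on `CS`, a comma-separated query value
// can be collected directly, without first converting it into a `Vec`.
fn to_set(values: CS<String>) -> BTreeSet<String> {
    values.into_iter().collect()
}
```
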
c6ce3452cf Merge #2459
2459: Fix typo in codebase comments r=Kerollmops a=ryanrussell

# Pull Request

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-06 09:01:07 +00:00
e5b760c59a Fix the segment analytics tests 2022-06-06 10:44:46 +02:00
277a0a7967 Bump serde-cs to simplify our usage of the star_or function 2022-06-06 10:17:33 +02:00
64b5b2e1f8 Use serde-cs::CS with StarOr to reduce the logic duplication 2022-06-06 10:06:00 +02:00
10d3b367dc Simplify the const default values 2022-06-06 10:06:00 +02:00
ba55905377 Add custom IndexUidFormatError for IndexUid 2022-06-05 02:26:48 -04:00
0e7e16ae72 Add custom TaskStatusError for TaskStatus 2022-06-05 00:51:08 -04:00
80c156df3f Add custom TaskTypeError for TaskType 2022-06-05 00:51:08 -04:00
4b6c3e72ff Improve Lib Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 21:38:04 -05:00
3e46543060 Improve Store Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 20:42:53 -05:00
b83455f345 Merge #2454
2454: Unify the pagination of the index and documents route behind a common type r=curquiza a=irevoire

`@MarinPostma` wdyt of keeping `auto_paginate_sized` until we implement the pagination on every route that needs it, just to see if it could be useful for something else

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-02 15:01:43 +00:00
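
A rough sketch of what such a shared type can look like, assuming serde for the query-string extraction; only the `auto_paginate_sized` name comes from the discussion above, the rest is illustrative:

```rust
#[derive(serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct Pagination {
    #[serde(default)]
    offset: usize,
    #[serde(default = "default_limit")]
    limit: usize,
}

fn default_limit() -> usize {
    20
}

impl Pagination {
    /// Paginate a collection whose total size is already known.
    fn auto_paginate_sized<'a, T>(&self, items: &'a [T]) -> &'a [T] {
        let start = self.offset.min(items.len());
        let end = (self.offset + self.limit).min(items.len());
        &items[start..end]
    }
}
```
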
953a209f02 Merge #2447
2447: move index uid in task content r=Kerollmops a=MarinPostma

This PR moves the index_uid from the `Task` to the `TaskContent`. This is because a task can now have content that does not target a particular index.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-02 13:54:09 +00:00
0c5352fc22 move index_uid from task to task_content 2022-06-02 15:30:35 +02:00
8ac8fcb0c9 Merge #2433
2433: Fix the documents route r=Kerollmops a=irevoire

fix #2428

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-02 13:25:34 +00:00
4667c9fe1a fix(http): Fix the query parameter in the Documents route 2022-06-02 14:10:44 +02:00
12b5eabd5d chore(http): unify the pagination of the index and documents route behind a common type 2022-06-02 14:06:56 +02:00
cf2d8de48a Merge #2452
2452: Change http verbs r=ManyTheFish a=Kerollmops

This PR fixes #2419 by updating the HTTP verbs used to update the settings and every single setting parameter.

- [x] `PATCH /indexes/{indexUid}` instead of `PUT`
- [x] `PATCH /indexes/{indexUid}/settings`  instead of `POST`
- [x] `PATCH /indexes/{indexUid}/settings/typo-tolerance`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/displayed-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/distinct-attribute`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/filterable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/ranking-rules`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/searchable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/sortable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/stop-words`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/synonyms`  instead of `POST`


Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 11:46:17 +00:00
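
For illustration, the verb swap on one of these routes, as a hedged actix-web sketch; the handler is a placeholder, not the real settings handler:

```rust
use actix_web::web;

async fn update_settings() -> &'static str {
    "ok" // placeholder handler body
}

fn configure(cfg: &mut web::ServiceConfig) {
    // The settings update route now answers to PATCH where it used to be POST.
    cfg.route(
        "/indexes/{index_uid}/settings",
        web::patch().to(update_settings), // was: web::post()
    );
}
```
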
419922e475 Make clippy happy 2022-06-02 13:38:23 +02:00
c9cd1738a5 Merge #2445
2445: Seek-based tasks list r=Kerollmops a=Kerollmops

This PR implements the seek-based pagination for the tasks list following [the spec](https://github.com/meilisearch/specifications/pull/115).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 10:25:54 +00:00
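
The core idea of seek-based (cursor) pagination, sketched over plain uids; the `limit`/`after` parameter names follow the commits below, everything else is illustrative:

```rust
// `all_uids` is assumed sorted by descending uid (most recent task first).
fn paginate_tasks(all_uids: &[u64], after: Option<u64>, limit: usize) -> (Vec<u64>, Option<u64>) {
    let mut page: Vec<u64> = all_uids
        .iter()
        .copied()
        // Seek: keep only tasks strictly older than the cursor.
        .filter(|&uid| after.map_or(true, |a| uid < a))
        // Fetch one extra element to know whether a next page exists.
        .take(limit + 1)
        .collect();

    let next = if page.len() > limit {
        page.truncate(limit);
        page.last().copied() // the cursor to pass as `after` next time
    } else {
        None
    };
    (page, next)
}
```

Unlike offset pagination, the position is anchored to a task uid, so results stay stable even while new tasks are enqueued.
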
0258659278 Fix the get_settings tests 2022-06-02 12:24:27 +02:00
ce37f53a16 Add typo-tolerance to the authorization tests 2022-06-02 12:17:53 +02:00
bcb51905d7 Fix the authorization tests 2022-06-02 12:16:46 +02:00
10a71fdb10 Update the /indexes/{indexUid}/settings/* verbs by adding a macro parameter 2022-06-02 11:55:47 +02:00
f8d3f739ad Update the /indexes/{indexUid}/settings verb from POST to PATCH 2022-06-02 11:55:47 +02:00
bb405aa729 Update the /indexes/{indexUid} verb from PUT to PATCH 2022-06-02 11:55:47 +02:00
7e3d5ebc8e Merge #2451
2451: feat(API-keys): Change immutable_field error message r=Kerollmops a=ManyTheFish

Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 09:26:36 +00:00
dfce9ba468 Apply suggestions 2022-06-02 11:26:12 +02:00
9eea142e2b feat(API-keys): Change immutable_field error message
Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68
2022-06-02 11:11:07 +02:00
8b8c3e32f0 Merge #2450
2450: Bump the dependencies r=ManyTheFish a=Kerollmops

In order to use [the latest version of grenad](https://docs.rs/grenad) I bump the dependencies here. We also use the latest versions of all our other dependencies now.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 08:53:12 +00:00
08d72e32a4 Merge #2438
2438: Refine keys api r=ManyTheFish a=ManyTheFish

waiting for #2410 and #2444 to be merged.

fix #2369 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 08:32:35 +00:00
ac9e7bdbe3 Fix a test that was depending on the speed of the CPU 2022-06-02 10:21:19 +02:00
4512eed8f5 Fix PR comments 2022-06-01 18:06:20 +02:00
e769043576 Bump the dependencies 2022-06-01 17:59:05 +02:00
df721b2e9e Scheduler must not reverse the order of the fetched tasks 2022-06-01 17:16:15 +02:00
0656df3a6d Fix the dumps tests 2022-06-01 17:14:13 +02:00
7652295d2c Encode key in base64 instead of hexa 2022-06-01 16:17:47 +02:00
94b32cce01 Patch errors 2022-06-01 16:17:47 +02:00
b2e2dc8558 Re-authorize master_key to access to all routes 2022-06-01 16:17:47 +02:00
1816db8c1f Move dump v4 patcher into v4.rs 2022-06-01 16:17:43 +02:00
c295924ea2 Patch tests 2022-06-01 16:08:42 +02:00
1f62e83267 Remove error_add_api_key_invalid_index_uid_format 2022-06-01 16:08:42 +02:00
b3c8915702 Make small changes and renaming 2022-06-01 16:08:42 +02:00
151f494110 Use Stream Deserializer to load dumps 2022-06-01 16:08:42 +02:00
96152a3d32 Change default API keys names and descriptions 2022-06-01 16:08:42 +02:00
84f52ac175 Add v4 feature to uuid 2022-06-01 16:08:42 +02:00
70916d6596 Patch dump v4 2022-06-01 16:08:42 +02:00
b9a79eb858 Change apiKeyPrefix to apiKeyUid 2022-06-01 16:07:44 +02:00
a57b2d9538 Restrict master key access to /keys routes 2022-06-01 16:07:44 +02:00
34c8888f56 Add keys actions 2022-06-01 16:07:44 +02:00
d54643455c Make PATCH only modify name, description, and updated_at fields 2022-06-01 16:07:44 +02:00
96a5791e39 Add uid and name fields in keys 2022-06-01 16:07:44 +02:00
e2c204cf86 Update tests to fit to the new requirements 2022-06-01 16:07:44 +02:00
d80e8b64af Align the tasks route API to the new spec 2022-06-01 15:30:39 +02:00
c11d21879a Introduce tasks limit and after to the tasks route 2022-06-01 13:26:36 +02:00
d6dd234914 Merge #2434
2434: Update docker volume path r=curquiza a=0x0x1

Currently, the `$(pwd)/data.ms:/data.ms` args cannot share data from the container; make the docker volume the same as in the [Dockerfile](67b6f4340a/Dockerfile (L40)) to fix it.



Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-01 10:31:27 +00:00
4970525541 Update README.md
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-06-01 12:29:36 +02:00
461b91fd13 Introduce the fetch_unfinished_tasks function to fetch tasks 2022-06-01 12:09:52 +02:00
004c8b6be3 Add the new limit and after fields in the dump tests 2022-06-01 12:09:52 +02:00
9d5cc88cd5 Implement the seek-based tasks list pagination 2022-06-01 12:09:52 +02:00
d22f07f5b2 Merge #2448
2448: docs(security): Fix `Supported` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

# Pull Request

## What does this PR do?
- Fix typo in security `Suported` -> `Supported`

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 19:59:37 +00:00
e81c7aa2e6 Merge #2423
2423: Paginate the index resource r=MarinPostma a=irevoire

Fix #2373


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 19:25:25 +00:00
39db6ea42b docs(security): Fix Supported
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 14:21:34 -05:00
47007fa71b Merge #2446
2446: rename Succeded to Succeeded r=irevoire a=MarinPostma

This PR renames `TaskEvent::Succeded` to `TaskEvent::Succeeded` and applies the migration to the dumps


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-31 18:27:02 +00:00
627f13df85 feat(http): paginate the index resource
Fix #2373
2022-05-31 18:11:45 +02:00
97c14f6fcc Merge #2427
2427: Update the documents resource r=MarinPostma a=irevoire

- Return Documents API resources on `/documents` in an array in the results field.
- Add limit, offset, and total in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the displayed fields returned in the `/documents` endpoints. These settings only impact the `/search` endpoint.
- Make the `/document/:uid` route accept the `fields` query parameter

Fix #2372


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 15:33:42 +00:00
446f1f31e0 rename Succeded to Succeeded 2022-05-31 17:22:37 +02:00
ddad6cc069 feat(http): update the documents resource
- Return Documents API resources on `/documents` in an array in the results field.
- Add limit, offset and total in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the displayed fields returned in the `/documents` endpoints. These settings only impact the `/search` endpoint.

Fix #2372
2022-05-31 16:40:40 +02:00
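
A hypothetical shape for the new response body, assuming serde; the struct name is ours, while the four fields come from the description above:

```rust
#[derive(serde::Serialize)]
struct PaginatedDocuments<T> {
    // The Documents API resources, returned in an array.
    results: Vec<T>,
    offset: usize,
    limit: usize,
    total: usize,
}
```
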
ab39df9693 Merge #2399
2399: Update the tasks endpoints r=MarinPostma a=Kerollmops

This PR wraps all the changes related to the `tasks` endpoints, it is related to https://github.com/meilisearch/meilisearch/issues/2377 but doesn't close it. I will create a new PR to work on [the seek-based pagination](https://github.com/meilisearch/specifications/pull/115).

I wanted to do something cool with GitHub: being able to merge multiple PRs into this one to help review the changes one by one. Unfortunately, GitHub doesn't allow creating empty PRs. I also struggled with git itself when it came to merging things in the right order, so I decided to add all of the changes in this single PR. I will list the changes and references to the specs here.

 - [x] Tasks statuses and types must be case insensitive
 - [x] Tasks statuses, types and indexUid must accept the `*` selector
 - [ ] Rename the `TaskDetails` struct fields

## Changes

- [ ] Add seek-based pagination following [the spec](https://github.com/meilisearch/specifications/pull/115) 
- [x] Add filtering on the `/tasks` endpoint following [this spec](https://github.com/meilisearch/specifications/pull/116)
  - [x] Add filtering capabilities on `type`, `status` and `indexUid` for `GET` `task` lists endpoints.
  - [x] It is possible to specify several values for a filter using the `,` character. e.g. `?status=enqueued,processing`
  - [x] Between two different filters, an AND operation is applied. e.g. `?status=enqueued&type=indexCreation` is equivalent to `status=enqueued AND type = indexCreation`
- [x] Remove `GET /indexes/:indexUid/tasks`. It can be replaced by `GET /tasks?indexUid=:indexUid`
- [x] Remove `GET /indexes/:indexUid/tasks/:taskUid`.
- [x] Rename `uid` to `taskUid` in the `202 - Accepted` task response returned by every asynchronous task (ex: index creation, document addition...)
- [x] Rename some task properties
  - [x] `documentPartial`-> `documentAdditionOrUpdate`
  - [x] `documentAddition`-> `documentAdditionOrUpdate`
  - [x] `clearAll` -> `documentDeletion` 

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-05-31 09:40:40 +00:00
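
A minimal sketch of the `*` selector idea behind the commits below; `StarOr` is the real type name, but this implementation is illustrative:

```rust
use std::str::FromStr;

// Either the `*` wildcard or a concrete, parseable value.
#[derive(Debug)]
enum StarOr<T> {
    Star,
    Other(T),
}

impl<T: FromStr> FromStr for StarOr<T> {
    type Err = T::Err;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.trim() == "*" {
            Ok(StarOr::Star)
        } else {
            s.parse().map(StarOr::Other)
        }
    }
}
```

Making the type generic lets the same wildcard handling apply to statuses, types, and index uids alike.
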
1465b5e0ff Refactorize the tasks filters by moving the match inside 2022-05-31 11:33:21 +02:00
8800b348f0 Implement the StarOr on all the tasks filters 2022-05-31 11:33:21 +02:00
082d6b89ff Make the StarOrIndexUid Generic and call it StarOr 2022-05-31 11:33:21 +02:00
b82c86c8f5 Allow users to filter indexUid with a * 2022-05-31 11:33:20 +02:00
36d94257d8 Make clippy happy 2022-05-31 11:33:20 +02:00
3f80468f18 Rename the Tasks Types 2022-05-31 11:33:20 +02:00
8509243e68 Implement the status and type filtering on the tasks route 2022-05-31 11:33:20 +02:00
3684c822f1 Add indexUid filtering on the /tasks route 2022-05-31 11:33:20 +02:00
80f7d87356 Remove the /indexes/:indexUid/tasks/... routes 2022-05-31 11:33:20 +02:00
d2f457a076 Rename the uid to taskUid in asynchronous response 2022-05-31 11:33:20 +02:00
e5ef5a6f9c Remove an unused updates.rs file 2022-05-31 11:33:19 +02:00
5450fecaef Merge #2444
2444: add boilerplate for dump v5 r=MarinPostma a=MarinPostma

add the boilerplate files for dump v5


Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-31 08:56:52 +00:00
deba0cc096 Make v4::load_dump copy each part of the dump 2022-05-31 10:24:44 +02:00
26e7bdf702 add boilerplate for dump v5 2022-05-30 17:25:29 +02:00
3441cc6c36 Merge #2410
2410: Make dump a task r=Kerollmops a=MarinPostma

This PR transforms the dump task into a proper task.
The `GET /dumps/:dump_uid` is removed.


Some changes were made to make this work, and a bit of refactoring was necessary.
- The `dump_actor` module has been renamed to `dumps` and moved to the root
- There isn't a `DumpActor` anymore, and the dump process is handled by the `DumpHandler`.
- The `TaskPerformer` is renamed to `BatchHandler`
- The `BatchHandler` trait no longer has a `perform_job` method, but instead has an `accept` method returning whether a handler can process a batch
- The scheduler now accepts a list of `BatchHandler`s, and iterates through them until it finds one that accepts the current batch.
- `Job` doesn't exist anymore, and everything is now inside of the `BatchContent` enum.
- The `Vec<TaskId>` from `Batch` is replaced with a `BatchContent` enum which hints at the content.
- The Scheduler is slightly modified to accept batches, and prioritize them before regular tasks.
- The `TaskList`s are no longer identified by a `String` representing the index uid, but by a `TaskListIdentifier`, which also works for dumps that do not target any specific index.
- The `GET /dump/:dump_id` no longer exists
- `DumpActorError` is renamed to `DumpError`


close #2410 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-30 14:09:43 +00:00
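
The handler-selection loop described above, as an illustrative sketch; the variants and signatures are ours, not the actual Meilisearch definitions:

```rust
enum BatchContent {
    Dump { uid: String },
    IndexTasks(Vec<u32>),
    Empty,
}

trait BatchHandler {
    /// Whether this handler can process the given batch content.
    fn accept(&self, content: &BatchContent) -> bool;
    fn process(&self, content: BatchContent) -> BatchContent;
}

// The scheduler iterates through its handlers until one accepts the batch.
fn dispatch(handlers: &[Box<dyn BatchHandler>], batch: BatchContent) -> BatchContent {
    for handler in handlers {
        if handler.accept(&batch) {
            return handler.process(batch);
        }
    }
    // No handler accepted the batch; return it unchanged.
    batch
}
```
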
c7711c7816 Merge #2429
2429: Send the analytics to `telemetry.meilisearch.com` instead of segment r=MarinPostma a=irevoire

Fix #2425

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-30 13:18:55 +00:00
d47b997120 chore(analytics): update the url used to send our analytics 2022-05-30 15:13:10 +02:00
1e310ecc7d fix typo in docstring
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-30 14:34:49 +02:00
4cb2c6ef1e use map_or instead of map + unwrap_or 2022-05-30 12:30:15 +02:00
a9ef399a6b processing::Nothing return BatchContent::Empty instead of panic 2022-05-26 12:04:27 +02:00
5a2972fc19 use TaskEvent method instead of variants in BatchHandler impl 2022-05-26 11:51:58 +02:00
ba51ca83ec Update docker volume path
Makes docker volume same as Dockerfile
2022-05-26 10:29:27 +08:00
1647ca3c1f fix clipy warnings 2022-05-25 15:07:52 +02:00
74a1f88d88 add test for dump processing order 2022-05-25 14:57:36 +02:00
f58507379a fix dump priority in scheduler 2022-05-25 14:50:14 +02:00
6b2016b350 remove typo in BatchContent variant 2022-05-25 14:39:07 +02:00
3015265bde remove useless dump errors 2022-05-25 14:37:10 +02:00
49d8fadb52 test dump handler 2022-05-25 14:32:12 +02:00
127171c812 impl Default on Processing 2022-05-25 14:10:39 +02:00
67b6f4340a Merge #2422
2422: Update url of movies.json r=curquiza a=0x0x1

URL `https://bit.ly/2PAcw9l` is a notion site.

Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
2022-05-25 11:21:56 +00:00
986a99296d remove useless dump test 2022-05-25 11:25:11 +02:00
92d86ce6aa add tests to IndexResolver BatchHandler 2022-05-25 11:13:36 +02:00
3c85b29865 add doc to BatchHandler 2022-05-25 11:13:35 +02:00
8349f38197 remove unused file 2022-05-25 11:13:35 +02:00
64654ef7c3 rename batch_handler to handler 2022-05-25 11:13:35 +02:00
0f9c134114 fix tests 2022-05-25 11:13:35 +02:00
7b47e4e87a snapshot batch handler 2022-05-25 11:13:35 +02:00
8743d73973 move DumpHandler to own module 2022-05-25 11:13:35 +02:00
f0aceb4fba remove unused files 2022-05-25 11:13:35 +02:00
61035a3ea4 create dump v5 2022-05-25 11:13:34 +02:00
4778884105 remove dump status route 2022-05-25 11:13:34 +02:00
57fde30b91 handle dump 2022-05-25 11:13:34 +02:00
56eb2907c9 dump indexes 2022-05-25 11:13:34 +02:00
414d0907ce register dump handler 2022-05-25 11:13:34 +02:00
60a8249de6 add dump batch handler 2022-05-25 11:13:34 +02:00
46cdc17701 make scheduler accept multiple batch handlers 2022-05-25 11:13:34 +02:00
6a0231cb28 perform dump method 2022-05-25 11:13:33 +02:00
7fa3eb1003 register dump tasks 2022-05-25 11:13:33 +02:00
2f0625a984 register and insert dump task in scheduler 2022-05-25 11:13:33 +02:00
737b891a41 introduce Dump TaskListIdentifier variant 2022-05-25 11:13:33 +02:00
5a5066023b introduce TaskListIdentifier 2022-05-25 11:13:33 +02:00
aa50acb031 make Task index_uid an option
Not all tasks relate to an index. Tasks that don't have their index_uid set
to None
2022-05-25 11:13:32 +02:00
9935db86c7 Merge #2424
2424: Uncomment clippy from the ci check r=curquiza a=irevoire

The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 17:50:55 +00:00
f65116b208 chore(ci): uncomment clippy from the ci check
The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305
2022-05-24 15:03:11 +02:00
341756a0eb Merge #2357
2357: chore(dump): add dump tests r=Kerollmops a=irevoire

Add tests on the import of dump v1, v2, v3 and v4.

Since the dumps are slow to decompress, I made the `flate2` crate always compile in optimized.
And since they're also slow to index, I also made the `milli` crate always compile in optimized. What do you think of this `@MarinPostma?`
Should we keep milli unoptimized in case it could help us debug some things? 👀 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 12:24:29 +00:00
5f0e9b63d2 chore(dump): add tests 2022-05-24 14:21:56 +02:00
ca9ba2d90c Merge #2406
2406: chore(search): rename in the search endpoint r=irevoire a=irevoire

Fix #2376


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 12:02:45 +00:00
2c248a68a4 Merge #2374
2374: feat(analytics): handle the `X-Meilisearch-Client` header r=Kerollmops a=irevoire

Fix #2367


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 09:49:16 +00:00
641ca5a857 Update url of movies.json 2022-05-24 17:34:34 +08:00
6bf4db0bca feat(analytics): handle the new x-meilisearch-client custom header for the analytics
Fix #2367
2022-05-23 13:51:19 +02:00
4e9accdeb7 chore(search): rename in the search endpoint
Fix #2376
2022-05-19 16:31:37 +02:00
ae4e419db4 Merge #2408
2408: Upgrade milli v0.28 r=ManyTheFish a=ManyTheFish

- Add smart crop
- Add test on _matches_infos
- Fix some test

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-19 11:52:16 +00:00
50763aac82 Fix clippy 2022-05-19 11:23:22 +02:00
3517eae47f Fix tests 2022-05-18 18:45:53 +02:00
0250ea9157 Integrate smart crop in Meilisearch 2022-05-18 18:35:51 +02:00
6d221058f1 Merge #2404
2404: Bring `release-v0.27.1` into `main` r=curquiza a=curquiza

Following the v0.27.1 hotfixes

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 16:11:17 +00:00
a23998f2e3 Merge #2402
2402: Update version for next release (v0.27.1) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 10:09:16 +00:00
49e857776c Update version for next release (v0.27.1) 2022-05-17 11:59:35 +02:00
7fa7f3d1a4 Merge #2397
2397: Bump milli to v0.26.5 r=irevoire a=irevoire

Fix #2394 and fix #2385

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-17 08:45:05 +00:00
85d19bfb3e chore: bump milli 2022-05-16 18:43:35 +02:00
a65a2bea1e Merge #2396
2396: Fix dump bug r=curquiza a=MarinPostma

Only check db version file if we aren't loading a dump or a snapshot, because we can assume that if we are loading a dump or a snapshot, then we can safely work with the database. The db version file will be (over)written just after that.

This was introduced by #2356.
fix #2389


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 13:59:20 +00:00
5670b4d012 fix dump import error 2022-05-16 14:33:33 +02:00
b9866d8df2 Merge #2339
2339: deny warnings in CI r=irevoire a=MarinPostma

Add `RUSTFLAGS= -D warnings` to the CI so all warnings are treated as hard errors.

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 09:58:30 +00:00
b9b9cba154 Merge #2383
2383: v0.27.0: bring `stable` into `main` r=Kerollmops a=curquiza

Bring `stable` into `main`

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-05-16 08:35:25 +00:00
cd2239eb2d Merge branch 'release-v0.27.0' into stable 2022-05-09 10:51:32 +02:00
348af6cfbf deny-rust-warnings 2022-05-04 15:20:45 +02:00
c07f3b44b7 Merge #2347
2347: Change Nelson path r=curquiza a=curquiza

Nelson is now on the Meilisearch orga side

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-21 17:50:46 +00:00
38d681c230 Change Nelson path 2022-04-21 18:42:34 +02:00
e85377e725 Merge pull request #2346 from meilisearch/revert-2345-bump-meilisearch-v9000.0.0
Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0"
2022-04-21 16:38:48 +02:00
6ff8bf823d Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0" 2022-04-21 16:36:56 +02:00
4d25229df9 Merge pull request #2345 from meilisearch/bump-meilisearch-v9000.0.0
Bump meilisearch to version 9000.0.0
2022-04-21 16:28:46 +02:00
f1cd6b6ee8 bump meilisearch to v9000.0.0 2022-04-21 14:26:40 +00:00
63f75bd187 Merge pull request #2344 from meilisearch/revert-2340-bump-meilisearch-v8000.1.0
Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0"
2022-04-21 16:24:57 +02:00
acf3357cf3 Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0" 2022-04-21 16:24:27 +02:00
202d6105b2 Merge pull request #2340 from meilisearch/bump-meilisearch-v8000.1.0
[TEST PURPOSE] Bump meilisearch to version 8000.1.0
2022-04-21 15:28:00 +02:00
0714551101 bump meilisearch to v8000.1.0 2022-04-21 13:23:46 +00:00
3273fe0470 Merge #2246
2246: Release v0.26.1: bring "fix panic at start" commit (#2243) into stable r=MarinPostma a=curquiza

2 commits
- cherry pick `32843f30d973349122ec5a37469ee09e6002b6f3` (merged in #2243)
- change the version in the Cargo toml files (from v0.26.0 to v0.26.1)

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-03-16 18:32:47 +00:00
7468a5e96c Update the version (from v0.26.0 to v0.26.1) 2022-03-16 18:03:54 +01:00
a87faa0db7 bug(http): fix panic on startup 2022-03-16 18:03:01 +01:00
373 changed files with 35421 additions and 14146 deletions


@@ -1,10 +1,13 @@
 contact_links:
-  - name: Feature request & feedback
+  - name: Support questions & other
+    url: https://github.com/meilisearch/meilisearch/discussions/new
+    about: For any other question, open a discussion in this repository
+  - name: Language support request & feedback
+    url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal?discussions_q=label%3Aproduct%3Acore%3Atokenizer+category%3A%22Feedback+%26+Feature+Proposal%22
+    about: The requests and feedback regarding Language support are not managed in this repository. Please upvote the related discussion in our dedicated product repository or open a new one if it doesn't exist.
+  - name: Any other feature request & feedback
     url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal
     about: The feature requests and feedback regarding the already existing features are not managed in this repository. Please open a discussion in our dedicated product repository
   - name: Documentation issue
     url: https://github.com/meilisearch/documentation/issues/new
     about: For documentation issues, open an issue or a PR in the documentation repository
-  - name: Support questions & other
-    url: https://github.com/meilisearch/meilisearch/discussions/new
-    about: For any other question, open a discussion in this repository

.github/dependabot.yml (new file)

@@ -0,0 +1,13 @@
# Set update schedule for GitHub Actions only

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"
    labels:
      - 'skip changelog'
      - 'dependencies'
    rebase-strategy: disabled

.github/scripts/check-release.sh (new file)

@ -0,0 +1,28 @@
#!/bin/bash
# check_tag $current_tag $file_tag $file_name
function check_tag {
if [[ "$1" != "$2" ]]; then
echo "Error: the current tag does not match the version in $3: found $2 - expected $1"
ret=1
fi
}
ret=0
current_tag=${GITHUB_REF#'refs/tags/v'}
toml_files='*/Cargo.toml'
for toml_file in $toml_files;
do
file_tag="$(grep '^version = ' $toml_file | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')"
check_tag $current_tag $file_tag $toml_file
done
lock_file='Cargo.lock'
lock_tag=$(grep -A 1 'name = "meilisearch-auth"' $lock_file | grep version | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')
check_tag $current_tag $lock_tag $lock_file
if [[ "$ret" -eq 0 ]] ; then
echo 'OK'
fi
exit $ret
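
For illustration, here is a minimal, standalone sketch of how `check_tag` behaves; it can be run outside CI by faking `GITHUB_REF` (the tag `v0.26.1` and the file names below are invented):

```bash
#!/bin/bash
# Sketch: replay the check_tag logic from check-release.sh with fake inputs.
function check_tag {
  if [[ "$1" != "$2" ]]; then
    echo "Error: the current tag does not match the version in $3: found $2 - expected $1"
    ret=1
  fi
}
ret=0
GITHUB_REF='refs/tags/v0.26.1'           # what GitHub would set for the tag v0.26.1
current_tag=${GITHUB_REF#'refs/tags/v'}  # -> 0.26.1
check_tag "$current_tag" '0.26.1' 'meilisearch-http/Cargo.toml'  # match: stays silent
check_tag "$current_tag" '0.26.0' 'Cargo.lock'                   # mismatch: prints the error, sets ret=1
echo "exit code would be: $ret"
```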


@ -1,14 +1,14 @@
#!/bin/sh
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v0.10.1
# new tag -> v0.8.12
# The new tag should not be the latest
# So it returns "false", the CI should not run for the release v0.8.12
# Used in GHA in publish-docker-latest.yml
# Was used in our CIs to publish the latest Docker image. Not used anymore; will be used again when v1 and v2 are out and we want to maintain multiple stable versions.
# Returns "true" or "false" (as a string) to be used in the `if` in GHA
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v2.1.1
# new tag -> v1.20.3
# The new tag (v1.20.3) should NOT be the latest
# So it returns "false", the `latest` tag should not be updated for the release v1.20.3 and still need to correspond to v2.1.1
# GLOBAL
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
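
As a quick, hedged illustration of what this regexp accepts (the tags below are made up):

```bash
# Only plain vX.Y.Z tags match; pre-releases such as RCs do not.
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$'
for tag in v0.26.1 v0.26.0-rc1 latest; do
  if echo "$tag" | grep -q "$GREP_SEMVER_REGEXP"; then
    echo "$tag: stable release tag"
  else
    echo "$tag: skipped (alpha/beta/RC or not a version tag)"
  fi
done
```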
@ -85,7 +85,7 @@ get_latest() {
latest=""
current_tag=""
for release_info in $releases; do
if [ $i -eq 0 ]; then # Cheking tag_name
if [ $i -eq 0 ]; then # Checking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
current_tag=$release_info
else


@ -1,20 +0,0 @@
# GitHub Actions Workflow for Meilisearch
> **Note:**
> - We do not use [cache](https://github.com/actions/cache) yet but we could use it to speed up CI
## Workflow
- On each pull request, we trigger `cargo test`.
- On each tag, we build:
- the tagged Docker image and publish it to Docker Hub
- the binaries for MacOS, Ubuntu, and Windows
- the Debian package
- On each stable release (`v*.*.*` tag):
- we build the `latest` Docker image and publish it to Docker Hub
- we publish the binary to Homebrew and Gemfury
## Problems
- We do not test on Windows because we are unable to make it work; there is a disk space problem.


@ -8,7 +8,7 @@ jobs:
nightly-coverage:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
@ -25,7 +25,7 @@ jobs:
RUSTFLAGS: "-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Cpanic=unwind -Zpanic_abort_tests"
- uses: actions-rs/grcov@v0.1
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ${{ steps.coverage.outputs.report }}


@ -0,0 +1,23 @@
name: Create issue to upgrade dependencies
on:
schedule:
- cron: '0 0 1 */3 *'
workflow_dispatch:
jobs:
create-issue:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Create an issue
uses: actions-ecosystem/action-create-issue@v1
with:
github_token: ${{ secrets.MEILI_BOT_GH_PAT }}
title: Upgrade dependencies
body: |
We need to update the dependencies of the Meilisearch repository, and, if possible, the dependencies of all the core-team repositories that Meilisearch depends on (milli, charabia, heed...).
⚠️ This issue should only be done at the beginning of the sprint!
labels: |
dependencies
maintenance


@ -1,14 +1,15 @@
name: Look for flaky tests
on:
workflow_dispatch:
schedule:
- cron: "0 12 * * FRI" # every friday at 12:00PM
- cron: "0 12 * * FRI" # Every Friday at 12:00PM
jobs:
flaky:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install cargo-flaky
run: cargo install cargo-flaky
- name: Run cargo flaky 100 times

156
.github/workflows/milestone-workflow.yml vendored Normal file

@ -0,0 +1,156 @@
name: Milestone's workflow
# /!\ No git flow is handled here
# For each Milestone created (not opened!), and if the release is NOT a patch release (only the patch changed)
# - the roadmap issue is created, see https://github.com/meilisearch/core-team/blob/main/issue-templates/roadmap-issue.md
# - the changelog issue is created, see https://github.com/meilisearch/core-team/blob/main/issue-templates/changelog-issue.md
# For each Milestone closed
# - the `release_version` label is created
# - this label is applied to all issues/PRs in the Milestone
on:
milestone:
types: [created, closed]
env:
MILESTONE_VERSION: ${{ github.event.milestone.title }}
MILESTONE_URL: ${{ github.event.milestone.html_url }}
MILESTONE_DUE_ON: ${{ github.event.milestone.due_on }}
GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
jobs:
# -----------------
# MILESTONE CREATED
# -----------------
get-release-version:
if: github.event.action == 'created'
runs-on: ubuntu-latest
outputs:
is-patch: ${{ steps.check-patch.outputs.is-patch }}
env:
MILESTONE_VERSION: ${{ github.event.milestone.title }}
steps:
- uses: actions/checkout@v3
- name: Check if this release is a patch release only
id: check-patch
run: |
echo version: $MILESTONE_VERSION
if [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
echo 'This is NOT a patch release'
echo ::set-output name=is-patch::false
elif [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo 'This is a patch release'
echo ::set-output name=is-patch::true
else
echo "Not a valid format of release, check the Milestone's title."
echo 'Should be vX.Y.Z'
exit 1
fi
create-roadmap-issue:
needs: get-release-version
# Create the roadmap issue if the release is not only a patch release
if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
runs-on: ubuntu-latest
env:
ISSUE_TEMPLATE: issue-template.md
steps:
- uses: actions/checkout@v3
- name: Download the issue template
run: curl -s https://raw.githubusercontent.com/meilisearch/core-team/main/issue-templates/roadmap-issue.md > $ISSUE_TEMPLATE
- name: Replace all empty occurrences in the templates
run: |
# Replace all <<version>> occurrences
sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE
# Replace all <<milestone_id>> occurrences
milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
# Replace release date if exists
if [[ ! -z $MILESTONE_DUE_ON ]]; then
date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
sed -i "s/Release date\: 20XX-XX-XX/Release date\: $date/g" $ISSUE_TEMPLATE
fi
- name: Create the issue
run: |
gh issue create \
--title "$MILESTONE_VERSION ROADMAP" \
--label 'epic,impacts docs,impacts integrations,impacts cloud' \
--body-file $ISSUE_TEMPLATE \
--milestone $MILESTONE_VERSION
create-changelog-issue:
needs: get-release-version
# Create the changelog issue if the release is not only a patch release
if: github.event.action == 'created' && needs.get-release-version.outputs.is-patch == 'false'
runs-on: ubuntu-latest
env:
ISSUE_TEMPLATE: issue-template.md
steps:
- uses: actions/checkout@v3
- name: Download the issue template
run: curl -s https://raw.githubusercontent.com/meilisearch/core-team/main/issue-templates/changelog-issue.md > $ISSUE_TEMPLATE
- name: Replace all empty occurrences in the templates
run: |
# Replace all <<version>> occurrences
sed -i "s/<<version>>/$MILESTONE_VERSION/g" $ISSUE_TEMPLATE
# Replace all <<milestone_id>> occurrences
milestone_id=$(echo $MILESTONE_URL | cut -d '/' -f 7)
sed -i "s/<<milestone_id>>/$milestone_id/g" $ISSUE_TEMPLATE
- name: Create the issue
run: |
gh issue create \
--title "Create release changelogs for $MILESTONE_VERSION" \
--label 'impacts docs,documentation' \
--body-file $ISSUE_TEMPLATE \
--milestone $MILESTONE_VERSION \
--assignee curquiza
# ----------------
# MILESTONE CLOSED
# ----------------
create-release-label:
if: github.event.action == 'closed'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Create the ${{ env.MILESTONE_VERSION }} label
run: |
label_description="PRs/issues solved in $MILESTONE_VERSION"
if [[ ! -z $MILESTONE_DUE_ON ]]; then
date=$(echo $MILESTONE_DUE_ON | cut -d 'T' -f 1)
label_description="$label_description released on $date"
fi
gh api repos/meilisearch/meilisearch/labels \
--method POST \
-H "Accept: application/vnd.github+json" \
-f name="$MILESTONE_VERSION" \
-f description="$label_description" \
-f color='ff5ba3'
labelize-all-milestone-content:
if: github.event.action == 'closed'
needs: create-release-label
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Add label ${{ env.MILESTONE_VERSION }} to all PRs in the Milestone
run: |
prs=$(gh pr list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
for pr in $prs; do
gh pr edit $pr --add-label $MILESTONE_VERSION
done
- name: Add label ${{ env.MILESTONE_VERSION }} to all issues in the Milestone
run: |
issues=$(gh issue list --search milestone:"$MILESTONE_VERSION" --limit 1000 --state all --json number --template '{{range .}}{{tablerow (printf "%v" .number)}}{{end}}')
for issue in $issues; do
gh issue edit $issue --add-label $MILESTONE_VERSION
done
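
The patch-release detection above hinges entirely on the third version component; here is a standalone, hedged rerun of that check (the milestone titles are invented):

```bash
# vX.Y.0 -> minor/major release: the roadmap and changelog issues get created.
# vX.Y.Z with Z != 0 -> patch release: no issues are created.
# Anything else -> the job fails.
for MILESTONE_VERSION in v0.30.0 v0.29.2 0.29.2; do
  if [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
    echo "$MILESTONE_VERSION: NOT a patch release"
  elif [[ $MILESTONE_VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "$MILESTONE_VERSION: patch release"
  else
    echo "$MILESTONE_VERSION: invalid milestone title (should be vX.Y.Z)"
  fi
done
```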


@ -1,13 +1,41 @@
on:
workflow_dispatch:
schedule:
- cron: '0 2 * * *' # Every day at 2:00am
release:
types: [published]
name: Publish binaries to release
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
# No need to check the version for dry run (cron)
steps:
- uses: actions/checkout@v3
# Check if the tag has the v<number>.<number>.<number> format.
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
- name: Check tag format
if: github.event_name == 'release'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
- name: Check release validity
if: github.event_name == 'release' && steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
publish:
name: Publish binary for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: check-version
strategy:
fail-fast: false
matrix:
@ -27,20 +55,61 @@ jobs:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build
run: cargo build --release --locked
# No need to upload binaries for dry run (cron)
- name: Upload binaries to release
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: target/release/${{ matrix.artifact_name }}
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-macos-apple-silicon:
name: Publish binary for macOS silicon
runs-on: ${{ matrix.os }}
needs: check-version
continue-on-error: false
strategy:
fail-fast: false
matrix:
include:
- os: macos-latest
target: aarch64-apple-darwin
asset_name: meilisearch-macos-apple-silicon
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
target: ${{ matrix.target }}
override: true
- name: Cargo build
uses: actions-rs/cargo@v1
with:
command: build
args: --release --target ${{ matrix.target }}
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-aarch64:
name: Publish binary for aarch64
runs-on: ${{ matrix.os }}
needs: check-version
continue-on-error: false
strategy:
fail-fast: false
@ -55,7 +124,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
@ -85,7 +154,6 @@ jobs:
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
@ -98,9 +166,11 @@ jobs:
run: ls -lR ./target
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}


@ -5,22 +5,31 @@ on:
types: [released]
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Check release validity
run: bash .github/scripts/check-release.sh
debian:
name: Publish debian package
runs-on: ubuntu-18.04
needs: check-version
steps:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build deb package
run: cargo deb -p meilisearch-http -o target/debian/meilisearch.deb
- name: Upload debian pkg to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: target/debian/meilisearch.deb
asset_name: meilisearch.deb
tag: ${{ github.ref }}
@ -30,6 +39,7 @@ jobs:
homebrew:
name: Bump Homebrew formula
runs-on: ubuntu-18.04
needs: check-version
steps:
- name: Create PR to Homebrew
uses: mislav/bump-homebrew-formula-action@v1


@ -0,0 +1,80 @@
---
on:
schedule:
- cron: '0 4 * * *' # Every day at 4:00am
push:
tags:
- '*'
name: Publish tagged images to Docker Hub
jobs:
docker:
runs-on: docker
steps:
- uses: actions/checkout@v3
# Check if the tag has the v<number>.<number>.<number> format. If yes, it means we are publishing an official release.
# In this situation, we need to set `output.stable` to create/update the following tags (additionally to the `vX.Y.Z` Docker tag):
# - a `vX.Y` (without patch version) Docker tag
# - a `latest` Docker tag
- name: Check tag format
if: github.event_name != 'schedule'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
# Check only the validity of the tag for official releases (not for pre-releases or other tags)
- name: Check release validity
if: github.event_name != 'schedule' && steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
if: github.event_name != 'schedule'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: getmeili/meilisearch
# The latest and `vX.Y` tags are only pushed for the official Meilisearch releases
# See https://github.com/docker/metadata-action#latest-tag
flavor: latest=false
tags: |
type=ref,event=tag
type=semver,pattern=v{{major}}.{{minor}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
type=raw,value=latest,enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
- name: Build and push
uses: docker/build-push-action@v3
with:
# We do not push tags for the cron jobs, this is only for test purposes
push: ${{ github.event_name != 'schedule' }}
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}
# /!\ Don't touch this without checking with Cloud team
- name: Send CI information to Cloud team
if: github.event_name != 'schedule'
uses: peter-evans/repository-dispatch@v2
with:
token: ${{ secrets.MEILI_BOT_GH_PAT }}
repository: meilisearch/meilisearch-cloud
event-type: cloud-docker-build
client-payload: '{ "meilisearch_version": "${{ github.ref_name }}", "stable": "${{ steps.check-tag-format.outputs.stable }}" }'
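
To make the metadata-action configuration above concrete, here is a sketch of the Docker tags it yields (the version `v0.29.1` is an invented example):

```bash
# Stable tag v0.29.1 (check-tag-format sets stable=true): three tags are pushed.
#   getmeili/meilisearch:v0.29.1   # type=ref,event=tag
#   getmeili/meilisearch:v0.29     # type=semver,pattern=v{{major}}.{{minor}}
#   getmeili/meilisearch:latest    # type=raw,value=latest
# Pre-release tag such as v0.29.1-rc1 (stable=false): only the first tag is pushed.
#   getmeili/meilisearch:v0.29.1-rc1
# Nightly cron run: the image is built for both platforms but nothing is pushed (push: false).
```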


@ -1,30 +0,0 @@
---
on:
release:
types: [released]
name: Publish latest image to Docker Hub
jobs:
docker-latest:
runs-on: docker
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
platforms: linux/amd64,linux/arm64
tags: getmeili/meilisearch:latest


@ -1,39 +0,0 @@
---
on:
push:
tags:
- '*'
name: Publish tagged image to Docker Hub
jobs:
docker-tag:
runs-on: docker
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: getmeili/meilisearch
flavor: latest=false
tags: type=ref,event=tag
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}


@ -12,6 +12,7 @@ on:
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
RUSTFLAGS: "-D warnings"
jobs:
tests:
@ -22,9 +23,9 @@ jobs:
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo check without any default features
uses: actions-rs/cargo@v1
with:
@ -41,14 +42,14 @@ jobs:
name: Run tests in debug
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run tests in debug
uses: actions-rs/cargo@v1
with:
@ -59,7 +60,7 @@ jobs:
name: Run Clippy
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
@ -67,7 +68,7 @@ jobs:
override: true
components: clippy
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo clippy
uses: actions-rs/cargo@v1
with:
@ -78,14 +79,14 @@ jobs:
name: Run Rustfmt
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
toolchain: stable
override: true
components: rustfmt
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo fmt
run: cargo fmt --all -- --check


@ -0,0 +1,47 @@
name: Update Meilisearch version in all Cargo.toml files
on:
workflow_dispatch:
inputs:
new_version:
description: 'The new version (vX.Y.Z)'
required: true
env:
NEW_VERSION: ${{ github.event.inputs.new_version }}
NEW_BRANCH: update-version-${{ github.event.inputs.new_version }}
GH_TOKEN: ${{ secrets.MEILI_BOT_GH_PAT }}
jobs:
update-version-cargo-toml:
name: Update version in Cargo.toml files
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Install sd
run: cargo install sd
- name: Update Cargo.toml files
run: |
raw_new_version=$(echo $NEW_VERSION | cut -d 'v' -f 2)
new_string="version = \"$raw_new_version\""
sd '^version = "\d+.\d+.\w+"$' "$new_string" */Cargo.toml
- name: Build Meilisearch to update Cargo.lock
run: cargo build
- name: Commit and push the changes to the ${{ env.NEW_BRANCH }} branch
uses: EndBug/add-and-commit@v9
with:
message: "Update version for the next release (${{ env.NEW_VERSION }}) in Cargo.toml files"
new_branch: ${{ env.NEW_BRANCH }}
- name: Create the PR pointing to ${{ github.ref_name }}
run: |
gh pr create \
--title "Update version for the next release ($NEW_VERSION) in Cargo.toml files" \
--body '⚠️ This PR is automatically generated. Check the new version is the expected one before merging.' \
--label 'skip changelog' \
--milestone $NEW_VERSION

7
.gitignore vendored

@ -7,3 +7,10 @@
/data.ms
/snapshots
/dumps
# Snapshots
## ... large
*.full.snap
## ... unreviewed
*.snap.new

5
.rustfmt.toml Normal file

@ -0,0 +1,5 @@
unstable_features = true
use_small_heuristics = "max"
imports_granularity = "Module"
group_imports = "StdExternalCrate"


@ -4,15 +4,33 @@ First, thank you for contributing to Meilisearch! The goal of this document is t
Remember that there are many ways to contribute other than writing code: writing [tutorials or blog posts](https://github.com/meilisearch/awesome-meilisearch), improving [the documentation](https://github.com/meilisearch/documentation), submitting [bug reports](https://github.com/meilisearch/meilisearch/issues/new?assignees=&labels=&template=bug_report.md&title=) and [feature requests](https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal)...
The code in this repository is only concerned with managing multiple indexes, handling the update store, and exposing an HTTP API. Search and indexation are the domain of our core engine, [`milli`](https://github.com/meilisearch/milli), while tokenization is handled by [our `charabia` library](https://github.com/meilisearch/charabia/).
If Meilisearch does not offer optimized support for your language, please consider contributing to `charabia` by following the [CONTRIBUTING.md file](https://github.com/meilisearch/charabia/blob/main/CONTRIBUTING.md) and integrating your intended normalizer/segmenter.
## Table of Contents
- [Hacktoberfest 2022](#hacktoberfest-2022)
- [Assumptions](#assumptions)
- [How to Contribute](#how-to-contribute)
- [Development Workflow](#development-workflow)
- [Git Guidelines](#git-guidelines)
- [Release Process (for internal team only)](#release-process-for-internal-team-only)
## Hacktoberfest 2022
It's [Hacktoberfest month](https://hacktoberfest.com)! 🥳
Thanks so much for participating with Meilisearch this year!
1. We will follow the quality standards set by the organizers of Hacktoberfest (see details on their [website](https://hacktoberfest.com/participation/#spam)). Our reviewers will not consider any PR that doesn't match that standard.
2. PR reviews will take place from Monday to Thursday, during usual working hours, CEST time. If you submit outside of these hours, there's no need to panic; we will get around to your contribution.
3. There will be no issue assignment as we don't want people to ask to be assigned specific issues and never return, discouraging the volunteer contributors from opening a PR to fix this issue. We take the liberty to choose the PR that best fixes the issue, so we encourage you to get to it as soon as possible and do your best!
You can check out the longer, more complete guideline documentation [here](https://github.com/meilisearch/.github/blob/main/Hacktoberfest_2022_contributors_guidelines.md).
## Assumptions
1. **You're familiar with [Github](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
1. **You're familiar with [GitHub](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
2. **You've read the Meilisearch [documentation](https://docs.meilisearch.com).**
3. **You know about the [Meilisearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
Please use this for help.**
@ -22,7 +40,7 @@ Remember that there are many ways to contribute other than writing code: writing
1. Ensure your change has an issue! Find an
[existing issue](https://github.com/meilisearch/meilisearch/issues/) or [open a new issue](https://github.com/meilisearch/meilisearch/issues/new).
* This is where you can get a feel if the change will be accepted or not.
2. Once approved, [fork the Meilisearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own Github account.
2. Once approved, [fork the Meilisearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own GitHub account.
3. [Create a new Git branch](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository)
4. Review the [Development Workflow](#development-workflow) section that describes the steps to maintain the repository.
5. Make your changes on your branch.
@ -44,6 +62,8 @@ We recommend using the `--release` flag to test the full performance of Meilisea
cargo test
```
This command will be triggered for each PR as a requirement for merging it.
If you get a "Too many open files" error you might want to increase the open file limit using this command:
```bash
@ -68,7 +88,7 @@ As minimal requirements, your commit message should:
We don't follow any other convention, but if you want to use one, we recommend [the Chris Beams one](https://chris.beams.io/posts/git-commit/).
### Github Pull Requests
### GitHub Pull Requests
Some notes on GitHub PRs:
@ -78,6 +98,29 @@ Some notes on GitHub PRs:
The draft PRs are recommended when you want to show that you are working on something and make your work visible.
- The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [Bors](https://github.com/bors-ng/bors-ng) to automatically enforce this requirement without the PR author having to rebase manually.
## Release Process (for internal team only)
Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
### Automation to rebase and Merge the PRs
This project integrates a bot that helps us manage pull request merging.<br>
_[Read more about this](https://github.com/meilisearch/integration-guides/blob/main/resources/bors.md)._
### How to Publish a new Release
The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/core-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.
### Release assets
For each release, the following assets are created:
- Binaries for different platforms (Linux, MacOS, Windows and ARM architectures) are attached to the GitHub release
- Binaries are pushed to Homebrew and APT (not published for RC)
- Docker tags are created/updated:
- `vX.Y.Z`
- `vX.Y` (not published for RC)
- `latest` (not published for RC)
<hr>
Thank you again for reading this through; we cannot wait to begin working with you if you made your way through this contributing guide ❤️

2219
Cargo.lock generated

File diff suppressed because it is too large.


@ -2,8 +2,20 @@
resolver = "2"
members = [
"meilisearch-http",
"meilisearch-error",
"meilisearch-lib",
"meilisearch-types",
"meilisearch-auth",
"meili-snap",
"index-scheduler",
"dump",
"file-store",
"permissive-json-pointer",
]
[profile.release]
codegen-units = 1
[profile.dev.package.flate2]
opt-level = 3
[profile.dev.package.milli]
opt-level = 3


@ -1,5 +1,5 @@
# Compile
FROM rust:alpine3.14 AS compiler
FROM rust:alpine3.16 AS compiler
RUN apk add -q --update-cache --no-cache build-base openssl-dev
@ -19,7 +19,7 @@ RUN set -eux; \
cargo build --release
# Run
FROM alpine:3.14
FROM alpine:3.16
ENV MEILI_HTTP_ADDR 0.0.0.0:7700
ENV MEILI_SERVER_PROVIDER docker

220
README.md

@ -1,205 +1,111 @@
<p align="center">
<img src="assets/logo.svg" alt="Meilisearch" width="200" height="200" />
<img src="assets/meilisearch-logo-light.svg?sanitize=true#gh-light-mode-only">
<img src="assets/meilisearch-logo-dark.svg?sanitize=true#gh-dark-mode-only">
</p>
<h1 align="center">Meilisearch</h1>
<h4 align="center">
<a href="https://www.meilisearch.com">Website</a> |
<a href="https://roadmap.meilisearch.com/tabs/1-under-consideration">Roadmap</a> |
<a href="https://blog.meilisearch.com">Blog</a> |
<a href="https://fr.linkedin.com/company/meilisearch">LinkedIn</a> |
<a href="https://twitter.com/meilisearch">Twitter</a> |
<a href="https://docs.meilisearch.com">Documentation</a> |
<a href="https://docs.meilisearch.com/faq/">FAQ</a>
<a href="https://docs.meilisearch.com/faq/">FAQ</a> |
<a href="https://slack.meilisearch.com">Slack</a>
</h4>
<p align="center">
<a href="https://github.com/meilisearch/meilisearch/actions"><img src="https://github.com/meilisearch/meilisearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/meilisearch"><img src="https://deps.rs/repo/github/meilisearch/meilisearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/meilisearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://slack.meilisearch.com"><img src="https://img.shields.io/badge/slack-meilisearch-blue.svg?logo=slack" alt="Slack"></a>
<a href="https://github.com/meilisearch/meilisearch/discussions" alt="Discussions"><img src="https://img.shields.io/badge/github-discussions-red" /></a>
<a href="https://app.bors.tech/repositories/26457"><img src="https://bors.tech/images/badge_small.svg" alt="Bors enabled"></a>
</p>
<p align="center">Lightning Fast, Ultra Relevant, and Typo-Tolerant Search Engine 🔍</p>
<p align="center">A lightning-fast search engine that fits effortlessly into your apps, websites, and workflow 🔍</p>
**Meilisearch** is a powerful, fast, open-source, easy to use and deploy search engine. Both searching and indexing are highly customizable. Features such as typo-tolerance, filters, and synonyms are provided out-of-the-box.
For more information about features go to [our documentation](https://docs.meilisearch.com/).
Meilisearch helps you shape a delightful search experience in a snap, offering features that work out-of-the-box to speed up your workflow.
<p align="center">
<img src="assets/trumen-fast.gif" alt="Web interface gif" />
<p align="center" name="demo">
<a href="https://where2watch.meilisearch.com/#gh-light-mode-only" target="_blank">
<img src="assets/demo-light.gif#gh-light-mode-only" alt="A bright colored application for finding movies screening near the user">
</a>
<a href="https://where2watch.meilisearch.com/#gh-dark-mode-only" target="_blank">
<img src="assets/demo-dark.gif#gh-dark-mode-only" alt="A dark colored application for finding movies screening near the user">
</a>
</p>
🔥 [**Try it!**](https://where2watch.meilisearch.com/) 🔥
## 🎃 Hacktoberfest
It's Hacktoberfest 2022 @Meilisearch
[Hacktoberfest](https://hacktoberfest.com/) is a celebration of the open-source community. This year, and for the third time in a row, Meilisearch is participating in this fantastic event.
You'd like to contribute? Don't hesitate to check out our [contributing guidelines](./CONTRIBUTING.md).
## ✨ Features
* Search-as-you-type experience (answers < 50 milliseconds)
* Full-text search
* Typo tolerant (understands typos and misspelling)
* Faceted search and filters
* Supports hanzi (Chinese characters)
* Supports synonyms
* Easy to install, deploy, and maintain
* Whole documents are returned
* Highly customizable
* RESTful API
## Getting started
- **Search-as-you-type:** find search results in less than 50 milliseconds
- **[Typo tolerance](https://docs.meilisearch.com/learn/getting_started/customizing_relevancy.html#typo-tolerance):** get relevant matches even when queries contain typos and misspellings
- **[Filtering and faceted search](https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html):** enhance your user's search experience with custom filters and build a faceted search interface in a few lines of code
- **[Sorting](https://docs.meilisearch.com/learn/advanced/sorting.html):** sort results based on price, date, or pretty much anything else your users need
- **[Synonym support](https://docs.meilisearch.com/learn/getting_started/customizing_relevancy.html#synonyms):** configure synonyms to include more relevant content in your search results
- **[Geosearch](https://docs.meilisearch.com/learn/advanced/geosearch.html):** filter and sort documents based on geographic data
- **[Extensive language support](https://docs.meilisearch.com/learn/what_is_meilisearch/language.html):** search datasets in any language, with optimized support for Chinese, Japanese, Hebrew, and languages using the Latin alphabet
- **[Security management](https://docs.meilisearch.com/learn/security/master_api_keys.html):** control which users can access what data with API keys that allow fine-grained permissions handling
- **[Multi-Tenancy](https://docs.meilisearch.com/learn/security/tenant_tokens.html):** personalize search results for any number of application tenants
- **Highly Customizable:** customize Meilisearch to your specific needs or use our out-of-the-box and hassle-free presets
- **[RESTful API](https://docs.meilisearch.com/reference/api/overview.html):** integrate Meilisearch in your technical stack with our plugins and SDKs
- **Easy to install, deploy, and maintain**
### Deploy the Server
## 📖 Documentation
#### Homebrew (Mac OS)
You can consult Meilisearch's documentation at [https://docs.meilisearch.com](https://docs.meilisearch.com/).
```bash
brew update && brew install meilisearch
meilisearch
```
## 🚀 Getting started
#### Docker
For basic instructions on how to set up Meilisearch, add documents to an index, and search for documents, take a look at our [Quick Start](https://docs.meilisearch.com/learn/getting_started/quick_start.html) guide.
```bash
docker run -p 7700:7700 -v "$(pwd)/data.ms:/data.ms" getmeili/meilisearch
```
You may also want to check out [Meilisearch 101](https://docs.meilisearch.com/learn/getting_started/filtering_and_sorting.html) for an introduction to some of Meilisearch's most popular features.
#### Announcing a cloud-hosted Meilisearch
## ☁️ Meilisearch cloud
Join the closed beta by filling out this [form](https://meilisearch.typeform.com/to/FtnzvZfh).
Join the closed beta for Meilisearch cloud by filling out [this form](https://meilisearch.typeform.com/to/VI2cI2rv).
#### Try Meilisearch in our Sandbox
## 🧰 SDKs & integration tools
Create a Meilisearch instance in [Meilisearch Sandbox](https://sandbox.meilisearch.com/). This instance is free, and will be active for 48 hours.
Install one of our SDKs in your project for seamless integration between Meilisearch and your favorite language or framework!
#### Run on Digital Ocean
Take a look at the complete [Meilisearch integration list](https://docs.meilisearch.com/learn/what_is_meilisearch/sdks.html).
[![DigitalOcean Marketplace](assets/do-btn-blue.svg)](https://marketplace.digitalocean.com/apps/meilisearch?action=deploy&refcode=7c67bd97e101)
![Logos belonging to different languages and frameworks supported by Meilisearch, including React, Ruby on Rails, Go, Rust, and PHP](assets/integrations.png)
#### Deploy on Platform.sh
## ⚙️ Advanced usage
<a href="https://console.platform.sh/projects/create-project?template=https://raw.githubusercontent.com/platformsh/template-builder/master/templates/meilisearch/.platform.template.yaml&utm_content=meilisearch&utm_source=github&utm_medium=button&utm_campaign=deploy_on_platform">
<img src="https://platform.sh/images/deploy/lg-blue.svg" alt="Deploy on Platform.sh" width="180px" />
</a>
Experienced users will want to keep our [API Reference](https://docs.meilisearch.com/reference/api) close at hand.
#### APT (Debian & Ubuntu)
We also offer a wide range of dedicated guides to all Meilisearch features, such as [filtering](https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html), [sorting](https://docs.meilisearch.com/learn/advanced/sorting.html), [geosearch](https://docs.meilisearch.com/learn/advanced/geosearch.html), [API keys](https://docs.meilisearch.com/learn/security/master_api_keys.html), and [tenant tokens](https://docs.meilisearch.com/learn/security/tenant_tokens.html).
```bash
echo "deb [trusted=yes] https://apt.fury.io/meilisearch/ /" > /etc/apt/sources.list.d/fury.list
apt update && apt install meilisearch-http
meilisearch
```
Finally, for more in-depth information, refer to our articles explaining fundamental Meilisearch concepts such as [documents](https://docs.meilisearch.com/learn/core_concepts/documents.html) and [indexes](https://docs.meilisearch.com/learn/core_concepts/indexes.html).
#### Download the binary (Linux & Mac OS)
## 📊 Telemetry
```bash
curl -L https://install.meilisearch.com | sh
./meilisearch
```
Meilisearch collects **anonymized** data from users to help us improve our product. You can [deactivate this](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html#how-to-disable-data-collection) whenever you want.
#### Compile and run it from sources
To request deletion of collected data, please write to us at [privacy@meilisearch.com](mailto:privacy@meilisearch.com). Don't forget to include your `Instance UID` in the message, as this helps us quickly find and delete your data.
If you have the latest stable Rust toolchain installed on your local system, clone the repository and change it to your working directory.
If you want to know more about the kind of data we collect and what we use it for, check the [telemetry section](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html) of our documentation.
```bash
git clone https://github.com/meilisearch/meilisearch.git
cd meilisearch
cargo run --release
```
## 📫 Get in touch!
### Create an Index and Upload Some Documents
Meilisearch is a search engine created by [Meili](https://www.welcometothejungle.com/en/companies/meilisearch), a software development company based in France and with team members all over the world. Want to know more about us? [Check out our blog!](https://blog.meilisearch.com/)
Let's create an index! If you need a sample dataset, use [this movie database](https://www.notion.so/meilisearch/A-movies-dataset-to-test-Meili-1cbf7c9cfa4247249c40edfa22d7ca87#b5ae399b81834705ba5420ac70358a65). You can also find it in the `datasets/` directory.
🗞 [Subscribe to our newsletter](https://meilisearch.us2.list-manage.com/subscribe?u=27870f7b71c908a8b359599fb&id=79582d828e) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.
```bash
curl -L 'https://bit.ly/2PAcw9l' -o movies.json
```
💌 Want to make a suggestion or give feedback? Here are some of the channels where you can reach us:
Now, you're ready to index some data.
- For feature requests, please visit our [product repository](https://github.com/meilisearch/product/discussions)
- Found a bug? Open an [issue](https://github.com/meilisearch/meilisearch/issues)!
- Want to be part of our Slack community? [Join us!](https://slack.meilisearch.com/)
- For everything else, please check [this page listing some of the other places where you can find us](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html)
```bash
curl -i -X POST 'http://127.0.0.1:7700/indexes/movies/documents' \
--header 'content-type: application/json' \
--data-binary @movies.json
```
### Search for Documents
#### In command line
The search engine is now aware of your documents and can serve those via an HTTP server.
The [`jq` command-line tool](https://stedolan.github.io/jq/) can greatly help you read the server responses.
```bash
curl 'http://127.0.0.1:7700/indexes/movies/search?q=botman+robin&limit=2' | jq
```
```json
{
"hits": [
{
"id": "415",
"title": "Batman & Robin",
"poster": "https://image.tmdb.org/t/p/w1280/79AYCcxw3kSKbhGpx1LiqaCAbwo.jpg",
"overview": "Along with crime-fighting partner Robin and new recruit Batgirl, Batman battles the dual threat of frosty genius Mr. Freeze and homicidal horticulturalist Poison Ivy. Freeze plans to put Gotham City on ice, while Ivy tries to drive a wedge between the dynamic duo.",
"release_date": 866768400
},
{
"id": "411736",
"title": "Batman: Return of the Caped Crusaders",
"poster": "https://image.tmdb.org/t/p/w1280/GW3IyMW5Xgl0cgCN8wu96IlNpD.jpg",
"overview": "Adam West and Burt Ward returns to their iconic roles of Batman and Robin. Featuring the voices of Adam West, Burt Ward, and Julie Newmar, the film sees the superheroes going up against classic villains like The Joker, The Riddler, The Penguin and Catwoman, both in Gotham City… and in space.",
"release_date": 1475888400
}
],
"nbHits": 8,
"exhaustiveNbHits": false,
"query": "botman robin",
"limit": 2,
"offset": 0,
"processingTimeMs": 2
}
```
#### Use the Web Interface
We also deliver an **out-of-the-box [web interface](https://github.com/meilisearch/mini-dashboard)** in which you can test Meilisearch interactively.
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter Meilisearch's address to visit it. This will lead you to a web page with a search bar that will allow you to search in the selected index.
| [See the gif above](#demo)
## Documentation
Now that your Meilisearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
## Contributing
Hey! We're glad you're thinking about contributing to Meilisearch! Feel free to pick an [issue labeled as `good first issue`](https://github.com/meilisearch/meilisearch/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22), and to ask any question you need. Some points might not be clear and we are available to help you!
Also, we recommend following the [CONTRIBUTING](./CONTRIBUTING.md) guide to create your PR.
## Core engine and tokenizer
The code in this repository is only concerned with managing multiple indexes, handling the update store, and exposing an HTTP API.
Search and indexation are the domain of our core engine, [`milli`](https://github.com/meilisearch/milli), while tokenization is handled by [our `tokenizer` library](https://github.com/meilisearch/tokenizer/).
## Telemetry
Meilisearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of Meilisearch features.
To find out more on what information we're retrieving, please see our documentation on [Telemetry](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html).
This program is optional; you can disable these analytics by using the `MEILI_NO_ANALYTICS` env variable.
## Feature request
The feature requests are not managed in this repository. Please visit our [dedicated repository](https://github.com/meilisearch/product) to see our work about the Meilisearch product.
If you have a feature request or any feedback about an existing feature, please open [a discussion](https://github.com/meilisearch/product/discussions).
Also, feel free to participate in the current discussions; we are looking forward to reading your comments.
## 💌 Contact
Please visit [this page](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html#contact-us).
Meilisearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!
Thank you for your support!


@ -4,7 +4,7 @@ Meilisearch takes the security of our software products and services seriously.
If you believe you have found a security vulnerability in any Meilisearch-owned repository, please report it to us as described below.
## Suported versions
## Supported versions
As long as we are pre-v1.0, only the latest version of Meilisearch will be supported with security updates.

BIN
assets/demo-dark.gif Normal file

Binary file not shown (2.8 MiB).

BIN
assets/demo-light.gif Normal file

Binary file not shown (1.7 MiB).

BIN
assets/integrations.png Normal file

Binary file not shown (799 KiB).


@ -0,0 +1,30 @@
<svg width="495" height="74" viewBox="0 0 495 74" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M181.842 42.5349C181.842 37.6137 184.201 34.715 188.718 34.715C192.965 34.715 194.381 37.7486 194.381 41.6585V62.6238H203.953V40.5799C203.953 32.3556 199.639 26.4907 191.145 26.4907C186.089 26.4907 182.516 28.0412 179.415 31.4792C177.393 28.3782 173.955 26.4907 169.168 26.4907C164.112 26.4907 160.607 28.5805 158.989 31.614V27.2996H150.158V62.6238H159.731V42.3326C159.731 37.6137 162.157 34.715 166.607 34.715C170.854 34.715 172.269 37.7486 172.269 41.6585V62.6238H181.842V42.5349Z" fill="white"/>
<path d="M243.245 47.7256C243.245 47.7256 243.379 46.4448 243.379 44.8943C243.379 34.4454 236.301 26.4907 225.852 26.4907C215.403 26.4907 208.123 34.4454 208.123 44.8943C208.123 55.7477 215.471 63.4327 225.92 63.4327C234.077 63.4327 240.548 58.5116 242.638 51.3659H232.998C231.852 53.9276 229.088 55.2084 226.189 55.2084C221.403 55.2084 218.302 52.5793 217.628 47.7256H243.245ZM225.785 34.1757C230.234 34.1757 233.133 36.8722 233.807 40.8495H217.763C218.572 36.8048 221.403 34.1757 225.785 34.1757Z" fill="white"/>
<path d="M244.791 35.524H249.038V62.6238H258.61V27.2996H244.791V35.524ZM253.824 22.7156C257.195 22.7156 259.622 20.3561 259.622 16.9855C259.622 13.6149 257.195 11.188 253.824 11.188C250.454 11.188 248.027 13.6149 248.027 16.9855C248.027 20.3561 250.454 22.7156 253.824 22.7156Z" fill="white"/>
<path d="M278.432 54.3995C278.163 54.3995 277.758 54.4669 277.152 54.4669C274.994 54.4669 274.725 53.4557 274.725 51.9726V12.0644H265.152V52.6467C265.152 59.6576 267.849 62.7586 275.466 62.7586C276.747 62.7586 277.96 62.6238 278.432 62.5564V54.3995Z" fill="white"/>
<path d="M279.521 35.524H283.768V62.6238H293.341V27.2996H279.521V35.524ZM288.555 22.7156C291.925 22.7156 294.352 20.3561 294.352 16.9855C294.352 13.6149 291.925 11.188 288.555 11.188C285.184 11.188 282.757 13.6149 282.757 16.9855C282.757 20.3561 285.184 22.7156 288.555 22.7156Z" fill="white"/>
<path d="M312.557 62.9937C321.86 62.9937 326.242 58.0726 326.242 52.8819C326.242 38.4556 305.007 46.4777 305.007 36.9725C305.007 33.8716 307.636 31.2425 312.962 31.2425C318.422 31.2425 320.984 34.2086 321.388 37.9163H326.175C325.77 33.2648 322.602 27.0629 313.097 27.0629C304.94 27.0629 300.356 31.9166 300.356 37.1748C300.356 51.264 321.591 43.1745 321.591 53.0167C321.591 56.4547 318.355 58.8142 312.557 58.8142C306.625 58.8142 303.659 55.848 303.322 51.4662H298.468C298.873 57.4659 302.648 62.9937 312.557 62.9937Z" fill="white"/>
<path d="M364.256 46.4103C364.256 46.4103 364.324 45.3317 364.324 44.5901C364.324 34.8827 358.054 27.0629 347.808 27.0629C337.494 27.0629 330.955 35.4894 330.955 44.9946C330.955 54.6346 337.022 62.9937 347.875 62.9937C356.032 62.9937 361.695 58.0052 363.717 51.4662H358.729C357.245 55.6458 353.201 58.6794 347.943 58.6794C340.729 58.6794 336.213 53.3538 335.741 46.4103H364.256ZM347.808 31.3773C354.549 31.3773 358.931 35.8939 359.538 42.5004H335.876C336.685 36.1636 341.134 31.3773 347.808 31.3773Z" fill="white"/>
<path d="M394.037 45.871V49.1068C394.037 54.9717 389.79 59.0164 381.634 59.0164C376.578 59.0164 373.814 56.9266 373.814 52.41C373.814 50.118 374.892 48.3652 376.578 47.4215C378.33 46.4777 380.69 45.871 394.037 45.871ZM381.094 62.9937C387.027 62.9937 391.813 61.1062 394.24 57.1963V62.1848H398.824V39.7364C398.824 32.1188 394.442 27.0629 384.532 27.0629C375.027 27.0629 370.848 31.8492 369.971 37.9837H374.623C375.566 33.13 379.274 31.1751 384.33 31.1751C390.802 31.1751 394.037 33.8716 394.037 39.669V41.8936C383.184 41.8936 378.667 42.0959 375.297 43.4441C371.387 44.9946 369.095 48.4327 369.095 52.5448C369.095 58.5445 372.937 62.9937 381.094 62.9937Z" fill="white"/>
<path d="M424.991 27.6022C424.991 27.6022 424.182 27.5348 423.845 27.5348C417.509 27.5348 414.138 30.838 412.857 33.1974V27.8718H408.273V62.1848H413.059V42.7026C413.059 35.5569 417.441 32.0514 423.306 32.0514C424.182 32.0514 424.991 32.1188 424.991 32.1188V27.6022Z" fill="white"/>
<path d="M425.809 45.062C425.809 54.4324 432.28 62.9937 442.729 62.9937C452.032 62.9937 457.425 56.7918 458.773 49.9831H453.92C452.504 55.3087 448.594 58.6794 442.729 58.6794C435.516 58.6794 430.662 52.9493 430.662 45.062C430.662 37.1073 435.516 31.3773 442.729 31.3773C448.594 31.3773 452.504 34.7479 453.92 40.0735H458.773C457.425 33.2648 452.032 27.0629 442.729 27.0629C432.28 27.0629 425.809 35.6243 425.809 45.062Z" fill="white"/>
<path d="M470.041 11.6254H465.255V62.1848H470.041V41.8936C470.041 34.8827 474.558 31.2425 480.355 31.2425C486.49 31.2425 489.389 35.0176 489.389 41.2195V62.1848H494.175V40.2757C494.175 32.6581 489.658 27.0629 481.164 27.0629C474.76 27.0629 471.255 30.5683 470.041 32.6581V11.6254Z" fill="white"/>
<path d="M0.825012 73.993L24.0688 14.5224C27.3443 6.14179 35.4223 0.625977 44.4203 0.625977H58.4336L35.1899 60.0966C31.9143 68.4772 23.8363 73.993 14.8384 73.993H0.825012Z" fill="url(#paint0_linear_0_3)"/>
<path d="M34.9246 73.9932L58.1684 14.5226C61.444 6.14197 69.5219 0.626152 78.5199 0.626152H92.5333L69.2895 60.0968C66.014 68.4774 57.936 73.9932 48.938 73.9932H34.9246Z" fill="url(#paint1_linear_0_3)"/>
<path d="M69.0262 73.9932L92.27 14.5226C95.5456 6.14197 103.624 0.626152 112.622 0.626152H126.635L103.391 60.0968C100.116 68.4774 92.0376 73.9932 83.0396 73.9932H69.0262Z" fill="url(#paint2_linear_0_3)"/>
<defs>
<linearGradient id="paint0_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

Size: 5.8 KiB


@ -0,0 +1,30 @@
<svg width="495" height="74" viewBox="0 0 495 74" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M181.84 42.5347C181.84 37.6136 184.199 34.7149 188.716 34.7149C192.963 34.7149 194.378 37.7484 194.378 41.6584V62.6237H203.951V40.5798C203.951 32.3554 199.637 26.4906 191.143 26.4906C186.087 26.4906 182.514 28.041 179.413 31.4791C177.39 28.3781 173.952 26.4906 169.166 26.4906C164.11 26.4906 160.605 28.5804 158.987 31.6139V27.2995H150.156V62.6237H159.728V42.3325C159.728 37.6136 162.155 34.7149 166.604 34.7149C170.851 34.7149 172.267 37.7484 172.267 41.6584V62.6237H181.84V42.5347Z" fill="#21004B"/>
<path d="M243.242 47.7255C243.242 47.7255 243.377 46.4447 243.377 44.8942C243.377 34.4452 236.299 26.4906 225.85 26.4906C215.401 26.4906 208.12 34.4452 208.12 44.8942C208.12 55.7476 215.468 63.4326 225.917 63.4326C234.074 63.4326 240.546 58.5115 242.636 51.3658H232.996C231.85 53.9274 229.086 55.2083 226.187 55.2083C221.401 55.2083 218.3 52.5792 217.626 47.7255H243.242ZM225.783 34.1756C230.232 34.1756 233.131 36.8721 233.805 40.8494H217.76C218.569 36.8047 221.401 34.1756 225.783 34.1756Z" fill="#21004B"/>
<path d="M244.789 35.5238H249.036V62.6237H258.608V27.2995H244.789V35.5238ZM253.822 22.7155C257.193 22.7155 259.619 20.356 259.619 16.9854C259.619 13.6148 257.193 11.1879 253.822 11.1879C250.451 11.1879 248.024 13.6148 248.024 16.9854C248.024 20.356 250.451 22.7155 253.822 22.7155Z" fill="#21004B"/>
<path d="M278.43 54.3993C278.16 54.3993 277.756 54.4667 277.149 54.4667C274.992 54.4667 274.722 53.4556 274.722 51.9725V12.0643H265.15V52.6466C265.15 59.6575 267.846 62.7585 275.464 62.7585C276.745 62.7585 277.958 62.6237 278.43 62.5562V54.3993Z" fill="#21004B"/>
<path d="M279.519 35.5238H283.766V62.6237H293.339V27.2995H279.519V35.5238ZM288.553 22.7155C291.923 22.7155 294.35 20.356 294.35 16.9854C294.35 13.6148 291.923 11.1879 288.553 11.1879C285.182 11.1879 282.755 13.6148 282.755 16.9854C282.755 20.356 285.182 22.7155 288.553 22.7155Z" fill="#21004B"/>
<path d="M312.557 62.9939C321.86 62.9939 326.242 58.0728 326.242 52.882C326.242 38.4557 305.007 46.4778 305.007 36.9726C305.007 33.8717 307.636 31.2426 312.962 31.2426C318.422 31.2426 320.984 34.2087 321.388 37.9164H326.175C325.77 33.265 322.602 27.063 313.097 27.063C304.94 27.063 300.356 31.9167 300.356 37.1749C300.356 51.2641 321.591 43.1746 321.591 53.0168C321.591 56.4548 318.355 58.8143 312.557 58.8143C306.625 58.8143 303.659 55.8481 303.322 51.4663H298.468C298.872 57.466 302.648 62.9939 312.557 62.9939Z" fill="#21004B"/>
<path d="M364.256 46.4104C364.256 46.4104 364.324 45.3318 364.324 44.5903C364.324 34.8829 358.054 27.063 347.808 27.063C337.494 27.063 330.955 35.4896 330.955 44.9947C330.955 54.6347 337.022 62.9939 347.875 62.9939C356.032 62.9939 361.695 58.0053 363.717 51.4663H358.728C357.245 55.6459 353.201 58.6795 347.942 58.6795C340.729 58.6795 336.213 53.3539 335.741 46.4104H364.256ZM347.808 31.3774C354.549 31.3774 358.931 35.894 359.537 42.5005H335.876C336.685 36.1637 341.134 31.3774 347.808 31.3774Z" fill="#21004B"/>
<path d="M394.037 45.8711V49.1069C394.037 54.9718 389.79 59.0165 381.633 59.0165C376.578 59.0165 373.814 56.9267 373.814 52.4101C373.814 50.1181 374.892 48.3654 376.578 47.4216C378.33 46.4778 380.69 45.8711 394.037 45.8711ZM381.094 62.9939C387.026 62.9939 391.813 61.1063 394.24 57.1964V62.1849H398.824V39.7366C398.824 32.1189 394.442 27.063 384.532 27.063C375.027 27.063 370.847 31.8493 369.971 37.9838H374.623C375.566 33.1301 379.274 31.1752 384.33 31.1752C390.802 31.1752 394.037 33.8717 394.037 39.6691V41.8938C383.184 41.8938 378.667 42.096 375.297 43.4442C371.387 44.9947 369.095 48.4328 369.095 52.5449C369.095 58.5446 372.937 62.9939 381.094 62.9939Z" fill="#21004B"/>
<path d="M424.991 27.6023C424.991 27.6023 424.182 27.5349 423.845 27.5349C417.508 27.5349 414.138 30.8381 412.857 33.1975V27.872H408.273V62.1849H413.059V42.7027C413.059 35.557 417.441 32.0515 423.306 32.0515C424.182 32.0515 424.991 32.1189 424.991 32.1189V27.6023Z" fill="#21004B"/>
<path d="M425.809 45.0621C425.809 54.4325 432.28 62.9939 442.729 62.9939C452.032 62.9939 457.425 56.7919 458.773 49.9832H453.92C452.504 55.3088 448.594 58.6795 442.729 58.6795C435.516 58.6795 430.662 52.9494 430.662 45.0621C430.662 37.1075 435.516 31.3774 442.729 31.3774C448.594 31.3774 452.504 34.748 453.92 40.0736H458.773C457.425 33.265 452.032 27.063 442.729 27.063C432.28 27.063 425.809 35.6244 425.809 45.0621Z" fill="#21004B"/>
<path d="M470.041 11.6255H465.255V62.1849H470.041V41.8938C470.041 34.8829 474.558 31.2426 480.355 31.2426C486.49 31.2426 489.389 35.0177 489.389 41.2196V62.1849H494.175V40.2759C494.175 32.6582 489.658 27.063 481.164 27.063C474.76 27.063 471.255 30.5685 470.041 32.6582V11.6255Z" fill="#21004B"/>
<path d="M0.824951 73.993L24.0688 14.5224C27.3443 6.14179 35.4223 0.625977 44.4202 0.625977H58.4336L35.1898 60.0966C31.9143 68.4772 23.8363 73.993 14.8383 73.993H0.824951Z" fill="url(#paint0_linear_0_15)"/>
<path d="M34.9246 73.9932L58.1684 14.5226C61.4439 6.14197 69.5219 0.626152 78.5199 0.626152H92.5332L69.2894 60.0968C66.0139 68.4774 57.9359 73.9932 48.9379 73.9932H34.9246Z" fill="url(#paint1_linear_0_15)"/>
<path d="M69.0262 73.9932L92.27 14.5226C95.5455 6.14197 103.623 0.626152 112.621 0.626152H126.635L103.391 60.0968C100.115 68.4774 92.0375 73.9932 83.0395 73.9932H69.0262Z" fill="url(#paint2_linear_0_15)"/>
<defs>
<linearGradient id="paint0_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

Size: 5.9 KiB


@ -2,7 +2,7 @@ status = [
'Tests on ubuntu-18.04',
'Tests on macos-latest',
'Tests on windows-latest',
# 'Run Clippy',
'Run Clippy',
'Run Rustfmt',
'Run tests in debug',
]

135
config.toml Normal file
View File

@ -0,0 +1,135 @@
# This file shows the default configuration of Meilisearch.
# All variables are defined here: https://docs.meilisearch.com/learn/configuration/instance_options.html#environment-variables
db_path = "./data.ms"
# Designates the location where database files will be created and retrieved.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#database-path
env = "development"
# Configures the instance's environment. Value must be either `production` or `development`.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#environment
http_addr = "localhost:7700"
# The address on which the HTTP server will listen.
# master_key = "YOUR_MASTER_KEY_VALUE"
# Sets the instance's master key, automatically protecting all routes except GET /health.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#master-key
# no_analytics = true
# Deactivates Meilisearch's built-in telemetry when provided.
# Meilisearch automatically collects data from all instances that do not opt out using this flag.
# All gathered data is used solely for the purpose of improving Meilisearch, and can be deleted at any time.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#disable-analytics
http_payload_size_limit = "100 MB"
# Sets the maximum size of accepted payloads.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#payload-limit-size
log_level = "INFO"
# Defines how much detail should be present in Meilisearch's logs.
# Meilisearch currently supports five log levels, listed in order of increasing verbosity: `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`
# https://docs.meilisearch.com/learn/configuration/instance_options.html#log-level
max_index_size = "100 GiB"
# Sets the maximum size of the index.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#max-index-size
max_task_db_size = "100 GiB"
# Sets the maximum size of the task database.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#max-task-db-size
# max_indexing_memory = "2 GiB"
# Sets the maximum amount of RAM Meilisearch can use when indexing.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#max-indexing-memory
# max_indexing_threads = 4
# Sets the maximum number of threads Meilisearch can use during indexing.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#max-indexing-threads
disable_auto_batching = false
# Deactivates auto-batching when provided.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#disable-auto-batching
#############
### DUMPS ###
#############
dumps_dir = "dumps/"
# Sets the directory where Meilisearch will create dump files.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#dumps-destination
# import_dump = "./path/to/my/file.dump"
# Imports the dump file located at the specified path. Path must point to a .dump file.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#import-dump
ignore_missing_dump = false
# Prevents Meilisearch from throwing an error when `import_dump` does not point to a valid dump file.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ignore-missing-dump
ignore_dump_if_db_exists = false
# Prevents a Meilisearch instance with an existing database from throwing an error when using `import_dump`.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ignore-dump-if-db-exists
#################
### SNAPSHOTS ###
#################
schedule_snapshot = false
# Activates scheduled snapshots when provided.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#schedule-snapshot-creation
snapshot_dir = "snapshots/"
# Sets the directory where Meilisearch will store snapshots.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#snapshot-destination
snapshot_interval_sec = 86400
# Defines the interval between each snapshot. Value must be given in seconds.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#snapshot-interval
# import_snapshot = "./path/to/my/snapshot"
# Launches Meilisearch after importing a previously-generated snapshot at the given filepath.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#import-snapshot
ignore_missing_snapshot = false
# Prevents a Meilisearch instance from throwing an error when `import_snapshot` does not point to a valid snapshot file.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ignore-missing-snapshot
ignore_snapshot_if_db_exists = false
# Prevents a Meilisearch instance with an existing database from throwing an error when using `import_snapshot`.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ignore-snapshot-if-db-exists
###########
### SSL ###
###########
# ssl_auth_path = "./path/to/root"
# Enables client authentication in the specified path.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-authentication-path
# ssl_cert_path = "./path/to/certfile"
# Sets the server's SSL certificates.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-certificates-path
# ssl_key_path = "./path/to/private-key"
# Sets the server's SSL key files.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-key-path
# ssl_ocsp_path = "./path/to/ocsp-file"
# Sets the server's OCSP file.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-ocsp-path
ssl_require_auth = false
# Makes SSL authentication mandatory.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-require-auth
ssl_resumption = false
# Activates SSL session resumption.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-resumption
ssl_tickets = false
# Activates SSL tickets.
# https://docs.meilisearch.com/learn/configuration/instance_options.html#ssl-tickets
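
As a hedged aside (not part of this changeset): since this is plain TOML, a subset of it can be deserialized in Rust with the `serde` and `toml` crates. The `Config` struct below is hypothetical and only mirrors a few of the keys above; commented-out keys simply come back as `None`, and unknown keys are ignored by default.

```rust
// Hypothetical sketch: deserialize a subset of config.toml.
// Assumes `serde = { version = "1", features = ["derive"] }` and the `toml` crate.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    db_path: Option<String>,
    env: Option<String>,
    http_addr: Option<String>,
    master_key: Option<String>, // commented out in the file above, so this stays None
    schedule_snapshot: Option<bool>,
    snapshot_interval_sec: Option<u64>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("config.toml")?;
    let config: Config = toml::from_str(&raw)?;
    println!("{config:?}");
    Ok(())
}
```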

View File

@ -1,29 +1,38 @@
#!/bin/sh
# COLORS
# GLOBALS
# Colors
RED='\033[31m'
GREEN='\033[32m'
DEFAULT='\033[0m'
# GLOBALS
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
# Project name
PNAME='meilisearch'
# Version regexp i.e. v[number].[number].[number]
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$'
# GitHub API address
GITHUB_API='https://api.github.com/repos/meilisearch/meilisearch/releases'
# GitHub Release address
GITHUB_REL='https://github.com/meilisearch/meilisearch/releases/download/'
# FUNCTIONS
# semverParseInto and semverLT from https://github.com/cloudflare/semver_bash/blob/master/semver.sh
# semverParseInto and semverLT from: https://github.com/cloudflare/semver_bash/blob/master/semver.sh
# usage: semverParseInto version major minor patch special
# version: the string version
# major, minor, patch, special: will be assigned by the function
semverParseInto() {
local RE='[^0-9]*\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)\([0-9A-Za-z-]*\)'
#MAJOR
# MAJOR
eval $2=`echo $1 | sed -e "s#$RE#\1#"`
#MINOR
# MINOR
eval $3=`echo $1 | sed -e "s#$RE#\2#"`
#PATCH
# PATCH
eval $4=`echo $1 | sed -e "s#$RE#\3#"`
#SPECIAL
# SPECIAL
eval $5=`echo $1 | sed -e "s#$RE#\4#"`
}
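
For comparison (hypothetical, not from this diff), the same `v<major>.<minor>.<patch>` check that `GREP_SEMVER_REGEXP` performs can be sketched in Rust with the standard library alone; tuple comparison then gives the ordering that `semverLT` computes by hand.

```rust
// Hypothetical Rust equivalent of GREP_SEMVER_REGEXP / semverParseInto for
// tags like "v0.30.1". Standard library only; anything that is not exactly
// v<number>.<number>.<number> is rejected.
fn parse_semver(tag: &str) -> Option<(u64, u64, u64)> {
    let mut parts = tag.strip_prefix('v')?.splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

fn main() {
    assert_eq!(parse_semver("v0.30.1"), Some((0, 30, 1)));
    assert_eq!(parse_semver("v0.30.1-rc.1"), None); // pre-releases are rejected
    // Tuples compare field by field, which is exactly semver ordering here.
    assert!(parse_semver("v0.29.2") < parse_semver("v0.30.0"));
}
```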
@ -67,16 +76,22 @@ semverLT() {
return 1
}
# Get a token from https://github.com/settings/tokens to increasae rate limit (from 60 to 5000), make sure the token scope is set to 'public_repo'
# Create GITHUB_PAT enviroment variable once you aquired the token to start using it
# Returns the tag of the latest stable release (in terms of semver and not of release date)
# Get a token from: https://github.com/settings/tokens to increase rate limit (from 60 to 5000),
# make sure the token scope is set to 'public_repo'.
# Create GITHUB_PAT environment variable once you acquired the token to start using it.
# Returns the tag of the latest stable release (in terms of semver and not of release date).
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
# temp_file is needed because the grep would start before the download is over
temp_file=$(mktemp -q /tmp/$PNAME.XXXXXXXXX)
if [ $? -ne 0 ]; then
echo "$0: Can't create temp file, bye bye.."
exit 1
fi
if [ -z "$GITHUB_PAT" ]; then
curl -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
curl -s $GITHUB_API > "$temp_file" || return 1
else
curl -H "Authorization: token $GITHUB_PAT" -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
curl -H "Authorization: token $GITHUB_PAT" -s $GITHUB_API > "$temp_file" || return 1
fi
releases=$(cat "$temp_file" | \
@ -89,28 +104,35 @@ get_latest() {
latest=''
current_tag=''
for release_info in $releases; do
if [ $i -eq 0 ]; then # Cheking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
# Checking tag_name
if [ $i -eq 0 ]; then
# If it's not an alpha or beta release
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then
current_tag=$release_info
else
current_tag=''
fi
i=1
elif [ $i -eq 1 ]; then # Checking draft boolean
# Checking draft boolean
elif [ $i -eq 1 ]; then
if [ "$release_info" = 'true' ]; then
current_tag=''
fi
i=2
elif [ $i -eq 2 ]; then # Checking prerelease boolean
# Checking prerelease boolean
elif [ $i -eq 2 ]; then
if [ "$release_info" = 'true' ]; then
current_tag=''
fi
i=0
if [ "$current_tag" != '' ]; then # If the current_tag is valid
if [ "$latest" = '' ]; then # If there is no latest yet
# If the current_tag is valid
if [ "$current_tag" != '' ]; then
# If there is no latest yet
if [ "$latest" = '' ]; then
latest="$current_tag"
else
semverLT $current_tag $latest # Comparing latest and the current tag
# Comparing latest and the current tag
semverLT $current_tag $latest
if [ $? -eq 1 ]; then
latest="$current_tag"
fi
@ -123,7 +145,7 @@ get_latest() {
return 0
}
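
To make the selection rule above concrete: a hypothetical Rust sketch of the same scan, where `Release` stands in for the fields pulled out of the GitHub API response (tag name, draft flag, prerelease flag). Drafts, prereleases, and non-semver tags are skipped, and the semver-maximum tag wins, matching the `semverLT` comparison in the script.

```rust
// Hypothetical sketch of get_latest's selection rule; `Release` is a
// stand-in for the parsed GitHub API response, not a real API type.
struct Release {
    tag_name: String,
    draft: bool,
    prerelease: bool,
}

// Same v<n>.<n>.<n> parsing as GREP_SEMVER_REGEXP (see the earlier sketch).
fn parse_semver(tag: &str) -> Option<(u64, u64, u64)> {
    let mut parts = tag.strip_prefix('v')?.splitn(3, '.');
    Some((
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
    ))
}

fn latest_stable(releases: &[Release]) -> Option<&str> {
    releases
        .iter()
        .filter(|r| !r.draft && !r.prerelease)
        .filter_map(|r| parse_semver(&r.tag_name).map(|v| (v, r.tag_name.as_str())))
        .max_by_key(|&(version, _)| version)
        .map(|(_, tag)| tag)
}

fn main() {
    let releases = vec![
        Release { tag_name: "v0.30.0".into(), draft: false, prerelease: false },
        Release { tag_name: "v0.30.1".into(), draft: true, prerelease: false },
        Release { tag_name: "v0.29.2".into(), draft: false, prerelease: false },
    ];
    assert_eq!(latest_stable(&releases), Some("v0.30.0"));
}
```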
# Gets the OS by setting the $os variable
# Gets the OS by setting the $os variable.
# Returns 0 in case of success, 1 otherwise.
get_os() {
os_name=$(uname -s)
@ -134,7 +156,7 @@ get_os() {
'Linux')
os='linux'
;;
'MINGW'*)
'MINGW'*)
os='windows'
;;
*)
@ -143,7 +165,7 @@ get_os() {
return 0
}
# Gets the architecture by setting the $archi variable
# Gets the architecture by setting the $archi variable.
# Returns 0 in case of success, 1 otherwise.
get_archi() {
architecture=$(uname -m)
@ -152,7 +174,8 @@ get_archi() {
archi='amd64'
;;
'arm64')
if [ $os = 'macos' ]; then # MacOS M1
# MacOS M1
if [ $os = 'macos' ]; then
archi='amd64'
else
archi='aarch64'
@ -171,9 +194,9 @@ success_usage() {
printf "$GREEN%s\n$DEFAULT" "Meilisearch $latest binary successfully downloaded as '$binary_name' file."
echo ''
echo 'Run it:'
echo ' $ ./meilisearch'
echo " $ ./$PNAME"
echo 'Usage:'
echo ' $ ./meilisearch --help'
echo " $ ./$PNAME --help"
}
not_available_failure_usage() {
@ -189,52 +212,55 @@ fetch_release_failure_usage() {
echo 'Please let us know about this issue: https://github.com/meilisearch/meilisearch/issues/new/choose'
}
fill_release_variables() {
# Fill $latest variable.
if ! get_latest; then
# TO CHANGE.
fetch_release_failure_usage
exit 1
fi
if [ "$latest" = '' ]; then
fetch_release_failure_usage
exit 1
fi
# Fill $os variable.
if ! get_os; then
not_available_failure_usage
exit 1
fi
# Fill $archi variable.
if ! get_archi; then
not_available_failure_usage
exit 1
fi
}
download_binary() {
fill_release_variables
echo "Downloading Meilisearch binary $latest for $os, architecture $archi..."
case "$os" in
'windows')
release_file="$PNAME-$os-$archi.exe"
binary_name="$PNAME.exe"
;;
*)
release_file="$PNAME-$os-$archi"
binary_name="$PNAME"
esac
# Fetch the Meilisearch binary.
curl --fail -OL "$GITHUB_REL/$latest/$release_file"
if [ $? -ne 0 ]; then
fetch_release_failure_usage
exit 1
fi
mv "$release_file" "$binary_name"
chmod 744 "$binary_name"
success_usage
}
# MAIN
# Fill $latest variable
if ! get_latest; then
fetch_release_failure_usage # TO CHANGE
exit 1
fi
if [ "$latest" = '' ]; then
fetch_release_failure_usage
exit 1
fi
# Fill $os variable
if ! get_os; then
not_available_failure_usage
exit 1
fi
# Fill $archi variable
if ! get_archi; then
not_available_failure_usage
exit 1
fi
echo "Downloading Meilisearch binary $latest for $os, architecture $archi..."
case "$os" in
'windows')
release_file="meilisearch-$os-$archi.exe"
binary_name='meilisearch.exe'
;;
*)
release_file="meilisearch-$os-$archi"
binary_name='meilisearch'
esac
# Fetch the Meilisearch binary
link="https://github.com/meilisearch/meilisearch/releases/download/$latest/$release_file"
curl --fail -OL "$link"
if [ $? -ne 0 ]; then
fetch_release_failure_usage
exit 1
fi
mv "$release_file" "$binary_name"
chmod 744 "$binary_name"
success_usage
main() {
download_binary
}
main

28
dump/Cargo.toml Normal file
View File

@ -0,0 +1,28 @@
[package]
name = "dump"
version = "0.30.0"
edition = "2021"
[dependencies]
anyhow = "1.0.65"
flate2 = "1.0.22"
http = "0.2.8"
log = "0.4.17"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-types = { path = "../meilisearch-types" }
once_cell = "1.15.0"
regex = "1.6.0"
roaring = { version = "0.10.0", features = ["serde"] }
serde = { version = "1.0.136", features = ["derive"] }
serde_json = { version = "1.0.85", features = ["preserve_order"] }
tar = "0.4.38"
tempfile = "3.3.0"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.1.2", features = ["serde", "v4"] }
[dev-dependencies]
big_s = "1.0.2"
maplit = "1.0.2"
meili-snap = { path = "../meili-snap" }
meilisearch-types = { path = "../meilisearch-types" }

17
dump/README.md Normal file
View File

@ -0,0 +1,17 @@
```
dump
├── indexes
│   ├── cattos
│   │   ├── documents.jsonl
│   │   └── settings.json
│   └── doggos
│       ├── documents.jsonl
│       └── settings.json
├── instance-uid.uuid
├── keys.jsonl
├── metadata.json
└── tasks
    ├── update_files
    │   └── [task_id].jsonl
    └── queue.jsonl
```
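
Since `keys.jsonl` and `tasks/queue.jsonl` are JSON-lines files, reading one back is a line-by-line `serde_json` parse. A minimal hypothetical sketch (the path is illustrative; `serde_json` is already a dependency of this crate):

```rust
// Hypothetical sketch: stream a .jsonl file from an unpacked dump, one JSON
// value per line. Uses only std and serde_json.
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("dump/keys.jsonl")?; // illustrative path
    for line in BufReader::new(file).lines() {
        let value: serde_json::Value = serde_json::from_str(&line?)?;
        println!("{value}");
    }
    Ok(())
}
```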

36
dump/src/error.rs Normal file
View File

@ -0,0 +1,36 @@
use meilisearch_types::error::{Code, ErrorCode};
use thiserror::Error;
#[derive(Debug, Error)]
pub enum Error {
#[error("The version 1 of the dumps is not supported anymore. You can re-export your dump from a version between 0.21 and 0.24, or start fresh from a version 0.25 onwards.")]
DumpV1Unsupported,
#[error("Bad index name.")]
BadIndexName,
#[error("Malformed task.")]
MalformedTask,
#[error(transparent)]
Io(#[from] std::io::Error),
#[error(transparent)]
Serde(#[from] serde_json::Error),
#[error(transparent)]
Uuid(#[from] uuid::Error),
}
impl ErrorCode for Error {
fn error_code(&self) -> Code {
match self {
// Are these three really Internal errors?
// TODO look at that later.
Error::Io(_) => Code::Internal,
Error::Serde(_) => Code::Internal,
Error::Uuid(_) => Code::Internal,
// all these errors should never be raised when creating a dump, thus no error code should be associated.
Error::DumpV1Unsupported => Code::Internal,
Error::BadIndexName => Code::Internal,
Error::MalformedTask => Code::Internal,
}
}
}

464
dump/src/lib.rs Normal file
View File

@ -0,0 +1,464 @@
#![allow(clippy::type_complexity)]
#![allow(clippy::wrong_self_convention)]
use meilisearch_types::error::ResponseError;
use meilisearch_types::keys::Key;
use meilisearch_types::milli::update::IndexDocumentsMethod;
use meilisearch_types::settings::Unchecked;
use meilisearch_types::tasks::{Details, IndexSwap, KindWithContent, Status, Task, TaskId};
use meilisearch_types::InstanceUid;
use roaring::RoaringBitmap;
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;
mod error;
mod reader;
mod writer;
pub use error::Error;
pub use reader::{DumpReader, UpdateFile};
pub use writer::DumpWriter;
const CURRENT_DUMP_VERSION: Version = Version::V6;
type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
pub dump_version: Version,
pub db_version: String,
#[serde(with = "time::serde::rfc3339")]
pub dump_date: OffsetDateTime,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct IndexMetadata {
pub uid: String,
pub primary_key: Option<String>,
#[serde(with = "time::serde::rfc3339")]
pub created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub updated_at: OffsetDateTime,
}
#[derive(Debug, PartialEq, Eq, Deserialize, Serialize)]
pub enum Version {
V1,
V2,
V3,
V4,
V5,
V6,
}
#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskDump {
pub uid: TaskId,
#[serde(default)]
pub index_uid: Option<String>,
pub status: Status,
#[serde(rename = "type")]
pub kind: KindDump,
#[serde(skip_serializing_if = "Option::is_none")]
pub canceled_by: Option<TaskId>,
#[serde(skip_serializing_if = "Option::is_none")]
pub details: Option<Details>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<ResponseError>,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(
with = "time::serde::rfc3339::option",
skip_serializing_if = "Option::is_none",
default
)]
pub started_at: Option<OffsetDateTime>,
#[serde(
with = "time::serde::rfc3339::option",
skip_serializing_if = "Option::is_none",
default
)]
pub finished_at: Option<OffsetDateTime>,
}
// A `Kind`-specific version made for the dump. If modified, you may break the dump.
#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum KindDump {
DocumentImport {
primary_key: Option<String>,
method: IndexDocumentsMethod,
documents_count: u64,
allow_index_creation: bool,
},
DocumentDeletion {
documents_ids: Vec<String>,
},
DocumentClear,
Settings {
settings: Box<meilisearch_types::settings::Settings<Unchecked>>,
is_deletion: bool,
allow_index_creation: bool,
},
IndexDeletion,
IndexCreation {
primary_key: Option<String>,
},
IndexUpdate {
primary_key: Option<String>,
},
IndexSwap {
swaps: Vec<IndexSwap>,
},
TaskCancelation {
query: String,
tasks: RoaringBitmap,
},
TasksDeletion {
query: String,
tasks: RoaringBitmap,
},
DumpCreation {
keys: Vec<Key>,
instance_uid: Option<InstanceUid>,
},
SnapshotCreation,
}
impl From<Task> for TaskDump {
fn from(task: Task) -> Self {
TaskDump {
uid: task.uid,
index_uid: task.index_uid().map(|uid| uid.to_string()),
status: task.status,
kind: task.kind.into(),
canceled_by: task.canceled_by,
details: task.details,
error: task.error,
enqueued_at: task.enqueued_at,
started_at: task.started_at,
finished_at: task.finished_at,
}
}
}
impl From<KindWithContent> for KindDump {
fn from(kind: KindWithContent) -> Self {
match kind {
KindWithContent::DocumentAdditionOrUpdate {
primary_key,
method,
documents_count,
allow_index_creation,
..
} => KindDump::DocumentImport {
primary_key,
method,
documents_count,
allow_index_creation,
},
KindWithContent::DocumentDeletion { documents_ids, .. } => {
KindDump::DocumentDeletion { documents_ids }
}
KindWithContent::DocumentClear { .. } => KindDump::DocumentClear,
KindWithContent::SettingsUpdate {
new_settings,
is_deletion,
allow_index_creation,
..
} => KindDump::Settings { settings: new_settings, is_deletion, allow_index_creation },
KindWithContent::IndexDeletion { .. } => KindDump::IndexDeletion,
KindWithContent::IndexCreation { primary_key, .. } => {
KindDump::IndexCreation { primary_key }
}
KindWithContent::IndexUpdate { primary_key, .. } => {
KindDump::IndexUpdate { primary_key }
}
KindWithContent::IndexSwap { swaps } => KindDump::IndexSwap { swaps },
KindWithContent::TaskCancelation { query, tasks } => {
KindDump::TaskCancelation { query, tasks }
}
KindWithContent::TaskDeletion { query, tasks } => {
KindDump::TasksDeletion { query, tasks }
}
KindWithContent::DumpCreation { keys, instance_uid } => {
KindDump::DumpCreation { keys, instance_uid }
}
KindWithContent::SnapshotCreation => KindDump::SnapshotCreation,
}
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::{Seek, SeekFrom};
use std::str::FromStr;
use big_s::S;
use maplit::btreeset;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::keys::{Action, Key};
use meilisearch_types::milli::update::Setting;
use meilisearch_types::milli::{self};
use meilisearch_types::settings::{Checked, Settings};
use meilisearch_types::star_or::StarOr;
use meilisearch_types::tasks::{Details, Status};
use serde_json::{json, Map, Value};
use time::macros::datetime;
use uuid::Uuid;
use crate::reader::Document;
use crate::{DumpReader, DumpWriter, IndexMetadata, KindDump, TaskDump, Version};
pub fn create_test_instance_uid() -> Uuid {
Uuid::parse_str("9e15e977-f2ae-4761-943f-1eaf75fd736d").unwrap()
}
pub fn create_test_index_metadata() -> IndexMetadata {
IndexMetadata {
uid: S("doggo"),
primary_key: None,
created_at: datetime!(2022-11-20 12:00 UTC),
updated_at: datetime!(2022-11-21 00:00 UTC),
}
}
pub fn create_test_documents() -> Vec<Map<String, Value>> {
vec![
json!({ "id": 1, "race": "golden retriever", "name": "paul", "age": 4 })
.as_object()
.unwrap()
.clone(),
json!({ "id": 2, "race": "bernese mountain", "name": "tamo", "age": 6 })
.as_object()
.unwrap()
.clone(),
json!({ "id": 3, "race": "great pyrenees", "name": "patou", "age": 5 })
.as_object()
.unwrap()
.clone(),
]
}
pub fn create_test_settings() -> Settings<Checked> {
let settings = Settings {
displayed_attributes: Setting::Set(vec![S("race"), S("name")]),
searchable_attributes: Setting::Set(vec![S("name"), S("race")]),
filterable_attributes: Setting::Set(btreeset! { S("race"), S("age") }),
sortable_attributes: Setting::Set(btreeset! { S("age") }),
ranking_rules: Setting::NotSet,
stop_words: Setting::NotSet,
synonyms: Setting::NotSet,
distinct_attribute: Setting::NotSet,
typo_tolerance: Setting::NotSet,
faceting: Setting::NotSet,
pagination: Setting::NotSet,
_kind: std::marker::PhantomData,
};
settings.check()
}
pub fn create_test_tasks() -> Vec<(TaskDump, Option<Vec<Document>>)> {
vec![
(
TaskDump {
uid: 0,
index_uid: Some(S("doggo")),
status: Status::Succeeded,
kind: KindDump::DocumentImport {
method: milli::update::IndexDocumentsMethod::UpdateDocuments,
allow_index_creation: true,
primary_key: Some(S("bone")),
documents_count: 12,
},
canceled_by: None,
details: Some(Details::DocumentAdditionOrUpdate {
received_documents: 12,
indexed_documents: Some(10),
}),
error: None,
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: Some(datetime!(2022-11-20 0:00 UTC)),
finished_at: Some(datetime!(2022-11-21 0:00 UTC)),
},
None,
),
(
TaskDump {
uid: 1,
index_uid: Some(S("doggo")),
status: Status::Enqueued,
kind: KindDump::DocumentImport {
method: milli::update::IndexDocumentsMethod::UpdateDocuments,
allow_index_creation: true,
primary_key: None,
documents_count: 2,
},
canceled_by: None,
details: Some(Details::DocumentAdditionOrUpdate {
received_documents: 2,
indexed_documents: None,
}),
error: None,
enqueued_at: datetime!(2022-11-11 0:00 UTC),
started_at: None,
finished_at: None,
},
Some(vec![
json!({ "id": 4, "race": "leonberg" }).as_object().unwrap().clone(),
json!({ "id": 5, "race": "patou" }).as_object().unwrap().clone(),
]),
),
(
TaskDump {
uid: 5,
index_uid: Some(S("catto")),
status: Status::Enqueued,
kind: KindDump::IndexDeletion,
canceled_by: None,
details: None,
error: None,
enqueued_at: datetime!(2022-11-15 0:00 UTC),
started_at: None,
finished_at: None,
},
None,
),
]
}
pub fn create_test_api_keys() -> Vec<Key> {
vec![
Key {
description: Some(S("The main key to manage all the doggos")),
name: Some(S("doggos_key")),
uid: Uuid::from_str("9f8a34da-b6b2-42f0-939b-dbd4c3448655").unwrap(),
actions: vec![Action::DocumentsAll],
indexes: vec![StarOr::Other(IndexUid::from_str("doggos").unwrap())],
expires_at: Some(datetime!(4130-03-14 12:21 UTC)),
created_at: datetime!(1960-11-15 0:00 UTC),
updated_at: datetime!(2022-11-10 0:00 UTC),
},
Key {
description: Some(S("The master key for everything and even the doggos")),
name: Some(S("master_key")),
uid: Uuid::from_str("4622f717-1c00-47bb-a494-39d76a49b591").unwrap(),
actions: vec![Action::All],
indexes: vec![StarOr::Star],
expires_at: None,
created_at: datetime!(0000-01-01 00:01 UTC),
updated_at: datetime!(1964-05-04 17:25 UTC),
},
Key {
description: Some(S("The useless key to for nothing nor the doggos")),
name: Some(S("useless_key")),
uid: Uuid::from_str("fb80b58b-0a34-412f-8ba7-1ce868f8ac5c").unwrap(),
actions: vec![],
indexes: vec![],
expires_at: None,
created_at: datetime!(400-02-29 0:00 UTC),
updated_at: datetime!(1024-02-29 0:00 UTC),
},
]
}
pub fn create_test_dump() -> File {
let instance_uid = create_test_instance_uid();
let dump = DumpWriter::new(Some(instance_uid)).unwrap();
// ========== Adding an index
let documents = create_test_documents();
let settings = create_test_settings();
let mut index = dump.create_index("doggos", &create_test_index_metadata()).unwrap();
for document in &documents {
index.push_document(document).unwrap();
}
index.flush().unwrap();
index.settings(&settings).unwrap();
// ========== pushing the task queue
let tasks = create_test_tasks();
let mut task_queue = dump.create_tasks_queue().unwrap();
for (task, update_file) in &tasks {
let mut update = task_queue.push_task(task).unwrap();
if let Some(update_file) = update_file {
for u in update_file {
update.push_document(u).unwrap();
}
}
}
task_queue.flush().unwrap();
// ========== pushing the api keys
let api_keys = create_test_api_keys();
let mut keys = dump.create_keys().unwrap();
for key in &api_keys {
keys.push_key(key).unwrap();
}
keys.flush().unwrap();
// create the dump
let mut file = tempfile::tempfile().unwrap();
dump.persist_to(&mut file).unwrap();
file.seek(SeekFrom::Start(0)).unwrap();
file
}
#[test]
#[ignore]
fn test_creating_and_read_dump() {
let mut file = create_test_dump();
let mut dump = DumpReader::open(&mut file).unwrap();
// ==== checking the top level infos
assert_eq!(dump.version(), Version::V6);
assert!(dump.date().is_some());
assert_eq!(dump.instance_uid().unwrap().unwrap(), create_test_instance_uid());
// ==== checking the index
let mut indexes = dump.indexes().unwrap();
let mut index = indexes.next().unwrap().unwrap();
assert!(indexes.next().is_none()); // there was only one index in the dump
for (document, expected) in index.documents().unwrap().zip(create_test_documents()) {
assert_eq!(document.unwrap(), expected);
}
assert_eq!(index.settings().unwrap(), create_test_settings());
assert_eq!(index.metadata(), &create_test_index_metadata());
drop(index);
drop(indexes);
// ==== checking the task queue
for (task, expected) in dump.tasks().unwrap().zip(create_test_tasks()) {
let (task, content_file) = task.unwrap();
assert_eq!(task, expected.0);
if let Some(expected_update) = expected.1 {
assert!(
content_file.is_some(),
"A content file was expected for the task {}.",
expected.0.uid
);
let updates = content_file.unwrap().collect::<Result<Vec<_>, _>>().unwrap();
assert_eq!(updates, expected_update);
}
}
// ==== checking the keys
for (key, expected) in dump.keys().unwrap().zip(create_test_api_keys()) {
assert_eq!(key.unwrap(), expected);
}
}
}

View File

@ -0,0 +1,4 @@
pub mod v2_to_v3;
pub mod v3_to_v4;
pub mod v4_to_v5;
pub mod v5_to_v6;

View File

@ -0,0 +1,480 @@
use std::convert::TryInto;
use std::str::FromStr;
use time::OffsetDateTime;
use uuid::Uuid;
use super::v3_to_v4::CompatV3ToV4;
use crate::reader::{v2, v3, Document};
use crate::Result;
pub struct CompatV2ToV3 {
pub from: v2::V2Reader,
}
impl CompatV2ToV3 {
pub fn new(v2: v2::V2Reader) -> CompatV2ToV3 {
CompatV2ToV3 { from: v2 }
}
pub fn index_uuid(&self) -> Vec<v3::meta::IndexUuid> {
self.from
.index_uuid()
.into_iter()
.map(|index| v3::meta::IndexUuid { uid: index.uid, uuid: index.uuid })
.collect()
}
pub fn to_v4(self) -> CompatV3ToV4 {
CompatV3ToV4::Compat(self)
}
pub fn version(&self) -> crate::Version {
self.from.version()
}
pub fn date(&self) -> Option<time::OffsetDateTime> {
self.from.date()
}
pub fn instance_uid(&self) -> Result<Option<uuid::Uuid>> {
Ok(None)
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<CompatIndexV2ToV3>> + '_> {
Ok(self.from.indexes()?.map(|index_reader| -> Result<_> {
let compat = CompatIndexV2ToV3::new(index_reader?);
Ok(compat)
}))
}
pub fn tasks(
&mut self,
) -> Box<
dyn Iterator<Item = Result<(v3::Task, Option<Box<dyn Iterator<Item = Result<Document>>>>)>>
+ '_,
> {
let _indexes = self.from.index_uuid.clone();
Box::new(
self.from
.tasks()
.map(move |task| {
task.map(|(task, content_file)| {
let task = v3::Task { uuid: task.uuid, update: task.update.into() };
Some((
task,
content_file.map(|content_file| {
Box::new(content_file) as Box<dyn Iterator<Item = Result<Document>>>
}),
))
})
})
.filter_map(|res| res.transpose()),
)
}
}
pub struct CompatIndexV2ToV3 {
from: v2::V2IndexReader,
}
impl CompatIndexV2ToV3 {
pub fn new(v2: v2::V2IndexReader) -> CompatIndexV2ToV3 {
CompatIndexV2ToV3 { from: v2 }
}
pub fn metadata(&self) -> &crate::IndexMetadata {
self.from.metadata()
}
pub fn documents(&mut self) -> Result<Box<dyn Iterator<Item = Result<Document>> + '_>> {
self.from
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>)
}
pub fn settings(&mut self) -> Result<v3::Settings<v3::Checked>> {
Ok(v3::Settings::<v3::Unchecked>::from(self.from.settings()?).check())
}
}
impl From<v2::updates::UpdateStatus> for v3::updates::UpdateStatus {
fn from(update: v2::updates::UpdateStatus) -> Self {
match update {
v2::updates::UpdateStatus::Processing(processing) => {
match (processing.from.meta.clone(), processing.from.content).try_into() {
Ok(meta) => v3::updates::UpdateStatus::Processing(v3::updates::Processing {
from: v3::updates::Enqueued {
update_id: processing.from.update_id,
meta,
enqueued_at: processing.from.enqueued_at,
},
started_processing_at: processing.started_processing_at,
}),
Err(e) => {
log::warn!("Error with task {}: {}", processing.from.update_id, e);
log::warn!("Task will be marked as `Failed`.");
v3::updates::UpdateStatus::Failed(v3::updates::Failed {
from: v3::updates::Processing {
from: v3::updates::Enqueued {
update_id: processing.from.update_id,
meta: update_from_unchecked_update_meta(processing.from.meta),
enqueued_at: processing.from.enqueued_at,
},
started_processing_at: processing.started_processing_at,
},
msg: e.to_string(),
code: v3::Code::MalformedDump,
failed_at: OffsetDateTime::now_utc(),
})
}
}
}
v2::updates::UpdateStatus::Enqueued(enqueued) => {
match (enqueued.meta.clone(), enqueued.content).try_into() {
Ok(meta) => v3::updates::UpdateStatus::Enqueued(v3::updates::Enqueued {
update_id: enqueued.update_id,
meta,
enqueued_at: enqueued.enqueued_at,
}),
Err(e) => {
log::warn!("Error with task {}: {}", enqueued.update_id, e);
log::warn!("Task will be marked as `Failed`.");
v3::updates::UpdateStatus::Failed(v3::updates::Failed {
from: v3::updates::Processing {
from: v3::updates::Enqueued {
update_id: enqueued.update_id,
meta: update_from_unchecked_update_meta(enqueued.meta),
enqueued_at: enqueued.enqueued_at,
},
started_processing_at: OffsetDateTime::now_utc(),
},
msg: e.to_string(),
code: v3::Code::MalformedDump,
failed_at: OffsetDateTime::now_utc(),
})
}
}
}
v2::updates::UpdateStatus::Processed(processed) => {
v3::updates::UpdateStatus::Processed(v3::updates::Processed {
success: processed.success.into(),
processed_at: processed.processed_at,
from: v3::updates::Processing {
from: v3::updates::Enqueued {
update_id: processed.from.from.update_id,
// since we're never going to read the content_file again it's ok to generate a fake one.
meta: update_from_unchecked_update_meta(processed.from.from.meta),
enqueued_at: processed.from.from.enqueued_at,
},
started_processing_at: processed.from.started_processing_at,
},
})
}
v2::updates::UpdateStatus::Aborted(aborted) => {
v3::updates::UpdateStatus::Aborted(v3::updates::Aborted {
from: v3::updates::Enqueued {
update_id: aborted.from.update_id,
// since we're never going to read the content_file again it's ok to generate a fake one.
meta: update_from_unchecked_update_meta(aborted.from.meta),
enqueued_at: aborted.from.enqueued_at,
},
aborted_at: aborted.aborted_at,
})
}
v2::updates::UpdateStatus::Failed(failed) => {
v3::updates::UpdateStatus::Failed(v3::updates::Failed {
from: v3::updates::Processing {
from: v3::updates::Enqueued {
update_id: failed.from.from.update_id,
// since we're never going to read the content_file again it's ok to generate a fake one.
meta: update_from_unchecked_update_meta(failed.from.from.meta),
enqueued_at: failed.from.from.enqueued_at,
},
started_processing_at: failed.from.started_processing_at,
},
msg: failed.error.message,
code: failed.error.error_code.into(),
failed_at: failed.failed_at,
})
}
}
}
}
impl TryFrom<(v2::updates::UpdateMeta, Option<Uuid>)> for v3::updates::Update {
type Error = crate::Error;
fn try_from((update, uuid): (v2::updates::UpdateMeta, Option<Uuid>)) -> Result<Self> {
Ok(match update {
v2::updates::UpdateMeta::DocumentsAddition { method, format: _, primary_key }
if uuid.is_some() =>
{
v3::updates::Update::DocumentAddition {
primary_key,
method: match method {
v2::updates::IndexDocumentsMethod::ReplaceDocuments => {
v3::updates::IndexDocumentsMethod::ReplaceDocuments
}
v2::updates::IndexDocumentsMethod::UpdateDocuments => {
v3::updates::IndexDocumentsMethod::UpdateDocuments
}
},
content_uuid: uuid.unwrap(),
}
}
v2::updates::UpdateMeta::DocumentsAddition { .. } => {
return Err(crate::Error::MalformedTask)
}
v2::updates::UpdateMeta::ClearDocuments => v3::updates::Update::ClearDocuments,
v2::updates::UpdateMeta::DeleteDocuments { ids } => {
v3::updates::Update::DeleteDocuments(ids)
}
v2::updates::UpdateMeta::Settings(settings) => {
v3::updates::Update::Settings(settings.into())
}
})
}
}
pub fn update_from_unchecked_update_meta(update: v2::updates::UpdateMeta) -> v3::updates::Update {
match update {
v2::updates::UpdateMeta::DocumentsAddition { method, format: _, primary_key } => {
v3::updates::Update::DocumentAddition {
primary_key,
method: match method {
v2::updates::IndexDocumentsMethod::ReplaceDocuments => {
v3::updates::IndexDocumentsMethod::ReplaceDocuments
}
v2::updates::IndexDocumentsMethod::UpdateDocuments => {
v3::updates::IndexDocumentsMethod::UpdateDocuments
}
},
// we use this special uuid so we can recognize it if one day there is a bug related to this field.
content_uuid: Uuid::from_str("00112233-4455-6677-8899-aabbccddeeff").unwrap(),
}
}
v2::updates::UpdateMeta::ClearDocuments => v3::updates::Update::ClearDocuments,
v2::updates::UpdateMeta::DeleteDocuments { ids } => {
v3::updates::Update::DeleteDocuments(ids)
}
v2::updates::UpdateMeta::Settings(settings) => {
v3::updates::Update::Settings(settings.into())
}
}
}
impl From<v2::updates::UpdateResult> for v3::updates::UpdateResult {
fn from(result: v2::updates::UpdateResult) -> Self {
match result {
v2::updates::UpdateResult::DocumentsAddition(addition) => {
v3::updates::UpdateResult::DocumentsAddition(v3::updates::DocumentAdditionResult {
nb_documents: addition.nb_documents,
})
}
v2::updates::UpdateResult::DocumentDeletion { deleted } => {
v3::updates::UpdateResult::DocumentDeletion { deleted }
}
v2::updates::UpdateResult::Other => v3::updates::UpdateResult::Other,
}
}
}
impl From<String> for v3::Code {
fn from(code: String) -> Self {
match code.as_ref() {
"create_index" => v3::Code::CreateIndex,
"index_already_exists" => v3::Code::IndexAlreadyExists,
"index_not_found" => v3::Code::IndexNotFound,
"invalid_index_uid" => v3::Code::InvalidIndexUid,
"invalid_state" => v3::Code::InvalidState,
"missing_primary_key" => v3::Code::MissingPrimaryKey,
"primary_key_already_present" => v3::Code::PrimaryKeyAlreadyPresent,
"max_fields_limit_exceeded" => v3::Code::MaxFieldsLimitExceeded,
"missing_document_id" => v3::Code::MissingDocumentId,
"invalid_document_id" => v3::Code::InvalidDocumentId,
"filter" => v3::Code::Filter,
"sort" => v3::Code::Sort,
"bad_parameter" => v3::Code::BadParameter,
"bad_request" => v3::Code::BadRequest,
"database_size_limit_reached" => v3::Code::DatabaseSizeLimitReached,
"document_not_found" => v3::Code::DocumentNotFound,
"internal" => v3::Code::Internal,
"invalid_geo_field" => v3::Code::InvalidGeoField,
"invalid_ranking_rule" => v3::Code::InvalidRankingRule,
"invalid_store" => v3::Code::InvalidStore,
"invalid_token" => v3::Code::InvalidToken,
"missing_authorization_header" => v3::Code::MissingAuthorizationHeader,
"no_space_left_on_device" => v3::Code::NoSpaceLeftOnDevice,
"dump_not_found" => v3::Code::DumpNotFound,
"task_not_found" => v3::Code::TaskNotFound,
"payload_too_large" => v3::Code::PayloadTooLarge,
"retrieve_document" => v3::Code::RetrieveDocument,
"search_documents" => v3::Code::SearchDocuments,
"unsupported_media_type" => v3::Code::UnsupportedMediaType,
"dump_already_in_progress" => v3::Code::DumpAlreadyInProgress,
"dump_process_failed" => v3::Code::DumpProcessFailed,
"invalid_content_type" => v3::Code::InvalidContentType,
"missing_content_type" => v3::Code::MissingContentType,
"malformed_payload" => v3::Code::MalformedPayload,
"missing_payload" => v3::Code::MissingPayload,
other => {
log::warn!("Unknown error code {}", other);
v3::Code::UnretrievableErrorCode
}
}
}
}
fn option_to_setting<T>(opt: Option<Option<T>>) -> v3::Setting<T> {
match opt {
Some(Some(t)) => v3::Setting::Set(t),
None => v3::Setting::NotSet,
Some(None) => v3::Setting::Reset,
}
}
impl<T> From<v2::Settings<T>> for v3::Settings<v3::Unchecked> {
fn from(settings: v2::Settings<T>) -> Self {
v3::Settings {
displayed_attributes: option_to_setting(settings.displayed_attributes),
searchable_attributes: option_to_setting(settings.searchable_attributes),
filterable_attributes: option_to_setting(settings.filterable_attributes)
.map(|f| f.into_iter().collect()),
sortable_attributes: v3::Setting::NotSet,
ranking_rules: option_to_setting(settings.ranking_rules).map(|criteria| {
criteria.into_iter().map(|criterion| patch_ranking_rules(&criterion)).collect()
}),
stop_words: option_to_setting(settings.stop_words),
synonyms: option_to_setting(settings.synonyms),
distinct_attribute: option_to_setting(settings.distinct_attribute),
_kind: std::marker::PhantomData,
}
}
}
fn patch_ranking_rules(ranking_rule: &str) -> String {
match v2::settings::Criterion::from_str(ranking_rule) {
Ok(v2::settings::Criterion::Words) => String::from("words"),
Ok(v2::settings::Criterion::Typo) => String::from("typo"),
Ok(v2::settings::Criterion::Proximity) => String::from("proximity"),
Ok(v2::settings::Criterion::Attribute) => String::from("attribute"),
Ok(v2::settings::Criterion::Exactness) => String::from("exactness"),
Ok(v2::settings::Criterion::Asc(name)) => format!("{name}:asc"),
Ok(v2::settings::Criterion::Desc(name)) => format!("{name}:desc"),
// we want to forward the error to the current version of meilisearch
Err(_) => ranking_rule.to_string(),
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v2_v3() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = v2::V2Reader::open(dir).unwrap().to_v3();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-09 20:27:59.904096267 +00:00:00");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"9507711db47c7171c79bc6d57d0bed79");
assert_eq!(update_files.len(), 9);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(0).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"54b3d7a0d96de35427d867fa17164a99");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"ae7c5ade2243a553152dab2f354e9095");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"1be82b894556d23953af557b6a328a58");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"1be82b894556d23953af557b6a328a58");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

View File

@ -0,0 +1,450 @@
use super::v2_to_v3::{CompatIndexV2ToV3, CompatV2ToV3};
use super::v4_to_v5::CompatV4ToV5;
use crate::reader::{v3, v4, UpdateFile};
use crate::Result;
pub enum CompatV3ToV4 {
V3(v3::V3Reader),
Compat(CompatV2ToV3),
}
impl CompatV3ToV4 {
pub fn new(v3: v3::V3Reader) -> CompatV3ToV4 {
CompatV3ToV4::V3(v3)
}
pub fn to_v5(self) -> CompatV4ToV5 {
CompatV4ToV5::Compat(self)
}
pub fn version(&self) -> crate::Version {
match self {
CompatV3ToV4::V3(v3) => v3.version(),
CompatV3ToV4::Compat(compat) => compat.version(),
}
}
pub fn date(&self) -> Option<time::OffsetDateTime> {
match self {
CompatV3ToV4::V3(v3) => v3.date(),
CompatV3ToV4::Compat(compat) => compat.date(),
}
}
pub fn instance_uid(&self) -> Result<Option<uuid::Uuid>> {
Ok(None)
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<CompatIndexV3ToV4>> + '_> {
Ok(match self {
CompatV3ToV4::V3(v3) => {
Box::new(v3.indexes()?.map(|index| index.map(CompatIndexV3ToV4::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV3ToV4>> + '_>
}
CompatV3ToV4::Compat(compat) => {
Box::new(compat.indexes()?.map(|index| index.map(CompatIndexV3ToV4::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV3ToV4>> + '_>
}
})
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(v4::Task, Option<Box<UpdateFile>>)>> + '_> {
let indexes = match self {
CompatV3ToV4::V3(v3) => v3.index_uuid(),
CompatV3ToV4::Compat(compat) => compat.index_uuid(),
};
let tasks = match self {
CompatV3ToV4::V3(v3) => v3.tasks(),
CompatV3ToV4::Compat(compat) => compat.tasks(),
};
Box::new(
tasks
// we need to override the old task ids that were generated
// by index in favor of a global unique incremental ID.
.enumerate()
.map(move |(task_id, task)| {
task.map(|(task, content_file)| {
let index_uid = indexes
.iter()
.find(|index| index.uuid == task.uuid)
.map(|index| index.uid.clone());
let index_uid = match index_uid {
Some(uid) => uid,
None => {
log::warn!(
"Error while importing the update {}.",
task.update.id()
);
log::warn!(
"The index associated to the uuid `{}` could not be retrieved.",
task.uuid.to_string()
);
if task.update.is_finished() {
// we're fucking with his history but not his data, that's ok-ish.
log::warn!("The index-uuid will be set as `unknown`.");
String::from("unknown")
} else {
log::warn!("The task will be ignored.");
return None;
}
}
};
let task = v4::Task {
id: task_id as u32,
index_uid: v4::meta::IndexUid(index_uid),
content: match task.update.meta() {
v3::Kind::DeleteDocuments(documents) => {
v4::tasks::TaskContent::DocumentDeletion(
v4::tasks::DocumentDeletion::Ids(documents.clone()),
)
}
v3::Kind::DocumentAddition {
primary_key,
method,
content_uuid,
} => v4::tasks::TaskContent::DocumentAddition {
merge_strategy: match method {
v3::updates::IndexDocumentsMethod::ReplaceDocuments => {
v4::tasks::IndexDocumentsMethod::ReplaceDocuments
}
v3::updates::IndexDocumentsMethod::UpdateDocuments => {
v4::tasks::IndexDocumentsMethod::UpdateDocuments
}
},
primary_key: primary_key.clone(),
documents_count: 0, // we don't have this info
allow_index_creation: true, // there was no API-key in the v3
content_uuid: *content_uuid,
},
v3::Kind::Settings(settings) => {
v4::tasks::TaskContent::SettingsUpdate {
settings: v4::Settings::from(settings.clone()),
is_deletion: false, // that didn't exist at this time
allow_index_creation: true, // there was no API-key in the v3
}
}
v3::Kind::ClearDocuments => {
v4::tasks::TaskContent::DocumentDeletion(
v4::tasks::DocumentDeletion::Clear,
)
}
},
events: match task.update {
v3::Status::Processing(processing) => {
vec![v4::tasks::TaskEvent::Created(processing.from.enqueued_at)]
}
v3::Status::Enqueued(enqueued) => {
vec![v4::tasks::TaskEvent::Created(enqueued.enqueued_at)]
}
v3::Status::Processed(processed) => {
vec![
v4::tasks::TaskEvent::Created(
processed.from.from.enqueued_at,
),
v4::tasks::TaskEvent::Processing(
processed.from.started_processing_at,
),
v4::tasks::TaskEvent::Succeded {
result: match processed.success {
v3::updates::UpdateResult::DocumentsAddition(
document_addition,
) => v4::tasks::TaskResult::DocumentAddition {
indexed_documents: document_addition
.nb_documents
as u64,
},
v3::updates::UpdateResult::DocumentDeletion {
deleted,
} => v4::tasks::TaskResult::DocumentDeletion {
deleted_documents: deleted,
},
v3::updates::UpdateResult::Other => {
v4::tasks::TaskResult::Other
}
},
timestamp: processed.processed_at,
},
]
}
v3::Status::Failed(failed) => vec![
v4::tasks::TaskEvent::Created(failed.from.from.enqueued_at),
v4::tasks::TaskEvent::Processing(
failed.from.started_processing_at,
),
v4::tasks::TaskEvent::Failed {
error: v4::ResponseError::from_msg(
failed.msg.to_string(),
failed.code.into(),
),
timestamp: failed.failed_at,
},
],
v3::Status::Aborted(aborted) => vec![
v4::tasks::TaskEvent::Created(aborted.from.enqueued_at),
v4::tasks::TaskEvent::Failed {
error: v4::ResponseError::from_msg(
"Task was aborted in a previous version of meilisearch."
.to_string(),
v4::errors::Code::UnretrievableErrorCode,
),
timestamp: aborted.aborted_at,
},
],
},
};
Some((task, content_file))
})
})
.filter_map(|res| res.transpose()),
)
}
pub fn keys(&mut self) -> Box<dyn Iterator<Item = Result<v4::Key>> + '_> {
Box::new(std::iter::empty())
}
}
pub enum CompatIndexV3ToV4 {
V3(v3::V3IndexReader),
Compat(CompatIndexV2ToV3),
}
impl From<v3::V3IndexReader> for CompatIndexV3ToV4 {
fn from(index_reader: v3::V3IndexReader) -> Self {
Self::V3(index_reader)
}
}
impl From<CompatIndexV2ToV3> for CompatIndexV3ToV4 {
fn from(index_reader: CompatIndexV2ToV3) -> Self {
Self::Compat(index_reader)
}
}
impl CompatIndexV3ToV4 {
pub fn new(v3: v3::V3IndexReader) -> CompatIndexV3ToV4 {
CompatIndexV3ToV4::V3(v3)
}
pub fn metadata(&self) -> &crate::IndexMetadata {
match self {
CompatIndexV3ToV4::V3(v3) => v3.metadata(),
CompatIndexV3ToV4::Compat(compat) => compat.metadata(),
}
}
pub fn documents(&mut self) -> Result<Box<dyn Iterator<Item = Result<v4::Document>> + '_>> {
match self {
CompatIndexV3ToV4::V3(v3) => v3
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<v4::Document>> + '_>),
CompatIndexV3ToV4::Compat(compat) => compat
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<v4::Document>> + '_>),
}
}
pub fn settings(&mut self) -> Result<v4::Settings<v4::Checked>> {
Ok(match self {
CompatIndexV3ToV4::V3(v3) => {
v4::Settings::<v4::Unchecked>::from(v3.settings()?).check()
}
CompatIndexV3ToV4::Compat(compat) => {
v4::Settings::<v4::Unchecked>::from(compat.settings()?).check()
}
})
}
}
impl<T> From<v3::Setting<T>> for v4::Setting<T> {
fn from(setting: v3::Setting<T>) -> Self {
match setting {
v3::Setting::Set(t) => v4::Setting::Set(t),
v3::Setting::Reset => v4::Setting::Reset,
v3::Setting::NotSet => v4::Setting::NotSet,
}
}
}
impl From<v3::Code> for v4::Code {
fn from(code: v3::Code) -> Self {
match code {
v3::Code::CreateIndex => v4::Code::CreateIndex,
v3::Code::IndexAlreadyExists => v4::Code::IndexAlreadyExists,
v3::Code::IndexNotFound => v4::Code::IndexNotFound,
v3::Code::InvalidIndexUid => v4::Code::InvalidIndexUid,
v3::Code::InvalidState => v4::Code::InvalidState,
v3::Code::MissingPrimaryKey => v4::Code::MissingPrimaryKey,
v3::Code::PrimaryKeyAlreadyPresent => v4::Code::PrimaryKeyAlreadyPresent,
v3::Code::MaxFieldsLimitExceeded => v4::Code::MaxFieldsLimitExceeded,
v3::Code::MissingDocumentId => v4::Code::MissingDocumentId,
v3::Code::InvalidDocumentId => v4::Code::InvalidDocumentId,
v3::Code::Filter => v4::Code::Filter,
v3::Code::Sort => v4::Code::Sort,
v3::Code::BadParameter => v4::Code::BadParameter,
v3::Code::BadRequest => v4::Code::BadRequest,
v3::Code::DatabaseSizeLimitReached => v4::Code::DatabaseSizeLimitReached,
v3::Code::DocumentNotFound => v4::Code::DocumentNotFound,
v3::Code::Internal => v4::Code::Internal,
v3::Code::InvalidGeoField => v4::Code::InvalidGeoField,
v3::Code::InvalidRankingRule => v4::Code::InvalidRankingRule,
v3::Code::InvalidStore => v4::Code::InvalidStore,
v3::Code::InvalidToken => v4::Code::InvalidToken,
v3::Code::MissingAuthorizationHeader => v4::Code::MissingAuthorizationHeader,
v3::Code::NoSpaceLeftOnDevice => v4::Code::NoSpaceLeftOnDevice,
v3::Code::DumpNotFound => v4::Code::DumpNotFound,
v3::Code::TaskNotFound => v4::Code::TaskNotFound,
v3::Code::PayloadTooLarge => v4::Code::PayloadTooLarge,
v3::Code::RetrieveDocument => v4::Code::RetrieveDocument,
v3::Code::SearchDocuments => v4::Code::SearchDocuments,
v3::Code::UnsupportedMediaType => v4::Code::UnsupportedMediaType,
v3::Code::DumpAlreadyInProgress => v4::Code::DumpAlreadyInProgress,
v3::Code::DumpProcessFailed => v4::Code::DumpProcessFailed,
v3::Code::InvalidContentType => v4::Code::InvalidContentType,
v3::Code::MissingContentType => v4::Code::MissingContentType,
v3::Code::MalformedPayload => v4::Code::MalformedPayload,
v3::Code::MissingPayload => v4::Code::MissingPayload,
v3::Code::UnretrievableErrorCode => v4::Code::UnretrievableErrorCode,
v3::Code::MalformedDump => v4::Code::MalformedDump,
}
}
}
impl<T> From<v3::Settings<T>> for v4::Settings<v4::Unchecked> {
fn from(settings: v3::Settings<T>) -> Self {
v4::Settings {
displayed_attributes: settings.displayed_attributes.into(),
searchable_attributes: settings.searchable_attributes.into(),
filterable_attributes: settings.filterable_attributes.into(),
sortable_attributes: settings.sortable_attributes.into(),
ranking_rules: settings.ranking_rules.into(),
stop_words: settings.stop_words.into(),
synonyms: settings.synonyms.into(),
distinct_attribute: settings.distinct_attribute.into(),
typo_tolerance: v4::Setting::NotSet,
_kind: std::marker::PhantomData,
}
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v3_v4() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = v3::V3Reader::open(dir).unwrap().to_v4();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-07 11:39:03.709153554 +00:00:00");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"79bc053583a1a7172bbaaafb1edaeb78");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(0).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// keys
let keys = dump.keys().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys, { "[].uid" => "[uuid]" }), @"d751713988987e9331980363e24189ce");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"d3402aff19b90acea9e9a07c466690aa");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"687aaab250f01b55d57bc69aa313b581");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"cd9fedbd7e3492831a94da62c90013ea");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"cd9fedbd7e3492831a94da62c90013ea");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

View File

@ -0,0 +1,468 @@
use super::v3_to_v4::{CompatIndexV3ToV4, CompatV3ToV4};
use super::v5_to_v6::CompatV5ToV6;
use crate::reader::{v4, v5, Document};
use crate::Result;
pub enum CompatV4ToV5 {
V4(v4::V4Reader),
Compat(CompatV3ToV4),
}
impl CompatV4ToV5 {
pub fn new(v4: v4::V4Reader) -> CompatV4ToV5 {
CompatV4ToV5::V4(v4)
}
pub fn to_v6(self) -> CompatV5ToV6 {
CompatV5ToV6::Compat(self)
}
pub fn version(&self) -> crate::Version {
match self {
CompatV4ToV5::V4(v4) => v4.version(),
CompatV4ToV5::Compat(compat) => compat.version(),
}
}
pub fn date(&self) -> Option<time::OffsetDateTime> {
match self {
CompatV4ToV5::V4(v4) => v4.date(),
CompatV4ToV5::Compat(compat) => compat.date(),
}
}
pub fn instance_uid(&self) -> Result<Option<uuid::Uuid>> {
match self {
CompatV4ToV5::V4(v4) => v4.instance_uid(),
CompatV4ToV5::Compat(compat) => compat.instance_uid(),
}
}
pub fn indexes(&self) -> Result<Box<dyn Iterator<Item = Result<CompatIndexV4ToV5>> + '_>> {
Ok(match self {
CompatV4ToV5::V4(v4) => {
Box::new(v4.indexes()?.map(|index| index.map(CompatIndexV4ToV5::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV4ToV5>> + '_>
}
CompatV4ToV5::Compat(compat) => {
Box::new(compat.indexes()?.map(|index| index.map(CompatIndexV4ToV5::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV4ToV5>> + '_>
}
})
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(v5::Task, Option<Box<crate::reader::UpdateFile>>)>> + '_>
{
let tasks = match self {
CompatV4ToV5::V4(v4) => v4.tasks(),
CompatV4ToV5::Compat(compat) => compat.tasks(),
};
Box::new(tasks.map(|task| {
task.map(|(task, content_file)| {
let task = v5::Task {
id: task.id,
content: match task.content {
v4::tasks::TaskContent::DocumentAddition {
content_uuid,
merge_strategy,
primary_key,
documents_count,
allow_index_creation,
} => v5::tasks::TaskContent::DocumentAddition {
index_uid: v5::meta::IndexUid(task.index_uid.0),
content_uuid,
merge_strategy: match merge_strategy {
v4::tasks::IndexDocumentsMethod::ReplaceDocuments => {
v5::tasks::IndexDocumentsMethod::ReplaceDocuments
}
v4::tasks::IndexDocumentsMethod::UpdateDocuments => {
v5::tasks::IndexDocumentsMethod::UpdateDocuments
}
},
primary_key,
documents_count,
allow_index_creation,
},
v4::tasks::TaskContent::DocumentDeletion(deletion) => {
v5::tasks::TaskContent::DocumentDeletion {
index_uid: v5::meta::IndexUid(task.index_uid.0),
deletion: match deletion {
v4::tasks::DocumentDeletion::Clear => {
v5::tasks::DocumentDeletion::Clear
}
v4::tasks::DocumentDeletion::Ids(ids) => {
v5::tasks::DocumentDeletion::Ids(ids)
}
},
}
}
v4::tasks::TaskContent::SettingsUpdate {
settings,
is_deletion,
allow_index_creation,
} => v5::tasks::TaskContent::SettingsUpdate {
index_uid: v5::meta::IndexUid(task.index_uid.0),
settings: settings.into(),
is_deletion,
allow_index_creation,
},
v4::tasks::TaskContent::IndexDeletion => {
v5::tasks::TaskContent::IndexDeletion {
index_uid: v5::meta::IndexUid(task.index_uid.0),
}
}
v4::tasks::TaskContent::IndexCreation { primary_key } => {
v5::tasks::TaskContent::IndexCreation {
index_uid: v5::meta::IndexUid(task.index_uid.0),
primary_key,
}
}
v4::tasks::TaskContent::IndexUpdate { primary_key } => {
v5::tasks::TaskContent::IndexUpdate {
index_uid: v5::meta::IndexUid(task.index_uid.0),
primary_key,
}
}
},
events: task
.events
.into_iter()
.map(|event| match event {
v4::tasks::TaskEvent::Created(date) => {
v5::tasks::TaskEvent::Created(date)
}
v4::tasks::TaskEvent::Batched { timestamp, batch_id } => {
v5::tasks::TaskEvent::Batched { timestamp, batch_id }
}
v4::tasks::TaskEvent::Processing(date) => {
v5::tasks::TaskEvent::Processing(date)
}
v4::tasks::TaskEvent::Succeded { result, timestamp } => {
v5::tasks::TaskEvent::Succeeded {
result: match result {
v4::tasks::TaskResult::DocumentAddition {
indexed_documents,
} => v5::tasks::TaskResult::DocumentAddition {
indexed_documents,
},
v4::tasks::TaskResult::DocumentDeletion {
deleted_documents,
} => v5::tasks::TaskResult::DocumentDeletion {
deleted_documents,
},
v4::tasks::TaskResult::ClearAll { deleted_documents } => {
v5::tasks::TaskResult::ClearAll { deleted_documents }
}
v4::tasks::TaskResult::Other => {
v5::tasks::TaskResult::Other
}
},
timestamp,
}
}
v4::tasks::TaskEvent::Failed { error, timestamp } => {
v5::tasks::TaskEvent::Failed {
error: v5::ResponseError::from(error),
timestamp,
}
}
})
.collect(),
};
(task, content_file)
})
}))
}
pub fn keys(&mut self) -> Box<dyn Iterator<Item = Result<v5::Key>> + '_> {
let keys = match self {
CompatV4ToV5::V4(v4) => v4.keys(),
CompatV4ToV5::Compat(compat) => compat.keys(),
};
Box::new(keys.map(|key| {
key.map(|key| v5::Key {
description: key.description,
name: None,
uid: v5::keys::KeyId::new_v4(),
actions: key.actions.into_iter().filter_map(|action| action.into()).collect(),
indexes: key
.indexes
.into_iter()
.map(|index| match index.as_str() {
"*" => v5::StarOr::Star,
_ => v5::StarOr::Other(v5::meta::IndexUid(index)),
})
.collect(),
expires_at: key.expires_at,
created_at: key.created_at,
updated_at: key.updated_at,
})
}))
}
}
pub enum CompatIndexV4ToV5 {
V4(v4::V4IndexReader),
Compat(CompatIndexV3ToV4),
}
impl From<v4::V4IndexReader> for CompatIndexV4ToV5 {
fn from(index_reader: v4::V4IndexReader) -> Self {
Self::V4(index_reader)
}
}
impl From<CompatIndexV3ToV4> for CompatIndexV4ToV5 {
fn from(index_reader: CompatIndexV3ToV4) -> Self {
Self::Compat(index_reader)
}
}
impl CompatIndexV4ToV5 {
pub fn metadata(&self) -> &crate::IndexMetadata {
match self {
CompatIndexV4ToV5::V4(v4) => v4.metadata(),
CompatIndexV4ToV5::Compat(compat) => compat.metadata(),
}
}
pub fn documents(&mut self) -> Result<Box<dyn Iterator<Item = Result<Document>> + '_>> {
match self {
CompatIndexV4ToV5::V4(v4) => v4
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
CompatIndexV4ToV5::Compat(compat) => compat
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
}
}
pub fn settings(&mut self) -> Result<v5::Settings<v5::Checked>> {
match self {
CompatIndexV4ToV5::V4(v4) => Ok(v5::Settings::from(v4.settings()?).check()),
CompatIndexV4ToV5::Compat(compat) => Ok(v5::Settings::from(compat.settings()?).check()),
}
}
}
impl<T> From<v4::Setting<T>> for v5::Setting<T> {
fn from(setting: v4::Setting<T>) -> Self {
match setting {
v4::Setting::Set(t) => v5::Setting::Set(t),
v4::Setting::Reset => v5::Setting::Reset,
v4::Setting::NotSet => v5::Setting::NotSet,
}
}
}
impl From<v4::ResponseError> for v5::ResponseError {
fn from(error: v4::ResponseError) -> Self {
let code = match error.error_code.as_ref() {
"index_creation_failed" => v5::Code::CreateIndex,
"index_already_exists" => v5::Code::IndexAlreadyExists,
"index_not_found" => v5::Code::IndexNotFound,
"invalid_index_uid" => v5::Code::InvalidIndexUid,
"invalid_min_word_length_for_typo" => v5::Code::InvalidMinWordLengthForTypo,
"invalid_state" => v5::Code::InvalidState,
"primary_key_inference_failed" => v5::Code::MissingPrimaryKey,
"index_primary_key_already_exists" => v5::Code::PrimaryKeyAlreadyPresent,
"max_fields_limit_exceeded" => v5::Code::MaxFieldsLimitExceeded,
"missing_document_id" => v5::Code::MissingDocumentId,
"invalid_document_id" => v5::Code::InvalidDocumentId,
"invalid_filter" => v5::Code::Filter,
"invalid_sort" => v5::Code::Sort,
"bad_parameter" => v5::Code::BadParameter,
"bad_request" => v5::Code::BadRequest,
"database_size_limit_reached" => v5::Code::DatabaseSizeLimitReached,
"document_not_found" => v5::Code::DocumentNotFound,
"internal" => v5::Code::Internal,
"invalid_geo_field" => v5::Code::InvalidGeoField,
"invalid_ranking_rule" => v5::Code::InvalidRankingRule,
"invalid_store_file" => v5::Code::InvalidStore,
"invalid_api_key" => v5::Code::InvalidToken,
"missing_authorization_header" => v5::Code::MissingAuthorizationHeader,
"no_space_left_on_device" => v5::Code::NoSpaceLeftOnDevice,
"dump_not_found" => v5::Code::DumpNotFound,
"task_not_found" => v5::Code::TaskNotFound,
"payload_too_large" => v5::Code::PayloadTooLarge,
"unretrievable_document" => v5::Code::RetrieveDocument,
"search_error" => v5::Code::SearchDocuments,
"unsupported_media_type" => v5::Code::UnsupportedMediaType,
"dump_already_processing" => v5::Code::DumpAlreadyInProgress,
"dump_process_failed" => v5::Code::DumpProcessFailed,
"invalid_content_type" => v5::Code::InvalidContentType,
"missing_content_type" => v5::Code::MissingContentType,
"malformed_payload" => v5::Code::MalformedPayload,
"missing_payload" => v5::Code::MissingPayload,
"api_key_not_found" => v5::Code::ApiKeyNotFound,
"missing_parameter" => v5::Code::MissingParameter,
"invalid_api_key_actions" => v5::Code::InvalidApiKeyActions,
"invalid_api_key_indexes" => v5::Code::InvalidApiKeyIndexes,
"invalid_api_key_expires_at" => v5::Code::InvalidApiKeyExpiresAt,
"invalid_api_key_description" => v5::Code::InvalidApiKeyDescription,
other => {
log::warn!("Unknown error code {}", other);
v5::Code::UnretrievableErrorCode
}
};
v5::ResponseError::from_msg(error.message, code)
}
}
impl<T> From<v4::Settings<T>> for v5::Settings<v5::Unchecked> {
fn from(settings: v4::Settings<T>) -> Self {
v5::Settings {
displayed_attributes: settings.displayed_attributes.into(),
searchable_attributes: settings.searchable_attributes.into(),
filterable_attributes: settings.filterable_attributes.into(),
sortable_attributes: settings.sortable_attributes.into(),
ranking_rules: settings.ranking_rules.into(),
stop_words: settings.stop_words.into(),
synonyms: settings.synonyms.into(),
distinct_attribute: settings.distinct_attribute.into(),
typo_tolerance: match settings.typo_tolerance {
v4::Setting::Set(typo) => v5::Setting::Set(v5::TypoTolerance {
enabled: typo.enabled.into(),
min_word_size_for_typos: match typo.min_word_size_for_typos {
v4::Setting::Set(t) => v5::Setting::Set(v5::MinWordSizeForTypos {
one_typo: t.one_typo.into(),
two_typos: t.two_typos.into(),
}),
v4::Setting::Reset => v5::Setting::Reset,
v4::Setting::NotSet => v5::Setting::NotSet,
},
disable_on_words: typo.disable_on_words.into(),
disable_on_attributes: typo.disable_on_attributes.into(),
}),
v4::Setting::Reset => v5::Setting::Reset,
v4::Setting::NotSet => v5::Setting::NotSet,
},
faceting: v5::Setting::NotSet,
pagination: v5::Setting::NotSet,
_kind: std::marker::PhantomData,
}
}
}
impl From<v4::Action> for Option<v5::Action> {
fn from(key: v4::Action) -> Self {
match key {
v4::Action::All => Some(v5::Action::All),
v4::Action::Search => Some(v5::Action::Search),
v4::Action::DocumentsAdd => Some(v5::Action::DocumentsAdd),
v4::Action::DocumentsGet => Some(v5::Action::DocumentsGet),
v4::Action::DocumentsDelete => Some(v5::Action::DocumentsDelete),
v4::Action::IndexesAdd => Some(v5::Action::IndexesAdd),
v4::Action::IndexesGet => Some(v5::Action::IndexesGet),
v4::Action::IndexesUpdate => Some(v5::Action::IndexesUpdate),
v4::Action::IndexesDelete => Some(v5::Action::IndexesDelete),
v4::Action::TasksGet => Some(v5::Action::TasksGet),
v4::Action::SettingsGet => Some(v5::Action::SettingsGet),
v4::Action::SettingsUpdate => Some(v5::Action::SettingsUpdate),
v4::Action::StatsGet => Some(v5::Action::StatsGet),
v4::Action::DumpsCreate => Some(v5::Action::DumpsCreate),
v4::Action::DumpsGet => None,
v4::Action::Version => Some(v5::Action::Version),
}
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v4_v5() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = v4::V4Reader::open(dir).unwrap().to_v5();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-06 12:53:49.131989609 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"ed9a30cded4c046ef46f7cff7450347e");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys, { "[].uid" => "[uuid]" }), @"1384361d734fd77c23804c9696228660");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"26947283836ee4cdf0974f82efcc5332");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"156871410d17e23803d0c90ddc6a66cb");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"69c9916142612cf4a2da9b9ed9455e9e");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}
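
The compat layer is built to be chained: each `CompatVxToVy` wraps either the real reader for version x or the compat layer below it, so a v4 dump reaches the v6 API through nothing more than `to_v5().to_v6()`. A sketch of driving the chain by hand, mirroring the test module above (`v4::V4Reader` is crate-internal, brought into scope there by `use super::*`; external callers would go through `DumpReader::open` instead):

```rust
use std::fs::File;
use std::io::BufReader;

use flate2::bufread::GzDecoder;
use tempfile::TempDir;

fn main() {
    // Unpack the gzipped tarball exactly as the test above does.
    let dir = TempDir::new().unwrap();
    let mut dump = BufReader::new(File::open("tests/assets/v4.dump").unwrap());
    let gz = GzDecoder::new(&mut dump);
    tar::Archive::new(gz).unpack(dir.path()).unwrap();

    // Each `.to_vX()` merely wraps the previous reader; tasks, keys, and
    // settings are converted lazily, as the iterators are consumed.
    let mut dump = v4::V4Reader::open(dir).unwrap().to_v5().to_v6();
    for index in dump.indexes().unwrap() {
        let mut index = index.unwrap();
        let count = index.documents().unwrap().count();
        println!("{}: {count} documents", index.metadata().uid);
    }
}
```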

View File

@@ -0,0 +1,488 @@
use super::v4_to_v5::{CompatIndexV4ToV5, CompatV4ToV5};
use crate::reader::{v5, v6, Document, UpdateFile};
use crate::Result;
pub enum CompatV5ToV6 {
V5(v5::V5Reader),
Compat(CompatV4ToV5),
}
impl CompatV5ToV6 {
pub fn new_v5(v5: v5::V5Reader) -> CompatV5ToV6 {
CompatV5ToV6::V5(v5)
}
pub fn version(&self) -> crate::Version {
match self {
CompatV5ToV6::V5(v5) => v5.version(),
CompatV5ToV6::Compat(compat) => compat.version(),
}
}
pub fn date(&self) -> Option<time::OffsetDateTime> {
match self {
CompatV5ToV6::V5(v5) => v5.date(),
CompatV5ToV6::Compat(compat) => compat.date(),
}
}
pub fn instance_uid(&self) -> Result<Option<uuid::Uuid>> {
match self {
CompatV5ToV6::V5(v5) => v5.instance_uid(),
CompatV5ToV6::Compat(compat) => compat.instance_uid(),
}
}
pub fn indexes(&self) -> Result<Box<dyn Iterator<Item = Result<CompatIndexV5ToV6>> + '_>> {
let indexes = match self {
CompatV5ToV6::V5(v5) => {
Box::new(v5.indexes()?.map(|index| index.map(CompatIndexV5ToV6::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV5ToV6>> + '_>
}
CompatV5ToV6::Compat(compat) => {
Box::new(compat.indexes()?.map(|index| index.map(CompatIndexV5ToV6::from)))
as Box<dyn Iterator<Item = Result<CompatIndexV5ToV6>> + '_>
}
};
Ok(indexes)
}
pub fn tasks(
&mut self,
) -> Result<Box<dyn Iterator<Item = Result<(v6::Task, Option<Box<UpdateFile>>)>> + '_>> {
let instance_uid = self.instance_uid().ok().flatten();
let keys = self.keys()?.collect::<Result<Vec<_>>>()?;
let tasks = match self {
CompatV5ToV6::V5(v5) => v5.tasks(),
CompatV5ToV6::Compat(compat) => compat.tasks(),
};
Ok(Box::new(tasks.map(move |task| {
task.map(|(task, content_file)| {
let mut task_view: v5::tasks::TaskView = task.clone().into();
if task_view.status == v5::Status::Processing {
task_view.started_at = None;
}
let task = v6::Task {
uid: task_view.uid,
index_uid: task_view.index_uid,
status: match task_view.status {
v5::Status::Enqueued => v6::Status::Enqueued,
v5::Status::Processing => v6::Status::Enqueued,
v5::Status::Succeeded => v6::Status::Succeeded,
v5::Status::Failed => v6::Status::Failed,
},
kind: match task.content {
v5::tasks::TaskContent::IndexCreation { primary_key, .. } => {
v6::Kind::IndexCreation { primary_key }
}
v5::tasks::TaskContent::IndexUpdate { primary_key, .. } => {
v6::Kind::IndexUpdate { primary_key }
}
v5::tasks::TaskContent::IndexDeletion { .. } => v6::Kind::IndexDeletion,
v5::tasks::TaskContent::DocumentAddition {
merge_strategy,
allow_index_creation,
primary_key,
documents_count,
..
} => v6::Kind::DocumentImport {
primary_key,
documents_count: documents_count as u64,
method: match merge_strategy {
v5::tasks::IndexDocumentsMethod::ReplaceDocuments => {
v6::milli::update::IndexDocumentsMethod::ReplaceDocuments
}
v5::tasks::IndexDocumentsMethod::UpdateDocuments => {
v6::milli::update::IndexDocumentsMethod::UpdateDocuments
}
},
allow_index_creation,
},
v5::tasks::TaskContent::DocumentDeletion { deletion, .. } => match deletion
{
v5::tasks::DocumentDeletion::Clear => v6::Kind::DocumentClear,
v5::tasks::DocumentDeletion::Ids(documents_ids) => {
v6::Kind::DocumentDeletion { documents_ids }
}
},
v5::tasks::TaskContent::SettingsUpdate {
allow_index_creation,
is_deletion,
settings,
..
} => v6::Kind::Settings {
is_deletion,
allow_index_creation,
settings: Box::new(settings.into()),
},
v5::tasks::TaskContent::Dump { uid: _ } => {
// in v6 we compute the dump_uid from the started_at processing time
v6::Kind::DumpCreation { keys: keys.clone(), instance_uid }
}
},
canceled_by: None,
details: task_view.details.map(|details| match details {
v5::Details::DocumentAddition { received_documents, indexed_documents } => {
v6::Details::DocumentAdditionOrUpdate {
received_documents: received_documents as u64,
indexed_documents,
}
}
v5::Details::Settings { settings } => {
v6::Details::SettingsUpdate { settings: Box::new(settings.into()) }
}
v5::Details::IndexInfo { primary_key } => {
v6::Details::IndexInfo { primary_key }
}
v5::Details::DocumentDeletion {
received_document_ids,
deleted_documents,
} => v6::Details::DocumentDeletion {
provided_ids: received_document_ids,
deleted_documents,
},
v5::Details::ClearAll { deleted_documents } => {
v6::Details::ClearAll { deleted_documents }
}
v5::Details::Dump { dump_uid } => {
v6::Details::Dump { dump_uid: Some(dump_uid) }
}
}),
error: task_view.error.map(|e| e.into()),
enqueued_at: task_view.enqueued_at,
started_at: task_view.started_at,
finished_at: task_view.finished_at,
};
(task, content_file)
})
})))
}
pub fn keys(&mut self) -> Result<Box<dyn Iterator<Item = Result<v6::Key>> + '_>> {
let keys = match self {
CompatV5ToV6::V5(v5) => v5.keys()?,
CompatV5ToV6::Compat(compat) => compat.keys(),
};
Ok(Box::new(keys.map(|key| {
key.map(|key| v6::Key {
description: key.description,
name: key.name,
uid: key.uid,
actions: key.actions.into_iter().map(|action| action.into()).collect(),
indexes: key
.indexes
.into_iter()
.map(|index| match index {
v5::StarOr::Star => v6::StarOr::Star,
v5::StarOr::Other(uid) => {
v6::StarOr::Other(v6::IndexUid::new_unchecked(uid.as_str()))
}
})
.collect(),
expires_at: key.expires_at,
created_at: key.created_at,
updated_at: key.updated_at,
})
})))
}
}
pub enum CompatIndexV5ToV6 {
V5(v5::V5IndexReader),
Compat(CompatIndexV4ToV5),
}
impl From<v5::V5IndexReader> for CompatIndexV5ToV6 {
fn from(index_reader: v5::V5IndexReader) -> Self {
Self::V5(index_reader)
}
}
impl From<CompatIndexV4ToV5> for CompatIndexV5ToV6 {
fn from(index_reader: CompatIndexV4ToV5) -> Self {
Self::Compat(index_reader)
}
}
impl CompatIndexV5ToV6 {
pub fn new_v5(v5: v5::V5IndexReader) -> CompatIndexV5ToV6 {
CompatIndexV5ToV6::V5(v5)
}
pub fn metadata(&self) -> &crate::IndexMetadata {
match self {
CompatIndexV5ToV6::V5(v5) => v5.metadata(),
CompatIndexV5ToV6::Compat(compat) => compat.metadata(),
}
}
pub fn documents(&mut self) -> Result<Box<dyn Iterator<Item = Result<Document>> + '_>> {
match self {
CompatIndexV5ToV6::V5(v5) => v5
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
CompatIndexV5ToV6::Compat(compat) => compat
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
}
}
pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
match self {
CompatIndexV5ToV6::V5(v5) => Ok(v6::Settings::from(v5.settings()?).check()),
CompatIndexV5ToV6::Compat(compat) => Ok(v6::Settings::from(compat.settings()?).check()),
}
}
}
impl<T> From<v5::Setting<T>> for v6::Setting<T> {
fn from(setting: v5::Setting<T>) -> Self {
match setting {
v5::Setting::Set(t) => v6::Setting::Set(t),
v5::Setting::Reset => v6::Setting::Reset,
v5::Setting::NotSet => v6::Setting::NotSet,
}
}
}
impl From<v5::ResponseError> for v6::ResponseError {
fn from(error: v5::ResponseError) -> Self {
let code = match error.error_code.as_ref() {
"index_creation_failed" => v6::Code::CreateIndex,
"index_already_exists" => v6::Code::IndexAlreadyExists,
"index_not_found" => v6::Code::IndexNotFound,
"invalid_index_uid" => v6::Code::InvalidIndexUid,
"invalid_min_word_length_for_typo" => v6::Code::InvalidMinWordLengthForTypo,
"invalid_state" => v6::Code::InvalidState,
"primary_key_inference_failed" => v6::Code::MissingPrimaryKey,
"index_primary_key_already_exists" => v6::Code::PrimaryKeyAlreadyPresent,
"max_fields_limit_exceeded" => v6::Code::MaxFieldsLimitExceeded,
"missing_document_id" => v6::Code::MissingDocumentId,
"invalid_document_id" => v6::Code::InvalidDocumentId,
"invalid_filter" => v6::Code::Filter,
"invalid_sort" => v6::Code::Sort,
"bad_parameter" => v6::Code::BadParameter,
"bad_request" => v6::Code::BadRequest,
"database_size_limit_reached" => v6::Code::DatabaseSizeLimitReached,
"document_not_found" => v6::Code::DocumentNotFound,
"internal" => v6::Code::Internal,
"invalid_geo_field" => v6::Code::InvalidGeoField,
"invalid_ranking_rule" => v6::Code::InvalidRankingRule,
"invalid_store_file" => v6::Code::InvalidStore,
"invalid_api_key" => v6::Code::InvalidToken,
"missing_authorization_header" => v6::Code::MissingAuthorizationHeader,
"no_space_left_on_device" => v6::Code::NoSpaceLeftOnDevice,
"dump_not_found" => v6::Code::DumpNotFound,
"task_not_found" => v6::Code::TaskNotFound,
"payload_too_large" => v6::Code::PayloadTooLarge,
"unretrievable_document" => v6::Code::RetrieveDocument,
"search_error" => v6::Code::SearchDocuments,
"unsupported_media_type" => v6::Code::UnsupportedMediaType,
"dump_already_processing" => v6::Code::DumpAlreadyInProgress,
"dump_process_failed" => v6::Code::DumpProcessFailed,
"invalid_content_type" => v6::Code::InvalidContentType,
"missing_content_type" => v6::Code::MissingContentType,
"malformed_payload" => v6::Code::MalformedPayload,
"missing_payload" => v6::Code::MissingPayload,
"api_key_not_found" => v6::Code::ApiKeyNotFound,
"missing_parameter" => v6::Code::MissingParameter,
"invalid_api_key_actions" => v6::Code::InvalidApiKeyActions,
"invalid_api_key_indexes" => v6::Code::InvalidApiKeyIndexes,
"invalid_api_key_expires_at" => v6::Code::InvalidApiKeyExpiresAt,
"invalid_api_key_description" => v6::Code::InvalidApiKeyDescription,
"invalid_api_key_name" => v6::Code::InvalidApiKeyName,
"invalid_api_key_uid" => v6::Code::InvalidApiKeyUid,
"immutable_field" => v6::Code::ImmutableField,
"api_key_already_exists" => v6::Code::ApiKeyAlreadyExists,
other => {
log::warn!("Unknown error code {}", other);
v6::Code::UnretrievableErrorCode
}
};
v6::ResponseError::from_msg(error.message, code)
}
}
impl<T> From<v5::Settings<T>> for v6::Settings<v6::Unchecked> {
fn from(settings: v5::Settings<T>) -> Self {
v6::Settings {
displayed_attributes: settings.displayed_attributes.into(),
searchable_attributes: settings.searchable_attributes.into(),
filterable_attributes: settings.filterable_attributes.into(),
sortable_attributes: settings.sortable_attributes.into(),
ranking_rules: settings.ranking_rules.into(),
stop_words: settings.stop_words.into(),
synonyms: settings.synonyms.into(),
distinct_attribute: settings.distinct_attribute.into(),
typo_tolerance: match settings.typo_tolerance {
v5::Setting::Set(typo) => v6::Setting::Set(v6::TypoTolerance {
enabled: typo.enabled.into(),
min_word_size_for_typos: match typo.min_word_size_for_typos {
v5::Setting::Set(t) => v6::Setting::Set(v6::MinWordSizeForTypos {
one_typo: t.one_typo.into(),
two_typos: t.two_typos.into(),
}),
v5::Setting::Reset => v6::Setting::Reset,
v5::Setting::NotSet => v6::Setting::NotSet,
},
disable_on_words: typo.disable_on_words.into(),
disable_on_attributes: typo.disable_on_attributes.into(),
}),
v5::Setting::Reset => v6::Setting::Reset,
v5::Setting::NotSet => v6::Setting::NotSet,
},
faceting: match settings.faceting {
v5::Setting::Set(faceting) => v6::Setting::Set(v6::FacetingSettings {
max_values_per_facet: faceting.max_values_per_facet.into(),
}),
v5::Setting::Reset => v6::Setting::Reset,
v5::Setting::NotSet => v6::Setting::NotSet,
},
pagination: match settings.pagination {
v5::Setting::Set(pagination) => v6::Setting::Set(v6::PaginationSettings {
max_total_hits: pagination.max_total_hits.into(),
}),
v5::Setting::Reset => v6::Setting::Reset,
v5::Setting::NotSet => v6::Setting::NotSet,
},
_kind: std::marker::PhantomData,
}
}
}
impl From<v5::Action> for v6::Action {
fn from(key: v5::Action) -> Self {
match key {
v5::Action::All => v6::Action::All,
v5::Action::Search => v6::Action::Search,
v5::Action::DocumentsAll => v6::Action::DocumentsAll,
v5::Action::DocumentsAdd => v6::Action::DocumentsAdd,
v5::Action::DocumentsGet => v6::Action::DocumentsGet,
v5::Action::DocumentsDelete => v6::Action::DocumentsDelete,
v5::Action::IndexesAll => v6::Action::IndexesAll,
v5::Action::IndexesAdd => v6::Action::IndexesAdd,
v5::Action::IndexesGet => v6::Action::IndexesGet,
v5::Action::IndexesUpdate => v6::Action::IndexesUpdate,
v5::Action::IndexesDelete => v6::Action::IndexesDelete,
v5::Action::TasksAll => v6::Action::TasksAll,
v5::Action::TasksGet => v6::Action::TasksGet,
v5::Action::SettingsAll => v6::Action::SettingsAll,
v5::Action::SettingsGet => v6::Action::SettingsGet,
v5::Action::SettingsUpdate => v6::Action::SettingsUpdate,
v5::Action::StatsAll => v6::Action::StatsAll,
v5::Action::StatsGet => v6::Action::StatsGet,
v5::Action::MetricsAll => v6::Action::MetricsAll,
v5::Action::MetricsGet => v6::Action::MetricsGet,
v5::Action::DumpsAll => v6::Action::DumpsAll,
v5::Action::DumpsCreate => v6::Action::DumpsCreate,
v5::Action::Version => v6::Action::Version,
v5::Action::KeysAdd => v6::Action::KeysAdd,
v5::Action::KeysGet => v6::Action::KeysGet,
v5::Action::KeysUpdate => v6::Action::KeysUpdate,
v5::Action::KeysDelete => v6::Action::KeysDelete,
}
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v5_v6() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = v5::V5Reader::open(dir).unwrap().to_v6();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-04 15:55:10.344982459 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"42d4200cf6d92a6449989ca48cd8e28a");
assert_eq!(update_files.len(), 22);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_some()); // the enqueued document addition
assert!(update_files[2..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"c9d2b467fe2fca0b35580d8a999808fb");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"8e5cadabf74aebe1160bf51c3d489efe");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4894ac1e74b9e1069ed5ee262b7a1aca");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"054dbf08a79e08bb9becba6f5d090f13");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}
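
Two details of `CompatV5ToV6::tasks` above are easy to miss. First, a task that was still `Processing` when the dump was taken is demoted to `Enqueued`: its work never completed, so it must be re-run on import. Second, `instance_uid` and the keys are collected *before* the `match self`, because the returned iterator holds the `&mut self` borrow for its entire lifetime; everything the mapping closure needs is cloned up front and `move`d in. A reduced sketch of that borrow pattern (names are illustrative):

```rust
struct Reader {
    keys: Vec<String>,
    tasks: Vec<u32>,
}

impl Reader {
    fn tasks(&mut self) -> impl Iterator<Item = (u32, Vec<String>)> + '_ {
        // Short immutable borrow of `self.keys`; it ends on this line.
        let keys = self.keys.clone();
        // `iter()` borrows `self.tasks` for as long as the iterator lives,
        // so `keys` had to be cloned up front and moved into the closure.
        self.tasks.iter().map(move |t| (*t, keys.clone()))
    }
}

fn main() {
    let mut r = Reader { keys: vec!["a".into()], tasks: vec![1, 2] };
    assert_eq!(r.tasks().count(), 2);
}
```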

42
dump/src/reader/error.rs Normal file
View File

@@ -0,0 +1,42 @@
use meilisearch_auth::error::AuthControllerError;
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::internal_error;
use crate::{index_resolver::error::IndexResolverError, tasks::error::TaskError};
pub type Result<T> = std::result::Result<T, DumpError>;
#[derive(thiserror::Error, Debug)]
pub enum DumpError {
#[error("An internal error has occurred. `{0}`.")]
Internal(Box<dyn std::error::Error + Send + Sync + 'static>),
#[error("{0}")]
IndexResolver(Box<IndexResolverError>),
}
internal_error!(
DumpError: milli::heed::Error,
std::io::Error,
tokio::task::JoinError,
tokio::sync::oneshot::error::RecvError,
serde_json::error::Error,
tempfile::PersistError,
fs_extra::error::Error,
AuthControllerError,
TaskError
);
impl From<IndexResolverError> for DumpError {
fn from(e: IndexResolverError) -> Self {
Self::IndexResolver(Box::new(e))
}
}
impl ErrorCode for DumpError {
fn error_code(&self) -> Code {
match self {
DumpError::Internal(_) => Code::Internal,
DumpError::IndexResolver(e) => e.error_code(),
}
}
}
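
`internal_error!` comes from `meilisearch_types` and generates one `From` impl per listed type, funnelling each error into the `Internal` variant so that `?` works on all of them. A hand-expanded sketch of what the macro presumably emits for a single type, inferred from its usage here:

```rust
// Sketch of the expansion for `std::io::Error` (assumption based on how
// the macro is invoked above; the real expansion lives in meilisearch_types).
impl From<std::io::Error> for DumpError {
    fn from(error: std::io::Error) -> Self {
        DumpError::Internal(Box::new(error))
    }
}
```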

535
dump/src/reader/mod.rs Normal file
View File

@@ -0,0 +1,535 @@
use std::fs::File;
use std::io::{BufReader, Read};
use flate2::bufread::GzDecoder;
use serde::Deserialize;
use tempfile::TempDir;
use self::compat::v4_to_v5::CompatV4ToV5;
use self::compat::v5_to_v6::{CompatIndexV5ToV6, CompatV5ToV6};
use self::v5::V5Reader;
use self::v6::{V6IndexReader, V6Reader};
use crate::{Error, Result, Version};
mod compat;
// pub(self) mod v1;
pub(self) mod v2;
pub(self) mod v3;
pub(self) mod v4;
pub(self) mod v5;
pub(self) mod v6;
pub type Document = serde_json::Map<String, serde_json::Value>;
pub type UpdateFile = dyn Iterator<Item = Result<Document>>;
pub enum DumpReader {
Current(V6Reader),
Compat(CompatV5ToV6),
}
impl DumpReader {
pub fn open(dump: impl Read) -> Result<DumpReader> {
let path = TempDir::new()?;
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(path.path())?;
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct MetadataVersion {
pub dump_version: Version,
}
let mut meta_file = File::open(path.path().join("metadata.json"))?;
let MetadataVersion { dump_version } = serde_json::from_reader(&mut meta_file)?;
match dump_version {
// Version::V1 => Ok(Box::new(v1::Reader::open(path)?)),
Version::V1 => Err(Error::DumpV1Unsupported),
Version::V2 => Ok(v2::V2Reader::open(path)?.to_v3().to_v4().to_v5().to_v6().into()),
Version::V3 => Ok(v3::V3Reader::open(path)?.to_v4().to_v5().to_v6().into()),
Version::V4 => Ok(v4::V4Reader::open(path)?.to_v5().to_v6().into()),
Version::V5 => Ok(v5::V5Reader::open(path)?.to_v6().into()),
Version::V6 => Ok(v6::V6Reader::open(path)?.into()),
}
}
pub fn version(&self) -> crate::Version {
match self {
DumpReader::Current(current) => current.version(),
DumpReader::Compat(compat) => compat.version(),
}
}
pub fn date(&self) -> Option<time::OffsetDateTime> {
match self {
DumpReader::Current(current) => current.date(),
DumpReader::Compat(compat) => compat.date(),
}
}
pub fn instance_uid(&self) -> Result<Option<uuid::Uuid>> {
match self {
DumpReader::Current(current) => current.instance_uid(),
DumpReader::Compat(compat) => compat.instance_uid(),
}
}
pub fn indexes(&self) -> Result<Box<dyn Iterator<Item = Result<DumpIndexReader>> + '_>> {
match self {
DumpReader::Current(current) => {
let indexes = Box::new(current.indexes()?.map(|res| res.map(DumpIndexReader::from)))
as Box<dyn Iterator<Item = Result<DumpIndexReader>> + '_>;
Ok(indexes)
}
DumpReader::Compat(compat) => {
let indexes = Box::new(compat.indexes()?.map(|res| res.map(DumpIndexReader::from)))
as Box<dyn Iterator<Item = Result<DumpIndexReader>> + '_>;
Ok(indexes)
}
}
}
pub fn tasks(
&mut self,
) -> Result<Box<dyn Iterator<Item = Result<(v6::Task, Option<Box<UpdateFile>>)>> + '_>> {
match self {
DumpReader::Current(current) => Ok(current.tasks()),
DumpReader::Compat(compat) => compat.tasks(),
}
}
pub fn keys(&mut self) -> Result<Box<dyn Iterator<Item = Result<v6::Key>> + '_>> {
match self {
DumpReader::Current(current) => Ok(current.keys()),
DumpReader::Compat(compat) => compat.keys(),
}
}
}
impl From<V6Reader> for DumpReader {
fn from(value: V6Reader) -> Self {
DumpReader::Current(value)
}
}
impl From<CompatV5ToV6> for DumpReader {
fn from(value: CompatV5ToV6) -> Self {
DumpReader::Compat(value)
}
}
impl From<V5Reader> for DumpReader {
fn from(value: V5Reader) -> Self {
DumpReader::Compat(value.to_v6())
}
}
impl From<CompatV4ToV5> for DumpReader {
fn from(value: CompatV4ToV5) -> Self {
DumpReader::Compat(value.to_v6())
}
}
pub enum DumpIndexReader {
Current(v6::V6IndexReader),
Compat(Box<CompatIndexV5ToV6>),
}
impl DumpIndexReader {
pub fn new_v6(v6: v6::V6IndexReader) -> DumpIndexReader {
DumpIndexReader::Current(v6)
}
pub fn metadata(&self) -> &crate::IndexMetadata {
match self {
DumpIndexReader::Current(v6) => v6.metadata(),
DumpIndexReader::Compat(compat) => compat.metadata(),
}
}
pub fn documents(&mut self) -> Result<Box<dyn Iterator<Item = Result<Document>> + '_>> {
match self {
DumpIndexReader::Current(v6) => v6
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
DumpIndexReader::Compat(compat) => compat
.documents()
.map(|iter| Box::new(iter) as Box<dyn Iterator<Item = Result<Document>> + '_>),
}
}
pub fn settings(&mut self) -> Result<v6::Settings<v6::Checked>> {
match self {
DumpIndexReader::Current(v6) => v6.settings(),
DumpIndexReader::Compat(compat) => compat.settings(),
}
}
}
impl From<V6IndexReader> for DumpIndexReader {
fn from(value: V6IndexReader) -> Self {
DumpIndexReader::Current(value)
}
}
impl From<CompatIndexV5ToV6> for DumpIndexReader {
fn from(value: CompatIndexV5ToV6) -> Self {
DumpIndexReader::Compat(Box::new(value))
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use meili_snap::insta;
use super::*;
#[test]
#[ignore]
fn import_dump_v5() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-04 15:55:10.344982459 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"42d4200cf6d92a6449989ca48cd8e28a");
assert_eq!(update_files.len(), 22);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_some()); // the enqueued document addition
assert!(update_files[2..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"c9d2b467fe2fca0b35580d8a999808fb");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"8e5cadabf74aebe1160bf51c3d489efe");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4894ac1e74b9e1069ed5ee262b7a1aca");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"054dbf08a79e08bb9becba6f5d090f13");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v4() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-06 12:53:49.131989609 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"491e244a80a19fe2a900b809d310c24a");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys, { "[].uid" => "[uuid]" }), @"d751713988987e9331980363e24189ce");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"1f9da51a4518166fb440def5437eafdb");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"488816aba82c1bd65f1609630055c611");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7b4f66dad597dc651650f35fe34be27f");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v3() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-07 11:39:03.709153554 +00:00:00");
assert_eq!(dump.instance_uid().unwrap(), None);
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"7cacce2e21702be696b866808c726946");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"d751713988987e9331980363e24189ce");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"855f3165dec609b919171ff83f82b364");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"43e0bf1746c3ea1d64c1e10ea544c190");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"5fd06a5038f49311600379d43412b655");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"5fd06a5038f49311600379d43412b655");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v2() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
// top level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-09 20:27:59.904096267 +00:00:00");
assert_eq!(dump.instance_uid().unwrap(), None);
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"6cabec4e252b74c8f3a2c8517622e85f");
assert_eq!(update_files.len(), 9);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"d751713988987e9331980363e24189ce");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"b15b71f56dd082d8e8ec5182e688bf36");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"5389153ddf5527fa79c54b6a6e9c21f6");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"8aebab01301d266acf3e18dd449c008f");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"8aebab01301d266acf3e18dd449c008f");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}
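
`DumpReader::open` is the single entry point: it unpacks the tarball into a temporary directory, reads `dumpVersion` out of `metadata.json`, and assembles the right `to_vX()` chain, so callers only ever see v6-shaped tasks, keys, and settings. A minimal consumer sketch (assuming the crate re-exports `DumpReader` and `Result` at its root):

```rust
use std::fs::File;

fn import(path: &str) -> dump::Result<()> {
    let mut dump = dump::DumpReader::open(File::open(path)?)?;
    if let Some(date) = dump.date() {
        println!("dump taken at {date}");
    }
    for key in dump.keys()? {
        let _key = key?; // re-create the API key here
    }
    for task in dump.tasks()? {
        let (_task, update_file) = task?;
        // `update_file` is `Some` only for a document addition that was
        // still enqueued when the dump was taken.
        let _ = update_file;
    }
    for index in dump.indexes()? {
        let mut index = index?;
        let _settings = index.settings()?;
        for document in index.documents()? {
            let _document = document?; // index the document here
        }
    }
    Ok(())
}
```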

173
dump/src/reader/v1/mod.rs Normal file
View File

@@ -0,0 +1,173 @@
use std::{
convert::Infallible,
fs::{self, File},
io::{BufRead, BufReader},
path::Path,
};
use tempfile::TempDir;
use time::OffsetDateTime;
use self::update::UpdateStatus;
use super::{DumpReader, IndexReader};
use crate::{Error, Result, Version};
pub mod settings;
pub mod update;
pub mod v1;
pub struct V1Reader {
dump: TempDir,
metadata: v1::Metadata,
indexes: Vec<V1IndexReader>,
}
struct V1IndexReader {
name: String,
documents: BufReader<File>,
settings: BufReader<File>,
updates: BufReader<File>,
current_update: Option<UpdateStatus>,
}
impl V1IndexReader {
pub fn new(name: String, path: &Path) -> Result<Self> {
let mut ret = V1IndexReader {
name,
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
settings: BufReader::new(File::open(path.join("settings.json"))?),
updates: BufReader::new(File::open(path.join("updates.jsonl"))?),
current_update: None,
};
// prime `current_update` with the first queued update, surfacing read errors
let _ = ret.next_update()?;
Ok(ret)
}
pub fn next_update(&mut self) -> Result<Option<UpdateStatus>> {
// `BufRead::lines` takes its reader by value: borrow `self.updates` mutably instead of moving it out
let current_update = if let Some(line) = (&mut self.updates).lines().next() {
Some(serde_json::from_str(&line?)?)
} else {
None
};
Ok(std::mem::replace(&mut self.current_update, current_update))
}
}
impl V1Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let metadata = serde_json::from_reader(&*meta_file)?;
let mut indexes = Vec::new();
let entries = fs::read_dir(dump.path())?;
for entry in entries {
let entry = entry?;
if entry.file_type()?.is_dir() {
indexes.push(V1IndexReader::new(
entry
.file_name()
.to_str()
.ok_or(Error::BadIndexName)?
.to_string(),
&entry.path(),
)?);
}
}
Ok(V1Reader {
dump,
metadata,
indexes,
})
}
fn next_update(&mut self) -> Result<Option<UpdateStatus>> {
if let Some((idx, _)) = self
.indexes
.iter()
// borrow the current update instead of moving it out of the shared reference
.map(|index| index.current_update.as_ref())
.enumerate()
.filter_map(|(idx, update)| update.map(|u| (idx, u)))
// copy the timestamp out so the key doesn't borrow from the closure argument
.min_by_key(|(_, update)| *update.enqueued_at())
{
self.indexes[idx].next_update()
} else {
Ok(None)
}
}
}
impl IndexReader for &V1IndexReader {
type Document = serde_json::Map<String, serde_json::Value>;
type Settings = settings::Settings;
fn name(&self) -> &str {
todo!()
}
fn documents(&self) -> Result<Box<dyn Iterator<Item = Result<Self::Document>>>> {
todo!()
}
fn settings(&self) -> Result<Self::Settings> {
todo!()
}
}
impl DumpReader for V1Reader {
type Document = serde_json::Map<String, serde_json::Value>;
type Settings = settings::Settings;
type Task = update::UpdateStatus;
type UpdateFile = Infallible;
type Key = Infallible;
fn date(&self) -> Option<OffsetDateTime> {
None
}
fn version(&self) -> Version {
Version::V1
}
fn indexes(
&self,
) -> Result<
Box<
dyn Iterator<
Item = Result<
Box<
dyn super::IndexReader<
Document = Self::Document,
Settings = Self::Settings,
>,
>,
>,
>,
>,
> {
Ok(Box::new(self.indexes.iter().map(|index| {
let index = Box::new(index)
as Box<dyn IndexReader<Document = Self::Document, Settings = Self::Settings>>;
Ok(index)
})))
}
fn tasks(&self) -> Box<dyn Iterator<Item = Result<(Self::Task, Option<Self::UpdateFile>)>>> {
Box::new(std::iter::from_fn(|| {
self.next_update()
.transpose()
.map(|result| result.map(|task| (task, None)))
}))
}
fn keys(&self) -> Box<dyn Iterator<Item = Result<Self::Key>>> {
Box::new(std::iter::empty())
}
}
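
v1 dumps keep one update queue per index, so `V1Reader::next_update` rebuilds the global processing order with a k-way merge: it repeatedly yields the update whose `enqueued_at` is earliest across all per-index queues. The same idea reduced to plain timestamps:

```rust
// One Vec per index, each already sorted by enqueue time.
fn pop_earliest(queues: &mut [Vec<i64>]) -> Option<i64> {
    let (idx, _) = queues
        .iter()
        .enumerate()
        .filter_map(|(idx, queue)| queue.first().map(|t| (idx, *t)))
        .min_by_key(|(_, t)| *t)?;
    Some(queues[idx].remove(0))
}

fn main() {
    let mut queues = vec![vec![3, 9], vec![1, 4], vec![7]];
    let mut merged = Vec::new();
    while let Some(t) = pop_earliest(&mut queues) {
        merged.push(t);
    }
    assert_eq!(merged, vec![1, 3, 4, 7, 9]);
}
```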

View File

@@ -0,0 +1,63 @@
use std::collections::{BTreeMap, BTreeSet};
use std::result::Result as StdResult;
use serde::{Deserialize, Deserializer, Serialize};
#[derive(Default, Clone, Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Settings {
#[serde(default, deserialize_with = "deserialize_some")]
pub ranking_rules: Option<Option<Vec<String>>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub distinct_attribute: Option<Option<String>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub searchable_attributes: Option<Option<Vec<String>>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub displayed_attributes: Option<Option<BTreeSet<String>>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub stop_words: Option<Option<BTreeSet<String>>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub synonyms: Option<Option<BTreeMap<String, Vec<String>>>>,
#[serde(default, deserialize_with = "deserialize_some")]
pub attributes_for_faceting: Option<Option<Vec<String>>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SettingsUpdate {
pub ranking_rules: UpdateState<Vec<RankingRule>>,
pub distinct_attribute: UpdateState<String>,
pub primary_key: UpdateState<String>,
pub searchable_attributes: UpdateState<Vec<String>>,
pub displayed_attributes: UpdateState<BTreeSet<String>>,
pub stop_words: UpdateState<BTreeSet<String>>,
pub synonyms: UpdateState<BTreeMap<String, Vec<String>>>,
pub attributes_for_faceting: UpdateState<Vec<String>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum UpdateState<T> {
Update(T),
Clear,
Nothing,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum RankingRule {
Typo,
Words,
Proximity,
Attribute,
WordsPosition,
Exactness,
Asc(String),
Desc(String),
}
// Any value that is present is considered Some value, including null.
fn deserialize_some<'de, T, D>(deserializer: D) -> StdResult<Option<T>, D::Error>
where
T: Deserialize<'de>,
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(Some)
}
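
The `Option<Option<T>>` fields combined with `deserialize_some` encode three states that a plain `Option<T>` cannot: key absent (leave the setting untouched), key explicitly `null` (reset it), and key set (replace it). `#[serde(default)]` yields the outer `None` when the key is missing, while `deserialize_some` wraps anything present, `null` included, in `Some`. A self-contained demonstration:

```rust
use serde::{Deserialize, Deserializer};

// Same helper as above, reproduced to keep the example standalone.
fn deserialize_some<'de, T, D>(deserializer: D) -> Result<Option<T>, D::Error>
where
    T: Deserialize<'de>,
    D: Deserializer<'de>,
{
    Deserialize::deserialize(deserializer).map(Some)
}

#[derive(Deserialize)]
struct S {
    #[serde(default, deserialize_with = "deserialize_some")]
    ranking_rules: Option<Option<Vec<String>>>,
}

fn main() {
    // key absent -> None: don't touch the setting
    let s: S = serde_json::from_str("{}").unwrap();
    assert_eq!(s.ranking_rules, None);
    // key null -> Some(None): reset the setting to its default
    let s: S = serde_json::from_str(r#"{"ranking_rules":null}"#).unwrap();
    assert_eq!(s.ranking_rules, Some(None));
    // key set -> Some(Some(..)): replace the setting
    let s: S = serde_json::from_str(r#"{"ranking_rules":["typo"]}"#).unwrap();
    assert_eq!(s.ranking_rules, Some(Some(vec!["typo".to_string()])));
}
```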

View File

@@ -0,0 +1,120 @@
use serde::{Deserialize, Serialize};
use serde_json::Value;
use time::OffsetDateTime;
use super::settings::SettingsUpdate;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Update {
data: UpdateData,
#[serde(with = "time::serde::rfc3339")]
enqueued_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum UpdateData {
ClearAll,
Customs(Vec<u8>),
// (primary key, documents)
DocumentsAddition {
primary_key: Option<String>,
documents: Vec<serde_json::Map<String, Value>>,
},
DocumentsPartial {
primary_key: Option<String>,
documents: Vec<serde_json::Map<String, Value>>,
},
DocumentsDeletion(Vec<String>),
Settings(Box<SettingsUpdate>),
}
impl UpdateData {
pub fn update_type(&self) -> UpdateType {
match self {
UpdateData::ClearAll => UpdateType::ClearAll,
UpdateData::Customs(_) => UpdateType::Customs,
UpdateData::DocumentsAddition { documents, .. } => UpdateType::DocumentsAddition {
number: documents.len(),
},
UpdateData::DocumentsPartial { documents, .. } => UpdateType::DocumentsPartial {
number: documents.len(),
},
UpdateData::DocumentsDeletion(deletion) => UpdateType::DocumentsDeletion {
number: deletion.len(),
},
UpdateData::Settings(update) => UpdateType::Settings {
settings: update.clone(),
},
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "name")]
pub enum UpdateType {
ClearAll,
Customs,
DocumentsAddition { number: usize },
DocumentsPartial { number: usize },
DocumentsDeletion { number: usize },
Settings { settings: Box<SettingsUpdate> },
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ProcessedUpdateResult {
pub update_id: u64,
#[serde(rename = "type")]
pub update_type: UpdateType,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error_type: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error_code: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error_link: Option<String>,
pub duration: f64, // in seconds
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct EnqueuedUpdateResult {
pub update_id: u64,
#[serde(rename = "type")]
pub update_type: UpdateType,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", tag = "status")]
pub enum UpdateStatus {
Enqueued {
#[serde(flatten)]
content: EnqueuedUpdateResult,
},
Failed {
#[serde(flatten)]
content: ProcessedUpdateResult,
},
Processed {
#[serde(flatten)]
content: ProcessedUpdateResult,
},
}
impl UpdateStatus {
pub fn enqueued_at(&self) -> &OffsetDateTime {
match self {
UpdateStatus::Enqueued { content } => &content.enqueued_at,
UpdateStatus::Failed { content } | UpdateStatus::Processed { content } => {
&content.enqueued_at
}
}
}
}

dump/src/reader/v1/v1.rs Normal file
@@ -0,0 +1,22 @@
use serde::Deserialize;
use time::OffsetDateTime;
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Index {
pub name: String,
pub uid: String,
#[serde(with = "time::serde::rfc3339")]
created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
updated_at: OffsetDateTime,
pub primary_key: Option<String>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
indexes: Vec<Index>,
db_version: String,
dump_version: crate::Version,
}

@@ -0,0 +1,14 @@
use http::StatusCode;
use serde::Deserialize;
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct ResponseError {
#[serde(skip)]
pub code: StatusCode,
pub message: String,
pub error_code: String,
pub error_type: String,
pub error_link: String,
}

@@ -0,0 +1,18 @@
use serde::Deserialize;
use uuid::Uuid;
use super::Settings;
#[derive(Deserialize, Debug, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUuid {
pub uid: String,
pub uuid: Uuid,
}
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DumpMeta {
pub settings: Settings<super::Unchecked>,
pub primary_key: Option<String>,
}

dump/src/reader/v2/mod.rs Normal file
@@ -0,0 +1,310 @@
//! ```text
//! .
//! ├── indexes
//! │   ├── index-40d14c5f-37ae-4873-9d51-b69e014a0d30
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   ├── index-88202369-4524-4410-9b3d-3e924c867fec
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   ├── index-b7f2d03b-bf9b-40d9-a25b-94dc5ec60c32
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   └── index-dc9070b3-572d-4f30-ab45-d4903ab71708
//! │   ├── documents.jsonl
//! │   └── meta.json
//! ├── index_uuids
//! │   └── data.jsonl
//! ├── metadata.json
//! └── updates
//! ├── data.jsonl
//! └── update_files
//! └── update_202573df-718b-4d80-9a65-2ee397c23dc3
//! ```
use std::fs::{self, File};
use std::io::{BufRead, BufReader};
use std::path::Path;
use serde::{Deserialize, Serialize};
use tempfile::TempDir;
use time::OffsetDateTime;
pub mod errors;
pub mod meta;
pub mod settings;
pub mod updates;
use self::meta::{DumpMeta, IndexUuid};
use super::compat::v2_to_v3::CompatV2ToV3;
use super::Document;
use crate::{IndexMetadata, Result, Version};
pub type Settings<T> = settings::Settings<T>;
pub type Checked = settings::Checked;
pub type Unchecked = settings::Unchecked;
pub type Task = updates::UpdateEntry;
// everything related to the errors
pub type ResponseError = errors::ResponseError;
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
db_version: String,
index_db_size: usize,
update_db_size: usize,
#[serde(with = "time::serde::rfc3339")]
dump_date: OffsetDateTime,
}
pub struct V2Reader {
dump: TempDir,
metadata: Metadata,
tasks: BufReader<File>,
pub index_uuid: Vec<IndexUuid>,
}
impl V2Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let metadata = serde_json::from_reader(&*meta_file)?;
let index_uuid = File::open(dump.path().join("index_uuids/data.jsonl"))?;
let index_uuid = BufReader::new(index_uuid);
let index_uuid = index_uuid
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) })
.collect::<Result<Vec<_>>>()?;
Ok(V2Reader {
metadata,
tasks: BufReader::new(
File::open(dump.path().join("updates").join("data.jsonl")).unwrap(),
),
index_uuid,
dump,
})
}
pub fn to_v3(self) -> CompatV2ToV3 {
CompatV2ToV3::new(self)
}
pub fn index_uuid(&self) -> Vec<IndexUuid> {
self.index_uuid.clone()
}
pub fn version(&self) -> Version {
Version::V2
}
pub fn date(&self) -> Option<OffsetDateTime> {
Some(self.metadata.dump_date)
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<V2IndexReader>> + '_> {
Ok(self.index_uuid.iter().map(|index| -> Result<_> {
V2IndexReader::new(
index.uid.clone(),
&self.dump.path().join("indexes").join(format!("index-{}", index.uuid)),
)
}))
}
pub fn tasks(&mut self) -> Box<dyn Iterator<Item = Result<(Task, Option<UpdateFile>)>> + '_> {
Box::new((&mut self.tasks).lines().map(|line| -> Result<_> {
let task: Task = serde_json::from_str(&line?)?;
if !task.is_finished() {
if let Some(uuid) = task.get_content_uuid() {
let update_file_path = self
.dump
.path()
.join("updates")
.join("update_files")
.join(format!("update_{}", uuid));
Ok((task, Some(UpdateFile::new(&update_file_path)?)))
} else {
Ok((task, None))
}
} else {
Ok((task, None))
}
}))
}
}
pub struct V2IndexReader {
metadata: IndexMetadata,
settings: Settings<Checked>,
documents: BufReader<File>,
}
impl V2IndexReader {
pub fn new(name: String, path: &Path) -> Result<Self> {
let meta = File::open(path.join("meta.json"))?;
let meta: DumpMeta = serde_json::from_reader(meta)?;
let metadata = IndexMetadata {
uid: name,
primary_key: meta.primary_key,
// FIXME: Iterate over the whole task queue to find the creation and last update date.
created_at: OffsetDateTime::now_utc(),
updated_at: OffsetDateTime::now_utc(),
};
let ret = V2IndexReader {
metadata,
settings: meta.settings.check(),
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
};
Ok(ret)
}
pub fn metadata(&self) -> &IndexMetadata {
&self.metadata
}
pub fn documents(&mut self) -> Result<impl Iterator<Item = Result<Document>> + '_> {
Ok((&mut self.documents)
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}
pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
}
pub struct UpdateFile {
documents: Vec<Document>,
index: usize,
}
impl UpdateFile {
fn new(path: &Path) -> Result<Self> {
let reader = BufReader::new(File::open(path)?);
Ok(UpdateFile { documents: serde_json::from_reader(reader)?, index: 0 })
}
}
impl Iterator for UpdateFile {
type Item = Result<Document>;
fn next(&mut self) -> Option<Self::Item> {
self.index += 1;
self.documents.get(self.index - 1).cloned().map(Ok)
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v2() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = V2Reader::open(dir).unwrap();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-09 20:27:59.904096267 +00:00:00");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"ec5fc0a14bf735ad4e361d5aa8a89ac6");
assert_eq!(update_files.len(), 9);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(0).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"c41bf7315d404da46c99b9e3a2a3cc1e");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"3d1d96c85b6bab46e957bc8d2532a910");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"4f04afc086828d8da0da57a7d598ddba");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"4f04afc086828d8da0da57a7d598ddba");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

@@ -0,0 +1,176 @@
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use std::str::FromStr;
use once_cell::sync::Lazy;
use regex::Regex;
use serde::{Deserialize, Deserializer};
#[cfg(test)]
fn serialize_with_wildcard<S>(
field: &Option<Option<Vec<String>>>,
s: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let wildcard = vec!["*".to_string()];
s.serialize_some(&field.as_ref().map(|o| o.as_ref().unwrap_or(&wildcard)))
}
fn deserialize_some<'de, T, D>(deserializer: D) -> std::result::Result<Option<T>, D::Error>
where
T: Deserialize<'de>,
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(Some)
}
#[derive(Clone, Default, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Checked;
#[derive(Clone, Default, Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Unchecked;
#[derive(Debug, Clone, Default, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
#[serde(bound(serialize = "T: serde::Serialize", deserialize = "T: Deserialize<'static>"))]
pub struct Settings<T> {
#[serde(
default,
deserialize_with = "deserialize_some",
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Option::is_none"
)]
pub displayed_attributes: Option<Option<Vec<String>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Option::is_none"
)]
pub searchable_attributes: Option<Option<Vec<String>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub filterable_attributes: Option<Option<BTreeSet<String>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub ranking_rules: Option<Option<Vec<String>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub stop_words: Option<Option<BTreeSet<String>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub synonyms: Option<Option<BTreeMap<String, Vec<String>>>>,
#[serde(
default,
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub distinct_attribute: Option<Option<String>>,
#[serde(skip)]
pub _kind: PhantomData<T>,
}
impl Settings<Unchecked> {
pub fn check(mut self) -> Settings<Checked> {
let displayed_attributes = match self.displayed_attributes.take() {
Some(Some(fields)) => {
if fields.iter().any(|f| f == "*") {
Some(None)
} else {
Some(Some(fields))
}
}
otherwise => otherwise,
};
let searchable_attributes = match self.searchable_attributes.take() {
Some(Some(fields)) => {
if fields.iter().any(|f| f == "*") {
Some(None)
} else {
Some(Some(fields))
}
}
otherwise => otherwise,
};
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes: self.filterable_attributes,
ranking_rules: self.ranking_rules,
stop_words: self.stop_words,
synonyms: self.synonyms,
distinct_attribute: self.distinct_attribute,
_kind: PhantomData,
}
}
}
static ASC_DESC_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r#"(asc|desc)\(([\w_-]+)\)"#).unwrap());
#[derive(Debug, Deserialize, Clone, PartialEq, Eq)]
pub enum Criterion {
/// Sorted by decreasing number of matched query terms.
/// Query words at the front of an attribute are considered better than those at the back.
Words,
/// Sorted by increasing number of typos.
Typo,
/// Sorted by increasing distance between matched query terms.
Proximity,
/// Documents with query words contained in more important
/// attributes are considered better.
Attribute,
/// Sorted by the similarity of the matched words with the query words.
Exactness,
/// Sorted by the increasing value of the field specified.
Asc(String),
/// Sorted by the decreasing value of the field specified.
Desc(String),
}
impl FromStr for Criterion {
type Err = ();
fn from_str(txt: &str) -> Result<Criterion, Self::Err> {
match txt {
"words" => Ok(Criterion::Words),
"typo" => Ok(Criterion::Typo),
"proximity" => Ok(Criterion::Proximity),
"attribute" => Ok(Criterion::Attribute),
"exactness" => Ok(Criterion::Exactness),
text => {
let caps = ASC_DESC_REGEX.captures(text).ok_or(())?;
let order = caps.get(1).unwrap().as_str();
let field_name = caps.get(2).unwrap().as_str();
match order {
"asc" => Ok(Criterion::Asc(field_name.to_string())),
"desc" => Ok(Criterion::Desc(field_name.to_string())),
_text => Err(()),
}
}
}
}
}
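To make the parsing above concrete, a small usage sketch (assuming this module's `Criterion` is in scope): fixed rule names are matched directly, and everything else goes through `ASC_DESC_REGEX`, which splits `asc(...)`/`desc(...)` into the sort order and the field name.

```rust
use std::str::FromStr;

fn main() {
    assert_eq!(Criterion::from_str("typo"), Ok(Criterion::Typo));
    assert_eq!(Criterion::from_str("asc(price)"), Ok(Criterion::Asc("price".to_string())));
    assert_eq!(Criterion::from_str("desc(release_date)"), Ok(Criterion::Desc("release_date".to_string())));
    // Neither a known rule name nor an asc(...)/desc(...) pattern: rejected.
    assert_eq!(Criterion::from_str("rank"), Err(()));
}
```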

@@ -0,0 +1,230 @@
use serde::Deserialize;
use time::OffsetDateTime;
use uuid::Uuid;
use super::{ResponseError, Settings, Unchecked};
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct UpdateEntry {
pub uuid: Uuid,
pub update: UpdateStatus,
}
impl UpdateEntry {
pub fn is_finished(&self) -> bool {
match self.update {
UpdateStatus::Processing(_) | UpdateStatus::Enqueued(_) => false,
UpdateStatus::Processed(_) | UpdateStatus::Aborted(_) | UpdateStatus::Failed(_) => true,
}
}
pub fn get_content_uuid(&self) -> Option<&Uuid> {
match &self.update {
UpdateStatus::Enqueued(enqueued) => enqueued.content.as_ref(),
UpdateStatus::Processing(processing) => processing.from.content.as_ref(),
UpdateStatus::Processed(processed) => processed.from.from.content.as_ref(),
UpdateStatus::Aborted(aborted) => aborted.from.content.as_ref(),
UpdateStatus::Failed(failed) => failed.from.from.content.as_ref(),
}
}
}
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum UpdateResult {
DocumentsAddition(DocumentAdditionResult),
DocumentDeletion { deleted: u64 },
Other,
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DocumentAdditionResult {
pub nb_documents: usize,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[non_exhaustive]
pub enum IndexDocumentsMethod {
/// Replace the previous document with the new one,
/// removing all the already known attributes.
ReplaceDocuments,
/// Merge the previous version of the document with the new version,
/// replacing old attribute values with the new ones and adding the new attributes.
UpdateDocuments,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[non_exhaustive]
pub enum UpdateFormat {
/// The given update is a real **comma-separated** CSV with headers on the first line.
Csv,
/// The given update is a JSON array with documents inside.
Json,
/// The given update is a JSON stream with a document on each line.
JsonStream,
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(tag = "type")]
pub enum UpdateMeta {
DocumentsAddition {
method: IndexDocumentsMethod,
format: UpdateFormat,
primary_key: Option<String>,
},
ClearDocuments,
DeleteDocuments {
ids: Vec<String>,
},
Settings(Settings<Unchecked>),
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Enqueued {
pub update_id: u64,
pub meta: UpdateMeta,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
pub content: Option<Uuid>,
}
impl Enqueued {
pub fn meta(&self) -> &UpdateMeta {
&self.meta
}
pub fn id(&self) -> u64 {
self.update_id
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Processed {
pub success: UpdateResult,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
#[serde(flatten)]
pub from: Processing,
}
impl Processed {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &UpdateMeta {
self.from.meta()
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Processing {
#[serde(flatten)]
pub from: Enqueued,
#[serde(with = "time::serde::rfc3339")]
pub started_processing_at: OffsetDateTime,
}
impl Processing {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &UpdateMeta {
self.from.meta()
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Aborted {
#[serde(flatten)]
pub from: Enqueued,
#[serde(with = "time::serde::rfc3339")]
pub aborted_at: OffsetDateTime,
}
impl Aborted {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &UpdateMeta {
self.from.meta()
}
}
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Failed {
#[serde(flatten)]
pub from: Processing,
pub error: ResponseError,
#[serde(with = "time::serde::rfc3339")]
pub failed_at: OffsetDateTime,
}
impl Failed {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &UpdateMeta {
self.from.meta()
}
}
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(tag = "status", rename_all = "camelCase")]
pub enum UpdateStatus {
Processing(Processing),
Enqueued(Enqueued),
Processed(Processed),
Aborted(Aborted),
Failed(Failed),
}
impl UpdateStatus {
pub fn id(&self) -> u64 {
match self {
UpdateStatus::Processing(u) => u.id(),
UpdateStatus::Enqueued(u) => u.id(),
UpdateStatus::Processed(u) => u.id(),
UpdateStatus::Aborted(u) => u.id(),
UpdateStatus::Failed(u) => u.id(),
}
}
pub fn meta(&self) -> &UpdateMeta {
match self {
UpdateStatus::Processing(u) => u.meta(),
UpdateStatus::Enqueued(u) => u.meta(),
UpdateStatus::Processed(u) => u.meta(),
UpdateStatus::Aborted(u) => u.meta(),
UpdateStatus::Failed(u) => u.meta(),
}
}
pub fn processed(&self) -> Option<&Processed> {
match self {
UpdateStatus::Processed(p) => Some(p),
_ => None,
}
}
}

@@ -0,0 +1,51 @@
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, Clone, Copy)]
pub enum Code {
// index related error
CreateIndex,
IndexAlreadyExists,
IndexNotFound,
InvalidIndexUid,
// invalid state error
InvalidState,
MissingPrimaryKey,
PrimaryKeyAlreadyPresent,
MaxFieldsLimitExceeded,
MissingDocumentId,
InvalidDocumentId,
Filter,
Sort,
BadParameter,
BadRequest,
DatabaseSizeLimitReached,
DocumentNotFound,
Internal,
InvalidGeoField,
InvalidRankingRule,
InvalidStore,
InvalidToken,
MissingAuthorizationHeader,
NoSpaceLeftOnDevice,
DumpNotFound,
TaskNotFound,
PayloadTooLarge,
RetrieveDocument,
SearchDocuments,
UnsupportedMediaType,
DumpAlreadyInProgress,
DumpProcessFailed,
InvalidContentType,
MissingContentType,
MalformedPayload,
MissingPayload,
MalformedDump,
UnretrievableErrorCode,
}

@@ -0,0 +1,18 @@
use serde::Deserialize;
use uuid::Uuid;
use super::Settings;
#[derive(Deserialize, Debug, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUuid {
pub uid: String,
pub uuid: Uuid,
}
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DumpMeta {
pub settings: Settings<super::Unchecked>,
pub primary_key: Option<String>,
}

dump/src/reader/v3/mod.rs Normal file
@@ -0,0 +1,326 @@
//! ```text
//! .
//! ├── indexes
//! │   ├── 01d7dd17-8241-4f1f-a7d1-2d1cb255f5b0
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   ├── 78be64a3-cae1-449e-b7ed-13e77c9a8a0c
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   ├── ba553439-18fe-4733-ba53-44eed898280c
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   └── c408bc22-5859-49d1-8e9f-c88e2fa95cb0
//! │   ├── documents.jsonl
//! │   └── meta.json
//! ├── index_uuids
//! │   └── data.jsonl
//! ├── metadata.json
//! └── updates
//! ├── data.jsonl
//! └── updates_files
//! └── 66d3f12d-fcf3-4b53-88cb-407017373de7
//! ```
use std::fs::{self, File};
use std::io::{BufRead, BufReader};
use std::path::Path;
use serde::{Deserialize, Serialize};
use tempfile::TempDir;
use time::OffsetDateTime;
pub mod errors;
pub mod meta;
pub mod settings;
pub mod updates;
use self::meta::{DumpMeta, IndexUuid};
use super::compat::v3_to_v4::CompatV3ToV4;
use super::Document;
use crate::{Error, IndexMetadata, Result, Version};
pub type Settings<T> = settings::Settings<T>;
pub type Checked = settings::Checked;
pub type Unchecked = settings::Unchecked;
pub type Task = updates::UpdateEntry;
// ===== Other types to clarify the code of the compat module
// everything related to the tasks
pub type Status = updates::UpdateStatus;
pub type Kind = updates::Update;
// everything related to the settings
pub type Setting<T> = settings::Setting<T>;
// everything related to the errors
pub type Code = errors::Code;
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
db_version: String,
index_db_size: usize,
update_db_size: usize,
#[serde(with = "time::serde::rfc3339")]
dump_date: OffsetDateTime,
}
pub struct V3Reader {
dump: TempDir,
metadata: Metadata,
tasks: BufReader<File>,
index_uuid: Vec<IndexUuid>,
}
impl V3Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let metadata = serde_json::from_reader(&*meta_file)?;
let index_uuid = File::open(dump.path().join("index_uuids/data.jsonl"))?;
let index_uuid = BufReader::new(index_uuid);
let index_uuid = index_uuid
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) })
.collect::<Result<Vec<_>>>()?;
Ok(V3Reader {
metadata,
tasks: BufReader::new(File::open(dump.path().join("updates").join("data.jsonl"))?),
index_uuid,
dump,
})
}
pub fn index_uuid(&self) -> Vec<IndexUuid> {
self.index_uuid.clone()
}
pub fn to_v4(self) -> CompatV3ToV4 {
CompatV3ToV4::new(self)
}
pub fn version(&self) -> Version {
Version::V3
}
pub fn date(&self) -> Option<OffsetDateTime> {
Some(self.metadata.dump_date)
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<V3IndexReader>> + '_> {
Ok(self.index_uuid.iter().map(|index| -> Result<_> {
V3IndexReader::new(
index.uid.clone(),
&self.dump.path().join("indexes").join(index.uuid.to_string()),
)
}))
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(Task, Option<Box<super::UpdateFile>>)>> + '_> {
Box::new((&mut self.tasks).lines().map(|line| -> Result<_> {
let task: Task = serde_json::from_str(&line?)?;
if !task.is_finished() {
if let Some(uuid) = task.get_content_uuid() {
let update_file_path = self
.dump
.path()
.join("updates")
.join("updates_files")
.join(uuid.to_string());
Ok((
task,
Some(
Box::new(UpdateFile::new(&update_file_path)?) as Box<super::UpdateFile>
),
))
} else {
Ok((task, None))
}
} else {
Ok((task, None))
}
}))
}
}
pub struct V3IndexReader {
metadata: IndexMetadata,
settings: Settings<Checked>,
documents: BufReader<File>,
}
impl V3IndexReader {
pub fn new(name: String, path: &Path) -> Result<Self> {
let meta = File::open(path.join("meta.json"))?;
let meta: DumpMeta = serde_json::from_reader(meta)?;
let metadata = IndexMetadata {
uid: name,
primary_key: meta.primary_key,
// FIXME: Iterate over the whole task queue to find the creation and last update date.
created_at: OffsetDateTime::now_utc(),
updated_at: OffsetDateTime::now_utc(),
};
let ret = V3IndexReader {
metadata,
settings: meta.settings.check(),
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
};
Ok(ret)
}
pub fn metadata(&self) -> &IndexMetadata {
&self.metadata
}
pub fn documents(&mut self) -> Result<impl Iterator<Item = Result<Document>> + '_> {
Ok((&mut self.documents)
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}
pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
}
pub struct UpdateFile {
reader: BufReader<File>,
}
impl UpdateFile {
fn new(path: &Path) -> Result<Self> {
Ok(UpdateFile { reader: BufReader::new(File::open(path)?) })
}
}
impl Iterator for UpdateFile {
type Item = Result<Document>;
fn next(&mut self) -> Option<Self::Item> {
(&mut self.reader)
.lines()
.map(|line| {
line.map_err(Error::from)
.and_then(|line| serde_json::from_str(&line).map_err(Error::from))
})
.next()
}
}
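Note the contrast with the v2 reader above: a v2 update file is a single JSON array that `UpdateFile::new` parses entirely into memory, whereas from v3 onward the update file is JSONL, so this iterator can stream it one document per line.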
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v3() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = V3Reader::open(dir).unwrap();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-07 11:39:03.709153554 +00:00:00");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"63086d59c3f2074e4ab3fff7e8cc36c1");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(0).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies2 = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"f309b009608cc0b770b2f74516f92647");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"95dff22ba3a7019616c12df9daa35e1e");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
// movies2
insta::assert_json_snapshot!(movies2.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies_2",
"primaryKey": null,
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"1dafc4b123e3a8e14a889719cc01f6e5");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"1dafc4b123e3a8e14a889719cc01f6e5");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

@@ -0,0 +1,233 @@
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use serde::{Deserialize, Deserializer};
#[cfg(test)]
fn serialize_with_wildcard<S>(
field: &Setting<Vec<String>>,
s: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
use serde::Serialize;
let wildcard = vec!["*".to_string()];
match field {
Setting::Set(value) => Some(value),
Setting::Reset => Some(&wildcard),
Setting::NotSet => None,
}
.serialize(s)
}
#[derive(Clone, Default, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Checked;
#[derive(Clone, Default, Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Unchecked;
/// Holds all the settings for an index. `T` can either be `Checked` if they represent settings
/// whose validity is guaranteed, or `Unchecked` if they need to be validated. In the latter case, a
/// call to `check` will return a `Settings<Checked>` from a `Settings<Unchecked>`.
#[derive(Debug, Clone, Default, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
#[serde(bound(serialize = "T: serde::Serialize", deserialize = "T: Deserialize<'static>"))]
pub struct Settings<T> {
#[serde(
default,
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Setting::is_not_set"
)]
pub displayed_attributes: Setting<Vec<String>>,
#[serde(
default,
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Setting::is_not_set"
)]
pub searchable_attributes: Setting<Vec<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub filterable_attributes: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub sortable_attributes: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub ranking_rules: Setting<Vec<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub stop_words: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub synonyms: Setting<BTreeMap<String, Vec<String>>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub distinct_attribute: Setting<String>,
#[serde(skip)]
pub _kind: PhantomData<T>,
}
impl Settings<Checked> {
pub fn cleared() -> Settings<Checked> {
Settings {
displayed_attributes: Setting::Reset,
searchable_attributes: Setting::Reset,
filterable_attributes: Setting::Reset,
sortable_attributes: Setting::Reset,
ranking_rules: Setting::Reset,
stop_words: Setting::Reset,
synonyms: Setting::Reset,
distinct_attribute: Setting::Reset,
_kind: PhantomData,
}
}
pub fn into_unchecked(self) -> Settings<Unchecked> {
let Self {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
..
} = self;
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
_kind: PhantomData,
}
}
}
impl Settings<Unchecked> {
pub fn check(self) -> Settings<Checked> {
let displayed_attributes = match self.displayed_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
let searchable_attributes = match self.searchable_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes: self.filterable_attributes,
sortable_attributes: self.sortable_attributes,
ranking_rules: self.ranking_rules,
stop_words: self.stop_words,
synonyms: self.synonyms,
distinct_attribute: self.distinct_attribute,
_kind: PhantomData,
}
}
}
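A minimal sketch of the normalization `check` performs (assuming this module's types are in scope): a literal `"*"` in the displayed or searchable attributes collapses to `Setting::Reset`, i.e. back to the "all attributes" default, while ordinary field lists pass through unchanged.

```rust
fn main() {
    let mut settings = Settings::<Unchecked>::default();
    settings.displayed_attributes = Setting::Set(vec!["*".to_string()]);
    settings.searchable_attributes = Setting::Set(vec!["title".to_string()]);

    let checked = settings.check();
    assert_eq!(checked.displayed_attributes, Setting::Reset);
    assert_eq!(checked.searchable_attributes, Setting::Set(vec!["title".to_string()]));
}
```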
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct Facets {
pub level_group_size: Option<NonZeroUsize>,
pub min_level_size: Option<NonZeroUsize>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Setting<T> {
Set(T),
Reset,
NotSet,
}
impl<T> Default for Setting<T> {
fn default() -> Self {
Self::NotSet
}
}
impl<T> Setting<T> {
pub fn map<U, F>(self, f: F) -> Setting<U>
where
F: FnOnce(T) -> U,
{
match self {
Setting::Set(t) => Setting::Set(f(t)),
Setting::Reset => Setting::Reset,
Setting::NotSet => Setting::NotSet,
}
}
pub fn set(self) -> Option<T> {
match self {
Self::Set(value) => Some(value),
_ => None,
}
}
pub const fn as_ref(&self) -> Setting<&T> {
match *self {
Self::Set(ref value) => Setting::Set(value),
Self::Reset => Setting::Reset,
Self::NotSet => Setting::NotSet,
}
}
pub const fn is_not_set(&self) -> bool {
matches!(self, Self::NotSet)
}
}
#[cfg(test)]
impl<T: serde::Serialize> serde::Serialize for Setting<T> {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
match self {
Self::Set(value) => Some(value),
// Usually `NotSet` isn't serialized, thanks to the `skip_serializing_if` field attribute
Self::NotSet | Self::Reset => None,
}
.serialize(serializer)
}
}
impl<'de, T: Deserialize<'de>> Deserialize<'de> for Setting<T> {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(|x| match x {
Some(x) => Self::Set(x),
None => Self::Reset, // Reset is forced by sending a null value
})
}
}
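Together, the two impls above give `Setting` its tri-state JSON mapping. A short sketch (the `Example` struct is hypothetical): an absent field stays `NotSet` through `#[serde(default)]`, an explicit `null` becomes `Reset`, and any other value becomes `Set`.

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Example {
    #[serde(default)]
    distinct_attribute: Setting<String>,
}

fn main() {
    let not_set: Example = serde_json::from_str("{}").unwrap();
    assert_eq!(not_set.distinct_attribute, Setting::NotSet);

    let reset: Example = serde_json::from_str(r#"{"distinct_attribute":null}"#).unwrap();
    assert_eq!(reset.distinct_attribute, Setting::Reset);

    let set: Example = serde_json::from_str(r#"{"distinct_attribute":"sku"}"#).unwrap();
    assert_eq!(set.distinct_attribute, Setting::Set("sku".to_string()));
}
```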

@@ -0,0 +1,227 @@
use std::fmt::Display;
use serde::Deserialize;
use time::OffsetDateTime;
use uuid::Uuid;
use super::{Code, Settings, Unchecked};
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct UpdateEntry {
pub uuid: Uuid,
pub update: UpdateStatus,
}
impl UpdateEntry {
pub fn is_finished(&self) -> bool {
match self.update {
UpdateStatus::Processed(_) | UpdateStatus::Aborted(_) | UpdateStatus::Failed(_) => true,
UpdateStatus::Processing(_) | UpdateStatus::Enqueued(_) => false,
}
}
pub fn get_content_uuid(&self) -> Option<&Uuid> {
match self.update.meta() {
Update::DocumentAddition { content_uuid, .. } => Some(content_uuid),
Update::DeleteDocuments(_) | Update::Settings(_) | Update::ClearDocuments => None,
}
}
}
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(tag = "status", rename_all = "camelCase")]
pub enum UpdateStatus {
Processing(Processing),
Enqueued(Enqueued),
Processed(Processed),
Aborted(Aborted),
Failed(Failed),
}
impl UpdateStatus {
pub fn id(&self) -> u64 {
match self {
UpdateStatus::Processing(u) => u.id(),
UpdateStatus::Enqueued(u) => u.id(),
UpdateStatus::Processed(u) => u.id(),
UpdateStatus::Aborted(u) => u.id(),
UpdateStatus::Failed(u) => u.id(),
}
}
pub fn meta(&self) -> &Update {
match self {
UpdateStatus::Processing(u) => u.meta(),
UpdateStatus::Enqueued(u) => u.meta(),
UpdateStatus::Processed(u) => u.meta(),
UpdateStatus::Aborted(u) => u.meta(),
UpdateStatus::Failed(u) => u.meta(),
}
}
pub fn is_finished(&self) -> bool {
match self {
UpdateStatus::Processing(_) | UpdateStatus::Enqueued(_) => false,
UpdateStatus::Aborted(_) | UpdateStatus::Failed(_) | UpdateStatus::Processed(_) => true,
}
}
pub fn processed(&self) -> Option<&Processed> {
match self {
UpdateStatus::Processed(p) => Some(p),
_ => None,
}
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Enqueued {
pub update_id: u64,
pub meta: Update,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
}
impl Enqueued {
pub fn meta(&self) -> &Update {
&self.meta
}
pub fn id(&self) -> u64 {
self.update_id
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Processed {
pub success: UpdateResult,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
#[serde(flatten)]
pub from: Processing,
}
impl Processed {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &Update {
self.from.meta()
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Processing {
#[serde(flatten)]
pub from: Enqueued,
#[serde(with = "time::serde::rfc3339")]
pub started_processing_at: OffsetDateTime,
}
impl Processing {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &Update {
self.from.meta()
}
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Aborted {
#[serde(flatten)]
pub from: Enqueued,
#[serde(with = "time::serde::rfc3339")]
pub aborted_at: OffsetDateTime,
}
impl Aborted {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &Update {
self.from.meta()
}
}
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub struct Failed {
#[serde(flatten)]
pub from: Processing,
pub msg: String,
pub code: Code,
#[serde(with = "time::serde::rfc3339")]
pub failed_at: OffsetDateTime,
}
impl Display for Failed {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.msg.fmt(f)
}
}
impl Failed {
pub fn id(&self) -> u64 {
self.from.id()
}
pub fn meta(&self) -> &Update {
self.from.meta()
}
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum Update {
DeleteDocuments(Vec<String>),
DocumentAddition {
primary_key: Option<String>,
method: IndexDocumentsMethod,
content_uuid: Uuid,
},
Settings(Settings<Unchecked>),
ClearDocuments,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[non_exhaustive]
pub enum IndexDocumentsMethod {
/// Replace the previous document with the new one,
/// removing all the already known attributes.
ReplaceDocuments,
/// Merge the previous version of the document with the new version,
/// replacing old attribute values with the new ones and adding the new attributes.
UpdateDocuments,
}
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum UpdateResult {
DocumentsAddition(DocumentAdditionResult),
DocumentDeletion { deleted: u64 },
Other,
}
#[derive(Debug, Deserialize, Clone)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DocumentAdditionResult {
pub nb_documents: usize,
}

@@ -1,6 +1,6 @@
use std::fmt;
use actix_web::{self as aweb, http::StatusCode, HttpResponseBuilder};
use http::StatusCode;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
@@ -8,18 +8,15 @@ use serde::{Deserialize, Serialize};
#[cfg_attr(feature = "test-traits", derive(proptest_derive::Arbitrary))]
pub struct ResponseError {
#[serde(skip)]
#[cfg_attr(
feature = "test-traits",
proptest(strategy = "strategy::status_code_strategy()")
)]
code: StatusCode,
message: String,
#[cfg_attr(feature = "test-traits", proptest(strategy = "strategy::status_code_strategy()"))]
pub code: StatusCode,
pub message: String,
#[serde(rename = "code")]
error_code: String,
pub error_code: String,
#[serde(rename = "type")]
error_type: String,
pub error_type: String,
#[serde(rename = "link")]
error_link: String,
pub error_link: String,
}
impl ResponseError {
@@ -57,19 +54,6 @@
}
}
impl aweb::error::ResponseError for ResponseError {
fn error_response(&self) -> aweb::HttpResponse {
let json = serde_json::to_vec(self).unwrap();
HttpResponseBuilder::new(self.status_code())
.content_type("application/json")
.body(json)
}
fn status_code(&self) -> StatusCode {
self.code
}
}
pub trait ErrorCode: std::error::Error {
fn error_code(&self) -> Code;
@@ -166,6 +150,9 @@ pub enum Code {
InvalidApiKeyIndexes,
InvalidApiKeyExpiresAt,
InvalidApiKeyDescription,
UnretrievableErrorCode,
MalformedDump,
}
impl Code {
@@ -216,10 +203,9 @@
BadParameter => ErrCode::invalid("bad_parameter", StatusCode::BAD_REQUEST),
BadRequest => ErrCode::invalid("bad_request", StatusCode::BAD_REQUEST),
DatabaseSizeLimitReached => ErrCode::internal(
"database_size_limit_reached",
StatusCode::INTERNAL_SERVER_ERROR,
),
DatabaseSizeLimitReached => {
ErrCode::internal("database_size_limit_reached", StatusCode::INTERNAL_SERVER_ERROR)
}
DocumentNotFound => ErrCode::invalid("document_not_found", StatusCode::NOT_FOUND),
Internal => ErrCode::internal("internal", StatusCode::INTERNAL_SERVER_ERROR),
InvalidGeoField => ErrCode::invalid("invalid_geo_field", StatusCode::BAD_REQUEST),
@@ -275,6 +261,10 @@ impl Code {
InvalidMinWordLengthForTypo => {
ErrCode::invalid("invalid_min_word_length_for_typo", StatusCode::BAD_REQUEST)
}
UnretrievableErrorCode => {
ErrCode::invalid("unretrievable_error_code", StatusCode::BAD_REQUEST)
}
MalformedDump => ErrCode::invalid("malformed_dump", StatusCode::BAD_REQUEST),
}
}
@@ -308,50 +298,14 @@ struct ErrCode {
impl ErrCode {
fn authentication(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::AuthenticationError,
}
ErrCode { status_code, error_name, error_type: ErrorType::AuthenticationError }
}
fn internal(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::InternalError,
}
ErrCode { status_code, error_name, error_type: ErrorType::InternalError }
}
fn invalid(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::InvalidRequestError,
}
}
}
#[cfg(feature = "test-traits")]
mod strategy {
use proptest::strategy::Strategy;
use super::*;
pub(super) fn status_code_strategy() -> impl Strategy<Value = StatusCode> {
(100..999u16).prop_map(|i| StatusCode::from_u16(i).unwrap())
}
}
#[macro_export]
macro_rules! internal_error {
($target:ty : $($other:path), *) => {
$(
impl From<$other> for $target {
fn from(other: $other) -> Self {
Self::Internal(Box::new(other))
}
}
)*
ErrCode { status_code, error_name, error_type: ErrorType::InvalidRequestError }
}
}

@@ -0,0 +1,77 @@
use serde::Deserialize;
use time::OffsetDateTime;
pub const KEY_ID_LENGTH: usize = 8;
pub type KeyId = [u8; KEY_ID_LENGTH];
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Key {
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
pub id: KeyId,
pub actions: Vec<Action>,
pub indexes: Vec<String>,
#[serde(with = "time::serde::rfc3339::option")]
pub expires_at: Option<OffsetDateTime>,
#[serde(with = "time::serde::rfc3339")]
pub created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub updated_at: OffsetDateTime,
}
#[derive(Copy, Clone, Deserialize, Debug, Eq, PartialEq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[repr(u8)]
pub enum Action {
#[serde(rename = "*")]
All = 0,
#[serde(rename = "search")]
Search = actions::SEARCH,
#[serde(rename = "documents.add")]
DocumentsAdd = actions::DOCUMENTS_ADD,
#[serde(rename = "documents.get")]
DocumentsGet = actions::DOCUMENTS_GET,
#[serde(rename = "documents.delete")]
DocumentsDelete = actions::DOCUMENTS_DELETE,
#[serde(rename = "indexes.create")]
IndexesAdd = actions::INDEXES_CREATE,
#[serde(rename = "indexes.get")]
IndexesGet = actions::INDEXES_GET,
#[serde(rename = "indexes.update")]
IndexesUpdate = actions::INDEXES_UPDATE,
#[serde(rename = "indexes.delete")]
IndexesDelete = actions::INDEXES_DELETE,
#[serde(rename = "tasks.get")]
TasksGet = actions::TASKS_GET,
#[serde(rename = "settings.get")]
SettingsGet = actions::SETTINGS_GET,
#[serde(rename = "settings.update")]
SettingsUpdate = actions::SETTINGS_UPDATE,
#[serde(rename = "stats.get")]
StatsGet = actions::STATS_GET,
#[serde(rename = "dumps.create")]
DumpsCreate = actions::DUMPS_CREATE,
#[serde(rename = "dumps.get")]
DumpsGet = actions::DUMPS_GET,
#[serde(rename = "version")]
Version = actions::VERSION,
}
pub mod actions {
pub const SEARCH: u8 = 1;
pub const DOCUMENTS_ADD: u8 = 2;
pub const DOCUMENTS_GET: u8 = 3;
pub const DOCUMENTS_DELETE: u8 = 4;
pub const INDEXES_CREATE: u8 = 5;
pub const INDEXES_GET: u8 = 6;
pub const INDEXES_UPDATE: u8 = 7;
pub const INDEXES_DELETE: u8 = 8;
pub const TASKS_GET: u8 = 9;
pub const SETTINGS_GET: u8 = 10;
pub const SETTINGS_UPDATE: u8 = 11;
pub const STATS_GET: u8 = 12;
pub const DUMPS_CREATE: u8 = 13;
pub const DUMPS_GET: u8 = 14;
pub const VERSION: u8 = 15;
}
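A brief sketch of how the `serde(rename = ...)` attributes above map the dotted action names found in a dump's `keys` file onto the enum (assuming `Action` is in scope):

```rust
fn main() {
    let action: Action = serde_json::from_str(r#""documents.add""#).unwrap();
    assert_eq!(action, Action::DocumentsAdd);

    let wildcard: Action = serde_json::from_str(r#""*""#).unwrap();
    assert_eq!(wildcard, Action::All);
}
```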

dump/src/reader/v4/meta.rs Normal file
@@ -0,0 +1,139 @@
use std::fmt::{self, Display, Formatter};
use std::marker::PhantomData;
use std::str::FromStr;
use serde::de::Visitor;
use serde::{Deserialize, Deserializer};
use uuid::Uuid;
use super::settings::{Settings, Unchecked};
#[derive(Deserialize, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUuid {
pub uid: String,
pub index_meta: IndexMeta,
}
#[derive(Deserialize, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexMeta {
pub uuid: Uuid,
pub creation_task_id: usize,
}
// There is one in each index, under `meta.json`.
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DumpMeta {
pub settings: Settings<Unchecked>,
pub primary_key: Option<String>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUid(pub String);
impl TryFrom<String> for IndexUid {
type Error = IndexUidFormatError;
fn try_from(uid: String) -> Result<Self, Self::Error> {
if !uid.chars().all(|x| x.is_ascii_alphanumeric() || x == '-' || x == '_')
|| uid.is_empty()
|| uid.len() > 400
{
Err(IndexUidFormatError { invalid_uid: uid })
} else {
Ok(IndexUid(uid))
}
}
}
impl FromStr for IndexUid {
type Err = IndexUidFormatError;
fn from_str(uid: &str) -> Result<IndexUid, IndexUidFormatError> {
uid.to_string().try_into()
}
}
impl From<IndexUid> for String {
fn from(uid: IndexUid) -> Self {
uid.into_inner()
}
}
#[derive(Debug)]
pub struct IndexUidFormatError {
pub invalid_uid: String,
}
impl Display for IndexUidFormatError {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid index uid `{}`, the uid must be an integer \
or a string containing only alphanumeric characters \
a-z A-Z 0-9, hyphens - and underscores _.",
self.invalid_uid,
)
}
}
impl std::error::Error for IndexUidFormatError {}
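To make the validation rule above concrete, a small sketch (assuming `IndexUid` is in scope): only non-empty uids of at most 400 bytes, made of ASCII alphanumerics, `-`, and `_`, are accepted.

```rust
fn main() {
    assert!(IndexUid::try_from("movies_2".to_string()).is_ok());
    assert!(IndexUid::try_from("dnd-spells".to_string()).is_ok());

    // Rejected: empty, non-ASCII, or longer than 400 bytes.
    assert!(IndexUid::try_from(String::new()).is_err());
    assert!(IndexUid::try_from("café".to_string()).is_err());
    assert!(IndexUid::try_from("a".repeat(401)).is_err());
}
```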
/// A type that tries to match either a star (`*`) or
/// anything else that implements `FromStr`.
#[derive(Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum StarOr<T> {
Star,
Other(T),
}
impl<'de, T, E> Deserialize<'de> for StarOr<T>
where
T: FromStr<Err = E>,
E: Display,
{
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
/// Serde can't differentiate between `StarOr::Star` and `StarOr::Other` without a tag.
/// Simply using `#[serde(untagged)]` + `#[serde(rename="*")]` will lead to attempting to
/// deserialize everything as a `StarOr::Other`, including "*".
/// [`#[serde(other)]`](https://serde.rs/variant-attrs.html#other) might have helped but is
/// not supported on untagged enums.
struct StarOrVisitor<T>(PhantomData<T>);
impl<'de, T, FE> Visitor<'de> for StarOrVisitor<T>
where
T: FromStr<Err = FE>,
FE: Display,
{
type Value = StarOr<T>;
fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result {
formatter.write_str("a string")
}
fn visit_str<SE>(self, v: &str) -> Result<Self::Value, SE>
where
SE: serde::de::Error,
{
match v {
"*" => Ok(StarOr::Star),
v => {
let other = FromStr::from_str(v).map_err(|e: T::Err| {
SE::custom(format!("Invalid `other` value: {}", e))
})?;
Ok(StarOr::Other(other))
}
}
}
}
deserializer.deserialize_str(StarOrVisitor(PhantomData))
}
}
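And a short sketch of `StarOr` in action, reusing `IndexUid` as the inner `FromStr` type (again assuming both are in scope): a literal `"*"` becomes `Star`, and anything else is parsed with the inner type's `FromStr`, whose error surfaces as a deserialization error.

```rust
fn main() {
    let star: StarOr<IndexUid> = serde_json::from_str(r#""*""#).unwrap();
    assert!(matches!(star, StarOr::Star));

    let other: StarOr<IndexUid> = serde_json::from_str(r#""movies""#).unwrap();
    assert!(matches!(other, StarOr::Other(ref uid) if uid.0 == "movies"));

    // An invalid inner value is rejected by `IndexUid::from_str`.
    assert!(serde_json::from_str::<StarOr<IndexUid>>(r#""café""#).is_err());
}
```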

dump/src/reader/v4/mod.rs Normal file
@@ -0,0 +1,307 @@
use std::fs::{self, File};
use std::io::{BufRead, BufReader, ErrorKind};
use std::path::Path;
use serde::{Deserialize, Serialize};
use tempfile::TempDir;
use time::OffsetDateTime;
use uuid::Uuid;
pub mod errors;
pub mod keys;
pub mod meta;
pub mod settings;
pub mod tasks;
use self::meta::{DumpMeta, IndexUuid};
use super::compat::v4_to_v5::CompatV4ToV5;
use crate::{Error, IndexMetadata, Result, Version};
pub type Document = serde_json::Map<String, serde_json::Value>;
pub type Settings<T> = settings::Settings<T>;
pub type Checked = settings::Checked;
pub type Unchecked = settings::Unchecked;
pub type Task = tasks::Task;
pub type Key = keys::Key;
// everything related to the settings
pub type Setting<T> = settings::Setting<T>;
// everything related to the api keys
pub type Action = keys::Action;
// everything related to the errors
pub type ResponseError = errors::ResponseError;
pub type Code = errors::Code;
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
db_version: String,
index_db_size: usize,
update_db_size: usize,
#[serde(with = "time::serde::rfc3339")]
dump_date: OffsetDateTime,
}
pub struct V4Reader {
dump: TempDir,
metadata: Metadata,
tasks: BufReader<File>,
keys: BufReader<File>,
index_uuid: Vec<IndexUuid>,
}
impl V4Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let metadata = serde_json::from_reader(&*meta_file)?;
let index_uuid = File::open(dump.path().join("index_uuids/data.jsonl"))?;
let index_uuid = BufReader::new(index_uuid);
let index_uuid = index_uuid
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) })
.collect::<Result<Vec<_>>>()?;
Ok(V4Reader {
metadata,
tasks: BufReader::new(
File::open(dump.path().join("updates").join("data.jsonl")).unwrap(),
),
keys: BufReader::new(File::open(dump.path().join("keys"))?),
index_uuid,
dump,
})
}
pub fn to_v5(self) -> CompatV4ToV5 {
CompatV4ToV5::new(self)
}
pub fn version(&self) -> Version {
Version::V4
}
pub fn date(&self) -> Option<OffsetDateTime> {
Some(self.metadata.dump_date)
}
pub fn instance_uid(&self) -> Result<Option<Uuid>> {
match fs::read_to_string(self.dump.path().join("instance-uid")) {
Ok(uuid) => Ok(Some(Uuid::parse_str(&uuid)?)),
Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
Err(e) => Err(e.into()),
}
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<V4IndexReader>> + '_> {
Ok(self.index_uuid.iter().map(|index| -> Result<_> {
V4IndexReader::new(
index.uid.clone(),
&self.dump.path().join("indexes").join(index.index_meta.uuid.to_string()),
)
}))
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(Task, Option<Box<super::UpdateFile>>)>> + '_> {
Box::new((&mut self.tasks).lines().map(|line| -> Result<_> {
let task: Task = serde_json::from_str(&line?)?;
if !task.is_finished() {
if let Some(uuid) = task.get_content_uuid() {
let update_file_path = self
.dump
.path()
.join("updates")
.join("updates_files")
.join(uuid.to_string());
Ok((
task,
Some(
Box::new(UpdateFile::new(&update_file_path)?) as Box<super::UpdateFile>
),
))
} else {
Ok((task, None))
}
} else {
Ok((task, None))
}
}))
}
pub fn keys(&mut self) -> Box<dyn Iterator<Item = Result<Key>> + '_> {
Box::new(
(&mut self.keys).lines().map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }),
)
}
}
pub struct V4IndexReader {
metadata: IndexMetadata,
settings: Settings<Checked>,
documents: BufReader<File>,
}
impl V4IndexReader {
pub fn new(name: String, path: &Path) -> Result<Self> {
let meta = File::open(path.join("meta.json"))?;
let meta: DumpMeta = serde_json::from_reader(meta)?;
let metadata = IndexMetadata {
uid: name,
primary_key: meta.primary_key,
// FIXME: Iterate over the whole task queue to find the creation and last update date.
created_at: OffsetDateTime::now_utc(),
updated_at: OffsetDateTime::now_utc(),
};
let ret = V4IndexReader {
metadata,
settings: meta.settings.check(),
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
};
Ok(ret)
}
pub fn metadata(&self) -> &IndexMetadata {
&self.metadata
}
pub fn documents(&mut self) -> Result<impl Iterator<Item = Result<Document>> + '_> {
Ok((&mut self.documents)
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}
pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
}
pub struct UpdateFile {
reader: BufReader<File>,
}
impl UpdateFile {
fn new(path: &Path) -> Result<Self> {
Ok(UpdateFile { reader: BufReader::new(File::open(path)?) })
}
}
impl Iterator for UpdateFile {
type Item = Result<Document>;
fn next(&mut self) -> Option<Self::Item> {
(&mut self.reader)
.lines()
.map(|line| {
line.map_err(Error::from)
.and_then(|line| serde_json::from_str(&line).map_err(Error::from))
})
.next()
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v4() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = V4Reader::open(dir).unwrap();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-06 12:53:49.131989609 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"f4efacbea0c1a4400873f4b2ee33f975");
assert_eq!(update_files.len(), 10);
assert!(update_files[0].is_some()); // the enqueued document addition
assert!(update_files[1..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(0).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// keys
let keys = dump.keys().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys, { "[].uid" => "[uuid]" }), @"9240300dca8f962cdf58359ef4c76f09");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"65b139c6b9fc251e187073c8557803e2");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"06aa1988493485d9b2cda7c751e6bb15");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7d722fc2629eaa45032ed3deb0c9b4ce");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

dump/src/reader/v4/settings.rs
@@ -0,0 +1,261 @@
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use serde::{Deserialize, Deserializer};
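/// Test-only serialization helper: a `Reset` wildcardable field re-serializes as `["*"]`,
/// while a `NotSet` field serializes as `None` (and is then skipped at the field level).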
#[cfg(test)]
fn serialize_with_wildcard<S>(
field: &Setting<Vec<String>>,
s: S,
) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
use serde::Serialize;
let wildcard = vec!["*".to_string()];
match field {
Setting::Set(value) => Some(value),
Setting::Reset => Some(&wildcard),
Setting::NotSet => None,
}
.serialize(s)
}
#[derive(Clone, Default, Debug, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Checked;
#[derive(Clone, Default, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Unchecked;
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct MinWordSizeTyposSetting {
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub one_typo: Setting<u8>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub two_typos: Setting<u8>,
}
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct TypoSettings {
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub enabled: Setting<bool>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub min_word_size_for_typos: Setting<MinWordSizeTyposSetting>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub disable_on_words: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub disable_on_attributes: Setting<BTreeSet<String>>,
}
/// Holds all the settings for an index. `T` can either be `Checked` if it represents settings
/// whose validity is guaranteed, or `Unchecked` if they still need to be validated. In the
/// latter case, a call to `check` turns a `Settings<Unchecked>` into a `Settings<Checked>`.
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
#[serde(bound(serialize = "T: serde::Serialize", deserialize = "T: Deserialize<'static>"))]
pub struct Settings<T> {
#[serde(
default,
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Setting::is_not_set"
)]
pub displayed_attributes: Setting<Vec<String>>,
#[serde(
default,
serialize_with = "serialize_with_wildcard",
skip_serializing_if = "Setting::is_not_set"
)]
pub searchable_attributes: Setting<Vec<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub filterable_attributes: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub sortable_attributes: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub ranking_rules: Setting<Vec<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub stop_words: Setting<BTreeSet<String>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub synonyms: Setting<BTreeMap<String, Vec<String>>>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub distinct_attribute: Setting<String>,
#[serde(default, skip_serializing_if = "Setting::is_not_set")]
pub typo_tolerance: Setting<TypoSettings>,
#[serde(skip)]
pub _kind: PhantomData<T>,
}
impl Settings<Checked> {
pub fn cleared() -> Settings<Checked> {
Settings {
displayed_attributes: Setting::Reset,
searchable_attributes: Setting::Reset,
filterable_attributes: Setting::Reset,
sortable_attributes: Setting::Reset,
ranking_rules: Setting::Reset,
stop_words: Setting::Reset,
synonyms: Setting::Reset,
distinct_attribute: Setting::Reset,
typo_tolerance: Setting::Reset,
_kind: PhantomData,
}
}
pub fn into_unchecked(self) -> Settings<Unchecked> {
let Self {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
typo_tolerance,
..
} = self;
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
typo_tolerance,
_kind: PhantomData,
}
}
}
impl Settings<Unchecked> {
pub fn check(self) -> Settings<Checked> {
let displayed_attributes = match self.displayed_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
let searchable_attributes = match self.searchable_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes: self.filterable_attributes,
sortable_attributes: self.sortable_attributes,
ranking_rules: self.ranking_rules,
stop_words: self.stop_words,
synonyms: self.synonyms,
distinct_attribute: self.distinct_attribute,
typo_tolerance: self.typo_tolerance,
_kind: PhantomData,
}
}
}
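// Illustrative sketch of the `check` rule above (not part of the original diff): a
// wildcard in the displayed attributes collapses to `Setting::Reset`, i.e. "display
// everything", while non-wildcard lists are kept verbatim.
#[cfg(test)]
mod check_tests {
    use super::*;

    #[test]
    fn wildcard_collapses_to_reset() {
        let unchecked = Settings::<Unchecked> {
            displayed_attributes: Setting::Set(vec!["*".to_string()]),
            ..Settings::default()
        };
        assert_eq!(unchecked.check().displayed_attributes, Setting::Reset);

        let unchecked = Settings::<Unchecked> {
            searchable_attributes: Setting::Set(vec!["title".to_string()]),
            ..Settings::default()
        };
        assert_eq!(
            unchecked.check().searchable_attributes,
            Setting::Set(vec!["title".to_string()])
        );
    }
}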
#[derive(Debug, Clone, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct Facets {
pub level_group_size: Option<NonZeroUsize>,
pub min_level_size: Option<NonZeroUsize>,
}
#[derive(Debug, Clone, PartialEq, Eq, Copy)]
pub enum Setting<T> {
Set(T),
Reset,
NotSet,
}
impl<T> Default for Setting<T> {
fn default() -> Self {
Self::NotSet
}
}
impl<T> Setting<T> {
pub fn set(self) -> Option<T> {
match self {
Self::Set(value) => Some(value),
_ => None,
}
}
pub const fn as_ref(&self) -> Setting<&T> {
match *self {
Self::Set(ref value) => Setting::Set(value),
Self::Reset => Setting::Reset,
Self::NotSet => Setting::NotSet,
}
}
pub const fn is_not_set(&self) -> bool {
matches!(self, Self::NotSet)
}
/// If `Self` is `Reset`, then map self to `Set` with the provided `val`.
pub fn or_reset(self, val: T) -> Self {
match self {
Self::Reset => Self::Set(val),
otherwise => otherwise,
}
}
}
#[cfg(test)]
impl<T: serde::Serialize> serde::Serialize for Setting<T> {
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
match self {
Self::Set(value) => Some(value),
// `NotSet` is normally skipped at the field level via the `skip_serializing_if` attribute
Self::NotSet | Self::Reset => None,
}
.serialize(serializer)
}
}
impl<'de, T: Deserialize<'de>> Deserialize<'de> for Setting<T> {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(|x| match x {
Some(x) => Self::Set(x),
None => Self::Reset, // Reset is forced by sending null value
})
}
}

dump/src/reader/v4/tasks.rs
@@ -0,0 +1,135 @@
use serde::Deserialize;
use time::OffsetDateTime;
use uuid::Uuid;
use super::errors::ResponseError;
use super::meta::IndexUid;
use super::settings::{Settings, Unchecked};
pub type TaskId = u32;
pub type BatchId = u32;
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Task {
pub id: TaskId,
pub index_uid: IndexUid,
pub content: TaskContent,
pub events: Vec<TaskEvent>,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[allow(clippy::large_enum_variant)]
pub enum TaskContent {
DocumentAddition {
content_uuid: Uuid,
merge_strategy: IndexDocumentsMethod,
primary_key: Option<String>,
documents_count: usize,
allow_index_creation: bool,
},
DocumentDeletion(DocumentDeletion),
SettingsUpdate {
settings: Settings<Unchecked>,
/// Indicates whether the task was a deletion
is_deletion: bool,
allow_index_creation: bool,
},
IndexDeletion,
IndexCreation {
primary_key: Option<String>,
},
IndexUpdate {
primary_key: Option<String>,
},
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum IndexDocumentsMethod {
/// Replace the previous document with the new one,
/// removing all the already known attributes.
ReplaceDocuments,
/// Merge the previous version of the document with the new version,
/// replacing old attribute values with the new ones and adding the new attributes.
UpdateDocuments,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum DocumentDeletion {
Clear,
Ids(Vec<String>),
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum TaskEvent {
Created(#[serde(with = "time::serde::rfc3339")] OffsetDateTime),
Batched {
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
batch_id: BatchId,
},
Processing(#[serde(with = "time::serde::rfc3339")] OffsetDateTime),
// `Succeded` is misspelled on purpose: it matches the v4 on-disk format, and renaming
// the variant would break deserialization of existing v4 dumps.
Succeded {
result: TaskResult,
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
},
Failed {
error: ResponseError,
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
},
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum TaskResult {
DocumentAddition { indexed_documents: u64 },
DocumentDeletion { deleted_documents: u64 },
ClearAll { deleted_documents: u64 },
Other,
}
impl Task {
/// Return true when a task is finished.
/// A task is finished when its last event is either `Succeded` (sic) or `Failed`.
pub fn is_finished(&self) -> bool {
self.events.last().map_or(false, |event| {
matches!(event, TaskEvent::Succeded { .. } | TaskEvent::Failed { .. })
})
}
/// Return the content_uuid of the `Task` if there is one.
pub fn get_content_uuid(&self) -> Option<Uuid> {
match self {
Task { content: TaskContent::DocumentAddition { content_uuid, .. }, .. } => {
Some(*content_uuid)
}
_ => None,
}
}
}
impl IndexUid {
pub fn into_inner(self) -> String {
self.0
}
/// Return a reference over the inner str.
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for IndexUid {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}

dump/src/reader/v4/errors.rs
@@ -0,0 +1,272 @@
use std::fmt;
use http::StatusCode;
use serde::Deserialize;
#[derive(Debug, Deserialize, Clone, PartialEq, Eq)]
#[serde(rename_all = "camelCase")]
#[cfg_attr(feature = "test-traits", derive(proptest_derive::Arbitrary))]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct ResponseError {
#[serde(skip)]
code: StatusCode,
pub message: String,
#[serde(rename = "code")]
pub error_code: String,
#[serde(rename = "type")]
pub error_type: String,
#[serde(rename = "link")]
pub error_link: String,
}
impl ResponseError {
pub fn from_msg(message: String, code: Code) -> Self {
Self {
code: code.http(),
message,
error_code: code.err_code().error_name.to_string(),
error_type: code.type_(),
error_link: code.url(),
}
}
}
#[derive(Deserialize, Debug, Clone, Copy)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum Code {
// index related error
CreateIndex,
IndexAlreadyExists,
IndexNotFound,
InvalidIndexUid,
InvalidMinWordLengthForTypo,
// invalid state error
InvalidState,
MissingPrimaryKey,
PrimaryKeyAlreadyPresent,
MaxFieldsLimitExceeded,
MissingDocumentId,
InvalidDocumentId,
Filter,
Sort,
BadParameter,
BadRequest,
DatabaseSizeLimitReached,
DocumentNotFound,
Internal,
InvalidGeoField,
InvalidRankingRule,
InvalidStore,
InvalidToken,
MissingAuthorizationHeader,
NoSpaceLeftOnDevice,
DumpNotFound,
TaskNotFound,
PayloadTooLarge,
RetrieveDocument,
SearchDocuments,
UnsupportedMediaType,
DumpAlreadyInProgress,
DumpProcessFailed,
InvalidContentType,
MissingContentType,
MalformedPayload,
MissingPayload,
ApiKeyNotFound,
MissingParameter,
InvalidApiKeyActions,
InvalidApiKeyIndexes,
InvalidApiKeyExpiresAt,
InvalidApiKeyDescription,
InvalidApiKeyName,
InvalidApiKeyUid,
ImmutableField,
ApiKeyAlreadyExists,
UnretrievableErrorCode,
}
impl Code {
/// associate a `Code` variant to the actual ErrCode
fn err_code(&self) -> ErrCode {
use Code::*;
match self {
// index related errors
// create index is thrown on internal error while creating an index.
CreateIndex => {
ErrCode::internal("index_creation_failed", StatusCode::INTERNAL_SERVER_ERROR)
}
IndexAlreadyExists => ErrCode::invalid("index_already_exists", StatusCode::CONFLICT),
// thrown when requesting a nonexistent index
IndexNotFound => ErrCode::invalid("index_not_found", StatusCode::NOT_FOUND),
InvalidIndexUid => ErrCode::invalid("invalid_index_uid", StatusCode::BAD_REQUEST),
// invalid state error
InvalidState => ErrCode::internal("invalid_state", StatusCode::INTERNAL_SERVER_ERROR),
// thrown when no primary key has been set
MissingPrimaryKey => {
ErrCode::invalid("primary_key_inference_failed", StatusCode::BAD_REQUEST)
}
// error thrown when trying to set an already existing primary key
PrimaryKeyAlreadyPresent => {
ErrCode::invalid("index_primary_key_already_exists", StatusCode::BAD_REQUEST)
}
// invalid ranking rule
InvalidRankingRule => ErrCode::invalid("invalid_ranking_rule", StatusCode::BAD_REQUEST),
// invalid database
InvalidStore => {
ErrCode::internal("invalid_store_file", StatusCode::INTERNAL_SERVER_ERROR)
}
// invalid document
MaxFieldsLimitExceeded => {
ErrCode::invalid("max_fields_limit_exceeded", StatusCode::BAD_REQUEST)
}
MissingDocumentId => ErrCode::invalid("missing_document_id", StatusCode::BAD_REQUEST),
InvalidDocumentId => ErrCode::invalid("invalid_document_id", StatusCode::BAD_REQUEST),
// error related to filters
Filter => ErrCode::invalid("invalid_filter", StatusCode::BAD_REQUEST),
// error related to sorts
Sort => ErrCode::invalid("invalid_sort", StatusCode::BAD_REQUEST),
BadParameter => ErrCode::invalid("bad_parameter", StatusCode::BAD_REQUEST),
BadRequest => ErrCode::invalid("bad_request", StatusCode::BAD_REQUEST),
DatabaseSizeLimitReached => {
ErrCode::internal("database_size_limit_reached", StatusCode::INTERNAL_SERVER_ERROR)
}
DocumentNotFound => ErrCode::invalid("document_not_found", StatusCode::NOT_FOUND),
Internal => ErrCode::internal("internal", StatusCode::INTERNAL_SERVER_ERROR),
InvalidGeoField => ErrCode::invalid("invalid_geo_field", StatusCode::BAD_REQUEST),
InvalidToken => ErrCode::authentication("invalid_api_key", StatusCode::FORBIDDEN),
MissingAuthorizationHeader => {
ErrCode::authentication("missing_authorization_header", StatusCode::UNAUTHORIZED)
}
TaskNotFound => ErrCode::invalid("task_not_found", StatusCode::NOT_FOUND),
DumpNotFound => ErrCode::invalid("dump_not_found", StatusCode::NOT_FOUND),
NoSpaceLeftOnDevice => {
ErrCode::internal("no_space_left_on_device", StatusCode::INTERNAL_SERVER_ERROR)
}
PayloadTooLarge => ErrCode::invalid("payload_too_large", StatusCode::PAYLOAD_TOO_LARGE),
RetrieveDocument => {
ErrCode::internal("unretrievable_document", StatusCode::BAD_REQUEST)
}
SearchDocuments => ErrCode::internal("search_error", StatusCode::BAD_REQUEST),
UnsupportedMediaType => {
ErrCode::invalid("unsupported_media_type", StatusCode::UNSUPPORTED_MEDIA_TYPE)
}
// error related to dump
DumpAlreadyInProgress => {
ErrCode::invalid("dump_already_processing", StatusCode::CONFLICT)
}
DumpProcessFailed => {
ErrCode::internal("dump_process_failed", StatusCode::INTERNAL_SERVER_ERROR)
}
MissingContentType => {
ErrCode::invalid("missing_content_type", StatusCode::UNSUPPORTED_MEDIA_TYPE)
}
MalformedPayload => ErrCode::invalid("malformed_payload", StatusCode::BAD_REQUEST),
InvalidContentType => {
ErrCode::invalid("invalid_content_type", StatusCode::UNSUPPORTED_MEDIA_TYPE)
}
MissingPayload => ErrCode::invalid("missing_payload", StatusCode::BAD_REQUEST),
// error related to keys
ApiKeyNotFound => ErrCode::invalid("api_key_not_found", StatusCode::NOT_FOUND),
MissingParameter => ErrCode::invalid("missing_parameter", StatusCode::BAD_REQUEST),
InvalidApiKeyActions => {
ErrCode::invalid("invalid_api_key_actions", StatusCode::BAD_REQUEST)
}
InvalidApiKeyIndexes => {
ErrCode::invalid("invalid_api_key_indexes", StatusCode::BAD_REQUEST)
}
InvalidApiKeyExpiresAt => {
ErrCode::invalid("invalid_api_key_expires_at", StatusCode::BAD_REQUEST)
}
InvalidApiKeyDescription => {
ErrCode::invalid("invalid_api_key_description", StatusCode::BAD_REQUEST)
}
InvalidApiKeyName => ErrCode::invalid("invalid_api_key_name", StatusCode::BAD_REQUEST),
InvalidApiKeyUid => ErrCode::invalid("invalid_api_key_uid", StatusCode::BAD_REQUEST),
ApiKeyAlreadyExists => ErrCode::invalid("api_key_already_exists", StatusCode::CONFLICT),
ImmutableField => ErrCode::invalid("immutable_field", StatusCode::BAD_REQUEST),
InvalidMinWordLengthForTypo => {
ErrCode::invalid("invalid_min_word_length_for_typo", StatusCode::BAD_REQUEST)
}
UnretrievableErrorCode => {
ErrCode::invalid("unretrievable_error_code", StatusCode::BAD_REQUEST)
}
}
}
/// return the HTTP status code associated with the `Code`
fn http(&self) -> StatusCode {
self.err_code().status_code
}
/// return error name, used as error code
fn name(&self) -> String {
self.err_code().error_name.to_string()
}
/// return the error type
fn type_(&self) -> String {
self.err_code().error_type.to_string()
}
/// return the doc url associated with the error
fn url(&self) -> String {
format!("https://docs.meilisearch.com/errors#{}", self.name())
}
}
/// Internal structure providing a convenient way to create error codes
struct ErrCode {
status_code: StatusCode,
error_type: ErrorType,
error_name: &'static str,
}
impl ErrCode {
fn authentication(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode { status_code, error_name, error_type: ErrorType::AuthenticationError }
}
fn internal(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode { status_code, error_name, error_type: ErrorType::InternalError }
}
fn invalid(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode { status_code, error_name, error_type: ErrorType::InvalidRequestError }
}
}
#[allow(clippy::enum_variant_names)]
enum ErrorType {
InternalError,
InvalidRequestError,
AuthenticationError,
}
impl fmt::Display for ErrorType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use ErrorType::*;
match self {
InternalError => write!(f, "internal"),
InvalidRequestError => write!(f, "invalid_request"),
AuthenticationError => write!(f, "auth"),
}
}
}

dump/src/reader/v4/keys.rs
@@ -0,0 +1,83 @@
use serde::Deserialize;
use time::OffsetDateTime;
use uuid::Uuid;
use super::meta::{IndexUid, StarOr};
pub type KeyId = Uuid;
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Key {
pub description: Option<String>,
pub name: Option<String>,
pub uid: KeyId,
pub actions: Vec<Action>,
pub indexes: Vec<StarOr<IndexUid>>,
#[serde(with = "time::serde::rfc3339::option")]
pub expires_at: Option<OffsetDateTime>,
#[serde(with = "time::serde::rfc3339")]
pub created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub updated_at: OffsetDateTime,
}
#[derive(Copy, Clone, Deserialize, Debug, Eq, PartialEq, Hash)]
#[cfg_attr(test, derive(serde::Serialize))]
#[repr(u8)]
pub enum Action {
#[serde(rename = "*")]
All = 0,
#[serde(rename = "search")]
Search,
#[serde(rename = "documents.*")]
DocumentsAll,
#[serde(rename = "documents.add")]
DocumentsAdd,
#[serde(rename = "documents.get")]
DocumentsGet,
#[serde(rename = "documents.delete")]
DocumentsDelete,
#[serde(rename = "indexes.*")]
IndexesAll,
#[serde(rename = "indexes.create")]
IndexesAdd,
#[serde(rename = "indexes.get")]
IndexesGet,
#[serde(rename = "indexes.update")]
IndexesUpdate,
#[serde(rename = "indexes.delete")]
IndexesDelete,
#[serde(rename = "tasks.*")]
TasksAll,
#[serde(rename = "tasks.get")]
TasksGet,
#[serde(rename = "settings.*")]
SettingsAll,
#[serde(rename = "settings.get")]
SettingsGet,
#[serde(rename = "settings.update")]
SettingsUpdate,
#[serde(rename = "stats.*")]
StatsAll,
#[serde(rename = "stats.get")]
StatsGet,
#[serde(rename = "metrics.*")]
MetricsAll,
#[serde(rename = "metrics.get")]
MetricsGet,
#[serde(rename = "dumps.*")]
DumpsAll,
#[serde(rename = "dumps.create")]
DumpsCreate,
#[serde(rename = "version")]
Version,
#[serde(rename = "keys.create")]
KeysAdd,
#[serde(rename = "keys.get")]
KeysGet,
#[serde(rename = "keys.update")]
KeysUpdate,
#[serde(rename = "keys.delete")]
KeysDelete,
}

dump/src/reader/v5/meta.rs
@@ -0,0 +1,139 @@
use std::fmt::{self, Display, Formatter};
use std::marker::PhantomData;
use std::str::FromStr;
use serde::de::Visitor;
use serde::{Deserialize, Deserializer};
use uuid::Uuid;
use super::settings::{Settings, Unchecked};
#[derive(Deserialize, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUuid {
pub uid: String,
pub index_meta: IndexMeta,
}
#[derive(Deserialize, Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexMeta {
pub uuid: Uuid,
pub creation_task_id: usize,
}
// There is one per index, stored in its `meta.json`.
#[derive(Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct DumpMeta {
pub settings: Settings<Unchecked>,
pub primary_key: Option<String>,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct IndexUid(pub String);
impl TryFrom<String> for IndexUid {
type Error = IndexUidFormatError;
fn try_from(uid: String) -> Result<Self, Self::Error> {
if !uid.chars().all(|x| x.is_ascii_alphanumeric() || x == '-' || x == '_')
|| uid.is_empty()
|| uid.len() > 400
{
Err(IndexUidFormatError { invalid_uid: uid })
} else {
Ok(IndexUid(uid))
}
}
}
impl FromStr for IndexUid {
type Err = IndexUidFormatError;
fn from_str(uid: &str) -> Result<IndexUid, IndexUidFormatError> {
uid.to_string().try_into()
}
}
impl From<IndexUid> for String {
fn from(uid: IndexUid) -> Self {
uid.into_inner()
}
}
#[derive(Debug)]
pub struct IndexUidFormatError {
pub invalid_uid: String,
}
impl Display for IndexUidFormatError {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid index uid `{}`, the uid must be an integer \
or a string containing only alphanumeric characters \
a-z A-Z 0-9, hyphens - and underscores _.",
self.invalid_uid,
)
}
}
impl std::error::Error for IndexUidFormatError {}
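// Illustrative sketch of the validation rule above (not in the original diff): 1 to 400
// characters drawn from `[a-zA-Z0-9_-]` are accepted, everything else is rejected.
#[cfg(test)]
mod index_uid_tests {
    use super::*;

    #[test]
    fn accepts_valid_and_rejects_invalid_uids() {
        assert!(IndexUid::try_from("movies-2022_bis".to_string()).is_ok());
        assert!(IndexUid::try_from("hello world".to_string()).is_err());
        assert!(IndexUid::try_from(String::new()).is_err());
        assert!(IndexUid::try_from("a".repeat(401)).is_err());
    }
}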
/// A type that tries to match either a star (*) or
/// any other thing that implements `FromStr`.
#[derive(Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum StarOr<T> {
Star,
Other(T),
}
impl<'de, T, E> Deserialize<'de> for StarOr<T>
where
T: FromStr<Err = E>,
E: Display,
{
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
/// Serde can't differentiate between `StarOr::Star` and `StarOr::Other` without a tag.
/// Simply using `#[serde(untagged)]` + `#[serde(rename="*")]` will lead to attempting to
/// deserialize everything as a `StarOr::Other`, including "*".
/// [`#[serde(other)]`](https://serde.rs/variant-attrs.html#other) might have helped but is
/// not supported on untagged enums.
struct StarOrVisitor<T>(PhantomData<T>);
impl<'de, T, FE> Visitor<'de> for StarOrVisitor<T>
where
T: FromStr<Err = FE>,
FE: Display,
{
type Value = StarOr<T>;
fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result {
formatter.write_str("a string")
}
fn visit_str<SE>(self, v: &str) -> Result<Self::Value, SE>
where
SE: serde::de::Error,
{
match v {
"*" => Ok(StarOr::Star),
v => {
let other = FromStr::from_str(v).map_err(|e: T::Err| {
SE::custom(format!("Invalid `other` value: {}", e))
})?;
Ok(StarOr::Other(other))
}
}
}
}
deserializer.deserialize_str(StarOrVisitor(PhantomData))
}
}
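// Illustrative sketch of the visitor above (not in the original diff): a literal "*"
// becomes `Star`, any other string is parsed through `FromStr` into `Other`.
#[cfg(test)]
mod star_or_tests {
    use super::*;

    #[test]
    fn star_and_other() {
        let star: StarOr<IndexUid> = serde_json::from_str(r#""*""#).unwrap();
        assert!(matches!(star, StarOr::Star));
        let other: StarOr<IndexUid> = serde_json::from_str(r#""movies""#).unwrap();
        assert!(matches!(other, StarOr::Other(IndexUid(ref uid)) if uid == "movies"));
    }
}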

dump/src/reader/v5/mod.rs
@@ -0,0 +1,350 @@
//! Here is what a dump v5 looks like.
//!
//! ```text
//! .
//! ├── indexes
//! │   ├── 22c269d8-fbbd-4416-bd46-7c7c02849325
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   ├── 6d0471ba-2ed1-41de-8ea6-10db10fa2bb8
//! │   │   ├── documents.jsonl
//! │   │   └── meta.json
//! │   └── f7d53ec4-0748-48e6-b66f-1fca9944b0fa
//! │   ├── documents.jsonl
//! │   └── meta.json
//! ├── index_uuids
//! │   └── data.jsonl
//! ├── instance-uid
//! ├── keys
//! ├── metadata.json
//! └── updates
//! ├── data.jsonl
//! └── updates_files
//! └── c83a004a-da98-4b94-b245-3256266c7281
//! ```
//!
//! Here is what `index_uuids/data.jsonl` looks like:
//!
//! ```json
//! {"uid":"dnd_spells","index_meta":{"uuid":"22c269d8-fbbd-4416-bd46-7c7c02849325","creation_task_id":9}}
//! {"uid":"movies","index_meta":{"uuid":"6d0471ba-2ed1-41de-8ea6-10db10fa2bb8","creation_task_id":1}}
//! {"uid":"products","index_meta":{"uuid":"f7d53ec4-0748-48e6-b66f-1fca9944b0fa","creation_task_id":4}}
//! ```
//!
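//! A minimal usage sketch (illustrative only; the `dump::reader::v5` path and the
//! unpacking step are assumptions, hence the ignored doctest):
//!
//! ```ignore
//! let dir = tempfile::TempDir::new()?;
//! // unpack the gzipped `.dump` tarball into `dir` first, as in the test below
//! let dump = dump::reader::v5::V5Reader::open(dir)?;
//! for index in dump.indexes()? {
//!     println!("{}", index?.metadata().uid);
//! }
//! ```
//!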
use std::fs::{self, File};
use std::io::{BufRead, BufReader, ErrorKind, Seek, SeekFrom};
use std::path::Path;
use serde::{Deserialize, Serialize};
use tempfile::TempDir;
use time::OffsetDateTime;
use uuid::Uuid;
use super::compat::v5_to_v6::CompatV5ToV6;
use super::Document;
use crate::{Error, IndexMetadata, Result, Version};
pub mod errors;
pub mod keys;
pub mod meta;
pub mod settings;
pub mod tasks;
pub type Settings<T> = settings::Settings<T>;
pub type Checked = settings::Checked;
pub type Unchecked = settings::Unchecked;
pub type Task = tasks::Task;
pub type Key = keys::Key;
// ===== Other types to clarify the code of the compat module
// everything related to the tasks
pub type Status = tasks::TaskStatus;
pub type Details = tasks::TaskDetails;
// everything related to the settings
pub type Setting<T> = settings::Setting<T>;
pub type TypoTolerance = settings::TypoSettings;
pub type MinWordSizeForTypos = settings::MinWordSizeTyposSetting;
// everything related to the api keys
pub type Action = keys::Action;
pub type StarOr<T> = meta::StarOr<T>;
// everything related to the errors
pub type ResponseError = errors::ResponseError;
pub type Code = errors::Code;
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
db_version: String,
index_db_size: usize,
update_db_size: usize,
#[serde(with = "time::serde::rfc3339")]
dump_date: OffsetDateTime,
}
pub struct V5Reader {
dump: TempDir,
metadata: Metadata,
tasks: BufReader<File>,
keys: BufReader<File>,
index_uuid: Vec<meta::IndexUuid>,
}
impl V5Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let metadata = serde_json::from_reader(&*meta_file)?;
let index_uuid = File::open(dump.path().join("index_uuids/data.jsonl"))?;
let index_uuid = BufReader::new(index_uuid);
let index_uuid = index_uuid
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) })
.collect::<Result<Vec<_>>>()?;
Ok(V5Reader {
metadata,
tasks: BufReader::new(
File::open(dump.path().join("updates").join("data.jsonl")).unwrap(),
),
keys: BufReader::new(File::open(dump.path().join("keys"))?),
index_uuid,
dump,
})
}
pub fn to_v6(self) -> CompatV5ToV6 {
CompatV5ToV6::new_v5(self)
}
pub fn version(&self) -> Version {
Version::V5
}
pub fn date(&self) -> Option<OffsetDateTime> {
Some(self.metadata.dump_date)
}
pub fn instance_uid(&self) -> Result<Option<Uuid>> {
match fs::read_to_string(self.dump.path().join("instance-uid")) {
Ok(uuid) => Ok(Some(Uuid::parse_str(&uuid)?)),
Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
Err(e) => Err(e.into()),
}
}
pub fn indexes(&self) -> Result<impl Iterator<Item = Result<V5IndexReader>> + '_> {
Ok(self.index_uuid.iter().map(|index| -> Result<_> {
V5IndexReader::new(
index.uid.clone(),
&self.dump.path().join("indexes").join(index.index_meta.uuid.to_string()),
)
}))
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(Task, Option<Box<super::UpdateFile>>)>> + '_> {
Box::new((&mut self.tasks).lines().map(|line| -> Result<_> {
let task: Task = serde_json::from_str(&line?)?;
if !task.is_finished() {
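// Only unfinished tasks can still have a payload on disk, stored under
// `updates/updates_files/{content_uuid}`.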
if let Some(uuid) = task.get_content_uuid() {
let update_file_path = self
.dump
.path()
.join("updates")
.join("updates_files")
.join(uuid.to_string());
Ok((
task,
Some(
Box::new(UpdateFile::new(&update_file_path)?) as Box<super::UpdateFile>
),
))
} else {
Ok((task, None))
}
} else {
Ok((task, None))
}
}))
}
pub fn keys(&mut self) -> Result<Box<dyn Iterator<Item = Result<Key>> + '_>> {
self.keys.seek(SeekFrom::Start(0))?;
Ok(Box::new(
(&mut self.keys).lines().map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }),
))
}
}
pub struct V5IndexReader {
metadata: IndexMetadata,
settings: Settings<Checked>,
documents: BufReader<File>,
}
impl V5IndexReader {
pub fn new(name: String, path: &Path) -> Result<Self> {
let meta = File::open(path.join("meta.json"))?;
let meta: meta::DumpMeta = serde_json::from_reader(meta)?;
let metadata = IndexMetadata {
uid: name,
primary_key: meta.primary_key,
// FIXME: Iterate over the whole task queue to find the creation and last update date.
created_at: OffsetDateTime::now_utc(),
updated_at: OffsetDateTime::now_utc(),
};
let ret = V5IndexReader {
metadata,
settings: meta.settings.check(),
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
};
Ok(ret)
}
pub fn metadata(&self) -> &IndexMetadata {
&self.metadata
}
pub fn documents(&mut self) -> Result<impl Iterator<Item = Result<Document>> + '_> {
Ok((&mut self.documents)
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}
pub fn settings(&mut self) -> Result<Settings<Checked>> {
Ok(self.settings.clone())
}
}
pub struct UpdateFile {
reader: BufReader<File>,
}
impl UpdateFile {
fn new(path: &Path) -> Result<Self> {
Ok(UpdateFile { reader: BufReader::new(File::open(path)?) })
}
}
impl Iterator for UpdateFile {
type Item = Result<Document>;
fn next(&mut self) -> Option<Self::Item> {
(&mut self.reader)
.lines()
.map(|line| {
line.map_err(Error::from)
.and_then(|line| serde_json::from_str(&line).map_err(Error::from))
})
.next()
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fs::File;
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v5() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let dir = TempDir::new().unwrap();
let mut dump = BufReader::new(dump);
let gz = GzDecoder::new(&mut dump);
let mut archive = tar::Archive::new(gz);
archive.unpack(dir.path()).unwrap();
let mut dump = V5Reader::open(dir).unwrap();
// top-level info
insta::assert_display_snapshot!(dump.date().unwrap(), @"2022-10-04 15:55:10.344982459 +00:00:00");
insta::assert_display_snapshot!(dump.instance_uid().unwrap().unwrap(), @"9e15e977-f2ae-4761-943f-1eaf75fd736d");
// tasks
let tasks = dump.tasks().collect::<Result<Vec<_>>>().unwrap();
let (tasks, mut update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"e159863f0442b2e987ce37fbd57af76b");
assert_eq!(update_files.len(), 22);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_some()); // the enqueued document addition
assert!(update_files[2..].iter().all(|u| u.is_none())); // everything already processed
let update_file = update_files.remove(1).unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(update_file), @"7b8889539b669c7b9ddba448bafa385d");
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"091ddad754f3cc7cf1d03a477855e819");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut products = indexes.pop().unwrap();
let mut movies = indexes.pop().unwrap();
let mut spells = indexes.pop().unwrap();
assert!(indexes.is_empty());
// products
insta::assert_json_snapshot!(products.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "products",
"primaryKey": "sku",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"b392b928dab63468318b2bdaad844c5a");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
// movies
insta::assert_json_snapshot!(movies.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "movies",
"primaryKey": "id",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"2f881248b7c3623e2ba2885dbf0b2c18");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
// spells
insta::assert_json_snapshot!(spells.metadata(), { ".createdAt" => "[now]", ".updatedAt" => "[now]" }, @r###"
{
"uid": "dnd_spells",
"primaryKey": "index",
"createdAt": "[now]",
"updatedAt": "[now]"
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"ade154e63ab713de67919892917d3d9d");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
}

dump/src/reader/v5/settings.rs
@@ -0,0 +1,239 @@
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use serde::{Deserialize, Deserializer, Serialize};
#[derive(Clone, Default, Debug, Serialize, PartialEq, Eq)]
pub struct Checked;
#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct Unchecked;
/// Holds all the settings for an index. `T` can either be `Checked` if it represents settings
/// whose validity is guaranteed, or `Unchecked` if they still need to be validated. In the
/// latter case, a call to `check` turns a `Settings<Unchecked>` into a `Settings<Checked>`.
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
#[serde(bound(serialize = "T: Serialize", deserialize = "T: Deserialize<'static>"))]
pub struct Settings<T> {
#[serde(default)]
pub displayed_attributes: Setting<Vec<String>>,
#[serde(default)]
pub searchable_attributes: Setting<Vec<String>>,
#[serde(default)]
pub filterable_attributes: Setting<BTreeSet<String>>,
#[serde(default)]
pub sortable_attributes: Setting<BTreeSet<String>>,
#[serde(default)]
pub ranking_rules: Setting<Vec<String>>,
#[serde(default)]
pub stop_words: Setting<BTreeSet<String>>,
#[serde(default)]
pub synonyms: Setting<BTreeMap<String, Vec<String>>>,
#[serde(default)]
pub distinct_attribute: Setting<String>,
#[serde(default)]
pub typo_tolerance: Setting<TypoSettings>,
#[serde(default)]
pub faceting: Setting<FacetingSettings>,
#[serde(default)]
pub pagination: Setting<PaginationSettings>,
#[serde(skip)]
pub _kind: PhantomData<T>,
}
#[derive(Debug, Clone, PartialEq, Eq, Copy)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum Setting<T> {
Set(T),
Reset,
NotSet,
}
impl<T> Default for Setting<T> {
fn default() -> Self {
Self::NotSet
}
}
impl<T> Setting<T> {
pub fn set(self) -> Option<T> {
match self {
Self::Set(value) => Some(value),
_ => None,
}
}
pub const fn as_ref(&self) -> Setting<&T> {
match *self {
Self::Set(ref value) => Setting::Set(value),
Self::Reset => Setting::Reset,
Self::NotSet => Setting::NotSet,
}
}
pub const fn is_not_set(&self) -> bool {
matches!(self, Self::NotSet)
}
/// If `Self` is `Reset`, then map self to `Set` with the provided `val`.
pub fn or_reset(self, val: T) -> Self {
match self {
Self::Reset => Self::Set(val),
otherwise => otherwise,
}
}
}
impl<'de, T: Deserialize<'de>> Deserialize<'de> for Setting<T> {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(|x| match x {
Some(x) => Self::Set(x),
None => Self::Reset, // Reset is forced by sending null value
})
}
}
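// Minimal illustration of the convention above (not in the original diff): an explicit
// `null` maps to `Setting::Reset`, while an absent field stays `Setting::NotSet`.
#[cfg(test)]
mod setting_deserialize_tests {
    use super::*;

    #[test]
    fn null_means_reset_and_absent_means_not_set() {
        let settings: Settings<Unchecked> =
            serde_json::from_str(r#"{ "distinctAttribute": null }"#).unwrap();
        assert_eq!(settings.distinct_attribute, Setting::Reset);
        assert_eq!(settings.ranking_rules, Setting::NotSet);
    }
}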
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct MinWordSizeTyposSetting {
#[serde(default)]
pub one_typo: Setting<u8>,
#[serde(default)]
pub two_typos: Setting<u8>,
}
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct TypoSettings {
#[serde(default)]
pub enabled: Setting<bool>,
#[serde(default)]
pub min_word_size_for_typos: Setting<MinWordSizeTyposSetting>,
#[serde(default)]
pub disable_on_words: Setting<BTreeSet<String>>,
#[serde(default)]
pub disable_on_attributes: Setting<BTreeSet<String>>,
}
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct FacetingSettings {
#[serde(default)]
pub max_values_per_facet: Setting<usize>,
}
#[derive(Debug, Clone, Default, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(deny_unknown_fields)]
#[serde(rename_all = "camelCase")]
pub struct PaginationSettings {
#[serde(default)]
pub max_total_hits: Setting<usize>,
}
impl Settings<Checked> {
pub fn cleared() -> Settings<Checked> {
Settings {
displayed_attributes: Setting::Reset,
searchable_attributes: Setting::Reset,
filterable_attributes: Setting::Reset,
sortable_attributes: Setting::Reset,
ranking_rules: Setting::Reset,
stop_words: Setting::Reset,
synonyms: Setting::Reset,
distinct_attribute: Setting::Reset,
typo_tolerance: Setting::Reset,
faceting: Setting::Reset,
pagination: Setting::Reset,
_kind: PhantomData,
}
}
pub fn into_unchecked(self) -> Settings<Unchecked> {
let Self {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
typo_tolerance,
faceting,
pagination,
..
} = self;
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes,
sortable_attributes,
ranking_rules,
stop_words,
synonyms,
distinct_attribute,
typo_tolerance,
faceting,
pagination,
_kind: PhantomData,
}
}
}
impl Settings<Unchecked> {
pub fn check(self) -> Settings<Checked> {
let displayed_attributes = match self.displayed_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
let searchable_attributes = match self.searchable_attributes {
Setting::Set(fields) => {
if fields.iter().any(|f| f == "*") {
Setting::Reset
} else {
Setting::Set(fields)
}
}
otherwise => otherwise,
};
Settings {
displayed_attributes,
searchable_attributes,
filterable_attributes: self.filterable_attributes,
sortable_attributes: self.sortable_attributes,
ranking_rules: self.ranking_rules,
stop_words: self.stop_words,
synonyms: self.synonyms,
distinct_attribute: self.distinct_attribute,
typo_tolerance: self.typo_tolerance,
faceting: self.faceting,
pagination: self.pagination,
_kind: PhantomData,
}
}
}

dump/src/reader/v5/tasks.rs
@@ -0,0 +1,413 @@
use serde::Deserialize;
use time::{Duration, OffsetDateTime};
use uuid::Uuid;
use super::errors::ResponseError;
use super::meta::IndexUid;
use super::settings::{Settings, Unchecked};
pub type TaskId = u32;
pub type BatchId = u32;
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub struct Task {
pub id: TaskId,
/// The index a task targets now lives inside the `TaskContent` itself and can be
/// retrieved with `Task::index_uid()`; it is `None` for tasks that target no index
/// (i.e. a `Dump` task).
pub content: TaskContent,
pub events: Vec<TaskEvent>,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
#[allow(clippy::large_enum_variant)]
pub enum TaskContent {
DocumentAddition {
index_uid: IndexUid,
content_uuid: Uuid,
merge_strategy: IndexDocumentsMethod,
primary_key: Option<String>,
documents_count: usize,
allow_index_creation: bool,
},
DocumentDeletion {
index_uid: IndexUid,
deletion: DocumentDeletion,
},
SettingsUpdate {
index_uid: IndexUid,
settings: Settings<Unchecked>,
/// Indicates whether the task was a deletion
is_deletion: bool,
allow_index_creation: bool,
},
IndexDeletion {
index_uid: IndexUid,
},
IndexCreation {
index_uid: IndexUid,
primary_key: Option<String>,
},
IndexUpdate {
index_uid: IndexUid,
primary_key: Option<String>,
},
Dump {
uid: String,
},
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum IndexDocumentsMethod {
/// Replace the previous document with the new one,
/// removing all the already known attributes.
ReplaceDocuments,
/// Merge the previous version of the document with the new version,
/// replacing old attribute values with the new ones and adding the new attributes.
UpdateDocuments,
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum DocumentDeletion {
Clear,
Ids(Vec<String>),
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum TaskEvent {
Created(#[serde(with = "time::serde::rfc3339")] OffsetDateTime),
Batched {
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
batch_id: BatchId,
},
Processing(#[serde(with = "time::serde::rfc3339")] OffsetDateTime),
Succeeded {
result: TaskResult,
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
},
Failed {
error: ResponseError,
#[serde(with = "time::serde::rfc3339")]
timestamp: OffsetDateTime,
},
}
#[derive(Clone, Debug, Deserialize, PartialEq, Eq)]
#[cfg_attr(test, derive(serde::Serialize))]
pub enum TaskResult {
DocumentAddition { indexed_documents: u64 },
DocumentDeletion { deleted_documents: u64 },
ClearAll { deleted_documents: u64 },
Other,
}
impl Task {
/// Return true when a task is finished.
/// A task is finished when its last state is either `Succeeded` or `Failed`.
pub fn is_finished(&self) -> bool {
self.events.last().map_or(false, |event| {
matches!(event, TaskEvent::Succeeded { .. } | TaskEvent::Failed { .. })
})
}
/// Return the content_uuid of the `Task` if there is one.
pub fn get_content_uuid(&self) -> Option<Uuid> {
match self {
Task { content: TaskContent::DocumentAddition { content_uuid, .. }, .. } => {
Some(*content_uuid)
}
_ => None,
}
}
pub fn index_uid(&self) -> Option<&str> {
match &self.content {
TaskContent::DocumentAddition { index_uid, .. }
| TaskContent::DocumentDeletion { index_uid, .. }
| TaskContent::SettingsUpdate { index_uid, .. }
| TaskContent::IndexDeletion { index_uid }
| TaskContent::IndexCreation { index_uid, .. }
| TaskContent::IndexUpdate { index_uid, .. } => Some(index_uid.as_str()),
TaskContent::Dump { .. } => None,
}
}
}
impl IndexUid {
pub fn into_inner(self) -> String {
self.0
}
/// Return a reference over the inner str.
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for IndexUid {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[derive(Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
#[cfg_attr(test, serde(rename_all = "camelCase"))]
pub struct TaskView {
pub uid: TaskId,
pub index_uid: Option<String>,
pub status: TaskStatus,
#[cfg_attr(test, serde(rename = "type"))]
pub task_type: TaskType,
#[cfg_attr(test, serde(skip_serializing_if = "Option::is_none"))]
pub details: Option<TaskDetails>,
#[cfg_attr(test, serde(skip_serializing_if = "Option::is_none"))]
pub error: Option<ResponseError>,
#[cfg_attr(test, serde(serialize_with = "serialize_duration"))]
pub duration: Option<Duration>,
#[cfg_attr(test, serde(serialize_with = "time::serde::rfc3339::serialize"))]
pub enqueued_at: OffsetDateTime,
#[cfg_attr(test, serde(serialize_with = "time::serde::rfc3339::option::serialize"))]
pub started_at: Option<OffsetDateTime>,
#[cfg_attr(test, serde(serialize_with = "time::serde::rfc3339::option::serialize"))]
pub finished_at: Option<OffsetDateTime>,
}
impl From<Task> for TaskView {
fn from(task: Task) -> Self {
let index_uid = task.index_uid().map(String::from);
let Task { id, content, events } = task;
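// First derive the task type and a provisional details payload from the content;
// the terminal event below then fills in the result counters.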
let (task_type, mut details) = match content {
TaskContent::DocumentAddition { documents_count, .. } => {
let details = TaskDetails::DocumentAddition {
received_documents: documents_count,
indexed_documents: None,
};
(TaskType::DocumentAdditionOrUpdate, Some(details))
}
TaskContent::DocumentDeletion { deletion: DocumentDeletion::Ids(ids), .. } => (
TaskType::DocumentDeletion,
Some(TaskDetails::DocumentDeletion {
received_document_ids: ids.len(),
deleted_documents: None,
}),
),
TaskContent::DocumentDeletion { deletion: DocumentDeletion::Clear, .. } => (
TaskType::DocumentDeletion,
Some(TaskDetails::ClearAll { deleted_documents: None }),
),
TaskContent::IndexDeletion { .. } => {
(TaskType::IndexDeletion, Some(TaskDetails::ClearAll { deleted_documents: None }))
}
TaskContent::SettingsUpdate { settings, .. } => {
(TaskType::SettingsUpdate, Some(TaskDetails::Settings { settings }))
}
TaskContent::IndexCreation { primary_key, .. } => {
(TaskType::IndexCreation, Some(TaskDetails::IndexInfo { primary_key }))
}
TaskContent::IndexUpdate { primary_key, .. } => {
(TaskType::IndexUpdate, Some(TaskDetails::IndexInfo { primary_key }))
}
TaskContent::Dump { uid } => {
(TaskType::DumpCreation, Some(TaskDetails::Dump { dump_uid: uid }))
}
};
// A task always has at least one event: "Created"
let (status, error, finished_at) = match events.last().unwrap() {
TaskEvent::Created(_) => (TaskStatus::Enqueued, None, None),
TaskEvent::Batched { .. } => (TaskStatus::Enqueued, None, None),
TaskEvent::Processing(_) => (TaskStatus::Processing, None, None),
TaskEvent::Succeeded { timestamp, result } => {
match (result, &mut details) {
(
TaskResult::DocumentAddition { indexed_documents: num, .. },
Some(TaskDetails::DocumentAddition { ref mut indexed_documents, .. }),
) => {
indexed_documents.replace(*num);
}
(
TaskResult::DocumentDeletion { deleted_documents: docs, .. },
Some(TaskDetails::DocumentDeletion { ref mut deleted_documents, .. }),
) => {
deleted_documents.replace(*docs);
}
(
TaskResult::ClearAll { deleted_documents: docs },
Some(TaskDetails::ClearAll { ref mut deleted_documents }),
) => {
deleted_documents.replace(*docs);
}
_ => (),
}
(TaskStatus::Succeeded, None, Some(*timestamp))
}
TaskEvent::Failed { timestamp, error } => {
match details {
Some(TaskDetails::DocumentDeletion { ref mut deleted_documents, .. }) => {
deleted_documents.replace(0);
}
Some(TaskDetails::ClearAll { ref mut deleted_documents, .. }) => {
deleted_documents.replace(0);
}
Some(TaskDetails::DocumentAddition { ref mut indexed_documents, .. }) => {
indexed_documents.replace(0);
}
_ => (),
}
(TaskStatus::Failed, Some(error.clone()), Some(*timestamp))
}
};
let enqueued_at = match events.first() {
Some(TaskEvent::Created(ts)) => *ts,
_ => unreachable!("A task must always have a creation event."),
};
let started_at = events.iter().find_map(|e| match e {
TaskEvent::Processing(ts) => Some(*ts),
_ => None,
});
let duration = finished_at.zip(started_at).map(|(tf, ts)| (tf - ts));
Self {
uid: id,
index_uid,
status,
task_type,
details,
error,
duration,
enqueued_at,
started_at,
finished_at,
}
}
}
#[derive(Debug, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub enum TaskType {
IndexCreation,
IndexUpdate,
IndexDeletion,
DocumentAdditionOrUpdate,
DocumentDeletion,
SettingsUpdate,
DumpCreation,
}
impl From<TaskContent> for TaskType {
fn from(other: TaskContent) -> Self {
match other {
TaskContent::IndexCreation { .. } => TaskType::IndexCreation,
TaskContent::IndexUpdate { .. } => TaskType::IndexUpdate,
TaskContent::IndexDeletion { .. } => TaskType::IndexDeletion,
TaskContent::DocumentAddition { .. } => TaskType::DocumentAdditionOrUpdate,
TaskContent::DocumentDeletion { .. } => TaskType::DocumentDeletion,
TaskContent::SettingsUpdate { .. } => TaskType::SettingsUpdate,
TaskContent::Dump { .. } => TaskType::DumpCreation,
}
}
}
#[derive(Debug, PartialEq, Eq, Deserialize)]
#[cfg_attr(test, derive(serde::Serialize))]
#[serde(rename_all = "camelCase")]
pub enum TaskStatus {
Enqueued,
Processing,
Succeeded,
Failed,
}
#[derive(Debug)]
#[cfg_attr(test, derive(serde::Serialize))]
#[cfg_attr(test, serde(untagged))]
#[allow(clippy::large_enum_variant)]
pub enum TaskDetails {
#[cfg_attr(test, serde(rename_all = "camelCase"))]
DocumentAddition { received_documents: usize, indexed_documents: Option<u64> },
#[cfg_attr(test, serde(rename_all = "camelCase"))]
Settings {
#[cfg_attr(test, serde(flatten))]
settings: Settings<Unchecked>,
},
#[cfg_attr(test, serde(rename_all = "camelCase"))]
IndexInfo { primary_key: Option<String> },
#[cfg_attr(test, serde(rename_all = "camelCase"))]
DocumentDeletion { received_document_ids: usize, deleted_documents: Option<u64> },
#[cfg_attr(test, serde(rename_all = "camelCase"))]
ClearAll { deleted_documents: Option<u64> },
#[cfg_attr(test, serde(rename_all = "camelCase"))]
Dump { dump_uid: String },
}
/// Serialize a `time::Duration` as a best-effort ISO 8601 duration while waiting for
/// https://github.com/time-rs/time/issues/378.
/// This code is a port of the old `time` code that was removed in 0.2.
#[cfg(test)]
fn serialize_duration<S: serde::Serializer>(
duration: &Option<Duration>,
serializer: S,
) -> Result<S::Ok, S::Error> {
use std::fmt::Write;
match duration {
Some(duration) => {
// technically speaking, negative duration is not valid ISO 8601
if duration.is_negative() {
return serializer.serialize_none();
}
const SECS_PER_DAY: i64 = Duration::DAY.whole_seconds();
let secs = duration.whole_seconds();
let days = secs / SECS_PER_DAY;
let secs = secs - days * SECS_PER_DAY;
let hasdate = days != 0;
let nanos = duration.subsec_nanoseconds();
let hastime = (secs != 0 || nanos != 0) || !hasdate;
// none of the following `write!` calls to a `String` can fail
let mut res = String::new();
write!(&mut res, "P").unwrap();
if hasdate {
write!(&mut res, "{}D", days).unwrap();
}
const NANOS_PER_MILLI: i32 = Duration::MILLISECOND.subsec_nanoseconds();
const NANOS_PER_MICRO: i32 = Duration::MICROSECOND.subsec_nanoseconds();
if hastime {
if nanos == 0 {
write!(&mut res, "T{}S", secs).unwrap();
} else if nanos % NANOS_PER_MILLI == 0 {
write!(&mut res, "T{}.{:03}S", secs, nanos / NANOS_PER_MILLI).unwrap();
} else if nanos % NANOS_PER_MICRO == 0 {
write!(&mut res, "T{}.{:06}S", secs, nanos / NANOS_PER_MICRO).unwrap();
} else {
write!(&mut res, "T{}.{:09}S", secs, nanos).unwrap();
}
}
serializer.serialize_str(&res)
}
None => serializer.serialize_none(),
}
}
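// Illustrative sketch of the formatting above (not in the original diff): one day plus
// a millisecond-precision remainder renders as an ISO 8601 duration like "P1DT1.500S".
#[cfg(test)]
mod serialize_duration_tests {
    use super::*;

    #[derive(serde::Serialize)]
    struct Wrapper(#[serde(serialize_with = "serialize_duration")] Option<Duration>);

    #[test]
    fn renders_iso_8601() {
        let duration = Duration::days(1) + Duration::milliseconds(1500);
        let json = serde_json::to_string(&Wrapper(Some(duration))).unwrap();
        assert_eq!(json, r#""P1DT1.500S""#);
        assert_eq!(serde_json::to_string(&Wrapper(None)).unwrap(), "null");
    }
}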

dump/src/reader/v6/mod.rs
@@ -0,0 +1,191 @@
use std::fs::{self, File};
use std::io::{BufRead, BufReader, ErrorKind};
use std::path::Path;
pub use meilisearch_types::milli;
use tempfile::TempDir;
use time::OffsetDateTime;
use uuid::Uuid;
use super::Document;
use crate::{Error, IndexMetadata, Result, Version};
pub type Metadata = crate::Metadata;
pub type Settings<T> = meilisearch_types::settings::Settings<T>;
pub type Checked = meilisearch_types::settings::Checked;
pub type Unchecked = meilisearch_types::settings::Unchecked;
pub type Task = crate::TaskDump;
pub type Key = meilisearch_types::keys::Key;
// ===== Other types to clarify the code of the compat module
// everything related to the tasks
pub type Status = meilisearch_types::tasks::Status;
pub type Kind = crate::KindDump;
pub type Details = meilisearch_types::tasks::Details;
// everything related to the settings
pub type Setting<T> = meilisearch_types::milli::update::Setting<T>;
pub type TypoTolerance = meilisearch_types::settings::TypoSettings;
pub type MinWordSizeForTypos = meilisearch_types::settings::MinWordSizeTyposSetting;
pub type FacetingSettings = meilisearch_types::settings::FacetingSettings;
pub type PaginationSettings = meilisearch_types::settings::PaginationSettings;
// everything related to the api keys
pub type Action = meilisearch_types::keys::Action;
pub type StarOr<T> = meilisearch_types::star_or::StarOr<T>;
pub type IndexUid = meilisearch_types::index_uid::IndexUid;
// everything related to the errors
pub type ResponseError = meilisearch_types::error::ResponseError;
pub type Code = meilisearch_types::error::Code;
pub struct V6Reader {
dump: TempDir,
instance_uid: Option<Uuid>,
metadata: Metadata,
tasks: BufReader<File>,
keys: BufReader<File>,
}
impl V6Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let instance_uid = match fs::read_to_string(dump.path().join("instance_uid.uuid")) {
Ok(uuid) => Some(Uuid::parse_str(&uuid)?),
Err(e) if e.kind() == ErrorKind::NotFound => None,
Err(e) => return Err(e.into()),
};
Ok(V6Reader {
metadata: serde_json::from_reader(&*meta_file)?,
instance_uid,
tasks: BufReader::new(File::open(dump.path().join("tasks").join("queue.jsonl"))?),
keys: BufReader::new(File::open(dump.path().join("keys.jsonl"))?),
dump,
})
}
pub fn version(&self) -> Version {
Version::V6
}
pub fn date(&self) -> Option<OffsetDateTime> {
Some(self.metadata.dump_date)
}
pub fn instance_uid(&self) -> Result<Option<Uuid>> {
Ok(self.instance_uid)
}
pub fn indexes(&self) -> Result<Box<dyn Iterator<Item = Result<V6IndexReader>> + '_>> {
let entries = fs::read_dir(self.dump.path().join("indexes"))?;
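// Every sub-directory of `indexes/` is an index; stray files are skipped below.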
Ok(Box::new(
entries
.map(|entry| -> Result<Option<_>> {
let entry = entry?;
if entry.file_type()?.is_dir() {
let index = V6IndexReader::new(
entry.file_name().to_str().ok_or(Error::BadIndexName)?.to_string(),
&entry.path(),
)?;
Ok(Some(index))
} else {
Ok(None)
}
})
.filter_map(|entry| entry.transpose()),
))
}
pub fn tasks(
&mut self,
) -> Box<dyn Iterator<Item = Result<(Task, Option<Box<super::UpdateFile>>)>> + '_> {
Box::new((&mut self.tasks).lines().map(|line| -> Result<_> {
let task: Task = serde_json::from_str(&line?).unwrap();
let update_file_path = self
.dump
.path()
.join("tasks")
.join("update_files")
.join(format!("{}.jsonl", task.uid));
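// Unlike v5, the v6 layout names update files after the task uid rather than a
// content uuid, so checking for the file's existence on disk is all that is needed.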
if update_file_path.exists() {
Ok((
task,
Some(Box::new(UpdateFile::new(&update_file_path).unwrap())
as Box<super::UpdateFile>),
))
} else {
Ok((task, None))
}
}))
}
pub fn keys(&mut self) -> Box<dyn Iterator<Item = Result<Key>> + '_> {
Box::new(
(&mut self.keys).lines().map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }),
)
}
}
pub struct UpdateFile {
reader: BufReader<File>,
}
impl UpdateFile {
fn new(path: &Path) -> Result<Self> {
Ok(UpdateFile { reader: BufReader::new(File::open(path)?) })
}
}
impl Iterator for UpdateFile {
type Item = Result<Document>;
fn next(&mut self) -> Option<Self::Item> {
(&mut self.reader)
.lines()
.map(|line| {
line.map_err(Error::from)
.and_then(|line| serde_json::from_str(&line).map_err(Error::from))
})
.next()
}
}
pub struct V6IndexReader {
metadata: IndexMetadata,
documents: BufReader<File>,
settings: BufReader<File>,
}
impl V6IndexReader {
pub fn new(_name: String, path: &Path) -> Result<Self> {
let metadata = File::open(path.join("metadata.json"))?;
let ret = V6IndexReader {
metadata: serde_json::from_reader(metadata)?,
documents: BufReader::new(File::open(path.join("documents.jsonl"))?),
settings: BufReader::new(File::open(path.join("settings.json"))?),
};
Ok(ret)
}
pub fn metadata(&self) -> &IndexMetadata {
&self.metadata
}
pub fn documents(&mut self) -> Result<impl Iterator<Item = Result<Document>> + '_> {
Ok((&mut self.documents)
.lines()
.map(|line| -> Result<_> { Ok(serde_json::from_str(&line?)?) }))
}
pub fn settings(&mut self) -> Result<Settings<Checked>> {
let settings: Settings<Unchecked> = serde_json::from_reader(&mut self.settings)?;
Ok(settings.check())
}
}

dump/src/writer.rs
@@ -0,0 +1,350 @@
use std::fs::{self, File};
use std::io::{BufWriter, Write};
use std::path::PathBuf;
use flate2::write::GzEncoder;
use flate2::Compression;
use meilisearch_types::keys::Key;
use meilisearch_types::settings::{Checked, Settings};
use serde_json::{Map, Value};
use tempfile::TempDir;
use time::OffsetDateTime;
use uuid::Uuid;
use crate::reader::Document;
use crate::{IndexMetadata, Metadata, Result, TaskDump, CURRENT_DUMP_VERSION};
pub struct DumpWriter {
dir: TempDir,
}
impl DumpWriter {
pub fn new(instance_uuid: Option<Uuid>) -> Result<DumpWriter> {
let dir = TempDir::new()?;
if let Some(instance_uuid) = instance_uuid {
fs::write(
dir.path().join("instance_uid.uuid"),
&instance_uuid.as_hyphenated().to_string(),
)?;
}
let metadata = Metadata {
dump_version: CURRENT_DUMP_VERSION,
db_version: env!("CARGO_PKG_VERSION").to_string(),
dump_date: OffsetDateTime::now_utc(),
};
fs::write(dir.path().join("metadata.json"), serde_json::to_string(&metadata)?)?;
std::fs::create_dir(&dir.path().join("indexes"))?;
Ok(DumpWriter { dir })
}
pub fn create_index(&self, index_name: &str, metadata: &IndexMetadata) -> Result<IndexWriter> {
IndexWriter::new(self.dir.path().join("indexes").join(index_name), metadata)
}
pub fn create_keys(&self) -> Result<KeyWriter> {
KeyWriter::new(self.dir.path().to_path_buf())
}
pub fn create_tasks_queue(&self) -> Result<TaskWriter> {
TaskWriter::new(self.dir.path().join("tasks"))
}
pub fn persist_to(self, mut writer: impl Write) -> Result<()> {
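// Stream the whole dump directory as a gzipped tarball directly into `writer`.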
let gz_encoder = GzEncoder::new(&mut writer, Compression::default());
let mut tar_encoder = tar::Builder::new(gz_encoder);
tar_encoder.append_dir_all(".", self.dir.path())?;
let gz_encoder = tar_encoder.into_inner()?;
gz_encoder.finish()?;
writer.flush()?;
Ok(())
}
}
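A minimal end-to-end sketch of the writer API defined in this file; `movies`, the inline document, and `dump.tar.gz` are placeholders, not values taken from the real codebase:

fn write_minimal_dump(
    metadata: &IndexMetadata,
    settings: &Settings<Checked>,
    key: &Key,
) -> Result<()> {
    let dump = DumpWriter::new(None)?;

    // One `IndexWriter` per index: push the documents, then hand over the settings.
    let mut index = dump.create_index("movies", metadata)?;
    let document = serde_json::json!({ "id": 1 }).as_object().unwrap().clone();
    index.push_document(&document)?;
    index.flush()?;
    index.settings(settings)?;

    // API keys live in `keys.jsonl` at the root of the dump.
    let mut keys = dump.create_keys()?;
    keys.push_key(key)?;
    keys.flush()?;

    // Finally stream the whole directory out as a gzipped tarball.
    let file = File::create("dump.tar.gz")?;
    dump.persist_to(BufWriter::new(file))?;
    Ok(())
}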
pub struct KeyWriter {
keys: BufWriter<File>,
}
impl KeyWriter {
pub(crate) fn new(path: PathBuf) -> Result<Self> {
let keys = File::create(path.join("keys.jsonl"))?;
Ok(KeyWriter { keys: BufWriter::new(keys) })
}
pub fn push_key(&mut self, key: &Key) -> Result<()> {
self.keys.write_all(&serde_json::to_vec(key)?)?;
self.keys.write_all(b"\n")?;
Ok(())
}
pub fn flush(mut self) -> Result<()> {
self.keys.flush()?;
Ok(())
}
}
pub struct TaskWriter {
queue: BufWriter<File>,
update_files: PathBuf,
}
impl TaskWriter {
pub(crate) fn new(path: PathBuf) -> Result<Self> {
std::fs::create_dir(&path)?;
let queue = File::create(path.join("queue.jsonl"))?;
let update_files = path.join("update_files");
std::fs::create_dir(&update_files)?;
Ok(TaskWriter { queue: BufWriter::new(queue), update_files })
}
/// Pushes a task into the dump.
/// If the task has an associated `update_file`, the `task_id` is used as its name.
pub fn push_task(&mut self, task: &TaskDump) -> Result<UpdateFile> {
self.queue.write_all(&serde_json::to_vec(task)?)?;
self.queue.write_all(b"\n")?;
Ok(UpdateFile::new(self.update_files.join(format!("{}.jsonl", task.uid))))
}
pub fn flush(mut self) -> Result<()> {
self.queue.flush()?;
Ok(())
}
}
pub struct UpdateFile {
path: PathBuf,
writer: Option<BufWriter<File>>,
}
impl UpdateFile {
pub(crate) fn new(path: PathBuf) -> UpdateFile {
UpdateFile { path, writer: None }
}
pub fn push_document(&mut self, document: &Document) -> Result<()> {
if let Some(writer) = self.writer.as_mut() {
writer.write_all(&serde_json::to_vec(document)?)?;
writer.write_all(b"\n")?;
} else {
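// Lazily create the file on the first pushed document so that tasks
// without any documents never leave an empty update file behind.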
let file = File::create(&self.path)?;
self.writer = Some(BufWriter::new(file));
self.push_document(document)?;
}
Ok(())
}
pub fn flush(self) -> Result<()> {
if let Some(mut writer) = self.writer {
writer.flush()?;
}
Ok(())
}
}
pub struct IndexWriter {
documents: BufWriter<File>,
settings: File,
}
impl IndexWriter {
pub(self) fn new(path: PathBuf, metadata: &IndexMetadata) -> Result<Self> {
std::fs::create_dir(&path)?;
let metadata_file = File::create(path.join("metadata.json"))?;
serde_json::to_writer(metadata_file, metadata)?;
let documents = File::create(path.join("documents.jsonl"))?;
let settings = File::create(path.join("settings.json"))?;
Ok(IndexWriter { documents: BufWriter::new(documents), settings })
}
pub fn push_document(&mut self, document: &Map<String, Value>) -> Result<()> {
serde_json::to_writer(&mut self.documents, document)?;
self.documents.write_all(b"\n")?;
Ok(())
}
pub fn flush(&mut self) -> Result<()> {
self.documents.flush()?;
Ok(())
}
pub fn settings(mut self, settings: &Settings<Checked>) -> Result<()> {
self.settings.write_all(&serde_json::to_vec(&settings)?)?;
Ok(())
}
}
#[cfg(test)]
pub(crate) mod test {
use std::fmt::Write;
use std::io::BufReader;
use std::path::Path;
use std::str::FromStr;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use meilisearch_types::settings::Unchecked;
use super::*;
use crate::reader::Document;
use crate::test::{
create_test_api_keys, create_test_documents, create_test_dump, create_test_instance_uid,
create_test_settings, create_test_tasks,
};
fn create_directory_hierarchy(dir: &Path) -> String {
let mut ret = String::new();
writeln!(ret, ".").unwrap();
ret.push_str(&_create_directory_hierarchy(dir, 0));
ret
}
fn _create_directory_hierarchy(dir: &Path, depth: usize) -> String {
let mut ret = String::new();
// the entries are not guaranteed to be returned in the same order, thus we need to sort them.
let mut entries =
fs::read_dir(dir).unwrap().collect::<std::result::Result<Vec<_>, _>>().unwrap();
// I want the directories first and then sort by name.
entries.sort_by(|a, b| {
let (aft, bft) = (a.file_type().unwrap(), b.file_type().unwrap());
if aft.is_dir() && bft.is_dir() {
a.file_name().cmp(&b.file_name())
} else if aft.is_file() && bft.is_dir() {
std::cmp::Ordering::Greater
} else if bft.is_file() && aft.is_dir() {
std::cmp::Ordering::Less
} else {
a.file_name().cmp(&b.file_name())
}
});
for (idx, entry) in entries.iter().enumerate() {
let mut ident = String::new();
for _ in 0..depth {
ident.push('│');
ident.push_str(&" ".repeat(4));
}
if idx == entries.len() - 1 {
ident.push('└');
} else {
ident.push('├');
}
ident.push_str(&"-".repeat(4));
let name = entry.file_name().into_string().unwrap();
let file_type = entry.file_type().unwrap();
let is_dir = if file_type.is_dir() { "/" } else { "" };
assert!(!file_type.is_symlink());
writeln!(ret, "{ident} {name}{is_dir}").unwrap();
if file_type.is_dir() {
ret.push_str(&_create_directory_hierarchy(&entry.path(), depth + 1));
}
}
ret
}
#[test]
#[ignore]
fn test_creating_dump() {
let file = create_test_dump();
let mut file = BufReader::new(file);
// ============ ensuring we wrote everything in the correct place.
let dump = tempfile::tempdir().unwrap();
let gz = GzDecoder::new(&mut file);
let mut tar = tar::Archive::new(gz);
tar.unpack(dump.path()).unwrap();
let dump_path = dump.path();
// ==== checking global file hierarchy (we want to be sure there isn't too many files or too few)
insta::assert_display_snapshot!(create_directory_hierarchy(dump_path), @r###"
.
├---- indexes/
│ └---- doggos/
│ │ ├---- documents.jsonl
│ │ ├---- metadata.json
│ │ └---- settings.json
├---- tasks/
│ ├---- update_files/
│ │ └---- 1.jsonl
│ └---- queue.jsonl
├---- instance_uid.uuid
├---- keys.jsonl
└---- metadata.json
"###);
// ==== checking the top level infos
let metadata = fs::read_to_string(dump_path.join("metadata.json")).unwrap();
let metadata: Metadata = serde_json::from_str(&metadata).unwrap();
insta::assert_json_snapshot!(metadata, { ".dumpDate" => "[date]" }, @r###"
{
"dumpVersion": "V6",
"dbVersion": "0.29.0",
"dumpDate": "[date]"
}
"###);
let instance_uid = fs::read_to_string(dump_path.join("instance_uid.uuid")).unwrap();
assert_eq!(Uuid::from_str(&instance_uid).unwrap(), create_test_instance_uid());
// ==== checking the index
let docs = fs::read_to_string(dump_path.join("indexes/doggos/documents.jsonl")).unwrap();
for (document, expected) in docs.lines().zip(create_test_documents()) {
assert_eq!(serde_json::from_str::<Map<String, Value>>(document).unwrap(), expected);
}
let test_settings =
fs::read_to_string(dump_path.join("indexes/doggos/settings.json")).unwrap();
assert_eq!(
serde_json::from_str::<Settings<Unchecked>>(&test_settings).unwrap(),
create_test_settings().into_unchecked()
);
let metadata = fs::read_to_string(dump_path.join("indexes/doggos/metadata.json")).unwrap();
let metadata: IndexMetadata = serde_json::from_str(&metadata).unwrap();
insta::assert_json_snapshot!(metadata, { ".createdAt" => "[date]", ".updatedAt" => "[date]" }, @r###"
{
"uid": "doggo",
"primaryKey": null,
"createdAt": "[date]",
"updatedAt": "[date]"
}
"###);
// ==== checking the task queue
let tasks_queue = fs::read_to_string(dump_path.join("tasks/queue.jsonl")).unwrap();
for (task, expected) in tasks_queue.lines().zip(create_test_tasks()) {
assert_eq!(serde_json::from_str::<TaskDump>(task).unwrap(), expected.0);
if let Some(expected_update) = expected.1 {
let path = dump_path.join(format!("tasks/update_files/{}.jsonl", expected.0.uid));
println!("trying to open {}", path.display());
let update = fs::read_to_string(path).unwrap();
let documents: Vec<Document> =
update.lines().map(|line| serde_json::from_str(line).unwrap()).collect();
assert_eq!(documents, expected_update);
}
}
// ==== checking the keys
let keys = fs::read_to_string(dump_path.join("keys.jsonl")).unwrap();
for (key, expected) in keys.lines().zip(create_test_api_keys()) {
assert_eq!(serde_json::from_str::<Key>(key).unwrap(), expected);
}
}
}

BIN dump/tests/assets/v2.dump Normal file (binary file not shown)
BIN dump/tests/assets/v3.dump Normal file (binary file not shown)
BIN dump/tests/assets/v4.dump Normal file (binary file not shown)
BIN dump/tests/assets/v5.dump Normal file (binary file not shown)

file-store/Cargo.toml Normal file

@ -0,0 +1,12 @@
[package]
name = "file-store"
version = "0.30.0"
edition = "2021"
[dependencies]
tempfile = "3.3.0"
thiserror = "1.0.30"
uuid = { version = "1.1.2", features = ["serde", "v4"] }
[dev-dependencies]
faux = "0.1.8"

file-store/src/lib.rs Normal file

@ -0,0 +1,132 @@
use std::collections::BTreeSet;
use std::fs::File as StdFile;
use std::ops::{Deref, DerefMut};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use tempfile::NamedTempFile;
use uuid::Uuid;
const UPDATE_FILES_PATH: &str = "updates/updates_files";
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error(transparent)]
IoError(#[from] std::io::Error),
#[error(transparent)]
PersistError(#[from] tempfile::PersistError),
}
pub type Result<T> = std::result::Result<T, Error>;
impl Deref for File {
type Target = NamedTempFile;
fn deref(&self) -> &Self::Target {
&self.file
}
}
impl DerefMut for File {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.file
}
}
#[cfg_attr(test, faux::create)]
#[derive(Clone, Debug)]
pub struct FileStore {
path: PathBuf,
}
#[cfg(not(test))]
impl FileStore {
pub fn new(path: impl AsRef<Path>) -> Result<FileStore> {
let path = path.as_ref().to_path_buf();
std::fs::create_dir_all(&path)?;
Ok(FileStore { path })
}
}
#[cfg_attr(test, faux::methods)]
impl FileStore {
/// Creates a new temporary update file.
/// A call to `persist` is needed to persist the file in the database.
pub fn new_update(&self) -> Result<(Uuid, File)> {
let file = NamedTempFile::new_in(&self.path)?;
let uuid = Uuid::new_v4();
let path = self.path.join(uuid.to_string());
let update_file = File { file, path };
Ok((uuid, update_file))
}
/// Creates a new temporary update file with the given Uuid.
/// A call to `persist` is needed to persist the file in the database.
pub fn new_update_with_uuid(&self, uuid: u128) -> Result<(Uuid, File)> {
let file = NamedTempFile::new_in(&self.path)?;
let uuid = Uuid::from_u128(uuid);
let path = self.path.join(uuid.to_string());
let update_file = File { file, path };
Ok((uuid, update_file))
}
/// Returns the file corresponding to the requested uuid.
pub fn get_update(&self, uuid: Uuid) -> Result<StdFile> {
let path = self.get_update_path(uuid);
let file = StdFile::open(path)?;
Ok(file)
}
/// Returns the path that corresponds to this uuid. Note that the path may not exist.
pub fn get_update_path(&self, uuid: Uuid) -> PathBuf {
self.path.join(uuid.to_string())
}
/// Copies the content of the update file pointed to by `uuid` to the `dst` directory.
pub fn snapshot(&self, uuid: Uuid, dst: impl AsRef<Path>) -> Result<()> {
let src = self.path.join(uuid.to_string());
let mut dst = dst.as_ref().join(UPDATE_FILES_PATH);
std::fs::create_dir_all(&dst)?;
dst.push(uuid.to_string());
std::fs::copy(src, dst)?;
Ok(())
}
pub fn get_size(&self, uuid: Uuid) -> Result<u64> {
Ok(self.get_update(uuid)?.metadata()?.len())
}
pub fn delete(&self, uuid: Uuid) -> Result<()> {
let path = self.path.join(uuid.to_string());
std::fs::remove_file(path)?;
Ok(())
}
/// List the Uuids of the files in the FileStore
///
/// This function is meant to be used by tests only.
#[doc(hidden)]
pub fn __all_uuids(&self) -> BTreeSet<Uuid> {
let mut uuids = BTreeSet::new();
for entry in self.path.read_dir().unwrap() {
let entry = entry.unwrap();
let uuid = Uuid::from_str(entry.file_name().to_str().unwrap()).unwrap();
uuids.insert(uuid);
}
uuids
}
}
pub struct File {
path: PathBuf,
file: NamedTempFile,
}
impl File {
pub fn persist(self) -> Result<()> {
self.file.persist(&self.path)?;
Ok(())
}
}
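A minimal sketch of the `FileStore` lifecycle; the temporary directory and the written bytes are placeholders:

use std::io::Write;

fn update_file_roundtrip() -> Result<()> {
    let dir = tempfile::tempdir()?;
    let store = FileStore::new(dir.path())?;

    // Write through the `Deref` to the inner `NamedTempFile`...
    let (uuid, mut file) = store.new_update()?;
    file.write_all(b"{\"id\": 1}\n")?;

    // ...then persist it under its uuid so it outlives the temp file.
    file.persist()?;
    assert_eq!(store.get_size(uuid)?, 10);

    store.delete(uuid)?;
    Ok(())
}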

File diff suppressed because it is too large

index-scheduler/Cargo.toml Normal file
@ -0,0 +1,30 @@
[package]
name = "index-scheduler"
version = "0.30.0"
edition = "2021"
[dependencies]
anyhow = "1.0.64"
bincode = "1.3.3"
csv = "1.1.6"
derive_builder = "0.11.2"
dump = { path = "../dump" }
enum-iterator = "1.1.3"
file-store = { path = "../file-store" }
log = "0.4.14"
meilisearch-types = { path = "../meilisearch-types" }
roaring = { version = "0.10.0", features = ["serde"] }
serde = { version = "1.0.136", features = ["derive"] }
serde_json = { version = "1.0.85", features = ["preserve_order"] }
synchronoise = "1.0.1"
tempfile = "3.3.0"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.1.2", features = ["serde", "v4"] }
[dev-dependencies]
big_s = "1.0.2"
crossbeam = "0.8.2"
insta = { version = "1.19.1", features = ["json", "redactions"] }
meili-snap = { path = "../meili-snap" }
nelson = { git = "https://github.com/meilisearch/nelson.git", rev = "675f13885548fb415ead8fbb447e9e6d9314000a"}

index-scheduler/src/autobatcher.rs Normal file

@ -0,0 +1,727 @@
/*!
The autobatcher is responsible for combining the next enqueued
tasks affecting a single index into a [batch](crate::batch::Batch).
The main function of the autobatcher is [`next_autobatch`].
*/
use std::ops::ControlFlow::{self, Break, Continue};
use meilisearch_types::milli::update::IndexDocumentsMethod::{
self, ReplaceDocuments, UpdateDocuments,
};
use meilisearch_types::tasks::TaskId;
use crate::KindWithContent;
/// Succinctly describes a task's [`Kind`](meilisearch_types::tasks::Kind)
/// for the purpose of simplifying the implementation of the autobatcher.
///
/// Only the non-prioritised tasks that can be grouped in a batch have a corresponding [`AutobatchKind`]
enum AutobatchKind {
DocumentImport { method: IndexDocumentsMethod, allow_index_creation: bool },
DocumentDeletion,
DocumentClear,
Settings { allow_index_creation: bool },
IndexCreation,
IndexDeletion,
IndexUpdate,
IndexSwap,
}
impl AutobatchKind {
#[rustfmt::skip]
fn allow_index_creation(&self) -> Option<bool> {
match self {
AutobatchKind::DocumentImport { allow_index_creation, .. }
| AutobatchKind::Settings { allow_index_creation, .. } => Some(*allow_index_creation),
_ => None,
}
}
}
impl From<KindWithContent> for AutobatchKind {
fn from(kind: KindWithContent) -> Self {
match kind {
KindWithContent::DocumentAdditionOrUpdate { method, allow_index_creation, .. } => {
AutobatchKind::DocumentImport { method, allow_index_creation }
}
KindWithContent::DocumentDeletion { .. } => AutobatchKind::DocumentDeletion,
KindWithContent::DocumentClear { .. } => AutobatchKind::DocumentClear,
KindWithContent::SettingsUpdate { allow_index_creation, is_deletion, .. } => {
AutobatchKind::Settings {
allow_index_creation: allow_index_creation && !is_deletion,
}
}
KindWithContent::IndexDeletion { .. } => AutobatchKind::IndexDeletion,
KindWithContent::IndexCreation { .. } => AutobatchKind::IndexCreation,
KindWithContent::IndexUpdate { .. } => AutobatchKind::IndexUpdate,
KindWithContent::IndexSwap { .. } => AutobatchKind::IndexSwap,
KindWithContent::TaskCancelation { .. }
| KindWithContent::TaskDeletion { .. }
| KindWithContent::DumpCreation { .. }
| KindWithContent::SnapshotCreation => {
panic!("The autobatcher should never be called with tasks that don't apply to an index.")
}
}
}
}
#[derive(Debug)]
pub enum BatchKind {
DocumentClear {
ids: Vec<TaskId>,
},
DocumentImport {
method: IndexDocumentsMethod,
allow_index_creation: bool,
import_ids: Vec<TaskId>,
},
DocumentDeletion {
deletion_ids: Vec<TaskId>,
},
ClearAndSettings {
other: Vec<TaskId>,
allow_index_creation: bool,
settings_ids: Vec<TaskId>,
},
SettingsAndDocumentImport {
settings_ids: Vec<TaskId>,
method: IndexDocumentsMethod,
allow_index_creation: bool,
import_ids: Vec<TaskId>,
},
Settings {
allow_index_creation: bool,
settings_ids: Vec<TaskId>,
},
IndexDeletion {
ids: Vec<TaskId>,
},
IndexCreation {
id: TaskId,
},
IndexUpdate {
id: TaskId,
},
IndexSwap {
id: TaskId,
},
}
impl BatchKind {
#[rustfmt::skip]
fn allow_index_creation(&self) -> Option<bool> {
match self {
BatchKind::DocumentImport { allow_index_creation, .. }
| BatchKind::ClearAndSettings { allow_index_creation, .. }
| BatchKind::SettingsAndDocumentImport { allow_index_creation, .. }
| BatchKind::Settings { allow_index_creation, .. } => Some(*allow_index_creation),
_ => None,
}
}
}
impl BatchKind {
/// Returns a `ControlFlow::Break` if you must stop right now.
/// The boolean tells you whether an index has been created by the batched task.
/// To ease the writing of the code, `true` can be returned when you don't need to create an index,
/// but `false` can't be returned if you need to create an index.
// TODO use an AutoBatchKind as input
pub fn new(
task_id: TaskId,
kind: KindWithContent,
) -> (ControlFlow<BatchKind, BatchKind>, bool) {
use AutobatchKind as K;
match AutobatchKind::from(kind) {
K::IndexCreation => (Break(BatchKind::IndexCreation { id: task_id }), true),
K::IndexDeletion => (Break(BatchKind::IndexDeletion { ids: vec![task_id] }), false),
K::IndexUpdate => (Break(BatchKind::IndexUpdate { id: task_id }), false),
K::IndexSwap => (Break(BatchKind::IndexSwap { id: task_id }), false),
K::DocumentClear => (Continue(BatchKind::DocumentClear { ids: vec![task_id] }), false),
K::DocumentImport { method, allow_index_creation } => (
Continue(BatchKind::DocumentImport {
method,
allow_index_creation,
import_ids: vec![task_id],
}),
allow_index_creation,
),
K::DocumentDeletion => {
(Continue(BatchKind::DocumentDeletion { deletion_ids: vec![task_id] }), false)
}
K::Settings { allow_index_creation } => (
Continue(BatchKind::Settings { allow_index_creation, settings_ids: vec![task_id] }),
allow_index_creation,
),
}
}
/// Returns a `ControlFlow::Break` if you must stop right now.
/// The boolean tells you whether an index has been created by the batched task.
/// To ease the writing of the code, `true` can be returned when you don't need to create an index,
/// but `false` can't be returned if you need to create an index.
#[rustfmt::skip]
fn accumulate(self, id: TaskId, kind: AutobatchKind, index_already_exists: bool) -> ControlFlow<BatchKind, BatchKind> {
use AutobatchKind as K;
match (self, kind) {
// We don't batch any of these operations
(this, K::IndexCreation | K::IndexUpdate | K::IndexSwap) => Break(this),
// We must not batch tasks that don't have the same index creation rights if the index doesn't already exist.
(this, kind) if !index_already_exists && this.allow_index_creation() == Some(false) && kind.allow_index_creation() == Some(true) => {
Break(this)
},
// The index deletion can batch with everything but must stop after
(
BatchKind::DocumentClear { mut ids }
| BatchKind::DocumentDeletion { deletion_ids: mut ids }
| BatchKind::DocumentImport { method: _, allow_index_creation: _, import_ids: mut ids }
| BatchKind::Settings { allow_index_creation: _, settings_ids: mut ids },
K::IndexDeletion,
) => {
ids.push(id);
Break(BatchKind::IndexDeletion { ids })
}
(
BatchKind::ClearAndSettings { settings_ids: mut ids, allow_index_creation: _, mut other }
| BatchKind::SettingsAndDocumentImport { import_ids: mut ids, method: _, allow_index_creation: _, settings_ids: mut other },
K::IndexDeletion,
) => {
ids.push(id);
ids.append(&mut other);
Break(BatchKind::IndexDeletion { ids })
}
(
BatchKind::DocumentClear { mut ids },
K::DocumentClear | K::DocumentDeletion,
) => {
ids.push(id);
Continue(BatchKind::DocumentClear { ids })
}
(
this @ BatchKind::DocumentClear { .. },
K::DocumentImport { .. } | K::Settings { .. },
) => Break(this),
(
BatchKind::DocumentImport { method: _, allow_index_creation: _, import_ids: mut ids },
K::DocumentClear,
) => {
ids.push(id);
Continue(BatchKind::DocumentClear { ids })
}
// we can autobatch the same kind of document additions / updates
(
BatchKind::DocumentImport { method: ReplaceDocuments, allow_index_creation, mut import_ids },
K::DocumentImport { method: ReplaceDocuments, .. },
) => {
import_ids.push(id);
Continue(BatchKind::DocumentImport {
method: ReplaceDocuments,
allow_index_creation,
import_ids,
})
}
(
BatchKind::DocumentImport { method: UpdateDocuments, allow_index_creation, mut import_ids },
K::DocumentImport { method: UpdateDocuments, .. },
) => {
import_ids.push(id);
Continue(BatchKind::DocumentImport {
method: UpdateDocuments,
allow_index_creation,
import_ids,
})
}
// but we can't autobatch documents if they are not of the same kind
// this match branch MUST be AFTER the previous one
(
this @ BatchKind::DocumentImport { .. },
K::DocumentDeletion | K::DocumentImport { .. },
) => Break(this),
(
BatchKind::DocumentImport { method, allow_index_creation, import_ids },
K::Settings { .. },
) => Continue(BatchKind::SettingsAndDocumentImport {
settings_ids: vec![id],
method,
allow_index_creation,
import_ids,
}),
(BatchKind::DocumentDeletion { mut deletion_ids }, K::DocumentClear) => {
deletion_ids.push(id);
Continue(BatchKind::DocumentClear { ids: deletion_ids })
}
(this @ BatchKind::DocumentDeletion { .. }, K::DocumentImport { .. }) => Break(this),
(BatchKind::DocumentDeletion { mut deletion_ids }, K::DocumentDeletion) => {
deletion_ids.push(id);
Continue(BatchKind::DocumentDeletion { deletion_ids })
}
(this @ BatchKind::DocumentDeletion { .. }, K::Settings { .. }) => Break(this),
(
BatchKind::Settings { settings_ids, allow_index_creation },
K::DocumentClear,
) => Continue(BatchKind::ClearAndSettings {
settings_ids,
allow_index_creation,
other: vec![id],
}),
(
this @ BatchKind::Settings { .. },
K::DocumentImport { .. } | K::DocumentDeletion,
) => Break(this),
(
BatchKind::Settings { mut settings_ids, allow_index_creation },
K::Settings { .. },
) => {
settings_ids.push(id);
Continue(BatchKind::Settings {
allow_index_creation,
settings_ids,
})
}
(
BatchKind::ClearAndSettings { mut other, settings_ids, allow_index_creation },
K::DocumentClear,
) => {
other.push(id);
Continue(BatchKind::ClearAndSettings {
other,
settings_ids,
allow_index_creation,
})
}
(this @ BatchKind::ClearAndSettings { .. }, K::DocumentImport { .. }) => Break(this),
(
BatchKind::ClearAndSettings {
mut other,
settings_ids,
allow_index_creation,
},
K::DocumentDeletion,
) => {
other.push(id);
Continue(BatchKind::ClearAndSettings {
other,
settings_ids,
allow_index_creation,
})
}
(
BatchKind::ClearAndSettings { mut settings_ids, other, allow_index_creation },
K::Settings { .. },
) => {
settings_ids.push(id);
Continue(BatchKind::ClearAndSettings {
other,
settings_ids,
allow_index_creation,
})
}
(
BatchKind::SettingsAndDocumentImport { settings_ids, method: _, import_ids: mut other, allow_index_creation },
K::DocumentClear,
) => {
other.push(id);
Continue(BatchKind::ClearAndSettings {
settings_ids,
other,
allow_index_creation,
})
}
(
BatchKind::SettingsAndDocumentImport { settings_ids, method: ReplaceDocuments, mut import_ids, allow_index_creation },
K::DocumentImport { method: ReplaceDocuments, .. },
) => {
import_ids.push(id);
Continue(BatchKind::SettingsAndDocumentImport {
settings_ids,
method: ReplaceDocuments,
allow_index_creation,
import_ids,
})
}
(
BatchKind::SettingsAndDocumentImport { settings_ids, method: UpdateDocuments, allow_index_creation, mut import_ids },
K::DocumentImport { method: UpdateDocuments, .. },
) => {
import_ids.push(id);
Continue(BatchKind::SettingsAndDocumentImport {
settings_ids,
method: UpdateDocuments,
allow_index_creation,
import_ids,
})
}
// But we can't batch a settings task and a doc op with another doc op
// this MUST be AFTER the two previous branches
(
this @ BatchKind::SettingsAndDocumentImport { .. },
K::DocumentDeletion | K::DocumentImport { .. },
) => Break(this),
(
BatchKind::SettingsAndDocumentImport { mut settings_ids, method, allow_index_creation, import_ids },
K::Settings { .. },
) => {
settings_ids.push(id);
Continue(BatchKind::SettingsAndDocumentImport {
settings_ids,
method,
allow_index_creation,
import_ids,
})
}
(
BatchKind::IndexCreation { .. }
| BatchKind::IndexDeletion { .. }
| BatchKind::IndexUpdate { .. }
| BatchKind::IndexSwap { .. },
_,
) => {
unreachable!()
}
}
}
}
/// Create a batch from an ordered list of tasks.
///
/// ## Preconditions
/// 1. The tasks must be enqueued and given in the order in which they were enqueued
/// 2. The tasks must not be prioritised tasks (e.g. task cancellation, dump, snapshot, task deletion)
/// 3. The tasks must all be related to the same index
///
/// ## Return
/// `None` if the list of tasks is empty. Otherwise, a [`BatchKind`] that represents
/// a subset of the given tasks, along with a boolean indicating whether the batch must create the index.
pub fn autobatch(
enqueued: Vec<(TaskId, KindWithContent)>,
index_already_exists: bool,
) -> Option<(BatchKind, bool)> {
let mut enqueued = enqueued.into_iter();
let (id, kind) = enqueued.next()?;
// index_exist keeps track of whether the index should exist at this point, after the tasks we have already batched.
let mut index_exist = index_already_exists;
let (mut acc, must_create_index) = match BatchKind::new(id, kind) {
(Continue(acc), create) => (acc, create),
(Break(acc), create) => return Some((acc, create)),
};
// if an index has been created in the previous step we can consider it as existing.
index_exist |= must_create_index;
for (id, kind) in enqueued {
acc = match acc.accumulate(id, kind.into(), index_exist) {
Continue(acc) => acc,
Break(acc) => return Some((acc, must_create_index)),
};
}
Some((acc, must_create_index))
}
#[cfg(test)]
mod tests {
use meilisearch_types::tasks::IndexSwap;
use uuid::Uuid;
use super::*;
use crate::debug_snapshot;
fn autobatch_from(
index_already_exists: bool,
input: impl IntoIterator<Item = KindWithContent>,
) -> Option<(BatchKind, bool)> {
autobatch(
input.into_iter().enumerate().map(|(id, kind)| (id as TaskId, kind)).collect(),
index_already_exists,
)
}
fn doc_imp(method: IndexDocumentsMethod, allow_index_creation: bool) -> KindWithContent {
KindWithContent::DocumentAdditionOrUpdate {
index_uid: String::from("doggo"),
primary_key: None,
method,
content_file: Uuid::new_v4(),
documents_count: 0,
allow_index_creation,
}
}
fn doc_del() -> KindWithContent {
KindWithContent::DocumentDeletion {
index_uid: String::from("doggo"),
documents_ids: Vec::new(),
}
}
fn doc_clr() -> KindWithContent {
KindWithContent::DocumentClear { index_uid: String::from("doggo") }
}
fn settings(allow_index_creation: bool) -> KindWithContent {
KindWithContent::SettingsUpdate {
index_uid: String::from("doggo"),
new_settings: Default::default(),
is_deletion: false,
allow_index_creation,
}
}
fn idx_create() -> KindWithContent {
KindWithContent::IndexCreation { index_uid: String::from("doggo"), primary_key: None }
}
fn idx_update() -> KindWithContent {
KindWithContent::IndexUpdate { index_uid: String::from("doggo"), primary_key: None }
}
fn idx_del() -> KindWithContent {
KindWithContent::IndexDeletion { index_uid: String::from("doggo") }
}
fn idx_swap() -> KindWithContent {
KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: (String::from("doggo"), String::from("catto")) }],
}
}
#[test]
fn autobatch_simple_operation_together() {
// we can autobatch one or multiple `ReplaceDocuments` together.
// if the index exists.
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp( ReplaceDocuments, true ), doc_imp(ReplaceDocuments, true )]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false), doc_imp( ReplaceDocuments, false ), doc_imp(ReplaceDocuments, false )]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0, 1, 2] }, false))");
// if it doesn't exist.
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), doc_imp( ReplaceDocuments, true ), doc_imp(ReplaceDocuments, true )]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false), doc_imp( ReplaceDocuments, true ), doc_imp(ReplaceDocuments, true )]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
// we can autobatch one or multiple `UpdateDocuments` together.
// if the index exists.
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), doc_imp(UpdateDocuments, false), doc_imp(UpdateDocuments, false)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0, 1, 2] }, false))");
// if it doesn't exist.
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), doc_imp(UpdateDocuments, false), doc_imp(UpdateDocuments, false)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0, 1, 2] }, false))");
// we can autobatch one or multiple DocumentDeletion together
debug_snapshot!(autobatch_from(true, [doc_del()]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_del(), doc_del(), doc_del()]), @"Some((DocumentDeletion { deletion_ids: [0, 1, 2] }, false))");
debug_snapshot!(autobatch_from(false, [doc_del()]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_del(), doc_del(), doc_del()]), @"Some((DocumentDeletion { deletion_ids: [0, 1, 2] }, false))");
// we can autobatch one or multiple Settings together
debug_snapshot!(autobatch_from(true, [settings(true)]), @"Some((Settings { allow_index_creation: true, settings_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [settings(true), settings(true), settings(true)]), @"Some((Settings { allow_index_creation: true, settings_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(true, [settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [settings(false), settings(false), settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0, 1, 2] }, false))");
debug_snapshot!(autobatch_from(false, [settings(true)]), @"Some((Settings { allow_index_creation: true, settings_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, [settings(true), settings(true), settings(true)]), @"Some((Settings { allow_index_creation: true, settings_ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(false, [settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [settings(false), settings(false), settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0, 1, 2] }, false))");
}
#[test]
fn simple_document_operation_dont_autobatch_with_other() {
// additions, updates and deletions can't be batched together
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(UpdateDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_del()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_del()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_del(), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_del(), doc_imp(UpdateDocuments, true)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), idx_create()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), idx_create()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_del(), idx_create()]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), idx_update()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), idx_update()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_del(), idx_update()]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), idx_swap()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), idx_swap()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_del(), idx_swap()]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
}
#[test]
fn document_addition_batch_with_settings() {
// simple case
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
// multiple settings and doc addition
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true), settings(true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [2, 3], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true), settings(true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [2, 3], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1] }, true))");
// addition and setting unordered
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), doc_imp(ReplaceDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1, 3], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), doc_imp(UpdateDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1, 3], method: UpdateDocuments, allow_index_creation: true, import_ids: [0, 2] }, true))");
// We ensure this kind of batch doesn't batch with forbidden operations
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), doc_imp(UpdateDocuments, true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), doc_imp(ReplaceDocuments, true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), doc_del()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), doc_del()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), idx_create()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), idx_create()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), idx_update()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), idx_update()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), idx_swap()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), idx_swap()]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: UpdateDocuments, allow_index_creation: true, import_ids: [0] }, true))");
}
#[test]
fn clear_and_additions() {
// these two don't need to be batched together
debug_snapshot!(autobatch_from(true, [doc_clr(), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentClear { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [doc_clr(), doc_imp(UpdateDocuments, true)]), @"Some((DocumentClear { ids: [0] }, false))");
// Basic use case
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true), doc_clr()]), @"Some((DocumentClear { ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true), doc_clr()]), @"Some((DocumentClear { ids: [0, 1, 2] }, true))");
// This batch kind doesn't mix with other document additions
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true), doc_clr(), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentClear { ids: [0, 1, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true), doc_clr(), doc_imp(UpdateDocuments, true)]), @"Some((DocumentClear { ids: [0, 1, 2] }, true))");
// But you can batch multiple clears together
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true), doc_clr(), doc_clr(), doc_clr()]), @"Some((DocumentClear { ids: [0, 1, 2, 3, 4] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), doc_imp(UpdateDocuments, true), doc_clr(), doc_clr(), doc_clr()]), @"Some((DocumentClear { ids: [0, 1, 2, 3, 4] }, true))");
}
#[test]
fn clear_and_additions_and_settings() {
// A clear doesn't need to autobatch with the settings that happen AFTER it: there are no documents left anyway
debug_snapshot!(autobatch_from(true, [doc_clr(), settings(true)]), @"Some((DocumentClear { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [settings(true), doc_clr(), settings(true)]), @"Some((ClearAndSettings { other: [1], allow_index_creation: true, settings_ids: [0, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), doc_clr()]), @"Some((ClearAndSettings { other: [0, 2], allow_index_creation: true, settings_ids: [1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), doc_clr()]), @"Some((ClearAndSettings { other: [0, 2], allow_index_creation: true, settings_ids: [1] }, true))");
}
#[test]
fn anything_and_index_deletion() {
// The `IndexDeletion` doesn't batch with anything that happens AFTER.
debug_snapshot!(autobatch_from(true, [idx_del(), doc_imp(ReplaceDocuments, true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), doc_imp(UpdateDocuments, true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), doc_imp(ReplaceDocuments, false)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), doc_imp(UpdateDocuments, false)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), doc_del()]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), doc_clr()]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), settings(true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, [idx_del(), settings(false)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_imp(ReplaceDocuments, true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_imp(UpdateDocuments, true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_imp(ReplaceDocuments, false)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_imp(UpdateDocuments, false)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_del()]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), doc_clr()]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), settings(true)]), @"Some((IndexDeletion { ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [idx_del(), settings(false)]), @"Some((IndexDeletion { ids: [0] }, false))");
// The index deletion can accept almost any type of `BatchKind` and transform it to an `IndexDeletion`.
// First, the basic cases
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_del(), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, [settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_del(), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, [settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 1] }, false))");
// Then the mixed cases.
// The index already exists; whatever the rights of the tasks are, they shouldn't change the result.
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,false), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,false), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,false), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,false), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, false), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,true), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments,true), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(UpdateDocuments, true), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
// When the index doesn't exist yet, it's more complicated.
// Either the first task we encounter creates it, in which case we can create a big batch with everything.
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), settings(true), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), settings(true), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
// The rights of the following tasks aren't really important.
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,true), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,true), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, true), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, true))");
// Or, the second case: the first task doesn't create the index, and thus we want to batch it only with tasks that can't create an index.
// That can be a second task that doesn't have the right to create an index, or anything else that can't create an index, like an index deletion, document deletion, document clear, etc.
// All these tasks are going to throw an `Index doesn't exist` error once the batch is processed.
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,false), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), settings(false), idx_del()]), @"Some((IndexDeletion { ids: [0, 2, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,false), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), settings(false), doc_clr(), idx_del()]), @"Some((IndexDeletion { ids: [1, 3, 0, 2] }, false))");
// The third and final case is when the first task doesn't create an index but is directly followed by a task creating an index. In this case we can't batch with what
// follows because we first need to process the erroneous batch.
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,false), settings(true), idx_del()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), settings(true), idx_del()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments,false), settings(true), doc_clr(), idx_del()]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(UpdateDocuments, false), settings(true), doc_clr(), idx_del()]), @"Some((DocumentImport { method: UpdateDocuments, allow_index_creation: false, import_ids: [0] }, false))");
}
#[test]
fn allowed_and_disallowed_index_creation() {
// A `DocumentImport` that is allowed to create an index can't be mixed with one that is disallowed to do so, except if the index already exists.
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false), doc_imp(ReplaceDocuments, false)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, [doc_imp(ReplaceDocuments, false), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), doc_imp(ReplaceDocuments, true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: true, import_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false), doc_imp(ReplaceDocuments, false)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, true), settings(true)]), @"Some((SettingsAndDocumentImport { settings_ids: [1], method: ReplaceDocuments, allow_index_creation: true, import_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, [doc_imp(ReplaceDocuments, false), settings(true)]), @"Some((DocumentImport { method: ReplaceDocuments, allow_index_creation: false, import_ids: [0] }, false))");
}
}

index-scheduler/src/batch.rs Normal file
File diff suppressed because it is too large

index-scheduler/src/error.rs Normal file

@ -0,0 +1,130 @@
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::tasks::{Kind, Status};
use meilisearch_types::{heed, milli};
use thiserror::Error;
use crate::TaskId;
#[allow(clippy::large_enum_variant)]
#[derive(Error, Debug)]
pub enum Error {
#[error("Index `{0}` not found.")]
IndexNotFound(String),
#[error(
"Indexes {} not found.",
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
IndexesNotFound(Vec<String>),
#[error("Index `{0}` already exists.")]
IndexAlreadyExists(String),
#[error(
"Indexes must be declared only once during a swap. `{0}` was specified several times."
)]
SwapDuplicateIndexFound(String),
#[error(
"Indexes must be declared only once during a swap. {} were specified several times.",
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
SwapDuplicateIndexesFound(Vec<String>),
#[error("Corrupted dump.")]
CorruptedDump,
#[error(
"Task `{field}` `{date}` is invalid. It should follow the YYYY-MM-DD or RFC 3339 date-time format."
)]
InvalidTaskDate { field: String, date: String },
#[error("Task uid `{task_uid}` is invalid. It should only contain numeric characters.")]
InvalidTaskUids { task_uid: String },
#[error(
"Task status `{status}` is invalid. Available task statuses are {}.",
enum_iterator::all::<Status>()
.map(|s| format!("`{s}`"))
.collect::<Vec<String>>()
.join(", ")
)]
InvalidTaskStatuses { status: String },
#[error(
"Task type `{type_}` is invalid. Available task types are {}",
enum_iterator::all::<Kind>()
.map(|s| format!("`{s}`"))
.collect::<Vec<String>>()
.join(", ")
)]
InvalidTaskTypes { type_: String },
#[error(
"Task canceledBy `{canceled_by}` is invalid. It should only contains numeric characters separated by `,` character."
)]
InvalidTaskCanceledBy { canceled_by: String },
#[error(
"{index_uid} is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_)."
)]
InvalidIndexUid { index_uid: String },
#[error("Task `{0}` not found.")]
TaskNotFound(TaskId),
#[error("Query parameters to filter the tasks to delete are missing. Available query parameters are: `uids`, `indexUids`, `statuses`, `types`, `beforeEnqueuedAt`, `afterEnqueuedAt`, `beforeStartedAt`, `afterStartedAt`, `beforeFinishedAt`, `afterFinishedAt`.")]
TaskDeletionWithEmptyQuery,
#[error("Query parameters to filter the tasks to cancel are missing. Available query parameters are: `uids`, `indexUids`, `statuses`, `types`, `beforeEnqueuedAt`, `afterEnqueuedAt`, `beforeStartedAt`, `afterStartedAt`, `beforeFinishedAt`, `afterFinishedAt`.")]
TaskCancelationWithEmptyQuery,
#[error(transparent)]
Dump(#[from] dump::Error),
#[error(transparent)]
Heed(#[from] heed::Error),
#[error(transparent)]
Milli(#[from] milli::Error),
#[error("An unexpected crash occurred when processing the task.")]
ProcessBatchPanicked,
#[error(transparent)]
FileStore(#[from] file_store::Error),
#[error(transparent)]
IoError(#[from] std::io::Error),
#[error(transparent)]
Persist(#[from] tempfile::PersistError),
#[error(transparent)]
Anyhow(#[from] anyhow::Error),
// Irrecoverable errors:
#[error(transparent)]
CreateBatch(Box<Self>),
#[error("Corrupted task queue.")]
CorruptedTaskQueue,
#[error(transparent)]
TaskDatabaseUpdate(Box<Self>),
#[error(transparent)]
HeedTransaction(heed::Error),
}
impl ErrorCode for Error {
fn error_code(&self) -> Code {
match self {
Error::IndexNotFound(_) => Code::IndexNotFound,
Error::IndexesNotFound(_) => Code::IndexNotFound,
Error::IndexAlreadyExists(_) => Code::IndexAlreadyExists,
Error::SwapDuplicateIndexesFound(_) => Code::DuplicateIndexFound,
Error::SwapDuplicateIndexFound(_) => Code::DuplicateIndexFound,
Error::InvalidTaskDate { .. } => Code::InvalidTaskDateFilter,
Error::InvalidTaskUids { .. } => Code::InvalidTaskUidsFilter,
Error::InvalidTaskStatuses { .. } => Code::InvalidTaskStatusesFilter,
Error::InvalidTaskTypes { .. } => Code::InvalidTaskTypesFilter,
Error::InvalidTaskCanceledBy { .. } => Code::InvalidTaskCanceledByFilter,
Error::InvalidIndexUid { .. } => Code::InvalidIndexUid,
Error::TaskNotFound(_) => Code::TaskNotFound,
Error::TaskDeletionWithEmptyQuery => Code::TaskDeletionWithEmptyQuery,
Error::TaskCancelationWithEmptyQuery => Code::TaskCancelationWithEmptyQuery,
Error::Dump(e) => e.error_code(),
Error::Milli(e) => e.error_code(),
Error::ProcessBatchPanicked => Code::Internal,
// TODO: TAMO: are all these errors really internal?
Error::Heed(_) => Code::Internal,
Error::FileStore(_) => Code::Internal,
Error::IoError(_) => Code::Internal,
Error::Persist(_) => Code::Internal,
Error::Anyhow(_) => Code::Internal,
Error::CorruptedTaskQueue => Code::Internal,
Error::CorruptedDump => Code::Internal,
Error::TaskDatabaseUpdate(_) => Code::Internal,
Error::CreateBatch(_) => Code::Internal,
Error::HeedTransaction(_) => Code::Internal,
}
}
}
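Every variant maps to a public Code, so surfacing an API-facing error code is a single method call. A minimal usage sketch (`report` is a hypothetical helper; it relies only on the Display impl derived by thiserror and on Code being Debug-printable):

// Sketch: `#[error(transparent)]` variants display exactly their inner error,
// while `error_code` either forwards the inner code (Dump, Milli) or
// collapses to Code::Internal (Heed, IoError, Persist, ...).
fn report(err: &Error) {
    eprintln!("task failed: {err} (code: {:?})", err.error_code());
}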


@@ -0,0 +1,233 @@
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock};
use std::{fs, thread};
use log::error;
use meilisearch_types::heed::types::{SerdeBincode, Str};
use meilisearch_types::heed::{Database, Env, EnvOpenOptions, RoTxn, RwTxn};
use meilisearch_types::milli::update::IndexerConfig;
use meilisearch_types::milli::Index;
use uuid::Uuid;
use self::IndexStatus::{Available, BeingDeleted};
use crate::{Error, Result};
const INDEX_MAPPING: &str = "index-mapping";
/// Structure managing Meilisearch's indexes.
///
/// It is responsible for:
/// 1. Creating new indexes
/// 2. Opening indexes and storing references to these opened indexes
/// 3. Accessing indexes through their uuid
/// 4. Mapping a user-defined name to each index uuid.
#[derive(Clone)]
pub struct IndexMapper {
/// Keep track of the opened indexes. Used mainly by the index resolver.
index_map: Arc<RwLock<HashMap<Uuid, IndexStatus>>>,
// TODO create a UUID Codec that uses the 16 bytes representation
/// Map an index name to an index UUID currently available on disk.
pub(crate) index_mapping: Database<Str, SerdeBincode<Uuid>>,
/// Path to the folder where the LMDB environments of each index are.
base_path: PathBuf,
index_size: usize,
pub indexer_config: Arc<IndexerConfig>,
}
/// Whether the index is available for use or must not be inserted back into the index map.
#[allow(clippy::large_enum_variant)]
#[derive(Clone)]
pub enum IndexStatus {
/// Do not insert it back in the index map as it is currently being deleted.
BeingDeleted,
/// You can use the index without worrying about anything.
Available(Index),
}
impl IndexMapper {
pub fn new(
env: &Env,
base_path: PathBuf,
index_size: usize,
indexer_config: IndexerConfig,
) -> Result<Self> {
Ok(Self {
index_map: Arc::default(),
index_mapping: env.create_database(Some(INDEX_MAPPING))?,
base_path,
index_size,
indexer_config: Arc::new(indexer_config),
})
}
/// Create or open an index in the specified path.
/// The path *must* exist or an error will be thrown.
fn create_or_open_index(&self, path: &Path) -> Result<Index> {
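// `map_size` bounds how large the index's LMDB environment may grow on disk,
// and `max_readers` caps the number of concurrent read transactions.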
let mut options = EnvOpenOptions::new();
options.map_size(self.index_size);
options.max_readers(1024);
Ok(Index::new(options, path)?)
}
/// Get or create the index.
pub fn create_index(&self, mut wtxn: RwTxn, name: &str) -> Result<Index> {
match self.index(&wtxn, name) {
Ok(index) => {
wtxn.commit()?;
Ok(index)
}
Err(Error::IndexNotFound(_)) => {
let uuid = Uuid::new_v4();
self.index_mapping.put(&mut wtxn, name, &uuid)?;
let index_path = self.base_path.join(uuid.to_string());
fs::create_dir_all(&index_path)?;
let index = self.create_or_open_index(&index_path)?;
wtxn.commit()?;
// TODO: it would be better to lazily create the index. But we need an Index::open function for milli.
if let Some(BeingDeleted) =
self.index_map.write().unwrap().insert(uuid, Available(index.clone()))
{
panic!("Uuid v4 conflict.");
}
Ok(index)
}
error => error,
}
}
/// Removes the index from the mapping table and the in-memory index map
/// but keeps the associated tasks.
pub fn delete_index(&self, mut wtxn: RwTxn, name: &str) -> Result<()> {
let uuid = self
.index_mapping
.get(&wtxn, name)?
.ok_or_else(|| Error::IndexNotFound(name.to_string()))?;
// Once we have retrieved the UUID of the index, we remove it from the mapping table.
assert!(self.index_mapping.delete(&mut wtxn, name)?);
wtxn.commit()?;
// We remove the index from the in-memory index map.
let mut lock = self.index_map.write().unwrap();
let closing_event = match lock.insert(uuid, BeingDeleted) {
Some(Available(index)) => Some(index.prepare_for_closing()),
_ => None,
};
drop(lock);
let index_map = self.index_map.clone();
let index_path = self.base_path.join(uuid.to_string());
let index_name = name.to_string();
thread::Builder::new()
.name(String::from("index_deleter"))
.spawn(move || {
// We first wait to be sure that the previously opened index is effectively closed.
// This can take a long time, which is why we do it in a separate thread.
if let Some(closing_event) = closing_event {
closing_event.wait();
}
// Then we remove the content from disk.
if let Err(e) = fs::remove_dir_all(&index_path) {
error!(
"An error happened when deleting the index {} ({}): {}",
index_name, uuid, e
);
}
// Finally we remove the entry from the index map.
assert!(matches!(index_map.write().unwrap().remove(&uuid), Some(BeingDeleted)));
})
.unwrap();
Ok(())
}
pub fn exists(&self, rtxn: &RoTxn, name: &str) -> Result<bool> {
Ok(self.index_mapping.get(rtxn, name)?.is_some())
}
/// Return an index, opening it if it wasn't already opened.
pub fn index(&self, rtxn: &RoTxn, name: &str) -> Result<Index> {
let uuid = self
.index_mapping
.get(rtxn, name)?
.ok_or_else(|| Error::IndexNotFound(name.to_string()))?;
// we clone here to drop the lock before entering the match
let index = self.index_map.read().unwrap().get(&uuid).cloned();
let index = match index {
Some(Available(index)) => index,
Some(BeingDeleted) => return Err(Error::IndexNotFound(name.to_string())),
// since we're lazy, it's possible that the index has not been opened yet.
None => {
let mut index_map = self.index_map.write().unwrap();
// Between the read lock and the write lock it is possible that another
// caller already opened the index (e.g., if two searches happen at the
// same time), so before opening it we check a second time whether it is
// already there.
// Since there is a good chance it's not already there we can use
// the entry method.
match index_map.entry(uuid) {
Entry::Vacant(entry) => {
let index_path = self.base_path.join(uuid.to_string());
let index = self.create_or_open_index(&index_path)?;
entry.insert(Available(index.clone()));
index
}
Entry::Occupied(entry) => match entry.get() {
Available(index) => index.clone(),
BeingDeleted => return Err(Error::IndexNotFound(name.to_string())),
},
}
}
};
Ok(index)
}
/// Return all indexes, opening them if they weren't already opened.
pub fn indexes(&self, rtxn: &RoTxn) -> Result<Vec<(String, Index)>> {
self.index_mapping
.iter(rtxn)?
.map(|ret| {
ret.map_err(Error::from).and_then(|(name, _)| {
self.index(rtxn, name).map(|index| (name.to_string(), index))
})
})
.collect()
}
/// Swap two index names.
pub fn swap(&self, wtxn: &mut RwTxn, lhs: &str, rhs: &str) -> Result<()> {
let lhs_uuid = self
.index_mapping
.get(wtxn, lhs)?
.ok_or_else(|| Error::IndexNotFound(lhs.to_string()))?;
let rhs_uuid = self
.index_mapping
.get(wtxn, rhs)?
.ok_or_else(|| Error::IndexNotFound(rhs.to_string()))?;
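// Both writes happen inside the same write transaction, so the swap is
// atomic: either both names point at their new UUIDs or neither does.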
self.index_mapping.put(wtxn, lhs, &rhs_uuid)?;
self.index_mapping.put(wtxn, rhs, &lhs_uuid)?;
Ok(())
}
pub fn index_exists(&self, rtxn: &RoTxn, name: &str) -> Result<bool> {
Ok(self.index_mapping.get(rtxn, name)?.is_some())
}
pub fn indexer_config(&self) -> &IndexerConfig {
&self.indexer_config
}
}
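The `index` method above is a double-checked lookup over an `RwLock<HashMap<..>>`: a cheap read-lock probe first, then a write lock that re-checks before performing the expensive open. Stripped of the heed plumbing, the shape is roughly as follows (a sketch, not the actual API; `String` stands in for `Index` and `u128` for `Uuid`):

use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::RwLock;

// Sketch of the double-checked lazy initialization used by `IndexMapper::index`;
// `open` stands in for the expensive `create_or_open_index` call.
fn get_or_open(map: &RwLock<HashMap<u128, String>>, uuid: u128, open: impl FnOnce() -> String) -> String {
    if let Some(value) = map.read().unwrap().get(&uuid).cloned() {
        return value; // fast path: someone else already opened it
    }
    match map.write().unwrap().entry(uuid) {
        // Re-check under the write lock: another thread may have won the race
        // between our read unlock and our write lock.
        Entry::Occupied(entry) => entry.get().clone(),
        Entry::Vacant(entry) => entry.insert(open()).clone(),
    }
}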


@@ -0,0 +1,256 @@
use std::fmt::Write;
use meilisearch_types::heed::types::{OwnedType, SerdeBincode, SerdeJson, Str};
use meilisearch_types::heed::{Database, RoTxn};
use meilisearch_types::milli::{CboRoaringBitmapCodec, RoaringBitmapCodec, BEU32};
use meilisearch_types::tasks::{Details, Task};
use roaring::RoaringBitmap;
use crate::index_mapper::IndexMapper;
use crate::{IndexScheduler, Kind, Status, BEI128};
pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
scheduler.assert_internally_consistent();
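// Exhaustive destructuring: adding a field to IndexScheduler without
// updating this snapshot function becomes a compile error.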
let IndexScheduler {
autobatching_enabled,
must_stop_processing: _,
processing_tasks,
file_store,
env,
all_tasks,
status,
kind,
index_tasks,
canceled_by,
enqueued_at,
started_at,
finished_at,
index_mapper,
wake_up: _,
dumps_path: _,
snapshots_path: _,
auth_path: _,
version_file_path: _,
test_breakpoint_sdr: _,
planned_failures: _,
run_loop_iteration: _,
} = scheduler;
let rtxn = env.read_txn().unwrap();
let mut snap = String::new();
let processing_tasks = processing_tasks.read().unwrap().processing.clone();
snap.push_str(&format!("### Autobatching Enabled = {autobatching_enabled}\n"));
snap.push_str("### Processing Tasks:\n");
snap.push_str(&snapshot_bitmap(&processing_tasks));
snap.push_str("\n----------------------------------------------------------------------\n");
snap.push_str("### All Tasks:\n");
snap.push_str(&snapshot_all_tasks(&rtxn, *all_tasks));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Status:\n");
snap.push_str(&snapshot_status(&rtxn, *status));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Kind:\n");
snap.push_str(&snapshot_kind(&rtxn, *kind));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Index Tasks:\n");
snap.push_str(&snapshot_index_tasks(&rtxn, *index_tasks));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Index Mapper:\n");
snap.push_str(&snapshot_index_mapper(&rtxn, index_mapper));
snap.push_str("\n----------------------------------------------------------------------\n");
snap.push_str("### Canceled By:\n");
snap.push_str(&snapshot_canceled_by(&rtxn, *canceled_by));
snap.push_str("\n----------------------------------------------------------------------\n");
snap.push_str("### Enqueued At:\n");
snap.push_str(&snapshot_date_db(&rtxn, *enqueued_at));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Started At:\n");
snap.push_str(&snapshot_date_db(&rtxn, *started_at));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### Finished At:\n");
snap.push_str(&snapshot_date_db(&rtxn, *finished_at));
snap.push_str("----------------------------------------------------------------------\n");
snap.push_str("### File Store:\n");
snap.push_str(&snapshot_file_store(file_store));
snap.push_str("\n----------------------------------------------------------------------\n");
snap
}
pub fn snapshot_file_store(file_store: &file_store::FileStore) -> String {
let mut snap = String::new();
for uuid in file_store.__all_uuids() {
snap.push_str(&format!("{uuid}\n"));
}
snap
}
pub fn snapshot_bitmap(r: &RoaringBitmap) -> String {
let mut snap = String::new();
snap.push('[');
for x in r {
snap.push_str(&format!("{x},"));
}
snap.push(']');
snap
}
pub fn snapshot_all_tasks(rtxn: &RoTxn, db: Database<OwnedType<BEU32>, SerdeJson<Task>>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (task_id, task) = next.unwrap();
snap.push_str(&format!("{task_id} {}\n", snapshot_task(&task)));
}
snap
}
pub fn snapshot_date_db(
rtxn: &RoTxn,
db: Database<OwnedType<BEI128>, CboRoaringBitmapCodec>,
) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (_timestamp, task_ids) = next.unwrap();
snap.push_str(&format!("[timestamp] {}\n", snapshot_bitmap(&task_ids)));
}
snap
}
pub fn snapshot_task(task: &Task) -> String {
let mut snap = String::new();
let Task {
uid,
enqueued_at: _,
started_at: _,
finished_at: _,
error,
canceled_by,
details,
status,
kind,
} = task;
snap.push('{');
snap.push_str(&format!("uid: {uid}, "));
snap.push_str(&format!("status: {status}, "));
if let Some(canceled_by) = canceled_by {
snap.push_str(&format!("canceled_by: {canceled_by}, "));
}
if let Some(error) = error {
snap.push_str(&format!("error: {error:?}, "));
}
if let Some(details) = details {
snap.push_str(&format!("details: {}, ", &snapshot_details(details)));
}
snap.push_str(&format!("kind: {kind:?}"));
snap.push('}');
snap
}
fn snapshot_details(d: &Details) -> String {
match d {
Details::DocumentAdditionOrUpdate {
received_documents,
indexed_documents,
} => {
format!("{{ received_documents: {received_documents}, indexed_documents: {indexed_documents:?} }}")
}
Details::SettingsUpdate { settings } => {
format!("{{ settings: {settings:?} }}")
}
Details::IndexInfo { primary_key } => {
format!("{{ primary_key: {primary_key:?} }}")
}
Details::DocumentDeletion {
provided_ids: received_document_ids,
deleted_documents,
} => format!("{{ received_document_ids: {received_document_ids}, deleted_documents: {deleted_documents:?} }}"),
Details::ClearAll { deleted_documents } => {
format!("{{ deleted_documents: {deleted_documents:?} }}")
},
Details::TaskCancelation {
matched_tasks,
canceled_tasks,
original_filter,
} => {
format!("{{ matched_tasks: {matched_tasks:?}, canceled_tasks: {canceled_tasks:?}, original_filter: {original_filter:?} }}")
}
Details::TaskDeletion {
matched_tasks,
deleted_tasks,
original_filter,
} => {
format!("{{ matched_tasks: {matched_tasks:?}, deleted_tasks: {deleted_tasks:?}, original_filter: {original_filter:?} }}")
},
Details::Dump { dump_uid } => {
format!("{{ dump_uid: {dump_uid:?} }}")
},
Details::IndexSwap { swaps } => {
format!("{{ swaps: {swaps:?} }}")
}
}
}
pub fn snapshot_status(
rtxn: &RoTxn,
db: Database<SerdeBincode<Status>, RoaringBitmapCodec>,
) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (status, task_ids) = next.unwrap();
writeln!(snap, "{status} {}", snapshot_bitmap(&task_ids)).unwrap();
}
snap
}
pub fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (kind, task_ids) = next.unwrap();
let kind = serde_json::to_string(&kind).unwrap();
writeln!(snap, "{kind} {}", snapshot_bitmap(&task_ids)).unwrap();
}
snap
}
pub fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (index, task_ids) = next.unwrap();
writeln!(snap, "{index} {}", snapshot_bitmap(&task_ids)).unwrap();
}
snap
}
pub fn snapshot_canceled_by(
rtxn: &RoTxn,
db: Database<OwnedType<BEU32>, RoaringBitmapCodec>,
) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (canceled_by, task_ids) = next.unwrap();
writeln!(snap, "{canceled_by} {}", snapshot_bitmap(&task_ids)).unwrap();
}
snap
}
pub fn snapshot_index_mapper(rtxn: &RoTxn, mapper: &IndexMapper) -> String {
let names = mapper.indexes(rtxn).unwrap().into_iter().map(|(n, _)| n).collect::<Vec<_>>();
format!("{names:?}")
}
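The trailing-comma rendering of `snapshot_bitmap` is exactly what appears in the fixtures below under sections such as `### Status:` and `### Processing Tasks:`. A hypothetical demo of the invariant (it reuses the `roaring::RoaringBitmap` already imported above):

// Sketch: `demo` is illustrative only and not part of the file.
fn demo() {
    let mut bitmap = RoaringBitmap::new();
    bitmap.insert(0);
    bitmap.insert(1);
    assert_eq!(snapshot_bitmap(&bitmap), "[0,1,]");
    assert_eq!(snapshot_bitmap(&RoaringBitmap::new()), "[]");
}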

index-scheduler/src/lib.rs (new file, 3072 lines; diff suppressed because it is too large).


@@ -0,0 +1,46 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1755
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: canceled, canceled_by: 1, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(1), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [1,]
canceled [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
1 [0,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[1,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 3, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["beavero", "catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------


@@ -0,0 +1,55 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1859
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: canceled, canceled_by: 3, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: canceled, canceled_by: 3, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,3,]
canceled [1,2,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["beavero", "catto"]
----------------------------------------------------------------------
### Canceled By:
3 [1,2,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [3,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,2,]
[timestamp] [3,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,47 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------


@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[1,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 3, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------


@@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,47 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1818
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: canceled, canceled_by: 1, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(1), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [1,]
canceled [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
1 [0,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,45 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(0), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
1 []
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@@ -0,0 +1,62 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
3 {uid: 3, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
4 {uid: 4, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "cattos" }}
5 {uid: 5, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "girafos" }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,2,3,4,5,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [3,4,5,]
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,4,]
doggos [0,3,]
girafos [2,5,]
----------------------------------------------------------------------
### Index Mapper:
["cattos", "doggos", "girafos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

Some files were not shown because too many files have changed in this diff.