Compare commits


508 Commits

Author SHA1 Message Date
curquiza
b7d9551870 Merge branch 'release-v0.30.0' into stable 2022-11-28 13:56:35 +01:00
bors[bot]
0aa3f667d4 Merge #3136
3136: Fix no master key error r=Kerollmops a=ManyTheFish


Fixes #3135

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-24 15:25:11 +00:00
ManyTheFish
1eba5d45ea Check if the master key is missing before returning an error 2022-11-24 16:02:14 +01:00
ManyTheFish
cfa78418f2 Update tests 2022-11-24 15:51:15 +01:00
bors[bot]
aaf5abbf1c Merge #3085
3085: refactorize the whole test suite r=irevoire a=irevoire

1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It uses the same amount of lines and ensures we never change something without noticing in any tests ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv; it eases the process of writing tests; now, when something fails, you get a failure instead of a deadlock.

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-11-23 19:55:08 +00:00
bors[bot]
f724f8adfe Merge #3122
3122: Display the `dumpUid` as `null` until we create it r=irevoire a=Kerollmops

This PR fixes #3117 by displaying the `DumpCreation` `dumpUid` details field as `null` until we compute the dump and the task is finished.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-23 14:37:50 +00:00
Kerollmops
cde2a96486 Display a null dumpUid until we have computed the dump itself on disk 2022-11-23 15:16:58 +01:00
bors[bot]
ceca386dc0 Merge #3114
3114: Update the analytics on the ranking rules r=irevoire a=irevoire

fix #3113

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-11-23 14:03:57 +00:00
Irevoire
370a45a58b send the ranking rules as a string because amplitude is too dumb to process an array as a single value 2022-11-23 14:56:22 +01:00
Kerollmops
7093bae131 Update the dump test to check for the dumpUid dumpCreation task details 2022-11-23 14:48:39 +01:00
Irevoire
fb785dc5ac Add more analytics on the ranking rules positions 2022-11-23 12:51:34 +01:00
Irevoire
3a0b1a0c0e try to remove the flakiness of the failing test 2022-11-23 11:22:24 +01:00
Irevoire
af808462b6 update the snapshots after a rebase 2022-11-23 11:16:59 +01:00
Irevoire
2999ae3da4 makes clippy happy 2022-11-23 11:13:54 +01:00
Irevoire
23ec7db3f9 rebase on release-v0.30 2022-11-23 11:13:54 +01:00
Irevoire
f02e5cfaa6 refactorize the whole test suite
1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It uses the same amount of lines and ensures we never change something without noticing in any tests ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv; it eases the process of writing tests; now, when something fails, you get a failure instead of a deadlock.
2022-11-23 11:13:54 +01:00
bors[bot]
8443554b1f Merge #3110
3110: Always display `deletedDocuments` in the `IndexDeletion` details r=ManyTheFish a=Kerollmops

This PR fixes #3108 by always displaying a `deletedDocuments` details info.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-23 09:37:02 +00:00
Kerollmops
84c782ce9a Fix the insta tests 2022-11-22 18:53:17 +01:00
bors[bot]
1392a3b304 Merge #3106
3106: Fix linking error when building binaries for aarch64 r=curquiza a=Kerollmops

This PR tries to fix #3094. Please don't look too closely. It will be horrifying 😱
You can look at [the status of the fix in the CI](https://github.com/meilisearch/meilisearch/actions/runs/3523723323/jobs/5908229083).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-22 15:49:53 +00:00
Kerollmops
2ec699a2e7 Always display details for the indexDeletion task 2022-11-22 15:14:28 +01:00
Kerollmops
d02837f982 Don't use gold but the default linker 2022-11-22 14:48:57 +01:00
bors[bot]
6784d17d0e Merge #3105
3105: Fix publish release CI r=Kerollmops a=curquiza

Fix the CI to avoid creating a release when the CI is triggered manually with `workflow_dispatch` -> only trigger the binary upload when the event is `release`, instead of whenever it differs from `schedule`

Co-authored-by: curquiza <clementine@meilisearch.com>
2022-11-22 10:49:52 +00:00
curquiza
415977a41e Fix publish release CI 2022-11-22 11:38:45 +01:00
bors[bot]
a07e1f7a00 Merge #3099
3099: Add a dispatch to the publish binaries workflow r=curquiza a=Kerollmops

This PR adds a dispatch to the publish binaries workflow to help us debug #3094.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-21 17:15:20 +00:00
Kerollmops
b0460abf54 Add a dispatch to the publish binaries workflow 2022-11-21 18:07:47 +01:00
bors[bot]
48e00cdf3f Merge #3096
3096: Fix total memory computation r=Kerollmops a=dureuill

# Pull Request

## Related issue
Fixes #3018 

## What does this PR do?
- Don't multiply the total memory value returned by sysinfo by 1024 anymore:
  According to the [changelog of sysinfo 0.26.0](https://github.com/GuillaumeGomez/sysinfo/blob/master/CHANGELOG.md#0260), units are now in bytes and not KBs.
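
A minimal sketch of the corrected read, assuming sysinfo 0.26, where `total_memory()` already returns bytes (the function name here is illustrative):

```rust
use sysinfo::{System, SystemExt};

fn total_memory_bytes() -> u64 {
    let mut sys = System::new();
    sys.refresh_memory();
    // Since sysinfo 0.26 this value is already in bytes, so keeping the old
    // `* 1024` would overstate the total memory by three orders of magnitude.
    sys.total_memory()
}
```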

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-21 15:08:51 +00:00
Louis Dureuil
aaea5f87db Don't multiply total memory returned by sysinfo anymore
sysinfo now returns bytes rather than KB
2022-11-21 15:28:05 +01:00
bors[bot]
44440bb8f4 Merge #3084
3084: Bump the `milli` and `grenad` dependencies r=irevoire a=Kerollmops

This PR bumps the direct `milli` dependency and the indirect `grenad` dependency.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 16:21:25 +00:00
Clément Renault
ec74fd6b44 Bump milli to version v0.37.0 2022-11-17 17:02:15 +01:00
Clément Renault
d3bc0c6e93 Bump grenad to 0.4.4 2022-11-17 16:59:13 +01:00
bors[bot]
262c8bf68b Merge #3083
3083: Add `workflow_dispatch` to flaky.yml r=irevoire a=curquiza

Following this PR: bringing the change into `release-v0.30.0` as well; otherwise, we cannot test it.

<img width="346" alt="Capture d’écran 2022-11-17 à 16 55 12" src="https://user-images.githubusercontent.com/20380692/202494559-6032665c-e336-4d4b-8a84-8be5e2371370.png">


Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-11-17 15:57:26 +00:00
Clémentine Urquizar - curqui
c4a669d056 Add workflow_dispatch to flaky.yml 2022-11-17 16:54:09 +01:00
bors[bot]
dfaf845382 Merge #3080
3080: Rename `originalFilters` into `originalFilter` on the cancelation and deletion routes r=irevoire a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3079.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 13:49:50 +00:00
bors[bot]
1fe9fccf6e Merge #3081
3081: Rename `matchedDocuments` into `providedIds` r=ManyTheFish a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3078.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-17 12:55:59 +00:00
Clément Renault
9fe32e1e3b Rename matchedDocuments into providedIds 2022-11-17 13:48:23 +01:00
Clément Renault
388305fcb6 Rename originalFilters into originalFilter 2022-11-17 13:44:00 +01:00
bors[bot]
49bc45e0d4 Merge #3074
3074: add the analytics of the swap-indexes route r=irevoire a=irevoire

implements https://github.com/meilisearch/specifications/pull/192/files#diff-dbac052211a8ea4b2c5d068a6264380740b022efead552b4dfc9e4e8d961f0f4R276-R284

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 11:29:35 +00:00
bors[bot]
b478b18218 Merge #3071
3071: Analytics on the tasks route r=Kerollmops a=irevoire

Implement the missing analytics on the delete and cancel task routes.
+ Batch the analytics on the `GET tasks` route to avoid flooding ourselves while polling meilisearch.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 11:05:56 +00:00
bors[bot]
877d1735b1 Merge #3077
3077: Don't remove DB if unreadable r=irevoire a=dureuill

# Pull Request

## Related issue
Related to #3069

Will fix it after merging into `main`

## What does this PR do?

### User standpoint

- When the DB cannot be read after opening it, Meilisearch exits with an error message like previously, but it now leaves the DB untouched
- When the DB cannot be read after importing it from a snapshot or dump, it is still removed, like previously.

### Dev standpoint

- Add a new local enum `OnFailure`, used as a parameter to the `meilisearch_builder` closure to control whether to keep or remove the DB in case of failure.
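
A rough sketch of that shape; `OnFailure` is named in the PR, but the helper around it is purely illustrative:

```rust
use std::path::Path;

#[derive(Clone, Copy)]
enum OnFailure {
    RemoveDb, // DB was just created from a snapshot/dump: safe to delete
    KeepDb,   // DB pre-existed: leave it on disk for the user to inspect
}

fn build_with_cleanup<T, E>(
    db_path: &Path,
    on_failure: OnFailure,
    builder: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    builder().map_err(|err| {
        // Only wipe the DB when it was freshly imported, never a pre-existing one.
        if let OnFailure::RemoveDb = on_failure {
            let _ = std::fs::remove_dir_all(db_path);
        }
        err
    })
}
```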

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-17 09:38:27 +00:00
Louis Dureuil
a1a29e92fd Stop removing the DB when failing to read it 2022-11-17 09:33:48 +01:00
bors[bot]
fea969d5e5 Merge #3072
3072: Update the finite pagination analytics r=ManyTheFish a=irevoire

Implement this spec: https://github.com/meilisearch/specifications/pull/196/files?show-viewed-files=true&file-filters%5B%5D=#diff-dbac052211a8ea4b2c5d068a6264380740b022efead552b4dfc9e4e8d961f0f4R111

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-17 08:04:58 +00:00
Tamo
fcca7475fa add the analytics of the swap-indexes route 2022-11-16 19:10:01 +01:00
Tamo
3fc1d7e67b Update the finite pagination analytics 2022-11-16 19:01:21 +01:00
Tamo
f1884d6910 batch the tasks seen events 2022-11-16 18:45:19 +01:00
Tamo
0e6394fafc add analytics on the task route
* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation
2022-11-16 18:45:19 +01:00
bors[bot]
637ca7b9fa Merge #3067
3067: Fix task details serialization r=Kerollmops a=ManyTheFish

# Pull Request

- document addition task details always contain the field `indexedDocuments`
  - value is set to `null` when the task is enqueued or processing
  - value is set to `0` when the task is canceled or failed
- the field `deletedDocuments` of the document deletion task details is set to `0` when the task is canceled or failed
- the field `deletedDocuments` of the document clearAll task details is set to `0` when the task is canceled or failed
- the field `deletedTasks` of the task deletion task details is set to `0` when the task is canceled or failed
- the field `canceledTasks` of the task cancelation task details is set to `0` when the task is canceled or failed
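
A small serde sketch of this convention; the struct is an illustrative stand-in for the document addition case, not the actual Meilisearch type:

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct DocumentAdditionDetails {
    received_documents: u64,
    // Always present in the JSON: `None` serializes to `null`.
    indexed_documents: Option<u64>,
}

fn details_for(status: &str, received: u64, indexed: Option<u64>) -> DocumentAdditionDetails {
    let indexed_documents = match status {
        "enqueued" | "processing" => None, // not known yet -> `null`
        "canceled" | "failed" => Some(0),  // nothing ended up indexed
        _ => indexed,                      // the real count once succeeded
    };
    DocumentAdditionDetails { received_documents: received, indexed_documents }
}
```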

## Related issue
Fixes #3057
Fixes #3058


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-16 16:15:03 +00:00
ManyTheFish
25e39edc7e Fix tests 2022-11-16 16:45:20 +01:00
bors[bot]
f908ae2ef4 Merge #3068
3068: Prepend question mark to the `originalFilters` of the task deletion and cancelation r=irevoire a=Kerollmops

This pull request fixes #3064 by prepending [the HTTP query question mark](https://en.wikipedia.org/wiki/Question_mark#Computing) to the `originalFilters` of the task deletion and cancelation.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-16 15:02:04 +00:00
bors[bot]
8ddec58430 Merge #3061
3061: Name spawned threads r=irevoire a=dureuill

# Pull Request

## Related issue
None, this is to improve debuggability

## What does this PR do?
- This PR replaces the raw `thread::spawn(...)` calls with `thread::Builder::new().name(...).spawn(...).unwrap()` calls so that we can give meaningful names to threads.
- This PR also sets up the `rayon` thread pool to give a name to its threads.
- This improves debuggability, as the thread names are reported by debuggers:

<img width="411" alt="Capture d’écran 2022-11-16 à 10 26 27" src="https://user-images.githubusercontent.com/41078892/202141870-a88663aa-d2f8-494f-b4da-709fdbd072ba.png">

(screen showing vscode's debugger and its main/scheduler/indexing threads)
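
A sketch of both patterns (the thread names are illustrative; rayon's `ThreadPoolBuilder::thread_name` takes a closure that receives the worker index):

```rust
use std::thread;

fn main() {
    // A named std thread instead of a bare `thread::spawn(...)`.
    let handle = thread::Builder::new()
        .name("index-scheduler".to_string())
        .spawn(|| { /* scheduler loop */ })
        .unwrap();

    // A rayon pool whose workers are all named.
    let pool = rayon::ThreadPoolBuilder::new()
        .thread_name(|i| format!("indexing-thread-{i}"))
        .build()
        .unwrap();
    pool.install(|| { /* parallel indexing work */ });

    handle.join().unwrap();
}
```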

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-16 14:25:32 +00:00
bors[bot]
b4d0403518 Merge #3065
3065: Implement the analytics on the health and version routes r=Kerollmops a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3063

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 14:01:43 +00:00
Kerollmops
3525c964a7 Add the question mark to the task cancelation query filter 2022-11-16 14:30:35 +01:00
Kerollmops
ed51df41e5 Add the question mark to the task deletion query filter 2022-11-16 14:28:30 +01:00
bors[bot]
7f89e302a2 Merge #3049 #3063
3049: Plural on task query r=Kerollmops a=irevoire

Fixes #3045 and fixes #3046

3063: Update the analytics on the search r=Kerollmops a=irevoire

Partially fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3060

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 13:20:40 +00:00
bors[bot]
c603e17b1d Merge #3060
3060: Add analytics on all the settings r=Kerollmops a=irevoire

Partially fix https://github.com/meilisearch/meilisearch/issues/2955

Must be merged after https://github.com/meilisearch/meilisearch/pull/3059

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 12:48:23 +00:00
ManyTheFish
07b28ea8cf Fix task details serialization 2022-11-16 13:47:08 +01:00
Tamo
10ab5f6a58 implements the analytics on the health and version routes 2022-11-16 13:06:10 +01:00
Tamo
684b90066d update the analytics on the search route 2022-11-16 12:30:49 +01:00
Tamo
93ab019304 update the distinct attributes to the spec update 2022-11-16 12:25:33 +01:00
bors[bot]
1ef517f63d Merge #3059
3059: Create the analytics for the document deletion r=Kerollmops a=irevoire

Partially fix #2955

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-16 10:44:59 +00:00
Louis Dureuil
1a1ede96de Spawn rayon threads with names 2022-11-16 10:28:25 +01:00
Louis Dureuil
93afeedcea Spawn threads with names 2022-11-16 09:50:47 +01:00
Tamo
25d057b75e Add analytics on all the settings 2022-11-15 19:09:02 +01:00
Tamo
b44c381c2a Store analytics for the documents deletions 2022-11-15 19:08:45 +01:00
bors[bot]
51be75a264 Merge #3056
3056: refactor the way we send the CLI information + add the analytics for the config file and SSL usage r=Kerollmops a=irevoire

Partially fix #2955

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-11-15 17:03:30 +00:00
Tamo
4953b62712 reformat, sorry @kero 2022-11-15 17:57:27 +01:00
Tamo
9473cccc27 add a comment over the new infos structure 2022-11-15 17:56:07 +01:00
Tamo
9327db3e91 Apply suggestions from code review
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-11-15 17:53:23 +01:00
Tamo
0fced6f270 refactor the way we send the CLI information + add the analytics for the config file and SSL usage 2022-11-15 16:32:31 +01:00
bors[bot]
1387a211d2 Merge #3053
3053: Upgrade alpine 3.14 to 3.16 r=Kerollmops a=curquiza

Otherwise CI is failing https://github.com/meilisearch/meilisearch/actions/runs/3470576605/jobs/5799173168

Co-authored-by: curquiza <clementine@meilisearch.com>
2022-11-15 13:56:45 +00:00
curquiza
661b345ad9 Upgrade alpine 3.14 to 3.16 2022-11-15 14:54:18 +01:00
bors[bot]
0f0d1dccf0 Merge #3047
3047: Fix soft deleted bug settings r=curquiza a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3021 and fixes https://github.com/meilisearch/meilisearch/issues/2945 and is released as version 0.29.2.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-15 11:08:47 +00:00
Kerollmops
0331fc7c71 Make clippy happy 2022-11-15 12:07:00 +01:00
Tamo
b4434dcad2 rename the error codes for the sake of consistency 2022-11-14 23:27:02 +01:00
Tamo
d08d97bf43 fix the error messages and add tests 2022-11-14 23:27:02 +01:00
Kerollmops
5cfcdbb55a Bump the version to v0.29.2 2022-11-14 17:39:10 +01:00
Kerollmops
c77c3a90a0 Use milli v0.33.5 2022-11-14 17:39:09 +01:00
bors[bot]
a8991ccb64 Merge #3036
3036: Bump milli to v0.35.1 r=irevoire a=Kerollmops

This PR bumps milli to v0.35.1 which brings some fixes. You can see [the changelog of milli on the release page](https://github.com/meilisearch/milli/releases/tag/v0.35.1).

Fixes #2905
Fixes #3004
Fixes #3000
Fixes #3021
Fixes #2945

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-10 14:25:54 +00:00
Kerollmops
761bd3aca4 Fix the new error messages 2022-11-10 12:04:25 +01:00
Clément Renault
26ab6ab0cc Bump milli version to 0.35.1 2022-11-10 12:04:25 +01:00
bors[bot]
379522ace3 Merge #3023
3023: Update error codes related to tasks cancelation + add canceledBy filter r=Kerollmops a=Kerollmops

This PR changes the error codes [to follow the specification](https://github.com/meilisearch/specifications/pull/195).

 - [x] The `missing_filters` error code is renamed `missing_task_filters` to be more accurate and follow the `invalid_task_*` convention.
 - [x] The error code `invalid_task_uids_filter` is added.
 - [x] The error code `invalid_task_canceled_by_filter` is added.
 - [x] The error code `invalid_task_date_filter` is added.
      - The error message is the same as for `expires_at` in the API key, EXCEPT that it does not explicitly mention that a date must be given in the future.

Edit by `@loiclec`:
I have added a few more changes into this PR. The related issues are:

- Fixes https://github.com/meilisearch/meilisearch/issues/3029
- Implements https://github.com/meilisearch/meilisearch/issues/3026
- Fixes https://github.com/meilisearch/meilisearch/issues/2940
- Fixes https://github.com/meilisearch/meilisearch/issues/2939

Additionally:
- Fixes a bug where global tasks were returned by `GET /tasks` queries even if the user did not have the `index.*` API key action.
- Rename `originalQuery` to `originalFilters`
- Display `error: null` and `canceledBy: null` in the task views
- Allow using the star operator in the task filters in the `DELETE /tasks` and `POST /tasks/cancel` routes
- Make sure that the index scheduler keeps making progress even when a grave error occurs.


Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-11-10 10:51:41 +00:00
bors[bot]
1d5f17a9ea Merge #3032
3032: Fix Index name parsing error message to fit the specification r=Kerollmops a=ManyTheFish

fixes #2924


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-11-08 16:01:53 +00:00
ManyTheFish
8bb260bf3e Fix Index name parsing error message to fit the specification 2022-11-08 16:35:58 +01:00
Loïc Lecrenier
52b38bee9d Make rustfmt happy
>:-(
2022-11-08 15:45:53 +01:00
Loïc Lecrenier
f5454dfa60 Make clippy happy
They're a happy clip now.
2022-11-08 15:31:08 +01:00
Loïc Lecrenier
1e464e87fc Update more insta-snap tests 2022-11-08 13:39:52 +01:00
Loïc Lecrenier
6126fc8d98 Rename original_query to original_filters everywhere 2022-11-08 13:18:18 +01:00
Loïc Lecrenier
2fdd814e57 Update tests following task API changes 2022-11-08 13:18:18 +01:00
Loïc Lecrenier
20fa103992 Add canceledBy task filter 2022-11-08 13:18:18 +01:00
Loïc Lecrenier
d5638d2c27 Use more precise error codes/message for the task routes
+ Allow star operator in delete/cancel tasks
+ rename originalQuery to originalFilters
+ Display error/canceled_by in task view even when they are = null
+ Rename task filter fields by using their plural forms
+ Prepare an error code for canceledBy filter
+ Only return global tasks if the API key action `index.*` is there
2022-11-08 13:18:17 +01:00
Clément Renault
932414bf72 WIP Introduce the invalid_task_uid error code 2022-11-08 13:17:56 +01:00
Kerollmops
b20025c01e Change the missing_filters error code into missing_task_filters 2022-11-08 13:17:29 +01:00
bors[bot]
3999f74f78 Merge #3022
3022: Store the `started_at` for a task that is canceled when processing r=irevoire a=Kerollmops

This PR changes the current behavior of the displayed tasks. When a processing task is canceled, the `started_at` date time is kept and displayed to the user. Otherwise, if the task was just enqueued, the `started_at` remains `null`. If a task is processing when the engine is stopped with Ctrl-C and restarted, the task becomes enqueued again, so if it is then canceled, its `started_at` will be `null`.

You can read more [in this discussion](https://github.com/meilisearch/specifications/pull/195/files#r1009602335).
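
A sketch of the resulting rule, assuming the scheduler keeps a `ProcessingTasks` snapshot of the currently processing task uids and the batch start time (field names are illustrative):

```rust
use roaring::RoaringBitmap;
use time::OffsetDateTime;

// Illustrative shape; the real struct lives in the index scheduler.
struct ProcessingTasks {
    started_at: OffsetDateTime,
    processing: RoaringBitmap,
}

// A canceled task keeps its `started_at` only if it was actually picked up;
// a task that was still enqueued keeps `started_at = null`.
fn started_at_on_cancel(task_uid: u32, p: &ProcessingTasks) -> Option<OffsetDateTime> {
    p.processing.contains(task_uid).then(|| p.started_at)
}
```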

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-03 20:48:24 +00:00
Kerollmops
739b9f5505 Use the content of the ProcessingTasks in the tasks cancelation system 2022-11-03 11:09:59 +01:00
bors[bot]
722a0da0c3 Merge #3019
3019: Fix error code of the "duplicate index found" error on the index swap route r=irevoire a=loiclec

According to the spec, the code of the error should be `duplicate_index_found`, but it was `bad_request` instead.

Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-11-03 09:49:55 +00:00
Loïc Lecrenier
5704a1895d Fix error code of the "duplicate index found" error 2022-11-02 09:34:50 +01:00
bors[bot]
2254bbf3bd Merge #3002
3002: Fix dump import without instance uid r=Kerollmops a=irevoire

When creating a dump without any instance-uid (that can happen if you’ve always run meilisearch with the `--no-analytics` flag), you could get an error when trying to load the dump.


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-31 12:58:37 +00:00
Irevoire
510afda590 remove unused import 2022-10-30 20:05:20 +01:00
Irevoire
fea9fdcd7e fix the dump reader process when no instance-uid was specified 2022-10-30 20:00:27 +01:00
bors[bot]
dd1011ba76 Merge #2995
2995: merge the settings and do one indexation at the end r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 21:24:21 +00:00
bors[bot]
20258461a8 Merge #2981 #2996
2981: Move index swap error handling from meilisearch-http to index-scheduler r=irevoire a=loiclec

And make index_not_found error asynchronous, since we can't know whether the index will exist by the time the index swap task is processed.

Improve the index-swap test to verify that future tasks are not swapped and to test the new error messages that were introduced.

## Related issue
https://github.com/meilisearch/meilisearch/issues/2973


2996: Get rid of the unnecessary tasks when an index_uid is specified r=Kerollmops a=irevoire



Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 19:11:23 +00:00
Tamo
87cac158c4 Update index-scheduler/src/batch.rs 2022-10-27 18:08:21 +02:00
Tamo
c9f89d38e3 Merge branch 'main' into index-swap-error-handling 2022-10-27 18:06:45 +02:00
Irevoire
01687c87a2 Get rid of the unnecessary tasks when an index_uid is specified 2022-10-27 18:00:04 +02:00
Irevoire
313f204f39 merge the settings and do one indexation at the end 2022-10-27 16:38:21 +02:00
bors[bot]
d16ea755d8 Merge #2982
2982: Adapt task queries to account for special index swap rules r=irevoire a=loiclec

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/2970 

## What does this PR do?
- Replace the `get_tasks` method with a `get_tasks_from_authorized_indexes` which returns the list of tasks matched by the query **from the point of view of the user**. That is, it takes into consideration the list of authorised indexes as well as the special case of `IndexSwap` which should not be returned if an index_uid is specified or if any of its associated indexes are not authorised.
- Adapt the code in other places following this change
- Add some tests
- Also the method `get_task_ids_from_authorized_indexes` now takes a read transaction as argument. This is because we want to make sure that the implementation of `get_tasks_from_authorized_indexes` only uses one read transaction. Otherwise, we could (1) get a list of task ids matching the query, then (2) one of these task ids is deleted by a taskDeletion task, and finally (3) we try to get the `Task`s associated with each returned task ids, and get a `CorruptedTaskQueue` error.
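
A simplified sketch of the transaction point above, with stand-ins for the real heed and scheduler types:

```rust
// Minimal stand-ins for heed's RoTxn and the scheduler's Query/Task types.
struct RoTxn;
struct Query;
struct Task;
type Result<T> = std::result::Result<T, String>;

fn get_task_ids_from_authorized_indexes(
    _rtxn: &RoTxn,
    _query: &Query,
    _authorized: Option<&[String]>,
) -> Result<Vec<u32>> {
    Ok(Vec::new()) // placeholder
}

fn get_task(_rtxn: &RoTxn, _id: u32) -> Result<Task> {
    Ok(Task) // placeholder
}

// Both steps run under the same read transaction, so a concurrent task
// deletion cannot remove a task between listing its id and fetching its
// content, which is what previously caused the `CorruptedTaskQueue` error.
fn get_tasks_from_authorized_indexes(
    rtxn: &RoTxn,
    query: &Query,
    authorized: Option<&[String]>,
) -> Result<Vec<Task>> {
    let ids = get_task_ids_from_authorized_indexes(rtxn, query, authorized)?;
    ids.into_iter().map(|id| get_task(rtxn, id)).collect()
}
```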



Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
2022-10-27 14:28:04 +00:00
Loïc Lecrenier
8152ab5dfc Revert change in initialisation of TempDir for index scheduler tests 2022-10-27 16:26:17 +02:00
Loïc Lecrenier
8dd7942656 Cargo fmt 2022-10-27 16:24:09 +02:00
Loïc Lecrenier
2c31d7c50a Apply review suggestions 2022-10-27 16:24:08 +02:00
bors[bot]
b76f0ace26 Merge #2993
2993: Reconsider the Windows tests r=irevoire a=Kerollmops

This PR removes the `ignore` cfg on top of a lot of our tests. Now that we reworked the index scheduler we can make them pass again!

Fixes #2038, fixes #1966.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 13:41:04 +00:00
bors[bot]
5b535a82ea Merge #2991
2991: Update version for the next release (v0.30.0) in Cargo.toml files r=Kerollmops a=meili-bot

⚠️ This PR is automatically generated. Check the new version is the expected one before merging.

Co-authored-by: curquiza <curquiza@users.noreply.github.com>
2022-10-27 13:12:31 +00:00
Clément Renault
e67673bd12 Ignore the dumps v1 test on Windows 2022-10-27 14:34:45 +02:00
Clément Renault
44d6f3e7a0 Reconsider the Windows tests 2022-10-27 13:50:05 +02:00
curquiza
68f80dbacf Update version for the next release (v0.30.0) in Cargo.toml files 2022-10-27 11:35:44 +00:00
bors[bot]
08db774699 Merge #2990
2990: isolate the search in another task r=Kerollmops a=irevoire

In case there is a failure on milli's side, this should avoid blocking the tokio main thread.
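
A hedged sketch of the isolation idea: running the CPU-bound, panic-prone search on a blocking task means a panic surfaces as a `JoinError` on `.await` instead of taking a runtime worker down with it:

```rust
async fn isolated_search(query: String) -> Result<String, String> {
    tokio::task::spawn_blocking(move || {
        // Stand-in for the actual milli search call.
        format!("results for {query}")
    })
    .await
    .map_err(|join_err| format!("search task panicked: {join_err}"))
}
```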


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 11:29:22 +00:00
Irevoire
4d9e9f4a9d isolate the search in another task
In case there is a failure on milli's side, this should avoid blocking the tokio main thread.
2022-10-27 13:12:42 +02:00
Loïc Lecrenier
4f4fc20acf Make clippy happy 2022-10-27 13:00:30 +02:00
Loïc Lecrenier
78ffa00f98 Move index swap error handling from meilisearch-http to index-scheduler
And make index_not_found error asynchronous, since we can't know
whether the index will exist by the time the index swap task is
processed.

Improve the index-swap test to verify that future tasks are not swapped
and to test the new error messages that were introduced.
2022-10-27 11:45:38 +02:00
Loïc Lecrenier
7b93ba40bd Reimplement task queries to account for special index swap rules 2022-10-27 11:44:51 +02:00
bors[bot]
b44cc62320 Merge #2763
2763: Index scheduler r=Kerollmops a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/2725

- [x] Durability of the tasks once an answer has been sent to the user.
- [x] Fix the analytics
- [x] Disable the auto-batching system.
- [x] Make sure the task scheduler runs if there are tasks to process.
- [x] Auto-batching of enqueued tasks:
    - [x] Do not batch operations from two different indexes.
    - [x] Document addition.
    - [x] Document updates.
    - [x] Settings.
    - [x] Document deletion.
    - [x] Make sure that we only merge batches with the same index-creation rights (see the sketch after this checklist):
        - [x] the batch either starts with a `yes`
        - [x] [we only batch `no`s together and stop batching when we encounter a `yes`](https://www.youtube.com/watch?v=O27mdRvR1GY)
        - [x] Unify the logic about `false` and `true` index creation rights.
- [ ] Execute all batch kind:
    - [x] Import dumps at startup time.
    - [x] Export dumps i.e. export the tasks queue.
    - [x] Document addition
    - [x] Document update
    - [x] Document deletion.
    - [x] Clear all documents.
    - [x] Update the settings of an index.
    - [ ] Merge multiple settings into a single one.
    - [x] Index update e.g. Create an Index, change an index primary key, delete an index.
    - [x] Cancel enqueued or processing tasks (with filters) (don't count tasks from forbidden indexes) (can't cancel a task with a higher or equal task_id than your own).
    - [x] Delete processed tasks from the task store (with filters) (don't count tasks from forbidden indexes) (can't flush a task with a higher or equal task_id than your own)
    - [x] Document addition + settings
    - [x] Document addition + settings + clear all documents
    - [x] anything + index deletion
    - [x] Snapshot
       - [x] Make the `SnapshotCreation` task visible.
       - [x] Snapshot tasks are scheduled by a detached thread.
       - [x] Only include update files that are useful.
    - [x] Check that statuses and details are correctly set. (i.e., if you enqueue a `documentAddition`, is the `documentReceived` correctly set?)
- [x] Prioritize and reorder tasks i.e. Index deletion, Delete all the documents.
- [x] Always accept new tasks without blocking.
- [x] Fairly share the loads over the different indexes e.g. Always process the index queue with the lowest id.
- [x] Easily testable.
- [x] Well tested i.e. tasks reordering, tasks prioritizing, use atomic barriers to block the tasks for tests.
- [x] Dump
    - [x] Serialize the uuid as string in the keys
    - [x] Create a dump crate with getters and setters
    - [x] Serialize the API key in the dump task
    - [x] Get the instance-uuid in the dump task
- [x] List and filter tasks:
    - [x] Paginate the tasks.
    - [x] Filter by index name.
    - [x] Filter on the status, the enqueued, processing, and finished tasks.
    - [x] Filter on the type of task.
    - [x] Check that it works in `meilisearch-http`.
- [x] Think about [the index wrapper](2c4c14caa8/index/src/updates.rs (L269)) and probably move or remove it.
- [x] Reduce the amount of copy/paste for the batched operations by creating a sub-enum for the `Batch` enum.
- [x] Move the `IndexScheduler` in the lib.rs file.
- [x] Think about the `MilliError` type and probably remove it.
- [x] Remove the `index` crate entirely
- [x] Remove the `Kind` type from the `TaskView` and introduce another type, remove the `<Kind as FromStr>`.
- [x] Once the point above is done, remove the unreachable variant from the autobatching kind
- [x] Rename the `Settings` task `Kind` to `SettingsUpdate`
- [x] Rename the `DumpExport` task `Kind` to `DumpCreation`
- [x] Patch the error message when deserializing a `Kind` and `Status`.
- [x] Check the version file when starting.
- [x] Copy the version file when creating snapshots.
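
A hedged sketch of the index-creation-rights rule referenced in the checklist above, assuming each enqueued task carries an `allow_index_creation` flag; this illustrates one plausible reading of the two bullet points, not the actual autobatcher code:

```rust
/// Given the `allow_index_creation` flag of each enqueued task for one index,
/// in order, returns how many of the leading tasks may go into the next batch.
fn first_batch_len(allow_index_creation: &[bool]) -> usize {
    match allow_index_creation.first() {
        None => 0,
        // A batch starting with a `yes` keeps its creation rights.
        Some(true) => allow_index_creation.len(),
        // A batch starting with a `no` only accepts `no`s, and stops at the
        // first task that is allowed to create the index.
        Some(false) => allow_index_creation.iter().take_while(|allowed| !**allowed).count(),
    }
}
```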

---------

Once everything above is done;
- [ ] Check what happens with the update files, i.e. when they are deleted.
    - [ ] When a TaskDeletion occurs
    - [ ] When a TaskCancelation occurs
    - [ ] When a task is finished
    - [ ] When a task fails
- [ ] When importing a dump forward the date to milli
- [ ] Add tests for the snapshots.
- [ ] Look at all the places where we put _TODOs_.
- [ ] Rename a bunch of things, see https://github.com/meilisearch/meilisearch/pull/2917
- [ ] Ensure that when compiling meilisearch-http with `no-default-features` it doesn’t pull lindera etc
- [ ] Run a bunch of operations in a `tokio::spawn_blocking`
    - [ ] The search requests
- [ ] Issue to create once this is merged:
    - [ ] Realtime progressing status e.g. Websocket events (optional).
    - [ ] Implement an `Uuid` codec instead of using a `Bincode<Uuid>`.
    - [ ] Handle the dump-v1
    - [ ] When importing a dump v1 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v2 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v3 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v4 we could iterate over the whole task queue to find the creation and last update date
    - [ ] When importing a dump v5 we could iterate over the whole task queue to find the creation and last update date

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-10-27 09:38:00 +00:00
Clément Renault
fae17ed590 Enable the authentication tests on Windows again 2022-10-27 11:35:24 +02:00
Clément Renault
7e355958e0 Await the last insert task 2022-10-27 11:35:24 +02:00
Irevoire
8bc602a7dd makes clippy happy 2022-10-27 11:35:23 +02:00
Irevoire
6c2ecec4d0 fix the return of the task cancelation and task deletion 2022-10-27 11:35:23 +02:00
Irevoire
6280bd51a9 actually fix the test and the swap_indexes name resolution 2022-10-27 11:35:23 +02:00
Irevoire
54d0aff4cf ignore a strange test that works on my machine but not on the ci 2022-10-27 11:35:23 +02:00
Irevoire
b804cba4ca try to debug the ci 2022-10-27 11:35:23 +02:00
Irevoire
3cf8aaa4d0 reformat 2022-10-27 11:35:23 +02:00
Irevoire
225405bb0d ignore the dump tests 2022-10-27 11:35:22 +02:00
Kerollmops
0dd8e00929 Reapply #2601 2022-10-27 11:35:22 +02:00
Irevoire
a99ddf85f7 fix clippy once again 2022-10-27 11:35:22 +02:00
Irevoire
7307c4dacd fix clippy 2022-10-27 11:35:22 +02:00
Irevoire
6aa816d96a use meili-snap in the dump 2022-10-27 11:35:22 +02:00
Irevoire
866a3676eb reupload the test fix for the dump 2022-10-27 11:35:22 +02:00
Kerollmops
fa84eae0f1 Insta review and fix insta snapshots 2022-10-27 11:35:21 +02:00
Irevoire
33996071ea fix clippy from the CI 2022-10-27 11:35:21 +02:00
Irevoire
953055e3d7 bump milli 2022-10-27 11:35:21 +02:00
Kerollmops
7c908fadcf Remove a useless clippy silence 2022-10-27 11:35:21 +02:00
Irevoire
07d39776f9 fix clippy _once again_ 2022-10-27 11:35:21 +02:00
Irevoire
3979c9f02b fix all the dump snapshots 2022-10-27 11:35:20 +02:00
Irevoire
8ec3681cf8 fix clippy part1 2022-10-27 11:35:20 +02:00
Kerollmops
2ba5e3b519 Clean up some code 2022-10-27 11:35:20 +02:00
Kerollmops
861a07792e Remove useless task module 2022-10-27 11:35:20 +02:00
Kerollmops
ee6597da60 Fix all the tests 2022-10-27 11:35:20 +02:00
Kerollmops
314b89ca30 Fix insta snapshots 2022-10-27 11:35:20 +02:00
Irevoire
8ebb49d1b1 bump milli 2022-10-27 11:35:19 +02:00
Irevoire
0ba6253eed fix the sort 2022-10-27 11:35:19 +02:00
Irevoire
e8cd571820 try to convert the OsStr to a rust string to fix the sort 2022-10-27 11:35:19 +02:00
Clément Renault
4f955e68b3 Apply suggestions from code review 2022-10-27 11:35:19 +02:00
Irevoire
6c98752922 move the commit before the insertion in the map 2022-10-27 11:35:19 +02:00
Irevoire
4e1b6b514e update reviewer change 2022-10-27 11:35:19 +02:00
Irevoire
64e55b4db9 fix the index creation. When an index is being created we insert it in the index_map straight away to prevent someone else from trying to re-open it. The definitive fix should be made on milli's side 2022-10-27 11:35:18 +02:00
Loïc Lecrenier
9b43528bbb Update test after fixing bug in index swap 2022-10-27 11:35:18 +02:00
Loïc Lecrenier
e641d08846 Cargo fmt 2022-10-27 11:35:18 +02:00
Loïc Lecrenier
36c9f05998 Revert "Display more than one indexUid in a task view if necessary"
This reverts commit 1f2e253bb6.
2022-10-27 11:35:18 +02:00
Loïc Lecrenier
3b158bb966 Return invalid API key error in /swap-indexes 2022-10-27 11:35:18 +02:00
Loïc Lecrenier
08b5123380 Display more than one indexUid in a task view if necessary 2022-10-27 11:35:17 +02:00
Loïc Lecrenier
1f75caae88 Fix a few index swap bugs.
1. Details of the indexSwap task
2. Query tasks with type=indexUid
3. Synchronous error message for multiple index not found
2022-10-27 11:35:17 +02:00
Irevoire
a16604af80 fix all the tests 2022-10-27 11:35:17 +02:00
Irevoire
1d014a538e comment out a test that makes the CI crash 2022-10-27 11:35:17 +02:00
Irevoire
29bdcb880c update the snapshot 2022-10-27 11:35:17 +02:00
Irevoire
a3fc0d3bd9 Fix the last regression 2022-10-27 11:35:17 +02:00
Kerollmops
2de8a0711a Cargo insta test/review 2022-10-27 11:35:16 +02:00
Kerollmops
2f577b6fcd Patch the IndexScheduler in meilisearch-http to use the options struct 2022-10-27 11:35:16 +02:00
Tamo
eccbdb74cf remove useless print
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:35:16 +02:00
Irevoire
033794d209 add tests for the task deletion and task cancelation 2022-10-27 11:35:16 +02:00
Irevoire
a85d5b4981 test the details of all tasks type 2022-10-27 11:35:16 +02:00
Kerollmops
71b50853dc Introduce an options struct to create the IndexScheduler 2022-10-27 11:35:16 +02:00
Kerollmops
7074872a78 cargo insta accept 2022-10-27 11:35:15 +02:00
Kerollmops
035e8eeff5 Clean-up some TODOs 2022-10-27 11:35:15 +02:00
Kerollmops
e35fe33712 Fix some bugs with files 2022-10-27 11:35:15 +02:00
Kerollmops
4736e00253 Handle the CLI options related to snapshots 2022-10-27 11:35:15 +02:00
Kerollmops
942b7c338b Compress the snapshot in a tarball 2022-10-27 11:35:15 +02:00
Kerollmops
4cafc63561 Reintroduce the versioning functions 2022-10-27 11:35:14 +02:00
Kerollmops
89e127e4f4 Declare the auth path in the index scheduler 2022-10-27 11:35:14 +02:00
Kerollmops
eec43ec953 Implement a first version of the snapshots 2022-10-27 11:35:14 +02:00
Kerollmops
c063f154fb Add the snapshots directory path to the IndexScheduler 2022-10-27 11:35:14 +02:00
Kerollmops
e0548e42e7 Rename the Snapshot task into SnapshotCreation 2022-10-27 11:35:14 +02:00
Kerollmops
4d43a9f5b1 Rename the index-scheduler module into insta_snapshot 2022-10-27 11:35:14 +02:00
Kerollmops
901c405919 Fix the insta-snapshot typos in the tests 2022-10-27 11:35:13 +02:00
Kerollmops
c641888a23 Patch the delete and cancel tasks routes 2022-10-27 11:35:13 +02:00
Loïc Lecrenier
6db90ba6cc Make sure that we don't delete or cancel future tasks
This should already have been the case before, but there is no harm
in adding another check.
2022-10-27 11:35:13 +02:00
Irevoire
e0821ad4b0 remove a useless dbg 2022-10-27 11:35:13 +02:00
Irevoire
61f0940f8c fix an issue with the dates 2022-10-27 11:35:13 +02:00
Irevoire
241300d2d8 add more naive tests around the document addition + remove the old unused snapshot files 2022-10-27 11:35:13 +02:00
Irevoire
570b2d1167 add some naive document addition tests 2022-10-27 11:35:12 +02:00
Loïc Lecrenier
d92425658e Add index scheduler tests for task cancelation 2022-10-27 11:35:12 +02:00
Irevoire
12669bf07c rename received_documents_ids to matched_documents 2022-10-27 11:35:12 +02:00
Loïc Lecrenier
16fac10074 Fix crash when batching an index swap task containing 0 swaps 2022-10-27 11:35:12 +02:00
Irevoire
0aca5e84b9 rename received_document_ids to matched_documents in the DocumentDeletion task type (reimplementation of #2826) 2022-10-27 11:35:12 +02:00
Irevoire
7ed3f00b1e reformat 2022-10-27 11:35:12 +02:00
Irevoire
9c00b159ba fix clippy 2022-10-27 11:35:11 +02:00
Irevoire
7e52f1effb remove a lot of unnecessary clones and refs 2022-10-27 11:35:11 +02:00
Loïc Lecrenier
4d25c159e6 Apply code review suggestions 2022-10-27 11:35:11 +02:00
Loïc Lecrenier
e9cd6cbbee Revert implementation of get_status to query only the database 2022-10-27 11:35:11 +02:00
Loïc Lecrenier
424202d773 Pause the index scheduler for one second when a fatal error occurs 2022-10-27 11:35:11 +02:00
Loïc Lecrenier
4a35eb9849 Fix (hopefully) queries that include processing tasks 2022-10-27 11:35:11 +02:00
Loïc Lecrenier
493a8cff31 Adjust task details correctly following index swap 2022-10-27 11:35:10 +02:00
Loïc Lecrenier
4de445d386 Start testing unexpected errors and panics in index scheduler 2022-10-27 11:35:10 +02:00
Loïc Lecrenier
e3848b5f28 Add assert method to verify validity of index scheduler state 2022-10-27 11:35:10 +02:00
Irevoire
ecf4e43b3d rename the dumpExport to dumpCreation 2022-10-27 11:35:10 +02:00
Irevoire
3ea489421e move the error types to meilisearch-http 2022-10-27 11:35:10 +02:00
Loïc Lecrenier
2808be9d45 Fix the /swap-indexes route API
1. payload
2. error messages
3. auth errors
2022-10-27 11:35:10 +02:00
Loïc Lecrenier
92c41f0ef6 meili-snap: get the test name from the name of the function 2022-10-27 11:35:09 +02:00
Loïc Lecrenier
1214a68a41 Only store full snapshots if env variable is set to true 2022-10-27 11:35:09 +02:00
Irevoire
8a23e707c1 fix the task view and forward the task db size 2022-10-27 11:35:09 +02:00
Irevoire
eb4bdde432 fix clippy 2022-10-27 11:35:09 +02:00
Irevoire
735a5da257 reformat 2022-10-27 11:35:09 +02:00
Irevoire
1d04ce611d remove unused function 2022-10-27 11:35:08 +02:00
Irevoire
e9055f5572 fix clippy 2022-10-27 11:35:08 +02:00
Irevoire
874499a2d2 fix all the snapshots 2022-10-27 11:35:08 +02:00
Irevoire
ecdcbf350f update all the snapshots with the new kind name 2022-10-27 11:35:08 +02:00
Irevoire
de7d4200d8 fix the snapshot tests of the dump after renaming a bunch of kinds 2022-10-27 11:35:08 +02:00
Irevoire
2c0fde4a1a ignore the snapshot test 2022-10-27 11:35:08 +02:00
Irevoire
e2ce8f2d32 remove the public DocumentClear variant 2022-10-27 11:35:07 +02:00
Irevoire
c8ee453b6c fix the autobatched document deletion 2022-10-27 11:35:07 +02:00
Irevoire
f6963f9662 ensure the indexUid is valid in most cases 2022-10-27 11:35:07 +02:00
Irevoire
a8de5368e5 fix the index creation in case an index already exists 2022-10-27 11:35:07 +02:00
Irevoire
b3265a8e1f ensure the index_uid is valid when creating an index 2022-10-27 11:35:07 +02:00
Irevoire
9bb2e3c790 fix the failed document addition with a primary key 2022-10-27 11:35:07 +02:00
Irevoire
cb48a02f94 fix the invalid index uid errors 2022-10-27 11:35:06 +02:00
Irevoire
99144b1419 fix most content file error 2022-10-27 11:35:06 +02:00
Irevoire
e107f1b282 fix the payload too large error 2022-10-27 11:35:06 +02:00
Irevoire
1bef5d119d fix the api keys for the tasks route 2022-10-27 11:35:06 +02:00
Irevoire
ca4234b445 fix the deletion of the data.ms in case of failure 2022-10-27 11:35:06 +02:00
Irevoire
8d1408c65e fix the import of the dumpv4&v5 when there is no instance-uid + rename the Kind+KindWithContent+Details variant for the DocumentImport and the Setting 2022-10-27 11:35:05 +02:00
Irevoire
131fe30934 fix the error messages and the index stats 2022-10-27 11:35:05 +02:00
Irevoire
50386921df fix the index creation 2022-10-27 11:35:05 +02:00
Clément Renault
32cfac0cfd Sort the TOML dependencies 2022-10-27 11:35:05 +02:00
Clément Renault
80b2e70ee7 Introduce a rustfmt file 2022-10-27 11:35:05 +02:00
Clément Renault
52e858a588 Reapply #2890 2022-10-27 11:34:18 +02:00
Clément Renault
8b0427f0c4 Reapply #2839 2022-10-27 11:34:18 +02:00
Clément Renault
fce0996e17 Reapply #2819 2022-10-27 11:34:18 +02:00
Clément Renault
2a7ef3b352 Reapply #2830 2022-10-27 11:34:18 +02:00
Clément Renault
ce4dcf47f0 Reapply #2773 2022-10-27 11:34:18 +02:00
Clément Renault
ca8c922f35 Reapply #2727 2022-10-27 11:34:18 +02:00
Clément Renault
4c42130ec7 Remove once for all the meilisearch-lib crate 2022-10-27 11:34:17 +02:00
Clément Renault
788262e588 Fix final compilation 2022-10-27 11:34:17 +02:00
Clément Renault
61edcd585a Fix the new config file with the index scheduler 2022-10-27 11:34:17 +02:00
Clément Renault
72ec4ce96b Fix allow_index_creation useless field 2022-10-27 11:34:17 +02:00
Clément Renault
75857bf476 Fix the insta tests 2022-10-27 11:34:17 +02:00
Irevoire
0bbf80186f push the snapshot files 2022-10-27 11:34:17 +02:00
Irevoire
b6a0abea9f fix the index deletion when the index doesn't exist but would be created by one of the autobatched tasks 2022-10-27 11:34:16 +02:00
Irevoire
5303bbffab fix the last rule about merging the allow_index_creation 2022-10-27 11:34:16 +02:00
Irevoire
fc944c39a5 simplify the code A LOT and create fewer false positives 2022-10-27 11:34:16 +02:00
Irevoire
a1d4cc673d add a whole new batch of tests around the index already exists / allow_index_creation 2022-10-27 11:34:16 +02:00
Irevoire
28d9f2c041 fix all the snapshot tests 2022-10-27 11:34:16 +02:00
Irevoire
d9218578e3 it probably works but it's also horrendous 2022-10-27 11:34:16 +02:00
Loïc Lecrenier
d20b5ddda0 Don't return an error when swapping 0 indexes 2022-10-27 11:34:16 +02:00
Loïc Lecrenier
11fee30f47 Apply review suggestions and stop using rtxn.commit 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
17cd2a4aa0 Implement POST /indexes-swap 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
28bd8b6c6b Remove key from index_tasks database when the value is empty 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
169f386418 Add some documentation to the index scheduler 2022-10-27 11:34:15 +02:00
Irevoire
66c3b93ef1 fix all the snapshot tests in the dump 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
bdb17954d2 Fix bug where assert used != instead of ==
And update snapshot tests.
2022-10-27 11:34:15 +02:00
Loïc Lecrenier
23b01a58df cargo fmt 2022-10-27 11:34:14 +02:00
Loïc Lecrenier
ec3391808d Fix date parsing for task queries
Use rfc3339 or YYYY-MM-DD.

Add a day to the parsed date when it is an excluded lower bound
and the YYYY-MM-DD was used.

Also the Query type does not need to be serialisable anymore
2022-10-27 11:34:14 +02:00
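A sketch of that parsing rule with the `time` crate (using its `parsing` and `macros` features); the function name and signature are illustrative:

```rust
use time::macros::format_description;
use time::{Date, Duration, OffsetDateTime, PrimitiveDateTime, Time};

/// Accepts RFC 3339 or plain `YYYY-MM-DD`. For the plain form used as an
/// *excluded lower bound*, one day is added so that "after 2022-10-27"
/// means "strictly after that whole day".
fn parse_filter_date(input: &str, excluded_lower_bound: bool) -> Result<OffsetDateTime, time::error::Parse> {
    if let Ok(dt) = OffsetDateTime::parse(input, &time::format_description::well_known::Rfc3339) {
        return Ok(dt);
    }
    let date = Date::parse(input, format_description!("[year]-[month]-[day]"))?;
    let bump = if excluded_lower_bound { Duration::days(1) } else { Duration::ZERO };
    Ok(PrimitiveDateTime::new(date, Time::MIDNIGHT).assume_utc() + bump)
}
```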
Loïc Lecrenier
10a547df4f Apply suggestions from code review
Co-authored-by: Clément Renault <clement@meilisearch.com>

Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>

Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>

Apply code review suggestion

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:34:14 +02:00
Loïc Lecrenier
22cf0559fe Implement task date filters
before/after enqueued/started/finished at
2022-10-27 11:34:14 +02:00
Irevoire
5765883600 fix the auto-generated details 2022-10-27 11:34:14 +02:00
Tamo
cff003c928 remove the unused variants from the autobatcher 2022-10-27 11:34:14 +02:00
Tamo
ab8f1c2865 fix a bunch of snapshot tests 2022-10-27 11:34:13 +02:00
Tamo
6730e190db fix the dumps tests since we added information to the DumpTask 2022-10-27 11:34:13 +02:00
Kerollmops
50b8b9df6a Delete the tasks content file once the transaction has been successfully committed 2022-10-27 11:34:13 +02:00
Kerollmops
ec0a5a9f01 Remove the useless r#union thing 2022-10-27 11:34:13 +02:00
Kerollmops
6460b78e08 Clean up the delete_persisted_task_data function 2022-10-27 11:34:13 +02:00
Kerollmops
d21651c968 Throw the error if we can't register the tasks in the store 2022-10-27 11:34:13 +02:00
Kerollmops
6e904d0997 Introduce a ProcessingTasks constructor 2022-10-27 11:34:12 +02:00
Kerollmops
b373d19831 Extract the must_stop flag out of the RwLock 2022-10-27 11:34:12 +02:00
Kerollmops
3cbfacb616 Prefer using a u64 instead of a usize in some places 2022-10-27 11:34:12 +02:00
Kerollmops
79c4275bfc Delete the persisted data when we cancel a task 2022-10-27 11:34:12 +02:00
Kerollmops
f9c8fe5eaa Use a tokio block_in_place method for potentially blocking tasks 2022-10-27 11:34:12 +02:00
Kerollmops
c2ec4a089b Put the original URL query in the tasks details 2022-10-27 11:34:12 +02:00
Kerollmops
751e9bac3b Add the tasks cancel route to cancel tasks 2022-10-27 11:34:11 +02:00
Kerollmops
290945e258 Update the canceledBy and finishedAt fields 2022-10-27 11:34:11 +02:00
Kerollmops
725158b454 Introduce the core algorithm of task cancelation 2022-10-27 11:34:11 +02:00
Kerollmops
b2c5bc67b7 Add more enum-iterator related stuff 2022-10-27 11:34:11 +02:00
Kerollmops
591527a99d Prefer using TaskDeletion in the dumps 2022-10-27 11:34:11 +02:00
Kerollmops
1ca9a67c49 Introduce the task cancelation task type 2022-10-27 11:34:11 +02:00
Kerollmops
f177c97671 Add the canceled task status 2022-10-27 11:34:10 +02:00
Kerollmops
703ba7a1fb Introduce the ProcessingTasks struct 2022-10-27 11:34:10 +02:00
Kerollmops
c9523c6f39 Use the indexation-abortion milli's branch 2022-10-27 11:34:10 +02:00
Kerollmops
e645c4c4d6 Remove the meilisearch-auth milli dependency 2022-10-27 11:34:10 +02:00
Loïc Lecrenier
ea60d35c71 Delete a task's persisted data when appropriate 2022-10-27 11:34:10 +02:00
Tamo
f7e546eea3 make the tests compile again 2022-10-27 11:34:10 +02:00
Tamo
b45c430165 fix the analytics 2022-10-27 11:34:10 +02:00
Tamo
634eb52926 extract the create_app function for the tests 2022-10-27 11:34:09 +02:00
Tamo
d1a6fb2971 bump enum-iter and fix a bunch of error messages 2022-10-27 11:34:09 +02:00
Tamo
bea81ae37b fix meilisearch-http 2022-10-27 11:34:09 +02:00
Tamo
9e85f050b2 fix the tests 2022-10-27 11:34:09 +02:00
Tamo
2f748480a1 share the rtxn between the access to the tasks and to the indexes 2022-10-27 11:34:09 +02:00
Tamo
6bd6321226 dump the content of the dump tasks instead of recreating at import time with wrong API keys 2022-10-27 11:34:08 +02:00
Tamo
655705eb2b remove useless todo 2022-10-27 11:34:08 +02:00
Tamo
9fe24fbff2 get rid of the useless Seek before creating a grenad reader 2022-10-27 11:34:08 +02:00
Tamo
83f3c5ec57 flush the dump-writer only once everything has been inserted 2022-10-27 11:34:08 +02:00
Tamo
78ce29f461 apply most style comments of the review 2022-10-27 11:34:08 +02:00
Tamo
dd70daaae3 Update dump/src/error.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-10-27 11:34:08 +02:00
Tamo
d0e91555d1 rebase on index-scheduler 2022-10-27 11:34:08 +02:00
Tamo
e0221fc0a3 fix a synchronization bug while importing tasks 2022-10-27 11:34:07 +02:00
Tamo
a9eeb070b8 fix all the errors code and settings issues when importing a dump v2 2022-10-27 11:34:07 +02:00
Tamo
3872a1b8d1 fix all the error codes 2022-10-27 11:34:07 +02:00
Tamo
ba150f2127 commit after creating an index 2022-10-27 11:34:07 +02:00
Tamo
554600dfd8 fix the deletion of the data.ms in case of errors 2022-10-27 11:34:07 +02:00
Tamo
e9295c03ce the index-scheduler needs to wake-up after importing a dump 2022-10-27 11:34:06 +02:00
Tamo
955d3339f0 remove the dbg 2022-10-27 11:34:06 +02:00
Tamo
d481669b7e fix the content_file import 2022-10-27 11:34:06 +02:00
Tamo
dd506e5d87 stop dumping the current dumping task as enqueued so it's not looping forever 2022-10-27 11:34:06 +02:00
Tamo
208c785697 add a bufwriter on the documents 2022-10-27 11:34:06 +02:00
Tamo
d976e680c5 first mostly working version 2022-10-27 11:34:06 +02:00
Tamo
c051166bcc update the API a little bit 2022-10-27 11:34:05 +02:00
Tamo
72a906ae75 fix the tests 2022-10-27 11:34:05 +02:00
Tamo
b7f9c94f4a write the dump export 2022-10-27 11:34:05 +02:00
Loïc Lecrenier
8954b1bd1d Fix number of deleted tasks details after duplicate task deletion 2022-10-27 11:34:05 +02:00
Loïc Lecrenier
8defad6c38 Add task deletion tests where the same task is deleted twice 2022-10-27 11:34:05 +02:00
Loïc Lecrenier
f32b973945 Return an error when calling DELETE /tasks with an empty query 2022-10-27 11:34:04 +02:00
Loïc Lecrenier
fbd2be2ec8 Apply suggested changes from PR review 2022-10-27 11:34:04 +02:00
Loïc Lecrenier
441417447e Avoid creating two read txn at the same time 2022-10-27 11:34:04 +02:00
Loïc Lecrenier
8c6aeaada5 Update snapshot tests following git rebase that fixes a bug 2022-10-27 11:34:04 +02:00
Loïc Lecrenier
8bb0fcd144 Finish first draft of the DELETE /tasks route 2022-10-27 11:34:04 +02:00
Loïc Lecrenier
9522b75454 Continue implementation of task deletion
1. Matched tasks are a roaring bitmap
2. Start implementation in meilisearch-http
3. Snapshots use meili-snap
4. Rename to TaskDeletion
2022-10-27 11:34:03 +02:00
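The roaring bitmap choice (point 1) is easy to picture: matched task uids form a sparse set of `u32`s on which the scheduler needs cheap set algebra. A tiny illustrative example:

```rust
use roaring::RoaringBitmap;

fn main() {
    let matched: RoaringBitmap = (0u32..1000).filter(|uid| uid % 2 == 0).collect();
    let enqueued: RoaringBitmap = (500u32..1500).collect();
    // Intersection without allocating per id: only still-enqueued matches remain.
    let to_delete = &matched & &enqueued;
    println!("{} tasks to delete", to_delete.len());
}
```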
Kerollmops
e4d461ecba Make sure that we do not batch tasks from different indexes 2022-10-27 11:34:03 +02:00
Kerollmops
b029369653 Add a test to check different indexes autobatching 2022-10-27 11:34:03 +02:00
Kerollmops
408d00136c Extract index creation rights and simplify the autobatcher rules 2022-10-27 11:34:03 +02:00
Kerollmops
2c24c7d403 Fix invalid import of tasks types 2022-10-27 11:34:03 +02:00
Tamo
7034803712 move the API key in meilisearch_types 2022-10-27 11:34:02 +02:00
Tamo
c192146fbe remove an unused file 2022-10-27 11:34:02 +02:00
Tamo
b6c84e53ba uncomment a task serialization test 2022-10-27 11:34:02 +02:00
Tamo
2f1eb78b1d refactor the Task a little bit 2022-10-27 11:34:02 +02:00
Tamo
510ce9fc51 start moving a lot of task types to meilisearch_types 2022-10-27 11:34:01 +02:00
Tamo
fa4c1de019 store md5 instead of the whole snapshots 2022-10-27 11:34:01 +02:00
Loïc Lecrenier
3e4337c91f Add meili-snap crate to make writing snapshot tests easier 2022-10-27 11:34:01 +02:00
Tamo
0af00f6b32 fix all the import and comment most of the dump v6 2022-10-27 11:34:01 +02:00
Tamo
141a1c9464 push the document_format and settings I forgot in the previous PR 2022-10-27 11:34:00 +02:00
Tamo
667c282e19 get rid of the index crate + the document_types crate 2022-10-27 11:34:00 +02:00
Loïc Lecrenier
9a74ea0943 Fix compiler errors related autobatching option of the index scheduler 2022-10-27 11:34:00 +02:00
Loïc Lecrenier
eabac9676b Fix typo and remove useless code in tests 2022-10-27 11:34:00 +02:00
Loïc Lecrenier
ab4e649221 Apply suggestions from code review
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-10-27 11:34:00 +02:00
Loïc Lecrenier
568199fc0d Add more task deletion tests 2022-10-27 11:33:59 +02:00
Loïc Lecrenier
13a72f8757 Use more complete snapshot tests for the index scheduler 2022-10-27 11:33:59 +02:00
Loïc Lecrenier
4c55c30027 Add a DetailsView type and improve index scheduler snapshots
The DetailsView type is necessary because serde incorrectly
deserialises the `Details` type, so the database fails to correctly
decode Tasks
2022-10-27 11:33:59 +02:00
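A hedged sketch of what such a mirror type can look like: a flat struct whose fields are all optional round-trips through serde unambiguously, unlike the enum it mirrors (the field set here is illustrative and far from complete):

```rust
use serde::{Deserialize, Serialize};

#[derive(Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DetailsView {
    #[serde(skip_serializing_if = "Option::is_none")]
    received_documents: Option<u64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    indexed_documents: Option<u64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    deleted_documents: Option<u64>,
}
```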
Loïc Lecrenier
dc81992eb2 Implement TaskDeletion in the index scheduler 2022-10-27 11:33:59 +02:00
Kerollmops
fe84f2648b Allow a user to disable the auto batching system 2022-10-27 11:33:59 +02:00
Kerollmops
e2a766acb5 Add a test to check that it works without autobatching 2022-10-27 11:33:58 +02:00
Kerollmops
db9d1b18ca Remove the IndexScheduler::notify method 2022-10-27 11:33:58 +02:00
Kerollmops
19c6f8303f Make sure that the index-scheduler tick loop is rerun after processing 2022-10-27 11:33:58 +02:00
Kerollmops
b311eb3bed Add a test that verifies that sending multiple tasks works 2022-10-27 11:33:58 +02:00
Tamo
f026ac3115 remove unused files 2022-10-27 11:33:58 +02:00
Tamo
f176382b34 fix the tests 2022-10-27 11:33:57 +02:00
Tamo
4bd9e4d723 write a bunch of tests that go through the whole compat layers 2022-10-27 11:33:57 +02:00
Tamo
c6f4fb5f7d remove the warnings 2022-10-27 11:33:57 +02:00
Tamo
2ae0806773 rewrite the update file API 2022-10-27 11:33:57 +02:00
Tamo
7579a363ab finish the dump reader API, the dump Writer API now needs to be updated 2022-10-27 11:33:57 +02:00
Tamo
0284764b5e start dumping the update files to a known format 2022-10-27 11:33:56 +02:00
Tamo
9117fde712 fix the compat between v3 and v4 2022-10-27 11:33:56 +02:00
Tamo
f622ef9836 remove the unused snapshot files 2022-10-27 11:33:56 +02:00
Tamo
dc0f307d61 remove all warnings 2022-10-27 11:33:56 +02:00
Tamo
06fadb3004 write the compat layer from v2 to v3 2022-10-27 11:33:55 +02:00
Tamo
6107540ad4 remove old compat files 2022-10-27 11:33:55 +02:00
Tamo
7e18f92635 write the dump v2 import 2022-10-27 11:33:55 +02:00
Tamo
43496b97bd make the open function public 2022-10-27 11:33:55 +02:00
Tamo
6f327a00c7 fix some warnings 2022-10-27 11:33:55 +02:00
Tamo
58ef80a2a7 rebase on main 2022-10-27 11:33:54 +02:00
Tamo
22ffbf3676 write and test the compat layer from v3 to v4 2022-10-27 11:33:54 +02:00
Tamo
089106a970 write and test the dump v3 import 2022-10-27 11:33:54 +02:00
Tamo
026f6fb06a fix the test once again 2022-10-27 11:33:54 +02:00
Tamo
efe0a5f422 finish the test for the compatibility between v4 and v5 2022-10-27 11:33:53 +02:00
Tamo
47e0288747 rewrite the compat API to something more generic 2022-10-27 11:33:53 +02:00
Tamo
2f47443458 rename a few things for consistency 2022-10-27 11:33:53 +02:00
Tamo
a8128678a4 implement the dump v4 import 2022-10-27 11:33:53 +02:00
Tamo
c50b44039e add the compat layer between v5 and v6 2022-10-27 11:33:53 +02:00
Tamo
6dcc5851b5 get rid of the trait in most places 2022-10-27 11:33:52 +02:00
Tamo
0972587cfc start writing the compat layer between v5 and v6 2022-10-27 11:33:52 +02:00
Tamo
afd5fe0783 test the dump v5 2022-10-27 11:33:52 +02:00
Tamo
1473a71e33 write the v5 dump import 2022-10-27 11:33:52 +02:00
Tamo
101f55ce8b introduce the index metadata 2022-10-27 11:33:52 +02:00
Tamo
e845cc2b6f fix the tests 2022-10-27 11:33:51 +02:00
Tamo
7bd6f63001 implement the dump reader v6 2022-10-27 11:33:51 +02:00
Tamo
699ae1b190 start implementing a skeleton of the v1 dump reader 2022-10-27 11:33:51 +02:00
Tamo
f041d474a5 move the DumpWriter and Error to their own module 2022-10-27 11:33:51 +02:00
Tamo
ece6c3f6e7 fix the dump export 2022-10-27 11:33:51 +02:00
Tamo
87a6a337aa write a dump exporter 2022-10-27 11:33:51 +02:00
Clément Renault
123f47dbc4 Create the index only if the task has the rights to do so 2022-10-27 11:33:50 +02:00
Clément Renault
068a4b2884 Correctly batch tasks with different index creation rights 2022-10-27 11:33:50 +02:00
Clément Renault
87212cfd20 Use a ControlFlow in the autobatcher function 2022-10-27 11:33:50 +02:00
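`std::ops::ControlFlow` makes the batching fold self-documenting: `Continue` carries a batch that can still grow, `Break` a batch that must be closed. A toy sketch with counters standing in for task kinds, not the actual autobatcher:

```rust
use std::ops::ControlFlow;

// Toy model: merge tasks into the current batch until it is full.
// Continue(batch) = the task was absorbed; Break(batch) = close the batch.
fn accumulate(mut batch: Vec<u32>, task: u32) -> ControlFlow<Vec<u32>, Vec<u32>> {
    if batch.len() == 3 {
        ControlFlow::Break(batch)
    } else {
        batch.push(task);
        ControlFlow::Continue(batch)
    }
}

fn main() {
    let mut batch = Vec::new();
    for task in 0..8u32 {
        match accumulate(batch, task) {
            ControlFlow::Continue(b) => batch = b,
            ControlFlow::Break(b) => {
                println!("processing batch {b:?}");
                // the task that did not fit opens the next batch
                batch = vec![task];
            }
        }
    }
    println!("leftover batch {batch:?}");
}
```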
Kerollmops
f1b1cfdbcc IndexDeletion operations have ClearAll details 2022-10-27 11:33:50 +02:00
Kerollmops
a083c9e452 Only mark the first document-clear task with the number of cleared documents 2022-10-27 11:33:50 +02:00
Kerollmops
b24b13b036 Let the tick function set the Failed status itself 2022-10-27 11:33:50 +02:00
Kerollmops
566c15fb74 Fill an IndexDeletion task with the number of documents removed 2022-10-27 11:33:49 +02:00
Kerollmops
6b3b05fb73 Panic if we encounter a wrong KindWithContent type 2022-10-27 11:33:49 +02:00
Kerollmops
36e5efde0d Update the task statuses 2022-10-27 11:33:49 +02:00
Kerollmops
2fbdd104b8 Implement the IndexDeletion batch operation 2022-10-27 11:33:49 +02:00
Kerollmops
da363a92ac Implement the IndexUpdate batch operation 2022-10-27 11:33:49 +02:00
Kerollmops
0543cba6eb Implement the IndexCreate batch operation 2022-10-27 11:33:48 +02:00
Kerollmops
cf6084151b Make sure that meilisearch-http works without the index wrapper 2022-10-27 11:33:48 +02:00
Kerollmops
c70f375669 Implement ErrorCode on the heed Error 2022-10-27 11:33:48 +02:00
Kerollmops
91e13c2824 Implement ErrorCode on the milli::Error type 2022-10-27 11:33:48 +02:00
Kerollmops
d76634a36c Remove the Index wrapper and use milli::Index directly 2022-10-27 11:33:48 +02:00
Kerollmops
9e8242c57d Remove the IndexRename operation 2022-10-27 11:33:48 +02:00
Kerollmops
5fa214abb1 Move the IndexScheduler to the root of the index-scheduler crate 2022-10-27 11:33:47 +02:00
Kerollmops
9a9e98fb77 Add a TODO about the index creation 2022-10-27 11:33:47 +02:00
Kerollmops
5d21c790ef Make clippy happy 2022-10-27 11:33:47 +02:00
Kerollmops
31de33d5ee Implement a recursive indexation for the index-related operations 2022-10-27 11:33:47 +02:00
Kerollmops
3b343a930e Fix meilisearch-http to use the new DocumentImport batch operation 2022-10-27 11:33:47 +02:00
Kerollmops
07286fcc79 Implement the SettingsAndDocumentImport batch operation 2022-10-27 11:33:47 +02:00
Kerollmops
f68906f5dc Merge both DocumentAddition/Update into one DocumentImport variant 2022-10-27 11:33:46 +02:00
Kerollmops
5174c78f87 Implement the DocumentClear batch operation 2022-10-27 11:33:46 +02:00
Kerollmops
025bb5f616 Implement the DocumentClearAndSettings batch operation 2022-10-27 11:33:46 +02:00
Kerollmops
41ec737e73 Implement the Settings batch operation 2022-10-27 11:33:46 +02:00
Kerollmops
7b4a913704 Implement the DocumentUpdate batch operation 2022-10-27 11:33:46 +02:00
Kerollmops
a6a1043abb Implement the DocumentDeletion batch operation 2022-10-27 11:33:46 +02:00
Tamo
7a0f17c912 remove an old non-working part of the batch execution 2022-10-27 11:33:45 +02:00
Tamo
c2899fe9b2 bring back the IndexMeta and IndexStats in meilisearch-http 2022-10-27 11:33:45 +02:00
Tamo
c759fd6924 fix import bug 2022-10-27 11:33:45 +02:00
Tamo
fba9aa214a remove the create_app macro 2022-10-27 11:33:45 +02:00
Tamo
2c8f1a43e9 get rid of meilisearch-lib 2022-10-27 11:33:44 +02:00
Tamo
0ba1c46e19 fix a deadlock 2022-10-27 11:33:44 +02:00
Tamo
22bfb5a7a0 remove Clone from the IndexScheduler 2022-10-27 11:33:44 +02:00
Tamo
d8d3499aec remove a bunch of comments 2022-10-27 11:33:44 +02:00
Tamo
64e132ce53 move as many fields as possible out of the IndexScheduler 2022-10-27 11:33:44 +02:00
Tamo
9e1f38ec7c move the test function in the test module 2022-10-27 11:33:44 +02:00
Tamo
6f4dcc0c38 start implementing some logic to test the internal states of the scheduler 2022-10-27 11:33:43 +02:00
Tamo
84cd5cef0b fix the tests 2022-10-27 11:33:43 +02:00
Tamo
ae86a8ccd6 slightly refactor the autobatching tests 2022-10-27 11:33:43 +02:00
Tamo
ce2dfecc03 connect the new scheduler to meilisearch-http officially.
I can index documents and run searches
2022-10-27 11:33:43 +02:00
Tamo
cb4feabca2 implements the get_tasks 2022-10-27 11:33:43 +02:00
Tamo
19154e48fe fix all compilation errors 2022-10-27 11:33:42 +02:00
Irevoire
8d51c1f389 wip integrating the scheduler in meilisearch-http 2022-10-27 11:33:42 +02:00
Irevoire
250410495c start integrating the index-scheduler in meilisearch-lib 2022-10-27 11:33:42 +02:00
Irevoire
8f0fd35358 add insta::json for later 2022-10-27 11:33:42 +02:00
Irevoire
8770e07397 I can index documents without meilisearch 2022-10-27 11:33:42 +02:00
Tamo
edd8344dc9 wip 2022-10-27 11:33:42 +02:00
Tamo
e547552702 create the end Batch type for all Index* operations 2022-10-27 11:33:41 +02:00
Tamo
925971809a create the end Batch type for all Document* operations 2022-10-27 11:33:41 +02:00
Tamo
1ea9c0b4c0 write most of the run loop 2022-10-27 11:33:41 +02:00
Tamo
4846a7c501 use faux in the file-store 2022-10-27 11:33:41 +02:00
Tamo
9ff0fe952e split the run function in two 2022-10-27 11:33:41 +02:00
Tamo
a8b18b2c96 fix the register test 2022-10-27 11:33:40 +02:00
Tamo
5436b996ab reduce the size of the snapshots 2022-10-27 11:33:40 +02:00
Tamo
7d0c8a3379 test the register tasks 2022-10-27 11:33:40 +02:00
Tamo
fc098022c7 start integrating the index-scheduler in the meilisearch codebase 2022-10-27 11:33:40 +02:00
Tamo
b816535e33 greatly reduce the number of warnings 2022-10-27 11:33:40 +02:00
Tamo
38e4ffe73c fix smol typo 2022-10-27 11:33:40 +02:00
Tamo
366a344474 get rid of the horrendous spinlock in favor of synchronoise 2022-10-27 11:33:39 +02:00
Tamo
7b6673dc1d implement the index swap in the index mapper 2022-10-27 11:33:39 +02:00
Tamo
03aca2e452 move the index mapping logic into another structure 2022-10-27 11:33:39 +02:00
Tamo
4129783019 migrate the index handling code into a different file + implement the index creation 2022-10-27 11:33:39 +02:00
Tamo
1804416afa reintroduce the uuid mapping for the indexes 2022-10-27 11:33:39 +02:00
Tamo
c97d51a624 add a bunch of tests 2022-10-27 11:33:39 +02:00
Tamo
803f2157af split the DocumentAdditionOrUpdate into two tasks: DocumentAddition and DocumentUpdate 2022-10-27 11:33:38 +02:00
Tamo
b7c5b71a53 starts importing the real tasks 2022-10-27 11:33:38 +02:00
Tamo
5cc8f96237 get rid of the auto-generated mains 2022-10-27 11:33:38 +02:00
Tamo
94e29a9f5f extract the index abstraction out of the index-scheduler into its own module 2022-10-27 11:33:38 +02:00
Tamo
48138c21a9 rename the update-file-store to file-store since it can store any kind of file 2022-10-27 11:33:38 +02:00
Tamo
76597fc382 import the update_file_store in the index-scheduler 2022-10-27 11:33:37 +02:00
Tamo
2afb381f95 get rid of nelson 2022-10-27 11:33:37 +02:00
Tamo
a9844bd4f6 move the update file store to another crate with as few dependencies as possible 2022-10-27 11:33:37 +02:00
Tamo
a0588d6b94 finishes the global skeleton of the auto-batcher 2022-10-27 11:33:37 +02:00
Tamo
b3c9b128d9 polish the global structure of the batch creation 2022-10-27 11:33:37 +02:00
Irevoire
448f44f631 move the autobatcher logic to another file 2022-10-27 11:33:36 +02:00
Tamo
f638774764 add the document format file 2022-10-27 11:33:36 +02:00
Tamo
516860f342 fix the create_new_batch method 2022-10-27 11:33:36 +02:00
Tamo
6b9689a1c0 fix the whole BatchKind logic 2022-10-27 11:33:36 +02:00
Tamo
af0f5d6c0c implements most operations 2022-10-27 11:33:36 +02:00
Tamo
5a7fcf2688 fix a few typos 2022-10-27 11:33:35 +02:00
Tamo
30d2b24689 implements the index deletion, creation and swap 2022-10-27 11:33:35 +02:00
Tamo
72b2e68de4 makes the update getters smoother to use 2022-10-27 11:33:35 +02:00
Tamo
7879189c6b make the project compile again 2022-10-27 11:33:35 +02:00
Tamo
46b8ebcab4 fix the file store 2022-10-27 11:33:35 +02:00
Tamo
fa742f60e8 make the file store entirely synchronous, including the file deletion 2022-10-27 11:33:35 +02:00
Tamo
a7aa92df5f fix most of the index module 2022-10-27 11:33:34 +02:00
Irevoire
d8b8e04ad1 wip porting the index back in the scheduler 2022-10-27 11:33:34 +02:00
Irevoire
fe330e1be9 add a little bit of documentation 2022-10-27 11:33:34 +02:00
Tamo
2c4e5ce8be implements the filter query 2022-10-27 11:33:34 +02:00
Tamo
705af94fd7 add the task to the index db in the register task 2022-10-27 11:33:34 +02:00
Tamo
ed745591e1 split the scheduler into multiple files 2022-10-27 11:33:34 +02:00
Tamo
22d24dba56 implement the get_batch method 2022-10-27 11:33:33 +02:00
Tamo
1a47949063 START THE REWRITE OF THE INDEX SCHEDULER: index & register have been implemented 2022-10-27 11:33:33 +02:00
bors[bot]
ab1800551f Merge #2922
2922: Add new error when using /keys without masterkey set r=ManyTheFish a=vishalsodani

# Pull Request

## Related issue
Fixes #2918 


Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?




Co-authored-by: vishalsodani <vishalsodani@rediffmail.com>
2022-10-27 09:13:11 +00:00
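The behavior in question, as a hedged sketch (error and parameter names here are illustrative, not the actual meilisearch-auth API): the `/keys` route is rejected with a dedicated error whenever no master key was configured.

```rust
// Illustrative names only; not the actual meilisearch-auth API.
#[derive(Debug)]
enum AuthError {
    MissingMasterKey, // the new error introduced by this PR
    InvalidApiKey,
}

fn check_keys_route(master_key: Option<&str>, provided: Option<&str>) -> Result<(), AuthError> {
    match master_key {
        // Without a configured master key, /keys cannot be used at all.
        None => Err(AuthError::MissingMasterKey),
        Some(expected) => match provided {
            Some(key) if key == expected => Ok(()),
            _ => Err(AuthError::InvalidApiKey),
        },
    }
}

fn main() {
    assert!(matches!(check_keys_route(None, Some("key")), Err(AuthError::MissingMasterKey)));
    assert!(check_keys_route(Some("secret"), Some("secret")).is_ok());
}
```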
vishalsodani
689bef7ad2 fmt the code 2022-10-27 14:09:38 +05:30
vishalsodani
89c40c83c3 refactor code to avoid cloning 2022-10-27 14:08:29 +05:30
vishalsodani
03ba830ab2 uncomment tests 2022-10-27 12:59:28 +05:30
vishalsodani
9cf3ff72a3 fix checking of master key as per review comment 2022-10-27 12:56:18 +05:30
bors[bot]
25ec51e783 Merge #2601
2601: Ease search result pagination r=Kerollmops a=ManyTheFish

# Summary
This PR is a prototype enhancing search results pagination (#2577)

# Todo

- [x] Update the API to return the number of pages and allow users to directly choose a page instead of computing an offset
- [x] Change computation of `total_pages` in order to have an exact count
  - [x] compute query tree exhaustively
  - [x] compute distinct exhaustively

# Small Documentation

## Default search query

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman" }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":5,
  "hitsPerPage":20,
  "page":1,
  "totalPages":4,
  "totalHits":66
}
```

## Search query with offset parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman", "offset": 0 }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":3,
  "limit":20,
  "offset":0,
  "estimatedTotalHits":66
}
```

## Search query selecting page with page parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "botman", "page": 2 }'
```

**result**:
```json
{
  "hits":[...],
  "query":"botman",
  "processingTimeMs":5,
  "hitsPerPage":20,
  "page":2,
  "totalPages":4,
  "totalHits":66
}
```
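
For intuition, the `totalPages` values above follow from a ceiling division over the exhaustive hit count. A sketch of that arithmetic, not the actual Meilisearch code (note the zero corner case addressed by later commits in this release):

```rust
// Sketch of the totalPages math implied by the responses above.
fn total_pages(total_hits: u64, hits_per_page: u64) -> u64 {
    if hits_per_page == 0 {
        0 // corner case: hitsPerPage = 0 returns 0 hits, avoid dividing by zero
    } else {
        (total_hits + hits_per_page - 1) / hits_per_page // ceiling division
    }
}

fn main() {
    // 66 hits at 20 per page => pages 1..=4, as in the example responses.
    assert_eq!(total_pages(66, 20), 4);
    assert_eq!(total_pages(66, 0), 0);
}
```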

# Related

fixes #2577

## In charge of the feature

Core: `@ManyTheFish` 
Docs: `@guimachiavelli` 
Integration: `@bidoubiwa` 


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-10-26 16:10:58 +00:00
ManyTheFish
f4021273b8 Add is_finite_pagination method to SearchQuery 2022-10-26 18:08:29 +02:00
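Presumably this method distinguishes the two response shapes documented in #2601 above; a hedged sketch with assumed field names:

```rust
// Assumed field names; the actual SearchQuery has many more fields.
struct SearchQuery {
    page: Option<usize>,
    hits_per_page: Option<usize>,
}

impl SearchQuery {
    // Finite pagination (page/totalPages/totalHits) applies as soon as the
    // caller sent `page` or `hitsPerPage`; otherwise the offset/limit shape
    // with `estimatedTotalHits` is kept.
    fn is_finite_pagination(&self) -> bool {
        self.page.or(self.hits_per_page).is_some()
    }
}

fn main() {
    let q = SearchQuery { page: Some(2), hits_per_page: None };
    assert!(q.is_finite_pagination());
}
```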
vishalsodani
f0ecacb58d add implementation for no master key set and fix tests 2022-10-25 22:41:48 +05:30
ManyTheFish
68c9751d49 Fix clippy 2022-10-25 16:08:07 +02:00
bors[bot]
9aef1031ca Merge #2961
2961: Changed error message for config file r=curquiza a=LunarMarathon

# Pull Request

## Related issue
Fixes #2959 

## What does this PR do?
- Changed the error message as required in the issue: changed "config" to "configuration"

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: LunarMarathon <lmaytan24@gmail.com>
2022-10-25 11:36:47 +00:00
LunarMarathon
bc2a161f62 Change error message for config 2022-10-25 16:16:34 +05:30
bors[bot]
cd3748d412 Merge #2951
2951: Update mini-dashboard to v0.2.3 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
- Update the mini-dashboard with its latest release (v0.2.3)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-10-24 14:30:26 +00:00
Morgane Dubus
079cfc70ae Update mini-dashboard to v0.2.3 2022-10-24 15:20:59 +02:00
ManyTheFish
4afed4de4f stabilize milli 2022-10-24 14:16:41 +02:00
ManyTheFish
a2314cf436 Update analytics 2022-10-24 13:56:26 +02:00
bors[bot]
1d85eeecef Merge #2930
2930: fix wrong variant returned for invalid_api_key_indexes error r=ManyTheFish a=vishalsodani

# Pull Request

## Related issue
Fixes #2924 

## What does this PR do?
- ...

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: vishalsodani <vishalsodani@rediffmail.com>
2022-10-24 11:43:03 +00:00
ManyTheFish
0578aff8c9 Fix the tests 2022-10-20 17:41:13 +02:00
ManyTheFish
1d217cef19 Add some tests 2022-10-20 17:03:07 +02:00
ManyTheFish
c02ae4dfc0 Update roaring 2022-10-19 14:25:43 +02:00
ManyTheFish
506d08a9f4 Update analytics version 2022-10-19 14:05:42 +02:00
ManyTheFish
b423ef72be PROTO: hardcode version and interval for prototype analytics 2022-10-19 14:05:42 +02:00
ManyTheFish
77e718214f Fix pagination analytics 2022-10-19 14:05:42 +02:00
ManyTheFish
e35ea2ad55 Make search return 0 hits when page is set to 0 2022-10-19 14:05:42 +02:00
ManyTheFish
dfa70e47f7 Change page and hitsPerPage corner cases 2022-10-19 14:05:42 +02:00
ManyTheFish
815fba9cc3 Fix zero division when computing pages 2022-10-19 14:05:42 +02:00
ManyTheFish
e9d493c052 Add a totalHits field on finite pagination return 2022-10-19 14:05:42 +02:00
ManyTheFish
0fa5c9b515 Fix tests 2022-10-19 14:05:42 +02:00
ManyTheFish
062d17fbc0 Use a milli version that computes the number of hits exhaustively 2022-10-19 14:05:42 +02:00
ManyTheFish
30410e870f Format all fields in camelCase 2022-10-19 13:58:03 +02:00
ManyTheFish
b1bf6722e8 Update API to fit the proto needs 2022-10-19 13:58:03 +02:00
vishalsodani
1a61209596 fix wrong variant returned for invalid_api_key_indexes error 2022-10-18 19:41:06 +05:30
vishalsodani
1cf6efa740 Add new error when using /keys without masterkey set 2022-10-18 10:48:45 +05:30
212 changed files with 11904 additions and 2083 deletions

View File

@@ -1,7 +1,8 @@
name: Look for flaky tests
on:
workflow_dispatch:
schedule:
- cron: "0 12 * * FRI" # every friday at 12:00PM
- cron: "0 12 * * FRI" # Every Friday at 12:00PM
jobs:
flaky:

View File

@@ -1,4 +1,5 @@
on:
workflow_dispatch:
schedule:
- cron: '0 2 * * *' # Every day at 2:00am
release:
@@ -17,7 +18,7 @@ jobs:
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
- name: Check tag format
if: github.event_name != 'schedule'
if: github.event_name == 'release'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
@@ -28,7 +29,7 @@ jobs:
echo ::set-output name=stable::false
fi
- name: Check release validity
if: github.event_name != 'schedule' && steps.check-tag-format.outputs.stable == 'true'
if: github.event_name == 'release' && steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
publish:
@@ -59,14 +60,14 @@ jobs:
run: cargo build --release --locked
# No need to upload binaries for dry run (cron)
- name: Upload binaries to release
if: github.event_name != 'schedule'
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
file: target/release/${{ matrix.artifact_name }}
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-macos-apple-silicon:
name: Publish binary for macOS silicon
runs-on: ${{ matrix.os }}
@@ -97,7 +98,7 @@ jobs:
args: --release --target ${{ matrix.target }}
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
if: github.event_name != 'schedule'
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}
@@ -153,7 +154,6 @@ jobs:
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
@@ -167,7 +167,7 @@ jobs:
- name: Upload the binary to release
# No need to upload binaries for dry run (cron)
if: github.event_name != 'schedule'
if: github.event_name == 'release'
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.MEILI_BOT_GH_PAT }}

Cargo.lock generated (269 changed lines)
View File

@@ -78,7 +78,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "465a6172cf69b960917811022d8f29bc0b7fa1398bc4f78b3c466673db1213b6"
dependencies = [
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -154,9 +154,9 @@ dependencies = [
[[package]]
name = "actix-utils"
version = "3.0.0"
version = "3.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e491cbaac2e7fc788dfff99ff48ef317e23b3cf63dbaf7aaab6418f40f92aa94"
checksum = "88a1dcdff1466e3c2488e1cb5c36a71822750ad43839937f85d2f4d9f8b705d8"
dependencies = [
"local-waker",
"pin-project-lite",
@@ -213,7 +213,7 @@ dependencies = [
"actix-router",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -297,9 +297,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.65"
version = "1.0.66"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "98161a4e3e2184da77bb14f02184cdd111e83bbbcc9979dfee3c44b9a85f5602"
checksum = "216261ddc8289130e551ddcd5ce8a064710c0d064a4d2895c67151c92b5443f6"
dependencies = [
"backtrace",
]
@@ -332,7 +332,7 @@ checksum = "10f203db73a71dfa2fb6dd22763990fa26f3d2625a6da2da900d23b87d26be27"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -343,7 +343,7 @@ checksum = "1e805d94e6b5001b651426cf4cd446b1ab5f319d27bab5c644f61de0a804360c"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -404,9 +404,9 @@ checksum = "f8fe8f5a8a398345e52358e18ff07cc17a568fbca5c6f73873d3a62056309603"
[[package]]
name = "base64"
version = "0.13.0"
version = "0.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "904dfeac50f3cdaba28fc6f57fdcddb75f49ed61346676a78c4ffe55877802fd"
checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8"
[[package]]
name = "base64ct"
@@ -570,7 +570,7 @@ checksum = "1b9e1f5fa78f69496407a27ae9ed989e3c3b072310286f5ef385525e4cbc24a9"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -699,9 +699,9 @@ dependencies = [
[[package]]
name = "clap"
version = "3.2.22"
version = "3.2.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86447ad904c7fb335a790c9d7fe3d0d971dc523b8ccd1561a520de9a85302750"
checksum = "71655c45cb9845d3270c9d6df84ebe72b4dad3c2ba3f7023ad47c144e4e473a5"
dependencies = [
"atty",
"bitflags",
@@ -716,13 +716,13 @@ dependencies = [
[[package]]
name = "clap"
version = "4.0.17"
version = "4.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06badb543e734a2d6568e19a40af66ed5364360b9226184926f89d229b4b4267"
checksum = "335867764ed2de42325fafe6d18b8af74ba97ee0c590fa016f157535b42ab04b"
dependencies = [
"atty",
"bitflags",
"clap_derive 4.0.13",
"clap_derive 4.0.18",
"clap_lex 0.3.0",
"once_cell",
"strsim",
@@ -739,20 +739,20 @@ dependencies = [
"proc-macro-error",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
name = "clap_derive"
version = "4.0.13"
version = "4.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c42f169caba89a7d512b5418b09864543eeb4d497416c917d7137863bd2076ad"
checksum = "16a1b0f6422af32d5da0c58e2703320f379216ee70198241c84173a8c5ac28f3"
dependencies = [
"heck",
"proc-macro-error",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -781,7 +781,7 @@ checksum = "1df715824eb382e34b7afb7463b0247bf41538aeba731fba05241ecdb5dc3747"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1003,7 +1003,7 @@ dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"strsim",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1014,7 +1014,7 @@ checksum = "ddfc69c5bfcbd2fc09a0f38451d2daf0e372e367986a83906d1b0dbc88134fb5"
dependencies = [
"darling_core",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1035,7 +1035,7 @@ dependencies = [
"darling",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1045,7 +1045,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f0314b72bed045f3a68671b3c86328386762c93f82d98c65c3cb5e5f573dd68"
dependencies = [
"derive_builder_core",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1058,7 +1058,7 @@ dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"rustc_version 0.4.0",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1101,13 +1101,12 @@ dependencies = [
[[package]]
name = "dump"
version = "0.29.0"
version = "0.30.0"
dependencies = [
"anyhow",
"big_s",
"flate2",
"http",
"insta",
"log",
"maplit",
"meili-snap",
@@ -1240,7 +1239,7 @@ checksum = "828de45d0ca18782232dfb8f3ea9cc428e8ced380eb26a520baaacfc70de39ce"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1256,6 +1255,27 @@ dependencies = [
"termcolor",
]
[[package]]
name = "errno"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f639046355ee4f37944e44f60642c6f3a7efa3cf6b78c78a0d989a8ce6c396a1"
dependencies = [
"errno-dragonfly",
"libc",
"winapi",
]
[[package]]
name = "errno-dragonfly"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "fastrand"
version = "1.8.0"
@@ -1284,13 +1304,13 @@ dependencies = [
"darling",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
"uuid 0.8.2",
]
[[package]]
name = "file-store"
version = "0.1.0"
version = "0.30.0"
dependencies = [
"faux",
"tempfile",
@@ -1300,20 +1320,20 @@ dependencies = [
[[package]]
name = "filetime"
version = "0.2.17"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e94a7bbaa59354bc20dd75b67f23e2797b4490e9d6928203fb105c79e448c86c"
checksum = "4b9663d381d07ae25dc88dbdf27df458faa83a9b25336bcac83d5e452b5fc9d3"
dependencies = [
"cfg-if",
"libc",
"redox_syscall",
"windows-sys 0.36.1",
"windows-sys 0.42.0",
]
[[package]]
name = "filter-parser"
version = "0.33.4"
source = "git+https://github.com/meilisearch/milli.git?branch=indexation-abortion#fc03e536153d61da3224698f34fb8c6ee2312c2f"
version = "0.37.0"
source = "git+https://github.com/meilisearch/milli.git?tag=v0.37.0#57c9f03e514436a2cca799b2a28cd89247682be0"
dependencies = [
"nom",
"nom_locate",
@@ -1331,8 +1351,8 @@ dependencies = [
[[package]]
name = "flatten-serde-json"
version = "0.33.4"
source = "git+https://github.com/meilisearch/milli.git?branch=indexation-abortion#fc03e536153d61da3224698f34fb8c6ee2312c2f"
version = "0.37.0"
source = "git+https://github.com/meilisearch/milli.git?tag=v0.37.0#57c9f03e514436a2cca799b2a28cd89247682be0"
dependencies = [
"serde_json",
]
@@ -1414,7 +1434,7 @@ checksum = "bdfb8ce053d86b91919aad980c220b1fb8401a9394410e1c289ed7e66b61835d"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1474,9 +1494,9 @@ checksum = "36d244a08113319b5ebcabad2b8b7925732d15eec46d7e7ac3c11734f3b7a6ad"
[[package]]
name = "getrandom"
version = "0.2.7"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4eb1a864a501629691edf6c15a593b7a51eebaa1e8468e9ddc623de7c9b58ec6"
checksum = "c05aeb6a22b8f62540c194aac980f2115af067bfe15a0734d7277a768d396b31"
dependencies = [
"cfg-if",
"libc",
@@ -1492,7 +1512,7 @@ dependencies = [
"proc-macro-error",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -1522,9 +1542,9 @@ checksum = "9b919933a397b79c37e33b77bb2aa3dc8eb6e165ad809e58ff75bc7db2e34574"
[[package]]
name = "grenad"
version = "0.4.3"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e46ef6273921c5c0cced57632b48c02a968a57f9af929ef78f980409c2e26f2"
checksum = "5232b2d157b7bf63d7abe1b12177039e58db2f29e377517c0cdee1578cca4c93"
dependencies = [
"bytemuck",
"byteorder",
@@ -1533,9 +1553,9 @@ dependencies = [
[[package]]
name = "h2"
version = "0.3.14"
version = "0.3.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ca32592cf21ac7ccab1825cd87f6c9b3d9022c44d086172ed0966bec8af30be"
checksum = "5f9f29bc9dda355256b2916cf526ab02ce0aeaaaf2bad60d65ef3f12f11dd0f4"
dependencies = [
"bytes",
"fnv",
@@ -1596,8 +1616,8 @@ checksum = "2540771e65fc8cb83cd6e8a237f70c319bd5c29f78ed1084ba5d50eeac86f7f9"
[[package]]
name = "heed"
version = "0.12.2"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.3#076971765f4ce09591ed7e19e45ea817580a53e3"
version = "0.12.4"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.4#7a4542bc72dd60ef0f508c89900ea292218223fb"
dependencies = [
"byteorder",
"heed-traits",
@@ -1614,12 +1634,12 @@ dependencies = [
[[package]]
name = "heed-traits"
version = "0.7.0"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.3#076971765f4ce09591ed7e19e45ea817580a53e3"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.4#7a4542bc72dd60ef0f508c89900ea292218223fb"
[[package]]
name = "heed-types"
version = "0.7.2"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.3#076971765f4ce09591ed7e19e45ea817580a53e3"
source = "git+https://github.com/meilisearch/heed?tag=v0.12.4#7a4542bc72dd60ef0f508c89900ea292218223fb"
dependencies = [
"bincode",
"heed-traits",
@@ -1747,7 +1767,7 @@ dependencies = [
[[package]]
name = "index-scheduler"
version = "0.1.0"
version = "0.30.0"
dependencies = [
"anyhow",
"big_s",
@@ -1809,6 +1829,12 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "io-lifetimes"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6e481ccbe3dea62107216d0d1138bb8ad8e5e5c43009a098bd1990272c497b0"
[[package]]
name = "ipnet"
version = "2.5.0"
@@ -1871,8 +1897,8 @@ dependencies = [
[[package]]
name = "json-depth-checker"
version = "0.33.4"
source = "git+https://github.com/meilisearch/milli.git?branch=indexation-abortion#fc03e536153d61da3224698f34fb8c6ee2312c2f"
version = "0.37.0"
source = "git+https://github.com/meilisearch/milli.git?tag=v0.37.0#57c9f03e514436a2cca799b2a28cd89247682be0"
dependencies = [
"serde_json",
]
@@ -1914,9 +1940,9 @@ dependencies = [
[[package]]
name = "libc"
version = "0.2.135"
version = "0.2.137"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68783febc7782c6c5cb401fbda4de5a9898be1762314da0bb2c10ced61f18b0c"
checksum = "fc7fcc620a3bff7cdd7a365be3376c97191aeaccc2a603e600951e452615bf89"
[[package]]
name = "libgit2-sys"
@@ -1988,7 +2014,7 @@ dependencies = [
"anyhow",
"bincode",
"byteorder",
"clap 3.2.22",
"clap 3.2.23",
"csv",
"encoding",
"env_logger",
@@ -2063,7 +2089,7 @@ dependencies = [
"anyhow",
"bincode",
"byteorder",
"clap 3.2.22",
"clap 3.2.23",
"encoding",
"env_logger",
"glob",
@@ -2083,7 +2109,7 @@ dependencies = [
"anyhow",
"bincode",
"byteorder",
"clap 3.2.22",
"clap 3.2.23",
"csv",
"encoding",
"env_logger",
@@ -2103,7 +2129,7 @@ dependencies = [
"anyhow",
"bincode",
"byteorder",
"clap 3.2.22",
"clap 3.2.23",
"csv",
"encoding",
"env_logger",
@@ -2120,10 +2146,16 @@ version = "0.5.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f"
[[package]]
name = "linux-raw-sys"
version = "0.0.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4d2456c373231a208ad294c33dc5bff30051eafd954cd4caae83a712b12854d"
[[package]]
name = "lmdb-rkv-sys"
version = "0.15.0"
source = "git+https://github.com/meilisearch/lmdb-rs#8f0fe377a98d177cabbd056e777778f559df2bb6"
version = "0.15.1"
source = "git+https://github.com/meilisearch/lmdb-rs#5592bf5a812905cf0c633404ef8f8f4057112c65"
dependencies = [
"cc",
"libc",
@@ -2186,7 +2218,7 @@ dependencies = [
"log",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -2208,7 +2240,7 @@ dependencies = [
"once_cell",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -2225,7 +2257,7 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"
[[package]]
name = "meili-snap"
version = "0.1.0"
version = "0.30.0"
dependencies = [
"insta",
"md5",
@@ -2234,7 +2266,7 @@ dependencies = [
[[package]]
name = "meilisearch-auth"
version = "0.29.1"
version = "0.30.0"
dependencies = [
"enum-iterator",
"hmac",
@@ -2251,7 +2283,7 @@ dependencies = [
[[package]]
name = "meilisearch-http"
version = "0.29.1"
version = "0.30.0"
dependencies = [
"actix-cors",
"actix-http",
@@ -2267,7 +2299,7 @@ dependencies = [
"byte-unit",
"bytes",
"cargo_toml",
"clap 4.0.17",
"clap 4.0.18",
"crossbeam-channel",
"dump",
"either",
@@ -2334,12 +2366,14 @@ dependencies = [
[[package]]
name = "meilisearch-types"
version = "0.29.1"
version = "0.30.0"
dependencies = [
"actix-web",
"anyhow",
"csv",
"either",
"enum-iterator",
"flate2",
"fst",
"insta",
"meili-snap",
@@ -2349,6 +2383,7 @@ dependencies = [
"roaring",
"serde",
"serde_json",
"tar",
"thiserror",
"time",
"tokio",
@@ -2381,8 +2416,8 @@ dependencies = [
[[package]]
name = "milli"
version = "0.33.4"
source = "git+https://github.com/meilisearch/milli.git?branch=indexation-abortion#fc03e536153d61da3224698f34fb8c6ee2312c2f"
version = "0.37.0"
source = "git+https://github.com/meilisearch/milli.git?tag=v0.37.0#57c9f03e514436a2cca799b2a28cd89247682be0"
dependencies = [
"bimap",
"bincode",
@@ -2466,14 +2501,14 @@ dependencies = [
[[package]]
name = "mio"
version = "0.8.4"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57ee1c23c7c63b0c9250c339ffdc69255f110b298b901b9f6c82547b7b87caaf"
checksum = "e5d732bc30207a6423068df043e3d02e0735b155ad7ce1a6f76fe2baa5b158de"
dependencies = [
"libc",
"log",
"wasi",
"windows-sys 0.36.1",
"windows-sys 0.42.0",
]
[[package]]
@@ -2712,7 +2747,7 @@ checksum = "478c572c3d73181ff3c2539045f6eb99e5491218eae919370993b890cdbdd98e"
[[package]]
name = "permissive-json-pointer"
version = "0.29.1"
version = "0.30.0"
dependencies = [
"big_s",
"serde_json",
@@ -2748,7 +2783,7 @@ dependencies = [
"pest_meta",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -2814,9 +2849,9 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pkg-config"
version = "0.3.25"
version = "0.3.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1df8c4ec4b0627e53bdf214615ad287367e482558cf84b109250b37464dc03ae"
checksum = "6ac9a59f73473f1b8d852421e59e64809f025994837ef743615c6d0c5b305160"
[[package]]
name = "platform-dirs"
@@ -2842,7 +2877,7 @@ dependencies = [
"proc-macro-error-attr",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
"version_check",
]
@@ -2877,22 +2912,22 @@ dependencies = [
[[package]]
name = "procfs"
version = "0.12.0"
version = "0.14.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0941606b9934e2d98a3677759a971756eb821f75764d0e0d26946d08e74d9104"
checksum = "2dfb6451c91904606a1abe93e83a8ec851f45827fa84273f256ade45dc095818"
dependencies = [
"bitflags",
"byteorder",
"hex",
"lazy_static",
"libc",
"rustix",
]
[[package]]
name = "prometheus"
version = "0.13.2"
version = "0.13.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "45c8babc29389186697fe5a2a4859d697825496b83db5d0b65271cdc0488e88c"
checksum = "449811d15fbdf5ceb5c1144416066429cf82316e2ec8ce0c1f6f8a02e7bbcf8c"
dependencies = [
"cfg-if",
"fnv",
@@ -3216,6 +3251,20 @@ dependencies = [
"semver 1.0.14",
]
[[package]]
name = "rustix"
version = "0.35.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "985947f9b6423159c4726323f373be0a21bdb514c5af06a849cb3d2dce2d01e8"
dependencies = [
"bitflags",
"errno",
"io-lifetimes",
"libc",
"linux-raw-sys",
"windows-sys 0.36.1",
]
[[package]]
name = "rustls"
version = "0.20.7"
@@ -3323,9 +3372,9 @@ checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
[[package]]
name = "serde"
version = "1.0.145"
version = "1.0.147"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "728eb6351430bccb993660dfffc5a72f91ccc1295abaa8ce19b27ebe4f75568b"
checksum = "d193d69bae983fc11a79df82342761dfbf28a99fc8d203dca4c3c1b590948965"
dependencies = [
"serde_derive",
]
@@ -3341,13 +3390,13 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.145"
version = "1.0.147"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81fa1584d3d1bcacd84c277a0dfe21f5b0f6accf4a23d04d4c6d61f1af522b4c"
checksum = "4f1d362ca8fc9c3e3a7484440752472d68a6caa98f1ab81d99b5dfe517cec852"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -3555,9 +3604,9 @@ dependencies = [
[[package]]
name = "syn"
version = "1.0.102"
version = "1.0.103"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fcd952facd492f9be3ef0d0b7032a6e442ee9b361d4acc2b1d0c4aaa5f613a1"
checksum = "a864042229133ada95abf3b54fdc62ef5ccabe9515b64717bcb9a1919e59445d"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
@@ -3581,15 +3630,15 @@ checksum = "f36bdaa60a83aca3921b5259d5400cbf5e90fc51931376a9bd4a0eb79aa7210f"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
"unicode-xid 0.2.4",
]
[[package]]
name = "sysinfo"
version = "0.26.5"
version = "0.26.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ade661fa5e048ada64ad7901713301c21d2dbc5b65ee7967de8826c111452960"
checksum = "c6d0dedf2e65d25b365c588382be9dc3a3ee4b0ed792366cf722d174c359d948"
dependencies = [
"cfg-if",
"core-foundation-sys",
@@ -3655,9 +3704,9 @@ dependencies = [
[[package]]
name = "textwrap"
version = "0.15.1"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "949517c0cf1bf4ee812e2e07e08ab448e3ae0d23472aee8a06c985f0c8815b16"
checksum = "222a222a5bfe1bba4a77b45ec488a741b3cb8872e5e499451fd7d0129c9c7c3d"
[[package]]
name = "thiserror"
@@ -3676,27 +3725,37 @@ checksum = "982d17546b47146b28f7c22e3d08465f6b8903d0ea13c1660d9d84a6e7adcdbb"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
name = "time"
version = "0.3.15"
version = "0.3.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d634a985c4d4238ec39cacaed2e7ae552fbd3c476b552c1deac3021b7d7eaf0c"
checksum = "0fab5c8b9980850e06d92ddbe3ab839c062c801f3927c0fb8abd6fc8e918fbca"
dependencies = [
"itoa 1.0.4",
"libc",
"num_threads",
"serde",
"time-core",
"time-macros",
]
[[package]]
name = "time-macros"
version = "0.2.4"
name = "time-core"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42657b1a6f4d817cda8e7a0ace261fe0cc946cf3a80314390b22cc61ae080792"
checksum = "2e153e1f1acaef8acc537e68b44906d2db6436e2b35ac2c6b42640fff91f00fd"
[[package]]
name = "time-macros"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65bb801831d812c562ae7d2bfb531f26e66e4e1f6b17307ba4149c5064710e5b"
dependencies = [
"time-core",
]
[[package]]
name = "tinyvec"
@@ -3741,7 +3800,7 @@ checksum = "9724f9a975fb987ef7a3cd9be0350edcbe130698af5b8f7a631e23d42d052484"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
]
[[package]]
@@ -4036,7 +4095,7 @@ dependencies = [
"once_cell",
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
"wasm-bindgen-shared",
]
@@ -4070,7 +4129,7 @@ checksum = "07bc0c051dc5f23e307b13285f9d75df86bfdf816c5721e573dec1f9b8aa193c"
dependencies = [
"proc-macro2 1.0.47",
"quote 1.0.21",
"syn 1.0.102",
"syn 1.0.103",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -4310,7 +4369,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d498dbd1fd7beb83c86709ae1c33ca50942889473473d287d56ce4770a18edfb"
dependencies = [
"proc-macro2 1.0.47",
"syn 1.0.102",
"syn 1.0.103",
"synstructure",
]

View File

@@ -1,6 +1,6 @@
[package]
name = "dump"
version = "0.29.0"
version = "0.30.0"
edition = "2021"
[dependencies]
@@ -23,7 +23,6 @@ uuid = { version = "1.1.2", features = ["serde", "v4"] }
[dev-dependencies]
big_s = "1.0.2"
insta = { version = "1.19.1", features = ["json", "redactions"] }
maplit = "1.0.2"
meili-snap = { path = "../meili-snap" }
meilisearch-types = { path = "../meilisearch-types" }

View File

@@ -5,7 +5,7 @@ use meilisearch_types::error::ResponseError;
use meilisearch_types::keys::Key;
use meilisearch_types::milli::update::IndexDocumentsMethod;
use meilisearch_types::settings::Unchecked;
use meilisearch_types::tasks::{Details, KindWithContent, Status, Task, TaskId};
use meilisearch_types::tasks::{Details, IndexSwap, KindWithContent, Status, Task, TaskId};
use meilisearch_types::InstanceUid;
use roaring::RoaringBitmap;
use serde::{Deserialize, Serialize};
@@ -87,7 +87,7 @@ pub struct TaskDump {
pub finished_at: Option<OffsetDateTime>,
}
// A `Kind` specific version made for the dump. If modified you may break the dump.
#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum KindDump {
@@ -114,7 +114,7 @@ pub enum KindDump {
primary_key: Option<String>,
},
IndexSwap {
swaps: Vec<(String, String)>,
swaps: Vec<IndexSwap>,
},
TaskCancelation {
query: String,
@@ -124,12 +124,11 @@ pub enum KindDump {
query: String,
tasks: RoaringBitmap,
},
DumpExport {
dump_uid: String,
DumpCreation {
keys: Vec<Key>,
instance_uid: Option<InstanceUid>,
},
Snapshot,
SnapshotCreation,
}
impl From<Task> for TaskDump {
@@ -188,10 +187,10 @@ impl From<KindWithContent> for KindDump {
KindWithContent::TaskDeletion { query, tasks } => {
KindDump::TasksDeletion { query, tasks }
}
KindWithContent::DumpExport { dump_uid, keys, instance_uid } => {
KindDump::DumpExport { dump_uid, keys, instance_uid }
KindWithContent::DumpCreation { keys, instance_uid } => {
KindDump::DumpCreation { keys, instance_uid }
}
KindWithContent::Snapshot => KindDump::Snapshot,
KindWithContent::SnapshotCreation => KindDump::SnapshotCreation,
}
}
}
@@ -417,6 +416,7 @@ pub(crate) mod test {
}
#[test]
#[ignore]
fn test_creating_and_read_dump() {
let mut file = create_test_dump();
let mut dump = DumpReader::open(&mut file).unwrap();

View File

@@ -375,11 +375,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v2_v3() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -425,7 +427,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"f43338ecceeddd1ce13ffd55438b2347");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"54b3d7a0d96de35427d867fa17164a99");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -440,7 +442,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"0d76c745cb334e8c20d6d6a14df733e1");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"ae7c5ade2243a553152dab2f354e9095");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -455,7 +457,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"09a2f7c571729f70f4cd93e24e8e3f28");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"1be82b894556d23953af557b6a328a58");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -470,7 +472,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"09a2f7c571729f70f4cd93e24e8e3f28");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"1be82b894556d23953af557b6a328a58");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -341,11 +341,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v3_v4() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -395,7 +397,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"ea46dd6b58c5e1d65c1c8159a32695ea");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"d3402aff19b90acea9e9a07c466690aa");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -410,7 +412,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4df4074ef6bfb71e8dc66d08ff8c9dfd");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"687aaab250f01b55d57bc69aa313b581");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -425,7 +427,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"24eaf4046d9718dabff36f35103352d4");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"cd9fedbd7e3492831a94da62c90013ea");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -440,7 +442,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"24eaf4046d9718dabff36f35103352d4");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"cd9fedbd7e3492831a94da62c90013ea");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -377,11 +377,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v4_v5() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -428,7 +430,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"ed1a6977a832b1ab49cd5068b77ce498");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"26947283836ee4cdf0974f82efcc5332");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -443,7 +445,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"70681af1d52411218036fbd5a9b94ab5");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"156871410d17e23803d0c90ddc6a66cb");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
@@ -458,7 +460,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7019bb8f146004dcdd91fc3c3254b742");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"69c9916142612cf4a2da9b9ed9455e9e");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -119,8 +119,9 @@ impl CompatV5ToV6 {
allow_index_creation,
settings: Box::new(settings.into()),
},
v5::tasks::TaskContent::Dump { uid } => {
v6::Kind::DumpExport { dump_uid: uid, keys: keys.clone(), instance_uid }
v5::tasks::TaskContent::Dump { uid: _ } => {
// in v6 we compute the dump_uid from the started_at processing time
v6::Kind::DumpCreation { keys: keys.clone(), instance_uid }
}
},
canceled_by: None,
@@ -128,7 +129,7 @@ impl CompatV5ToV6 {
v5::Details::DocumentAddition { received_documents, indexed_documents } => {
v6::Details::DocumentAdditionOrUpdate {
received_documents: received_documents as u64,
indexed_documents: indexed_documents.map(|i| i as u64),
indexed_documents,
}
}
v5::Details::Settings { settings } => {
@@ -141,13 +142,15 @@ impl CompatV5ToV6 {
received_document_ids,
deleted_documents,
} => v6::Details::DocumentDeletion {
received_document_ids,
provided_ids: received_document_ids,
deleted_documents,
},
v5::Details::ClearAll { deleted_documents } => {
v6::Details::ClearAll { deleted_documents }
}
v5::Details::Dump { dump_uid } => v6::Details::Dump { dump_uid },
v5::Details::Dump { dump_uid } => {
v6::Details::Dump { dump_uid: Some(dump_uid) }
}
}),
error: task_view.error.map(|e| e.into()),
enqueued_at: task_view.enqueued_at,
@@ -393,11 +396,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn compat_v5_v6() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -415,7 +420,7 @@ pub(crate) mod test {
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"5d5d839c70adf763d0dc2e0b46c59828");
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"42d4200cf6d92a6449989ca48cd8e28a");
assert_eq!(update_files.len(), 22);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_some()); // the enqueued document addition
@@ -445,7 +450,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"9896a66a399c24a0f4f6a3c8563cd14a");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"8e5cadabf74aebe1160bf51c3d489efe");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -460,7 +465,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"d0dc7efd1360f95fce57d7931a70b7c9");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4894ac1e74b9e1069ed5ee262b7a1aca");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
@@ -475,7 +480,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"59c8e30c2022897987ea7b4394167b06");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"054dbf08a79e08bb9becba6f5d090f13");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -184,9 +184,12 @@ impl From<CompatIndexV5ToV6> for DumpIndexReader {
pub(crate) mod test {
use std::fs::File;
use meili_snap::insta;
use super::*;
#[test]
#[ignore]
fn import_dump_v5() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
@@ -198,7 +201,7 @@ pub(crate) mod test {
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"5d5d839c70adf763d0dc2e0b46c59828");
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"42d4200cf6d92a6449989ca48cd8e28a");
assert_eq!(update_files.len(), 22);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_some()); // the enqueued document addition
@@ -228,7 +231,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"9896a66a399c24a0f4f6a3c8563cd14a");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"8e5cadabf74aebe1160bf51c3d489efe");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -243,7 +246,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"d0dc7efd1360f95fce57d7931a70b7c9");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4894ac1e74b9e1069ed5ee262b7a1aca");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
@@ -258,13 +261,14 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"59c8e30c2022897987ea7b4394167b06");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"054dbf08a79e08bb9becba6f5d090f13");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v4() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
@@ -305,7 +309,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"ed1a6977a832b1ab49cd5068b77ce498");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"1f9da51a4518166fb440def5437eafdb");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -320,7 +324,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"70681af1d52411218036fbd5a9b94ab5");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"488816aba82c1bd65f1609630055c611");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
@@ -335,13 +339,14 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7019bb8f146004dcdd91fc3c3254b742");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7b4f66dad597dc651650f35fe34be27f");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v3() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
@@ -383,7 +388,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"1a5ed16d00e6163662d9d7ffe400c5d0");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"855f3165dec609b919171ff83f82b364");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -398,7 +403,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"9a6b511669b8f53d193d2f0bd1671baa");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"43e0bf1746c3ea1d64c1e10ea544c190");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -413,7 +418,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"4fdf905496d9a511800ff523728728ac");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"5fd06a5038f49311600379d43412b655");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -428,13 +433,14 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"4fdf905496d9a511800ff523728728ac");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"5fd06a5038f49311600379d43412b655");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
}
#[test]
#[ignore]
fn import_dump_v2() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
@@ -476,7 +482,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"a7d4fed93bfc91d0f1126d3371abf48e");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"b15b71f56dd082d8e8ec5182e688bf36");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -491,7 +497,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"e79c3cc4eef44bd22acfb60957b459d9");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"5389153ddf5527fa79c54b6a6e9c21f6");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -506,7 +512,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"7917f954b6f345336073bb155540ad6d");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"8aebab01301d266acf3e18dd449c008f");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -521,7 +527,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7917f954b6f345336073bb155540ad6d");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"8aebab01301d266acf3e18dd449c008f");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -205,11 +205,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v2() {
let dump = File::open("tests/assets/v2.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -255,7 +257,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"b4814eab5e73e2dcfc90aad50aa583d1");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"c41bf7315d404da46c99b9e3a2a3cc1e");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -270,7 +272,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"59dd69f590635a58f3d99edc9e1fa21f");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"3d1d96c85b6bab46e957bc8d2532a910");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -285,7 +287,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"ac041085004c43373fe90dc48f5c23ab");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"4f04afc086828d8da0da57a7d598ddba");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -300,7 +302,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"ac041085004c43373fe90dc48f5c23ab");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"4f04afc086828d8da0da57a7d598ddba");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -1,4 +1,4 @@
use std::collections::{BTreeMap, BTreeSet, HashSet};
use std::collections::{BTreeMap, BTreeSet};
use std::marker::PhantomData;
use std::str::FromStr;
@@ -60,7 +60,7 @@ pub struct Settings<T> {
deserialize_with = "deserialize_some",
skip_serializing_if = "Option::is_none"
)]
pub filterable_attributes: Option<Option<HashSet<String>>>,
pub filterable_attributes: Option<Option<BTreeSet<String>>>,
#[serde(
default,

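Changing `filterable_attributes` from a `HashSet` to a `BTreeSet` presumably makes the serialized settings order-stable, which is what the updated snapshot hashes above depend on. A minimal, standalone sketch of the difference (toy values, not the real `Settings` type):

    use std::collections::BTreeSet;

    fn main() {
        // A BTreeSet always iterates (and therefore serializes) in sorted order,
        // whereas a HashSet's iteration order is unspecified and may vary run to run.
        let filterable: BTreeSet<&str> = ["genre", "author"].into_iter().collect();
        assert_eq!(filterable.into_iter().collect::<Vec<_>>(), vec!["author", "genre"]);
    }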
View File

@@ -221,11 +221,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v3() {
let dump = File::open("tests/assets/v3.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -271,7 +273,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"7460d4b242b5c8b1bda223f63bbbf349");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"f309b009608cc0b770b2f74516f92647");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"548284a84de510f71e88e6cdea495cf5");
@@ -286,7 +288,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"d83ab8e79bb44595667d6ce3e6629a4f");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"95dff22ba3a7019616c12df9daa35e1e");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d153b5a81d8b3cdcbe1dec270b574022");
@@ -301,7 +303,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"44d3b5a3b3aa6cd950373ff751d05bb7");
meili_snap::snapshot_hash!(format!("{:#?}", movies2.settings()), @"1dafc4b123e3a8e14a889719cc01f6e5");
let documents = movies2.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 0);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"d751713988987e9331980363e24189ce");
@@ -316,7 +318,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"44d3b5a3b3aa6cd950373ff751d05bb7");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"1dafc4b123e3a8e14a889719cc01f6e5");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -213,11 +213,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v4() {
let dump = File::open("tests/assets/v4.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -267,7 +269,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"ace6546a6eb856ecb770b2409975c01d");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"65b139c6b9fc251e187073c8557803e2");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -282,7 +284,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"4dfa34fa34f2c03259482e1e4555faa8");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"06aa1988493485d9b2cda7c751e6bb15");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 110);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"786022a66ecb992c8a2a60fee070a5ab");
@@ -297,7 +299,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"1aa241a5e3afd8c85a4e7b9db42362d7");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"7d722fc2629eaa45032ed3deb0c9b4ce");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -255,11 +255,13 @@ pub(crate) mod test {
use std::io::BufReader;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use tempfile::TempDir;
use super::*;
#[test]
#[ignore]
fn read_dump_v5() {
let dump = File::open("tests/assets/v5.dump").unwrap();
let dir = TempDir::new().unwrap();
@@ -310,7 +312,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"9896a66a399c24a0f4f6a3c8563cd14a");
meili_snap::snapshot_hash!(format!("{:#?}", products.settings()), @"b392b928dab63468318b2bdaad844c5a");
let documents = products.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"b01c8371aea4c7171af0d4d846a2bdca");
@@ -325,7 +327,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"d0dc7efd1360f95fce57d7931a70b7c9");
meili_snap::snapshot_hash!(format!("{:#?}", movies.settings()), @"2f881248b7c3623e2ba2885dbf0b2c18");
let documents = movies.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 200);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"e962baafd2fbae4cdd14e876053b0c5a");
@@ -340,7 +342,7 @@ pub(crate) mod test {
}
"###);
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"59c8e30c2022897987ea7b4394167b06");
meili_snap::snapshot_hash!(format!("{:#?}", spells.settings()), @"ade154e63ab713de67919892917d3d9d");
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");

View File

@@ -1,7 +1,6 @@
use std::fs::{self, File};
use std::io::{BufRead, BufReader};
use std::io::{BufRead, BufReader, ErrorKind};
use std::path::Path;
use std::str::FromStr;
pub use meilisearch_types::milli;
use tempfile::TempDir;
@@ -44,7 +43,7 @@ pub type Code = meilisearch_types::error::Code;
pub struct V6Reader {
dump: TempDir,
instance_uid: Uuid,
instance_uid: Option<Uuid>,
metadata: Metadata,
tasks: BufReader<File>,
keys: BufReader<File>,
@@ -53,8 +52,11 @@ pub struct V6Reader {
impl V6Reader {
pub fn open(dump: TempDir) -> Result<Self> {
let meta_file = fs::read(dump.path().join("metadata.json"))?;
let instance_uid = fs::read_to_string(dump.path().join("instance_uid.uuid"))?;
let instance_uid = Uuid::from_str(&instance_uid)?;
let instance_uid = match fs::read_to_string(dump.path().join("instance_uid.uuid")) {
Ok(uuid) => Some(Uuid::parse_str(&uuid)?),
Err(e) if e.kind() == ErrorKind::NotFound => None,
Err(e) => return Err(e.into()),
};
Ok(V6Reader {
metadata: serde_json::from_reader(&*meta_file)?,
@@ -74,7 +76,7 @@ impl V6Reader {
}
pub fn instance_uid(&self) -> Result<Option<Uuid>> {
Ok(Some(self.instance_uid))
Ok(self.instance_uid)
}
pub fn indexes(&self) -> Result<Box<dyn Iterator<Item = Result<V6IndexReader>> + '_>> {
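The change above turns a hard failure on a missing `instance_uid.uuid` into a clean `None`. The read-a-file-that-may-be-absent pattern generalizes well; a minimal sketch with a hypothetical `read_optional_file` helper:

    use std::fs;
    use std::io::ErrorKind;
    use std::path::Path;

    // Returns Ok(None) when the file does not exist, instead of bubbling up an error.
    fn read_optional_file(path: &Path) -> std::io::Result<Option<String>> {
        match fs::read_to_string(path) {
            Ok(contents) => Ok(Some(contents)),
            Err(e) if e.kind() == ErrorKind::NotFound => Ok(None),
            Err(e) => Err(e),
        }
    }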

View File

@@ -191,6 +191,7 @@ pub(crate) mod test {
use std::str::FromStr;
use flate2::bufread::GzDecoder;
use meili_snap::insta;
use meilisearch_types::settings::Unchecked;
use super::*;
@@ -220,9 +221,9 @@ pub(crate) mod test {
if aft.is_dir() && bft.is_dir() {
a.file_name().cmp(&b.file_name())
} else if aft.is_file() {
} else if aft.is_file() && bft.is_dir() {
std::cmp::Ordering::Greater
} else if bft.is_file() {
} else if bft.is_file() && aft.is_dir() {
std::cmp::Ordering::Less
} else {
a.file_name().cmp(&b.file_name())
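The old comparator was not a total order: when both entries were files, the first `is_file` branch still returned `Greater`, so two files never reached the name comparison. A standalone sketch of the corrected directories-first ordering (plain booleans stand in for `FileType`):

    use std::cmp::Ordering;

    // Sort directories before files; within each group, sort by name.
    fn dir_first(a_is_dir: bool, a_name: &str, b_is_dir: bool, b_name: &str) -> Ordering {
        match (a_is_dir, b_is_dir) {
            (true, false) => Ordering::Less,    // directory before file
            (false, true) => Ordering::Greater, // file after directory
            _ => a_name.cmp(b_name),            // same kind: fall back to the name
        }
    }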
@@ -258,6 +259,7 @@ pub(crate) mod test {
}
#[test]
#[ignore]
fn test_creating_dump() {
let file = create_test_dump();
let mut file = BufReader::new(file);
@@ -276,16 +278,16 @@ pub(crate) mod test {
.
├---- indexes/
│ └---- doggos/
│ │ ├---- settings.json
│ │ ├---- documents.jsonl
│ │ ├---- metadata.json
│ │ └---- documents.jsonl
│ │ └---- settings.json
├---- tasks/
│ ├---- update_files/
│ │ └---- 1.jsonl
│ └---- queue.jsonl
├---- instance_uid.uuid
├---- keys.jsonl
└---- metadata.json
└---- instance_uid.uuid
└---- metadata.json
"###);
// ==== checking the top level infos

View File

@@ -1,6 +1,6 @@
[package]
name = "file-store"
version = "0.1.0"
version = "0.30.0"
edition = "2021"
[dependencies]

View File

@@ -74,11 +74,16 @@ impl FileStore {
/// Returns the file corresponding to the requested uuid.
pub fn get_update(&self, uuid: Uuid) -> Result<StdFile> {
let path = self.path.join(uuid.to_string());
let path = self.get_update_path(uuid);
let file = StdFile::open(path)?;
Ok(file)
}
/// Returns the path that corresponds to this uuid; the path may not exist.
pub fn get_update_path(&self, uuid: Uuid) -> PathBuf {
self.path.join(uuid.to_string())
}
/// Copies the content of the update file pointed to by `uuid` to the `dst` directory.
pub fn snapshot(&self, uuid: Uuid, dst: impl AsRef<Path>) -> Result<()> {
let src = self.path.join(uuid.to_string());
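Splitting `get_update_path` out of `get_update` lets callers such as the snapshot code copy an update file without opening it first. A self-contained sketch of that usage (free functions stand in for the `FileStore` methods):

    use std::fs;
    use std::path::{Path, PathBuf};
    use uuid::Uuid;

    // Stand-in for FileStore::get_update_path: the store only needs to expose the path.
    fn update_path(base: &Path, uuid: Uuid) -> PathBuf {
        base.join(uuid.to_string())
    }

    // Copy the update file for `uuid` into a snapshot directory without opening it.
    fn copy_update(base: &Path, uuid: Uuid, dst_dir: &Path) -> std::io::Result<u64> {
        fs::copy(update_path(base, uuid), dst_dir.join(uuid.to_string()))
    }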

View File

@@ -1,6 +1,6 @@
[package]
name = "index-scheduler"
version = "0.1.0"
version = "0.30.0"
edition = "2021"
[dependencies]

View File

@@ -59,8 +59,8 @@ impl From<KindWithContent> for AutobatchKind {
KindWithContent::IndexSwap { .. } => AutobatchKind::IndexSwap,
KindWithContent::TaskCancelation { .. }
| KindWithContent::TaskDeletion { .. }
| KindWithContent::DumpExport { .. }
| KindWithContent::Snapshot => {
| KindWithContent::DumpCreation { .. }
| KindWithContent::SnapshotCreation => {
panic!("The autobatcher should never be called with tasks that don't apply to an index.")
}
}
@@ -433,6 +433,7 @@ pub fn autobatch(
#[cfg(test)]
mod tests {
use meilisearch_types::tasks::IndexSwap;
use uuid::Uuid;
use super::*;
@@ -492,7 +493,9 @@ mod tests {
}
fn idx_swap() -> KindWithContent {
KindWithContent::IndexSwap { swaps: vec![(String::from("doggo"), String::from("catto"))] }
KindWithContent::IndexSwap {
swaps: vec![IndexSwap { indexes: (String::from("doggo"), String::from("catto")) }],
}
}
#[test]

View File

@@ -17,29 +17,32 @@ tasks individually, but should be much faster since we are only performing
one indexing operation.
*/
use std::collections::HashSet;
use std::fs::File;
use std::collections::{BTreeSet, HashSet};
use std::ffi::OsStr;
use std::fs::{self, File};
use std::io::BufWriter;
use dump::IndexMetadata;
use log::{debug, error, info};
use meilisearch_types::heed::{RoTxn, RwTxn};
use meilisearch_types::milli::documents::{obkv_to_object, DocumentsBatchReader};
use meilisearch_types::milli::heed::CompactionOption;
use meilisearch_types::milli::update::{
DocumentAdditionResult, DocumentDeletionResult, IndexDocumentsConfig, IndexDocumentsMethod,
Settings as MilliSettings,
};
use meilisearch_types::milli::{self, BEU32};
use meilisearch_types::settings::{apply_settings_to_builder, Settings, Unchecked};
use meilisearch_types::tasks::{Details, Kind, KindWithContent, Status, Task};
use meilisearch_types::Index;
use meilisearch_types::tasks::{Details, IndexSwap, Kind, KindWithContent, Status, Task};
use meilisearch_types::{compression, Index, VERSION_FILE_NAME};
use roaring::RoaringBitmap;
use time::macros::format_description;
use time::OffsetDateTime;
use uuid::Uuid;
use crate::autobatcher::BatchKind;
use crate::autobatcher::{self, BatchKind};
use crate::utils::{self, swap_index_uid_in_task};
use crate::{Error, IndexScheduler, Query, Result, TaskId};
use crate::{Error, IndexScheduler, ProcessingTasks, Result, TaskId};
/// Represents a combination of tasks that can all be processed at the same time.
///
@@ -48,15 +51,39 @@ use crate::{Error, IndexScheduler, Query, Result, TaskId};
/// be processed.
#[derive(Debug)]
pub(crate) enum Batch {
TaskCancelation(Task),
TaskCancelation {
/// The task cancelation itself.
task: Task,
/// The date and time at which the previously processing tasks started.
previous_started_at: OffsetDateTime,
/// The list of tasks that were processing when this task cancelation appeared.
previous_processing_tasks: RoaringBitmap,
},
TaskDeletion(Task),
Snapshot(Vec<Task>),
SnapshotCreation(Vec<Task>),
Dump(Task),
IndexOperation { op: IndexOperation, must_create_index: bool },
IndexCreation { index_uid: String, primary_key: Option<String>, task: Task },
IndexUpdate { index_uid: String, primary_key: Option<String>, task: Task },
IndexDeletion { index_uid: String, tasks: Vec<Task>, index_has_been_created: bool },
IndexSwap { task: Task },
IndexOperation {
op: IndexOperation,
must_create_index: bool,
},
IndexCreation {
index_uid: String,
primary_key: Option<String>,
task: Task,
},
IndexUpdate {
index_uid: String,
primary_key: Option<String>,
task: Task,
},
IndexDeletion {
index_uid: String,
tasks: Vec<Task>,
index_has_been_created: bool,
},
IndexSwap {
task: Task,
},
}
/// A [batch](Batch) that combines multiple tasks operating on an index.
@@ -82,7 +109,7 @@ pub(crate) enum IndexOperation {
},
Settings {
index_uid: String,
// TODO what's that boolean, does it mean that it removes things or what?
// The boolean indicates if it's a settings deletion or creation.
settings: Vec<(bool, Settings<Unchecked>)>,
tasks: Vec<Task>,
},
@@ -90,7 +117,7 @@ pub(crate) enum IndexOperation {
index_uid: String,
cleared_tasks: Vec<Task>,
// TODO what's that boolean, does it mean that it removes things or what?
// The boolean indicates if it's a settings deletion or creation.
settings: Vec<(bool, Settings<Unchecked>)>,
settings_tasks: Vec<Task>,
},
@@ -103,7 +130,7 @@ pub(crate) enum IndexOperation {
content_files: Vec<Uuid>,
document_import_tasks: Vec<Task>,
// TODO what's that boolean, does it mean that it removes things or what?
// The boolean indicates if it's a settings deletion or creation.
settings: Vec<(bool, Settings<Unchecked>)>,
settings_tasks: Vec<Task>,
},
@@ -113,12 +140,12 @@ impl Batch {
/// Return the task ids associated with this batch.
pub fn ids(&self) -> Vec<TaskId> {
match self {
Batch::TaskCancelation(task)
Batch::TaskCancelation { task, .. }
| Batch::TaskDeletion(task)
| Batch::Dump(task)
| Batch::IndexCreation { task, .. }
| Batch::IndexUpdate { task, .. } => vec![task.uid],
Batch::Snapshot(tasks) | Batch::IndexDeletion { tasks, .. } => {
Batch::SnapshotCreation(tasks) | Batch::IndexDeletion { tasks, .. } => {
tasks.iter().map(|task| task.uid).collect()
}
Batch::IndexOperation { op, .. } => match op {
@@ -384,73 +411,87 @@ impl IndexScheduler {
/// 4. We get the *next* dump to process.
/// 5. We get the *next* tasks to process for a specific index.
pub(crate) fn create_next_batch(&self, rtxn: &RoTxn) -> Result<Option<Batch>> {
#[cfg(test)]
self.maybe_fail(crate::tests::FailureLocation::InsideCreateBatch)?;
let enqueued = &self.get_status(rtxn, Status::Enqueued)?;
let to_cancel = self.get_kind(rtxn, Kind::TaskCancelation)? & enqueued;
// 1. we get the last task to cancel.
if let Some(task_id) = to_cancel.max() {
return Ok(Some(Batch::TaskCancelation(
self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?,
)));
// We retrieve the tasks that were processing before this task cancelation started.
// We must *not* reset the processing tasks before calling this method.
let ProcessingTasks { started_at, processing } =
&*self.processing_tasks.read().unwrap();
return Ok(Some(Batch::TaskCancelation {
task: self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?,
previous_started_at: *started_at,
previous_processing_tasks: processing.clone(),
}));
}
// 2. we get the next task to delete
let to_delete = self.get_kind(rtxn, Kind::TaskDeletion)? & enqueued;
if let Some(task_id) = to_delete.min() {
let task = self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
return Ok(Some(Batch::TaskDeletion(task)));
}
// 3. we batch the snapshot.
let to_snapshot = self.get_kind(rtxn, Kind::Snapshot)? & enqueued;
let to_snapshot = self.get_kind(rtxn, Kind::SnapshotCreation)? & enqueued;
if !to_snapshot.is_empty() {
return Ok(Some(Batch::Snapshot(self.get_existing_tasks(rtxn, to_snapshot)?)));
return Ok(Some(Batch::SnapshotCreation(self.get_existing_tasks(rtxn, to_snapshot)?)));
}
// 4. we batch the dumps.
let to_dump = self.get_kind(rtxn, Kind::DumpExport)? & enqueued;
let to_dump = self.get_kind(rtxn, Kind::DumpCreation)? & enqueued;
if let Some(to_dump) = to_dump.min() {
return Ok(Some(Batch::Dump(
self.get_task(rtxn, to_dump)?.ok_or(Error::CorruptedTaskQueue)?,
)));
}
// 5. We take the next task and try to batch all the tasks associated with this index.
if let Some(task_id) = enqueued.min() {
let task = self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
// 5. We make a batch from the unprioritised tasks. Start by taking the next enqueued task.
let task_id = if let Some(task_id) = enqueued.min() { task_id } else { return Ok(None) };
let task = self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
// This is safe because all the remaining tasks are associated with
// AT LEAST one index. We can use the right or left one; it doesn't
// matter.
let index_name = task.indexes().unwrap()[0];
let index_already_exists = self.index_mapper.exists(rtxn, index_name)?;
// If the task is not associated with any index, verify that it is an index swap and
// create the batch directly. Otherwise, get the index name associated with the task
// and use the autobatcher to batch the enqueued tasks associated with it
let index_tasks = self.index_tasks(rtxn, index_name)? & enqueued;
let index_name = if let Some(&index_name) = task.indexes().first() {
index_name
} else {
assert!(matches!(&task.kind, KindWithContent::IndexSwap { swaps } if swaps.is_empty()));
return Ok(Some(Batch::IndexSwap { task }));
};
// If autobatching is disabled we only take one task at a time.
let tasks_limit = if self.autobatching_enabled { usize::MAX } else { 1 };
let index_already_exists = self.index_mapper.exists(rtxn, index_name)?;
let enqueued = index_tasks
.into_iter()
.take(tasks_limit)
.map(|task_id| {
self.get_task(rtxn, task_id)
.and_then(|task| task.ok_or(Error::CorruptedTaskQueue))
.map(|task| (task.uid, task.kind))
})
.collect::<Result<Vec<_>>>()?;
let index_tasks = self.index_tasks(rtxn, index_name)? & enqueued;
if let Some((batchkind, create_index)) =
crate::autobatcher::autobatch(enqueued, index_already_exists)
{
return self.create_next_batch_index(
rtxn,
index_name.to_string(),
batchkind,
create_index,
);
}
// If autobatching is disabled we only take one task at a time.
let tasks_limit = if self.autobatching_enabled { usize::MAX } else { 1 };
let enqueued = index_tasks
.into_iter()
.take(tasks_limit)
.map(|task_id| {
self.get_task(rtxn, task_id)
.and_then(|task| task.ok_or(Error::CorruptedTaskQueue))
.map(|task| (task.uid, task.kind))
})
.collect::<Result<Vec<_>>>()?;
if let Some((batchkind, create_index)) =
autobatcher::autobatch(enqueued, index_already_exists)
{
return self.create_next_batch_index(
rtxn,
index_name.to_string(),
batchkind,
create_index,
);
}
// If we found no tasks then we were notified for something that got autobatched
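Condensed, the batching priority implemented above is: task cancelations first, then task deletions, then snapshots, then dumps, and only then the regular index tasks. A self-contained sketch of that dispatch order (`Kind` here is a toy stand-in, not `meilisearch_types::tasks::Kind`):

    #[derive(Clone, Copy, PartialEq)]
    enum Kind { TaskCancelation, TaskDeletion, SnapshotCreation, DumpCreation, IndexTask }

    // Returns the kind the scheduler would batch first among the enqueued tasks.
    // Note: the real scheduler picks the most recent cancelation (max uid) but
    // the oldest task of the other kinds.
    fn next_kind(enqueued: &[Kind]) -> Option<Kind> {
        [Kind::TaskCancelation, Kind::TaskDeletion, Kind::SnapshotCreation, Kind::DumpCreation, Kind::IndexTask]
            .into_iter()
            .find(|priority| enqueued.contains(priority))
    }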
@@ -465,8 +506,14 @@ impl IndexScheduler {
/// list is updated accordingly, with the exception of its date fields
/// [`finished_at`](meilisearch_types::tasks::Task::finished_at) and [`started_at`](meilisearch_types::tasks::Task::started_at).
pub(crate) fn process_batch(&self, batch: Batch) -> Result<Vec<Task>> {
#[cfg(test)]
{
self.maybe_fail(crate::tests::FailureLocation::InsideProcessBatch)?;
self.maybe_fail(crate::tests::FailureLocation::PanicInsideProcessBatch)?;
self.breakpoint(crate::Breakpoint::InsideProcessBatch);
}
match batch {
Batch::TaskCancelation(mut task) => {
Batch::TaskCancelation { mut task, previous_started_at, previous_processing_tasks } => {
// 1. Retrieve the tasks that matched the query at enqueue-time.
let matched_tasks =
if let KindWithContent::TaskCancelation { tasks, query: _ } = &task.kind {
@@ -476,22 +523,27 @@ impl IndexScheduler {
};
let mut wtxn = self.env.write_txn()?;
let canceled_tasks_content_uuids =
self.cancel_matched_tasks(&mut wtxn, task.uid, matched_tasks)?;
let canceled_tasks_content_uuids = self.cancel_matched_tasks(
&mut wtxn,
task.uid,
matched_tasks,
previous_started_at,
&previous_processing_tasks,
)?;
task.status = Status::Succeeded;
match &mut task.details {
Some(Details::TaskCancelation {
matched_tasks: _,
canceled_tasks,
original_query: _,
original_filter: _,
}) => {
*canceled_tasks = Some(canceled_tasks_content_uuids.len() as u64);
}
_ => unreachable!(),
}
// We must only remove the content files if the transaction is successfuly committed
// We must only remove the content files if the transaction is successfully committed
// and if errors occurs when we are deleting files we must do our best to delete
// everything. We do not return the encountered errors when deleting the content
// files as it is not a breaking operation and we can safely continue our job.
@@ -528,7 +580,7 @@ impl IndexScheduler {
Some(Details::TaskDeletion {
matched_tasks: _,
deleted_tasks,
original_query: _,
original_filter: _,
}) => {
*deleted_tasks = Some(deleted_tasks_count);
}
@@ -537,19 +589,104 @@ impl IndexScheduler {
wtxn.commit()?;
Ok(vec![task])
}
Batch::Snapshot(_) => todo!(),
Batch::SnapshotCreation(mut tasks) => {
fs::create_dir_all(&self.snapshots_path)?;
let temp_snapshot_dir = tempfile::tempdir()?;
// 1. Snapshot the version file.
let dst = temp_snapshot_dir.path().join(VERSION_FILE_NAME);
fs::copy(&self.version_file_path, dst)?;
// 2. Snapshot the index-scheduler LMDB env
//
// When we call copy_to_path, LMDB opens a read transaction by itself;
// we can't provide our own. This is an issue, as we would like to know
// which update files to copy, but new ones can be enqueued between the copy
// of the env and the new transaction we open to retrieve the enqueued tasks.
// So we prefer opening a new transaction after copying the env, and copying
// too many update files rather than not enough.
//
// Note that there cannot be any update files deleted between those
// two read operations as the task processing is synchronous.
// 2.1 First copy the LMDB env of the index-scheduler
let dst = temp_snapshot_dir.path().join("tasks");
fs::create_dir_all(&dst)?;
self.env.copy_to_path(dst.join("data.mdb"), CompactionOption::Enabled)?;
// 2.2 Create a read transaction on the index-scheduler
let rtxn = self.env.read_txn()?;
// 2.3 Create the update files directory
let update_files_dir = temp_snapshot_dir.path().join("update_files");
fs::create_dir_all(&update_files_dir)?;
// 2.4 Only copy the update files of the enqueued tasks
for task_id in self.get_status(&rtxn, Status::Enqueued)? {
let task = self.get_task(&rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
if let Some(content_uuid) = task.content_uuid() {
let src = self.file_store.get_update_path(content_uuid);
let dst = update_files_dir.join(content_uuid.to_string());
fs::copy(src, dst)?;
}
}
// 3. Snapshot every index
// TODO we are opening all of the indexes; it can be too much, so we should unload all
// of the indexes we are trying to open. It would be even better to only unload
// the ones that were opened by us. Or maybe use an LRU in the index mapper.
for result in self.index_mapper.index_mapping.iter(&rtxn)? {
let (name, uuid) = result?;
let index = self.index_mapper.index(&rtxn, name)?;
let dst = temp_snapshot_dir.path().join("indexes").join(uuid.to_string());
fs::create_dir_all(&dst)?;
index.copy_to_path(dst.join("data.mdb"), CompactionOption::Enabled)?;
}
drop(rtxn);
// 4. Snapshot the auth LMDB env
let dst = temp_snapshot_dir.path().join("auth");
fs::create_dir_all(&dst)?;
// TODO We can't use the open_auth_store_env function here but we should
let auth = milli::heed::EnvOpenOptions::new()
.map_size(1024 * 1024 * 1024) // 1 GiB
.max_dbs(2)
.open(&self.auth_path)?;
auth.copy_to_path(dst.join("data.mdb"), CompactionOption::Enabled)?;
// 5. Copy and tarball the flat snapshot
// 5.1 Find the original name of the database
// TODO find a better way to get this path
let mut base_path = self.env.path().to_owned();
base_path.pop();
let db_name = base_path.file_name().and_then(OsStr::to_str).unwrap_or("data.ms");
// 5.2 Tarball the content of the snapshot in a tempfile with a .snapshot extension
let snapshot_path = self.snapshots_path.join(format!("{}.snapshot", db_name));
let temp_snapshot_file = tempfile::NamedTempFile::new_in(&self.snapshots_path)?;
compression::to_tar_gz(temp_snapshot_dir.path(), temp_snapshot_file.path())?;
let file = temp_snapshot_file.persist(&snapshot_path)?;
// 5.3 Change the permission to make the snapshot readonly
let mut permissions = file.metadata()?.permissions();
permissions.set_readonly(true);
file.set_permissions(permissions)?;
for task in &mut tasks {
task.status = Status::Succeeded;
}
Ok(tasks)
}
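Steps 5.2 and 5.3 follow the classic write-temp-then-persist pattern: build the archive in a temp file created inside the target directory, atomically rename it into place, then flip it to read-only. A hedged sketch of the same flow using the `tar`, `flate2`, `tempfile`, and `anyhow` crates (meilisearch's own `compression::to_tar_gz` helper is analogous; this is not that exact code):

    use std::fs::File;
    use std::path::Path;
    use flate2::{write::GzEncoder, Compression};

    fn to_tar_gz(src: &Path, dst: &Path) -> std::io::Result<()> {
        let gz = GzEncoder::new(File::create(dst)?, Compression::default());
        let mut tar = tar::Builder::new(gz);
        tar.append_dir_all(".", src)?; // archive the whole snapshot directory
        tar.into_inner()?.finish()?;   // finish the archive, then flush the gzip stream
        Ok(())
    }

    fn persist_snapshot(tmp_dir: &Path, snapshots_dir: &Path, name: &str) -> anyhow::Result<()> {
        let temp = tempfile::NamedTempFile::new_in(snapshots_dir)?;
        to_tar_gz(tmp_dir, temp.path())?;
        // persist() is an atomic rename because the temp file already lives in the target directory.
        let file = temp.persist(snapshots_dir.join(format!("{name}.snapshot")))?;
        let mut permissions = file.metadata()?.permissions();
        permissions.set_readonly(true);
        file.set_permissions(permissions)?;
        Ok(())
    }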
Batch::Dump(mut task) => {
let started_at = OffsetDateTime::now_utc();
let (keys, instance_uid, dump_uid) = if let KindWithContent::DumpExport {
keys,
instance_uid,
dump_uid,
} = &task.kind
{
(keys, instance_uid, dump_uid)
} else {
unreachable!();
};
let (keys, instance_uid) =
if let KindWithContent::DumpCreation { keys, instance_uid } = &task.kind {
(keys, instance_uid)
} else {
unreachable!();
};
let dump = dump::DumpWriter::new(*instance_uid)?;
// 1. dump the keys
@@ -566,7 +703,7 @@ impl IndexScheduler {
for ret in self.all_tasks.iter(&rtxn)? {
let (_, mut t) = ret?;
let status = t.status;
let content_file = t.content_uuid().copied();
let content_file = t.content_uuid();
// In the case where we're dumping ourselves, we want to be marked as finished
// so that we don't loop over ourselves indefinitely.
@@ -633,22 +770,25 @@ impl IndexScheduler {
index_dumper.settings(&settings)?;
}
let dump_uid = started_at.format(format_description!(
"[year repr:full][month repr:numerical][day padding:zero]-[hour padding:zero][minute padding:zero][second padding:zero][subsecond digits:3]"
)).unwrap();
let path = self.dumps_path.join(format!("{}.dump", dump_uid));
let file = File::create(path)?;
dump.persist_to(BufWriter::new(file))?;
// If we reached this step, we can tell the scheduler that we succeeded in dumping ourselves.
task.status = Status::Succeeded;
task.details = Some(Details::Dump { dump_uid: Some(dump_uid) });
Ok(vec![task])
}
Batch::IndexOperation { op, must_create_index } => {
let index_uid = op.index_uid();
let index = if must_create_index {
// create the index if it doesn't already exist
let mut wtxn = self.env.write_txn()?;
let index = self.index_mapper.create_index(&mut wtxn, index_uid)?;
wtxn.commit()?;
index
let wtxn = self.env.write_txn()?;
self.index_mapper.create_index(wtxn, index_uid)?
} else {
let rtxn = self.env.read_txn()?;
self.index_mapper.index(&rtxn, index_uid)?
@@ -661,12 +801,11 @@ impl IndexScheduler {
Ok(tasks)
}
Batch::IndexCreation { index_uid, primary_key, task } => {
let mut wtxn = self.env.write_txn()?;
let wtxn = self.env.write_txn()?;
if self.index_mapper.exists(&wtxn, &index_uid)? {
return Err(Error::IndexAlreadyExists(index_uid));
}
self.index_mapper.create_index(&mut wtxn, &index_uid)?;
wtxn.commit()?;
self.index_mapper.create_index(wtxn, &index_uid)?;
self.process_batch(Batch::IndexUpdate { index_uid, primary_key, task })
}
@@ -732,8 +871,28 @@ impl IndexScheduler {
} else {
unreachable!()
};
for (lhs, rhs) in swaps {
self.apply_index_swap(&mut wtxn, task.uid, lhs, rhs)?;
let mut not_found_indexes = BTreeSet::new();
for IndexSwap { indexes: (lhs, rhs) } in swaps {
for index in [lhs, rhs] {
let index_exists = self.index_mapper.index_exists(&wtxn, index)?;
if !index_exists {
not_found_indexes.insert(index);
}
}
}
if !not_found_indexes.is_empty() {
if not_found_indexes.len() == 1 {
return Err(Error::IndexNotFound(
not_found_indexes.into_iter().next().unwrap().clone(),
));
} else {
return Err(Error::IndexesNotFound(
not_found_indexes.into_iter().cloned().collect(),
));
}
}
for swap in swaps {
self.apply_index_swap(&mut wtxn, task.uid, &swap.indexes.0, &swap.indexes.1)?;
}
wtxn.commit()?;
task.status = Status::Succeeded;
@@ -754,12 +913,10 @@ impl IndexScheduler {
return Err(Error::IndexNotFound(rhs.to_owned()));
}
// 2. Get the task set for index = name.
let mut index_lhs_task_ids =
self.get_task_ids(&Query::default().with_index(lhs.to_owned()))?;
// 2. Get the task set for index = name that appeared before the index swap task
let mut index_lhs_task_ids = self.index_tasks(wtxn, lhs)?;
index_lhs_task_ids.remove_range(task_id..);
let mut index_rhs_task_ids =
self.get_task_ids(&Query::default().with_index(rhs.to_owned()))?;
let mut index_rhs_task_ids = self.index_tasks(wtxn, rhs)?;
index_rhs_task_ids.remove_range(task_id..);
// 3. before_name -> new_name in the task's KindWithContent
@@ -775,9 +932,9 @@ impl IndexScheduler {
*lhs_tasks -= &index_lhs_task_ids;
*lhs_tasks |= &index_rhs_task_ids;
})?;
self.update_index(wtxn, rhs, |lhs_tasks| {
*lhs_tasks -= &index_rhs_task_ids;
*lhs_tasks |= &index_lhs_task_ids;
self.update_index(wtxn, rhs, |rhs_tasks| {
*rhs_tasks -= &index_rhs_task_ids;
*rhs_tasks |= &index_lhs_task_ids;
})?;
// 6. Swap in the index mapper
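The `remove_range(task_id..)` calls rely on task uids being monotonically increasing: they keep only the tasks enqueued strictly before the swap task itself, so later tasks are unaffected by the swap. A small standalone illustration with the `roaring` crate:

    use roaring::RoaringBitmap;

    fn main() {
        let mut lhs_tasks: RoaringBitmap = [2u32, 5, 9, 14].into_iter().collect();
        let swap_task_id = 9u32;
        // Drop the swap task itself and everything enqueued after it.
        lhs_tasks.remove_range(swap_task_id..);
        assert_eq!(lhs_tasks.iter().collect::<Vec<u32>>(), vec![2, 5]);
    }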
@@ -922,7 +1079,7 @@ impl IndexScheduler {
for (task, documents) in tasks.iter_mut().zip(documents) {
task.status = Status::Succeeded;
task.details = Some(Details::DocumentDeletion {
received_document_ids: documents.len(),
provided_ids: documents.len(),
deleted_documents: Some(deleted_documents.min(documents.len() as u64)),
});
}
@@ -931,23 +1088,24 @@ impl IndexScheduler {
}
IndexOperation::Settings { index_uid: _, settings, mut tasks } => {
let indexer_config = self.index_mapper.indexer_config();
// TODO merge the settings to only do *one* reindexation.
let mut builder = milli::update::Settings::new(index_wtxn, index, indexer_config);
for (task, (_, settings)) in tasks.iter_mut().zip(settings) {
let checked_settings = settings.clone().check();
task.details = Some(Details::SettingsUpdate { settings: Box::new(settings) });
let mut builder =
milli::update::Settings::new(index_wtxn, index, indexer_config);
apply_settings_to_builder(&checked_settings, &mut builder);
let must_stop_processing = self.must_stop_processing.clone();
builder.execute(
|indexing_step| debug!("update: {:?}", indexing_step),
|| must_stop_processing.get(),
)?;
// We can apply the status right now, and if an update fails later
// the whole batch will be marked as failed.
task.status = Status::Succeeded;
}
let must_stop_processing = self.must_stop_processing.clone();
builder.execute(
|indexing_step| debug!("update: {:?}", indexing_step),
|| must_stop_processing.get(),
)?;
Ok(tasks)
}
IndexOperation::SettingsAndDocumentImport {
@@ -1033,12 +1191,12 @@ impl IndexScheduler {
let mut affected_indexes = HashSet::new();
let mut affected_statuses = HashSet::new();
let mut affected_kinds = HashSet::new();
let mut affected_canceled_by = RoaringBitmap::new();
for task_id in to_delete_tasks.iter() {
let task = self.get_task(wtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?;
if let Some(task_indexes) = task.indexes() {
affected_indexes.extend(task_indexes.into_iter().map(|x| x.to_owned()));
}
affected_indexes.extend(task.indexes().into_iter().map(|x| x.to_owned()));
affected_statuses.insert(task.status);
affected_kinds.insert(task.kind.as_kind());
// Note: don't delete the persisted task data since
@@ -1052,6 +1210,9 @@ impl IndexScheduler {
if let Some(finished_at) = task.finished_at {
utils::remove_task_datetime(wtxn, self.finished_at, finished_at, task.uid)?;
}
if let Some(canceled_by) = task.canceled_by {
affected_canceled_by.insert(canceled_by);
}
}
for index in affected_indexes {
@@ -1069,6 +1230,17 @@ impl IndexScheduler {
for task in to_delete_tasks.iter() {
self.all_tasks.delete(wtxn, &BEU32::new(task))?;
}
for canceled_by in affected_canceled_by {
let canceled_by = BEU32::new(canceled_by);
if let Some(mut tasks) = self.canceled_by.get(wtxn, &canceled_by)? {
tasks -= &to_delete_tasks;
if tasks.is_empty() {
self.canceled_by.delete(wtxn, &canceled_by)?;
} else {
self.canceled_by.put(wtxn, &canceled_by, &tasks)?;
}
}
}
Ok(to_delete_tasks.len())
}
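Task deletion now also has to maintain the new `canceled_by` inverted index: each deleted task is removed from the bitmap of the cancelation task that canceled it, and bitmaps that become empty are dropped entirely. A sketch of the same bookkeeping over an in-memory map standing in for the LMDB database:

    use std::collections::BTreeMap;
    use roaring::RoaringBitmap;

    fn prune_canceled_by(
        canceled_by: &mut BTreeMap<u32, RoaringBitmap>, // cancel-task uid -> canceled task ids
        deleted_tasks: &RoaringBitmap,
        affected: impl IntoIterator<Item = u32>,
    ) {
        for cancel_uid in affected {
            if let Some(tasks) = canceled_by.get_mut(&cancel_uid) {
                *tasks -= deleted_tasks;
                if tasks.is_empty() {
                    canceled_by.remove(&cancel_uid);
                }
            }
        }
    }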
@@ -1081,6 +1253,8 @@ impl IndexScheduler {
wtxn: &mut RwTxn,
cancel_task_id: TaskId,
matched_tasks: &RoaringBitmap,
previous_started_at: OffsetDateTime,
previous_processing_tasks: &RoaringBitmap,
) -> Result<Vec<Uuid>> {
let now = OffsetDateTime::now_utc();
@@ -1094,13 +1268,18 @@ impl IndexScheduler {
let mut content_files_to_delete = Vec::new();
for mut task in self.get_existing_tasks(wtxn, tasks_to_cancel.iter())? {
if let Some(uuid) = task.content_uuid() {
content_files_to_delete.push(*uuid);
content_files_to_delete.push(uuid);
}
if previous_processing_tasks.contains(task.uid) {
task.started_at = Some(previous_started_at);
}
task.status = Status::Canceled;
task.canceled_by = Some(cancel_task_id);
task.finished_at = Some(now);
task.details = task.details.map(|d| d.to_failed());
self.update_task(wtxn, &task)?;
}
self.canceled_by.put(wtxn, &BEU32::new(cancel_task_id), &tasks_to_cancel)?;
Ok(content_files_to_delete)
}

View File

@@ -1,4 +1,5 @@
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::tasks::{Kind, Status};
use meilisearch_types::{heed, milli};
use thiserror::Error;
@@ -9,17 +10,59 @@ use crate::TaskId;
pub enum Error {
#[error("Index `{0}` not found.")]
IndexNotFound(String),
#[error(
"Indexes {} not found.",
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
IndexesNotFound(Vec<String>),
#[error("Index `{0}` already exists.")]
IndexAlreadyExists(String),
#[error("Corrupted task queue.")]
CorruptedTaskQueue,
#[error(
"Indexes must be declared only once during a swap. `{0}` was specified several times."
)]
SwapDuplicateIndexFound(String),
#[error(
"Indexes must be declared only once during a swap. {} were specified several times.",
.0.iter().map(|s| format!("`{}`", s)).collect::<Vec<_>>().join(", ")
)]
SwapDuplicateIndexesFound(Vec<String>),
#[error("Corrupted dump.")]
CorruptedDump,
#[error(
"Task `{field}` `{date}` is invalid. It should follow the YYYY-MM-DD or RFC 3339 date-time format."
)]
InvalidTaskDate { field: String, date: String },
#[error("Task uid `{task_uid}` is invalid. It should only contain numeric characters.")]
InvalidTaskUids { task_uid: String },
#[error(
"Task status `{status}` is invalid. Available task statuses are {}.",
enum_iterator::all::<Status>()
.map(|s| format!("`{s}`"))
.collect::<Vec<String>>()
.join(", ")
)]
InvalidTaskStatuses { status: String },
#[error(
"Task type `{type_}` is invalid. Available task types are {}",
enum_iterator::all::<Kind>()
.map(|s| format!("`{s}`"))
.collect::<Vec<String>>()
.join(", ")
)]
InvalidTaskTypes { type_: String },
#[error(
"Task canceledBy `{canceled_by}` is invalid. It should only contains numeric characters separated by `,` character."
)]
InvalidTaskCanceledBy { canceled_by: String },
#[error(
"{index_uid} is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_)."
)]
InvalidIndexUid { index_uid: String },
#[error("Task `{0}` not found.")]
TaskNotFound(TaskId),
#[error("Query parameters to filter the tasks to delete are missing. Available query parameters are: `uid`, `indexUid`, `status`, `type`.")]
#[error("Query parameters to filter the tasks to delete are missing. Available query parameters are: `uids`, `indexUids`, `statuses`, `types`, `beforeEnqueuedAt`, `afterEnqueuedAt`, `beforeStartedAt`, `afterStartedAt`, `beforeFinishedAt`, `afterFinishedAt`.")]
TaskDeletionWithEmptyQuery,
#[error("Query parameters to filter the tasks to cancel are missing. Available query parameters are: `uid`, `indexUid`, `status`, `type`.")]
#[error("Query parameters to filter the tasks to cancel are missing. Available query parameters are: `uids`, `indexUids`, `statuses`, `types`, `beforeEnqueuedAt`, `afterEnqueuedAt`, `beforeStartedAt`, `afterStartedAt`, `beforeFinishedAt`, `afterFinishedAt`.")]
TaskCancelationWithEmptyQuery,
#[error(transparent)]
@@ -28,33 +71,60 @@ pub enum Error {
Heed(#[from] heed::Error),
#[error(transparent)]
Milli(#[from] milli::Error),
#[error("An unexpected crash occurred when processing the task.")]
ProcessBatchPanicked,
#[error(transparent)]
FileStore(#[from] file_store::Error),
#[error(transparent)]
IoError(#[from] std::io::Error),
#[error(transparent)]
Persist(#[from] tempfile::PersistError),
#[error(transparent)]
Anyhow(#[from] anyhow::Error),
// Irrecoverable errors:
#[error(transparent)]
CreateBatch(Box<Self>),
#[error("Corrupted task queue.")]
CorruptedTaskQueue,
#[error(transparent)]
TaskDatabaseUpdate(Box<Self>),
#[error(transparent)]
HeedTransaction(heed::Error),
}
impl ErrorCode for Error {
fn error_code(&self) -> Code {
match self {
Error::IndexNotFound(_) => Code::IndexNotFound,
Error::IndexesNotFound(_) => Code::IndexNotFound,
Error::IndexAlreadyExists(_) => Code::IndexAlreadyExists,
Error::SwapDuplicateIndexesFound(_) => Code::DuplicateIndexFound,
Error::SwapDuplicateIndexFound(_) => Code::DuplicateIndexFound,
Error::InvalidTaskDate { .. } => Code::InvalidTaskDateFilter,
Error::InvalidTaskUids { .. } => Code::InvalidTaskUidsFilter,
Error::InvalidTaskStatuses { .. } => Code::InvalidTaskStatusesFilter,
Error::InvalidTaskTypes { .. } => Code::InvalidTaskTypesFilter,
Error::InvalidTaskCanceledBy { .. } => Code::InvalidTaskCanceledByFilter,
Error::InvalidIndexUid { .. } => Code::InvalidIndexUid,
Error::TaskNotFound(_) => Code::TaskNotFound,
Error::TaskDeletionWithEmptyQuery => Code::TaskDeletionWithEmptyQuery,
Error::TaskCancelationWithEmptyQuery => Code::TaskCancelationWithEmptyQuery,
Error::Dump(e) => e.error_code(),
Error::Milli(e) => e.error_code(),
// TODO: TAMO: are all these errors really internal?
Error::ProcessBatchPanicked => Code::Internal,
// TODO: TAMO: are all these errors really internal?
Error::Heed(_) => Code::Internal,
Error::FileStore(_) => Code::Internal,
Error::IoError(_) => Code::Internal,
Error::Persist(_) => Code::Internal,
Error::Anyhow(_) => Code::Internal,
Error::CorruptedTaskQueue => Code::Internal,
Error::CorruptedDump => Code::Internal,
Error::TaskDatabaseUpdate(_) => Code::Internal,
Error::CreateBatch(_) => Code::Internal,
Error::HeedTransaction(_) => Code::Internal,
}
}
}
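The new filter errors build their "available values are …" lists by iterating the enum itself, so the messages can never drift from the actual variants. A self-contained sketch of the pattern (toy `Status` enum, `Debug` formatting instead of the real `Display` impl, `enum_iterator` 1.x API):

    use enum_iterator::{all, Sequence};

    #[derive(Debug, Sequence)]
    enum Status { Enqueued, Processing, Succeeded, Failed, Canceled }

    // Produces "`Enqueued`, `Processing`, `Succeeded`, `Failed`, `Canceled`".
    fn available_statuses() -> String {
        all::<Status>().map(|s| format!("`{s:?}`")).collect::<Vec<_>>().join(", ")
    }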

View File

@@ -30,7 +30,7 @@ pub struct IndexMapper {
// TODO create a UUID Codec that uses the 16 bytes representation
/// Map an index name with an index uuid currently available on disk.
index_mapping: Database<Str, SerdeBincode<Uuid>>,
pub(crate) index_mapping: Database<Str, SerdeBincode<Uuid>>,
/// Path to the folder where the LMDB environments of each index are.
base_path: PathBuf,
@@ -74,16 +74,29 @@ impl IndexMapper {
}
/// Get or create the index.
pub fn create_index(&self, wtxn: &mut RwTxn, name: &str) -> Result<Index> {
match self.index(wtxn, name) {
Ok(index) => Ok(index),
pub fn create_index(&self, mut wtxn: RwTxn, name: &str) -> Result<Index> {
match self.index(&wtxn, name) {
Ok(index) => {
wtxn.commit()?;
Ok(index)
}
Err(Error::IndexNotFound(_)) => {
let uuid = Uuid::new_v4();
self.index_mapping.put(wtxn, name, &uuid)?;
self.index_mapping.put(&mut wtxn, name, &uuid)?;
let index_path = self.base_path.join(uuid.to_string());
fs::create_dir_all(&index_path)?;
self.create_or_open_index(&index_path)
let index = self.create_or_open_index(&index_path)?;
wtxn.commit()?;
// TODO: it would be better to lazily create the index. But we need an Index::open function for milli.
if let Some(BeingDeleted) =
self.index_map.write().unwrap().insert(uuid, Available(index.clone()))
{
panic!("Uuid v4 conflict.");
}
Ok(index)
}
error => error,
}
@@ -113,24 +126,27 @@ impl IndexMapper {
let index_map = self.index_map.clone();
let index_path = self.base_path.join(uuid.to_string());
let index_name = name.to_string();
thread::spawn(move || {
// We first wait to be sure that the previously opened index is effectively closed.
// This can take a lot of time; this is why we do that in a separate thread.
if let Some(closing_event) = closing_event {
closing_event.wait();
}
thread::Builder::new()
.name(String::from("index_deleter"))
.spawn(move || {
// We first wait to be sure that the previously opened index is effectively closed.
// This can take a lot of time; this is why we do that in a separate thread.
if let Some(closing_event) = closing_event {
closing_event.wait();
}
// Then we remove the content from disk.
if let Err(e) = fs::remove_dir_all(&index_path) {
error!(
"An error happened when deleting the index {} ({}): {}",
index_name, uuid, e
);
}
// Then we remove the content from disk.
if let Err(e) = fs::remove_dir_all(&index_path) {
error!(
"An error happened when deleting the index {} ({}): {}",
index_name, uuid, e
);
}
// Finally we remove the entry from the index map.
assert!(matches!(index_map.write().unwrap().remove(&uuid), Some(BeingDeleted)));
});
// Finally we remove the entry from the index map.
assert!(matches!(index_map.write().unwrap().remove(&uuid), Some(BeingDeleted)));
})
.unwrap();
Ok(())
}

View File

@@ -10,6 +10,8 @@ use crate::index_mapper::IndexMapper;
use crate::{IndexScheduler, Kind, Status, BEI128};
pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
scheduler.assert_internally_consistent();
let IndexScheduler {
autobatching_enabled,
must_stop_processing: _,
@@ -20,13 +22,19 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
status,
kind,
index_tasks,
canceled_by,
enqueued_at,
started_at,
finished_at,
index_mapper,
wake_up: _,
dumps_path: _,
snapshots_path: _,
auth_path: _,
version_file_path: _,
test_breakpoint_sdr: _,
planned_failures: _,
run_loop_iteration: _,
} = scheduler;
let rtxn = env.read_txn().unwrap();
@@ -59,6 +67,10 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
snap.push_str(&snapshot_index_mapper(&rtxn, index_mapper));
snap.push_str("\n----------------------------------------------------------------------\n");
snap.push_str("### Canceled By:\n");
snap.push_str(&snapshot_canceled_by(&rtxn, *canceled_by));
snap.push_str("\n----------------------------------------------------------------------\n");
snap.push_str("### Enqueued At:\n");
snap.push_str(&snapshot_date_db(&rtxn, *enqueued_at));
snap.push_str("----------------------------------------------------------------------\n");
@@ -78,7 +90,7 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
snap
}
fn snapshot_file_store(file_store: &file_store::FileStore) -> String {
pub fn snapshot_file_store(file_store: &file_store::FileStore) -> String {
let mut snap = String::new();
for uuid in file_store.__all_uuids() {
snap.push_str(&format!("{uuid}\n"));
@@ -86,7 +98,7 @@ fn snapshot_file_store(file_store: &file_store::FileStore) -> String {
snap
}
fn snapshot_bitmap(r: &RoaringBitmap) -> String {
pub fn snapshot_bitmap(r: &RoaringBitmap) -> String {
let mut snap = String::new();
snap.push('[');
for x in r {
@@ -96,7 +108,7 @@ fn snapshot_bitmap(r: &RoaringBitmap) -> String {
snap
}
fn snapshot_all_tasks(rtxn: &RoTxn, db: Database<OwnedType<BEU32>, SerdeJson<Task>>) -> String {
pub fn snapshot_all_tasks(rtxn: &RoTxn, db: Database<OwnedType<BEU32>, SerdeJson<Task>>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
@@ -106,7 +118,7 @@ fn snapshot_all_tasks(rtxn: &RoTxn, db: Database<OwnedType<BEU32>, SerdeJson<Tas
snap
}
fn snapshot_date_db(
pub fn snapshot_date_db(
rtxn: &RoTxn,
db: Database<OwnedType<BEI128>, CboRoaringBitmapCodec>,
) -> String {
@@ -119,7 +131,7 @@ fn snapshot_date_db(
snap
}
fn snapshot_task(task: &Task) -> String {
pub fn snapshot_task(task: &Task) -> String {
let mut snap = String::new();
let Task {
uid,
@@ -127,7 +139,7 @@ fn snapshot_task(task: &Task) -> String {
started_at: _,
finished_at: _,
error,
canceled_by: _,
canceled_by,
details,
status,
kind,
@@ -135,11 +147,14 @@ fn snapshot_task(task: &Task) -> String {
snap.push('{');
snap.push_str(&format!("uid: {uid}, "));
snap.push_str(&format!("status: {status}, "));
if let Some(canceled_by) = canceled_by {
snap.push_str(&format!("canceled_by: {canceled_by}, "));
}
if let Some(error) = error {
snap.push_str(&format!("error: {error:?}, "));
}
if let Some(details) = details {
snap.push_str(&format!("details: {}, ", &snaphsot_details(details)));
snap.push_str(&format!("details: {}, ", &snapshot_details(details)));
}
snap.push_str(&format!("kind: {kind:?}"));
@@ -147,7 +162,7 @@ fn snapshot_task(task: &Task) -> String {
snap
}
fn snaphsot_details(d: &Details) -> String {
fn snapshot_details(d: &Details) -> String {
match d {
Details::DocumentAdditionOrUpdate {
received_documents,
@@ -162,7 +177,7 @@ fn snaphsot_details(d: &Details) -> String {
format!("{{ primary_key: {primary_key:?} }}")
}
Details::DocumentDeletion {
received_document_ids,
provided_ids: received_document_ids,
deleted_documents,
} => format!("{{ received_document_ids: {received_document_ids}, deleted_documents: {deleted_documents:?} }}"),
Details::ClearAll { deleted_documents } => {
@@ -171,27 +186,30 @@ fn snaphsot_details(d: &Details) -> String {
Details::TaskCancelation {
matched_tasks,
canceled_tasks,
original_query,
original_filter,
} => {
format!("{{ matched_tasks: {matched_tasks:?}, canceled_tasks: {canceled_tasks:?}, original_query: {original_query:?} }}")
format!("{{ matched_tasks: {matched_tasks:?}, canceled_tasks: {canceled_tasks:?}, original_filter: {original_filter:?} }}")
}
Details::TaskDeletion {
matched_tasks,
deleted_tasks,
original_query,
original_filter,
} => {
format!("{{ matched_tasks: {matched_tasks:?}, deleted_tasks: {deleted_tasks:?}, original_query: {original_query:?} }}")
format!("{{ matched_tasks: {matched_tasks:?}, deleted_tasks: {deleted_tasks:?}, original_filter: {original_filter:?} }}")
},
Details::Dump { dump_uid } => {
format!("{{ dump_uid: {dump_uid:?} }}")
},
Details::IndexSwap { swaps } => {
format!("{{ indexes: {swaps:?} }}")
format!("{{ swaps: {swaps:?} }}")
}
}
}
fn snapshot_status(rtxn: &RoTxn, db: Database<SerdeBincode<Status>, RoaringBitmapCodec>) -> String {
pub fn snapshot_status(
rtxn: &RoTxn,
db: Database<SerdeBincode<Status>, RoaringBitmapCodec>,
) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
@@ -200,8 +218,7 @@ fn snapshot_status(rtxn: &RoTxn, db: Database<SerdeBincode<Status>, RoaringBitma
}
snap
}
fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCodec>) -> String {
pub fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
@@ -212,7 +229,7 @@ fn snapshot_kind(rtxn: &RoTxn, db: Database<SerdeBincode<Kind>, RoaringBitmapCod
snap
}
fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>) -> String {
pub fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
@@ -221,8 +238,19 @@ fn snapshot_index_tasks(rtxn: &RoTxn, db: Database<Str, RoaringBitmapCodec>) ->
}
snap
}
fn snapshot_index_mapper(rtxn: &RoTxn, mapper: &IndexMapper) -> String {
pub fn snapshot_canceled_by(
rtxn: &RoTxn,
db: Database<OwnedType<BEU32>, RoaringBitmapCodec>,
) -> String {
let mut snap = String::new();
let iter = db.iter(rtxn).unwrap();
for next in iter {
let (kind, task_ids) = next.unwrap();
writeln!(snap, "{kind} {}", snapshot_bitmap(&task_ids)).unwrap();
}
snap
}
pub fn snapshot_index_mapper(rtxn: &RoTxn, mapper: &IndexMapper) -> String {
let names = mapper.indexes(rtxn).unwrap().into_iter().map(|(n, _)| n).collect::<Vec<_>>();
format!("{names:?}")
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,46 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1755
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: canceled, canceled_by: 1, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(1), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [1,]
canceled [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
1 [0,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

View File

@@ -6,21 +6,24 @@ source: index-scheduler/src/lib.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, kind: IndexDeletion { index_uid: "doggos" }}
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"indexDeletion" [1,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

View File

@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[1,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 3, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["beavero", "catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------

@@ -0,0 +1,55 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1859
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: canceled, canceled_by: 3, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: canceled, canceled_by: 3, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(2), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,3,]
canceled [1,2,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["beavero", "catto"]
----------------------------------------------------------------------
### Canceled By:
3 [1,2,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [3,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,2,]
[timestamp] [3,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
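The snapshot above is the terminal state of the cancelation test: tasks 1 and 2 end up `canceled` with `canceled_by: 3`, and the file store is emptied. Snapshots like these are asserted with the `insta` crate (the `source:`/`assertion_line:` front matter is insta's). A minimal sketch of such a named assertion, assuming a `snapshot_index_scheduler` helper that renders the scheduler into this text format; the helper and `IndexScheduler::test` are assumptions, only the `insta` macro is the real crate API:

```rust
use insta::assert_snapshot;

#[test]
fn cancel_mix_of_completed_and_processing_tasks() {
    // Assumed test constructor; the real setup enqueues the three
    // document additions and the TaskCancelation, then ticks.
    let scheduler = IndexScheduler::test();
    // ... enqueue tasks, let the scheduler process the cancelation ...

    // A *named* snapshot: the rendered state is compared against the
    // stored .snap file, and the name makes a failure easy to locate.
    assert_snapshot!("cancel_processing_all_done", snapshot_index_scheduler(&scheduler));
}
```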

@@ -0,0 +1,47 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------

@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[1,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "beavero", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "wolfo", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 1, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 3, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,1,2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
beavero [1,]
catto [0,]
wolfo [2,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------

@@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,47 @@
---
source: index-scheduler/src/lib.rs
assertion_line: 1818
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: canceled, canceled_by: 1, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(1), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [1,]
canceled [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
1 [0,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
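Here the single enqueued addition (task 0) is canceled by task 1, and the `### Canceled By:` section records the reverse mapping `1 [0,]`: the canceler's uid maps to the set of uids it canceled. A sketch of that reverse index, using the real `roaring` crate for the bitmap (the surrounding map and function are illustrative):

```rust
use roaring::RoaringBitmap;
use std::collections::BTreeMap;

/// Reverse index behind the "### Canceled By:" section: the uid of a
/// TaskCancelation task maps to the set of task uids it canceled.
fn record_cancelation(
    canceled_by: &mut BTreeMap<u32, RoaringBitmap>,
    canceler_uid: u32,
    canceled_uids: impl IntoIterator<Item = u32>,
) {
    let set = canceled_by
        .entry(canceler_uid)
        .or_insert_with(RoaringBitmap::new);
    for uid in canceled_uids {
        set.insert(uid);
    }
}

// record_cancelation(&mut map, 1, [0]) yields the "1 [0,]" line above.
```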

@@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { matched_tasks: 1, canceled_tasks: None, original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,45 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { matched_tasks: 1, canceled_tasks: Some(0), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"taskCancelation" [1,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
1 []
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
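This snapshot shows a cancelation that matched a task but canceled nothing: `matched_tasks: 1, canceled_tasks: Some(0)`, and `### Canceled By:` lists `1 []`, because task 0 had already succeeded before task 1 ran. A sketch of that guard, under the assumption that only tasks not yet in a terminal state are cancelable (the enum mirrors the status names in these snapshots; it is not the crate's type):

```rust
/// Statuses as they appear in the "### Status:" sections above.
enum Status { Enqueued, Processing, Succeeded, Failed, Canceled }

/// Of the tasks matched by the cancelation query, keep only those
/// that can still be canceled.
fn cancelable(matched: &[(u32, Status)]) -> Vec<u32> {
    matched
        .iter()
        .filter(|(_, status)| matches!(status, Status::Enqueued | Status::Processing))
        .map(|(uid, _)| *uid)
        .collect()
}

// With task 0 already Succeeded, `cancelable` returns an empty vec,
// matching `canceled_tasks: Some(0)` in the snapshot.
```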

@@ -6,32 +6,32 @@ source: index-scheduler/src/lib.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: IndexDeletion { index_uid: "doggos" }}
0 {uid: 0, status: succeeded, details: { received_documents: 1, indexed_documents: Some(1) }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"indexDeletion" [1,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
["catto"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [1,]
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [1,]
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

View File

@@ -0,0 +1,62 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "girafos", primary_key: None }}
3 {uid: 3, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
4 {uid: 4, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "cattos" }}
5 {uid: 5, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "girafos" }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,2,3,4,5,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [3,4,5,]
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,4,]
doggos [0,3,]
girafos [2,5,]
----------------------------------------------------------------------
### Index Mapper:
["cattos", "doggos", "girafos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
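One detail worth noting in the `### Kind:` section above: the three `DocumentClear` tasks (uids 3, 4, 5) are filed under the `"documentDeletion"` key, so the grouping key is coarser than the task variant itself. A sketch of that mapping, with variant and key names read off these snapshots (the enum is illustrative, not the crate's definition):

```rust
/// Task variants as they appear in these snapshots.
enum Kind {
    IndexCreation,
    IndexDeletion,
    DocumentAdditionOrUpdate,
    DocumentClear,
    DocumentDeletion,
    TaskCancelation,
    TaskDeletion,
}

/// Key used to group tasks in the "### Kind:" section.
fn kind_key(kind: &Kind) -> &'static str {
    match kind {
        Kind::IndexCreation => "indexCreation",
        Kind::IndexDeletion => "indexDeletion",
        Kind::DocumentAdditionOrUpdate => "documentAdditionOrUpdate",
        // DocumentClear shares the deletion key, as tasks 3-5 show.
        Kind::DocumentClear | Kind::DocumentDeletion => "documentDeletion",
        Kind::TaskCancelation => "taskCancelation",
        Kind::TaskDeletion => "taskDeletion",
    }
}
```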

@@ -19,6 +19,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -19,6 +19,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -20,6 +20,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -0,0 +1,46 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"indexCreation" [0,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -24,6 +24,9 @@ doggos [0,1,2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
@@ -32,11 +35,11 @@ doggos [0,1,2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [2,]
[timestamp] [1,2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [2,]
[timestamp] [1,2,]
----------------------------------------------------------------------
### File Store:

@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -8,26 +8,26 @@ source: index-scheduler/src/lib.rs
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"indexCreation" [0,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,]
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------

@@ -8,7 +8,7 @@ source: index-scheduler/src/lib.rs
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
2 {uid: 2, status: enqueued, kind: IndexDeletion { index_uid: "doggos" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
@@ -23,6 +23,9 @@ doggos [0,1,2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -7,7 +7,7 @@ source: index-scheduler/src/lib.rs
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: enqueued, kind: IndexDeletion { index_uid: "doggos" }}
1 {uid: 1, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
@@ -21,6 +21,9 @@ doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -22,16 +22,19 @@ doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [1,]
[timestamp] [0,1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [1,]
[timestamp] [0,1,]
----------------------------------------------------------------------
### File Store:

@@ -19,6 +19,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: failed, error: ResponseError { code: 200, message: "Corrupted task queue.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued []
failed [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
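The failed task above embeds the serialized error. Its shape, read directly off this snapshot (the struct is a reconstruction for illustration, not Meilisearch's actual definition):

```rust
/// Error payload as printed in the failed-task snapshots, e.g.
/// "Corrupted task queue." with error_code/error_type "internal".
struct ResponseError {
    code: u16,          // numeric code as printed (200 in the dump above)
    message: String,    // human-readable message
    error_code: String, // stable machine-readable code, e.g. "internal"
    error_type: String, // error category, e.g. "internal"
    error_link: String, // docs URL, e.g. ".../errors#internal"
}
```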

@@ -19,6 +19,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: failed, error: ResponseError { code: 200, message: "Corrupted task queue.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:
enqueued []
failed [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 1, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

@@ -20,6 +20,9 @@ doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[0,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
index_a [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
index_a [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -8,26 +8,26 @@ source: index-scheduler/src/lib.rs
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_b", primary_key: Some("id") }}
2 {uid: 2, status: enqueued, kind: IndexDeletion { index_uid: "index_a" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
index_a [0,2,]
index_a [0,]
index_b [1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------

@@ -8,7 +8,7 @@ source: index-scheduler/src/lib.rs
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_b", primary_key: Some("id") }}
2 {uid: 2, status: enqueued, kind: IndexDeletion { index_uid: "index_a" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "index_a" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
@@ -23,6 +23,9 @@ index_b [1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

@@ -1,41 +0,0 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 50, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 50, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000002, documents_count: 5000, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,1,2,]
doggo [3,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -1,45 +0,0 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 5000, allow_index_creation: true }}
3 {uid: 3, status: succeeded, details: { matched_tasks: 2, deleted_tasks: Some(0), original_query: "test_query" }, kind: TaskDeletion { query: "test_query", tasks: RoaringBitmap<[0, 1]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
succeeded [3,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,2,]
"indexCreation" [0,]
"taskDeletion" [3,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,1,]
doggo [2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [3,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [3,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -1,42 +0,0 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[3,]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 5000, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 2, deleted_tasks: None, original_query: "test_query" }, kind: TaskDeletion { query: "test_query", tasks: RoaringBitmap<[0, 1]> }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,2,]
"indexCreation" [0,]
"taskDeletion" [3,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,1,]
doggo [2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
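In the removed snapshots above, the `TaskDeletion` resolves `"test_query"` to `RoaringBitmap<[0, 1]>`, yet reports `matched_tasks: 2, deleted_tasks: Some(0)`: tasks 0 and 1 were still enqueued, and only tasks in a terminal state may be deleted. A sketch of that guard, using the real `roaring` crate (`is_finished` stands in for a status lookup):

```rust
use roaring::RoaringBitmap;

/// Of the uids matched by a TaskDeletion query, keep only those whose
/// task has already finished (succeeded, failed, or canceled).
fn deletable(matched: &RoaringBitmap, is_finished: impl Fn(u32) -> bool) -> RoaringBitmap {
    matched.iter().filter(|&uid| is_finished(uid)).collect()
}

// With tasks 0 and 1 both still enqueued, the result is empty,
// matching `deleted_tasks: Some(0)` in the snapshot.
```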

@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: failed, error: ResponseError { code: 200, message: "An unexpected crash occurred when processing the task.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:
enqueued []
failed [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------

@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,45 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,]
doggos [0,2,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,47 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [2,]
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,]
doggos [0,2,]
----------------------------------------------------------------------
### Index Mapper:
["cattos", "doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -7,7 +7,7 @@ source: index-scheduler/src/lib.rs
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { received_documents: 1, indexed_documents: Some(0) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 1, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: succeeded, details: { deleted_documents: Some(0) }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
@@ -15,15 +15,18 @@ enqueued []
succeeded [0,1,2,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"indexCreation" [0,]
"indexCreation" [0,1,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,]
cattos [1,]
doggos [0,2,]
----------------------------------------------------------------------
### Index Mapper:
[]
["cattos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
@@ -32,10 +35,12 @@ doggos [0,1,2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### File Store:


@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,]
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,42 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "cattos", primary_key: None }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: IndexDeletion { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
"indexDeletion" [2,]
----------------------------------------------------------------------
### Index Tasks:
cattos [1,]
doggos [0,2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,46 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
3 {uid: 3, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,3,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,52 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
3 {uid: 3, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,3,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,43 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
3 {uid: 3, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,3,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,41 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,48 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
3 {uid: 3, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [2,3,]
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,3,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = false
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: None }, kind: IndexCreation { index_uid: "doggos", primary_key: None }}
1 {uid: 1, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
2 {uid: 2, status: succeeded, details: { deleted_documents: Some(0) }, kind: DocumentClear { index_uid: "doggos" }}
3 {uid: 3, status: enqueued, details: { deleted_documents: None }, kind: DocumentClear { index_uid: "doggos" }}
----------------------------------------------------------------------
### Status:
enqueued [3,]
succeeded [0,1,2,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [1,2,3,]
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,2,3,]
----------------------------------------------------------------------
### Index Mapper:
["doggos"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,53 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: canceled, canceled_by: 3, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: canceled, canceled_by: 3, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, status: succeeded, details: { matched_tasks: 3, canceled_tasks: Some(0), original_filter: "test_query" }, kind: TaskCancelation { query: "test_query", tasks: RoaringBitmap<[0, 1, 2]> }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,3,]
canceled [1,2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
"indexSwap" [2,]
"taskCancelation" [3,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,2,]
doggo [1,2,]
----------------------------------------------------------------------
### Index Mapper:
["catto"]
----------------------------------------------------------------------
### Canceled By:
3 [1,2,]
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [3,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,2,]
[timestamp] [3,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------
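In the fixture above, `TaskCancelation` carries its matched uids as a `RoaringBitmap<[0, 1, 2]>`, and `Canceled By:` maps the canceling task 3 to `[1,2,]`. A hedged sketch of that set arithmetic with the `roaring` crate (the crate version and `main` scaffold are assumptions for illustration; the scheduler itself is not reproduced):

```rust
// Assumes `roaring = "0.10"` in Cargo.toml.
use roaring::RoaringBitmap;

fn main() {
    // Task uids matched by the cancelation query, per the fixture.
    let matched: RoaringBitmap = [0u32, 1, 2].into_iter().collect();

    // Task 0 had already succeeded, so it cannot be canceled.
    let finished: RoaringBitmap = [0u32].into_iter().collect();

    // Set difference yields the tasks actually canceled: {1, 2}.
    let canceled = &matched - &finished;
    assert!(canceled.contains(1) && canceled.contains(2));
    assert_eq!(canceled.len(), 2);
}
```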


@@ -0,0 +1,49 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
catto [2,]
doggo [0,]
whalo [1,]
----------------------------------------------------------------------
### Index Mapper:
["catto", "doggo", "whalo"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggo [0,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,39 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,]
----------------------------------------------------------------------
### Index Tasks:
doggo [0,]
whalo [1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,42 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("bone") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("bone") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("plankton") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("plankton") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("his_own_vomit") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("his_own_vomit") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
catto [2,]
doggo [0,]
whalo [1,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: failed, error: ResponseError { code: 200, message: "Corrupted task queue.", error_code: "internal", error_type: "internal", error_link: "https://docs.meilisearch.com/errors#internal" }, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,]
failed [2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,]
doggo [1,]
whalo [2,]
----------------------------------------------------------------------
### Index Mapper:
["catto", "doggo"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -7,22 +7,25 @@ source: index-scheduler/src/lib.rs
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 5000, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("fish") }, kind: IndexCreation { index_uid: "whalo", primary_key: Some("fish") }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,2,]
"indexCreation" [0,]
"indexCreation" [0,1,2,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,1,]
doggo [2,]
catto [0,]
doggo [1,]
whalo [2,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]


@@ -7,24 +7,27 @@ source: index-scheduler/src/lib.rs
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("mouse") }, kind: IndexCreation { index_uid: "catto", primary_key: Some("mouse") }}
1 {uid: 1, status: enqueued, details: { received_documents: 12, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "catto", primary_key: None, method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 12, allow_index_creation: true }}
2 {uid: 2, status: enqueued, details: { received_documents: 5000, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggo", primary_key: Some("bone"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000001, documents_count: 5000, allow_index_creation: true }}
3 {uid: 3, status: enqueued, details: { matched_tasks: 2, deleted_tasks: None, original_query: "test_query" }, kind: TaskDeletion { query: "test_query", tasks: RoaringBitmap<[0, 1]> }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("sheep") }, kind: IndexCreation { index_uid: "doggo", primary_key: Some("sheep") }}
2 {uid: 2, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "doggo") }] }}
3 {uid: 3, status: enqueued, details: { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("catto", "whalo") }] }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,2,]
"indexCreation" [0,]
"taskDeletion" [3,]
"indexCreation" [0,1,]
"indexSwap" [2,3,]
----------------------------------------------------------------------
### Index Tasks:
catto [0,1,]
doggo [2,]
catto [0,2,3,]
doggo [1,2,]
whalo [3,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]


@@ -24,6 +24,9 @@ doggo [3,]
----------------------------------------------------------------------
### Index Mapper:
[]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
@@ -36,6 +39,9 @@ doggo [3,]
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
----------------------------------------------------------------------


@@ -0,0 +1,48 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
1 {uid: 1, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "b", primary_key: Some("id") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
3 {uid: 3, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [1,2,3,]
succeeded [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,3,]
----------------------------------------------------------------------
### Index Tasks:
a [0,]
b [1,]
c [2,]
d [3,]
----------------------------------------------------------------------
### Index Mapper:
["a"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,50 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "b", primary_key: Some("id") }}
2 {uid: 2, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
3 {uid: 3, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [2,3,]
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,3,]
----------------------------------------------------------------------
### Index Tasks:
a [0,]
b [1,]
c [2,]
d [3,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -0,0 +1,52 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "b", primary_key: Some("id") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
3 {uid: 3, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [3,]
succeeded [0,1,2,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,3,]
----------------------------------------------------------------------
### Index Tasks:
a [0,]
b [1,]
c [2,]
d [3,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b", "c"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -26,6 +26,9 @@ d [3,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b", "c", "d"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]


@@ -10,24 +10,28 @@ source: index-scheduler/src/lib.rs
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
3 {uid: 3, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
4 {uid: 4, status: succeeded, details: { indexes: [("a", "b"), ("c", "d")] }, kind: IndexSwap { swaps: [("a", "b"), ("c", "d")] }}
4 {uid: 4, status: succeeded, details: { swaps: [IndexSwap { indexes: ("a", "b") }, IndexSwap { indexes: ("c", "d") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("a", "b") }, IndexSwap { indexes: ("c", "d") }] }}
5 {uid: 5, status: enqueued, details: { swaps: [IndexSwap { indexes: ("a", "c") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("a", "c") }] }}
----------------------------------------------------------------------
### Status:
enqueued []
enqueued [5,]
succeeded [0,1,2,3,4,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,3,]
"indexSwap" [4,]
"indexSwap" [4,5,]
----------------------------------------------------------------------
### Index Tasks:
a [1,4,]
a [1,4,5,]
b [0,4,]
c [3,4,]
c [3,4,5,]
d [2,4,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b", "c", "d"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
@@ -35,6 +39,7 @@ d [2,4,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]


@@ -6,29 +6,31 @@ source: index-scheduler/src/lib.rs
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "b", primary_key: Some("id") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
3 {uid: 3, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
4 {uid: 4, status: succeeded, details: { indexes: [("a", "b"), ("c", "d")] }, kind: IndexSwap { swaps: [("c", "b"), ("a", "d")] }}
5 {uid: 5, status: succeeded, details: { indexes: [("a", "c")] }, kind: IndexSwap { swaps: [("a", "c")] }}
0 {uid: 0, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "b", primary_key: Some("id") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
3 {uid: 3, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
4 {uid: 4, status: enqueued, details: { swaps: [IndexSwap { indexes: ("a", "b") }, IndexSwap { indexes: ("c", "d") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("a", "b") }, IndexSwap { indexes: ("c", "d") }] }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,2,3,4,5,]
enqueued [4,]
succeeded [0,1,2,3,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,1,2,3,]
"indexSwap" [4,5,]
"indexSwap" [4,]
----------------------------------------------------------------------
### Index Tasks:
a [3,4,5,]
b [0,4,]
c [1,4,5,]
d [2,4,]
a [0,4,]
b [1,4,]
c [2,4,]
d [3,4,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b", "c", "d"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
@@ -36,23 +38,18 @@ d [2,4,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
[timestamp] [2,]
[timestamp] [3,]
[timestamp] [4,]
[timestamp] [5,]
----------------------------------------------------------------------
### File Store:


@@ -10,8 +10,8 @@ source: index-scheduler/src/lib.rs
1 {uid: 1, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "c", primary_key: Some("id") }}
2 {uid: 2, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "d", primary_key: Some("id") }}
3 {uid: 3, status: succeeded, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "a", primary_key: Some("id") }}
4 {uid: 4, status: succeeded, details: { indexes: [("a", "b"), ("c", "d")] }, kind: IndexSwap { swaps: [("c", "b"), ("a", "d")] }}
5 {uid: 5, status: succeeded, details: { indexes: [("a", "c")] }, kind: IndexSwap { swaps: [("a", "c")] }}
4 {uid: 4, status: succeeded, details: { swaps: [IndexSwap { indexes: ("c", "b") }, IndexSwap { indexes: ("a", "d") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("c", "b") }, IndexSwap { indexes: ("a", "d") }] }}
5 {uid: 5, status: succeeded, details: { swaps: [IndexSwap { indexes: ("a", "c") }] }, kind: IndexSwap { swaps: [IndexSwap { indexes: ("a", "c") }] }}
----------------------------------------------------------------------
### Status:
enqueued []
@@ -29,6 +29,9 @@ d [2,4,]
----------------------------------------------------------------------
### Index Mapper:
["a", "b", "c", "d"]
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]

Some files were not shown because too many files have changed in this diff.