Compare commits

...

149 Commits

Author SHA1 Message Date
Clément Renault
8f04353b7d bump strois to a temporary version while we wait for rusty_s3 to do a new release again 2023-10-17 11:49:18 +02:00
Clément Renault
b9983a48c7 No longer create the S3 bucket 2023-10-17 10:26:03 +02:00
Tamo
b22f1260bf bump strois to a temporary version while we wait for rusty_s3 to do a new release 2023-10-13 11:04:12 +02:00
Tamo
dfb84f80da bump strois version 2023-10-10 19:25:12 +02:00
Tamo
98b67f217a move to our new S3 lib 2023-09-28 11:24:18 +02:00
Tamo
6325cda74f bump charabia 2023-09-21 15:18:44 +02:00
Tamo
c71ba72f73 fix build in release mode 2023-09-21 11:05:57 +02:00
Tamo
ecd36b15f0 exposes all the s3 arguments 2023-09-13 18:17:56 +02:00
Clément Renault
8a2e8a887f Load the latest snapshot when we start the engine 2023-09-12 18:08:24 +02:00
Clément Renault
309c33a418 Fix again the dots 2023-09-12 17:55:01 +02:00
Clément Renault
9b01506cee Move the load snapshot step into a function 2023-09-12 16:05:02 +02:00
Clément Renault
f37fdceb15 Use slashes instead of dots for the s3 paths separators 2023-09-12 15:46:15 +02:00
Clément Renault
f544cfa444 Remove tasks and content file on the s3 2023-09-12 15:19:45 +02:00
Clément Renault
c158d03337 Fix internal error 2023-09-12 14:46:13 +02:00
Tamo
b7109c0fd2 start a script to run everything 2023-09-12 11:34:59 +02:00
Kerollmops
a53a0fdb77 Store content files into the S3 2023-09-11 18:17:22 +02:00
Clément Renault
719fdd701b Fix a crash when the tasks path is unknown 2023-09-07 11:31:18 +02:00
Kerollmops
01c13c98ac Mastering minio 2023-09-06 17:54:21 +02:00
Tamo
5b89276fcc starts using s3 2023-09-05 19:25:09 +02:00
Kerollmops
41697c4d65 Introduce the zk-tasks folder 2023-09-04 18:24:34 +02:00
Kerollmops
7d85753573 Make the snapshot download work 2023-09-04 17:38:56 +02:00
Kerollmops
76657af1f9 Add the options into the IndexScheduler 2023-09-04 16:38:05 +02:00
Tamo
966cbdab69 make the tests compile again 2023-09-04 15:39:54 +02:00
Clément Renault
0c68b9ed4c WIP making the final snapshot swap 2023-08-31 15:56:42 +02:00
Clément Renault
d7233ecdb8 Make things compile again 2023-08-31 14:55:14 +02:00
Clément Renault
95a011af13 Wrap the IndexScheduler fields into an inner struct 2023-08-31 10:36:33 +02:00
Clément Renault
e257710961 WIP fix the tests 2023-08-30 18:03:24 +02:00
Clément Renault
9dd4423054 Fix the watcher ordering of the auth/ node 2023-08-30 17:51:22 +02:00
Clément Renault
8c3ad57ef9 React to changes towards the cluster members 2023-08-30 17:40:12 +02:00
Clément Renault
2d1434da81 Keep the ZK flow when enqueuing tasks 2023-08-30 17:15:15 +02:00
Clément Renault
c488a4a351 Fixup a lot of small issues on the ZK config 2023-08-30 16:42:55 +02:00
Kerollmops
0c7d7c68bc WIP moving to the sync zookeeper API 2023-08-30 15:06:12 +02:00
Tamo
854745c670 wip: starts working on importing the snapshots 2023-08-16 18:41:05 +02:00
Tamo
777eebb759 starts creating snapshot, the import is still missing 2023-08-10 15:00:25 +02:00
Tamo
61ccfaf9bc wake up after registering a task 2023-08-10 09:39:39 +02:00
Tamo
f0c4d36ff7 implement the deletion of tasks after processing a batch
add a lot of comments and logs
2023-08-10 09:36:43 +02:00
Tamo
8c20d6e2fe fix the leader election 2023-08-09 17:23:13 +02:00
ManyTheFish
8e437ed76c Start leader election and task processing (WIP) 2023-08-09 16:52:38 +02:00
Tamo
1191ec5939 fix the register task watcher 2023-08-08 13:18:55 +02:00
Tamo
0d20d08daf fix a few warnings 2023-08-08 11:39:48 +02:00
ManyTheFish
b66bf049b5 Create a task on zookeeper side when a task is created locally 2023-08-07 17:02:51 +02:00
ManyTheFish
b2f36b9b97 Comment Meilisearch container by default 2023-08-07 17:02:00 +02:00
ManyTheFish
b311089435 Update zookeeper client 2023-08-07 14:20:01 +02:00
ManyTheFish
3d46e84d97 update docker compose as an example 2023-08-03 17:15:24 +02:00
ManyTheFish
ad7f8edff8 fix auto-synchronization with zk 2023-08-03 14:43:29 +02:00
Tamo
5ce01bcb53 add logs 2023-08-03 13:59:05 +02:00
Tamo
d5523cc6ac fix the tests 2023-08-03 12:28:08 +02:00
Tamo
fe7a312ec6 Import the already existing api keys on startup 2023-08-03 12:25:32 +02:00
Tamo
57dc4b148c implement the watcher for all kind of operations 2023-08-03 10:52:13 +02:00
Tamo
a325ddfe6a Forward the key deletions to zookeeper 2023-08-03 10:36:49 +02:00
Tamo
0cd81573b4 Forward the keys update to zookeeper 2023-08-03 10:22:34 +02:00
ManyTheFish
b0ff595f60 Event Listener: delete local key if deleted on ZK 2023-08-02 18:36:36 +02:00
ManyTheFish
3eb6f4b56f Create api keys 2023-08-02 16:52:45 +02:00
ManyTheFish
49f976c8d8 fix analytics compilation 2023-08-02 14:17:03 +02:00
Tamo
84d56f3320 send the creation of api-key to zookeeper 2023-08-02 13:57:30 +02:00
Tamo
97e3dfd99d makes zk available inside the auth-controller with config coming from the cli, it compiles 2023-08-02 13:17:40 +02:00
ManyTheFish
dc38da95c4 WIP 2023-08-02 12:00:02 +02:00
ManyTheFish
2ce8b42757 REMOVE: add docker compose for tests 2023-08-02 11:59:54 +02:00
meili-bors[bot]
f391039a6f Merge #3967
3967: Bring back changes from `release-v1.3.0` into `main` r=ManyTheFish a=curquiza

Using a temp branch because of git conflict

Co-authored-by: Cong Chen <cong.chen@ocrlabs.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: meili-bors[bot] <89034592+meili-bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-08-01 16:22:09 +00:00
curquiza
fcdd20b533 Fix README after git conflict 2023-08-01 16:06:33 +02:00
ManyTheFish
b45c36cd71 Merge branch 'main' into tmp-release-v1.3.0 2023-08-01 15:05:17 +02:00
meili-bors[bot]
151c31c18f Merge #3963
3963: Fix the milli crate r=ManyTheFish a=irevoire

Milli was using the serde feature of `either` without enabling it first; thus, it wasn't working.

It was working in meilisearch, though, because `meilisearch-types` was using the feature which enables it globally for all the other crates.

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/3962

Co-authored-by: Tamo <tamo@meilisearch.com>
2023-07-31 09:32:08 +00:00
Tamo
a8ad0902d3 Fix the milli crate
Milli was using the serde feature of `either` without enabling it first, so it wasn't working
2023-07-31 11:08:27 +02:00
meili-bors[bot]
e917dbdebb Merge #3957
3957: fix: upgrade mimalloc dependency to resolve FreeBSD build r=irevoire a=ThatOneCalculator

# Pull Request

## Related issue
Fixes #3806

## What does this PR do?
- Upgrades mimalloc to 0.1.37
- Fixes build on FreeBSD

Ref: https://github.com/meilisearch/meilisearch/issues/3806#issuecomment-1653693468

Tested and working on FreeBSD 13.1-RELEASE-p5

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: ThatOneCalculator <kainoa@t1c.dev>
2023-07-31 08:49:36 +00:00
ThatOneCalculator
ba919b6123 fix: ⬆️ up mimalloc 2023-07-28 20:35:47 -07:00
meili-bors[bot]
5b0157c6c6 Merge #3955
3955: Update mini-dashboard to version 0.2.11 r=curquiza a=bidoubiwa

# Pull Request

## What does this PR do?
- Updates the mini-dashboard to version [0.2.11](https://github.com/meilisearch/mini-dashboard/releases/tag/v0.2.11)

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2023-07-27 11:59:55 +00:00
Charlotte Vermandel
3b9a87c790 Update mini-dashboard to version 0.2.11 2023-07-27 13:16:32 +02:00
meili-bors[bot]
3a3414270d Merge #3952
3952: Use the new safe `read-txn-no-tls` heed feature r=ManyTheFish a=Kerollmops

[We recently found out](https://github.com/meilisearch/heed/issues/191#issuecomment-1650280513) that the `sync-read-txn` heed feature was invalid and must be removed from this crate. We were declaring it in milli/meilisearch but, fortunately, not sharing the `RoTxn`s across threads 😮‍💨

[I recently introduced the `read-txn-no-tls` heed feature](https://github.com/meilisearch/heed/pull/194), which implements `RoTxn: Send` and allows multiple read transactions on a single thread (which we use).

This PR removes the `sync-read-txn` heed feature from the _Cargo.toml_ file. I will fix this in heed v0.20.0 and will file a RustSec advisory in the meantime.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-07-26 16:40:58 +00:00
meili-bors[bot]
d06e0905db Merge #3953
3953: Update UTM campaign r=curquiza a=macraig

# Pull Request

## What does this PR do?
Redirect CTAs to Cloud landing page



Co-authored-by: María <maria@Marias-MacBook-Pro.local>
2023-07-26 15:20:40 +00:00
meili-bors[bot]
939b2fc6fd Merge #3949
3949: Fix score details casing r=Kerollmops a=ManyTheFish

# Pull Request

Fixes #3941


Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-07-26 14:14:59 +00:00
María
fae61372be Redirect CTAs to Cloud landing page 2023-07-26 15:54:43 +02:00
Clément Renault
d8b47b689e Use the new read-txn-no-tls heed feature 2023-07-26 15:45:15 +02:00
meili-bors[bot]
be72be7c0d Merge #3942
3942: Normalize the facet values for the search r=ManyTheFish a=Kerollmops

This PR improves and fixes the search for facet values feature. Searching for _bre_ wasn't returning facet values like _brévent_ or _brô_.

The issue was related to the fact that facets are normalized but not in the same way as the `searchableAttributes` are. We decided to normalize them further and add another intermediate database where the key is the normalized facet value, and the value is a set of the non-normalized facets. We then use these non-normalized ones to get the correct counts by fetching the associated databases.

### What's missing in this PR?
 - [x] Apply the change to the whole set of `SearchForFacetValue::execute` conditions.
 - [x] Factorize the code that does an intermediate normalized value fetch in a function.
 - [x] Add or modify the search for facet value test.
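
Conceptually, the intermediate database described above maps a further-normalized facet value to the set of original facet values, roughly as in this sketch (the `normalize` closure is a simplistic stand-in for the real pipeline, which also strips diacritics so that _bre_ can match _brévent_):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Sketch: map a further-normalized facet value to the set of original,
// non-normalized facet values; the originals are later used to fetch
// the correct counts from the existing facet databases.
fn build_normalized_facets(values: &[&str]) -> BTreeMap<String, BTreeSet<String>> {
    let normalize = |s: &str| s.to_lowercase(); // stand-in normalizer
    let mut index: BTreeMap<String, BTreeSet<String>> = BTreeMap::new();
    for v in values {
        index.entry(normalize(v)).or_default().insert((*v).to_string());
    }
    index
}
```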

Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-25 14:37:17 +00:00
ManyTheFish
88559a2d54 Fix score details casing 2023-07-25 15:49:33 +02:00
Clément Renault
59201a7852 Use snapshot instead of asserts
Co-authored-by: Many the fish <many@meilisearch.com>
2023-07-25 15:34:05 +02:00
meili-bors[bot]
9e3e69373e Merge #3948
3948: Fix hnsw internal panic by using another library r=ManyTheFish a=Kerollmops

This pull request fixes #3923. The issue concerns the `hnsw` crate panicking due to a wrong call to the `[T]::copy_from_slice` function.

I decided to switch the library to `instant-distance`, which is maintained [by someone trustworthy](https://lib.rs/~djc) who maintains a lot of very important crates. A usage sketch follows the checklist below.

- [x] Make Clippy happy with the first commit.
- [x] Reproduce the #3923 bug without this patch
- [x] Check if the bug disappeared with this PR.
- [x] Test with [the Algolia e-commerce dataset](https://www.notion.so/meilisearch/Algolia-Ecommerce-c5fa3b5f23a7485295df7e87306d5859).
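
For reference, a usage sketch of `instant-distance` (API as of roughly v0.6; the tiny 2-D point type is illustrative only, real embeddings are higher-dimensional):

```rust
use instant_distance::{Builder, Point, Search};

#[derive(Clone, Copy, Debug)]
struct P([f32; 2]); // illustrative 2-D point

impl Point for P {
    fn distance(&self, other: &Self) -> f32 {
        // Euclidean distance between the two points
        ((self.0[0] - other.0[0]).powi(2) + (self.0[1] - other.0[1]).powi(2)).sqrt()
    }
}

fn main() {
    let points = vec![P([0.0, 0.0]), P([1.0, 1.0]), P([0.2, 0.1])];
    let values = vec!["a", "b", "c"];
    let map = Builder::default().build(points, values);

    let mut search = Search::default();
    for item in map.search(&P([0.1, 0.1]), &mut search).take(2) {
        println!("{} at distance {}", item.value, item.distance);
    }
}
```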

Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-25 13:28:25 +00:00
Kerollmops
29ab54b259 Replace the hnsw crate by the instant-distance one 2023-07-25 12:37:35 +02:00
Kerollmops
86d8bb3a3e Make clippy happy (again) 2023-07-25 10:30:50 +02:00
Kerollmops
0e2a5951b4 Add more advanced tests 2023-07-24 18:04:58 +02:00
Kerollmops
691a536893 Implement the facet search with the normalized index 2023-07-24 17:56:17 +02:00
Clément Renault
df528b41d8 Normalize the facet values for the search 2023-07-20 17:57:07 +02:00
meili-bors[bot]
2452ec55b4 Merge #3940
3940: Update mini dashboard v0.2.9 r=gillian-meilisearch a=bidoubiwa

# Pull Request


## What does this PR do?
- Updates the mini-dashboard to version [0.2.9](https://github.com/meilisearch/mini-dashboard/releases/tag/v0.2.9)

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2023-07-20 15:08:59 +00:00
Charlotte Vermandel
54ae1b5a67 Update mini-dashboard to version 0.2.9 2023-07-20 14:11:17 +02:00
meili-bors[bot]
3070a20580 Merge #3937
3937: Update Charabia to the last version r=Kerollmops a=ManyTheFish

# Pull Request

## Related issue
Fixes #3924

## What does this PR do?
- Update Charabia


Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-07-19 14:57:38 +00:00
ManyTheFish
0497f93494 Update Charabia to the last version 2023-07-19 15:19:32 +02:00
meili-bors[bot]
2dfbb6813a Merge #3913
3913: Expose a Puffin server to profile the indexing process r=Kerollmops a=Kerollmops

This PR exposes a puffin HTTP server that reports the internal time it takes to index documents, delete documents, or update the settings of an index.

<img width="1752" alt="Capture d’écran 2023-07-10 à 18 44 58" src="https://github.com/meilisearch/meilisearch/assets/3610253/a3c7a6bf-db5b-42f4-8be1-c4e31c869843">

## To be done

 - [x] Move the puffin HTTP server under a feature flag.
 - [x] Use [the `puffin::set_scopes_on` function](https://docs.rs/puffin/latest/puffin/fn.set_scopes_on.html) to toggle it (by using the feature directly).
    When this function is called with `false`, [a call to `profile_scope!` takes only 1-2 ns](https://docs.rs/puffin/latest/puffin/fn.set_scopes_on.html) (see the sketch after this list).
 - [x] Create a _PROFILING.md_ file explaining how to use it.
   - [x] Explain that merging scopes on the interface is not always useful.
 - [x] Add more info on the number of batched tasks (using the `puffin::profile_scope!` macro data).
   - I added more info, but that's more continuous work when we consider we need more info here and there.
 - [x] Clean up some scopes, and don't touch too much code to inject puffin.
   - I am not sure that the _index_documents/mod.rs_ function is that complex with the addition of the scope.
 - [x] Think about what we consider frames: one indexing operation or the whole program? When must we stop the frame, then?
   - What we consider a frame is one single `IndexScheduler::tick` execution.
   - We can change that later.
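
A minimal sketch of the toggle described above (the feature name is the one in this PR; the exact wiring is an assumption):

```rust
// Sketch: scopes stay compiled in, but recording only happens when the
// Cargo feature is enabled; a disabled `profile_scope!` costs ~1-2 ns.
fn setup_profiling() {
    puffin::set_scopes_on(cfg!(feature = "profile-with-puffin"));
}

fn tick() {
    puffin::profile_function!(); // recorded only when scopes are on
    // ... scheduler work ...
    puffin::GlobalProfiler::lock().new_frame(); // one frame per tick
}
```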

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-07-19 09:44:01 +00:00
Clément Renault
8f589a5cce Introduce a PROFILING.md tutorial to profile Meilisearch 2023-07-18 17:38:13 +02:00
Clément Renault
0b8bbd8750 Toggle the puffin profiling with a feature flag 2023-07-18 17:38:13 +02:00
Kerollmops
eef95de30e First iteration on exposing puffin profiling 2023-07-18 17:38:13 +02:00
meili-bors[bot]
d5ab750627 Merge #3935
3935: Update mini-dashboard to version 0.2.8 r=Kerollmops a=bidoubiwa

# Pull Request


## What does this PR do?
- Updates the mini-dashboard to version [0.2.8](https://github.com/meilisearch/mini-dashboard/releases/tag/v0.2.8)

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2023-07-18 12:59:29 +00:00
Charlotte Vermandel
2afd10f96d Update mini-dashboard to version 0.2.8 2023-07-18 14:49:36 +02:00
meili-bors[bot]
2d2619bd90 Merge #3933
3933: Stop computing the update files size r=ManyTheFish a=Kerollmops

This PR, related to #3934, removes the part that computes the total size of the `data.ms/update_files` folder, which can take a lot of time when many updates must be processed.

It is not an API-breaking change, but it does change what we show to the user: the `databaseSize` field returned by the `/stats` endpoint will be smaller.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-18 12:02:08 +00:00
Kerollmops
516d2df862 Stop computing the update files size 2023-07-18 11:51:30 +02:00
meili-bors[bot]
c76b488ab1 Merge #3929
3929: Fix a panic when sorting geo fields represented by strings r=Kerollmops a=Kerollmops

This PR fixes #3927 by retrieving the original string values and parsing them into f64s. I also added a test to ensure we don't break it in a future version.
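
A sketch of the lenient parsing described (the helper name is hypothetical):

```rust
// Sketch: accept `_geo` lat/lng both as JSON numbers and as numeric
// strings, parsing strings into f64 instead of panicking on them.
fn parse_coordinate(value: &serde_json::Value) -> Option<f64> {
    match value {
        serde_json::Value::Number(n) => n.as_f64(),
        serde_json::Value::String(s) => s.trim().parse::<f64>().ok(),
        _ => None,
    }
}
```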

Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-18 09:13:22 +00:00
Kerollmops
d383afc82b Fix the geo sort when lat and lng are strings 2023-07-17 18:28:04 +02:00
Kerollmops
f9d94c5845 Test geo sort with string lat/lng 2023-07-17 18:28:03 +02:00
meili-bors[bot]
7745cc9d3c Merge #3921
3921: Deactivate camel case segmentation r=dureuill a=ManyTheFish

# Pull Request
This PR deactivates camel-case segmentation to restore the ability to accept typos on camel-cased words

## Related issue
Fixes #3869
Fixes #3818

## What does this PR do?
- deactivates camelcase segmentation

related to #3919



Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-07-13 11:00:14 +00:00
meili-bors[bot]
657f24ec5f Merge #3907
3907: Add telemetry for define field to search on at query time r=dureuill a=ManyTheFish

Add "attributes_to_search_on" telemetry usage counter:
```json
"attributes_to_search_on": {
   "total_number_of_use": 12,
},
```

This measures the number of search queries in which the user uses the `attributesToSearchOn` field.

related to https://github.com/meilisearch/specifications/pull/251

## reviewers:

- `@macraig` for validating the telemetry's name
- `@dureuill` for validating the code

Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-07-13 10:14:00 +00:00
ManyTheFish
c106906f8f deactivate camelCase segmentation 2023-07-13 12:06:27 +02:00
ManyTheFish
9c0691156f Add tests 2023-07-13 11:53:13 +02:00
ManyTheFish
359b90288d Use saturating add 2023-07-13 11:38:28 +02:00
ManyTheFish
13e3f8faae Fix typo 2023-07-13 11:34:50 +02:00
meili-bors[bot]
fd7c66fd62 Merge #3915
3915: `attributesToSearchOn` supports wildcards r=ManyTheFish a=dureuill

# Pull Request

## Related issue

Fixes #3912  and #3911 

## What does this PR do?
- Adding `*` in the list of `attributesToSearchOn` allows searching on all the `searchableAttributes`.
- If `searchableAttributes` contains `"*"`, then any attribute is accepted in the `attributesToSearchOn` list (see the sketch below).
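
A sketch of that rule (names are illustrative, not the PR's actual code):

```rust
// Sketch: `*` on either side makes the requested attribute acceptable.
fn attribute_allowed(searchable_attributes: &[&str], requested: &str) -> bool {
    searchable_attributes.contains(&"*")
        || requested == "*"
        || searchable_attributes.contains(&requested)
}
```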


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-13 09:33:10 +00:00
Louis Dureuil
183f23f40d More relevant test
Co-authored-by: Many the fish <many@meilisearch.com>
2023-07-12 16:06:15 +02:00
Louis Dureuil
16c8437b28 Update tests 2023-07-12 11:21:19 +02:00
Louis Dureuil
4310928803 Fixes #3912 2023-07-12 10:08:56 +02:00
Louis Dureuil
74315b4ea8 Fixes #3911 2023-07-12 10:08:29 +02:00
meili-bors[bot]
177e6e27f9 Merge #3901
3901: Fix experimental analytics r=curquiza a=dureuill

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/specifications/pull/250#discussion_r1253191583

## What does this PR do?
- `snake_case` instead of `camelCase` for feature fields


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-10 16:22:59 +00:00
meili-bors[bot]
50afe724ae Merge #3909
3909: Effectively send the `vector.max_vector_size` telemetry r=curquiza a=Kerollmops

This PR effectively aggregates and sends the `vector.max_vector_size` analytics value.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-10 15:44:30 +00:00
Kerollmops
012c960fad Send the vector.max_vector_size telemetry 2023-07-10 16:50:37 +02:00
meili-bors[bot]
76f6d3357e Merge #3908
3908: Allow a comma-separated value to the `vector` argument in GET search r=Kerollmops a=dureuill

# Pull Request

For request:

```
 curl \
  -X GET 'http://localhost:7700/indexes/movies/search?vector=0.123,1.124,244'
```

Before PR: 

```
{"message":"Invalid value type for parameter `vector`: expected a string, but found a string: `0,1,2`","code":"invalid_search_vector","type":"invalid_request","link":"https://docs.meilisearch.com/errors#invalid_search_vector"}%
```

After PR:

```
{"hits":[],"query":"","vector":[0.123,1.124,244.0],"processingTimeMs":0,"limit":20,"offset":0,"estimatedTotalHits":1000}%
```
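
The fix amounts to parsing the comma-separated value, roughly like this sketch (illustrative, not the actual handler code):

```rust
// Sketch: split the GET parameter on commas and parse each part as an f32,
// so `?vector=0.123,1.124,244` becomes `vec![0.123, 1.124, 244.0]`.
fn parse_vector_param(raw: &str) -> Result<Vec<f32>, std::num::ParseFloatError> {
    raw.split(',').map(|part| part.trim().parse::<f32>()).collect()
}
```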

cc `@gmourier` `@bidoubiwa` 


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-10 14:25:44 +00:00
Louis Dureuil
d59e969c16 Allow a comma-separated value to the vector argument in GET search 2023-07-10 16:16:34 +02:00
meili-bors[bot]
eb7a1aa7af Merge #3904
3904: Sort by lexicographic order after normalization r=dureuill a=dureuill

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/3893

## What does this PR do?
- Re-sort stop words after normalization so they're not sent out of order to the FST (see the sketch below)
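
A sketch of the fix, assuming the `fst` crate's requirement that keys be inserted in lexicographic byte order:

```rust
// Sketch: normalization can reorder strings, so sort (and dedup) the
// normalized stop words again before building the FST set.
fn build_stop_words_fst(
    raw: Vec<String>,
    normalize: impl Fn(&str) -> String,
) -> Result<fst::Set<Vec<u8>>, fst::Error> {
    let mut normalized: Vec<String> = raw.iter().map(|w| normalize(w)).collect();
    normalized.sort_unstable();
    normalized.dedup();
    fst::Set::from_iter(normalized)
}
```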


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-10 12:12:05 +00:00
ManyTheFish
c30a14cb97 Add telemetry 2023-07-10 13:12:12 +02:00
meili-bors[bot]
a3ca8412ce Merge #3906
3906: Add "scoring.*" analytics to multi search route r=Kerollmops a=dureuill

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/specifications/pull/252#discussion_r1254375746 by implementing (3): multi search now returns the "score.show_ranking_rule" and "score.show_ranking_rule_details" analytics.


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-10 09:51:30 +00:00
Louis Dureuil
106f98aa72 Add "scoring.*" analytics to multi search route 2023-07-10 11:45:43 +02:00
Louis Dureuil
40fa59d64c Sort by lexicographic order after normalization 2023-07-10 09:26:59 +02:00
Louis Dureuil
bb40ce6e35 Experimental features analytics match the spec 2023-07-10 08:57:53 +02:00
meili-bors[bot]
0c8dbf6fa6 Merge #3897
3897: Add automated tests for `/experimental-features` route r=Kerollmops a=dureuill

# Pull Request

## What does this PR do?
- Make `RuntimeTogglableFeatures` `Eq`
- Add various tests for the `/experimental-features` route
  - Integration tests for the route itself
  - Integration tests for the effect of enabling `scoreDetails` and `vectorStore` through this route.
  - Dump integration tests


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-06 13:37:56 +00:00
Louis Dureuil
dd6519b64f Dump tests 2023-07-06 14:22:29 +02:00
Louis Dureuil
da02a9cf32 Make RuntimeTogglableFeatures Eq 2023-07-06 14:20:58 +02:00
meili-bors[bot]
ff192bb480 Merge #3889
3889: Display the total number of tasks matching a filter/query r=dureuill a=Kerollmops

This PR returns a new field on the `/tasks` routes. The `total` field exposes the total number of tasks that match the given filter/query. It is useful for displaying information in a user interface and can help users see when progress is made in processing tasks, e.g., the total number of tasks on `/tasks?statuses=succeeded` will increase over time.

Fixes #3888.

- [ ] Update the specs for the `/tasks` route.

## How have I implemented it?

I found it much easier to run the task filtering system twice: once with the original `from` and `limit` parameters, and a second time without. The second call returns the total number of tasks that match the query, not only the number of tasks on the current page.

So far, in terms of performance, there doesn't seem to be any issue. I tried different filters with something like 250k tasks. Note that there is a limit of 1M tasks in the queue.
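
As a rough sketch of the double-pass idea (the types are stand-ins, and `from` is treated as a plain offset here for simplicity, which is an assumption):

```rust
// Sketch: filter once with pagination for the page, once without for `total`.
struct Query { statuses: Vec<String>, from: usize, limit: usize }

fn page_and_total(all_tasks: &[(u32, String)], query: &Query) -> (Vec<u32>, u64) {
    let matches = |t: &&(u32, String)| query.statuses.contains(&t.1);
    let total = all_tasks.iter().filter(matches).count() as u64;
    let page: Vec<u32> = all_tasks
        .iter()
        .filter(matches)
        .skip(query.from)
        .take(query.limit)
        .map(|(uid, _)| *uid)
        .collect();
    (page, total)
}
```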

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-07-06 10:23:09 +00:00
Clément Renault
22762808ab Fix the tests 2023-07-06 12:13:29 +02:00
Clément Renault
86b834c9e4 Display the total number of tasks in the tasks route 2023-07-06 10:05:18 +02:00
Louis Dureuil
2d3cec11a7 Search integration test to check score details and vector store 2023-07-06 09:02:02 +02:00
Louis Dureuil
76e1ee9988 integration test on "/experimental-features" route 2023-07-06 09:01:28 +02:00
Louis Dureuil
222615d3df Allow to get/set features in integration test server 2023-07-06 09:01:05 +02:00
Louis Dureuil
11d024c613 Authentication tests 2023-07-06 09:00:51 +02:00
meili-bors[bot]
886c8bb647 Merge #3891
3891: Fix the way we compute the 99th percentile r=dureuill a=Kerollmops

This PR fixes how we compute the 99th percentile: by avoiding floats and doing the multiplication and division in the correct order, we never index out of the timings buffer. You can see the issue on [this rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021).

When there is a very small number of successful requests, the 99th percentile calculation sometimes gives an index outside the buffer. In this example, the `1`/`1.0` represents the number of timings collected (one). As you can see, the float computation gives us index `1.0`, which is out of bounds for a vector of only one value. This makes the engine generate a `null` value.

```rust
1 * 99 / 100 = 0 // with integers
0.99_f64 * (1.0 - 1.0) + 1.0 = 1.0 // with floats
```
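
A sketch of the integer-only fix: the floored index is always in bounds, even with a single timing.

```rust
// Sketch: `len * 99 / 100` floors with integer division, so for one
// collected timing we get index 0 instead of the out-of-bounds 1.
fn percentile_99(sorted_timings: &[u64]) -> Option<u64> {
    if sorted_timings.is_empty() {
        return None;
    }
    sorted_timings.get(sorted_timings.len() * 99 / 100).copied()
}
```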

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-07-06 06:04:08 +00:00
meili-bors[bot]
b422e5fdc3 Merge #3890
3890: Fix the analytics of the sort facet values by count feature r=dureuill a=Kerollmops

This PR ensures we return the right analytics from the settings route.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-07-06 05:24:40 +00:00
Clément Renault
d727ebee05 Fix the way we compute the 99th percentile 2023-07-05 17:53:09 +02:00
Clément Renault
da39a7b29e Return the right analytics 2023-07-05 17:27:51 +02:00
meili-bors[bot]
377fe33aac Merge #3885
3885: Exactness missing field r=dureuill a=dureuill

# Pull Request

Adds fields to score details that were [specified](c25d758264/text/0195-ranking-score.md (322-ranking-rule-specific-fields)), but missing in the implementation:

- `exactness.matchingWords`
- `exactness.maxMatchingWords` 


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-04 15:14:53 +00:00
Louis Dureuil
55cd7738b9 Update snapshots 2023-07-04 16:31:01 +02:00
Louis Dureuil
48409c9183 Add missing exactness.matchingWords, exactness.maxMatchingWords 2023-07-04 16:31:01 +02:00
meili-bors[bot]
82650eaae1 Merge #3877
3877: update the total_received properties of multiple events r=dureuill a=dureuill

# Pull Request

## Related issue
Fixes #3814 

## What does this PR do?
- fix the name of `total_received` for several events


Co-authored-by: Tamo <tamo@meilisearch.com>
2023-07-03 19:49:53 +00:00
meili-bors[bot]
b8ca09c13f Merge #3878
3878: Remove unsafe `atty` dependency r=dureuill a=Kerollmops

This PR replaces the `atty` dependency with the `is-terminal` one. We do that to fix GHSA-g98v-hv3f-hcfr.
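
The call-site change is mechanical; here is a sketch assuming a stdout check:

```rust
use is_terminal::IsTerminal as _;

// Before, with atty: `atty::is(atty::Stream::Stdout)`.
// After, with is-terminal (a sketch):
fn stdout_is_a_tty() -> bool {
    std::io::stdout().is_terminal()
}
```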

Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-07-03 19:07:03 +00:00
Kerollmops
a442af6a7c Update the features of the either dependency to compile milli successfully 2023-07-03 18:51:43 +02:00
Kerollmops
e7f8daaf86 Update criterion to 0.5.1 to remove the atty dependency 2023-07-03 18:51:42 +02:00
Kerollmops
d1ff631df8 Replace the atty dependency with the is-terminal one 2023-07-03 18:51:42 +02:00
Tamo
202183adf8 update the total_received properties of multiple events 2023-07-03 15:57:09 +02:00
meili-bors[bot]
aae099e330 Merge #3851
3851: Expose lastUpdate and isIndexing in /stats endpoint r=dureuill a=gentcys

# Pull Request

## Related issue
Fixes #3843

## What does this PR do?
- expose lastUpdate in `/stats` endpoint
- expose isIndexing in the `/stats` endpoint
- add a method `is_task_processing` to index-scheduler/src/lib.rs (sketched below).
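
A sketch of that method, in the spirit of the `RoaringBitmap::is_empty` commit below (struct and field names are assumptions):

```rust
use std::sync::RwLock;
use roaring::RoaringBitmap;

struct ProcessingTasks { processing: RoaringBitmap }
struct IndexScheduler { processing_tasks: RwLock<ProcessingTasks> }

impl IndexScheduler {
    // The engine "is indexing" iff the set of currently-processing
    // task ids is non-empty.
    fn is_task_processing(&self) -> bool {
        !self.processing_tasks.read().unwrap().processing.is_empty()
    }
}
```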

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Cong Chen <cong.chen@ocrlabs.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-07-03 13:41:04 +00:00
Louis Dureuil
5387cf1718 Don't unwrap in case of error/missing last_update field 2023-07-03 15:32:11 +02:00
ManyTheFish
71500a4e15 Update tests 2023-07-03 11:20:43 +02:00
Cong Chen
9859e65d2f fix tests 2023-07-01 09:32:50 +08:00
Cong Chen
3bdf01bc1c Fix failed test 2023-06-30 17:39:23 +08:00
Cong Chen
a5a31667b0 fix inverted result of is_task_processing() 2023-06-30 11:28:18 +08:00
Cong Chen
e3fc7112bc use RoaringBitmap::is_empty instead 2023-06-29 11:46:47 +08:00
Cong Chen
6d4981ec25 Expose lastUpdate and isIndexing in /stats endpoint 2023-06-23 07:24:25 +08:00
210 changed files with 4396 additions and 1910 deletions

Cargo.lock (generated)

File diff suppressed because it is too large.

@@ -37,23 +37,5 @@ opt-level = 3
[profile.dev.package.roaring]
opt-level = 3
[profile.dev.package.lindera-ipadic-builder]
opt-level = 3
[profile.dev.package.encoding]
opt-level = 3
[profile.dev.package.yada]
opt-level = 3
[profile.release.package.lindera-ipadic-builder]
opt-level = 3
[profile.release.package.encoding]
opt-level = 3
[profile.release.package.yada]
opt-level = 3
[profile.bench.package.lindera-ipadic-builder]
opt-level = 3
[profile.bench.package.encoding]
opt-level = 3
[profile.bench.package.yada]
opt-level = 3
[patch.crates-io]
strois = { git = "http://github.com/meilisearch/strois", rev = "eeb945c" }

PROFILING.md (new file)

@@ -0,0 +1,19 @@
# Profiling Meilisearch
Search engine technologies are complex pieces of software that require thorough profiling tools. We chose to use [Puffin](https://github.com/EmbarkStudios/puffin), which the Rust gaming industry uses extensively. You can export and import the profiling reports using the top bar's _File_ menu options.
![An example profiling with Puffin viewer](assets/profiling-example.png)
## Profiling the Indexing Process
When you enable the `profile-with-puffin` feature of Meilisearch, a Puffin HTTP server runs inside Meilisearch and listens on the default _0.0.0.0:8585_ address. This server records a "frame" whenever it executes the `IndexScheduler::tick` method.
Once your Meilisearch is running and awaiting new indexing operations, you must [install and run the `puffin_viewer` tool](https://github.com/EmbarkStudios/puffin/tree/main/puffin_viewer) to see the profiling results. I advise running the viewer with the `RUST_LOG=puffin_http::client=debug` environment variable to see the client trying to connect to your server.
One more tip about the Puffin viewer UI: mind the _Merge children with same ID_ option. It can hide the exact timings at which events were sent, so turn it off when you see strange gaps in the flamegraph.
## Profiling the Search Process
We still need to take the time to profile the search side of the engine with Puffin. It would require time to profile the filtering phase, query parsing, creation, and execution. We could even profile the Actix HTTP server.
The only issue we see is the framing system. Puffin requires a global frame-based profiling phase, which collides with Meilisearch's ability to accept and answer multiple requests on different threads simultaneously.
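For reference, here is a minimal, self-contained sketch of serving Puffin over HTTP with the `puffin_http` crate; the exact wiring inside Meilisearch may differ.

```rust
fn main() {
    // Keep the server handle alive: dropping it stops serving.
    let _server = puffin_http::Server::new("0.0.0.0:8585").expect("start puffin server");
    puffin::set_scopes_on(true); // start recording scopes

    loop {
        {
            puffin::profile_scope!("tick"); // the work measured in this frame
            // ... indexing work goes here ...
        }
        puffin::GlobalProfiler::lock().new_frame(); // frame boundary, once per tick
    }
}
```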


@@ -65,7 +65,7 @@ You may also want to check out [Meilisearch 101](https://www.meilisearch.com/doc
## ⚡ Supercharge your Meilisearch experience
Say goodbye to server deployment and manual updates with [Meilisearch Cloud](https://www.meilisearch.com/pricing?utm_campaign=oss&utm_source=engine&utm_medium=meilisearch). Get started with a 14-day free trial! No credit card required.
Say goodbye to server deployment and manual updates with [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss&utm_source=github&utm_medium=meilisearch). No credit card required.
## 🧰 SDKs & integration tools
@@ -87,7 +87,7 @@ Finally, for more in-depth information, refer to our articles explaining fundame
Meilisearch collects **anonymized** data from users to help us improve our product. You can [deactivate this](https://www.meilisearch.com/docs/learn/what_is_meilisearch/telemetry?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=telemetry#how-to-disable-data-collection) whenever you want.
To request deletion of collected data, please write to us at [privacy@meilisearch.com](mailto:privacy@meilisearch.com). Don't forget to include your `Instance UID` in the message, as this helps us quickly find and delete your data.
If you want to know more about the kind of data we collect and what we use it for, check the [telemetry section](https://www.meilisearch.com/docs/learn/what_is_meilisearch/telemetry?utm_campaign=oss&utm_source=github&utm_medium=meilisearch&utm_content=telemetry#how-to-disable-data-collection) of our documentation.

Binary file not shown (added image, 1.2 MiB).


@@ -14,11 +14,11 @@ license.workspace = true
anyhow = "1.0.70"
csv = "1.2.1"
milli = { path = "../milli" }
mimalloc = { version = "0.1.36", default-features = false }
mimalloc = { version = "0.1.37", default-features = false }
serde_json = { version = "1.0.95", features = ["preserve_order"] }
[dev-dependencies]
criterion = { version = "0.4.0", features = ["html_reports"] }
criterion = { version = "0.5.1", features = ["html_reports"] }
rand = "0.8.5"
rand_chacha = "0.3.1"
roaring = "0.10.1"

docker-compose.yml (new file)

@@ -0,0 +1,59 @@
version: "3.9"
services:
  zk1:
    container_name: zk1
    hostname: zk1
    image: bitnami/zookeeper:3.7.1
    ports:
      - 21811:2181
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zk2:2888:3888,zk3:2888:3888
  zk2:
    container_name: zk2
    hostname: zk2
    image: bitnami/zookeeper:3.7.1
    ports:
      - 21812:2181
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zk1:2888:3888,0.0.0.0:2888:3888,zk3:2888:3888
  zk3:
    container_name: zk3
    hostname: zk3
    image: bitnami/zookeeper:3.7.1
    ports:
      - 21813:2181
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zk1:2888:3888,zk2:2888:3888,0.0.0.0:2888:3888
  zoonavigator:
    container_name: zoonavigator
    image: elkozmon/zoonavigator
    ports:
      - 9000:9000
  # Meilisearch instances
  # m1:
  #   container_name: m1
  #   hostname: m1
  #   image: getmeili/meilisearch:prototype-zookeeper-ha-0
  #   ports:
  #     - 7700:7700
  #   environment:
  #     - MEILI_ZK_URL=zk1:2181
  #     - MEILI_MASTER_KEY=masterkey
  #   restart: always
  # m2:
  #   container_name: m2
  #   hostname: m2
  #   image: getmeili/meilisearch:prototype-zookeeper-ha-0
  #   ports:
  #     - 7701:7700
  #   environment:
  #     - MEILI_ZK_URL=zk2:2181
  #     - MEILI_MASTER_KEY=masterkey
  #   restart: always


@@ -210,6 +210,7 @@ pub(crate) mod test {
use big_s::S;
use maplit::{btreemap, btreeset};
use meilisearch_types::facet_values_sort::FacetValuesSort;
use meilisearch_types::features::RuntimeTogglableFeatures;
use meilisearch_types::index_uid_pattern::IndexUidPattern;
use meilisearch_types::keys::{Action, Key};
use meilisearch_types::milli;
@@ -418,7 +419,10 @@ pub(crate) mod test {
}
keys.flush().unwrap();
// ========== TODO: create features here
// ========== experimental features
let features = create_test_features();
dump.create_experimental_features(features).unwrap();
// create the dump
let mut file = tempfile::tempfile().unwrap();
@@ -428,6 +432,10 @@ pub(crate) mod test {
file
}
fn create_test_features() -> RuntimeTogglableFeatures {
RuntimeTogglableFeatures { vector_store: true, ..Default::default() }
}
#[test]
fn test_creating_and_read_dump() {
let mut file = create_test_dump();
@@ -472,5 +480,9 @@ pub(crate) mod test {
for (key, expected) in dump.keys().unwrap().zip(create_test_api_keys()) {
assert_eq!(key.unwrap(), expected);
}
// ==== checking the features
let expected = create_test_features();
assert_eq!(dump.features().unwrap().unwrap(), expected);
}
}


@@ -195,8 +195,53 @@ pub(crate) mod test {
use meili_snap::insta;
use super::*;
use crate::reader::v6::RuntimeTogglableFeatures;
// TODO: add `features` to tests
#[test]
fn import_dump_v6_experimental() {
let dump = File::open("tests/assets/v6-with-experimental.dump").unwrap();
let mut dump = DumpReader::open(dump).unwrap();
// top level infos
insta::assert_display_snapshot!(dump.date().unwrap(), @"2023-07-06 7:10:27.21958 +00:00:00");
insta::assert_debug_snapshot!(dump.instance_uid().unwrap(), @"None");
// tasks
let tasks = dump.tasks().unwrap().collect::<Result<Vec<_>>>().unwrap();
let (tasks, update_files): (Vec<_>, Vec<_>) = tasks.into_iter().unzip();
meili_snap::snapshot_hash!(meili_snap::json_string!(tasks), @"d45cd8571703e58ae53c7bd7ce3f5c22");
assert_eq!(update_files.len(), 2);
assert!(update_files[0].is_none()); // the dump creation
assert!(update_files[1].is_none()); // the processed document addition
// keys
let keys = dump.keys().unwrap().collect::<Result<Vec<_>>>().unwrap();
meili_snap::snapshot_hash!(meili_snap::json_string!(keys), @"13c2da155e9729c2344688cab29af71d");
// indexes
let mut indexes = dump.indexes().unwrap().collect::<Result<Vec<_>>>().unwrap();
// the indexes are not ordered in any way by default
indexes.sort_by_key(|index| index.metadata().uid.to_string());
let mut test = indexes.pop().unwrap();
assert!(indexes.is_empty());
insta::assert_json_snapshot!(test.metadata(), @r###"
{
"uid": "test",
"primaryKey": "id",
"createdAt": "2023-07-06T07:07:41.364694Z",
"updatedAt": "2023-07-06T07:07:41.396114Z"
}
"###);
assert_eq!(test.documents().unwrap().count(), 1);
assert_eq!(
dump.features().unwrap().unwrap(),
RuntimeTogglableFeatures { vector_store: true, ..Default::default() }
);
}
#[test]
fn import_dump_v5() {
@@ -274,6 +319,8 @@ pub(crate) mod test {
let documents = spells.documents().unwrap().collect::<Result<Vec<_>>>().unwrap();
assert_eq!(documents.len(), 10);
meili_snap::snapshot_hash!(format!("{:#?}", documents), @"235016433dd04262c7f2da01d1e808ce");
assert_eq!(dump.features().unwrap(), None);
}
#[test]


@@ -292,6 +292,7 @@ pub(crate) mod test {
│ ├---- update_files/
│ │ └---- 1.jsonl
│ └---- queue.jsonl
├---- experimental-features.json
├---- instance_uid.uuid
├---- keys.jsonl
└---- metadata.json

Binary file not shown.


@@ -22,20 +22,6 @@ pub enum Error {
pub type Result<T> = std::result::Result<T, Error>;
impl Deref for File {
type Target = NamedTempFile;
fn deref(&self) -> &Self::Target {
&self.file
}
}
impl DerefMut for File {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.file
}
}
#[derive(Clone, Debug)]
pub struct FileStore {
path: PathBuf,
@@ -146,6 +132,20 @@ impl File {
}
}
impl Deref for File {
type Target = NamedTempFile;
fn deref(&self) -> &Self::Target {
&self.file
}
}
impl DerefMut for File {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.file
}
}
#[cfg(test)]
mod test {
use std::io::Write;


@@ -472,6 +472,77 @@ pub fn parse_filter(input: Span) -> IResult<FilterCondition> {
terminated(|input| parse_expression(input, 0), eof)(input)
}
impl<'a> std::fmt::Display for FilterCondition<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FilterCondition::Not(filter) => {
write!(f, "NOT ({filter})")
}
FilterCondition::Condition { fid, op } => {
write!(f, "{fid} {op}")
}
FilterCondition::In { fid, els } => {
write!(f, "{fid} IN[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::Or(els) => {
write!(f, "OR[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::And(els) => {
write!(f, "AND[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::GeoLowerThan { point, radius } => {
write!(f, "_geoRadius({}, {}, {})", point[0], point[1], radius)
}
FilterCondition::GeoBoundingBox {
top_right_point: top_left_point,
bottom_left_point: bottom_right_point,
} => {
write!(
f,
"_geoBoundingBox([{}, {}], [{}, {}])",
top_left_point[0],
top_left_point[1],
bottom_right_point[0],
bottom_right_point[1]
)
}
}
}
}
impl<'a> std::fmt::Display for Condition<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Condition::GreaterThan(token) => write!(f, "> {token}"),
Condition::GreaterThanOrEqual(token) => write!(f, ">= {token}"),
Condition::Equal(token) => write!(f, "= {token}"),
Condition::NotEqual(token) => write!(f, "!= {token}"),
Condition::Null => write!(f, "IS NULL"),
Condition::Empty => write!(f, "IS EMPTY"),
Condition::Exists => write!(f, "EXISTS"),
Condition::LowerThan(token) => write!(f, "< {token}"),
Condition::LowerThanOrEqual(token) => write!(f, "<= {token}"),
Condition::Between { from, to } => write!(f, "{from} TO {to}"),
}
}
}
impl<'a> std::fmt::Display for Token<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{{{}}}", self.value())
}
}
#[cfg(test)]
pub mod tests {
use super::*;
@@ -852,74 +923,3 @@ pub mod tests {
assert_eq!(token.value(), s);
}
}
impl<'a> std::fmt::Display for FilterCondition<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
FilterCondition::Not(filter) => {
write!(f, "NOT ({filter})")
}
FilterCondition::Condition { fid, op } => {
write!(f, "{fid} {op}")
}
FilterCondition::In { fid, els } => {
write!(f, "{fid} IN[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::Or(els) => {
write!(f, "OR[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::And(els) => {
write!(f, "AND[")?;
for el in els {
write!(f, "{el}, ")?;
}
write!(f, "]")
}
FilterCondition::GeoLowerThan { point, radius } => {
write!(f, "_geoRadius({}, {}, {})", point[0], point[1], radius)
}
FilterCondition::GeoBoundingBox {
top_right_point: top_left_point,
bottom_left_point: bottom_right_point,
} => {
write!(
f,
"_geoBoundingBox([{}, {}], [{}, {}])",
top_left_point[0],
top_left_point[1],
bottom_right_point[0],
bottom_right_point[1]
)
}
}
}
}
impl<'a> std::fmt::Display for Condition<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Condition::GreaterThan(token) => write!(f, "> {token}"),
Condition::GreaterThanOrEqual(token) => write!(f, ">= {token}"),
Condition::Equal(token) => write!(f, "= {token}"),
Condition::NotEqual(token) => write!(f, "!= {token}"),
Condition::Null => write!(f, "IS NULL"),
Condition::Empty => write!(f, "IS EMPTY"),
Condition::Exists => write!(f, "EXISTS"),
Condition::LowerThan(token) => write!(f, "< {token}"),
Condition::LowerThanOrEqual(token) => write!(f, "<= {token}"),
Condition::Between { from, to } => write!(f, "{from} TO {to}"),
}
}
}
impl<'a> std::fmt::Display for Token<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{{{}}}", self.value())
}
}


@@ -16,7 +16,7 @@ license.workspace = true
serde_json = "1.0"
[dev-dependencies]
criterion = { version = "0.4.0", features = ["html_reports"] }
criterion = { version = "0.5.1", features = ["html_reports"] }
[[bench]]
name = "benchmarks"

ha_test/run.sh (new file)

@@ -0,0 +1,61 @@
#!/bin/bash

function is_everything_installed {
    everything_ok=yes
    if hash zkli 2>/dev/null; then
        echo "✅ zkli installed"
    else
        everything_ok=no
        echo "🥺 zkli is missing, please run \`cargo install zkli\`"
    fi
    if hash s3cmd 2>/dev/null; then
        echo "✅ s3cmd installed"
    else
        everything_ok=no
        echo "🥺 s3cmd is missing, see how to install it here https://s3tools.org/s3cmd"
    fi
    if [ "$everything_ok" = "no" ]; then
        echo "Exiting..."
        exit 1
    fi
}

# param: addr of zookeeper
function connect_to_zookeeper {
    if ! zkli -a "$1" ls > /dev/null; then
        echo "zkli can't connect"
        return 1
    fi
}

# param: addr of the s3 bucket
function connect_to_s3 {
    # S3_SECRET_KEY
    # S3_ACCESS_KEY
    # S3_HOST
    # S3_BUCKET
    if ! s3cmd --host="$S3_HOST" --host-bucket="$S3_BUCKET" --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" ls; then
        echo "s3cmd can't connect"
        return 1
    fi
}

is_everything_installed

ZOOKEEPER_ADDR="localhost:2181"
if ! connect_to_zookeeper $ZOOKEEPER_ADDR; then
    ZOOKEEPER_ADDR="localhost:21811"
    if ! connect_to_zookeeper $ZOOKEEPER_ADDR; then
        echo "Can't connect to zkli"
        exit 1
    fi
fi

connect_to_s3


@@ -22,6 +22,7 @@ log = "0.4.17"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-types = { path = "../meilisearch-types" }
page_size = "0.5.0"
puffin = "0.16.0"
roaring = { version = "0.10.1", features = ["serde"] }
serde = { version = "1.0.160", features = ["derive"] }
serde_json = { version = "1.0.95", features = ["preserve_order"] }
@@ -30,6 +31,10 @@ tempfile = "3.5.0"
thiserror = "1.0.40"
time = { version = "0.3.20", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.3.1", features = ["serde", "v4"] }
tokio = { version = "1.27.0", features = ["full"] }
zookeeper = "0.8.0"
parking_lot = "0.12.1"
strois = "0.0.4"
[dev-dependencies]
big_s = "1.0.2"


@@ -43,7 +43,7 @@ use uuid::Uuid;
use crate::autobatcher::{self, BatchKind};
use crate::utils::{self, swap_index_uid_in_task};
use crate::{Error, IndexScheduler, ProcessingTasks, Result, TaskId};
use crate::{Error, IndexSchedulerInner, ProcessingTasks, Result, TaskId};
/// Represents a combination of tasks that can all be processed at the same time.
///
@@ -198,6 +198,35 @@ impl Batch {
| IndexDocumentDeletionByFilter { index_uid, .. } => Some(index_uid),
}
}
/// Return the content fields uuids associated with this batch.
pub fn content_uuids(&self) -> Vec<Uuid> {
match self {
Batch::TaskCancelation { .. }
| Batch::TaskDeletion(_)
| Batch::Dump(_)
| Batch::IndexCreation { .. }
| Batch::IndexDocumentDeletionByFilter { .. }
| Batch::IndexUpdate { .. }
| Batch::SnapshotCreation(_)
| Batch::IndexDeletion { .. }
| Batch::IndexSwap { .. } => vec![],
Batch::IndexOperation { op, .. } => match op {
IndexOperation::DocumentOperation { operations, .. } => operations
.iter()
.flat_map(|op| match op {
DocumentOperation::Add(uuid) => Some(*uuid),
DocumentOperation::Delete(_) => None,
})
.collect(),
IndexOperation::DocumentDeletion { .. }
| IndexOperation::Settings { .. }
| IndexOperation::DocumentClear { .. }
| IndexOperation::SettingsAndDocumentOperation { .. }
| IndexOperation::DocumentClearAndSetting { .. } => vec![],
},
}
}
}
impl IndexOperation {
@@ -213,7 +242,7 @@ impl IndexOperation {
}
}
impl IndexScheduler {
impl IndexSchedulerInner {
/// Convert an [`BatchKind`](crate::autobatcher::BatchKind) into a [`Batch`].
///
/// ## Arguments
@@ -471,6 +500,8 @@ impl IndexScheduler {
#[cfg(test)]
self.maybe_fail(crate::tests::FailureLocation::InsideCreateBatch)?;
puffin::profile_function!();
let enqueued = &self.get_status(rtxn, Status::Enqueued)?;
let to_cancel = self.get_kind(rtxn, Kind::TaskCancelation)? & enqueued;
@@ -478,8 +509,7 @@ impl IndexScheduler {
if let Some(task_id) = to_cancel.max() {
// We retrieve the tasks that were processing before this tasks cancelation started.
// We must *not* reset the processing tasks before calling this method.
let ProcessingTasks { started_at, processing } =
&*self.processing_tasks.read().unwrap();
let ProcessingTasks { started_at, processing, .. } = &*self.processing_tasks.read();
return Ok(Some(Batch::TaskCancelation {
task: self.get_task(rtxn, task_id)?.ok_or(Error::CorruptedTaskQueue)?,
previous_started_at: *started_at,
@@ -575,6 +605,9 @@ impl IndexScheduler {
self.maybe_fail(crate::tests::FailureLocation::PanicInsideProcessBatch)?;
self.breakpoint(crate::Breakpoint::InsideProcessBatch);
}
puffin::profile_function!(format!("{:?}", batch));
match batch {
Batch::TaskCancelation { mut task, previous_started_at, previous_processing_tasks } => {
// 1. Retrieve the tasks that matched the query at enqueue-time.
@@ -1111,6 +1144,8 @@ impl IndexScheduler {
index: &'i Index,
operation: IndexOperation,
) -> Result<Vec<Task>> {
puffin::profile_function!();
match operation {
IndexOperation::DocumentClear { mut tasks, .. } => {
let count = milli::update::ClearDocuments::new(index_wtxn, index).execute()?;
@@ -1385,7 +1420,7 @@ impl IndexScheduler {
fn delete_matched_tasks(&self, wtxn: &mut RwTxn, matched_tasks: &RoaringBitmap) -> Result<u64> {
// 1. Remove from this list the tasks that we are not allowed to delete
let enqueued_tasks = self.get_status(wtxn, Status::Enqueued)?;
let processing_tasks = &self.processing_tasks.read().unwrap().processing.clone();
let processing_tasks = &self.processing_tasks.read().processing.clone();
let all_task_ids = self.all_task_ids(wtxn)?;
let mut to_delete_tasks = all_task_ids & matched_tasks;


@@ -295,6 +295,11 @@ impl IndexMap {
"Attempt to finish deletion of an index that was being closed"
);
}
/// Returns the indexes that were opened by the `IndexMap`.
pub fn clear(&mut self) -> Vec<Index> {
self.available.clear().into_iter().map(|(_, (_, index))| index).collect()
}
}
/// Create or open an index in the specified path.
@@ -335,7 +340,8 @@ mod tests {
impl IndexMapper {
fn test() -> (Self, Env, IndexSchedulerHandle) {
let (index_scheduler, handle) = IndexScheduler::test(true, vec![]);
(index_scheduler.index_mapper, index_scheduler.env, handle)
let index_scheduler = index_scheduler.inner();
(index_scheduler.index_mapper.clone(), index_scheduler.env.clone(), handle)
}
}


@@ -61,7 +61,7 @@ pub struct IndexMapper {
pub(crate) index_stats: Database<UuidCodec, SerdeJson<IndexStats>>,
/// Path to the folder where the LMDB environments of each index are.
base_path: PathBuf,
pub(crate) base_path: PathBuf,
/// The map size an index is opened with on the first time.
index_base_map_size: usize,
/// The quantity by which the map size of an index is incremented upon reopening, in bytes.
@@ -135,7 +135,7 @@ impl IndexMapper {
index_growth_amount: usize,
index_count: usize,
enable_mdb_writemap: bool,
indexer_config: IndexerConfig,
indexer_config: Arc<IndexerConfig>,
) -> Result<Self> {
let mut wtxn = env.write_txn()?;
let index_mapping = env.create_database(&mut wtxn, Some(INDEX_MAPPING))?;
@@ -150,7 +150,7 @@ impl IndexMapper {
index_base_map_size,
index_growth_amount,
enable_mdb_writemap,
indexer_config: Arc::new(indexer_config),
indexer_config,
})
}
@@ -428,6 +428,11 @@ impl IndexMapper {
Ok(())
}
/// Returns the indexes that were opened by the `IndexMapper`.
pub fn clear(&mut self) -> Vec<Index> {
self.index_map.write().unwrap().clear()
}
/// The stats of an index.
///
/// If available in the cache, they are directly returned.


@@ -1,5 +1,6 @@
use std::collections::BTreeSet;
use std::fmt::Write;
use std::ops::Deref;
use meilisearch_types::heed::types::{OwnedType, SerdeBincode, SerdeJson, Str};
use meilisearch_types::heed::{Database, RoTxn};
@@ -8,12 +9,13 @@ use meilisearch_types::tasks::{Details, Task};
use roaring::RoaringBitmap;
use crate::index_mapper::IndexMapper;
use crate::{IndexScheduler, Kind, Status, BEI128};
use crate::{IndexScheduler, IndexSchedulerInner, Kind, Status, BEI128};
pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
scheduler.assert_internally_consistent();
let IndexScheduler {
let inner = scheduler.inner();
let IndexSchedulerInner {
autobatching_enabled,
must_stop_processing: _,
processing_tasks,
@@ -38,13 +40,15 @@ pub fn snapshot_index_scheduler(scheduler: &IndexScheduler) -> String {
test_breakpoint_sdr: _,
planned_failures: _,
run_loop_iteration: _,
} = scheduler;
zookeeper: _,
options: _,
} = inner.deref();
let rtxn = env.read_txn().unwrap();
let mut snap = String::new();
let processing_tasks = processing_tasks.read().unwrap().processing.clone();
let processing_tasks = processing_tasks.read().processing.clone();
snap.push_str(&format!("### Autobatching Enabled = {autobatching_enabled}\n"));
snap.push_str("### Processing Tasks:\n");
snap.push_str(&snapshot_bitmap(&processing_tasks));

File diff suppressed because it is too large.


@@ -1,5 +1,6 @@
//! Thread-safe `Vec`-backend LRU cache using [`std::sync::atomic::AtomicU64`] for synchronization.
use std::mem;
use std::sync::atomic::{AtomicU64, Ordering};
/// Thread-safe `Vec`-backend LRU cache
@@ -190,6 +191,11 @@ where
}
None
}
/// Returns the generation associated to the key and values of the `LruMap`.
pub fn clear(&mut self) -> Vec<(AtomicU64, (K, V))> {
mem::take(&mut self.0.data)
}
}
/// The result of an insertion in a LRU map.


@@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { primary_key: Some("id") }, kind: IndexCreation { index_uid: "index_a", primary_key: Some("id") }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"indexCreation" [0,]
----------------------------------------------------------------------
### Index Tasks:
index_a [0,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@@ -10,9 +10,9 @@ use meilisearch_types::tasks::{Details, IndexSwap, Kind, KindWithContent, Status
use roaring::{MultiOps, RoaringBitmap};
use time::OffsetDateTime;
use crate::{Error, IndexScheduler, Result, Task, TaskId, BEI128};
use crate::{Error, IndexSchedulerInner, Result, Task, TaskId, BEI128};
impl IndexScheduler {
impl IndexSchedulerInner {
pub(crate) fn all_task_ids(&self, rtxn: &RoTxn) -> Result<RoaringBitmap> {
enum_iterator::all().map(|s| self.get_status(rtxn, s)).union()
}
@@ -331,11 +331,12 @@ pub fn clamp_to_page_size(size: usize) -> usize {
}
#[cfg(test)]
impl IndexScheduler {
impl crate::IndexScheduler {
/// Asserts that the index scheduler's content is internally consistent.
pub fn assert_internally_consistent(&self) {
let rtxn = self.env.read_txn().unwrap();
for task in self.all_tasks.iter(&rtxn).unwrap() {
let this = self.inner();
let rtxn = this.env.read_txn().unwrap();
for task in this.all_tasks.iter(&rtxn).unwrap() {
let (task_id, task) = task.unwrap();
let task_id = task_id.get();
@@ -354,21 +355,21 @@ impl IndexScheduler {
} = task;
assert_eq!(uid, task.uid);
if let Some(task_index_uid) = &task_index_uid {
assert!(self
assert!(this
.index_tasks
.get(&rtxn, task_index_uid.as_str())
.unwrap()
.unwrap()
.contains(task.uid));
}
let db_enqueued_at = self
let db_enqueued_at = this
.enqueued_at
.get(&rtxn, &BEI128::new(enqueued_at.unix_timestamp_nanos()))
.unwrap()
.unwrap();
assert!(db_enqueued_at.contains(task_id));
if let Some(started_at) = started_at {
let db_started_at = self
let db_started_at = this
.started_at
.get(&rtxn, &BEI128::new(started_at.unix_timestamp_nanos()))
.unwrap()
@@ -376,7 +377,7 @@ impl IndexScheduler {
assert!(db_started_at.contains(task_id));
}
if let Some(finished_at) = finished_at {
let db_finished_at = self
let db_finished_at = this
.finished_at
.get(&rtxn, &BEI128::new(finished_at.unix_timestamp_nanos()))
.unwrap()
@@ -384,9 +385,9 @@ impl IndexScheduler {
assert!(db_finished_at.contains(task_id));
}
if let Some(canceled_by) = canceled_by {
let db_canceled_tasks = self.get_status(&rtxn, Status::Canceled).unwrap();
let db_canceled_tasks = this.get_status(&rtxn, Status::Canceled).unwrap();
assert!(db_canceled_tasks.contains(uid));
let db_canceling_task = self.get_task(&rtxn, canceled_by).unwrap().unwrap();
let db_canceling_task = this.get_task(&rtxn, canceled_by).unwrap().unwrap();
assert_eq!(db_canceling_task.status, Status::Succeeded);
match db_canceling_task.kind {
KindWithContent::TaskCancelation { query: _, tasks } => {
@@ -427,7 +428,7 @@ impl IndexScheduler {
Details::IndexInfo { primary_key: pk1 } => match &kind {
KindWithContent::IndexCreation { index_uid, primary_key: pk2 }
| KindWithContent::IndexUpdate { index_uid, primary_key: pk2 } => {
self.index_tasks
this.index_tasks
.get(&rtxn, index_uid.as_str())
.unwrap()
.unwrap()
@@ -535,23 +536,23 @@ impl IndexScheduler {
}
}
assert!(self.get_status(&rtxn, status).unwrap().contains(uid));
assert!(self.get_kind(&rtxn, kind.as_kind()).unwrap().contains(uid));
assert!(this.get_status(&rtxn, status).unwrap().contains(uid));
assert!(this.get_kind(&rtxn, kind.as_kind()).unwrap().contains(uid));
if let KindWithContent::DocumentAdditionOrUpdate { content_file, .. } = kind {
match status {
Status::Enqueued | Status::Processing => {
assert!(self
assert!(this
.file_store
.all_uuids()
.unwrap()
.any(|uuid| uuid.as_ref().unwrap() == &content_file),
"Could not find uuid `{content_file}` in the file_store. Available uuids are {:?}.",
self.file_store.all_uuids().unwrap().collect::<std::result::Result<Vec<_>, file_store::Error>>().unwrap(),
this.file_store.all_uuids().unwrap().collect::<std::result::Result<Vec<_>, file_store::Error>>().unwrap(),
);
}
Status::Succeeded | Status::Failed | Status::Canceled => {
assert!(self
assert!(this
.file_store
.all_uuids()
.unwrap()
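A note on the `self` → `this` churn above: the scheduler's state moved behind an inner struct, and the test-only assertions now borrow it through an accessor instead of reading fields on `self`. A minimal sketch of that wrapper pattern, with every name except `IndexScheduler`/`IndexSchedulerInner` assumed for illustration:

use std::sync::{Arc, RwLock, RwLockReadGuard};

// Trimmed-down shape: the real inner struct holds the envs, task maps, file store, etc.
pub struct IndexSchedulerInner {}

pub struct IndexScheduler {
    inner: Arc<RwLock<IndexSchedulerInner>>,
}

impl IndexScheduler {
    // Cheap accessor: callers such as `assert_internally_consistent` do
    // `let this = self.inner();` and read fields through the guard.
    pub fn inner(&self) -> RwLockReadGuard<'_, IndexSchedulerInner> {
        self.inner.read().unwrap()
    }
}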

View File

@@ -15,7 +15,7 @@ license.workspace = true
serde_json = "1.0"
[dev-dependencies]
criterion = "0.4.0"
criterion = "0.5.1"
[[bench]]
name = "depth"

View File

@@ -199,6 +199,30 @@ macro_rules! snapshot {
};
}
/// Create a string from the value by serializing it as Json, optionally
/// redacting some parts of it.
///
/// The second argument to the macro can be an object expression for redaction.
/// It's in the form { selector => replacement }. For more information about redactions
/// refer to the redactions feature in the `insta` guide.
#[macro_export]
macro_rules! json_string {
($value:expr, {$($k:expr => $v:expr),*$(,)?}) => {
{
let (_, snap) = meili_snap::insta::_prepare_snapshot_for_redaction!($value, {$($k => $v),*}, Json, File);
snap
}
};
($value:expr) => {{
let value = meili_snap::insta::_macro_support::serialize_value(
&$value,
meili_snap::insta::_macro_support::SerializationFormat::Json,
meili_snap::insta::_macro_support::SnapshotLocation::File
);
value
}};
}
#[cfg(test)]
mod tests {
use crate as meili_snap;
@@ -250,27 +274,3 @@ mod tests {
}
}
}
/// Create a string from the value by serializing it as Json, optionally
/// redacting some parts of it.
///
/// The second argument to the macro can be an object expression for redaction.
/// It's in the form { selector => replacement }. For more information about redactions
/// refer to the redactions feature in the `insta` guide.
#[macro_export]
macro_rules! json_string {
($value:expr, {$($k:expr => $v:expr),*$(,)?}) => {
{
let (_, snap) = meili_snap::insta::_prepare_snapshot_for_redaction!($value, {$($k => $v),*}, Json, File);
snap
}
};
($value:expr) => {{
let value = meili_snap::insta::_macro_support::serialize_value(
&$value,
meili_snap::insta::_macro_support::SerializationFormat::Json,
meili_snap::insta::_macro_support::SnapshotLocation::File
);
value
}};
}
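A hedged usage sketch of the `json_string!` macro defined above, assuming `meili_snap` and `serde_json` are available as dependencies; the `{ selector => replacement }` form follows insta's redaction syntax:

use serde_json::json;

fn main() {
    let value = json!({ "uid": "a1b2c3", "status": "enqueued" });
    // Plain JSON serialization of the value.
    let raw = meili_snap::json_string!(value);
    // Same value with the uid redacted before serialization.
    let redacted = meili_snap::json_string!(value, { ".uid" => "[uuid]" });
    println!("{raw}\n{redacted}");
}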

View File

@@ -14,6 +14,7 @@ license.workspace = true
base64 = "0.21.0"
enum-iterator = "1.4.0"
hmac = "0.12.1"
log = "0.4.19"
maplit = "1.0.2"
meilisearch-types = { path = "../meilisearch-types" }
rand = "0.8.5"
@@ -24,3 +25,4 @@ sha2 = "0.10.6"
thiserror = "1.0.40"
time = { version = "0.3.20", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.3.1", features = ["serde", "v4"] }
zookeeper = "0.8.0"

View File

@@ -19,6 +19,7 @@ internal_error!(
AuthControllerError: meilisearch_types::milli::heed::Error,
std::io::Error,
serde_json::Error,
zookeeper::ZkError,
std::str::Utf8Error
);

View File

@@ -16,22 +16,113 @@ pub use store::open_auth_store_env;
use store::{generate_key_as_hexa, HeedAuthStore};
use time::OffsetDateTime;
use uuid::Uuid;
use zookeeper::{
Acl, AddWatchMode, CreateMode, WatchedEvent, WatchedEventType, ZkError, ZooKeeper,
};
#[derive(Clone)]
pub struct AuthController {
store: Arc<HeedAuthStore>,
master_key: Option<String>,
zookeeper: Option<Arc<ZooKeeper>>,
}
impl AuthController {
pub fn new(db_path: impl AsRef<Path>, master_key: &Option<String>) -> Result<Self> {
pub fn new(
db_path: impl AsRef<Path>,
master_key: &Option<String>,
zookeeper: Option<Arc<ZooKeeper>>,
) -> Result<Self> {
let store = HeedAuthStore::new(db_path)?;
let controller = Self { store: Arc::new(store), master_key: master_key.clone(), zookeeper };
if store.is_empty()? {
generate_default_keys(&store)?;
match controller.zookeeper {
// set up the auth zk environment, the `auth` node
Some(ref zookeeper) => {
// Zookeeper Event listener loop
let controller_clone = controller.clone();
let zkk = zookeeper.clone();
zookeeper.add_watch("/auth", AddWatchMode::PersistentRecursive, move |event| {
let WatchedEvent { event_type, path, keeper_state: _ } = dbg!(event);
match event_type {
WatchedEventType::NodeDeleted => {
// TODO: ugly unwraps
let path = path.unwrap();
let uuid = path.strip_prefix("/auth/").unwrap();
let uuid = Uuid::parse_str(&uuid).unwrap();
log::info!("The key {} has been deleted", uuid);
controller_clone.store.delete_api_key(uuid).unwrap();
}
WatchedEventType::NodeCreated | WatchedEventType::NodeDataChanged => {
let path = path.unwrap();
if path.strip_prefix("/auth/").map_or(false, |s| !s.is_empty()) {
let (key, _stat) = zkk.get_data(&path, false).unwrap();
let key: Key = serde_json::from_slice(&key).unwrap();
log::info!("The key {} has been deleted", key.uid);
controller_clone.store.put_api_key(key).unwrap();
}
}
otherwise => panic!("Got the unexpected `{otherwise:?}` event!"),
}
})?;
// TODO: we should catch the potential unexpected errors here https://docs.rs/zookeeper-client/latest/zookeeper_client/struct.Client.html#method.create
// for the moment we consider that `create` only returns Error::NodeExists.
match zookeeper.create(
"/auth",
vec![],
Acl::open_unsafe().clone(),
CreateMode::Persistent,
) {
// If the store is empty, we must generate and push the default api-keys.
Ok(_) => generate_default_keys(&controller)?,
// If the node exists, we should clear our DB and download all the existing api-keys
Err(ZkError::NodeExists) => {
log::warn!("Auth directory already exists, we need to clear our keys + import the one in zookeeper");
let store = controller.store.clone();
store.delete_all_keys()?;
let children = zookeeper
.get_children("/auth", false)
.expect("Internal, the auth directory was deleted during execution.");
log::info!("Importing {} api-keys", children.len());
for path in children {
log::info!(" Importing {}", path);
match zookeeper.get_data(&format!("/auth/{}", &path), false) {
Ok((key, _stat)) => {
let key = serde_json::from_slice(&key).unwrap();
let store = controller.store.clone();
store.put_api_key(key)?;
}
Err(e) => panic!("{e}"),
}
// else the file was deleted while we were inserting the key. We ignore it.
// TODO: What happens if someone updates the files before we have the time
// to set up the watcher?
}
}
e @ Err(
ZkError::NoNode | ZkError::NoChildrenForEphemerals | ZkError::InvalidACL,
) => unreachable!("{e:?}"),
Err(e) => panic!("{e}"),
}
// TODO: Race condition above:
// What happens if two nodes join at exactly the same moment:
// One will create the dir.
// The second one will delete its DB, load nothing, and install a watcher.
// The first one will push its keys, which should wake up the watcher and update the second one.
// BUT, if the second one deletes its DB and the first one pushes its files before the second one installs the watcher, we're in trouble.
}
None => {
if controller.store.is_empty()? {
generate_default_keys(&controller)?;
}
}
}
Ok(Self { store: Arc::new(store), master_key: master_key.clone() })
Ok(controller)
}
/// Return `Ok(())` if the auth controller is able to access one of its databases.
@@ -53,7 +144,24 @@ impl AuthController {
pub fn create_key(&self, create_key: CreateApiKey) -> Result<Key> {
match self.store.get_api_key(create_key.uid)? {
Some(_) => Err(AuthControllerError::ApiKeyAlreadyExists(create_key.uid.to_string())),
None => self.store.put_api_key(create_key.to_key()),
None => self.put_key(create_key.to_key()),
}
}
pub fn put_key(&self, key: Key) -> Result<Key> {
let store = self.store.clone();
match &self.zookeeper {
Some(zookeeper) => {
zookeeper.create(
&format!("/auth/{}", key.uid),
serde_json::to_vec_pretty(&key)?,
Acl::open_unsafe().clone(),
CreateMode::Persistent,
)?;
Ok(key)
}
None => store.put_api_key(key),
}
}
@@ -68,7 +176,20 @@ impl AuthController {
name => key.name = name.set(),
};
key.updated_at = OffsetDateTime::now_utc();
self.store.put_api_key(key)
let store = self.store.clone();
// TODO: we may commit only after zk persisted the keys
match &self.zookeeper {
Some(zookeeper) => {
zookeeper.set_data(
&format!("/auth/{}", key.uid),
serde_json::to_vec_pretty(&key)?,
None,
)?;
Ok(key)
}
None => store.put_api_key(key),
}
}
pub fn get_key(&self, uid: Uuid) -> Result<Key> {
@@ -110,7 +231,19 @@ impl AuthController {
}
pub fn delete_key(&self, uid: Uuid) -> Result<()> {
if self.store.delete_api_key(uid)? {
let deleted = match &self.zookeeper {
Some(zookeeper) => match zookeeper.delete(&format!("/auth/{}", uid), None) {
Ok(()) => true,
Err(ZkError::NoNode) => false,
Err(e) => return Err(e.into()),
},
None => {
let store = self.store.clone();
store.delete_api_key(uid)?
}
};
if deleted {
Ok(())
} else {
Err(AuthControllerError::ApiKeyNotFound(uid.to_string()))
@@ -159,7 +292,7 @@ impl AuthController {
self.store.delete_all_keys()
}
/// Delete all the keys in the DB.
/// Insert a key in the DB without any check on its validity
pub fn raw_insert_key(&mut self, key: Key) -> Result<()> {
self.store.put_api_key(key)?;
Ok(())
@@ -304,10 +437,9 @@ pub struct IndexSearchRules {
pub filter: Option<serde_json::Value>,
}
fn generate_default_keys(store: &HeedAuthStore) -> Result<()> {
store.put_api_key(Key::default_admin())?;
store.put_api_key(Key::default_search())?;
fn generate_default_keys(controller: &AuthController) -> Result<()> {
controller.put_key(Key::default_admin())?;
controller.put_key(Key::default_search())?;
Ok(())
}
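Taken together, the hunks above make ZooKeeper the source of truth for API keys: every mutation is published to a znode under `/auth`, and the persistent recursive watcher is what applies the change to each node's local heed store. A compressed sketch of that write path, using only the `zookeeper` 0.8 calls already shown (the helper name is an assumption):

use std::sync::Arc;
use zookeeper::{Acl, CreateMode, ZkResult, ZooKeeper};

// Mirrors `put_key` above: publish the serialized key to ZooKeeper and let
// the watcher, not this function, update the local store.
fn publish_key(zk: &Arc<ZooKeeper>, uid: &str, payload: Vec<u8>) -> ZkResult<()> {
    zk.create(
        &format!("/auth/{uid}"),
        payload,
        Acl::open_unsafe().clone(),
        CreateMode::Persistent,
    )?;
    Ok(())
}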

View File

@@ -1,4 +1,3 @@
use std::borrow::Borrow;
use std::fmt::{self, Debug, Display};
use std::fs::File;
use std::io::{self, Seek, Write};
@@ -42,7 +41,7 @@ impl Display for DocumentFormatError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Io(e) => write!(f, "{e}"),
Self::MalformedPayload(me, b) => match me.borrow() {
Self::MalformedPayload(me, b) => match me {
Error::Json(se) => {
let mut message = match se.classify() {
Category::Data => {

View File

@@ -175,6 +175,7 @@ macro_rules! make_error_codes {
// An exhaustive list of all the error codes used by meilisearch.
make_error_codes! {
S3Error , System , INTERNAL_SERVER_ERROR;
ApiKeyAlreadyExists , InvalidRequest , CONFLICT ;
ApiKeyNotFound , InvalidRequest , NOT_FOUND ;
BadParameter , InvalidRequest , BAD_REQUEST;

View File

@@ -1,6 +1,6 @@
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, Clone, Copy, Default)]
#[derive(Serialize, Deserialize, Debug, Clone, Copy, Default, PartialEq, Eq)]
#[serde(rename_all = "camelCase", default)]
pub struct RuntimeTogglableFeatures {
pub score_details: bool,

View File

@@ -19,6 +19,7 @@ actix-http = { version = "3.3.1", default-features = false, features = [
"compress-gzip",
"rustls",
] }
actix-utils = "3.0.1"
actix-web = { version = "4.3.1", default-features = false, features = [
"macros",
"compress-brotli",
@@ -50,13 +51,14 @@ futures-util = "0.3.28"
http = "0.2.9"
index-scheduler = { path = "../index-scheduler" }
indexmap = { version = "1.9.3", features = ["serde-1"] }
is-terminal = "0.4.8"
itertools = "0.10.5"
jsonwebtoken = "8.3.0"
lazy_static = "1.4.0"
log = "0.4.17"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-types = { path = "../meilisearch-types" }
mimalloc = { version = "0.1.36", default-features = false }
mimalloc = { version = "0.1.37", default-features = false }
mime = "0.3.17"
num_cpus = "1.15.0"
obkv = "0.2.0"
@@ -67,6 +69,8 @@ permissive-json-pointer = { path = "../permissive-json-pointer" }
pin-project-lite = "0.2.9"
platform-dirs = "0.3.0"
prometheus = { version = "0.13.3", features = ["process"] }
puffin = "0.16.0"
puffin_http = { version = "0.13.0", optional = true }
rand = "0.8.5"
rayon = "1.7.0"
regex = "1.7.3"
@@ -76,6 +80,7 @@ reqwest = { version = "0.11.16", features = [
], default-features = false }
rustls = "0.20.8"
rustls-pemfile = "1.0.2"
strois = "0.0.4"
segment = { version = "0.2.2", optional = true }
serde = { version = "1.0.160", features = ["derive"] }
serde_json = { version = "1.0.95", features = ["preserve_order"] }
@@ -100,9 +105,8 @@ uuid = { version = "1.3.1", features = ["serde", "v4"] }
walkdir = "2.3.3"
yaup = "0.2.1"
serde_urlencoded = "0.7.1"
actix-utils = "3.0.1"
atty = "0.2.14"
termcolor = "1.2.0"
zookeeper = "0.8.0"
[dev-dependencies]
actix-rt = "2.8.0"
@@ -133,6 +137,7 @@ zip = { version = "0.6.4", optional = true }
[features]
default = ["analytics", "meilisearch-types/all-tokenizations", "mini-dashboard"]
analytics = ["segment"]
profile-with-puffin = ["dep:puffin_http"]
mini-dashboard = [
"actix-web-static-files",
"static-files",
@@ -151,5 +156,5 @@ thai = ["meilisearch-types/thai"]
greek = ["meilisearch-types/greek"]
[package.metadata.mini-dashboard]
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.2.7/build.zip"
sha1 = "28b45bf772c84f9a6e16bc1689b393bfce8da7d6"
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.2.11/build.zip"
sha1 = "83cd44ed1e5f97ecb581dc9f958a63f4ccc982d9"

View File

@@ -312,6 +312,13 @@ impl From<Opt> for Infos {
config_file_path,
#[cfg(all(not(debug_assertions), feature = "analytics"))]
no_analytics: _,
zk_url: _,
s3_url: _,
s3_region: _,
s3_bucket: _,
s3_access_key: _,
s3_secret_key: _,
s3_security_token: _,
} = options;
let schedule_snapshot = match schedule_snapshot {
@@ -574,6 +581,10 @@ pub struct SearchAggregator {
filter_total_number_of_criteria: usize,
used_syntax: HashMap<String, usize>,
// attributes_to_search_on
// incremented every time a search is done using attributes_to_search_on
attributes_to_search_on_total_number_of_uses: usize,
// q
// The maximum number of terms in a q request
max_terms_number: usize,
@@ -647,6 +658,11 @@ impl SearchAggregator {
ret.filter_sum_of_criteria_terms = RE.split(&stringified_filters).count();
}
// attributes_to_search_on
if query.attributes_to_search_on.is_some() {
ret.attributes_to_search_on_total_number_of_uses = 1;
}
if let Some(ref q) = query.q {
ret.max_terms_number = q.split_whitespace().count();
}
@@ -720,9 +736,18 @@ impl SearchAggregator {
let used_syntax = self.used_syntax.entry(key).or_insert(0);
*used_syntax = used_syntax.saturating_add(value);
}
// attributes_to_search_on
self.attributes_to_search_on_total_number_of_uses = self
.attributes_to_search_on_total_number_of_uses
.saturating_add(other.attributes_to_search_on_total_number_of_uses);
// q
self.max_terms_number = self.max_terms_number.max(other.max_terms_number);
// vector
self.max_vector_size = self.max_vector_size.max(other.max_vector_size);
// pagination
self.max_limit = self.max_limit.max(other.max_limit);
self.max_offset = self.max_offset.max(other.max_offset);
@@ -761,17 +786,17 @@ impl SearchAggregator {
if self.total_received == 0 {
None
} else {
// the index of the 99th percentage of value
let percentile_99th = 0.99 * (self.total_succeeded as f64 - 1.) + 1.;
// we get all the values in a sorted manner
let time_spent = self.time_spent.into_sorted_vec();
// the index of the 99th percentile value
let percentile_99th = time_spent.len() * 99 / 100;
// We are only interested in the slowest value among the fastest 99% of results
let time_spent = time_spent.get(percentile_99th as usize);
let time_spent = time_spent.get(percentile_99th);
let properties = json!({
"user-agent": self.user_agents,
"requests": {
"99th_response_time": time_spent.map(|t| format!("{:.2}", t)),
"99th_response_time": time_spent.map(|t| format!("{:.2}", t)),
"total_succeeded": self.total_succeeded,
"total_failed": self.total_received.saturating_sub(self.total_succeeded), // just to be sure we never panics
"total_received": self.total_received,
@@ -786,9 +811,15 @@ impl SearchAggregator {
"avg_criteria_number": format!("{:.2}", self.filter_sum_of_criteria_terms as f64 / self.filter_total_number_of_criteria as f64),
"most_used_syntax": self.used_syntax.iter().max_by_key(|(_, v)| *v).map(|(k, _)| json!(k)).unwrap_or_else(|| json!(null)),
},
"attributes_to_search_on": {
"total_number_of_uses": self.attributes_to_search_on_total_number_of_uses,
},
"q": {
"max_terms_number": self.max_terms_number,
},
"vector": {
"max_vector_size": self.max_vector_size,
},
"pagination": {
"max_limit": self.max_limit,
"max_offset": self.max_offset,
@@ -843,6 +874,10 @@ pub struct MultiSearchAggregator {
// sum of the number of search queries in the requests, use with total_received to compute an average
total_search_count: usize,
// scoring
show_ranking_score: bool,
show_ranking_score_details: bool,
// context
user_agents: HashSet<String>,
}
@@ -856,6 +891,9 @@ impl MultiSearchAggregator {
let distinct_indexes: HashSet<_> =
query.iter().map(|query| query.index_uid.as_str()).collect();
let show_ranking_score = query.iter().any(|query| query.show_ranking_score);
let show_ranking_score_details = query.iter().any(|query| query.show_ranking_score_details);
Self {
timestamp,
total_received: 1,
@@ -863,6 +901,8 @@ impl MultiSearchAggregator {
total_distinct_index_count: distinct_indexes.len(),
total_single_index: if distinct_indexes.len() == 1 { 1 } else { 0 },
total_search_count: query.len(),
show_ranking_score,
show_ranking_score_details,
user_agents,
}
}
@@ -884,6 +924,9 @@ impl MultiSearchAggregator {
this.total_distinct_index_count.saturating_add(other.total_distinct_index_count);
let total_single_index = this.total_single_index.saturating_add(other.total_single_index);
let total_search_count = this.total_search_count.saturating_add(other.total_search_count);
let show_ranking_score = this.show_ranking_score || other.show_ranking_score;
let show_ranking_score_details =
this.show_ranking_score_details || other.show_ranking_score_details;
let mut user_agents = this.user_agents;
for user_agent in other.user_agents.into_iter() {
@@ -899,6 +942,8 @@ impl MultiSearchAggregator {
total_single_index,
total_search_count,
user_agents,
show_ranking_score,
show_ranking_score_details,
// do not add _ or ..Default::default() here
};
@@ -925,6 +970,10 @@ impl MultiSearchAggregator {
"searches": {
"total_search_count": self.total_search_count,
"avg_search_count": (self.total_search_count as f64) / (self.total_received as f64),
},
"scoring": {
"show_ranking_score": self.show_ranking_score,
"show_ranking_score_details": self.show_ranking_score_details,
}
});
@@ -1145,6 +1194,7 @@ pub struct DocumentsDeletionAggregator {
#[serde(rename = "user-agent")]
user_agents: HashSet<String>,
#[serde(rename = "requests.total_received")]
total_received: usize,
per_document_id: bool,
clear_all: bool,
@@ -1295,6 +1345,7 @@ pub struct HealthAggregator {
#[serde(rename = "user-agent")]
user_agents: HashSet<String>,
#[serde(rename = "requests.total_received")]
total_received: usize,
}
@@ -1345,7 +1396,7 @@ pub struct DocumentsFetchAggregator {
#[serde(rename = "user-agent")]
user_agents: HashSet<String>,
#[serde(rename = "requests.max_limit")]
#[serde(rename = "requests.total_received")]
total_received: usize,
// a call on ../documents/:doc_id

View File

@@ -33,6 +33,8 @@ pub enum MeilisearchHttpError {
.0.iter().map(|uid| format!("\"{uid}\"")).collect::<Vec<_>>().join(", "), .0.len()
)]
SwapIndexPayloadWrongLength(Vec<IndexUid>),
#[error("S3 Error: {0}")]
S3Error(#[from] strois::Error),
#[error(transparent)]
IndexUid(#[from] IndexUidFormatError),
#[error(transparent)]
@@ -65,6 +67,7 @@ impl ErrorCode for MeilisearchHttpError {
MeilisearchHttpError::InvalidExpression(_, _) => Code::InvalidSearchFilter,
MeilisearchHttpError::PayloadTooLarge(_) => Code::PayloadTooLarge,
MeilisearchHttpError::SwapIndexPayloadWrongLength(_) => Code::InvalidSwapIndexes,
MeilisearchHttpError::S3Error(_) => Code::S3Error,
MeilisearchHttpError::IndexUid(e) => e.error_code(),
MeilisearchHttpError::SerdeJson(_) => Code::Internal,
MeilisearchHttpError::HeedError(_) => Code::Internal,

View File

@@ -39,6 +39,8 @@ use meilisearch_types::versioning::{check_version_file, create_version_file};
use meilisearch_types::{compression, milli, VERSION_FILE_NAME};
pub use option::Opt;
use option::ScheduleSnapshot;
use strois::Bucket;
use zookeeper::ZooKeeper;
use crate::error::MeilisearchHttpError;
@@ -136,14 +138,17 @@ enum OnFailure {
KeepDb,
}
pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<(Arc<IndexScheduler>, Arc<AuthController>)> {
pub async fn setup_meilisearch(
opt: &Opt,
zookeeper: Option<Arc<ZooKeeper>>,
) -> anyhow::Result<(Arc<IndexScheduler>, Arc<AuthController>)> {
let empty_db = is_empty_db(&opt.db_path);
let (index_scheduler, auth_controller) = if let Some(ref snapshot_path) = opt.import_snapshot {
let snapshot_path_exists = snapshot_path.exists();
// the db is empty and the snapshot exists, import it
if empty_db && snapshot_path_exists {
match compression::from_tar_gz(snapshot_path, &opt.db_path) {
Ok(()) => open_or_create_database_unchecked(opt, OnFailure::RemoveDb)?,
Ok(()) => open_or_create_database_unchecked(opt, OnFailure::RemoveDb, zookeeper)?,
Err(e) => {
std::fs::remove_dir_all(&opt.db_path)?;
return Err(e);
@@ -160,14 +165,14 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<(Arc<IndexScheduler>, Arc<
bail!("snapshot doesn't exist at {}", snapshot_path.display())
// the snapshot and the db exist, and we can ignore the snapshot because of the ignore_snapshot_if_db_exists flag
} else {
open_or_create_database(opt, empty_db)?
open_or_create_database(opt, empty_db, zookeeper)?
}
} else if let Some(ref path) = opt.import_dump {
let src_path_exists = path.exists();
// the db is empty and the dump exists, import it
if empty_db && src_path_exists {
let (mut index_scheduler, mut auth_controller) =
open_or_create_database_unchecked(opt, OnFailure::RemoveDb)?;
open_or_create_database_unchecked(opt, OnFailure::RemoveDb, zookeeper)?;
match import_dump(&opt.db_path, path, &mut index_scheduler, &mut auth_controller) {
Ok(()) => (index_scheduler, auth_controller),
Err(e) => {
@@ -187,10 +192,10 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<(Arc<IndexScheduler>, Arc<
// the dump and the db exist and we can ignore the dump because of the ignore_dump_if_db_exists flag
// or, the dump is missing but we can ignore that because of the ignore_missing_dump flag
} else {
open_or_create_database(opt, empty_db)?
open_or_create_database(opt, empty_db, zookeeper)?
}
} else {
open_or_create_database(opt, empty_db)?
open_or_create_database(opt, empty_db, zookeeper)?
};
// We create a loop in a thread that registers snapshotCreation tasks
@@ -199,15 +204,12 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<(Arc<IndexScheduler>, Arc<
if let ScheduleSnapshot::Enabled(snapshot_delay) = opt.schedule_snapshot {
let snapshot_delay = Duration::from_secs(snapshot_delay);
let index_scheduler = index_scheduler.clone();
thread::Builder::new()
.name(String::from("register-snapshot-tasks"))
.spawn(move || loop {
thread::sleep(snapshot_delay);
if let Err(e) = index_scheduler.register(KindWithContent::SnapshotCreation) {
error!("Error while registering snapshot: {}", e);
}
})
.unwrap();
thread::spawn(move || loop {
thread::sleep(snapshot_delay);
if let Err(e) = index_scheduler.register(KindWithContent::SnapshotCreation) {
error!("Error while registering snapshot: {}", e);
}
});
}
Ok((index_scheduler, auth_controller))
@@ -217,34 +219,48 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<(Arc<IndexScheduler>, Arc<
fn open_or_create_database_unchecked(
opt: &Opt,
on_failure: OnFailure,
zookeeper: Option<Arc<ZooKeeper>>,
) -> anyhow::Result<(IndexScheduler, AuthController)> {
// we don't want to create anything in the data.ms yet, thus we
// wrap our two builders in a closure that'll be executed later.
let auth_controller = AuthController::new(&opt.db_path, &opt.master_key);
let auth_controller = AuthController::new(&opt.db_path, &opt.master_key, zookeeper.clone());
let instance_features = opt.to_instance_features();
let index_scheduler_builder = || -> anyhow::Result<_> {
Ok(IndexScheduler::new(IndexSchedulerOptions {
version_file_path: opt.db_path.join(VERSION_FILE_NAME),
auth_path: opt.db_path.join("auth"),
tasks_path: opt.db_path.join("tasks"),
update_file_path: opt.db_path.join("update_files"),
indexes_path: opt.db_path.join("indexes"),
snapshots_path: opt.snapshot_dir.clone(),
dumps_path: opt.dump_dir.clone(),
task_db_size: opt.max_task_db_size.get_bytes() as usize,
index_base_map_size: opt.max_index_size.get_bytes() as usize,
enable_mdb_writemap: opt.experimental_reduce_indexing_memory_usage,
indexer_config: (&opt.indexer_options).try_into()?,
autobatching_enabled: true,
max_number_of_tasks: 1_000_000,
index_growth_amount: byte_unit::Byte::from_str("10GiB").unwrap().get_bytes() as usize,
index_count: DEFAULT_INDEX_COUNT,
instance_features,
})?)
};
let index_scheduler = IndexScheduler::new(Arc::new(IndexSchedulerOptions {
version_file_path: opt.db_path.join(VERSION_FILE_NAME),
auth_path: opt.db_path.join("auth"),
tasks_path: opt.db_path.join("tasks"),
update_file_path: opt.db_path.join("update_files"),
indexes_path: opt.db_path.join("indexes"),
snapshots_path: opt.snapshot_dir.clone(),
dumps_path: opt.dump_dir.clone(),
task_db_size: opt.max_task_db_size.get_bytes() as usize,
index_base_map_size: opt.max_index_size.get_bytes() as usize,
enable_mdb_writemap: opt.experimental_reduce_indexing_memory_usage,
indexer_config: (&opt.indexer_options).try_into().map(Arc::new)?,
autobatching_enabled: true,
max_number_of_tasks: 1_000_000,
index_growth_amount: byte_unit::Byte::from_str("10GiB").unwrap().get_bytes() as usize,
index_count: DEFAULT_INDEX_COUNT,
instance_features,
zookeeper: zookeeper.clone(),
s3: opt.s3_url.as_ref().map(|url| {
Arc::new(
Bucket::builder(url)
.unwrap()
.key(opt.s3_access_key.as_ref().expect("Need s3 key to work").clone())
.secret(opt.s3_secret_key.as_ref().expect("Need s3 secret to work").clone())
.maybe_token(opt.s3_security_token.clone())
.region(&opt.s3_region)
.with_url_path_style(true)
.bucket(opt.s3_bucket.as_ref().expect("Need an s3 bucket to work"))
.unwrap(),
)
}),
}))
.map_err(anyhow::Error::from);
match (
index_scheduler_builder(),
index_scheduler,
auth_controller.map_err(anyhow::Error::from),
create_version_file(&opt.db_path).map_err(anyhow::Error::from),
) {
@@ -262,12 +278,13 @@ fn open_or_create_database_unchecked(
fn open_or_create_database(
opt: &Opt,
empty_db: bool,
zookeeper: Option<Arc<ZooKeeper>>,
) -> anyhow::Result<(IndexScheduler, AuthController)> {
if !empty_db {
check_version_file(&opt.db_path)?;
}
open_or_create_database_unchecked(opt, OnFailure::KeepDb)
open_or_create_database_unchecked(opt, OnFailure::KeepDb, zookeeper)
}
fn import_dump(
@@ -277,6 +294,7 @@ fn import_dump(
auth: &mut AuthController,
) -> Result<(), anyhow::Error> {
let reader = File::open(dump_path)?;
let index_scheduler = index_scheduler.inner();
let mut dump_reader = dump::DumpReader::open(reader)?;
if let Some(date) = dump_reader.date() {

View File

@@ -1,16 +1,19 @@
use std::env;
use std::io::Write;
use std::io::{stderr, Write};
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use actix_web::http::KeepAlive;
use actix_web::web::Data;
use actix_web::HttpServer;
use index_scheduler::IndexScheduler;
use is_terminal::IsTerminal;
use meilisearch::analytics::Analytics;
use meilisearch::{analytics, create_app, prototype_name, setup_meilisearch, Opt};
use meilisearch_auth::{generate_master_key, AuthController, MASTER_KEY_MIN_SIZE};
use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};
use zookeeper::ZooKeeper;
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;
@@ -29,6 +32,10 @@ fn setup(opt: &Opt) -> anyhow::Result<()> {
async fn main() -> anyhow::Result<()> {
let (opt, config_read_from) = Opt::try_build()?;
#[cfg(feature = "profile-with-puffin")]
let _server = puffin_http::Server::new(&format!("0.0.0.0:{}", puffin_http::DEFAULT_PORT))?;
puffin::set_scopes_on(cfg!(feature = "profile-with-puffin"));
anyhow::ensure!(
!(cfg!(windows) && opt.experimental_reduce_indexing_memory_usage),
"The `experimental-reduce-indexing-memory-usage` flag is not supported on Windows"
@@ -58,7 +65,10 @@ async fn main() -> anyhow::Result<()> {
_ => (),
}
let (index_scheduler, auth_controller) = setup_meilisearch(&opt)?;
let timeout = Duration::from_millis(2500);
let zookeeper =
opt.zk_url.as_ref().map(|url| Arc::new(ZooKeeper::connect(url, timeout, drop).unwrap()));
let (index_scheduler, auth_controller) = setup_meilisearch(&opt, zookeeper).await?;
#[cfg(all(not(debug_assertions), feature = "analytics"))]
let analytics = if !opt.no_analytics {
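A hedged, self-contained sketch of the bootstrap above, assuming the same `zookeeper` 0.8 API; `drop` serves as a no-op default watcher exactly as in the diff:

use std::sync::Arc;
use std::time::Duration;
use zookeeper::ZooKeeper;

fn connect_zookeeper(zk_url: Option<&str>) -> Option<Arc<ZooKeeper>> {
    let timeout = Duration::from_millis(2500);
    // No url configured means the instance runs without a cluster.
    zk_url.map(|url| Arc::new(ZooKeeper::connect(url, timeout, drop).unwrap()))
}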
@@ -186,7 +196,7 @@ Anonymous telemetry:\t\"Enabled\""
}
eprintln!();
eprintln!("Check out Meilisearch Cloud!\thttps://cloud.meilisearch.com/login?utm_campaign=oss&utm_source=engine&utm_medium=cli");
eprintln!("Check out Meilisearch Cloud!\thttps://www.meilisearch.com/cloud?utm_campaign=oss&utm_source=engine&utm_medium=cli");
eprintln!("Documentation:\t\t\thttps://www.meilisearch.com/docs");
eprintln!("Source code:\t\t\thttps://github.com/meilisearch/meilisearch");
eprintln!("Discord:\t\t\thttps://discord.meilisearch.com");
@@ -197,8 +207,7 @@ const WARNING_BG_COLOR: Option<Color> = Some(Color::Ansi256(178));
const WARNING_FG_COLOR: Option<Color> = Some(Color::Ansi256(0));
fn print_master_key_too_short_warning() {
let choice =
if atty::is(atty::Stream::Stderr) { ColorChoice::Auto } else { ColorChoice::Never };
let choice = if stderr().is_terminal() { ColorChoice::Auto } else { ColorChoice::Never };
let mut stderr = StandardStream::stderr(choice);
stderr
.set_color(
@@ -223,8 +232,7 @@ fn print_master_key_too_short_warning() {
}
fn print_missing_master_key_warning() {
let choice =
if atty::is(atty::Stream::Stderr) { ColorChoice::Auto } else { ColorChoice::Never };
let choice = if stderr().is_terminal() { ColorChoice::Auto } else { ColorChoice::Never };
let mut stderr = StandardStream::stderr(choice);
stderr
.set_color(

View File

@@ -50,4 +50,10 @@ lazy_static! {
&["kind", "value"]
)
.expect("Can't create a metric");
pub static ref MEILISEARCH_LAST_UPDATE: IntGauge =
register_int_gauge!(opts!("meilisearch_last_update", "Meilisearch Last Update"))
.expect("Can't create a metric");
pub static ref MEILISEARCH_IS_INDEXING: IntGauge =
register_int_gauge!(opts!("meilisearch_is_indexing", "Meilisearch Is Indexing"))
.expect("Can't create a metric");
}

View File

@@ -28,6 +28,13 @@ const MEILI_DB_PATH: &str = "MEILI_DB_PATH";
const MEILI_HTTP_ADDR: &str = "MEILI_HTTP_ADDR";
const MEILI_MASTER_KEY: &str = "MEILI_MASTER_KEY";
const MEILI_ENV: &str = "MEILI_ENV";
const MEILI_ZK_URL: &str = "MEILI_ZK_URL";
const MEILI_S3_URL: &str = "MEILI_S3_URL";
const MEILI_S3_BUCKET: &str = "MEILI_S3_BUCKET";
const MEILI_S3_ACCESS_KEY: &str = "MEILI_S3_ACCESS_KEY";
const MEILI_S3_SECRET_KEY: &str = "MEILI_S3_SECRET_KEY";
const MEILI_S3_SECURITY_TOKEN: &str = "MEILI_S3_SECURITY_TOKEN";
const MEILI_S3_REGION: &str = "MEILI_S3_REGION";
#[cfg(all(not(debug_assertions), feature = "analytics"))]
const MEILI_NO_ANALYTICS: &str = "MEILI_NO_ANALYTICS";
const MEILI_HTTP_PAYLOAD_SIZE_LIMIT: &str = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT";
@@ -56,6 +63,7 @@ const DEFAULT_CONFIG_FILE_PATH: &str = "./config.toml";
const DEFAULT_DB_PATH: &str = "./data.ms";
const DEFAULT_HTTP_ADDR: &str = "localhost:7700";
const DEFAULT_ENV: &str = "development";
const DEFAULT_S3_REGION: &str = "eu-central-1";
const DEFAULT_HTTP_PAYLOAD_SIZE_LIMIT: &str = "100 MB";
const DEFAULT_SNAPSHOT_DIR: &str = "snapshots/";
const DEFAULT_SNAPSHOT_INTERVAL_SEC: u64 = 86400;
@@ -154,6 +162,36 @@ pub struct Opt {
#[serde(default = "default_env")]
pub env: String,
/// Sets the address and port used to communicate with the zookeeper cluster.
/// If run locally, the default URL is `http://localhost:2181/`.
#[clap(long, env = MEILI_ZK_URL)]
pub zk_url: Option<String>,
/// Sets the address and port used to communicate with the S3 bucket.
#[clap(long, env = MEILI_S3_URL)]
pub s3_url: Option<String>,
/// Sets the region used to communicate with the S3 bucket.
#[clap(long, env = MEILI_S3_REGION, default_value_t = default_s3_region())]
#[serde(default = "default_s3_region")]
pub s3_region: String,
/// Sets the S3 bucket name to use.
#[clap(long, env = MEILI_S3_BUCKET)]
pub s3_bucket: Option<String>,
/// Sets the S3 access key. If used, you must also set the secret key.
#[clap(long, env = MEILI_S3_ACCESS_KEY)]
pub s3_access_key: Option<String>,
/// Sets the S3 secret key. If used, you must also set the access key.
#[clap(long, env = MEILI_S3_SECRET_KEY)]
pub s3_secret_key: Option<String>,
/// Sets the S3 security token; it can't be used together with an access key and secret key.
#[clap(long, env = MEILI_S3_SECURITY_TOKEN)]
pub s3_security_token: Option<String>,
/// Deactivates Meilisearch's built-in telemetry when provided.
///
/// Meilisearch automatically collects data from all instances that do not opt out using this flag.
@@ -368,6 +406,13 @@ impl Opt {
http_addr,
master_key,
env,
zk_url,
s3_url,
s3_region,
s3_bucket,
s3_access_key,
s3_secret_key,
s3_security_token,
max_index_size: _,
max_task_db_size: _,
http_payload_size_limit,
@@ -401,6 +446,25 @@ impl Opt {
export_to_env_if_not_present(MEILI_MASTER_KEY, master_key);
}
export_to_env_if_not_present(MEILI_ENV, env);
if let Some(zk_url) = zk_url {
export_to_env_if_not_present(MEILI_ZK_URL, zk_url);
}
if let Some(s3_url) = s3_url {
export_to_env_if_not_present(MEILI_S3_URL, s3_url);
}
export_to_env_if_not_present(MEILI_S3_REGION, s3_region);
if let Some(s3_bucket) = s3_bucket {
export_to_env_if_not_present(MEILI_S3_BUCKET, s3_bucket);
}
if let Some(s3_access_key) = s3_access_key {
export_to_env_if_not_present(MEILI_S3_ACCESS_KEY, s3_access_key);
}
if let Some(s3_secret_key) = s3_secret_key {
export_to_env_if_not_present(MEILI_S3_SECRET_KEY, s3_secret_key);
}
if let Some(s3_security_token) = s3_security_token {
export_to_env_if_not_present(MEILI_S3_SECURITY_TOKEN, s3_security_token);
}
#[cfg(all(not(debug_assertions), feature = "analytics"))]
{
export_to_env_if_not_present(MEILI_NO_ANALYTICS, no_analytics.to_string());
@@ -547,7 +611,7 @@ impl TryFrom<&IndexerOpts> for IndexerConfig {
Ok(Self {
log_every_n: Some(DEFAULT_LOG_EVERY_N),
max_memory: other.max_indexing_memory.map(|b| b.get_bytes() as usize),
thread_pool: Some(thread_pool),
thread_pool: Some(Arc::new(thread_pool)),
max_positions_per_attributes: None,
skip_index_budget: other.skip_index_budget,
..Default::default()
@@ -715,6 +779,10 @@ fn default_env() -> String {
DEFAULT_ENV.to_string()
}
fn default_s3_region() -> String {
DEFAULT_S3_REGION.to_string()
}
fn default_max_index_size() -> Byte {
Byte::from_bytes(INDEX_SIZE)
}
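The `export_to_env_if_not_present` calls above follow a precedence rule: a value coming from the CLI or the config file is exported only when the variable is absent, so anything already set in the environment wins. A sketch of the assumed helper semantics (not the actual implementation):

use std::env;
use std::ffi::OsStr;

fn export_to_env_if_not_present(key: &str, value: impl AsRef<OsStr>) {
    // Only export when the variable is not already set by the user.
    if env::var_os(key).is_none() {
        env::set_var(key, value);
    }
}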

View File

@@ -41,14 +41,10 @@ pub async fn create_api_key(
_req: HttpRequest,
) -> Result<HttpResponse, ResponseError> {
let v = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let key = auth_controller.create_key(v)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
let key = auth_controller.create_key(v)?;
let key = KeyView::from_key(key, &auth_controller);
Ok(HttpResponse::Created().json(res))
Ok(HttpResponse::Created().json(key))
}
#[derive(Deserr, Debug, Clone, Copy)]
@@ -110,17 +106,11 @@ pub async fn patch_api_key(
) -> Result<HttpResponse, ResponseError> {
let key = path.into_inner().key;
let patch_api_key = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.update_key(uid, patch_api_key)?;
let uid = Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.update_key(uid, patch_api_key)?;
let key = KeyView::from_key(key, &auth_controller);
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Ok().json(res))
Ok(HttpResponse::Ok().json(key))
}
pub async fn delete_api_key(
@@ -128,13 +118,8 @@ pub async fn delete_api_key(
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
let key = path.into_inner().key;
tokio::task::spawn_blocking(move || {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
auth_controller.delete_key(uid)
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
let uid = Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
auth_controller.delete_key(uid)?;
Ok(HttpResponse::NoContent().finish())
}

View File

@@ -29,8 +29,7 @@ pub async fn create_dump(
keys: auth_controller.list_keys()?,
instance_uid: analytics.instance_uid().cloned(),
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))

View File

@@ -64,7 +64,20 @@ async fn patch_features(
vector_store: new_features.0.vector_store.unwrap_or(old_features.vector_store),
};
analytics.publish("Experimental features Updated".to_string(), json!(new_features), Some(&req));
index_scheduler.put_runtime_features(new_features)?;
// explicitly destructure for analytics rather than using the `Serialize` implementation, because
// it renames fields to camelCase, which we don't want for analytics.
// **Do not** ignore fields with `..` or `_` here, because we want to add them in the future.
let meilisearch_types::features::RuntimeTogglableFeatures { score_details, vector_store } =
new_features;
analytics.publish(
"Experimental features Updated".to_string(),
json!({
"score_details": score_details,
"vector_store": vector_store,
}),
Some(&req),
);
index_scheduler.inner().put_runtime_features(new_features)?;
Ok(HttpResponse::Ok().json(new_features))
}

View File

@@ -1,4 +1,4 @@
use std::io::ErrorKind;
use std::io::{BufReader, ErrorKind, Seek, SeekFrom};
use actix_web::http::header::CONTENT_TYPE;
use actix_web::web::Data;
@@ -129,8 +129,7 @@ pub async fn delete_document(
index_uid: index_uid.to_string(),
documents_ids: vec![document_id],
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
@@ -396,11 +395,12 @@ async fn document_addition(
return Err(MeilisearchHttpError::MissingPayload(format));
}
if let Err(e) = buffer.seek(std::io::SeekFrom::Start(0)).await {
if let Err(e) = buffer.seek(SeekFrom::Start(0)).await {
return Err(MeilisearchHttpError::Payload(ReceivePayload(Box::new(e))));
}
let read_file = buffer.into_inner().into_std().await;
let s3 = index_scheduler.s3.clone();
let documents_count = tokio::task::spawn_blocking(move || {
let documents_count = match format {
PayloadType::Json => read_json(&read_file, update_file.as_file_mut())?,
@@ -409,8 +409,16 @@ async fn document_addition(
}
PayloadType::Ndjson => read_ndjson(&read_file, update_file.as_file_mut())?,
};
if let Some(s3) = s3 {
update_file.seek(SeekFrom::Start(0)).unwrap();
let mut reader = BufReader::new(&*update_file);
s3.put_object_multipart(format!("update-files/{}", uuid), &mut reader)?;
}
// we NEED to persist the file here because we moved the `udpate_file` in another task.
update_file.persist()?;
Ok(documents_count)
})
.await;
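One detail worth noting in the hunk above: the update file was just written, so its cursor sits at the end and must be rewound before mirroring the payload to S3. A minimal sketch of that rewind-then-read pattern (std only; the multipart call itself is the one from the diff):

use std::fs::File;
use std::io::{BufReader, Seek, SeekFrom};

fn rewind_for_upload(file: &mut File) -> std::io::Result<BufReader<&File>> {
    // Reset the cursor so the reader sees the whole file from the start.
    file.seek(SeekFrom::Start(0))?;
    Ok(BufReader::new(&*file))
}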
@@ -445,7 +453,7 @@ async fn document_addition(
};
let scheduler = index_scheduler.clone();
let task = match tokio::task::spawn_blocking(move || scheduler.register(task)).await? {
let task = match scheduler.register(task) {
Ok(task) => task,
Err(e) => {
index_scheduler.delete_update_file(uuid)?;
@@ -476,8 +484,7 @@ pub async fn delete_documents_batch(
let task =
KindWithContent::DocumentDeletion { index_uid: index_uid.to_string(), documents_ids: ids };
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -512,8 +519,7 @@ pub async fn delete_documents_by_filter(
.map_err(|err| ResponseError::from_msg(err.message, Code::InvalidDocumentFilter))?;
let task = KindWithContent::DocumentDeletionByFilter { index_uid, filter_expr: filter };
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -529,8 +535,7 @@ pub async fn clear_all_documents(
analytics.delete_documents(DocumentDeletionKind::ClearAll, &req);
let task = KindWithContent::DocumentClear { index_uid: index_uid.to_string() };
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))

View File

@@ -135,8 +135,7 @@ pub async fn create_index(
);
let task = KindWithContent::IndexCreation { index_uid: uid.to_string(), primary_key };
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
Ok(HttpResponse::Accepted().json(task))
} else {
@@ -203,8 +202,7 @@ pub async fn update_index(
primary_key: body.primary_key,
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -216,8 +214,7 @@ pub async fn delete_index(
) -> Result<HttpResponse, ResponseError> {
let index_uid = IndexUid::try_from(index_uid.into_inner())?;
let task = KindWithContent::IndexDeletion { index_uid: index_uid.into_inner() };
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
Ok(HttpResponse::Accepted().json(task))
}

View File

@@ -35,7 +35,7 @@ pub struct SearchQueryGet {
#[deserr(default, error = DeserrQueryParamError<InvalidSearchQ>)]
q: Option<String>,
#[deserr(default, error = DeserrQueryParamError<InvalidSearchVector>)]
vector: Option<Vec<f32>>,
vector: Option<CS<f32>>,
#[deserr(default = Param(DEFAULT_SEARCH_OFFSET()), error = DeserrQueryParamError<InvalidSearchOffset>)]
offset: Param<usize>,
#[deserr(default = Param(DEFAULT_SEARCH_LIMIT()), error = DeserrQueryParamError<InvalidSearchLimit>)]
@@ -88,7 +88,7 @@ impl From<SearchQueryGet> for SearchQuery {
Self {
q: other.q,
vector: other.vector,
vector: other.vector.map(CS::into_inner),
offset: other.offset.0,
limit: other.limit.0,
page: other.page.as_deref().copied(),

View File

@@ -5,6 +5,7 @@ use index_scheduler::IndexScheduler;
use log::debug;
use meilisearch_types::deserr::DeserrJsonError;
use meilisearch_types::error::ResponseError;
use meilisearch_types::facet_values_sort::FacetValuesSort;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::settings::{settings, RankingRuleView, Settings, Unchecked};
use meilisearch_types::tasks::KindWithContent;
@@ -54,10 +55,7 @@ macro_rules! make_setting_route {
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task))
.await??
.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -96,10 +94,7 @@ macro_rules! make_setting_route {
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task))
.await??
.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -550,10 +545,16 @@ pub async fn update_all(
.as_ref()
.set()
.and_then(|s| s.max_values_per_facet.as_ref().set()),
"sort_facet_values_by": new_settings.faceting
"sort_facet_values_by_star_count": new_settings.faceting
.as_ref()
.set()
.and_then(|s| s.sort_facet_values_by.as_ref().set()),
.and_then(|s| {
s.sort_facet_values_by.as_ref().set().map(|s| s.iter().any(|(k, v)| k == "*" && v == &FacetValuesSort::Count))
}),
"sort_facet_values_by_total": new_settings.faceting
.as_ref()
.set()
.and_then(|s| s.sort_facet_values_by.as_ref().set().map(|s| s.len())),
},
"pagination": {
"max_total_hits": new_settings.pagination
@@ -579,8 +580,7 @@ pub async fn update_all(
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
@@ -615,8 +615,7 @@ pub async fn delete_all(
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView =
tokio::task::spawn_blocking(move || index_scheduler.register(task)).await??.into();
let task: SummarizedTaskView = index_scheduler.register(task)?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))

View File

@@ -49,6 +49,11 @@ pub async fn get_metrics(
}
}
if let Some(last_update) = response.last_update {
crate::metrics::MEILISEARCH_LAST_UPDATE.set(last_update.unix_timestamp());
}
crate::metrics::MEILISEARCH_IS_INDEXING.set(index_scheduler.is_task_processing()? as i64);
let encoder = TextEncoder::new();
let mut buffer = vec![];
encoder.encode(&prometheus::gather(), &mut buffer).expect("Failed to encode metrics");

View File

@@ -284,9 +284,6 @@ pub fn create_all_stats(
used_database_size += index_scheduler.used_size()?;
database_size += auth_controller.size()?;
used_database_size += auth_controller.used_size()?;
let update_file_size = index_scheduler.compute_update_file_size()?;
database_size += update_file_size;
used_database_size += update_file_size;
let stats = Stats { database_size, used_database_size, last_update: last_task, indexes };
Ok(stats)

View File

@@ -18,7 +18,6 @@ use serde_json::json;
use time::format_description::well_known::Rfc3339;
use time::macros::format_description;
use time::{Date, Duration, OffsetDateTime, Time};
use tokio::task;
use super::SummarizedTaskView;
use crate::analytics::Analytics;
@@ -325,15 +324,12 @@ async fn cancel_tasks(
let query = params.into_query();
let tasks = index_scheduler.get_task_ids_from_authorized_indexes(
&index_scheduler.read_txn()?,
&query,
index_scheduler.filters(),
)?;
let (tasks, _) =
index_scheduler.get_task_ids_from_authorized_indexes(&query, index_scheduler.filters())?;
let task_cancelation =
KindWithContent::TaskCancelation { query: format!("?{}", req.query_string()), tasks };
let task = task::spawn_blocking(move || index_scheduler.register(task_cancelation)).await??;
let task = index_scheduler.register(task_cancelation)?;
let task: SummarizedTaskView = task.into();
Ok(HttpResponse::Ok().json(task))
@@ -370,15 +366,12 @@ async fn delete_tasks(
);
let query = params.into_query();
let tasks = index_scheduler.get_task_ids_from_authorized_indexes(
&index_scheduler.read_txn()?,
&query,
index_scheduler.filters(),
)?;
let (tasks, _) =
index_scheduler.get_task_ids_from_authorized_indexes(&query, index_scheduler.filters())?;
let task_deletion =
KindWithContent::TaskDeletion { query: format!("?{}", req.query_string()), tasks };
let task = task::spawn_blocking(move || index_scheduler.register(task_deletion)).await??;
let task = index_scheduler.register(task_deletion)?;
let task: SummarizedTaskView = task.into();
Ok(HttpResponse::Ok().json(task))
@@ -387,6 +380,7 @@ async fn delete_tasks(
#[derive(Debug, Serialize)]
pub struct AllTasks {
results: Vec<TaskView>,
total: u64,
limit: u32,
from: Option<u32>,
next: Option<u32>,
@@ -406,23 +400,17 @@ async fn get_tasks(
let limit = params.limit.0;
let query = params.into_query();
let mut tasks_results: Vec<TaskView> = index_scheduler
.get_tasks_from_authorized_indexes(query, index_scheduler.filters())?
.into_iter()
.map(|t| TaskView::from_task(&t))
.collect();
let filters = index_scheduler.filters();
let (tasks, total) = index_scheduler.get_tasks_from_authorized_indexes(query, filters)?;
let mut results: Vec<_> = tasks.iter().map(TaskView::from_task).collect();
// If we were able to fetch the `limit + 1` tasks we asked for,
// it means that there are more to come.
let next = if tasks_results.len() == limit as usize {
tasks_results.pop().map(|t| t.uid)
} else {
None
};
let next = if results.len() == limit as usize { results.pop().map(|t| t.uid) } else { None };
let from = tasks_results.first().map(|t| t.uid);
let from = results.first().map(|t| t.uid);
let tasks = AllTasks { results, limit: limit.saturating_sub(1), total, from, next };
let tasks = AllTasks { results: tasks_results, limit: limit.saturating_sub(1), from, next };
Ok(HttpResponse::Ok().json(tasks))
}
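The pagination trick above is self-contained: ask the store for one extra task, and if it comes back, pop it and use its uid as the `next` cursor. A small sketch, independent of the scheduler types:

// `fetched` was requested with `page_size + 1` items; the extra one, if present,
// becomes the cursor for the next page and is removed from the returned page.
fn split_page<T>(mut fetched: Vec<T>, page_size: usize, uid_of: impl Fn(&T) -> u32) -> (Vec<T>, Option<u32>) {
    let next =
        if fetched.len() == page_size + 1 { fetched.pop().map(|t| uid_of(&t)) } else { None };
    (fetched, next)
}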
@@ -444,10 +432,10 @@ async fn get_task(
analytics.publish("Tasks Seen".to_string(), json!({ "per_task_uid": true }), Some(&req));
let query = index_scheduler::Query { uids: Some(vec![task_uid]), ..Query::default() };
let filters = index_scheduler.filters();
let (tasks, _) = index_scheduler.get_tasks_from_authorized_indexes(query, filters)?;
if let Some(task) =
index_scheduler.get_tasks_from_authorized_indexes(query, index_scheduler.filters())?.first()
{
if let Some(task) = tasks.first() {
let task_view = TaskView::from_task(task);
Ok(HttpResponse::Ok().json(task_view))
} else {

View File

@@ -61,6 +61,8 @@ pub static AUTHORIZATIONS: Lazy<HashMap<(&'static str, &'static str), HashSet<&'
("DELETE", "/keys/mykey/") => hashset!{"keys.delete", "*"},
("POST", "/keys") => hashset!{"keys.create", "*"},
("GET", "/keys") => hashset!{"keys.get", "*"},
("GET", "/experimental-features") => hashset!{"experimental.get", "*"},
("PATCH", "/experimental-features") => hashset!{"experimental.update", "*"},
};
authorizations

View File

@@ -39,7 +39,7 @@ impl Server {
let options = default_settings(dir.path());
let (index_scheduler, auth) = setup_meilisearch(&options).unwrap();
let (index_scheduler, auth) = setup_meilisearch(&options, None).await.unwrap();
let service = Service { index_scheduler, auth, options, api_key: None };
Server { service, _dir: Some(dir) }
@@ -54,7 +54,7 @@ impl Server {
options.master_key = Some("MASTER_KEY".to_string());
let (index_scheduler, auth) = setup_meilisearch(&options).unwrap();
let (index_scheduler, auth) = setup_meilisearch(&options, None).await.unwrap();
let service = Service { index_scheduler, auth, options, api_key: None };
Server { service, _dir: Some(dir) }
@@ -67,7 +67,7 @@ impl Server {
}
pub async fn new_with_options(options: Opt) -> Result<Self, anyhow::Error> {
let (index_scheduler, auth) = setup_meilisearch(&options)?;
let (index_scheduler, auth) = setup_meilisearch(&options, None).await?;
let service = Service { index_scheduler, auth, options, api_key: None };
Ok(Server { service, _dir: None })
@@ -189,6 +189,14 @@ impl Server {
let url = format!("/tasks/{}", update_id);
self.service.get(url).await
}
pub async fn get_features(&self) -> (Value, StatusCode) {
self.service.get("/experimental-features").await
}
pub async fn set_features(&self, value: Value) -> (Value, StatusCode) {
self.service.patch("/experimental-features", value).await
}
}
pub fn default_settings(dir: impl AsRef<Path>) -> Opt {

View File

@@ -43,7 +43,7 @@ async fn import_dump_v1_movie_raw() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31968 }, "error": null, "duration": "PT9.317060500S", "enqueuedAt": "2021-09-08T09:08:45.153219Z", "startedAt": "2021-09-08T09:08:45.3961665Z", "finishedAt": "2021-09-08T09:08:54.713227Z" }], "limit": 20, "from": 0, "next": null })
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31968 }, "error": null, "duration": "PT9.317060500S", "enqueuedAt": "2021-09-08T09:08:45.153219Z", "startedAt": "2021-09-08T09:08:45.3961665Z", "finishedAt": "2021-09-08T09:08:54.713227Z" }], "total": 1, "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -135,7 +135,7 @@ async fn import_dump_v1_movie_with_settings() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["genres", "id", "overview", "poster", "release_date", "title"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT7.288826907S", "enqueuedAt": "2021-09-08T09:34:40.882977Z", "startedAt": "2021-09-08T09:34:40.883073093Z", "finishedAt": "2021-09-08T09:34:48.1719Z"}, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31968 }, "error": null, "duration": "PT9.090735774S", "enqueuedAt": "2021-09-08T09:34:16.036101Z", "startedAt": "2021-09-08T09:34:16.261191226Z", "finishedAt": "2021-09-08T09:34:25.351927Z" }], "limit": 20, "from": 1, "next": null })
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["genres", "id", "overview", "poster", "release_date", "title"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT7.288826907S", "enqueuedAt": "2021-09-08T09:34:40.882977Z", "startedAt": "2021-09-08T09:34:40.883073093Z", "finishedAt": "2021-09-08T09:34:48.1719Z"}, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31968 }, "error": null, "duration": "PT9.090735774S", "enqueuedAt": "2021-09-08T09:34:16.036101Z", "startedAt": "2021-09-08T09:34:16.261191226Z", "finishedAt": "2021-09-08T09:34:25.351927Z" }], "total": 2, "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -317,7 +317,7 @@ async fn import_dump_v2_movie_raw() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "total": 1, "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -409,7 +409,7 @@ async fn import_dump_v2_movie_with_settings() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "total": 2, "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -591,7 +591,7 @@ async fn import_dump_v3_movie_raw() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "total": 1, "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -683,7 +683,7 @@ async fn import_dump_v3_movie_with_settings() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "total": 2, "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -865,7 +865,7 @@ async fn import_dump_v4_movie_raw() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit" : 20, "from": 0, "next": null })
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "total": 1, "limit" : 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
@@ -957,7 +957,7 @@ async fn import_dump_v4_movie_with_settings() {
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "canceledBy": null, "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "error": null, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "canceledBy": null, "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "error": null, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "total": 2, "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
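
Taken together, these assertions show the shape of the paginated `GET /tasks` response after this change: alongside the existing keyset-pagination fields `limit`, `from`, and `next`, the body now carries a `total` count of all tasks matching the filter, independent of the page size. A minimal sketch of the invariants these tests rely on, assuming only `serde_json` (`check_pagination` is a hypothetical helper for illustration, not part of the test suite):

```rust
use serde_json::Value;

// Hypothetical helper: checks the invariants of a paginated `GET /tasks`
// body of the shape asserted above (a sketch, not the actual test code).
fn check_pagination(tasks: &Value) {
    // `total` counts every task matching the filter, independent of `limit`.
    let total = tasks["total"].as_u64().expect("total must be present");
    // `results` holds at most `limit` tasks; `total` may be larger.
    let shown = tasks["results"].as_array().expect("results array").len() as u64;
    assert!(shown <= tasks["limit"].as_u64().expect("limit"));
    assert!(shown <= total);
    // `next` names the `from` of the following page, or is null on the last one.
    assert!(tasks["next"].is_null() || tasks["next"].as_u64().is_some());
}
```

Run against any of the bodies above, e.g. the two-task responses with `"total": 2, "limit": 20`, every assertion holds.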

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:08:54.713227Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:25.351927Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:25.351927Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -40,6 +40,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:48.1719Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -40,6 +40,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:48.1719Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -40,6 +40,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:48.1719Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -40,6 +40,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:48.1719Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -40,6 +40,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:34:48.1719Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -45,6 +45,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:26:57.319083Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 92,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T09:28:46.369971Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:21:54.691484Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:21:54.691484Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -36,6 +36,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:24:39.812922Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -36,6 +36,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:24:39.812922Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -36,6 +36,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:24:39.812922Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -36,6 +36,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:24:39.812922Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -36,6 +36,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:24:39.812922Z"
}
],
"total": 2,
"limit": 1,
"from": 1,
"next": 0

@@ -41,6 +41,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:40:28.669652Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 92,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:51:53.095314Z"
}
],
"total": 93,
"limit": 1,
"from": 92,
"next": 91

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

@@ -20,6 +20,7 @@ source: meilisearch/tests/dumps/mod.rs
"finishedAt": "2021-09-08T08:31:12.304168Z"
}
],
"total": 1,
"limit": 1,
"from": 0,
"next": null

Some files were not shown because too many files have changed in this diff.