Compare commits

...

777 Commits

Author SHA1 Message Date
aafc36a853 Merge #3182
3182: Bump the other milli r=curquiza a=Kerollmops

This PR bumps the other import of milli in the TOML.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-12-01 17:51:39 +00:00
7070b8ba9a Bump the other milli 2022-12-01 18:16:55 +01:00
121db2ef44 Merge #3176
3176: Bump version to 0.29.3 and milli 0.33.6 r=curquiza a=Kerollmops

This PR bumps Meilisearch to 0.29.3 and milli to 0.33.6.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-12-01 15:47:45 +00:00
236ffce26a Bump Meilisearch to 0.29.3 2022-12-01 16:43:23 +01:00
3efec70ae2 Bump milli to v0.33.6 2022-12-01 16:41:48 +01:00
1387a211d2 Merge #3053
3053: Upgrade alpine 3.14 to 3.16 r=Kerollmops a=curquiza

Otherwise the CI fails: https://github.com/meilisearch/meilisearch/actions/runs/3470576605/jobs/5799173168

Co-authored-by: curquiza <clementine@meilisearch.com>
2022-11-15 13:56:45 +00:00
661b345ad9 Upgrade alpine 3.14 to 3.16 2022-11-15 14:54:18 +01:00
0f0d1dccf0 Merge #3047
3047: Fix soft-deleted documents bug in settings r=curquiza a=Kerollmops

This PR fixes https://github.com/meilisearch/meilisearch/issues/3021 and fixes https://github.com/meilisearch/meilisearch/issues/2945 and is released as version 0.29.2.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-11-15 11:08:47 +00:00
0331fc7c71 Make clippy happy 2022-11-15 12:07:00 +01:00
5cfcdbb55a Bump the version to v0.29.2 2022-11-14 17:39:10 +01:00
c77c3a90a0 Use milli v0.33.5 2022-11-14 17:39:09 +01:00
3ebd88c03b Revert "Comment cache steps in jobs"
This reverts commit f513ac1233.
2022-10-10 14:46:54 +02:00
c958097e99 Merge #2862
2862: Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04 r=curquiza a=loiclec

This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported glibc version to 2.29, which means it could no longer be run on Debian 10 (for example). By using Ubuntu 18.04, which uses glibc 2.27, we restore support for older Linux distros.

Fixes #2850

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-10-10 14:42:18 +02:00
f513ac1233 Comment cache steps in jobs 2022-10-10 14:25:24 +02:00
97c202db51 Update version for next release (v0.29.1) 2022-10-10 14:25:24 +02:00
a5e23aa6e4 Use Ubuntu 18.04 for all CI tasks that previously used Ubuntu 20.04
This is to prevent linking with a version of glibc that is too recent.

With meilisearch v0.29.0 we inadvertently bumped the minimum supported
glibc version to 2.29, which means it could no longer be run on Debian 10
(for example). By using Ubuntu 18.04, which uses glibc 2.27, we
restore support for older Linux distros.
2022-10-10 14:25:18 +02:00
fa315352da Merge #2770
2770: Update milli 0.33.4 r=Kerollmops a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2764

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-13 16:07:06 +00:00
268d59ccb1 Update milli version to v0.33.4 2022-09-13 18:01:09 +02:00
5901d4e407 Merge #2768
2768: Update patch versions to remove CVE r=Kerollmops a=curquiza

Trying to fix the CVE we have with the [synchronoise](https://github.com/QuietMisdreavus/synchronoise) crate

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-12 12:47:59 +00:00
aabd67a9fa Update patch version to remove CVE 2022-09-12 14:36:45 +02:00
3fd6af25f9 Merge #2759
2759: Bump milli to 0.33.3 r=Kerollmops a=Kerollmops

This PR fixes #2743.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-09-07 20:55:08 +00:00
441492f1c8 Bump milli to v0.33.3 2022-09-07 18:23:49 +02:00
92b0c51bfe Merge #2755
2755: Update mini-dashboard to v0.2.2 r=Kerollmops a=mdubus

# Pull Request

## What does this PR do?
Fixes #2716

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-09-07 15:53:04 +00:00
b3ffcb2d97 Merge #2758
2758: Update ubuntu-18.04 to 20.04 r=Kerollmops a=curquiza

Trying to avoid CI failures by updating the Ubuntu machines.
The commit is already available on main, so for v0.30.0:
https://github.com/meilisearch/meilisearch/pull/2719

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-09-07 15:27:19 +00:00
5cbd047989 Update ubuntu-18.04 to 20.04 2022-09-07 17:24:35 +02:00
07f45251e9 Update mini-dashboard to v0.2.2 2022-09-07 11:09:12 +02:00
37dc6537c3 Fix api keys bugs (#2734)
* Add some tests

* Disallow index creation when the API key doesn't explicitly have the right on the index being created

* Fix lazy index creation with `indexes.*` action
2022-09-06 15:13:09 +02:00
4e37427de8 Merge #2732
2732: Update milli v0.33.2 r=Kerollmops a=ManyTheFish

closes #2722


⚠️ : merging into [release-v0.29.0](https://github.com/meilisearch/meilisearch/tree/release-v0.29.0)

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-09-01 11:18:11 +00:00
50434d35d0 Update milli v0.33.2 2022-09-01 13:15:05 +02:00
e315547ffc Merge #2724
2724: Make the "document addition done" log appear once indexing is over r=curquiza a=evpeople

# Pull Request

## What does this PR do?
Fixes #2703 
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: evpeople <hangcaihui@gmail.com>
2022-08-31 13:13:36 +00:00
833ade80a6 cargo update 2022-08-31 13:58:53 +08:00
f117c90c46 remove the intermediate addition variable 2022-08-30 21:49:34 +08:00
1131400694 Merge #2717
2717: Disable LTO due to compilation failures on some platforms r=curquiza a=loiclec

Meilisearch fails to compile on aarch64 Linux due to a linker error (https://github.com/meilisearch/meilisearch/runs/8072616457?check_suite_focus=true). This is probably caused by link-time optimisation (LTO). Since it is not possible to modify a profile based on the target triple, this PR deactivates LTO completely for all platforms.
In the future, we might want to create different custom profiles, such as:
```toml
[profile.release-lto]
inherits = "release"
lto = "thin"
```
and compile Meilisearch using `cargo build --profile release-lto` on the platforms that can support it.


Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-08-29 16:04:34 +00:00
9a789daa58 Disable LTO due to compilation failures on some platforms
(aarch64 linux)
2022-08-29 17:21:08 +02:00
0826aa35e2 Merge #2713
2713: Move prometheus behind a feature flag r=Kerollmops a=irevoire

We decided we wanted to continue working on this feature before making it public.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-08-29 12:38:42 +00:00
6aa3ad6b6c move prometheus behind a feature flag 2022-08-29 14:36:59 +02:00
b774adfbf7 The "document addition done"
appear once the indexation is over now.
2022-08-27 21:50:10 +08:00
47a1aa69f0 Merge #2702
2702: Add link to the main image r=curquiza a=brunoocasali

I have wrapped the image in an `<a>` link, and it seems to be working fine. WDYT?

Co-authored-by: Bruno Casali <brunoocasali@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-08-24 16:54:08 +00:00
b3d89da74d Update README.md 2022-08-24 18:52:34 +02:00
0798170a64 Add link to the main image 2022-08-24 13:47:09 -03:00
446dfccc8c Merge #2504 #2697
2504: New README 🌟 r=curquiza a=curquiza

⚠️ Please do not only look at the Markdown but also at how GitHub renders the README 😇 

👉 👉 [Rendered](https://github.com/meilisearch/meilisearch/blob/new-readme/README.md) 👈 👈

2697: Accept an environment variable to enable the metrics route r=ManyTheFish a=Kerollmops

With this PR, Meilisearch accepts the `MEILI_ENABLE_METRICS_ROUTE` environment variable to enable the newly introduced metrics route.
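
For illustration only, here is a minimal Rust sketch of reading such an opt-in flag at startup; the accepted value (`"true"`) and the surrounding code are assumptions, not this PR's implementation:

```rust
fn main() {
    // Assumption: MEILI_ENABLE_METRICS_ROUTE set to "true" enables the
    // route; any other value, or an unset variable, leaves it disabled.
    let metrics_enabled = std::env::var("MEILI_ENABLE_METRICS_ROUTE")
        .map(|value| value == "true")
        .unwrap_or(false);

    if metrics_enabled {
        println!("mounting the /metrics route");
    }
}
```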

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-24 15:41:44 +00:00
43175cfb78 New README version
Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update according to guigui's request

Add demo link

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: CaroFG <48251481+CaroFG@users.noreply.github.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Update README.md

Co-authored-by: gui machiavelli <hey@guimachiavelli.com>

Put sentence in bold
2022-08-24 17:16:29 +02:00
a1bb49c351 Merge #2696
2696: Add the new `metrics.get` and `metrics.all` actions rights r=Kerollmops a=Kerollmops

Follow the specification and add the new `metrics.get` and `metrics.all` actions, making the `/metrics` route only accessible with those rights.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-24 15:08:53 +00:00
bebd76064a Add test for the rights of /metrics route 2022-08-24 17:03:43 +02:00
f0b2ac6efb metrics.all must define metrics.get 2022-08-24 17:03:30 +02:00
08d86e33ca Accept an env variable to enable the metrics route 2022-08-24 16:39:56 +02:00
2c2efc7ab6 Remove the hand written numbers of the actions rights 2022-08-24 16:33:12 +02:00
381df43be4 Change the metrics route API access rights 2022-08-24 16:28:33 +02:00
f87ebfe477 Merge #2692
2692: Slight changes for prometheus metrics r=Kerollmops a=gmourier

# Pull Request

## What does this PR do?

- Replace "MeiliSearch" with "Meilisearch"
- Brings some consistency between rust identifier and exposed metrics names
- Add suffix describing unit, in plural form. e.g `MEILISEARCH_DB_SIZE_BYTES` (https://prometheus.io/docs/practices/naming/#metric-names)
- Update dashboard.json
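
To make the naming convention concrete, here is a hedged sketch of registering such a metric with the [prometheus crate](https://crates.io/crates/prometheus); the metric name and help text are illustrative, not necessarily the ones in this PR:

```rust
use prometheus::register_int_gauge;

fn main() -> Result<(), prometheus::Error> {
    // The "_bytes" suffix gives the unit in plural form, as the
    // Prometheus naming guide recommends.
    let db_size = register_int_gauge!(
        "meilisearch_db_size_bytes",
        "Total size of the Meilisearch database in bytes"
    )?;
    db_size.set(4096);
    Ok(())
}
```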

Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-08-24 10:12:24 +00:00
c445334070 Merge #2636
2636: Upgrade milli to v0.33.0 r=Kerollmops a=ManyTheFish

# Summary
- Update milli to v0.33.0
- Classify the new InvalidLmdbOpenOptions error as an Internal error
- Update filter error check in tests
- Introduce Terms Matching Policies

fixes #2479
fixes #2484
fixes #2486
fixes #2516
fixes #2578
fixes #2580
fixes #2583
fixes #2600
fixes #2640
fixes #2672
fixes #2679
fixes #2686

# Terms Matching Policies
This PR allows end users to customize term matching policies.

## Todo

- [x] Update the API to return the number of pages and allow users to directly choose a page instead of computing an offset
- [x] Change generation of the query tree depending on the chosen settings https://github.com/meilisearch/milli/pull/598

## Small Documentation

### Default search query

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "doctor of tokio" }'
```

**result**:
```json
{
  "hits":[...],
  "estimatedTotalHits":32,
  "query":"doctor of tokio",
  "limit":20,
  "offset":0,
  "processingTimeMs":7
}
```

The default behavior is unchanged from the current Meilisearch behavior:
if we don't have enough documents to fill the requested limit, we remove the query words one by one, from the last typed word to the first.

### Search query with the `matchingStrategy` parameter

**request**:
```sh
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "doctor of tokio", "matchingStrategy": "all"}'
```

**result**:
```json
{
  "hits":[...],
  "estimatedTotalHits":1,
  "query":"doctor of tokio",
  "limit":20,
  "offset":0,
  "processingTimeMs":7
}
```

### Allowed `matchingStrategy` values

#### `last`
The default behavior: if we don't have enough documents to fill the requested limit, we remove the query words one by one, from the last typed word to the first.

#### `all`
No word will be removed; if we don't have enough documents to fill the requested limit, we return only the documents we found.

### In charge of the feature

Core: `@ManyTheFish` & `@curquiza`  
Docs: TBD
Integration: `@bidoubiwa` 


Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Many the fish <many@meilisearch.com>
2022-08-23 16:21:00 +00:00
651a22b1ed Enhance enum documentation
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-23 18:11:20 +02:00
ff59ae56f4 cargo fmt 2022-08-23 17:17:02 +02:00
b2577aac52 Add suffix describing the unit when needed; Replace MeiliSearch by Meilisearch; Precised some metrics name 2022-08-23 17:09:27 +02:00
c9bb111ef3 Implement all and last matching strategy 2022-08-23 17:07:43 +02:00
e2af8dccb8 Fix filter tests 2022-08-23 16:39:39 +02:00
5e206ee84b Classify InvalidLmdbOpenOptions as an Internal error 2022-08-23 16:39:39 +02:00
aff4b64265 Update dependencies 2022-08-23 16:39:39 +02:00
0a2ef0037f Merge #2689 #2690
2689: Use mimalloc as the global allocator r=Kerollmops a=loiclec

milli has switched its global allocator to mimalloc already, and we have seen some performance gains as a result. Furthermore, we can use mimalloc as the global allocator on all platforms whereas jemalloc was only activated on Linux. 

This PR brings mimalloc to Meilisearch as well. 
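
The mechanics of such a switch are the standard Rust pattern; here is a minimal sketch assuming the `mimalloc` crate, not necessarily the exact diff in this PR:

```rust
use mimalloc::MiMalloc;

// Registering mimalloc as the global allocator routes every heap
// allocation in the binary through it, on every platform.
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    let v: Vec<u32> = (0..1024).collect(); // allocated via mimalloc
    println!("allocated {} integers", v.len());
}
```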

2690: Add LTO and codegen-units=1 to release compile options r=Kerollmops a=loiclec

This PR brings Meilisearch's release compile options in line with milli (see https://github.com/meilisearch/milli/pull/606 ). 

Adding LTO and codegen-units=1 will make compile times longer, but it also speeds up the final binary significantly.

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-08-23 12:05:02 +00:00
40ae26478a Merge #2691
2691: Update version for next release (v0.29.0) in Cargo.toml files r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-08-23 11:41:22 +00:00
6fe3f285ce Update version for next release (v0.29.0) 2022-08-23 13:39:56 +02:00
72f8adaa70 Add LTO and codegen-units=1 to release compile options 2022-08-23 13:03:57 +02:00
e659c08ac4 Use mimalloc as the global allocator 2022-08-23 12:58:10 +02:00
ea365126b4 Merge #2657
2657: prometheus and grafana dashboards implemented r=irevoire a=pavo-tusker

Implemented basic Prometheus metrics and a Grafana dashboard using the [prometheus crate](https://crates.io/crates/prometheus) [#496](https://github.com/meilisearch/product/issues/496)
![Screenshot from 2022-08-04 19-59-06](https://user-images.githubusercontent.com/43550760/182880420-71ec8591-a2cb-4fd5-b1c5-911a6dcbdaf9.png)
![Screenshot from 2022-08-04 19-58-56](https://user-images.githubusercontent.com/43550760/182880433-11727814-e230-44dd-89c9-fec3baa47b11.png)
![Screenshot from 2022-08-04 19-58-40](https://user-images.githubusercontent.com/43550760/182880436-73312a68-4f20-49f0-80e9-5e344f96db6f.png)


Co-authored-by: mohandasspat <mohan.s@pavo-tusker.com>
Co-authored-by: Pavo-Tusker <43550760+pavo-tusker@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-08-22 11:15:42 +00:00
a277cc9a18 Merge branch 'metrics/prometheus-setup' of https://github.com/pavo-tusker/meilisearch into metrics/prometheus-setup 2022-08-22 13:34:37 +05:30
a37c7ba1bb clippy & cargo fixed 2022-08-22 13:34:19 +05:30
ef1d6b1694 clippy & cargo fixed 2022-08-22 13:27:26 +05:30
099abefc6d Merge branch 'main' into metrics/prometheus-setup 2022-08-22 09:56:15 +02:00
a05101af4d clippy & fmt fixed 2022-08-22 13:21:22 +05:30
109540011a conflict fixes 2022-08-22 13:21:22 +05:30
2f92169e48 clippy issue in metrics fixed 2022-08-22 13:21:22 +05:30
a58b00d8f1 Update meilisearch-http/src/option.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-22 13:21:22 +05:30
2b8f3c26ec Changed prometheus metrics feature as optional 2022-08-22 13:21:22 +05:30
0b6ca73790 review fixes 2022-08-22 13:21:22 +05:30
1f1482e97c Update meilisearch-http/src/routes/mod.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-22 13:21:22 +05:30
25fecf9360 clippy & rustfmt fixed 2022-08-22 13:21:22 +05:30
4bee0565e8 prometheus and grafana dashboards implemented 2022-08-22 13:21:22 +05:30
d5da063666 clippy & fmt fixed 2022-08-22 10:52:09 +05:30
43bb5176a9 conflict fixes 2022-08-22 10:30:07 +05:30
a0734c991c Merge #2674
2674: Add analytics on the stats routes r=ManyTheFish a=irevoire

# Pull Request

## What does this PR do?
Implements https://github.com/meilisearch/specifications/pull/169

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-08-18 14:19:56 +00:00
cb29d7d124 Merge #2678
2678: Accept either an array of documents or a single document r=irevoire a=Kerollmops

# Pull Request

## What does this PR do?
Fixes #2671
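
One common way to accept both payload shapes is an untagged serde enum; the sketch below only illustrates that idea and is an assumption, not necessarily how this PR does it:

```rust
use serde::Deserialize;
use serde_json::Value;

// Untagged: serde tries the variants in order, so a JSON array
// matches `Many` and a single JSON object falls through to `One`.
#[derive(Deserialize)]
#[serde(untagged)]
enum DocumentsPayload {
    Many(Vec<Value>),
    One(Value),
}

// Normalize both shapes to a list before indexing.
fn into_documents(payload: DocumentsPayload) -> Vec<Value> {
    match payload {
        DocumentsPayload::Many(docs) => docs,
        DocumentsPayload::One(doc) => vec![doc],
    }
}
```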

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-18 14:00:01 +00:00
e32d5ef2b3 Fix the test with an incomprehensible user error message 2022-08-18 14:37:44 +02:00
ee69ede1ce Merge #2677
2677: Hide the batch_uid field from the tasks route r=Kerollmops a=Kerollmops

# Pull Request

## What does this PR do?

Fixes #2676
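
For reference, hiding a field from the serialized response is a one-attribute change with serde; the struct below is a hypothetical sketch, not the actual task type:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct TaskView {
    uid: u64,
    status: String,
    // Skipped at serialization time: the field stays in the internal
    // model but never appears in the /tasks HTTP response.
    #[serde(skip_serializing)]
    batch_uid: Option<u64>,
}
```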

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-08-18 10:01:09 +00:00
9b2036ac05 Accept either an array of documents or a single document 2022-08-18 11:55:14 +02:00
5c543f9d94 Add a test for single document upload 2022-08-18 11:33:22 +02:00
0c03ed3c1e Hide the batch_uid field from the tasks route 2022-08-18 11:15:21 +02:00
54a0b47c2b clippy issue in metrics fixed 2022-08-17 21:08:28 +05:30
947fb5c956 Update meilisearch-http/src/option.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-17 20:57:07 +05:30
cd18459484 Changed prometheus metrics feature as optional 2022-08-17 20:56:15 +05:30
225d9936ed review fixes 2022-08-17 20:55:29 +05:30
93daa4c464 Update meilisearch-http/src/routes/mod.rs
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-08-17 20:55:29 +05:30
d08c77706c clippy & rustfmt fixed 2022-08-17 20:55:29 +05:30
de58ccd4ba prometheus and grafana dashboards implemented 2022-08-17 20:54:39 +05:30
62240b7e19 add analytics on the stats routes 2022-08-17 16:12:26 +02:00
22874ce300 Merge #2664
2664: 🐞 fix: Support https in print_launch_resume r=irevoire a=evpeople

fix #2660

# Pull Request

## What does this PR do?
Fixes #2660 
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: evpeople <hangcaihui@gmail.com>
2022-08-16 14:40:20 +00:00
b5f91b91c3 Merge #2523
2523: Improve the tasks error reporting when processed in batches r=irevoire a=Kerollmops

This fixes #2478 by changing the behavior of the task handler when there is an error in a batch of document addition or update.

What changes is that when a task in a batch hits a user error, we now report that task as failed with the right error message, but we continue to process the other tasks. A user error can be an invalid geo field, or an invalid or missing document id.

fixes #2582, #2478
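
Schematically, the new behavior is "record the failure and keep going" instead of "abort the batch"; everything in this sketch is hypothetical and only illustrates the control flow:

```rust
// Each document either indexes fine or fails with a user error
// (invalid geo field, invalid or missing document id, ...); the
// failure is recorded per task and the batch keeps processing.
fn process_batch(documents: Vec<&str>) -> Vec<Result<(), String>> {
    documents
        .into_iter()
        .map(|doc| {
            if doc.is_empty() {
                Err("user error: invalid document".to_string())
            } else {
                Ok(())
            }
        })
        .collect()
}

fn main() {
    let outcomes = process_batch(vec!["{\"id\": 1}", "", "{\"id\": 2}"]);
    // One failed task, two succeeded: the batch was not aborted.
    assert_eq!(outcomes.iter().filter(|o| o.is_err()).count(), 1);
}
```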

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-08-16 14:15:30 +00:00
b6e6a08f7d Fix CI test 2022-08-16 15:14:01 +02:00
8198bb9da2 Merge #2665
2665: 📎 makes clippy happy r=Kerollmops a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-08-16 12:01:56 +00:00
68a7d6bc61 reformat 2022-08-12 15:11:01 +02:00
83e20027fd 📎 makes clippy happy 2022-08-12 14:18:27 +02:00
f21a4d61da 🌈 style(http/main.rs): 2022-08-12 16:16:23 +08:00
12538d5a44 🐞 fix: Support https in print_launch_resume
fix #2660
2022-08-11 21:29:18 +08:00
ae174c2cca Fix task serialization 2022-08-11 13:35:35 +02:00
e6b806e0cf Merge #2662
2662: Fix(cli): Clamp databases max size to a multiple of system page size r=Kerollmops a=ManyTheFish

fix #2659
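
The clamping itself is simple integer arithmetic; here is a minimal sketch, assuming a 4 KiB page size for the example (the real code would query the system page size):

```rust
/// Round a requested map size down to a multiple of the page size,
/// since LMDB requires the map size to be page-aligned.
fn clamp_to_page_size(requested: usize, page_size: usize) -> usize {
    requested - requested % page_size
}

fn main() {
    let page_size = 4096; // assumption for the example
    assert_eq!(clamp_to_page_size(10_000, page_size), 8_192);
}
```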


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-08-11 08:49:24 +00:00
cf955a77db Fix(cli): Clamp databases max size to a multiple of system page size
fix #2659
2022-08-11 10:44:47 +02:00
3a48de136e Add autobatching test 2022-08-10 17:02:29 +02:00
dfbdc565f9 Merge #2653
2653: Bump Swatinem/rust-cache from 1.4.0 to 2.0.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/releases">Swatinem/rust-cache's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>The action code was refactored to allow for caching multiple workspaces and
different <code>target</code> directory layouts.</li>
<li>The <code>working-directory</code> and <code>target-dir</code> input options were replaced by a
single <code>workspaces</code> option that has the form of <code>$workspace -&gt; $target</code>.</li>
<li>Support for considering <code>env-vars</code> as part of the cache key.</li>
<li>The <code>sharedKey</code> input option was renamed to <code>shared-key</code> for consistency.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md">Swatinem/rust-cache's changelog</a>.</em></p>
<blockquote>
<h2>2.0.0</h2>
<ul>
<li>The action code was refactored to allow for caching multiple workspaces and
different <code>target</code> directory layouts.</li>
<li>The <code>working-directory</code> and <code>target-dir</code> input options were replaced by a
single <code>workspaces</code> option that has the form of <code>$workspace -&gt; $target</code>.</li>
<li>Support for considering <code>env-vars</code> as part of the cache key.</li>
<li>The <code>sharedKey</code> input option was renamed to <code>shared-key</code> for consistency.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="6720f05bc4"><code>6720f05</code></a> 2.0.0</li>
<li><a href="5733786579"><code>5733786</code></a> rebuild</li>
<li><a href="622616010e"><code>6226160</code></a> prepare v2</li>
<li><a href="0497f9301f"><code>0497f93</code></a> improve registry cleanpu</li>
<li><a href="7b8626742a"><code>7b86267</code></a> update registry cleaning</li>
<li><a href="911d8e9e55"><code>911d8e9</code></a> test sparse registry</li>
<li><a href="875be5ce2d"><code>875be5c</code></a> bump cache</li>
<li><a href="07a2ee71bc"><code>07a2ee7</code></a> lol, dependency check was reversed</li>
<li><a href="7c190ef171"><code>7c190ef</code></a> fix actual test code ;-)</li>
<li><a href="fffd6895b2"><code>fffd689</code></a> add some more tests</li>
<li>Additional commits viewable in <a href="https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=Swatinem/rust-cache&package-manager=github_actions&previous-version=1.4.0&new-version=2.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting ``@dependabot` rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- ``@dependabot` rebase` will rebase this PR
- ``@dependabot` recreate` will recreate this PR, overwriting any edits that have been made to it
- ``@dependabot` merge` will merge this PR after your CI passes on it
- ``@dependabot` squash and merge` will squash and merge this PR after your CI passes on it
- ``@dependabot` cancel merge` will cancel a previously requested merge and block automerging
- ``@dependabot` reopen` will reopen this PR if it is closed
- ``@dependabot` close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- ``@dependabot` ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-02 07:58:48 +00:00
1bb05f2716 Bump Swatinem/rust-cache from 1.4.0 to 2.0.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 17:07:30 +00:00
e6f03f82df Fix clippy warnings 2022-07-28 15:56:22 +02:00
58d2aad309 Change binary option and add env var support 2022-07-28 15:13:49 +02:00
e3426d5b7a Improve the tasks error reporting 2022-07-28 15:12:54 +02:00
73d4869e5e Make the changes to plug the new DocumentsBatch system 2022-07-28 14:45:33 +02:00
fe32097964 Update milli v0.32 2022-07-28 14:45:10 +02:00
dc21af46e5 Merge #2634
2634: Bring `stable` changes into `main` (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 13:22:57 +00:00
22aa349e31 Merge #2633
2633: Fix highlight issue by updating milli to v0.31.2 r=ManyTheFish a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2627

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-21 11:08:37 +00:00
9cf6acb671 Fix highlight issue by updating milli to v0.31.2 2022-07-21 14:11:24 +04:00
78c5826c57 Merge #2631
2631: Update mini-dashboard to v0.2.1 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2629

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 06:37:57 +00:00
7e6f3274fa Update mini-dashboard to v0.2.1 2022-07-21 09:47:07 +04:00
94ef326be3 Merge #2630
2630: Update version for next release (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-20 13:42:51 +00:00
d01a3ab889 Update version for next release (v0.28.1) 2022-07-20 15:46:53 +04:00
ad494b6f77 Merge #2625
2625: Update link to Cloud beta form r=curquiza a=davelarkan

# Pull Request

## What does this PR do?
Fixes #2624

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Dave Larkan <davelarkan@gmail.com>
2022-07-19 11:09:12 +00:00
2f11686c81 Update link to Cloud beta form 2022-07-19 11:10:16 +01:00
01a47e2db5 Merge #2598
2598: Bring `stable` into `main` (v0.28.0) r=curquiza a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-11 14:06:02 +00:00
cfacc79ad7 Remove duplicate step in CI 2022-07-11 15:18:15 +02:00
8e370ed9ab Merge branch 'main' into stable 2022-07-11 14:41:15 +02:00
32d6af6527 Merge remote-tracking branch 'origin/release-v0.28.0' into stable 2022-07-11 14:37:21 +02:00
be3240d2dd Merge #2592
2592: Chores: Add a dedicated section for Language Support in the issue template r=curquiza a=ManyTheFish



This new section is placed above the feature proposal section because language support is kind of a sub-category of it,
and so, in reading order, one creates a feature proposal only if the issue is not related to language support.


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-07 12:37:36 +00:00
2de6868858 Chores: Add a dedicated section for Language Support in the issue template
This new section is placed above the feature proposal section because language support is kind of a sub-category of it,
and so, in reading order, one creates a feature proposal only if the issue is not related to language support.
2022-07-07 13:48:47 +02:00
d419a91207 Merge #2591
2591: Introduce the Tasks Seen event when filtering r=Kerollmops a=Kerollmops

This PR fixes #2377 by introducing the Tasks Seen analytics events.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-07 11:41:01 +00:00
a9fb5a4d50 Introduce the Tasks Seen event when filtering 2022-07-07 11:39:23 +02:00
0353537fef Merge #2579
2579: API keys: adds action * for actions r=irevoire a=phdavis1027

# Pull Request
This PR builds on `@janithpet`'s addition to DocumentsAll; it's basically a copy-and-paste job, except I used `iter.filter()` to avoid the possibility of duplication that they mentioned (see the sketch below). I'm not sure how much that matters.
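
As a concrete (and entirely hypothetical) sketch of that `iter.filter()` idea: expand the `*` action into every concrete action while filtering out the wildcard itself, so nothing is inserted twice. The `Action` variants and helper here are illustrative names, not the real key-actions code:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action { All, DocumentsAll, TasksAll, StatsAll }

const ACTIONS: [Action; 4] =
    [Action::All, Action::DocumentsAll, Action::TasksAll, Action::StatsAll];

// `*` expands to every concrete action; filtering out `Action::All`
// itself avoids the duplication a plain copy of the list would cause.
fn expand(action: Action) -> Vec<Action> {
    match action {
        Action::All => ACTIONS.iter().copied()
            .filter(|a| *a != Action::All)
            .collect(),
        other => vec![other],
    }
}
```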

Also, hi! This is my first open-source contribution and my first attempt to write Rust for anyone other than myself, so any feedback whatsoever is appreciated. 

## What does this PR do?
Fixes #2560

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Phillip Davis <phdavis1027@gmail.com>
2022-07-07 08:54:23 +00:00
da7729e4a8 Merge #2589
2589: Update create-issue-dependencies.yml r=ManyTheFish a=curquiza

Minor change to update the format in the description of the issue created: remove the useless newlines

See the format without this change: https://github.com/meilisearch/meilisearch/issues/2588

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-07 08:37:33 +00:00
074a6a0cce Fix typos in HeadAuthStore::put_api_key 2022-07-06 22:24:46 -04:00
755b1a59a2 Merge #2584
2584: Format API keys in hexa instead of base64 r=curquiza a=ManyTheFish

This PR:
- Changes API key generation and formatting to ease key generation on the user side
- Updates the `uuid` crate version

The API key can now be generated in bash as below:
```sh
echo -n $HYPHENATED_UUID | openssl dgst -sha256 -hmac $MASTER_KEY
```

Fixes the issue raised in [product/discussion#421](https://github.com/meilisearch/product/discussions/421#discussioncomment-3079410); this should not impact anything in documentation nor integrations, but it eases key generation on the user side.
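
A minimal Rust sketch of the same derivation as the bash one-liner above, assuming the `hmac` and `sha2` crates (it mirrors the command; it is not this PR's code):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

fn derive_api_key(master_key: &[u8], hyphenated_uuid: &str) -> String {
    let mut mac = Hmac::<Sha256>::new_from_slice(master_key)
        .expect("HMAC accepts keys of any length");
    mac.update(hyphenated_uuid.as_bytes());
    // Hex instead of base64: each byte becomes two lowercase hex
    // digits, matching the `openssl dgst -sha256 -hmac` output.
    mac.finalize()
        .into_bytes()
        .iter()
        .map(|byte| format!("{byte:02x}"))
        .collect()
}
```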

poke `@gmourier` 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-06 12:49:04 +00:00
bb5b18b82c Update create-issue-dependencies.yml 2022-07-06 11:01:25 +02:00
01d9560318 Merge #2587
2587: Update mini-dashboard to v0.2.0 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2469

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-06 08:54:16 +00:00
719879d4d2 Update mini-dashboard to v0.2.0 2022-07-06 08:12:17 +02:00
fb9b298645 Leave actions as HashSet 2022-07-05 21:52:50 -04:00
23f02f241e Run the code formatter 2022-07-05 21:08:56 -04:00
5d80ff41a2 Clean up put_key impl 2022-07-05 21:06:58 -04:00
f4989590db Merge #2585
2585: Add CI creates issue updating dependencies r=curquiza a=VasiliySoldatkin

Adds a CI workflow that uses a cron-scheduled GitHub Action to create an "Upgrade dependencies" issue every 3 months.
Context: [#2569](https://github.com/meilisearch/meilisearch/issues/2569)


Co-authored-by: Vasiliy Soldatkin <vasiliy.soldatkin@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-05 19:05:43 +00:00
bba5fab5e5 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:03:08 +02:00
05ffe24d64 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:40 +02:00
6f95ae9879 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:32 +02:00
480b881e15 Remove template and add GHA from review 2022-07-05 21:08:59 +03:00
43fecbf382 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:35:09 +02:00
5588a6415a Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:57 +02:00
b0757e75c4 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:49 +02:00
106be03ba8 Merge #2539
2539: Update Docker credentials r=curquiza a=curquiza

This is to avoid using the tpayet credentials and use `@meili-bot` credentials instead.

The `DOCKER_USERNAME` and `DOCKER_PASSWORD` are still present as secrets; I will remove them once v0.28.0 is fully merged (they are still used on `release-v0.28.0`).

I tested by creating a tag on the branch, and it worked: the tag was pushed to Docker Hub by meili-bot.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-05 17:22:26 +00:00
70c55208f1 Merge #2586
2586: Fix Clippy r=irevoire a=Kerollmops

This PR fixes clippy on the `release-v0.28.0` branch.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-05 15:55:00 +00:00
d56bf66022 Make clippy happy 2022-07-05 17:48:44 +02:00
2c300c72c9 Add CI creates issue updating dependencies 2022-07-05 18:28:29 +03:00
a146fd45b9 Format API keys in hexa instead of base64 2022-07-05 16:14:18 +02:00
63e1fb4f96 Run the code formatter 2022-07-04 21:49:40 -04:00
be1c6f9dc4 Update tests to include .* permissions for tasks, indexes, dumps, stats, and settings 2022-07-04 21:38:54 -04:00
c251b527b0 Add iterators over * for stats, dumps, tasks, settings, and indexes; change documents impl to prevent duplication 2022-07-04 21:38:31 -04:00
1dc3724c1f Added [...]_ALL enum members in action.rs 2022-07-04 21:38:21 -04:00
c1ad56281d Merge #2545 #2556 #2568 #2573 #2574 #2575
2545: meilisearch-lib was missing a feature in its cargo.toml r=Kerollmops a=irevoire

meilisearch-lib was missing a feature required to compile on its own.

2556: chore: `meilisearch-http` readability improvements r=curquiza a=ryanrussell

## What does this PR do?
Readability improvements in `meilisearch-http`. 

Believe these are pretty straightforward; let me know if anything needs adjusting or reverting :)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?



2568: Improve manifest for dependabot r=curquiza a=curquiza

Improve dependabot manifest
- `rebase-strategy: disabled` -> avoids the issues with bors we had in the past with the integration team; Dependabot's automatic rebase does not fit with the rebase bors tries to do
- `labels` -> better triage for when I create the release changelogs

2573: Bump docker/login-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/login-action/releases">docker/login-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/161">#161</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/170">#170</a>)</li>
<li>Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/167">#167</a>)</li>
<li>Bump <code>`@​actions/io</code>` from 1.1.1 to 1.1.2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/168">#168</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/176">#176</a>)</li>
<li>Bump https-proxy-agent from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/182">#182</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/login-action/compare/v1.14.1...v2.0.0">https://github.com/docker/login-action/compare/v1.14.1...v2.0.0</a></p>
<h2>v1.14.1</h2>
<ul>
<li>Revert to Node 12 as default runtime to fix issue for GHE users (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/160">#160</a>)</li>
</ul>
<h2>v1.14.0</h2>
<ul>
<li>Update to node 16 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/158">#158</a>)</li>
<li>Bump <code>`@​aws-sdk/client-ecr</code>` from 3.45.0 to 3.53.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/157">#157</a>)</li>
<li>Bump <code>`@​aws-sdk/client-ecr-public</code>` from 3.45.0 to 3.53.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/156">#156</a>)</li>
</ul>
<h2>v1.13.0</h2>
<ul>
<li>Handle proxy settings for aws-sdk (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/152">#152</a>)</li>
<li>Workload identity based authentication docs for GCR and GAR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/112">#112</a>)</li>
<li>Test login against ACR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/49">#49</a>)</li>
<li>Bump <code>`@​aws-sdk/client-ecr</code>` from 3.44.0 to 3.45.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/132">#132</a>)</li>
<li>Bump <code>`@​aws-sdk/client-ecr-public</code>` from 3.43.0 to 3.45.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/131">#131</a>)</li>
</ul>
<h2>v1.12.0</h2>
<ul>
<li>ECR: only set credentials if username and password are specified (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/128">#128</a>)</li>
<li>Refactor to use aws-sdk v3 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/128">#128</a>)</li>
</ul>
<h2>v1.11.0</h2>
<ul>
<li>ECR: switch implementation to use the AWS SDK (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/126">#126</a>)</li>
<li><code>ecr</code> input to specify whether the given registry is ECR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/123">#123</a>)</li>
<li>Test against Windows runner (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/126">#126</a>)</li>
<li>Update instructions for Google registry (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/127">#127</a>)</li>
<li>Update dev workflow (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/111">#111</a>)</li>
<li>Small changes for GHCR doc (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/86">#86</a>)</li>
<li>Update dev dependencies (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/85">#85</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/101">#101</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/100">#100</a>)</li>
<li>Bump <code>`@​actions/core</code>` from 1.4.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/94">#94</a> <a href="https://github-redirect.dependabot.com/docker/login-action/issues/103">#103</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/88">#88</a>)</li>
<li>Bump hosted-git-info from 2.8.8 to 2.8.9 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/83">#83</a>)</li>
<li>Bump node-notifier from 8.0.0 to 8.0.2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/82">#82</a>)</li>
<li>Bump ws from 7.3.1 to 7.5.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/81">#81</a>)</li>
<li>Bump lodash from 4.17.20 to 4.17.21 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/80">#80</a>)</li>
<li>Bump y18n from 4.0.0 to 4.0.3 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/79">#79</a>)</li>
</ul>
<h2>v1.10.0</h2>
<ul>
<li>GitHub Packages Docker Registry deprecated (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/78">#78</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="49ed152c8e"><code>49ed152</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/161">#161</a> from crazy-max/node16-runtime</li>
<li><a href="b61a9ce7bd"><code>b61a9ce</code></a> Node 16 as default runtime</li>
<li><a href="3a136a8631"><code>3a136a8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/182">#182</a> from docker/dependabot/npm_and_yarn/https-proxy-agent...</li>
<li><a href="b312880b69"><code>b312880</code></a> Update generated content</li>
<li><a href="795794e081"><code>795794e</code></a> Bump https-proxy-agent from 5.0.0 to 5.0.1</li>
<li><a href="1edf6180e0"><code>1edf618</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/179">#179</a> from docker/dependabot/github_actions/codecov/codecov...</li>
<li><a href="8e66ad4089"><code>8e66ad4</code></a> Bump codecov/codecov-action from 2 to 3</li>
<li><a href="7c79b598ea"><code>7c79b59</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/176">#176</a> from docker/dependabot/npm_and_yarn/minimist-1.2.6</li>
<li><a href="24a38e0d6d"><code>24a38e0</code></a> Bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="70e1ff84cb"><code>70e1ff8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/170">#170</a> from crazy-max/eslint</li>
<li>Additional commits viewable in <a href="https://github.com/docker/login-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/login-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting ``@dependabot` rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- ``@dependabot` rebase` will rebase this PR
- ``@dependabot` recreate` will recreate this PR, overwriting any edits that have been made to it
- ``@dependabot` merge` will merge this PR after your CI passes on it
- ``@dependabot` squash and merge` will squash and merge this PR after your CI passes on it
- ``@dependabot` cancel merge` will cancel a previously requested merge and block automerging
- ``@dependabot` reopen` will reopen this PR if it is closed
- ``@dependabot` close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- ``@dependabot` ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

2574: Bump Swatinem/rust-cache from 1.3.0 to 1.4.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/releases">Swatinem/rust-cache's releases</a>.</em></p>
<blockquote>
<h2>v1.4.0</h2>
<ul>
<li>Clean both debug and release target directories.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/blob/v1/CHANGELOG.md">Swatinem/rust-cache's changelog</a>.</em></p>
<blockquote>
<h2>1.4.0</h2>
<ul>
<li>Clean both <code>debug</code> and <code>release</code> target directories.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="cb2cf0cc7c"><code>cb2cf0c</code></a> 1.4.0</li>
<li><a href="74e8e24b6d"><code>74e8e24</code></a> Update dependencies, clean both debug and release targets</li>
<li><a href="f8f67b7515"><code>f8f67b7</code></a> Add a LICENSE file</li>
<li><a href="5b2b053862"><code>5b2b053</code></a> Improve Cache Details documentation (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/49">#49</a>)</li>
<li><a href="3bb3a9a087"><code>3bb3a9a</code></a> update deps and rebuild</li>
<li><a href="d127014599"><code>d127014</code></a> update dependencies</li>
<li><a href="801365cd81"><code>801365c</code></a> hint that checkout has to be used first (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/34">#34</a>)</li>
<li><a href="c5ed9ba6b7"><code>c5ed9ba</code></a> update dependencies and rebuild</li>
<li><a href="536c94f32c"><code>536c94f</code></a> Cache-on-failure support (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/22">#22</a>)</li>
<li>See full diff in <a href="https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=Swatinem/rust-cache&package-manager=github_actions&previous-version=1.3.0&new-version=1.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting ``@dependabot` rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- ``@dependabot` rebase` will rebase this PR
- ``@dependabot` recreate` will recreate this PR, overwriting any edits that have been made to it
- ``@dependabot` merge` will merge this PR after your CI passes on it
- ``@dependabot` squash and merge` will squash and merge this PR after your CI passes on it
- ``@dependabot` cancel merge` will cancel a previously requested merge and block automerging
- ``@dependabot` reopen` will reopen this PR if it is closed
- ``@dependabot` close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- ``@dependabot` ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- ``@dependabot` ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

2575: Bump docker/build-push-action from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/build-push-action/releases">docker/build-push-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Standalone mode support by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a>)</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>`@​crazy-max</code></a>` (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/571">#571</a>)</li>
<li>Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/573">#573</a>)</li>
<li>Bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/582">#582</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/584">#584</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/595">#595</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0">https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0</a></p>
<h2>v2.10.0</h2>
<ul>
<li>Add <code>imageid</code> output and use metadata to set <code>digest</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/569">#569</a>)</li>
<li>Add <code>build-contexts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/563">#563</a>)</li>
<li>Enhance outputs display (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/559">#559</a>)</li>
</ul>
<h2>v2.9.0</h2>
<ul>
<li><code>add-hosts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/553">#553</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/555">#555</a>)</li>
<li>Fix git context subdir example and improve README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/552">#552</a>)</li>
<li>Add e2e tests for ACR (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/548">#548</a>)</li>
<li>Add description on <code>github-token</code> option to README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/544">#544</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/549">#549</a>)</li>
</ul>
<h2>v2.8.0</h2>
<ul>
<li>Allow specifying subdirectory with default git context (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/531">#531</a>)</li>
<li>Add <code>cgroup-parent</code>, <code>shm-size</code>, <code>ulimit</code> inputs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/501">#501</a>)</li>
<li>Don't set outputs if empty or nil (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/470">#470</a>)</li>
<li>docs: example to sanitize tags with metadata-action (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/476">#476</a>)</li>
<li>docs: wrong syntax to sanitize repo slug (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/475">#475</a>)</li>
<li>docs: test before pushing your image (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/455">#455</a>)</li>
<li>readme: remove v1 section (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/500">#500</a>)</li>
<li>ci: virtual env file system info (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/510">#510</a>)</li>
<li>dev: update workflow (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/499">#499</a>)</li>
<li>Bump <code>`@​actions/core</code>` from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/160">#160</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/469">#469</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/465">#465</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/451">#451</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/459">#459</a>)</li>
</ul>
<h2>v2.7.0</h2>
<ul>
<li>Add <code>metadata</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/412">#412</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/439">#439</a>)</li>
<li>Add note to sanitize tags (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/426">#426</a>)</li>
<li>Cache backend API docs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/406">#406</a>)</li>
<li>Git context now supports subdir (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/407">#407</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/415">#415</a>)</li>
</ul>
<h2>v2.6.1</h2>
<ul>
<li>Small typo and ensure trimmed output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/400">#400</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="e551b19e49"><code>e551b19</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a> from crazy-max/node-16</li>
<li><a href="3554377aa3"><code>3554377</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a> from crazy-max/ci-fix-test</li>
<li><a href="a62bc1b22b"><code>a62bc1b</code></a> ci: fix standalone test</li>
<li><a href="c2085839e1"><code>c208583</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> from crazy-max/standalone-mode</li>
<li><a href="fcd91249e5"><code>fcd9124</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/607">#607</a> from docker/dependabot/github_actions/docker/metadata...</li>
<li><a href="0ebe720aed"><code>0ebe720</code></a> Bump docker/metadata-action from 3 to 4</li>
<li><a href="38b45804b5"><code>38b4580</code></a> Standalone mode support</li>
<li><a href="ba317382dc"><code>ba31738</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a> from docker/dependabot/npm_and_yarn/csv-parse-5.0.4</li>
<li><a href="43721d2346"><code>43721d2</code></a> Update generated content</li>
<li><a href="5ea21bf2ba"><code>5ea21bf</code></a> Fix csv-parse implementation since major update</li>
<li>Additional commits viewable in <a href="https://github.com/docker/build-push-action/compare/v2...v3">compare view</a></li>
</ul>
</details>
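The v2.10.0 inputs called out above can be combined in an ordinary build step. A minimal sketch, assuming a hypothetical image tag, host mapping and named context (none of these values come from the release notes):

<pre lang="yaml"><code># Sketch only; tag, host mapping and context value are placeholders
- uses: docker/build-push-action@v3
  with:
    push: true
    tags: user/app:latest
    add-hosts: docker:10.180.0.1
    build-contexts: |
      alpine=docker-image://alpine:3.16
</code></pre>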
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/build-push-action&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-04 12:13:46 +00:00
83e5f45a91 Merge #2576
2576: Make clippy happy r=curquiza a=Kerollmops

This PR fixes clippy.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-04 11:53:27 +00:00
aff8cd1774 Make clippy happy 2022-07-04 13:36:56 +02:00
d1296d03ea Bump docker/build-push-action from 2 to 3
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:44 +00:00
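In the repository itself, a major bump like this amounts to changing the tag on each `uses:` reference. A sketch under the assumption of a hypothetical workflow file:

# .github/workflows/publish.yml (hypothetical path)
steps:
  - name: Build and push
    uses: docker/build-push-action@v3  # previously docker/build-push-action@v2
    with:
      push: true
      tags: getmeili/meilisearch:latest  # hypothetical tag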
6f7a4d95d9 Bump Swatinem/rust-cache from 1.3.0 to 1.4.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:40 +00:00
9ea96fa5c1 Bump docker/login-action from 1 to 2
Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:38 +00:00
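A minimal usage sketch for the bumped login step, assuming Docker Hub credentials stored in hypothetical repository secrets:

steps:
  - uses: docker/login-action@v2
    with:
      username: ${{ secrets.DOCKERHUB_USERNAME }}  # hypothetical secret names
      password: ${{ secrets.DOCKERHUB_TOKEN }}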
19f732e623 Update manifest for dependabot 2022-06-29 11:35:41 +02:00
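The manifest in question is `.github/dependabot.yml`; a sketch of an entry that keeps GitHub Actions dependencies like the ones above current (the actual contents of the updated manifest are not reproduced here):

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"  # workflow files are discovered under .github/workflows
    schedule:
      interval: "monthly"  # hypothetical cadence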
d833e62282 Merge #2564 #2565 #2566 #2567
2564: Bump codecov/codecov-action from 1 to 3 r=curquiza a=dependabot[bot]

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/releases">codecov/codecov-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>@actions/core</code> from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>@vercel/ncc</code> from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>@types/node</code> from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>@types/jest</code> from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>@actions/github</code> from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>@types/node</code> from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>@actions/exec</code> from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>v2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>@vercel/ncc</code> from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>@types/node</code> from 16.6.0 to 16.9.0</li>
</ul>
<h2>v2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/458">#458</a> build(deps-dev): bump eslint from 7.31.0 to 7.32.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/465">#465</a> build(deps-dev): bump <code>@typescript-eslint/eslint-plugin</code> from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/466">#466</a> build(deps-dev): bump <code>@typescript-eslint/parser</code> from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/468">#468</a> build(deps-dev): bump <code>@types/jest</code> from 26.0.24 to 27.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/470">#470</a> build(deps-dev): bump <code>@types/node</code> from 16.4.0 to 16.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/472">#472</a> build(deps): bump path-parse from 1.0.6 to 1.0.7</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/473">#473</a> build(deps-dev): bump <code>@types/jest</code> from 27.0.0 to 27.0.1</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md">codecov/codecov-action's changelog</a>.</em></p>
<blockquote>
<h2>3.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> Incorporate <code>xcode</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> build(deps-dev): bump <code>@vercel/ncc</code> from 0.33.3 to 0.33.4</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> build(deps-dev): bump <code>@types/node</code> from 17.0.23 to 17.0.25</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
</ul>
<h2>3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>@actions/core</code> from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>@vercel/ncc</code> from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>@types/node</code> from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>@types/jest</code> from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>@actions/github</code> from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>@types/node</code> from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>@actions/exec</code> from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>@vercel/ncc</code> from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>@types/node</code> from 16.6.0 to 16.9.0</li>
</ul>
<h2>2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="81cd2dc814"><code>81cd2dc</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> from codecov/feat-xcode</li>
<li><a href="a03184e530"><code>a03184e</code></a> feat: add xcode support</li>
<li><a href="6a6a9ae7b1"><code>6a6a9ae</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> from codecov/dependabot/npm_and_yarn/vercel/ncc-0.33.4</li>
<li><a href="92a872a5e7"><code>92a872a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> from codecov/dependabot/npm_and_yarn/types/node-17.0.25</li>
<li><a href="43a9c182dd"><code>43a9c18</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> from codecov/dependabot/npm_and_yarn/jest-junit-13.2.0</li>
<li><a href="13ce822ccd"><code>13ce822</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/690">#690</a> from codecov/ci-v3</li>
<li><a href="4d6dbaaea6"><code>4d6dbaa</code></a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
<li><a href="98f0f19300"><code>98f0f19</code></a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.23 to 17.0.25</li>
<li><a href="d3021d9910"><code>d3021d9</code></a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.33.3 to 0.33.4</li>
<li><a href="2c83f35c20"><code>2c83f35</code></a> Update makefile to v3</li>
<li>Additional commits viewable in <a href="https://github.com/codecov/codecov-action/compare/v1...v3">compare view</a></li>
</ul>
</details>
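Moving from v1 straight to v3 is mostly a tag change in the workflow. A minimal sketch of a v3 upload step, where the secret name and coverage path are assumptions rather than anything taken from these notes:

<pre lang="yaml"><code># Sketch; secret name and report path are hypothetical
- uses: codecov/codecov-action@v3
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: ./coverage.xml
    fail_ci_if_error: true
</code></pre>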
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=codecov/codecov-action&package-manager=github_actions&previous-version=1&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


2565: Bump docker/setup-qemu-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-qemu-action/releases">docker/setup-qemu-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/43">#43</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a>)</li>
<li>Bump <code>@actions/core</code> from 1.3.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/37">#37</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/39">#39</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/41">#41</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/38">#38</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0">https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0</a></p>
<h2>v1.2.0</h2>
<ul>
<li>Display image information (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/36">#36</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.3.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/35">#35</a>)</li>
</ul>
<h2>v1.1.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/30">#30</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.6 to 1.2.7 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/29">#29</a>)</li>
</ul>
<h2>v1.0.2</h2>
<ul>
<li>Enhance workflow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/26">#26</a>)</li>
<li>Container based developer flow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/19">#19</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/20">#20</a>)</li>
</ul>
<h2>v1.0.1</h2>
<ul>
<li>Fix CVE-2020-15228</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8b122486ce"><code>8b12248</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a> from crazy-max/node-16</li>
<li><a href="466d53193c"><code>466d531</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/50">#50</a> from crazy-max/update-readme</li>
<li><a href="607c1922b5"><code>607c192</code></a> simplify usage example</li>
<li><a href="d7849ecb9c"><code>d7849ec</code></a> Node 16 as default runtime</li>
<li><a href="2d4bfe71c9"><code>2d4bfe7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a> from crazy-max/update-dev</li>
<li><a href="224b802eb3"><code>224b802</code></a> chore: update dev dependencies and workflow</li>
<li><a href="95bd865778"><code>95bd865</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a> from docker/dependabot/npm_and_yarn/actions/exec-1.1.1</li>
<li><a href="cfd091faa1"><code>cfd091f</code></a> Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="d2a60302b8"><code>d2a6030</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/45">#45</a> from docker/dependabot/github_actions/actions/checkout-3</li>
<li><a href="97dc484a91"><code>97dc484</code></a> Bump actions/checkout from 2 to 3</li>
<li>Additional commits viewable in <a href="https://github.com/docker/setup-qemu-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
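Since the v2 release above only swaps the default runtime to Node 16, existing usage carries over unchanged. A minimal sketch (the platform list is an assumption):

<pre lang="yaml"><code>- uses: docker/setup-qemu-action@v2
  with:
    platforms: arm64,amd64  # hypothetical platform list
</code></pre>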
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-qemu-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


2566: Bump actions/checkout from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/releases">actions/checkout's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Updated to the node16 runtime by default
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0 to run, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<h2>v2.4.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>) by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/776">actions/checkout#776</a></li>
<li>Prepare changelog for v2.4.2. by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/778">actions/checkout#778</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/checkout/compare/v2...v2.4.2">https://github.com/actions/checkout/compare/v2...v2.4.2</a></p>
<h2>v2.4.1</h2>
<ul>
<li>Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></li>
</ul>
<h2>v2.4.0</h2>
<ul>
<li>Convert SSH URLs like <code>org-&lt;ORG_ID&gt;@github.com:</code> to <code>https://github.com/</code> - <a href="https://github-redirect.dependabot.com/actions/checkout/pull/621">pr</a></li>
</ul>
<h2>v2.3.5</h2>
<p>Update dependencies</p>
<h2>v2.3.4</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/379">Add missing <code>await</code>s</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/360">Swap to Environment Files</a></li>
</ul>
<h2>v2.3.3</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/345">Remove Unneeded commit information from build logs</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/326">Add Licensed to verify third party dependencies</a></li>
</ul>
<h2>v2.3.2</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/320">Add Third Party License Information to Dist Files</a></p>
<h2>v2.3.1</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></p>
<h2>v2.3.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></p>
<h2>v2.2.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></p>
<h2>v2.1.1</h2>
<p>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</p>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>v3.0.2</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/770">Add input <code>set-safe-directory</code></a></li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/762">Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/744">Bumped various npm package versions</a></li>
</ul>
<h2>v3.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/689">Update to node 16</a></li>
</ul>
<h2>v2.3.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></li>
</ul>
<h2>v2.3.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></li>
</ul>
<h2>v2.2.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></li>
</ul>
<h2>v2.1.1</h2>
<ul>
<li>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</li>
</ul>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/157">Add submodule support</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/144">Follow proxy settings</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/141">Fix ref for pr closed event when a pr is merged</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/128">Fix issue checking detached when git less than 2.22</a></li>
</ul>
<h2>v2.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/108">Do not pass cred on command line</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/107">Add input persist-credentials</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/104">Fallback to REST API to download repo</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="2541b1294d"><code>2541b12</code></a> Prepare changelog for v3.0.2. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/777">#777</a>)</li>
<li><a href="0ffe6f9c55"><code>0ffe6f9</code></a> Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>)</li>
<li><a href="dcd71f6466"><code>dcd71f6</code></a> Enforce safe directory (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/762">#762</a>)</li>
<li><a href="add3486cc3"><code>add3486</code></a> Patch to fix the dependbot alert. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/744">#744</a>)</li>
<li><a href="5126516654"><code>5126516</code></a> Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/741">#741</a>)</li>
<li><a href="d50f8ea767"><code>d50f8ea</code></a> Add v3.0 release information to changelog (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/740">#740</a>)</li>
<li><a href="2d1c1198e7"><code>2d1c119</code></a> update test workflows to checkout v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/709">#709</a>)</li>
<li><a href="a12a3943b4"><code>a12a394</code></a> update readme for v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/708">#708</a>)</li>
<li><a href="8f9e05e482"><code>8f9e05e</code></a> Update to node 16 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/689">#689</a>)</li>
<li>See full diff in <a href="https://github.com/actions/checkout/compare/v2...v3">compare view</a></li>
</ul>
</details>
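A sketch tying the changelog entries above together; the values are illustrative, not taken from any particular workflow:

<pre lang="yaml"><code>- uses: actions/checkout@v3
  with:
    fetch-depth: 0            # fetch all history for all tags and branches (v2.2.0)
    set-safe-directory: true  # input added in v3.0.2 (#770)
</code></pre>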
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


2567: Bump docker/setup-buildx-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0">https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0</a></p>
<h2>v1.7.0</h2>
<ul>
<li>Standalone mode by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> in (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/119">#119</a>)</li>
<li>Update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/114">#114</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/130">#130</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/108">#108</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/109">#109</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/110">#110</a>)</li>
<li>Bump actions/checkout from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/126">#126</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.7.1 to 1.7.2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/128">#128</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/129">#129</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/132">#132</a>)</li>
<li>Bump codecov/codecov-action from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/133">#133</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/136">#136</a>)</li>
</ul>
<h2>v1.6.0</h2>
<ul>
<li>Add <code>config-inline</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/106">#106</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/104">#104</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/101">#101</a>)</li>
</ul>
<h2>v1.5.1</h2>
<ul>
<li>Explicit version spec for caching (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/100">#100</a>)</li>
</ul>
<h2>v1.5.0</h2>
<ul>
<li>Allow building buildx from source (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/99">#99</a>)</li>
</ul>
<h2>v1.4.1</h2>
<ul>
<li>Fix <code>docker: invalid reference format</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/97">#97</a>)</li>
</ul>
<h2>v1.4.0</h2>
<ul>
<li>Update dev deps (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/95">#95</a>)</li>
<li>Use built-in <code>getExecOutput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/94">#94</a>)</li>
<li>Use <code>core.getBooleanInput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/93">#93</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/85">#85</a>)</li>
<li>Bump y18n from 4.0.0 to 4.0.3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/91">#91</a>)</li>
<li>Bump hosted-git-info from 2.8.8 to 2.8.9 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/89">#89</a>)</li>
<li>Bump ws from 7.3.1 to 7.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/90">#90</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.6.1 to 1.7.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/82">#82</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/86">#86</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.4.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/80">#80</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/87">#87</a>)</li>
</ul>
<h2>v1.3.0</h2>
<ul>
<li>Display BuildKit version (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/72">#72</a>)</li>
</ul>
<h2>v1.2.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/71">#71</a>)</li>
<li>Add test job for <code>config</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/68">#68</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="dc7b9719a9"><code>dc7b971</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a> from crazy-max/node16</li>
<li><a href="f55bc08278"><code>f55bc08</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/141">#141</a> from crazy-max/fix-test</li>
<li><a href="aa877a9d36"><code>aa877a9</code></a> ci: fix standalone test</li>
<li><a href="130c56f342"><code>130c56f</code></a> Node 16 as default runtime</li>
<li>See full diff in <a href="https://github.com/docker/setup-buildx-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
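Inputs like <code>config-inline</code> (added in v1.6.0 above) carry over to v2 unchanged; a sketch with an assumed BuildKit worker setting:

<pre lang="yaml"><code>- uses: docker/setup-buildx-action@v2
  with:
    config-inline: |  # inline BuildKit daemon config; value below is an assumption
      [worker.oci]
        max-parallelism = 2
</code></pre>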
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:34:57 +00:00
a733271ced Merge #2563
2563: Bump docker/metadata-action from 3 to 4 r=curquiza a=dependabot[bot]

Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/releases">docker/metadata-action's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/176">#176</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Do not sanitize before pattern matching by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/201">#201</a>)
<ul>
<li>Breaking change with <code>type=match</code> pattern matching</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0">https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0</a></p>
<h2>v3.8.0</h2>
<ul>
<li>Add attribute to enable/disable images by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/193">#193</a>)</li>
<li>Add <code>is_default_branch</code> global expression by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/192">#192</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/197">#197</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/198">#198</a>)</li>
<li>Update fixtures (dev) by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/190">#190</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/185">#185</a>)</li>
<li>Bump moment from 2.29.2 to 2.29.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/187">#187</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/195">#195</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0">https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0</a></p>
<h2>v3.7.0</h2>
<ul>
<li>Handle comments for multi-line inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/172">#172</a>)</li>
<li>Missing <code>json</code> output in <code>action.yml</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/167">#167</a>)</li>
<li>Update dev dependencies and workflow (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/175">#175</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/182">#182</a>)</li>
<li>Bump moment from 2.29.1 to 2.29.2 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/180">#180</a>)</li>
<li>Bump <code>@actions/github</code> from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/179">#179</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/173">#173</a>)</li>
</ul>
<h2>v3.6.2</h2>
<ul>
<li>Handle raw statement for pre-release (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/155">#155</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/156">#156</a>)</li>
</ul>
<h2>v3.6.1</h2>
<ul>
<li>Preserve quotes inside unquoted field (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/153">#153</a>)</li>
</ul>
<h2>v3.6.0</h2>
<ul>
<li><code>base_ref</code> global expression (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/142">#142</a>)</li>
<li>Trim tags and flavor inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/143">#143</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/135">#135</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/134">#134</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/132">#132</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/131">#131</a>)</li>
</ul>
<h2>v3.5.0</h2>
<ul>
<li>Add global expression <code>date</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/121">#121</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/122">#122</a>)</li>
</ul>
<h2>v3.4.1</h2>
<ul>
<li>Only return edge if branch matches (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/115">#115</a>)</li>
</ul>
<h2>v3.4.0</h2>
<ul>
<li>PEP 440 support (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/108">#108</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Upgrade guide</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/blob/master/UPGRADE.md">docker/metadata-action's upgrade guide</a>.</em></p>
<blockquote>
<h1>Upgrade notes</h1>
<h2>v2 to v3</h2>
<ul>
<li>Repository has been moved to docker org. Replace <code>crazy-max/ghaction-docker-meta@v2</code> with <code>docker/metadata-action@v4</code></li>
<li>The default bake target has been changed: <code>ghaction-docker-meta</code> &gt; <code>docker-metadata-action</code></li>
</ul>
<h2>v1 to v2</h2>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#inputs">inputs</a>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-sha"><code>tag-sha</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-edge--tag-edge-branch"><code>tag-edge</code> / <code>tag-edge-branch</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-semver"><code>tag-semver</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-match--tag-match-group"><code>tag-match</code> / <code>tag-match-group</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-latest"><code>tag-latest</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-schedule"><code>tag-schedule</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-custom--tag-custom-only"><code>tag-custom</code> / <code>tag-custom-only</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#label-custom"><code>label-custom</code></a></li>
</ul>
</li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#basic-workflow">Basic workflow</a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#semver-workflow">Semver workflow</a></li>
</ul>
<h3>inputs</h3>
<table>
<thead>
<tr>
<th>New</th>
<th>Unchanged</th>
<th>Removed</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>tags</code></td>
<td><code>images</code></td>
<td><code>tag-sha</code></td>
</tr>
<tr>
<td><code>flavor</code></td>
<td><code>sep-tags</code></td>
<td><code>tag-edge</code></td>
</tr>
<tr>
<td><code>labels</code></td>
<td><code>sep-labels</code></td>
<td><code>tag-edge-branch</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-semver</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-match</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-match-group</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-latest</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-schedule</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-custom</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-custom-only</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>label-custom</code></td>
</tr>
</tbody>
</table>
<h4><code>tag-sha</code></h4>
<pre lang="yaml"><code>tags: |
  type=sha
</code></pre>
<h4><code>tag-edge</code> / <code>tag-edge-branch</code></h4>
<pre lang="yaml"><code>tags: |
  # default branch
  type=edge
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
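Putting the upgrade table above into practice: the removed v1 inputs fold into the new <code>tags</code> input as <code>type=...</code> entries. A sketch with a hypothetical image name:

<pre lang="yaml"><code>- uses: docker/metadata-action@v4
  id: meta
  with:
    images: user/app  # hypothetical image name
    tags: |
      type=sha   # replaces tag-sha
      type=edge  # replaces tag-edge
</code></pre>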
Commits:
- [`69f6fc9`](69f6fc9d46) Merge pull request [#203](https://github-redirect.dependabot.com/docker/metadata-action/issues/203) from crazy-max/san-fix
- [`2f5b5ae`](2f5b5ae8bf) Sanitize tag earlier
- [`f206c36`](f206c36955) Merge pull request [#202](https://github-redirect.dependabot.com/docker/metadata-action/issues/202) from crazy-max/v4-prep
- [`a20adfa`](a20adfa74e) readme: set metadata-action to v4
- [`26b9439`](26b9439ce3) Merge pull request [#201](https://github-redirect.dependabot.com/docker/metadata-action/issues/201) from crazy-max/fix-sanitization
- [`467883f`](467883f452) Merge pull request [#176](https://github-redirect.dependabot.com/docker/metadata-action/issues/176) from crazy-max/node-16
- [`5edf56f`](5edf56f2c4) Node 16 as default runtime
- [`678218f`](678218f2be) Note about image name and tag sanitization
- [`e44c1fb`](e44c1fbe6e) Do not sanitize before pattern matching
- See full diff in [compare view](https://github.com/docker/metadata-action/compare/v3...v4)


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/metadata-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---

Dependabot commands and options:

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)



Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:07:10 +00:00
28609c4176 chore(auth test): Correct compute_authorized_search macro
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:26 -05:00
a626cf4c99 chore: test comment readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:25 -05:00
9b660e1058 chore(routes): correct typo
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:22 -05:00
38b85ec547 Bump docker/setup-buildx-action from 1 to 2
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:43 +00:00
ed185fb636 Bump actions/checkout from 2 to 3
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:41 +00:00
f7b47b43f4 Bump docker/setup-qemu-action from 1 to 2
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:38 +00:00
8e703fbabe Bump codecov/codecov-action from 1 to 3
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:35 +00:00
7ced5c2cc7 Bump docker/metadata-action from 3 to 4
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:32 +00:00
ef95d1d545 Merge #2561
2561: Add dependabot for GHA r=Kerollmops a=curquiza

Don't panic: I'm only adding dependabot to update our CI, not the Rust dependencies; indeed, no easy command exists for the latter.
If the notifications become too frequent, as usual, I will remove it.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-28 20:02:30 +00:00
f98c8d7f8b Add dependabot for GHA 2022-06-28 18:58:23 +02:00
4862993482 Merge #2525
2525: Auth: Provide all document related permissions for action document.* r=Kerollmops a=janithpet

Added an `Action::DocumentsAll` identifier as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486), along with the other necessary changes in `action.rs`.

Inside `store.rs`, added an extra condition in `HeedAuthStore::put_api_key` to append all document-related permissions if `key.actions.contains(&DocumentsAll)`.

Updated the tests as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486).

I am quite new to Rust, so please let me know if I have made any mistakes; have I written the code in the most idiomatic/efficient way? I am aware that the way I append the document permissions could create duplicates in the `actions` vector, but I am not sure how to fix that in a simple way (other than using other dependencies like [itertools](https://github.com/rust-itertools/itertools), for example).
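
For illustration, here is a minimal sketch of one way to expand the `documents.*` action while avoiding duplicates, using a `HashSet`; the action names are hypothetical and this is not the actual Meilisearch code:

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Action {
    DocumentsAll,
    DocumentsAdd,
    DocumentsGet,
    DocumentsDelete,
}

/// Expand `documents.*` into every document-related action.
/// Collecting into a `HashSet` first avoids pushing duplicates
/// into the final `actions` vector.
fn expand_actions(actions: &[Action]) -> Vec<Action> {
    let mut set: HashSet<Action> = actions.iter().copied().collect();
    if set.contains(&Action::DocumentsAll) {
        set.extend([
            Action::DocumentsAdd,
            Action::DocumentsGet,
            Action::DocumentsDelete,
        ]);
    }
    set.into_iter().collect()
}
```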

## What does this PR do?
Fixes #2080 

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: janithPet <jpetangoda@gmail.com>
2022-06-28 14:02:06 +00:00
8a64ee0c14 Merge #2557
2557: add more tests on the formatted route r=Kerollmops a=irevoire

We had a bunch of tests trying to send an array of arrays in a GET request; this is actually not supported, so I updated the tests to only send a single array or a direct string.

Also, the real tests ensuring that arrays of arrays are well handled live in milli, so I don't think we should lose time trying to « improve » our test surface on this point.

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-28 13:41:53 +00:00
05ee2eff01 add more tests on the formatted route 2022-06-28 13:17:55 +02:00
7ee8855499 Merge #2553
2553: Fix ci binary push r=irevoire a=curquiza

Skip the version check when releasing an RC in the binary publish workflow.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 11:18:40 +00:00
7f4fab876d Add fix to publish-binaries.yml 2022-06-27 13:11:58 +02:00
688d6f704b Merge #2549
2549: Fix CI checking version compatibility r=irevoire a=curquiza

- Fix the CI `if` checks regarding the created `stable` variable
- Fix the command removing the `refs/tags/v` prefix from `GITHUB_REF` in the `check-release.sh` script
- Change the `check-release.sh` script from `sh` to `bash`
- Fix an error message

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 08:28:31 +00:00
f83188fd60 Fix CI with check-release.sh script 2022-06-24 13:11:04 +02:00
9e261b996f Merge #2543
2543: fix all the array on the search get route and improve the tests r=curquiza a=irevoire

fix #2527

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-23 14:51:36 +00:00
90b0a4e99f Merge #2546
2546: Bump milli to 0.31.1 r=irevoire a=Kerollmops



Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-23 11:07:20 +00:00
dad86fc3d6 Make the changes necessary to use milli 0.31.1 2022-06-23 10:47:49 +02:00
7feb15df28 Bump milli to 0.31.1 2022-06-23 10:47:48 +02:00
bf865f51bb Merge #2544
2544: Fix content of dump/assets for testing r=curquiza a=loiclec

This is just a change to the content of two .dump files used in the integration tests.

Those files contained settings with the criterion `desc(fame)`, which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of `desc(field)` or `asc(field)` were changed to 
`field:desc` and `field:asc`.

The tests were (wrongly) passing because the ranking rules were never parsed.

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-06-22 15:12:19 +00:00
13f258513f meilisearch-lib was missing a feature in its cargo.toml 2022-06-22 16:44:16 +02:00
f8aa21bc16 Fix content of dump/assets for testing
Some contained settings with the criterion desc(fame), which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of desc(field) or asc(field) were changed to 
field:desc and field:asc
2022-06-22 14:51:52 +02:00
1ffe90bf15 Merge #2530
2530: Check the version in Cargo.toml before publishing r=irevoire a=curquiza

Fixes #2079 

Also
- improves the current docker CI for v0.28.0: the previous implementation would run 2 CI jobs instead of just one for the official release
- moves the `is-latest-release.sh` script and updates the documentation comment
- fixes the version of permissive-json-pointer

How to test the script?

```
export GITHUB_REF='refs/tags/v0.28.0'
sh .github/scripts/check-release.sh
```

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-22 12:42:29 +00:00
c47369b502 fix all the array on the search get route and improve the tests 2022-06-22 14:33:44 +02:00
32c8846514 Rollback 0.0.0 versionning 2022-06-22 12:20:12 +02:00
7490383d4f Update the not-released version in Cargo.toml files 2022-06-21 19:17:33 +02:00
c6ed756dbc Update script after review 2022-06-21 10:46:32 +02:00
de16de20f4 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:24 +02:00
c484d28646 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:17 +02:00
5318e53248 Move is-latest-release.sh script into the scripts folder 2022-06-21 10:12:00 +02:00
06e05cc4f8 Update Docker credentials 2022-06-20 14:39:50 +02:00
ba839a909f Merge #2537
2537: Change information place regarding release assets r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-20 08:40:25 +00:00
8b98303191 Change information place regarding release assets 2022-06-20 10:36:34 +02:00
2dde6fadb4 Check the version in Cargo.toml before publishing 2022-06-20 10:22:01 +02:00
eb8d53a915 Merge #2529
2529: Improve docker CI: push vX.Y tag (without patch) to DockerHub (for v0.28.0) r=curquiza a=curquiza

Bringing a commit from main ([5ae5b06](5ae5b06018)) into release-v0.28.0 to have this change already applied for v0.28.0

Following the one merged on main: https://github.com/meilisearch/meilisearch/pull/2521

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 15:38:25 +00:00
10f3150150 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 17:14:09 +02:00
54cd9976d7 Merge #2521
2521: Improve docker CI: push `vX.Y` tag (without patch) to DockerHub (#2507) r=curquiza a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2507

Fixes #2497 

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 13:28:27 +00:00
5ae5b06018 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 09:51:20 +02:00
6f910f89eb Ran formatter on the code. 2022-06-15 22:23:38 +01:00
b455c3897f Merge #2524
2524: Add the specific routes for the pagination and faceting settings r=curquiza a=Kerollmops

Fixes #2522

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-15 17:01:52 +00:00
9a8fb6c55a Updated actions identifiers to be in a more pleasing order 2022-06-15 17:27:41 +01:00
4016161035 Provide all document related permissions for action document.* 2022-06-15 16:11:39 +01:00
9d692ba1c6 Add more tests for the pagination and faceting subsettings routes 2022-06-15 15:53:43 +02:00
22e1ac969a Add specific routes for the pagination and faceting settings 2022-06-15 15:27:06 +02:00
3340af1ba9 Merge #2517
2517: Fix typos in `tasks/.rs` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

## What does this PR do?
Readability in `tasks/.rs` files.

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-15 09:27:21 +00:00
49d509936b Merge #2520
2520: Use nightly for cargo fmt in CI (for `release-v0.28.0` branch) r=Kerollmops a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2519

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 09:08:40 +00:00
e46b853fbf Merge #2519
2519: Use nightly for cargo fmt in CI r=Kerollmops a=curquiza

Rustfmt CI currently does not pass on main with nightly, see https://github.com/meilisearch/meilisearch/pull/2517#issuecomment-1156138216

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 08:35:46 +00:00
44e004d895 Use nightly for cargo fmt in CI 2022-06-15 10:33:03 +02:00
71bf9b5b9b docs: Readability improvements in tasks/
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-14 20:38:45 -05:00
053071d866 Merge #2502
2502: test dump v5 r=ManyTheFish a=MarinPostma

this PR adds a test for dump v5


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-13 10:11:22 +00:00
0f6e650ba2 Merge #2508
2508: docs: Readability improvements in `download-latest.sh` and `src/analytics/.rs` r=Kerollmops a=ryanrussell

# Pull Request

## What does this PR do?
Improves readability in `download-latest.sh` and in the `src/analytics/.rs` files

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-13 08:18:17 +00:00
97daea5a66 ignore dump test on windows 2022-06-13 09:16:36 +01:00
8990b12609 docs: Readability fixes in src/analytics/.rs files
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:21:05 -05:00
07a35c6445 docs: Bash comment readability fixes
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:20:28 -05:00
4fc73195e6 Merge #2466
2466: index resolver tests r=MarinPostma a=MarinPostma

add more index resolver tests


depends on #2455 

followup #2453 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 17:50:01 +00:00
1425d62a31 test dump v5 2022-06-09 18:35:17 +02:00
87d4bf672c Merge #2500
2500: Bump milli r=Kerollmops a=irevoire

Fix #2380

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-09 16:35:00 +00:00
2063fbd985 chore: bump milli 2022-06-09 18:34:03 +02:00
de356061db Merge #2414
2414: Improve index uid validation upon API key creation r=Kerollmops a=pierre-l

- ~Use an IndexUid newtype to enforce stronger constraints~
- ~`cargo update -p vergen`~ (`rustup update` was the proper fix for this)
- Add a new `meilisearch_types` crate
- Move `meilisearch_error` to `meilisearch_types::error`
- Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
- Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
- Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`
- Use the `IndexUid` and `StarOr` in `meilisearch_auth::Key`

Fixes #2158


Co-authored-by: pierre-l <pierre.larger@gmail.com>
2022-06-09 15:41:51 +00:00
9a6841c7ce remove unused test 2022-06-09 17:10:56 +02:00
354f7fb2bf test index_update 2022-06-09 17:10:55 +02:00
0333bad057 delete documents test 2022-06-09 17:10:55 +02:00
0e416b4bcd delete index test 2022-06-09 17:10:55 +02:00
20dd259f23 Merge #2455
2455: refactor perform task r=curquiza a=MarinPostma

Refactor some index resolver functions to make duties more consistent, and code easier to test


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 14:49:59 +00:00
18095fa4e1 Merge #2498
2498: Update CONTRIBUTING.md to add the release process r=Kerollmops a=curquiza

Add release process to the contributing

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 14:23:49 +00:00
b8745420da Use the IndexUid and StarOr in meilisearch_auth::Key
Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`

Fixes #2158
2022-06-09 16:14:15 +02:00
36cb09eb25 Add a new meilisearch_types crate
Move `meilisearch_error` to `meilisearch_types::error`
Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
2022-06-09 16:14:13 +02:00
8fc3b7d3b0 refactor process_document_addition_batch 2022-06-09 14:59:20 +02:00
64e3096790 process_task updates task events 2022-06-09 14:59:20 +02:00
b594d49def add IndexResolver BatchHandler tests 2022-06-09 14:59:19 +02:00
fbba67fbe9 add mocker to IndexResolver 2022-06-09 14:59:19 +02:00
232b2baaa3 Update TOC 2022-06-09 14:49:49 +02:00
02c5c193a2 Update CONTRIBUTING.md to add the release process 2022-06-09 14:48:50 +02:00
b9b32d65a8 Merge #2494
2494: Introduce the new faceting and pagination settings r=ManyTheFish a=Kerollmops

This PR introduces two new settings following the newly created spec https://github.com/meilisearch/specifications/pull/157:
 - The `faceting.max_values_per_facet` one describes the maximum number of values (each with a count) associated with a value in a facet distribution query.
 - The `pagination.limited_to` one describes the maximum number of documents that a search query can ever return.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-09 12:09:21 +00:00
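
As a rough illustration of the two settings introduced in #2494 above, here is one possible shape for the corresponding Rust types; the field names follow the spec wording and are assumptions, not the actual Meilisearch definitions:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")] // the JSON API exposes camelCase names
struct FacetingSettings {
    /// Maximum number of (value, count) pairs returned per facet
    /// in a facet distribution query.
    max_values_per_facet: Option<usize>,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct PaginationSettings {
    /// Maximum number of documents a search query can ever return.
    limited_to: Option<usize>,
}
```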
680606fd82 Merge #2491
2491: Improve Docker CIs r=curquiza a=curquiza

Close #1901 

Continuing work of 

- [x] Merge the two docker CIs into one (#2477). The `latest` tag is still only pushed for the official release.
- [x] Add a cron job in GHA to run the CI (without publishing the image) every day. This checks that the Docker build keeps working.

Co-authored-by: Lawrence Chou <lawrencechou1024@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 11:26:12 +00:00
4a494ad2fa Add schedule to the CI 2022-06-09 11:50:20 +02:00
2b2e571c76 Merge #2460
2460: Create custom error types for `TaskType`, `TaskStatus`, and `IndexUid` r=Kerollmops a=walterbm

# Pull Request

## What does this PR do?
Fixes #2443 by making the following changes:

- Add custom `TaskTypeError` for `TaskType::from_str` 
- Add custom `TaskStatusError` for `TaskStatus::from_str`
- Add custom `IndexUidFormatError` for `IndexUid::from_str`
- Implement `From<IndexUidFormatError> for IndexResolverError` to convert between errors
- Replace all usages of `IndexUid::new` with `IndexUid::from_str`
    - **NOTE** I am relatively new to Rust and I struggled a lot with this final part. This PR ended up with a messy error conversion which does not seem ideal. Please let me know if you have any suggestions for how to make this better and I'll be happy to make any updates!

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: walter <walter.beller.morales@gmail.com>
2022-06-09 09:10:28 +00:00
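
For illustration, a minimal sketch of the custom-error pattern #2460 above introduces for the `from_str` implementations (illustrative variants and messages, not the actual Meilisearch code):

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug)]
struct TaskStatusError {
    invalid_status: String,
}

impl fmt::Display for TaskStatusError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid task status `{}`", self.invalid_status)
    }
}

#[derive(Debug)]
enum TaskStatus {
    Enqueued,
    Processing,
    Succeeded,
    Failed,
}

impl FromStr for TaskStatus {
    type Err = TaskStatusError;

    // Matching on the lowercased input keeps the parsing
    // case-insensitive, as the tasks spec requires.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_lowercase().as_str() {
            "enqueued" => Ok(TaskStatus::Enqueued),
            "processing" => Ok(TaskStatus::Processing),
            "succeeded" => Ok(TaskStatus::Succeeded),
            "failed" => Ok(TaskStatus::Failed),
            _ => Err(TaskStatusError { invalid_status: s.to_string() }),
        }
    }
}
```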
6f0d3472b1 Change the test for the new pagination.limited_to setting 2022-06-09 10:56:43 +02:00
5cd13cc303 Add a test to validate the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
1e3dcbea3f Plug the pagination.limited_to setting 2022-06-09 10:56:42 +02:00
b96399d24b Plug the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
5450b5ced3 Add the faceting.max_values_per_facet setting 2022-06-09 10:54:32 +02:00
c924614527 Bump milli to 0.29.2 2022-06-09 10:54:28 +02:00
96d4fd54bb Change the index uid format check for better legibility 2022-06-08 19:58:47 -04:00
3e5d6be86b Rename TaskType::from_str parameter to 'type_' 2022-06-08 19:57:45 -04:00
2b944ecd89 Remove IndexUid::new and replace with IndexUid::from_str 2022-06-08 19:56:01 -04:00
db42268888 Merge #2473
2473: fix blocking in dumps r=irevoire a=MarinPostma

This PR fixes two blocking calls in the dump process.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 17:14:46 +00:00
108b3520de fix blocking auth controller dump 2022-06-08 18:19:29 +02:00
f64b824c45 Merge #2493
2493: Update version for next release (v0.28.0) r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-08 16:02:03 +00:00
fc4990b968 Update version for next release (v0.28.0) 2022-06-08 17:59:18 +02:00
39a1dcb32c Update .github/workflows/publish-docker.yml 2022-06-08 17:27:03 +02:00
afcc493480 Merge publish-docker-latest.yml & publish-docker-tag.yml (#2477)
close #1901
2022-06-08 17:17:20 +02:00
6171f17f1d Merge #2468
2468: Update milli 0.29 r=Kerollmops a=ManyTheFish

- [x] Update milli to 0.29
- [x] Integrate charabia
- [x] Set disabled_words to default when Index::exact_words returns None
- [x] Fix ranking rules integration test

fixes #2375
fixes #2144
fixes #2417
fixes #2407

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 14:29:20 +00:00
55169ff914 Fix test get_document_s_nested_attributes_to_retrieve 2022-06-08 15:56:06 +02:00
0a16f71563 Increase wait_task waiting time 2022-06-08 15:56:03 +02:00
bd280f75e7 Merge #2487 #2489
2487: Feat(auth): Use hmac instead of sha256 to derive API keys from master key r=MarinPostma a=ManyTheFish

Wrap sha256 in HMAC to derive new API keys from the master key.
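
A minimal sketch of that derivation, assuming the `hmac`, `sha2`, and `base64` crates; the exact payload and names are illustrative, not the actual Meilisearch code:

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};
use hmac::{Hmac, Mac};
use sha2::{Digest, Sha256};

type HmacSha256 = Hmac<Sha256>;

fn derive_api_key(master_key: &[u8], key_uid: &[u8]) -> String {
    // Hash the master key first (see the commit below), so the HMAC
    // key always has a fixed length.
    let hashed_master_key = Sha256::digest(master_key);
    let mut mac = HmacSha256::new_from_slice(hashed_master_key.as_slice())
        .expect("HMAC accepts keys of any length");
    mac.update(key_uid);
    // Encoded in base64 rather than hex, per a later commit in this list.
    STANDARD.encode(mac.finalize().into_bytes().as_slice())
}
```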


2489: Fix(auth): Authorization tests were not testing keys unrestricted on index r=Kerollmops a=ManyTheFish



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 13:48:07 +00:00
617802bae7 Merge #2480
2480: Bring release v0.27.2 changes into stable r=Kerollmops a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-08 13:08:26 +00:00
1a7631c807 Hash master_key before passing it to HMAC 2022-06-08 14:54:47 +02:00
17f30c2b2d Fix(auth): Authorization tests were not testing keys unrestricted on index 2022-06-08 14:52:32 +02:00
09938c9b6f Patch ranking rules error test 2022-06-08 14:38:09 +02:00
f5306eb5b0 Set disabled_words to default when Index::exact_words returns None 2022-06-08 14:38:09 +02:00
173eea06e1 Replace old tokenizer by charabia 2022-06-08 14:38:09 +02:00
8d09772334 Update milli 2022-06-08 14:38:05 +02:00
987a7f8926 Wrap sha256 in HMAC instead of using sha256 directly 2022-06-08 14:25:12 +02:00
0928f3d41c Merge #2453
2453: test index resolver r=MarinPostma a=MarinPostma

add some tests to the `IndexResolver` implementation of `BatchHandler`


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 11:05:30 +00:00
09ec8e9fca Merge #2471
2471: Remove the connection keep-alive timeout r=MarinPostma a=Thearas

# Pull Request

## What does this PR do?
Fixes <https://github.com/meilisearch/meilisearch-go/issues/221>.
Meilisearch has a default connection keep-alive timeout of 5s, which means it will close connections that have been idle for 5s or more.
This PR sets the actix-web keep-alive config to `KeepAlive::Os`, letting the client and the OS decide when to close the connection.
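
A minimal actix-web sketch of that configuration (illustrative, not the actual Meilisearch startup code):

```rust
use actix_web::{http::KeepAlive, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new())
        // Defer keep-alive handling to the OS instead of closing
        // idle connections after a fixed 5s timeout.
        .keep_alive(KeepAlive::Os)
        .bind(("127.0.0.1", 7700))?
        .run()
        .await
}
```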

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
2022-06-08 06:59:25 +00:00
1968950b0f Merge #2475
2475: feat(auth): Paginate API keys listing r=irevoire a=ManyTheFish

- [x] Update tests
- [x] Use Pagination helpers to paginate API keys (thanks to `@irevoire)`

fixes #2442


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:42:02 +00:00
6ffa222218 feat(auth): Paginate API keys listing
- [x] Update tests
- [x] Use Pagination helpers to paginate API keys

fixes #2442
2022-06-07 17:37:48 +02:00
79e67df73d Merge #2474
2474: feat(auth): remove `dumps.get` action from keys r=ManyTheFish a=ManyTheFish

- [x] Update tests
- [x] Remove `dumps.get` action


related to: #2430

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:03:41 +00:00
7fd66b80ca bump meilisearch version 2022-06-07 16:25:47 +02:00
0cfad7eeac bump milli version 2022-06-07 16:23:23 +02:00
72be296852 Merge #2476
2476: fix(test): Windows index pagination test r=ManyTheFish a=ManyTheFish

The test `index::get_index::get_and_paginate_indexes` fails on every PR during the `Tests on windows-latest` CI job.
- reduce the default index size in tests.
- wait for each task instead of waiting for the last one at the end.

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 14:17:01 +00:00
a7bff35e49 fix(test): Reduce default index size in tests 2022-06-07 15:16:34 +02:00
3b01ed4fe8 feat(auth): remove dumps.get action from keys 2022-06-07 10:49:28 +02:00
cbd27d313c fix blocking writing of meta file in dump 2022-06-07 10:07:40 +02:00
6ac8675c6d add IndexResolver BatchHandler tests 2022-06-07 09:33:57 +02:00
df61ca9cae add mocker to IndexResolver 2022-06-07 09:33:57 +02:00
bbd685af5e move IndexResolver to real module 2022-06-07 09:33:56 +02:00
9b9cbc815b fmt 2022-06-07 03:50:39 +08:00
fd11903920 remove the connection timeout 2022-06-07 03:38:23 +08:00
c3003065e8 Merge #2464
2464: Simplify the `star_or` function usage r=ManyTheFish a=Kerollmops

This PR simplifies the usage of the `star_or` function that was originally introduced in #2399. The serde-cs PR https://github.com/naughie/serde-cs/pull/1 was merged; it implements the `IntoIterator` trait on the `CS` types, which makes it possible to collect directly without first converting a `CS` into its inner type (a `Vec`).
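
A self-contained illustration of the idea, with simplified stand-ins for the `CS` and `StarOr` types (not the serde-cs or Meilisearch definitions):

```rust
enum StarOr<T> {
    Star,     // the `*` selector
    Other(T), // a concrete value
}

// A comma-separated list, e.g. `?status=enqueued,processing`.
struct CS<T>(Vec<T>);

impl<T> IntoIterator for CS<T> {
    type Item = T;
    type IntoIter = std::vec::IntoIter<T>;

    fn into_iter(self) -> Self::IntoIter {
        self.0.into_iter()
    }
}

/// Keep only the concrete values; an explicit `*` anywhere
/// means "no filter at all".
fn resolve<T>(filters: CS<StarOr<T>>) -> Option<Vec<T>> {
    let mut values = Vec::new();
    for filter in filters {
        match filter {
            StarOr::Star => return None,
            StarOr::Other(value) => values.push(value),
        }
    }
    Some(values)
}
```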

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-06 12:47:27 +00:00
c6ce3452cf Merge #2459
2459: Fix typo in codebase comments r=Kerollmops a=ryanrussell

# Pull Request

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-06 09:01:07 +00:00
e5b760c59a Fix the segment analytics tests 2022-06-06 10:44:46 +02:00
277a0a7967 Bump serde-cs to simplify our usage of the star_or function 2022-06-06 10:17:33 +02:00
64b5b2e1f8 Use serde-cs::CS with StarOr to reduce the logic duplication 2022-06-06 10:06:00 +02:00
10d3b367dc Simplify the const default values 2022-06-06 10:06:00 +02:00
ba55905377 Add custom IndexUidFormatError for IndexUid 2022-06-05 02:26:48 -04:00
0e7e16ae72 Add custom TaskStatusError for TaskStatus 2022-06-05 00:51:08 -04:00
80c156df3f Add custom TaskTypeError for TaskType 2022-06-05 00:51:08 -04:00
4b6c3e72ff Improve Lib Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 21:38:04 -05:00
3e46543060 Improve Store Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 20:42:53 -05:00
b83455f345 Merge #2454
2454: Unify the pagination of the index and documents route behind a common type r=curquiza a=irevoire

`@MarinPostma` wdyt of keeping the `auto_paginate_sized` until we implement the pagination on every route that needs it just to see if it could be useful to something else

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-02 15:01:43 +00:00
953a209f02 Merge #2447
2447: move index uid in task content r=Kerollmops a=MarinPostma

This PR moves the index_uid from the `Task` to the `TaskContent`, because a task can now have content that does not target a particular index.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-02 13:54:09 +00:00
0c5352fc22 move index_uid from task to task_content 2022-06-02 15:30:35 +02:00
8ac8fcb0c9 Merge #2433
2433: Fix the documents route r=Kerollmops a=irevoire

fix #2428

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-02 13:25:34 +00:00
4667c9fe1a fix(http): Fix the query parameter in the Documents route 2022-06-02 14:10:44 +02:00
12b5eabd5d chore(http): unify the pagination of the index and documents route behind a common type 2022-06-02 14:06:56 +02:00
cf2d8de48a Merge #2452
2452: Change http verbs r=ManyTheFish a=Kerollmops

This PR fixes #2419 by updating the HTTP verbs used to update the settings and every single setting parameter.

- [x] `PATCH /indexes/{indexUid}` instead of `PUT`
- [x] `PATCH /indexes/{indexUid}/settings`  instead of `POST`
- [x] `PATCH /indexes/{indexUid}/settings/typo-tolerance`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/displayed-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/distinct-attribute`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/filterable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/ranking-rules`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/searchable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/sortable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/stop-words`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/synonyms`  instead of `POST`


Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 11:46:17 +00:00
419922e475 Make clippy happy 2022-06-02 13:38:23 +02:00
c9cd1738a5 Merge #2445
2445: Seek-based tasks list r=Kerollmops a=Kerollmops

This PR implements the seek-based pagination for the tasks list following [the spec](https://github.com/meilisearch/specifications/pull/115).

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 10:25:54 +00:00
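
For illustration, here is a sketch of what seek-based pagination over task uids can look like, assuming uids sorted newest-first; a simplified model, not the actual Meilisearch implementation:

```rust
/// Return one page of task uids plus the cursor to pass as `after`
/// on the next call; `None` means there is no further page.
fn paginate(uids: &[u64], after: Option<u64>, limit: usize) -> (Vec<u64>, Option<u64>) {
    // Skip everything the client has already seen.
    let start = match after {
        Some(cursor) => uids.iter().position(|&uid| uid < cursor).unwrap_or(uids.len()),
        None => 0,
    };
    let page: Vec<u64> = uids[start..].iter().take(limit).copied().collect();
    let next = if start + page.len() < uids.len() {
        page.last().copied()
    } else {
        None
    };
    (page, next)
}
```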
0258659278 Fix the get_settings tests 2022-06-02 12:24:27 +02:00
ce37f53a16 Add typo-tolerance to the authorization tests 2022-06-02 12:17:53 +02:00
bcb51905d7 Fix the authorization tests 2022-06-02 12:16:46 +02:00
10a71fdb10 Update the /indexes/{indexUid}/settings/* verbs by adding a macro parameter 2022-06-02 11:55:47 +02:00
f8d3f739ad Update the /indexes/{indexUid}/settings verb from POST to PATCH 2022-06-02 11:55:47 +02:00
bb405aa729 Update the /indexes/{indexUid} verb from PUT to PATCH 2022-06-02 11:55:47 +02:00
7e3d5ebc8e Merge #2451
2451: feat(API-keys): Change immutable_field error message r=Kerollmops a=ManyTheFish

Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 09:26:36 +00:00
dfce9ba468 Apply suggestions 2022-06-02 11:26:12 +02:00
9eea142e2b feat(API-keys): Change immutable_field error message
Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68
2022-06-02 11:11:07 +02:00
8b8c3e32f0 Merge #2450
2450: Bump the dependencies r=ManyTheFish a=Kerollmops

In order to use [the latest version of grenad](https://docs.rs/grenad) I bump the dependencies here. We also use the latest versions of all our other dependencies now.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 08:53:12 +00:00
08d72e32a4 Merge #2438
2438: Refine keys api r=ManyTheFish a=ManyTheFish

waiting for #2410 and #2444 to be merged.

fix #2369 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 08:32:35 +00:00
ac9e7bdbe3 Fix a test that was depending on the speed of the CPU 2022-06-02 10:21:19 +02:00
4512eed8f5 Fix PR comments 2022-06-01 18:06:20 +02:00
e769043576 Bump the dependencies 2022-06-01 17:59:05 +02:00
df721b2e9e Scheduler must not reverse the order of the fetched tasks 2022-06-01 17:16:15 +02:00
0656df3a6d Fix the dumps tests 2022-06-01 17:14:13 +02:00
7652295d2c Encode key in base64 instead of hexa 2022-06-01 16:17:47 +02:00
94b32cce01 Patch errors 2022-06-01 16:17:47 +02:00
b2e2dc8558 Re-authorize master_key to access to all routes 2022-06-01 16:17:47 +02:00
1816db8c1f Move dump v4 patcher into v4.rs 2022-06-01 16:17:43 +02:00
c295924ea2 Patch tests 2022-06-01 16:08:42 +02:00
1f62e83267 Remove error_add_api_key_invalid_index_uid_format 2022-06-01 16:08:42 +02:00
b3c8915702 Make small changes and renaming 2022-06-01 16:08:42 +02:00
151f494110 Use Stream Deserializer to load dumps 2022-06-01 16:08:42 +02:00
96152a3d32 Change default API keys names and descriptions 2022-06-01 16:08:42 +02:00
84f52ac175 Add v4 feature to uuid 2022-06-01 16:08:42 +02:00
70916d6596 Patch dump v4 2022-06-01 16:08:42 +02:00
b9a79eb858 Change apiKeyPrefix to apiKeyUid 2022-06-01 16:07:44 +02:00
a57b2d9538 Restrict master key access to /keys routes 2022-06-01 16:07:44 +02:00
34c8888f56 Add keys actions 2022-06-01 16:07:44 +02:00
d54643455c Make PATCH only modify name, description, and updated_at fields 2022-06-01 16:07:44 +02:00
96a5791e39 Add uid and name fields in keys 2022-06-01 16:07:44 +02:00
e2c204cf86 Update tests to fit to the new requirements 2022-06-01 16:07:44 +02:00
d80e8b64af Align the tasks route API to the new spec 2022-06-01 15:30:39 +02:00
c11d21879a Introduce tasks limit and after to the tasks route 2022-06-01 13:26:36 +02:00
d6dd234914 Merge #2434
2434: Update docker volume path r=curquiza a=0x0x1

Currently, the arg `$(pwd)/data.ms:/data.ms` cannot share data from the container; make the docker volume the same as in the [Dockerfile](67b6f4340a/Dockerfile (L40)) to fix it.



Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-01 10:31:27 +00:00
4970525541 Update README.md
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-06-01 12:29:36 +02:00
461b91fd13 Introduce the fetch_unfinished_tasks function to fetch tasks 2022-06-01 12:09:52 +02:00
004c8b6be3 Add the new limit and after fields in the dump tests 2022-06-01 12:09:52 +02:00
9d5cc88cd5 Implement the seek-based tasks list pagination 2022-06-01 12:09:52 +02:00
d22f07f5b2 Merge #2448
2448: docs(security): Fix `Supported` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

# Pull Request

## What does this PR do?
- Fix typo in security `Suported` -> `Supported`

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 19:59:37 +00:00
e81c7aa2e6 Merge #2423
2423: Paginate the index resource r=MarinPostma a=irevoire

Fix #2373


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 19:25:25 +00:00
39db6ea42b docs(security): Fix Supported
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 14:21:34 -05:00
47007fa71b Merge #2446
2446: rename Succeded to Succeeded r=irevoire a=MarinPostma

This PR renames `TaskEvent::Succeded` to `TaskEvent::Succeeded` and applies the migration to the dumps.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-31 18:27:02 +00:00
627f13df85 feat(http): paginate the index resource
Fix #2373
2022-05-31 18:11:45 +02:00
97c14f6fcc Merge #2427
2427: Update the documents resource r=MarinPostma a=irevoire

- Return Documents API resources on `/documents` in an array in the results field.
- Add limit, offset, and total in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the displayed fields returned in the `/documents` endpoints; they only impact the `/search` endpoint.
- Make the `/document/:uid` route accept the `fields` query parameter.

Fix #2372


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 15:33:42 +00:00
446f1f31e0 rename Succeded to Succeeded 2022-05-31 17:22:37 +02:00
ddad6cc069 feat(http): update the documents resource
- Return Documents API resources on `/documents` in an array in the results field.
- Add limit, offset, and total in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the displayed fields returned in the `/documents` endpoints; they only impact the `/search` endpoint.

Fix #2372
2022-05-31 16:40:40 +02:00
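
A hypothetical shape for the paginated `/documents` response body described above, assuming serde; the field names follow the PR text, not necessarily the exact Meilisearch types:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct PaginatedDocuments<T> {
    results: Vec<T>, // the documents themselves
    offset: usize,   // offset requested by the client
    limit: usize,    // page size requested by the client
    total: usize,    // total number of documents in the index
}
```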
ab39df9693 Merge #2399
2399: Update the tasks endpoints r=MarinPostma a=Kerollmops

This PR wraps all the changes related to the `tasks` endpoints, it is related to https://github.com/meilisearch/meilisearch/issues/2377 but doesn't close it. I will create a new PR to work on [the seek-based pagination](https://github.com/meilisearch/specifications/pull/115).

I wanted to do something cool with GitHub: being able to merge multiple PRs into this one, to help review the changes one by one; unfortunately, GitHub doesn't allow creating empty PRs. I also struggled with git itself when it came to merging things in the right order, so I decided to add all of the changes in this single PR. I will list the changes and references to the specs here.

 - [x] Tasks statuses and types must be case insensitive
 - [x] Tasks statuses, types and indexUid must accept the `*` selector
 - [ ] Rename the `TaskDetails` struct fields

## Changes

- [ ] Add seek-based pagination following [the spec](https://github.com/meilisearch/specifications/pull/115) 
- [x] Add filtering on the `/tasks` endpoint following [this spec](https://github.com/meilisearch/specifications/pull/116)
  - [x] Add filtering capabilities on `type`, `status` and `indexUid` for `GET` `task` lists endpoints.
  - [x] It is possible to specify several values for a filter using the `,` character. e.g. `?status=enqueued,processing`
  - [x] Between two different filters, an AND operation is applied. e.g. `?status=enqueued&type=indexCreation` is equivalent to `status=enqueued AND type = indexCreation`
- [x] Remove `GET /indexes/:indexUid/tasks`. It can be replaced by `GET /tasks?indexUid=:indexUid`
- [x] Remove `GET /indexes/:indexUid/tasks/:taskUid`.
- [x] Rename `uid` to `taskUid` in the `202 - Accepted` task response return by every asynchronous tasks (ex: index creation, document addition...)
- [x] Rename some task properties
  - [x] `documentPartial`-> `documentAdditionOrUpdate`
  - [x] `documentAddition`-> `documentAdditionOrUpdate`
  - [x] `clearAll` -> `documentDeletion` 

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-05-31 09:40:40 +00:00
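
For illustration, the filter semantics documented above boil down to OR within one parameter and AND across parameters, e.g. `?status=enqueued,processing&type=indexCreation`; a minimal sketch (not the actual Meilisearch code):

```rust
/// A `None` filter means "no restriction" (the `*` selector).
fn task_matches(
    status: &str,
    kind: &str,
    status_filter: Option<&[&str]>,
    type_filter: Option<&[&str]>,
) -> bool {
    // OR between the values of a single filter (case-insensitive).
    let status_ok = status_filter
        .map_or(true, |values| values.iter().any(|v| v.eq_ignore_ascii_case(status)));
    let type_ok = type_filter
        .map_or(true, |values| values.iter().any(|v| v.eq_ignore_ascii_case(kind)));
    // AND between two different filters.
    status_ok && type_ok
}
```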
1465b5e0ff Refactorize the tasks filters by moving the match inside 2022-05-31 11:33:21 +02:00
8800b348f0 Implement the StarOr on all the tasks filters 2022-05-31 11:33:21 +02:00
082d6b89ff Make the StarOrIndexUid Generic and call it StarOr 2022-05-31 11:33:21 +02:00
b82c86c8f5 Allow users to filter indexUid with a * 2022-05-31 11:33:20 +02:00
36d94257d8 Make clippy happy 2022-05-31 11:33:20 +02:00
3f80468f18 Rename the Tasks Types 2022-05-31 11:33:20 +02:00
8509243e68 Implement the status and type filtering on the tasks route 2022-05-31 11:33:20 +02:00
3684c822f1 Add indexUid filtering on the /tasks route 2022-05-31 11:33:20 +02:00
80f7d87356 Remove the /indexes/:indexUid/tasks/... routes 2022-05-31 11:33:20 +02:00
d2f457a076 Rename the uid to taskUid in asynchronous response 2022-05-31 11:33:20 +02:00
e5ef5a6f9c Remove an unused updates.rs file 2022-05-31 11:33:19 +02:00
5450fecaef Merge #2444
2444: add boilerplate for dump v5 r=MarinPostma a=MarinPostma

add the boilerplate files for dump v5


Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-31 08:56:52 +00:00
deba0cc096 Make v4::load_dump copy each part of the dump 2022-05-31 10:24:44 +02:00
26e7bdf702 add boilerplate for dump v5 2022-05-30 17:25:29 +02:00
3441cc6c36 Merge #2410
2410: Make dump a task r=Kerollmops a=MarinPostma

This PR transforms the dump task into a proper task.
The `GET /dumps/:dump_uid` is removed.


Some changes were made to make this work, and a bit of refactoring was necessary.
- The `dump_actor` module has been renamed to `dumps` and moved to the root
- There isn't a `DumpActor` anymore; the dump process is handled by the `DumpHandler`.
- The `TaskPerformer` is renamed to `BatchHandler`
- The `BatchHandler` trait no longer has a `perform_job` method, but instead has an `accept` method returning whether a handler can process a batch
- The scheduler now accepts a list of `BatchHandler`s and iterates through them until it finds one that accepts the current batch.
- `Job` doesn't exist anymore; everything is now inside the `BatchContent` enum.
- The `Vec<TaskId>` from `Batch` is replaced with a `BatchContent` enum which hints at the content.
- The Scheduler is slightly modified to accept batches and prioritize them over regular tasks.
- The `TaskList`s are no longer identified by a `String` representing the index uid, but by a `TaskListIdentifier`, which also works for dumps that do not target any specific index.
- The `GET /dump/:dump_id` route no longer exists
- `DumpActorError` is renamed to `DumpError`


close #2410 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-30 14:09:43 +00:00
c7711c7816 Merge #2429
2429: Send the analytics to `telemetry.meilisearch.com` instead of segment r=MarinPostma a=irevoire

Fix #2425

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-30 13:18:55 +00:00
d47b997120 chore(analytics): update the url used to send our analytics 2022-05-30 15:13:10 +02:00
1e310ecc7d fix typo in docstring
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-30 14:34:49 +02:00
4cb2c6ef1e use map_or instead of map + unwrap_or 2022-05-30 12:30:15 +02:00
a9ef399a6b processing::Nothing return BatchContent::Empty instead of panic 2022-05-26 12:04:27 +02:00
5a2972fc19 use TaskEvent method instead of variants in BatchHandler impl 2022-05-26 11:51:58 +02:00
ba51ca83ec Update docker volume path
Makes docker volume same as Dockerfile
2022-05-26 10:29:27 +08:00
1647ca3c1f fix clippy warnings 2022-05-25 15:07:52 +02:00
74a1f88d88 add test for dump processing order 2022-05-25 14:57:36 +02:00
f58507379a fix dump priority in scheduler 2022-05-25 14:50:14 +02:00
6b2016b350 remove typo in BatchContent variant 2022-05-25 14:39:07 +02:00
3015265bde remove useless dump errors 2022-05-25 14:37:10 +02:00
49d8fadb52 test dump handler 2022-05-25 14:32:12 +02:00
127171c812 impl Default on Processing 2022-05-25 14:10:39 +02:00
67b6f4340a Merge #2422
2422: Update url of movies.json r=curquiza a=0x0x1

The URL `https://bit.ly/2PAcw9l` is a Notion site.

Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
2022-05-25 11:21:56 +00:00
986a99296d remove useless dump test 2022-05-25 11:25:11 +02:00
92d86ce6aa add tests to IndexResolver BatchHandler 2022-05-25 11:13:36 +02:00
3c85b29865 add doc to BatchHandler 2022-05-25 11:13:35 +02:00
8349f38197 remove unused file 2022-05-25 11:13:35 +02:00
64654ef7c3 rename batch_handler to handler 2022-05-25 11:13:35 +02:00
0f9c134114 fix tests 2022-05-25 11:13:35 +02:00
7b47e4e87a snapshot batch handler 2022-05-25 11:13:35 +02:00
8743d73973 move DumpHandler to own module 2022-05-25 11:13:35 +02:00
f0aceb4fba remove unused files 2022-05-25 11:13:35 +02:00
61035a3ea4 create dump v5 2022-05-25 11:13:34 +02:00
4778884105 remove dump status route 2022-05-25 11:13:34 +02:00
57fde30b91 handle dump 2022-05-25 11:13:34 +02:00
56eb2907c9 dump indexes 2022-05-25 11:13:34 +02:00
414d0907ce register dump handler 2022-05-25 11:13:34 +02:00
60a8249de6 add dump batch handler 2022-05-25 11:13:34 +02:00
46cdc17701 make scheduler accept multiple batch handlers 2022-05-25 11:13:34 +02:00
6a0231cb28 perform dump method 2022-05-25 11:13:33 +02:00
7fa3eb1003 register dump tasks 2022-05-25 11:13:33 +02:00
2f0625a984 register and insert dump task in scheduler 2022-05-25 11:13:33 +02:00
737b891a41 introduce Dump TaskListIdentifier variant 2022-05-25 11:13:33 +02:00
5a5066023b introduce TaskListIdentifier 2022-05-25 11:13:33 +02:00
aa50acb031 make Task index_uid an option
Not all tasks relate to an index. Tasks that don't relate to one have their
index_uid set to None.
2022-05-25 11:13:32 +02:00
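
Illustrative shapes for the two changes above (not the actual Meilisearch types): index-agnostic tasks carry `None`, and task lists are keyed by an identifier that also covers dumps:

```rust
struct Task {
    id: u64,
    // None for tasks that target no particular index, such as dumps.
    index_uid: Option<String>,
}

enum TaskListIdentifier {
    Index(String), // tasks scoped to a single index
    Dump,          // dump tasks
}
```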
9935db86c7 Merge #2424
2424: Uncomment clippy from the ci check r=curquiza a=irevoire

The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 17:50:55 +00:00
f65116b208 chore(ci): uncomment clippy from the ci check
The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305
2022-05-24 15:03:11 +02:00
341756a0eb Merge #2357
2357: chore(dump): add dump tests r=Kerollmops a=irevoire

Add tests on the import of dump v1, v2, v3 and v4.

Since the dumps are slow to decompress, I made the `flate2` crate always compile with optimizations.
And since they're also slow to index, I did the same for the `milli` crate. What do you think of this, `@MarinPostma`?
Should we keep milli unoptimized in case it could help us debug some things? 👀 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 12:24:29 +00:00
5f0e9b63d2 chore(dump): add tests 2022-05-24 14:21:56 +02:00
ca9ba2d90c Merge #2406
2406: chore(search): rename in the search endpoint r=irevoire a=irevoire

Fix #2376


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 12:02:45 +00:00
2c248a68a4 Merge #2374
2374: feat(analytics): handle the `X-Meilisearch-Client` header r=Kerollmops a=irevoire

Fix #2367


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 09:49:16 +00:00
641ca5a857 Update url of movies.json 2022-05-24 17:34:34 +08:00
6bf4db0bca feat(analytics): handle the new x-meilisearch-client custom header for the analytics
Fix #2367
2022-05-23 13:51:19 +02:00
4e9accdeb7 chore(search): rename in the search endpoint
Fix #2376
2022-05-19 16:31:37 +02:00
ae4e419db4 Merge #2408
2408: Upgrade milli v0.28 r=ManyTheFish a=ManyTheFish

- Add smart crop
- Add test on _matches_infos
- Fix some test

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-19 11:52:16 +00:00
50763aac82 Fix clippy 2022-05-19 11:23:22 +02:00
3517eae47f Fix tests 2022-05-18 18:45:53 +02:00
0250ea9157 Integrate smart crop in Meilisearch 2022-05-18 18:35:51 +02:00
6d221058f1 Merge #2404
2404: Bring `release-v0.27.1` into `main` r=curquiza a=curquiza

Following the v0.27.1 hotfixes

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 16:11:17 +00:00
a23998f2e3 Merge #2402
2402: Update version for next release (v0.27.1) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 10:09:16 +00:00
49e857776c Update version for next release (v0.27.1) 2022-05-17 11:59:35 +02:00
7fa7f3d1a4 Merge #2397
2397: Bump milli to v0.26.5 r=irevoire a=irevoire

Fix #2394 and fix #2385

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-17 08:45:05 +00:00
85d19bfb3e chore: bump milli 2022-05-16 18:43:35 +02:00
a65a2bea1e Merge #2396
2396: Fix dump bug r=curquiza a=MarinPostma

Only check the db version file if we aren't loading a dump or a snapshot, because we can assume that if we are loading a dump or a snapshot, we can safely work with the database. The db version file will be (over)written just after that.

This was introduced by #2356.
fix #2389


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 13:59:20 +00:00
5670b4d012 fix dump import error 2022-05-16 14:33:33 +02:00
b9866d8df2 Merge #2339
2339: deny warnings in CI r=irevoire a=MarinPostma

Add `RUSTFLAGS="-D warnings"` to the CI so all warnings are treated as hard errors.

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 09:58:30 +00:00
b9b9cba154 Merge #2383
2383: v0.27.0: bring `stable` into `main` r=Kerollmops a=curquiza

Bring `stable` into `main`

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-05-16 08:35:25 +00:00
cd2239eb2d Merge branch 'release-v0.27.0' into stable 2022-05-09 10:51:32 +02:00
348af6cfbf deny-rust-warnings 2022-05-04 15:20:45 +02:00
5337bdb9c5 Merge #2366
2366: Bump milli to v0.26.4 r=curquiza a=curquiza

Fixes the milli related issues
- https://github.com/meilisearch/meilisearch/issues/2352
- https://github.com/meilisearch/meilisearch/issues/2358
- https://github.com/meilisearch/meilisearch/issues/2338

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-05-04 10:30:42 +00:00
a350cc1186 Merge #2355
2355: feat(http): add analytics on typo tolerance setting r=curquiza a=MarinPostma

Add analytics for the typo settings.

Also make settings analytics return null for settings that are not set, instead of a default value, as seen with `@gmourier` 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-05-04 10:15:15 +00:00
5c4c38c79c Fix the tests about the nested fields 2022-05-04 12:10:52 +02:00
b94eabe48c apply clippy fixes 2022-05-04 11:33:43 +02:00
c46f3587de Bump milli to v0.26.4 2022-05-04 11:25:36 +02:00
34f75d9792 settings analytics return null when not set 2022-04-29 16:38:21 +02:00
3c5f7dbf7e Merge #2356
2356: fix dumps bug r=MarinPostma a=MarinPostma

move the db version check after the dump import
fix #2353


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-29 14:33:58 +00:00
3d0a4a3d18 fix(http): fix event name for typo tolerance settings update 2022-04-27 14:49:21 +02:00
6025372565 fix(lib): Check db presence after dumps 2022-04-27 10:41:09 +02:00
3d10af0333 feat(http): add analytics on typo tolerance setting 2022-04-26 18:29:32 +02:00
c07f3b44b7 Merge #2347
2347: Change Nelson path r=curquiza a=curquiza

Nelson is now on the Meilisearch orga side

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-21 17:50:46 +00:00
38d681c230 Change Nelson path 2022-04-21 18:42:34 +02:00
e85377e725 Merge pull request #2346 from meilisearch/revert-2345-bump-meilisearch-v9000.0.0
Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0"
2022-04-21 16:38:48 +02:00
6ff8bf823d Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0" 2022-04-21 16:36:56 +02:00
4d25229df9 Merge pull request #2345 from meilisearch/bump-meilisearch-v9000.0.0
Bump meilisearch to version 9000.0.0
2022-04-21 16:28:46 +02:00
f1cd6b6ee8 bump meilisearch to v9000.0.0 2022-04-21 14:26:40 +00:00
63f75bd187 Merge pull request #2344 from meilisearch/revert-2340-bump-meilisearch-v8000.1.0
Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0"
2022-04-21 16:24:57 +02:00
acf3357cf3 Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0" 2022-04-21 16:24:27 +02:00
202d6105b2 Merge pull request #2340 from meilisearch/bump-meilisearch-v8000.1.0
[TEST PURPOSE] Bump meilisearch to version 8000.1.0
2022-04-21 15:28:00 +02:00
0714551101 bump meilisearch to v8000.1.0 2022-04-21 13:23:46 +00:00
04381011b0 Merge #2336
2336: Move permissive-json-pointer in the meilisearch repository r=Kerollmops a=irevoire

Move the permissive-json-pointer crate in the meilisearch repository.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-20 17:25:44 +00:00
1ef87cc6d0 chore: move permissive-json-pointer in the meilisearch repository
Update permissive-json-pointer/src/lib.rs

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-04-20 19:24:41 +02:00
4a9000bb96 Merge #2332
2332: fix(search): formatted field r=curquiza a=irevoire

fix #2318

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-04-20 14:59:41 +00:00
754c49f991 Merge #2326
2326: rename min word length for typo r=irevoire a=MarinPostma

rename `minWordLengthForTypo` to `minWordSizeForTypos` as specified.

discussed here: https://github.com/meilisearch/specifications/pull/117#discussion_r850795714

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-20 11:54:10 +00:00
97adef6bfc Merge #2335
2335: Fix typo reset by upgrading Milli to v0.26.2 r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-20 10:49:57 +00:00
a7fd199ded Fix typo resetting by upgrading milli to v0.26.2 2022-04-20 12:24:46 +02:00
2692b8c960 Merge #2334
2334: Update dashboard to v.0.1.10 r=curquiza a=mdubus

Closes #2322

Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-04-20 10:14:46 +00:00
58a1124e9a fix(search): formatted field 2022-04-20 11:30:01 +02:00
b57ad15a24 Update dashboard to v.0.1.10 2022-04-20 11:14:42 +02:00
9b064e53e7 fix(http, lib): rename_min_word_length_for_typo into rename_min_word_size_for_typo 2022-04-17 10:02:56 +02:00
289bfd46ff Merge #2321
2321: Bump milli r=curquiza a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-04-14 11:51:15 +00:00
64b0a50a58 chore: bump milli 2022-04-14 12:12:54 +02:00
b1333ab5b0 Merge #2320
2320: chore(http, lib): rename typo to typo_tolerance r=irevoire a=MarinPostma

fix #2319


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-14 09:50:39 +00:00
276dc6043a chore(http, lib): rename typo to typo_tolerance 2022-04-14 10:42:06 +02:00
b9e676b8ca Merge #2316
2316: Add version flag r=Kerollmops a=sanders41

# Pull Request

## What does this PR do?
Fixes #2315

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Paul Sanders <psanders1@gmail.com>
2022-04-13 17:24:09 +00:00
6c06fb226d Merge #2307
2307: Feat(Analytics): Add analytics for search format options r=irevoire a=ManyTheFish

Specification: [#120](https://github.com/meilisearch/specifications/pull/120) ([f5c6a8e](f5c6a8e183))

fix #2308

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-13 12:01:52 +00:00
41249be274 Add version flag 2022-04-12 15:22:36 -04:00
049cf0fcee Merge #2313
2313: fix(search): remove the back and forth between the IndexMap and the serde_json::Map r=irevoire a=irevoire

This is ok because we're using the preserve_order feature in serde_json which is already internally using an IndexMap.

See https://github.com/meilisearch/meilisearch/pull/2298#discussion_r845228412


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-12 14:17:26 +00:00
2ee210483f fix(search): remove the back and forth between the IndexMap and the serde_json::Map
This is ok because we're using the preserve_order feature in serde_json which is already internally using an IndexMap.
2022-04-12 16:12:52 +02:00
13205066f3 Merge #2311
2311: Change version for the next release (v0.27.0) r=irevoire a=curquiza

Fixes #2310 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-11 14:49:33 +00:00
b3661bf8ec Change version for the next release (v0.27.0) 2022-04-11 16:25:15 +02:00
0990e95830 Feat(Analytics): Add analytics for search format options 2022-04-11 14:53:15 +02:00
f67167fa9f Merge #2178
2178: Refacto docker r=irevoire a=irevoire

closes #2166 and #2085

-----------

I noticed many people had issues with the default configuration of our Dockerfile.
Some examples:
- #2166: If you use ubuntu and mount your `data.ms` in a volume (as shown in the [doc](https://docs.meilisearch.com/learn/getting_started/installation.html#download-and-launch)), you can't run meilisearch
- #2085: Here, meilisearch was not able to erase the `data.ms` when loading a dump because it's the mount point
Currently, we don't show how to use snapshots and dumps with Docker in the documentation. And it's quite hard to do:
  - You either send a big command to meilisearch to change the dump-path, snapshot-path, and db-path to a single directory and then mount that one
  - Or you mount three volumes
- And there were other issues on the Slack community

I think this PR solves the problem.
Now the image contains the `meilisearch` binary in the `/bin` directory, so it's easy to find and always in the `PATH`.
It creates a `data` directory and moves the working directory into it.
So now you can find the `dumps`, `snapshots` and `data.ms` directories in `/data`.

Here is the new command to run meilisearch with a volume:
```
docker run -it --rm -v $PWD/meili_data:/data -p 7700:7700 getmeili/meilisearch:latest
```

And if you need to import a dump or a snapshot, you don't need to restart your container and mount another volume. You can directly hit the `POST /dumps` route and then run:
```
docker run -it --rm -v $PWD/meili_data:/data -p 7700:7700 getmeili/meilisearch:latest meilisearch --import-dump dumps/20220217-152115159.dump
```

-------

You can already try this PR with the following docker image:
```
getmeili/meilisearch:test-docker-v0.26.0
```

If you want to use the v0.25.2, I created another image:
```
getmeili/meilisearch:test-docker-v0.25.2
```

------

If you're using Helm, I created a branch [here](https://github.com/meilisearch/meilisearch-kubernetes/tree/test-docker-v0.26.0) that uses the v0.26.0 image with the correct volume 👍
If you use this conf with the v0.25.2, it should also work.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 12:01:25 +00:00
31584f34e8 Merge #2298
2298: Nested fields r=irevoire a=irevoire

There are a few things that I want to fix _AFTER_ merging this PR.
For the following RCs.

## Stop the useless conversion
In the `search.rs` I convert a `Document` to a `Value`, then the `Value` to a `Document`, then back to a `Value`, etc. I should stop doing all these conversions and stick to one format.
Probably by merging my `permissive-json-pointer` crate into meilisearch.
That would also give me the opportunity to work directly with obkvs and stop deserializing fields I don't need.

## Add more tests specific to nested fields
Everything seems to work, but I should write tests to double-check that nested fields work well with the `formatted` field.

## See how I could stop iterating on hashmaps and instead fill them correctly
This is related to milli. I very often need to iterate over hashmaps to see if a field is a subset of another field. I could probably generate a structure containing all the possible key values.
e.g. the user says `doggo` is an attribute to retrieve. Instead of iterating over all the attributes to retrieve to check if `doggo.name` is a subset of `doggo`, I should insert `doggo.name` into the attributes-to-retrieve map, as sketched below.
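A minimal sketch of that idea (illustrative names, not the actual milli code): expand the selectors once against the known keys, so membership checks become a single set lookup instead of a scan:

```
use std::collections::HashSet;

// Expand user-provided selectors against the known document keys:
// `doggo` selects itself and any `doggo.*` sub-field.
fn expand_selectors(selectors: &[&str], known_keys: &[&str]) -> HashSet<String> {
    let mut allowed = HashSet::new();
    for key in known_keys {
        for sel in selectors {
            if key == sel || key.starts_with(&format!("{sel}.")) {
                allowed.insert((*key).to_string());
            }
        }
    }
    allowed
}

fn main() {
    let allowed = expand_selectors(&["doggo"], &["doggo", "doggo.name", "cat.name"]);
    assert!(allowed.contains("doggo.name")); // one lookup, no iteration
    assert!(!allowed.contains("cat.name"));
}
```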

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 11:45:37 +00:00
a70e0a6422 Merge #2304
2304: chore(bors): comments clippy out r=curquiza a=irevoire

There is currently an issue with clippy that stops us from merging PRs.
https://github.com/rust-lang/rust-clippy/issues/8662#issuecomment-1093899755

We can't use clippy in the CI while that's not merged


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 11:20:18 +00:00
348345f555 chore(bors): comments clippy out
There is currently an issue with clippy that stops us from merging PRs.
https://github.com/rust-lang/rust-clippy/issues/8662#issuecomment-1093899755

We can't use clippy in the CI while that's not merged
2022-04-11 13:19:00 +02:00
683206e140 feat(docker): refactoring the dockerfile
- Move the meilisearch binary to `/bin/meilisearch` so it's always in scope.
- Create a `meili_data` directory used as the default working directory
2022-04-11 13:14:44 +02:00
69d312209e feat(search): Implements the nested fields
See https://github.com/meilisearch/specifications/pull/121
2022-04-07 19:47:20 +02:00
013fe4cbc9 Merge #2297
2297: Feat(Search): Enhance formatting search results r=ManyTheFish a=ManyTheFish

Add new settings and change crop_len behavior to count words instead of characters.

- [x] `highlightPreTag`
- [x] `highlightPostTag`
- [x] `cropMarker`
- [x] `cropLength` counts words instead of characters
- [x] `cropLength` 0 is now treated as no `cropLength`
- [ ] ~smart crop finding the best matches interval~ (postponed)

Partially fixes  #2214. (no smart crop)
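To make the new `cropLength` semantics concrete, here is a minimal word-based cropping sketch (not milli's actual implementation, which also centers the crop on the matches):

```
// Crop to `crop_length` *words* (not characters); 0 disables cropping.
fn crop(text: &str, crop_length: usize, crop_marker: &str) -> String {
    if crop_length == 0 {
        return text.to_string();
    }
    let words: Vec<&str> = text.split_whitespace().collect();
    if words.len() <= crop_length {
        return text.to_string();
    }
    format!("{}{}", words[..crop_length].join(" "), crop_marker)
}

fn main() {
    assert_eq!(crop("the quick brown fox jumps", 3, "…"), "the quick brown…");
    assert_eq!(crop("short text", 0, "…"), "short text"); // cropLength 0 = no crop
}
```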


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-07 13:29:56 +00:00
dc2cc1ee89 Feat(Search): Enhance formatting search results 2022-04-07 15:04:08 +02:00
bb5f0e1485 Merge #2271
2271: Simplify Dockerfile r=ManyTheFish a=Thearas

# Pull Request

## What does this PR do?

1. Fixes #2234
2. Replace `$TARGETPLATFORM` with `apk --print-arch` to make the Dockerfile work with plain `docker build` as well, not just `docker buildx` (inspired by [rust-lang/docker-rust](https://github.com/rust-lang/docker-rust/blob/master/1.59.0/alpine3.14/Dockerfile#L13))

PTAL `@curquiza` 

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Thearas <thearas850@gmail.com>
2022-04-07 11:50:21 +00:00
d5e33637b7 Merge #2296
2296: disable typo for attributes r=curquiza a=MarinPostma

Introduce the disable-typos-on-attributes feature as per https://github.com/meilisearch/specifications/pull/117.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 18:10:45 +00:00
c321ac61b5 Merge #2259
2259: disable typos on words r=MarinPostma a=MarinPostma

Introduce the disable typo setting as per https://github.com/meilisearch/specifications/pull/117.

waiting for https://github.com/meilisearch/milli/pull/474.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 17:40:08 +00:00
67dea08a0a feat(http, lib): enable disable typos on attributes 2022-04-06 19:25:12 +02:00
e9f66b8766 feat(all): introduce disable typo on words 2022-04-06 19:16:36 +02:00
dd43ba6234 feat(all): introduce disable typos 2022-04-06 19:10:12 +02:00
27a88bcd47 feat(all): introduce minWordLengthForTypo
fix typo in settting

skip serializing not set typo settings
2022-04-06 19:03:24 +02:00
065fe19452 Merge #2249
2249: feat(all): introduce disable typos r=irevoire a=MarinPostma

Introduce the disable typo setting, that allows disabling typos for an index.

waiting on https://github.com/meilisearch/milli/pull/469

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 16:55:46 +00:00
981fba5b44 feat(all): introduce disable typos 2022-04-06 15:47:48 +02:00
09734f0732 Merge #2294
2294: chore(lib): bump milli to 0.25.0 r=MarinPostma a=MarinPostma

bump milli


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 13:19:20 +00:00
a523828f61 chore(lib): bump milli to 0.25.0 2022-04-06 15:03:10 +02:00
7f7958f815 Merge #2277
2277: fix(http): fix panic when sending document update without content type header r=MarinPostma a=MarinPostma

I found a panic when pushing documents without a content-type. This fixes it by returning `unknown` instead of crashing.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-05 12:07:17 +00:00
9e344f6576 Merge #2207
2207: Fix: avoid embedding the user input into the error response. r=Kerollmops a=CNLHC

# Pull Request

## What does this PR do?
Fix #2107. 

The problem is that meilisearch embeds the user input into the error message.

The reason for this problem is that `milli` throws a `serde_json::Error` whose `Display` implementation does this embedding.

I tried to solve this problem in this PR by manually implementing the `Display` trait for `DocumentFormatError` instead of deriving it automatically, as sketched below.
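A condensed sketch of that direction (simplified types; the real enum has more variants): print only serde_json's error category and position, never the payload itself:

```
use std::fmt;

#[derive(Debug)]
enum PayloadType { Json, Csv, Ndjson }

#[derive(Debug)]
enum DocumentFormatError {
    MalformedPayload(serde_json::Error, PayloadType),
}

impl fmt::Display for DocumentFormatError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Only the error category and position are shown,
            // so the user's input is never echoed back.
            DocumentFormatError::MalformedPayload(e, ty) => write!(
                f,
                "The `{ty:?}` payload provided is malformed ({:?} error at line {}, column {}).",
                e.classify(),
                e.line(),
                e.column(),
            ),
        }
    }
}

fn main() {
    let err = serde_json::from_str::<serde_json::Value>("{ secret: 42 }").unwrap_err();
    println!("{}", DocumentFormatError::MalformedPayload(err, PayloadType::Json));
}
```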


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Liu Hancheng <cn_lhc@qq.com>
Co-authored-by: LiuHanCheng <2463765697@qq.com>
2022-04-04 17:35:17 +00:00
09a72cee03 Merge #2281
2281: Hard limit the number of results returned by a search r=Kerollmops a=Kerollmops

This PR fixes #2133 by hard-limiting the number of results that a search request can return at any time. I would like `@MarinPostma`'s guidance on testing this: should I use a mocking test here, or should I do anything else?

I talked about touching the _nb_hits_ value with `@qdequele` and we concluded that it was not correct to do so.

Could you please confirm that it is the right place to change that?
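A small sketch of the clamping logic under discussion, assuming a hard cap of 1000 results (the exact constant here is an assumption): both the offset and the window size are clamped so `offset + limit` can never exceed the cap:

```
const HARD_RESULT_LIMIT: usize = 1000;

fn clamp_window(offset: usize, limit: usize) -> (usize, usize) {
    let offset = offset.min(HARD_RESULT_LIMIT);
    let limit = limit.min(HARD_RESULT_LIMIT - offset);
    (offset, limit)
}

fn main() {
    assert_eq!(clamp_window(0, 5000), (0, 1000)); // window end clamped
    assert_eq!(clamp_window(990, 100), (990, 10));
    assert_eq!(clamp_window(2000, 20), (1000, 0)); // offset clamped too
}
```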

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-04-04 17:19:05 +00:00
6fc6b83632 Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-04-01 09:30:40 +08:00
eee2cd5abf Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-04-01 09:30:32 +08:00
87e4125875 Merge #2267
2267: Add instance options for RAM and CPU usage r=Kerollmops a=2shiori17

# Pull Request

## What does this PR do?
Fixes #2212 

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: 2shiori17 <98276492+2shiori17@users.noreply.github.com>
Co-authored-by: shiori <98276492+2shiori17@users.noreply.github.com>
2022-03-31 15:29:18 +00:00
7ece7a9d9e change truncate strategy and corresponding test 2022-03-31 10:39:21 +08:00
403f03cb2c Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-31 10:14:22 +08:00
b28aa8e666 Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-31 10:14:13 +08:00
98107565c0 Add more detailed comments for max_indexing_threads 2022-03-31 09:32:45 +09:00
a2d7c16f91 Remove indexing_jobs option 2022-03-31 09:27:29 +09:00
ffafd5b976 Add tests for the hard limit 2022-03-30 16:36:02 -07:00
9f1c88680d Fix my mistake when resolving conflicts 2022-03-31 02:48:41 +09:00
9edd407a88 Merge branch 'main' into add-instance-options 2022-03-31 02:38:07 +09:00
8bc6e8dcf9 Make sure that offsets are clamped too 2022-03-30 10:06:15 -07:00
2624c76517 Merge #2254
2254: Test with default CLI opts r=Kerollmops a=Kerollmops

Fixes #2252.

This PR makes sure that we test the HTTP engine with the default CLI parameters and removes some useless internal CLI options.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-29 22:39:40 +00:00
891d042164 Remove the memory limit to let Windows tests pass 2022-03-29 11:37:08 -07:00
b3a11e04af Implement Default on IndexerOpts again 2022-03-29 11:37:08 -07:00
acdb10a307 Remove some useless indexer options 2022-03-29 11:37:08 -07:00
8fecc6238d Make the test use the default CLI options 2022-03-29 11:37:08 -07:00
405af09fc8 Hard limit the number of results returned by a search 2022-03-29 11:27:53 -07:00
0d6be2efab Merge #2280
2280: Bump the milli dependency to 0.24.1 r=curquiza a=Kerollmops

We had issues with lindera recently: it was unable to download the official dictionaries from Google Drive, which was causing issues with our CIs (and other users' CIs too). The maintainer changed the dictionary download source to Sourceforge, and it is much more stable now.

This PR bumps the milli dependency to the latest version, which includes the latest version of the tokenizer, which itself includes the latest version of lindera. I advise rebasing the currently opened pull requests to include this PR when it is merged on main.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-29 17:39:03 +00:00
94f04e79eb Bump the milli dependency to 0.24.1 2022-03-29 09:17:25 -07:00
381e98053f Merge #2263
2263: Upgrade the config of the issue management r=curquiza a=curquiza

To reduce the feature feedback in the meilisearch repo

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-03-29 09:11:06 +00:00
6c2fdc7743 fix(http): fix panic when sending document update without content type header 2022-03-29 09:48:25 +02:00
513b37e245 Merge #2253
2253: refactor authentication key extraction r=ManyTheFish a=MarinPostma

I am concerned that the part of the code that performs the key prefix extraction from the JWT might be misused in the future. Since this is a critical part of the code, I moved it into its own function. Since we deserialized the payload twice anyway, I reordered the verifications, and we now use the data from the validated token.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-28 08:53:13 +00:00
13a0e78d3f Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-28 14:58:00 +08:00
80d8ac40af Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-28 14:57:51 +08:00
48d107cc13 Simplify Dockerfile 2022-03-26 20:06:00 +08:00
3ef250c30a fix: docker image failed to boot on arm64 node 2022-03-26 17:41:00 +08:00
c7b489f8cb tidy 2022-03-25 21:36:11 +08:00
ce12000af3 add some comments 2022-03-25 21:33:47 +08:00
3c72f4dc51 fix test and add truncate test. 2022-03-25 21:31:23 +08:00
ce85981a4e add truncate logic 2022-03-25 20:53:28 +08:00
193c666bf9 Merge branch 'main' of github.com:meilisearch/meilisearch into CNLHC/change_json_error_message 2022-03-25 19:53:13 +08:00
705d10a96d Add instance options for RAM and CPU usage 2022-03-24 18:52:36 +00:00
c0056ab73f Merge #2264
2264: Import milli from meilisearch-lib r=Kerollmops a=Kerollmops

This PR directly imports the milli dependency used in _meilisearch-http_ from _meilisearch-lib_, because I can't import _meilisearch-lib_ in _meilisearch-auth_, and _meilisearch-auth_ can't use the milli exported by _meilisearch-lib_ 😞

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-24 16:28:54 +00:00
3df542f072 Export milli's heed from meilisearch-lib 2022-03-24 15:30:10 +01:00
09212abdf7 Update the cargo lock 2022-03-24 14:53:08 +01:00
ee6be4f6b9 Import milli from meilisearch-lib in meilisearch-http 2022-03-24 14:45:37 +01:00
4e3b20ed73 Update config.yml 2022-03-24 12:13:30 +01:00
6a82a055d3 chore(auth): refactor token validation 2022-03-21 11:18:51 +01:00
7e65816d63 Merge #2237
2237: Update dependencies r=MarinPostma a=Kerollmops

This PR upgrades and updates the dependencies of meilisearch, but first I removed three unused dependencies. I used [cargo udeps](https://github.com/est31/cargo-udeps) to detect those and [cargo upgrade](https://github.com/killercup/cargo-edit/blob/master/README.md#available-subcommands) to upgrade ⬆️

~This PR **must** be merged when https://github.com/meilisearch/milli/pull/465 is merged and then must be updated accordingly i.e. using the latest version of milli.~

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-17 17:15:19 +00:00
5bffa4b7f9 Tenant token validation is now created by a function 2022-03-17 17:55:50 +01:00
d1c0ecceb9 Merge #2245
2245: Add test to validate cli r=irevoire a=MarinPostma

followup on #2242 and #2243

Add a test to make sure the cli is valid, and add a CI task to run the tests in debug to make sure we hit debug assertions.

FYI `@curquiza`, because of CI changes

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-17 16:14:31 +00:00
1d683865cf chore(CI): add debug test to CI 2022-03-17 12:40:20 +01:00
4aef7c5ac5 Fix tenant token validation when exp is null 2022-03-17 11:05:03 +01:00
968053649b Change the jsonwebtoken crate usage 2022-03-17 11:03:32 +01:00
ac48860bbb Upgrade the workspace dependencies 2022-03-17 11:03:31 +01:00
46e6d23dd2 Remove the zstd dependency comming from actix-web/http 2022-03-17 11:00:25 +01:00
55c9514c6b Reorder the Meilisearch features for more readability 2022-03-17 11:00:25 +01:00
86c1e83ea1 Remove three unused dependencies 2022-03-17 11:00:24 +01:00
3273fe0470 Merge #2246
2246: Release v0.26.1: bring "fix panic at start" commit (#2243) into stable r=MarinPostma a=curquiza

2 commits
- cherry pick `32843f30d973349122ec5a37469ee09e6002b6f3` (merged in #2243)
- change the version in the Cargo toml files (from v0.26.0 to v0.26.1)

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-03-16 18:32:47 +00:00
bb9372114c Merge #2244
2244: chore(all): bump milli r=curquiza a=MarinPostma

continues the work initiated by `@psvnlsaikumar` in #2228

Co-authored-by: Sai Kumar <psvnlsaikumar@gmail.com>
2022-03-16 17:15:10 +00:00
7468a5e96c Update the version (from v0.26.0 to v0.26.1) 2022-03-16 18:03:54 +01:00
a87faa0db7 bug(http): fix panic on startup 2022-03-16 18:03:01 +01:00
b2bbd13c27 Merge #2243
2243: bug(http): fix panic on startup r=MarinPostma a=MarinPostma

this seems to fix #2242


I am not sure why this doesn't reproduce on v0.26.0, so we should remain vigilant.

`@curquiza` FYI

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-16 16:22:13 +00:00
22c61a1ecb chore(http): add test for validity of cli 2022-03-16 17:17:57 +01:00
e271395971 chore(all): bump milli
* updates to use milli's heed dependency (#2210)

* Update index.rs

* Update store.rs

* Update mod.rs

* cargo fmt
2022-03-16 16:34:44 +01:00
32843f30d9 bug(http): fix panic on startup 2022-03-16 13:33:37 +01:00
469aa8feab Merge #2238
2238: cargo: use resolver 2 r=MarinPostma a=happysalada

# Pull Request

## What does this PR do?
use resolver 2 from cargo.
This mainly enables propagating the `--no-default-features` flag to workspace crates.
I mistakenly thought before that it was enough to have edition 2021 enabled. However, it turns out that for virtual workspaces this needs to be explicitly defined.
https://doc.rust-lang.org/edition-guide/rust-2021/default-cargo-resolver.html

This will also change a little how your dependencies are compiled. See https://blog.rust-lang.org/2021/03/25/Rust-1.51.0.html#cargos-new-feature-resolver for more details.

Just to give a bit more context, this is for usage in NixOS. I tried to do the upgrade today with the latest version, and the `--no-default-features` flag is just ignored.

Let me know if you need more details of course.


## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: happysalada <raphael@megzari.com>
2022-03-15 15:50:16 +00:00
748d5b69a5 cargo: use resolver 2 2022-03-14 23:18:59 -04:00
3b2fe3aec8 Merge #2232
2232: Bring `stable` into `main` (v0.26.0) r=Kerollmops a=curquiza

Bring `stable` into `main`

Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-14 14:46:12 +00:00
d9eb1f7f00 Merge #2233
2233: Change CI name for publishing binaries r=Kerollmops a=curquiza

Minor change regarding the CI job names. Should not impact the usage.

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-03-14 14:17:05 +00:00
f5a72bb19a Change CI name for publishing binaries 2022-03-14 14:20:25 +01:00
35bf7ee538 fix test 2022-03-08 12:26:02 +08:00
c8895cab77 Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-08 12:03:59 +08:00
b669a73432 Merge #2209
2209: rename auto batching cli r=curquiza a=MarinPostma

rename `--enable-autobatching` to `--enable-auto-batching`.

as per https://github.com/meilisearch/specifications/pull/96#issuecomment-1060693721

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-07 15:58:58 +00:00
833f7fbdbe Merge #2204
2204: Fix blocking auth r=Kerollmops a=MarinPostma

Fix auth blocking runtime

I have decided to remove async code from `meilisearch-auth` and let `meilisearch-http` handle that.

Because Actix polls the extractor futures concurrently, I have made a wrapper extractor that forces the errors from the futures to be returned sequentially (though it still polls them sequentially).
close #2201

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-07 15:39:24 +00:00
62ce8e0bda chore(http): rename auto batching cli option 2022-03-07 15:19:19 +01:00
ddd25bfe01 remove token from InvalidToken error 2022-03-07 15:16:07 +01:00
19da45c53b Update meilisearch-http/src/extractors/sequential_extractor.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-03-07 15:02:07 +01:00
0026410c61 review edits 2022-03-07 14:21:44 +01:00
b57c59baa4 sequential extractor 2022-03-04 20:43:12 +01:00
a356c8359c fix broken test 2022-03-04 15:39:18 +08:00
b138b92d39 show detailed error message 2022-03-04 15:31:11 +08:00
58e2903177 first try 2022-03-04 10:46:59 +08:00
af8a5f2c21 async auth 2022-03-02 19:25:51 +01:00
d6400aef27 remove async from meilsearch-authentication 2022-03-02 18:22:34 +01:00
81fe65afed Merge #2200
2200: Fix(dumps): Explicitly define serde for time r=ManyTheFish a=ManyTheFish

fix #2199 


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-02 10:39:24 +00:00
c2b58720d1 Fix(dumps): Explicitly define serde for time 2022-03-02 11:37:48 +01:00
5515aa5045 Merge #2197
2197: Additions to 0.26 (Update actix-web dependency to 4.0) r=curquiza a=MarinPostma

- `@robjtede`
`@MarinPostma`
[update actix-web dependency to 4.0](3b2e467ca6)

From main to release-v0.26.0


Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-02-28 18:09:43 +00:00
15150db957 clippy 2022-02-28 19:03:38 +01:00
3b2e467ca6 update actix-web dependency to 4.0 2022-02-28 19:03:37 +01:00
0c9e8cdf8d Merge #2194
2194: update actix-web dependency to 4.0 r=irevoire a=robjtede

# Pull Request

## What does this PR do?
Updates Actix Web ecosystem crates to 4.0 stable.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] ~~Does this PR fix an existing issue?~~
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?




Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-02-28 15:23:42 +00:00
8d624b3800 clippy 2022-02-28 13:43:22 +00:00
4fbb83a34d bug(snapshot): Correctly open environments in snapshots 2022-02-28 12:37:30 +01:00
961e22493c update actix-web dependency to 4.0 2022-02-25 23:28:55 +00:00
09ee8e34a5 Merge #2192
2192: Fix max dbs error r=Kerollmops a=MarinPostma

Factor the way we open environments to make sure they are always opened with the same options.


The issue was that indexes were first opened in snapshots with incorrect options, and the heed cache returned an environment with those incorrect open options on subsequent index opens.

fix #2190
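A hedged sketch of the factoring, assuming heed's `EnvOpenOptions` API (the sizes are placeholders): a single helper ensures every call site, snapshots included, opens environments with identical options, since the heed cache hands back whatever the first open created:

```
use heed::EnvOpenOptions;

// The one and only way to open an index environment.
fn open_env(path: &std::path::Path, map_size: usize) -> heed::Result<heed::Env> {
    EnvOpenOptions::new()
        .map_size(map_size)
        .max_dbs(100) // identical everywhere, snapshots included
        .open(path)
}
```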


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-02-23 16:18:23 +00:00
7e832105d7 bug(snapshot): Correctly open environments in snapshots 2022-02-23 17:12:08 +01:00
ff6a7b6007 Merge #2191
2191: fix(analytics): flatten the scheduler options r=curquiza a=irevoire

Implement missing part of [this spec](https://github.com/meilisearch/specifications/blob/develop/text/0034-telemetry-policies.md) by flattening the scheduler options.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-02-22 16:19:07 +00:00
6312e7f1f3 fix(analytics): flatten the scheduler options 2022-02-22 15:55:50 +01:00
bfb375ac87 Merge #2189
2189: fix(all): fix two dates that were wrongly formatted r=irevoire a=irevoire

Fix #2188 and fix #2187 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-02-22 10:48:45 +00:00
21d277a0ef fix(all): fix two dates that were wrongly formatted 2022-02-22 11:29:11 +01:00
c3e3c900f2 Merge #2173
2173: chore(all): replace chrono with time r=irevoire a=irevoire

Chrono has been unmaintained for a few months now and there is a CVE on it.

Also I updated all the error messages related to the API key as you can see here: https://github.com/meilisearch/specifications/pull/114

fix #2172

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-02-17 14:12:23 +00:00
05c8d81e65 chore: get rid of chrono in favor of time
Chrono has been unmaintained for a few months now and there is a CVE on it.

make clippy happy

bump milli
2022-02-16 18:14:29 +01:00
cd6276eef9 Merge #2174
2174: Update dashboard with v0.1.8 r=curquiza a=mdubus



Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-02-16 16:15:50 +00:00
7bcaa2fd13 Update dashboard to v0.1.9 2022-02-16 15:53:15 +01:00
67ecd7c147 Update dashboard with v0.1.8 2022-02-16 14:10:47 +01:00
ce6ff294cf Merge #2171
2171: Update LICENSE with Meili SAS r=curquiza a=curquiza

Checked with Thomas: we must put the real name of the company.

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-15 18:29:03 +00:00
b318ab46cb Update LICENSE 2022-02-15 15:54:45 +01:00
5890600101 Merge #2170
2170: Update version (v0.26.0) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-14 15:22:09 +00:00
e2a9414c7a Update version (v0.26.0) 2022-02-14 16:11:07 +01:00
216965e9d9 Merge #2122
2122: fix: docker image failed to boot on arm64 node r=curquiza a=Thearas

# Pull Request

## What does this PR do?
Fixes #2115.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-14 11:41:18 +00:00
d0ddbcc2b2 update 2022-02-14 18:08:57 +08:00
41db2601a6 Update Dockerfile 2022-02-14 10:44:03 +01:00
42cb94e1f4 fix: docker image failed to boot on arm64 node 2022-02-14 13:23:20 +08:00
6e7a0cc65d Merge #2157
2157: fix(auth): fix env being closed when dumping r=Kerollmops a=MarinPostma

When creating a dump, the auth store environment would be closed on drop, so subsequent dumps couldn't reopen the environment. I have added a flag in the environment to prevent the closing of the environment on drop when dumping.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-02-09 15:04:19 +00:00
23eba82038 fix(auth): fix env being closed when dumping 2022-02-09 13:56:26 +01:00
001b9acb63 Merge #2155
2155: Fix curl command in download-latest script r=Kerollmops a=curquiza

Fixes #2151 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-08 15:29:51 +00:00
af65ccfd6a Fix curl command in download-latest script 2022-02-08 16:28:26 +01:00
0c7251475d Merge #2150
2150: Bump milli to v0.22.1 r=curquiza a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2138 and https://github.com/meilisearch/meilisearch/issues/2123 by bumping milli to [v0.22.1](https://github.com/meilisearch/milli/releases/tag/v0.22.1)

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-08 15:05:06 +00:00
1a87b2f37d Bump milli to v0.22.1 2022-02-08 11:21:44 +01:00
752a0e13ad Merge #2136
2136: Refactoring CI regarding ARM binary publish r=curquiza a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/1909

- Remove CI file to publish aarch64 binary and put the logic into `publish-binary.yml`
- Remove the job to publish armv8 binary
- Fix download-latest script accordingly
- Adapt download-latest to the specific case of the macOS M1

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: meili-bot <74670311+meili-bot@users.noreply.github.com>
2022-02-07 16:07:46 +00:00
ccaca33446 Add --fail-with-body flag to curl in script 2022-02-07 16:16:49 +01:00
2a90e805a2 Fix script 2022-02-07 16:05:48 +01:00
c4a2d70d19 Fix error handler for curl command in script 2022-02-07 16:01:50 +01:00
f7e4a0177d Update download-latest.sh
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-02-07 13:53:30 +01:00
cca65499de Merge #2145
2145: Update LICENSE r=meili-bot a=curquiza



Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-07 12:51:50 +00:00
80fa7dbbfa Update LICENSE 2022-02-05 18:29:47 +01:00
c24b1e5250 Merge #2135
2135: bug(auth): Make API keys accept Null descriptions r=curquiza a=ManyTheFish

Fix #2116


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-02-03 15:26:11 +00:00
78cf8f1f9f Fix typo 2022-02-02 19:32:20 +01:00
1da7277817 Fix dowload-latest.sh according to the new name of the binary 2022-02-02 19:25:52 +01:00
c71c95feb0 Refactor CIs to publish aaarch64 binary 2022-02-02 19:25:28 +01:00
3bee31e6c7 bug(auth): Make API keys accept Null descriptions 2022-02-02 18:18:17 +01:00
9448ca58aa Merge #2005
2005: auto batching r=MarinPostma a=MarinPostma

This PR implements auto-batching: all updates that can be batched together are batched together while the previous batch is being processed.

For now, the only updates that can be batched together are the document addition updates (both update and replace), for a single index.

The batching is disabled by default for multiple reasons:
- We need more experimentation with the scheduling techniques
- Right now, if one task fails in a batch, the whole batch fails. We need more permissive error handling when processing document indexation.

There are four CLI options, for now, to interact with how the batch is scheduled:
- `enable-autobatching`: enable the autobatching feature.
- `debounce-duration-sec`: When an update is received, wait that number of seconds before batching and performing the updates. Defaults to 0s.
- `max-batch-size`: the maximum number of tasks per batch, defaults to unlimited.
- `max-documents-per-batch`: the maximum number of documents in a batch, defaults to unlimited. The batch will always contain at least 1 task, no matter the number of documents in that task.

# Implementation

The current implementation is made of 3 major components:

## TaskStore
The `TaskStore` contains all the tasks. When a task is pushed, it is directly registered to the task store.

## Scheduler
The scheduler is in charge of making the batches. At its core, there is a `TaskQueue` and a job queue. `Job`s are always processed first. They are *volatile* tasks, that is, they don't have a TaskId and are not persisted to disk. Snapshots and dumps are examples of Jobs.

If no `Job` is available for processing, then the scheduler attempts to make a `Task` batch from the `TaskQueue`. The first step is to gather new tasks from the `TaskStore` to populate the `TaskQueue`. When this is done, we can prepare our batch. The `TaskQueue` is itself a `BinaryHeap` of `TaskList`s. Each `index_uid` is associated with a `TaskList` that contains all the updates associated with that index uid. Each `TaskList` in the `TaskQueue` is ordered by the id of its first task.

When preparing a batch, the `TaskList` at the top of the `TaskQueue` is popped, and the tasks are popped from the list to make the next batch. If there are remaining tasks in the list, the list is inserted back in the `TaskQueue`.

## UpdateLoop
The `UpdateLoop`'s role is to perform batches sequentially. Each time updates are pushed to the update store, the scheduler is notified, and will in turn notify the update loop that work can be performed. When notified, the update loop waits a little while for more incoming updates, then asks the scheduler for the next batch and performs it. When it is done, the status of the tasks is put back into the store, and the next batch is processed.
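A condensed sketch of that queue shape (illustrative types, not the actual meilisearch-lib code): a `BinaryHeap` of per-index task lists, ordered so the list holding the oldest task is popped first:

```
use std::cmp::Ordering;
use std::collections::BinaryHeap;

struct TaskList {
    index_uid: String,
    task_ids: Vec<u64>, // sorted, oldest first
}

impl TaskList {
    fn first_id(&self) -> u64 {
        *self.task_ids.first().expect("a TaskList is never empty")
    }
}

impl PartialEq for TaskList {
    fn eq(&self, other: &Self) -> bool { self.first_id() == other.first_id() }
}
impl Eq for TaskList {}
impl PartialOrd for TaskList {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}
impl Ord for TaskList {
    // Reversed comparison: the heap pops the *smallest* first task id.
    fn cmp(&self, other: &Self) -> Ordering { other.first_id().cmp(&self.first_id()) }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(TaskList { index_uid: "movies".into(), task_ids: vec![7, 9] });
    queue.push(TaskList { index_uid: "books".into(), task_ids: vec![3, 4, 8] });

    // Pop the list with the oldest pending task, batch part of it,
    // and push it back if tasks remain.
    let mut list = queue.pop().unwrap();
    assert_eq!(list.index_uid, "books");
    let batch: Vec<u64> = list.task_ids.drain(..2).collect();
    assert_eq!(batch, vec![3, 4]);
    if !list.task_ids.is_empty() {
        queue.push(list);
    }
}
```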


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-02-02 11:04:30 +00:00
c9a236b0af feat(lib): auto-batching 2022-02-01 18:06:20 +01:00
622c15e825 Merge #2096
2096: feat(auth): Tenant token r=Kerollmops a=ManyTheFish

Make meilisearch support JWT authentication signed with meilisearch API keys
using HS256, HS384 or HS512 algorithms.

Related spec: [specifications#89](https://github.com/meilisearch/specifications/pull/89) [rendered](https://github.com/meilisearch/specifications/blob/scoped-api-keys/text/0089-tenant-tokens.md)
Fix #1991 
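For illustration, a hedged sketch of producing such a token with the `jsonwebtoken` crate; the claim names (`searchRules`, `apiKeyPrefix`, `exp`) follow the spec linked above but should be treated as assumptions here:

```
use jsonwebtoken::{encode, Algorithm, EncodingKey, Header};
use serde::Serialize;

#[derive(Serialize)]
struct TenantTokenClaims {
    #[serde(rename = "searchRules")]
    search_rules: serde_json::Value,
    #[serde(rename = "apiKeyPrefix")]
    api_key_prefix: String,
    exp: u64, // expiry as a unix timestamp
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = "replace-with-a-real-search-api-key";
    let claims = TenantTokenClaims {
        // Restrict this token to one index, with a mandatory filter.
        search_rules: serde_json::json!({
            "patient_records": { "filter": "user_id = 1" }
        }),
        api_key_prefix: api_key.chars().take(8).collect(),
        exp: 1_700_000_000,
    };
    // The API key itself is the HS256 secret.
    let token = encode(
        &Header::new(Algorithm::HS256),
        &claims,
        &EncodingKey::from_secret(api_key.as_bytes()),
    )?;
    println!("Authorization: Bearer {token}");
    Ok(())
}
```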


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-27 10:38:41 +00:00
054598734a Merge #2120
2120: Bring `stable` into `main` r=curquiza a=curquiza

I forgot to do it; tell me, `@Kerollmops` or `@irevoire`, if it's useful or not. I would say yes; otherwise I will have conflicts when I try to bring `main` into `stable` for the next release. Maybe I'm wrong.

Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-01-27 09:35:21 +00:00
7ca647f0d0 feat(auth): Implement Tenant token
Make meilisearch support JWT authentication signed with meilisearch API keys
using HS256, HS384 or HS512 algorithms.

Related spec: https://github.com/meilisearch/specifications/pull/89
Fix #1991
2022-01-27 08:25:39 +01:00
aa50fcb1f0 Merge branch 'main' into stable 2022-01-26 20:17:41 +01:00
b408de0761 Merge pull request #2117 from meilisearch/rebranding
Changes related to the rebranding
2022-01-26 19:58:54 +01:00
72d9c5ee5c fix(rebranding): Update the ascii art (#2118) 2022-01-26 18:53:07 +01:00
2b7440d4b5 Fix some typo 2022-01-26 17:56:18 +01:00
3bc6a18bcd Update README.md
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-01-26 17:54:51 +01:00
db56d6cb11 Update download-latest.sh 2022-01-26 17:54:22 +01:00
a5759139bf Update CONTRIBUTING.md
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-01-26 17:51:38 +01:00
8a959da120 Update MeiliSearch into Meilisearch everywhere 2022-01-26 17:43:16 +01:00
0a78750465 Replace meilisearch by Meilisearch 2022-01-26 17:35:56 +01:00
372f4fc924 Replace logo 2022-01-26 17:34:31 +01:00
ae5b401e74 Update README.md 2022-01-26 16:31:04 +01:00
c562655be7 Update CONTRIBUTING.md 2022-01-26 16:31:03 +01:00
c8bb54cd94 Merge #2098
2098: feat(dump): Provide the same cli options as the snapshots r=MarinPostma a=irevoire

Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`

Fix #2087

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-26 14:32:23 +00:00
8c80326dd5 Merge #2086
2086: feat(analytics): send the whole set of cli options instead of only the snapshot r=MarinPostma a=irevoire

Fixes #2088 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-26 13:53:20 +00:00
bad4bed439 feat(dump): Provide the same cli options as the snapshots
Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`

Fix #2087
2022-01-26 14:34:06 +01:00
7828da15c3 feat(analytics): send the whole set of cli options instead of only the snapshot 2022-01-26 13:52:41 +01:00
7e2f6063ae Merge #2099 #2108
2099: feat(analytics): Set the timestamp of the aggregated event as the first aggregate r=MarinPostma a=irevoire



2108: meta(auth): Enhance tests on authorization r=MarinPostma a=ManyTheFish

Enhance auth tests in order to be able to add new actions without changing tests.

Helping #2080 

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-24 15:13:01 +00:00
2b766a2f26 meta(auth): Enhance tests on authorization
Enhance auth tests in order to be able to add new actions without changing tests
2022-01-24 15:35:39 +01:00
8ae504bfb0 Merge #2101
2101: chore(all): update actix-web dependency to 4.0.0-beta.21 r=MarinPostma a=robjtede

# Pull Request

## What does this PR do?
I don't expect any more breaking changes to Actix Web that will affect Meilisearch, so bump to the latest beta.

Fixes #N/A?

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to MeiliSearch!


Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-01-24 14:33:46 +00:00
5981e6c57c Merge #2095
2095: feat(error): Update the error message when you have no version file r=MarinPostma a=irevoire

Following this [issue](https://github.com/meilisearch/meilisearch-kubernetes/issues/95) we decided to change the error message from:
```
Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.
```
to
```
Version file is missing or the previous MeiliSearch engine version was below 0.25.0. Use a dump to update MeiliSearch.
```

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-24 14:09:13 +00:00
1be3a1e945 Merge #2075
2075: Allow payloads with no documents  r=irevoire a=MarinPostma

Accept additions with 0 documents.

Zero-byte payloads are still refused, since they are not valid JSON/JSON Lines/CSV anyway...

close #1987
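The distinction is easy to sketch (JSON case only; not the actual meilisearch-lib code): an empty array is a valid zero-document addition, while a zero-byte body is not valid JSON at all:

```
fn validate_payload(body: &[u8]) -> Result<usize, String> {
    if body.is_empty() {
        return Err("empty payload".to_string()); // still refused
    }
    let docs: Vec<serde_json::Value> =
        serde_json::from_slice(body).map_err(|e| e.to_string())?;
    Ok(docs.len()) // 0 documents is fine
}

fn main() {
    assert!(validate_payload(b"").is_err());
    assert_eq!(validate_payload(b"[]").unwrap(), 0);
    assert_eq!(validate_payload(br#"[{"id": 1}]"#).unwrap(), 1);
}
```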


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-24 12:55:29 +00:00
629b897845 feat(error): Update the error message when you have no version file 2022-01-24 13:44:00 +01:00
9f5fee404b chore(all): update actix-web dependency to 4.0.0-beta.21 2022-01-21 20:44:17 +00:00
40bf98711c feat(analytics): Set the timestamp of the aggregated event as the first aggregate 2022-01-20 19:08:57 +01:00
f9f075bca2 Merge #2068
2068: chore(http): migrate from structopt to clap3 r=Kerollmops a=MarinPostma

migrate from structopt to clap3

This fixes the long-standing issue of flags requiring a value, such as `--no-analytics` or `--schedule-snapshot`.

All flag arguments now take NO value, i.e.:
`meilisearch --schedule-snapshot true` becomes `meilisearch --schedule-snapshot`

as per https://docs.rs/clap/latest/clap/struct.Arg.html#method.env, the env variable is defined as:
> A false literal is n, no, f, false, off or 0. An absent environment variable will also be considered as false. Anything else will considered as true.
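A hedged sketch of the resulting flag shape in clap 3 (flag and env-variable names here are illustrative, not necessarily the real ones):

```
use clap::Parser;

#[derive(Debug, Parser)]
struct Opt {
    /// Passed as `meilisearch --schedule-snapshot`, with no value.
    #[clap(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
    schedule_snapshot: bool,

    /// Same for `--no-analytics`: presence means true.
    #[clap(long, env = "MEILI_NO_ANALYTICS")]
    no_analytics: bool,
}

fn main() {
    let opt = Opt::parse();
    println!("{opt:?}");
}
```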

`@gmourier` 
`@curquiza` 
`@meilisearch/docs-team` 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-20 10:59:44 +00:00
0c1a3d59eb fix no-analytics 2022-01-20 11:50:24 +01:00
523fb5cd56 Merge #2084
2084: bump milli r=Kerollmops a=irevoire

- Fix https://github.com/meilisearch/MeiliSearch/issues/2082 by updating the milli dependency
- Fix Clippy error
- Change the MeiliSearch version in the Cargo.toml files to anticipate the coming release (v0.25.2)

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-18 14:30:37 +00:00
436f61a7f4 chore: bump meilisearch 2022-01-18 12:27:15 +01:00
3fab5869fa chore: bump milli 2022-01-18 11:50:17 +01:00
0515c6e844 bug(http): fix task duration 2022-01-13 16:41:07 +01:00
38176181ac fix(dump): Fix the import of dump from the v24 and before 2022-01-13 16:40:58 +01:00
a7e634bd4f Merge #2074
2074: fix(dump): Fix the import of dumps when there is no data.ms r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-01-13 13:47:03 +00:00
78a381a30b Merge #2076
2076: fix(dump): Fix the import of dump from the v24 and before r=ManyTheFish a=irevoire

Same as https://github.com/meilisearch/MeiliSearch/pull/2073 but on main this time

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-01-13 13:09:23 +00:00
343bce6a29 fix(dump): Fix the import of dump from the v24 and before 2022-01-13 13:23:57 +01:00
d263f762bf feat(http): accept empty document additions
wip
2022-01-13 12:46:56 +01:00
dfaeb19566 fix(dump): Fix the import of dumps when there is no data.ms 2022-01-13 12:30:58 +01:00
010dcc3e80 Merge #2066
2066: bug(http): fix task duration r=MarinPostma a=MarinPostma

`@gmourier` found that the duration in the task view was not computed correctly; this PR fixes it.

`@curquiza`, I let you decide if we need to make a hotfix out of this or wait for the next release. This is not breaking.


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-12 14:50:58 +00:00
d0aa5f747c Merge #2067
2067: chore(all): fix rust edition r=irevoire a=MarinPostma

I hadn't correctly set the rust edition in my previous pr, and cargo was returning a warning. This time I followed this guide: https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-12 13:32:42 +00:00
f6d53e03f1 chore(http): migrate from structopt to clap3 2022-01-12 14:07:19 +01:00
3ecebd15ee chore(all): fix rust edition 2022-01-12 11:14:50 +01:00
db83e39a7f bug(http): fix task duration 2022-01-11 18:01:25 +01:00
5d48f72ade Merge #2065
2065: MeiliSearch v0.25.0: `stable` -> `main` r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
Co-authored-by: Maxime Legendre <maximelegendre@MacBook-Pro-de-Maxime.local>
Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-11 16:30:22 +00:00
1818026a84 Merge #2057
2057: fix(dump): Uncompress the dump IN the data.ms  r=irevoire a=irevoire

When loading a dump with Docker, we had two problems.
After creating a temp directory, uncompressing, and re-indexing the dump:
1. We try to `move` the new “data.ms” onto the currently present
   one. The problem is that the `data.ms` is often a mount point, because
   that's what people usually do with Docker. We can't overwrite
   a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
   be on the same partition as the `data.ms`. This means when we tried to move
   the dump over the `data.ms`, it was also failing because we can't move data
   between two partitions.
------------------
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone tries to create volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they could not load a big
dump just because their `/tmp` was too small. This solves the issue; now the dump is
extracted and indexed on the same partition the `data.ms` will lie on.

fix #1833

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-10 17:57:16 +00:00
0ad7d38eec Merge #2061
2061: Update dashboard for v0.25.0 r=curquiza a=mdubus



Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-01-10 16:29:31 +00:00
b17ad5c2be Update with latest release of the dashboard 2022-01-10 17:10:09 +01:00
1824b3c07b Merge #2060
2060: chore(all) set rust edition to 2021 r=MarinPostma a=MarinPostma

set the rust edition for the project to 2021

this make the MSRV to v1.56

#2058


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-10 15:04:14 +00:00
c9c7da3626 fix(dump): Uncompress the dump IN the data.ms
When loading a dump with Docker, we had two problems.
After creating a temp directory, uncompressing, and re-indexing the dump:
1. We try to `move` the new “data.ms” onto the currently present
   one. The problem is that the `data.ms` is often a mount point, because
   that's what people usually do with Docker. We can't overwrite
   a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
   be on the same partition as the `data.ms`. This means when we tried to move
   the dump over the `data.ms`, it was also failing because we can't move data
   between two partitions.
==============
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone tries to create volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they could not load a big
dump just because their `/tmp` was too small. This solves the issue; now the dump is
extracted and indexed on the same partition the `data.ms` will lie on.

fix #1833
2022-01-10 14:56:03 +01:00
030a90523d Update dashboard for v0.25.0 2022-01-10 10:50:57 +01:00
56d223a51d Merge #2059
2059: change indexed doc count on error r=irevoire a=MarinPostma

Change `indexed_documents` and `deleted_documents` to return 0 instead of null when the task has failed.

close #2053


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-06 15:55:50 +00:00
f558ff826a feat(http): task view indexed and deleted documents return 0 instead of null 2022-01-06 14:55:02 +01:00
5fb4ed60e7 chore(all) set rust edition to 2021 2022-01-06 13:30:45 +01:00
0d2a358cc2 Merge #2056
2056: Allow any header for CORS r=curquiza a=curquiza

Bug fix: a CORS error was triggered when trying to send the `User-Agent` header via the browser

`@bidoubiwa` thanks for the bug report!

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-01-05 16:45:51 +00:00
595250c93e Allow any header for CORS 2022-01-05 15:38:47 +01:00
c636988935 Merge #2055
2055: fix(dump): Fix the loading of dump with empty indexes r=irevoire a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-05 14:31:53 +00:00
eea483c470 fix(dump): Fix the loading of dump with empty indexes 2022-01-05 15:08:21 +01:00
d53c61a6d0 Merge #2054
2054: Bug(auth): Wrap key list in results r=irevoire a=ManyTheFish

fix #2052

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-04 15:44:55 +00:00
c0d4f71a34 Bug(auth): Wrap key list in results 2022-01-04 14:10:30 +01:00
f56989e46e Merge #2011
2011: bug(lib): drop env on last use r=curquiza a=MarinPostma

Fixes the `too many open files` error when running tests by closing the
environment on its last drop.

To check that we are actually the last owner of the `env` we plan to drop, I have wrapped all envs in `Arc` and check that we hold the last reference to it.
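A toy sketch of the idea (the real code wraps heed's `Env`): `Arc::try_unwrap` only succeeds for the last strong reference, so the environment is closed exactly once:

```
use std::sync::Arc;

struct Env {
    name: String,
}

impl Env {
    fn close(&self) {
        println!("closing LMDB environment `{}`", self.name);
    }
}

fn drop_env(env: Arc<Env>) {
    // Succeeds only when we hold the last strong reference.
    if let Ok(env) = Arc::try_unwrap(env) {
        env.close();
    }
}

fn main() {
    let env = Arc::new(Env { name: "data.ms".into() });
    let clone = Arc::clone(&env);
    drop_env(clone); // not the last owner: no close
    drop_env(env);   // last owner: closed here
}
```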


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-04 11:03:08 +00:00
c0251eb680 Merge #2050
2050: Bug(CORS): Add missing allowed headers r=curquiza a=ManyTheFish

fix #2040

## test
html file to test:

```html
<!DOCTYPE html>
<html>
<meta content="text/html;charset=utf-8" http-equiv="Content-Type">
<meta content="utf-8" http-equiv="encoding">

<script>
var xmlHttp = new XMLHttpRequest();
    xmlHttp.open( "GET", "http://127.0.0.1:7700/indexes/toto", false ); // false for synchronous request
    xmlHttp.setRequestHeader("Authorization", "Bearer manythefish");
    xmlHttp.send( null );
    console.log(xmlHttp.responseText);
</script>
</html>
```


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-03 14:23:34 +00:00
450b81ca13 Bug(CORS): Add missing allowed headers
fix #2040
2022-01-03 13:41:12 +01:00
2f3faadcbf Merge #2034
2034: Fix typo r=curquiza a=curquiza

Fix `Meilisearch` typo into `MeiliSearch`

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-01-03 09:40:56 +00:00
5986a2d126 Merge #2036
2036: chore(ci): Enable rust_backtrace in the ci r=curquiza a=irevoire

This should help us understand the unreproducible panics that happen in the CI all the time

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-12-22 19:00:08 +00:00
d75e84f625 chore(ci): Enable rust_backtrace in the ci 2021-12-22 18:20:44 +01:00
c221277fd2 Merge #2035
2035: Use self hosted GitHub runner r=curquiza a=curquiza

Checked with `@tpayet`: we have created a self-hosted GitHub runner to save time when pushing the Docker images.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-12-22 15:28:33 +00:00
3b30fadb55 Merge #2037
2037: test: Ignore the auth tests on Windows r=irevoire a=irevoire

Since the auth tests fail sporadically on the Windows CI and we can't reproduce these failures on a real Windows machine, we are going to ignore them.

But we still ensure they compile.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-12-22 15:13:32 +00:00
d7df4d6b84 test: Ignore the auth tests on Windows
Since the auth tests fail sporadically on the Windows CI and we can't
reproduce these failures on a real Windows machine, we are going to
ignore them.
But we still ensure they compile.
2021-12-22 12:39:48 +01:00
fd854035c1 Use self hosted github runner 2021-12-21 18:32:29 +01:00
4d1c138842 Merge #2032
2032: Revert docker as non root PR r=curquiza a=ManyTheFish

Revert #1759

hotfix for #1969

Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 16:19:36 +00:00
7649239b08 Revert docker as non root PR 2021-12-21 16:59:15 +01:00
0e2f6ba1b6 Merge #2033
2033: Bug(FS): Consider empty pre-created directory as unexisting DB r=curquiza a=ManyTheFish

When the database directory was pre-created, we considered the DB invalid; we now accept creating a database in it.


Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 15:12:07 +00:00
f529c46598 Fix typo in error messages and comments 2021-12-21 16:01:38 +01:00
1ba49d2ddb Bug(FS): Consider empty pre-created directory as unexisting DB 2021-12-21 15:30:11 +01:00
1b5ca88231 Merge #2026
2026: Bug(auth): Parse YMD date r=curquiza a=ManyTheFish

Use NaiveDate to parse YMD dates instead of NaiveDateTime

fix #2017
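A small sketch of the fix, assuming the chrono types named in the title: try the full datetime format first, then fall back to a bare `YYYY-MM-DD` parsed with `NaiveDate`:

```
use chrono::{NaiveDate, NaiveDateTime};

fn parse_expiry(s: &str) -> Option<NaiveDateTime> {
    NaiveDateTime::parse_from_str(s, "%Y-%m-%dT%H:%M:%S")
        .ok()
        .or_else(|| {
            // A bare date is valid too: expire at midnight that day.
            NaiveDate::parse_from_str(s, "%Y-%m-%d")
                .ok()
                .map(|d| d.and_hms(0, 0, 0))
        })
}

fn main() {
    assert!(parse_expiry("2021-12-20").is_some());
    assert!(parse_expiry("2021-12-20T15:30:11").is_some());
    assert!(parse_expiry("not-a-date").is_none());
}
```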


Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 13:48:21 +00:00
37329e0784 Bug(auth): Parse YMD date
Use NaiveDate to parse YMD dates instead of NaiveDateTime

fix #2017
2021-12-20 15:30:11 +01:00
eaff393c76 Merge #2025
2025: Fix security index creation r=ManyTheFish a=ManyTheFish

Forbid index creation on alternate routes when the action `index.create` is not given

fix #2024

Co-authored-by: Maxime Legendre <maximelegendre@MacBook-Pro-de-Maxime.local>
2021-12-20 14:04:28 +00:00
a845cd8880 Fix(auth): Forbid index creation on alternate routes
Forbid index creation on alternate routes when the action `index.create` is not given

fix #2024
2021-12-20 14:48:18 +01:00
b28a465304 bug(lib): drop env on last use
fixes the `too many open files` error when running tests by closing the
environment on last drop
2021-12-16 10:57:55 +01:00
845d3114ea Merge #2008
2008: bug(lib): fix get dumps bad error code r=curquiza a=MarinPostma

fix bad error code being returned when getting a dump status, and add a test
close #1994

Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-12-15 18:58:17 +00:00
287fa7ca74 Merge #2006 #2007
2006: chore(http): rename task types r=curquiza a=MarinPostma

Rename
- documentsAddition into documentAddition
- documentsPartial into documentPartial
- documentsDeletion into documentDeletion

close #1999


2007: bug(lib): ignore primary if already set on document addition r=curquiza a=MarinPostma

Ignore the primary key if it is already set on document updates. Add a test to verify the behaviour.

close #2002


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-12-15 16:55:40 +00:00
80ed9654e1 chore(http): rename task types 2021-12-15 17:01:34 +01:00
7ddab7ef31 bug(lib): fix get dumps bad error code 2021-12-15 16:58:05 +01:00
d534a7f7c8 bug(lib): ignore primary if already set on document addition 2021-12-15 14:58:37 +01:00
5af51c852c Merge #1989
1989: Extend API keys r=curquiza a=ManyTheFish

# Pull Request

## What does this PR do?

- Add API keys in snapshots
- Add API keys in dumps
- fix QA #1979

fix #1979
fix #1995
fix #2001
fix #2003

related to #1890

Co-authored-by: many <maxime@meilisearch.com>
2021-12-14 17:22:58 +00:00
ee7970f603 feat(auth): Extend API keys
- Add API keys in snapshots
- Add API keys in dumps
- Rename action indexes.add to indexes.create
- fix QA #1979

fix #1979
fix #1995
fix #2001
fix #2003
related to #1890
2021-12-14 17:33:39 +01:00
5453877ca7 Merge #1982
1982: Set fail-fast to false in publish-binaries CI r=curquiza a=curquiza

This prevents the other jobs from failing if one of them fails.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-12-14 15:37:26 +00:00
ea0a5271f7 Merge #1908
1908: Update CONTRIBUTING.md r=curquiza a=ferdi05

Added a sentence on other means to contribute. If we like it, we can add it in some other places.

# Pull Request

## What does this PR do?
Improves `CONTRIBUTING`

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to MeiliSearch!


Co-authored-by: Ferdinand Boas <ferdinand.boas@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2021-12-09 14:02:48 +00:00
879cc4ec26 Merge #1984
1984: Support boolean for the no-analytics flag r=Kerollmops a=Kerollmops

This PR fixes an issue with the `no-analytics` flag that was ignoring the value passed to it: a `no-analytics false` was understood as a plain `no-analytics` and was effectively disabling the analytics instead of enabling them. I found [a closed issue about this exact behavior on the structopt repository](https://github.com/TeXitoi/structopt/issues/468) and applied it here.

I don't think we should update the documentation as it must have worked like this from the start of this project. I tested it on my machine and it is working great now. Thank you `@nicolasvienot` for this issue report.

Fixes #1983.

Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-12-08 16:41:46 +00:00
6ac2475aba Fix the no-analytics flag in the tests 2021-12-08 12:02:18 +01:00
47d5f659e0 Bump the structopt crate to 0.3.25 2021-12-08 11:24:40 +01:00
8c9e51e94f Make sure that we can also specify the no-analytics flags with a boolean 2021-12-08 11:23:21 +01:00
0da5aca9f6 Set fail-fast to false in publish-binaries CI 2021-12-08 09:54:13 +01:00
9906db9e64 Merge #1978
1978: Fix of `release-v0.25.0` branch into `main` r=curquiza a=curquiza

The fixes in #1976 should be on main to be taken into account by 
```
curl -L https://install.meilisearch.com | sh
```

Co-authored-by: Yann Prono <yann.prono@nist.gov>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
2021-12-07 17:13:38 +00:00
8096b568f0 Merge #1976 #1977
1976: Fix download-latest.sh r=curquiza a=Mcdostone

# Pull Request

## What does this PR do?
Fixes #1975

The script was broken because `grep` matched the word **draft** in the changelog of [v0.25.0rc0](https://github.com/meilisearch/MeiliSearch/releases/tag/v0.25.0rc0)

> Misc
>    Remove email address from the launch message (#1896) `@curquiza`
>    Remove release drafter workflow (#1882) `@curquiza`                           ⚠️👀


## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to MeiliSearch!
> Your product is awesome!


1977: Fix Dockerfile r=MarinPostma a=curquiza

Remove this error

<img width="830" alt="Capture d’écran 2021-12-07 à 17 00 46" src="https://user-images.githubusercontent.com/20380692/145063294-51ae2c50-2468-47e9-a891-542d824cad8e.png">


Co-authored-by: Yann Prono <yann.prono@nist.gov>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-12-07 16:27:51 +00:00
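The root cause behind #1976 is easy to reproduce: the script greps the raw release JSON for `tag_name|draft|prerelease` (the pattern is visible in the `is-latest-release.sh` diff further down), so the word **drafter** inside a changelog line also matches. A minimal sketch:

```bash
# The v0.25.0rc0 changelog contains the word "drafter", which the pattern matches:
echo 'Remove release drafter workflow (#1882)' \
  | grep -E "tag_name|draft|prerelease"
# -> the changelog line is printed even though it is not a release field,
#    corrupting the parsed list of tags and draft/prerelease flags
```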
2934a77832 Fix Dockerfile 2021-12-07 17:00:24 +01:00
cf6cb938a6 update download-latest.sh 2021-12-07 16:37:22 +01:00
8ff6b1b540 update download-latest.sh 2021-12-07 16:23:30 +01:00
a938a9ab0f Merge #1974
1974: Update version for the next release (v0.25.0) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-12-07 13:11:24 +00:00
ae73386723 Update version for the next release (v0.25.0) 2021-12-07 14:00:43 +01:00
34c8a859eb Merge pull request #1973 from meilisearch/drop-dump-v1
feat(dumps): drop dump V1 support
2021-12-07 14:00:06 +01:00
80d039042b Update CONTRIBUTING.md 2021-12-07 11:58:44 +01:00
5606e22d97 Update CONTRIBUTING.md
typo
2021-12-07 11:46:08 +01:00
23e35fa526 feat(dumps): drop dump V1 support 2021-12-07 10:36:27 +01:00
82033f935e Merge #1970
1970: Use milli reexported tokenizer  r=curquiza a=ManyTheFish

Use milli reexported tokenizer instead of importing meilisearch-tokenizer dependency.

fix #1888

Co-authored-by: many <maxime@meilisearch.com>
2021-12-07 08:50:44 +00:00
ae2b0e7aa7 Use milli reexported tokenizer instead of importing meilisearch-tokenizer dependency 2021-12-06 17:18:28 +01:00
948615537b Merge #1965
1965: Reintroduce engine version file r=MarinPostma a=irevoire

Right now if you boot up MeiliSearch and point it to a DB directory created with a previous version of MeiliSearch the existing indexes will be deleted. This [used to be](51d7c84e73) prevented by a startup check which would compare the current engine version vs what was stored in the DB directory's version file, but this functionality seems to have been lost after a few refactorings of the code.

In order to go back to the old behavior we'll need to reintroduce the `VERSION` file that used to be present; I considered reusing the `metadata.json` file used in the dumps feature, but this seemed like the simpler and more straightforward approach. As the intent is just to restore functionality, the implementation is quite basic. I imagine that in the future we could build on this and do things like compatibility across major/minor versions and even migrating between formats.

This PR was made thanks to `@mbStavola` and is basically a port of his PR #1860 after a big refactor of the code in #1796.

Closes #1840

Co-authored-by: Matt Stavola <m.freitas@offensive-security.com>
2021-12-06 13:39:37 +00:00
a0e129304c feat(lib): Reintroduce engine version file
Right now if you boot up MeiliSearch and point it to a DB directory created with a previous version of MeiliSearch the existing indexes will be deleted. This used to be prevented by a startup check which would compare the current engine version vs what was stored in the DB directory's version file, but this functionality seems to have been lost after a few refactorings of the code.
In order to go back to the old behavior we'll need to reintroduce the VERSION file that used to be present; I considered reusing the metadata.json file used in the dumps feature, but this seemed like the simpler and more straightforward approach. As the intent is just to restore functionality, the implementation is quite basic. I imagine that in the future we could build on this and do things like compatibility across major/minor versions and even migrating between formats.

This PR was made thanks to @mbStavola

Closes #1840
2021-12-06 14:30:56 +01:00
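A minimal sketch of the reintroduced startup check, expressed as shell pseudologic rather than the actual Rust implementation (paths and messages are assumptions):

```bash
db_dir='./data.ms'
engine_version='0.25.0'

if [ -f "$db_dir/VERSION" ]; then
  # A VERSION file exists: refuse to open a DB created by another engine version.
  stored="$(cat "$db_dir/VERSION")"
  if [ "$stored" != "$engine_version" ]; then
    echo "DB created by Meilisearch $stored, refusing to open it with $engine_version" >&2
    exit 1
  fi
else
  # First boot on this directory: record the engine version for future checks.
  echo "$engine_version" > "$db_dir/VERSION"
fi
```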
8d72d538de Merge #1890
1890: Api keys r=MarinPostma a=ManyTheFish

# Pull Request

## API keys management and authorizations
Fixes #1867

## PR checklist

- [x] Test `/keys` routes
- [x] Implement `/keys` routes
- [x] Test API key authorization
- [x] Implement API key authorization
- [x] Rename Authentication header
- [x] default key creation

## Postponed in another PR
- dumps  (waiting for #1796)
- snapshot  (waiting for #1796)




Co-authored-by: many <maxime@meilisearch.com>
2021-12-06 13:23:23 +00:00
ffefd0caf2 feat(auth): API keys
implements:
https://github.com/meilisearch/specifications/blob/develop/text/0085-api-keys.md

- Add tests on API keys management route (meilisearch-http/tests/auth/api_keys.rs)
- Add tests checking authorizations on each meilisearch routes (meilisearch-http/tests/auth/authorization.rs)
- Implement API keys management routes (meilisearch-http/src/routes/api_key.rs)
- Create module to manage API keys and authorizations (meilisearch-auth)
- Reimplement GuardedData to extend authorizations (meilisearch-http/src/extractors/authentication/mod.rs)
- Change X-MEILI-API-KEY by Authorization Bearer (meilisearch-http/src/extractors/authentication/mod.rs)
- Change meilisearch routes to fit to the new authorization feature (meilisearch-http/src/routes/)

- close #1867
2021-12-06 09:52:41 +01:00
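A minimal sketch of the header change described above; `MY_API_KEY` is a placeholder value:

```bash
# Before: the custom header
#   curl 'http://localhost:7700/keys' -H 'X-MEILI-API-KEY: MY_API_KEY'
# After: a standard Authorization header with a Bearer token
curl 'http://localhost:7700/keys' \
  -H 'Authorization: Bearer MY_API_KEY'
```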
fa196986c2 Merge #1796
1796: Feature branch: Task store r=irevoire a=MarinPostma

# Feature branch: Task Store

## Spec todo
https://github.com/meilisearch/specifications/blob/develop/text/0060-refashion-updates-apis.md

- [x] The update resource is renamed task. The names of existing API routes are also changed to reflect this change.
- [x]    Tasks are now also accessible as an independent resource of an index. GET - /tasks; GET - /tasks/:taskUid
- [x] The task uid is not incremented by index anymore. The sequence is generated globally.
- [x] A task_not_found error is introduced.
- [x] The format of the task object is updated (see the illustrative sketch after this list).
- [x] updateId becomes uid.
- [x] Attributes of an error appearing in a failed task are now contained in a dedicated error object.
- [x] type is no longer an object. It now becomes a string containing the values of its name field previously defined in the type object.
- [x] The possible values for the type field are reworked to be more clear and consistent with our naming rules.
- [x] A details object is added to contain specific information related to a task payload that was previously displayed in the type nested object. Previous number key is renamed numberOfDocuments.
- [x] An indexUid field is added to give information about the related index on which the task is performed.
- [x] duration format has been updated to express an ISO 8601 duration.
- [x] processed status changes to succeeded.
- [x] startedProcessingAt is updated to startedAt.
- [x] processedAt is updated to finishedAt.
- [x] 202 Accepted requests previously returning an updateId are now returning a summarized task object.
- [x] MEILI_MAX_UDB_SIZE env var is updated MEILI_MAX_TASK_DB_SIZE.
- [x] --max-udb-size cli option is updated to --max-task-db-size.
- [x] task object lists are now returned under a results array.
- [x] Each operation on an index (creation, update, deletion) is now asynchronous and represented by a task.
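Putting the checklist together, a task fetched through the new routes would look roughly like this (field values are invented for illustration):

```bash
curl 'http://localhost:7700/tasks/1'
# Hypothetical response shape assembled from the checklist above:
# {
#   "uid": 1,
#   "indexUid": "movies",
#   "status": "succeeded",
#   "type": "documentAddition",
#   "details": { "numberOfDocuments": 19547 },
#   "duration": "PT1.5S",
#   "startedAt": "2021-12-02T10:00:00Z",
#   "finishedAt": "2021-12-02T10:00:01Z"
# }
```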

## Todo tech

- [x] Restore Snapshots
- [x] Restore dumps of documents
- [x] Implements the dump of updates
- [x] Error  handling
- [x] Fix stats
- [x] Restore the Analytics
- [x] [Add the new analytics](https://github.com/meilisearch/specifications/pull/92/files)
- [x] Fix tests
- [x] ~Deleting tasks when index is deleted (see below)~ see #1891 instead
- [x] Improve details for documents addition and deletion tasks
- [ ] Add integration test
- [ ] Test task store filtering
- [x] Rename `UuidStore` to `IndexMetaStore`, and simplify the trait.
- [x] Fix task store initialization: fill pending queue from hard state
- [x] Synchronously return error when creating an index with an invalid index_uid and add test
- [x] Task should be returned in decreasing uid + tests (on index task route)
- [x] Summarized task view
- [x] fix snapshot permissions


## Implementation

### Linked PRs
- #1889
- #1891
- #1892
- #1902
- #1906
- #1911
- #1914
- #1915
- #1916
- #1918
- #1924
- #1925
- #1926
- #1930
- #1936
- #1937
- #1942
- #1944
- #1945
- #1946
- #1947
- #1950
- #1951
- #1957
- #1959
- #1960
- #1961
- #1962
- #1964

### Linked PRs in milli:
- https://github.com/meilisearch/milli/pull/414
- https://github.com/meilisearch/milli/pull/409
- https://github.com/meilisearch/milli/pull/406
- https://github.com/meilisearch/milli/pull/418

### Issues
- close #1687
- close #1786
- close #1940
- close #1948
- close #1949
- close #1932
- close #1956

### Spec patches
- https://github.com/meilisearch/specifications/pull/90


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-12-03 11:36:53 +00:00
a30e02c18c feat(all): Task store
implements:
https://github.com/meilisearch/specifications/blob/develop/text/0060-refashion-updates-apis.md

linked PR:

- #1889
- #1891
- #1892
- #1902
- #1906
- #1911
- #1914
- #1915
- #1916
- #1918
- #1924
- #1925
- #1926
- #1930
- #1936
- #1937
- #1942
- #1944
- #1945
- #1946
- #1947
- #1950
- #1951
- #1957
- #1959
- #1960
- #1961
- #1962
- #1964

- https://github.com/meilisearch/milli/pull/414
- https://github.com/meilisearch/milli/pull/409
- https://github.com/meilisearch/milli/pull/406
- https://github.com/meilisearch/milli/pull/418

- close #1687
- close #1786
- close #1940
- close #1948
- close #1949
- close #1932
- close #1956
2021-12-02 20:14:42 +01:00
c9f3726447 Merge #1893
1893: Make matches work with numerical value r=MarinPostma a=Thearas

# Pull Request

## What does this PR do?

Implement #1883.

I have tested this PR with unit tests. It appears to be working properly:
![image](https://user-images.githubusercontent.com/44015907/141141082-dad8cd18-e803-408f-ad6a-c7a212b7ec88.png)

PTAL `@curquiza` 

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
2021-11-29 13:38:57 +00:00
8363200fd7 Merge #1910
1910: After v0.24.0: import `stable` in `main` r=MarinPostma a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-11-17 12:48:56 +00:00
37548eb720 Merge #1896
1896: Remove email address from the message at the launch r=irevoire a=curquiza

I suggest removing this email address from the message at the launch since it can encourage people to think this is an email address for support. Is this something we want, `@meilisearch/devrel-team`, given that we mostly redirect people to the forum or the Slack?

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-11-16 16:29:05 +00:00
dadce6032d Update CONTRIBUTING.md
Added a sentence on other means to contribute. If we like it, we can add it in some other places.
2021-11-16 17:27:18 +01:00
0a1d2ce231 Merge #1904
1904: Update mini-dashboard version to v0.1.5 r=curquiza a=curquiza

Update the mini-dashboard with its latest version (v0.1.5)

Check with `@mdubus,` replaces https://github.com/meilisearch/MeiliSearch/pull/1903

Fixes #1898 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-11-15 18:02:07 +00:00
9d01c5d882 Update mini-dashboard version to v0.1.5 2021-11-15 18:54:55 +01:00
53fc2edab3 Merge #1897
1897: Add ARM image for Docker to CI r=irevoire a=curquiza

Fixes #1315 

- [x] Publish MeiliSearch's docker image for `arm64`
- [x] Add `workflow_dispatch` event in case we need to re-trigger it after a failure without creating a new release
- [x] Use our own server to run the GitHub runner since this CI is really slow (it takes 1h on our server instead of 4h)
- [x] Open an issue for a refactor by merging both files in one file (https://github.com/meilisearch/MeiliSearch/issues/1901)

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-11-15 16:16:34 +00:00
b7c5b78a61 Remove workflow_dispatch 2021-11-15 14:13:40 +01:00
f081dc2001 Remove checkout 2021-11-12 21:31:18 +01:00
2cf7daa227 Use self-hosted runner 2021-11-12 18:49:02 +01:00
9d75fbc619 Fix docker meta job 2021-11-11 19:42:30 +01:00
3b1b9a277b Remove context 2021-11-11 16:38:45 +01:00
40e87b9544 Add checkout 2021-11-11 16:37:01 +01:00
ded7922be5 Add context and platform 2021-11-11 16:30:16 +01:00
11ef64ee43 Fix credentials 2021-11-11 16:02:32 +01:00
5e6d7b7649 Add workflow dispatch event 2021-11-11 16:00:10 +01:00
5fd9616b5f Add ARM image for Docker to CI 2021-11-11 15:58:44 +01:00
a1227648ba Remove email address from the message at the launch 2021-11-11 14:36:45 +01:00
919f4173cf Merge #1895
1895: Fix aggregated search events name r=irevoire a=gmourier



Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2021-11-11 09:35:34 +00:00
7c5aad4073 fix aggregated search event names 2021-11-11 01:38:10 +01:00
d47ccd9199 Merge #1894
1894: Fix the 99th percentile in the analytics r=gmourier a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-11-10 17:27:39 +00:00
cc5e884b34 fix the 99th percentile in the analytics 2021-11-10 18:26:38 +01:00
ac5535055f Make matches work with numerical value 2021-11-10 23:10:30 +08:00
15cb4dafa9 Merge #1882
1882: Remove release drafter r=curquiza a=curquiza

Remove release drafter since it's not used at the moment due to the specific release process of MeiliSearch.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-11-09 14:36:32 +00:00
8ca76d9fdf Merge #1884
1884: Remove Hacktoberfest section from CONTRIBUTING.md r=curquiza a=meili-bot

_This PR is auto-generated._

Remove Hacktoberfest section from CONTRIBUTING.md


Co-authored-by: meili-bot <74670311+meili-bot@users.noreply.github.com>
2021-11-09 13:19:30 +00:00
f62e52ec68 Update CONTRIBUTING.md 2021-11-09 14:15:50 +01:00
bf01c674ea Remove release drafter 2021-11-08 14:49:50 +01:00
e9b6a05b75 Merge #1878
1878: Add error object in task r=MarinPostma a=ManyTheFish

# Pull Request

## What does this PR do?
Fixes #1877

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Update error test
- [x] Remove flattening of errors during task serialization


Co-authored-by: many <maxime@meilisearch.com>
2021-11-04 17:16:30 +00:00
6bbc1b4316 Remove error flattening in task serialization 2021-11-04 17:40:28 +01:00
3c696da274 Update tests 2021-11-04 17:40:28 +01:00
d9d6dee550 Merge #1873
1873: Change lacking errors r=ManyTheFish a=ManyTheFish



Co-authored-by: many <maxime@meilisearch.com>
2021-11-04 14:21:52 +00:00
cc6306c0e1 Update milli version 2021-11-04 14:57:45 +01:00
b59145385e Fix PR comments 2021-11-04 14:57:27 +01:00
3f4e0ec971 Merge #1875 #1876
1875: Fix search post event and disk size analytics r=irevoire a=gmourier

- Branch POST search on the post_search aggregator
- Use largest disk `total_space` instead of `available_space` 

1876: Update SEGMENT_API_KEY r=irevoire a=gmourier

Branch it on our Segment production stack

Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2021-11-04 10:16:13 +00:00
ec0716ddd1 Merge #1874
1874: Fix small typo in download-latest.sh r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2021-11-04 09:50:55 +00:00
6d6725b3b8 Update SEGMENT_API_KEY 2021-11-04 08:10:12 +01:00
6660be2cb7 Branch POST /search on the dedicated analytics aggregator 2021-11-04 08:03:48 +01:00
847fcb570b Use total_space of the largest disk instead of available_space 2021-11-04 08:03:11 +01:00
4095ec462e Merge #1865
1865: Aggregate the search even when it fails r=MarinPostma a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2021-11-03 16:49:05 +00:00
f7f2421e71 Update download-latest.sh 2021-11-03 17:09:50 +01:00
b664a46e91 Update milli version 2021-11-03 16:11:20 +01:00
06e6eaa7b4 Remove useless Facet variant 2021-11-03 16:11:09 +01:00
30a094cbb2 Change lacking errors 2021-11-03 14:33:33 +01:00
904bae98f8 send the analytics even when the search fail 2021-11-02 12:38:01 +01:00
191 changed files with 21115 additions and 9118 deletions

View File

@ -23,8 +23,8 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**MeiliSearch version:** [e.g. v0.20.0]
**Meilisearch version:** [e.g. v0.20.0]
**Additional context**
Additional information that may be relevant to the issue.
[e.g. architecture, device, OS, browser]
[e.g. architecture, device, OS, browser]

View File

@ -1,10 +1,13 @@
contact_links:
- name: Feature request
- name: Language support request & feedback
url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal?discussions_q=label%3Aproduct%3Acore%3Atokenizer+category%3A%22Feedback+%26+Feature+Proposal%22
about: The requests and feedback regarding Language support are not managed in this repository. Please upvote the related discussion in our dedicated product repository or open a new one if it doesn't exist.
- name: Feature request & feedback
url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal
about: The feature requests are not managed in this repository, please open a discussion in our dedicated product repository
about: The feature requests and feedback regarding the already existing features are not managed in this repository. Please open a discussion in our dedicated product repository
- name: Documentation issue
url: https://github.com/meilisearch/documentation/issues/new
about: For documentation issues, open an issue or a PR in the documentation repository
- name: Support questions & other
url: https://github.com/meilisearch/MeiliSearch/discussions/new
url: https://github.com/meilisearch/meilisearch/discussions/new
about: For any other question, open a discussion in this repository

13
.github/dependabot.yml vendored Normal file
View File

@ -0,0 +1,13 @@
# Set update schedule for GitHub Actions only
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
labels:
- 'skip changelog'
- 'dependencies'
rebase-strategy: disabled

View File

@ -1,13 +0,0 @@
name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
version-template: '0.21.0-alpha.$PATCH'
exclude-labels:
- 'skip-changelog'
template: |
## Changes
$CHANGES
no-changes-template: 'Changes are coming soon 😎'
sort-direction: 'ascending'
version-resolver:
default: patch

28
.github/scripts/check-release.sh vendored Normal file
View File

@ -0,0 +1,28 @@
#!/bin/bash
# check_tag $current_tag $file_tag $file_name
function check_tag {
if [[ "$1" != "$2" ]]; then
echo "Error: the current tag does not match the version in $3: found $2 - expected $1"
ret=1
fi
}
ret=0
current_tag=${GITHUB_REF#'refs/tags/v'}
toml_files='*/Cargo.toml'
for toml_file in $toml_files;
do
file_tag="$(grep '^version = ' $toml_file | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')"
check_tag $current_tag $file_tag $toml_file
done
lock_file='Cargo.lock'
lock_tag=$(grep -A 1 'name = "meilisearch-auth"' $lock_file | grep version | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')
check_tag $current_tag $lock_tag $lock_file
if [[ "$ret" -eq 0 ]] ; then
echo 'OK'
fi
exit $ret
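A quick way to exercise this script locally, faking the ref that GitHub Actions would provide (the tag value is only an example):

```bash
GITHUB_REF='refs/tags/v0.25.0' bash .github/scripts/check-release.sh
# Prints 'OK' and exits 0 when every workspace Cargo.toml (and the
# meilisearch-auth entry in Cargo.lock) declares version 0.25.0; exits 1 otherwise.
```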

View File

@ -1,14 +1,14 @@
#!/bin/sh
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v0.10.1
# new tag -> v0.8.12
# The new tag should not be the latest
# So it returns "false", the CI should not run for the release v0.8.12
# Used in GHA in publish-docker-latest.yml
# Was used in our CIs to publish the latest docker image. Not used anymore; will be used again when v1 and v2 are out and we want to maintain multiple stable versions.
# Returns "true" or "false" (as a string) to be used in the `if` in GHA
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v2.1.1
# new tag -> v1.20.3
# The new tag (v1.20.3) should NOT be the latest
# So it returns "false", the `latest` tag should not be updated for the release v1.20.3 and still need to correspond to v2.1.1
# GLOBAL
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
@ -74,7 +74,7 @@ semverLT() {
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file"
curl -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file"
releases=$(cat "$temp_file" | \
grep -E "tag_name|draft|prerelease" \
| tr -d ',"' | cut -d ':' -f2 | tr -d ' ')
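As the updated header notes, the script is kept around for a future multi-stable-version setup. Based on the old workflow removed later in this diff, it is invoked like this (a sketch; how the current tag is supplied is not shown in this hunk):

```bash
# Compare the current tag against the latest published release (by semver):
is_latest="$(sh .github/is-latest-release.sh)"
echo "Should this tag become 'latest'? $is_latest"   # prints "true" or "false"
```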

View File

@ -1,20 +0,0 @@
# GitHub Actions Workflow for MeiliSearch
> **Note:**
> - We do not use [cache](https://github.com/actions/cache) yet but we could use it to speed up CI
## Workflow
- On each pull request, we trigger `cargo test`.
- On each tag, we build:
- the tagged Docker image and publish it to Docker Hub
- the binaries for MacOS, Ubuntu, and Windows
- the Debian package
- On each stable release (`v*.*.*` tag):
- we build the `latest` Docker image and publish it to Docker Hub
- we publish the binary to Homebrew and Gemfury
## Problems
- We do not test on Windows because we are unable to make it work; there is a disk space problem.

View File

@ -8,7 +8,7 @@ jobs:
nightly-coverage:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
@ -25,7 +25,7 @@ jobs:
RUSTFLAGS: "-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Cpanic=unwind -Zpanic_abort_tests"
- uses: actions-rs/grcov@v0.1
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ${{ steps.coverage.outputs.report }}

View File

@ -0,0 +1,23 @@
name: Create issue to upgrade dependencies
on:
schedule:
- cron: '0 0 1 */3 *'
workflow_dispatch:
jobs:
create-issue:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Create an issue
uses: actions-ecosystem/action-create-issue@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
title: Upgrade dependencies
body: |
We need to update the dependencies of the Meilisearch repository, and, if possible, the dependencies of all the core-team repositories that Meilisearch depends on (milli, charabia, heed...).
⚠️ This issue should only be done at the beginning of the sprint!
labels: |
dependencies
maintenance

View File

@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install cargo-flaky
run: cargo install cargo-flaky
- name: Run cargo flaky 100 times

View File

@ -5,10 +5,35 @@ on:
name: Publish binaries to release
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Check if the tag has the v<number>.<number>.<number> format.
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
- name: Check tag format
if: github.event_name != 'schedule'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
- name: Check release validity
if: steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
publish:
name: Publish for ${{ matrix.os }}
name: Publish binary for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: check-version
strategy:
fail-fast: false
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
include:
@ -26,7 +51,7 @@ jobs:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build
run: cargo build --release --locked
- name: Upload binaries to release
@ -37,28 +62,70 @@ jobs:
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-armv8:
name: Publish for ARMv8
runs-on: ubuntu-18.04
publish-aarch64:
name: Publish binary for aarch64
runs-on: ${{ matrix.os }}
needs: check-version
continue-on-error: false
strategy:
fail-fast: false
matrix:
include:
- build: aarch64
os: ubuntu-18.04
target: aarch64-unknown-linux-gnu
linker: gcc-aarch64-linux-gnu
use-cross: true
asset_name: meilisearch-linux-aarch64
steps:
- uses: actions/checkout@v2
- uses: uraimo/run-on-arch-action@v2.1.1
id: runcmd
- name: Checkout repository
uses: actions/checkout@v3
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
with:
arch: aarch64 # aka ARMv8
distro: ubuntu18.04
env: |
JEMALLOC_SYS_WITH_LG_PAGE: 16
run: |
apt update
apt install -y curl gcc make
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain stable
source $HOME/.cargo/env
cargo build --release --locked
toolchain: stable
profile: minimal
target: ${{ matrix.target }}
override: true
- name: APT update
run: |
sudo apt update
- name: Install target specific tools
if: matrix.use-cross
run: |
sudo apt-get install -y ${{ matrix.linker }}
- name: Configure target aarch64 GNU
if: matrix.target == 'aarch64-unknown-linux-gnu'
## Environment variable is not passed using env:
## LD gold won't work with MUSL
# env:
# JEMALLOC_SYS_WITH_LG_PAGE: 16
# RUSTFLAGS: '-Clink-arg=-fuse-ld=gold'
run: |
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
with:
command: build
use-cross: ${{ matrix.use-cross }}
args: --release --target ${{ matrix.target }}
- name: List target output files
run: ls -lR ./target
- name: Upload the binary to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/release/meilisearch
asset_name: meilisearch-linux-armv8
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
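The `check-tag-format` step added above hinges on a single regex; a minimal sketch of how it classifies tags:

```bash
for tag in v0.25.0 v0.25.0rc0 v1.2; do
  if [[ "$tag" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "$tag -> stable: release validity is checked"
  else
    echo "$tag -> not stable (RC or other tag): the check is skipped"
  fi
done
# v0.25.0    -> stable
# v0.25.0rc0 -> not stable
# v1.2       -> not stable
```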

View File

@ -1,76 +0,0 @@
name: Publish aarch64 binary
on:
release:
types: [published]
env:
CARGO_TERM_COLOR: always
jobs:
publish-aarch64:
name: Publish to Github
runs-on: ${{ matrix.os }}
continue-on-error: false
strategy:
fail-fast: false
matrix:
include:
- build: aarch64
os: ubuntu-18.04
target: aarch64-unknown-linux-gnu
linker: gcc-aarch64-linux-gnu
use-cross: true
asset_name: meilisearch-linux-aarch64
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
target: ${{ matrix.target }}
override: true
- name: APT update
run: |
sudo apt update
- name: Install target specific tools
if: matrix.use-cross
run: |
sudo apt-get install -y ${{ matrix.linker }}
- name: Configure target aarch64 GNU
if: matrix.target == 'aarch64-unknown-linux-gnu'
## Environment variable is not passed using env:
## LD gold won't work with MUSL
# env:
# JEMALLOC_SYS_WITH_LG_PAGE: 16
# RUSTFLAGS: '-Clink-arg=-fuse-ld=gold'
run: |
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
with:
command: build
use-cross: ${{ matrix.use-cross }}
args: --release --target ${{ matrix.target }}
- name: List target output files
run: ls -lR ./target
- name: Upload the binary to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}

View File

@ -5,16 +5,25 @@ on:
types: [released]
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Check release validity
run: bash .github/scripts/check-release.sh
debian:
name: Publish debian package
runs-on: ubuntu-18.04
needs: check-version
steps:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build deb package
run: cargo deb -p meilisearch-http -o target/debian/meilisearch.deb
- name: Upload debian pkg to release
@ -30,6 +39,7 @@ jobs:
homebrew:
name: Bump Homebrew formula
runs-on: ubuntu-18.04
needs: check-version
steps:
- name: Create PR to Homebrew
uses: mislav/bump-homebrew-formula-action@v1

View File

@ -0,0 +1,71 @@
---
on:
schedule:
- cron: '0 4 * * *' # Every day at 4:00am
push:
tags:
- '*'
name: Publish tagged images to Docker Hub
jobs:
docker:
runs-on: docker
steps:
- uses: actions/checkout@v2
# Check if the tag has the v<number>.<number>.<number> format. If yes, it means we are publishing an official release.
# In this situation, we need to set `output.stable` to create/update the following tags (additionally to the `vX.Y.Z` Docker tag):
# - a `vX.Y` (without patch version) Docker tag
# - a `latest` Docker tag
- name: Check tag format
if: github.event_name != 'schedule'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
# Check only the validity of the tag for official releases (not for pre-releases or other tags)
- name: Check release validity
if: github.event_name != 'schedule' && steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
if: github.event_name != 'schedule'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: getmeili/meilisearch
# The latest and `vX.Y` tags are only pushed for the official Meilisearch releases
# See https://github.com/docker/metadata-action#latest-tag
flavor: latest=false
tags: |
type=ref,event=tag
type=semver,pattern=v{{major}}.{{minor}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
type=raw,value=latest,enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v3
with:
# We do not push tags for the cron jobs, this is only for test purposes
push: ${{ github.event_name != 'schedule' }}
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}
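Under the `flavor`/`tags` rules above, the pushed images per trigger would be (an illustrative summary, not CI output):

```bash
# git tag v0.25.0     -> getmeili/meilisearch:v0.25.0, :v0.25, :latest
# git tag v0.25.0rc0  -> getmeili/meilisearch:v0.25.0rc0 only
# nightly cron run    -> image built for linux/amd64 and linux/arm64 but not pushed
```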

View File

@ -1,22 +0,0 @@
---
on:
release:
types: [released]
name: Publish latest image to Docker Hub
jobs:
build:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Check if current release is latest
run: echo "##[set-output name=is_latest;]$(sh .github/is-latest-release.sh)"
id: release
- name: Publish to Registry
if: steps.release.outputs.is_latest == 'true'
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: getmeili/meilisearch
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}

View File

@ -1,22 +0,0 @@
---
on:
push:
tags:
- '*'
name: Publish tagged image to Docker Hub
jobs:
build:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Publish to Registry
uses: elgohr/Publish-Docker-Github-Action@master
env:
COMMIT_SHA: ${{ github.sha }}
with:
name: getmeili/meilisearch
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
tag_names: true

View File

@ -1,16 +0,0 @@
name: Release Drafter
on:
push:
branches:
- main
jobs:
update_release_draft:
runs-on: ubuntu-latest
steps:
- uses: release-drafter/release-drafter@v5
with:
config-name: release-draft-template.yml
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_DRAFTER_TOKEN }}

View File

@ -11,6 +11,8 @@ on:
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
RUSTFLAGS: "-D warnings"
jobs:
tests:
@ -21,9 +23,9 @@ jobs:
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo check without any default features
uses: actions-rs/cargo@v1
with:
@ -35,11 +37,30 @@ jobs:
command: test
args: --locked --release
# We run tests in debug also, to make sure that the debug_assertions are hit
test-debug:
name: Run tests in debug
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.0.0
- name: Run tests in debug
uses: actions-rs/cargo@v1
with:
command: test
args: --locked
clippy:
name: Run Clippy
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
@ -47,7 +68,7 @@ jobs:
override: true
components: clippy
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo clippy
uses: actions-rs/cargo@v1
with:
@ -58,14 +79,14 @@ jobs:
name: Run Rustfmt
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
toolchain: stable
override: true
components: rustfmt
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo fmt
run: cargo fmt --all -- --check

View File

@ -1,28 +1,26 @@
# Contributing
First, thank you for contributing to MeiliSearch! The goal of this document is to provide everything you need to start contributing to MeiliSearch.
First, thank you for contributing to Meilisearch! The goal of this document is to provide everything you need to start contributing to Meilisearch.
Remember that there are many ways to contribute other than writing code: writing [tutorials or blog posts](https://github.com/meilisearch/awesome-meilisearch), improving [the documentation](https://github.com/meilisearch/documentation), submitting [bug reports](https://github.com/meilisearch/meilisearch/issues/new?assignees=&labels=&template=bug_report.md&title=) and [feature requests](https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal)...
The code in this repository is only concerned with managing multiple indexes, handling the update store, and exposing an HTTP API. Search and indexation are the domain of our core engine, [`milli`](https://github.com/meilisearch/milli), while tokenization is handled by [our `charabia` library](https://github.com/meilisearch/charabia/).
If Meilisearch does not offer optimized support for your language, please consider contributing to `charabia` by following the [CONTRIBUTING.md file](https://github.com/meilisearch/charabia/blob/main/CONTRIBUTING.md) and integrating your intended normalizer/segmenter.
## Table of Contents
- [Hacktoberfest](#hacktoberfest)
- [Assumptions](#assumptions)
- [How to Contribute](#how-to-contribute)
- [Development Workflow](#development-workflow)
- [Git Guidelines](#git-guidelines)
## Hacktoberfest
It's [Hacktoberfest month](https://blog.meilisearch.com/contribute-hacktoberfest-2021/)! 🥳
🚀 If your PR gets accepted it will count into your participation to Hacktoberfest!
✅ To be accepted it has either to have been merged, approved or tagged with the `hacktoberfest-accepted` label.
🧐 Don't forget to check the [quality standards](https://hacktoberfest.digitalocean.com/resources/qualitystandards)! Low-quality PRs might get marked as `spam` or `invalid`, and will not count toward your participation in Hacktoberfest.
- [Release Process (for internal team only)](#release-process-for-internal-team-only)
## Assumptions
1. **You're familiar with [Github](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
2. **You've read the MeiliSearch [documentation](https://docs.meilisearch.com).**
3. **You know about the [MeiliSearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
1. **You're familiar with [GitHub](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
2. **You've read the Meilisearch [documentation](https://docs.meilisearch.com).**
3. **You know about the [Meilisearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
Please use this for help.**
## How to Contribute
@ -30,21 +28,21 @@ It's [Hacktoberfest month](https://blog.meilisearch.com/contribute-hacktoberfest
1. Ensure your change has an issue! Find an
[existing issue](https://github.com/meilisearch/meilisearch/issues/) or [open a new issue](https://github.com/meilisearch/meilisearch/issues/new).
* This is where you can get a feel if the change will be accepted or not.
2. Once approved, [fork the MeiliSearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own Github account.
2. Once approved, [fork the Meilisearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own GitHub account.
3. [Create a new Git branch](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository)
4. Review the [Development Workflow](#development-workflow) section that describes the steps to maintain the repository.
5. Make your changes on your branch.
6. [Submit the branch as a Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) pointing to the `main` branch of the MeiliSearch repository. A maintainer should comment and/or review your Pull Request within a few days. Although depending on the circumstances, it may take longer.
6. [Submit the branch as a Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) pointing to the `main` branch of the Meilisearch repository. A maintainer should comment and/or review your Pull Request within a few days. Although depending on the circumstances, it may take longer.
## Development Workflow
### Setup and run MeiliSearch
### Setup and run Meilisearch
```bash
cargo run --release
```
We recommend using the `--release` flag to test the full performance of MeiliSearch.
We recommend using the `--release` flag to test the full performance of Meilisearch.
### Test
@ -52,6 +50,8 @@ We recommend using the `--release` flag to test the full performance of MeiliSea
cargo test
```
This command will be triggered for each PR as a requirement for merging it.
If you get a "Too many open files" error you might want to increase the open file limit using this command:
```bash
@ -76,7 +76,7 @@ As minimal requirements, your commit message should:
We don't follow any other convention, but if you want to use one, we recommend [the Chris Beams one](https://chris.beams.io/posts/git-commit/).
### Github Pull Requests
### GitHub Pull Requests
Some notes on GitHub PRs:
@ -86,6 +86,29 @@ Some notes on GitHub PRs:
The draft PRs are recommended when you want to show that you are working on something and make your work visible.
- The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [Bors](https://github.com/bors-ng/bors-ng) to automatically enforce this requirement without the PR author having to rebase manually.
## Release Process (for internal team only)
Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
### Automation to rebase and Merge the PRs
This project integrates a bot that helps us manage pull requests merging.<br>
_[Read more about this](https://github.com/meilisearch/integration-guides/blob/main/resources/bors.md)._
### How to Publish a new Release
The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/core-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.
### Release assets
For each release, the following assets are created:
- Binaries for different platforms (Linux, MacOS, Windows and ARM architectures) are attached to the GitHub release
- Binaries are pushed to HomeBrew and APT (not published for RC)
- Docker tags are created/updated:
- `vX.Y.Z`
- `vX.Y` (not published for RC)
- `latest` (not published for RC)
<hr>
Thank you again for reading this through. We cannot wait to begin working with you if you made your way through this contributing guide ❤️

2668
Cargo.lock generated

File diff suppressed because it is too large

View File

@ -1,10 +1,18 @@
[workspace]
resolver = "2"
members = [
"meilisearch-http",
"meilisearch-error",
"meilisearch-types",
"meilisearch-lib",
"meilisearch-auth",
"permissive-json-pointer",
]
resolver = "2"
[patch.crates-io]
pest = { git = "https://github.com/pest-parser/pest.git", rev = "51fd1d49f1041f7839975664ef71fe15c7dcaf67" }
[profile.release]
codegen-units = 1
[profile.dev.package.flate2]
opt-level = 3
[profile.dev.package.milli]
opt-level = 3

View File

@ -1,54 +1,46 @@
# Compile
FROM alpine:3.14 AS compiler
FROM rust:alpine3.16 AS compiler
RUN apk update --quiet \
&& apk add -q --no-cache curl build-base
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
RUN apk add -q --update-cache --no-cache build-base openssl-dev
WORKDIR /meilisearch
COPY Cargo.lock .
COPY Cargo.toml .
COPY meilisearch-error/Cargo.toml meilisearch-error/
COPY meilisearch-http/Cargo.toml meilisearch-http/
COPY meilisearch-lib/Cargo.toml meilisearch-lib/
ENV RUSTFLAGS="-C target-feature=-crt-static"
# Create dummy main.rs files for each workspace member to be able to compile all the dependencies
RUN find . -type d -name "meilisearch-*" | xargs -I{} sh -c 'mkdir {}/src; echo "fn main() { }" > {}/src/main.rs;'
# Use `cargo build` instead of `cargo vendor` because we need to not only download but compile dependencies too
RUN $HOME/.cargo/bin/cargo build --release
# Cleanup dummy main.rs files
RUN find . -path "*/src/main.rs" -delete
ARG COMMIT_SHA
ARG COMMIT_DATE
ENV COMMIT_SHA=${COMMIT_SHA} COMMIT_DATE=${COMMIT_DATE}
ENV RUSTFLAGS="-C target-feature=-crt-static"
COPY . .
RUN $HOME/.cargo/bin/cargo build --release
RUN set -eux; \
apkArch="$(apk --print-arch)"; \
if [ "$apkArch" = "aarch64" ]; then \
export JEMALLOC_SYS_WITH_LG_PAGE=16; \
fi && \
cargo build --release
# Run
FROM alpine:3.14
FROM alpine:3.16
ARG USER=meili
ENV HOME /home/${USER}
ENV MEILI_HTTP_ADDR 0.0.0.0:7700
ENV MEILI_SERVER_PROVIDER docker
# download runtime deps as root and create ${USER}
RUN apk update --quiet \
&& apk add -q --no-cache libgcc tini curl \
&& adduser -D ${USER}
WORKDIR ${HOME}
USER ${USER}
# copy file as ${USER} to ${HOME}
COPY --from=compiler /meilisearch/target/release/meilisearch .
&& apk add -q --no-cache libgcc tini curl
# add meilisearch to the `/bin` so you can run it from anywhere and it's easy
# to find.
COPY --from=compiler /meilisearch/target/release/meilisearch /bin/meilisearch
# To stay compatible with the older version of the container (pre v0.27.0) we're
# going to symlink the meilisearch binary in the path to `/meilisearch`
RUN ln -s /bin/meilisearch /meilisearch
# This directory should hold all the data related to meilisearch so we're going
# to move our PWD in there.
# We don't want to put the meilisearch binary in it.
WORKDIR /meili_data
EXPOSE 7700/tcp
ENTRYPOINT ["tini", "--"]
CMD ./meilisearch
CMD /bin/meilisearch

View File

@ -1,6 +1,6 @@
MIT License
Copyright (c) 2019-2021 Meili SAS
Copyright (c) 2019-2022 Meili SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

218
README.md
View File

@ -1,205 +1,103 @@
<p align="center">
<img src="assets/logo.svg" alt="MeiliSearch" width="200" height="200" />
<img src="assets/meilisearch-logo-light.svg?sanitize=true#gh-light-mode-only">
<img src="assets/meilisearch-logo-dark.svg?sanitize=true#gh-dark-mode-only">
</p>
<h1 align="center">MeiliSearch</h1>
<h4 align="center">
<a href="https://www.meilisearch.com">Website</a> |
<a href="https://roadmap.meilisearch.com/tabs/1-under-consideration">Roadmap</a> |
<a href="https://blog.meilisearch.com">Blog</a> |
<a href="https://fr.linkedin.com/company/meilisearch">LinkedIn</a> |
<a href="https://twitter.com/meilisearch">Twitter</a> |
<a href="https://docs.meilisearch.com">Documentation</a> |
<a href="https://docs.meilisearch.com/faq/">FAQ</a>
<a href="https://docs.meilisearch.com/faq/">FAQ</a> |
<a href="https://slack.meilisearch.com">Slack</a>
</h4>
<p align="center">
<a href="https://github.com/meilisearch/MeiliSearch/actions"><img src="https://github.com/meilisearch/MeiliSearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/MeiliSearch"><img src="https://deps.rs/repo/github/meilisearch/MeiliSearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/MeiliSearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://slack.meilisearch.com"><img src="https://img.shields.io/badge/slack-MeiliSearch-blue.svg?logo=slack" alt="Slack"></a>
<a href="https://github.com/meilisearch/MeiliSearch/discussions" alt="Discussions"><img src="https://img.shields.io/badge/github-discussions-red" /></a>
<a href="https://github.com/meilisearch/meilisearch/actions"><img src="https://github.com/meilisearch/meilisearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/meilisearch"><img src="https://deps.rs/repo/github/meilisearch/meilisearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/meilisearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://app.bors.tech/repositories/26457"><img src="https://bors.tech/images/badge_small.svg" alt="Bors enabled"></a>
</p>
<p align="center">Lightning Fast, Ultra Relevant, and Typo-Tolerant Search Engine 🔍</p>
<p align="center">A lightning-fast search engine that fits effortlessly into your apps, websites, and workflow 🔍</p>
**MeiliSearch** is a powerful, fast, open-source, easy to use and deploy search engine. Both searching and indexing are highly customizable. Features such as typo-tolerance, filters, and synonyms are provided out-of-the-box.
For more information about features go to [our documentation](https://docs.meilisearch.com/).
Meilisearch helps you shape a delightful search experience in a snap, offering features that work out-of-the-box to speed up your workflow.
<p align="center">
<img src="assets/trumen-fast.gif" alt="Web interface gif" />
<p align="center" name="demo">
<a href="https://where2watch.meilisearch.com/#gh-light-mode-only" target="_blank">
<img src="assets/demo-light.gif#gh-light-mode-only" alt="A bright colored application for finding movies screening near the user">
</a>
<a href="https://where2watch.meilisearch.com/#gh-dark-mode-only" target="_blank">
<img src="assets/demo-dark.gif#gh-dark-mode-only" alt="A dark colored application for finding movies screening near the user">
</a>
</p>
🔥 [**Try it!**](https://where2watch.meilisearch.com/) 🔥
## ✨ Features
* Search-as-you-type experience (answers < 50 milliseconds)
* Full-text search
* Typo tolerant (understands typos and misspelling)
* Faceted search and filters
* Supports hanzi (Chinese characters)
* Supports synonyms
* Easy to install, deploy, and maintain
* Whole documents are returned
* Highly customizable
* RESTful API
## Getting started
- **Search-as-you-type:** find search results in less than 50 milliseconds
- **[Typo tolerance](https://docs.meilisearch.com/learn/getting_started/customizing_relevancy.html#typo-tolerance):** get relevant matches even when queries contain typos and misspellings
- **[Filtering and faceted search](https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html):** enhance your user's search experience with custom filters and build a faceted search interface in a few lines of code
- **[Sorting](https://docs.meilisearch.com/learn/advanced/sorting.html):** sort results based on price, date, or pretty much anything else your users need
- **[Synonym support](https://docs.meilisearch.com/learn/getting_started/customizing_relevancy.html#synonyms):** configure synonyms to include more relevant content in your search results
- **[Geosearch](https://docs.meilisearch.com/learn/advanced/geosearch.html):** filter and sort documents based on geographic data
- **[Extensive language support](https://docs.meilisearch.com/learn/what_is_meilisearch/language.html):** search datasets in any language, with optimized support for Chinese, Japanese, Hebrew, and languages using the Latin alphabet
- **[Security management](https://docs.meilisearch.com/learn/security/master_api_keys.html):** control which users can access what data with API keys that allow fine-grained permissions handling
- **[Multi-Tenancy](https://docs.meilisearch.com/learn/security/tenant_tokens.html):** personalize search results for any number of application tenants
- **Highly Customizable:** customize Meilisearch to your specific needs or use our out-of-the-box and hassle-free presets
- **[RESTful API](https://docs.meilisearch.com/reference/api/overview.html):** integrate Meilisearch in your technical stack with our plugins and SDKs
- **Easy to install, deploy, and maintain**
### Deploy the Server
## 📖 Documentation
#### Homebrew (Mac OS)
You can consult Meilisearch's documentation at [https://docs.meilisearch.com](https://docs.meilisearch.com/).
```bash
brew update && brew install meilisearch
meilisearch
```
## 🚀 Getting started
#### Docker
For basic instructions on how to set up Meilisearch, add documents to an index, and search for documents, take a look at our [Quick Start](https://docs.meilisearch.com/learn/getting_started/quick_start.html) guide.
```bash
docker run -p 7700:7700 -v "$(pwd)/data.ms:/data.ms" getmeili/meilisearch
```
You may also want to check out [Meilisearch 101](https://docs.meilisearch.com/learn/getting_started/filtering_and_sorting.html) for an introduction to some of Meilisearch's most popular features.
#### Announcing a cloud-hosted MeiliSearch
## ☁️ Meilisearch cloud
Join the closed beta by filling out this [form](https://meilisearch.typeform.com/to/FtnzvZfh).
Join the closed beta for Meilisearch cloud by filling out [this form](https://meilisearch.typeform.com/to/VI2cI2rv).
#### Try MeiliSearch in our Sandbox
## 🧰 SDKs & integration tools
Create a MeiliSearch instance in [MeiliSearch Sandbox](https://sandbox.meilisearch.com/). This instance is free, and will be active for 48 hours.
Install one of our SDKs in your project for seamless integration between Meilisearch and your favorite language or framework!
#### Run on Digital Ocean
Take a look at the complete [Meilisearch integration list](https://docs.meilisearch.com/learn/what_is_meilisearch/sdks.html).
[![DigitalOcean Marketplace](assets/do-btn-blue.svg)](https://marketplace.digitalocean.com/apps/meilisearch?action=deploy&refcode=7c67bd97e101)
![Logos belonging to different languages and frameworks supported by Meilisearch, including React, Ruby on Rails, Go, Rust, and PHP](assets/integrations.png)
#### Deploy on Platform.sh
## ⚙️ Advanced usage
<a href="https://console.platform.sh/projects/create-project?template=https://raw.githubusercontent.com/platformsh/template-builder/master/templates/meilisearch/.platform.template.yaml&utm_content=meilisearch&utm_source=github&utm_medium=button&utm_campaign=deploy_on_platform">
<img src="https://platform.sh/images/deploy/lg-blue.svg" alt="Deploy on Platform.sh" width="180px" />
</a>
Experienced users will want to keep our [API Reference](https://docs.meilisearch.com/reference/api) close at hand.
#### APT (Debian & Ubuntu)
We also offer a wide range of dedicated guides to all Meilisearch features, such as [filtering](https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html), [sorting](https://docs.meilisearch.com/learn/advanced/sorting.html), [geosearch](https://docs.meilisearch.com/learn/advanced/geosearch.html), [API keys](https://docs.meilisearch.com/learn/security/master_api_keys.html), and [tenant tokens](https://docs.meilisearch.com/learn/security/tenant_tokens.html).
```bash
echo "deb [trusted=yes] https://apt.fury.io/meilisearch/ /" > /etc/apt/sources.list.d/fury.list
apt update && apt install meilisearch-http
meilisearch
```
Finally, for more in-depth information, refer to our articles explaining fundamental Meilisearch concepts such as [documents](https://docs.meilisearch.com/learn/core_concepts/documents.html) and [indexes](https://docs.meilisearch.com/learn/core_concepts/indexes.html).
#### Download the binary (Linux & Mac OS)
## 📊 Telemetry
```bash
curl -L https://install.meilisearch.com | sh
./meilisearch
```
Meilisearch collects **anonymized** data from users to help us improve our product. You can [deactivate this](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html#how-to-disable-data-collection) whenever you want.
#### Compile and run it from sources
To request deletion of collected data, please write to us at [privacy@meilisearch.com](mailto:privacy@meilisearch.com). Don't forget to include your `Instance UID` in the message, as this helps us quickly find and delete your data.
If you have the latest stable Rust toolchain installed on your local system, clone the repository and make it your working directory.
If you want to know more about the kind of data we collect and what we use it for, check the [telemetry section](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html) of our documentation.
```bash
git clone https://github.com/meilisearch/MeiliSearch.git
cd MeiliSearch
cargo run --release
```
## 📫 Get in touch!
### Create an Index and Upload Some Documents
Meilisearch is a search engine created by [Meili](https://www.welcometothejungle.com/en/companies/meilisearch), a software development company based in France and with team members all over the world. Want to know more about us? [Check out our blog!](https://blog.meilisearch.com/)
Let's create an index! If you need a sample dataset, use [this movie database](https://www.notion.so/meilisearch/A-movies-dataset-to-test-Meili-1cbf7c9cfa4247249c40edfa22d7ca87#b5ae399b81834705ba5420ac70358a65). You can also find it in the `datasets/` directory.
🗞 [Subscribe to our newsletter](https://meilisearch.us2.list-manage.com/subscribe?u=27870f7b71c908a8b359599fb&id=79582d828e) if you don't want to miss any updates! We promise we won't clutter your mailbox: we only send one edition every two months.
```bash
curl -L 'https://bit.ly/2PAcw9l' -o movies.json
```
💌 Want to make a suggestion or give feedback? Here are some of the channels where you can reach us:
Now, you're ready to index some data.
- For feature requests, please visit our [product repository](https://github.com/meilisearch/product/discussions)
- Found a bug? Open an [issue](https://github.com/meilisearch/meilisearch/issues)!
- Want to be part of our Slack community? [Join us!](https://slack.meilisearch.com/)
- For everything else, please check [this page listing some of the other places where you can find us](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html)
```bash
curl -i -X POST 'http://127.0.0.1:7700/indexes/movies/documents' \
--header 'content-type: application/json' \
--data-binary @movies.json
```
### Search for Documents
#### In command line
The search engine is now aware of your documents and can serve those via an HTTP server.
The [`jq` command-line tool](https://stedolan.github.io/jq/) can greatly help you read the server responses.
```bash
curl 'http://127.0.0.1:7700/indexes/movies/search?q=botman+robin&limit=2' | jq
```
```json
{
"hits": [
{
"id": "415",
"title": "Batman & Robin",
"poster": "https://image.tmdb.org/t/p/w1280/79AYCcxw3kSKbhGpx1LiqaCAbwo.jpg",
"overview": "Along with crime-fighting partner Robin and new recruit Batgirl, Batman battles the dual threat of frosty genius Mr. Freeze and homicidal horticulturalist Poison Ivy. Freeze plans to put Gotham City on ice, while Ivy tries to drive a wedge between the dynamic duo.",
"release_date": 866768400
},
{
"id": "411736",
"title": "Batman: Return of the Caped Crusaders",
"poster": "https://image.tmdb.org/t/p/w1280/GW3IyMW5Xgl0cgCN8wu96IlNpD.jpg",
"overview": "Adam West and Burt Ward returns to their iconic roles of Batman and Robin. Featuring the voices of Adam West, Burt Ward, and Julie Newmar, the film sees the superheroes going up against classic villains like The Joker, The Riddler, The Penguin and Catwoman, both in Gotham City… and in space.",
"release_date": 1475888400
}
],
"nbHits": 8,
"exhaustiveNbHits": false,
"query": "botman robin",
"limit": 2,
"offset": 0,
"processingTimeMs": 2
}
```
#### Use the Web Interface
We also deliver an **out-of-the-box [web interface](https://github.com/meilisearch/mini-dashboard)** in which you can test MeiliSearch interactively.
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter MeiliSearchs address to visit it. This will lead you to a web page with a search bar that will allow you to search in the selected index.
| [See the gif above](#demo)
## Documentation
Now that your MeiliSearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
## Contributing
Hey! We're glad you're thinking about contributing to MeiliSearch! Feel free to pick an [issue labeled as `good first issue`](https://github.com/meilisearch/MeiliSearch/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22), and to ask any question you need. Some points might not be clear and we are available to help you!
Also, we recommend following the [CONTRIBUTING](./CONTRIBUTING.md) to create your PR.
## Core engine and tokenizer
The code in this repository is only concerned with managing multiple indexes, handling the update store, and exposing an HTTP API.
Search and indexation are the domain of our core engine, [`milli`](https://github.com/meilisearch/milli), while tokenization is handled by [our `tokenizer` library](https://github.com/meilisearch/tokenizer/).
## Telemetry
MeiliSearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of MeiliSearch features.
To find out more on what information we're retrieving, please see our documentation on [Telemetry](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html).
This program is optional, you can disable these analytics by using the `MEILI_NO_ANALYTICS` env variable.
## Feature request
The feature requests are not managed in this repository. Please visit our [dedicated repository](https://github.com/meilisearch/product) to see our work about the MeiliSearch product.
If you have a feature request or any feedback about an existing feature, please open [a discussion](https://github.com/meilisearch/product/discussions).
Also, feel free to participate in the current discussions, we are looking forward to reading your comments.
## 💌 Contact
Please visit [this page](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html#contact-us).
MeiliSearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!
Thank you for your support!


@ -1,16 +1,16 @@
# Security
MeiliSearch takes the security of our software products and services seriously.
Meilisearch takes the security of our software products and services seriously.
If you believe you have found a security vulnerability in any MeiliSearch-owned repository, please report it to us as described below.
If you believe you have found a security vulnerability in any Meilisearch-owned repository, please report it to us as described below.
## Suported versions
## Supported versions
As long as we are pre-v1.0, only the latest version of MeiliSearch will be supported with security updates.
As long as we are pre-v1.0, only the latest version of Meilisearch will be supported with security updates.
## Reporting security issues
⚠️ Please do not report security vulnerabilities through public GitHub issues. ⚠️
⚠️ Please do not report security vulnerabilities through public GitHub issues. ⚠️
Instead, please kindly email us at security@meilisearch.com

BIN assets/demo-dark.gif (new binary file, 2.8 MiB; not shown)

BIN assets/demo-light.gif (new binary file, 1.7 MiB; not shown)

BIN assets/integrations.png (new binary file, 799 KiB; not shown)


@ -1,17 +1,19 @@
<svg width="360" height="360" viewBox="0 0 360 360" fill="none" xmlns="http://www.w3.org/2000/svg">
<g id="logo_main">
<rect id="Rectangle" x="107.333" y="0.150146" width="274.315" height="274.315" rx="98.8334" transform="rotate(23 107.333 0.150146)" fill="url(#paint0_linear)"/>
<path id="Rectangle_2" fill-rule="evenodd" clip-rule="evenodd" d="M61.3296 230.199C46.2224 194.608 38.6688 176.813 38.208 160.329C37.5286 136.025 47.0175 112.539 64.3891 95.5282C76.1718 83.9904 93.9669 76.4368 129.557 61.3296C165.147 46.2224 182.943 38.6688 199.427 38.208C223.731 37.5286 247.217 47.0175 264.228 64.3891C275.766 76.1718 283.319 93.9669 298.426 129.557C313.534 165.147 321.087 182.943 321.548 199.427C322.227 223.731 312.738 247.217 295.367 264.228C283.584 275.766 265.789 283.319 230.199 298.426C194.608 313.534 176.813 321.087 160.329 321.548C136.025 322.227 112.539 312.738 95.5282 295.367C83.9903 283.584 76.4368 265.789 61.3296 230.199Z" fill="url(#paint1_linear)"/>
<path id="m" fill-rule="evenodd" clip-rule="evenodd" d="M219.568 130.748C242.363 130.748 259.263 147.451 259.263 174.569V229.001H227.232V179.678C227.232 166.119 220.747 159.634 210.136 159.634C205.223 159.634 200.311 161.796 195.595 167.494C195.791 169.852 195.988 172.21 195.988 174.569V229.001H164.154V179.678C164.154 166.119 157.472 159.634 147.057 159.634C142.145 159.634 137.429 161.992 132.712 168.084V229.001H100.878V133.695H132.712V139.394C139.197 133.892 145.878 130.748 156.49 130.748C168.477 130.748 178.695 135.267 185.769 143.52C195.791 134.678 205.42 130.748 219.568 130.748Z" fill="white"/>
</g>
<svg width="300" height="300" viewBox="0 0 300 300" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0 237L55.426 96.7678C63.2367 77.0063 82.499 64 103.955 64H137.371L81.9447 204.232C74.1341 223.993 54.8717 237 33.4156 237H0Z" fill="url(#paint0_linear_1_898)"/>
<path d="M81.3123 237L136.738 96.7682C144.549 77.0067 163.811 64.0004 185.267 64.0004H218.683L163.257 204.232C155.446 223.994 136.184 237 114.728 237H81.3123Z" fill="url(#paint1_linear_1_898)"/>
<path d="M162.629 237L218.055 96.7682C225.866 77.0067 245.128 64.0004 266.584 64.0004H300L244.574 204.232C236.763 223.994 217.501 237 196.045 237H162.629Z" fill="url(#paint2_linear_1_898)"/>
<defs>
<linearGradient id="paint0_linear" x1="-13.6248" y1="129.208" x2="244.49" y2="403.522" gradientUnits="userSpaceOnUse">
<stop stop-color="#E41359"/>
<stop offset="1" stop-color="#F23C79"/>
<linearGradient id="paint0_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="11.0088" y1="111.65" x2="111.65" y2="348.747" gradientUnits="userSpaceOnUse">
<stop stop-color="#24222F"/>
<stop offset="1" stop-color="#2B2937"/>
<linearGradient id="paint1_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

Before: 2.0 KiB | After: 1.3 KiB


@ -0,0 +1,30 @@
<svg width="495" height="74" viewBox="0 0 495 74" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M181.842 42.5349C181.842 37.6137 184.201 34.715 188.718 34.715C192.965 34.715 194.381 37.7486 194.381 41.6585V62.6238H203.953V40.5799C203.953 32.3556 199.639 26.4907 191.145 26.4907C186.089 26.4907 182.516 28.0412 179.415 31.4792C177.393 28.3782 173.955 26.4907 169.168 26.4907C164.112 26.4907 160.607 28.5805 158.989 31.614V27.2996H150.158V62.6238H159.731V42.3326C159.731 37.6137 162.157 34.715 166.607 34.715C170.854 34.715 172.269 37.7486 172.269 41.6585V62.6238H181.842V42.5349Z" fill="white"/>
<path d="M243.245 47.7256C243.245 47.7256 243.379 46.4448 243.379 44.8943C243.379 34.4454 236.301 26.4907 225.852 26.4907C215.403 26.4907 208.123 34.4454 208.123 44.8943C208.123 55.7477 215.471 63.4327 225.92 63.4327C234.077 63.4327 240.548 58.5116 242.638 51.3659H232.998C231.852 53.9276 229.088 55.2084 226.189 55.2084C221.403 55.2084 218.302 52.5793 217.628 47.7256H243.245ZM225.785 34.1757C230.234 34.1757 233.133 36.8722 233.807 40.8495H217.763C218.572 36.8048 221.403 34.1757 225.785 34.1757Z" fill="white"/>
<path d="M244.791 35.524H249.038V62.6238H258.61V27.2996H244.791V35.524ZM253.824 22.7156C257.195 22.7156 259.622 20.3561 259.622 16.9855C259.622 13.6149 257.195 11.188 253.824 11.188C250.454 11.188 248.027 13.6149 248.027 16.9855C248.027 20.3561 250.454 22.7156 253.824 22.7156Z" fill="white"/>
<path d="M278.432 54.3995C278.163 54.3995 277.758 54.4669 277.152 54.4669C274.994 54.4669 274.725 53.4557 274.725 51.9726V12.0644H265.152V52.6467C265.152 59.6576 267.849 62.7586 275.466 62.7586C276.747 62.7586 277.96 62.6238 278.432 62.5564V54.3995Z" fill="white"/>
<path d="M279.521 35.524H283.768V62.6238H293.341V27.2996H279.521V35.524ZM288.555 22.7156C291.925 22.7156 294.352 20.3561 294.352 16.9855C294.352 13.6149 291.925 11.188 288.555 11.188C285.184 11.188 282.757 13.6149 282.757 16.9855C282.757 20.3561 285.184 22.7156 288.555 22.7156Z" fill="white"/>
<path d="M312.557 62.9937C321.86 62.9937 326.242 58.0726 326.242 52.8819C326.242 38.4556 305.007 46.4777 305.007 36.9725C305.007 33.8716 307.636 31.2425 312.962 31.2425C318.422 31.2425 320.984 34.2086 321.388 37.9163H326.175C325.77 33.2648 322.602 27.0629 313.097 27.0629C304.94 27.0629 300.356 31.9166 300.356 37.1748C300.356 51.264 321.591 43.1745 321.591 53.0167C321.591 56.4547 318.355 58.8142 312.557 58.8142C306.625 58.8142 303.659 55.848 303.322 51.4662H298.468C298.873 57.4659 302.648 62.9937 312.557 62.9937Z" fill="white"/>
<path d="M364.256 46.4103C364.256 46.4103 364.324 45.3317 364.324 44.5901C364.324 34.8827 358.054 27.0629 347.808 27.0629C337.494 27.0629 330.955 35.4894 330.955 44.9946C330.955 54.6346 337.022 62.9937 347.875 62.9937C356.032 62.9937 361.695 58.0052 363.717 51.4662H358.729C357.245 55.6458 353.201 58.6794 347.943 58.6794C340.729 58.6794 336.213 53.3538 335.741 46.4103H364.256ZM347.808 31.3773C354.549 31.3773 358.931 35.8939 359.538 42.5004H335.876C336.685 36.1636 341.134 31.3773 347.808 31.3773Z" fill="white"/>
<path d="M394.037 45.871V49.1068C394.037 54.9717 389.79 59.0164 381.634 59.0164C376.578 59.0164 373.814 56.9266 373.814 52.41C373.814 50.118 374.892 48.3652 376.578 47.4215C378.33 46.4777 380.69 45.871 394.037 45.871ZM381.094 62.9937C387.027 62.9937 391.813 61.1062 394.24 57.1963V62.1848H398.824V39.7364C398.824 32.1188 394.442 27.0629 384.532 27.0629C375.027 27.0629 370.848 31.8492 369.971 37.9837H374.623C375.566 33.13 379.274 31.1751 384.33 31.1751C390.802 31.1751 394.037 33.8716 394.037 39.669V41.8936C383.184 41.8936 378.667 42.0959 375.297 43.4441C371.387 44.9946 369.095 48.4327 369.095 52.5448C369.095 58.5445 372.937 62.9937 381.094 62.9937Z" fill="white"/>
<path d="M424.991 27.6022C424.991 27.6022 424.182 27.5348 423.845 27.5348C417.509 27.5348 414.138 30.838 412.857 33.1974V27.8718H408.273V62.1848H413.059V42.7026C413.059 35.5569 417.441 32.0514 423.306 32.0514C424.182 32.0514 424.991 32.1188 424.991 32.1188V27.6022Z" fill="white"/>
<path d="M425.809 45.062C425.809 54.4324 432.28 62.9937 442.729 62.9937C452.032 62.9937 457.425 56.7918 458.773 49.9831H453.92C452.504 55.3087 448.594 58.6794 442.729 58.6794C435.516 58.6794 430.662 52.9493 430.662 45.062C430.662 37.1073 435.516 31.3773 442.729 31.3773C448.594 31.3773 452.504 34.7479 453.92 40.0735H458.773C457.425 33.2648 452.032 27.0629 442.729 27.0629C432.28 27.0629 425.809 35.6243 425.809 45.062Z" fill="white"/>
<path d="M470.041 11.6254H465.255V62.1848H470.041V41.8936C470.041 34.8827 474.558 31.2425 480.355 31.2425C486.49 31.2425 489.389 35.0176 489.389 41.2195V62.1848H494.175V40.2757C494.175 32.6581 489.658 27.0629 481.164 27.0629C474.76 27.0629 471.255 30.5683 470.041 32.6581V11.6254Z" fill="white"/>
<path d="M0.825012 73.993L24.0688 14.5224C27.3443 6.14179 35.4223 0.625977 44.4203 0.625977H58.4336L35.1899 60.0966C31.9143 68.4772 23.8363 73.993 14.8384 73.993H0.825012Z" fill="url(#paint0_linear_0_3)"/>
<path d="M34.9246 73.9932L58.1684 14.5226C61.444 6.14197 69.5219 0.626152 78.5199 0.626152H92.5333L69.2895 60.0968C66.014 68.4774 57.936 73.9932 48.938 73.9932H34.9246Z" fill="url(#paint1_linear_0_3)"/>
<path d="M69.0262 73.9932L92.27 14.5226C95.5456 6.14197 103.624 0.626152 112.622 0.626152H126.635L103.391 60.0968C100.116 68.4774 92.0376 73.9932 83.0396 73.9932H69.0262Z" fill="url(#paint2_linear_0_3)"/>
<defs>
<linearGradient id="paint0_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_0_3" x1="126.635" y1="-4.97799" x2="0.825008" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

After: 5.8 KiB


@ -0,0 +1,30 @@
<svg width="495" height="74" viewBox="0 0 495 74" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M181.84 42.5347C181.84 37.6136 184.199 34.7149 188.716 34.7149C192.963 34.7149 194.378 37.7484 194.378 41.6584V62.6237H203.951V40.5798C203.951 32.3554 199.637 26.4906 191.143 26.4906C186.087 26.4906 182.514 28.041 179.413 31.4791C177.39 28.3781 173.952 26.4906 169.166 26.4906C164.11 26.4906 160.605 28.5804 158.987 31.6139V27.2995H150.156V62.6237H159.728V42.3325C159.728 37.6136 162.155 34.7149 166.604 34.7149C170.851 34.7149 172.267 37.7484 172.267 41.6584V62.6237H181.84V42.5347Z" fill="#21004B"/>
<path d="M243.242 47.7255C243.242 47.7255 243.377 46.4447 243.377 44.8942C243.377 34.4452 236.299 26.4906 225.85 26.4906C215.401 26.4906 208.12 34.4452 208.12 44.8942C208.12 55.7476 215.468 63.4326 225.917 63.4326C234.074 63.4326 240.546 58.5115 242.636 51.3658H232.996C231.85 53.9274 229.086 55.2083 226.187 55.2083C221.401 55.2083 218.3 52.5792 217.626 47.7255H243.242ZM225.783 34.1756C230.232 34.1756 233.131 36.8721 233.805 40.8494H217.76C218.569 36.8047 221.401 34.1756 225.783 34.1756Z" fill="#21004B"/>
<path d="M244.789 35.5238H249.036V62.6237H258.608V27.2995H244.789V35.5238ZM253.822 22.7155C257.193 22.7155 259.619 20.356 259.619 16.9854C259.619 13.6148 257.193 11.1879 253.822 11.1879C250.451 11.1879 248.024 13.6148 248.024 16.9854C248.024 20.356 250.451 22.7155 253.822 22.7155Z" fill="#21004B"/>
<path d="M278.43 54.3993C278.16 54.3993 277.756 54.4667 277.149 54.4667C274.992 54.4667 274.722 53.4556 274.722 51.9725V12.0643H265.15V52.6466C265.15 59.6575 267.846 62.7585 275.464 62.7585C276.745 62.7585 277.958 62.6237 278.43 62.5562V54.3993Z" fill="#21004B"/>
<path d="M279.519 35.5238H283.766V62.6237H293.339V27.2995H279.519V35.5238ZM288.553 22.7155C291.923 22.7155 294.35 20.356 294.35 16.9854C294.35 13.6148 291.923 11.1879 288.553 11.1879C285.182 11.1879 282.755 13.6148 282.755 16.9854C282.755 20.356 285.182 22.7155 288.553 22.7155Z" fill="#21004B"/>
<path d="M312.557 62.9939C321.86 62.9939 326.242 58.0728 326.242 52.882C326.242 38.4557 305.007 46.4778 305.007 36.9726C305.007 33.8717 307.636 31.2426 312.962 31.2426C318.422 31.2426 320.984 34.2087 321.388 37.9164H326.175C325.77 33.265 322.602 27.063 313.097 27.063C304.94 27.063 300.356 31.9167 300.356 37.1749C300.356 51.2641 321.591 43.1746 321.591 53.0168C321.591 56.4548 318.355 58.8143 312.557 58.8143C306.625 58.8143 303.659 55.8481 303.322 51.4663H298.468C298.872 57.466 302.648 62.9939 312.557 62.9939Z" fill="#21004B"/>
<path d="M364.256 46.4104C364.256 46.4104 364.324 45.3318 364.324 44.5903C364.324 34.8829 358.054 27.063 347.808 27.063C337.494 27.063 330.955 35.4896 330.955 44.9947C330.955 54.6347 337.022 62.9939 347.875 62.9939C356.032 62.9939 361.695 58.0053 363.717 51.4663H358.728C357.245 55.6459 353.201 58.6795 347.942 58.6795C340.729 58.6795 336.213 53.3539 335.741 46.4104H364.256ZM347.808 31.3774C354.549 31.3774 358.931 35.894 359.537 42.5005H335.876C336.685 36.1637 341.134 31.3774 347.808 31.3774Z" fill="#21004B"/>
<path d="M394.037 45.8711V49.1069C394.037 54.9718 389.79 59.0165 381.633 59.0165C376.578 59.0165 373.814 56.9267 373.814 52.4101C373.814 50.1181 374.892 48.3654 376.578 47.4216C378.33 46.4778 380.69 45.8711 394.037 45.8711ZM381.094 62.9939C387.026 62.9939 391.813 61.1063 394.24 57.1964V62.1849H398.824V39.7366C398.824 32.1189 394.442 27.063 384.532 27.063C375.027 27.063 370.847 31.8493 369.971 37.9838H374.623C375.566 33.1301 379.274 31.1752 384.33 31.1752C390.802 31.1752 394.037 33.8717 394.037 39.6691V41.8938C383.184 41.8938 378.667 42.096 375.297 43.4442C371.387 44.9947 369.095 48.4328 369.095 52.5449C369.095 58.5446 372.937 62.9939 381.094 62.9939Z" fill="#21004B"/>
<path d="M424.991 27.6023C424.991 27.6023 424.182 27.5349 423.845 27.5349C417.508 27.5349 414.138 30.8381 412.857 33.1975V27.872H408.273V62.1849H413.059V42.7027C413.059 35.557 417.441 32.0515 423.306 32.0515C424.182 32.0515 424.991 32.1189 424.991 32.1189V27.6023Z" fill="#21004B"/>
<path d="M425.809 45.0621C425.809 54.4325 432.28 62.9939 442.729 62.9939C452.032 62.9939 457.425 56.7919 458.773 49.9832H453.92C452.504 55.3088 448.594 58.6795 442.729 58.6795C435.516 58.6795 430.662 52.9494 430.662 45.0621C430.662 37.1075 435.516 31.3774 442.729 31.3774C448.594 31.3774 452.504 34.748 453.92 40.0736H458.773C457.425 33.265 452.032 27.063 442.729 27.063C432.28 27.063 425.809 35.6244 425.809 45.0621Z" fill="#21004B"/>
<path d="M470.041 11.6255H465.255V62.1849H470.041V41.8938C470.041 34.8829 474.558 31.2426 480.355 31.2426C486.49 31.2426 489.389 35.0177 489.389 41.2196V62.1849H494.175V40.2759C494.175 32.6582 489.658 27.063 481.164 27.063C474.76 27.063 471.255 30.5685 470.041 32.6582V11.6255Z" fill="#21004B"/>
<path d="M0.824951 73.993L24.0688 14.5224C27.3443 6.14179 35.4223 0.625977 44.4202 0.625977H58.4336L35.1898 60.0966C31.9143 68.4772 23.8363 73.993 14.8383 73.993H0.824951Z" fill="url(#paint0_linear_0_15)"/>
<path d="M34.9246 73.9932L58.1684 14.5226C61.4439 6.14197 69.5219 0.626152 78.5199 0.626152H92.5332L69.2894 60.0968C66.0139 68.4774 57.9359 73.9932 48.9379 73.9932H34.9246Z" fill="url(#paint1_linear_0_15)"/>
<path d="M69.0262 73.9932L92.27 14.5226C95.5455 6.14197 103.623 0.626152 112.621 0.626152H126.635L103.391 60.0968C100.115 68.4774 92.0375 73.9932 83.0395 73.9932H69.0262Z" fill="url(#paint2_linear_0_15)"/>
<defs>
<linearGradient id="paint0_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_0_15" x1="126.635" y1="-4.97799" x2="0.824952" y2="66.0978" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

After: 5.9 KiB


@ -3,7 +3,8 @@ status = [
'Tests on macos-latest',
'Tests on windows-latest',
'Run Clippy',
'Run Rustfmt'
'Run Rustfmt',
'Run tests in debug',
]
pr_status = ['Milestone Check']
# 3 hours timeout


@ -67,20 +67,20 @@ semverLT() {
return 1
}
# Get a token from https://github.com/settings/tokens to increasae rate limit (from 60 to 5000), make sure the token scope is set to 'public_repo'
# Create GITHUB_PAT enviroment variable once you aquired the token to start using it
# Get a token from https://github.com/settings/tokens to increase rate limit (from 60 to 5000), make sure the token scope is set to 'public_repo'
# Create GITHUB_PAT environment variable once you acquired the token to start using it
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
if [ -z "$GITHUB_PAT" ]; then
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file" || return 1
if [ -z "$GITHUB_PAT" ]; then
curl -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
else
curl -H "Authorization: token $GITHUB_PAT" -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file" || return 1
curl -H "Authorization: token $GITHUB_PAT" -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
fi
releases=$(cat "$temp_file" | \
grep -E "tag_name|draft|prerelease" \
grep -E '"tag_name":|"draft":|"prerelease":' \
| tr -d ',"' | cut -d ':' -f2 | tr -d ' ')
# Returns a list of [tag_name draft_boolean prerelease_boolean ...]
# Ex: v0.10.1 false false v0.9.1-rc.1 false true v0.9.0 false false...
@ -89,7 +89,7 @@ get_latest() {
latest=''
current_tag=''
for release_info in $releases; do
if [ $i -eq 0 ]; then # Cheking tag_name
if [ $i -eq 0 ]; then # Checking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
current_tag=$release_info
else
@ -120,7 +120,7 @@ get_latest() {
done
rm -f "$temp_file"
echo $latest
return 0
}
# Gets the OS by setting the $os variable
@ -148,11 +148,18 @@ get_os() {
get_archi() {
architecture=$(uname -m)
case "$architecture" in
'x86_64' | 'amd64' | 'arm64')
'x86_64' | 'amd64' )
archi='amd64'
;;
'arm64')
if [ $os = 'macos' ]; then # MacOS M1
archi='amd64'
else
archi='aarch64'
fi
;;
'aarch64')
archi='armv8'
archi='aarch64'
;;
*)
return 1
@ -161,7 +168,7 @@ get_archi() {
}
success_usage() {
printf "$GREEN%s\n$DEFAULT" "MeiliSearch $latest binary successfully downloaded as '$binary_name' file."
printf "$GREEN%s\n$DEFAULT" "Meilisearch $latest binary successfully downloaded as '$binary_name' file."
echo ''
echo 'Run it:'
echo ' $ ./meilisearch'
@ -169,47 +176,65 @@ success_usage() {
echo ' $ ./meilisearch --help'
}
failure_usage() {
printf "$RED%s\n$DEFAULT" 'ERROR: MeiliSearch binary is not available for your OS distribution or your architecture yet.'
not_available_failure_usage() {
printf "$RED%s\n$DEFAULT" 'ERROR: Meilisearch binary is not available for your OS distribution or your architecture yet.'
echo ''
echo 'However, you can easily compile the binary from the source files.'
echo 'Follow the steps at the page ("Source" tab): https://docs.meilisearch.com/learn/getting_started/installation.html'
}
fetch_release_failure_usage() {
echo ''
printf "$RED%s\n$DEFAULT" 'ERROR: Impossible to get the latest stable version of Meilisearch.'
echo 'Please let us know about this issue: https://github.com/meilisearch/meilisearch/issues/new/choose'
}
# MAIN
latest="$(get_latest)"
# Fill $latest variable
if ! get_latest; then
fetch_release_failure_usage # TO CHANGE
exit 1
fi
if [ "$latest" = '' ]; then
echo ''
echo 'Impossible to get the latest stable version of MeiliSearch.'
echo 'Please let us know about this issue: https://github.com/meilisearch/meilisearch-swift/issues/new/choose'
fetch_release_failure_usage
exit 1
fi
# Fill $os variable
if ! get_os; then
failure_usage
not_available_failure_usage
exit 1
fi
# Fill $archi variable
if ! get_archi; then
failure_usage
not_available_failure_usage
exit 1
fi
echo "Downloading MeiliSearch binary $latest for $os, architecture $archi..."
echo "Downloading Meilisearch binary $latest for $os, architecture $archi..."
case "$os" in
'windows')
release_file="meilisearch-$os-$archi.exe"
binary_name='meilisearch.exe'
binary_name='meilisearch.exe'
;;
*)
release_file="meilisearch-$os-$archi"
binary_name='meilisearch'
*)
release_file="meilisearch-$os-$archi"
binary_name='meilisearch'
esac
link="https://github.com/meilisearch/MeiliSearch/releases/download/$latest/$release_file"
curl -OL "$link"
# Fetch the Meilisearch binary
link="https://github.com/meilisearch/meilisearch/releases/download/$latest/$release_file"
curl --fail -OL "$link"
if [ $? -ne 0 ]; then
fetch_release_failure_usage
exit 1
fi
mv "$release_file" "$binary_name"
chmod 744 "$binary_name"
success_usage

File diff suppressed because it is too large.


@ -0,0 +1,17 @@
[package]
name = "meilisearch-auth"
version = "0.29.3"
edition = "2021"
[dependencies]
enum-iterator = "0.7.0"
hmac = "0.12.1"
meilisearch-types = { path = "../meilisearch-types" }
milli = { git = "https://github.com/meilisearch/milli.git", tag = "v0.33.6" }
rand = "0.8.4"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = { version = "1.0.85", features = ["preserve_order"] }
sha2 = "0.10.2"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.1.2", features = ["serde", "v4"] }


@ -0,0 +1,134 @@
use enum_iterator::IntoEnumIterator;
use serde::{Deserialize, Serialize};
use std::hash::Hash;
#[derive(IntoEnumIterator, Copy, Clone, Serialize, Deserialize, Debug, Eq, PartialEq, Hash)]
#[repr(u8)]
pub enum Action {
#[serde(rename = "*")]
All = 0,
#[serde(rename = "search")]
Search,
#[serde(rename = "documents.*")]
DocumentsAll,
#[serde(rename = "documents.add")]
DocumentsAdd,
#[serde(rename = "documents.get")]
DocumentsGet,
#[serde(rename = "documents.delete")]
DocumentsDelete,
#[serde(rename = "indexes.*")]
IndexesAll,
#[serde(rename = "indexes.create")]
IndexesAdd,
#[serde(rename = "indexes.get")]
IndexesGet,
#[serde(rename = "indexes.update")]
IndexesUpdate,
#[serde(rename = "indexes.delete")]
IndexesDelete,
#[serde(rename = "tasks.*")]
TasksAll,
#[serde(rename = "tasks.get")]
TasksGet,
#[serde(rename = "settings.*")]
SettingsAll,
#[serde(rename = "settings.get")]
SettingsGet,
#[serde(rename = "settings.update")]
SettingsUpdate,
#[serde(rename = "stats.*")]
StatsAll,
#[serde(rename = "stats.get")]
StatsGet,
#[serde(rename = "metrics.*")]
MetricsAll,
#[serde(rename = "metrics.get")]
MetricsGet,
#[serde(rename = "dumps.*")]
DumpsAll,
#[serde(rename = "dumps.create")]
DumpsCreate,
#[serde(rename = "version")]
Version,
#[serde(rename = "keys.create")]
KeysAdd,
#[serde(rename = "keys.get")]
KeysGet,
#[serde(rename = "keys.update")]
KeysUpdate,
#[serde(rename = "keys.delete")]
KeysDelete,
}
impl Action {
pub const fn from_repr(repr: u8) -> Option<Self> {
use actions::*;
match repr {
ALL => Some(Self::All),
SEARCH => Some(Self::Search),
DOCUMENTS_ALL => Some(Self::DocumentsAll),
DOCUMENTS_ADD => Some(Self::DocumentsAdd),
DOCUMENTS_GET => Some(Self::DocumentsGet),
DOCUMENTS_DELETE => Some(Self::DocumentsDelete),
INDEXES_ALL => Some(Self::IndexesAll),
INDEXES_CREATE => Some(Self::IndexesAdd),
INDEXES_GET => Some(Self::IndexesGet),
INDEXES_UPDATE => Some(Self::IndexesUpdate),
INDEXES_DELETE => Some(Self::IndexesDelete),
TASKS_ALL => Some(Self::TasksAll),
TASKS_GET => Some(Self::TasksGet),
SETTINGS_ALL => Some(Self::SettingsAll),
SETTINGS_GET => Some(Self::SettingsGet),
SETTINGS_UPDATE => Some(Self::SettingsUpdate),
STATS_ALL => Some(Self::StatsAll),
STATS_GET => Some(Self::StatsGet),
METRICS_ALL => Some(Self::MetricsAll),
METRICS_GET => Some(Self::MetricsGet),
DUMPS_ALL => Some(Self::DumpsAll),
DUMPS_CREATE => Some(Self::DumpsCreate),
VERSION => Some(Self::Version),
KEYS_CREATE => Some(Self::KeysAdd),
KEYS_GET => Some(Self::KeysGet),
KEYS_UPDATE => Some(Self::KeysUpdate),
KEYS_DELETE => Some(Self::KeysDelete),
_otherwise => None,
}
}
pub const fn repr(&self) -> u8 {
*self as u8
}
}
pub mod actions {
use super::Action::*;
pub(crate) const ALL: u8 = All.repr();
pub const SEARCH: u8 = Search.repr();
pub const DOCUMENTS_ALL: u8 = DocumentsAll.repr();
pub const DOCUMENTS_ADD: u8 = DocumentsAdd.repr();
pub const DOCUMENTS_GET: u8 = DocumentsGet.repr();
pub const DOCUMENTS_DELETE: u8 = DocumentsDelete.repr();
pub const INDEXES_ALL: u8 = IndexesAll.repr();
pub const INDEXES_CREATE: u8 = IndexesAdd.repr();
pub const INDEXES_GET: u8 = IndexesGet.repr();
pub const INDEXES_UPDATE: u8 = IndexesUpdate.repr();
pub const INDEXES_DELETE: u8 = IndexesDelete.repr();
pub const TASKS_ALL: u8 = TasksAll.repr();
pub const TASKS_GET: u8 = TasksGet.repr();
pub const SETTINGS_ALL: u8 = SettingsAll.repr();
pub const SETTINGS_GET: u8 = SettingsGet.repr();
pub const SETTINGS_UPDATE: u8 = SettingsUpdate.repr();
pub const STATS_ALL: u8 = StatsAll.repr();
pub const STATS_GET: u8 = StatsGet.repr();
pub const METRICS_ALL: u8 = MetricsAll.repr();
pub const METRICS_GET: u8 = MetricsGet.repr();
pub const DUMPS_ALL: u8 = DumpsAll.repr();
pub const DUMPS_CREATE: u8 = DumpsCreate.repr();
pub const VERSION: u8 = Version.repr();
pub const KEYS_CREATE: u8 = KeysAdd.repr();
pub const KEYS_GET: u8 = KeysGet.repr();
pub const KEYS_UPDATE: u8 = KeysUpdate.repr();
pub const KEYS_DELETE: u8 = KeysDelete.repr();
}
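
The `#[repr(u8)]` discriminants above are what end up persisted as a single byte inside LMDB keys, so `repr`/`from_repr` must round-trip exactly, and unknown bytes must decode to `None` rather than panic. A minimal standalone sketch of that pattern (enum shortened to three variants purely for illustration):

```rust
// Sketch of the repr/from_repr pattern used by `Action`: a #[repr(u8)]
// enum has a stable one-byte discriminant that can be written into a
// database key and decoded back without serde.
#[derive(Copy, Clone, Debug, PartialEq)]
#[repr(u8)]
enum MiniAction {
    All = 0,
    Search,
    DocumentsAll,
}

impl MiniAction {
    const fn repr(self) -> u8 {
        self as u8
    }

    const fn from_repr(repr: u8) -> Option<Self> {
        match repr {
            0 => Some(Self::All),
            1 => Some(Self::Search),
            2 => Some(Self::DocumentsAll),
            _ => None,
        }
    }
}

fn main() {
    // Round-trip: every action survives encoding to its byte and back.
    for action in [MiniAction::All, MiniAction::Search, MiniAction::DocumentsAll] {
        assert_eq!(MiniAction::from_repr(action.repr()), Some(action));
    }
    // Unknown discriminants decode to None instead of panicking.
    assert_eq!(MiniAction::from_repr(42), None);
}
```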


@ -0,0 +1,47 @@
use serde_json::Deserializer;
use std::fs::File;
use std::io::BufReader;
use std::io::Write;
use std::path::Path;
use crate::{AuthController, HeedAuthStore, Result};
const KEYS_PATH: &str = "keys";
impl AuthController {
pub fn dump(src: impl AsRef<Path>, dst: impl AsRef<Path>) -> Result<()> {
let mut store = HeedAuthStore::new(&src)?;
// do not attempt to close the database on drop!
store.set_drop_on_close(false);
let keys_file_path = dst.as_ref().join(KEYS_PATH);
let keys = store.list_api_keys()?;
let mut keys_file = File::create(&keys_file_path)?;
for key in keys {
serde_json::to_writer(&mut keys_file, &key)?;
keys_file.write_all(b"\n")?;
}
Ok(())
}
pub fn load_dump(src: impl AsRef<Path>, dst: impl AsRef<Path>) -> Result<()> {
let store = HeedAuthStore::new(&dst)?;
let keys_file_path = src.as_ref().join(KEYS_PATH);
if !keys_file_path.exists() {
return Ok(());
}
let reader = BufReader::new(File::open(&keys_file_path)?);
for key in Deserializer::from_reader(reader).into_iter() {
store.put_api_key(key?)?;
}
Ok(())
}
}
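
The dump format here is newline-delimited JSON: one `Key` written per line, and `serde_json::Deserializer::into_iter` streaming the values back on load. A self-contained sketch of that round trip, using plain `serde_json::Value` in place of the crate's `Key` type:

```rust
// Minimal sketch of the NDJSON round trip that dump/load_dump rely on.
// `StreamDeserializer` reads one value per concatenated JSON document,
// so a trailing b"\n" after each record is all the structure needed.
// Assumes only serde_json as a dependency.
use serde_json::{json, Deserializer, Value};
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Write: one JSON object per line, as `dump` does for each key.
    let mut buf = Vec::new();
    for key in [json!({"uid": 1}), json!({"uid": 2})] {
        serde_json::to_writer(&mut buf, &key)?;
        buf.write_all(b"\n")?;
    }

    // Read: stream the values back, as `load_dump` does.
    let keys: Vec<Value> = Deserializer::from_slice(&buf)
        .into_iter()
        .collect::<Result<_, _>>()?;
    assert_eq!(keys.len(), 2);
    Ok(())
}
```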


@ -0,0 +1,60 @@
use std::error::Error;
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::internal_error;
use serde_json::Value;
pub type Result<T> = std::result::Result<T, AuthControllerError>;
#[derive(Debug, thiserror::Error)]
pub enum AuthControllerError {
#[error("`{0}` field is mandatory.")]
MissingParameter(&'static str),
#[error("`actions` field value `{0}` is invalid. It should be an array of string representing action names.")]
InvalidApiKeyActions(Value),
#[error("`indexes` field value `{0}` is invalid. It should be an array of string representing index names.")]
InvalidApiKeyIndexes(Value),
#[error("`expiresAt` field value `{0}` is invalid. It should follow the RFC 3339 format to represents a date or datetime in the future or specified as a null value. e.g. 'YYYY-MM-DD' or 'YYYY-MM-DD HH:MM:SS'.")]
InvalidApiKeyExpiresAt(Value),
#[error("`description` field value `{0}` is invalid. It should be a string or specified as a null value.")]
InvalidApiKeyDescription(Value),
#[error(
"`name` field value `{0}` is invalid. It should be a string or specified as a null value."
)]
InvalidApiKeyName(Value),
#[error("`uid` field value `{0}` is invalid. It should be a valid UUID v4 string or omitted.")]
InvalidApiKeyUid(Value),
#[error("API key `{0}` not found.")]
ApiKeyNotFound(String),
#[error("`uid` field value `{0}` is already an existing API key.")]
ApiKeyAlreadyExists(String),
#[error("The `{0}` field cannot be modified for the given resource.")]
ImmutableField(String),
#[error("Internal error: {0}")]
Internal(Box<dyn Error + Send + Sync + 'static>),
}
internal_error!(
AuthControllerError: milli::heed::Error,
std::io::Error,
serde_json::Error,
std::str::Utf8Error
);
impl ErrorCode for AuthControllerError {
fn error_code(&self) -> Code {
match self {
Self::MissingParameter(_) => Code::MissingParameter,
Self::InvalidApiKeyActions(_) => Code::InvalidApiKeyActions,
Self::InvalidApiKeyIndexes(_) => Code::InvalidApiKeyIndexes,
Self::InvalidApiKeyExpiresAt(_) => Code::InvalidApiKeyExpiresAt,
Self::InvalidApiKeyDescription(_) => Code::InvalidApiKeyDescription,
Self::InvalidApiKeyName(_) => Code::InvalidApiKeyName,
Self::ApiKeyNotFound(_) => Code::ApiKeyNotFound,
Self::InvalidApiKeyUid(_) => Code::InvalidApiKeyUid,
Self::ApiKeyAlreadyExists(_) => Code::ApiKeyAlreadyExists,
Self::ImmutableField(_) => Code::ImmutableField,
Self::Internal(_) => Code::Internal,
}
}
}
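
`internal_error!` is a macro local to `meilisearch-types`; presumably it expands to one `From` impl per listed error type, funnelling each into the `Internal` variant so `?` works across the heed, io, and serde_json boundaries. A hedged sketch of that expansion for a single type (requires the `thiserror` crate; the expansion shown is an assumption, not the macro's verbatim output):

```rust
use std::error::Error;

#[derive(Debug, thiserror::Error)]
enum AuthControllerError {
    #[error("Internal error: {0}")]
    Internal(Box<dyn Error + Send + Sync + 'static>),
}

// What `internal_error!(AuthControllerError: std::io::Error, ...)` is
// presumed to generate for each listed type.
impl From<std::io::Error> for AuthControllerError {
    fn from(error: std::io::Error) -> Self {
        Self::Internal(Box::new(error))
    }
}

fn open() -> Result<std::fs::File, AuthControllerError> {
    // `?` converts the io::Error into AuthControllerError::Internal.
    Ok(std::fs::File::open("/nonexistent")?)
}

fn main() {
    assert!(matches!(open(), Err(AuthControllerError::Internal(_))));
}
```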

meilisearch-auth/src/key.rs (new file, 201 lines)

@ -0,0 +1,201 @@
use crate::action::Action;
use crate::error::{AuthControllerError, Result};
use crate::store::KeyId;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::star_or::StarOr;
use serde::{Deserialize, Serialize};
use serde_json::{from_value, Value};
use time::format_description::well_known::Rfc3339;
use time::macros::{format_description, time};
use time::{Date, OffsetDateTime, PrimitiveDateTime};
use uuid::Uuid;
#[derive(Debug, Deserialize, Serialize)]
pub struct Key {
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
pub uid: KeyId,
pub actions: Vec<Action>,
pub indexes: Vec<StarOr<IndexUid>>,
#[serde(with = "time::serde::rfc3339::option")]
pub expires_at: Option<OffsetDateTime>,
#[serde(with = "time::serde::rfc3339")]
pub created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub updated_at: OffsetDateTime,
}
impl Key {
pub fn create_from_value(value: Value) -> Result<Self> {
let name = match value.get("name") {
None | Some(Value::Null) => None,
Some(des) => from_value(des.clone())
.map(Some)
.map_err(|_| AuthControllerError::InvalidApiKeyName(des.clone()))?,
};
let description = match value.get("description") {
None | Some(Value::Null) => None,
Some(des) => from_value(des.clone())
.map(Some)
.map_err(|_| AuthControllerError::InvalidApiKeyDescription(des.clone()))?,
};
let uid = value.get("uid").map_or_else(
|| Ok(Uuid::new_v4()),
|uid| {
from_value(uid.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyUid(uid.clone()))
},
)?;
let actions = value
.get("actions")
.map(|act| {
from_value(act.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyActions(act.clone()))
})
.ok_or(AuthControllerError::MissingParameter("actions"))??;
let indexes = value
.get("indexes")
.map(|ind| {
from_value(ind.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyIndexes(ind.clone()))
})
.ok_or(AuthControllerError::MissingParameter("indexes"))??;
let expires_at = value
.get("expiresAt")
.map(parse_expiration_date)
.ok_or(AuthControllerError::MissingParameter("expiresAt"))??;
let created_at = OffsetDateTime::now_utc();
let updated_at = created_at;
Ok(Self {
name,
description,
uid,
actions,
indexes,
expires_at,
created_at,
updated_at,
})
}
pub fn update_from_value(&mut self, value: Value) -> Result<()> {
if let Some(des) = value.get("description") {
let des = from_value(des.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyDescription(des.clone()));
self.description = des?;
}
if let Some(des) = value.get("name") {
let des = from_value(des.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyName(des.clone()));
self.name = des?;
}
if value.get("uid").is_some() {
return Err(AuthControllerError::ImmutableField("uid".to_string()));
}
if value.get("actions").is_some() {
return Err(AuthControllerError::ImmutableField("actions".to_string()));
}
if value.get("indexes").is_some() {
return Err(AuthControllerError::ImmutableField("indexes".to_string()));
}
if value.get("expiresAt").is_some() {
return Err(AuthControllerError::ImmutableField("expiresAt".to_string()));
}
if value.get("createdAt").is_some() {
return Err(AuthControllerError::ImmutableField("createdAt".to_string()));
}
if value.get("updatedAt").is_some() {
return Err(AuthControllerError::ImmutableField("updatedAt".to_string()));
}
self.updated_at = OffsetDateTime::now_utc();
Ok(())
}
pub(crate) fn default_admin() -> Self {
let now = OffsetDateTime::now_utc();
let uid = Uuid::new_v4();
Self {
name: Some("Default Admin API Key".to_string()),
description: Some("Use it for anything that is not a search operation. Caution! Do not expose it on a public frontend".to_string()),
uid,
actions: vec![Action::All],
indexes: vec![StarOr::Star],
expires_at: None,
created_at: now,
updated_at: now,
}
}
pub(crate) fn default_search() -> Self {
let now = OffsetDateTime::now_utc();
let uid = Uuid::new_v4();
Self {
name: Some("Default Search API Key".to_string()),
description: Some("Use it to search from the frontend".to_string()),
uid,
actions: vec![Action::Search],
indexes: vec![StarOr::Star],
expires_at: None,
created_at: now,
updated_at: now,
}
}
}
fn parse_expiration_date(value: &Value) -> Result<Option<OffsetDateTime>> {
match value {
Value::String(string) => OffsetDateTime::parse(string, &Rfc3339)
.or_else(|_| {
PrimitiveDateTime::parse(
string,
format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day]T[hour]:[minute]:[second]"
),
).map(|datetime| datetime.assume_utc())
})
.or_else(|_| {
PrimitiveDateTime::parse(
string,
format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day] [hour]:[minute]:[second]"
),
).map(|datetime| datetime.assume_utc())
})
.or_else(|_| {
Date::parse(string, format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day]"
)).map(|date| PrimitiveDateTime::new(date, time!(00:00)).assume_utc())
})
.map_err(|_| AuthControllerError::InvalidApiKeyExpiresAt(value.clone()))
// check if the key is already expired.
.and_then(|d| {
if d > OffsetDateTime::now_utc() {
Ok(d)
} else {
Err(AuthControllerError::InvalidApiKeyExpiresAt(value.clone()))
}
})
.map(Option::Some),
Value::Null => Ok(None),
_otherwise => Err(AuthControllerError::InvalidApiKeyExpiresAt(value.clone())),
}
}
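
For reference, a payload that `Key::create_from_value` accepts, assuming `meilisearch-auth` is pulled in as a path dependency (it is not published on crates.io). Per the code above, `actions`, `indexes`, and `expiresAt` are mandatory, while `uid`, `name`, and `description` are optional:

```rust
use meilisearch_auth::Key;
use serde_json::json;

fn main() {
    let value = json!({
        "name": "Search-only key",
        "actions": ["search"],
        "indexes": ["movies"],
        // parse_expiration_date accepts several shapes, as long as the
        // date lies in the future: "2030-01-01", "2030-01-01 12:00:00",
        // "2030-01-01T12:00:00", or a full RFC 3339 timestamp.
        "expiresAt": "2030-01-01T12:00:00Z",
    });

    let key = Key::create_from_value(value).expect("a valid key payload");
    assert!(key.expires_at.is_some());
    // `uid` was omitted, so a fresh UUID v4 was generated.
    println!("{}", key.uid);
}
```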

meilisearch-auth/src/lib.rs (new file, 255 lines)

@ -0,0 +1,255 @@
mod action;
mod dump;
pub mod error;
mod key;
mod store;
use std::collections::{HashMap, HashSet};
use std::ops::Deref;
use std::path::Path;
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use time::OffsetDateTime;
use uuid::Uuid;
pub use action::{actions, Action};
use error::{AuthControllerError, Result};
pub use key::Key;
use meilisearch_types::star_or::StarOr;
use store::generate_key_as_hexa;
pub use store::open_auth_store_env;
use store::HeedAuthStore;
#[derive(Clone)]
pub struct AuthController {
store: Arc<HeedAuthStore>,
master_key: Option<String>,
}
impl AuthController {
pub fn new(db_path: impl AsRef<Path>, master_key: &Option<String>) -> Result<Self> {
let store = HeedAuthStore::new(db_path)?;
if store.is_empty()? {
generate_default_keys(&store)?;
}
Ok(Self {
store: Arc::new(store),
master_key: master_key.clone(),
})
}
pub fn create_key(&self, value: Value) -> Result<Key> {
let key = Key::create_from_value(value)?;
match self.store.get_api_key(key.uid)? {
Some(_) => Err(AuthControllerError::ApiKeyAlreadyExists(
key.uid.to_string(),
)),
None => self.store.put_api_key(key),
}
}
pub fn update_key(&self, uid: Uuid, value: Value) -> Result<Key> {
let mut key = self.get_key(uid)?;
key.update_from_value(value)?;
self.store.put_api_key(key)
}
pub fn get_key(&self, uid: Uuid) -> Result<Key> {
self.store
.get_api_key(uid)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(uid.to_string()))
}
pub fn get_optional_uid_from_encoded_key(&self, encoded_key: &[u8]) -> Result<Option<Uuid>> {
match &self.master_key {
Some(master_key) => self
.store
.get_uid_from_encoded_key(encoded_key, master_key.as_bytes()),
None => Ok(None),
}
}
pub fn get_uid_from_encoded_key(&self, encoded_key: &str) -> Result<Uuid> {
self.get_optional_uid_from_encoded_key(encoded_key.as_bytes())?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(encoded_key.to_string()))
}
pub fn get_key_filters(
&self,
uid: Uuid,
search_rules: Option<SearchRules>,
) -> Result<AuthFilter> {
let mut filters = AuthFilter::default();
let key = self
.store
.get_api_key(uid)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(uid.to_string()))?;
if !key.indexes.iter().any(|i| i == &StarOr::Star) {
filters.search_rules = match search_rules {
// Intersect search_rules with parent key authorized indexes.
Some(search_rules) => SearchRules::Map(
key.indexes
.into_iter()
.filter_map(|index| {
search_rules.get_index_search_rules(index.deref()).map(
|index_search_rules| {
(String::from(index), Some(index_search_rules))
},
)
})
.collect(),
),
None => SearchRules::Set(key.indexes.into_iter().map(String::from).collect()),
};
} else if let Some(search_rules) = search_rules {
filters.search_rules = search_rules;
}
filters.allow_index_creation = self.is_key_authorized(uid, Action::IndexesAdd, None)?;
Ok(filters)
}
pub fn list_keys(&self) -> Result<Vec<Key>> {
self.store.list_api_keys()
}
pub fn delete_key(&self, uid: Uuid) -> Result<()> {
if self.store.delete_api_key(uid)? {
Ok(())
} else {
Err(AuthControllerError::ApiKeyNotFound(uid.to_string()))
}
}
pub fn get_master_key(&self) -> Option<&String> {
self.master_key.as_ref()
}
/// Generate a valid key from a key id using the current master key.
/// Returns None if no master key has been set.
pub fn generate_key(&self, uid: Uuid) -> Option<String> {
self.master_key
.as_ref()
.map(|master_key| generate_key_as_hexa(uid, master_key.as_bytes()))
}
/// Check if the provided key is authorized to make a specific action
/// without checking if the key is valid.
pub fn is_key_authorized(
&self,
uid: Uuid,
action: Action,
index: Option<&str>,
) -> Result<bool> {
match self
.store
// check if the key has access to all indexes.
.get_expiration_date(uid, action, None)?
.or(match index {
// else check if the key has access to the requested index.
Some(index) => {
self.store
.get_expiration_date(uid, action, Some(index.as_bytes()))?
}
// or to any index if no index has been requested.
None => self.store.prefix_first_expiration_date(uid, action)?,
}) {
// check expiration date.
Some(Some(exp)) => Ok(OffsetDateTime::now_utc() < exp),
// no expiration date.
Some(None) => Ok(true),
// action or index forbidden.
None => Ok(false),
}
}
}
pub struct AuthFilter {
pub search_rules: SearchRules,
pub allow_index_creation: bool,
}
impl Default for AuthFilter {
fn default() -> Self {
Self {
search_rules: SearchRules::default(),
allow_index_creation: true,
}
}
}
/// Transparent wrapper around a list of allowed indexes with the search rules to apply for each.
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(untagged)]
pub enum SearchRules {
Set(HashSet<String>),
Map(HashMap<String, Option<IndexSearchRules>>),
}
impl Default for SearchRules {
fn default() -> Self {
Self::Set(Some("*".to_string()).into_iter().collect())
}
}
impl SearchRules {
pub fn is_index_authorized(&self, index: &str) -> bool {
match self {
Self::Set(set) => set.contains("*") || set.contains(index),
Self::Map(map) => map.contains_key("*") || map.contains_key(index),
}
}
pub fn get_index_search_rules(&self, index: &str) -> Option<IndexSearchRules> {
match self {
Self::Set(set) => {
if set.contains("*") || set.contains(index) {
Some(IndexSearchRules::default())
} else {
None
}
}
Self::Map(map) => map
.get(index)
.or_else(|| map.get("*"))
.map(|isr| isr.clone().unwrap_or_default()),
}
}
}
impl IntoIterator for SearchRules {
type Item = (String, IndexSearchRules);
type IntoIter = Box<dyn Iterator<Item = Self::Item>>;
fn into_iter(self) -> Self::IntoIter {
match self {
Self::Set(array) => {
Box::new(array.into_iter().map(|i| (i, IndexSearchRules::default())))
}
Self::Map(map) => {
Box::new(map.into_iter().map(|(i, isr)| (i, isr.unwrap_or_default())))
}
}
}
}
/// Contains the rules to apply on the top of the search query for a specific index.
///
/// filter: search filter to apply in addition to query filters.
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
pub struct IndexSearchRules {
pub filter: Option<serde_json::Value>,
}
fn generate_default_keys(store: &HeedAuthStore) -> Result<()> {
store.put_api_key(Key::default_admin())?;
store.put_api_key(Key::default_search())?;
Ok(())
}
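
A minimal end-to-end sketch of the controller, again assuming `meilisearch-auth` as a path dependency, plus `anyhow` and `serde_json` for brevity: open the store, create a key, and derive the client-facing API key (per `store.rs`, an HMAC-SHA256 of the key's `uid` under the master key):

```rust
use meilisearch_auth::AuthController;
use serde_json::json;

fn main() -> anyhow::Result<()> {
    // Opening the controller creates the auth database (and the two
    // default keys) under the given directory if it is empty.
    let master_key = Some("MASTER_KEY".to_string());
    let controller = AuthController::new("./auth-db", &master_key)?;

    let key = controller.create_key(json!({
        "actions": ["search"],
        "indexes": ["*"],
        "expiresAt": null,
    }))?;

    // Only derivable when a master key is set.
    let api_key = controller.generate_key(key.uid).expect("master key is set");
    // The uid can be recovered from the encoded key, which is how
    // incoming requests are matched back to a stored Key.
    assert_eq!(controller.get_uid_from_encoded_key(&api_key)?, key.uid);
    Ok(())
}
```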


@ -0,0 +1,322 @@
use std::borrow::Cow;
use std::cmp::Reverse;
use std::collections::HashSet;
use std::convert::TryFrom;
use std::convert::TryInto;
use std::fs::create_dir_all;
use std::ops::Deref;
use std::path::Path;
use std::str;
use std::sync::Arc;
use enum_iterator::IntoEnumIterator;
use hmac::{Hmac, Mac};
use meilisearch_types::star_or::StarOr;
use milli::heed::types::{ByteSlice, DecodeIgnore, SerdeJson};
use milli::heed::{Database, Env, EnvOpenOptions, RwTxn};
use sha2::Sha256;
use time::OffsetDateTime;
use uuid::fmt::Hyphenated;
use uuid::Uuid;
use super::error::Result;
use super::{Action, Key};
const AUTH_STORE_SIZE: usize = 1_073_741_824; //1GiB
const AUTH_DB_PATH: &str = "auth";
const KEY_DB_NAME: &str = "api-keys";
const KEY_ID_ACTION_INDEX_EXPIRATION_DB_NAME: &str = "keyid-action-index-expiration";
pub type KeyId = Uuid;
#[derive(Clone)]
pub struct HeedAuthStore {
env: Arc<Env>,
keys: Database<ByteSlice, SerdeJson<Key>>,
action_keyid_index_expiration: Database<KeyIdActionCodec, SerdeJson<Option<OffsetDateTime>>>,
should_close_on_drop: bool,
}
impl Drop for HeedAuthStore {
fn drop(&mut self) {
if self.should_close_on_drop && Arc::strong_count(&self.env) == 1 {
self.env.as_ref().clone().prepare_for_closing();
}
}
}
pub fn open_auth_store_env(path: &Path) -> milli::heed::Result<milli::heed::Env> {
let mut options = EnvOpenOptions::new();
options.map_size(AUTH_STORE_SIZE); // 1GB
options.max_dbs(2);
options.open(path)
}
impl HeedAuthStore {
pub fn new(path: impl AsRef<Path>) -> Result<Self> {
let path = path.as_ref().join(AUTH_DB_PATH);
create_dir_all(&path)?;
let env = Arc::new(open_auth_store_env(path.as_ref())?);
let keys = env.create_database(Some(KEY_DB_NAME))?;
let action_keyid_index_expiration =
env.create_database(Some(KEY_ID_ACTION_INDEX_EXPIRATION_DB_NAME))?;
Ok(Self {
env,
keys,
action_keyid_index_expiration,
should_close_on_drop: true,
})
}
pub fn set_drop_on_close(&mut self, v: bool) {
self.should_close_on_drop = v;
}
pub fn is_empty(&self) -> Result<bool> {
let rtxn = self.env.read_txn()?;
Ok(self.keys.len(&rtxn)? == 0)
}
pub fn put_api_key(&self, key: Key) -> Result<Key> {
let uid = key.uid;
let mut wtxn = self.env.write_txn()?;
self.keys.put(&mut wtxn, uid.as_bytes(), &key)?;
// delete key from inverted database before refilling it.
self.delete_key_from_inverted_db(&mut wtxn, &uid)?;
// create inverted database.
let db = self.action_keyid_index_expiration;
let mut actions = HashSet::new();
for action in &key.actions {
match action {
Action::All => actions.extend(Action::into_enum_iter()),
Action::DocumentsAll => {
actions.extend(
[
Action::DocumentsGet,
Action::DocumentsDelete,
Action::DocumentsAdd,
]
.iter(),
);
}
Action::IndexesAll => {
actions.extend(
[
Action::IndexesAdd,
Action::IndexesDelete,
Action::IndexesGet,
Action::IndexesUpdate,
]
.iter(),
);
}
Action::SettingsAll => {
actions.extend([Action::SettingsGet, Action::SettingsUpdate].iter());
}
Action::DumpsAll => {
actions.insert(Action::DumpsCreate);
}
Action::TasksAll => {
actions.insert(Action::TasksGet);
}
Action::StatsAll => {
actions.insert(Action::StatsGet);
}
Action::MetricsAll => {
actions.insert(Action::MetricsGet);
}
other => {
actions.insert(*other);
}
}
}
let no_index_restriction = key.indexes.contains(&StarOr::Star);
for action in actions {
if no_index_restriction {
// If there is no index restriction we put None.
db.put(&mut wtxn, &(&uid, &action, None), &key.expires_at)?;
} else {
// else we create a key for each index.
for index in key.indexes.iter() {
db.put(
&mut wtxn,
&(&uid, &action, Some(index.deref().as_bytes())),
&key.expires_at,
)?;
}
}
}
wtxn.commit()?;
Ok(key)
}
pub fn get_api_key(&self, uid: Uuid) -> Result<Option<Key>> {
let rtxn = self.env.read_txn()?;
self.keys.get(&rtxn, uid.as_bytes()).map_err(|e| e.into())
}
pub fn get_uid_from_encoded_key(
&self,
encoded_key: &[u8],
master_key: &[u8],
) -> Result<Option<Uuid>> {
let rtxn = self.env.read_txn()?;
let uid = self
.keys
.remap_data_type::<DecodeIgnore>()
.iter(&rtxn)?
.filter_map(|res| match res {
Ok((uid, _)) => {
let (uid, _) = try_split_array_at(uid)?;
let uid = Uuid::from_bytes(*uid);
if generate_key_as_hexa(uid, master_key).as_bytes() == encoded_key {
Some(uid)
} else {
None
}
}
Err(_) => None,
})
.next();
Ok(uid)
}
pub fn delete_api_key(&self, uid: Uuid) -> Result<bool> {
let mut wtxn = self.env.write_txn()?;
let existing = self.keys.delete(&mut wtxn, uid.as_bytes())?;
self.delete_key_from_inverted_db(&mut wtxn, &uid)?;
wtxn.commit()?;
Ok(existing)
}
pub fn list_api_keys(&self) -> Result<Vec<Key>> {
let mut list = Vec::new();
let rtxn = self.env.read_txn()?;
for result in self.keys.remap_key_type::<DecodeIgnore>().iter(&rtxn)? {
let (_, content) = result?;
list.push(content);
}
list.sort_unstable_by_key(|k| Reverse(k.created_at));
Ok(list)
}
pub fn get_expiration_date(
&self,
uid: Uuid,
action: Action,
index: Option<&[u8]>,
) -> Result<Option<Option<OffsetDateTime>>> {
let rtxn = self.env.read_txn()?;
let tuple = (&uid, &action, index);
Ok(self.action_keyid_index_expiration.get(&rtxn, &tuple)?)
}
pub fn prefix_first_expiration_date(
&self,
uid: Uuid,
action: Action,
) -> Result<Option<Option<OffsetDateTime>>> {
let rtxn = self.env.read_txn()?;
let tuple = (&uid, &action, None);
let exp = self
.action_keyid_index_expiration
.prefix_iter(&rtxn, &tuple)?
.next()
.transpose()?
.map(|(_, expiration)| expiration);
Ok(exp)
}
fn delete_key_from_inverted_db(&self, wtxn: &mut RwTxn, key: &KeyId) -> Result<()> {
let mut iter = self
.action_keyid_index_expiration
.remap_types::<ByteSlice, DecodeIgnore>()
.prefix_iter_mut(wtxn, key.as_bytes())?;
while iter.next().transpose()?.is_some() {
// safety: we don't keep references from inside the LMDB database.
unsafe { iter.del_current()? };
}
Ok(())
}
}
/// Codec allowing to retrieve the expiration date of an action,
/// optionally on a specific index, for a given key.
pub struct KeyIdActionCodec;
impl<'a> milli::heed::BytesDecode<'a> for KeyIdActionCodec {
type DItem = (KeyId, Action, Option<&'a [u8]>);
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
let (key_id_bytes, action_bytes) = try_split_array_at(bytes)?;
let (action_bytes, index) = match try_split_array_at(action_bytes)? {
(action, []) => (action, None),
(action, index) => (action, Some(index)),
};
let key_id = Uuid::from_bytes(*key_id_bytes);
let action = Action::from_repr(u8::from_be_bytes(*action_bytes))?;
Some((key_id, action, index))
}
}
impl<'a> milli::heed::BytesEncode<'a> for KeyIdActionCodec {
type EItem = (&'a KeyId, &'a Action, Option<&'a [u8]>);
fn bytes_encode((key_id, action, index): &Self::EItem) -> Option<Cow<[u8]>> {
let mut bytes = Vec::new();
bytes.extend_from_slice(key_id.as_bytes());
let action_bytes = u8::to_be_bytes(action.repr());
bytes.extend_from_slice(&action_bytes);
if let Some(index) = index {
bytes.extend_from_slice(index);
}
Some(Cow::Owned(bytes))
}
}
pub fn generate_key_as_hexa(uid: Uuid, master_key: &[u8]) -> String {
// format uid as hyphenated allowing user to generate their own keys.
let mut uid_buffer = [0; Hyphenated::LENGTH];
let uid = uid.hyphenated().encode_lower(&mut uid_buffer);
// new_from_slice function never fail.
let mut mac = Hmac::<Sha256>::new_from_slice(master_key).unwrap();
mac.update(uid.as_bytes());
let result = mac.finalize();
format!("{:x}", result.into_bytes())
}
/// Divides one slice into two at an index, returns `None` if mid is out of bounds.
pub fn try_split_at<T>(slice: &[T], mid: usize) -> Option<(&[T], &[T])> {
if mid <= slice.len() {
Some(slice.split_at(mid))
} else {
None
}
}
/// Divides one slice into an array and the tail at an index,
/// returns `None` if `N` is out of bounds.
pub fn try_split_array_at<T, const N: usize>(slice: &[T]) -> Option<(&[T; N], &[T])>
where
[T; N]: for<'a> TryFrom<&'a [T]>,
{
let (head, tail) = try_split_at(slice, N)?;
let head = head.try_into().ok()?;
Some((head, tail))
}
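
The codec's key layout is a 16-byte uuid, then a 1-byte action, then optional index bytes. Because the first two fields are fixed-width, decoding is just two fixed-size splits, and an empty tail means the key is not index-restricted. A standalone sketch of that layout, re-implementing `try_split_array_at` so it runs without the crate:

```rust
// Split off a fixed-size head, as the codec above does for the uuid and
// the action byte; returns None if the slice is too short.
fn try_split_array_at<const N: usize>(slice: &[u8]) -> Option<(&[u8; N], &[u8])> {
    if slice.len() < N {
        return None;
    }
    let (head, tail) = slice.split_at(N);
    Some((head.try_into().ok()?, tail))
}

fn main() {
    let uid = [0x11u8; 16]; // stand-in for a Uuid's raw bytes
    let action: u8 = 1; // e.g. the byte for a search action
    let index = b"movies";

    // Encode: uuid bytes, action byte, then the raw index bytes.
    let mut bytes = Vec::new();
    bytes.extend_from_slice(&uid);
    bytes.push(action);
    bytes.extend_from_slice(index);

    // Decode: two fixed-size splits recover all three fields.
    let (uid_bytes, rest) = try_split_array_at::<16>(&bytes).unwrap();
    let (action_bytes, index_bytes) = try_split_array_at::<1>(rest).unwrap();
    assert_eq!(uid_bytes, &uid);
    assert_eq!(action_bytes, &[action]);
    assert_eq!(index_bytes, &b"movies"[..]);
}
```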


@ -1,9 +0,0 @@
[package]
name = "meilisearch-error"
version = "0.24.0"
authors = ["marin <postma.marin@protonmail.com>"]
edition = "2018"
[dependencies]
actix-http = "=3.0.0-beta.10"
serde = { version = "1.0.130", features = ["derive"] }


@ -1,88 +1,101 @@
[package]
authors = ["Quentin de Quelen <quentin@dequelen.me>", "Clément Renault <clement@meilisearch.com>"]
description = "MeiliSearch HTTP server"
edition = "2018"
description = "Meilisearch HTTP server"
edition = "2021"
license = "MIT"
name = "meilisearch-http"
version = "0.24.0"
version = "0.29.3"
[[bin]]
name = "meilisearch"
path = "src/main.rs"
[build-dependencies]
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "39d8006", optional = true }
anyhow = { version = "1.0.43", optional = true }
cargo_toml = { version = "0.9", optional = true }
anyhow = { version = "1.0.62", optional = true }
cargo_toml = { version = "0.11.4", optional = true }
hex = { version = "0.4.3", optional = true }
reqwest = { version = "0.11.4", features = ["blocking", "rustls-tls"], default-features = false, optional = true }
sha-1 = { version = "0.9.8", optional = true }
tempfile = { version = "3.2.0", optional = true }
vergen = { version = "5.1.15", default-features = false, features = ["git"] }
reqwest = { version = "0.11.9", features = ["blocking", "rustls-tls"], default-features = false, optional = true }
sha-1 = { version = "0.10.0", optional = true }
static-files = { version = "0.2.3", optional = true }
tempfile = { version = "3.3.0", optional = true }
vergen = { version = "7.0.0", default-features = false, features = ["git"] }
zip = { version = "0.5.13", optional = true }
[dependencies]
actix-cors = { git = "https://github.com/MarinPostma/actix-extras.git", rev = "963ac94d" }
actix-web = { version = "4.0.0-beta.9", features = ["rustls"] }
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "39d8006", optional = true }
anyhow = { version = "1.0.43", features = ["backtrace"] }
async-stream = "0.3.2"
async-trait = "0.1.51"
arc-swap = "1.3.2"
byte-unit = { version = "4.0.12", default-features = false, features = ["std"] }
actix-cors = "0.6.1"
actix-web = { version = "4.0.1", default-features = false, features = ["macros", "compress-brotli", "compress-gzip", "cookies", "rustls"] }
actix-web-static-files = { git = "https://github.com/kilork/actix-web-static-files.git", rev = "2d3b6160", optional = true }
anyhow = { version = "1.0.62", features = ["backtrace"] }
async-stream = "0.3.3"
async-trait = "0.1.52"
bstr = "0.2.17"
byte-unit = { version = "4.0.14", default-features = false, features = ["std", "serde"] }
bytes = "1.1.0"
chrono = { version = "0.4.19", features = ["serde"] }
crossbeam-channel = "0.5.1"
clap = { version = "3.1.6", features = ["derive", "env"] }
crossbeam-channel = "0.5.2"
either = "1.6.1"
env_logger = "0.9.0"
flate2 = "1.0.21"
flate2 = "1.0.22"
fst = "0.4.7"
futures = "0.3.17"
futures-util = "0.3.17"
heed = { git = "https://github.com/Kerollmops/heed", tag = "v0.12.1" }
http = "0.2.4"
indexmap = { version = "1.7.0", features = ["serde-1"] }
itertools = "0.10.1"
futures = "0.3.21"
futures-util = "0.3.21"
http = "0.2.6"
indexmap = { version = "1.8.0", features = ["serde-1"] }
itertools = "0.10.3"
jsonwebtoken = "8.0.1"
log = "0.4.14"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-types = { path = "../meilisearch-types" }
meilisearch-lib = { path = "../meilisearch-lib" }
meilisearch-error = { path = "../meilisearch-error" }
meilisearch-tokenizer = { git = "https://github.com/meilisearch/tokenizer.git", tag = "v0.2.5" }
mimalloc = { version = "0.1.29", default-features = false }
mime = "0.3.16"
num_cpus = "1.13.0"
once_cell = "1.8.0"
parking_lot = "0.11.2"
platform-dirs = "0.3.0"
rand = "0.8.4"
rayon = "1.5.1"
regex = "1.5.4"
rustls = "0.19.1"
segment = { version = "0.1.2", optional = true }
serde = { version = "1.0.130", features = ["derive"] }
serde_json = { version = "1.0.67", features = ["preserve_order"] }
sha2 = "0.9.6"
siphasher = "0.3.7"
slice-group-by = "0.2.6"
structopt = "0.3.23"
tar = "0.4.37"
tempfile = "3.2.0"
thiserror = "1.0.28"
tokio = { version = "1.11.0", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde"] }
walkdir = "2.3.2"
num_cpus = "1.13.1"
obkv = "0.2.0"
pin-project = "1.0.8"
sysinfo = "0.20.2"
tokio-stream = "0.1.7"
once_cell = "1.10.0"
parking_lot = "0.12.0"
pin-project-lite = "0.2.8"
platform-dirs = "0.3.0"
rand = "0.8.5"
rayon = "1.5.1"
regex = "1.5.5"
reqwest = { version = "0.11.4", features = ["rustls-tls", "json"], default-features = false }
rustls = "0.20.4"
rustls-pemfile = "0.3.0"
segment = { version = "0.2.0", optional = true }
serde = { version = "1.0.136", features = ["derive"] }
serde-cs = "0.2.3"
serde_json = { version = "1.0.85", features = ["preserve_order"] }
sha2 = "0.10.2"
siphasher = "0.3.10"
slice-group-by = "0.3.0"
static-files = { version = "0.2.3", optional = true }
sysinfo = "0.23.5"
tar = "0.4.38"
tempfile = "3.3.0"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
tokio = { version = "1.17.0", features = ["full"] }
tokio-stream = "0.1.8"
uuid = { version = "1.1.2", features = ["serde", "v4"] }
walkdir = "2.3.2"
prometheus = { version = "0.13.0", features = ["process"], optional = true }
lazy_static = "1.4.0"
[dev-dependencies]
actix-rt = "2.2.0"
paste = "1.0.5"
serde_url_params = "0.2.1"
actix-rt = "2.7.0"
assert-json-diff = "2.0.1"
manifest-dir-macros = "0.1.14"
maplit = "1.0.2"
urlencoding = "2.1.0"
yaup = "0.2.0"
[features]
default = ["analytics", "mini-dashboard"]
metrics = ["prometheus"]
analytics = ["segment"]
mini-dashboard = [
"actix-web-static-files",
"static-files",
"anyhow",
"cargo_toml",
"hex",
@ -91,12 +104,7 @@ mini-dashboard = [
"tempfile",
"zip",
]
analytics = ["segment"]
default = ["analytics", "mini-dashboard"]
[target.'cfg(target_os = "linux")'.dependencies]
tikv-jemallocator = "0.4.1"
[package.metadata.mini-dashboard]
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.1.4/build.zip"
sha1 = "750e8a8e56cfa61fbf9ead14b08a5f17ad3f3d37"
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.2.2/build.zip"
sha1 = "c69feffc6b590e38a46981a85c47f48905d4082a"

View File

@ -16,11 +16,11 @@ mod mini_dashboard {
use std::io::{Cursor, Read, Write};
use std::path::PathBuf;
use actix_web_static_files::resource_dir;
use anyhow::Context;
use cargo_toml::Manifest;
use reqwest::blocking::get;
use sha1::{Digest, Sha1};
use static_files::resource_dir;
pub fn setup_mini_dashboard() -> anyhow::Result<()> {
let cargo_manifest_dir = PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap());

View File

@ -18,7 +18,7 @@ impl SearchAggregator {
Self::default()
}
pub fn finish(&mut self, _: &dyn Any) {}
pub fn succeed(&mut self, _: &dyn Any) {}
}
impl MockAnalytics {

View File

@ -29,12 +29,12 @@ pub type SegmentAnalytics = segment_analytics::SegmentAnalytics;
#[cfg(all(not(debug_assertions), feature = "analytics"))]
pub type SearchAggregator = segment_analytics::SearchAggregator;
/// The MeiliSearch config dir:
/// `~/.config/MeiliSearch` on *NIX or *BSD.
/// The Meilisearch config dir:
/// `~/.config/Meilisearch` on *NIX or *BSD.
/// `~/Library/ApplicationSupport` on macOS.
/// `%APPDATA%` (= `C:\Users\%USERNAME%\AppData\Roaming`) on Windows.
static MEILISEARCH_CONFIG_PATH: Lazy<Option<PathBuf>> =
Lazy::new(|| AppDirs::new(Some("MeiliSearch"), false).map(|appdir| appdir.config_dir));
Lazy::new(|| AppDirs::new(Some("Meilisearch"), false).map(|appdir| appdir.config_dir));
fn config_user_id_path(db_path: &Path) -> Option<PathBuf> {
db_path
@ -44,13 +44,13 @@ fn config_user_id_path(db_path: &Path) -> Option<PathBuf> {
path.join("instance-uid")
.display()
.to_string()
.replace("/", "-")
.replace('/', "-")
})
.zip(MEILISEARCH_CONFIG_PATH.as_ref())
.map(|(filename, config_path)| config_path.join(filename.trim_start_matches('-')))
}
/// Look for the instance-uid in the `data.ms` or in `~/.config/MeiliSearch/path-to-db-instance-uid`
/// Look for the instance-uid in the `data.ms` or in `~/.config/Meilisearch/path-to-db-instance-uid`
fn find_user_id(db_path: &Path) -> Option<String> {
fs::read_to_string(db_path.join("instance-uid"))
.ok()
@ -61,7 +61,7 @@ pub trait Analytics: Sync + Send {
/// The method used to publish most analytics that do not need to be batched every hour
fn publish(&self, event_name: String, send: Value, request: Option<&HttpRequest>);
/// This method should be called to aggergate a get search
/// This method should be called to aggregate a get search
fn get_search(&self, aggregate: SearchAggregator);
/// This method should be called to aggregate a post search
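
As a worked example of the instance-uid path mangling in this file, here is a simplified, self-contained sketch of the same transformation (the real code resolves the config directory lazily through `MEILISEARCH_CONFIG_PATH`):

use std::path::{Path, PathBuf};

// Sketch: a database path is flattened into a single file name under the
// config dir, e.g. "/home/user/data.ms" -> "home-user-data.ms-instance-uid".
fn instance_uid_path(db_path: &Path, config_dir: &Path) -> PathBuf {
    let filename = db_path
        .join("instance-uid")
        .display()
        .to_string()
        .replace('/', "-");
    config_dir.join(filename.trim_start_matches('-'))
}

fn main() {
    let path = instance_uid_path(
        Path::new("/home/user/data.ms"),
        Path::new("/home/user/.config/Meilisearch"),
    );
    assert_eq!(
        path,
        Path::new("/home/user/.config/Meilisearch/home-user-data.ms-instance-uid")
    );
}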

View File

@ -1,13 +1,17 @@
use std::collections::{HashMap, HashSet};
use std::collections::{BinaryHeap, HashMap, HashSet};
use std::fs;
use std::path::Path;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::{Duration, Instant};
use actix_web::http::header::USER_AGENT;
use actix_web::HttpRequest;
use http::header::CONTENT_TYPE;
use meilisearch_lib::index::{SearchQuery, SearchResult};
use meilisearch_auth::SearchRules;
use meilisearch_lib::index::{
SearchQuery, SearchResult, DEFAULT_CROP_LENGTH, DEFAULT_CROP_MARKER,
DEFAULT_HIGHLIGHT_POST_TAG, DEFAULT_HIGHLIGHT_PRE_TAG,
};
use meilisearch_lib::index_controller::Stats;
use meilisearch_lib::MeiliSearch;
use once_cell::sync::Lazy;
@ -16,6 +20,7 @@ use segment::message::{Identify, Track, User};
use segment::{AutoBatcher, Batcher, HttpClient};
use serde_json::{json, Value};
use sysinfo::{DiskExt, System, SystemExt};
use time::OffsetDateTime;
use tokio::select;
use tokio::sync::mpsc::{self, Receiver, Sender};
use uuid::Uuid;
@ -26,6 +31,8 @@ use crate::Opt;
use super::{config_user_id_path, MEILISEARCH_CONFIG_PATH};
const ANALYTICS_HEADER: &str = "X-Meilisearch-Client";
/// Write the instance-uid in the `data.ms` and in `~/.config/MeiliSearch/path-to-db-instance-uid`. Ignore the errors.
fn write_user_id(db_path: &Path, user_id: &str) {
let _ = fs::write(db_path.join("instance-uid"), user_id.as_bytes());
@ -38,12 +45,13 @@ fn write_user_id(db_path: &Path, user_id: &str) {
}
}
const SEGMENT_API_KEY: &str = "vHi89WrNDckHSQssyUJqLvIyp2QFITSC";
const SEGMENT_API_KEY: &str = "P3FWhhEsJiEDCuEHpmcN9DHcK4hVfBvb";
pub fn extract_user_agents(request: &HttpRequest) -> Vec<String> {
request
.headers()
.get(USER_AGENT)
.get(ANALYTICS_HEADER)
.or_else(|| request.headers().get(USER_AGENT))
.map(|header| header.to_str().ok())
.flatten()
.unwrap_or("unknown")
@ -73,9 +81,44 @@ impl SegmentAnalytics {
let user_id = user_id.unwrap_or_else(|| Uuid::new_v4().to_string());
write_user_id(&opt.db_path, &user_id);
let client = HttpClient::default();
let client = reqwest::Client::builder()
.connect_timeout(Duration::from_secs(10))
.build();
// if reqwest throws an error we won't be able to send analytics
if client.is_err() {
return super::MockAnalytics::new(opt);
}
let client = HttpClient::new(
client.unwrap(),
"https://telemetry.meilisearch.com".to_string(),
);
let user = User::UserId { user_id };
let batcher = AutoBatcher::new(client, Batcher::new(None), SEGMENT_API_KEY.to_string());
let mut batcher = AutoBatcher::new(client, Batcher::new(None), SEGMENT_API_KEY.to_string());
// If Meilisearch is Launched for the first time:
// 1. Send an event Launched associated to the user `total_launch`.
// 2. Batch an event Launched with the real instance-id and send it in one hour.
if first_time_run {
let _ = batcher
.push(Track {
user: User::UserId {
user_id: "total_launch".to_string(),
},
event: "Launched".to_string(),
..Default::default()
})
.await;
let _ = batcher.flush().await;
let _ = batcher
.push(Track {
user: user.clone(),
event: "Launched".to_string(),
..Default::default()
})
.await;
}
let (sender, inbox) = mpsc::channel(100); // How many analytics events we can buffer
@ -95,10 +138,6 @@ impl SegmentAnalytics {
sender,
user: user.clone(),
};
// batch the launched for the first time track event
if first_time_run {
this.publish("Launched".to_string(), json!({}), None);
}
(Arc::new(this), user.to_string())
}
@ -106,11 +145,7 @@ impl SegmentAnalytics {
impl super::Analytics for SegmentAnalytics {
fn publish(&self, event_name: String, mut send: Value, request: Option<&HttpRequest>) {
let user_agent = request
.map(|req| req.headers().get(USER_AGENT))
.flatten()
.map(|header| header.to_str().unwrap_or("unknown"))
.map(|s| s.split(';').map(str::trim).collect::<Vec<&str>>());
let user_agent = request.map(|req| extract_user_agents(req));
send["user-agent"] = json!(user_agent);
let event = Track {
@ -187,14 +222,34 @@ impl Segment {
"kernel_version": kernel_version,
"cores": sys.processors().len(),
"ram_size": sys.total_memory(),
"disk_size": sys.disks().iter().map(|disk| disk.available_space()).max(),
"disk_size": sys.disks().iter().map(|disk| disk.total_space()).max(),
"server_provider": std::env::var("MEILI_SERVER_PROVIDER").ok(),
})
});
let infos = json!({
"env": opt.env.clone(),
"has_snapshot": opt.schedule_snapshot,
});
// The infos are all CLI options except those containing sensitive information.
// We consider a piece of information sensitive if it contains a path, an address, or a key.
let infos = {
// First we see if any sensitive fields were used.
let db_path = opt.db_path != PathBuf::from("./data.ms");
let import_dump = opt.import_dump.is_some();
let dumps_dir = opt.dumps_dir != PathBuf::from("dumps/");
let import_snapshot = opt.import_snapshot.is_some();
let snapshots_dir = opt.snapshot_dir != PathBuf::from("snapshots/");
let http_addr = opt.http_addr != "127.0.0.1:7700";
let mut infos = serde_json::to_value(opt).unwrap();
// Then we overwrite all sensitive field with a boolean representing if
// the feature was used or not.
infos["db_path"] = json!(db_path);
infos["import_dump"] = json!(import_dump);
infos["dumps_dir"] = json!(dumps_dir);
infos["import_snapshot"] = json!(import_snapshot);
infos["snapshot_dir"] = json!(snapshots_dir);
infos["http_addr"] = json!(http_addr);
infos
};
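
The overwrite pattern above can be checked in isolation; a minimal sketch (field values hypothetical):

use serde_json::json;

fn main() {
    // Serialize the options, then replace each sensitive field with a
    // boolean recording only whether the default was changed.
    let mut infos = json!({ "db_path": "/secret/place/data.ms", "env": "production" });
    let db_path_customized = infos["db_path"] != json!("./data.ms");
    infos["db_path"] = json!(db_path_customized);
    assert_eq!(infos, json!({ "db_path": true, "env": "production" }));
}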
let number_of_documents = stats
.indexes
@ -216,7 +271,9 @@ impl Segment {
async fn run(mut self, meilisearch: MeiliSearch) {
const INTERVAL: Duration = Duration::from_secs(60 * 60); // one hour
let mut interval = tokio::time::interval(INTERVAL);
// The first batch must be sent after one hour.
let mut interval =
tokio::time::interval_at(tokio::time::Instant::now() + INTERVAL, INTERVAL);
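
A minimal sketch of why `interval_at` is used here: a plain `interval` yields its first tick immediately, which would send the first analytics batch at startup rather than after an hour.

use std::time::Duration;
use tokio::time::{interval_at, Instant};

#[tokio::main]
async fn main() {
    const INTERVAL: Duration = Duration::from_secs(60 * 60);
    // Offsetting the start by one period delays the first tick by an hour;
    // tokio::time::interval(INTERVAL) would complete the first tick at once.
    let mut ticker = interval_at(Instant::now() + INTERVAL, INTERVAL);
    ticker.tick().await;
}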
loop {
select! {
@ -238,7 +295,7 @@ impl Segment {
}
async fn tick(&mut self, meilisearch: MeiliSearch) {
if let Ok(stats) = meilisearch.get_all_stats().await {
if let Ok(stats) = meilisearch.get_all_stats(&SearchRules::default()).await {
let _ = self
.batcher
.push(Identify {
@ -254,9 +311,9 @@ impl Segment {
.await;
}
let get_search = std::mem::take(&mut self.get_search_aggregator)
.into_event(&self.user, "Document Searched GET");
.into_event(&self.user, "Documents Searched GET");
let post_search = std::mem::take(&mut self.post_search_aggregator)
.into_event(&self.user, "Document Searched POST");
.into_event(&self.user, "Documents Searched POST");
let add_documents = std::mem::take(&mut self.add_documents_aggregator)
.into_event(&self.user, "Documents Added");
let update_documents = std::mem::take(&mut self.update_documents_aggregator)
@ -280,13 +337,15 @@ impl Segment {
#[derive(Default)]
pub struct SearchAggregator {
timestamp: Option<OffsetDateTime>,
// context
user_agents: HashSet<String>,
// requests
total_received: usize,
total_succeeded: usize,
time_spent: Vec<usize>,
time_spent: BinaryHeap<usize>,
// sort
sort_with_geo_point: bool,
@ -304,19 +363,29 @@ pub struct SearchAggregator {
used_syntax: HashMap<String, usize>,
// q
// every time a request has a q field, this field must be incremented by the number of terms
sum_of_terms_count: usize,
// every time a request has a q field, this field must be incremented by one
total_number_of_q: usize,
// The maximum number of terms in a q request
max_terms_number: usize,
// every time a search is done, we increment the counter linked to the used settings
matching_strategy: HashMap<String, usize>,
// pagination
max_limit: usize,
max_offset: usize,
// formatting
highlight_pre_tag: bool,
highlight_post_tag: bool,
crop_marker: bool,
show_matches_position: bool,
crop_length: bool,
}
impl SearchAggregator {
pub fn from_query(query: &SearchQuery, request: &HttpRequest) -> Self {
let mut ret = Self::default();
ret.timestamp = Some(OffsetDateTime::now_utc());
ret.total_received = 1;
ret.user_agents = extract_user_agents(request).into_iter().collect();
@ -354,61 +423,96 @@ impl SearchAggregator {
}
if let Some(ref q) = query.q {
ret.total_number_of_q = 1;
ret.sum_of_terms_count = q.split_whitespace().count();
ret.max_terms_number = q.split_whitespace().count();
}
ret.matching_strategy
.insert(format!("{:?}", query.matching_strategy), 1);
ret.max_limit = query.limit;
ret.max_offset = query.offset.unwrap_or_default();
ret.highlight_pre_tag = query.highlight_pre_tag != DEFAULT_HIGHLIGHT_PRE_TAG();
ret.highlight_post_tag = query.highlight_post_tag != DEFAULT_HIGHLIGHT_POST_TAG();
ret.crop_marker = query.crop_marker != DEFAULT_CROP_MARKER();
ret.crop_length = query.crop_length != DEFAULT_CROP_LENGTH();
ret.show_matches_position = query.show_matches_position;
ret
}
pub fn finish(&mut self, result: &SearchResult) {
self.total_succeeded += 1;
pub fn succeed(&mut self, result: &SearchResult) {
self.total_succeeded = self.total_succeeded.saturating_add(1);
self.time_spent.push(result.processing_time_ms as usize);
}
/// Aggregate one [SearchAggregator] into another.
pub fn aggregate(&mut self, mut other: Self) {
if self.timestamp.is_none() {
self.timestamp = other.timestamp;
}
// context
for user_agent in other.user_agents.into_iter() {
self.user_agents.insert(user_agent);
}
// request
self.total_received += other.total_received;
self.total_succeeded += other.total_succeeded;
self.total_received = self.total_received.saturating_add(other.total_received);
self.total_succeeded = self.total_succeeded.saturating_add(other.total_succeeded);
self.time_spent.append(&mut other.time_spent);
// sort
self.sort_with_geo_point |= other.sort_with_geo_point;
self.sort_sum_of_criteria_terms += other.sort_sum_of_criteria_terms;
self.sort_total_number_of_criteria += other.sort_total_number_of_criteria;
self.sort_sum_of_criteria_terms = self
.sort_sum_of_criteria_terms
.saturating_add(other.sort_sum_of_criteria_terms);
self.sort_total_number_of_criteria = self
.sort_total_number_of_criteria
.saturating_add(other.sort_total_number_of_criteria);
// filter
self.filter_with_geo_radius |= other.filter_with_geo_radius;
self.filter_sum_of_criteria_terms += other.filter_sum_of_criteria_terms;
self.filter_total_number_of_criteria += other.filter_total_number_of_criteria;
self.filter_sum_of_criteria_terms = self
.filter_sum_of_criteria_terms
.saturating_add(other.filter_sum_of_criteria_terms);
self.filter_total_number_of_criteria = self
.filter_total_number_of_criteria
.saturating_add(other.filter_total_number_of_criteria);
for (key, value) in other.used_syntax.into_iter() {
*self.used_syntax.entry(key).or_insert(0) += value;
let used_syntax = self.used_syntax.entry(key).or_insert(0);
*used_syntax = used_syntax.saturating_add(value);
}
// q
self.sum_of_terms_count += other.sum_of_terms_count;
self.total_number_of_q += other.total_number_of_q;
self.max_terms_number = self.max_terms_number.max(other.max_terms_number);
for (key, value) in other.matching_strategy.into_iter() {
let matching_strategy = self.matching_strategy.entry(key).or_insert(0);
*matching_strategy = matching_strategy.saturating_add(value);
}
// pagination
self.max_limit = self.max_limit.max(other.max_limit);
self.max_offset = self.max_offset.max(other.max_offset);
self.highlight_pre_tag |= other.highlight_pre_tag;
self.highlight_post_tag |= other.highlight_post_tag;
self.crop_marker |= other.crop_marker;
self.show_matches_position |= other.show_matches_position;
self.crop_length |= other.crop_length;
}
pub fn into_event(mut self, user: &User, event_name: &str) -> Option<Track> {
pub fn into_event(self, user: &User, event_name: &str) -> Option<Track> {
if self.total_received == 0 {
None
} else {
// the index of the 99th percentile value
let percentile_99th = 0.99 * (self.total_succeeded as f64 - 1.) + 1.;
self.time_spent.drain(percentile_99th as usize..);
// we get all the values in a sorted manner
let time_spent = self.time_spent.into_sorted_vec();
// We are only interested in the slowest value of the 99% fastest results
let time_spent = time_spent.get(percentile_99th as usize);
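
A quick check of the index arithmetic, as a minimal sketch with 200 synthetic timings:

use std::collections::BinaryHeap;

fn main() {
    // With 200 sorted timings: index 0.99 * (200 - 1) + 1 = 198.01,
    // truncated to 198, i.e. the second-slowest measurement.
    let heap: BinaryHeap<usize> = (1..=200).collect();
    let sorted = heap.into_sorted_vec(); // ascending order
    let idx = (0.99 * (sorted.len() as f64 - 1.) + 1.) as usize;
    assert_eq!(idx, 198);
    assert_eq!(sorted.get(idx), Some(&199));
}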
let properties = json!({
"user-agent": self.user_agents,
"requests": {
"99th_response_time": format!("{:.2}", self.time_spent.iter().sum::<usize>() as f64 / self.time_spent.len() as f64),
"99th_response_time": time_spent.map(|t| format!("{:.2}", t)),
"total_succeeded": self.total_succeeded,
"total_failed": self.total_received.saturating_sub(self.total_succeeded), // just to be sure we never panics
"total_received": self.total_received,
@ -423,15 +527,24 @@ impl SearchAggregator {
"most_used_syntax": self.used_syntax.iter().max_by_key(|(_, v)| *v).map(|(k, _)| json!(k)).unwrap_or_else(|| json!(null)),
},
"q": {
"avg_terms_number": format!("{:.2}", self.sum_of_terms_count as f64 / self.total_number_of_q as f64),
"max_terms_number": self.max_terms_number,
"most_used_matching_strategy": self.matching_strategy.iter().max_by_key(|(_, v)| *v).map(|(k, _)| json!(k)).unwrap_or_else(|| json!(null)),
},
"pagination": {
"max_limit": self.max_limit,
"max_offset": self.max_offset,
},
"formatting": {
"highlight_pre_tag": self.highlight_pre_tag,
"highlight_post_tag": self.highlight_post_tag,
"crop_marker": self.crop_marker,
"show_matches_position": self.show_matches_position,
"crop_length": self.crop_length,
},
});
Some(Track {
timestamp: self.timestamp,
user: user.clone(),
event: event_name.to_string(),
properties,
@ -443,6 +556,8 @@ impl SearchAggregator {
#[derive(Default)]
pub struct DocumentsAggregator {
timestamp: Option<OffsetDateTime>,
// set to true when at least one request was received
updated: bool,
@ -461,6 +576,7 @@ impl DocumentsAggregator {
request: &HttpRequest,
) -> Self {
let mut ret = Self::default();
ret.timestamp = Some(OffsetDateTime::now_utc());
ret.updated = true;
ret.user_agents = extract_user_agents(request).into_iter().collect();
@ -470,8 +586,8 @@ impl DocumentsAggregator {
let content_type = request
.headers()
.get(CONTENT_TYPE)
.map(|s| s.to_str().unwrap_or("unkown"))
.unwrap()
.and_then(|s| s.to_str().ok())
.unwrap_or("unknown")
.to_string();
ret.content_types.insert(content_type);
ret.index_creation = index_creation;
@ -481,15 +597,19 @@ impl DocumentsAggregator {
/// Aggregate one [DocumentsAggregator] into another.
pub fn aggregate(&mut self, other: Self) {
if self.timestamp.is_none() {
self.timestamp = other.timestamp;
}
self.updated |= other.updated;
// we can't create a union because there is no `into_union` method
for user_agent in other.user_agents.into_iter() {
for user_agent in other.user_agents {
self.user_agents.insert(user_agent);
}
for primary_key in other.primary_keys.into_iter() {
for primary_key in other.primary_keys {
self.primary_keys.insert(primary_key);
}
for content_type in other.content_types.into_iter() {
for content_type in other.content_types {
self.content_types.insert(content_type);
}
self.index_creation |= other.index_creation;
@ -507,6 +627,7 @@ impl DocumentsAggregator {
});
Some(Track {
timestamp: self.timestamp,
user: user.clone(),
event: event_name.to_string(),
properties,

View File

@ -1,13 +1,6 @@
use std::error::Error;
use std::fmt;
use actix_web as aweb;
use actix_web::body::Body;
use actix_web::http::StatusCode;
use actix_web::HttpResponseBuilder;
use aweb::error::{JsonPayloadError, QueryPayloadError};
use meilisearch_error::{Code, ErrorCode};
use serde::{Deserialize, Serialize};
use meilisearch_types::error::{Code, ErrorCode, ResponseError};
#[derive(Debug, thiserror::Error)]
pub enum MeilisearchHttpError {
@ -36,71 +29,18 @@ impl From<MeilisearchHttpError> for aweb::Error {
}
}
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(rename_all = "camelCase")]
pub struct ResponseError {
#[serde(skip)]
code: StatusCode,
message: String,
#[serde(rename = "code")]
error_code: String,
#[serde(rename = "type")]
error_type: String,
#[serde(rename = "link")]
error_link: String,
}
impl fmt::Display for ResponseError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.message.fmt(f)
}
}
impl<T> From<T> for ResponseError
where
T: ErrorCode,
{
fn from(other: T) -> Self {
Self {
code: other.http_status(),
message: other.to_string(),
error_code: other.error_name(),
error_type: other.error_type(),
error_link: other.error_url(),
}
}
}
impl aweb::error::ResponseError for ResponseError {
fn error_response(&self) -> aweb::HttpResponse<Body> {
let json = serde_json::to_vec(self).unwrap();
HttpResponseBuilder::new(self.status_code())
.content_type("application/json")
.body(json)
}
fn status_code(&self) -> StatusCode {
self.code
}
}
impl fmt::Display for PayloadError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
PayloadError::Json(e) => e.fmt(f),
PayloadError::Query(e) => e.fmt(f),
}
}
}
#[derive(Debug)]
#[derive(Debug, thiserror::Error)]
pub enum PayloadError {
#[error("{0}")]
Json(JsonPayloadError),
#[error("{0}")]
Query(QueryPayloadError),
#[error("The json payload provided is malformed. `{0}`.")]
MalformedPayload(serde_json::error::Error),
#[error("A json payload is missing.")]
MissingPayload,
}
impl Error for PayloadError {}
impl ErrorCode for PayloadError {
fn error_code(&self) -> Code {
match self {
@ -110,7 +50,8 @@ impl ErrorCode for PayloadError {
JsonPayloadError::Payload(aweb::error::PayloadError::Overflow) => {
Code::PayloadTooLarge
}
JsonPayloadError::Deserialize(_) | JsonPayloadError::Payload(_) => Code::BadRequest,
JsonPayloadError::Payload(_) => Code::BadRequest,
JsonPayloadError::Deserialize(_) => Code::BadRequest,
JsonPayloadError::Serialize(_) => Code::Internal,
_ => Code::Internal,
},
@ -118,13 +59,29 @@ impl ErrorCode for PayloadError {
QueryPayloadError::Deserialize(_) => Code::BadRequest,
_ => Code::Internal,
},
PayloadError::MissingPayload => Code::MissingPayload,
PayloadError::MalformedPayload(_) => Code::MalformedPayload,
}
}
}
impl From<JsonPayloadError> for PayloadError {
fn from(other: JsonPayloadError) -> Self {
Self::Json(other)
match other {
JsonPayloadError::Deserialize(e)
if e.classify() == serde_json::error::Category::Eof
&& e.line() == 1
&& e.column() == 0 =>
{
Self::MissingPayload
}
JsonPayloadError::Deserialize(e)
if e.classify() != serde_json::error::Category::Data =>
{
Self::MalformedPayload(e)
}
_ => Self::Json(other),
}
}
}
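
The empty-body heuristic above can be reproduced directly with serde_json; a minimal sketch:

use serde_json::{error::Category, Value};

fn main() {
    // An empty payload fails with an EOF error at line 1, column 0 — the
    // exact signature the conversion above maps to `MissingPayload`.
    let err = serde_json::from_str::<Value>("").unwrap_err();
    assert_eq!(err.classify(), Category::Eof);
    assert_eq!((err.line(), err.column()), (1, 0));

    // Anything broken that is not a data-shape error becomes `MalformedPayload`.
    let err = serde_json::from_str::<Value>("{ \"truncated\": ").unwrap_err();
    assert_ne!(err.classify(), Category::Data);
}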

View File

@ -1,25 +1,22 @@
use meilisearch_error::{Code, ErrorCode};
use meilisearch_types::error::{Code, ErrorCode};
#[derive(Debug, thiserror::Error)]
pub enum AuthenticationError {
#[error("You must have an authorization token")]
#[error("The Authorization header is missing. It must use the bearer authorization method.")]
MissingAuthorizationHeader,
#[error("Invalid API key")]
InvalidToken(String),
#[error("The provided API key is invalid.")]
InvalidToken,
// Triggered on configuration error.
#[error("Irretrievable state")]
#[error("An internal error has occurred. `Irretrievable state`.")]
IrretrievableState,
#[error("Unknown authentication policy")]
UnknownPolicy,
}
impl ErrorCode for AuthenticationError {
fn error_code(&self) -> Code {
match self {
AuthenticationError::MissingAuthorizationHeader => Code::MissingAuthorizationHeader,
AuthenticationError::InvalidToken(_) => Code::InvalidToken,
AuthenticationError::InvalidToken => Code::InvalidToken,
AuthenticationError::IrretrievableState => Code::Internal,
AuthenticationError::UnknownPolicy => Code::Internal,
}
}
}

View File

@ -1,84 +1,81 @@
mod error;
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::marker::PhantomData;
use std::ops::Deref;
use std::pin::Pin;
use actix_web::FromRequest;
pub use error::AuthenticationError;
use futures::future::err;
use futures::future::{ok, Ready};
use futures::Future;
use meilisearch_auth::{AuthController, AuthFilter};
use meilisearch_types::error::{Code, ResponseError};
use crate::error::ResponseError;
use error::AuthenticationError;
macro_rules! create_policies {
($($name:ident), *) => {
pub mod policies {
use std::collections::HashSet;
use crate::extractors::authentication::Policy;
$(
#[derive(Debug, Default)]
pub struct $name {
inner: HashSet<Vec<u8>>
}
impl $name {
pub fn new() -> Self {
Self { inner: HashSet::new() }
}
pub fn add(&mut self, token: Vec<u8>) {
self.inner.insert(token);
}
}
impl Policy for $name {
fn authenticate(&self, token: &[u8]) -> bool {
self.inner.contains(token)
}
}
)*
}
};
}
create_policies!(Public, Private, Admin);
/// Instantiate a `Policies`, filled with the given policies.
macro_rules! init_policies {
($($name:ident), *) => {
{
let mut policies = crate::extractors::authentication::Policies::new();
$(
let policy = $name::new();
policies.insert(policy);
)*
policies
}
};
}
/// Adds user to all specified policies.
macro_rules! create_users {
($policies:ident, $($user:expr => { $($policy:ty), * }), *) => {
{
$(
$(
$policies.get_mut::<$policy>().map(|p| p.add($user.to_owned()));
)*
)*
}
};
}
pub struct GuardedData<T, D> {
pub struct GuardedData<P, D> {
data: D,
_marker: PhantomData<T>,
filters: AuthFilter,
_marker: PhantomData<P>,
}
impl<T, D> Deref for GuardedData<T, D> {
impl<P, D> GuardedData<P, D> {
pub fn filters(&self) -> &AuthFilter {
&self.filters
}
async fn auth_bearer(
auth: AuthController,
token: String,
index: Option<String>,
data: Option<D>,
) -> Result<Self, ResponseError>
where
P: Policy + 'static,
{
match Self::authenticate(auth, token, index).await? {
Some(filters) => match data {
Some(data) => Ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => Err(AuthenticationError::IrretrievableState.into()),
},
None => Err(AuthenticationError::InvalidToken.into()),
}
}
async fn auth_token(auth: AuthController, data: Option<D>) -> Result<Self, ResponseError>
where
P: Policy + 'static,
{
match Self::authenticate(auth, String::new(), None).await? {
Some(filters) => match data {
Some(data) => Ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => Err(AuthenticationError::IrretrievableState.into()),
},
None => Err(AuthenticationError::MissingAuthorizationHeader.into()),
}
}
async fn authenticate(
auth: AuthController,
token: String,
index: Option<String>,
) -> Result<Option<AuthFilter>, ResponseError>
where
P: Policy + 'static,
{
tokio::task::spawn_blocking(move || P::authenticate(auth, token.as_ref(), index.as_deref()))
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))
}
}
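
For context, handlers downstream receive their shared state through this extractor. A hypothetical route sketch (the handler name is illustrative; the `GuardedData<ActionPolicy<…>, MeiliSearch>` shape follows the definitions above):

use actix_web::HttpResponse;
use meilisearch_lib::MeiliSearch;

use crate::extractors::authentication::{policies::*, GuardedData};

// Hypothetical handler: the request is authenticated against
// ActionPolicy<{ actions::SEARCH }> before the body runs, and the
// GuardedData derefs to the wrapped MeiliSearch instance.
pub async fn example_search(
    meilisearch: GuardedData<ActionPolicy<{ actions::SEARCH }>, MeiliSearch>,
) -> HttpResponse {
    let _index_filters = meilisearch.filters(); // restrictions carried by the key
    HttpResponse::Ok().finish()
}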
impl<P, D> Deref for GuardedData<P, D> {
type Target = D;
fn deref(&self) -> &Self::Target {
@ -86,98 +83,172 @@ impl<T, D> Deref for GuardedData<T, D> {
}
}
pub trait Policy {
fn authenticate(&self, token: &[u8]) -> bool;
}
#[derive(Debug)]
pub struct Policies {
inner: HashMap<TypeId, Box<dyn Any>>,
}
impl Policies {
pub fn new() -> Self {
Self {
inner: HashMap::new(),
}
}
pub fn insert<S: Policy + 'static>(&mut self, policy: S) {
self.inner.insert(TypeId::of::<S>(), Box::new(policy));
}
pub fn get<S: Policy + 'static>(&self) -> Option<&S> {
self.inner
.get(&TypeId::of::<S>())
.and_then(|p| p.downcast_ref::<S>())
}
pub fn get_mut<S: Policy + 'static>(&mut self) -> Option<&mut S> {
self.inner
.get_mut(&TypeId::of::<S>())
.and_then(|p| p.downcast_mut::<S>())
}
}
impl Default for Policies {
fn default() -> Self {
Self::new()
}
}
pub enum AuthConfig {
NoAuth,
Auth(Policies),
}
impl Default for AuthConfig {
fn default() -> Self {
Self::NoAuth
}
}
impl<P: Policy + 'static, D: 'static + Clone> FromRequest for GuardedData<P, D> {
type Config = AuthConfig;
type Error = ResponseError;
type Future = Ready<Result<Self, Self::Error>>;
type Future = Pin<Box<dyn Future<Output = Result<Self, Self::Error>>>>;
fn from_request(
req: &actix_web::HttpRequest,
_payload: &mut actix_web::dev::Payload,
) -> Self::Future {
match req.app_data::<Self::Config>() {
Some(config) => match config {
AuthConfig::NoAuth => match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
},
AuthConfig::Auth(policies) => match policies.get::<P>() {
Some(policy) => match req.headers().get("x-meili-api-key") {
Some(token) => {
if policy.authenticate(token.as_bytes()) {
match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
}
} else {
let token = token.to_str().unwrap_or("unknown").to_string();
err(AuthenticationError::InvalidToken(token).into())
}
match req.app_data::<AuthController>().cloned() {
Some(auth) => match req
.headers()
.get("Authorization")
.map(|type_token| type_token.to_str().unwrap_or_default().splitn(2, ' '))
{
Some(mut type_token) => match type_token.next() {
Some("Bearer") => {
// TODO: find a less hardcoded way?
let index = req.match_info().get("index_uid");
match type_token.next() {
Some(token) => Box::pin(Self::auth_bearer(
auth,
token.to_string(),
index.map(String::from),
req.app_data::<D>().cloned(),
)),
None => Box::pin(err(AuthenticationError::InvalidToken.into())),
}
None => err(AuthenticationError::MissingAuthorizationHeader.into()),
},
None => err(AuthenticationError::UnknownPolicy.into()),
}
_otherwise => {
Box::pin(err(AuthenticationError::MissingAuthorizationHeader.into()))
}
},
None => Box::pin(Self::auth_token(auth, req.app_data::<D>().cloned())),
},
None => err(AuthenticationError::IrretrievableState.into()),
None => Box::pin(err(AuthenticationError::IrretrievableState.into())),
}
}
}
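
On the wire, the extractor above expects an `Authorization: Bearer <key>` header; a client-side sketch with reqwest (the URL and key are placeholders):

// Sketch of a matching client request.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // bearer_auth sets `Authorization: Bearer MASTER_KEY` on the request.
    let response = reqwest::Client::new()
        .get("http://127.0.0.1:7700/indexes")
        .bearer_auth("MASTER_KEY")
        .send()
        .await?;
    println!("{}", response.status());
    Ok(())
}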
pub trait Policy {
fn authenticate(auth: AuthController, token: &str, index: Option<&str>) -> Option<AuthFilter>;
}
pub mod policies {
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;
use uuid::Uuid;
use crate::extractors::authentication::Policy;
use meilisearch_auth::{Action, AuthController, AuthFilter, SearchRules};
// reexport actions in policies in order to be used in routes configuration.
pub use meilisearch_auth::actions;
fn tenant_token_validation() -> Validation {
let mut validation = Validation::default();
validation.validate_exp = false;
validation.required_spec_claims.remove("exp");
validation.algorithms = vec![Algorithm::HS256, Algorithm::HS384, Algorithm::HS512];
validation
}
/// Extracts the key id used to sign the payload, without performing any validation.
fn extract_key_id(token: &str) -> Option<Uuid> {
let mut validation = tenant_token_validation();
validation.insecure_disable_signature_validation();
let dummy_key = DecodingKey::from_secret(b"secret");
let token_data = decode::<Claims>(token, &dummy_key, &validation).ok()?;
// get token fields without validating it.
let Claims { api_key_uid, .. } = token_data.claims;
Some(api_key_uid)
}
fn is_keys_action(action: u8) -> bool {
use actions::*;
matches!(action, KEYS_GET | KEYS_CREATE | KEYS_UPDATE | KEYS_DELETE)
}
pub struct ActionPolicy<const A: u8>;
impl<const A: u8> Policy for ActionPolicy<A> {
fn authenticate(
auth: AuthController,
token: &str,
index: Option<&str>,
) -> Option<AuthFilter> {
// authenticate if token is the master key.
// master key can only have access to keys routes.
// if master key is None only keys routes are inaccessible.
if auth
.get_master_key()
.map_or_else(|| !is_keys_action(A), |mk| mk == token)
{
return Some(AuthFilter::default());
}
// Tenant token
if let Some(filters) = ActionPolicy::<A>::authenticate_tenant_token(&auth, token, index)
{
return Some(filters);
} else if let Some(action) = Action::from_repr(A) {
// API key
if let Ok(Some(uid)) = auth.get_optional_uid_from_encoded_key(token.as_bytes()) {
if let Ok(true) = auth.is_key_authorized(uid, action, index) {
return auth.get_key_filters(uid, None).ok();
}
}
}
None
}
}
impl<const A: u8> ActionPolicy<A> {
fn authenticate_tenant_token(
auth: &AuthController,
token: &str,
index: Option<&str>,
) -> Option<AuthFilter> {
// Only search action can be accessed by a tenant token.
if A != actions::SEARCH {
return None;
}
let uid = extract_key_id(token)?;
// check if parent key is authorized to do the action.
if auth.is_key_authorized(uid, Action::Search, index).ok()? {
// Check if tenant token is valid.
let key = auth.generate_key(uid)?;
let data = decode::<Claims>(
token,
&DecodingKey::from_secret(key.as_bytes()),
&tenant_token_validation(),
)
.ok()?;
// Check index access if an index restriction is provided.
if let Some(index) = index {
if !data.claims.search_rules.is_index_authorized(index) {
return None;
}
}
// Check if token is expired.
if let Some(exp) = data.claims.exp {
if OffsetDateTime::now_utc().unix_timestamp() > exp {
return None;
}
}
return auth
.get_key_filters(uid, Some(data.claims.search_rules))
.ok();
}
None
}
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct Claims {
search_rules: SearchRules,
exp: Option<i64>,
api_key_uid: Uuid,
}
}
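
The counterpart to the validation above is a tenant token signed with the parent API key; a sketch with jsonwebtoken 8 (claim values are placeholders, and the camelCase names mirror the `Claims` struct):

use jsonwebtoken::{encode, EncodingKey, Header};
use serde_json::json;

fn main() -> Result<(), jsonwebtoken::errors::Error> {
    // Header::default() is HS256, one of the algorithms accepted above.
    let claims = json!({
        "searchRules": { "products": { "filter": "published = true" } },
        "exp": 1_700_000_000i64,
        "apiKeyUid": "76cf8b87-fd12-4688-ad34-260d930ca4f4",
    });
    let token = encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(b"parent-api-key"),
    )?;
    println!("{token}");
    Ok(())
}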

View File

@ -1,3 +1,4 @@
pub mod payload;
#[macro_use]
pub mod authentication;
pub mod sequential_extractor;

View File

@ -28,8 +28,6 @@ impl Default for PayloadConfig {
}
impl FromRequest for Payload {
type Config = PayloadConfig;
type Error = PayloadError;
type Future = Ready<Result<Payload, Self::Error>>;
@ -39,7 +37,7 @@ impl FromRequest for Payload {
let limit = req
.app_data::<PayloadConfig>()
.map(|c| c.limit)
.unwrap_or(Self::Config::default().limit);
.unwrap_or(PayloadConfig::default().limit);
ready(Ok(Payload {
payload: payload.take(),
limit,

View File

@ -0,0 +1,148 @@
#![allow(non_snake_case)]
use std::{future::Future, pin::Pin, task::Poll};
use actix_web::{dev::Payload, FromRequest, Handler, HttpRequest};
use pin_project_lite::pin_project;
/// `SeqHandler` is an actix `Handler` that enforces that extractor errors are returned in the
/// same order as they are defined in the wrapped handler. This is needed because, by default, actix
/// resolves the extractors concurrently, whereas we always need the authentication extractor to
/// throw first.
#[derive(Clone)]
pub struct SeqHandler<H>(pub H);
pub struct SeqFromRequest<T>(T);
/// This macro implements `FromRequest` for handlers of arbitrary arity, except arity one, which is
/// not needed anyway.
macro_rules! gen_seq {
($ty:ident; $($T:ident)+) => {
pin_project! {
pub struct $ty<$($T: FromRequest), +> {
$(
#[pin]
$T: ExtractFuture<$T::Future, $T, $T::Error>,
)+
}
}
impl<$($T: FromRequest), +> Future for $ty<$($T),+> {
type Output = Result<SeqFromRequest<($($T),+)>, actix_web::Error>;
fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Self::Output> {
let mut this = self.project();
let mut count_fut = 0;
let mut count_finished = 0;
$(
count_fut += 1;
match this.$T.as_mut().project() {
ExtractProj::Future { fut } => match fut.poll(cx) {
Poll::Ready(Ok(output)) => {
count_finished += 1;
let _ = this
.$T
.as_mut()
.project_replace(ExtractFuture::Done { output });
}
Poll::Ready(Err(error)) => {
count_finished += 1;
let _ = this
.$T
.as_mut()
.project_replace(ExtractFuture::Error { error });
}
Poll::Pending => (),
},
ExtractProj::Done { .. } => count_finished += 1,
ExtractProj::Error { .. } => {
// short circuit if all previous are finished and we had an error.
if count_finished == count_fut {
match this.$T.project_replace(ExtractFuture::Empty) {
ExtractReplaceProj::Error { error } => {
return Poll::Ready(Err(error.into()))
}
_ => unreachable!("Invalid future state"),
}
} else {
count_finished += 1;
}
}
ExtractProj::Empty => unreachable!("From request polled after being finished. {}", stringify!($T)),
}
)+
if count_fut == count_finished {
let result = (
$(
match this.$T.project_replace(ExtractFuture::Empty) {
ExtractReplaceProj::Done { output } => output,
ExtractReplaceProj::Error { error } => return Poll::Ready(Err(error.into())),
_ => unreachable!("Invalid future state"),
},
)+
);
Poll::Ready(Ok(SeqFromRequest(result)))
} else {
Poll::Pending
}
}
}
impl<$($T: FromRequest,)+> FromRequest for SeqFromRequest<($($T,)+)> {
type Error = actix_web::Error;
type Future = $ty<$($T),+>;
fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future {
$ty {
$(
$T: ExtractFuture::Future {
fut: $T::from_request(req, payload),
},
)+
}
}
}
impl<Han, $($T: FromRequest),+> Handler<SeqFromRequest<($($T),+)>> for SeqHandler<Han>
where
Han: Handler<($($T),+)>,
{
type Output = Han::Output;
type Future = Han::Future;
fn call(&self, args: SeqFromRequest<($($T),+)>) -> Self::Future {
self.0.call(args.0)
}
}
};
}
// Not working for a single argument, but then, it is not really necessary.
// gen_seq! { SeqFromRequestFut1; A }
gen_seq! { SeqFromRequestFut2; A B }
gen_seq! { SeqFromRequestFut3; A B C }
gen_seq! { SeqFromRequestFut4; A B C D }
gen_seq! { SeqFromRequestFut5; A B C D E }
gen_seq! { SeqFromRequestFut6; A B C D E F }
pin_project! {
#[project = ExtractProj]
#[project_replace = ExtractReplaceProj]
enum ExtractFuture<Fut, Res, Err> {
Future {
#[pin]
fut: Fut,
},
Done {
output: Res,
},
Error {
error: Err,
},
Empty,
}
}
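
Routes opt in by wrapping their handler; a hypothetical registration sketch (the handler and path are illustrative):

use actix_web::{web, HttpResponse};
use std::collections::HashMap;

use crate::extractors::sequential_extractor::SeqHandler;

// Hypothetical two-extractor handler: SeqHandler guarantees the query
// extractor's error (or an authentication error, in the real routes)
// surfaces before the payload extractor's.
async fn example(
    _params: web::Query<HashMap<String, String>>,
    _body: web::Bytes,
) -> HttpResponse {
    HttpResponse::Ok().finish()
}

pub fn configure(cfg: &mut web::ServiceConfig) {
    cfg.service(web::resource("/example").route(web::post().to(SeqHandler(example))));
}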

View File

@ -1,16 +0,0 @@
use walkdir::WalkDir;
pub trait EnvSizer {
fn size(&self) -> u64;
}
impl EnvSizer for heed::Env {
fn size(&self) -> u64 {
WalkDir::new(self.path())
.into_iter()
.filter_map(|entry| entry.ok())
.filter_map(|entry| entry.metadata().ok())
.filter(|metadata| metadata.is_file())
.fold(0, |acc, m| acc + m.len())
}
}

View File

@ -1,3 +0,0 @@
mod env;
pub use env::EnvSizer;

View File

@ -1,17 +1,22 @@
#![allow(rustdoc::private_intra_doc_links)]
#[macro_use]
pub mod error;
pub mod analytics;
pub mod task;
#[macro_use]
pub mod extractors;
pub mod analytics;
pub mod helpers;
pub mod option;
pub mod routes;
use std::sync::Arc;
#[cfg(feature = "metrics")]
pub mod metrics;
#[cfg(feature = "metrics")]
pub mod route_metrics;
use std::sync::{atomic::AtomicBool, Arc};
use std::time::Duration;
use crate::error::MeilisearchHttpError;
use crate::extractors::authentication::AuthConfig;
use actix_web::error::JsonPayloadError;
use analytics::Analytics;
use error::PayloadError;
@ -20,45 +25,33 @@ pub use option::Opt;
use actix_web::{web, HttpRequest};
use extractors::authentication::policies::*;
use extractors::payload::PayloadConfig;
use meilisearch_auth::AuthController;
use meilisearch_lib::MeiliSearch;
use sha2::Digest;
#[derive(Clone)]
pub struct ApiKeys {
pub public: Option<String>,
pub private: Option<String>,
pub master: Option<String>,
}
impl ApiKeys {
pub fn generate_missing_api_keys(&mut self) {
if let Some(master_key) = &self.master {
if self.private.is_none() {
let key = format!("{}-private", master_key);
let sha = sha2::Sha256::digest(key.as_bytes());
self.private = Some(format!("{:x}", sha));
}
if self.public.is_none() {
let key = format!("{}-public", master_key);
let sha = sha2::Sha256::digest(key.as_bytes());
self.public = Some(format!("{:x}", sha));
}
}
}
}
pub static AUTOBATCHING_ENABLED: AtomicBool = AtomicBool::new(false);
pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<MeiliSearch> {
let mut meilisearch = MeiliSearch::builder();
// disable autobatching?
AUTOBATCHING_ENABLED.store(
!opt.scheduler_options.disable_auto_batching,
std::sync::atomic::Ordering::Relaxed,
);
meilisearch
.set_max_index_size(opt.max_index_size.get_bytes() as usize)
.set_max_update_store_size(opt.max_udb_size.get_bytes() as usize)
.set_max_task_store_size(opt.max_task_db_size.get_bytes() as usize)
// snapshot
.set_ignore_missing_snapshot(opt.ignore_missing_snapshot)
.set_ignore_snapshot_if_db_exists(opt.ignore_snapshot_if_db_exists)
.set_dump_dst(opt.dumps_dir.clone())
.set_snapshot_interval(Duration::from_secs(opt.snapshot_interval_sec))
.set_snapshot_dir(opt.snapshot_dir.clone());
.set_snapshot_dir(opt.snapshot_dir.clone())
// dump
.set_ignore_missing_dump(opt.ignore_missing_dump)
.set_ignore_dump_if_db_exists(opt.ignore_dump_if_db_exists)
.set_dump_dst(opt.dumps_dir.clone());
if let Some(ref path) = opt.import_snapshot {
meilisearch.set_import_snapshot(path.clone());
@ -72,18 +65,24 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<MeiliSearch> {
meilisearch.set_schedule_snapshot();
}
meilisearch.build(opt.db_path.clone(), opt.indexer_options.clone())
meilisearch.build(
opt.db_path.clone(),
opt.indexer_options.clone(),
opt.scheduler_options.clone(),
)
}
pub fn configure_data(
config: &mut web::ServiceConfig,
data: MeiliSearch,
auth: AuthController,
opt: &Opt,
analytics: Arc<dyn Analytics>,
) {
let http_payload_size_limit = opt.http_payload_size_limit.get_bytes() as usize;
config
.app_data(data)
.app_data(auth)
.app_data(web::Data::from(analytics))
.app_data(
web::JsonConfig::default()
@ -109,37 +108,10 @@ pub fn configure_data(
);
}
pub fn configure_auth(config: &mut web::ServiceConfig, opts: &Opt) {
let mut keys = ApiKeys {
master: opts.master_key.clone(),
private: None,
public: None,
};
keys.generate_missing_api_keys();
let auth_config = if let Some(ref master_key) = keys.master {
let private_key = keys.private.as_ref().unwrap();
let public_key = keys.public.as_ref().unwrap();
let mut policies = init_policies!(Public, Private, Admin);
create_users!(
policies,
master_key.as_bytes() => { Admin, Private, Public },
private_key.as_bytes() => { Private, Public },
public_key.as_bytes() => { Public }
);
AuthConfig::Auth(policies)
} else {
AuthConfig::NoAuth
};
config.app_data(auth_config).app_data(keys);
}
#[cfg(feature = "mini-dashboard")]
pub fn dashboard(config: &mut web::ServiceConfig, enable_frontend: bool) {
use actix_web::HttpResponse;
use actix_web_static_files::Resource;
use static_files::Resource;
mod generated {
include!(concat!(env!("OUT_DIR"), "/generated.rs"));
@ -154,13 +126,13 @@ pub fn dashboard(config: &mut web::ServiceConfig, enable_frontend: bool) {
} = resource;
// Redirect index.html to /
if path == "index.html" {
config.service(web::resource("/").route(
web::get().to(move || HttpResponse::Ok().content_type(mime_type).body(data)),
));
config.service(web::resource("/").route(web::get().to(move || async move {
HttpResponse::Ok().content_type(mime_type).body(data)
})));
} else {
config.service(web::resource(path).route(
web::get().to(move || HttpResponse::Ok().content_type(mime_type).body(data)),
));
config.service(web::resource(path).route(web::get().to(move || async move {
HttpResponse::Ok().content_type(mime_type).body(data)
})));
}
}
} else {
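
The closure change above follows actix-web 4's handler contract: handlers must return a future, so even a constant response now needs an `async move` block. A minimal sketch:

use actix_web::{web, App, HttpResponse};

fn main() {
    // In actix-web 4, `to` takes a handler returning a future; even a
    // fixed body is produced inside an `async move` block.
    let _app = App::new().service(web::resource("/").route(web::get().to(
        move || async move { HttpResponse::Ok().content_type("text/plain").body("ok") },
    )));
}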
@ -173,26 +145,44 @@ pub fn dashboard(config: &mut web::ServiceConfig, _enable_frontend: bool) {
config.service(web::resource("/").route(web::get().to(routes::running)));
}
#[cfg(feature = "metrics")]
pub fn configure_metrics_route(config: &mut web::ServiceConfig, enable_metrics_route: bool) {
if enable_metrics_route {
config.service(
web::resource("/metrics").route(web::get().to(crate::route_metrics::get_metrics)),
);
}
}
#[macro_export]
macro_rules! create_app {
($data:expr, $enable_frontend:expr, $opt:expr, $analytics:expr) => {{
($data:expr, $auth:expr, $enable_frontend:expr, $opt:expr, $analytics:expr) => {{
use actix_cors::Cors;
use actix_web::dev::Service;
use actix_web::middleware::Condition;
use actix_web::middleware::TrailingSlash;
use actix_web::App;
use actix_web::{middleware, web};
use meilisearch_http::error::{MeilisearchHttpError, ResponseError};
use meilisearch_http::error::MeilisearchHttpError;
use meilisearch_http::routes;
use meilisearch_http::{configure_auth, configure_data, dashboard};
use meilisearch_http::{configure_data, dashboard};
#[cfg(feature = "metrics")]
use meilisearch_http::{configure_metrics_route, metrics, route_metrics};
use meilisearch_types::error::ResponseError;
App::new()
.configure(|s| configure_data(s, $data.clone(), &$opt, $analytics))
.configure(|s| configure_auth(s, &$opt))
let app = App::new()
.configure(|s| configure_data(s, $data.clone(), $auth.clone(), &$opt, $analytics))
.configure(routes::configure)
.configure(|s| dashboard(s, $enable_frontend))
.configure(|s| dashboard(s, $enable_frontend));
#[cfg(feature = "metrics")]
let app = app.configure(|s| configure_metrics_route(s, $opt.enable_metrics_route));
let app = app
.wrap(
Cors::default()
.send_wildcard()
.allowed_headers(vec!["content-type", "x-meili-api-key"])
.allow_any_header()
.allow_any_origin()
.allow_any_method()
.max_age(86_400), // 24h
@ -201,6 +191,14 @@ macro_rules! create_app {
.wrap(middleware::Compress::default())
.wrap(middleware::NormalizePath::new(
middleware::TrailingSlash::Trim,
))
));
#[cfg(feature = "metrics")]
let app = app.wrap(Condition::new(
$opt.enable_metrics_route,
route_metrics::RouteMetrics,
));
app
}};
}

View File

@ -1,16 +1,17 @@
use std::env;
use std::sync::Arc;
use actix_web::http::KeepAlive;
use actix_web::HttpServer;
use clap::Parser;
use meilisearch_auth::AuthController;
use meilisearch_http::analytics;
use meilisearch_http::analytics::Analytics;
use meilisearch_http::{create_app, setup_meilisearch, Opt};
use meilisearch_lib::MeiliSearch;
use structopt::StructOpt;
#[cfg(target_os = "linux")]
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;
/// Does all the setup before Meilisearch is launched
fn setup(opt: &Opt) -> anyhow::Result<()> {
@ -28,7 +29,7 @@ fn setup(opt: &Opt) -> anyhow::Result<()> {
#[actix_web::main]
async fn main() -> anyhow::Result<()> {
let opt = Opt::from_args();
let opt = Opt::parse();
setup(&opt)?;
@ -46,6 +47,8 @@ async fn main() -> anyhow::Result<()> {
let meilisearch = setup_meilisearch(&opt)?;
let auth_controller = AuthController::new(&opt.db_path, &opt.master_key)?;
#[cfg(all(not(debug_assertions), feature = "analytics"))]
let (analytics, user) = if !opt.no_analytics {
analytics::SegmentAnalytics::new(&opt, &meilisearch).await
@ -57,22 +60,31 @@ async fn main() -> anyhow::Result<()> {
print_launch_resume(&opt, &user);
run_http(meilisearch, opt, analytics).await?;
run_http(meilisearch, auth_controller, opt, analytics).await?;
Ok(())
}
async fn run_http(
data: MeiliSearch,
auth_controller: AuthController,
opt: Opt,
analytics: Arc<dyn Analytics>,
) -> anyhow::Result<()> {
let _enable_dashboard = &opt.env == "development";
let opt_clone = opt.clone();
let http_server =
HttpServer::new(move || create_app!(data, _enable_dashboard, opt_clone, analytics.clone()))
// Disabling signals allows the server to terminate immediately when a user enters CTRL-C
.disable_signals();
let http_server = HttpServer::new(move || {
create_app!(
data,
auth_controller,
_enable_dashboard,
opt_clone,
analytics.clone()
)
})
// Disabling signals allows the server to terminate immediately when a user enters CTRL-C
.disable_signals()
.keep_alive(KeepAlive::Os);
if let Some(config) = opt.get_ssl_config()? {
http_server
@ -88,22 +100,26 @@ async fn run_http(
pub fn print_launch_resume(opt: &Opt, user: &str) {
let commit_sha = option_env!("VERGEN_GIT_SHA").unwrap_or("unknown");
let commit_date = option_env!("VERGEN_GIT_COMMIT_TIMESTAMP").unwrap_or("unknown");
let protocol = if opt.ssl_cert_path.is_some() && opt.ssl_key_path.is_some() {
"https"
} else {
"http"
};
let ascii_name = r#"
888b d888 d8b 888 d8b .d8888b. 888
8888b d8888 Y8P 888 Y8P d88P Y88b 888
88888b.d88888 888 Y88b. 888
888Y88888P888 .d88b. 888 888 888 "Y888b. .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 "Y88b. d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "888 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 Y88b d88P Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 "Y8888P" "Y8888 "Y888888 888 "Y8888P 888 888
888b d888 d8b 888 d8b 888
8888b d8888 Y8P 888 Y8P 888
88888b.d88888 888 888
888Y88888P888 .d88b. 888 888 888 .d8888b .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 88K d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "Y8888b. 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 X88 Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 88888P' "Y8888 "Y888888 888 "Y8888P 888 888
"#;
eprintln!("{}", ascii_name);
eprintln!("Database path:\t\t{:?}", opt.db_path);
eprintln!("Server listening on:\t\"http://{}\"", opt.http_addr);
eprintln!("Server listening on:\t\"{}://{}\"", protocol, opt.http_addr);
eprintln!("Environment:\t\t{:?}", opt.env);
eprintln!("Commit SHA:\t\t{:?}", commit_sha.to_string());
eprintln!("Commit date:\t\t{:?}", commit_date.to_string());
@ -114,17 +130,17 @@ pub fn print_launch_resume(opt: &Opt, user: &str) {
#[cfg(all(not(debug_assertions), feature = "analytics"))]
{
if opt.no_analytics {
eprintln!("Anonymous telemetry:\t\"Disabled\"");
} else {
if !opt.no_analytics {
eprintln!(
"
Thank you for using MeiliSearch!
Thank you for using Meilisearch!
We collect anonymized analytics to improve our product and your experience. To learn more, including how to turn off analytics, visit our dedicated documentation page: https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html
Anonymous telemetry:\t\"Enabled\""
);
} else {
eprintln!("Anonymous telemetry:\t\"Disabled\"");
}
}
@ -135,7 +151,7 @@ Anonymous telemetry:\t\"Enabled\""
eprintln!();
if opt.master_key.is_some() {
eprintln!("A Master Key has been set. Requests to MeiliSearch won't be authorized unless you provide an authentication key.");
eprintln!("A Master Key has been set. Requests to Meilisearch won't be authorized unless you provide an authentication key.");
} else {
eprintln!("No master key found; The server will accept unidentified requests. \
If you need some protection in development mode, please export a key: export MEILI_MASTER_KEY=xxx");
@ -144,6 +160,6 @@ Anonymous telemetry:\t\"Enabled\""
eprintln!();
eprintln!("Documentation:\t\thttps://docs.meilisearch.com");
eprintln!("Source code:\t\thttps://github.com/meilisearch/meilisearch");
eprintln!("Contact:\t\thttps://docs.meilisearch.com/resources/contact.html or bonjour@meilisearch.com");
eprintln!("Contact:\t\thttps://docs.meilisearch.com/resources/contact.html");
eprintln!();
}

View File

@ -0,0 +1,42 @@
use lazy_static::lazy_static;
use prometheus::{
opts, register_histogram_vec, register_int_counter_vec, register_int_gauge,
register_int_gauge_vec,
};
use prometheus::{HistogramVec, IntCounterVec, IntGauge, IntGaugeVec};
const HTTP_RESPONSE_TIME_CUSTOM_BUCKETS: &[f64; 14] = &[
0.0005, 0.0008, 0.00085, 0.0009, 0.00095, 0.001, 0.00105, 0.0011, 0.00115, 0.0012, 0.0015,
0.002, 0.003, 1.0,
];
lazy_static! {
pub static ref HTTP_REQUESTS_TOTAL: IntCounterVec = register_int_counter_vec!(
opts!("http_requests_total", "HTTP requests total"),
&["method", "path"]
)
.expect("Can't create a metric");
pub static ref MEILISEARCH_DB_SIZE_BYTES: IntGauge = register_int_gauge!(opts!(
"meilisearch_db_size_bytes",
"Meilisearch Db Size In Bytes"
))
.expect("Can't create a metric");
pub static ref MEILISEARCH_INDEX_COUNT: IntGauge =
register_int_gauge!(opts!("meilisearch_index_count", "Meilisearch Index Count"))
.expect("Can't create a metric");
pub static ref MEILISEARCH_INDEX_DOCS_COUNT: IntGaugeVec = register_int_gauge_vec!(
opts!(
"meilisearch_index_docs_count",
"Meilisearch Index Docs Count"
),
&["index"]
)
.expect("Can't create a metric");
pub static ref HTTP_RESPONSE_TIME_SECONDS: HistogramVec = register_histogram_vec!(
"http_response_time_seconds",
"HTTP response times",
&["method", "path"],
HTTP_RESPONSE_TIME_CUSTOM_BUCKETS.to_vec()
)
.expect("Can't create a metric");
}
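
These statics are recorded from request middleware and rendered on the /metrics route; a sketch of both halves (label values hypothetical, and the `crate::metrics` path assumes this module's location):

use prometheus::{Encoder, TextEncoder};

fn main() {
    // Record one request and one timing against the vectors defined above.
    crate::metrics::HTTP_REQUESTS_TOTAL
        .with_label_values(&["GET", "/search"])
        .inc();
    crate::metrics::HTTP_RESPONSE_TIME_SECONDS
        .with_label_values(&["GET", "/search"])
        .observe(0.0012);

    // The register_* macros use the default registry, so gather() sees them.
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&prometheus::gather(), &mut buffer)
        .unwrap();
    println!("{}", String::from_utf8(buffer).unwrap());
}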

View File

@ -4,134 +4,174 @@ use std::path::PathBuf;
use std::sync::Arc;
use byte_unit::Byte;
use meilisearch_lib::options::IndexerOpts;
use rustls::internal::pemfile::{certs, pkcs8_private_keys, rsa_private_keys};
use clap::Parser;
use meilisearch_lib::options::{IndexerOpts, SchedulerConfig};
use rustls::{
AllowAnyAnonymousOrAuthenticatedClient, AllowAnyAuthenticatedClient, NoClientAuth,
server::{
AllowAnyAnonymousOrAuthenticatedClient, AllowAnyAuthenticatedClient,
ServerSessionMemoryCache,
},
RootCertStore,
};
use structopt::StructOpt;
use rustls_pemfile::{certs, pkcs8_private_keys, rsa_private_keys};
use serde::Serialize;
const POSSIBLE_ENV: [&str; 2] = ["development", "production"];
#[derive(Debug, Clone, StructOpt)]
#[derive(Debug, Clone, Parser, Serialize)]
#[clap(version)]
pub struct Opt {
/// The destination where the database must be created.
#[structopt(long, env = "MEILI_DB_PATH", default_value = "./data.ms")]
#[clap(long, env = "MEILI_DB_PATH", default_value = "./data.ms")]
pub db_path: PathBuf,
/// The address on which the http server will listen.
#[structopt(long, env = "MEILI_HTTP_ADDR", default_value = "127.0.0.1:7700")]
#[clap(long, env = "MEILI_HTTP_ADDR", default_value = "127.0.0.1:7700")]
pub http_addr: String,
/// The master key allowing you to do everything on the server.
#[structopt(long, env = "MEILI_MASTER_KEY")]
#[serde(skip)]
#[clap(long, env = "MEILI_MASTER_KEY")]
pub master_key: Option<String>,
/// This environment variable must be set to `production` if you are running in production.
/// If the server is running in development mode more logs will be displayed,
/// and the master key can be avoided which implies that there is no security on the updates routes.
/// This is useful to debug when integrating the engine with another service.
#[structopt(long, env = "MEILI_ENV", default_value = "development", possible_values = &POSSIBLE_ENV)]
#[clap(long, env = "MEILI_ENV", default_value = "development", possible_values = &POSSIBLE_ENV)]
pub env: String,
/// Do not send analytics to Meili.
#[cfg(all(not(debug_assertions), feature = "analytics"))]
#[structopt(long, env = "MEILI_NO_ANALYTICS")]
#[serde(skip)] // we can't send true
#[clap(long, env = "MEILI_NO_ANALYTICS")]
pub no_analytics: bool,
/// The maximum size, in bytes, of the main lmdb database directory
#[structopt(long, env = "MEILI_MAX_INDEX_SIZE", default_value = "100 GiB")]
#[clap(long, env = "MEILI_MAX_INDEX_SIZE", default_value = "100 GiB")]
pub max_index_size: Byte,
/// The maximum size, in bytes, of the update lmdb database directory
#[structopt(long, env = "MEILI_MAX_UDB_SIZE", default_value = "100 GiB")]
pub max_udb_size: Byte,
#[clap(long, env = "MEILI_MAX_TASK_DB_SIZE", default_value = "100 GiB")]
pub max_task_db_size: Byte,
/// The maximum size, in bytes, of accepted JSON payloads
#[structopt(long, env = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT", default_value = "100 MB")]
#[clap(long, env = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT", default_value = "100 MB")]
pub http_payload_size_limit: Byte,
/// Read server certificates from CERTFILE.
/// This should contain PEM-format certificates
/// in the right order (the first certificate should
/// certify KEYFILE, the last should be a root CA).
#[structopt(long, env = "MEILI_SSL_CERT_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_CERT_PATH", parse(from_os_str))]
pub ssl_cert_path: Option<PathBuf>,
/// Read private key from KEYFILE. This should be a RSA
/// private key or PKCS8-encoded private key, in PEM format.
#[structopt(long, env = "MEILI_SSL_KEY_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_KEY_PATH", parse(from_os_str))]
pub ssl_key_path: Option<PathBuf>,
/// Enable client authentication, and accept certificates
/// signed by those roots provided in CERTFILE.
#[structopt(long, env = "MEILI_SSL_AUTH_PATH", parse(from_os_str))]
#[clap(long, env = "MEILI_SSL_AUTH_PATH", parse(from_os_str))]
#[serde(skip)]
pub ssl_auth_path: Option<PathBuf>,
/// Read DER-encoded OCSP response from OCSPFILE and staple to certificate.
/// Optional
#[structopt(long, env = "MEILI_SSL_OCSP_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_OCSP_PATH", parse(from_os_str))]
pub ssl_ocsp_path: Option<PathBuf>,
/// Send a fatal alert if the client does not complete client authentication.
#[structopt(long, env = "MEILI_SSL_REQUIRE_AUTH")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_REQUIRE_AUTH")]
pub ssl_require_auth: bool,
/// SSL support session resumption
#[structopt(long, env = "MEILI_SSL_RESUMPTION")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_RESUMPTION")]
pub ssl_resumption: bool,
/// SSL support tickets.
#[structopt(long, env = "MEILI_SSL_TICKETS")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_TICKETS")]
pub ssl_tickets: bool,
/// Defines the path of the snapshot file to import.
/// This option will, by default, stop the process if a database already exists or if no snapshot exists at
/// the given path. If this option is not specified, no snapshot is imported.
#[structopt(long)]
#[clap(long)]
pub import_snapshot: Option<PathBuf>,
/// The engine will ignore a missing snapshot and not return an error in such a case.
#[structopt(long, requires = "import-snapshot")]
#[clap(long, requires = "import-snapshot")]
pub ignore_missing_snapshot: bool,
/// The engine will skip the snapshot import and not return an error in such a case.
#[structopt(long, requires = "import-snapshot")]
#[clap(long, requires = "import-snapshot")]
pub ignore_snapshot_if_db_exists: bool,
/// Defines the directory path where meilisearch will create a snapshot every snapshot_interval_sec.
#[structopt(long, env = "MEILI_SNAPSHOT_DIR", default_value = "snapshots/")]
#[clap(long, env = "MEILI_SNAPSHOT_DIR", default_value = "snapshots/")]
pub snapshot_dir: PathBuf,
/// Activate snapshot scheduling.
#[structopt(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
#[clap(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
pub schedule_snapshot: bool,
/// Defines the time interval, in seconds, between each snapshot creation.
#[structopt(long, env = "MEILI_SNAPSHOT_INTERVAL_SEC", default_value = "86400")] // 24h
#[clap(long, env = "MEILI_SNAPSHOT_INTERVAL_SEC", default_value = "86400")] // 24h
pub snapshot_interval_sec: u64,
/// Folder where dumps are created when the dump route is called.
#[structopt(long, env = "MEILI_DUMPS_DIR", default_value = "dumps/")]
pub dumps_dir: PathBuf,
/// Import a dump from the specified path, must be a `.dump` file.
#[structopt(long, conflicts_with = "import-snapshot")]
#[clap(long, conflicts_with = "import-snapshot")]
pub import_dump: Option<PathBuf>,
/// If the dump doesn't exist, load or create the database specified by `db-path` instead.
#[clap(long, requires = "import-dump")]
pub ignore_missing_dump: bool,
/// Ignore the dump if a database already exists, and load that database instead.
#[clap(long, requires = "import-dump")]
pub ignore_dump_if_db_exists: bool,
/// Folder where dumps are created when the dump route is called.
#[clap(long, env = "MEILI_DUMPS_DIR", default_value = "dumps/")]
pub dumps_dir: PathBuf,
/// Set the log level
#[structopt(long, env = "MEILI_LOG_LEVEL", default_value = "info")]
#[clap(long, env = "MEILI_LOG_LEVEL", default_value = "info")]
pub log_level: String,
#[structopt(skip)]
/// Enables Prometheus metrics and the /metrics route.
#[cfg(feature = "metrics")]
#[clap(long, env = "MEILI_ENABLE_METRICS_ROUTE")]
pub enable_metrics_route: bool,
#[serde(flatten)]
#[clap(flatten)]
pub indexer_options: IndexerOpts,
#[serde(flatten)]
#[clap(flatten)]
pub scheduler_options: SchedulerConfig,
}
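The paired attributes above show the structopt → clap 3 migration: each option keeps its long flag, its MEILI_* environment fallback, and its default value, with clap now the single source of truth. A minimal, self-contained sketch of the same pattern (hypothetical field names, clap 3 with the `env` cargo feature enabled):

use clap::Parser;

#[derive(Debug, Parser)]
struct DemoOpt {
    /// Hypothetical flag for illustration; resolution order is the
    /// command line, then the environment variable, then the default.
    #[clap(long, env = "MEILI_DEMO_ENV", default_value = "development")]
    env: String,

    /// Boolean flags stay false unless the flag or the variable is set.
    #[clap(long, env = "MEILI_DEMO_NO_ANALYTICS")]
    no_analytics: bool,
}

fn main() {
    // Command-line arguments win over environment variables.
    let opt = DemoOpt::parse();
    println!("{:?}", opt);
}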
impl Opt {
/// Whether analytics should be enabled or not.
#[cfg(all(not(debug_assertions), feature = "analytics"))]
pub fn analytics(&self) -> bool {
!self.no_analytics
}
pub fn get_ssl_config(&self) -> anyhow::Result<Option<rustls::ServerConfig>> {
if let (Some(cert_path), Some(key_path)) = (&self.ssl_cert_path, &self.ssl_key_path) {
let client_auth = match &self.ssl_auth_path {
let config = rustls::ServerConfig::builder().with_safe_defaults();
let config = match &self.ssl_auth_path {
Some(auth_path) => {
let roots = load_certs(auth_path.to_path_buf())?;
let mut client_auth_roots = RootCertStore::empty();
@ -139,30 +179,32 @@ impl Opt {
client_auth_roots.add(&root).unwrap();
}
if self.ssl_require_auth {
AllowAnyAuthenticatedClient::new(client_auth_roots)
let verifier = AllowAnyAuthenticatedClient::new(client_auth_roots);
config.with_client_cert_verifier(verifier)
} else {
AllowAnyAnonymousOrAuthenticatedClient::new(client_auth_roots)
let verifier =
AllowAnyAnonymousOrAuthenticatedClient::new(client_auth_roots);
config.with_client_cert_verifier(verifier)
}
}
None => NoClientAuth::new(),
None => config.with_no_client_auth(),
};
let mut config = rustls::ServerConfig::new(client_auth);
config.key_log = Arc::new(rustls::KeyLogFile::new());
let certs = load_certs(cert_path.to_path_buf())?;
let privkey = load_private_key(key_path.to_path_buf())?;
let ocsp = load_ocsp(&self.ssl_ocsp_path)?;
config
.set_single_cert_with_ocsp_and_sct(certs, privkey, ocsp, vec![])
let mut config = config
.with_single_cert_with_ocsp_and_sct(certs, privkey, ocsp, vec![])
.map_err(|_| anyhow::anyhow!("bad certificates/private key"))?;
config.key_log = Arc::new(rustls::KeyLogFile::new());
if self.ssl_resumption {
config.set_persistence(rustls::ServerSessionMemoryCache::new(256));
config.session_storage = ServerSessionMemoryCache::new(256);
}
if self.ssl_tickets {
config.ticketer = rustls::Ticketer::new();
config.ticketer = rustls::Ticketer::new().unwrap();
}
Ok(Some(config))
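The surrounding hunks capture the rustls 0.19 → 0.20 migration: the old mutable `ServerConfig::new(client_auth)` plus setters becomes a typed builder that is finalized with the certificate chain. A condensed sketch of the new flow, assuming rustls 0.20 and certificates loaded as in load_certs/load_private_key below:

use std::sync::Arc;

fn build_tls_config(
    certs: Vec<rustls::Certificate>,
    key: rustls::PrivateKey,
) -> anyhow::Result<rustls::ServerConfig> {
    let mut config = rustls::ServerConfig::builder()
        .with_safe_defaults() // protocol versions, cipher suites, kx groups
        .with_no_client_auth() // or .with_client_cert_verifier(verifier)
        .with_single_cert(certs, key) // the PR uses the *_with_ocsp_and_sct variant
        .map_err(|_| anyhow::anyhow!("bad certificates/private key"))?;
    // Key logging is now configured after the builder finishes.
    config.key_log = Arc::new(rustls::KeyLogFile::new());
    Ok(config)
}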
@ -176,7 +218,9 @@ fn load_certs(filename: PathBuf) -> anyhow::Result<Vec<rustls::Certificate>> {
let certfile =
fs::File::open(filename).map_err(|_| anyhow::anyhow!("cannot open certificate file"))?;
let mut reader = BufReader::new(certfile);
certs(&mut reader).map_err(|_| anyhow::anyhow!("cannot read certificate file"))
certs(&mut reader)
.map(|certs| certs.into_iter().map(rustls::Certificate).collect())
.map_err(|_| anyhow::anyhow!("cannot read certificate file"))
}
fn load_private_key(filename: PathBuf) -> anyhow::Result<rustls::PrivateKey> {
@ -201,10 +245,10 @@ fn load_private_key(filename: PathBuf) -> anyhow::Result<rustls::PrivateKey> {
// prefer to load pkcs8 keys
if !pkcs8_keys.is_empty() {
Ok(pkcs8_keys[0].clone())
Ok(rustls::PrivateKey(pkcs8_keys[0].clone()))
} else {
assert!(!rsa_keys.is_empty());
Ok(rsa_keys[0].clone())
Ok(rustls::PrivateKey(rsa_keys[0].clone()))
}
}
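The extra `map(rustls::Certificate)` and `rustls::PrivateKey(...)` wrapping exists because rustls-pemfile 1.x returns raw DER bytes (`Vec<Vec<u8>>`) while rustls 0.20 expects its newtype wrappers. A minimal sketch of the same idea:

use std::{fs, io::BufReader, path::PathBuf};

fn read_certs(path: PathBuf) -> anyhow::Result<Vec<rustls::Certificate>> {
    let mut reader = BufReader::new(fs::File::open(path)?);
    // rustls-pemfile yields plain DER bytes; rustls wants them newtyped.
    Ok(rustls_pemfile::certs(&mut reader)?
        .into_iter()
        .map(rustls::Certificate)
        .collect())
}

fn read_pkcs8_key(path: PathBuf) -> anyhow::Result<rustls::PrivateKey> {
    let mut reader = BufReader::new(fs::File::open(path)?);
    rustls_pemfile::pkcs8_private_keys(&mut reader)?
        .into_iter()
        .next()
        .map(rustls::PrivateKey)
        .ok_or_else(|| anyhow::anyhow!("no PKCS8 key found"))
}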
@ -220,3 +264,13 @@ fn load_ocsp(filename: &Option<PathBuf>) -> anyhow::Result<Vec<u8>> {
Ok(ret)
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_valid_opt() {
assert!(Opt::try_parse_from(Some("")).is_ok());
}
}


@ -0,0 +1,112 @@
use std::future::{ready, Ready};
use actix_web::http::header;
use actix_web::HttpResponse;
use actix_web::{
dev::{self, Service, ServiceRequest, ServiceResponse, Transform},
Error,
};
use futures_util::future::LocalBoxFuture;
use meilisearch_auth::actions;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use prometheus::HistogramTimer;
use prometheus::{Encoder, TextEncoder};
use crate::extractors::authentication::policies::ActionPolicy;
use crate::extractors::authentication::GuardedData;
pub async fn get_metrics(
meilisearch: GuardedData<ActionPolicy<{ actions::METRICS_GET }>, MeiliSearch>,
) -> Result<HttpResponse, ResponseError> {
let search_rules = &meilisearch.filters().search_rules;
let response = meilisearch.get_all_stats(search_rules).await?;
crate::metrics::MEILISEARCH_DB_SIZE_BYTES.set(response.database_size as i64);
crate::metrics::MEILISEARCH_INDEX_COUNT.set(response.indexes.len() as i64);
for (index, value) in response.indexes.iter() {
crate::metrics::MEILISEARCH_INDEX_DOCS_COUNT
.with_label_values(&[index])
.set(value.number_of_documents as i64);
}
let encoder = TextEncoder::new();
let mut buffer = vec![];
encoder
.encode(&prometheus::gather(), &mut buffer)
.expect("Failed to encode metrics");
let response = String::from_utf8(buffer).expect("Failed to convert bytes to string");
Ok(HttpResponse::Ok()
.insert_header(header::ContentType(mime::TEXT_PLAIN))
.body(response))
}
pub struct RouteMetrics;
// The middleware factory is the `Transform` trait from the actix-service crate.
// `S` - the type of the next service
// `B` - the type of the response's body
impl<S, B> Transform<S, ServiceRequest> for RouteMetrics
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = Error;
type InitError = ();
type Transform = RouteMetricsMiddleware<S>;
type Future = Ready<Result<Self::Transform, Self::InitError>>;
fn new_transform(&self, service: S) -> Self::Future {
ready(Ok(RouteMetricsMiddleware { service }))
}
}
pub struct RouteMetricsMiddleware<S> {
service: S,
}
impl<S, B> Service<ServiceRequest> for RouteMetricsMiddleware<S>
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = Error;
type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;
dev::forward_ready!(service);
fn call(&self, req: ServiceRequest) -> Self::Future {
let mut histogram_timer: Option<HistogramTimer> = None;
let request_path = req.path();
let is_registered_resource = req.resource_map().has_resource(request_path);
if is_registered_resource {
let request_method = req.method().to_string();
histogram_timer = Some(
crate::metrics::HTTP_RESPONSE_TIME_SECONDS
.with_label_values(&[&request_method, request_path])
.start_timer(),
);
crate::metrics::HTTP_REQUESTS_TOTAL
.with_label_values(&[&request_method, request_path])
.inc();
}
let fut = self.service.call(req);
Box::pin(async move {
let res = fut.await?;
if let Some(histogram_timer) = histogram_timer {
histogram_timer.observe_duration();
};
Ok(res)
})
}
}
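The crate::metrics statics referenced by get_metrics and the middleware are defined in a module this excerpt does not show. A plausible reconstruction using the prometheus crate's register_* macros — the metric names are inferred from the call sites above, the help strings are assumptions:

use once_cell::sync::Lazy;
use prometheus::{
    register_histogram_vec, register_int_counter_vec, register_int_gauge,
    register_int_gauge_vec, HistogramVec, IntCounterVec, IntGauge, IntGaugeVec,
};

pub static HTTP_REQUESTS_TOTAL: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!("http_requests_total", "HTTP requests total", &["method", "path"])
        .expect("Can't create a metric")
});

pub static HTTP_RESPONSE_TIME_SECONDS: Lazy<HistogramVec> = Lazy::new(|| {
    register_histogram_vec!("http_response_time_seconds", "HTTP response times", &["method", "path"])
        .expect("Can't create a metric")
});

pub static MEILISEARCH_DB_SIZE_BYTES: Lazy<IntGauge> = Lazy::new(|| {
    register_int_gauge!("meilisearch_db_size_bytes", "Meilisearch DB size in bytes")
        .expect("Can't create a metric")
});

pub static MEILISEARCH_INDEX_COUNT: Lazy<IntGauge> = Lazy::new(|| {
    register_int_gauge!("meilisearch_index_count", "Meilisearch index count")
        .expect("Can't create a metric")
});

pub static MEILISEARCH_INDEX_DOCS_COUNT: Lazy<IntGaugeVec> = Lazy::new(|| {
    register_int_gauge_vec!("meilisearch_index_docs_count", "Meilisearch index docs count", &["index"])
        .expect("Can't create a metric")
});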


@ -0,0 +1,160 @@
use std::str;
use actix_web::{web, HttpRequest, HttpResponse};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use time::OffsetDateTime;
use uuid::Uuid;
use meilisearch_auth::{error::AuthControllerError, Action, AuthController, Key};
use meilisearch_types::error::{Code, ResponseError};
use crate::extractors::{
authentication::{policies::*, GuardedData},
sequential_extractor::SeqHandler,
};
use crate::routes::Pagination;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::post().to(SeqHandler(create_api_key)))
.route(web::get().to(SeqHandler(list_api_keys))),
)
.service(
web::resource("/{key}")
.route(web::get().to(SeqHandler(get_api_key)))
.route(web::patch().to(SeqHandler(patch_api_key)))
.route(web::delete().to(SeqHandler(delete_api_key))),
);
}
pub async fn create_api_key(
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_CREATE }>, AuthController>,
body: web::Json<Value>,
_req: HttpRequest,
) -> Result<HttpResponse, ResponseError> {
let v = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let key = auth_controller.create_key(v)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Created().json(res))
}
pub async fn list_api_keys(
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_GET }>, AuthController>,
paginate: web::Query<Pagination>,
) -> Result<HttpResponse, ResponseError> {
let page_view = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let keys = auth_controller.list_keys()?;
let page_view = paginate.auto_paginate_sized(
keys.into_iter()
.map(|k| KeyView::from_key(k, &auth_controller)),
);
Ok(page_view)
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Ok().json(page_view))
}
pub async fn get_api_key(
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_GET }>, AuthController>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
let key = path.into_inner().key;
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.get_key(uid)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Ok().json(res))
}
pub async fn patch_api_key(
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_UPDATE }>, AuthController>,
body: web::Json<Value>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
let key = path.into_inner().key;
let body = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.update_key(uid, body)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Ok().json(res))
}
pub async fn delete_api_key(
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_DELETE }>, AuthController>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
let key = path.into_inner().key;
tokio::task::spawn_blocking(move || {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
auth_controller.delete_key(uid)
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::NoContent().finish())
}
#[derive(Deserialize)]
pub struct AuthParam {
key: String,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct KeyView {
name: Option<String>,
description: Option<String>,
key: String,
uid: Uuid,
actions: Vec<Action>,
indexes: Vec<String>,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
expires_at: Option<OffsetDateTime>,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
created_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
updated_at: OffsetDateTime,
}
impl KeyView {
fn from_key(key: Key, auth: &AuthController) -> Self {
let generated_key = auth.generate_key(key.uid).unwrap_or_default();
KeyView {
name: key.name,
description: key.description,
key: generated_key,
uid: key.uid,
actions: key.actions,
indexes: key.indexes.into_iter().map(String::from).collect(),
expires_at: key.expires_at,
created_at: key.created_at,
updated_at: key.updated_at,
}
}
}
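Every key handler above offloads the synchronous AuthController work with tokio::task::spawn_blocking, which is why the double `??` appears after each `.await`: the first `?` unwraps the JoinError (the task panicked or was cancelled), the second the AuthControllerError produced inside the closure. Schematically, with simplified error types:

async fn run_blocking_demo() -> Result<String, String> {
    let inner: Result<String, String> = tokio::task::spawn_blocking(|| {
        // Synchronous work that must not block the async executor.
        Ok::<_, String>("value".to_string())
    })
    .await // Result<Result<String, String>, JoinError>
    .map_err(|e| e.to_string())?; // first ?: the task itself failed
    let value = inner?; // second ?: the closure's own error
    Ok(value)
}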


@ -1,48 +1,27 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::MeiliSearch;
use serde::{Deserialize, Serialize};
use meilisearch_types::error::ResponseError;
use serde_json::json;
use crate::analytics::Analytics;
use crate::error::ResponseError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::SummarizedTaskView;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::post().to(create_dump)))
.service(web::resource("/{dump_uid}/status").route(web::get().to(get_dump_status)));
cfg.service(web::resource("").route(web::post().to(SeqHandler(create_dump))));
}
pub async fn create_dump(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::DUMPS_CREATE }>, MeiliSearch>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
analytics.publish("Dump Created".to_string(), json!({}), Some(&req));
let res = meilisearch.create_dump().await?;
let res: SummarizedTaskView = meilisearch.register_dump_task().await?.into();
debug!("returns: {:?}", res);
Ok(HttpResponse::Accepted().json(res))
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct DumpStatusResponse {
status: String,
}
#[derive(Deserialize)]
struct DumpParam {
dump_uid: String,
}
async fn get_dump_status(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<DumpParam>,
) -> Result<HttpResponse, ResponseError> {
let res = meilisearch.dump_info(path.dump_uid.clone()).await?;
debug!("returns: {:?}", res);
Ok(HttpResponse::Ok().json(res))
}


@ -1,24 +1,38 @@
use actix_web::error::PayloadError;
use actix_web::http::header::CONTENT_TYPE;
use actix_web::web::Bytes;
use actix_web::HttpMessage;
use actix_web::{web, HttpRequest, HttpResponse};
use bstr::ByteSlice;
use futures::{Stream, StreamExt};
use log::debug;
use meilisearch_lib::index_controller::{DocumentAdditionFormat, Update};
use meilisearch_lib::milli::update::IndexDocumentsMethod;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::star_or::StarOr;
use mime::Mime;
use once_cell::sync::Lazy;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::Value;
use tokio::sync::mpsc;
use crate::analytics::Analytics;
use crate::error::{MeilisearchHttpError, ResponseError};
use crate::error::MeilisearchHttpError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::payload::Payload;
use crate::routes::IndexParam;
use crate::extractors::sequential_extractor::SeqHandler;
use crate::routes::{fold_star_or, PaginationView};
use crate::task::SummarizedTaskView;
const DEFAULT_RETRIEVE_DOCUMENTS_OFFSET: usize = 0;
const DEFAULT_RETRIEVE_DOCUMENTS_LIMIT: usize = 20;
static ACCEPTED_CONTENT_TYPE: Lazy<Vec<String>> = Lazy::new(|| {
vec![
"application/json".to_string(),
"application/x-ndjson".to_string(),
"text/csv".to_string(),
]
});
/// This is required because Payload is neither Sync nor Send
fn payload_to_stream(mut payload: Payload) -> impl Stream<Item = Result<Bytes, PayloadError>> {
@ -31,6 +45,24 @@ fn payload_to_stream(mut payload: Payload) -> impl Stream<Item = Result<Bytes, P
tokio_stream::wrappers::ReceiverStream::new(recv)
}
/// Extracts the mime type from the content type and returns
/// a meilisearch error if anything bad happens.
fn extract_mime_type(req: &HttpRequest) -> Result<Option<Mime>, MeilisearchHttpError> {
match req.mime_type() {
Ok(Some(mime)) => Ok(Some(mime)),
Ok(None) => Ok(None),
Err(_) => match req.headers().get(CONTENT_TYPE) {
Some(content_type) => Err(MeilisearchHttpError::InvalidContentType(
content_type.as_bytes().as_bstr().to_string(),
ACCEPTED_CONTENT_TYPE.clone(),
)),
None => Err(MeilisearchHttpError::MissingContentType(
ACCEPTED_CONTENT_TYPE.clone(),
)),
},
}
}
#[derive(Deserialize)]
pub struct DocumentParam {
index_uid: String,
@ -40,35 +72,45 @@ pub struct DocumentParam {
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(get_all_documents))
.route(web::post().to(add_documents))
.route(web::put().to(update_documents))
.route(web::delete().to(clear_all_documents)),
.route(web::get().to(SeqHandler(get_all_documents)))
.route(web::post().to(SeqHandler(add_documents)))
.route(web::put().to(SeqHandler(update_documents)))
.route(web::delete().to(SeqHandler(clear_all_documents))),
)
// this route needs to be before the /documents/{document_id} to match properly
.service(web::resource("/delete-batch").route(web::post().to(delete_documents)))
.service(web::resource("/delete-batch").route(web::post().to(SeqHandler(delete_documents))))
.service(
web::resource("/{document_id}")
.route(web::get().to(get_document))
.route(web::delete().to(delete_document)),
.route(web::get().to(SeqHandler(get_document)))
.route(web::delete().to(SeqHandler(delete_document))),
);
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct GetDocument {
fields: Option<CS<StarOr<String>>>,
}
pub async fn get_document(
meilisearch: GuardedData<Public, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_GET }>, MeiliSearch>,
path: web::Path<DocumentParam>,
params: web::Query<GetDocument>,
) -> Result<HttpResponse, ResponseError> {
let index = path.index_uid.clone();
let id = path.document_id.clone();
let GetDocument { fields } = params.into_inner();
let attributes_to_retrieve = fields.and_then(fold_star_or);
let document = meilisearch
.document(index, id, None as Option<Vec<String>>)
.document(index, id, attributes_to_retrieve)
.await?;
debug!("returns: {:?}", document);
Ok(HttpResponse::Ok().json(document))
}
pub async fn delete_document(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_DELETE }>, MeiliSearch>,
path: web::Path<DocumentParam>,
) -> Result<HttpResponse, ResponseError> {
let DocumentParam {
@ -76,48 +118,42 @@ pub async fn delete_document(
index_uid,
} = path.into_inner();
let update = Update::DeleteDocuments(vec![document_id]);
let update_status = meilisearch
.register_update(index_uid, update, false)
.await?;
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
let task: SummarizedTaskView = meilisearch.register_update(index_uid, update).await?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct BrowseQuery {
offset: Option<usize>,
limit: Option<usize>,
attributes_to_retrieve: Option<String>,
#[serde(default)]
offset: usize,
#[serde(default = "crate::routes::PAGINATION_DEFAULT_LIMIT")]
limit: usize,
fields: Option<CS<StarOr<String>>>,
}
pub async fn get_all_documents(
meilisearch: GuardedData<Public, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_GET }>, MeiliSearch>,
path: web::Path<String>,
params: web::Query<BrowseQuery>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let attributes_to_retrieve = params.attributes_to_retrieve.as_ref().and_then(|attrs| {
let mut names = Vec::new();
for name in attrs.split(',').map(String::from) {
if name == "*" {
return None;
}
names.push(name);
}
Some(names)
});
let BrowseQuery {
limit,
offset,
fields,
} = params.into_inner();
let attributes_to_retrieve = fields.and_then(fold_star_or);
let documents = meilisearch
.documents(
path.index_uid.clone(),
params.offset.unwrap_or(DEFAULT_RETRIEVE_DOCUMENTS_OFFSET),
params.limit.unwrap_or(DEFAULT_RETRIEVE_DOCUMENTS_LIMIT),
attributes_to_retrieve,
)
let (total, documents) = meilisearch
.documents(path.into_inner(), offset, limit, attributes_to_retrieve)
.await?;
debug!("returns: {:?}", documents);
Ok(HttpResponse::Ok().json(documents))
let ret = PaginationView::new(offset, limit, total as usize, documents);
debug!("returns: {:?}", ret);
Ok(HttpResponse::Ok().json(ret))
}
#[derive(Deserialize, Debug)]
@ -127,92 +163,89 @@ pub struct UpdateDocumentsQuery {
}
pub async fn add_documents(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_ADD }>, MeiliSearch>,
path: web::Path<String>,
params: web::Query<UpdateDocumentsQuery>,
body: Payload,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let content_type = req
.headers()
.get("Content-type")
.map(|s| s.to_str().unwrap_or("unkown"));
let params = params.into_inner();
let index_uid = path.into_inner();
analytics.add_documents(
&params,
meilisearch.get_index(path.index_uid.clone()).await.is_err(),
meilisearch.get_index(index_uid.clone()).await.is_err(),
&req,
);
document_addition(
content_type,
let allow_index_creation = meilisearch.filters().allow_index_creation;
let task = document_addition(
extract_mime_type(&req)?,
meilisearch,
path.index_uid.clone(),
index_uid,
params.primary_key,
body,
IndexDocumentsMethod::ReplaceDocuments,
allow_index_creation,
)
.await
.await?;
Ok(HttpResponse::Accepted().json(task))
}
pub async fn update_documents(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_ADD }>, MeiliSearch>,
path: web::Path<String>,
params: web::Query<UpdateDocumentsQuery>,
body: Payload,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let content_type = req
.headers()
.get("Content-type")
.map(|s| s.to_str().unwrap_or("unkown"));
let index_uid = path.into_inner();
analytics.update_documents(
&params,
meilisearch.get_index(path.index_uid.clone()).await.is_err(),
meilisearch.get_index(index_uid.clone()).await.is_err(),
&req,
);
document_addition(
content_type,
let allow_index_creation = meilisearch.filters().allow_index_creation;
let task = document_addition(
extract_mime_type(&req)?,
meilisearch,
path.into_inner().index_uid,
index_uid,
params.into_inner().primary_key,
body,
IndexDocumentsMethod::UpdateDocuments,
allow_index_creation,
)
.await
.await?;
Ok(HttpResponse::Accepted().json(task))
}
/// Route used when the payload type is "application/json"
/// Used to add or replace documents
async fn document_addition(
content_type: Option<&str>,
meilisearch: GuardedData<Private, MeiliSearch>,
mime_type: Option<Mime>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_ADD }>, MeiliSearch>,
index_uid: String,
primary_key: Option<String>,
body: Payload,
method: IndexDocumentsMethod,
) -> Result<HttpResponse, ResponseError> {
static ACCEPTED_CONTENT_TYPE: Lazy<Vec<String>> = Lazy::new(|| {
vec![
"application/json".to_string(),
"application/x-ndjson".to_string(),
"text/csv".to_string(),
]
});
let format = match content_type {
Some("application/json") => DocumentAdditionFormat::Json,
Some("application/x-ndjson") => DocumentAdditionFormat::Ndjson,
Some("text/csv") => DocumentAdditionFormat::Csv,
Some(other) => {
allow_index_creation: bool,
) -> Result<SummarizedTaskView, ResponseError> {
let format = match mime_type
.as_ref()
.map(|m| (m.type_().as_str(), m.subtype().as_str()))
{
Some(("application", "json")) => DocumentAdditionFormat::Json,
Some(("application", "x-ndjson")) => DocumentAdditionFormat::Ndjson,
Some(("text", "csv")) => DocumentAdditionFormat::Csv,
Some((type_, subtype)) => {
return Err(MeilisearchHttpError::InvalidContentType(
other.to_string(),
format!("{}/{}", type_, subtype),
ACCEPTED_CONTENT_TYPE.clone(),
)
.into())
@ -229,17 +262,18 @@ async fn document_addition(
primary_key,
method,
format,
allow_index_creation,
};
let update_status = meilisearch.register_update(index_uid, update, true).await?;
let task = meilisearch.register_update(index_uid, update).await?.into();
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
debug!("returns: {:?}", task);
Ok(task)
}
pub async fn delete_documents(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_DELETE }>, MeiliSearch>,
path: web::Path<String>,
body: web::Json<Vec<Value>>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", body);
@ -253,21 +287,25 @@ pub async fn delete_documents(
.collect();
let update = Update::DeleteDocuments(ids);
let update_status = meilisearch
.register_update(path.into_inner().index_uid, update, false)
.await?;
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
let task: SummarizedTaskView = meilisearch
.register_update(path.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
pub async fn clear_all_documents(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_DELETE }>, MeiliSearch>,
path: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
let update = Update::ClearDocuments;
let update_status = meilisearch
.register_update(path.into_inner().index_uid, update, false)
.await?;
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
let task: SummarizedTaskView = meilisearch
.register_update(path.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}


@ -1,49 +1,60 @@
use actix_web::{web, HttpRequest, HttpResponse};
use chrono::{DateTime, Utc};
use log::debug;
use meilisearch_lib::index_controller::IndexSettings;
use meilisearch_lib::index_controller::Update;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde::{Deserialize, Serialize};
use serde_json::json;
use time::OffsetDateTime;
use crate::analytics::Analytics;
use crate::error::ResponseError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::routes::IndexParam;
use crate::extractors::authentication::{policies::*, AuthenticationError, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::SummarizedTaskView;
use super::Pagination;
pub mod documents;
pub mod search;
pub mod settings;
pub mod updates;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(list_indexes))
.route(web::post().to(create_index)),
.route(web::post().to(SeqHandler(create_index))),
)
.service(
web::scope("/{index_uid}")
.service(
web::resource("")
.route(web::get().to(get_index))
.route(web::put().to(update_index))
.route(web::delete().to(delete_index)),
.route(web::get().to(SeqHandler(get_index)))
.route(web::patch().to(SeqHandler(update_index)))
.route(web::delete().to(SeqHandler(delete_index))),
)
.service(web::resource("/stats").route(web::get().to(get_index_stats)))
.service(web::resource("/stats").route(web::get().to(SeqHandler(get_index_stats))))
.service(web::scope("/documents").configure(documents::configure))
.service(web::scope("/search").configure(search::configure))
.service(web::scope("/updates").configure(updates::configure))
.service(web::scope("/settings").configure(settings::configure)),
);
}
pub async fn list_indexes(
data: GuardedData<Private, MeiliSearch>,
data: GuardedData<ActionPolicy<{ actions::INDEXES_GET }>, MeiliSearch>,
paginate: web::Query<Pagination>,
) -> Result<HttpResponse, ResponseError> {
let indexes = data.list_indexes().await?;
debug!("returns: {:?}", indexes);
Ok(HttpResponse::Ok().json(indexes))
let search_rules = &data.filters().search_rules;
let indexes: Vec<_> = data.list_indexes().await?;
let nb_indexes = indexes.len();
let iter = indexes
.into_iter()
.filter(|i| search_rules.is_index_authorized(&i.uid));
let ret = paginate
.into_inner()
.auto_paginate_unsized(nb_indexes, iter);
debug!("returns: {:?}", ret);
Ok(HttpResponse::Ok().json(ret))
}
#[derive(Debug, Deserialize)]
@ -54,24 +65,35 @@ pub struct IndexCreateRequest {
}
pub async fn create_index(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::INDEXES_CREATE }>, MeiliSearch>,
body: web::Json<IndexCreateRequest>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let body = body.into_inner();
let IndexCreateRequest {
primary_key, uid, ..
} = body.into_inner();
analytics.publish(
"Index Created".to_string(),
json!({ "primary_key": body.primary_key}),
Some(&req),
);
let meta = meilisearch.create_index(body.uid, body.primary_key).await?;
Ok(HttpResponse::Created().json(meta))
let allow_index_creation = meilisearch.filters().search_rules.is_index_authorized(&uid);
if allow_index_creation {
analytics.publish(
"Index Created".to_string(),
json!({ "primary_key": primary_key }),
Some(&req),
);
let update = Update::CreateIndex { primary_key };
let task: SummarizedTaskView = meilisearch.register_update(uid, update).await?.into();
Ok(HttpResponse::Accepted().json(task))
} else {
Err(AuthenticationError::InvalidToken.into())
}
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
#[allow(dead_code)]
pub struct UpdateIndexRequest {
uid: Option<String>,
primary_key: Option<String>,
@ -82,23 +104,26 @@ pub struct UpdateIndexRequest {
pub struct UpdateIndexResponse {
name: String,
uid: String,
created_at: DateTime<Utc>,
updated_at: DateTime<Utc>,
primary_key: Option<String>,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
created_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
updated_at: OffsetDateTime,
primary_key: Option<String>,
}
pub async fn get_index(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::INDEXES_GET }>, MeiliSearch>,
path: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
let meta = meilisearch.get_index(path.index_uid.clone()).await?;
let meta = meilisearch.get_index(path.into_inner()).await?;
debug!("returns: {:?}", meta);
Ok(HttpResponse::Ok().json(meta))
}
pub async fn update_index(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::INDEXES_UPDATE }>, MeiliSearch>,
path: web::Path<String>,
body: web::Json<UpdateIndexRequest>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
@ -110,30 +135,43 @@ pub async fn update_index(
json!({ "primary_key": body.primary_key}),
Some(&req),
);
let settings = IndexSettings {
uid: body.uid,
let update = Update::UpdateIndex {
primary_key: body.primary_key,
};
let meta = meilisearch
.update_index(path.into_inner().index_uid, settings)
.await?;
debug!("returns: {:?}", meta);
Ok(HttpResponse::Ok().json(meta))
let task: SummarizedTaskView = meilisearch
.register_update(path.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
pub async fn delete_index(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::INDEXES_DELETE }>, MeiliSearch>,
path: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
meilisearch.delete_index(path.index_uid.clone()).await?;
Ok(HttpResponse::NoContent().finish())
let uid = path.into_inner();
let update = Update::DeleteIndex;
let task: SummarizedTaskView = meilisearch.register_update(uid, update).await?.into();
Ok(HttpResponse::Accepted().json(task))
}
pub async fn get_index_stats(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::STATS_GET }>, MeiliSearch>,
path: web::Path<String>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let response = meilisearch.get_index_stats(path.index_uid.clone()).await?;
analytics.publish(
"Stats Seen".to_string(),
json!({ "per_index_uid": true }),
Some(&req),
);
let response = meilisearch.get_index_stats(path.into_inner()).await?;
debug!("returns: {:?}", response);
Ok(HttpResponse::Ok().json(response))


@ -1,20 +1,25 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::index::{default_crop_length, SearchQuery, DEFAULT_SEARCH_LIMIT};
use meilisearch_auth::IndexSearchRules;
use meilisearch_lib::index::{
MatchingStrategy, SearchQuery, DEFAULT_CROP_LENGTH, DEFAULT_CROP_MARKER,
DEFAULT_HIGHLIGHT_POST_TAG, DEFAULT_HIGHLIGHT_PRE_TAG, DEFAULT_SEARCH_LIMIT,
};
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::Value;
use crate::analytics::{Analytics, SearchAggregator};
use crate::error::ResponseError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::routes::IndexParam;
use crate::extractors::sequential_extractor::SeqHandler;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(search_with_url_query))
.route(web::post().to(search_with_post)),
.route(web::get().to(SeqHandler(search_with_url_query)))
.route(web::post().to(SeqHandler(search_with_post))),
);
}
@ -24,36 +29,28 @@ pub struct SearchQueryGet {
q: Option<String>,
offset: Option<usize>,
limit: Option<usize>,
attributes_to_retrieve: Option<String>,
attributes_to_crop: Option<String>,
#[serde(default = "default_crop_length")]
attributes_to_retrieve: Option<CS<String>>,
attributes_to_crop: Option<CS<String>>,
#[serde(default = "DEFAULT_CROP_LENGTH")]
crop_length: usize,
attributes_to_highlight: Option<String>,
attributes_to_highlight: Option<CS<String>>,
filter: Option<String>,
sort: Option<String>,
#[serde(default = "Default::default")]
matches: bool,
facets_distribution: Option<String>,
show_matches_position: bool,
facets: Option<CS<String>>,
#[serde(default = "DEFAULT_HIGHLIGHT_PRE_TAG")]
highlight_pre_tag: String,
#[serde(default = "DEFAULT_HIGHLIGHT_POST_TAG")]
highlight_post_tag: String,
#[serde(default = "DEFAULT_CROP_MARKER")]
crop_marker: String,
#[serde(default)]
matching_strategy: MatchingStrategy,
}
impl From<SearchQueryGet> for SearchQuery {
fn from(other: SearchQueryGet) -> Self {
let attributes_to_retrieve = other
.attributes_to_retrieve
.map(|attrs| attrs.split(',').map(String::from).collect());
let attributes_to_crop = other
.attributes_to_crop
.map(|attrs| attrs.split(',').map(String::from).collect());
let attributes_to_highlight = other
.attributes_to_highlight
.map(|attrs| attrs.split(',').map(String::from).collect());
let facets_distribution = other
.facets_distribution
.map(|attrs| attrs.split(',').map(String::from).collect());
let filter = match other.filter {
Some(f) => match serde_json::from_str(&f) {
Ok(v) => Some(v),
@ -62,20 +59,46 @@ impl From<SearchQueryGet> for SearchQuery {
None => None,
};
let sort = other.sort.map(|attr| fix_sort_query_parameters(&attr));
Self {
q: other.q,
offset: other.offset,
limit: other.limit.unwrap_or(DEFAULT_SEARCH_LIMIT),
attributes_to_retrieve,
attributes_to_crop,
limit: other.limit.unwrap_or_else(DEFAULT_SEARCH_LIMIT),
attributes_to_retrieve: other
.attributes_to_retrieve
.map(|o| o.into_iter().collect()),
attributes_to_crop: other.attributes_to_crop.map(|o| o.into_iter().collect()),
crop_length: other.crop_length,
attributes_to_highlight,
attributes_to_highlight: other
.attributes_to_highlight
.map(|o| o.into_iter().collect()),
filter,
sort,
matches: other.matches,
facets_distribution,
sort: other.sort.map(|attr| fix_sort_query_parameters(&attr)),
show_matches_position: other.show_matches_position,
facets: other.facets.map(|o| o.into_iter().collect()),
highlight_pre_tag: other.highlight_pre_tag,
highlight_post_tag: other.highlight_post_tag,
crop_marker: other.crop_marker,
matching_strategy: other.matching_strategy,
}
}
}
/// Incorporate search rules in search query
fn add_search_rules(query: &mut SearchQuery, rules: IndexSearchRules) {
query.filter = match (query.filter.take(), rules.filter) {
(None, rules_filter) => rules_filter,
(filter, None) => filter,
(Some(filter), Some(rules_filter)) => {
let filter = match filter {
Value::Array(filter) => filter,
filter => vec![filter],
};
let rules_filter = match rules_filter {
Value::Array(rules_filter) => rules_filter,
rules_filter => vec![rules_filter],
};
Some(Value::Array([filter, rules_filter].concat()))
}
}
}
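add_search_rules effectively ANDs the tenant token's filter with whatever the client sent: each side is coerced to an array, the two are concatenated, and Meilisearch treats the outer array of filter expressions as a conjunction. A small illustration with made-up filters:

use serde_json::{json, Value};

fn main() {
    // What the client asked for, and what the tenant token imposes.
    let query_filter = json!("genres = fantasy");
    let rules_filter = json!(["user_id = 1"]);

    // The same coercion as add_search_rules: wrap non-arrays, then concat.
    let as_vec = |v: Value| match v {
        Value::Array(a) => a,
        other => vec![other],
    };
    let merged = Value::Array([as_vec(query_filter), as_vec(rules_filter)].concat());

    // => ["genres = fantasy", "user_id = 1"]: both clauses must hold.
    assert_eq!(merged, json!(["genres = fantasy", "user_id = 1"]));
}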
@ -91,10 +114,9 @@ fn fix_sort_query_parameters(sort_query: &str) -> Vec<String> {
sort_parameters.push(current_sort.to_string());
merge = true;
} else if merge && !sort_parameters.is_empty() {
sort_parameters
.last_mut()
.unwrap()
.push_str(&format!(",{}", current_sort));
let s = sort_parameters.last_mut().unwrap();
s.push(',');
s.push_str(current_sort);
if current_sort.ends_with("):desc") || current_sort.ends_with("):asc") {
merge = false;
}
@ -107,55 +129,69 @@ fn fix_sort_query_parameters(sort_query: &str) -> Vec<String> {
}
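fix_sort_query_parameters exists because the GET route receives sort as one comma-separated string, and a _geoPoint(lat, lng) clause contains a comma of its own; the merge flag in the loop above glues such a clause back together. A quick demonstration of the problem it solves:

fn main() {
    let sort = "title:asc,_geoPoint(48.85, 2.35):desc,price:asc";
    // A naive split breaks the geo clause in two...
    let naive: Vec<&str> = sort.split(',').collect();
    assert_eq!(naive[1], "_geoPoint(48.85");
    assert_eq!(naive[2], " 2.35):desc");
    // ...so the function re-merges everything between "_geoPoint("
    // and the closing "):asc" / "):desc" into a single parameter.
}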
pub async fn search_with_url_query(
meilisearch: GuardedData<Public, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::SEARCH }>, MeiliSearch>,
path: web::Path<String>,
params: web::Query<SearchQueryGet>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let query: SearchQuery = params.into_inner().into();
let mut query: SearchQuery = params.into_inner().into();
let index_uid = path.into_inner();
// Tenant token search_rules.
if let Some(search_rules) = meilisearch
.filters()
.search_rules
.get_index_search_rules(&index_uid)
{
add_search_rules(&mut query, search_rules);
}
let mut aggregate = SearchAggregator::from_query(&query, &req);
let search_result = meilisearch
.search(path.into_inner().index_uid, query)
.await?;
// Tests that `exhaustive_nb_hits` is always set to false
#[cfg(test)]
assert!(!search_result.exhaustive_nb_hits);
aggregate.finish(&search_result);
let search_result = meilisearch.search(index_uid, query).await;
if let Ok(ref search_result) = search_result {
aggregate.succeed(search_result);
}
analytics.get_search(aggregate);
let search_result = search_result?;
debug!("returns: {:?}", search_result);
Ok(HttpResponse::Ok().json(search_result))
}
pub async fn search_with_post(
meilisearch: GuardedData<Public, MeiliSearch>,
path: web::Path<IndexParam>,
meilisearch: GuardedData<ActionPolicy<{ actions::SEARCH }>, MeiliSearch>,
path: web::Path<String>,
params: web::Json<SearchQuery>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let query = params.into_inner();
let mut query = params.into_inner();
debug!("search called with params: {:?}", query);
let index_uid = path.into_inner();
// Tenant token search_rules.
if let Some(search_rules) = meilisearch
.filters()
.search_rules
.get_index_search_rules(&index_uid)
{
add_search_rules(&mut query, search_rules);
}
let mut aggregate = SearchAggregator::from_query(&query, &req);
let search_result = meilisearch
.search(path.into_inner().index_uid, query)
.await?;
// Tests that `exhaustive_nb_hits` is always set to false
#[cfg(test)]
assert!(!search_result.exhaustive_nb_hits);
aggregate.finish(&search_result);
let search_result = meilisearch.search(index_uid, query).await;
if let Ok(ref search_result) = search_result {
aggregate.succeed(search_result);
}
analytics.post_search(aggregate);
let search_result = search_result?;
debug!("returns: {:?}", search_result);
Ok(HttpResponse::Ok().json(search_result))
}


@ -4,46 +4,59 @@ use actix_web::{web, HttpRequest, HttpResponse};
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::index_controller::Update;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde_json::json;
use crate::analytics::Analytics;
use crate::error::ResponseError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::task::SummarizedTaskView;
#[macro_export]
macro_rules! make_setting_route {
($route:literal, $type:ty, $attr:ident, $camelcase_attr:literal, $analytics_var:ident, $analytics:expr) => {
($route:literal, $update_verb:ident, $type:ty, $attr:ident, $camelcase_attr:literal, $analytics_var:ident, $analytics:expr) => {
pub mod $attr {
use actix_web::{web, HttpRequest, HttpResponse, Resource};
use log::debug;
use actix_web::{web, HttpResponse, HttpRequest, Resource};
use meilisearch_lib::milli::update::Setting;
use meilisearch_lib::{MeiliSearch, index::Settings, index_controller::Update};
use meilisearch_lib::{index::Settings, index_controller::Update, MeiliSearch};
use crate::analytics::Analytics;
use crate::error::ResponseError;
use crate::extractors::authentication::{GuardedData, policies::*};
use meilisearch_types::error::ResponseError;
use $crate::analytics::Analytics;
use $crate::extractors::authentication::{policies::*, GuardedData};
use $crate::extractors::sequential_extractor::SeqHandler;
use $crate::task::SummarizedTaskView;
pub async fn delete(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::SETTINGS_UPDATE }>, MeiliSearch>,
index_uid: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
let settings = Settings {
$attr: Setting::Reset,
..Default::default()
};
let update = Update::Settings(settings);
let update_status = meilisearch.register_update(index_uid.into_inner(), update, false).await?;
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
pub async fn update(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::SETTINGS_UPDATE }>, MeiliSearch>,
index_uid: actix_web::web::Path<String>,
body: actix_web::web::Json<Option<$type>>,
req: HttpRequest,
$analytics_var: web::Data< dyn Analytics>,
$analytics_var: web::Data<dyn Analytics>,
) -> std::result::Result<HttpResponse, ResponseError> {
let body = body.into_inner();
@ -52,43 +65,62 @@ macro_rules! make_setting_route {
let settings = Settings {
$attr: match body {
Some(inner_body) => Setting::Set(inner_body),
None => Setting::Reset
None => Setting::Reset,
},
..Default::default()
};
let update = Update::Settings(settings);
let update_status = meilisearch.register_update(index_uid.into_inner(), update, true).await?;
debug!("returns: {:?}", update_status);
Ok(HttpResponse::Accepted().json(serde_json::json!({ "updateId": update_status.id() })))
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
pub async fn get(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::SETTINGS_GET }>, MeiliSearch>,
index_uid: actix_web::web::Path<String>,
) -> std::result::Result<HttpResponse, ResponseError> {
let settings = meilisearch.settings(index_uid.into_inner()).await?;
debug!("returns: {:?}", settings);
let mut json = serde_json::json!(&settings);
let val = json[$camelcase_attr].take();
Ok(HttpResponse::Ok().json(val))
}
pub fn resources() -> Resource {
Resource::new($route)
.route(web::get().to(get))
.route(web::post().to(update))
.route(web::delete().to(delete))
.route(web::get().to(SeqHandler(get)))
.route(web::$update_verb().to(SeqHandler(update)))
.route(web::delete().to(SeqHandler(delete)))
}
}
};
($route:literal, $type:ty, $attr:ident, $camelcase_attr:literal) => {
make_setting_route!($route, $type, $attr, $camelcase_attr, _analytics, |_, _| {});
($route:literal, $update_verb:ident, $type:ty, $attr:ident, $camelcase_attr:literal) => {
make_setting_route!(
$route,
$update_verb,
$type,
$attr,
$camelcase_attr,
_analytics,
|_, _| {}
);
};
}
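Two things are happening in this macro. The new $update_verb parameter lets the flat settings keep PUT while the object-shaped ones (typoTolerance, faceting, pagination) switch to PATCH, and the second arm is the usual macro_rules default-argument trick: it forwards to the full arm with a no-op analytics closure. A reduced, self-contained illustration of that defaulting pattern:

macro_rules! make_route {
    ($route:literal, $verb:ident, $analytics:expr) => {{
        let _analytics = $analytics; // would be invoked from the handler
        println!("{} {}", stringify!($verb), $route);
    }};
    ($route:literal, $verb:ident) => {
        // Forward to the full arm with a no-op closure, exactly like
        // the `_analytics, |_, _| {}` arm above.
        make_route!($route, $verb, |_: &str, _: &str| {})
    };
}

fn main() {
    make_route!("/stop-words", put);
    make_route!("/typo-tolerance", patch, |route: &str, _body: &str| {
        println!("analytics for {route}");
    });
}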
make_setting_route!(
"/filterable-attributes",
put,
std::collections::BTreeSet<String>,
filterable_attributes,
"filterableAttributes",
@ -111,6 +143,7 @@ make_setting_route!(
make_setting_route!(
"/sortable-attributes",
put,
std::collections::BTreeSet<String>,
sortable_attributes,
"sortableAttributes",
@ -122,8 +155,8 @@ make_setting_route!(
"SortableAttributes Updated".to_string(),
json!({
"sortable_attributes": {
"total": setting.as_ref().map(|sort| sort.len()).unwrap_or(0),
"has_geo": setting.as_ref().map(|sort| sort.contains("_geo")).unwrap_or(false),
"total": setting.as_ref().map(|sort| sort.len()),
"has_geo": setting.as_ref().map(|sort| sort.contains("_geo")),
},
}),
Some(req),
@ -133,13 +166,57 @@ make_setting_route!(
make_setting_route!(
"/displayed-attributes",
put,
Vec<String>,
displayed_attributes,
"displayedAttributes"
);
make_setting_route!(
"/typo-tolerance",
patch,
meilisearch_lib::index::updates::TypoSettings,
typo_tolerance,
"typoTolerance",
analytics,
|setting: &Option<meilisearch_lib::index::updates::TypoSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"TypoTolerance Updated".to_string(),
json!({
"typo_tolerance": {
"enabled": setting.as_ref().map(|s| !matches!(s.enabled, Setting::Set(false))),
"disable_on_attributes": setting
.as_ref()
.and_then(|s| s.disable_on_attributes.as_ref().set().map(|m| !m.is_empty())),
"disable_on_words": setting
.as_ref()
.and_then(|s| s.disable_on_words.as_ref().set().map(|m| !m.is_empty())),
"min_word_size_for_one_typo": setting
.as_ref()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.one_typo.set()))
.flatten(),
"min_word_size_for_two_typos": setting
.as_ref()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.two_typos.set()))
.flatten(),
},
}),
Some(req),
);
}
);
make_setting_route!(
"/searchable-attributes",
put,
Vec<String>,
searchable_attributes,
"searchableAttributes",
@ -151,7 +228,7 @@ make_setting_route!(
"SearchableAttributes Updated".to_string(),
json!({
"searchable_attributes": {
"total": setting.as_ref().map(|sort| sort.len()).unwrap_or(0),
"total": setting.as_ref().map(|searchable| searchable.len()),
},
}),
Some(req),
@ -161,6 +238,7 @@ make_setting_route!(
make_setting_route!(
"/stop-words",
put,
std::collections::BTreeSet<String>,
stop_words,
"stopWords"
@ -168,6 +246,7 @@ make_setting_route!(
make_setting_route!(
"/synonyms",
put,
std::collections::BTreeMap<String, Vec<String>>,
synonyms,
"synonyms"
@ -175,6 +254,7 @@ make_setting_route!(
make_setting_route!(
"/distinct-attribute",
put,
String,
distinct_attribute,
"distinctAttribute"
@ -182,6 +262,7 @@ make_setting_route!(
make_setting_route!(
"/ranking-rules",
put,
Vec<String>,
ranking_rules,
"rankingRules",
@ -201,14 +282,59 @@ make_setting_route!(
}
);
make_setting_route!(
"/faceting",
patch,
meilisearch_lib::index::updates::FacetingSettings,
faceting,
"faceting",
analytics,
|setting: &Option<meilisearch_lib::index::updates::FacetingSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"Faceting Updated".to_string(),
json!({
"faceting": {
"max_values_per_facet": setting.as_ref().and_then(|s| s.max_values_per_facet.set()),
},
}),
Some(req),
);
}
);
make_setting_route!(
"/pagination",
patch,
meilisearch_lib::index::updates::PaginationSettings,
pagination,
"pagination",
analytics,
|setting: &Option<meilisearch_lib::index::updates::PaginationSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"Pagination Updated".to_string(),
json!({
"pagination": {
"max_total_hits": setting.as_ref().and_then(|s| s.max_total_hits.set()),
},
}),
Some(req),
);
}
);
macro_rules! generate_configure {
($($mod:ident),*) => {
pub fn configure(cfg: &mut web::ServiceConfig) {
use crate::extractors::sequential_extractor::SeqHandler;
cfg.service(
web::resource("")
.route(web::post().to(update_all))
.route(web::get().to(get_all))
.route(web::delete().to(delete_all)))
.route(web::patch().to(SeqHandler(update_all)))
.route(web::get().to(SeqHandler(get_all)))
.route(web::delete().to(SeqHandler(delete_all))))
$(.service($mod::resources()))*;
}
};
@ -222,11 +348,14 @@ generate_configure!(
distinct_attribute,
stop_words,
synonyms,
ranking_rules
ranking_rules,
typo_tolerance,
pagination,
faceting
);
pub async fn update_all(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::SETTINGS_UPDATE }>, MeiliSearch>,
index_uid: web::Path<String>,
body: web::Json<Settings<Unchecked>>,
req: HttpRequest,
@ -240,29 +369,81 @@ pub async fn update_all(
"ranking_rules": {
"sort_position": settings.ranking_rules.as_ref().set().map(|sort| sort.iter().position(|s| s == "sort")),
},
"searchable_attributes": {
"total": settings.searchable_attributes.as_ref().set().map(|searchable| searchable.len()),
},
"sortable_attributes": {
"total": settings.sortable_attributes.as_ref().set().map(|sort| sort.len()).unwrap_or(0),
"has_geo": settings.sortable_attributes.as_ref().set().map(|sort| sort.iter().any(|s| s == "_geo")).unwrap_or(false),
"total": settings.sortable_attributes.as_ref().set().map(|sort| sort.len()),
"has_geo": settings.sortable_attributes.as_ref().set().map(|sort| sort.iter().any(|s| s == "_geo")),
},
"filterable_attributes": {
"total": settings.filterable_attributes.as_ref().set().map(|filter| filter.len()).unwrap_or(0),
"has_geo": settings.filterable_attributes.as_ref().set().map(|filter| filter.iter().any(|s| s == "_geo")).unwrap_or(false),
"total": settings.filterable_attributes.as_ref().set().map(|filter| filter.len()),
"has_geo": settings.filterable_attributes.as_ref().set().map(|filter| filter.iter().any(|s| s == "_geo")),
},
"typo_tolerance": {
"enabled": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.enabled.as_ref().set())
.copied(),
"disable_on_attributes": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.disable_on_attributes.as_ref().set().map(|m| !m.is_empty())),
"disable_on_words": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.disable_on_words.as_ref().set().map(|m| !m.is_empty())),
"min_word_size_for_one_typo": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.one_typo.set()))
.flatten(),
"min_word_size_for_two_typos": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.two_typos.set()))
.flatten(),
},
"faceting": {
"max_values_per_facet": settings.faceting
.as_ref()
.set()
.and_then(|s| s.max_values_per_facet.as_ref().set()),
},
"pagination": {
"max_total_hits": settings.pagination
.as_ref()
.set()
.and_then(|s| s.max_total_hits.as_ref().set()),
},
}),
Some(&req),
);
let update = Update::Settings(settings);
let update_result = meilisearch
.register_update(index_uid.into_inner(), update, true)
.await?;
let json = serde_json::json!({ "updateId": update_result.id() });
debug!("returns: {:?}", json);
Ok(HttpResponse::Accepted().json(json))
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}
pub async fn get_all(
data: GuardedData<Private, MeiliSearch>,
data: GuardedData<ActionPolicy<{ actions::SETTINGS_GET }>, MeiliSearch>,
index_uid: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
let settings = data.settings(index_uid.into_inner()).await?;
@ -271,16 +452,22 @@ pub async fn get_all(
}
pub async fn delete_all(
data: GuardedData<Private, MeiliSearch>,
data: GuardedData<ActionPolicy<{ actions::SETTINGS_UPDATE }>, MeiliSearch>,
index_uid: web::Path<String>,
) -> Result<HttpResponse, ResponseError> {
let settings = Settings::cleared();
let settings = Settings::cleared().into_unchecked();
let update = Update::Settings(settings.into_unchecked());
let update_result = data
.register_update(index_uid.into_inner(), update, false)
.await?;
let json = serde_json::json!({ "updateId": update_result.id() });
debug!("returns: {:?}", json);
Ok(HttpResponse::Accepted().json(json))
let allow_index_creation = data.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView = data
.register_update(index_uid.into_inner(), update)
.await?
.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Accepted().json(task))
}


@ -1,59 +0,0 @@
use actix_web::{web, HttpResponse};
use chrono::{DateTime, Utc};
use log::debug;
use meilisearch_lib::MeiliSearch;
use serde::{Deserialize, Serialize};
use crate::error::ResponseError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::routes::{IndexParam, UpdateStatusResponse};
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::get().to(get_all_updates_status)))
.service(web::resource("{update_id}").route(web::get().to(get_update_status)));
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdateIndexResponse {
name: String,
uid: String,
created_at: DateTime<Utc>,
updated_at: DateTime<Utc>,
primary_key: Option<String>,
}
#[derive(Deserialize)]
pub struct UpdateParam {
index_uid: String,
update_id: u64,
}
pub async fn get_update_status(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<UpdateParam>,
) -> Result<HttpResponse, ResponseError> {
let params = path.into_inner();
let meta = meilisearch
.update_status(params.index_uid, params.update_id)
.await?;
let meta = UpdateStatusResponse::from(meta);
debug!("returns: {:?}", meta);
Ok(HttpResponse::Ok().json(meta))
}
pub async fn get_all_updates_status(
meilisearch: GuardedData<Private, MeiliSearch>,
path: web::Path<IndexParam>,
) -> Result<HttpResponse, ResponseError> {
let metas = meilisearch
.all_update_status(path.into_inner().index_uid)
.await?;
let metas = metas
.into_iter()
.map(UpdateStatusResponse::from)
.collect::<Vec<_>>();
debug!("returns: {:?}", metas);
Ok(HttpResponse::Ok().json(metas))
}


@ -1,30 +1,128 @@
use std::time::Duration;
use actix_web::{web, HttpResponse};
use chrono::{DateTime, Utc};
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::index_controller::updates::status::{UpdateResult, UpdateStatus};
use serde::{Deserialize, Serialize};
use serde_json::json;
use time::OffsetDateTime;
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::{MeiliSearch, Update};
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::star_or::StarOr;
use crate::error::ResponseError;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::ApiKeys;
mod api_key;
mod dump;
pub mod indexes;
mod tasks;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("/health").route(web::get().to(get_health)))
cfg.service(web::scope("/tasks").configure(tasks::configure))
.service(web::resource("/health").route(web::get().to(get_health)))
.service(web::scope("/keys").configure(api_key::configure))
.service(web::scope("/dumps").configure(dump::configure))
.service(web::resource("/keys").route(web::get().to(list_keys)))
.service(web::resource("/stats").route(web::get().to(get_stats)))
.service(web::resource("/version").route(web::get().to(get_version)))
.service(web::scope("/indexes").configure(indexes::configure));
}
/// Extracts the raw values from the `StarOr` types and
/// returns None if a `StarOr::Star` is encountered.
pub fn fold_star_or<T, O>(content: impl IntoIterator<Item = StarOr<T>>) -> Option<O>
where
O: FromIterator<T>,
{
content
.into_iter()
.map(|value| match value {
StarOr::Star => None,
StarOr::Other(val) => Some(val),
})
.collect()
}
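
A self-contained sketch of the collapsing behavior above, using a hypothetical stand-in for the real `StarOr` from meilisearch_types::star_or:

// Hypothetical stand-in for meilisearch_types::star_or::StarOr.
enum StarOr<T> {
    Star,
    Other(T),
}

// Mirrors the fold_star_or function above.
fn fold_star_or<T, O: FromIterator<T>>(content: impl IntoIterator<Item = StarOr<T>>) -> Option<O> {
    content
        .into_iter()
        .map(|value| match value {
            StarOr::Star => None,
            StarOr::Other(val) => Some(val),
        })
        .collect()
}

fn main() {
    // No `*` present: every value is kept.
    let kept: Option<Vec<u32>> = fold_star_or(vec![StarOr::Other(1), StarOr::Other(2)]);
    assert_eq!(kept, Some(vec![1, 2]));
    // Any `*` collapses the whole filter to "not specified".
    let collapsed: Option<Vec<u32>> = fold_star_or(vec![StarOr::Other(1), StarOr::Star]);
    assert_eq!(collapsed, None);
}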
const PAGINATION_DEFAULT_LIMIT: fn() -> usize = || 20;
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Pagination {
#[serde(default)]
pub offset: usize,
#[serde(default = "PAGINATION_DEFAULT_LIMIT")]
pub limit: usize,
}
#[derive(Debug, Clone, Serialize)]
pub struct PaginationView<T> {
pub results: Vec<T>,
pub offset: usize,
pub limit: usize,
pub total: usize,
}
impl Pagination {
/// Given the full data to paginate, returns the selected section.
pub fn auto_paginate_sized<T>(
self,
content: impl IntoIterator<Item = T> + ExactSizeIterator,
) -> PaginationView<T>
where
T: Serialize,
{
let total = content.len();
let content: Vec<_> = content
.into_iter()
.skip(self.offset)
.take(self.limit)
.collect();
self.format_with(total, content)
}
/// Given an iterator and the total number of elements, returns the selected section.
pub fn auto_paginate_unsized<T>(
self,
total: usize,
content: impl IntoIterator<Item = T>,
) -> PaginationView<T>
where
T: Serialize,
{
let content: Vec<_> = content
.into_iter()
.skip(self.offset)
.take(self.limit)
.collect();
self.format_with(total, content)
}
/// Given the data already paginated + the total number of elements, it stores
/// everything in a [PaginationView].
pub fn format_with<T>(self, total: usize, results: Vec<T>) -> PaginationView<T>
where
T: Serialize,
{
PaginationView {
results,
offset: self.offset,
limit: self.limit,
total,
}
}
}
impl<T> PaginationView<T> {
pub fn new(offset: usize, limit: usize, total: usize, results: Vec<T>) -> Self {
Self {
offset,
limit,
results,
total,
}
}
}
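
A minimal usage sketch of the paginator above, assuming `Pagination` and `PaginationView` are in scope:

fn main() {
    let page = Pagination { offset: 2, limit: 2 };
    // Skips `offset` items, then keeps at most `limit` of the rest.
    let view = page.auto_paginate_sized(vec!["a", "b", "c", "d", "e"].into_iter());
    assert_eq!(view.results, vec!["c", "d"]);
    assert_eq!(view.total, 5);
    assert_eq!((view.offset, view.limit), (2, 2));
}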
#[derive(Debug, Clone, Serialize, Deserialize)]
#[allow(clippy::large_enum_variant)]
#[serde(tag = "name")]
@@ -48,38 +146,6 @@ pub enum UpdateType {
},
}
impl From<&UpdateStatus> for UpdateType {
fn from(other: &UpdateStatus) -> Self {
use meilisearch_lib::milli::update::IndexDocumentsMethod::*;
match other.meta() {
Update::DocumentAddition { method, .. } => {
let number = match other {
UpdateStatus::Processed(processed) => match processed.success {
UpdateResult::DocumentsAddition(ref addition) => {
Some(addition.nb_documents)
}
_ => None,
},
_ => None,
};
match method {
ReplaceDocuments => UpdateType::DocumentsAddition { number },
UpdateDocuments => UpdateType::DocumentsPartial { number },
_ => unreachable!(),
}
}
Update::Settings(settings) => UpdateType::Settings {
settings: settings.clone(),
},
Update::ClearDocuments => UpdateType::ClearAll,
Update::DeleteDocuments(ids) => UpdateType::DocumentsDeletion {
number: Some(ids.len()),
},
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ProcessedUpdateResult {
@@ -87,8 +153,10 @@ pub struct ProcessedUpdateResult {
#[serde(rename = "type")]
pub update_type: UpdateType,
pub duration: f64, // in seconds
pub enqueued_at: DateTime<Utc>,
pub processed_at: DateTime<Utc>,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -97,11 +165,12 @@ pub struct FailedUpdateResult {
pub update_id: u64,
#[serde(rename = "type")]
pub update_type: UpdateType,
#[serde(flatten)]
pub response: ResponseError,
pub error: ResponseError,
pub duration: f64, // in seconds
pub enqueued_at: DateTime<Utc>,
pub processed_at: DateTime<Utc>,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -110,9 +179,13 @@ pub struct EnqueuedUpdateResult {
pub update_id: u64,
#[serde(rename = "type")]
pub update_type: UpdateType,
pub enqueued_at: DateTime<Utc>,
#[serde(skip_serializing_if = "Option::is_none")]
pub started_processing_at: Option<DateTime<Utc>>,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(
skip_serializing_if = "Option::is_none",
with = "time::serde::rfc3339::option"
)]
pub started_processing_at: Option<OffsetDateTime>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -136,81 +209,6 @@ pub enum UpdateStatusResponse {
},
}
impl From<UpdateStatus> for UpdateStatusResponse {
fn from(other: UpdateStatus) -> Self {
let update_type = UpdateType::from(&other);
match other {
UpdateStatus::Processing(processing) => {
let content = EnqueuedUpdateResult {
update_id: processing.id(),
update_type,
enqueued_at: processing.from.enqueued_at,
started_processing_at: Some(processing.started_processing_at),
};
UpdateStatusResponse::Processing { content }
}
UpdateStatus::Enqueued(enqueued) => {
let content = EnqueuedUpdateResult {
update_id: enqueued.id(),
update_type,
enqueued_at: enqueued.enqueued_at,
started_processing_at: None,
};
UpdateStatusResponse::Enqueued { content }
}
UpdateStatus::Processed(processed) => {
let duration = processed
.processed_at
.signed_duration_since(processed.from.started_processing_at)
.num_milliseconds();
// necessary since chrono::Duration doesn't expose an f64 secs method.
let duration = Duration::from_millis(duration as u64).as_secs_f64();
let content = ProcessedUpdateResult {
update_id: processed.id(),
update_type,
duration,
enqueued_at: processed.from.from.enqueued_at,
processed_at: processed.processed_at,
};
UpdateStatusResponse::Processed { content }
}
UpdateStatus::Aborted(_) => unreachable!(),
UpdateStatus::Failed(failed) => {
let duration = failed
.failed_at
.signed_duration_since(failed.from.started_processing_at)
.num_milliseconds();
// necessary since chrono::Duration doesn't expose an f64 secs method.
let duration = Duration::from_millis(duration as u64).as_secs_f64();
let update_id = failed.id();
let processed_at = failed.failed_at;
let enqueued_at = failed.from.from.enqueued_at;
let response = failed.into();
let content = FailedUpdateResult {
update_id,
update_type,
response,
duration,
enqueued_at,
processed_at,
};
UpdateStatusResponse::Failed { content }
}
}
}
}
#[derive(Deserialize)]
pub struct IndexParam {
index_uid: String,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
pub struct IndexUpdateResponse {
@@ -230,13 +228,21 @@ impl IndexUpdateResponse {
/// }
/// ```
pub async fn running() -> HttpResponse {
HttpResponse::Ok().json(serde_json::json!({ "status": "MeiliSearch is running" }))
HttpResponse::Ok().json(serde_json::json!({ "status": "Meilisearch is running" }))
}
async fn get_stats(
meilisearch: GuardedData<Private, MeiliSearch>,
meilisearch: GuardedData<ActionPolicy<{ actions::STATS_GET }>, MeiliSearch>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let response = meilisearch.get_all_stats().await?;
analytics.publish(
"Stats Seen".to_string(),
json!({ "per_index_uid": false }),
Some(&req),
);
let search_rules = &meilisearch.filters().search_rules;
let response = meilisearch.get_all_stats(search_rules).await?;
debug!("returns: {:?}", response);
Ok(HttpResponse::Ok().json(response))
@@ -250,7 +256,9 @@ struct VersionResponse {
pkg_version: String,
}
async fn get_version(_meilisearch: GuardedData<Private, MeiliSearch>) -> HttpResponse {
async fn get_version(
_meilisearch: GuardedData<ActionPolicy<{ actions::VERSION }>, MeiliSearch>,
) -> HttpResponse {
let commit_sha = option_env!("VERGEN_GIT_SHA").unwrap_or("unknown");
let commit_date = option_env!("VERGEN_GIT_COMMIT_TIMESTAMP").unwrap_or("unknown");
@@ -267,108 +275,6 @@ struct KeysResponse {
public: Option<String>,
}
pub async fn list_keys(meilisearch: GuardedData<Admin, ApiKeys>) -> HttpResponse {
let api_keys = (*meilisearch).clone();
HttpResponse::Ok().json(&KeysResponse {
private: api_keys.private,
public: api_keys.public,
})
}
pub async fn get_health() -> Result<HttpResponse, ResponseError> {
Ok(HttpResponse::Ok().json(serde_json::json!({ "status": "available" })))
}
#[cfg(test)]
mod test {
use super::*;
use crate::extractors::authentication::GuardedData;
/// A type implemented for a route that uses an authentication policy `Policy`.
///
/// This trait is used for regression testing of route authentication policies.
trait Is<Policy, Data, T> {}
macro_rules! impl_is_policy {
($($param:ident)*) => {
impl<Policy, Func, Data, $($param,)* Res> Is<Policy, Data, (($($param,)*), Res)> for Func
where Func: Fn(GuardedData<Policy, Data>, $($param,)*) -> Res {}
};
}
impl_is_policy! {}
impl_is_policy! {A}
impl_is_policy! {A B}
impl_is_policy! {A B C}
impl_is_policy! {A B C D}
impl_is_policy! {A B C D E}
/// Emits a compile error if a route doesn't have the correct authentication policy.
///
/// This works by trying to cast the route function into an Is<Policy, _> type, where Policy is
/// the authentication policy defined for the route.
macro_rules! test_auth_routes {
($($policy:ident => { $($route:expr,)*})*) => {
#[test]
fn test_auth() {
$($(let _: &dyn Is<$policy, _, _> = &$route;)*)*
}
};
}
test_auth_routes! {
Public => {
indexes::search::search_with_url_query,
indexes::search::search_with_post,
indexes::documents::get_document,
indexes::documents::get_all_documents,
}
Private => {
get_stats,
get_version,
indexes::create_index,
indexes::list_indexes,
indexes::get_index_stats,
indexes::delete_index,
indexes::update_index,
indexes::get_index,
dump::create_dump,
indexes::settings::filterable_attributes::get,
indexes::settings::displayed_attributes::get,
indexes::settings::searchable_attributes::get,
indexes::settings::stop_words::get,
indexes::settings::synonyms::get,
indexes::settings::distinct_attribute::get,
indexes::settings::filterable_attributes::update,
indexes::settings::displayed_attributes::update,
indexes::settings::searchable_attributes::update,
indexes::settings::stop_words::update,
indexes::settings::synonyms::update,
indexes::settings::distinct_attribute::update,
indexes::settings::filterable_attributes::delete,
indexes::settings::displayed_attributes::delete,
indexes::settings::searchable_attributes::delete,
indexes::settings::stop_words::delete,
indexes::settings::synonyms::delete,
indexes::settings::distinct_attribute::delete,
indexes::settings::delete_all,
indexes::settings::get_all,
indexes::settings::update_all,
indexes::documents::clear_all_documents,
indexes::documents::delete_documents,
indexes::documents::update_documents,
indexes::documents::add_documents,
indexes::documents::delete_document,
indexes::updates::get_all_updates_status,
indexes::updates::get_update_status,
}
Admin => { list_keys, }
}
}


@@ -0,0 +1,203 @@
use actix_web::{web, HttpRequest, HttpResponse};
use meilisearch_lib::tasks::task::{TaskContent, TaskEvent, TaskId};
use meilisearch_lib::tasks::TaskFilter;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::star_or::StarOr;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::json;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::{TaskListView, TaskStatus, TaskType, TaskView};
use super::fold_star_or;
const DEFAULT_LIMIT: fn() -> usize = || 20;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::get().to(SeqHandler(get_tasks))))
.service(web::resource("/{task_id}").route(web::get().to(SeqHandler(get_task))));
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct TasksFilterQuery {
#[serde(rename = "type")]
type_: Option<CS<StarOr<TaskType>>>,
status: Option<CS<StarOr<TaskStatus>>>,
index_uid: Option<CS<StarOr<IndexUid>>>,
#[serde(default = "DEFAULT_LIMIT")]
limit: usize,
from: Option<TaskId>,
}
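
Roughly how a query string maps onto this struct; an illustrative sketch only, since the exact shapes come from serde_cs and `StarOr`:

// GET /tasks?type=documentDeletion&status=succeeded,failed&limit=10
//
// parses (roughly) into:
//
// TasksFilterQuery {
//     type_:     Some(CS(vec![StarOr::Other(TaskType::DocumentDeletion)])),
//     status:    Some(CS(vec![StarOr::Other(TaskStatus::Succeeded),
//                             StarOr::Other(TaskStatus::Failed)])),
//     index_uid: None,
//     limit:     10,
//     from:      None,
// }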
#[rustfmt::skip]
fn task_type_matches_content(type_: &TaskType, content: &TaskContent) -> bool {
matches!((type_, content),
(TaskType::IndexCreation, TaskContent::IndexCreation { .. })
| (TaskType::IndexUpdate, TaskContent::IndexUpdate { .. })
| (TaskType::IndexDeletion, TaskContent::IndexDeletion { .. })
| (TaskType::DocumentAdditionOrUpdate, TaskContent::DocumentAddition { .. })
| (TaskType::DocumentDeletion, TaskContent::DocumentDeletion{ .. })
| (TaskType::SettingsUpdate, TaskContent::SettingsUpdate { .. })
)
}
#[rustfmt::skip]
fn task_status_matches_events(status: &TaskStatus, events: &[TaskEvent]) -> bool {
events.last().map_or(false, |event| {
matches!((status, event),
(TaskStatus::Enqueued, TaskEvent::Created(_))
| (TaskStatus::Processing, TaskEvent::Processing(_) | TaskEvent::Batched { .. })
| (TaskStatus::Succeeded, TaskEvent::Succeeded { .. })
| (TaskStatus::Failed, TaskEvent::Failed { .. }),
)
})
}
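
In other words, reading the arms above:

// A task whose last event is TaskEvent::Batched { .. } is reported as
// TaskStatus::Processing, TaskStatus::Enqueued only matches a trailing
// TaskEvent::Created(_), and a task with no events matches no status at all.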
async fn get_tasks(
meilisearch: GuardedData<ActionPolicy<{ actions::TASKS_GET }>, MeiliSearch>,
params: web::Query<TasksFilterQuery>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let TasksFilterQuery {
type_,
status,
index_uid,
limit,
from,
} = params.into_inner();
let search_rules = &meilisearch.filters().search_rules;
// We first transform a potential `*` value into a "not specified" filter
// for each of the three filters: type, status, and indexUid.
let type_: Option<Vec<_>> = type_.and_then(fold_star_or);
let status: Option<Vec<_>> = status.and_then(fold_star_or);
let index_uid: Option<Vec<_>> = index_uid.and_then(fold_star_or);
analytics.publish(
"Tasks Seen".to_string(),
json!({
"filtered_by_index_uid": index_uid.as_ref().map_or(false, |v| !v.is_empty()),
"filtered_by_type": type_.as_ref().map_or(false, |v| !v.is_empty()),
"filtered_by_status": status.as_ref().map_or(false, |v| !v.is_empty()),
}),
Some(&req),
);
// Then we filter on potential indexes and make sure that the search filter
// restrictions are also applied.
let indexes_filters = match index_uid {
Some(indexes) => {
let mut filters = TaskFilter::default();
for name in indexes {
if search_rules.is_index_authorized(&name) {
filters.filter_index(name.to_string());
}
}
Some(filters)
}
None => {
if search_rules.is_index_authorized("*") {
None
} else {
let mut filters = TaskFilter::default();
for (index, _policy) in search_rules.clone() {
filters.filter_index(index);
}
Some(filters)
}
}
};
// Then we complete the task filter with any remaining status and type filters.
let filters = if type_.is_some() || status.is_some() {
let mut filters = indexes_filters.unwrap_or_default();
filters.filter_fn(Box::new(move |task| {
let matches_type = match &type_ {
Some(types) => types
.iter()
.any(|t| task_type_matches_content(t, &task.content)),
None => true,
};
let matches_status = match &status {
Some(statuses) => statuses
.iter()
.any(|t| task_status_matches_events(t, &task.events)),
None => true,
};
matches_type && matches_status
}));
Some(filters)
} else {
indexes_filters
};
// We fetch one extra task just to know whether there is more after this "page".
let limit = limit.saturating_add(1);
let mut tasks_results: Vec<_> = meilisearch
.list_tasks(filters, Some(limit), from)
.await?
.into_iter()
.map(TaskView::from)
.collect();
// If we were able to fetch limit + 1 tasks, it means
// that there are more tasks to come.
let next = if tasks_results.len() == limit {
tasks_results.pop().map(|t| t.uid)
} else {
None
};
let from = tasks_results.first().map(|t| t.uid);
let tasks = TaskListView {
results: tasks_results,
limit: limit.saturating_sub(1),
from,
next,
};
Ok(HttpResponse::Ok().json(tasks))
}
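
The fetch-one-extra cursor trick above is self-contained enough to sketch standalone; a minimal version over plain uids, assuming the backing store honors the requested limit:

// Returns (page, next_cursor) using the "fetch limit + 1" trick.
fn paginate(mut items: Vec<u64>, limit: usize) -> (Vec<u64>, Option<u64>) {
    // What a store queried with `limit + 1` would hand back.
    items.truncate(limit + 1);
    let next = if items.len() == limit + 1 {
        // The extra element only proves a next page exists; its uid
        // becomes the cursor and it is dropped from the results.
        items.pop()
    } else {
        None
    };
    (items, next)
}

fn main() {
    assert_eq!(paginate(vec![1, 2, 3, 4, 5], 2), (vec![1, 2], Some(3)));
    assert_eq!(paginate(vec![1, 2], 2), (vec![1, 2], None));
}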
async fn get_task(
meilisearch: GuardedData<ActionPolicy<{ actions::TASKS_GET }>, MeiliSearch>,
task_id: web::Path<TaskId>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
analytics.publish(
"Tasks Seen".to_string(),
json!({ "per_task_uid": true }),
Some(&req),
);
let search_rules = &meilisearch.filters().search_rules;
let filters = if search_rules.is_index_authorized("*") {
None
} else {
let mut filters = TaskFilter::default();
for (index, _policy) in search_rules.clone() {
filters.filter_index(index);
}
Some(filters)
};
let task: TaskView = meilisearch
.get_task(task_id.into_inner(), filters)
.await?
.into();
Ok(HttpResponse::Ok().json(task))
}


@@ -0,0 +1,434 @@
use std::error::Error;
use std::fmt::{self, Write};
use std::str::FromStr;
use std::write;
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::tasks::task::{
DocumentDeletion, Task, TaskContent, TaskEvent, TaskId, TaskResult,
};
use meilisearch_types::error::ResponseError;
use serde::{Deserialize, Serialize, Serializer};
use time::{Duration, OffsetDateTime};
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum TaskType {
IndexCreation,
IndexUpdate,
IndexDeletion,
DocumentAdditionOrUpdate,
DocumentDeletion,
SettingsUpdate,
DumpCreation,
}
impl From<TaskContent> for TaskType {
fn from(other: TaskContent) -> Self {
match other {
TaskContent::IndexCreation { .. } => TaskType::IndexCreation,
TaskContent::IndexUpdate { .. } => TaskType::IndexUpdate,
TaskContent::IndexDeletion { .. } => TaskType::IndexDeletion,
TaskContent::DocumentAddition { .. } => TaskType::DocumentAdditionOrUpdate,
TaskContent::DocumentDeletion { .. } => TaskType::DocumentDeletion,
TaskContent::SettingsUpdate { .. } => TaskType::SettingsUpdate,
TaskContent::Dump { .. } => TaskType::DumpCreation,
}
}
}
#[derive(Debug)]
pub struct TaskTypeError {
invalid_type: String,
}
impl fmt::Display for TaskTypeError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid task type `{}`, expecting one of: \
indexCreation, indexUpdate, indexDeletion, documentAdditionOrUpdate, \
documentDeletion, settingsUpdate, dumpCreation",
self.invalid_type
)
}
}
impl Error for TaskTypeError {}
impl FromStr for TaskType {
type Err = TaskTypeError;
fn from_str(type_: &str) -> Result<Self, TaskTypeError> {
if type_.eq_ignore_ascii_case("indexCreation") {
Ok(TaskType::IndexCreation)
} else if type_.eq_ignore_ascii_case("indexUpdate") {
Ok(TaskType::IndexUpdate)
} else if type_.eq_ignore_ascii_case("indexDeletion") {
Ok(TaskType::IndexDeletion)
} else if type_.eq_ignore_ascii_case("documentAdditionOrUpdate") {
Ok(TaskType::DocumentAdditionOrUpdate)
} else if type_.eq_ignore_ascii_case("documentDeletion") {
Ok(TaskType::DocumentDeletion)
} else if type_.eq_ignore_ascii_case("settingsUpdate") {
Ok(TaskType::SettingsUpdate)
} else if type_.eq_ignore_ascii_case("dumpCreation") {
Ok(TaskType::DumpCreation)
} else {
Err(TaskTypeError {
invalid_type: type_.to_string(),
})
}
}
}
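
For example, assuming the impl above is in scope:

fn main() {
    let task_type: TaskType = "documentAdditionOrUpdate".parse().unwrap();
    assert!(matches!(task_type, TaskType::DocumentAdditionOrUpdate));
    // Matching is ASCII-case-insensitive.
    assert!("SETTINGSUPDATE".parse::<TaskType>().is_ok());
    // Unknown names surface a descriptive TaskTypeError.
    assert!("documentsAddition".parse::<TaskType>().is_err());
}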
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum TaskStatus {
Enqueued,
Processing,
Succeeded,
Failed,
}
#[derive(Debug)]
pub struct TaskStatusError {
invalid_status: String,
}
impl fmt::Display for TaskStatusError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid task status `{}`, expecting one of: \
enqueued, processing, succeeded, or failed",
self.invalid_status,
)
}
}
impl Error for TaskStatusError {}
impl FromStr for TaskStatus {
type Err = TaskStatusError;
fn from_str(status: &str) -> Result<Self, TaskStatusError> {
if status.eq_ignore_ascii_case("enqueued") {
Ok(TaskStatus::Enqueued)
} else if status.eq_ignore_ascii_case("processing") {
Ok(TaskStatus::Processing)
} else if status.eq_ignore_ascii_case("succeeded") {
Ok(TaskStatus::Succeeded)
} else if status.eq_ignore_ascii_case("failed") {
Ok(TaskStatus::Failed)
} else {
Err(TaskStatusError {
invalid_status: status.to_string(),
})
}
}
}
#[derive(Debug, Serialize)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
enum TaskDetails {
#[serde(rename_all = "camelCase")]
DocumentAddition {
received_documents: usize,
indexed_documents: Option<u64>,
},
#[serde(rename_all = "camelCase")]
Settings {
#[serde(flatten)]
settings: Settings<Unchecked>,
},
#[serde(rename_all = "camelCase")]
IndexInfo { primary_key: Option<String> },
#[serde(rename_all = "camelCase")]
DocumentDeletion {
received_document_ids: usize,
deleted_documents: Option<u64>,
},
#[serde(rename_all = "camelCase")]
ClearAll { deleted_documents: Option<u64> },
#[serde(rename_all = "camelCase")]
Dump { dump_uid: String },
}
/// Serializes a `time::Duration` as a best-effort ISO 8601 duration while waiting for
/// https://github.com/time-rs/time/issues/378.
/// This code is a port of time's old implementation that was removed in 0.2.
fn serialize_duration<S: Serializer>(
duration: &Option<Duration>,
serializer: S,
) -> Result<S::Ok, S::Error> {
match duration {
Some(duration) => {
// technically speaking, negative duration is not valid ISO 8601
if duration.is_negative() {
return serializer.serialize_none();
}
const SECS_PER_DAY: i64 = Duration::DAY.whole_seconds();
let secs = duration.whole_seconds();
let days = secs / SECS_PER_DAY;
let secs = secs - days * SECS_PER_DAY;
let hasdate = days != 0;
let nanos = duration.subsec_nanoseconds();
let hastime = (secs != 0 || nanos != 0) || !hasdate;
// none of the following unwraps can fail
let mut res = String::new();
write!(&mut res, "P").unwrap();
if hasdate {
write!(&mut res, "{}D", days).unwrap();
}
const NANOS_PER_MILLI: i32 = Duration::MILLISECOND.subsec_nanoseconds();
const NANOS_PER_MICRO: i32 = Duration::MICROSECOND.subsec_nanoseconds();
if hastime {
if nanos == 0 {
write!(&mut res, "T{}S", secs).unwrap();
} else if nanos % NANOS_PER_MILLI == 0 {
write!(&mut res, "T{}.{:03}S", secs, nanos / NANOS_PER_MILLI).unwrap();
} else if nanos % NANOS_PER_MICRO == 0 {
write!(&mut res, "T{}.{:06}S", secs, nanos / NANOS_PER_MICRO).unwrap();
} else {
write!(&mut res, "T{}.{:09}S", secs, nanos).unwrap();
}
}
serializer.serialize_str(&res)
}
None => serializer.serialize_none(),
}
}
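
A small sketch of what this serializer emits, assuming `serialize_duration` is in scope and the serde, serde_json, and time crates are available:

#[derive(serde::Serialize)]
struct Wrap {
    #[serde(serialize_with = "serialize_duration")]
    duration: Option<time::Duration>,
}

fn main() {
    let json = |d| serde_json::to_string(&Wrap { duration: d }).unwrap();
    assert_eq!(json(Some(time::Duration::milliseconds(1500))), r#"{"duration":"PT1.500S"}"#);
    assert_eq!(json(Some(time::Duration::days(2))), r#"{"duration":"P2D"}"#);
    // Negative durations are not valid ISO 8601 and serialize as null.
    assert_eq!(json(Some(time::Duration::seconds(-1))), r#"{"duration":null}"#);
    assert_eq!(json(None), r#"{"duration":null}"#);
}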
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskView {
pub uid: TaskId,
index_uid: Option<String>,
status: TaskStatus,
#[serde(rename = "type")]
task_type: TaskType,
#[serde(skip_serializing_if = "Option::is_none")]
details: Option<TaskDetails>,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<ResponseError>,
#[serde(serialize_with = "serialize_duration")]
duration: Option<Duration>,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
enqueued_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
started_at: Option<OffsetDateTime>,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
finished_at: Option<OffsetDateTime>,
}
impl From<Task> for TaskView {
fn from(task: Task) -> Self {
let index_uid = task.index_uid().map(String::from);
let Task {
id,
content,
events,
} = task;
let (task_type, mut details) = match content {
TaskContent::DocumentAddition {
documents_count, ..
} => {
let details = TaskDetails::DocumentAddition {
received_documents: documents_count,
indexed_documents: None,
};
(TaskType::DocumentAdditionOrUpdate, Some(details))
}
TaskContent::DocumentDeletion {
deletion: DocumentDeletion::Ids(ids),
..
} => (
TaskType::DocumentDeletion,
Some(TaskDetails::DocumentDeletion {
received_document_ids: ids.len(),
deleted_documents: None,
}),
),
TaskContent::DocumentDeletion {
deletion: DocumentDeletion::Clear,
..
} => (
TaskType::DocumentDeletion,
Some(TaskDetails::ClearAll {
deleted_documents: None,
}),
),
TaskContent::IndexDeletion { .. } => (
TaskType::IndexDeletion,
Some(TaskDetails::ClearAll {
deleted_documents: None,
}),
),
TaskContent::SettingsUpdate { settings, .. } => (
TaskType::SettingsUpdate,
Some(TaskDetails::Settings { settings }),
),
TaskContent::IndexCreation { primary_key, .. } => (
TaskType::IndexCreation,
Some(TaskDetails::IndexInfo { primary_key }),
),
TaskContent::IndexUpdate { primary_key, .. } => (
TaskType::IndexUpdate,
Some(TaskDetails::IndexInfo { primary_key }),
),
TaskContent::Dump { uid } => (
TaskType::DumpCreation,
Some(TaskDetails::Dump { dump_uid: uid }),
),
};
// A task always has at least one event: "Created"
let (status, error, finished_at) = match events.last().unwrap() {
TaskEvent::Created(_) => (TaskStatus::Enqueued, None, None),
TaskEvent::Batched { .. } => (TaskStatus::Enqueued, None, None),
TaskEvent::Processing(_) => (TaskStatus::Processing, None, None),
TaskEvent::Succeeded { timestamp, result } => {
match (result, &mut details) {
(
TaskResult::DocumentAddition {
indexed_documents: num,
..
},
Some(TaskDetails::DocumentAddition {
ref mut indexed_documents,
..
}),
) => {
indexed_documents.replace(*num);
}
(
TaskResult::DocumentDeletion {
deleted_documents: docs,
..
},
Some(TaskDetails::DocumentDeletion {
ref mut deleted_documents,
..
}),
) => {
deleted_documents.replace(*docs);
}
(
TaskResult::ClearAll {
deleted_documents: docs,
},
Some(TaskDetails::ClearAll {
ref mut deleted_documents,
}),
) => {
deleted_documents.replace(*docs);
}
_ => (),
}
(TaskStatus::Succeeded, None, Some(*timestamp))
}
TaskEvent::Failed { timestamp, error } => {
match details {
Some(TaskDetails::DocumentDeletion {
ref mut deleted_documents,
..
}) => {
deleted_documents.replace(0);
}
Some(TaskDetails::ClearAll {
ref mut deleted_documents,
..
}) => {
deleted_documents.replace(0);
}
Some(TaskDetails::DocumentAddition {
ref mut indexed_documents,
..
}) => {
indexed_documents.replace(0);
}
_ => (),
}
(TaskStatus::Failed, Some(error.clone()), Some(*timestamp))
}
};
let enqueued_at = match events.first() {
Some(TaskEvent::Created(ts)) => *ts,
_ => unreachable!("A task must always have a creation event."),
};
let started_at = events.iter().find_map(|e| match e {
TaskEvent::Processing(ts) => Some(*ts),
_ => None,
});
let duration = finished_at.zip(started_at).map(|(tf, ts)| (tf - ts));
Self {
uid: id,
index_uid,
status,
task_type,
details,
error,
duration,
enqueued_at,
started_at,
finished_at,
}
}
}
#[derive(Debug, Serialize)]
pub struct TaskListView {
pub results: Vec<TaskView>,
pub limit: usize,
pub from: Option<TaskId>,
pub next: Option<TaskId>,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct SummarizedTaskView {
task_uid: TaskId,
index_uid: Option<String>,
status: TaskStatus,
#[serde(rename = "type")]
task_type: TaskType,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
enqueued_at: OffsetDateTime,
}
impl From<Task> for SummarizedTaskView {
fn from(mut other: Task) -> Self {
let created_event = other
.events
.drain(..1)
.next()
.expect("Task must have an enqueued event.");
let enqueued_at = match created_event {
TaskEvent::Created(ts) => ts,
_ => unreachable!("The first event of a task must always be 'Created'"),
};
Self {
task_uid: other.id,
index_uid: other.index_uid().map(String::from),
status: TaskStatus::Enqueued,
task_type: other.content.into(),
enqueued_at,
}
}
}
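
Roughly the JSON body a 202 Accepted response carries for a freshly enqueued task; an illustrative shape, not captured output:

// {
//   "taskUid": 0,
//   "indexUid": "products",
//   "status": "enqueued",
//   "type": "documentAdditionOrUpdate",
//   "enqueuedAt": "2022-05-04T12:00:00Z"
// }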

Binary files not shown (4 files changed).

File diff suppressed because it is too large.


@@ -0,0 +1,724 @@
use crate::common::Server;
use ::time::format_description::well_known::Rfc3339;
use maplit::{hashmap, hashset};
use once_cell::sync::Lazy;
use serde_json::{json, Value};
use std::collections::{HashMap, HashSet};
use time::{Duration, OffsetDateTime};
pub static AUTHORIZATIONS: Lazy<HashMap<(&'static str, &'static str), HashSet<&'static str>>> =
Lazy::new(|| {
let mut authorizations = hashmap! {
("POST", "/indexes/products/search") => hashset!{"search", "*"},
("GET", "/indexes/products/search") => hashset!{"search", "*"},
("POST", "/indexes/products/documents") => hashset!{"documents.add", "documents.*", "*"},
("GET", "/indexes/products/documents") => hashset!{"documents.get", "documents.*", "*"},
("GET", "/indexes/products/documents/0") => hashset!{"documents.get", "documents.*", "*"},
("DELETE", "/indexes/products/documents/0") => hashset!{"documents.delete", "documents.*", "*"},
("GET", "/tasks") => hashset!{"tasks.get", "tasks.*", "*"},
("GET", "/tasks?indexUid=products") => hashset!{"tasks.get", "tasks.*", "*"},
("GET", "/tasks/0") => hashset!{"tasks.get", "tasks.*", "*"},
("PATCH", "/indexes/products/") => hashset!{"indexes.update", "indexes.*", "*"},
("GET", "/indexes/products/") => hashset!{"indexes.get", "indexes.*", "*"},
("DELETE", "/indexes/products/") => hashset!{"indexes.delete", "indexes.*", "*"},
("POST", "/indexes") => hashset!{"indexes.create", "indexes.*", "*"},
("GET", "/indexes") => hashset!{"indexes.get", "indexes.*", "*"},
("GET", "/indexes/products/settings") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/displayed-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/distinct-attribute") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/filterable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/ranking-rules") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/searchable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/sortable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/stop-words") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/synonyms") => hashset!{"settings.get", "settings.*", "*"},
("DELETE", "/indexes/products/settings") => hashset!{"settings.update", "settings.*", "*"},
("PATCH", "/indexes/products/settings") => hashset!{"settings.update", "settings.*", "*"},
("PATCH", "/indexes/products/settings/typo-tolerance") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/displayed-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/distinct-attribute") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/filterable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/ranking-rules") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/searchable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/sortable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/stop-words") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/synonyms") => hashset!{"settings.update", "settings.*", "*"},
("GET", "/indexes/products/stats") => hashset!{"stats.get", "stats.*", "*"},
("GET", "/stats") => hashset!{"stats.get", "stats.*", "*"},
("POST", "/dumps") => hashset!{"dumps.create", "dumps.*", "*"},
("GET", "/version") => hashset!{"version", "*"},
("PATCH", "/keys/mykey/") => hashset!{"keys.update", "*"},
("GET", "/keys/mykey/") => hashset!{"keys.get", "*"},
("DELETE", "/keys/mykey/") => hashset!{"keys.delete", "*"},
("POST", "/keys") => hashset!{"keys.create", "*"},
("GET", "/keys") => hashset!{"keys.get", "*"},
};
if cfg!(feature = "metrics") {
authorizations.insert(
("GET", "/metrics"),
hashset! {"metrics.get", "metrics.*", "*"},
);
}
authorizations
});
pub static ALL_ACTIONS: Lazy<HashSet<&'static str>> = Lazy::new(|| {
AUTHORIZATIONS
.values()
.cloned()
.reduce(|l, r| l.union(&r).cloned().collect())
.unwrap()
});
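
The same union-reduction, as a standalone sketch over plain string sets:

use std::collections::HashSet;

fn union_all<'a>(sets: impl IntoIterator<Item = HashSet<&'a str>>) -> HashSet<&'a str> {
    sets.into_iter()
        .reduce(|l, r| l.union(&r).cloned().collect())
        .unwrap_or_default()
}

fn main() {
    let all = union_all(vec![
        HashSet::from(["search", "*"]),
        HashSet::from(["documents.add", "documents.*", "*"]),
    ]);
    assert_eq!(all.len(), 4); // duplicates such as "*" collapse
}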
static INVALID_RESPONSE: Lazy<Value> = Lazy::new(|| {
json!({"message": "The provided API key is invalid.",
"code": "invalid_api_key",
"type": "auth",
"link": "https://docs.meilisearch.com/errors#invalid_api_key"
})
});
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_expired_key() {
use std::{thread, time};
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["products"],
"actions": ALL_ACTIONS.clone(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::seconds(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
// wait until the key is expired.
thread::sleep(time::Duration::new(1, 0));
for (method, route) in AUTHORIZATIONS.keys() {
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_unauthorized_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["sales"],
"actions": ALL_ACTIONS.clone(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
for (method, route) in AUTHORIZATIONS
.keys()
// filter `products` index routes
.filter(|(_, route)| route.starts_with("/indexes/products"))
{
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_unauthorized_action() {
let mut server = Server::new_auth().await;
for ((method, route), action) in AUTHORIZATIONS.iter() {
// create a new API key granting every action except the ones the route needs.
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["products"],
"actions": ALL_ACTIONS.difference(action).collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_master_key() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// master key must have access to all routes.
for ((method, route), _) in AUTHORIZATIONS.iter() {
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_ne!(code, 403);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_restricted_index() {
let mut server = Server::new_auth().await;
for ((method, route), actions) in AUTHORIZATIONS.iter() {
for action in actions {
// create a new API key allowing only the needed action.
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["products"],
"actions": [action],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?} with action: {:?}",
method,
route,
action
);
assert_ne!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_no_index_restriction() {
let mut server = Server::new_auth().await;
for ((method, route), actions) in AUTHORIZATIONS.iter() {
for action in actions {
// create a new API key allowing only the needed action.
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": [action],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?} with action: {:?}",
method,
route,
action
);
assert_ne!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_stats_restricted_index() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["stats.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.stats().await;
assert_eq!(200, code, "{:?}", &response);
// key should have access to the `products` index.
assert!(response["indexes"].get("products").is_some());
// key should not have access to the `test` index.
assert!(response["indexes"].get("test").is_none());
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_stats_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["stats.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.stats().await;
assert_eq!(200, code, "{:?}", &response);
// key should have access to the `products` index.
assert!(response["indexes"].get("products").is_some());
// key should have access to the `test` index.
assert!(response["indexes"].get("test").is_some());
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn list_authorized_indexes_restricted_index() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["indexes.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(200, code, "{:?}", &response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|index| index["uid"] == "products"));
// key should not have access to the `test` index.
assert!(!response.iter().any(|index| index["uid"] == "test"));
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn list_authorized_indexes_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["indexes.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(200, code, "{:?}", &response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|index| index["uid"] == "products"));
// key should have access to the `test` index.
assert!(response.iter().any(|index| index["uid"] == "test"));
}
#[actix_rt::test]
async fn list_authorized_tasks_restricted_index() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["tasks.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.service.get("/tasks").await;
assert_eq!(200, code, "{:?}", &response);
println!("{}", response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|task| task["indexUid"] == "products"));
// key should not have access to the `test` index.
assert!(!response.iter().any(|task| task["indexUid"] == "test"));
}
#[actix_rt::test]
async fn list_authorized_tasks_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["tasks.get"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let (response, code) = server.service.get("/tasks").await;
assert_eq!(200, code, "{:?}", &response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|task| task["indexUid"] == "products"));
// key should have access to the `test` index.
assert!(response.iter().any(|task| task["indexUid"] == "test"));
}
#[actix_rt::test]
async fn error_creating_index_without_action() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
// Give every action except the ones that allow creating an index.
"actions": ALL_ACTIONS.iter().cloned().filter(|a| !AUTHORIZATIONS.get(&("POST","/indexes")).unwrap().contains(a)).collect::<Vec<_>>(),
"expiresAt": "2050-11-13T00:00:00Z"
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
let expected_error = json!({
"message": "Index `test` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
});
// try to create an index via the add-documents route
let index = server.index("test");
let documents = json!([
{
"id": 1,
"content": "foo",
}
]);
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
// try to create an index via the update-settings route
let settings = json!({ "distinctAttribute": "test"});
let (response, code) = index.update_settings(settings).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
// try to create an index via a specialized settings route
let (response, code) = index.update_distinct_attribute(json!("test")).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
}
#[actix_rt::test]
async fn lazy_create_index() {
let mut server = Server::new_auth().await;
// create key with access on all indexes.
let contents = vec![
json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": "2050-11-13T00:00:00Z"
}),
json!({
"indexes": ["*"],
"actions": ["indexes.*", "documents.*", "settings.*", "tasks.*"],
"expiresAt": "2050-11-13T00:00:00Z"
}),
json!({
"indexes": ["*"],
"actions": ["indexes.create", "documents.add", "settings.update", "tasks.get"],
"expiresAt": "2050-11-13T00:00:00Z"
}),
];
for content in contents {
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
// try to create an index via the add-documents route
let index = server.index("test");
let documents = json!([
{
"id": 1,
"content": "foo",
}
]);
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
// try to create an index via the update-settings route
let index = server.index("test1");
let settings = json!({ "distinctAttribute": "test"});
let (response, code) = index.update_settings(settings).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
// try to create an index via a specialized settings route
let index = server.index("test2");
let (response, code) = index.update_distinct_attribute(json!("test")).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
}
}
#[actix_rt::test]
async fn error_creating_index_without_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// create key with access to the `unexpected` index only.
let content = json!({
"indexes": ["unexpected"],
"actions": ["*"],
"expiresAt": "2050-11-13T00:00:00Z"
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(key);
// try to create an index via the add-documents route
let index = server.index("test");
let documents = json!([
{
"id": 1,
"content": "foo",
}
]);
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(403, code, "{:?}", &response);
// try to create an index via the update-settings route
let index = server.index("test1");
let settings = json!({ "distinctAttribute": "test"});
let (response, code) = index.update_settings(settings).await;
assert_eq!(403, code, "{:?}", &response);
// try to create an index via a specialized settings route
let index = server.index("test2");
let (response, code) = index.update_distinct_attribute(json!("test")).await;
assert_eq!(403, code, "{:?}", &response);
// try to create an index via the create-index route
let index = server.index("test3");
let (response, code) = index.create(None).await;
assert_eq!(403, code, "{:?}", &response);
}


@@ -0,0 +1,64 @@
mod api_keys;
mod authorization;
mod payload;
mod tenant_token;
use crate::common::Server;
use actix_web::http::StatusCode;
use serde_json::{json, Value};
impl Server {
pub fn use_api_key(&mut self, api_key: impl AsRef<str>) {
self.service.api_key = Some(api_key.as_ref().to_string());
}
/// Fetches and uses the default admin key for subsequent HTTP requests.
pub async fn use_admin_key(&mut self, master_key: impl AsRef<str>) {
self.use_api_key(master_key);
let (response, code) = self.list_api_keys().await;
assert_eq!(200, code, "{:?}", response);
let admin_key = &response["results"][1]["key"];
self.use_api_key(admin_key.as_str().unwrap());
}
pub async fn add_api_key(&self, content: Value) -> (Value, StatusCode) {
let url = "/keys";
self.service.post(url, content).await
}
pub async fn get_api_key(&self, key: impl AsRef<str>) -> (Value, StatusCode) {
let url = format!("/keys/{}", key.as_ref());
self.service.get(url).await
}
pub async fn patch_api_key(&self, key: impl AsRef<str>, content: Value) -> (Value, StatusCode) {
let url = format!("/keys/{}", key.as_ref());
self.service.patch(url, content).await
}
pub async fn list_api_keys(&self) -> (Value, StatusCode) {
let url = "/keys";
self.service.get(url).await
}
pub async fn delete_api_key(&self, key: impl AsRef<str>) -> (Value, StatusCode) {
let url = format!("/keys/{}", key.as_ref());
self.service.delete(url).await
}
pub async fn dummy_request(
&self,
method: impl AsRef<str>,
url: impl AsRef<str>,
) -> (Value, StatusCode) {
match method.as_ref() {
"POST" => self.service.post(url, json!({})).await,
"PUT" => self.service.put(url, json!({})).await,
"PATCH" => self.service.patch(url, json!({})).await,
"GET" => self.service.get(url).await,
"DELETE" => self.service.delete(url).await,
_ => unreachable!(),
}
}
}
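
A hypothetical usage of these helpers, mirroring the shape of the authorization tests in this suite:

async fn sketch() {
    let mut server = Server::new_auth().await;
    server.use_api_key("MASTER_KEY");
    // Issue an empty-payload request against any guarded route.
    let (response, code) = server.dummy_request("GET", "/keys").await;
    assert_eq!(200, code, "{:?}", response);
}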


@@ -0,0 +1,340 @@
use crate::common::Server;
use actix_web::test;
use meilisearch_http::{analytics, create_app};
use serde_json::{json, Value};
#[actix_rt::test]
async fn error_api_key_bad_content_types() {
let content = json!({
"indexes": ["products"],
"actions": [
"documents.add"
],
"expiresAt": "2050-11-13T00:00:00Z"
});
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/keys")
.set_payload(content.to_string())
.insert_header(("content-type", "text/plain"))
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"The Content-Type `text/plain` is invalid. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "invalid_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#invalid_content_type"
);
// patch
let req = test::TestRequest::patch()
.uri("/keys/d0552b41536279a0ad88bd595327b96f01176a60c2243e906c52ac02375f9bc4")
.set_payload(content.to_string())
.insert_header(("content-type", "text/plain"))
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"The Content-Type `text/plain` is invalid. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "invalid_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#invalid_content_type"
);
}
#[actix_rt::test]
async fn error_api_key_empty_content_types() {
let content = json!({
"indexes": ["products"],
"actions": [
"documents.add"
],
"expiresAt": "2050-11-13T00:00:00Z"
});
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/keys")
.set_payload(content.to_string())
.insert_header(("content-type", ""))
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"The Content-Type `` is invalid. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "invalid_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#invalid_content_type"
);
// patch
let req = test::TestRequest::patch()
.uri("/keys/d0552b41536279a0ad88bd595327b96f01176a60c2243e906c52ac02375f9bc4")
.set_payload(content.to_string())
.insert_header(("content-type", ""))
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"The Content-Type `` is invalid. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "invalid_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#invalid_content_type"
);
}
#[actix_rt::test]
async fn error_api_key_missing_content_types() {
let content = json!({
"indexes": ["products"],
"actions": [
"documents.add"
],
"expiresAt": "2050-11-13T00:00:00Z"
});
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/keys")
.set_payload(content.to_string())
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"A Content-Type header is missing. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "missing_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#missing_content_type"
);
// patch
let req = test::TestRequest::patch()
.uri("/keys/d0552b41536279a0ad88bd595327b96f01176a60c2243e906c52ac02375f9bc4")
.set_payload(content.to_string())
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 415);
assert_eq!(
response["message"],
json!(
r#"A Content-Type header is missing. Accepted values for the Content-Type header are: `application/json`"#
)
);
assert_eq!(response["code"], "missing_content_type");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#missing_content_type"
);
}
#[actix_rt::test]
async fn error_api_key_empty_payload() {
let content = "";
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/keys")
.set_payload(content)
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(response["code"], json!("missing_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#missing_payload")
);
assert_eq!(response["message"], json!(r#"A json payload is missing."#));
// patch
let req = test::TestRequest::patch()
.uri("/keys/d0552b41536279a0ad88bd595327b96f01176a60c2243e906c52ac02375f9bc4")
.set_payload(content)
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(response["code"], json!("missing_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#missing_payload")
);
assert_eq!(response["message"], json!(r#"A json payload is missing."#));
}
#[actix_rt::test]
async fn error_api_key_malformed_payload() {
let content = r#"{"malormed": "payload""#;
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/keys")
.set_payload(content)
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
assert_eq!(
response["message"],
json!(
r#"The json payload provided is malformed. `EOF while parsing an object at line 1 column 22`."#
)
);
// patch
let req = test::TestRequest::patch()
.uri("/keys/d0552b41536279a0ad88bd595327b96f01176a60c2243e906c52ac02375f9bc4")
.set_payload(content)
.insert_header(("Authorization", "Bearer MASTER_KEY"))
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
assert_eq!(
response["message"],
json!(
r#"The json payload provided is malformed. `EOF while parsing an object at line 1 column 22`."#
)
);
}


@ -0,0 +1,584 @@
use crate::common::Server;
use ::time::format_description::well_known::Rfc3339;
use maplit::hashmap;
use once_cell::sync::Lazy;
use serde_json::{json, Value};
use std::collections::HashMap;
use time::{Duration, OffsetDateTime};
use super::authorization::{ALL_ACTIONS, AUTHORIZATIONS};
fn generate_tenant_token(
parent_uid: impl AsRef<str>,
parent_key: impl AsRef<str>,
mut body: HashMap<&str, Value>,
) -> String {
use jsonwebtoken::{encode, EncodingKey, Header};
let parent_uid = parent_uid.as_ref();
body.insert("apiKeyUid", json!(parent_uid));
encode(
&Header::default(),
&body,
&EncodingKey::from_secret(parent_key.as_ref().as_bytes()),
)
.unwrap()
}
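// A minimal usage sketch with hypothetical values: the claims are signed with
// the parent API key itself, and `apiKeyUid` ties the token back to that key
// so the server can re-derive the signing secret.
#[allow(dead_code)]
fn example_tenant_token() -> String {
let body = hashmap! {
"searchRules" => json!(["sales"]),
"exp" => Value::Null,
};
generate_tenant_token("hypothetical-uid", "hypothetical-parent-key", body)
}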
static DOCUMENTS: Lazy<Value> = Lazy::new(|| {
json!([
{
"title": "Shazam!",
"id": "287947",
"color": ["green", "blue"]
},
{
"title": "Captain Marvel",
"id": "299537",
"color": ["yellow", "blue"]
},
{
"title": "Escape Room",
"id": "522681",
"color": ["yellow", "red"]
},
{
"title": "How to Train Your Dragon: The Hidden World",
"id": "166428",
"color": ["green", "red"]
},
{
"title": "Glass",
"id": "450465",
"color": ["blue", "red"]
}
])
});
static INVALID_RESPONSE: Lazy<Value> = Lazy::new(|| {
json!({"message": "The provided API key is invalid.",
"code": "invalid_api_key",
"type": "auth",
"link": "https://docs.meilisearch.com/errors#invalid_api_key"
})
});
static ACCEPTED_KEYS: Lazy<Vec<Value>> = Lazy::new(|| {
vec![
json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["*"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
]
});
static REFUSED_KEYS: Lazy<Vec<Value>> = Lazy::new(|| {
vec![
// no search action
json!({
"indexes": ["*"],
"actions": ALL_ACTIONS.iter().cloned().filter(|a| *a != "search" && *a != "*").collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ALL_ACTIONS.iter().cloned().filter(|a| *a != "search" && *a != "*").collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
// bad index
json!({
"indexes": ["products"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["products"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
]
});
macro_rules! compute_authorized_search {
($tenant_tokens:expr, $filter:expr, $expected_count:expr) => {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
let index = server.index("sales");
let documents = DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(0).await;
index
.update_settings(json!({"filterableAttributes": ["color"]}))
.await;
index.wait_task(1).await;
drop(index);
for key_content in ACCEPTED_KEYS.iter() {
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(key_content.clone()).await;
assert_eq!(code, 201);
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
for tenant_token in $tenant_tokens.iter() {
let web_token = generate_tenant_token(&uid, &key, tenant_token.clone());
server.use_api_key(&web_token);
let index = server.index("sales");
index
.search(json!({ "filter": $filter }), |response, code| {
assert_eq!(
code, 200,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response, tenant_token, key_content
);
assert_eq!(
response["hits"].as_array().unwrap().len(),
$expected_count,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response,
tenant_token,
key_content
);
})
.await;
}
}
};
}
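// These scenario helpers are macros rather than async helper functions,
// presumably so each expansion can `.await` inline and assertion failures
// point back at the invoking test; every expansion boots a fresh
// auth-enabled server and re-seeds the `sales` index.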
macro_rules! compute_forbidden_search {
($tenant_tokens:expr, $parent_keys:expr) => {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
let index = server.index("sales");
let documents = DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(0).await;
drop(index);
for key_content in $parent_keys.iter() {
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(key_content.clone()).await;
assert_eq!(code, 201, "{:?}", response);
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
for tenant_token in $tenant_tokens.iter() {
let web_token = generate_tenant_token(&uid, &key, tenant_token.clone());
server.use_api_key(&web_token);
let index = server.index("sales");
index
.search(json!({}), |response, code| {
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response,
tenant_token,
key_content
);
assert_eq!(
code, 403,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response, tenant_token, key_content
);
})
.await;
}
}
};
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn search_authorized_simple_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => Value::Null
},
];
compute_authorized_search!(tenant_tokens, {}, 5);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn search_authorized_filter_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
// filter on sales should override filters on *
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_authorized_search!(tenant_tokens, {}, 3);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn filter_search_authorized_filter_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
// filter on sales should override filters on *
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_authorized_search!(tenant_tokens, "color = yellow", 1);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_search_token_forbidden_parent_key() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_forbidden_search!(tenant_tokens, REFUSED_KEYS);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_search_forbidden_token() {
let tenant_tokens = vec![
// bad index
hashmap! {
"searchRules" => json!({"products": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["products"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"products": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"products": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["products"]),
"exp" => Value::Null
},
// expired token
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
];
compute_forbidden_search!(tenant_tokens, ACCEPTED_KEYS);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_forbidden_routes() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(uid, key, tenant_token);
server.use_api_key(&web_token);
for ((method, route), actions) in AUTHORIZATIONS.iter() {
if !actions.contains("search") {
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_expired_parent_key() {
use std::{thread, time};
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::seconds(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(uid, key, tenant_token);
server.use_api_key(&web_token);
// test search request while parent_key is not expired
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
// wait until the key is expired.
thread::sleep(time::Duration::new(1, 0));
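// (std::thread::sleep blocks the executor thread instead of yielding to it;
// for a one-off one-second pause in a test this is tolerable, though
// tokio::time::sleep would be the non-blocking alternative.)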
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_modified_token() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["products"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(uid, key, tenant_token);
server.use_api_key(&web_token);
// test search request while web_token is valid
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let alt = generate_tenant_token(uid, key, tenant_token);
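// Splice the payload of `alt` between the original token's header and
// signature: the signature was computed over the old payload, so the server
// must reject the spliced token as tampered with.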
let altered_token = [
web_token.split('.').next().unwrap(),
alt.split('.').nth(1).unwrap(),
web_token.split('.').nth(2).unwrap(),
]
.join(".");
server.use_api_key(&altered_token);
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}


@ -1,31 +1,16 @@
use std::{
fmt::Write,
panic::{catch_unwind, resume_unwind, UnwindSafe},
time::Duration,
};
use actix_web::http::StatusCode;
use paste::paste;
use serde_json::{json, Value};
use tokio::time::sleep;
use urlencoding::encode;
use super::service::Service;
macro_rules! make_settings_test_routes {
($($name:ident),+) => {
$(paste! {
pub async fn [<update_$name>](&self, value: Value) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings/{}", self.uid, stringify!($name).replace("_", "-"));
self.service.post(url, value).await
}
pub async fn [<get_$name>](&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings/{}", self.uid, stringify!($name).replace("_", "-"));
self.service.get(url).await
}
})*
};
}
pub struct Index<'a> {
pub uid: String,
pub service: &'a Service,
@ -34,19 +19,19 @@ pub struct Index<'a> {
#[allow(dead_code)]
impl Index<'_> {
pub async fn get(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}", self.uid);
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.get(url).await
}
pub async fn load_test_set(&self) -> u64 {
let url = format!("/indexes/{}/documents", self.uid);
let url = format!("/indexes/{}/documents", encode(self.uid.as_ref()));
let (response, code) = self
.service
.post_str(url, include_str!("../assets/test_set.json"))
.await;
assert_eq!(code, 202);
let update_id = response["updateId"].as_i64().unwrap();
self.wait_update_id(update_id as u64).await;
let update_id = response["taskUid"].as_i64().unwrap();
self.wait_task(update_id as u64).await;
update_id as u64
}
@ -62,13 +47,13 @@ impl Index<'_> {
let body = json!({
"primaryKey": primary_key,
});
let url = format!("/indexes/{}", self.uid);
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.put(url, body).await
self.service.patch(url, body).await
}
pub async fn delete(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}", self.uid);
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.delete(url).await
}
@ -78,8 +63,12 @@ impl Index<'_> {
primary_key: Option<&str>,
) -> (Value, StatusCode) {
let url = match primary_key {
Some(key) => format!("/indexes/{}/documents?primaryKey={}", self.uid, key),
None => format!("/indexes/{}/documents", self.uid),
Some(key) => format!(
"/indexes/{}/documents?primaryKey={}",
encode(self.uid.as_ref()),
key
),
None => format!("/indexes/{}/documents", encode(self.uid.as_ref())),
};
self.service.post(url, documents).await
}
@ -90,101 +79,120 @@ impl Index<'_> {
primary_key: Option<&str>,
) -> (Value, StatusCode) {
let url = match primary_key {
Some(key) => format!("/indexes/{}/documents?primaryKey={}", self.uid, key),
None => format!("/indexes/{}/documents", self.uid),
Some(key) => format!(
"/indexes/{}/documents?primaryKey={}",
encode(self.uid.as_ref()),
key
),
None => format!("/indexes/{}/documents", encode(self.uid.as_ref())),
};
self.service.put(url, documents).await
}
pub async fn wait_update_id(&self, update_id: u64) -> Value {
// try 10 times to get status, or panic to not wait forever
let url = format!("/indexes/{}/updates/{}", self.uid, update_id);
for _ in 0..10 {
pub async fn wait_task(&self, update_id: u64) -> Value {
// try several times to get status, or panic to not wait forever
let url = format!("/tasks/{}", update_id);
for _ in 0..100 {
let (response, status_code) = self.service.get(&url).await;
assert_eq!(status_code, 200, "response: {}", response);
assert_eq!(200, status_code, "response: {}", response);
if response["status"] == "processed" || response["status"] == "failed" {
if response["status"] == "succeeded" || response["status"] == "failed" {
return response;
}
sleep(Duration::from_secs(1)).await;
// wait 0.5 seconds.
sleep(Duration::from_millis(500)).await;
}
panic!("Timeout waiting for update id");
}
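// With up to 100 polls spaced 500 ms apart, a task gets roughly 50 seconds to
// reach a terminal state ("succeeded" or "failed") before the test panics.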
pub async fn get_update(&self, update_id: u64) -> (Value, StatusCode) {
let url = format!("/indexes/{}/updates/{}", self.uid, update_id);
pub async fn get_task(&self, update_id: u64) -> (Value, StatusCode) {
let url = format!("/tasks/{}", update_id);
self.service.get(url).await
}
pub async fn list_updates(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/updates", self.uid);
pub async fn list_tasks(&self) -> (Value, StatusCode) {
let url = format!("/tasks?indexUid={}", self.uid);
self.service.get(url).await
}
pub async fn filtered_tasks(&self, type_: &[&str], status: &[&str]) -> (Value, StatusCode) {
let mut url = format!("/tasks?indexUid={}", self.uid);
if !type_.is_empty() {
let _ = write!(url, "&type={}", type_.join(","));
}
if !status.is_empty() {
let _ = write!(url, "&status={}", status.join(","));
}
self.service.get(url).await
}
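// Usage sketch: `filtered_tasks(&["documentAdditionOrUpdate"], &["succeeded"])`
// resolves to `/tasks?indexUid=<uid>&type=documentAdditionOrUpdate&status=succeeded`.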
pub async fn get_document(
&self,
id: u64,
_options: Option<GetDocumentOptions>,
options: Option<GetDocumentOptions>,
) -> (Value, StatusCode) {
let url = format!("/indexes/{}/documents/{}", self.uid, id);
let mut url = format!("/indexes/{}/documents/{}", encode(self.uid.as_ref()), id);
if let Some(fields) = options.and_then(|o| o.fields) {
let _ = write!(url, "?fields={}", fields.join(","));
}
self.service.get(url).await
}
pub async fn get_all_documents(&self, options: GetAllDocumentsOptions) -> (Value, StatusCode) {
let mut url = format!("/indexes/{}/documents?", self.uid);
let mut url = format!("/indexes/{}/documents?", encode(self.uid.as_ref()));
if let Some(limit) = options.limit {
url.push_str(&format!("limit={}&", limit));
let _ = write!(url, "limit={}&", limit);
}
if let Some(offset) = options.offset {
url.push_str(&format!("offset={}&", offset));
let _ = write!(url, "offset={}&", offset);
}
if let Some(attributes_to_retrieve) = options.attributes_to_retrieve {
url.push_str(&format!(
"attributesToRetrieve={}&",
attributes_to_retrieve.join(",")
));
let _ = write!(url, "fields={}&", attributes_to_retrieve.join(","));
}
self.service.get(url).await
}
pub async fn delete_document(&self, id: u64) -> (Value, StatusCode) {
let url = format!("/indexes/{}/documents/{}", self.uid, id);
let url = format!("/indexes/{}/documents/{}", encode(self.uid.as_ref()), id);
self.service.delete(url).await
}
pub async fn clear_all_documents(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/documents", self.uid);
let url = format!("/indexes/{}/documents", encode(self.uid.as_ref()));
self.service.delete(url).await
}
pub async fn delete_batch(&self, ids: Vec<u64>) -> (Value, StatusCode) {
let url = format!("/indexes/{}/documents/delete-batch", self.uid);
let url = format!(
"/indexes/{}/documents/delete-batch",
encode(self.uid.as_ref())
);
self.service
.post(url, serde_json::to_value(&ids).unwrap())
.await
}
pub async fn settings(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings", self.uid);
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.get(url).await
}
pub async fn update_settings(&self, settings: Value) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings", self.uid);
self.service.post(url, settings).await
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.patch(url, settings).await
}
pub async fn delete_settings(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings", self.uid);
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.delete(url).await
}
pub async fn stats(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/stats", self.uid);
let url = format!("/indexes/{}/stats", encode(self.uid.as_ref()));
self.service.get(url).await
}
@ -209,20 +217,38 @@ impl Index<'_> {
}
pub async fn search_post(&self, query: Value) -> (Value, StatusCode) {
let url = format!("/indexes/{}/search", self.uid);
let url = format!("/indexes/{}/search", encode(self.uid.as_ref()));
self.service.post(url, query).await
}
pub async fn search_get(&self, query: Value) -> (Value, StatusCode) {
let params = serde_url_params::to_string(&query).unwrap();
let url = format!("/indexes/{}/search?{}", self.uid, params);
let params = yaup::to_string(&query).unwrap();
let url = format!("/indexes/{}/search?{}", encode(self.uid.as_ref()), params);
self.service.get(url).await
}
make_settings_test_routes!(distinct_attribute);
pub async fn update_distinct_attribute(&self, value: Value) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings/{}",
encode(self.uid.as_ref()),
"distinct-attribute"
);
self.service.put(url, value).await
}
pub async fn get_distinct_attribute(&self) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings/{}",
encode(self.uid.as_ref()),
"distinct-attribute"
);
self.service.get(url).await
}
}
pub struct GetDocumentOptions;
pub struct GetDocumentOptions {
pub fields: Option<Vec<&'static str>>,
}
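// Usage sketch: passing `Some(GetDocumentOptions { fields: Some(vec!["title"]) })`
// to `get_document` above yields `GET /indexes/<uid>/documents/<id>?fields=title`.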
#[derive(Debug, Default)]
pub struct GetAllDocumentsOptions {


@ -3,7 +3,7 @@ pub mod server;
pub mod service;
pub use index::{GetAllDocumentsOptions, GetDocumentOptions};
pub use server::Server;
pub use server::{default_settings, Server};
/// Performs a search test on both post and get routes
#[macro_export]


@ -1,13 +1,16 @@
#![allow(dead_code)]
use clap::Parser;
use std::path::Path;
use actix_web::http::StatusCode;
use byte_unit::{Byte, ByteUnit};
use meilisearch_auth::AuthController;
use meilisearch_http::setup_meilisearch;
use meilisearch_lib::options::{IndexerOpts, MaxMemory};
use once_cell::sync::Lazy;
use serde_json::Value;
use tempfile::TempDir;
use urlencoding::encode;
use meilisearch_http::option::Opt;
@ -20,7 +23,7 @@ pub struct Server {
_dir: Option<TempDir>,
}
static TEST_TEMP_DIR: Lazy<TempDir> = Lazy::new(|| TempDir::new().unwrap());
pub static TEST_TEMP_DIR: Lazy<TempDir> = Lazy::new(|| TempDir::new().unwrap());
impl Server {
pub async fn new() -> Self {
@ -35,9 +38,12 @@ impl Server {
let options = default_settings(dir.path());
let meilisearch = setup_meilisearch(&options).unwrap();
let auth = AuthController::new(&options.db_path, &options.master_key).unwrap();
let service = Service {
meilisearch,
auth,
options,
api_key: None,
};
Server {
@ -46,29 +52,81 @@ impl Server {
}
}
pub async fn new_with_options(options: Opt) -> Self {
pub async fn new_auth_with_options(mut options: Opt, dir: TempDir) -> Self {
if cfg!(windows) {
std::env::set_var("TMP", TEST_TEMP_DIR.path());
} else {
std::env::set_var("TMPDIR", TEST_TEMP_DIR.path());
}
options.master_key = Some("MASTER_KEY".to_string());
let meilisearch = setup_meilisearch(&options).unwrap();
let auth = AuthController::new(&options.db_path, &options.master_key).unwrap();
let service = Service {
meilisearch,
auth,
options,
api_key: None,
};
Server {
service,
_dir: None,
_dir: Some(dir),
}
}
pub async fn new_auth() -> Self {
let dir = TempDir::new().unwrap();
let options = default_settings(dir.path());
Self::new_auth_with_options(options, dir).await
}
pub async fn new_with_options(options: Opt) -> Result<Self, anyhow::Error> {
let meilisearch = setup_meilisearch(&options)?;
let auth = AuthController::new(&options.db_path, &options.master_key)?;
let service = Service {
meilisearch,
auth,
options,
api_key: None,
};
Ok(Server {
service,
_dir: None,
})
}
/// Returns a view to an index. There is no guarantee that the index exists.
pub fn index(&self, uid: impl AsRef<str>) -> Index<'_> {
Index {
uid: encode(uid.as_ref()).to_string(),
uid: uid.as_ref().to_string(),
service: &self.service,
}
}
pub async fn list_indexes(&self) -> (Value, StatusCode) {
self.service.get("/indexes").await
pub async fn list_indexes(
&self,
offset: Option<usize>,
limit: Option<usize>,
) -> (Value, StatusCode) {
let (offset, limit) = (
offset.map(|offset| format!("offset={offset}")),
limit.map(|limit| format!("limit={limit}")),
);
let query_parameter = offset
.as_ref()
.zip(limit.as_ref())
.map(|(offset, limit)| format!("{offset}&{limit}"))
.or_else(|| offset.xor(limit));
if let Some(query_parameter) = query_parameter {
self.service
.get(format!("/indexes?{query_parameter}"))
.await
} else {
self.service.get("/indexes").await
}
}
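// Usage sketch: `list_indexes(Some(0), Some(20))` hits
// `GET /indexes?offset=0&limit=20`, while `list_indexes(None, None)` falls
// back to a plain `GET /indexes`.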
pub async fn version(&self) -> (Value, StatusCode) {
@ -78,39 +136,34 @@ impl Server {
pub async fn stats(&self) -> (Value, StatusCode) {
self.service.get("/stats").await
}
pub async fn tasks(&self) -> (Value, StatusCode) {
self.service.get("/tasks").await
}
pub async fn get_dump_status(&self, uid: &str) -> (Value, StatusCode) {
self.service.get(format!("/dumps/{}/status", uid)).await
}
}
pub fn default_settings(dir: impl AsRef<Path>) -> Opt {
Opt {
db_path: dir.as_ref().join("db"),
dumps_dir: dir.as_ref().join("dump"),
http_addr: "127.0.0.1:7700".to_owned(),
master_key: None,
env: "development".to_owned(),
#[cfg(all(not(debug_assertions), feature = "analytics"))]
no_analytics: true,
max_index_size: Byte::from_unit(4.0, ByteUnit::GiB).unwrap(),
max_udb_size: Byte::from_unit(4.0, ByteUnit::GiB).unwrap(),
max_index_size: Byte::from_unit(100.0, ByteUnit::MiB).unwrap(),
max_task_db_size: Byte::from_unit(1.0, ByteUnit::GiB).unwrap(),
http_payload_size_limit: Byte::from_unit(10.0, ByteUnit::MiB).unwrap(),
ssl_cert_path: None,
ssl_key_path: None,
ssl_auth_path: None,
ssl_ocsp_path: None,
ssl_require_auth: false,
ssl_resumption: false,
ssl_tickets: false,
import_snapshot: None,
ignore_missing_snapshot: false,
ignore_snapshot_if_db_exists: false,
snapshot_dir: ".".into(),
schedule_snapshot: false,
snapshot_interval_sec: 0,
import_dump: None,
indexer_options: IndexerOpts {
// memory has to be unlimited because several meilisearch are running in test context.
max_memory: MaxMemory::unlimited(),
..Default::default()
max_indexing_memory: MaxMemory::unlimited(),
..Parser::parse_from(None as Option<&str>)
},
log_level: "off".into(),
#[cfg(feature = "metrics")]
enable_metrics_route: true,
..Parser::parse_from(None as Option<&str>)
}
}
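// A hypothetical helper sketching the intended pattern: start from
// `default_settings` and override individual fields before booting a server;
// the TempDir is handed to the server so the database outlives this function.
#[allow(dead_code)]
async fn example_auth_server() -> Server {
let dir = TempDir::new().unwrap();
let mut options = default_settings(dir.path());
options.max_task_db_size = Byte::from_unit(2.0, ByteUnit::GiB).unwrap();
Server::new_auth_with_options(options, dir).await
}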


@ -1,4 +1,5 @@
use actix_web::{http::StatusCode, test};
use meilisearch_auth::AuthController;
use meilisearch_lib::MeiliSearch;
use serde_json::Value;
@ -6,23 +7,27 @@ use meilisearch_http::{analytics, create_app, Opt};
pub struct Service {
pub meilisearch: MeiliSearch,
pub auth: AuthController,
pub options: Opt,
pub api_key: Option<String>,
}
impl Service {
pub async fn post(&self, url: impl AsRef<str>, body: Value) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
&self.options,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let req = test::TestRequest::post()
.uri(url.as_ref())
.set_json(&body)
.to_request();
let mut req = test::TestRequest::post().uri(url.as_ref()).set_json(&body);
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
@ -39,17 +44,21 @@ impl Service {
) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
&self.options,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let req = test::TestRequest::post()
let mut req = test::TestRequest::post()
.uri(url.as_ref())
.set_payload(body.as_ref().to_string())
.insert_header(("content-type", "application/json"))
.to_request();
.insert_header(("content-type", "application/json"));
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
@ -61,13 +70,18 @@ impl Service {
pub async fn get(&self, url: impl AsRef<str>) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
&self.options,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let req = test::TestRequest::get().uri(url.as_ref()).to_request();
let mut req = test::TestRequest::get().uri(url.as_ref());
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
@ -79,16 +93,41 @@ impl Service {
pub async fn put(&self, url: impl AsRef<str>, body: Value) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
&self.options,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let req = test::TestRequest::put()
.uri(url.as_ref())
.set_json(&body)
.to_request();
let mut req = test::TestRequest::put().uri(url.as_ref()).set_json(&body);
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response = serde_json::from_slice(&body).unwrap_or_default();
(response, status_code)
}
pub async fn patch(&self, url: impl AsRef<str>, body: Value) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let mut req = test::TestRequest::patch().uri(url.as_ref()).set_json(&body);
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
@ -100,13 +139,18 @@ impl Service {
pub async fn delete(&self, url: impl AsRef<str>) -> (Value, StatusCode) {
let app = test::init_service(create_app!(
&self.meilisearch,
&self.auth,
true,
&self.options,
self.options,
analytics::MockAnalytics::new(&self.options).0
))
.await;
let req = test::TestRequest::delete().uri(url.as_ref()).to_request();
let mut req = test::TestRequest::delete().uri(url.as_ref());
if let Some(api_key) = &self.api_key {
req = req.insert_header(("Authorization", ["Bearer ", api_key].concat()));
}
let req = req.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();


@ -7,23 +7,45 @@ use actix_web::test;
use meilisearch_http::{analytics, create_app};
use serde_json::{json, Value};
enum HttpVerb {
Put,
Patch,
Post,
Get,
Delete,
}
impl HttpVerb {
fn test_request(&self) -> test::TestRequest {
match self {
HttpVerb::Put => test::TestRequest::put(),
HttpVerb::Patch => test::TestRequest::patch(),
HttpVerb::Post => test::TestRequest::post(),
HttpVerb::Get => test::TestRequest::get(),
HttpVerb::Delete => test::TestRequest::delete(),
}
}
}
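// Pairing each route with its verb lets the loop below run the same
// content-type checks against POST, PUT, and PATCH endpoints from a single
// request builder.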
#[actix_rt::test]
async fn error_json_bad_content_type() {
use HttpVerb::{Patch, Post, Put};
let routes = [
// all the POST routes except the dumps that can be created without any body or content-type
// all the routes except the dumps that can be created without any body or content-type
// and the search route, which is not strict json
"/indexes",
"/indexes/doggo/documents/delete-batch",
"/indexes/doggo/search",
"/indexes/doggo/settings",
"/indexes/doggo/settings/displayed-attributes",
"/indexes/doggo/settings/distinct-attribute",
"/indexes/doggo/settings/filterable-attributes",
"/indexes/doggo/settings/ranking-rules",
"/indexes/doggo/settings/searchable-attributes",
"/indexes/doggo/settings/sortable-attributes",
"/indexes/doggo/settings/stop-words",
"/indexes/doggo/settings/synonyms",
(Post, "/indexes"),
(Post, "/indexes/doggo/documents/delete-batch"),
(Post, "/indexes/doggo/search"),
(Patch, "/indexes/doggo/settings"),
(Put, "/indexes/doggo/settings/displayed-attributes"),
(Put, "/indexes/doggo/settings/distinct-attribute"),
(Put, "/indexes/doggo/settings/filterable-attributes"),
(Put, "/indexes/doggo/settings/ranking-rules"),
(Put, "/indexes/doggo/settings/searchable-attributes"),
(Put, "/indexes/doggo/settings/sortable-attributes"),
(Put, "/indexes/doggo/settings/stop-words"),
(Put, "/indexes/doggo/settings/synonyms"),
];
let bad_content_types = [
"application/csv",
@ -39,15 +61,17 @@ async fn error_json_bad_content_type() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
for route in routes {
for (verb, route) in routes {
// Good content-type, we probably have an error since we didn't send anything in the json
// so we only ensure we didn't get a bad media type error.
let req = test::TestRequest::post()
let req = verb
.test_request()
.uri(route)
.set_payload(document)
.insert_header(("content-type", "application/json"))
@ -58,7 +82,8 @@ async fn error_json_bad_content_type() {
"calling the route `{}` with a content-type of json isn't supposed to throw a bad media type error", route);
// No content-type.
let req = test::TestRequest::post()
let req = verb
.test_request()
.uri(route)
.set_payload(document)
.to_request();
@ -81,7 +106,8 @@ async fn error_json_bad_content_type() {
for bad_content_type in bad_content_types {
// Always bad content-type
let req = test::TestRequest::post()
let req = verb
.test_request()
.uri(route)
.set_payload(document.to_string())
.insert_header(("content-type", bad_content_type))
@ -110,3 +136,40 @@ async fn error_json_bad_content_type() {
}
}
}
#[actix_rt::test]
async fn extract_actual_content_type() {
let route = "/indexes/doggo/documents";
let documents = "[{}]";
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// Good content-type, we probably have an error since we didn't send anything in the json
// so we only ensure we didn't get a bad media type error.
let req = test::TestRequest::post()
.uri(route)
.set_payload(documents)
.insert_header(("content-type", "application/json; charset=utf-8"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
assert_ne!(status_code, 415,
"calling the route `{}` with a content-type of json isn't supposed to throw a bad media type error", route);
let req = test::TestRequest::put()
.uri(route)
.set_payload(documents)
.insert_header(("content-type", "application/json; charset=latin-1"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
assert_ne!(status_code, 415,
"calling the route `{}` with a content-type of json isn't supposed to throw a bad media type error", route);
}
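// Both requests above carry a parameter (`charset=...`) after
// `application/json`; they only avoid a 415 if the server extracts the bare
// media type before comparing, which is what these assertions check.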


@ -1,8 +1,9 @@
use crate::common::{GetAllDocumentsOptions, Server};
use actix_web::test;
use chrono::DateTime;
use meilisearch_http::{analytics, create_app};
use serde_json::{json, Value};
use time::{format_description::well_known::Rfc3339, OffsetDateTime};
/// This is the basic usage of our API and every other test uses the content-type application/json
#[actix_rt::test]
@ -18,8 +19,9 @@ async fn add_documents_test_json_content_types() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -34,7 +36,7 @@ async fn add_documents_test_json_content_types() {
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response, json!({ "updateId": 0 }));
assert_eq!(response["taskUid"], 0);
// put
let req = test::TestRequest::put()
@ -47,7 +49,52 @@ async fn add_documents_test_json_content_types() {
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response, json!({ "updateId": 1 }));
assert_eq!(response["taskUid"], 1);
}
/// Here we try to send a single document instead of an array with a single document inside.
#[actix_rt::test]
async fn add_single_document_test_json_content_types() {
let document = json!({
"id": 1,
"content": "Bouvier Bernois",
});
// this is what is expected and should work
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
// post
let req = test::TestRequest::post()
.uri("/indexes/dog/documents")
.set_payload(document.to_string())
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response["taskUid"], 0);
// put
let req = test::TestRequest::put()
.uri("/indexes/dog/documents")
.set_payload(document.to_string())
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response["taskUid"], 1);
}
/// any other content-type must be refused
@ -63,8 +110,9 @@ async fn error_add_documents_test_bad_content_types() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -130,8 +178,9 @@ async fn error_add_documents_test_no_content_type() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -189,8 +238,9 @@ async fn error_add_malformed_csv_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -208,7 +258,7 @@ async fn error_add_malformed_csv_documents() {
assert_eq!(
response["message"],
json!(
r#"The `csv` payload provided is malformed. `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
r#"The `csv` payload provided is malformed: `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
@ -232,7 +282,7 @@ async fn error_add_malformed_csv_documents() {
assert_eq!(
response["message"],
json!(
r#"The `csv` payload provided is malformed. `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
r#"The `csv` payload provided is malformed: `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
@ -250,8 +300,9 @@ async fn error_add_malformed_json_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -302,6 +353,56 @@ async fn error_add_malformed_json_documents() {
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
// truncate
// length = 100
let long = "0123456789".repeat(10);
let document = format!("\"{}\"", long);
let req = test::TestRequest::put()
.uri("/indexes/dog/documents")
.set_payload(document)
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(
response["message"],
json!(
r#"The `json` payload provided is malformed. `Couldn't serialize document value: data did not match any variant of untagged enum Either`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
// add one more char to the long string to test that the truncation works.
let document = format!("\"{}m\"", long);
let req = test::TestRequest::put()
.uri("/indexes/dog/documents")
.set_payload(document)
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let status_code = res.status();
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(
response["message"],
json!("The `json` payload provided is malformed. `Couldn't serialize document value: data did not match any variant of untagged enum Either`.")
);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
}
#[actix_rt::test]
@ -311,8 +412,9 @@ async fn error_add_malformed_ndjson_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -330,7 +432,7 @@ async fn error_add_malformed_ndjson_documents() {
assert_eq!(
response["message"],
json!(
r#"The `ndjson` payload provided is malformed. `Couldn't serialize document value: key must be a string at line 1 column 2`."#
r#"The `ndjson` payload provided is malformed. `Couldn't serialize document value: key must be a string at line 2 column 2`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
@ -353,9 +455,7 @@ async fn error_add_malformed_ndjson_documents() {
assert_eq!(status_code, 400);
assert_eq!(
response["message"],
json!(
r#"The `ndjson` payload provided is malformed. `Couldn't serialize document value: key must be a string at line 1 column 2`."#
)
json!("The `ndjson` payload provided is malformed. `Couldn't serialize document value: key must be a string at line 2 column 2`.")
);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
@ -372,8 +472,9 @@ async fn error_add_missing_payload_csv_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -423,8 +524,9 @@ async fn error_add_missing_payload_json_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -474,8 +576,9 @@ async fn error_add_missing_payload_ndjson_documents() {
let server = Server::new().await;
let app = test::init_service(create_app!(
&server.service.meilisearch,
&server.service.auth,
true,
&server.service.options,
server.service.options,
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
@ -538,7 +641,7 @@ async fn add_documents_no_index_creation() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
assert_eq!(response["updateId"], 0);
assert_eq!(response["taskUid"], 0);
/*
* currently we don't check these fields to stay ISO with meilisearch
* assert_eq!(response["status"], "pending");
@ -548,22 +651,23 @@ async fn add_documents_no_index_creation() {
* assert!(response.get("enqueuedAt").is_some());
*/
index.wait_update_id(0).await;
index.wait_task(0).await;
let (response, code) = index.get_update(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["updateId"], 0);
assert_eq!(response["type"]["name"], "DocumentsAddition");
assert_eq!(response["type"]["number"], 1);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["receivedDocuments"], 1);
assert_eq!(response["details"]["indexedDocuments"], 1);
let processed_at =
DateTime::parse_from_rfc3339(response["processedAt"].as_str().unwrap()).unwrap();
OffsetDateTime::parse(response["finishedAt"].as_str().unwrap(), &Rfc3339).unwrap();
let enqueued_at =
DateTime::parse_from_rfc3339(response["enqueuedAt"].as_str().unwrap()).unwrap();
OffsetDateTime::parse(response["enqueuedAt"].as_str().unwrap(), &Rfc3339).unwrap();
assert!(processed_at > enqueued_at);
// index was created, and primary key was infered.
// index was created, and primary key was inferred.
let (response, code) = index.get().await;
assert_eq!(code, 200);
assert_eq!(response["primaryKey"], "id");
@ -573,34 +677,34 @@ async fn add_documents_no_index_creation() {
async fn error_document_add_create_index_bad_uid() {
let server = Server::new().await;
let index = server.index("883 fj!");
let (response, code) = index.add_documents(json!([]), None).await;
let (response, code) = index.add_documents(json!([{"id": 1}]), None).await;
let expected_response = json!({
"message": "`883 fj!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `883 fj!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"
});
assert_eq!(response, expected_response);
assert_eq!(code, 400);
assert_eq!(response, expected_response);
}
#[actix_rt::test]
async fn error_document_update_create_index_bad_uid() {
let server = Server::new().await;
let index = server.index("883 fj!");
let (response, code) = index.update_documents(json!([]), None).await;
let (response, code) = index.update_documents(json!([{"id": 1}]), None).await;
let expected_response = json!({
"message": "`883 fj!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `883 fj!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"
});
assert_eq!(response, expected_response);
assert_eq!(code, 400);
assert_eq!(response, expected_response);
}
#[actix_rt::test]
@ -617,14 +721,15 @@ async fn document_addition_with_primary_key() {
let (response, code) = index.add_documents(documents, Some("primary")).await;
assert_eq!(code, 202, "response: {}", response);
index.wait_update_id(0).await;
index.wait_task(0).await;
let (response, code) = index.get_update(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["updateId"], 0);
assert_eq!(response["type"]["name"], "DocumentsAddition");
assert_eq!(response["type"]["number"], 1);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["receivedDocuments"], 1);
assert_eq!(response["details"]["indexedDocuments"], 1);
let (response, code) = index.get().await;
assert_eq!(code, 200);
@ -645,14 +750,15 @@ async fn document_update_with_primary_key() {
let (_response, code) = index.update_documents(documents, Some("primary")).await;
assert_eq!(code, 202);
index.wait_update_id(0).await;
index.wait_task(0).await;
let (response, code) = index.get_update(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["updateId"], 0);
assert_eq!(response["type"]["name"], "DocumentsPartial");
assert_eq!(response["type"]["number"], 1);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 1);
assert_eq!(response["details"]["receivedDocuments"], 1);
let (response, code) = index.get().await;
assert_eq!(code, 200);
@ -674,7 +780,7 @@ async fn replace_document() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202, "response: {}", response);
index.wait_update_id(0).await;
index.wait_task(0).await;
let documents = json!([
{
@ -686,11 +792,11 @@ async fn replace_document() {
let (_response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_update_id(1).await;
index.wait_task(1).await;
let (response, code) = index.get_update(1).await;
let (response, code) = index.get_task(1).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["status"], "succeeded");
let (response, code) = index.get_document(1, None).await;
assert_eq!(code, 200);
@ -698,20 +804,11 @@ async fn replace_document() {
}
#[actix_rt::test]
async fn error_add_no_documents() {
async fn add_no_documents() {
let server = Server::new().await;
let index = server.index("test");
let (response, code) = index.add_documents(json!([]), None).await;
let expected_response = json!({
"message": "The `json` payload must contain at least one document.",
"code": "malformed_payload",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#malformed_payload"
});
assert_eq!(response, expected_response);
assert_eq!(code, 400);
let (_response, code) = index.add_documents(json!([]), None).await;
assert_eq!(code, 202);
}
#[actix_rt::test]
@ -729,7 +826,7 @@ async fn update_document() {
let (_response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_update_id(0).await;
index.wait_task(0).await;
let documents = json!([
{
@ -741,11 +838,11 @@ async fn update_document() {
let (response, code) = index.update_documents(documents, None).await;
assert_eq!(code, 202, "response: {}", response);
index.wait_update_id(1).await;
index.wait_task(1).await;
let (response, code) = index.get_update(1).await;
let (response, code) = index.get_task(1).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["status"], "succeeded");
let (response, code) = index.get_document(1, None).await;
assert_eq!(code, 200);
@ -760,19 +857,20 @@ async fn add_larger_dataset() {
let server = Server::new().await;
let index = server.index("test");
let update_id = index.load_test_set().await;
let (response, code) = index.get_update(update_id).await;
let (response, code) = index.get_task(update_id).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "processed");
assert_eq!(response["type"]["name"], "DocumentsAddition");
assert_eq!(response["type"]["number"], 77);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 77);
assert_eq!(response["details"]["receivedDocuments"], 77);
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
limit: Some(1000),
..Default::default()
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 77);
assert_eq!(code, 200, "failed with `{}`", response);
assert_eq!(response["results"].as_array().unwrap().len(), 77);
}
#[actix_rt::test]
@ -781,11 +879,11 @@ async fn update_larger_dataset() {
let index = server.index("test");
let documents = serde_json::from_str(include_str!("../assets/test_set.json")).unwrap();
index.update_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
index.wait_task(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["type"]["name"], "DocumentsPartial");
assert_eq!(response["type"]["number"], 77);
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 77);
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
limit: Some(1000),
@ -793,7 +891,7 @@ async fn update_larger_dataset() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 77);
assert_eq!(response["results"].as_array().unwrap().len(), 77);
}
#[actix_rt::test]
@ -808,15 +906,20 @@ async fn error_add_documents_bad_document_id() {
}
]);
index.add_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
index.wait_task(1).await;
let (response, code) = index.get_task(1).await;
assert_eq!(code, 200);
assert_eq!(response["status"], json!("failed"));
assert_eq!(response["message"], json!("Document identifier `foo & bar` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."));
assert_eq!(response["code"], json!("invalid_document_id"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
response["error"]["message"],
json!(
r#"Document identifier `"foo & bar"` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."#
)
);
assert_eq!(response["error"]["code"], json!("invalid_document_id"));
assert_eq!(response["error"]["type"], json!("invalid_request"));
assert_eq!(
response["error"]["link"],
json!("https://docs.meilisearch.com/errors#invalid_document_id")
);
}
@ -833,15 +936,18 @@ async fn error_update_documents_bad_document_id() {
}
]);
index.update_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
assert_eq!(code, 200);
let response = index.wait_task(1).await;
assert_eq!(response["status"], json!("failed"));
assert_eq!(response["message"], json!("Document identifier `foo & bar` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."));
assert_eq!(response["code"], json!("invalid_document_id"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
response["error"]["message"],
json!(
r#"Document identifier `"foo & bar"` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."#
)
);
assert_eq!(response["error"]["code"], json!("invalid_document_id"));
assert_eq!(response["error"]["type"], json!("invalid_request"));
assert_eq!(
response["error"]["link"],
json!("https://docs.meilisearch.com/errors#invalid_document_id")
);
}
@ -858,18 +964,18 @@ async fn error_add_documents_missing_document_id() {
}
]);
index.add_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
index.wait_task(1).await;
let (response, code) = index.get_task(1).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "failed");
assert_eq!(
response["message"],
response["error"]["message"],
json!(r#"Document doesn't have a `docid` attribute: `{"id":"11","content":"foobar"}`."#)
);
assert_eq!(response["code"], json!("missing_document_id"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(response["error"]["code"], json!("missing_document_id"));
assert_eq!(response["error"]["type"], json!("invalid_request"));
assert_eq!(
response["link"],
response["error"]["link"],
json!("https://docs.meilisearch.com/errors#missing_document_id")
);
}
@ -886,18 +992,16 @@ async fn error_update_documents_missing_document_id() {
}
]);
index.update_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
assert_eq!(code, 200);
let response = index.wait_task(1).await;
assert_eq!(response["status"], "failed");
assert_eq!(
response["message"],
response["error"]["message"],
r#"Document doesn't have a `docid` attribute: `{"id":"11","content":"foobar"}`."#
);
assert_eq!(response["code"], "missing_document_id");
assert_eq!(response["type"], "invalid_request");
assert_eq!(response["error"]["code"], "missing_document_id");
assert_eq!(response["error"]["type"], "invalid_request");
assert_eq!(
response["link"],
response["error"]["link"],
"https://docs.meilisearch.com/errors#missing_document_id"
);
}
@ -922,50 +1026,43 @@ async fn error_document_field_limit_reached() {
let (_response, code) = index.update_documents(documents, Some("id")).await;
assert_eq!(code, 202);
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
index.wait_task(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
// Documents without a primary key are not accepted.
assert_eq!(response["status"], "failed");
assert_eq!(
response["message"],
"A document cannot contain more than 65,535 fields."
);
assert_eq!(response["code"], "document_fields_limit_reached");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#document_fields_limit_reached"
);
let expected_error = json!({
"message": "A document cannot contain more than 65,535 fields.",
"code": "document_fields_limit_reached",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#document_fields_limit_reached"
});
assert_eq!(response["error"], expected_error);
}
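The hunk above elides how the oversized document is built; a plausible construction, assuming the 65,535-field limit named in the error is inclusive (so one extra field triggers the failure):

// Hypothetical builder for a document exceeding the 65,535-field limit.
let mut doc = serde_json::Map::new();
doc.insert("id".to_string(), serde_json::json!(1));
for i in 0..65_536 {
    doc.insert(format!("field_{i}"), serde_json::json!("value"));
}
let documents = serde_json::json!([doc]);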
#[actix_rt::test]
#[ignore] // TODO: Fix in another PR: this does not provoke any error.
async fn error_add_documents_invalid_geo_field() {
async fn add_documents_invalid_geo_field() {
let server = Server::new().await;
let index = server.index("test");
index.create(Some("id")).await;
index
.update_settings(json!({"sortableAttributes": ["_geo"]}))
.await;
let documents = json!([
{
"id": "11",
"_geo": "foobar"
}
]);
index.add_documents(documents, None).await;
index.wait_update_id(0).await;
let (response, code) = index.get_update(0).await;
index.wait_task(2).await;
let (response, code) = index.get_task(2).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "failed");
assert_eq!(
response["message"],
r#"The document with the id: `11` contains an invalid _geo field: :syntaxErrorHelper:REPLACE_ME."#
);
assert_eq!(response["code"], "invalid_geo_field");
assert_eq!(response["type"], "invalid_request");
assert_eq!(
response["link"],
"https://docs.meilisearch.com/errors#invalid_geo_field"
);
}
#[actix_rt::test]
@ -993,3 +1090,113 @@ async fn error_add_documents_payload_size() {
assert_eq!(response, expected_response);
assert_eq!(code, 413);
}
#[actix_rt::test]
async fn error_primary_key_inference() {
let server = Server::new().await;
let index = server.index("test");
let documents = json!([
{
"title": "11",
"desc": "foobar"
}
]);
index.add_documents(documents, None).await;
index.wait_task(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "failed");
let expected_error = json!({
"message": r#"The primary key inference process failed because the engine did not find any fields containing `id` substring in their name. If your document identifier does not contain any `id` substring, you can set the primary key of the index."#,
"code": "primary_key_inference_failed",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#primary_key_inference_failed"
});
assert_eq!(response["error"], expected_error);
}
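The error message above describes the inference rule: look for a field whose name contains the substring `id`. A minimal sketch of that rule (the exact matching logic, e.g. case handling and tie-breaking, is an assumption):

// Hypothetical primary-key inference: first attribute containing "id".
fn infer_primary_key<'a>(mut attributes: impl Iterator<Item = &'a str>) -> Option<&'a str> {
    attributes.find(|name| name.to_lowercase().contains("id"))
}

// `title` and `desc` contain no `id` substring, hence the failed task above.
assert_eq!(infer_primary_key(["title", "desc"].into_iter()), None);
assert_eq!(infer_primary_key(["title", "product_id"].into_iter()), Some("product_id"));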
#[actix_rt::test]
async fn add_documents_with_primary_key_twice() {
let server = Server::new().await;
let index = server.index("test");
let documents = json!([
{
"title": "11",
"desc": "foobar"
}
]);
index.add_documents(documents.clone(), Some("title")).await;
index.wait_task(0).await;
let (response, _code) = index.get_task(0).await;
assert_eq!(response["status"], "succeeded");
index.add_documents(documents, Some("title")).await;
index.wait_task(1).await;
let (response, _code) = index.get_task(1).await;
assert_eq!(response["status"], "succeeded");
}
#[actix_rt::test]
async fn batch_several_documents_addition() {
let server = Server::new().await;
let index = server.index("test");
let mut documents: Vec<_> = (0..150usize)
.into_iter()
.map(|id| {
json!(
{
"id": id,
"title": "foo",
"desc": "bar"
}
)
})
.collect();
documents[100] = json!({"title": "error", "desc": "error"});
// enqueue batch of documents
let mut waiter = Vec::new();
for chunk in documents.chunks(30) {
waiter.push(index.add_documents(json!(chunk), Some("id")));
}
// wait for the first batch of documents to finish
futures::future::join_all(waiter).await;
index.wait_task(4).await;
// run a second batch in which four of the five chunks fail
documents[40] = json!({"title": "error", "desc": "error"});
documents[70] = json!({"title": "error", "desc": "error"});
documents[130] = json!({"title": "error", "desc": "error"});
let mut waiter = Vec::new();
for chunk in documents.chunks(30) {
waiter.push(index.add_documents(json!(chunk), Some("id")));
}
// wait for the second batch of documents to finish
futures::future::join_all(waiter).await;
index.wait_task(9).await;
let (response, _code) = index.filtered_tasks(&[], &["failed"]).await;
// Check that exactly 5 tasks failed in total (see the breakdown after this test)
println!("{}", &response);
assert_eq!(response["results"].as_array().unwrap().len(), 5);
// Check that there are exactly 120 documents (150 - 30) in the index.
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
limit: Some(200),
..Default::default()
})
.await;
assert_eq!(code, 200, "failed with `{}`", response);
assert_eq!(response["results"].as_array().unwrap().len(), 120);
}
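Why these numbers hold, reasoning from the chunk layout above rather than from any external source: 150 documents in chunks of 30 yield tasks 0..=4, and the id-less document at index 100 falls in the chunk covering 90..120, so exactly one first-batch task fails. The second run (tasks 5..=9) has broken documents at indices 40, 70, 100 and 130, touching four of the five chunks, so four more tasks fail: 5 in total, which is what the `filtered_tasks` assertion checks. The index keeps the 120 documents from the four successful first-batch chunks (0..90 and 120..150); the second batch's one successful chunk (0..30) only replaces documents that already exist.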

View File

@ -5,8 +5,13 @@ use crate::common::{GetAllDocumentsOptions, Server};
#[actix_rt::test]
async fn delete_one_document_unexisting_index() {
let server = Server::new().await;
let (_response, code) = server.index("test").delete_document(0).await;
assert_eq!(code, 404);
let index = server.index("test");
let (_response, code) = index.delete_document(0).await;
assert_eq!(code, 202);
let response = index.wait_task(0).await;
assert_eq!(response["status"], "failed");
}
#[actix_rt::test]
@ -16,8 +21,8 @@ async fn delete_one_unexisting_document() {
index.create(None).await;
let (response, code) = index.delete_document(0).await;
assert_eq!(code, 202, "{}", response);
let update = index.wait_update_id(0).await;
assert_eq!(update["status"], "processed");
let update = index.wait_task(0).await;
assert_eq!(update["status"], "succeeded");
}
#[actix_rt::test]
@ -27,10 +32,10 @@ async fn delete_one_document() {
index
.add_documents(json!([{ "id": 0, "content": "foobar" }]), None)
.await;
index.wait_update_id(0).await;
index.wait_task(0).await;
let (_response, code) = server.index("test").delete_document(0).await;
assert_eq!(code, 202);
index.wait_update_id(1).await;
index.wait_task(1).await;
let (_response, code) = index.get_document(0, None).await;
assert_eq!(code, 404);
@ -39,8 +44,13 @@ async fn delete_one_document() {
#[actix_rt::test]
async fn clear_all_documents_unexisting_index() {
let server = Server::new().await;
let (_response, code) = server.index("test").clear_all_documents().await;
assert_eq!(code, 404);
let index = server.index("test");
let (_response, code) = index.clear_all_documents().await;
assert_eq!(code, 202);
let response = index.wait_task(0).await;
assert_eq!(response["status"], "failed");
}
#[actix_rt::test]
@ -53,16 +63,16 @@ async fn clear_all_documents() {
None,
)
.await;
index.wait_update_id(0).await;
index.wait_task(0).await;
let (_response, code) = index.clear_all_documents().await;
assert_eq!(code, 202);
let _update = index.wait_update_id(1).await;
let _update = index.wait_task(1).await;
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response.as_array().unwrap().is_empty());
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@ -74,26 +84,31 @@ async fn clear_all_documents_empty_index() {
let (_response, code) = index.clear_all_documents().await;
assert_eq!(code, 202);
let _update = index.wait_update_id(0).await;
let _update = index.wait_task(0).await;
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response.as_array().unwrap().is_empty());
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
async fn error_delete_batch_unexisting_index() {
let server = Server::new().await;
let (response, code) = server.index("test").delete_batch(vec![]).await;
let index = server.index("test");
let (_, code) = index.delete_batch(vec![]).await;
let expected_response = json!({
"message": "Index `test` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
});
assert_eq!(code, 404);
assert_eq!(response, expected_response);
assert_eq!(code, 202);
let response = index.wait_task(0).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_response);
}
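The `unexisting index` tests above share one pattern: the write is accepted with 202 and the failure only surfaces once the task settles. A hedged helper capturing that pattern (the helper itself is hypothetical; the `status` and `error.code` fields are taken from the assertions above):

use serde_json::Value;

// Hypothetical assertion helper for writes against a missing index:
// the request is enqueued, and the settled task carries the error.
fn assert_index_not_found(task: &Value) {
    assert_eq!(task["status"], "failed");
    assert_eq!(task["error"]["code"], "index_not_found");
}

// Usage sketch: assert_index_not_found(&index.wait_task(0).await);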
#[actix_rt::test]
@ -101,17 +116,17 @@ async fn delete_batch() {
let server = Server::new().await;
let index = server.index("test");
index.add_documents(json!([{ "id": 1, "content": "foobar" }, { "id": 0, "content": "foobar" }, { "id": 3, "content": "foobar" }]), Some("id")).await;
index.wait_update_id(0).await;
index.wait_task(0).await;
let (_response, code) = index.delete_batch(vec![1, 0]).await;
assert_eq!(code, 202);
let _update = index.wait_update_id(1).await;
let _update = index.wait_task(1).await;
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 1);
assert_eq!(response.as_array().unwrap()[0]["id"], 3);
assert_eq!(response["results"].as_array().unwrap().len(), 1);
assert_eq!(response["results"][0]["id"], json!(3));
}
#[actix_rt::test]
@ -119,14 +134,14 @@ async fn delete_no_document_batch() {
let server = Server::new().await;
let index = server.index("test");
index.add_documents(json!([{ "id": 1, "content": "foobar" }, { "id": 0, "content": "foobar" }, { "id": 3, "content": "foobar" }]), Some("id")).await;
index.wait_update_id(0).await;
index.wait_task(0).await;
let (_response, code) = index.delete_batch(vec![]).await;
assert_eq!(code, 202, "{}", _response);
let _update = index.wait_update_id(1).await;
let _update = index.wait_task(1).await;
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 3);
assert_eq!(response["results"].as_array().unwrap().len(), 3);
}

View File

@ -1,5 +1,4 @@
use crate::common::GetAllDocumentsOptions;
use crate::common::Server;
use crate::common::{GetAllDocumentsOptions, GetDocumentOptions, Server};
use serde_json::json;
@ -17,6 +16,7 @@ async fn error_get_unexisting_document() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
index.wait_task(0).await;
let (response, code) = index.get_document(1, None).await;
let expected_response = json!({
@ -38,19 +38,51 @@ async fn get_document() {
let documents = serde_json::json!([
{
"id": 0,
"content": "foobar",
"nested": { "content": "foobar" },
}
]);
let (_, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_update_id(0).await;
index.wait_task(1).await;
let (response, code) = index.get_document(0, None).await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!( {
serde_json::json!({
"id": 0,
"content": "foobar",
"nested": { "content": "foobar" },
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["id"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!({
"id": 0,
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["nested.content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!({
"nested": { "content": "foobar" },
})
);
}
@ -75,17 +107,19 @@ async fn error_get_unexisting_index_all_documents() {
}
#[actix_rt::test]
async fn get_no_documents() {
async fn get_no_document() {
let server = Server::new().await;
let index = server.index("test");
let (_, code) = index.create(None).await;
assert_eq!(code, 201);
assert_eq!(code, 202);
index.wait_task(0).await;
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response.as_array().unwrap().is_empty());
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@ -98,7 +132,7 @@ async fn get_all_documents_no_options() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
let arr = response.as_array().unwrap();
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 20);
let first = serde_json::json!({
"id":0,
@ -134,8 +168,11 @@ async fn test_get_all_documents_limit() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 5);
assert_eq!(response.as_array().unwrap()[0]["id"], 0);
assert_eq!(response["results"].as_array().unwrap().len(), 5);
assert_eq!(response["results"][0]["id"], json!(0));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(5));
assert_eq!(response["total"], json!(77));
}
#[actix_rt::test]
@ -151,8 +188,11 @@ async fn test_get_all_documents_offset() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(response.as_array().unwrap()[0]["id"], 13);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
assert_eq!(response["results"][0]["id"], json!(5));
assert_eq!(response["offset"], json!(5));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
}
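Taken together, the two pagination tests above imply an invariant (an inference from these assertions, not a documented guarantee): a page holds min(limit, total - offset) results, while `offset`, `limit` and `total` are echoed back verbatim.

// Expected page size under the inferred invariant.
fn expected_page_len(total: usize, offset: usize, limit: usize) -> usize {
    limit.min(total.saturating_sub(offset))
}

assert_eq!(expected_page_len(77, 0, 5), 5);   // test_get_all_documents_limit
assert_eq!(expected_page_len(77, 5, 20), 20); // test_get_all_documents_offset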
#[actix_rt::test]
@ -168,20 +208,14 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
1
);
assert!(response.as_array().unwrap()[0]
.as_object()
.unwrap()
.get("name")
.is_some());
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 1);
assert!(results["name"] != json!(null));
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@ -190,15 +224,13 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
0
);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 0);
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@ -207,15 +239,13 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
0
);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 0);
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@ -224,15 +254,12 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
2
);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 2);
assert!(results["name"] != json!(null));
assert!(results["tags"] != json!(null));
}
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@ -241,15 +268,10 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
16
);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 16);
}
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@ -258,19 +280,99 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 16);
}
}
#[actix_rt::test]
async fn get_document_s_nested_attributes_to_retrieve() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
let documents = json!([
{
"id": 0,
"content.truc": "foobar",
},
{
"id": 1,
"content": {
"truc": "foobar",
"machin": "bidule",
},
},
]);
let (_, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_task(1).await;
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(response, json!({}));
let (response, code) = index
.get_document(
1,
Some(GetDocumentOptions {
fields: Some(vec!["content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
16
response,
json!({
"content": {
"truc": "foobar",
"machin": "bidule",
},
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["content.truc"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
json!({
"content.truc": "foobar",
})
);
let (response, code) = index
.get_document(
1,
Some(GetDocumentOptions {
fields: Some(vec!["content.truc"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
json!({
"content": {
"truc": "foobar",
},
})
);
}
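The nested-attribute tests above pin down two field-selection rules: a literal key containing a dot wins over path traversal, and a dotted path rebuilds the surrounding nesting around the selected leaf. A hedged sketch of a selector with those behaviours (names and edge-case handling are assumptions, not the engine's actual implementation); the first assertion above also shows that an unmatched selector yields `{}` rather than an error, where this sketch returns `None`:

use serde_json::{json, Value};

// Hypothetical field selector reproducing the behaviour asserted above.
fn select_field(doc: &Value, path: &str) -> Option<Value> {
    // 1. A literal key such as "content.truc" takes precedence.
    if let Some(v) = doc.get(path) {
        return Some(json!({ path: v.clone() }));
    }
    // 2. Otherwise walk the dotted path...
    let mut node = doc;
    for segment in path.split('.') {
        node = node.get(segment)?;
    }
    // ...and rebuild the nesting around the selected leaf.
    let mut out = node.clone();
    for segment in path.split('.').rev() {
        out = json!({ segment: out });
    }
    Some(out)
}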
#[actix_rt::test]
async fn get_documents_displayed_attributes() {
async fn get_documents_displayed_attributes_is_ignored() {
let server = Server::new().await;
let index = server.index("test");
index
@ -282,23 +384,19 @@ async fn get_documents_displayed_attributes() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response.as_array().unwrap().len(), 20);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
assert_eq!(
response.as_array().unwrap()[0]
.as_object()
.unwrap()
.keys()
.count(),
1
response["results"][0].as_object().unwrap().keys().count(),
16
);
assert!(response.as_array().unwrap()[0]
.as_object()
.unwrap()
.get("gender")
.is_some());
assert!(response["results"][0]["gender"] != json!(null));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index.get_document(0, None).await;
assert_eq!(code, 200);
assert_eq!(response.as_object().unwrap().keys().count(), 1);
assert_eq!(response.as_object().unwrap().keys().count(), 16);
assert!(response.as_object().unwrap().get("gender").is_some());
}

View File

@ -0,0 +1,73 @@
use std::path::PathBuf;
use manifest_dir_macros::exist_relative_path;
pub enum GetDump {
MoviesRawV1,
MoviesWithSettingsV1,
RubyGemsWithSettingsV1,
MoviesRawV2,
MoviesWithSettingsV2,
RubyGemsWithSettingsV2,
MoviesRawV3,
MoviesWithSettingsV3,
RubyGemsWithSettingsV3,
MoviesRawV4,
MoviesWithSettingsV4,
RubyGemsWithSettingsV4,
TestV5,
}
impl GetDump {
pub fn path(&self) -> PathBuf {
match self {
GetDump::MoviesRawV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_movies.dump").into()
}
GetDump::MoviesWithSettingsV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_movies.dump").into()
}
GetDump::MoviesWithSettingsV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_movies.dump").into()
}
GetDump::MoviesWithSettingsV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_movies.dump").into()
}
GetDump::MoviesWithSettingsV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_rubygems_with_settings.dump").into()
}
GetDump::TestV5 => {
exist_relative_path!("tests/assets/v5_v0.28.0_test_dump.dump").into()
}
}
}
}

View File

@ -0,0 +1,677 @@
mod data;
use crate::common::{default_settings, GetAllDocumentsOptions, Server};
use meilisearch_http::Opt;
use serde_json::json;
use self::data::GetDump;
// all the following tests are ignored on Windows. See #2364
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v1() {
let temp = tempfile::tempdir().unwrap();
for path in [
GetDump::MoviesRawV1.path(),
GetDump::MoviesWithSettingsV1.path(),
GetDump::RubyGemsWithSettingsV1.path(),
] {
let options = Opt {
import_dump: Some(path),
..default_settings(temp.path())
};
let error = Server::new_with_options(options)
.await
.map(|_| ())
.unwrap_err();
assert_eq!(error.to_string(), "The version 1 of the dumps is not supported anymore. You can re-export your dump from a version between 0.21 and 0.24, or start fresh from a version 0.25 onwards.");
}
}
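A sketch of the version gate this test exercises, assuming from the fixtures above that v2 through v5 dumps remain importable (the function is hypothetical; only the v1 error message is taken verbatim from the assertion):

// Hypothetical dump-version gate matching the error asserted above.
fn dump_version_supported(version: u32) -> Result<(), String> {
    match version {
        1 => Err("The version 1 of the dumps is not supported anymore. \
                  You can re-export your dump from a version between 0.21 and 0.24, \
                  or start fresh from a version 0.25 onwards.".to_string()),
        2..=5 => Ok(()), // assumption: the range covered by the GetDump fixtures
        v => Err(format!("unknown dump version {v}")),
    }
}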
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_movie_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesWithSettingsV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_rubygems_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::RubyGemsWithSettingsV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("rubygems");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 }})
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks["results"][0],
json!({"uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(188040, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
);
let (document, code) = index.get_document(191940, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
);
let (document, code) = index.get_document(159227, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_movie_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesWithSettingsV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_rubygems_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::RubyGemsWithSettingsV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("rubygems");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks["results"][0],
json!({"uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(188040, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
);
let (document, code) = index.get_document(191940, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
);
let (document, code) = index.get_document(159227, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV4.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
    let (stats, code) = index.stats().await;
    assert_eq!(code, 200);
    assert_eq!(
        stats,
        json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
    );

    let (settings, code) = index.settings().await;
    assert_eq!(code, 200);
    assert_eq!(
        settings,
        json!({ "displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
    );

    let (tasks, code) = index.list_tasks().await;
    assert_eq!(code, 200);
    assert_eq!(
        tasks,
        json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit" : 20, "from": 0, "next": null })
    );

    // finally we're just going to check that we can still get a few documents by id
    let (document, code) = index.get_document(100, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
    );

    let (document, code) = index.get_document(500, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
    );

    let (document, code) = index.get_document(10006, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
    );
}

#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_movie_with_settings() {
    let temp = tempfile::tempdir().unwrap();

    let options = Opt {
        import_dump: Some(GetDump::MoviesWithSettingsV4.path()),
        ..default_settings(temp.path())
    };
    let server = Server::new_with_options(options).await.unwrap();

    let (indexes, code) = server.list_indexes(None, None).await;
    assert_eq!(code, 200);

    assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
    assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
    assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));

    let index = server.index("indexUID");
    let (stats, code) = index.stats().await;
    assert_eq!(code, 200);
    assert_eq!(
        stats,
        json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
    );
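    // this dump was created after customizing the settings, so they should be restored as configured, not as defaults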
    let (settings, code) = index.settings().await;
    assert_eq!(code, 200);
    assert_eq!(
        settings,
        json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
    );

    let (tasks, code) = index.list_tasks().await;
    assert_eq!(code, 200);
    assert_eq!(
        tasks,
        json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
    );

    // finally we're just going to check that we can still get a few documents by id
    let (document, code) = index.get_document(100, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
    );

    let (document, code) = index.get_document(500, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
    );

    let (document, code) = index.get_document(10006, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
    );
}

#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_rubygems_with_settings() {
    let temp = tempfile::tempdir().unwrap();

    let options = Opt {
        import_dump: Some(GetDump::RubyGemsWithSettingsV4.path()),
        ..default_settings(temp.path())
    };
    let server = Server::new_with_options(options).await.unwrap();

    let (indexes, code) = server.list_indexes(None, None).await;
    assert_eq!(code, 200);

    assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
    assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
    assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));

    let index = server.index("rubygems");
    let (stats, code) = index.stats().await;
    assert_eq!(code, 200);
    assert_eq!(
        stats,
        json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
    );
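    // the custom ranking rules from the dump ("fame:desc", "total_downloads:desc") should come back verbatim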
    let (settings, code) = index.settings().await;
    assert_eq!(code, 200);
    assert_eq!(
        settings,
        json!({ "displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
    );

    let (tasks, code) = index.list_tasks().await;
    assert_eq!(code, 200);
    assert_eq!(
        tasks["results"][0],
        json!({ "uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
    );

    // finally we're just going to check that we can still get a few documents by id
    let (document, code) = index.get_document(188040, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
    );

    let (document, code) = index.get_document(191940, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
    );

    let (document, code) = index.get_document(159227, None).await;
    assert_eq!(code, 200);
    assert_eq!(
        document,
        json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
    );
}

#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v5() {
    let temp = tempfile::tempdir().unwrap();

    let options = Opt {
        import_dump: Some(GetDump::TestV5.path()),
        ..default_settings(temp.path())
    };
    let mut server = Server::new_auth_with_options(options, temp).await;
    server.use_api_key("MASTER_KEY");

    let (indexes, code) = server.list_indexes(None, None).await;
    assert_eq!(code, 200, "{indexes}");

    assert_eq!(indexes["results"].as_array().unwrap().len(), 2);
    assert_eq!(indexes["results"][0]["uid"], json!("test"));
    assert_eq!(indexes["results"][1]["uid"], json!("test2"));
    assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));

    let expected_stats = json!({
        "numberOfDocuments": 10,
        "isIndexing": false,
        "fieldDistribution": {
            "cast": 10,
            "director": 10,
            "genres": 10,
            "id": 10,
            "overview": 10,
            "popularity": 10,
            "poster_path": 10,
            "producer": 10,
            "production_companies": 10,
            "release_date": 10,
            "tagline": 10,
            "title": 10,
            "vote_average": 10,
            "vote_count": 10
        }
    });
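    // both indexes in the v5 dump hold the same 10 documents, so one set of expected stats covers both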
    let index1 = server.index("test");
    let index2 = server.index("test2");

    let (stats, code) = index1.stats().await;
    assert_eq!(code, 200);
    assert_eq!(stats, expected_stats);

    let (docs, code) = index2
        .get_all_documents(GetAllDocumentsOptions::default())
        .await;
    assert_eq!(code, 200);
    assert_eq!(docs["results"].as_array().unwrap().len(), 10);

    let (docs, code) = index1
        .get_all_documents(GetAllDocumentsOptions::default())
        .await;
    assert_eq!(code, 200);
    assert_eq!(docs["results"].as_array().unwrap().len(), 10);

    let (stats, code) = index2.stats().await;
    assert_eq!(code, 200);
    assert_eq!(stats, expected_stats);
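    // the dump was taken from an authenticated instance, so its API keys should be restored too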
    let (keys, code) = server.list_api_keys().await;
    assert_eq!(code, 200);
    let key = &keys["results"][0];
    assert_eq!(key["name"], "my key");
}
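
These dump-import tests all follow the same recipe: boot a server whose only state comes from the imported dump, then assert that the indexes, stats, settings, task history, documents, and keys match what was dumped. A minimal sketch of that shared pattern, reusing the helpers visible above (Opt, GetDump, default_settings, Server); the assembled function itself is illustrative and not part of the suite:

// Hypothetical helper distilled from the tests above; the helper name and
// parameters are illustrative, the calls inside are the ones the suite uses.
async fn assert_dump_restores_index(dump: GetDump, uid: &str, expected_docs: usize) {
    // a fresh data directory, so the dump is the server's only state
    let temp = tempfile::tempdir().unwrap();
    let options = Opt {
        import_dump: Some(dump.path()),
        ..default_settings(temp.path())
    };
    let server = Server::new_with_options(options).await.unwrap();

    // the imported index must be listed under its original uid
    let (indexes, code) = server.list_indexes(None, None).await;
    assert_eq!(code, 200);
    assert_eq!(indexes["results"][0]["uid"], json!(uid));

    // and its document count must match the dumped dataset
    let (stats, code) = server.index(uid).stats().await;
    assert_eq!(code, 200);
    assert_eq!(stats["numberOfDocuments"], json!(expected_docs));
}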


@@ -7,14 +7,15 @@ async fn create_index_no_primary_key() {
     let index = server.index("test");
     let (response, code) = index.create(None).await;

-    assert_eq!(code, 201);
-    assert_eq!(response["uid"], "test");
-    assert_eq!(response["name"], "test");
-    assert!(response.get("createdAt").is_some());
-    assert!(response.get("updatedAt").is_some());
-    assert_eq!(response["createdAt"], response["updatedAt"]);
-    assert_eq!(response["primaryKey"], Value::Null);
-    assert_eq!(response.as_object().unwrap().len(), 5);
+    assert_eq!(code, 202);
+    assert_eq!(response["status"], "enqueued");
+
+    let response = index.wait_task(0).await;
+
+    assert_eq!(response["status"], "succeeded");
+    assert_eq!(response["type"], "indexCreation");
+    assert_eq!(response["details"]["primaryKey"], Value::Null);
 }

 #[actix_rt::test]
@@ -23,14 +24,15 @@ async fn create_index_with_primary_key() {
     let index = server.index("test");
     let (response, code) = index.create(Some("primary")).await;

-    assert_eq!(code, 201);
-    assert_eq!(response["uid"], "test");
-    assert_eq!(response["name"], "test");
-    assert!(response.get("createdAt").is_some());
-    assert!(response.get("updatedAt").is_some());
-    //assert_eq!(response["createdAt"], response["updatedAt"]);
-    assert_eq!(response["primaryKey"], "primary");
-    assert_eq!(response.as_object().unwrap().len(), 5);
+    assert_eq!(code, 202);
+    assert_eq!(response["status"], "enqueued");
+
+    let response = index.wait_task(0).await;
+
+    assert_eq!(response["status"], "succeeded");
+    assert_eq!(response["type"], "indexCreation");
+    assert_eq!(response["details"]["primaryKey"], "primary");
 }

 #[actix_rt::test]
@@ -42,7 +44,7 @@ async fn create_index_with_invalid_primary_key() {
     let (_response, code) = index.add_documents(document, Some("title")).await;
     assert_eq!(code, 202);

-    index.wait_update_id(0).await;
+    index.wait_task(0).await;

     let (response, code) = index.get().await;
     assert_eq!(code, 200);
@@ -61,6 +63,10 @@ async fn test_create_multiple_indexes() {
     index2.create(None).await;
     index3.create(None).await;

+    index1.wait_task(0).await;
+    index1.wait_task(1).await;
+    index1.wait_task(2).await;
+
     assert_eq!(index1.get().await.1, 200);
     assert_eq!(index2.get().await.1, 200);
     assert_eq!(index3.get().await.1, 200);
@@ -73,9 +79,11 @@ async fn error_create_existing_index() {
     let index = server.index("test");
     let (_, code) = index.create(Some("primary")).await;

-    assert_eq!(code, 201);
+    assert_eq!(code, 202);

-    let (response, code) = index.create(Some("primary")).await;
+    index.create(Some("primary")).await;
+
+    let response = index.wait_task(1).await;

     let expected_response = json!({
         "message": "Index `test` already exists.",
@@ -84,19 +92,17 @@
         "link":"https://docs.meilisearch.com/errors#index_already_exists"
     });

-    assert_eq!(response, expected_response);
-    assert_eq!(code, 409);
+    assert_eq!(response["error"], expected_response);
 }

 #[actix_rt::test]
+#[ignore] // TODO: Fix in an other PR: uid returned `test%20test%23%21` instead of `test test#!`
 async fn error_create_with_invalid_index_uid() {
     let server = Server::new().await;
     let index = server.index("test test#!");
     let (response, code) = index.create(None).await;

     let expected_response = json!({
-        "message": "`test test#!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
+        "message": "invalid index uid `test test#!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
         "code": "invalid_index_uid",
         "type": "invalid_request",
         "link": "https://docs.meilisearch.com/errors#invalid_index_uid"


@@ -8,11 +8,17 @@ async fn create_and_delete_index() {
     let index = server.index("test");
     let (_response, code) = index.create(None).await;
-    assert_eq!(code, 201);
+    assert_eq!(code, 202);
+
+    index.wait_task(0).await;

     assert_eq!(index.get().await.1, 200);

     let (_response, code) = index.delete().await;
-    assert_eq!(code, 204);
+    assert_eq!(code, 202);
+
+    index.wait_task(1).await;

     assert_eq!(index.get().await.1, 404);
 }
@@ -21,7 +27,9 @@ async fn create_and_delete_index() {
 async fn error_delete_unexisting_index() {
     let server = Server::new().await;
     let index = server.index("test");
-    let (response, code) = index.delete().await;
+    let (_, code) = index.delete().await;
+
+    assert_eq!(code, 202);

     let expected_response = json!({
         "message": "Index `test` not found.",
@@ -30,19 +38,29 @@ async fn error_delete_unexisting_index() {
         "link": "https://docs.meilisearch.com/errors#index_not_found"
     });

-    assert_eq!(response, expected_response);
-    assert_eq!(code, 404);
+    let response = index.wait_task(0).await;
+    assert_eq!(response["status"], "failed");
+    assert_eq!(response["error"], expected_response);
 }

 #[actix_rt::test]
+#[cfg_attr(target_os = "windows", ignore)]
 async fn loop_delete_add_documents() {
     let server = Server::new().await;
     let index = server.index("test");
     let documents = json!([{"id": 1, "field1": "hello"}]);
+    let mut tasks = Vec::new();
     for _ in 0..50 {
         let (response, code) = index.add_documents(documents.clone(), None).await;
+        tasks.push(response["taskUid"].as_u64().unwrap());
         assert_eq!(code, 202, "{}", response);
         let (response, code) = index.delete().await;
-        assert_eq!(code, 204, "{}", response);
+        tasks.push(response["taskUid"].as_u64().unwrap());
+        assert_eq!(code, 202, "{}", response);
     }
+
+    for task in tasks {
+        let response = index.wait_task(task).await;
+        assert_eq!(response["status"], "succeeded", "{}", response);
+    }
 }
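
Read together, the two diffs above encode a single behavioral change: index creation and deletion no longer answer with their final result (201/204 on success, 409/404 on error) but are enqueued as tasks and answer 202 immediately, with success or failure reported on the task itself. A minimal sketch of the resulting client-side flow, reusing the create, wait_task, and taskUid shapes visible in the diffs (the Index parameter type and the helper itself are illustrative):

// Hypothetical helper showing the new asynchronous flow the tests assert.
async fn create_index_and_wait(index: &Index<'_>) {
    let (response, code) = index.create(None).await;
    assert_eq!(code, 202); // accepted, not yet applied
    assert_eq!(response["status"], "enqueued");

    // outcomes (including errors such as index_already_exists) now surface
    // on the task, not on the HTTP response
    let task = index.wait_task(response["taskUid"].as_u64().unwrap()).await;
    assert!(task["status"] == "succeeded" || task["status"] == "failed");
}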

Some files were not shown because too many files have changed in this diff.