Compare commits

...

595 Commits

Author SHA1 Message Date
Clémentine Urquizar
160cba1b46 Add mold linker 2022-08-02 16:48:19 +02:00
bors[bot]
dfbdc565f9 Merge #2653
2653: Bump Swatinem/rust-cache from 1.4.0 to 2.0.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/releases">Swatinem/rust-cache's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>The action code was refactored to allow for caching multiple workspaces and
different <code>target</code> directory layouts.</li>
<li>The <code>working-directory</code> and <code>target-dir</code> input options were replaced by a
single <code>workspaces</code> option that has the form of <code>$workspace -&gt; $target</code>.</li>
<li>Support for considering <code>env-vars</code> as part of the cache key.</li>
<li>The <code>sharedKey</code> input option was renamed to <code>shared-key</code> for consistency.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md">Swatinem/rust-cache's changelog</a>.</em></p>
<blockquote>
<h2>2.0.0</h2>
<ul>
<li>The action code was refactored to allow for caching multiple workspaces and
different <code>target</code> directory layouts.</li>
<li>The <code>working-directory</code> and <code>target-dir</code> input options were replaced by a
single <code>workspaces</code> option that has the form of <code>$workspace -&gt; $target</code>.</li>
<li>Support for considering <code>env-vars</code> as part of the cache key.</li>
<li>The <code>sharedKey</code> input option was renamed to <code>shared-key</code> for consistency.</li>
</ul>
</blockquote>
</details>
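
The notes above describe the new `workspaces` input only abstractly. As a hedged sketch, a v2 step could look like this, assuming illustrative workspace paths that are not taken from this repository:

```yml
# Hypothetical rust-cache v2 step; `workspaces` replaces the old
# `working-directory`/`target-dir` inputs.
- uses: Swatinem/rust-cache@v2
  with:
    # one `$workspace -> $target` mapping per line
    workspaces: |
      . -> target
      meilisearch-lib -> target
    shared-key: ci # renamed from `sharedKey`
```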
<details>
<summary>Commits</summary>
<ul>
<li><a href="6720f05bc4"><code>6720f05</code></a> 2.0.0</li>
<li><a href="5733786579"><code>5733786</code></a> rebuild</li>
<li><a href="622616010e"><code>6226160</code></a> prepare v2</li>
<li><a href="0497f9301f"><code>0497f93</code></a> improve registry cleanpu</li>
<li><a href="7b8626742a"><code>7b86267</code></a> update registry cleaning</li>
<li><a href="911d8e9e55"><code>911d8e9</code></a> test sparse registry</li>
<li><a href="875be5ce2d"><code>875be5c</code></a> bump cache</li>
<li><a href="07a2ee71bc"><code>07a2ee7</code></a> lol, dependency check was reversed</li>
<li><a href="7c190ef171"><code>7c190ef</code></a> fix actual test code ;-)</li>
<li><a href="fffd6895b2"><code>fffd689</code></a> add some more tests</li>
<li>Additional commits viewable in <a href="https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=Swatinem/rust-cache&package-manager=github_actions&previous-version=1.4.0&new-version=2.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-02 07:58:48 +00:00
dependabot[bot]
1bb05f2716 Bump Swatinem/rust-cache from 1.4.0 to 2.0.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.4.0 to 2.0.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.4.0...v2.0.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 17:07:30 +00:00
bors[bot]
dc21af46e5 Merge #2634
2634: Bring `stable` changes into `main` (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 13:22:57 +00:00
bors[bot]
22aa349e31 Merge #2633
2633: Fix highlight issue by updating milli to v0.31.2 r=ManyTheFish a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2627

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-21 11:08:37 +00:00
Clémentine Urquizar
9cf6acb671 Fix highlight issue by updating milli to v0.31.2 2022-07-21 14:11:24 +04:00
bors[bot]
78c5826c57 Merge #2631
2631: Update mini-dashboard to v0.2.1 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2629

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-21 06:37:57 +00:00
Morgane Dubus
7e6f3274fa Update mini-dashboard to v0.2.1 2022-07-21 09:47:07 +04:00
bors[bot]
94ef326be3 Merge #2630
2630: Update version for next release (v0.28.1) r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-20 13:42:51 +00:00
Clémentine Urquizar
d01a3ab889 Update version for next release (v0.28.1) 2022-07-20 15:46:53 +04:00
bors[bot]
ad494b6f77 Merge #2625
2625: Update link to Cloud beta form r=curquiza a=davelarkan

# Pull Request

## What does this PR do?
Fixes #2624

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Dave Larkan <davelarkan@gmail.com>
2022-07-19 11:09:12 +00:00
Dave Larkan
2f11686c81 Update link to Cloud beta form 2022-07-19 11:10:16 +01:00
bors[bot]
01a47e2db5 Merge #2598
2598: Bring `stable` into `main` (v0.28.0) r=curquiza a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-11 14:06:02 +00:00
Clémentine Urquizar
cfacc79ad7 Remove duplicate step in CI 2022-07-11 15:18:15 +02:00
Clémentine Urquizar - curqui
8e370ed9ab Merge branch 'main' into stable 2022-07-11 14:41:15 +02:00
Clémentine Urquizar
32d6af6527 Merge remote-tracking branch 'origin/release-v0.28.0' into stable 2022-07-11 14:37:21 +02:00
bors[bot]
be3240d2dd Merge #2592
2592: Chores: Add a dedicated section for Language Support in the issue template r=curquiza a=ManyTheFish



This new section is put above the feature proposal one because language support is kind of a sub-category of it,
and so, in the reading order, we chose to create a feature proposal only if the issue is not related to language support.


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-07 12:37:36 +00:00
ManyTheFish
2de6868858 Chores: Add a dedicated section for Language Support in the issue template
This new section is put above the feature proposal one because language support is kind of a sub-category of it,
and so, in the reading order, we chose to create a feature proposal only if the issue is not related to language support.
2022-07-07 13:48:47 +02:00
bors[bot]
d419a91207 Merge #2591
2591: Introduce the Tasks Seen event when filtering r=Kerollmops a=Kerollmops

This PR fixes #2377 by introducing the Tasks Seen analytics events.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-07 11:41:01 +00:00
Kerollmops
a9fb5a4d50 Introduce the Tasks Seen event when filtering 2022-07-07 11:39:23 +02:00
bors[bot]
0353537fef Merge #2579
2579: API keys: adds action * for actions r=irevoire a=phdavis1027

# Pull Request
This PR builds on @janithpet's addition to DocumentsAll; it's basically a copy-and-paste job, except that I used `iter.filter()` to avoid the possibility of duplication that they mentioned. I'm not sure how much that matters.

Also, hi! This is my first open-source contribution and my first attempt to write Rust for anyone other than myself, so any feedback whatsoever is appreciated. 

## What does this PR do?
Fixes #2560

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Phillip Davis <phdavis1027@gmail.com>
2022-07-07 08:54:23 +00:00
bors[bot]
da7729e4a8 Merge #2589
2589: Update create-issue-dependencies.yml r=ManyTheFish a=curquiza

Minor change to update the format in the description of the issue created: remove the useless newlines

See the format without this change: https://github.com/meilisearch/meilisearch/issues/2588

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-07 08:37:33 +00:00
Phillip Davis
074a6a0cce Fix typos in HeadAuthStore::put_api_key 2022-07-06 22:24:46 -04:00
bors[bot]
755b1a59a2 Merge #2584
2584: Format API keys in hexa instead of base64 r=curquiza a=ManyTheFish

This PR:
- changes API key generation and formatting to ease the generation of keys by our users
- updates the `uuid` crate version

The API key can now be generated in bash as below:
```sh
echo -n $HYPHENATED_UUID | openssl dgst -sha256 -hmac $MASTER_KEY
```

Fixes the issue raised in [product/discussion#421](https://github.com/meilisearch/product/discussions/421#discussioncomment-3079410); this should not impact anything in the documentation nor the integrations, but eases key generation on the user side.

poke `@gmourier` 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-07-06 12:49:04 +00:00
Clémentine Urquizar - curqui
bb5b18b82c Update create-issue-dependencies.yml 2022-07-06 11:01:25 +02:00
bors[bot]
01d9560318 Merge #2587
2587: Update mini-dashboard to v0.2.0 r=curquiza a=mdubus

# Pull Request

## What does this PR do?
Fixes #2469

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-07-06 08:54:16 +00:00
Morgane Dubus
719879d4d2 Update mini-dashboard to v0.2.0 2022-07-06 08:12:17 +02:00
Phillip Davis
fb9b298645 Leave actions as HashSet 2022-07-05 21:52:50 -04:00
Phillip Davis
23f02f241e Run the code formatter 2022-07-05 21:08:56 -04:00
Phillip Davis
5d80ff41a2 Clean up put_key impl 2022-07-05 21:06:58 -04:00
bors[bot]
f4989590db Merge #2585
2585: Add CI creates issue updating dependencies r=curquiza a=VasiliySoldatkin

Adds a CI workflow that uses a GitHub Actions cron schedule to create an "Upgrade dependencies" issue every 3 months (see the sketch below).
Context: [#2569](https://github.com/meilisearch/meilisearch/issues/2569)
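
As a hedged illustration of the approach, a scheduled workflow of this shape could create the issue; the schedule, job, and issue text below are assumptions for the sketch, not the contents of the actual `create-issue-dependencies.yml`:

```yml
# Hypothetical quarterly issue-creation workflow.
name: Create issue to upgrade dependencies
on:
  schedule:
    - cron: '0 0 1 */3 *' # at 00:00 on day 1 of every 3rd month
jobs:
  create-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Upgrade dependencies',
              body: 'Quarterly reminder to upgrade dependencies.',
            });
```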


Co-authored-by: Vasiliy Soldatkin <vasiliy.soldatkin@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-07-05 19:05:43 +00:00
Clémentine Urquizar - curqui
bba5fab5e5 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:03:08 +02:00
Clémentine Urquizar - curqui
05ffe24d64 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:40 +02:00
Clémentine Urquizar - curqui
6f95ae9879 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 21:02:32 +02:00
Vasiliy Soldatkin
480b881e15 Remove template and add GHA from review 2022-07-05 21:08:59 +03:00
Clémentine Urquizar - curqui
43fecbf382 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:35:09 +02:00
Clémentine Urquizar - curqui
5588a6415a Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:57 +02:00
Clémentine Urquizar - curqui
b0757e75c4 Update .github/workflows/create-issue-dependencies.yml 2022-07-05 19:34:49 +02:00
bors[bot]
106be03ba8 Merge #2539
2539: Update Docker credentials r=curquiza a=curquiza

This is to avoid using tpayet's credentials and to use the `@meili-bot` credentials instead.

The `DOCKER_USERNAME` and `DOCKER_PASSWORD` are still present as secrets; I will remove them once v0.28.0 is fully merged (they are still used on `release-v0.28.0`).

I tested by creating a tag on the branch, and it worked: the tag was pushed to Docker Hub by meili-bot. A sketch of the corresponding login step follows below.
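
For context, a minimal hedged sketch of the login step these secrets feed; the step shape follows docker/login-action's documented inputs, but the surrounding workflow details are illustrative, not this repository's actual file:

```yml
# Illustrative publish-workflow login step using the @meili-bot credentials.
- name: Login to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
```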

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-07-05 17:22:26 +00:00
bors[bot]
70c55208f1 Merge #2586
2586: Fix Clippy r=irevoire a=Kerollmops

This PR fixes clippy on the `release-v0.28.0` branch.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-05 15:55:00 +00:00
Kerollmops
d56bf66022 Make clippy happy 2022-07-05 17:48:44 +02:00
Vasiliy Soldatkin
2c300c72c9 Add CI creates issue updating dependencies 2022-07-05 18:28:29 +03:00
ManyTheFish
a146fd45b9 Format API keys in hexa instead of base64 2022-07-05 16:14:18 +02:00
Phillip Davis
63e1fb4f96 Run the code formatter 2022-07-04 21:49:40 -04:00
Phillip Davis
be1c6f9dc4 Update tests to include .* permissions for tasks, indexes, dumps, stats, and settings 2022-07-04 21:38:54 -04:00
Phillip Davis
c251b527b0 Add iterators over * for stats, dumps, tasks, settings, and indexes; change documents impl to prevent duplication 2022-07-04 21:38:31 -04:00
Phillip Davis
1dc3724c1f Added [...]_ALL enum members in action.rs 2022-07-04 21:38:21 -04:00
bors[bot]
c1ad56281d Merge #2545 #2556 #2568 #2573 #2574 #2575
2545: meilisearch-lib was missing a feature in its cargo.toml r=Kerollmops a=irevoire

`meilisearch-lib` was missing a feature required to compile on its own

2556: chore: `meilisearch-http` readability improvements r=curquiza a=ryanrussell

## What does this PR do?
Readability improvements in `meilisearch-http`. 

I believe these are pretty straightforward; let me know if anything needs adjusting or reverting :)

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?



2568: Improve manifest for dependabot r=curquiza a=curquiza

Improve the dependabot manifest (a sketch follows this list):
- `rebase-strategy: disabled` -> avoids the issues with bors we had in the past with the integration team; dependabot's automatic rebase does not fit with the rebase bors tries to do
- `labels` -> better triage for when I create the release changelogs
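
As referenced above, a hedged sketch of a `.github/dependabot.yml` carrying those two settings; the ecosystem, schedule, and label name are illustrative assumptions:

```yml
# Hypothetical dependabot manifest fragment with the described options.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"
    rebase-strategy: "disabled" # avoid clashing with the rebase bors performs
    labels:
      - "dependencies" # eases triage when writing release changelogs
```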

2573: Bump docker/login-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/login-action/releases">docker/login-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/161">#161</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/170">#170</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/167">#167</a>)</li>
<li>Bump <code>@actions/io</code> from 1.1.1 to 1.1.2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/168">#168</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/176">#176</a>)</li>
<li>Bump https-proxy-agent from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/182">#182</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/login-action/compare/v1.14.1...v2.0.0">https://github.com/docker/login-action/compare/v1.14.1...v2.0.0</a></p>
<h2>v1.14.1</h2>
<ul>
<li>Revert to Node 12 as default runtime to fix issue for GHE users (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/160">#160</a>)</li>
</ul>
<h2>v1.14.0</h2>
<ul>
<li>Update to node 16 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/158">#158</a>)</li>
<li>Bump <code>@aws-sdk/client-ecr</code> from 3.45.0 to 3.53.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/157">#157</a>)</li>
<li>Bump <code>@aws-sdk/client-ecr-public</code> from 3.45.0 to 3.53.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/156">#156</a>)</li>
</ul>
<h2>v1.13.0</h2>
<ul>
<li>Handle proxy settings for aws-sdk (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/152">#152</a>)</li>
<li>Workload identity based authentication docs for GCR and GAR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/112">#112</a>)</li>
<li>Test login against ACR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/49">#49</a>)</li>
<li>Bump <code>@aws-sdk/client-ecr</code> from 3.44.0 to 3.45.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/132">#132</a>)</li>
<li>Bump <code>@aws-sdk/client-ecr-public</code> from 3.43.0 to 3.45.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/131">#131</a>)</li>
</ul>
<h2>v1.12.0</h2>
<ul>
<li>ECR: only set credentials if username and password are specified (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/128">#128</a>)</li>
<li>Refactor to use aws-sdk v3 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/128">#128</a>)</li>
</ul>
<h2>v1.11.0</h2>
<ul>
<li>ECR: switch implementation to use the AWS SDK (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/126">#126</a>)</li>
<li><code>ecr</code> input to specify whether the given registry is ECR (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/123">#123</a>)</li>
<li>Test against Windows runner (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/126">#126</a>)</li>
<li>Update instructions for Google registry (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/127">#127</a>)</li>
<li>Update dev workflow (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/111">#111</a>)</li>
<li>Small changes for GHCR doc (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/86">#86</a>)</li>
<li>Update dev dependencies (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/85">#85</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/101">#101</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/100">#100</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/94">#94</a> <a href="https://github-redirect.dependabot.com/docker/login-action/issues/103">#103</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/88">#88</a>)</li>
<li>Bump hosted-git-info from 2.8.8 to 2.8.9 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/83">#83</a>)</li>
<li>Bump node-notifier from 8.0.0 to 8.0.2 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/82">#82</a>)</li>
<li>Bump ws from 7.3.1 to 7.5.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/81">#81</a>)</li>
<li>Bump lodash from 4.17.20 to 4.17.21 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/80">#80</a>)</li>
<li>Bump y18n from 4.0.0 to 4.0.3 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/79">#79</a>)</li>
</ul>
<h2>v1.10.0</h2>
<ul>
<li>GitHub Packages Docker Registry deprecated (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/78">#78</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="49ed152c8e"><code>49ed152</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/161">#161</a> from crazy-max/node16-runtime</li>
<li><a href="b61a9ce7bd"><code>b61a9ce</code></a> Node 16 as default runtime</li>
<li><a href="3a136a8631"><code>3a136a8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/182">#182</a> from docker/dependabot/npm_and_yarn/https-proxy-agent...</li>
<li><a href="b312880b69"><code>b312880</code></a> Update generated content</li>
<li><a href="795794e081"><code>795794e</code></a> Bump https-proxy-agent from 5.0.0 to 5.0.1</li>
<li><a href="1edf6180e0"><code>1edf618</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/179">#179</a> from docker/dependabot/github_actions/codecov/codecov...</li>
<li><a href="8e66ad4089"><code>8e66ad4</code></a> Bump codecov/codecov-action from 2 to 3</li>
<li><a href="7c79b598ea"><code>7c79b59</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/176">#176</a> from docker/dependabot/npm_and_yarn/minimist-1.2.6</li>
<li><a href="24a38e0d6d"><code>24a38e0</code></a> Bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="70e1ff84cb"><code>70e1ff8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/170">#170</a> from crazy-max/eslint</li>
<li>Additional commits viewable in <a href="https://github.com/docker/login-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/login-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


2574: Bump Swatinem/rust-cache from 1.3.0 to 1.4.0 r=curquiza a=dependabot[bot]

Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/releases">Swatinem/rust-cache's releases</a>.</em></p>
<blockquote>
<h2>v1.4.0</h2>
<ul>
<li>Clean both debug and release target directories.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/Swatinem/rust-cache/blob/v1/CHANGELOG.md">Swatinem/rust-cache's changelog</a>.</em></p>
<blockquote>
<h2>1.4.0</h2>
<ul>
<li>Clean both <code>debug</code> and <code>release</code> target directories.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="cb2cf0cc7c"><code>cb2cf0c</code></a> 1.4.0</li>
<li><a href="74e8e24b6d"><code>74e8e24</code></a> Update dependencies, clean both debug and release targets</li>
<li><a href="f8f67b7515"><code>f8f67b7</code></a> Add a LICENSE file</li>
<li><a href="5b2b053862"><code>5b2b053</code></a> Improve Cache Details documentation (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/49">#49</a>)</li>
<li><a href="3bb3a9a087"><code>3bb3a9a</code></a> update deps and rebuild</li>
<li><a href="d127014599"><code>d127014</code></a> update dependencies</li>
<li><a href="801365cd81"><code>801365c</code></a> hint that checkout has to be used first (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/34">#34</a>)</li>
<li><a href="c5ed9ba6b7"><code>c5ed9ba</code></a> update dependencies and rebuild</li>
<li><a href="536c94f32c"><code>536c94f</code></a> Cache-on-failure support (<a href="https://github-redirect.dependabot.com/Swatinem/rust-cache/issues/22">#22</a>)</li>
<li>See full diff in <a href="https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=Swatinem/rust-cache&package-manager=github_actions&previous-version=1.3.0&new-version=1.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


2575: Bump docker/build-push-action from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/build-push-action/releases">docker/build-push-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Standalone mode support by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a>)</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/571">#571</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/573">#573</a>)</li>
<li>Bump <code>@actions/github</code> from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/582">#582</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/584">#584</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/595">#595</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0">https://github.com/docker/build-push-action/compare/v2.10.0...v3.0.0</a></p>
<h2>v2.10.0</h2>
<ul>
<li>Add <code>imageid</code> output and use metadata to set <code>digest</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/569">#569</a>)</li>
<li>Add <code>build-contexts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/563">#563</a>)</li>
<li>Enhance outputs display (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/559">#559</a>)</li>
</ul>
<h2>v2.9.0</h2>
<ul>
<li><code>add-hosts</code> input (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/553">#553</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/555">#555</a>)</li>
<li>Fix git context subdir example and improve README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/552">#552</a>)</li>
<li>Add e2e tests for ACR (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/548">#548</a>)</li>
<li>Add description on <code>github-token</code> option to README (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/544">#544</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/549">#549</a>)</li>
</ul>
<h2>v2.8.0</h2>
<ul>
<li>Allow specifying subdirectory with default git context (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/531">#531</a>)</li>
<li>Add <code>cgroup-parent</code>, <code>shm-size</code>, <code>ulimit</code> inputs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/501">#501</a>)</li>
<li>Don't set outputs if empty or nil (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/470">#470</a>)</li>
<li>docs: example to sanitize tags with metadata-action (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/476">#476</a>)</li>
<li>docs: wrong syntax to sanitize repo slug (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/475">#475</a>)</li>
<li>docs: test before pushing your image (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/455">#455</a>)</li>
<li>readme: remove v1 section (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/500">#500</a>)</li>
<li>ci: virtual env file system info (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/510">#510</a>)</li>
<li>dev: update workflow (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/499">#499</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/160">#160</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/469">#469</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/465">#465</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/451">#451</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/459">#459</a>)</li>
</ul>
<h2>v2.7.0</h2>
<ul>
<li>Add <code>metadata</code> output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/412">#412</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/439">#439</a>)</li>
<li>Add note to sanitize tags (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/426">#426</a>)</li>
<li>Cache backend API docs (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/406">#406</a>)</li>
<li>Git context now supports subdir (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/407">#407</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/415">#415</a>)</li>
</ul>
<h2>v2.6.1</h2>
<ul>
<li>Small typo and ensure trimmed output (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/400">#400</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="e551b19e49"><code>e551b19</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/564">#564</a> from crazy-max/node-16</li>
<li><a href="3554377aa3"><code>3554377</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/609">#609</a> from crazy-max/ci-fix-test</li>
<li><a href="a62bc1b22b"><code>a62bc1b</code></a> ci: fix standalone test</li>
<li><a href="c2085839e1"><code>c208583</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/601">#601</a> from crazy-max/standalone-mode</li>
<li><a href="fcd91249e5"><code>fcd9124</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/607">#607</a> from docker/dependabot/github_actions/docker/metadata...</li>
<li><a href="0ebe720aed"><code>0ebe720</code></a> Bump docker/metadata-action from 3 to 4</li>
<li><a href="38b45804b5"><code>38b4580</code></a> Standalone mode support</li>
<li><a href="ba317382dc"><code>ba31738</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/533">#533</a> from docker/dependabot/npm_and_yarn/csv-parse-5.0.4</li>
<li><a href="43721d2346"><code>43721d2</code></a> Update generated content</li>
<li><a href="5ea21bf2ba"><code>5ea21bf</code></a> Fix csv-parse implementation since major update</li>
<li>Additional commits viewable in <a href="https://github.com/docker/build-push-action/compare/v2...v3">compare view</a></li>
</ul>
</details>
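
For reference, a minimal hedged sketch of a build step after this bump; the image tag and inputs are illustrative, not this repository's workflow:

```yml
# Hypothetical build-and-push step after the bump to v3.
- uses: docker/build-push-action@v3
  with:
    push: true
    tags: getmeili/meilisearch:latest # illustrative tag
```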
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/build-push-action&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-04 12:13:46 +00:00
bors[bot]
83e5f45a91 Merge #2576
2576: Make clippy happy r=curquiza a=Kerollmops

This PR fixes clippy.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-07-04 11:53:27 +00:00
Kerollmops
aff8cd1774 Make clippy happy 2022-07-04 13:36:56 +02:00
dependabot[bot]
d1296d03ea Bump docker/build-push-action from 2 to 3
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:44 +00:00
dependabot[bot]
6f7a4d95d9 Bump Swatinem/rust-cache from 1.3.0 to 1.4.0
Bumps [Swatinem/rust-cache](https://github.com/Swatinem/rust-cache) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/Swatinem/rust-cache/releases)
- [Changelog](https://github.com/Swatinem/rust-cache/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/Swatinem/rust-cache/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: Swatinem/rust-cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:40 +00:00
dependabot[bot]
9ea96fa5c1 Bump docker/login-action from 1 to 2
Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 17:06:38 +00:00
Clémentine Urquizar
19f732e623 Update manifest for dependabot 2022-06-29 11:35:41 +02:00
bors[bot]
d833e62282 Merge #2564 #2565 #2566 #2567
2564: Bump codecov/codecov-action from 1 to 3 r=curquiza a=dependabot[bot]

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/releases">codecov/codecov-action's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>`@​actions/core</code>` from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>v2.1.0</h2>
<h2>2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.6.0 to 16.9.0</li>
</ul>
<h2>v2.0.3</h2>
<h2>2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/458">#458</a> build(deps-dev): bump eslint from 7.31.0 to 7.32.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/465">#465</a> build(deps-dev): bump <code>`@​typescript-eslint/eslint-plugin</code>` from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/466">#466</a> build(deps-dev): bump <code>`@​typescript-eslint/parser</code>` from 4.28.4 to 4.29.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/468">#468</a> build(deps-dev): bump <code>`@​types/jest</code>` from 26.0.24 to 27.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/470">#470</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.4.0 to 16.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/472">#472</a> build(deps): bump path-parse from 1.0.6 to 1.0.7</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/473">#473</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.0.0 to 27.0.1</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md">codecov/codecov-action's changelog</a>.</em></p>
<blockquote>
<h2>3.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> Incorporate <code>xcode</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.33.3 to 0.33.4</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.23 to 17.0.25</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
</ul>
<h2>3.0.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/689">#689</a> Bump to node16 and small fixes</li>
</ul>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/688">#688</a> Incorporate <code>gcov</code> arguments for the Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/548">#548</a> build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/603">#603</a> [Snyk] Upgrade <code>`@​actions/core</code>` from 1.5.0 to 1.6.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/628">#628</a> build(deps): bump node-fetch from 2.6.1 to 3.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/634">#634</a> build(deps): bump node-fetch from 3.1.1 to 3.2.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/636">#636</a> build(deps): bump openpgp from 5.0.1 to 5.1.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/652">#652</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.30.0 to 0.33.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/653">#653</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.11.21 to 17.0.18</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/659">#659</a> build(deps-dev): bump <code>`@​types/jest</code>` from 27.4.0 to 27.4.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/667">#667</a> build(deps): bump actions/checkout from 2 to 3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/673">#673</a> build(deps): bump node-fetch from 3.2.0 to 3.2.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/683">#683</a> build(deps): bump minimist from 1.2.5 to 1.2.6</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/685">#685</a> build(deps): bump <code>`@​actions/github</code>` from 5.0.0 to 5.0.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/681">#681</a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.18 to 17.0.23</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/682">#682</a> build(deps-dev): bump typescript from 4.5.5 to 4.6.3</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/676">#676</a> build(deps): bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/675">#675</a> build(deps): bump openpgp from 5.1.0 to 5.2.1</li>
</ul>
<h2>2.1.0</h2>
<h3>Features</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/515">#515</a> Allow specifying version of Codecov uploader</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/499">#499</a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.29.0 to 0.30.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/508">#508</a> build(deps): bump openpgp from 5.0.0-5 to 5.0.0</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/514">#514</a> build(deps-dev): bump <code>`@​types/node</code>` from 16.6.0 to 16.9.0</li>
</ul>
<h2>2.0.3</h2>
<h3>Fixes</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/464">#464</a> Fix wrong link in the readme</li>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/485">#485</a> fix: Add override OS and linux default to platform</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li><a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/447">#447</a> build(deps): bump openpgp from 5.0.0-4 to 5.0.0-5</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="81cd2dc814"><code>81cd2dc</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/699">#699</a> from codecov/feat-xcode</li>
<li><a href="a03184e530"><code>a03184e</code></a> feat: add xcode support</li>
<li><a href="6a6a9ae7b1"><code>6a6a9ae</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/694">#694</a> from codecov/dependabot/npm_and_yarn/vercel/ncc-0.33.4</li>
<li><a href="92a872a5e7"><code>92a872a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/696">#696</a> from codecov/dependabot/npm_and_yarn/types/node-17.0.25</li>
<li><a href="43a9c182dd"><code>43a9c18</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/698">#698</a> from codecov/dependabot/npm_and_yarn/jest-junit-13.2.0</li>
<li><a href="13ce822ccd"><code>13ce822</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/codecov/codecov-action/issues/690">#690</a> from codecov/ci-v3</li>
<li><a href="4d6dbaaea6"><code>4d6dbaa</code></a> build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0</li>
<li><a href="98f0f19300"><code>98f0f19</code></a> build(deps-dev): bump <code>`@​types/node</code>` from 17.0.23 to 17.0.25</li>
<li><a href="d3021d9910"><code>d3021d9</code></a> build(deps-dev): bump <code>`@​vercel/ncc</code>` from 0.33.3 to 0.33.4</li>
<li><a href="2c83f35c20"><code>2c83f35</code></a> Update makefile to v3</li>
<li>Additional commits viewable in <a href="https://github.com/codecov/codecov-action/compare/v1...v3">compare view</a></li>
</ul>
</details>
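
For reference, a minimal hedged sketch of a coverage-upload step after this bump; the report path is an illustrative assumption:

```yml
# Hypothetical coverage-upload step after the bump to v3.
- uses: codecov/codecov-action@v3
  with:
    files: ./lcov.info # illustrative coverage report path
```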
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=codecov/codecov-action&package-manager=github_actions&previous-version=1&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


2565: Bump docker/setup-qemu-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-qemu-action/releases">docker/setup-qemu-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>chore: update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/43">#43</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a>)</li>
<li>Bump <code>@actions/core</code> from 1.3.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/37">#37</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/39">#39</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/41">#41</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/38">#38</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0">https://github.com/docker/setup-qemu-action/compare/v1.2.0...v2.0.0</a></p>
<h2>v1.2.0</h2>
<ul>
<li>Display image information (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/36">#36</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.3.0 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/35">#35</a>)</li>
</ul>
<h2>v1.1.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/30">#30</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.6 to 1.2.7 (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/29">#29</a>)</li>
</ul>
<h2>v1.0.2</h2>
<ul>
<li>Enhance workflow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/26">#26</a>)</li>
<li>Container based developer flow (<a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/19">#19</a> <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/20">#20</a>)</li>
</ul>
<h2>v1.0.1</h2>
<ul>
<li>Fix CVE-2020-15228</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8b122486ce"><code>8b12248</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/48">#48</a> from crazy-max/node-16</li>
<li><a href="466d53193c"><code>466d531</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/50">#50</a> from crazy-max/update-readme</li>
<li><a href="607c1922b5"><code>607c192</code></a> simplify usage example</li>
<li><a href="d7849ecb9c"><code>d7849ec</code></a> Node 16 as default runtime</li>
<li><a href="2d4bfe71c9"><code>2d4bfe7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/47">#47</a> from crazy-max/update-dev</li>
<li><a href="224b802eb3"><code>224b802</code></a> chore: update dev dependencies and workflow</li>
<li><a href="95bd865778"><code>95bd865</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/46">#46</a> from docker/dependabot/npm_and_yarn/actions/exec-1.1.1</li>
<li><a href="cfd091faa1"><code>cfd091f</code></a> Bump <code>`@​actions/exec</code>` from 1.1.0 to 1.1.1</li>
<li><a href="d2a60302b8"><code>d2a6030</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-qemu-action/issues/45">#45</a> from docker/dependabot/github_actions/actions/checkout-3</li>
<li><a href="97dc484a91"><code>97dc484</code></a> Bump actions/checkout from 2 to 3</li>
<li>Additional commits viewable in <a href="https://github.com/docker/setup-qemu-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
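As with the other action bumps in this batch, consuming v2 is a one-line reference change. A minimal job sketch (the job layout is illustrative, not this repository's workflow):

```yaml
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
        with:
          platforms: arm64  # optional input; 'all' is the default
```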
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-qemu-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---


2566: Bump actions/checkout from 2 to 3 r=curquiza a=dependabot[bot]

Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/releases">actions/checkout's releases</a>.</em></p>
<blockquote>
<h2>v3.0.0</h2>
<ul>
<li>Updated to the node16 runtime by default
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0 to run, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<h2>v2.4.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>) by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/776">actions/checkout#776</a></li>
<li>Prepare changelog for v2.4.2. by <a href="https://github.com/TingluoHuang"><code>@TingluoHuang</code></a> in <a href="https://github-redirect.dependabot.com/actions/checkout/pull/778">actions/checkout#778</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/checkout/compare/v2...v2.4.2">https://github.com/actions/checkout/compare/v2...v2.4.2</a></p>
<h2>v2.4.1</h2>
<ul>
<li>Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></li>
</ul>
<h2>v2.4.0</h2>
<ul>
<li>Convert SSH URLs like <code>org-&lt;ORG_ID&gt;@github.com:</code> to <code>https://github.com/</code> - <a href="https://github-redirect.dependabot.com/actions/checkout/pull/621">pr</a></li>
</ul>
<h2>v2.3.5</h2>
<p>Update dependencies</p>
<h2>v2.3.4</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/379">Add missing <code>await</code>s</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/360">Swap to Environment Files</a></li>
</ul>
<h2>v2.3.3</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/345">Remove Unneeded commit information from build logs</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/326">Add Licensed to verify third party dependencies</a></li>
</ul>
<h2>v2.3.2</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/320">Add Third Party License Information to Dist Files</a></p>
<h2>v2.3.1</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></p>
<h2>v2.3.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></p>
<h2>v2.2.0</h2>
<p><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></p>
<h2>v2.1.1</h2>
<p>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</p>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>v3.0.2</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/770">Add input <code>set-safe-directory</code></a></li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/762">Fixed an issue where checkout failed to run in container jobs due to the new git setting <code>safe.directory</code></a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/744">Bumped various npm package versions</a></li>
</ul>
<h2>v3.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/689">Update to node 16</a></li>
</ul>
<h2>v2.3.1</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/284">Fix default branch resolution for .wiki and when using SSH</a></li>
</ul>
<h2>v2.3.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/278">Fallback to the default branch</a></li>
</ul>
<h2>v2.2.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/258">Fetch all history for all tags and branches when fetch-depth=0</a></li>
</ul>
<h2>v2.1.1</h2>
<ul>
<li>Changes to support GHES (<a href="https://github-redirect.dependabot.com/actions/checkout/pull/236">here</a> and <a href="https://github-redirect.dependabot.com/actions/checkout/pull/248">here</a>)</li>
</ul>
<h2>v2.1.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/191">Group output</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/199">Changes to support GHES alpha release</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/184">Persist core.sshCommand for submodules</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/163">Add support ssh</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/179">Convert submodule SSH URL to HTTPS, when not using SSH</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/157">Add submodule support</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/144">Follow proxy settings</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/141">Fix ref for pr closed event when a pr is merged</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/128">Fix issue checking detached when git less than 2.22</a></li>
</ul>
<h2>v2.0.0</h2>
<ul>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/108">Do not pass cred on command line</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/107">Add input persist-credentials</a></li>
<li><a href="https://github-redirect.dependabot.com/actions/checkout/pull/104">Fallback to REST API to download repo</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="2541b1294d"><code>2541b12</code></a> Prepare changelog for v3.0.2. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/777">#777</a>)</li>
<li><a href="0ffe6f9c55"><code>0ffe6f9</code></a> Add set-safe-directory input to allow customers to take control. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/770">#770</a>)</li>
<li><a href="dcd71f6466"><code>dcd71f6</code></a> Enforce safe directory (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/762">#762</a>)</li>
<li><a href="add3486cc3"><code>add3486</code></a> Patch to fix the dependbot alert. (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/744">#744</a>)</li>
<li><a href="5126516654"><code>5126516</code></a> Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/741">#741</a>)</li>
<li><a href="d50f8ea767"><code>d50f8ea</code></a> Add v3.0 release information to changelog (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/740">#740</a>)</li>
<li><a href="2d1c1198e7"><code>2d1c119</code></a> update test workflows to checkout v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/709">#709</a>)</li>
<li><a href="a12a3943b4"><code>a12a394</code></a> update readme for v3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/708">#708</a>)</li>
<li><a href="8f9e05e482"><code>8f9e05e</code></a> Update to node 16 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/689">#689</a>)</li>
<li>See full diff in <a href="https://github.com/actions/checkout/compare/v2...v3">compare view</a></li>
</ul>
</details>
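The notes above call out `fetch-depth=0` and the new `set-safe-directory` input; a hedged sketch of a v3 step combining them (values are illustrative, not this repository's configuration):

```yaml
steps:
  - uses: actions/checkout@v3
    with:
      fetch-depth: 0            # fetch all history for all tags and branches
      set-safe-directory: true  # input added in v3.0.2 (#770); true is the default
```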
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---


2567: Bump docker/setup-buildx-action from 1 to 2 r=curquiza a=dependabot[bot]

Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's releases</a>.</em></p>
<blockquote>
<h2>v2.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0">https://github.com/docker/setup-buildx-action/compare/v1.7.0...v2.0.0</a></p>
<h2>v1.7.0</h2>
<ul>
<li>Standalone mode by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> in (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/119">#119</a>)</li>
<li>Update dev dependencies and workflow by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/114">#114</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/130">#130</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/108">#108</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/109">#109</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/110">#110</a>)</li>
<li>Bump actions/checkout from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/126">#126</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.7.1 to 1.7.2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/128">#128</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/129">#129</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/132">#132</a>)</li>
<li>Bump codecov/codecov-action from 2 to 3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/133">#133</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/136">#136</a>)</li>
</ul>
<h2>v1.6.0</h2>
<ul>
<li>Add <code>config-inline</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/106">#106</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/104">#104</a>)</li>
<li>Bump codecov/codecov-action from 1 to 2 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/101">#101</a>)</li>
</ul>
<h2>v1.5.1</h2>
<ul>
<li>Explicit version spec for caching (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/100">#100</a>)</li>
</ul>
<h2>v1.5.0</h2>
<ul>
<li>Allow building buildx from source (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/99">#99</a>)</li>
</ul>
<h2>v1.4.1</h2>
<ul>
<li>Fix <code>docker: invalid reference format</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/97">#97</a>)</li>
</ul>
<h2>v1.4.0</h2>
<ul>
<li>Update dev deps (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/95">#95</a>)</li>
<li>Use built-in <code>getExecOutput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/94">#94</a>)</li>
<li>Use <code>core.getBooleanInput</code> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/93">#93</a>)</li>
<li>Bump <code>@actions/exec</code> from 1.0.4 to 1.1.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/85">#85</a>)</li>
<li>Bump y18n from 4.0.0 to 4.0.3 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/91">#91</a>)</li>
<li>Bump hosted-git-info from 2.8.8 to 2.8.9 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/89">#89</a>)</li>
<li>Bump ws from 7.3.1 to 7.5.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/90">#90</a>)</li>
<li>Bump <code>@actions/tool-cache</code> from 1.6.1 to 1.7.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/82">#82</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/86">#86</a>)</li>
<li>Bump <code>@actions/core</code> from 1.2.7 to 1.4.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/80">#80</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/87">#87</a>)</li>
</ul>
<h2>v1.3.0</h2>
<ul>
<li>Display BuildKit version (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/72">#72</a>)</li>
</ul>
<h2>v1.2.0</h2>
<ul>
<li>Remove os limitation (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/71">#71</a>)</li>
<li>Add test job for <code>config</code> input (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/68">#68</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="dc7b9719a9"><code>dc7b971</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/131">#131</a> from crazy-max/node16</li>
<li><a href="f55bc08278"><code>f55bc08</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/141">#141</a> from crazy-max/fix-test</li>
<li><a href="aa877a9d36"><code>aa877a9</code></a> ci: fix standalone test</li>
<li><a href="130c56f342"><code>130c56f</code></a> Node 16 as default runtime</li>
<li>See full diff in <a href="https://github.com/docker/setup-buildx-action/compare/v1...v2">compare view</a></li>
</ul>
</details>
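Again, upgrading is only a reference bump for consumers; a minimal step sketch (step name illustrative):

```yaml
steps:
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v2
```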
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=1&new-version=2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---


Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:34:57 +00:00
bors[bot]
a733271ced Merge #2563
2563: Bump docker/metadata-action from 3 to 4 r=curquiza a=dependabot[bot]

Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/releases">docker/metadata-action's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<ul>
<li>Node 16 as default runtime by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/176">#176</a>)
<ul>
<li>This requires a minimum <a href="https://github.com/actions/runner/releases/tag/v2.285.0">Actions Runner</a> version of v2.285.0, which is by default available in GHES 3.4 or later.</li>
</ul>
</li>
<li>Do not sanitize before pattern matching by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/201">#201</a>)
<ul>
<li>Breaking change with <code>type=match</code> pattern matching</li>
</ul>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0">https://github.com/docker/metadata-action/compare/v3.8.0...v4.0.0</a></p>
<h2>v3.8.0</h2>
<ul>
<li>Add attribute to enable/disable images by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/193">#193</a>)</li>
<li>Add <code>is_default_branch</code> global expression by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/192">#192</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/197">#197</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/198">#198</a>)</li>
<li>Update fixtures (dev) by <a href="https://github.com/crazy-max"><code>@crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/190">#190</a>)</li>
<li>Bump semver from 7.3.5 to 7.3.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/185">#185</a>)</li>
<li>Bump moment from 2.29.2 to 2.29.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/187">#187</a>)</li>
<li>Bump csv-parse from 4.16.3 to 5.0.4 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/195">#195</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0">https://github.com/docker/metadata-action/compare/v3.7.0...v3.8.0</a></p>
<h2>v3.7.0</h2>
<ul>
<li>Handle comments for multi-line inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/172">#172</a>)</li>
<li>Missing <code>json</code> output in <code>action.yml</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/167">#167</a>)</li>
<li>Update dev dependencies and workflow (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/175">#175</a>)</li>
<li>Bump minimist from 1.2.5 to 1.2.6 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/182">#182</a>)</li>
<li>Bump moment from 2.29.1 to 2.29.2 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/180">#180</a>)</li>
<li>Bump <code>@actions/github</code> from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/179">#179</a>)</li>
<li>Bump node-fetch from 2.6.1 to 2.6.7 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/173">#173</a>)</li>
</ul>
<h2>v3.6.2</h2>
<ul>
<li>Handle raw statement for pre-release (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/155">#155</a> <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/156">#156</a>)</li>
</ul>
<h2>v3.6.1</h2>
<ul>
<li>Preserve quotes inside unquoted field (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/153">#153</a>)</li>
</ul>
<h2>v3.6.0</h2>
<ul>
<li><code>base_ref</code> global expression (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/142">#142</a>)</li>
<li>Trim tags and flavor inputs (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/143">#143</a>)</li>
<li>Bump <code>@actions/core</code> from 1.5.0 to 1.6.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/135">#135</a>)</li>
<li>Bump ansi-regex from 5.0.0 to 5.0.1 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/134">#134</a>)</li>
<li>Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/132">#132</a>)</li>
<li>Bump csv-parse from 4.16.0 to 4.16.3 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/131">#131</a>)</li>
</ul>
<h2>v3.5.0</h2>
<ul>
<li>Add global expression <code>date</code> (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/121">#121</a>)</li>
<li>Bump <code>@actions/core</code> from 1.4.0 to 1.5.0 (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/122">#122</a>)</li>
</ul>
<h2>v3.4.1</h2>
<ul>
<li>Only return edge if branch matches (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/115">#115</a>)</li>
</ul>
<h2>v3.4.0</h2>
<ul>
<li>PEP 440 support (<a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/108">#108</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Upgrade guide</summary>
<p><em>Sourced from <a href="https://github.com/docker/metadata-action/blob/master/UPGRADE.md">docker/metadata-action's upgrade guide</a>.</em></p>
<blockquote>
<h1>Upgrade notes</h1>
<h2>v2 to v3</h2>
<ul>
<li>Repository has been moved to docker org. Replace <code>crazy-max/ghaction-docker-meta@v2</code> with <code>docker/metadata-action@v4</code></li>
<li>The default bake target has been changed: <code>ghaction-docker-meta</code> &gt; <code>docker-metadata-action</code></li>
</ul>
<h2>v1 to v2</h2>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#inputs">inputs</a>
<ul>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-sha"><code>tag-sha</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-edge--tag-edge-branch"><code>tag-edge</code> / <code>tag-edge-branch</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-semver"><code>tag-semver</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-match--tag-match-group"><code>tag-match</code> / <code>tag-match-group</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-latest"><code>tag-latest</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-schedule"><code>tag-schedule</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#tag-custom--tag-custom-only"><code>tag-custom</code> / <code>tag-custom-only</code></a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#label-custom"><code>label-custom</code></a></li>
</ul>
</li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#basic-workflow">Basic workflow</a></li>
<li><a href="https://github.com/docker/metadata-action/blob/master/#semver-workflow">Semver workflow</a></li>
</ul>
<h3>inputs</h3>
<table>
<thead>
<tr>
<th>New</th>
<th>Unchanged</th>
<th>Removed</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>tags</code></td>
<td><code>images</code></td>
<td><code>tag-sha</code></td>
</tr>
<tr>
<td><code>flavor</code></td>
<td><code>sep-tags</code></td>
<td><code>tag-edge</code></td>
</tr>
<tr>
<td><code>labels</code></td>
<td><code>sep-labels</code></td>
<td><code>tag-edge-branch</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-semver</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-match</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-match-group</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-latest</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-schedule</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-custom</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>tag-custom-only</code></td>
</tr>
<tr>
<td></td>
<td></td>
<td><code>label-custom</code></td>
</tr>
</tbody>
</table>
<h4><code>tag-sha</code></h4>
<pre lang="yaml"><code>tags: |
  type=sha
</code></pre>
<h4><code>tag-edge</code> / <code>tag-edge-branch</code></h4>
<pre lang="yaml"><code>tags: |
  # default branch
  type=edge
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
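Putting the upgrade-guide fragments together, a hedged sketch of a v4-style step (the image name is assumed for illustration; comments in multi-line inputs are supported since v3.7.0, per the notes above):

```yaml
- name: Docker meta
  uses: docker/metadata-action@v4
  with:
    images: getmeili/meilisearch  # assumed image name, for illustration
    tags: |
      type=sha   # replaces the removed tag-sha input
      type=edge  # replaces the removed tag-edge input
```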
<details>
<summary>Commits</summary>
<ul>
<li><a href="69f6fc9d46"><code>69f6fc9</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/203">#203</a> from crazy-max/san-fix</li>
<li><a href="2f5b5ae8bf"><code>2f5b5ae</code></a> Sanitize tag earlier</li>
<li><a href="f206c36955"><code>f206c36</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/202">#202</a> from crazy-max/v4-prep</li>
<li><a href="a20adfa74e"><code>a20adfa</code></a> readme: set metadata-action to v4</li>
<li><a href="26b9439ce3"><code>26b9439</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/201">#201</a> from crazy-max/fix-sanitization</li>
<li><a href="467883f452"><code>467883f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/metadata-action/issues/176">#176</a> from crazy-max/node-16</li>
<li><a href="5edf56f2c4"><code>5edf56f</code></a> Node 16 as default runtime</li>
<li><a href="678218f2be"><code>678218f</code></a> Note about image name and tag sanitization</li>
<li><a href="e44c1fbe6e"><code>e44c1fb</code></a> Do not sanitize before pattern matching</li>
<li>See full diff in <a href="https://github.com/docker/metadata-action/compare/v3...v4">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/metadata-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


---


Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-29 08:07:10 +00:00
Ryan Russell
28609c4176 chore(auth test): Correct compute_authorized_search macro
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:26 -05:00
Ryan Russell
a626cf4c99 chore: test comment readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:25 -05:00
Ryan Russell
9b660e1058 chore(routes): correct typo
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-28 18:09:22 -05:00
dependabot[bot]
38b85ec547 Bump docker/setup-buildx-action from 1 to 2
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:43 +00:00
dependabot[bot]
ed185fb636 Bump actions/checkout from 2 to 3
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:41 +00:00
dependabot[bot]
f7b47b43f4 Bump docker/setup-qemu-action from 1 to 2
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:38 +00:00
dependabot[bot]
8e703fbabe Bump codecov/codecov-action from 1 to 3
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:35 +00:00
dependabot[bot]
7ced5c2cc7 Bump docker/metadata-action from 3 to 4
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-28 20:21:32 +00:00
bors[bot]
ef95d1d545 Merge #2561
2561: Add dependabot for GHA r=Kerollmops a=curquiza

No panic: I'm only adding Dependabot to update our CI, not the Rust dependencies (indeed, no easy command exists for those).
If it generates too many regular notifications, as usual, I will remove it.
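
For reference, enabling Dependabot for GitHub Actions only takes a small config file; a minimal sketch of what such a `.github/dependabot.yml` can look like (the update interval is an assumption, not necessarily what this PR uses):

```yaml
# A minimal sketch of .github/dependabot.yml, assuming a weekly interval
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"        # workflows are discovered under .github/workflows
    schedule:
      interval: "weekly"  # assumed; adjust to taste
```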

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-28 20:02:30 +00:00
Clémentine Urquizar
f98c8d7f8b Add dependabot for GHA 2022-06-28 18:58:23 +02:00
bors[bot]
4862993482 Merge #2525
2525: Auth: Provide all document related permissions for action document.* r=Kerollmops a=janithpet

Added an `Action::DocumentsAll` identifier as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486), along with the other necessary changes in `action.rs`.

Inside `store.rs`, added an extra condition in `HeedAuthStore::put_api_key` to append all document related permissions if `key.actions.contains(&DocumentsAll)`.

Updated the tests as [suggested](https://github.com/meilisearch/meilisearch/issues/2080#issuecomment-1022952486).

I am quite new to Rust, so please let me know if I have made any mistakes: have I written the code in the most idiomatic/efficient way? I am aware that the way I append the document permissions could create duplicates in the `actions` vector, but I am not sure how to fix that in a simple way (other than using other dependencies like [itertools](https://github.com/rust-itertools/itertools), for example).

## What does this PR do?
Fixes #2080 

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: janithPet <jpetangoda@gmail.com>
2022-06-28 14:02:06 +00:00
bors[bot]
8a64ee0c14 Merge #2557
2557: add more tests on the formatted route r=Kerollmops a=irevoire

We had a bunch of tests trying to send arrays of arrays in a GET request; this is actually not supported, so I updated the tests to only send a single array or a direct string.

Also, the real tests that ensure arrays of arrays are well handled live in milli, so I don't think we should lose time trying to "improve" our test surface on this point.

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-28 13:41:53 +00:00
Irevoire
05ee2eff01 add more tests on the formatted route 2022-06-28 13:17:55 +02:00
bors[bot]
7ee8855499 Merge #2553
2553: Fix ci binary push r=irevoire a=curquiza

Prevent the version check when releasing an RC in the binary-publish workflow.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 11:18:40 +00:00
Clémentine Urquizar
7f4fab876d Add fix to publish-binaries.yml 2022-06-27 13:11:58 +02:00
bors[bot]
688d6f704b Merge #2549
2549: Fix CI checking version compatibility r=irevoire a=curquiza

- Fix CI checks in the `if` conditions regarding the created `stable` variable
- Fix command to remove prefix `refs/tags/v` from `GITHUB_REF` in `check-release.sh` script
- Change from `sh` to `bash` for `check-release.sh` script
- Fix error message

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-27 08:28:31 +00:00
Clémentine Urquizar
f83188fd60 Fix CI with check-release.sh script 2022-06-24 13:11:04 +02:00
bors[bot]
9e261b996f Merge #2543
2543: fix all the array on the search get route and improve the tests r=curquiza a=irevoire

fix #2527

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-23 14:51:36 +00:00
bors[bot]
90b0a4e99f Merge #2546
2546: Bump milli to 0.31.1 r=irevoire a=Kerollmops



Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-23 11:07:20 +00:00
Kerollmops
dad86fc3d6 Make the changes necessary to use milli 0.31.1 2022-06-23 10:47:49 +02:00
Kerollmops
7feb15df28 Bump milli to 0.31.1 2022-06-23 10:47:48 +02:00
bors[bot]
bf865f51bb Merge #2544
2544: Fix content of dump/assets for testing r=curquiza a=loiclec

This is just a change to the content of two .dump files used in the integration tests.

Those files contained settings with the criterion `desc(fame)`, which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of `desc(field)` or `asc(field)` were changed to 
`field:desc` and `field:asc`.

The tests were (wrongly) passing because the ranking rules were never parsed.

Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2022-06-22 15:12:19 +00:00
Tamo
13f258513f meilisearch-lib was missing a feature in its cargo.toml 2022-06-22 16:44:16 +02:00
Loïc Lecrenier
f8aa21bc16 Fix content of dump/assets for testing
Some contained settings with the criterion desc(fame), which is invalid
for a v3 or higher dump.

The change took place in the updates.json file inside the decompressed
.dump files. Instances of desc(field) or asc(field) were changed to 
field:desc and field:asc
2022-06-22 14:51:52 +02:00
bors[bot]
1ffe90bf15 Merge #2530
2530: Check the version in Cargo.toml before publishing r=irevoire a=curquiza

Fixes #2079 

Also:
- Improves the current Docker CI for v0.28.0: the current implementation would run 2 CIs instead of just one for the official release
- Moves the `is-latest-release.sh` script and updates the documentation comment
- Fixes the version of permissive-json-pointer

How to test the script?

```
export GITHUB_REF='refs/tags/v0.28.0'
sh .github/scripts/check-release.sh
```
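
A hedged sketch of how `check-release.sh` can gate the publish jobs (the job and step names are illustrative, not this repository's actual workflow YAML):

```yaml
jobs:
  check-version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Check that the git tag matches the Cargo.toml version
        run: bash .github/scripts/check-release.sh
```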

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-22 12:42:29 +00:00
Tamo
c47369b502 fix all the array on the search get route and improve the tests 2022-06-22 14:33:44 +02:00
Clémentine Urquizar
32c8846514 Rollback 0.0.0 versionning 2022-06-22 12:20:12 +02:00
Clémentine Urquizar
7490383d4f Update the not-released version in Cargo.toml files 2022-06-21 19:17:33 +02:00
Clémentine Urquizar
c6ed756dbc Update script after review 2022-06-21 10:46:32 +02:00
Clémentine Urquizar - curqui
de16de20f4 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:24 +02:00
Clémentine Urquizar - curqui
c484d28646 Update .github/scripts/check-release.sh
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-21 10:14:17 +02:00
Clémentine Urquizar
5318e53248 Move is-latest-release.sh script into the scripts folder 2022-06-21 10:12:00 +02:00
Clémentine Urquizar
06e05cc4f8 Update Docker credentials 2022-06-20 14:39:50 +02:00
bors[bot]
ba839a909f Merge #2537
2537: Change information place regarding release assets r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-20 08:40:25 +00:00
Clémentine Urquizar
8b98303191 Change information place regarding release assets 2022-06-20 10:36:34 +02:00
Clémentine Urquizar
2dde6fadb4 Check the version in Cargo.toml before publishing 2022-06-20 10:22:01 +02:00
bors[bot]
eb8d53a915 Merge #2529
2529: Improve docker CI: push vX.Y tag (without patch) to DockerHub (for v0.28.0) r=curquiza a=curquiza

Bringing a commit from main ([5ae5b06](5ae5b06018)) into release-v0.28.0 so that this change is already applied for v0.28.0

Following the one merged on main: https://github.com/meilisearch/meilisearch/pull/2521

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 15:38:25 +00:00
Janith Petangoda
10f3150150 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 17:14:09 +02:00
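For illustration, a hedged sketch of the `vX.Y` tag-derivation idea described in this commit; the step and variable names are mine, not the actual CI code:

```yaml
- name: Derive a vX.Y Docker tag from a vX.Y.Z git tag
  run: |
    tag="${GITHUB_REF#refs/tags/}"                  # e.g. v0.28.0
    if [[ "$tag" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
      echo "MINOR_TAG=${tag%.*}" >> "$GITHUB_ENV"   # e.g. v0.28
    fi
```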
bors[bot]
54cd9976d7 Merge #2521
2521: Improve docker CI: push `vX.Y` tag (without patch) to DockerHub (#2507) r=curquiza a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2507

Fixes #2497 

Co-authored-by: Janith Petangoda <22471198+janithpet@users.noreply.github.com>
2022-06-16 13:28:27 +00:00
Janith Petangoda
5ae5b06018 Improve docker CI: push vX.Y tag (without patch) to DockerHub (#2507)
* Create a docker tag without patch version if git tag has 0 patch version.

* Create Docker tag without patch number if git tag follows v<number>.<number>.<number>

Add minor changes on CI
2022-06-16 09:51:20 +02:00
janithPet
6f910f89eb Ran formatter on the code. 2022-06-15 22:23:38 +01:00
bors[bot]
b455c3897f Merge #2524
2524: Add the specific routes for the pagination and faceting settings r=curquiza a=Kerollmops

Fixes #2522

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-15 17:01:52 +00:00
janithPet
9a8fb6c55a Updated actions identifiers to be in a more pleasing order 2022-06-15 17:27:41 +01:00
janithPet
4016161035 Provide all document related permissions for action document.* 2022-06-15 16:11:39 +01:00
Kerollmops
9d692ba1c6 Add more tests for the pagination and faceting subsettings routes 2022-06-15 15:53:43 +02:00
Kerollmops
22e1ac969a Add specific routes for the pagination and faceting settings 2022-06-15 15:27:06 +02:00
bors[bot]
3340af1ba9 Merge #2517
2517: Fix typos in `tasks/*.rs` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

## What does this PR do?
Readability in `tasks/*.rs` files.

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-15 09:27:21 +00:00
bors[bot]
49d509936b Merge #2520
2520: Use nightly for cargo fmt in CI (for `release-v0.28.0` branch) r=Kerollmops a=curquiza

Following https://github.com/meilisearch/meilisearch/pull/2519

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 09:08:40 +00:00
bors[bot]
e46b853fbf Merge #2519
2519: Use nightly for cargo fmt in CI r=Kerollmops a=curquiza

Rustfmt CI currently does not pass on main with nightly, see https://github.com/meilisearch/meilisearch/pull/2517#issuecomment-1156138216

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-15 08:35:46 +00:00
Clémentine Urquizar
44e004d895 Use nightly for cargo fmt in CI 2022-06-15 10:33:03 +02:00
Ryan Russell
71bf9b5b9b docs: Readability improvements in tasks/
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-14 20:38:45 -05:00
bors[bot]
053071d866 Merge #2502
2502: test dump v5 r=ManyTheFish a=MarinPostma

This PR adds a test for dump v5.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-13 10:11:22 +00:00
bors[bot]
0f6e650ba2 Merge #2508
2508: docs: Readability improvements in `download-latest.sh` and `src/analytics/*.rs` r=Kerollmops a=ryanrussell

# Pull Request

## What does this PR do?
Improves readability in `download-latest.sh` and in the `src/analytics/*.rs` files

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-13 08:18:17 +00:00
ad hoc
97daea5a66 ignore dump test on windows 2022-06-13 09:16:36 +01:00
Ryan Russell
8990b12609 docs: Readability fixes in src/analytics/*.rs files
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:21:05 -05:00
Ryan Russell
07a35c6445 docs: Bash comment readability fixes
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-11 10:20:28 -05:00
bors[bot]
4fc73195e6 Merge #2466
2466: index resolver tests r=MarinPostma a=MarinPostma

add more index resolver tests


depends on #2455 

followup #2453 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 17:50:01 +00:00
ad hoc
1425d62a31 test dump v5 2022-06-09 18:35:17 +02:00
bors[bot]
87d4bf672c Merge #2500
2500: Bump milli r=Kerollmops a=irevoire

Fix #2380

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-09 16:35:00 +00:00
Tamo
2063fbd985 chore: bump milli 2022-06-09 18:34:03 +02:00
bors[bot]
de356061db Merge #2414
2414: Improve index uid validation upon API key creation r=Kerollmops a=pierre-l

- ~Use an IndexUid newtype to enforce stronger constraints~
- ~`cargo update -p vergen`~ (`rustup update` was the proper fix for this)
- Add a new `meilisearch_types` crate
- Move `meilisearch_error` to `meilisearch_types::error`
- Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
- Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
- Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`
- Use the `IndexUid` and `StarOr` in `meilisearch_auth::Key`

Fixes #2158


Co-authored-by: pierre-l <pierre.larger@gmail.com>
2022-06-09 15:41:51 +00:00
ad hoc
9a6841c7ce remove unused test 2022-06-09 17:10:56 +02:00
ad hoc
354f7fb2bf test index_update 2022-06-09 17:10:55 +02:00
ad hoc
0333bad057 delete documents test 2022-06-09 17:10:55 +02:00
ad hoc
0e416b4bcd delete index test 2022-06-09 17:10:55 +02:00
bors[bot]
20dd259f23 Merge #2455
2455: refactor perform task r=curquiza a=MarinPostma

Refactor some index resolver functions to make duties more consistent and the code easier to test


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-09 14:49:59 +00:00
bors[bot]
18095fa4e1 Merge #2498
2498: Update CONTRIBUTING.md to add the release process r=Kerollmops a=curquiza

Add release process to the contributing

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 14:23:49 +00:00
pierre-l
b8745420da Use the IndexUid and StarOr in meilisearch_auth::Key
Move `meilisearch_http::routes::StarOr` to `meilisearch_types::star_or`

Fixes #2158
2022-06-09 16:14:15 +02:00
pierre-l
36cb09eb25 Add a new meilisearch_types crate
Move `meilisearch_error` to `meilisearch_types::error`
Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
2022-06-09 16:14:13 +02:00
ad hoc
8fc3b7d3b0 refactor process_document_addition_batch 2022-06-09 14:59:20 +02:00
ad hoc
64e3096790 process_task updates task events 2022-06-09 14:59:20 +02:00
ad hoc
b594d49def add IndexResolver BatchHandler tests 2022-06-09 14:59:19 +02:00
ad hoc
fbba67fbe9 add mocker to IndexResolver 2022-06-09 14:59:19 +02:00
Clémentine Urquizar
232b2baaa3 Update TOC 2022-06-09 14:49:49 +02:00
Clémentine Urquizar
02c5c193a2 Update CONTRIBUTING.md to add the release process 2022-06-09 14:48:50 +02:00
bors[bot]
b9b32d65a8 Merge #2494
2494: Introduce the new faceting and pagination settings r=ManyTheFish a=Kerollmops

This PR introduces two new settings following the newly created spec https://github.com/meilisearch/specifications/pull/157:
 - The `faceting.max_values_per_facet` one describes the maximum number of values (each with a count) returned for a facet in a facet distribution query.
 - The `pagination.limited_to` one describes the maximum number of documents that a search query can ever return.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-09 12:09:21 +00:00
bors[bot]
680606fd82 Merge #2491
2491: Improve Docker CIs r=curquiza a=curquiza

Close #1901 

Continuing the work of:

- [x] Merge the two Docker CIs into one (#2477). The `latest` tag is still only pushed for the official release.
- [x] Add a cron job in GHA to run the CI (without publishing the image) every day, to check that the Docker build keeps working (a minimal sketch of the trigger follows this list).
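
A minimal sketch of such a daily trigger (the exact time is an assumption):

```yaml
on:
  schedule:
    - cron: '0 0 * * *'  # assumed time: run the Docker build check every day at 00:00 UTC
```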

Co-authored-by: Lawrence Chou <lawrencechou1024@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-09 11:26:12 +00:00
Clémentine Urquizar
4a494ad2fa Add schedule to the CI 2022-06-09 11:50:20 +02:00
bors[bot]
2b2e571c76 Merge #2460
2460: Create custom error types for `TaskType`, `TaskStatus`, and `IndexUid` r=Kerollmops a=walterbm

# Pull Request

## What does this PR do?
Fixes #2443 by making the following changes (a minimal sketch of the error pattern follows this list):

- Add custom `TaskTypeError` for `TaskType::from_str` 
- Add custom `TaskStatusError` for `TaskStatus::from_str`
- Add custom `IndexUidFormatError` for `IndexUid::from_str`
- Implement `From<IndexUidFormatError> for IndexResolverError` to convert between errors
- Replace all usages of `IndexUid::new` with `IndexUid::from_str`
    - **NOTE** I am relatively new to Rust and I struggled a lot with this final part. This PR ended up with a messy error conversion which does not seem ideal. Please let me know if you have any suggestions for how to make this better and I'll be happy to make any updates!
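
A minimal sketch of the custom-error pattern described above, using a hypothetical `TaskStatus` rather than the exact Meilisearch definitions:

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, Clone, Copy)]
pub enum TaskStatus {
    Enqueued,
    Processing,
    Succeeded,
    Failed,
}

// A dedicated error type instead of a stringly-typed one.
#[derive(Debug)]
pub struct TaskStatusError {
    invalid_status: String,
}

impl fmt::Display for TaskStatusError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid task status `{}`", self.invalid_status)
    }
}

impl std::error::Error for TaskStatusError {}

impl FromStr for TaskStatus {
    type Err = TaskStatusError;

    // Case-insensitive parsing, as required by the spec changes above.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_lowercase().as_str() {
            "enqueued" => Ok(TaskStatus::Enqueued),
            "processing" => Ok(TaskStatus::Processing),
            "succeeded" => Ok(TaskStatus::Succeeded),
            "failed" => Ok(TaskStatus::Failed),
            _ => Err(TaskStatusError { invalid_status: s.to_string() }),
        }
    }
}
```

A `From<TaskStatusError>` impl for a higher-level error type (hypothetical) is then all that is needed to bubble it up with `?`.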

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: walter <walter.beller.morales@gmail.com>
2022-06-09 09:10:28 +00:00
Kerollmops
6f0d3472b1 Change the test for the new pagination.limited_to setting 2022-06-09 10:56:43 +02:00
Kerollmops
5cd13cc303 Add a test to validate the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
Kerollmops
1e3dcbea3f Plug the pagination.limited_to setting 2022-06-09 10:56:42 +02:00
Kerollmops
b96399d24b Plug the faceting.max_values_per_facet setting 2022-06-09 10:56:42 +02:00
Kerollmops
5450b5ced3 Add the faceting.max_values_per_facet setting 2022-06-09 10:54:32 +02:00
Kerollmops
c924614527 Bump milli to 0.29.2 2022-06-09 10:54:28 +02:00
walter
96d4fd54bb Change the index uid format check for better legibility 2022-06-08 19:58:47 -04:00
walter
3e5d6be86b Rename TaskType::from_str parameter to 'type_' 2022-06-08 19:57:45 -04:00
walter
2b944ecd89 Remove IndexUid::new and replace with IndexUid::from_str 2022-06-08 19:56:01 -04:00
bors[bot]
db42268888 Merge #2473
2473: fix blocking in dumps r=irevoire a=MarinPostma

This PR fixes two blocking calls in the dump process.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 17:14:46 +00:00
ad hoc
108b3520de fix blocking auth controller dump 2022-06-08 18:19:29 +02:00
bors[bot]
f64b824c45 Merge #2493
2493: Update version for next release (v0.28.0) r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-06-08 16:02:03 +00:00
Clémentine Urquizar
fc4990b968 Update version for next release (v0.28.0) 2022-06-08 17:59:18 +02:00
Clémentine Urquizar - curqui
39a1dcb32c Update .github/workflows/publish-docker.yml 2022-06-08 17:27:03 +02:00
Lawrence Chou
afcc493480 Merge publish-docker-latest.yml & publish-docker-tag.yml (#2477)
close #1901
2022-06-08 17:17:20 +02:00
bors[bot]
6171f17f1d Merge #2468
2468: Update milli 0.29 r=Kerollmops a=ManyTheFish

- [x] Update milli to 0.29
- [x] Integrate charabia
- [x] Set disabled_words to default when Index::exact_words returns None
- [x] Fix ranking rules integration test

fixes #2375
fixes #2144
fixes #2417
fixes #2407

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 14:29:20 +00:00
ManyTheFish
55169ff914 Fix test get_document_s_nested_attributes_to_retrieve 2022-06-08 15:56:06 +02:00
ManyTheFish
0a16f71563 Increase wait_task waiting time 2022-06-08 15:56:03 +02:00
bors[bot]
bd280f75e7 Merge #2487 #2489
2487: Feat(auth): Use hmac instead of sha256 to derive API keys from master key r=MarinPostma a=ManyTheFish

Wrap sha256 in HMAC to derive new API keys from the master key.
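
A minimal sketch of this kind of derivation with the `hmac` and `sha2` crates (the exact key material and layout used by Meilisearch may differ):

```rust
use hmac::{Hmac, Mac};
use sha2::{Digest, Sha256};

type HmacSha256 = Hmac<Sha256>;

// Derive an API key from the master key and some per-key data.
fn derive_api_key(master_key: &[u8], key_data: &[u8]) -> Vec<u8> {
    // The master key is hashed first (see the commit below), normalizing its length.
    let hashed_master = Sha256::digest(master_key);
    let mut mac = HmacSha256::new_from_slice(hashed_master.as_slice())
        .expect("HMAC can take a key of any size");
    mac.update(key_data);
    mac.finalize().into_bytes().as_slice().to_vec()
}
```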


2489: Fix(auth): Authorization tests were not testing keys unrestricted on index r=Kerollmops a=ManyTheFish



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-08 13:48:07 +00:00
bors[bot]
617802bae7 Merge #2480
2480: Bring release v0.27.2 changes into stable r=Kerollmops a=curquiza



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-08 13:08:26 +00:00
ManyTheFish
1a7631c807 Hash master_key before passing it to HMAC 2022-06-08 14:54:47 +02:00
ManyTheFish
17f30c2b2d Fix(auth): Authorization tests were not testing keys unrestricted on index 2022-06-08 14:52:32 +02:00
ManyTheFish
09938c9b6f Patch ranking rules error test 2022-06-08 14:38:09 +02:00
ManyTheFish
f5306eb5b0 Set disabled_words to default when Index::exact_words returns None 2022-06-08 14:38:09 +02:00
ManyTheFish
173eea06e1 Replace old tokenizer by charabia 2022-06-08 14:38:09 +02:00
ManyTheFish
8d09772334 Update milli 2022-06-08 14:38:05 +02:00
ManyTheFish
987a7f8926 Wrap sha256 in HMAC instead of using sha256 directly 2022-06-08 14:25:12 +02:00
bors[bot]
0928f3d41c Merge #2453
2453: test index resolver r=MarinPostma a=MarinPostma

add some tests to the `IndexResolver` implementation of `BatchHandler`


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-08 11:05:30 +00:00
bors[bot]
09ec8e9fca Merge #2471
2471: Remove the connection keep-alive timeout r=MarinPostma a=Thearas

# Pull Request

## What does this PR do?
Fixes <https://github.com/meilisearch/meilisearch-go/issues/221>.
Meilisearch has a default connection keep-alive timeout of 5s, which means it will close connections with an idle time >= 5s.
This PR sets the actix-web keep-alive config to `KeepAlive::Os`, letting the client and the OS decide when to close the connection.
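
A minimal actix-web sketch of that change (the handler and bind address are placeholders):

```rust
use actix_web::{http::KeepAlive, web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/health", web::get().to(|| async { "ok" })))
        // Delegate keep-alive to the OS instead of a fixed 5s timeout.
        .keep_alive(KeepAlive::Os)
        .bind(("127.0.0.1", 7700))?
        .run()
        .await
}
```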

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
2022-06-08 06:59:25 +00:00
bors[bot]
1968950b0f Merge #2475
2475: feat(auth): Paginate API keys listing r=irevoire a=ManyTheFish

- [x] Update tests
- [x] Use Pagination helpers to paginate API keys (thanks to `@irevoire)`

fixes #2442


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:42:02 +00:00
ManyTheFish
6ffa222218 feat(auth): Paginate API keys listing
- [x] Update tests
- [x] Use Pagination helpers to paginate API keys

fixes #2442
2022-06-07 17:37:48 +02:00
bors[bot]
79e67df73d Merge #2474
2474: feat(auth): remove `dumps.get` action from keys r=ManyTheFish a=ManyTheFish

- [x] Update tests
- [x] Remove `dumps.get` action


related to: #2430

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 15:03:41 +00:00
Tamo
7fd66b80ca bump meilisearch version 2022-06-07 16:25:47 +02:00
Tamo
0cfad7eeac bump milli version 2022-06-07 16:23:23 +02:00
bors[bot]
72be296852 Merge #2476
2476: fix(test): Windows index pagination test r=ManyTheFish a=ManyTheFish

The test `index::get_index::get_and_paginate_indexes` fails on every PR during the `Tests on windows-latest` CI job.
- Reduce the default index size in tests.
- Wait for each task instead of waiting for the last one at the end.

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-07 14:17:01 +00:00
ManyTheFish
a7bff35e49 fix(test): Reduce default index size in tests 2022-06-07 15:16:34 +02:00
ManyTheFish
3b01ed4fe8 feat(auth): remove dumps.get action from keys 2022-06-07 10:49:28 +02:00
ad hoc
cbd27d313c fix blocking writing of meta file in dump 2022-06-07 10:07:40 +02:00
ad hoc
6ac8675c6d add IndexResolver BatchHandler tests 2022-06-07 09:33:57 +02:00
ad hoc
df61ca9cae add mocker to IndexResolver 2022-06-07 09:33:57 +02:00
ad hoc
bbd685af5e move IndexResolver to real module 2022-06-07 09:33:56 +02:00
Thearas
9b9cbc815b fmt 2022-06-07 03:50:39 +08:00
Thearas
fd11903920 remove the connection timeout 2022-06-07 03:38:23 +08:00
bors[bot]
c3003065e8 Merge #2464
2464: Simplify the `star_or` function usage r=ManyTheFish a=Kerollmops

This PR simplifies the usage of the `star_or` function that was originally introduced in #2399. The `serde-cs` PR https://github.com/naughie/serde-cs/pull/1 was merged and implements the `IntoIterator` trait on the `CS` types, which makes it possible to collect directly without converting a `CS` into the inner type (a `Vec`).
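
For reference, a minimal sketch of the `StarOr` idea (a value that may be the `*` wildcard); parsing of the comma-separated lists around it is delegated to `serde-cs`:

```rust
use std::str::FromStr;

// Either the `*` wildcard or a concrete value.
#[derive(Debug, Clone, PartialEq)]
pub enum StarOr<T> {
    Star,
    Other(T),
}

impl<T: FromStr> FromStr for StarOr<T> {
    type Err = T::Err;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.trim() == "*" {
            Ok(StarOr::Star)
        } else {
            T::from_str(s).map(StarOr::Other)
        }
    }
}

// e.g. `?indexUid=movies,*`: if any `*` is present, the filter matches everything.
fn contains_star<T>(values: &[StarOr<T>]) -> bool {
    values.iter().any(|v| matches!(v, StarOr::Star))
}
```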

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-06 12:47:27 +00:00
bors[bot]
c6ce3452cf Merge #2459
2459: Fix typo in codebase comments r=Kerollmops a=ryanrussell

# Pull Request

## PR checklist
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-06-06 09:01:07 +00:00
Kerollmops
e5b760c59a Fix the segment analytics tests 2022-06-06 10:44:46 +02:00
Kerollmops
277a0a7967 Bump serde-cs to simplify our usage of the star_or function 2022-06-06 10:17:33 +02:00
Kerollmops
64b5b2e1f8 Use serde-cs::CS with StarOr to reduce the logic duplication 2022-06-06 10:06:00 +02:00
Kerollmops
10d3b367dc Simplify the const default values 2022-06-06 10:06:00 +02:00
walter
ba55905377 Add custom IndexUidFormatError for IndexUid 2022-06-05 02:26:48 -04:00
walter
0e7e16ae72 Add custom TaskStatusError for TaskStatus 2022-06-05 00:51:08 -04:00
walter
80c156df3f Add custom TaskTypeError for TaskType 2022-06-05 00:51:08 -04:00
Ryan Russell
4b6c3e72ff Improve Lib Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 21:38:04 -05:00
Ryan Russell
3e46543060 Improve Store Readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-04 20:42:53 -05:00
bors[bot]
b83455f345 Merge #2454
2454: Unify the pagination of the index and documents route behind a common type r=curquiza a=irevoire

`@MarinPostma` wdyt of keeping the `auto_paginate_sized` until we implement the pagination on every route that needs it, just to see if it could be useful for something else?

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-06-02 15:01:43 +00:00
bors[bot]
953a209f02 Merge #2447
2447: move index uid in task content r=Kerollmops a=MarinPostma

This PR moves the `index_uid` from the `Task` to the `TaskContent`, because a task can now have content that does not target a particular index (see the sketch below).
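
A hedged sketch of what that move can look like (variant and field names are illustrative):

```rust
// After the move: the index uid lives only on variants that target an index.
pub struct Task {
    pub id: u64,
    pub content: TaskContent,
}

pub enum TaskContent {
    DocumentAddition { index_uid: String, document_count: usize },
    IndexDeletion { index_uid: String },
    // A dump targets the whole instance, so it carries no index uid.
    Dump { uid: String },
}

impl Task {
    // Not every task relates to an index anymore, hence the Option.
    pub fn index_uid(&self) -> Option<&str> {
        match &self.content {
            TaskContent::DocumentAddition { index_uid, .. }
            | TaskContent::IndexDeletion { index_uid } => Some(index_uid.as_str()),
            TaskContent::Dump { .. } => None,
        }
    }
}
```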


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-06-02 13:54:09 +00:00
ad hoc
0c5352fc22 move index_uid from task to task_content 2022-06-02 15:30:35 +02:00
bors[bot]
8ac8fcb0c9 Merge #2433
2433: Fix the documents route r=Kerollmops a=irevoire

fix #2428

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-06-02 13:25:34 +00:00
Irevoire
4667c9fe1a fix(http): Fix the query parameter in the Documents route 2022-06-02 14:10:44 +02:00
Tamo
12b5eabd5d chore(http): unify the pagination of the index and documents route behind a common type 2022-06-02 14:06:56 +02:00
bors[bot]
cf2d8de48a Merge #2452
2452: Change http verbs r=ManyTheFish a=Kerollmops

This PR fixes #2419 by updating the HTTP verbs used to update the settings and every single setting parameter (a routing sketch follows this list).

- [x] `PATCH /indexes/{indexUid}` instead of `PUT`
- [x] `PATCH /indexes/{indexUid}/settings`  instead of `POST`
- [x] `PATCH /indexes/{indexUid}/settings/typo-tolerance`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/displayed-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/distinct-attribute`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/filterable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/ranking-rules`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/searchable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/sortable-attributes`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/stop-words`  instead of `POST`
- [x] `PUT /indexes/{indexUid}/settings/synonyms`  instead of `POST`
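
A minimal actix-web sketch of wiring one of these resources with the new verbs (handler names are placeholders):

```rust
use actix_web::{web, HttpResponse};

async fn update_settings() -> HttpResponse {
    HttpResponse::Accepted().finish()
}

pub fn configure(cfg: &mut web::ServiceConfig) {
    cfg.service(
        // Partial update of the whole settings object: PATCH instead of POST.
        web::resource("/indexes/{index_uid}/settings")
            .route(web::patch().to(update_settings)),
    )
    .service(
        // Full replacement of a single setting: PUT instead of POST.
        web::resource("/indexes/{index_uid}/settings/stop-words")
            .route(web::put().to(update_settings)),
    );
}
```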


Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 11:46:17 +00:00
Kerollmops
419922e475 Make clippy happy 2022-06-02 13:38:23 +02:00
bors[bot]
c9cd1738a5 Merge #2445
2445: Seek-based tasks list r=Kerollmops a=Kerollmops

This PR implements the seek-based pagination for the tasks list following [the spec](https://github.com/meilisearch/specifications/pull/115).
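
A hedged sketch of the seek idea, assuming a `limit` plus an `after` cursor as in the commits below (the real parameter names and ordering may differ):

```rust
struct Task {
    uid: u64,
}

// Tasks are assumed sorted by descending uid (most recent first). Instead of
// offset/limit, the client passes the last uid it saw and gets the next page.
fn paginate(tasks: &[Task], after: Option<u64>, limit: usize) -> (&[Task], Option<u64>) {
    let start = match after {
        Some(after) => tasks.iter().position(|t| t.uid < after).unwrap_or(tasks.len()),
        None => 0,
    };
    let end = (start + limit).min(tasks.len());
    let page = &tasks[start..end];
    // Cursor for the next page: the last uid of this page, if more tasks remain.
    let next = if end < tasks.len() { page.last().map(|t| t.uid) } else { None };
    (page, next)
}
```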

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 10:25:54 +00:00
Kerollmops
0258659278 Fix the get_settings tests 2022-06-02 12:24:27 +02:00
Kerollmops
ce37f53a16 Add typo-tolerance to the authorization tests 2022-06-02 12:17:53 +02:00
Kerollmops
bcb51905d7 Fix the authorization tests 2022-06-02 12:16:46 +02:00
Kerollmops
10a71fdb10 Update the /indexes/{indexUid}/settings/* verbs by adding a macro parameter 2022-06-02 11:55:47 +02:00
Kerollmops
f8d3f739ad Update the /indexes/{indexUid}/settings verb from POST to PATCH 2022-06-02 11:55:47 +02:00
Kerollmops
bb405aa729 Update the /indexes/{indexUid} verb from PUT to PATCH 2022-06-02 11:55:47 +02:00
bors[bot]
7e3d5ebc8e Merge #2451
2451: feat(API-keys): Change immutable_field error message r=Kerollmops a=ManyTheFish

Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68



Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 09:26:36 +00:00
Kerollmops
dfce9ba468 Apply suggestions 2022-06-02 11:26:12 +02:00
ManyTheFish
9eea142e2b feat(API-keys): Change immutable_field error message
Change the immutable_field error message to fit the recent changes in the spec:
aa0a148ee3..84a9baff68
2022-06-02 11:11:07 +02:00
bors[bot]
8b8c3e32f0 Merge #2450
2450: Bump the dependencies r=ManyTheFish a=Kerollmops

In order to use [the latest version of grenad](https://docs.rs/grenad) I bump the dependencies here. We also use the latest versions of all our other dependencies now.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-06-02 08:53:12 +00:00
bors[bot]
08d72e32a4 Merge #2438
2438: Refine keys api r=ManyTheFish a=ManyTheFish

waiting for #2410 and #2444 to be merged.

fix #2369 

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-06-02 08:32:35 +00:00
Kerollmops
ac9e7bdbe3 Fix a test that was depending on the speed of the CPU 2022-06-02 10:21:19 +02:00
ManyTheFish
4512eed8f5 Fix PR comments 2022-06-01 18:06:20 +02:00
Kerollmops
e769043576 Bump the dependencies 2022-06-01 17:59:05 +02:00
Kerollmops
df721b2e9e Scheduler must not reverse the order of the fetched tasks 2022-06-01 17:16:15 +02:00
Kerollmops
0656df3a6d Fix the dumps tests 2022-06-01 17:14:13 +02:00
ManyTheFish
7652295d2c Encode key in base64 instead of hexa 2022-06-01 16:17:47 +02:00
ManyTheFish
94b32cce01 Patch errors 2022-06-01 16:17:47 +02:00
ManyTheFish
b2e2dc8558 Re-authorize master_key to access to all routes 2022-06-01 16:17:47 +02:00
ManyTheFish
1816db8c1f Move dump v4 patcher into v4.rs 2022-06-01 16:17:43 +02:00
ManyTheFish
c295924ea2 Patch tests 2022-06-01 16:08:42 +02:00
ManyTheFish
1f62e83267 Remove error_add_api_key_invalid_index_uid_format 2022-06-01 16:08:42 +02:00
ManyTheFish
b3c8915702 Make small changes and renaming 2022-06-01 16:08:42 +02:00
ManyTheFish
151f494110 Use Stream Deserializer to load dumps 2022-06-01 16:08:42 +02:00
ManyTheFish
96152a3d32 Change default API keys names and descriptions 2022-06-01 16:08:42 +02:00
ManyTheFish
84f52ac175 Add v4 feature to uuid 2022-06-01 16:08:42 +02:00
ManyTheFish
70916d6596 Patch dump v4 2022-06-01 16:08:42 +02:00
ManyTheFish
b9a79eb858 Change apiKeyPrefix to apiKeyUid 2022-06-01 16:07:44 +02:00
ManyTheFish
a57b2d9538 Restrict master key access to /keys routes 2022-06-01 16:07:44 +02:00
ManyTheFish
34c8888f56 Add keys actions 2022-06-01 16:07:44 +02:00
ManyTheFish
d54643455c Make PATCH only modify name, description, and updated_at fields 2022-06-01 16:07:44 +02:00
ManyTheFish
96a5791e39 Add uid and name fields in keys 2022-06-01 16:07:44 +02:00
ManyTheFish
e2c204cf86 Update tests to fit to the new requirements 2022-06-01 16:07:44 +02:00
Kerollmops
d80e8b64af Align the tasks route API to the new spec 2022-06-01 15:30:39 +02:00
Kerollmops
c11d21879a Introduce tasks limit and after to the tasks route 2022-06-01 13:26:36 +02:00
bors[bot]
d6dd234914 Merge #2434
2434: Update docker volume path r=curquiza a=0x0x1

Currently, the args `$(pwd)/data.ms:/data.ms` cannot share data from the container; this change makes the docker volume the same as in the [Dockerfile](67b6f4340a/Dockerfile (L40)) to fix it.



Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-06-01 10:31:27 +00:00
Clémentine Urquizar - curqui
4970525541 Update README.md
Co-authored-by: Tamo <irevoire@protonmail.ch>
2022-06-01 12:29:36 +02:00
Kerollmops
461b91fd13 Introduce the fetch_unfinished_tasks function to fetch tasks 2022-06-01 12:09:52 +02:00
Kerollmops
004c8b6be3 Add the new limit and after fields in the dump tests 2022-06-01 12:09:52 +02:00
Kerollmops
9d5cc88cd5 Implement the seek-based tasks list pagination 2022-06-01 12:09:52 +02:00
bors[bot]
d22f07f5b2 Merge #2448
2448: docs(security): Fix `Supported` r=MarinPostma a=ryanrussell

Signed-off-by: Ryan Russell <git@ryanrussell.org>

# Pull Request

## What does this PR do?
- Fix typo in security `Suported` -> `Supported`

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 19:59:37 +00:00
bors[bot]
e81c7aa2e6 Merge #2423
2423: Paginate the index resource r=MarinPostma a=irevoire

Fix #2373


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 19:25:25 +00:00
Ryan Russell
39db6ea42b docs(security): Fix Supported
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-05-31 14:21:34 -05:00
bors[bot]
47007fa71b Merge #2446
2446: rename Succeded to Succeeded r=irevoire a=MarinPostma

This PR renames `TaskEvent::Succeded` to `TaskEvent::Succeeded` and applies the migration to the dumps.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-31 18:27:02 +00:00
Irevoire
627f13df85 feat(http): paginate the index resource
Fix #2373
2022-05-31 18:11:45 +02:00
bors[bot]
97c14f6fcc Merge #2427
2427: Update the documents resource r=MarinPostma a=irevoire

- Return Documents API resources on `/documents` in an array in the `results` field.
- Add `limit`, `offset`, and `total` in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the fields returned by the `/documents` endpoints; they only impact the `/search` endpoint.
- Make the `/document/:uid` route accept the `fields` query parameter (see the sketch after this list)
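
A hedged sketch of the response shape this describes (the struct itself is illustrative):

```rust
use serde::Serialize;
use serde_json::{Map, Value};

#[derive(Serialize)]
struct PaginatedDocuments {
    // The documents, as an array under the `results` field.
    results: Vec<Map<String, Value>>,
    offset: usize,
    limit: usize,
    total: usize,
}
```

The `fields` query parameter would then be applied when building each entry of `results`, independently of the `displayedAttributes` setting.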

Fix #2372


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-31 15:33:42 +00:00
ad hoc
446f1f31e0 rename Succeded to Succeeded 2022-05-31 17:22:37 +02:00
Irevoire
ddad6cc069 feat(http): update the documents resource
- Return Documents API resources on `/documents` in an array in the `results` field.
- Add `limit`, `offset`, and `total` in the response body.
- Rename `attributesToRetrieve` into `fields` (only for the `/documents` endpoints, not for the `/search` ones).
- The `displayedAttributes` settings no longer impact the fields returned by the `/documents` endpoints; they only impact the `/search` endpoint.

Fix #2372
2022-05-31 16:40:40 +02:00
bors[bot]
ab39df9693 Merge #2399
2399: Update the tasks endpoints r=MarinPostma a=Kerollmops

This PR wraps all the changes related to the `tasks` endpoints, it is related to https://github.com/meilisearch/meilisearch/issues/2377 but doesn't close it. I will create a new PR to work on [the seek-based pagination](https://github.com/meilisearch/specifications/pull/115).

I wanted to do something cool with GitHub: being able to merge multiple PRs into this one, to help review changes one by one. Unfortunately, GitHub doesn't allow creating empty PRs. I also struggled with git itself when it came to merging things in the right order, so I decided to add all of the changes in this single PR. I will list the changes and references to the specs here.

 - [x] Tasks statuses and types must be case insensitive
 - [x] Tasks statuses, types and indexUid must accept the `*` selector
 - [ ] Rename the `TaskDetails` struct fields

## Changes

- [ ] Add seek-based pagination following [the spec](https://github.com/meilisearch/specifications/pull/115) 
- [x] Add filtering on the `/tasks` endpoint following [this spec](https://github.com/meilisearch/specifications/pull/116) (a filter-combination sketch follows this list)
  - [x] Add filtering capabilities on `type`, `status` and `indexUid` for the `GET` task list endpoints.
  - [x] It is possible to specify several values for a filter using the `,` character. e.g. `?status=enqueued,processing`
  - [x] Between two different filters, an AND operation is applied. e.g. `?status=enqueued&type=indexCreation` is equivalent to `status=enqueued AND type=indexCreation`
- [x] Remove `GET /indexes/:indexUid/tasks`. It can be replaced by `GET /tasks?indexUid=:indexUid`
- [x] Remove `GET /indexes/:indexUid/tasks/:taskUid`.
- [x] Rename `uid` to `taskUid` in the `202 - Accepted` task response returned by every asynchronous task (e.g., index creation, document addition...)
- [x] Rename some task properties
  - [x] `documentPartial`-> `documentAdditionOrUpdate`
  - [x] `documentAddition`-> `documentAdditionOrUpdate`
  - [x] `clearAll` -> `documentDeletion` 
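
A hedged sketch of the filter combination described above: values within one filter combine with OR, and the filters themselves combine with AND (types are illustrative):

```rust
// None means "not filtered on this dimension".
struct TaskFilter {
    statuses: Option<Vec<String>>,
    types: Option<Vec<String>>,
    index_uids: Option<Vec<String>>,
}

struct Task {
    status: String,
    kind: String,
    index_uid: Option<String>,
}

// OR over the comma-separated values of a single filter.
fn one_of(values: &Option<Vec<String>>, candidate: Option<&str>) -> bool {
    match values {
        None => true, // filter absent: everything passes
        Some(values) => candidate.map_or(false, |c| values.iter().any(|v| v == c)),
    }
}

impl TaskFilter {
    // AND between the different filters.
    fn matches(&self, task: &Task) -> bool {
        one_of(&self.statuses, Some(task.status.as_str()))
            && one_of(&self.types, Some(task.kind.as_str()))
            && one_of(&self.index_uids, task.index_uid.as_deref())
    }
}
```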

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-05-31 09:40:40 +00:00
Kerollmops
1465b5e0ff Refactorize the tasks filters by moving the match inside 2022-05-31 11:33:21 +02:00
Kerollmops
8800b348f0 Implement the StarOr on all the tasks filters 2022-05-31 11:33:21 +02:00
Kerollmops
082d6b89ff Make the StarOrIndexUid Generic and call it StarOr 2022-05-31 11:33:21 +02:00
Kerollmops
b82c86c8f5 Allow users to filter indexUid with a * 2022-05-31 11:33:20 +02:00
Kerollmops
36d94257d8 Make clippy happy 2022-05-31 11:33:20 +02:00
Kerollmops
3f80468f18 Rename the Tasks Types 2022-05-31 11:33:20 +02:00
Kerollmops
8509243e68 Implement the status and type filtering on the tasks route 2022-05-31 11:33:20 +02:00
Kerollmops
3684c822f1 Add indexUid filtering on the /tasks route 2022-05-31 11:33:20 +02:00
Kerollmops
80f7d87356 Remove the /indexes/:indexUid/tasks/... routes 2022-05-31 11:33:20 +02:00
Kerollmops
d2f457a076 Rename the uid to taskUid in asynchronous response 2022-05-31 11:33:20 +02:00
Kerollmops
e5ef5a6f9c Remove an unused updates.rs file 2022-05-31 11:33:19 +02:00
bors[bot]
5450fecaef Merge #2444
2444: add boilerplate for dump v5 r=MarinPostma a=MarinPostma

add the boilerplate files for dump v5


Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-31 08:56:52 +00:00
ManyTheFish
deba0cc096 Make v4::load_dump copy each part of the dump 2022-05-31 10:24:44 +02:00
ad hoc
26e7bdf702 add boilerplate for dump v5 2022-05-30 17:25:29 +02:00
bors[bot]
3441cc6c36 Merge #2410
2410: Make dump a task r=Kerollmops a=MarinPostma

This PR transforms the dump task into a proper task.
The `GET /dumps/:dump_uid` route is removed.


Some changes were made to make this work, and a bit of refactoring was necessary.
- The `dump_actor` module has been renamed to `dumps` and moved to the root
- There isn't a `DumpActor` anymore; the dump process is handled by the `DumpHandler`.
- The `TaskPerformer` is renamed to `BatchHandler`
- The `BatchHandler` trait no longer has a `perform_job` method, but instead has an `accept` method returning whether a handler can process a batch (see the sketch after this list)
- The scheduler now accepts a list of `BatchHandler`s and iterates through them until it finds one that accepts the current batch.
- `Job` doesn't exist anymore, and everything is now inside the `BatchContent` enum.
- The `Vec<TaskId>` from `Batch` is replaced with a `BatchContent` enum which hints at the content.
- The scheduler is slightly modified to accept batches and to prioritize them over regular tasks.
- The `TaskList`s are no longer identified by a `String` representing the index uid, but by a `TaskListIdentifier`, which also works for dumps that do not target any specific index.
- The `GET /dump/:dump_id` route no longer exists
- `DumpActorError` is renamed to `DumpError`
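
A hedged sketch of the handler/scheduler shape this list describes (simplified and synchronous; the real trait is richer):

```rust
pub enum BatchContent {
    DocumentsAddition(Vec<u64>), // task ids
    Dump { uid: String },
    Empty,
}

pub struct Batch {
    pub content: BatchContent,
}

pub trait BatchHandler {
    // Tell the scheduler whether this handler can process the batch...
    fn accept(&self, batch: &Batch) -> bool;
    // ...and process it if so.
    fn process_batch(&self, batch: Batch) -> Batch;
}

// The scheduler iterates through its handlers until one accepts the batch.
pub fn dispatch(handlers: &[Box<dyn BatchHandler>], batch: Batch) -> Batch {
    for handler in handlers {
        if handler.accept(&batch) {
            return handler.process_batch(batch);
        }
    }
    batch // no handler accepted it; in practice this shouldn't happen
}
```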


close #2410 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-30 14:09:43 +00:00
bors[bot]
c7711c7816 Merge #2429
2429: Send the analytics to `telemetry.meilisearch.com` instead of segment r=MarinPostma a=irevoire

Fix #2425

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-30 13:18:55 +00:00
Irevoire
d47b997120 chore(analytics): update the url used to send our analytics 2022-05-30 15:13:10 +02:00
ad hoc
1e310ecc7d fix typo in docstring
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-30 14:34:49 +02:00
ad hoc
4cb2c6ef1e use map_or instead of map + unwrap_or 2022-05-30 12:30:15 +02:00
ad hoc
a9ef399a6b processing::Nothing returns BatchContent::Empty instead of panicking 2022-05-26 12:04:27 +02:00
ad hoc
5a2972fc19 use TaskEvent method instead of variants in BatchHandler impl 2022-05-26 11:51:58 +02:00
0x0x1
ba51ca83ec Update docker volume path
Makes docker volume same as Dockerfile
2022-05-26 10:29:27 +08:00
ad hoc
1647ca3c1f fix clippy warnings 2022-05-25 15:07:52 +02:00
ad hoc
74a1f88d88 add test for dump processing order 2022-05-25 14:57:36 +02:00
ad hoc
f58507379a fix dump priority in scheduler 2022-05-25 14:50:14 +02:00
ad hoc
6b2016b350 remove typo in BatchContent variant 2022-05-25 14:39:07 +02:00
ad hoc
3015265bde remove useless dump errors 2022-05-25 14:37:10 +02:00
ad hoc
49d8fadb52 test dump handler 2022-05-25 14:32:12 +02:00
ad hoc
127171c812 impl Default on Processing 2022-05-25 14:10:39 +02:00
bors[bot]
67b6f4340a Merge #2422
2422: Update url of movies.json r=curquiza a=0x0x1

URL `https://bit.ly/2PAcw9l` is a Notion site.

Co-authored-by: 0x0x1 <101086451+0x0x1@users.noreply.github.com>
2022-05-25 11:21:56 +00:00
ad hoc
986a99296d remove useless dump test 2022-05-25 11:25:11 +02:00
ad hoc
92d86ce6aa add tests to IndexResolver BatchHandler 2022-05-25 11:13:36 +02:00
ad hoc
3c85b29865 add doc to BatchHandler 2022-05-25 11:13:35 +02:00
ad hoc
8349f38197 remove unused file 2022-05-25 11:13:35 +02:00
ad hoc
64654ef7c3 rename batch_handler to handler 2022-05-25 11:13:35 +02:00
ad hoc
0f9c134114 fix tests 2022-05-25 11:13:35 +02:00
ad hoc
7b47e4e87a snapshot batch handler 2022-05-25 11:13:35 +02:00
ad hoc
8743d73973 move DumpHandler to own module 2022-05-25 11:13:35 +02:00
ad hoc
f0aceb4fba remove unused files 2022-05-25 11:13:35 +02:00
ad hoc
61035a3ea4 create dump v5 2022-05-25 11:13:34 +02:00
ad hoc
4778884105 remove dump status route 2022-05-25 11:13:34 +02:00
ad hoc
57fde30b91 handle dump 2022-05-25 11:13:34 +02:00
ad hoc
56eb2907c9 dump indexes 2022-05-25 11:13:34 +02:00
ad hoc
414d0907ce register dump handler 2022-05-25 11:13:34 +02:00
ad hoc
60a8249de6 add dump batch handler 2022-05-25 11:13:34 +02:00
ad hoc
46cdc17701 make scheduler accept multiple batch handlers 2022-05-25 11:13:34 +02:00
ad hoc
6a0231cb28 perform dump method 2022-05-25 11:13:33 +02:00
ad hoc
7fa3eb1003 register dump tasks 2022-05-25 11:13:33 +02:00
ad hoc
2f0625a984 register and insert dump task in scheduler 2022-05-25 11:13:33 +02:00
ad hoc
737b891a41 introduce Dump TaskListIdentifier variant 2022-05-25 11:13:33 +02:00
ad hoc
5a5066023b introduce TaskListIdentifier 2022-05-25 11:13:33 +02:00
ad hoc
aa50acb031 make Task index_uid an option
Not all tasks relate to an index; tasks that don't have their index_uid set
to None
2022-05-25 11:13:32 +02:00
bors[bot]
9935db86c7 Merge #2424
2424: Uncomment clippy from the ci check r=curquiza a=irevoire

The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 17:50:55 +00:00
Irevoire
f65116b208 chore(ci): uncomment clippy from the ci check
The issue has been fixed in the latest release of rust. See https://github.com/rust-lang/rust-clippy/issues/8662
Fix #2305
2022-05-24 15:03:11 +02:00
bors[bot]
341756a0eb Merge #2357
2357: chore(dump): add dump tests r=Kerollmops a=irevoire

Add tests on the import of dump v1, v2, v3 and v4.

Since the dumps are slow to decompress, I made the `flate2` crate always compile with optimizations.
And since they're also slow to index, I made the `milli` crate always compile with optimizations too. What do you think of this, `@MarinPostma?`
Should we keep milli unoptimized in case it could help us debug some things? 👀 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 12:24:29 +00:00
Tamo
5f0e9b63d2 chore(dump): add tests 2022-05-24 14:21:56 +02:00
bors[bot]
ca9ba2d90c Merge #2406
2406: chore(search): rename in the search endpoint r=irevoire a=irevoire

Fix #2376


Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-05-24 12:02:45 +00:00
bors[bot]
2c248a68a4 Merge #2374
2374: feat(analytics): handle the `X-Meilisearch-Client` header r=Kerollmops a=irevoire

Fix #2367


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-24 09:49:16 +00:00
0x0x1
641ca5a857 Update url of movies.json 2022-05-24 17:34:34 +08:00
Tamo
6bf4db0bca feat(analytics): handle the new x-meilisearch-client custom header for the analytics
Fix #2367
2022-05-23 13:51:19 +02:00
Irevoire
4e9accdeb7 chore(search): rename in the search endpoint
Fix #2376
2022-05-19 16:31:37 +02:00
bors[bot]
ae4e419db4 Merge #2408
2408: Upgrade milli v0.28 r=ManyTheFish a=ManyTheFish

- Add smart crop
- Add test on _matches_infos
- Fix some tests

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-05-19 11:52:16 +00:00
ManyTheFish
50763aac82 Fix clippy 2022-05-19 11:23:22 +02:00
ManyTheFish
3517eae47f Fix tests 2022-05-18 18:45:53 +02:00
ManyTheFish
0250ea9157 Integrate smart crop in Meilisearch 2022-05-18 18:35:51 +02:00
bors[bot]
6d221058f1 Merge #2404
2404: Bring `release-v0.27.1` into `main` r=curquiza a=curquiza

Following the v0.27.1 hotfixes

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 16:11:17 +00:00
bors[bot]
a23998f2e3 Merge #2402
2402: Update version for next release (v0.27.1) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-05-17 10:09:16 +00:00
Clémentine Urquizar
49e857776c Update version for next release (v0.27.1) 2022-05-17 11:59:35 +02:00
bors[bot]
7fa7f3d1a4 Merge #2397
2397: Bump milli to v0.26.5 r=irevoire a=irevoire

Fix #2394 and fix #2385

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-05-17 08:45:05 +00:00
Tamo
85d19bfb3e chore: bump milli 2022-05-16 18:43:35 +02:00
bors[bot]
a65a2bea1e Merge #2396
2396: Fix dump bug r=curquiza a=MarinPostma

Only check the db version file if we aren't loading a dump or a snapshot, because we can assume that if we are loading a dump or a snapshot, we can safely work with the database. The db version file will be (over)written just after that.
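
A minimal sketch of that ordering, with hypothetical option and helper names:

```rust
use std::path::{Path, PathBuf};

struct Opts {
    import_dump: Option<PathBuf>,
    import_snapshot: Option<PathBuf>,
    db_path: PathBuf,
}

// Placeholder: the real check compares the on-disk version with the binary's.
fn check_version_file(_db_path: &Path) -> std::io::Result<()> {
    Ok(())
}

fn setup_db(opts: &Opts) -> std::io::Result<()> {
    // Skip the check when importing: the version file is (over)written
    // just after the dump/snapshot is loaded anyway.
    let importing = opts.import_dump.is_some() || opts.import_snapshot.is_some();
    if !importing {
        check_version_file(&opts.db_path)?;
    }
    Ok(())
}
```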

This was introduced by #2356.
fix #2389


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 13:59:20 +00:00
ad hoc
5670b4d012 fix dump import error 2022-05-16 14:33:33 +02:00
bors[bot]
b9866d8df2 Merge #2339
2339: deny warnings in CI r=irevoire a=MarinPostma

Add `RUSTFLAGS= -D warnings` to the CI so all warnings are treated as hard errors.

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-05-16 09:58:30 +00:00
bors[bot]
b9b9cba154 Merge #2383
2383: v0.27.0: bring `stable` into `main` r=Kerollmops a=curquiza

Bring `stable` into `main`

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-05-16 08:35:25 +00:00
Clémentine Urquizar
cd2239eb2d Merge branch 'release-v0.27.0' into stable 2022-05-09 10:51:32 +02:00
ad hoc
348af6cfbf deny-rust-warnings 2022-05-04 15:20:45 +02:00
bors[bot]
5337bdb9c5 Merge #2366
2366: Bump milli to v0.26.4 r=curquiza a=curquiza

Fixes the milli related issues
- https://github.com/meilisearch/meilisearch/issues/2352
- https://github.com/meilisearch/meilisearch/issues/2358
- https://github.com/meilisearch/meilisearch/issues/2338

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-05-04 10:30:42 +00:00
bors[bot]
a350cc1186 Merge #2355
2355: feat(http): add analytics on typo tolerance setting r=curquiza a=MarinPostma

Add analytics for the typo settings.

Also make settings analytics return null for settings that are not set, instead of a default value, as seen with `@gmourier` 

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Guillaume Mourier <guillaume@meilisearch.com>
2022-05-04 10:15:15 +00:00
Kerollmops
5c4c38c79c Fix the tests about the nested fields 2022-05-04 12:10:52 +02:00
ad hoc
b94eabe48c apply clippy fixes 2022-05-04 11:33:43 +02:00
Clémentine Urquizar
c46f3587de Bump milli to v0.26.4 2022-05-04 11:25:36 +02:00
ad hoc
34f75d9792 settings analytics return null when not set 2022-04-29 16:38:21 +02:00
bors[bot]
3c5f7dbf7e Merge #2356
2356: fix dumps bug r=MarinPostma a=MarinPostma

move the db version check after the dump import
fix #2353


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-29 14:33:58 +00:00
Guillaume Mourier
3d0a4a3d18 fix(http): fix event name for typo tolerance settings update 2022-04-27 14:49:21 +02:00
ad hoc
6025372565 fix(lib): Check db presence after dumps 2022-04-27 10:41:09 +02:00
ad hoc
3d10af0333 feat(http): add analytics on typo tolerance setting 2022-04-26 18:29:32 +02:00
bors[bot]
c07f3b44b7 Merge #2347
2347: Change Nelson path r=curquiza a=curquiza

Nelson is now on the Meilisearch orga side

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-21 17:50:46 +00:00
Clémentine Urquizar
38d681c230 Change Nelson path 2022-04-21 18:42:34 +02:00
Clémentine Urquizar - curqui
e85377e725 Merge pull request #2346 from meilisearch/revert-2345-bump-meilisearch-v9000.0.0
Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0"
2022-04-21 16:38:48 +02:00
Clémentine Urquizar - curqui
6ff8bf823d Revert "[TEST PURPOSE] Bump meilisearch to version 9000.0.0" 2022-04-21 16:36:56 +02:00
Clémentine Urquizar - curqui
4d25229df9 Merge pull request #2345 from meilisearch/bump-meilisearch-v9000.0.0
Bump meilisearch to version 9000.0.0
2022-04-21 16:28:46 +02:00
releasemops
f1cd6b6ee8 bump meilisearch to v9000.0.0 2022-04-21 14:26:40 +00:00
Clémentine Urquizar - curqui
63f75bd187 Merge pull request #2344 from meilisearch/revert-2340-bump-meilisearch-v8000.1.0
Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0"
2022-04-21 16:24:57 +02:00
Clémentine Urquizar - curqui
acf3357cf3 Revert "[TEST PURPOSE] Bump meilisearch to version 8000.1.0" 2022-04-21 16:24:27 +02:00
Clémentine Urquizar - curqui
202d6105b2 Merge pull request #2340 from meilisearch/bump-meilisearch-v8000.1.0
[TEST PURPOSE] Bump meilisearch to version 8000.1.0
2022-04-21 15:28:00 +02:00
releasemops
0714551101 bump meilisearch to v8000.1.0 2022-04-21 13:23:46 +00:00
bors[bot]
04381011b0 Merge #2336
2336: Move permissive-json-pointer in the meilisearch repository r=Kerollmops a=irevoire

Move the permissive-json-pointer crate in the meilisearch repository.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-20 17:25:44 +00:00
Tamo
1ef87cc6d0 chore: move permissive-json-pointer in the meilisearch repository
Update permissive-json-pointer/src/lib.rs

Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-04-20 19:24:41 +02:00
bors[bot]
4a9000bb96 Merge #2332
2332: fix(search): formatted field r=curquiza a=irevoire

fix #2318

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-04-20 14:59:41 +00:00
bors[bot]
754c49f991 Merge #2326
2326: rename min word lenght for typo r=irevoire a=MarinPostma

rename `minWordLengthForTypo` to `minWordSizeForTypos` as specified.

discussed here: https://github.com/meilisearch/specifications/pull/117#discussion_r850795714

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-20 11:54:10 +00:00
bors[bot]
97adef6bfc Merge #2335
2335: Fix typo reset by upgrading Milli to v0.26.2 r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-20 10:49:57 +00:00
Clémentine Urquizar
a7fd199ded Fix typo resetting by upgrading milli to v0.26.2 2022-04-20 12:24:46 +02:00
bors[bot]
2692b8c960 Merge #2334
2334: Update dashboard to v.0.1.10 r=curquiza a=mdubus

Closes #2322

Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-04-20 10:14:46 +00:00
Irevoire
58a1124e9a fix(search): formatted field 2022-04-20 11:30:01 +02:00
Morgane Dubus
b57ad15a24 Update dashboard to v.0.1.10 2022-04-20 11:14:42 +02:00
ad hoc
9b064e53e7 fix(http, lib): rename_min_word_length_for_typo into rename_min_word_size_for_typo 2022-04-17 10:02:56 +02:00
bors[bot]
289bfd46ff Merge #2321
2321: Bump milli r=curquiza a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-04-14 11:51:15 +00:00
Irevoire
64b0a50a58 chore: bump milli 2022-04-14 12:12:54 +02:00
bors[bot]
b1333ab5b0 Merge #2320
2320: chore(http, lib): rename typo to typo_tolerance r=irevoire a=MarinPostma

fix #2319


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-14 09:50:39 +00:00
ad hoc
276dc6043a chore(http, lib): rename typo to typo_tolerance 2022-04-14 10:42:06 +02:00
bors[bot]
b9e676b8ca Merge #2316
2316: Add version flag r=Kerollmops a=sanders41

# Pull Request

## What does this PR do?
Fixes #2315

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Paul Sanders <psanders1@gmail.com>
2022-04-13 17:24:09 +00:00
bors[bot]
6c06fb226d Merge #2307
2307: Feat(Analytics): Add analytics for search format options r=irevoire a=ManyTheFish

Specification: [#120](https://github.com/meilisearch/specifications/pull/120) ([f5c6a8e](f5c6a8e183))

fix #2308

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-13 12:01:52 +00:00
Paul Sanders
41249be274 Add version flag 2022-04-12 15:22:36 -04:00
bors[bot]
049cf0fcee Merge #2313
2313: fix(search): remove the back and forth between the IndexMap and the serde_json::Map r=irevoire a=irevoire

This is ok because we're using the preserve_order feature in serde_json which is already internally using an IndexMap.

See https://github.com/meilisearch/meilisearch/pull/2298#discussion_r845228412
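
A small illustration of the behavior this relies on, assuming `serde_json` is built with the `preserve_order` feature (which backs its `Map` with an `IndexMap`):

```rust
// Cargo.toml: serde_json = { version = "1", features = ["preserve_order"] }
fn main() -> serde_json::Result<()> {
    let value: serde_json::Value = serde_json::from_str(r#"{"zebra":1,"apple":2}"#)?;
    // With preserve_order, keys keep their insertion order instead of being sorted.
    assert_eq!(serde_json::to_string(&value)?, r#"{"zebra":1,"apple":2}"#);
    Ok(())
}
```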


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-12 14:17:26 +00:00
Tamo
2ee210483f fix(search): remove the back and forth between the IndexMap and the serde_json::Map
This is ok because we're using the preserve_order feature in serde_json which is already internally using an IndexMap.
2022-04-12 16:12:52 +02:00
bors[bot]
13205066f3 Merge #2311
2311: Change version for the next release (v0.27.0) r=irevoire a=curquiza

Fixes #2310 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-11 14:49:33 +00:00
Clémentine Urquizar
b3661bf8ec Change version for the next release (v0.27.0) 2022-04-11 16:25:15 +02:00
ManyTheFish
0990e95830 Feat(Analytics): Add analytics for search format options 2022-04-11 14:53:15 +02:00
bors[bot]
f67167fa9f Merge #2178
2178: Refacto docker r=irevoire a=irevoire

closes #2166 and #2085

-----------

I noticed many people had issues with the default configuration of our Dockerfile.
Some examples:
- #2166: If you use ubuntu and mount your `data.ms` in a volume (as shown in the [doc](https://docs.meilisearch.com/learn/getting_started/installation.html#download-and-launch)), you can't run meilisearch
- #2085: Here, meilisearch was not able to erase the `data.ms` when loading a dump because it's the mount point
Currently, we don't show how to use snapshots and dumps with docker in the documentation. And it's quite hard to do:
  - You either send a big command to meilisearch to change the dump-path, snapshot-path, and db-path to a single directory and then mount that one
  - Or you mount three volumes
- And there were other issues on the slack community

I think this PR solve the problem.
Now the image contains the `meilisearch` binary in the `/bin` directory, so it's easy to find and always in the `PATH`.
It creates a `data` directory and moves the working directory into it.
So now you can find the `dumps`, `snapshots` and `data.ms` directory in `/data`.

Here is the new command to run meilisearch with a volume:
```
docker run -it --rm -v $PWD/meili_data:/data -p 7700:7700 getmeili/meilisearch:latest
```

And if you need to import a dump or a snapshot, you don't need to restart your container and mount another volume. You can directly hit the `POST /dumps` route and then run:
```
docker run -it --rm -v $PWD/meili_data:/data -p 7700:7700 getmeili/meilisearch:latest meilisearch --import-dump dumps/20220217-152115159.dump
```

-------

You can already try this PR with the following docker image:
```
getmeili/meilisearch:test-docker-v0.26.0
```

If you want to use the v0.25.2 I created another image:
```
getmeili/meilisearch:test-docker-v0.25.2
```

------

If you're using helm I created a branch [here](https://github.com/meilisearch/meilisearch-kubernetes/tree/test-docker-v0.26.0) that uses the v0.26.0 image with the correct volume 👍 
If you use this conf with the v0.25.2, it should also work.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 12:01:25 +00:00
bors[bot]
31584f34e8 Merge #2298
2298: Nested fields r=irevoire a=irevoire

There are a few things that I want to fix _AFTER_ merging this PR.
For the following RCs.

## Stop the useless conversion
In `search.rs` I convert a `Document` to a `Value`, then the `Value` back to a `Document`, and then back to a `Value`, etc. I should stop doing all these conversions and stick to one format.
Probably by merging my `permissive-json-pointer` crate into meilisearch.
That would also give me the opportunity to work directly with obkvs and stop deserializing fields I don't need.

## Add more test specific to the nested
Everything seems to work, but I should write tests to double check that the nested fields work well with the `formatted` field.

## See how I could stop iterating on hashmaps and instead fill them correctly
This is related to milli. I very often need to iterate over hashmaps to see if a field is a subset of another field. I could probably generate a structure containing all the possible key values.
i.e., the user says `doggo` is an attribute to retrieve. Instead of iterating over all the attributes to retrieve to check whether `doggo.name` is a subset of `doggo`, I should insert `doggo.name` into the attributes-to-retrieve map.
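
A hedged sketch of that pre-expansion idea (illustrative, not the milli implementation):

```rust
use std::collections::HashSet;

// Instead of repeatedly checking "is `doggo.name` a subset of `doggo`?",
// expand the user's selectors once against the known document fields.
fn expand_selectors(selectors: &[&str], known_fields: &[&str]) -> HashSet<String> {
    let mut expanded = HashSet::new();
    for selector in selectors {
        for field in known_fields {
            // `doggo` selects `doggo` itself and any nested `doggo.*` field.
            if field == selector || field.starts_with(&format!("{selector}.")) {
                expanded.insert(field.to_string());
            }
        }
    }
    expanded
}

// expand_selectors(&["doggo"], &["doggo.name", "doggo.age", "cat"])
// -> {"doggo.name", "doggo.age"}
```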

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 11:45:37 +00:00
bors[bot]
a70e0a6422 Merge #2304
2304: chore(bors): comments clippy out r=curquiza a=irevoire

There is currently an issue with clippy that stops us from merging PRs.
https://github.com/rust-lang/rust-clippy/issues/8662#issuecomment-1093899755

We can't use clippy in the CI while that's not merged


Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 11:20:18 +00:00
Tamo
348345f555 chore(bors): comments clippy out
There is currently an issue with clippy that stops us from merging PRs.
https://github.com/rust-lang/rust-clippy/issues/8662#issuecomment-1093899755

We can't use clippy in the CI while that's not merged
2022-04-11 13:19:00 +02:00
Tamo
683206e140 feat(docker): refactoring the dockerfile
- Move the meilisearch binary to `/bin/meilisearch` so it's always in scope.
- Create a `meili_data` directory used as the default working directory
2022-04-11 13:14:44 +02:00
Tamo
69d312209e feat(search): Implements the nested fields
See https://github.com/meilisearch/specifications/pull/121
2022-04-07 19:47:20 +02:00
bors[bot]
013fe4cbc9 Merge #2297
2297: Feat(Search): Enhance formatting of search results r=ManyTheFish a=ManyTheFish

Add new settings and change crop_len behavior to count words instead of characters.

- [x] `highlightPreTag`
- [x] `highlightPostTag`
- [x] `cropMarker`
- [x] `cropLength` counts words instead of chars
- [x] `cropLength` 0 is now considered as no `cropLength`
- [ ] ~smart crop finding the best matches interval~ (postponed)

Partially fixes  #2214. (no smart crop)


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-07 13:29:56 +00:00
ManyTheFish
dc2cc1ee89 Feat(Search): Enhance formatting of search results 2022-04-07 15:04:08 +02:00
bors[bot]
bb5f0e1485 Merge #2271
2271: Simplify Dockerfile r=ManyTheFish a=Thearas

# Pull Request

## What does this PR do?

1. Fixes #2234
2. Replace `$TARGETPLATFORM` with `apk --print-arch` to make Dockerfile available for `docker build` as well, not just `docker buildx` (inspired by [rust-lang/docker-rust](https://github.com/rust-lang/docker-rust/blob/master/1.59.0/alpine3.14/Dockerfile#L13))

PTAL `@curquiza` 

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Thearas <thearas850@gmail.com>
2022-04-07 11:50:21 +00:00
bors[bot]
d5e33637b7 Merge #2296
2296: disable typo for attributes r=curquiza a=MarinPostma

Introduce the disable typos on attribute feature as per https://github.com/meilisearch/specifications/pull/117.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 18:10:45 +00:00
bors[bot]
c321ac61b5 Merge #2259
2259: disable typos on words r=MarinPostma a=MarinPostma

Introduce the disable typo setting as per https://github.com/meilisearch/specifications/pull/117.

waiting for https://github.com/meilisearch/milli/pull/474.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 17:40:08 +00:00
ad hoc
67dea08a0a feat(http, lib): enable disable typos on attributes 2022-04-06 19:25:12 +02:00
ad hoc
e9f66b8766 feat(all): introduce disable typo on words 2022-04-06 19:16:36 +02:00
ad hoc
dd43ba6234 feat(all): introduce disable typos 2022-04-06 19:10:12 +02:00
ad hoc
27a88bcd47 feat(all): introduce minWordLengthForTypo
fix typo in setting

skip serializing typo settings that are not set
2022-04-06 19:03:24 +02:00
bors[bot]
065fe19452 Merge #2249
2249: feat(all): introduce disable typos r=irevoire a=MarinPostma

Introduce the disable typo setting, which allows disabling typos for an index.

waiting on https://github.com/meilisearch/milli/pull/469

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 16:55:46 +00:00
ad hoc
981fba5b44 feat(all): introduce disable typos 2022-04-06 15:47:48 +02:00
bors[bot]
09734f0732 Merge #2294
2294: chore(lib): bump milli to 0.25.0 r=MarinPostma a=MarinPostma

bump milli


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-06 13:19:20 +00:00
ad hoc
a523828f61 chore(lib): bump milli to 0.25.0 2022-04-06 15:03:10 +02:00
bors[bot]
7f7958f815 Merge #2277
2277: fix(http): fix panic when sending document update without content type header r=MarinPostma a=MarinPostma

I found a panic when pushing documents without a content-type. This fixes it by returning unknown instead of crashing.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-05 12:07:17 +00:00
bors[bot]
9e344f6576 Merge #2207
2207: Fix: avoid embedding the user input into the error response. r=Kerollmops a=CNLHC

# Pull Request

## What does this PR do?
Fix #2107. 

The problem is meilisearch embeds the user input into the error message. 

The reason for this problem is that `milli` throws a `serde_json::Error` whose `Display` implementation does this embedding.  

I tried to solve this problem in this PR by manually implementing the `Display` trait for `DocumentFormatError` instead of deriving it automatically.
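
A minimal sketch of the manual-`Display` approach (the variant and message are illustrative): describe the payload's problem without echoing the payload itself.

```rust
use std::fmt;

#[derive(Debug)]
enum DocumentFormatError {
    MalformedPayload { format: &'static str },
}

impl fmt::Display for DocumentFormatError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Name the problem; never interpolate the user's input.
            DocumentFormatError::MalformedPayload { format } => {
                write!(f, "the `{format}` payload provided is malformed")
            }
        }
    }
}

impl std::error::Error for DocumentFormatError {}
```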

<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Liu Hancheng <cn_lhc@qq.com>
Co-authored-by: LiuHanCheng <2463765697@qq.com>
2022-04-04 17:35:17 +00:00
bors[bot]
09a72cee03 Merge #2281
2281: Hard limit the number of results returned by a search r=Kerollmops a=Kerollmops

This PR fixes #2133 by hard-limiting the number of results that a search request can return at any time. I would like the guidance of `@MarinPostma` to test that: should I use a mocking test here? Or should I do anything else?

I talked about touching the _nb_hits_ value with `@qdequele` and we concluded that it was not correct to do so.

Could you please confirm that it is the right place to change that?
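
A hedged sketch of such a hard limit, clamping both the offset and the limit (the constant and names are illustrative):

```rust
// Hypothetical cap on how many results a single search request can reach.
const HARD_RESULT_LIMIT: usize = 1000;

fn clamp_pagination(offset: usize, limit: usize) -> (usize, usize) {
    // The offset can never point past the hard limit...
    let offset = offset.min(HARD_RESULT_LIMIT);
    // ...and offset + limit can never exceed it either.
    let limit = limit.min(HARD_RESULT_LIMIT - offset);
    (offset, limit)
}
```

This matches the commits below, which clamp the offsets as well as the limits.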

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-04-04 17:19:05 +00:00
LiuHanCheng
6fc6b83632 Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-04-01 09:30:40 +08:00
LiuHanCheng
eee2cd5abf Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-04-01 09:30:32 +08:00
bors[bot]
87e4125875 Merge #2267
2267: Add instance options for RAM and CPU usage r=Kerollmops a=2shiori17

# Pull Request

## What does this PR do?
Fixes #2212 
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: 2shiori17 <98276492+2shiori17@users.noreply.github.com>
Co-authored-by: shiori <98276492+2shiori17@users.noreply.github.com>
2022-03-31 15:29:18 +00:00
Liu Hancheng
7ece7a9d9e change truncate strategy and corresponding test 2022-03-31 10:39:21 +08:00
LiuHanCheng
403f03cb2c Update meilisearch-http/tests/documents/add_documents.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-31 10:14:22 +08:00
LiuHanCheng
b28aa8e666 Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-31 10:14:13 +08:00
2shiori17
98107565c0 Add more detailed comments for max_indexing_threads 2022-03-31 09:32:45 +09:00
2shiori17
a2d7c16f91 Remove indexing_jobs option 2022-03-31 09:27:29 +09:00
Kerollmops
ffafd5b976 Add tests for the hard limit 2022-03-30 16:36:02 -07:00
shiori
9f1c88680d Fix my mistake when resolving conflicts 2022-03-31 02:48:41 +09:00
shiori
9edd407a88 Merge branch 'main' into add-instance-options 2022-03-31 02:38:07 +09:00
Kerollmops
8bc6e8dcf9 Make sure that offsets are clamped too 2022-03-30 10:06:15 -07:00
bors[bot]
2624c76517 Merge #2254
2254: Test with default CLI opts r=Kerollmops a=Kerollmops

Fixes #2252.

This PR makes sure that we test the HTTP engine with the default CLI parameters and removes some useless internal CLI options.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-29 22:39:40 +00:00
Kerollmops
891d042164 Remove the memory limit to let Windows tests pass 2022-03-29 11:37:08 -07:00
Kerollmops
b3a11e04af Implement Default on IndexerOpts again 2022-03-29 11:37:08 -07:00
Kerollmops
acdb10a307 Remove some useless indexer options 2022-03-29 11:37:08 -07:00
Kerollmops
8fecc6238d Make the test use the default CLI options 2022-03-29 11:37:08 -07:00
Kerollmops
405af09fc8 Hard limit the number of results returned by a search 2022-03-29 11:27:53 -07:00
bors[bot]
0d6be2efab Merge #2280
2280: Bump the milli dependency to 0.24.1 r=curquiza a=Kerollmops

We had issues with lindera recently: it was unable to download the official dictionaries from Google Drive, and this was causing issues with our CIs (and other users' CIs too). The maintainer changed the dictionary download source to Sourceforge, and it is much better and more stable now.

This PR bumps the milli dependency to the latest version, which includes the latest version of the tokenizer, which itself includes the latest version of lindera. I advise that we rebase the currently opened pull requests to include this PR when it is merged on main.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-29 17:39:03 +00:00
Kerollmops
94f04e79eb Bump the milli dependency to 0.24.1 2022-03-29 09:17:25 -07:00
bors[bot]
381e98053f Merge #2263
2263: Upgrade the config of the issue management r=curquiza a=curquiza

To reduce the feature feedback in the meilisearch repo

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-03-29 09:11:06 +00:00
ad hoc
6c2fdc7743 fix(http): fix panic when sending document update without content type header 2022-03-29 09:48:25 +02:00
bors[bot]
513b37e245 Merge #2253
2253: refactor authentication key extraction r=ManyTheFish a=MarinPostma

I am concerned that the part of the code that performs the key prefix extraction from the jwt token might be misused in the future. Since this is a critical part of the code, I moved it into its own function. Since we deserialize the payload twice anyway, I reordered the verifications, and we now use the data from the validated token.
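
A hedged sketch of isolating that extraction behind one function with the `jsonwebtoken` crate; the claim layout here is illustrative, not Meilisearch's actual tenant-token format:

```rust
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    // Hypothetical claim carrying the key prefix.
    #[serde(rename = "apiKeyPrefix")]
    api_key_prefix: String,
}

// Single entry point: the prefix is only ever read from a *validated* token.
fn extract_key_prefix(token: &str, secret: &[u8]) -> Result<String, jsonwebtoken::errors::Error> {
    let validation = Validation::new(Algorithm::HS256);
    let data = decode::<Claims>(token, &DecodingKey::from_secret(secret), &validation)?;
    Ok(data.claims.api_key_prefix)
}
```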


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-28 08:53:13 +00:00
LiuHanCheng
13a0e78d3f Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-28 14:58:00 +08:00
LiuHanCheng
80d8ac40af Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-28 14:57:51 +08:00
Thearas
48d107cc13 Simplify Dockerfile 2022-03-26 20:06:00 +08:00
Thearas
3ef250c30a fix: docker image failed to boot on arm64 node 2022-03-26 17:41:00 +08:00
Liu Hancheng
c7b489f8cb tidy 2022-03-25 21:36:11 +08:00
Liu Hancheng
ce12000af3 add some comments 2022-03-25 21:33:47 +08:00
Liu Hancheng
3c72f4dc51 fix test and add truncate test. 2022-03-25 21:31:23 +08:00
Liu Hancheng
ce85981a4e add truncate logic 2022-03-25 20:53:28 +08:00
Liu Hancheng
193c666bf9 Merge branch 'main' of github.com:meilisearch/meilisearch into CNLHC/change_json_error_message 2022-03-25 19:53:13 +08:00
2shiori17
705d10a96d Add instance options for RAM and CPU usage 2022-03-24 18:52:36 +00:00
bors[bot]
c0056ab73f Merge #2264
2264: Import milli from meilisearch-lib r=Kerollmops a=Kerollmops

This PR directly imports the milli dependency used in _meilisearch-http_ from _meilisearch-lib_, because I can't import _meilisearch-lib_ in _meilisearch-auth_, and _meilisearch-auth_ can't use the milli exported by _meilisearch-lib_ 😞

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-03-24 16:28:54 +00:00
Kerollmops
3df542f072 Export milli's heed from meilisearch-lib 2022-03-24 15:30:10 +01:00
Kerollmops
09212abdf7 Update the cargo lock 2022-03-24 14:53:08 +01:00
Kerollmops
ee6be4f6b9 Import milli from meilisearch-lib in meilisearch-http 2022-03-24 14:45:37 +01:00
Clémentine Urquizar - curqui
4e3b20ed73 Update config.yml 2022-03-24 12:13:30 +01:00
ad hoc
6a82a055d3 chore(auth): refactor token validation 2022-03-21 11:18:51 +01:00
bors[bot]
7e65816d63 Merge #2237
2237: Update dependencies r=MarinPostma a=Kerollmops

This PR upgrades and updates the dependencies of meilisearch, but first I removed three unused dependencies. I used [cargo udeps](https://github.com/est31/cargo-udeps) to detect those and [cargo upgrade](https://github.com/killercup/cargo-edit/blob/master/README.md#available-subcommands) to upgrade ⬆️

~This PR **must** be merged when https://github.com/meilisearch/milli/pull/465 is merged and then must be updated accordingly i.e. using the latest version of milli.~

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-17 17:15:19 +00:00
Kerollmops
5bffa4b7f9 Tenant token validation is now created by a function 2022-03-17 17:55:50 +01:00
bors[bot]
d1c0ecceb9 Merge #2245
2245: Add test to validate cli r=irevoire a=MarinPostma

followup on #2242 and #2243

Add a test to make sure the CLI is valid, and add a CI task to run the tests in debug mode to make sure we hit the debug assertions.

FYI `@curquiza,` because of CI changes

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-17 16:14:31 +00:00
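
With clap 3's derive API such a test is tiny; a sketch, assuming an `Opt` struct that derives `clap::Parser` (clap 3.1+ for `CommandFactory`):

```rust
#[cfg(test)]
mod tests {
    use clap::CommandFactory;
    use super::Opt; // the CLI options struct, assumed to derive clap::Parser

    #[test]
    fn verify_cli() {
        // Panics (in debug builds) if the CLI definition is inconsistent,
        // e.g. conflicting flags or malformed argument declarations.
        Opt::command().debug_assert();
    }
}
```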
ad hoc
1d683865cf chore(CI): add debug test to CI 2022-03-17 12:40:20 +01:00
ManyTheFish
4aef7c5ac5 Fix tenant token validation when exp is null 2022-03-17 11:05:03 +01:00
Kerollmops
968053649b Change the jsonwebtoken crate usage 2022-03-17 11:03:32 +01:00
Kerollmops
ac48860bbb Upgrade the workspace dependencies 2022-03-17 11:03:31 +01:00
Kerollmops
46e6d23dd2 Remove the zstd dependency comming from actix-web/http 2022-03-17 11:00:25 +01:00
Kerollmops
55c9514c6b Reorder the Meilisearch features for more readability 2022-03-17 11:00:25 +01:00
Kerollmops
86c1e83ea1 Remove three unused dependencies 2022-03-17 11:00:24 +01:00
bors[bot]
3273fe0470 Merge #2246
2246: Release v0.26.1: bring "fix panic at start" commit (#2243) into stable r=MarinPostma a=curquiza

2 commits
- cherry pick `32843f30d973349122ec5a37469ee09e6002b6f3` (merged in #2243)
- change the version in the Cargo toml files (from v0.26.0 to v0.26.1)

Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-03-16 18:32:47 +00:00
bors[bot]
bb9372114c Merge #2244
2244: chore(all): bump milli r=curquiza a=MarinPostma

continues the work initiated by `@psvnlsaikumar` in #2228

Co-authored-by: Sai Kumar <psvnlsaikumar@gmail.com>
2022-03-16 17:15:10 +00:00
Clémentine Urquizar
7468a5e96c Update the version (from v0.26.0 to v0.26.1) 2022-03-16 18:03:54 +01:00
ad hoc
a87faa0db7 bug(http): fix panic on startup 2022-03-16 18:03:01 +01:00
bors[bot]
b2bbd13c27 Merge #2243
2243: bug(http): fix panic on startup r=MarinPostma a=MarinPostma

this seems to fix #2242


I am not sure why this doesn't reproduce on v0.26.0, so we should remain vigilant.

`@curquiza` FYI

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-16 16:22:13 +00:00
ad hoc
22c61a1ecb chore(http): add test for validity of cli 2022-03-16 17:17:57 +01:00
Sai Kumar
e271395971 chore(all): bump milli
* updates to Use the milli's heed dependency #2210

* Update index.rs

* Update store.rs

* Update mod.rs

* cargo fmt
2022-03-16 16:34:44 +01:00
ad hoc
32843f30d9 bug(http): fix panic on startup 2022-03-16 13:33:37 +01:00
bors[bot]
469aa8feab Merge #2238
2238: cargo: use resolver 2 r=MarinPostma a=happysalada

# Pull Request

## What does this PR do?
use resolver 2 from cargo.
This mainly enables propagating the `--no-default-features` flag to workspace crates.
I mistakenly thought before that it was enough to have edition 2021 enabled. However, it turns out that for virtual workspaces, this needs to be explicitly defined.
https://doc.rust-lang.org/edition-guide/rust-2021/default-cargo-resolver.html

This will also change a little how your dependencies are compiled. See https://blog.rust-lang.org/2021/03/25/Rust-1.51.0.html#cargos-new-feature-resolver for more details.

Just to give a bit more context, this is for usage in NixOS. I tried to do the upgrade today with the latest version, and the `--no-default-features` flag is just ignored.

Let me know if you need more details of course.


## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: happysalada <raphael@megzari.com>
2022-03-15 15:50:16 +00:00
happysalada
748d5b69a5 cargo: use resolver 2 2022-03-14 23:18:59 -04:00
bors[bot]
3b2fe3aec8 Merge #2232
2232: Bring `stable` into `main` (v0.26.0) r=Kerollmops a=curquiza

Bring `stable` into `main`

Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ad hoc <postma.marin@protonmail.com>
Co-authored-by: Rob Ede <robjtede@icloud.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-14 14:46:12 +00:00
bors[bot]
d9eb1f7f00 Merge #2233
2233: Change CI name for publishing binaries r=Kerollmops a=curquiza

Minor change regarding the CI job names. Should not impact the usage.

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-03-14 14:17:05 +00:00
Clémentine Urquizar - curqui
f5a72bb19a Change CI name for publishing binaries 2022-03-14 14:20:25 +01:00
Liu Hancheng
35bf7ee538 fix test 2022-03-08 12:26:02 +08:00
LiuHanCheng
c8895cab77 Update meilisearch-lib/src/document_formats.rs
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2022-03-08 12:03:59 +08:00
bors[bot]
b669a73432 Merge #2209
2209: rename auto batching cli r=curquiza a=MarinPostma

rename `--enable-autobatching` to `--enable-auto-batching`.

as per https://github.com/meilisearch/specifications/pull/96#issuecomment-1060693721

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-07 15:58:58 +00:00
bors[bot]
833f7fbdbe Merge #2204
2204: Fix blocking auth r=Kerollmops a=MarinPostma

Fix auth blocking runtime

I have decided to remove async code from `meilisearch-auth` and let `meilisearch-http` handle that.

Because Actix polls the extractor futures concurrently, I have made a wrapper extractor that polls them sequentially, so their errors are returned in a deterministic order.
close #2201

Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-03-07 15:39:24 +00:00
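
A rough sketch of the idea in plain async Rust, outside Actix's actual extractor traits: awaiting fallible futures one after the other makes the first error deterministic:

```rust
use std::future::Future;

// Simplified illustration: instead of polling two fallible futures
// concurrently (where whichever fails first wins the error race), await
// them in sequence so the first extractor's error always surfaces first.
async fn extract_sequentially<A, B, E>(
    fut_a: impl Future<Output = Result<A, E>>,
    fut_b: impl Future<Output = Result<B, E>>,
) -> Result<(A, B), E> {
    let a = fut_a.await?;
    let b = fut_b.await?;
    Ok((a, b))
}
```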
ad hoc
62ce8e0bda chore(http): rename auto batching cli option 2022-03-07 15:19:19 +01:00
ad hoc
ddd25bfe01 remove token from InvalidToken error 2022-03-07 15:16:07 +01:00
ad hoc
19da45c53b Update meilisearch-http/src/extractors/sequential_extractor.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-03-07 15:02:07 +01:00
ad hoc
0026410c61 review edits 2022-03-07 14:21:44 +01:00
ad hoc
b57c59baa4 sequential extractor 2022-03-04 20:43:12 +01:00
Liu Hancheng
a356c8359c fix broken test 2022-03-04 15:39:18 +08:00
Liu Hancheng
b138b92d39 show detailed error message 2022-03-04 15:31:11 +08:00
Liu Hancheng
58e2903177 first try 2022-03-04 10:46:59 +08:00
ad hoc
af8a5f2c21 async auth 2022-03-02 19:25:51 +01:00
ad hoc
d6400aef27 remove async from meilsearch-authentication 2022-03-02 18:22:34 +01:00
bors[bot]
81fe65afed Merge #2200
2200: Fix(dumps): Explicitly define serde for time r=ManyTheFish a=ManyTheFish

fix #2199 


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-03-02 10:39:24 +00:00
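
A sketch of what explicitly defining serde for `time` can look like, assuming the `time` crate with its `serde-well-known` feature; the struct and field names are illustrative:

```rust
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

#[derive(Serialize, Deserialize)]
struct DumpMeta {
    // Pin the serialized form to RFC 3339 instead of relying on the
    // crate's default representation, so dumps stay stable across versions.
    #[serde(with = "time::serde::rfc3339")]
    dump_date: OffsetDateTime,
}
```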
ManyTheFish
c2b58720d1 Fix(dumps): Explicitly define serde for time 2022-03-02 11:37:48 +01:00
bors[bot]
5515aa5045 Merge #2197
2197: Additions to 0.26 (Update actix-web dependency to 4.0) r=curquiza a=MarinPostma

- `@robjtede`
`@MarinPostma`
[update actix-web dependency to 4.0](3b2e467ca6)

From main to release-v0.26.0


Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-02-28 18:09:43 +00:00
Rob Ede
15150db957 clippy 2022-02-28 19:03:38 +01:00
Rob Ede
3b2e467ca6 update actix-web dependency to 4.0 2022-02-28 19:03:37 +01:00
bors[bot]
0c9e8cdf8d Merge #2194
2194: update actix-web dependency to 4.0 r=irevoire a=robjtede

# Pull Request

## What does this PR do?
Updates Actix Web ecosystem crates to 4.0 stable.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] ~~Does this PR fix an existing issue?~~
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?




Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-02-28 15:23:42 +00:00
Rob Ede
8d624b3800 clippy 2022-02-28 13:43:22 +00:00
ad hoc
4fbb83a34d bug(snapshot): Correctly open environments in snapshots 2022-02-28 12:37:30 +01:00
Rob Ede
961e22493c update actix-web dependency to 4.0 2022-02-25 23:28:55 +00:00
bors[bot]
09ee8e34a5 Merge #2192
2192: Fix max dbs error r=Kerollmops a=MarinPostma

Factor the way we open environments to make sure they are always opened with the same options.


The issue was that indexes were first opened in snapshots with incorrect options, and the heed cache then returned an environment with those incorrect open options on subsequent index opens.

fix #2190


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-02-23 16:18:23 +00:00
ad hoc
7e832105d7 bug(snapshot): Correctly open environments in snapshots 2022-02-23 17:12:08 +01:00
bors[bot]
ff6a7b6007 Merge #2191
2191: fix(analytics): flatten the scheduler options r=curquiza a=irevoire

Implement missing part of [this spec](https://github.com/meilisearch/specifications/blob/develop/text/0034-telemetry-policies.md) by flattening the scheduler options.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-02-22 16:19:07 +00:00
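
The flattening can be expressed with serde's `flatten` attribute; a sketch with illustrative field names (not necessarily how the analytics code does it):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct SchedulerOptions {
    max_batch_size: Option<usize>,
    debounce_duration_secs: u64,
}

#[derive(Serialize)]
struct Telemetry {
    // `flatten` lifts the scheduler fields to the top level of the payload
    // instead of nesting them under a `scheduler_options` object.
    #[serde(flatten)]
    scheduler_options: SchedulerOptions,
}
```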
Tamo
6312e7f1f3 fix(analytics): flatten the scheduler options 2022-02-22 15:55:50 +01:00
bors[bot]
bfb375ac87 Merge #2189
2189: fix(all): fix two dates that were wrongly formatted r=irevoire a=irevoire

Fix #2188 and fix #2187 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-02-22 10:48:45 +00:00
Tamo
21d277a0ef fix(all): fix two dates that were wrongly formatted 2022-02-22 11:29:11 +01:00
bors[bot]
c3e3c900f2 Merge #2173
2173: chore(all): replace chrono with time r=irevoire a=irevoire

Chrono has been unmaintained for a few months now and there is a CVE on it.

Also I updated all the error messages related to the API key as you can see here: https://github.com/meilisearch/specifications/pull/114

fix #2172

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-02-17 14:12:23 +00:00
Irevoire
05c8d81e65 chore: get rid of chrono in favor of time
Chrono has been unmaintained for a few months now and there is a CVE on it.

make clippy happy

bump milli
2022-02-16 18:14:29 +01:00
bors[bot]
cd6276eef9 Merge #2174
2174: Update dashboard with v0.1.8 r=curquiza a=mdubus



Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-02-16 16:15:50 +00:00
Morgane Dubus
7bcaa2fd13 Update dashboard to v0.1.9 2022-02-16 15:53:15 +01:00
Morgane Dubus
67ecd7c147 Update dashboard with v0.1.8 2022-02-16 14:10:47 +01:00
bors[bot]
ce6ff294cf Merge #2171
2171: Update LICENSE with Meili SAS r=curquiza a=curquiza

Checked with Thomas: we must put the real name of the company

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-15 18:29:03 +00:00
Clémentine Urquizar - curqui
b318ab46cb Update LICENSE 2022-02-15 15:54:45 +01:00
bors[bot]
5890600101 Merge #2170
2170: Update version (v0.26.0) r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-14 15:22:09 +00:00
Clémentine Urquizar
e2a9414c7a Update version (v0.26.0) 2022-02-14 16:11:07 +01:00
bors[bot]
216965e9d9 Merge #2122
2122: fix: docker image failed to boot on arm64 node r=curquiza a=Thearas

# Pull Request

## What does this PR do?
Fixes #2115.

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?


Co-authored-by: Thearas <thearas850@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-14 11:41:18 +00:00
Thearas
d0ddbcc2b2 update 2022-02-14 18:08:57 +08:00
Clémentine Urquizar - curqui
41db2601a6 Update Dockerfile 2022-02-14 10:44:03 +01:00
Thearas
42cb94e1f4 fix: docker image failed to boot on arm64 node 2022-02-14 13:23:20 +08:00
bors[bot]
6e7a0cc65d Merge #2157
2157: fix(auth): fix env being closed when dumping r=Kerollmops a=MarinPostma

When creating a dump, the auth store environment would be closed on drop, so subsequent dumps couldn't reopen it. I have added a flag to prevent the environment from being closed on drop when dumping.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-02-09 15:04:19 +00:00
ad hoc
23eba82038 fix(auth): fix env being closed when dumping 2022-02-09 13:56:26 +01:00
bors[bot]
001b9acb63 Merge #2155
2155: Fix curl command in download-latest script r=Kerollmops a=curquiza

Fixes #2151 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-08 15:29:51 +00:00
Clémentine Urquizar
af65ccfd6a Fix curl command in download-latest script 2022-02-08 16:28:26 +01:00
bors[bot]
0c7251475d Merge #2150
2150: Bump milli to v0.22.1 r=curquiza a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/2138 and https://github.com/meilisearch/meilisearch/issues/2123 by bumping milli to [v0.22.1](https://github.com/meilisearch/milli/releases/tag/v0.22.1)

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-02-08 15:05:06 +00:00
Clémentine Urquizar
1a87b2f37d Bump milli to v0.22.1 2022-02-08 11:21:44 +01:00
bors[bot]
752a0e13ad Merge #2136
2136: Refactoring CI regarding ARM binary publish r=curquiza a=curquiza

Fixes https://github.com/meilisearch/meilisearch/issues/1909

- Remove CI file to publish aarch64 binary and put the logic into `publish-binary.yml`
- Remove the job to publish armv8 binary
- Fix download-latest script accordingly
- Adapt download-latest to the specific case of the macOS M1

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: meili-bot <74670311+meili-bot@users.noreply.github.com>
2022-02-07 16:07:46 +00:00
Clémentine Urquizar
ccaca33446 Add --fail-with-body flag to curl in script 2022-02-07 16:16:49 +01:00
Clémentine Urquizar
2a90e805a2 Fix script 2022-02-07 16:05:48 +01:00
Clémentine Urquizar
c4a2d70d19 Fix error handler for curl command in script 2022-02-07 16:01:50 +01:00
meili-bot
f7e4a0177d Update download-latest.sh
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-02-07 13:53:30 +01:00
bors[bot]
cca65499de Merge #2145
2145: Update LICENSE r=meili-bot a=curquiza



Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-02-07 12:51:50 +00:00
Clémentine Urquizar - curqui
80fa7dbbfa Update LICENSE 2022-02-05 18:29:47 +01:00
bors[bot]
c24b1e5250 Merge #2135
2135: bug(auth): Make API keys accept Null descriptions r=curquiza a=ManyTheFish

Fix #2116


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-02-03 15:26:11 +00:00
Clémentine Urquizar
78cf8f1f9f Fix typo 2022-02-02 19:32:20 +01:00
Clémentine Urquizar
1da7277817 Fix download-latest.sh according to the new name of the binary 2022-02-02 19:25:52 +01:00
Clémentine Urquizar
c71c95feb0 Refactor CIs to publish aarch64 binary 2022-02-02 19:25:28 +01:00
ManyTheFish
3bee31e6c7 bug(auth): Make API keys accept Null descriptions 2022-02-02 18:18:17 +01:00
bors[bot]
9448ca58aa Merge #2005
2005: auto batching r=MarinPostma a=MarinPostma

This PR implements auto batching. The basic idea is that all updates that can be batched together are batched together while the previous batch is being processed.

For now, the only updates that can be batched together are the document addition updates (both update and replace), for a single index.

The batching is disabled by default for multiple reasons:
- We need more experimentation with the scheduling techniques
- Right now, if one task fails in a batch, the whole batch fails. We need more permissive error handling when processing document indexation.

There are four CLI options, for now, to interact with how the batch is scheduled:
- `enable-autobatching`: enable the autobatching feature.
- `debounce-duration-sec`: When an update is received, wait that number of seconds before batching and performing the updates. Defaults to 0s.
- `max-batch-size`: the maximum number of tasks per batch, defaults to unlimited.
- `max-documents-per-batch`: the maximum number of documents in a batch, defaults to unlimited. The batch will always contain at least 1 task, no matter the number of documents in that task.

# Implementation

The current implementation is made of 3 major components:

## TaskStore
The `TaskStore` contains all the tasks. When a task is pushed, it is directly registered to the task store.

## Scheduler
The scheduler is in charge of making the batches. At its core, there is a `TaskQueue` and a job queue. `Job`s are always processed first. They are *volatile* tasks, that is, they don't have a TaskId and are not persisted to disk. Snapshots and dumps are examples of Jobs.

If no `Job` is available for processing, then the scheduler attempts to make a `Task` batch from the `TaskQueue`. The first step is to gather new tasks from the `TaskStore` to populate the `TaskQueue`. When this is done, we can prepare our batch. The `TaskQueue` is itself a `BinaryHeap` of `TaskList`s. Each `index_uid` is associated with a `TaskList` that contains all the updates associated with that index uid. Each `TaskList` in the `TaskQueue` is ordered by the id of its first task.

When preparing a batch, the `TaskList` at the top of the `TaskQueue` is popped, and the tasks are popped from the list to make the next batch. If there are remaining tasks in the list, the list is inserted back in the `TaskQueue`.

## UpdateLoop
The `UpdateLoop`'s role is to perform batches sequentially. Each time updates are pushed to the update store, the scheduler is notified and will in turn notify the update loop that work can be performed. When notified, the update loop waits a little for more incoming updates, then asks the scheduler for the next batch and performs it. When it is done, the status of the tasks is put back into the store, and the next batch is processed.


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-02-02 11:04:30 +00:00
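
For illustration, a compact sketch of the queue shape described above: a `BinaryHeap` of per-index task lists ordered by the id of their first task (types heavily simplified, not the actual scheduler code):

```rust
use std::cmp::Ordering;
use std::collections::{BinaryHeap, VecDeque};

type TaskId = u64;

struct TaskList {
    index_uid: String,       // the index these tasks belong to
    tasks: VecDeque<TaskId>, // task ids for this index, oldest first
}

impl TaskList {
    fn first_id(&self) -> Option<TaskId> {
        self.tasks.front().copied()
    }
}

impl PartialEq for TaskList {
    fn eq(&self, other: &Self) -> bool {
        self.first_id() == other.first_id()
    }
}
impl Eq for TaskList {}
impl PartialOrd for TaskList {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for TaskList {
    // `BinaryHeap` is a max-heap, so compare in reverse: the list whose
    // first task has the *smallest* (oldest) id ends up on top.
    fn cmp(&self, other: &Self) -> Ordering {
        other.first_id().cmp(&self.first_id())
    }
}

/// Pop the most urgent list, drain up to `max_batch_size` tasks out of it,
/// and push the list back if it still holds tasks for a later batch.
fn next_batch(queue: &mut BinaryHeap<TaskList>, max_batch_size: usize) -> Vec<TaskId> {
    let mut list = match queue.pop() {
        Some(list) => list,
        None => return Vec::new(),
    };
    let take = max_batch_size.min(list.tasks.len());
    let batch: Vec<TaskId> = list.tasks.drain(..take).collect();
    if !list.tasks.is_empty() {
        queue.push(list);
    }
    batch
}
```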
mpostma
c9a236b0af feat(lib): auto-batching 2022-02-01 18:06:20 +01:00
bors[bot]
622c15e825 Merge #2096
2096: feat(auth): Tenant token r=Kerollmops a=ManyTheFish

Make meilisearch support JWT authentication signed with meilisearch API keys
using HS256, HS384 or HS512 algorithms.

Related spec: [specifications#89](https://github.com/meilisearch/specifications/pull/89) [rendered](https://github.com/meilisearch/specifications/blob/scoped-api-keys/text/0089-tenant-tokens.md)
Fix #1991 


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-27 10:38:41 +00:00
bors[bot]
054598734a Merge #2120
2120: Bring `stable` into `main` r=curquiza a=curquiza

I forgot to do it; tell me, `@Kerollmops` or `@irevoire,` if it's useful or not. I would say yes, otherwise I will have conflicts when I try to bring `main` into `stable` for the next release. Maybe I'm wrong

Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-01-27 09:35:21 +00:00
ManyTheFish
7ca647f0d0 feat(auth): Implement Tenant token
Make meilisearch support JWT authentication signed with meilisearch API keys
using HS256, HS384 or HS512 algorithms.

Related spec: https://github.com/meilisearch/specifications/pull/89
Fix #1991
2022-01-27 08:25:39 +01:00
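
A sketch of validating such a tenant token with `jsonwebtoken` (v8 API), accepting the three HMAC algorithms; the claim names are illustrative, not the exact spec:

```rust
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct TenantClaims {
    #[serde(rename = "searchRules")]
    search_rules: serde_json::Value, // per-index restrictions (illustrative)
    exp: Option<i64>,                // expiry may legitimately be absent
}

fn validate_tenant_token(
    token: &str,
    api_key: &[u8],
) -> Result<TenantClaims, jsonwebtoken::errors::Error> {
    let mut validation = Validation::new(Algorithm::HS256);
    // Accept the three HMAC algorithms the PR mentions.
    validation.algorithms = vec![Algorithm::HS256, Algorithm::HS384, Algorithm::HS512];
    // `exp` may be null or missing in a tenant token, so don't let the
    // library reject it; the caller checks the expiry when it is present.
    validation.validate_exp = false;
    validation.required_spec_claims = Default::default();
    Ok(decode::<TenantClaims>(token, &DecodingKey::from_secret(api_key), &validation)?.claims)
}
```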
Clémentine Urquizar - curqui
aa50fcb1f0 Merge branch 'main' into stable 2022-01-26 20:17:41 +01:00
Clémentine Urquizar - curqui
b408de0761 Merge pull request #2117 from meilisearch/rebranding
Changes related to the rebranding
2022-01-26 19:58:54 +01:00
Tamo
72d9c5ee5c fix(rebranding): Update the ascii art (#2118) 2022-01-26 18:53:07 +01:00
Clémentine Urquizar
2b7440d4b5 Fix some typo 2022-01-26 17:56:18 +01:00
Clémentine Urquizar - curqui
3bc6a18bcd Update README.md
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-01-26 17:54:51 +01:00
Clémentine Urquizar - curqui
db56d6cb11 Update download-latest.sh 2022-01-26 17:54:22 +01:00
Clémentine Urquizar - curqui
a5759139bf Update CONTRIBUTING.md
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-01-26 17:51:38 +01:00
Clémentine Urquizar
8a959da120 Update MeiliSearch into Meilisearch everywhere 2022-01-26 17:43:16 +01:00
Clémentine Urquizar
0a78750465 Replace meilisearch by Meilisearch 2022-01-26 17:35:56 +01:00
Clémentine Urquizar
372f4fc924 Replace logo 2022-01-26 17:34:31 +01:00
meili-bot
ae5b401e74 Update README.md 2022-01-26 16:31:04 +01:00
meili-bot
c562655be7 Update CONTRIBUTING.md 2022-01-26 16:31:03 +01:00
bors[bot]
c8bb54cd94 Merge #2098
2098: feat(dump): Provide the same cli options as the snapshots r=MarinPostma a=irevoire

Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`

Fix #2087

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-26 14:32:23 +00:00
bors[bot]
8c80326dd5 Merge #2086
2086: feat(analytics): send the whole set of cli options instead of only the snapshot r=MarinPostma a=irevoire

Fixes #2088 

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-26 13:53:20 +00:00
Tamo
bad4bed439 feat(dump): Provide the same cli options as the snapshots
Add two cli options for the dump:
- `--ignore-missing-dump`
- `--ignore-dump-if-db-exists`

Fix #2087
2022-01-26 14:34:06 +01:00
Tamo
7828da15c3 feat(analytics): send the whole set of cli options instead of only the snapshot 2022-01-26 13:52:41 +01:00
bors[bot]
7e2f6063ae Merge #2099 #2108
2099: feat(analytics): Set the timestamp of the aggregated event as the first aggregate r=MarinPostma a=irevoire



2108: meta(auth): Enhance tests on authorization r=MarinPostma a=ManyTheFish

Enhance auth tests in order to be able to add new actions without changing tests.

Helping #2080 

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-24 15:13:01 +00:00
ManyTheFish
2b766a2f26 meta(auth): Enhance tests on authorization
Enhance auth tests in order to be able to add new actions without changing tests
2022-01-24 15:35:39 +01:00
bors[bot]
8ae504bfb0 Merge #2101
2101: chore(all): update actix-web dependency to 4.0.0-beta.21 r=MarinPostma a=robjtede

# Pull Request

## What does this PR do?
I don't expect any more breaking changes to Actix Web that will affect Meilisearch, so bump to the latest beta.

Fixes #N/A?
<!-- Please link the issue you're trying to fix with this PR, if none then please create an issue first. -->

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to MeiliSearch!


Co-authored-by: Rob Ede <robjtede@icloud.com>
2022-01-24 14:33:46 +00:00
bors[bot]
5981e6c57c Merge #2095
2095: feat(error): Update the error message when you have no version file r=MarinPostma a=irevoire

Following this [issue](https://github.com/meilisearch/meilisearch-kubernetes/issues/95) we decided to change the error message from:
```
Version file is missing or the previous MeiliSearch engine version was below 0.24.0. Use a dump to update MeiliSearch.
```
to
```
Version file is missing or the previous MeiliSearch engine version was below 0.25.0. Use a dump to update MeiliSearch.
```

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-24 14:09:13 +00:00
bors[bot]
1be3a1e945 Merge #2075
2075: Allow payloads with no documents  r=irevoire a=MarinPostma

Accept additions with 0 documents.

0-byte payloads are still refused, since they are not valid JSON / JSON lines / CSV anyway...

close #1987


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-24 12:55:29 +00:00
Tamo
629b897845 feat(error): Update the error message when you have no version file 2022-01-24 13:44:00 +01:00
Rob Ede
9f5fee404b chore(all): update actix-web dependency to 4.0.0-beta.21 2022-01-21 20:44:17 +00:00
Tamo
40bf98711c feat(analytics): Set the timestamp of the aggregated event as the first aggregate 2022-01-20 19:08:57 +01:00
bors[bot]
f9f075bca2 Merge #2068
2068: chore(http): migrate from structopt to clap3 r=Kerollmops a=MarinPostma

migrate from structopt to clap3

This fixes the long-standing issue of flags requiring a value, such as `--no-analytics` or `--schedule-snapshot`.

All flag arguments now take NO value, i.e.:
`meilisearch --schedule-snapshot true` becomes `meilisearch --schedule-snapshot`

as per https://docs.rs/clap/latest/clap/struct.Arg.html#method.env, the env variable is defined as:
> A false literal is n, no, f, false, off or 0. An absent environment variable will also be considered as false. Anything else will be considered as true.

`@gmourier` 
`@curquiza` 
`@meilisearch/docs-team` 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-20 10:59:44 +00:00
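
A sketch of such a flag with clap 3's derive API (requires clap's `env` feature; struct name illustrative), matching the env-literal rules quoted above:

```rust
use clap::Parser;

#[derive(Parser)]
struct Opt {
    /// As a CLI flag this takes no value; via the environment, "n", "no",
    /// "f", "false", "off", "0" or an absent variable mean false,
    /// anything else means true.
    #[clap(long, env = "MEILI_NO_ANALYTICS")]
    no_analytics: bool,
}

fn main() {
    let opt = Opt::parse();
    println!("analytics disabled: {}", opt.no_analytics);
}
```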
mpostma
0c1a3d59eb fix no-analytics 2022-01-20 11:50:24 +01:00
bors[bot]
523fb5cd56 Merge #2084
2084: bump milli r=Kerollmops a=irevoire

- Fix https://github.com/meilisearch/MeiliSearch/issues/2082 by updating milli dependency
- Fix Clippy error
- Change the MeiliSearch version in the cargo.toml to anticipate the coming release (v0.25.2)

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-18 14:30:37 +00:00
Tamo
436f61a7f4 chore: bump meilisearch 2022-01-18 12:27:15 +01:00
Tamo
3fab5869fa chore: bump milli 2022-01-18 11:50:17 +01:00
mpostma
0515c6e844 bug(http): fix task duration 2022-01-13 16:41:07 +01:00
Irevoire
38176181ac fix(dump): Fix the import of dump from the v24 and before 2022-01-13 16:40:58 +01:00
bors[bot]
a7e634bd4f Merge #2074
2074: fix(dump): Fix the import of dumps when there is no data.ms r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-01-13 13:47:03 +00:00
bors[bot]
78a381a30b Merge #2076
2076: fix(dump): Fix the import of dump from the v24 and before r=ManyTheFish a=irevoire

Same as https://github.com/meilisearch/MeiliSearch/pull/2073 but on main this time

Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-01-13 13:09:23 +00:00
Irevoire
343bce6a29 fix(dump): Fix the import of dump from the v24 and before 2022-01-13 13:23:57 +01:00
mpostma
d263f762bf feat(http): accept empty document additions
wip
2022-01-13 12:46:56 +01:00
Irevoire
dfaeb19566 fix(dump): Fix the import of dumps when there is no data.ms 2022-01-13 12:30:58 +01:00
bors[bot]
010dcc3e80 Merge #2066
2066: bug(http): fix task duration r=MarinPostma a=MarinPostma

`@gmourier` found that the duration in the task view was not computed correctly; this PR fixes it.

`@curquiza,` I let you decide if we need to make a hotfix out of this or wait for the next release. This is not breaking.


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-12 14:50:58 +00:00
bors[bot]
d0aa5f747c Merge #2067
2067: chore(all): fix rust edition r=irevoire a=MarinPostma

I hadn't correctly set the rust edition in my previous PR, and cargo was returning a warning. This time I followed this guide: https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html


Co-authored-by: mpostma <postma.marin@protonmail.com>
2022-01-12 13:32:42 +00:00
mpostma
f6d53e03f1 chore(http): migrate from structopt to clap3 2022-01-12 14:07:19 +01:00
mpostma
3ecebd15ee chore(all): fix rust edition 2022-01-12 11:14:50 +01:00
mpostma
db83e39a7f bug(http): fix task duration 2022-01-11 18:01:25 +01:00
bors[bot]
5d48f72ade Merge #2065
2065: MeiliSearch v0.25.0: `stable` -> `main` r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
Co-authored-by: Maxime Legendre <maximelegendre@MacBook-Pro-de-Maxime.local>
Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-11 16:30:22 +00:00
bors[bot]
1818026a84 Merge #2057
2057: fix(dump): Uncompress the dump IN the data.ms  r=irevoire a=irevoire

When loading a dump with docker, we had two problems.
After creating a temp directory and uncompressing and re-indexing the dump:
1. We try to `move` the new “data.ms” onto the currently present
   one. The problem is that the `data.ms` is often a mount point, because
   that's what people usually do with Docker. We can't override
   a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
   be on the same partition as the `data.ms`. This means when we tried to move
   the dump over the `data.ms`, it was also failing because we can't move data
   between two partitions.
------------------
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone tries to create volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they could not load a
big dump just because their `/tmp` was too small. This solves the issue: now the
dump is extracted and indexed on the same partition the `data.ms` will live on.

fix #1833

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-10 17:57:16 +00:00
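
The second fix maps directly onto the `tempfile` crate; a sketch, assuming `db_path` points at the `data.ms` directory:

```rust
use std::io;
use std::path::Path;
use tempfile::TempDir;

// Create the extraction tempdir *inside* the database directory: the
// uncompressed dump then lands on the same partition as the final
// `data.ms`, so moving its content is a cheap rename instead of a
// cross-device copy that would fail or fill up `/tmp`.
fn extraction_dir(db_path: &Path) -> io::Result<TempDir> {
    tempfile::tempdir_in(db_path)
}
```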
bors[bot]
0ad7d38eec Merge #2061
2061: Update dashboard for v0.25.0 r=curquiza a=mdubus



Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2022-01-10 16:29:31 +00:00
Morgane Dubus
b17ad5c2be Update with latest release of the dashboard 2022-01-10 17:10:09 +01:00
bors[bot]
1824b3c07b Merge #2060
2060: chore(all) set rust edition to 2021 r=MarinPostma a=MarinPostma

set the rust edition for the project to 2021

this make the MSRV to v1.56

#2058


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-10 15:04:14 +00:00
Tamo
c9c7da3626 fix(dump): Uncompress the dump IN the data.ms
When loading a dump with docker, we had two problems.
After creating a temp directory and uncompressing and re-indexing the dump:
1. We try to `move` the new “data.ms” onto the currently present
   one. The problem is that the `data.ms` is often a mount point, because
   that's what people usually do with Docker. We can't override
   a mount point, and thus we were throwing an error.
2. The tempdir is created in `/tmp`, which is usually quite small AND may not
   be on the same partition as the `data.ms`. This means when we tried to move
   the dump over the `data.ms`, it was also failing because we can't move data
   between two partitions.
==============
1 was fixed by deleting the *content* of the `data.ms` and moving the *content*
of the tempdir *inside* the `data.ms`. If someone tries to create volumes inside
the `data.ms`, that's their problem, not ours.
2 was fixed by creating the tempdir *inside* the `data.ms`. If a user mounted
their `data.ms` on a large partition, there is no reason they could not load a
big dump just because their `/tmp` was too small. This solves the issue: now the
dump is extracted and indexed on the same partition the `data.ms` will live on.

fix #1833
2022-01-10 14:56:03 +01:00
Morgane Dubus
030a90523d Update dashboard for v0.25.0 2022-01-10 10:50:57 +01:00
bors[bot]
56d223a51d Merge #2059
2059: change indexed doc count on error r=irevoire a=MarinPostma

change `indexed_documents` and `deleted_documents` to return 0 instead of null when they are empty, e.g. when the task has failed.

close #2053


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-06 15:55:50 +00:00
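
A sketch of that serialization tweak with serde: the counters stay `Option`s internally but render as `0` instead of `null`; names are illustrative:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct TaskDetails {
    // Serialize `None` as 0 rather than `null` for failed tasks.
    #[serde(serialize_with = "zero_if_none")]
    indexed_documents: Option<u64>,
    #[serde(serialize_with = "zero_if_none")]
    deleted_documents: Option<u64>,
}

fn zero_if_none<S: serde::Serializer>(v: &Option<u64>, s: S) -> Result<S::Ok, S::Error> {
    s.serialize_u64(v.unwrap_or(0))
}
```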
Marin Postma
f558ff826a feat(http): task view indexed and deleted documents return 0 instead of null 2022-01-06 14:55:02 +01:00
Marin Postma
5fb4ed60e7 chore(all) set rust edition to 2021 2022-01-06 13:30:45 +01:00
bors[bot]
0d2a358cc2 Merge #2056
2056: Allow any header for CORS r=curquiza a=curquiza

Bug fix: a CORS error was triggered when trying to send the `User-Agent` header via the browser

`@bidoubiwa` thanks for the bug report!

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-01-05 16:45:51 +00:00
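
With the `actix-cors` crate this is a small builder change; a sketch of a permissive setup (not necessarily Meilisearch's exact configuration):

```rust
use actix_cors::Cors;

// Build a CORS middleware that echoes back whatever request headers the
// browser asks for (e.g. User-Agent) instead of keeping an allow-list.
fn cors() -> Cors {
    Cors::default()
        .allow_any_origin()
        .allow_any_method()
        .allow_any_header()
}
```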
Clémentine Urquizar
595250c93e Allow any header for CORS 2022-01-05 15:38:47 +01:00
bors[bot]
c636988935 Merge #2055
2055: fix(dump): Fix the loading of dump with empty indexes r=irevoire a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-01-05 14:31:53 +00:00
Tamo
eea483c470 fix(dump): Fix the loading of dump with empty indexes 2022-01-05 15:08:21 +01:00
bors[bot]
d53c61a6d0 Merge #2054
2054: Bug(auth): Wrap key list in results r=irevoire a=ManyTheFish

fix #2052

Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-04 15:44:55 +00:00
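
A sketch of the response-shape change with serde; the `Key` fields are placeholders:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Key {
    description: Option<String>, // placeholder field
}

// Before: the route returned a bare JSON array of keys.
// After: the array is wrapped in an object: { "results": [...] }.
#[derive(Serialize)]
struct KeyListView {
    results: Vec<Key>,
}
```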
ManyTheFish
c0d4f71a34 Bug(auth): Wrap key list in results 2022-01-04 14:10:30 +01:00
bors[bot]
f56989e46e Merge #2011
2011: bug(lib): drop env on last use r=curquiza a=MarinPostma

fixes the `too many open files` error when running tests by closing the
environment on last drop

To check that we are actually the last owner of the `env` we plan to drop, I have wrapped all envs in `Arc`, and we check that we hold the last reference to it.


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2022-01-04 11:03:08 +00:00
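
A sketch of the drop-on-last-reference pattern described here, with a stand-in `Env` type:

```rust
use std::sync::Arc;

struct Env; // stand-in for the heed/LMDB environment

impl Env {
    fn close(self) {
        // The real environment would release its file descriptors here.
    }
}

// Close the environment only if we hold the very last `Arc` to it;
// otherwise the remaining owners keep it alive.
fn drop_env(env: Arc<Env>) {
    if let Ok(env) = Arc::try_unwrap(env) {
        env.close();
    }
}
```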
bors[bot]
c0251eb680 Merge #2050
2050: Bug(CORS): Add missing allowed headers r=curquiza a=ManyTheFish

fix #2040

## test
html file to test:

```html
<!DOCTYPE html>
<html>
<meta content="text/html;charset=utf-8" http-equiv="Content-Type">
<meta content="utf-8" http-equiv="encoding">

<script>
var xmlHttp = new XMLHttpRequest();
    xmlHttp.open( "GET", "http://127.0.0.1:7700/indexes/toto", false ); // false for synchronous request
    xmlHttp.setRequestHeader("Authorization", "Bearer manythefish");
    xmlHttp.send( null );
    console.log(xmlHttp.responseText);
</script>
</html>
```


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-01-03 14:23:34 +00:00
ManyTheFish
450b81ca13 Bug(CORS): Add missing allowed headers
fix #2040
2022-01-03 13:41:12 +01:00
bors[bot]
2f3faadcbf Merge #2034
2034: Fix typo r=curquiza a=curquiza

Fix `Meilisearch` typo into `MeiliSearch`

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-01-03 09:40:56 +00:00
bors[bot]
5986a2d126 Merge #2036
2036: chore(ci): Enable rust_backtrace in the ci r=curquiza a=irevoire

This should help us understand the unreproducible panics that happen in the CI all the time

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-12-22 19:00:08 +00:00
Tamo
d75e84f625 chore(ci): Enable rust_backtrace in the ci 2021-12-22 18:20:44 +01:00
bors[bot]
c221277fd2 Merge #2035
2035: Use self hosted GitHub runner r=curquiza a=curquiza

Checked with `@tpayet,` we have created a self-hosted GitHub runner to save time when pushing the Docker images.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-12-22 15:28:33 +00:00
bors[bot]
3b30fadb55 Merge #2037
2037: test: Ignore the auths tests on windows r=irevoire a=irevoire

Since the auth tests fail sporadically on the Windows CI and we can't reproduce these failures on a real Windows machine, we are going to ignore them.

But we still ensure they compile.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-12-22 15:13:32 +00:00
Tamo
d7df4d6b84 test: Ignore the auths tests on windows
Since the auth tests fail sporadically on the Windows CI and we can't
reproduce these failures on a real Windows machine, we are going to
ignore them.
But we still ensure they compile.
2021-12-22 12:39:48 +01:00
Clémentine Urquizar
fd854035c1 Use self hosted github runner 2021-12-21 18:32:29 +01:00
bors[bot]
4d1c138842 Merge #2032
2032: Revert docker as non root PR r=curquiza a=ManyTheFish

Revert #1759

hotfix for #1969

Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 16:19:36 +00:00
Maxime Legendre
7649239b08 Revert docker as non root PR 2021-12-21 16:59:15 +01:00
bors[bot]
0e2f6ba1b6 Merge #2033
2033: Bug(FS): Consider empty pre-created directory as non-existing DB r=curquiza a=ManyTheFish

When the database directory was pre-created, we considered the DB invalid; we now accept creating a database in it.


Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 15:12:07 +00:00
Clémentine Urquizar
f529c46598 Fix typo in error messages and comments 2021-12-21 16:01:38 +01:00
Maxime Legendre
1ba49d2ddb Bug(FS): Consider empty pre-created directory as non-existing DB 2021-12-21 15:30:11 +01:00
bors[bot]
1b5ca88231 Merge #2026
2026: Bug(auth): Parse YMD date r=curquiza a=ManyTheFish

Use NaiveDate to parse YMD dates instead of NaiveDateTime

fix #2017


Co-authored-by: Maxime Legendre <maximelegendre@mbp-de-maxime.home>
2021-12-21 13:48:21 +00:00
Maxime Legendre
37329e0784 Bug(auth): Parse YMD date
Use NaiveDate to parse YMD dates instead of NaiveDateTime

fix #2017
2021-12-20 15:30:11 +01:00
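
A sketch of that parsing fallback with chrono, which Meilisearch still used at the time; the format strings are illustrative:

```rust
use chrono::{NaiveDate, NaiveDateTime};

// Accept both a full datetime ("2021-12-20T15:30:11") and a bare YMD date
// ("2021-12-20"), turning the latter into midnight of that day. Parsing a
// bare date with NaiveDateTime alone would fail, which was the bug.
fn parse_expiry(s: &str) -> Option<NaiveDateTime> {
    NaiveDateTime::parse_from_str(s, "%Y-%m-%dT%H:%M:%S")
        .ok()
        .or_else(|| {
            NaiveDate::parse_from_str(s, "%Y-%m-%d")
                .ok()
                .map(|d| d.and_hms(0, 0, 0))
        })
}
```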
bors[bot]
eaff393c76 Merge #2025
2025: Fix security index creation r=ManyTheFish a=ManyTheFish

Forbid index creation on alternate routes when the action `index.create` is not given

fix #2024

Co-authored-by: Maxime Legendre <maximelegendre@MacBook-Pro-de-Maxime.local>
2021-12-20 14:04:28 +00:00
Maxime Legendre
a845cd8880 Fix(auth): Forbid index creation on alternate routes
Forbid index creation on alternate routes when the action `index.create` is not given

fix #2024
2021-12-20 14:48:18 +01:00
Marin Postma
b28a465304 bug(lib): drop env on last use
fixes the `too many open files` error when running tests by closing the
environment on last drop
2021-12-16 10:57:55 +01:00
bors[bot]
ea0a5271f7 Merge #1908
1908: Update CONTRIBUTING.md r=curquiza a=ferdi05

Added a sentence on other means to contribute. If we like it, we can add it in some other places.

# Pull Request

## What does this PR do?
Improves `CONTRIBUTING`

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to MeiliSearch!


Co-authored-by: Ferdinand Boas <ferdinand.boas@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2021-12-09 14:02:48 +00:00
Clémentine Urquizar - curqui
80d039042b Update CONTRIBUTING.md 2021-12-07 11:58:44 +01:00
Ferdinand Boas
5606e22d97 Update CONTRIBUTING.md
typo
2021-12-07 11:46:08 +01:00
Ferdinand Boas
dadce6032d Update CONTRIBUTING.md
Added a sentence on other means to contribute. If we like it, we can add it in some other places.
2021-11-16 17:27:18 +01:00
161 changed files with 12841 additions and 6381 deletions


@@ -23,8 +23,8 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**MeiliSearch version:** [e.g. v0.20.0]
**Meilisearch version:** [e.g. v0.20.0]
**Additional context**
Additional information that may be relevant to the issue.
[e.g. architecture, device, OS, browser]
[e.g. architecture, device, OS, browser]


@@ -1,10 +1,13 @@
contact_links:
- name: Feature request
- name: Language support request & feedback
url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal?discussions_q=label%3Aproduct%3Acore%3Atokenizer+category%3A%22Feedback+%26+Feature+Proposal%22
about: The requests and feedback regarding Language support are not managed in this repository. Please upvote the related discussion in our dedicated product repository or open a new one if it doesn't exist.
- name: Feature request & feedback
url: https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal
about: The feature requests are not managed in this repository, please open a discussion in our dedicated product repository
about: The feature requests and feedback regarding the already existing features are not managed in this repository. Please open a discussion in our dedicated product repository
- name: Documentation issue
url: https://github.com/meilisearch/documentation/issues/new
about: For documentation issues, open an issue or a PR in the documentation repository
- name: Support questions & other
url: https://github.com/meilisearch/MeiliSearch/discussions/new
url: https://github.com/meilisearch/meilisearch/discussions/new
about: For any other question, open a discussion in this repository

.github/dependabot.yml (new file)

@@ -0,0 +1,13 @@
# Set update schedule for GitHub Actions only
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
labels:
- 'skip changelog'
- 'dependencies'
rebase-strategy: disabled

.github/scripts/check-release.sh (new file)

@@ -0,0 +1,28 @@
#!/bin/bash
# check_tag $current_tag $file_tag $file_name
function check_tag {
if [[ "$1" != "$2" ]]; then
echo "Error: the current tag does not match the version in $3: found $2 - expected $1"
ret=1
fi
}
ret=0
current_tag=${GITHUB_REF#'refs/tags/v'}
toml_files='*/Cargo.toml'
for toml_file in $toml_files;
do
file_tag="$(grep '^version = ' $toml_file | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')"
check_tag $current_tag $file_tag $toml_file
done
lock_file='Cargo.lock'
lock_tag=$(grep -A 1 'name = "meilisearch-auth"' $lock_file | grep version | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')
check_tag $current_tag $lock_tag $lock_file
if [[ "$ret" -eq 0 ]] ; then
echo 'OK'
fi
exit $ret


@@ -1,14 +1,14 @@
#!/bin/sh
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v0.10.1
# new tag -> v0.8.12
# The new tag should not be the latest
# So it returns "false", the CI should not run for the release v0.8.2
# Used in GHA in publish-docker-latest.yml
# Was used in our CIs to publish the latest docker image. Not used anymore, will be used again when v1 and v2 will be out and we will want to maintain multiple stable versions.
# Returns "true" or "false" (as a string) to be used in the `if` in GHA
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v2.1.1
# new tag -> v1.20.3
# The new tag (v1.20.3) should NOT be the latest
# So it returns "false", the `latest` tag should not be updated for the release v1.20.3 and still need to correspond to v2.1.1
# GLOBAL
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
@@ -74,7 +74,7 @@ semverLT() {
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file"
curl -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file"
releases=$(cat "$temp_file" | \
grep -E "tag_name|draft|prerelease" \
| tr -d ',"' | cut -d ':' -f2 | tr -d ' ')


@@ -1,20 +0,0 @@
# GitHub Actions Workflow for MeiliSearch
> **Note:**
> - We do not use [cache](https://github.com/actions/cache) yet but we could use it to speed up CI
## Workflow
- On each pull request, we trigger `cargo test`.
- On each tag, we build:
- the tagged Docker image and publish it to Docker Hub
- the binaries for MacOS, Ubuntu, and Windows
- the Debian package
- On each stable release (`v*.*.*` tag):
- we build the `latest` Docker image and publish it to Docker Hub
- we publish the binary to Hombrew and Gemfury
## Problems
- We do not test on Windows because we are unable to make it work, there is a disk space problem.


@@ -8,7 +8,7 @@ jobs:
nightly-coverage:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
@@ -25,7 +25,7 @@ jobs:
RUSTFLAGS: "-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Cpanic=unwind -Zpanic_abort_tests"
- uses: actions-rs/grcov@v0.1
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ${{ steps.coverage.outputs.report }}


@@ -0,0 +1,23 @@
name: Create issue to upgrade dependencies
on:
schedule:
- cron: '0 0 1 */3 *'
workflow_dispatch:
jobs:
create-issue:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Create an issue
uses: actions-ecosystem/action-create-issue@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
title: Upgrade dependencies
body: |
We need to update the dependencies of the Meilisearch repository, and, if possible, the dependencies of all the core-team repositories that Meilisearch depends on (milli, charabia, heed...).
⚠️ This issue should only be done at the beginning of the sprint!
labels: |
dependencies
maintenance


@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install cargo-flaky
run: cargo install cargo-flaky
- name: Run cargo flaky 100 times


@@ -5,9 +5,33 @@ on:
name: Publish binaries to release
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Check if the tag has the v<nmumber>.<number>.<number> format.
# If yes, it means we are publishing an official release.
# If no, we are releasing a RC, so no need to check the version.
- name: Check tag format
if: github.event_name != 'schedule'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
- name: Check release validity
if: steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
publish:
name: Publish for ${{ matrix.os }}
name: Publish binary for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
needs: check-version
strategy:
fail-fast: false
matrix:
@@ -27,7 +51,7 @@ jobs:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build
run: cargo build --release --locked
- name: Upload binaries to release
@@ -38,28 +62,70 @@ jobs:
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-armv8:
name: Publish for ARMv8
runs-on: ubuntu-18.04
publish-aarch64:
name: Publish binary for aarch64
runs-on: ${{ matrix.os }}
needs: check-version
continue-on-error: false
strategy:
fail-fast: false
matrix:
include:
- build: aarch64
os: ubuntu-18.04
target: aarch64-unknown-linux-gnu
linker: gcc-aarch64-linux-gnu
use-cross: true
asset_name: meilisearch-linux-aarch64
steps:
- uses: actions/checkout@v2
- uses: uraimo/run-on-arch-action@v2.1.1
id: runcmd
- name: Checkout repository
uses: actions/checkout@v3
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
with:
arch: aarch64 # aka ARMv8
distro: ubuntu18.04
env: |
JEMALLOC_SYS_WITH_LG_PAGE: 16
run: |
apt update
apt install -y curl gcc make
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain stable
source $HOME/.cargo/env
cargo build --release --locked
toolchain: stable
profile: minimal
target: ${{ matrix.target }}
override: true
- name: APT update
run: |
sudo apt update
- name: Install target specific tools
if: matrix.use-cross
run: |
sudo apt-get install -y ${{ matrix.linker }}
- name: Configure target aarch64 GNU
if: matrix.target == 'aarch64-unknown-linux-gnu'
## Environment variable is not passed using env:
## LD gold won't work with MUSL
# env:
# JEMALLOC_SYS_WITH_LG_PAGE: 16
# RUSTFLAGS: '-Clink-arg=-fuse-ld=gold'
run: |
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
with:
command: build
use-cross: ${{ matrix.use-cross }}
args: --release --target ${{ matrix.target }}
- name: List target output files
run: ls -lR ./target
- name: Upload the binary to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/release/meilisearch
asset_name: meilisearch-linux-armv8
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}


@@ -1,76 +0,0 @@
name: Publish aarch64 binary
on:
release:
types: [published]
env:
CARGO_TERM_COLOR: always
jobs:
publish-aarch64:
name: Publish to Github
runs-on: ${{ matrix.os }}
continue-on-error: false
strategy:
fail-fast: false
matrix:
include:
- build: aarch64
os: ubuntu-18.04
target: aarch64-unknown-linux-gnu
linker: gcc-aarch64-linux-gnu
use-cross: true
asset_name: meilisearch-linux-aarch64
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Installing Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
target: ${{ matrix.target }}
override: true
- name: APT update
run: |
sudo apt update
- name: Install target specific tools
if: matrix.use-cross
run: |
sudo apt-get install -y ${{ matrix.linker }}
- name: Configure target aarch64 GNU
if: matrix.target == 'aarch64-unknown-linux-gnu'
## Environment variable is not passed using env:
## LD gold won't work with MUSL
# env:
# JEMALLOC_SYS_WITH_LG_PAGE: 16
# RUSTFLAGS: '-Clink-arg=-fuse-ld=gold'
run: |
echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config
echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config
echo 'JEMALLOC_SYS_WITH_LG_PAGE=16' >> $GITHUB_ENV
echo RUSTFLAGS="-Clink-arg=-fuse-ld=gold" >> $GITHUB_ENV
- name: Cargo build
uses: actions-rs/cargo@v1
with:
command: build
use-cross: ${{ matrix.use-cross }}
args: --release --target ${{ matrix.target }}
- name: List target output files
run: ls -lR ./target
- name: Upload the binary to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/${{ matrix.target }}/release/meilisearch
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}


@@ -5,16 +5,25 @@ on:
types: [released]
jobs:
check-version:
name: Check the version validity
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Check release validity
run: bash .github/scripts/check-release.sh
debian:
name: Publish debian packagge
runs-on: ubuntu-18.04
needs: check-version
steps:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build deb package
run: cargo deb -p meilisearch-http -o target/debian/meilisearch.deb
- name: Upload debian pkg to release
@@ -30,6 +39,7 @@ jobs:
homebrew:
name: Bump Homebrew formula
runs-on: ubuntu-18.04
needs: check-version
steps:
- name: Create PR to Homebrew
uses: mislav/bump-homebrew-formula-action@v1


@@ -0,0 +1,71 @@
---
on:
schedule:
- cron: '0 4 * * *' # Every day at 4:00am
push:
tags:
- '*'
name: Publish tagged images to Docker Hub
jobs:
docker:
runs-on: docker
steps:
- uses: actions/checkout@v2
# Check if the tag has the v<nmumber>.<number>.<number> format. If yes, it means we are publishing an official release.
# In this situation, we need to set `output.stable` to create/update the following tags (additionally to the `vX.Y.Z` Docker tag):
# - a `vX.Y` (without patch version) Docker tag
# - a `latest` Docker tag
- name: Check tag format
if: github.event_name != 'schedule'
id: check-tag-format
run: |
escaped_tag=$(printf "%q" ${{ github.ref_name }})
if [[ $escaped_tag =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo ::set-output name=stable::true
else
echo ::set-output name=stable::false
fi
# Check only the validity of the tag for official releases (not for pre-releases or other tags)
- name: Check release validity
if: github.event_name != 'schedule' && steps.check-tag-format.outputs.stable == 'true'
run: bash .github/scripts/check-release.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
if: github.event_name != 'schedule'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: getmeili/meilisearch
# The lastest and `vX.Y` tags are only pushed for the official Meilisearch releases
# See https://github.com/docker/metadata-action#latest-tag
flavor: latest=false
tags: |
type=ref,event=tag
type=semver,pattern=v{{major}}.{{minor}},enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
type=raw,value=latest,enable=${{ steps.check-tag-format.outputs.stable == 'true' }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v3
with:
# We do not push tags for the cron jobs, this is only for test purposes
push: ${{ github.event_name != 'schedule' }}
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}


@@ -1,30 +0,0 @@
---
on:
release:
types: [released]
name: Publish latest image to Docker Hub
jobs:
docker-latest:
runs-on: self-hosted
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
platforms: linux/amd64,linux/arm64
tags: getmeili/meilisearch:latest


@@ -1,39 +0,0 @@
---
on:
push:
tags:
- '*'
name: Publish tagged image to Docker Hub
jobs:
docker-tag:
runs-on: self-hosted
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: getmeili/meilisearch
flavor: latest=false
tags: type=ref,event=tag
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
platforms: linux/amd64,linux/arm64
tags: ${{ steps.meta.outputs.tags }}


@@ -11,6 +11,8 @@ on:
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
RUSTFLAGS: "-D warnings"
jobs:
tests:
@@ -21,9 +23,10 @@ jobs:
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: rui314/setup-mold@v1 # Optimize link time
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo check without any default features
uses: actions-rs/cargo@v1
with:
@@ -35,11 +38,32 @@ jobs:
command: test
args: --locked --release
# We run tests in debug also, to make sure that the debug_assertions are hit
test-debug:
name: Run tests in debug
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v3
- uses: rui314/setup-mold@v1 # Optimize link time
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.0.0
- name: Run tests in debug
uses: actions-rs/cargo@v1
with:
command: test
args: --locked
clippy:
name: Run Clippy
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: rui314/setup-mold@v1 # Optimize link time
- uses: actions-rs/toolchain@v1
with:
profile: minimal
@@ -47,7 +71,7 @@ jobs:
override: true
components: clippy
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo clippy
uses: actions-rs/cargo@v1
with:
@@ -58,14 +82,15 @@ jobs:
name: Run Rustfmt
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: rui314/setup-mold@v1 # Optimize link time
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
toolchain: stable
override: true
components: rustfmt
- name: Cache dependencies
uses: Swatinem/rust-cache@v1.3.0
uses: Swatinem/rust-cache@v2.0.0
- name: Run cargo fmt
run: cargo fmt --all -- --check


@@ -1,17 +1,22 @@
# Contributing
First, thank you for contributing to MeiliSearch! The goal of this document is to provide everything you need to start contributing to MeiliSearch.
First, thank you for contributing to Meilisearch! The goal of this document is to provide everything you need to start contributing to Meilisearch.
Remember that there are many ways to contribute other than writing code: writing [tutorials or blog posts](https://github.com/meilisearch/awesome-meilisearch), improving [the documentation](https://github.com/meilisearch/documentation), submitting [bug reports](https://github.com/meilisearch/meilisearch/issues/new?assignees=&labels=&template=bug_report.md&title=) and [feature requests](https://github.com/meilisearch/product/discussions/categories/feedback-feature-proposal)...
## Table of Contents
- [Assumptions](#assumptions)
- [How to Contribute](#how-to-contribute)
- [Development Workflow](#development-workflow)
- [Git Guidelines](#git-guidelines)
- [Release Process (for internal team only)](#release-process-for-internal-team-only)
## Assumptions
1. **You're familiar with [Github](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
2. **You've read the MeiliSearch [documentation](https://docs.meilisearch.com).**
3. **You know about the [MeiliSearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
1. **You're familiar with [GitHub](https://github.com) and the [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)(PR) workflow.**
2. **You've read the Meilisearch [documentation](https://docs.meilisearch.com).**
3. **You know about the [Meilisearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
Please use this for help.**
## How to Contribute
@@ -19,21 +24,21 @@ First, thank you for contributing to MeiliSearch! The goal of this document is t
1. Ensure your change has an issue! Find an
[existing issue](https://github.com/meilisearch/meilisearch/issues/) or [open a new issue](https://github.com/meilisearch/meilisearch/issues/new).
* This is where you can get a feel if the change will be accepted or not.
2. Once approved, [fork the MeiliSearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own Github account.
2. Once approved, [fork the Meilisearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own GitHub account.
3. [Create a new Git branch](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository)
4. Review the [Development Workflow](#development-workflow) section that describes the steps to maintain the repository.
5. Make your changes on your branch.
6. [Submit the branch as a Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) pointing to the `main` branch of the MeiliSearch repository. A maintainer should comment and/or review your Pull Request within a few days. Although depending on the circumstances, it may take longer.
6. [Submit the branch as a Pull Request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) pointing to the `main` branch of the Meilisearch repository. A maintainer should comment and/or review your Pull Request within a few days, although it may take longer depending on the circumstances.
## Development Workflow
### Setup and run MeiliSearch
### Setup and run Meilisearch
```bash
cargo run --release
```
We recommend using the `--release` flag to test the full performance of MeiliSearch.
We recommend using the `--release` flag to test the full performance of Meilisearch.
### Test
@@ -41,6 +46,8 @@ We recommend using the `--release` flag to test the full performance of MeiliSea
cargo test
```
This command will be triggered to each PR as a requirement for merging it.
If you get a "Too many open files" error you might want to increase the open file limit using this command:
```bash
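# (the suggested value falls outside this diff hunk; raising the soft
#  open-file limit is typically done like this; this is an assumption,
#  not necessarily the exact command in the file)
ulimit -Sn 3000
```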
@@ -65,7 +72,7 @@ As minimal requirements, your commit message should:
We don't follow any other convention, but if you want to use one, we recommend [the Chris Beams one](https://chris.beams.io/posts/git-commit/).
### Github Pull Requests
### GitHub Pull Requests
Some notes on GitHub PRs:
@@ -75,6 +82,29 @@ Some notes on GitHub PRs:
The draft PRs are recommended when you want to show that you are working on something and make your work visible.
- The branch related to the PR must be **up-to-date with `main`** before merging. Fortunately, this project uses [Bors](https://github.com/bors-ng/bors-ng) to automatically enforce this requirement without the PR author having to rebase manually.
## Release Process (for internal team only)
Meilisearch tools follow the [Semantic Versioning Convention](https://semver.org/).
### Automation to rebase and Merge the PRs
This project integrates a bot that helps us manage pull requests merging.<br>
_[Read more about this](https://github.com/meilisearch/integration-guides/blob/main/resources/bors.md)._
### How to Publish a new Release
The full Meilisearch release process is described in [this guide](https://github.com/meilisearch/core-team/blob/main/resources/meilisearch-release.md). Please follow it carefully before doing any release.
### Release assets
For each release, the following assets are created:
- Binaries for different platforms (Linux, macOS, Windows, and ARM architectures) are attached to the GitHub release
- Binaries are pushed to Homebrew and APT (not published for RC)
- Docker tags are created/updated:
- `vX.Y.Z`
- `vX.Y` (not published for RC)
- `latest` (not published for RC)
<hr>
Thank you again for reading this through; we cannot wait to start working with you now that you have made your way through this contributing guide ❤️

Cargo.lock (generated): 2154 changed lines; diff suppressed because it is too large.

View File

@@ -1,9 +1,15 @@
[workspace]
resolver = "2"
members = [
"meilisearch-http",
"meilisearch-error",
"meilisearch-types",
"meilisearch-lib",
"meilisearch-auth",
"permissive-json-pointer",
]
resolver = "2"
[profile.dev.package.flate2]
opt-level = 3
[profile.dev.package.milli]
opt-level = 3

View File

@@ -1,55 +1,46 @@
# Compile
FROM alpine:3.14 AS compiler
FROM rust:alpine3.14 AS compiler
RUN apk update --quiet \
&& apk add -q --no-cache curl build-base
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
RUN apk add -q --update-cache --no-cache build-base openssl-dev
WORKDIR /meilisearch
COPY Cargo.lock .
COPY Cargo.toml .
COPY meilisearch-auth/Cargo.toml meilisearch-auth/
COPY meilisearch-error/Cargo.toml meilisearch-error/
COPY meilisearch-http/Cargo.toml meilisearch-http/
COPY meilisearch-lib/Cargo.toml meilisearch-lib/
ENV RUSTFLAGS="-C target-feature=-crt-static"
# Create dummy main.rs files for each workspace member to be able to compile all the dependencies
RUN find . -type d -name "meilisearch-*" | xargs -I{} sh -c 'mkdir {}/src; echo "fn main() { }" > {}/src/main.rs;'
# Use `cargo build` instead of `cargo vendor` because we need to not only download but compile dependencies too
RUN $HOME/.cargo/bin/cargo build --release
# Cleanup dummy main.rs files
RUN find . -path "*/src/main.rs" -delete
ARG COMMIT_SHA
ARG COMMIT_DATE
ENV COMMIT_SHA=${COMMIT_SHA} COMMIT_DATE=${COMMIT_DATE}
ENV RUSTFLAGS="-C target-feature=-crt-static"
COPY . .
RUN $HOME/.cargo/bin/cargo build --release
RUN set -eux; \
apkArch="$(apk --print-arch)"; \
if [ "$apkArch" = "aarch64" ]; then \
export JEMALLOC_SYS_WITH_LG_PAGE=16; \
fi && \
cargo build --release
# Run
FROM alpine:3.14
ARG USER=meili
ENV HOME /home/${USER}
ENV MEILI_HTTP_ADDR 0.0.0.0:7700
ENV MEILI_SERVER_PROVIDER docker
# download runtime deps as root and create ${USER}
RUN apk update --quiet \
&& apk add -q --no-cache libgcc tini curl \
&& adduser -D ${USER}
WORKDIR ${HOME}
USER ${USER}
# copy file as ${USER} to ${HOME}
COPY --from=compiler /meilisearch/target/release/meilisearch .
&& apk add -q --no-cache libgcc tini curl
# add meilisearch to the `/bin` so you can run it from anywhere and it's easy
# to find.
COPY --from=compiler /meilisearch/target/release/meilisearch /bin/meilisearch
# To stay compatible with the older version of the container (pre v0.27.0) we're
# going to symlink the meilisearch binary in the path to `/meilisearch`
RUN ln -s /bin/meilisearch /meilisearch
# This directory should hold all the data related to meilisearch so we're going
# to move our PWD in there.
# We don't want to put the meilisearch binary
WORKDIR /meili_data
EXPOSE 7700/tcp
ENTRYPOINT ["tini", "--"]
CMD ./meilisearch
CMD /bin/meilisearch

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2019-2021 Meili SAS
Copyright (c) 2019-2022 Meili SAS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,8 +1,8 @@
<p align="center">
<img src="assets/logo.svg" alt="MeiliSearch" width="200" height="200" />
<img src="assets/logo.svg" alt="Meilisearch" width="200" height="200" />
</p>
<h1 align="center">MeiliSearch</h1>
<h1 align="center">Meilisearch</h1>
<h4 align="center">
<a href="https://www.meilisearch.com">Website</a> |
@@ -15,17 +15,17 @@
</h4>
<p align="center">
<a href="https://github.com/meilisearch/MeiliSearch/actions"><img src="https://github.com/meilisearch/MeiliSearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/MeiliSearch"><img src="https://deps.rs/repo/github/meilisearch/MeiliSearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/MeiliSearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://slack.meilisearch.com"><img src="https://img.shields.io/badge/slack-MeiliSearch-blue.svg?logo=slack" alt="Slack"></a>
<a href="https://github.com/meilisearch/MeiliSearch/discussions" alt="Discussions"><img src="https://img.shields.io/badge/github-discussions-red" /></a>
<a href="https://github.com/meilisearch/meilisearch/actions"><img src="https://github.com/meilisearch/meilisearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/meilisearch"><img src="https://deps.rs/repo/github/meilisearch/meilisearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/meilisearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://slack.meilisearch.com"><img src="https://img.shields.io/badge/slack-meilisearch-blue.svg?logo=slack" alt="Slack"></a>
<a href="https://github.com/meilisearch/meilisearch/discussions" alt="Discussions"><img src="https://img.shields.io/badge/github-discussions-red" /></a>
<a href="https://app.bors.tech/repositories/26457"><img src="https://bors.tech/images/badge_small.svg" alt="Bors enabled"></a>
</p>
<p align="center">⚡ Lightning Fast, Ultra Relevant, and Typo-Tolerant Search Engine 🔍</p>
**MeiliSearch** is a powerful, fast, open-source, easy to use and deploy search engine. Both searching and indexing are highly customizable. Features such as typo-tolerance, filters, and synonyms are provided out-of-the-box.
**Meilisearch** is a powerful, fast, open-source, easy to use and deploy search engine. Both searching and indexing are highly customizable. Features such as typo-tolerance, filters, and synonyms are provided out-of-the-box.
For more information about features go to [our documentation](https://docs.meilisearch.com/).
<p align="center">
@@ -58,16 +58,16 @@ meilisearch
#### Docker
```bash
docker run -p 7700:7700 -v "$(pwd)/data.ms:/data.ms" getmeili/meilisearch
docker run -p 7700:7700 -v "$(pwd)/meili_data:/meili_data" getmeili/meilisearch
```
#### Announcing a cloud-hosted MeiliSearch
#### Announcing a cloud-hosted Meilisearch
Join the closed beta by filling out this [form](https://meilisearch.typeform.com/to/FtnzvZfh).
Join the closed beta by filling out this [form](https://meilisearch.typeform.com/to/VI2cI2rv).
#### Try MeiliSearch in our Sandbox
#### Try Meilisearch in our Sandbox
Create a MeiliSearch instance in [MeiliSearch Sandbox](https://sandbox.meilisearch.com/). This instance is free, and will be active for 48 hours.
Create a Meilisearch instance in [Meilisearch Sandbox](https://sandbox.meilisearch.com/). This instance is free, and will be active for 48 hours.
#### Run on Digital Ocean
@@ -99,8 +99,8 @@ curl -L https://install.meilisearch.com | sh
If you have the latest stable Rust toolchain installed on your local system, clone the repository and change it to your working directory.
```bash
git clone https://github.com/meilisearch/MeiliSearch.git
cd MeiliSearch
git clone https://github.com/meilisearch/meilisearch.git
cd meilisearch
cargo run --release
```
@@ -109,7 +109,7 @@ cargo run --release
Let's create an index! If you need a sample dataset, use [this movie database](https://www.notion.so/meilisearch/A-movies-dataset-to-test-Meili-1cbf7c9cfa4247249c40edfa22d7ca87#b5ae399b81834705ba5420ac70358a65). You can also find it in the `datasets/` directory.
```bash
curl -L 'https://bit.ly/2PAcw9l' -o movies.json
curl -L https://docs.meilisearch.com/movies.json -o movies.json
```
Now, you're ready to index some data.
@@ -161,19 +161,19 @@ curl 'http://127.0.0.1:7700/indexes/movies/search?q=botman+robin&limit=2' | jq
#### Use the Web Interface
We also deliver an **out-of-the-box [web interface](https://github.com/meilisearch/mini-dashboard)** in which you can test MeiliSearch interactively.
We also deliver an **out-of-the-box [web interface](https://github.com/meilisearch/mini-dashboard)** in which you can test Meilisearch interactively.
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter MeiliSearch's address to visit it. This will lead you to a web page with a search bar that will allow you to search in the selected index.
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter Meilisearch's address to visit it. This will lead you to a web page with a search bar that will allow you to search in the selected index.
| [See the gif above](#demo)
## Documentation
Now that your MeiliSearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
Now that your Meilisearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
## Contributing
Hey! We're glad you're thinking about contributing to MeiliSearch! Feel free to pick an [issue labeled as `good first issue`](https://github.com/meilisearch/MeiliSearch/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22), and to ask any question you need. Some points might not be clear and we are available to help you!
Hey! We're glad you're thinking about contributing to Meilisearch! Feel free to pick an [issue labeled as `good first issue`](https://github.com/meilisearch/meilisearch/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22), and to ask any question you need. Some points might not be clear and we are available to help you!
Also, we recommend following the [CONTRIBUTING](./CONTRIBUTING.md) to create your PR.
@@ -184,8 +184,8 @@ The code in this repository is only concerned with managing multiple indexes, ha
Search and indexation are the domain of our core engine, [`milli`](https://github.com/meilisearch/milli), while tokenization is handled by [our `tokenizer` library](https://github.com/meilisearch/tokenizer/).
## Telemetry
MeiliSearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of MeiliSearch features.
Meilisearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of Meilisearch features.
To find out more on what information we're retrieving, please see our documentation on [Telemetry](https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html).
@@ -193,7 +193,7 @@ This program is optional, you can disable these analytics by using the `MEILI_NO
## Feature request
The feature requests are not managed in this repository. Please visit our [dedicated repository](https://github.com/meilisearch/product) to see our work about the MeiliSearch product.
The feature requests are not managed in this repository. Please visit our [dedicated repository](https://github.com/meilisearch/product) to see our work about the Meilisearch product.
If you have a feature request or any feedback about an existing feature, please open [a discussion](https://github.com/meilisearch/product/discussions).
Also, feel free to participate in the current discussions, we are looking forward to reading your comments.
@@ -202,4 +202,4 @@ Also, feel free to participate in the current discussions, we are looking forwar
Please visit [this page](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html#contact-us).
MeiliSearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!
Meilisearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!

View File

@@ -1,16 +1,16 @@
# Security
MeiliSearch takes the security of our software products and services seriously.
Meilisearch takes the security of our software products and services seriously.
If you believe you have found a security vulnerability in any MeiliSearch-owned repository, please report it to us as described below.
If you believe you have found a security vulnerability in any Meilisearch-owned repository, please report it to us as described below.
## Suported versions
## Supported versions
As long as we are pre-v1.0, only the latest version of MeiliSearch will be supported with security updates.
As long as we are pre-v1.0, only the latest version of Meilisearch will be supported with security updates.
## Reporting security issues
⚠️ Please do not report security vulnerabilities through public GitHub issues. ⚠️
⚠️ Please do not report security vulnerabilities through public GitHub issues. ⚠️
Instead, please kindly email us at security@meilisearch.com

View File

@@ -1,17 +1,19 @@
<svg width="360" height="360" viewBox="0 0 360 360" fill="none" xmlns="http://www.w3.org/2000/svg">
<g id="logo_main">
<rect id="Rectangle" x="107.333" y="0.150146" width="274.315" height="274.315" rx="98.8334" transform="rotate(23 107.333 0.150146)" fill="url(#paint0_linear)"/>
<path id="Rectangle_2" fill-rule="evenodd" clip-rule="evenodd" d="M61.3296 230.199C46.2224 194.608 38.6688 176.813 38.208 160.329C37.5286 136.025 47.0175 112.539 64.3891 95.5282C76.1718 83.9904 93.9669 76.4368 129.557 61.3296C165.147 46.2224 182.943 38.6688 199.427 38.208C223.731 37.5286 247.217 47.0175 264.228 64.3891C275.766 76.1718 283.319 93.9669 298.426 129.557C313.534 165.147 321.087 182.943 321.548 199.427C322.227 223.731 312.738 247.217 295.367 264.228C283.584 275.766 265.789 283.319 230.199 298.426C194.608 313.534 176.813 321.087 160.329 321.548C136.025 322.227 112.539 312.738 95.5282 295.367C83.9903 283.584 76.4368 265.789 61.3296 230.199Z" fill="url(#paint1_linear)"/>
<path id="m" fill-rule="evenodd" clip-rule="evenodd" d="M219.568 130.748C242.363 130.748 259.263 147.451 259.263 174.569V229.001H227.232V179.678C227.232 166.119 220.747 159.634 210.136 159.634C205.223 159.634 200.311 161.796 195.595 167.494C195.791 169.852 195.988 172.21 195.988 174.569V229.001H164.154V179.678C164.154 166.119 157.472 159.634 147.057 159.634C142.145 159.634 137.429 161.992 132.712 168.084V229.001H100.878V133.695H132.712V139.394C139.197 133.892 145.878 130.748 156.49 130.748C168.477 130.748 178.695 135.267 185.769 143.52C195.791 134.678 205.42 130.748 219.568 130.748Z" fill="white"/>
</g>
<svg width="300" height="300" viewBox="0 0 300 300" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0 237L55.426 96.7678C63.2367 77.0063 82.499 64 103.955 64H137.371L81.9447 204.232C74.1341 223.993 54.8717 237 33.4156 237H0Z" fill="url(#paint0_linear_1_898)"/>
<path d="M81.3123 237L136.738 96.7682C144.549 77.0067 163.811 64.0004 185.267 64.0004H218.683L163.257 204.232C155.446 223.994 136.184 237 114.728 237H81.3123Z" fill="url(#paint1_linear_1_898)"/>
<path d="M162.629 237L218.055 96.7682C225.866 77.0067 245.128 64.0004 266.584 64.0004H300L244.574 204.232C236.763 223.994 217.501 237 196.045 237H162.629Z" fill="url(#paint2_linear_1_898)"/>
<defs>
<linearGradient id="paint0_linear" x1="-13.6248" y1="129.208" x2="244.49" y2="403.522" gradientUnits="userSpaceOnUse">
<stop stop-color="#E41359"/>
<stop offset="1" stop-color="#F23C79"/>
<linearGradient id="paint0_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="11.0088" y1="111.65" x2="111.65" y2="348.747" gradientUnits="userSpaceOnUse">
<stop stop-color="#24222F"/>
<stop offset="1" stop-color="#2B2937"/>
<linearGradient id="paint1_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
<linearGradient id="paint2_linear_1_898" x1="300.001" y1="50.7858" x2="1.63474" y2="221.244" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF5CAA"/>
<stop offset="1" stop-color="#FF4E62"/>
</linearGradient>
</defs>
</svg>

(SVG logo preview omitted; before: 2.0 KiB, after: 1.3 KiB)

View File

@@ -3,7 +3,8 @@ status = [
'Tests on macos-latest',
'Tests on windows-latest',
'Run Clippy',
'Run Rustfmt'
'Run Rustfmt',
'Run tests in debug',
]
pr_status = ['Milestone Check']
# 3 hours timeout

View File

@@ -67,16 +67,16 @@ semverLT() {
return 1
}
# Get a token from https://github.com/settings/tokens to increasae rate limit (from 60 to 5000), make sure the token scope is set to 'public_repo'
# Create GITHUB_PAT enviroment variable once you aquired the token to start using it
# Get a token from https://github.com/settings/tokens to increase rate limit (from 60 to 5000), make sure the token scope is set to 'public_repo'
# Create GITHUB_PAT environment variable once you acquired the token to start using it
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
if [ -z "$GITHUB_PAT" ]; then
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file" || return 1
if [ -z "$GITHUB_PAT" ]; then
curl -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
else
curl -H "Authorization: token $GITHUB_PAT" -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file" || return 1
curl -H "Authorization: token $GITHUB_PAT" -s 'https://api.github.com/repos/meilisearch/meilisearch/releases' > "$temp_file" || return 1
fi
releases=$(cat "$temp_file" | \
@@ -89,7 +89,7 @@ get_latest() {
latest=''
current_tag=''
for release_info in $releases; do
if [ $i -eq 0 ]; then # Cheking tag_name
if [ $i -eq 0 ]; then # Checking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
current_tag=$release_info
else
@@ -120,7 +120,7 @@ get_latest() {
done
rm -f "$temp_file"
echo $latest
return 0
}
# Gets the OS by setting the $os variable
@@ -148,11 +148,18 @@ get_os() {
get_archi() {
architecture=$(uname -m)
case "$architecture" in
'x86_64' | 'amd64' | 'arm64')
'x86_64' | 'amd64' )
archi='amd64'
;;
'arm64')
if [ $os = 'macos' ]; then # MacOS M1
archi='amd64'
else
archi='aarch64'
fi
;;
'aarch64')
archi='armv8'
archi='aarch64'
;;
*)
return 1
@@ -161,7 +168,7 @@ get_archi() {
}
success_usage() {
printf "$GREEN%s\n$DEFAULT" "MeiliSearch $latest binary successfully downloaded as '$binary_name' file."
printf "$GREEN%s\n$DEFAULT" "Meilisearch $latest binary successfully downloaded as '$binary_name' file."
echo ''
echo 'Run it:'
echo ' $ ./meilisearch'
@@ -169,47 +176,65 @@ success_usage() {
echo ' $ ./meilisearch --help'
}
failure_usage() {
printf "$RED%s\n$DEFAULT" 'ERROR: MeiliSearch binary is not available for your OS distribution or your architecture yet.'
not_available_failure_usage() {
printf "$RED%s\n$DEFAULT" 'ERROR: Meilisearch binary is not available for your OS distribution or your architecture yet.'
echo ''
echo 'However, you can easily compile the binary from the source files.'
echo 'Follow the steps at the page ("Source" tab): https://docs.meilisearch.com/learn/getting_started/installation.html'
}
fetch_release_failure_usage() {
echo ''
printf "$RED%s\n$DEFAULT" 'ERROR: Impossible to get the latest stable version of Meilisearch.'
echo 'Please let us know about this issue: https://github.com/meilisearch/meilisearch/issues/new/choose'
}
# MAIN
latest="$(get_latest)"
# Fill $latest variable
if ! get_latest; then
fetch_release_failure_usage # TO CHANGE
exit 1
fi
if [ "$latest" = '' ]; then
echo ''
echo 'Impossible to get the latest stable version of MeiliSearch.'
echo 'Please let us know about this issue: https://github.com/meilisearch/MeiliSearch/issues/new/choose'
fetch_release_failure_usage
exit 1
fi
# Fill $os variable
if ! get_os; then
failure_usage
not_available_failure_usage
exit 1
fi
# Fill $archi variable
if ! get_archi; then
failure_usage
not_available_failure_usage
exit 1
fi
echo "Downloading MeiliSearch binary $latest for $os, architecture $archi..."
echo "Downloading Meilisearch binary $latest for $os, architecture $archi..."
case "$os" in
'windows')
release_file="meilisearch-$os-$archi.exe"
binary_name='meilisearch.exe'
binary_name='meilisearch.exe'
;;
*)
release_file="meilisearch-$os-$archi"
binary_name='meilisearch'
*)
release_file="meilisearch-$os-$archi"
binary_name='meilisearch'
esac
link="https://github.com/meilisearch/MeiliSearch/releases/download/$latest/$release_file"
curl -OL "$link"
# Fetch the Meilisearch binary
link="https://github.com/meilisearch/meilisearch/releases/download/$latest/$release_file"
curl --fail -OL "$link"
if [ $? -ne 0 ]; then
fetch_release_failure_usage
exit 1
fi
mv "$release_file" "$binary_name"
chmod 744 "$binary_name"
success_usage

View File

@@ -1,15 +1,17 @@
[package]
name = "meilisearch-auth"
version = "0.25.0"
edition = "2018"
version = "0.28.1"
edition = "2021"
[dependencies]
enum-iterator = "0.7.0"
heed = { git = "https://github.com/Kerollmops/heed", tag = "v0.12.1" }
sha2 = "0.9.6"
chrono = { version = "0.4.19", features = ["serde"] }
meilisearch-error = { path = "../meilisearch-error" }
serde_json = { version = "1.0.67", features = ["preserve_order"] }
hmac = "0.12.1"
meilisearch-types = { path = "../meilisearch-types" }
milli = { git = "https://github.com/meilisearch/milli.git", tag = "v0.31.2" }
rand = "0.8.4"
serde = { version = "1.0.130", features = ["derive"] }
thiserror = "1.0.28"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = { version = "1.0.79", features = ["preserve_order"] }
sha2 = "0.10.2"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
uuid = { version = "1.1.2", features = ["serde", "v4"] }

View File

@@ -1,19 +1,24 @@
use enum_iterator::IntoEnumIterator;
use serde::{Deserialize, Serialize};
use std::hash::Hash;
#[derive(IntoEnumIterator, Copy, Clone, Serialize, Deserialize, Debug, Eq, PartialEq)]
#[derive(IntoEnumIterator, Copy, Clone, Serialize, Deserialize, Debug, Eq, PartialEq, Hash)]
#[repr(u8)]
pub enum Action {
#[serde(rename = "*")]
All = 0,
All = actions::ALL,
#[serde(rename = "search")]
Search = actions::SEARCH,
#[serde(rename = "documents.*")]
DocumentsAll = actions::DOCUMENTS_ALL,
#[serde(rename = "documents.add")]
DocumentsAdd = actions::DOCUMENTS_ADD,
#[serde(rename = "documents.get")]
DocumentsGet = actions::DOCUMENTS_GET,
#[serde(rename = "documents.delete")]
DocumentsDelete = actions::DOCUMENTS_DELETE,
#[serde(rename = "indexes.*")]
IndexesAll = actions::INDEXES_ALL,
#[serde(rename = "indexes.create")]
IndexesAdd = actions::INDEXES_CREATE,
#[serde(rename = "indexes.get")]
@@ -22,42 +27,65 @@ pub enum Action {
IndexesUpdate = actions::INDEXES_UPDATE,
#[serde(rename = "indexes.delete")]
IndexesDelete = actions::INDEXES_DELETE,
#[serde(rename = "tasks.*")]
TasksAll = actions::TASKS_ALL,
#[serde(rename = "tasks.get")]
TasksGet = actions::TASKS_GET,
#[serde(rename = "settings.*")]
SettingsAll = actions::SETTINGS_ALL,
#[serde(rename = "settings.get")]
SettingsGet = actions::SETTINGS_GET,
#[serde(rename = "settings.update")]
SettingsUpdate = actions::SETTINGS_UPDATE,
#[serde(rename = "stats.*")]
StatsAll = actions::STATS_ALL,
#[serde(rename = "stats.get")]
StatsGet = actions::STATS_GET,
#[serde(rename = "dumps.*")]
DumpsAll = actions::DUMPS_ALL,
#[serde(rename = "dumps.create")]
DumpsCreate = actions::DUMPS_CREATE,
#[serde(rename = "dumps.get")]
DumpsGet = actions::DUMPS_GET,
#[serde(rename = "version")]
Version = actions::VERSION,
#[serde(rename = "keys.create")]
KeysAdd = actions::KEYS_CREATE,
#[serde(rename = "keys.get")]
KeysGet = actions::KEYS_GET,
#[serde(rename = "keys.update")]
KeysUpdate = actions::KEYS_UPDATE,
#[serde(rename = "keys.delete")]
KeysDelete = actions::KEYS_DELETE,
}
impl Action {
pub fn from_repr(repr: u8) -> Option<Self> {
use actions::*;
match repr {
0 => Some(Self::All),
ALL => Some(Self::All),
SEARCH => Some(Self::Search),
DOCUMENTS_ALL => Some(Self::DocumentsAll),
DOCUMENTS_ADD => Some(Self::DocumentsAdd),
DOCUMENTS_GET => Some(Self::DocumentsGet),
DOCUMENTS_DELETE => Some(Self::DocumentsDelete),
INDEXES_ALL => Some(Self::IndexesAll),
INDEXES_CREATE => Some(Self::IndexesAdd),
INDEXES_GET => Some(Self::IndexesGet),
INDEXES_UPDATE => Some(Self::IndexesUpdate),
INDEXES_DELETE => Some(Self::IndexesDelete),
TASKS_ALL => Some(Self::TasksAll),
TASKS_GET => Some(Self::TasksGet),
SETTINGS_ALL => Some(Self::SettingsAll),
SETTINGS_GET => Some(Self::SettingsGet),
SETTINGS_UPDATE => Some(Self::SettingsUpdate),
STATS_ALL => Some(Self::StatsAll),
STATS_GET => Some(Self::StatsGet),
DUMPS_ALL => Some(Self::DumpsAll),
DUMPS_CREATE => Some(Self::DumpsCreate),
DUMPS_GET => Some(Self::DumpsGet),
VERSION => Some(Self::Version),
KEYS_CREATE => Some(Self::KeysAdd),
KEYS_GET => Some(Self::KeysGet),
KEYS_UPDATE => Some(Self::KeysUpdate),
KEYS_DELETE => Some(Self::KeysDelete),
_otherwise => None,
}
}
@@ -65,40 +93,59 @@ impl Action {
pub fn repr(&self) -> u8 {
use actions::*;
match self {
Self::All => 0,
Self::All => ALL,
Self::Search => SEARCH,
Self::DocumentsAll => DOCUMENTS_ALL,
Self::DocumentsAdd => DOCUMENTS_ADD,
Self::DocumentsGet => DOCUMENTS_GET,
Self::DocumentsDelete => DOCUMENTS_DELETE,
Self::IndexesAll => INDEXES_ALL,
Self::IndexesAdd => INDEXES_CREATE,
Self::IndexesGet => INDEXES_GET,
Self::IndexesUpdate => INDEXES_UPDATE,
Self::IndexesDelete => INDEXES_DELETE,
Self::TasksAll => TASKS_ALL,
Self::TasksGet => TASKS_GET,
Self::SettingsAll => SETTINGS_ALL,
Self::SettingsGet => SETTINGS_GET,
Self::SettingsUpdate => SETTINGS_UPDATE,
Self::StatsAll => STATS_ALL,
Self::StatsGet => STATS_GET,
Self::DumpsAll => DUMPS_ALL,
Self::DumpsCreate => DUMPS_CREATE,
Self::DumpsGet => DUMPS_GET,
Self::Version => VERSION,
Self::KeysAdd => KEYS_CREATE,
Self::KeysGet => KEYS_GET,
Self::KeysUpdate => KEYS_UPDATE,
Self::KeysDelete => KEYS_DELETE,
}
}
}
pub mod actions {
pub(crate) const ALL: u8 = 0;
pub const SEARCH: u8 = 1;
pub const DOCUMENTS_ADD: u8 = 2;
pub const DOCUMENTS_GET: u8 = 3;
pub const DOCUMENTS_DELETE: u8 = 4;
pub const INDEXES_CREATE: u8 = 5;
pub const INDEXES_GET: u8 = 6;
pub const INDEXES_UPDATE: u8 = 7;
pub const INDEXES_DELETE: u8 = 8;
pub const TASKS_GET: u8 = 9;
pub const SETTINGS_GET: u8 = 10;
pub const SETTINGS_UPDATE: u8 = 11;
pub const STATS_GET: u8 = 12;
pub const DUMPS_CREATE: u8 = 13;
pub const DUMPS_GET: u8 = 14;
pub const VERSION: u8 = 15;
pub const DOCUMENTS_ALL: u8 = 2;
pub const DOCUMENTS_ADD: u8 = 3;
pub const DOCUMENTS_GET: u8 = 4;
pub const DOCUMENTS_DELETE: u8 = 5;
pub const INDEXES_ALL: u8 = 6;
pub const INDEXES_CREATE: u8 = 7;
pub const INDEXES_GET: u8 = 8;
pub const INDEXES_UPDATE: u8 = 9;
pub const INDEXES_DELETE: u8 = 10;
pub const TASKS_ALL: u8 = 11;
pub const TASKS_GET: u8 = 12;
pub const SETTINGS_ALL: u8 = 13;
pub const SETTINGS_GET: u8 = 14;
pub const SETTINGS_UPDATE: u8 = 15;
pub const STATS_ALL: u8 = 16;
pub const STATS_GET: u8 = 17;
pub const DUMPS_ALL: u8 = 18;
pub const DUMPS_CREATE: u8 = 19;
pub const VERSION: u8 = 20;
pub const KEYS_CREATE: u8 = 21;
pub const KEYS_GET: u8 = 22;
pub const KEYS_UPDATE: u8 = 23;
pub const KEYS_DELETE: u8 = 24;
}
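
Since every variant now maps to a named constant, the two match arms above have to stay in lockstep. A test-shaped sketch (not part of the diff) of that invariant; note the constants were renumbered (e.g. `DOCUMENTS_ADD` moved from 2 to 3), so a repr persisted by an older version no longer decodes to the same action:

```rust
#[cfg(test)]
mod repr_round_trip {
    use super::*;
    use enum_iterator::IntoEnumIterator;

    #[test]
    fn every_action_round_trips_through_its_repr() {
        for action in Action::into_enum_iter() {
            assert_eq!(Action::from_repr(action.repr()), Some(action));
        }
        // Unassigned discriminants must stay unknown.
        assert_eq!(Action::from_repr(200), None);
    }
}
```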

View File

@@ -1,5 +1,6 @@
use serde_json::Deserializer;
use std::fs::File;
use std::io::BufRead;
use std::io::BufReader;
use std::io::Write;
use std::path::Path;
@@ -10,7 +11,10 @@ const KEYS_PATH: &str = "keys";
impl AuthController {
pub fn dump(src: impl AsRef<Path>, dst: impl AsRef<Path>) -> Result<()> {
let store = HeedAuthStore::new(&src)?;
let mut store = HeedAuthStore::new(&src)?;
// do not attempt to close the database on drop!
store.set_drop_on_close(false);
let keys_file_path = dst.as_ref().join(KEYS_PATH);
@@ -29,10 +33,13 @@ impl AuthController {
let keys_file_path = src.as_ref().join(KEYS_PATH);
let mut reader = BufReader::new(File::open(&keys_file_path)?).lines();
while let Some(key) = reader.next().transpose()? {
let key = serde_json::from_str(&key)?;
store.put_api_key(key)?;
if !keys_file_path.exists() {
return Ok(());
}
let reader = BufReader::new(File::open(&keys_file_path)?);
for key in Deserializer::from_reader(reader).into_iter() {
store.put_api_key(key?)?;
}
Ok(())
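
The switch from line-by-line reads to `serde_json::Deserializer` means dumped keys no longer have to sit one per line. A self-contained sketch of the streaming behavior this relies on:

```rust
// serde_json can read a stream of concatenated, whitespace-separated JSON
// values with no surrounding array; that is the shape of the keys dump file.
fn main() {
    let input = r#"{"v": 1} {"v": 2}
    {"v": 3}"#;
    let values: Result<Vec<serde_json::Value>, _> =
        serde_json::Deserializer::from_str(input).into_iter().collect();
    assert_eq!(values.unwrap().len(), 3);
}
```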

View File

@@ -1,7 +1,7 @@
use std::error::Error;
use meilisearch_error::ErrorCode;
use meilisearch_error::{internal_error, Code};
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::internal_error;
use serde_json::Value;
pub type Result<T> = std::result::Result<T, AuthControllerError>;
@@ -10,22 +10,32 @@ pub type Result<T> = std::result::Result<T, AuthControllerError>;
pub enum AuthControllerError {
#[error("`{0}` field is mandatory.")]
MissingParameter(&'static str),
#[error("actions field value `{0}` is invalid. It should be an array of string representing action names.")]
#[error("`actions` field value `{0}` is invalid. It should be an array of string representing action names.")]
InvalidApiKeyActions(Value),
#[error("indexes field value `{0}` is invalid. It should be an array of string representing index names.")]
#[error("`indexes` field value `{0}` is invalid. It should be an array of string representing index names.")]
InvalidApiKeyIndexes(Value),
#[error("expiresAt field value `{0}` is invalid. It should be in ISO-8601 format to represents a date or datetime in the future or specified as a null value. e.g. 'YYYY-MM-DD' or 'YYYY-MM-DDTHH:MM:SS'.")]
#[error("`expiresAt` field value `{0}` is invalid. It should follow the RFC 3339 format to represents a date or datetime in the future or specified as a null value. e.g. 'YYYY-MM-DD' or 'YYYY-MM-DD HH:MM:SS'.")]
InvalidApiKeyExpiresAt(Value),
#[error("description field value `{0}` is invalid. It should be a string or specified as a null value.")]
#[error("`description` field value `{0}` is invalid. It should be a string or specified as a null value.")]
InvalidApiKeyDescription(Value),
#[error(
"`name` field value `{0}` is invalid. It should be a string or specified as a null value."
)]
InvalidApiKeyName(Value),
#[error("`uid` field value `{0}` is invalid. It should be a valid UUID v4 string or omitted.")]
InvalidApiKeyUid(Value),
#[error("API key `{0}` not found.")]
ApiKeyNotFound(String),
#[error("`uid` field value `{0}` is already an existing API key.")]
ApiKeyAlreadyExists(String),
#[error("The `{0}` field cannot be modified for the given resource.")]
ImmutableField(String),
#[error("Internal error: {0}")]
Internal(Box<dyn Error + Send + Sync + 'static>),
}
internal_error!(
AuthControllerError: heed::Error,
AuthControllerError: milli::heed::Error,
std::io::Error,
serde_json::Error,
std::str::Utf8Error
@@ -39,7 +49,11 @@ impl ErrorCode for AuthControllerError {
Self::InvalidApiKeyIndexes(_) => Code::InvalidApiKeyIndexes,
Self::InvalidApiKeyExpiresAt(_) => Code::InvalidApiKeyExpiresAt,
Self::InvalidApiKeyDescription(_) => Code::InvalidApiKeyDescription,
Self::InvalidApiKeyName(_) => Code::InvalidApiKeyName,
Self::ApiKeyNotFound(_) => Code::ApiKeyNotFound,
Self::InvalidApiKeyUid(_) => Code::InvalidApiKeyUid,
Self::ApiKeyAlreadyExists(_) => Code::ApiKeyAlreadyExists,
Self::ImmutableField(_) => Code::ImmutableField,
Self::Internal(_) => Code::Internal,
}
}

View File

@@ -1,34 +1,56 @@
use crate::action::Action;
use crate::error::{AuthControllerError, Result};
use crate::store::{KeyId, KEY_ID_LENGTH};
use chrono::{DateTime, NaiveDateTime, Utc};
use rand::Rng;
use crate::store::KeyId;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::star_or::StarOr;
use serde::{Deserialize, Serialize};
use serde_json::{from_value, Value};
use time::format_description::well_known::Rfc3339;
use time::macros::{format_description, time};
use time::{Date, OffsetDateTime, PrimitiveDateTime};
use uuid::Uuid;
#[derive(Debug, Deserialize, Serialize)]
pub struct Key {
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
pub id: KeyId,
#[serde(skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
pub uid: KeyId,
pub actions: Vec<Action>,
pub indexes: Vec<String>,
pub expires_at: Option<DateTime<Utc>>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub indexes: Vec<StarOr<IndexUid>>,
#[serde(with = "time::serde::rfc3339::option")]
pub expires_at: Option<OffsetDateTime>,
#[serde(with = "time::serde::rfc3339")]
pub created_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub updated_at: OffsetDateTime,
}
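
`indexes` switched from bare strings to `Vec<StarOr<IndexUid>>`. A comment-only sketch of the intended shape, inferred from how `StarOr::Star` is used later in this file (`StarOr` itself lives in `meilisearch-types`):

```rust
// "indexes": ["*"]      -> vec![StarOr::Star]            (wildcard: all indexes)
// "indexes": ["movies"] -> a validated IndexUid entry    (one concrete index)
// i.e. the wildcard becomes a dedicated variant instead of a magic "*" string,
// and every concrete entry is checked as a well-formed index uid on the way in.
```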
impl Key {
pub fn create_from_value(value: Value) -> Result<Self> {
let description = value
.get("description")
.map(|des| {
from_value(des.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyDescription(des.clone()))
})
.transpose()?;
let name = match value.get("name") {
None | Some(Value::Null) => None,
Some(des) => from_value(des.clone())
.map(Some)
.map_err(|_| AuthControllerError::InvalidApiKeyName(des.clone()))?,
};
let id = generate_id();
let description = match value.get("description") {
None | Some(Value::Null) => None,
Some(des) => from_value(des.clone())
.map(Some)
.map_err(|_| AuthControllerError::InvalidApiKeyDescription(des.clone()))?,
};
let uid = value.get("uid").map_or_else(
|| Ok(Uuid::new_v4()),
|uid| {
from_value(uid.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyUid(uid.clone()))
},
)?;
let actions = value
.get("actions")
@@ -51,12 +73,13 @@ impl Key {
.map(parse_expiration_date)
.ok_or(AuthControllerError::MissingParameter("expiresAt"))??;
let created_at = Utc::now();
let updated_at = Utc::now();
let created_at = OffsetDateTime::now_utc();
let updated_at = created_at;
Ok(Self {
name,
description,
id,
uid,
actions,
indexes,
expires_at,
@@ -72,83 +95,100 @@ impl Key {
self.description = des?;
}
if let Some(act) = value.get("actions") {
let act = from_value(act.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyActions(act.clone()));
self.actions = act?;
if let Some(des) = value.get("name") {
let des = from_value(des.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyName(des.clone()));
self.name = des?;
}
if let Some(ind) = value.get("indexes") {
let ind = from_value(ind.clone())
.map_err(|_| AuthControllerError::InvalidApiKeyIndexes(ind.clone()));
self.indexes = ind?;
if value.get("uid").is_some() {
return Err(AuthControllerError::ImmutableField("uid".to_string()));
}
if let Some(exp) = value.get("expiresAt") {
self.expires_at = parse_expiration_date(exp)?;
if value.get("actions").is_some() {
return Err(AuthControllerError::ImmutableField("actions".to_string()));
}
self.updated_at = Utc::now();
if value.get("indexes").is_some() {
return Err(AuthControllerError::ImmutableField("indexes".to_string()));
}
if value.get("expiresAt").is_some() {
return Err(AuthControllerError::ImmutableField("expiresAt".to_string()));
}
if value.get("createdAt").is_some() {
return Err(AuthControllerError::ImmutableField("createdAt".to_string()));
}
if value.get("updatedAt").is_some() {
return Err(AuthControllerError::ImmutableField("updatedAt".to_string()));
}
self.updated_at = OffsetDateTime::now_utc();
Ok(())
}
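
With the checks above, `update_from_value` only ever patches `name` and `description`. A hedged sketch, assuming the function keeps its `(&mut self, value: Value) -> Result<()>` shape and `serde_json::json!` is available:

```rust
let mut key = Key::default_search(); // crate-internal helper, shown below
assert!(key.update_from_value(serde_json::json!({ "name": "frontend" })).is_ok());
// Any other field, even set to null, is rejected as immutable.
assert!(matches!(
    key.update_from_value(serde_json::json!({ "uid": null })),
    Err(AuthControllerError::ImmutableField(field)) if field == "uid"
));
```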
pub(crate) fn default_admin() -> Self {
let now = OffsetDateTime::now_utc();
let uid = Uuid::new_v4();
Self {
description: Some("Default Admin API Key (Use it for all other operations. Caution! Do not use it on a public frontend)".to_string()),
id: generate_id(),
name: Some("Default Admin API Key".to_string()),
description: Some("Use it for anything that is not a search operation. Caution! Do not expose it on a public frontend".to_string()),
uid,
actions: vec![Action::All],
indexes: vec!["*".to_string()],
indexes: vec![StarOr::Star],
expires_at: None,
created_at: Utc::now(),
updated_at: Utc::now(),
created_at: now,
updated_at: now,
}
}
pub(crate) fn default_search() -> Self {
let now = OffsetDateTime::now_utc();
let uid = Uuid::new_v4();
Self {
description: Some(
"Default Search API Key (Use it to search from the frontend)".to_string(),
),
id: generate_id(),
name: Some("Default Search API Key".to_string()),
description: Some("Use it to search from the frontend".to_string()),
uid,
actions: vec![Action::Search],
indexes: vec!["*".to_string()],
indexes: vec![StarOr::Star],
expires_at: None,
created_at: Utc::now(),
updated_at: Utc::now(),
created_at: now,
updated_at: now,
}
}
}
/// Generate a printable key of 64 characters using thread_rng.
fn generate_id() -> [u8; KEY_ID_LENGTH] {
const CHARSET: &[u8] = b"abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
let mut rng = rand::thread_rng();
let mut bytes = [0; KEY_ID_LENGTH];
for byte in bytes.iter_mut() {
*byte = CHARSET[rng.gen_range(0..CHARSET.len())];
}
bytes
}
fn parse_expiration_date(value: &Value) -> Result<Option<DateTime<Utc>>> {
fn parse_expiration_date(value: &Value) -> Result<Option<OffsetDateTime>> {
match value {
Value::String(string) => DateTime::parse_from_rfc3339(string)
.map(|d| d.into())
Value::String(string) => OffsetDateTime::parse(string, &Rfc3339)
.or_else(|_| {
NaiveDateTime::parse_from_str(string, "%Y-%m-%dT%H:%M:%S")
.map(|naive| DateTime::from_utc(naive, Utc))
PrimitiveDateTime::parse(
string,
format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day]T[hour]:[minute]:[second]"
),
).map(|datetime| datetime.assume_utc())
})
.or_else(|_| {
NaiveDateTime::parse_from_str(string, "%Y-%m-%d")
.map(|naive| DateTime::from_utc(naive, Utc))
PrimitiveDateTime::parse(
string,
format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day] [hour]:[minute]:[second]"
),
).map(|datetime| datetime.assume_utc())
})
.or_else(|_| {
Date::parse(string, format_description!(
"[year repr:full base:calendar]-[month repr:numerical]-[day]"
)).map(|date| PrimitiveDateTime::new(date, time!(00:00)).assume_utc())
})
.map_err(|_| AuthControllerError::InvalidApiKeyExpiresAt(value.clone()))
// check if the key is already expired.
.and_then(|d| {
if d > Utc::now() {
if d > OffsetDateTime::now_utc() {
Ok(d)
} else {
Err(AuthControllerError::InvalidApiKeyExpiresAt(value.clone()))

View File

@@ -4,17 +4,22 @@ pub mod error;
mod key;
mod store;
use std::collections::{HashMap, HashSet};
use std::ops::Deref;
use std::path::Path;
use std::str::from_utf8;
use std::sync::Arc;
use chrono::Utc;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sha2::{Digest, Sha256};
use time::OffsetDateTime;
use uuid::Uuid;
pub use action::{actions, Action};
use error::{AuthControllerError, Result};
pub use key::Key;
use meilisearch_types::star_or::StarOr;
use store::generate_key_as_hexa;
pub use store::open_auth_store_env;
use store::HeedAuthStore;
#[derive(Clone)]
@@ -37,54 +42,91 @@ impl AuthController {
})
}
pub async fn create_key(&self, value: Value) -> Result<Key> {
pub fn create_key(&self, value: Value) -> Result<Key> {
let key = Key::create_from_value(value)?;
self.store.put_api_key(key)
match self.store.get_api_key(key.uid)? {
Some(_) => Err(AuthControllerError::ApiKeyAlreadyExists(
key.uid.to_string(),
)),
None => self.store.put_api_key(key),
}
}
pub async fn update_key(&self, key: impl AsRef<str>, value: Value) -> Result<Key> {
let mut key = self.get_key(key).await?;
pub fn update_key(&self, uid: Uuid, value: Value) -> Result<Key> {
let mut key = self.get_key(uid)?;
key.update_from_value(value)?;
self.store.put_api_key(key)
}
pub async fn get_key(&self, key: impl AsRef<str>) -> Result<Key> {
pub fn get_key(&self, uid: Uuid) -> Result<Key> {
self.store
.get_api_key(&key)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(key.as_ref().to_string()))
.get_api_key(uid)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(uid.to_string()))
}
pub fn get_key_filters(&self, key: impl AsRef<str>) -> Result<AuthFilter> {
let mut filters = AuthFilter::default();
if self
.master_key
.as_ref()
.map_or(false, |master_key| master_key != key.as_ref())
{
let key = self
pub fn get_optional_uid_from_encoded_key(&self, encoded_key: &[u8]) -> Result<Option<Uuid>> {
match &self.master_key {
Some(master_key) => self
.store
.get_api_key(&key)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(key.as_ref().to_string()))?;
if !key.indexes.iter().any(|i| i.as_str() == "*") {
filters.indexes = Some(key.indexes);
}
.get_uid_from_encoded_key(encoded_key, master_key.as_bytes()),
None => Ok(None),
}
}
pub fn get_uid_from_encoded_key(&self, encoded_key: &str) -> Result<Uuid> {
self.get_optional_uid_from_encoded_key(encoded_key.as_bytes())?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(encoded_key.to_string()))
}
pub fn get_key_filters(
&self,
uid: Uuid,
search_rules: Option<SearchRules>,
) -> Result<AuthFilter> {
let mut filters = AuthFilter::default();
let key = self
.store
.get_api_key(uid)?
.ok_or_else(|| AuthControllerError::ApiKeyNotFound(uid.to_string()))?;
if !key.indexes.iter().any(|i| i == &StarOr::Star) {
filters.search_rules = match search_rules {
// Intersect search_rules with parent key authorized indexes.
Some(search_rules) => SearchRules::Map(
key.indexes
.into_iter()
.filter_map(|index| {
search_rules.get_index_search_rules(index.deref()).map(
|index_search_rules| {
(String::from(index), Some(index_search_rules))
},
)
})
.collect(),
),
None => SearchRules::Set(key.indexes.into_iter().map(String::from).collect()),
};
} else if let Some(search_rules) = search_rules {
filters.search_rules = search_rules;
}
filters.allow_index_creation = key
.actions
.iter()
.any(|&action| action == Action::IndexesAdd || action == Action::All);
Ok(filters)
}
pub async fn list_keys(&self) -> Result<Vec<Key>> {
pub fn list_keys(&self) -> Result<Vec<Key>> {
self.store.list_api_keys()
}
pub async fn delete_key(&self, key: impl AsRef<str>) -> Result<()> {
if self.store.delete_api_key(&key)? {
pub fn delete_key(&self, uid: Uuid) -> Result<()> {
if self.store.delete_api_key(uid)? {
Ok(())
} else {
Err(AuthControllerError::ApiKeyNotFound(
key.as_ref().to_string(),
))
Err(AuthControllerError::ApiKeyNotFound(uid.to_string()))
}
}
@@ -92,41 +134,120 @@ impl AuthController {
self.master_key.as_ref()
}
pub fn authenticate(&self, token: &[u8], action: Action, index: Option<&[u8]>) -> Result<bool> {
if let Some(master_key) = &self.master_key {
if let Some((id, exp)) = self
.store
// check if the key has access to all indexes.
.get_expiration_date(token, action, None)?
.or(match index {
// else check if the key has access to the requested index.
Some(index) => self.store.get_expiration_date(token, action, Some(index))?,
// or to any index if no index has been requested.
None => self.store.prefix_first_expiration_date(token, action)?,
})
{
let id = from_utf8(&id)?;
if exp.map_or(true, |exp| Utc::now() < exp)
&& generate_key(master_key.as_bytes(), id).as_bytes() == token
{
return Ok(true);
}
}
}
/// Generate a valid key from a key id using the current master key.
/// Returns None if no master key has been set.
pub fn generate_key(&self, uid: Uuid) -> Option<String> {
self.master_key
.as_ref()
.map(|master_key| generate_key_as_hexa(uid, master_key.as_bytes()))
}
Ok(false)
/// Check if the provided key is authorized to make a specific action
/// without checking if the key is valid.
pub fn is_key_authorized(
&self,
uid: Uuid,
action: Action,
index: Option<&str>,
) -> Result<bool> {
match self
.store
// check if the key has access to all indexes.
.get_expiration_date(uid, action, None)?
.or(match index {
// else check if the key has access to the requested index.
Some(index) => {
self.store
.get_expiration_date(uid, action, Some(index.as_bytes()))?
}
// or to any index if no index has been requested.
None => self.store.prefix_first_expiration_date(uid, action)?,
}) {
// check expiration date.
Some(Some(exp)) => Ok(OffsetDateTime::now_utc() < exp),
// no expiration date.
Some(None) => Ok(true),
// action or index forbidden.
None => Ok(false),
}
}
}
#[derive(Default)]
pub struct AuthFilter {
pub indexes: Option<Vec<String>>,
pub search_rules: SearchRules,
pub allow_index_creation: bool,
}
pub fn generate_key(master_key: &[u8], uid: &str) -> String {
let key = [uid.as_bytes(), master_key].concat();
let sha = Sha256::digest(&key);
format!("{}{:x}", uid, sha)
impl Default for AuthFilter {
fn default() -> Self {
Self {
search_rules: SearchRules::default(),
allow_index_creation: true,
}
}
}
/// Transparent wrapper around a list of allowed indexes with the search rules to apply for each.
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(untagged)]
pub enum SearchRules {
Set(HashSet<String>),
Map(HashMap<String, Option<IndexSearchRules>>),
}
impl Default for SearchRules {
fn default() -> Self {
Self::Set(Some("*".to_string()).into_iter().collect())
}
}
impl SearchRules {
pub fn is_index_authorized(&self, index: &str) -> bool {
match self {
Self::Set(set) => set.contains("*") || set.contains(index),
Self::Map(map) => map.contains_key("*") || map.contains_key(index),
}
}
pub fn get_index_search_rules(&self, index: &str) -> Option<IndexSearchRules> {
match self {
Self::Set(set) => {
if set.contains("*") || set.contains(index) {
Some(IndexSearchRules::default())
} else {
None
}
}
Self::Map(map) => map
.get(index)
.or_else(|| map.get("*"))
.map(|isr| isr.clone().unwrap_or_default()),
}
}
}
impl IntoIterator for SearchRules {
type Item = (String, IndexSearchRules);
type IntoIter = Box<dyn Iterator<Item = Self::Item>>;
fn into_iter(self) -> Self::IntoIter {
match self {
Self::Set(array) => {
Box::new(array.into_iter().map(|i| (i, IndexSearchRules::default())))
}
Self::Map(map) => {
Box::new(map.into_iter().map(|(i, isr)| (i, isr.unwrap_or_default())))
}
}
}
}
/// Contains the rules to apply on the top of the search query for a specific index.
///
/// filter: search filter to apply in addition to query filters.
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
pub struct IndexSearchRules {
pub filter: Option<serde_json::Value>,
}
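
Because `SearchRules` is `#[serde(untagged)]`, both JSON shapes deserialize directly. A small sketch (index names are illustrative):

```rust
fn demo() -> serde_json::Result<()> {
    // A plain list of index names becomes SearchRules::Set.
    let set: SearchRules = serde_json::from_str(r#"["products", "orders"]"#)?;
    assert!(set.is_index_authorized("products"));
    assert!(!set.is_index_authorized("users"));

    // An object becomes SearchRules::Map; a "*" entry grants every index, and
    // each entry may carry an optional per-index filter.
    let map: SearchRules = serde_json::from_str(
        r#"{ "products": { "filter": "published = true" }, "*": null }"#,
    )?;
    assert!(map.is_index_authorized("anything"));
    Ok(())
}
```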
fn generate_default_keys(store: &HeedAuthStore) -> Result<()> {

View File

@@ -1,42 +1,62 @@
use enum_iterator::IntoEnumIterator;
use std::borrow::Cow;
use std::cmp::Reverse;
use std::collections::HashSet;
use std::convert::TryFrom;
use std::convert::TryInto;
use std::fs::create_dir_all;
use std::ops::Deref;
use std::path::Path;
use std::str;
use std::sync::Arc;
use chrono::{DateTime, Utc};
use heed::types::{ByteSlice, DecodeIgnore, SerdeJson};
use heed::{Database, Env, EnvOpenOptions, RwTxn};
use enum_iterator::IntoEnumIterator;
use hmac::{Hmac, Mac};
use meilisearch_types::star_or::StarOr;
use milli::heed::types::{ByteSlice, DecodeIgnore, SerdeJson};
use milli::heed::{Database, Env, EnvOpenOptions, RwTxn};
use sha2::Sha256;
use time::OffsetDateTime;
use uuid::fmt::Hyphenated;
use uuid::Uuid;
use super::error::Result;
use super::{Action, Key};
const AUTH_STORE_SIZE: usize = 1_073_741_824; //1GiB
pub const KEY_ID_LENGTH: usize = 8;
const AUTH_DB_PATH: &str = "auth";
const KEY_DB_NAME: &str = "api-keys";
const KEY_ID_ACTION_INDEX_EXPIRATION_DB_NAME: &str = "keyid-action-index-expiration";
pub type KeyId = [u8; KEY_ID_LENGTH];
pub type KeyId = Uuid;
#[derive(Clone)]
pub struct HeedAuthStore {
env: Env,
env: Arc<Env>,
keys: Database<ByteSlice, SerdeJson<Key>>,
action_keyid_index_expiration: Database<KeyIdActionCodec, SerdeJson<Option<DateTime<Utc>>>>,
action_keyid_index_expiration: Database<KeyIdActionCodec, SerdeJson<Option<OffsetDateTime>>>,
should_close_on_drop: bool,
}
impl Drop for HeedAuthStore {
fn drop(&mut self) {
if self.should_close_on_drop && Arc::strong_count(&self.env) == 1 {
self.env.as_ref().clone().prepare_for_closing();
}
}
}
pub fn open_auth_store_env(path: &Path) -> milli::heed::Result<milli::heed::Env> {
let mut options = EnvOpenOptions::new();
options.map_size(AUTH_STORE_SIZE); // 1GB
options.max_dbs(2);
options.open(path)
}
impl HeedAuthStore {
pub fn new(path: impl AsRef<Path>) -> Result<Self> {
let path = path.as_ref().join(AUTH_DB_PATH);
create_dir_all(&path)?;
let mut options = EnvOpenOptions::new();
options.map_size(AUTH_STORE_SIZE); // 1GB
options.max_dbs(2);
let env = options.open(path)?;
let env = Arc::new(open_auth_store_env(path.as_ref())?);
let keys = env.create_database(Some(KEY_DB_NAME))?;
let action_keyid_index_expiration =
env.create_database(Some(KEY_ID_ACTION_INDEX_EXPIRATION_DB_NAME))?;
@@ -44,9 +64,14 @@ impl HeedAuthStore {
env,
keys,
action_keyid_index_expiration,
should_close_on_drop: true,
})
}
pub fn set_drop_on_close(&mut self, v: bool) {
self.should_close_on_drop = v;
}
pub fn is_empty(&self) -> Result<bool> {
let rtxn = self.env.read_txn()?;
@@ -54,33 +79,70 @@ impl HeedAuthStore {
}
pub fn put_api_key(&self, key: Key) -> Result<Key> {
let uid = key.uid;
let mut wtxn = self.env.write_txn()?;
self.keys.put(&mut wtxn, &key.id, &key)?;
let id = key.id;
self.keys.put(&mut wtxn, uid.as_bytes(), &key)?;
// delete key from inverted database before refilling it.
self.delete_key_from_inverted_db(&mut wtxn, &id)?;
self.delete_key_from_inverted_db(&mut wtxn, &uid)?;
// create inverted database.
let db = self.action_keyid_index_expiration;
let actions = if key.actions.contains(&Action::All) {
// if key.actions contains All, we iterate over all actions.
Action::into_enum_iter().collect()
} else {
key.actions.clone()
};
let mut actions = HashSet::new();
for action in &key.actions {
match action {
Action::All => actions.extend(Action::into_enum_iter()),
Action::DocumentsAll => {
actions.extend(
[
Action::DocumentsGet,
Action::DocumentsDelete,
Action::DocumentsAdd,
]
.iter(),
);
}
Action::IndexesAll => {
actions.extend(
[
Action::IndexesAdd,
Action::IndexesDelete,
Action::IndexesGet,
Action::IndexesUpdate,
]
.iter(),
);
}
Action::SettingsAll => {
actions.extend([Action::SettingsGet, Action::SettingsUpdate].iter());
}
Action::DumpsAll => {
actions.insert(Action::DumpsCreate);
}
Action::TasksAll => {
actions.insert(Action::TasksGet);
}
Action::StatsAll => {
actions.insert(Action::StatsGet);
}
other => {
actions.insert(*other);
}
}
}
let no_index_restriction = key.indexes.contains(&"*".to_owned());
let no_index_restriction = key.indexes.contains(&StarOr::Star);
for action in actions {
if no_index_restriction {
// If there is no index restriction we put None.
db.put(&mut wtxn, &(&id, &action, None), &key.expires_at)?;
db.put(&mut wtxn, &(&uid, &action, None), &key.expires_at)?;
} else {
// else we create a key for each index.
for index in key.indexes.iter() {
db.put(
&mut wtxn,
&(&id, &action, Some(index.as_bytes())),
&(&uid, &action, Some(index.deref().as_bytes())),
&key.expires_at,
)?;
}
@@ -92,24 +154,42 @@ impl HeedAuthStore {
Ok(key)
}
pub fn get_api_key(&self, key: impl AsRef<str>) -> Result<Option<Key>> {
pub fn get_api_key(&self, uid: Uuid) -> Result<Option<Key>> {
let rtxn = self.env.read_txn()?;
match try_split_array_at::<_, KEY_ID_LENGTH>(key.as_ref().as_bytes()) {
Some((id, _)) => self.keys.get(&rtxn, id).map_err(|e| e.into()),
None => Ok(None),
}
self.keys.get(&rtxn, uid.as_bytes()).map_err(|e| e.into())
}
pub fn delete_api_key(&self, key: impl AsRef<str>) -> Result<bool> {
pub fn get_uid_from_encoded_key(
&self,
encoded_key: &[u8],
master_key: &[u8],
) -> Result<Option<Uuid>> {
let rtxn = self.env.read_txn()?;
let uid = self
.keys
.remap_data_type::<DecodeIgnore>()
.iter(&rtxn)?
.filter_map(|res| match res {
Ok((uid, _)) => {
let (uid, _) = try_split_array_at(uid)?;
let uid = Uuid::from_bytes(*uid);
if generate_key_as_hexa(uid, master_key).as_bytes() == encoded_key {
Some(uid)
} else {
None
}
}
Err(_) => None,
})
.next();
Ok(uid)
}
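
Worth noting: the lookup above is linear in the number of stored keys. Each candidate uid is re-derived with `generate_key_as_hexa` and compared against the provided bytes, which is what allows the store to persist only uids and never the generated keys themselves.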
pub fn delete_api_key(&self, uid: Uuid) -> Result<bool> {
let mut wtxn = self.env.write_txn()?;
let existing = match try_split_array_at(key.as_ref().as_bytes()) {
Some((id, _)) => {
let existing = self.keys.delete(&mut wtxn, id)?;
self.delete_key_from_inverted_db(&mut wtxn, id)?;
existing
}
None => false,
};
let existing = self.keys.delete(&mut wtxn, uid.as_bytes())?;
self.delete_key_from_inverted_db(&mut wtxn, &uid)?;
wtxn.commit()?;
Ok(existing)
@@ -128,48 +208,37 @@ impl HeedAuthStore {
pub fn get_expiration_date(
&self,
key: &[u8],
uid: Uuid,
action: Action,
index: Option<&[u8]>,
) -> Result<Option<(KeyId, Option<DateTime<Utc>>)>> {
) -> Result<Option<Option<OffsetDateTime>>> {
let rtxn = self.env.read_txn()?;
match try_split_array_at::<_, KEY_ID_LENGTH>(key) {
Some((id, _)) => {
let tuple = (id, &action, index);
Ok(self
.action_keyid_index_expiration
.get(&rtxn, &tuple)?
.map(|expiration| (*id, expiration)))
}
None => Ok(None),
}
let tuple = (&uid, &action, index);
Ok(self.action_keyid_index_expiration.get(&rtxn, &tuple)?)
}
pub fn prefix_first_expiration_date(
&self,
key: &[u8],
uid: Uuid,
action: Action,
) -> Result<Option<(KeyId, Option<DateTime<Utc>>)>> {
) -> Result<Option<Option<OffsetDateTime>>> {
let rtxn = self.env.read_txn()?;
match try_split_array_at::<_, KEY_ID_LENGTH>(key) {
Some((id, _)) => {
let tuple = (id, &action, None);
Ok(self
.action_keyid_index_expiration
.prefix_iter(&rtxn, &tuple)?
.next()
.transpose()?
.map(|(_, expiration)| (*id, expiration)))
}
None => Ok(None),
}
let tuple = (&uid, &action, None);
let exp = self
.action_keyid_index_expiration
.prefix_iter(&rtxn, &tuple)?
.next()
.transpose()?
.map(|(_, expiration)| expiration);
Ok(exp)
}
fn delete_key_from_inverted_db(&self, wtxn: &mut RwTxn, key: &KeyId) -> Result<()> {
let mut iter = self
.action_keyid_index_expiration
.remap_types::<ByteSlice, DecodeIgnore>()
.prefix_iter_mut(wtxn, key)?;
.prefix_iter_mut(wtxn, key.as_bytes())?;
while iter.next().transpose()?.is_some() {
// safety: we don't keep references from inside the LMDB database.
unsafe { iter.del_current()? };
@@ -180,31 +249,32 @@ impl HeedAuthStore {
}
/// Codec allowing to retrieve the expiration date of an action,
/// optionnally on a spcific index, for a given key.
/// optionally on a specific index, for a given key.
pub struct KeyIdActionCodec;
impl<'a> heed::BytesDecode<'a> for KeyIdActionCodec {
impl<'a> milli::heed::BytesDecode<'a> for KeyIdActionCodec {
type DItem = (KeyId, Action, Option<&'a [u8]>);
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
let (key_id, action_bytes) = try_split_array_at(bytes)?;
let (key_id_bytes, action_bytes) = try_split_array_at(bytes)?;
let (action_bytes, index) = match try_split_array_at(action_bytes)? {
(action, []) => (action, None),
(action, index) => (action, Some(index)),
};
let key_id = Uuid::from_bytes(*key_id_bytes);
let action = Action::from_repr(u8::from_be_bytes(*action_bytes))?;
Some((*key_id, action, index))
Some((key_id, action, index))
}
}
impl<'a> heed::BytesEncode<'a> for KeyIdActionCodec {
impl<'a> milli::heed::BytesEncode<'a> for KeyIdActionCodec {
type EItem = (&'a KeyId, &'a Action, Option<&'a [u8]>);
fn bytes_encode((key_id, action, index): &Self::EItem) -> Option<Cow<[u8]>> {
let mut bytes = Vec::new();
bytes.extend_from_slice(*key_id);
bytes.extend_from_slice(key_id.as_bytes());
let action_bytes = u8::to_be_bytes(action.repr());
bytes.extend_from_slice(&action_bytes);
if let Some(index) = index {
@@ -215,6 +285,19 @@ impl<'a> heed::BytesEncode<'a> for KeyIdActionCodec {
}
}
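For orientation, every entry this codec writes follows a fixed byte layout: the 16 raw uuid bytes, one big-endian action byte, then the optional index bytes. A standalone sketch of that layout, not the crate's code (`encode_entry` is a hypothetical helper):

use uuid::Uuid;

// Hypothetical helper mirroring KeyIdActionCodec's encoding:
// [16 uuid bytes][1 action byte][optional index bytes].
fn encode_entry(uid: Uuid, action_repr: u8, index: Option<&[u8]>) -> Vec<u8> {
    let mut bytes = Vec::new();
    bytes.extend_from_slice(uid.as_bytes());
    bytes.extend_from_slice(&action_repr.to_be_bytes());
    if let Some(index) = index {
        bytes.extend_from_slice(index);
    }
    bytes
}

fn main() {
    let entry = encode_entry(Uuid::nil(), 1, Some(b"movies"));
    assert_eq!(entry.len(), 16 + 1 + b"movies".len());
}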
pub fn generate_key_as_hexa(uid: Uuid, master_key: &[u8]) -> String {
// format the uid as hyphenated, allowing users to generate their own keys.
let mut uid_buffer = [0; Hyphenated::LENGTH];
let uid = uid.hyphenated().encode_lower(&mut uid_buffer);
// the new_from_slice function never fails.
let mut mac = Hmac::<Sha256>::new_from_slice(master_key).unwrap();
mac.update(uid.as_bytes());
let result = mac.finalize();
format!("{:x}", result.into_bytes())
}
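Because the uid is hashed in its hyphenated form, a client that knows the master key can derive the same API key offline. A minimal sketch, assuming the same `hmac`, `sha2`, and `uuid` crates used above:

use hmac::{Hmac, Mac};
use sha2::Sha256;
use uuid::Uuid;

fn main() {
    let uid = Uuid::new_v4();
    // Same recipe as generate_key_as_hexa: HMAC-SHA256 over the lowercase
    // hyphenated uid, rendered as hex.
    let mut mac = Hmac::<Sha256>::new_from_slice(b"MASTER_KEY").unwrap();
    mac.update(uid.hyphenated().to_string().as_bytes());
    let key = format!("{:x}", mac.finalize().into_bytes());
    assert_eq!(key.len(), 64); // 32-byte SHA-256 digest, hex-encoded
}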
/// Divides one slice into two at an index, returns `None` if mid is out of bounds.
pub fn try_split_at<T>(slice: &[T], mid: usize) -> Option<(&[T], &[T])> {
if mid <= slice.len() {


@@ -1,93 +1,97 @@
[package]
authors = ["Quentin de Quelen <quentin@dequelen.me>", "Clément Renault <clement@meilisearch.com>"]
description = "MeiliSearch HTTP server"
edition = "2018"
description = "Meilisearch HTTP server"
edition = "2021"
license = "MIT"
name = "meilisearch-http"
version = "0.25.0"
version = "0.28.1"
[[bin]]
name = "meilisearch"
path = "src/main.rs"
[build-dependencies]
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "39d8006", optional = true }
anyhow = { version = "1.0.43", optional = true }
cargo_toml = { version = "0.9", optional = true }
anyhow = { version = "1.0.56", optional = true }
cargo_toml = { version = "0.11.4", optional = true }
hex = { version = "0.4.3", optional = true }
reqwest = { version = "0.11.4", features = ["blocking", "rustls-tls"], default-features = false, optional = true }
sha-1 = { version = "0.9.8", optional = true }
tempfile = { version = "3.2.0", optional = true }
vergen = { version = "5.1.15", default-features = false, features = ["git"] }
reqwest = { version = "0.11.9", features = ["blocking", "rustls-tls"], default-features = false, optional = true }
sha-1 = { version = "0.10.0", optional = true }
static-files = { version = "0.2.3", optional = true }
tempfile = { version = "3.3.0", optional = true }
vergen = { version = "7.0.0", default-features = false, features = ["git"] }
zip = { version = "0.5.13", optional = true }
[dependencies]
actix-cors = { git = "https://github.com/MarinPostma/actix-extras.git", rev = "963ac94d" }
actix-web = { version = "4.0.0-beta.9", features = ["rustls"] }
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "39d8006", optional = true }
# TODO: specifying this dependency so semver doesn't bump to next beta
actix-tls = "=3.0.0-beta.5"
anyhow = { version = "1.0.43", features = ["backtrace"] }
arc-swap = "1.3.2"
async-stream = "0.3.2"
async-trait = "0.1.51"
actix-cors = "0.6.1"
actix-web = { version = "4.0.1", default-features = false, features = ["macros", "compress-brotli", "compress-gzip", "cookies", "rustls"] }
actix-web-static-files = { git = "https://github.com/kilork/actix-web-static-files.git", rev = "2d3b6160", optional = true }
anyhow = { version = "1.0.56", features = ["backtrace"] }
async-stream = "0.3.3"
async-trait = "0.1.52"
bstr = "0.2.17"
byte-unit = { version = "4.0.12", default-features = false, features = ["std"] }
byte-unit = { version = "4.0.14", default-features = false, features = ["std", "serde"] }
bytes = "1.1.0"
chrono = { version = "0.4.19", features = ["serde"] }
crossbeam-channel = "0.5.1"
clap = { version = "3.1.6", features = ["derive", "env"] }
crossbeam-channel = "0.5.2"
either = "1.6.1"
env_logger = "0.9.0"
flate2 = "1.0.21"
flate2 = "1.0.22"
fst = "0.4.7"
futures = "0.3.17"
futures-util = "0.3.17"
heed = { git = "https://github.com/Kerollmops/heed", tag = "v0.12.1" }
http = "0.2.4"
indexmap = { version = "1.7.0", features = ["serde-1"] }
itertools = "0.10.1"
futures = "0.3.21"
futures-util = "0.3.21"
http = "0.2.6"
indexmap = { version = "1.8.0", features = ["serde-1"] }
itertools = "0.10.3"
jsonwebtoken = "8.0.1"
log = "0.4.14"
meilisearch-auth = { path = "../meilisearch-auth" }
meilisearch-error = { path = "../meilisearch-error" }
meilisearch-types = { path = "../meilisearch-types" }
meilisearch-lib = { path = "../meilisearch-lib" }
mime = "0.3.16"
num_cpus = "1.13.0"
num_cpus = "1.13.1"
obkv = "0.2.0"
once_cell = "1.8.0"
parking_lot = "0.11.2"
pin-project = "1.0.8"
once_cell = "1.10.0"
parking_lot = "0.12.0"
pin-project-lite = "0.2.8"
platform-dirs = "0.3.0"
rand = "0.8.4"
rand = "0.8.5"
rayon = "1.5.1"
regex = "1.5.4"
rustls = "0.19.1"
segment = { version = "0.1.2", optional = true }
serde = { version = "1.0.130", features = ["derive"] }
serde_json = { version = "1.0.67", features = ["preserve_order"] }
sha2 = "0.9.6"
siphasher = "0.3.7"
slice-group-by = "0.2.6"
structopt = "0.3.25"
sysinfo = "0.20.2"
tar = "0.4.37"
tempfile = "3.2.0"
thiserror = "1.0.28"
tokio = { version = "1.11.0", features = ["full"] }
tokio-stream = "0.1.7"
uuid = { version = "0.8.2", features = ["serde"] }
regex = "1.5.5"
reqwest = { version = "0.11.4", features = ["rustls-tls", "json"], default-features = false }
rustls = "0.20.4"
rustls-pemfile = "0.3.0"
segment = { version = "0.2.0", optional = true }
serde = { version = "1.0.136", features = ["derive"] }
serde-cs = "0.2.3"
serde_json = { version = "1.0.79", features = ["preserve_order"] }
sha2 = "0.10.2"
siphasher = "0.3.10"
slice-group-by = "0.3.0"
static-files = { version = "0.2.3", optional = true }
sysinfo = "0.23.5"
tar = "0.4.38"
tempfile = "3.3.0"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
tokio = { version = "1.17.0", features = ["full"] }
tokio-stream = "0.1.8"
uuid = { version = "1.1.2", features = ["serde", "v4"] }
walkdir = "2.3.2"
[dev-dependencies]
actix-rt = "2.2.0"
actix-rt = "2.7.0"
assert-json-diff = "2.0.1"
manifest-dir-macros = "0.1.14"
maplit = "1.0.2"
paste = "1.0.5"
serde_url_params = "0.2.1"
urlencoding = "2.1.0"
yaup = "0.2.0"
[features]
default = ["analytics", "mini-dashboard"]
analytics = ["segment"]
mini-dashboard = [
"actix-web-static-files",
"static-files",
"anyhow",
"cargo_toml",
"hex",
@@ -96,12 +100,10 @@ mini-dashboard = [
"tempfile",
"zip",
]
analytics = ["segment"]
default = ["analytics", "mini-dashboard"]
[target.'cfg(target_os = "linux")'.dependencies]
tikv-jemallocator = "0.4.1"
tikv-jemallocator = "0.4.3"
[package.metadata.mini-dashboard]
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.1.5/build.zip"
sha1 = "1d955ea91b7691bd6fc207cb39866b82210783f0"
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.2.1/build.zip"
sha1 = "05a02ff13c3982091884a3f81d28bf53e72607b2"


@@ -16,11 +16,11 @@ mod mini_dashboard {
use std::io::{Cursor, Read, Write};
use std::path::PathBuf;
use actix_web_static_files::resource_dir;
use anyhow::Context;
use cargo_toml::Manifest;
use reqwest::blocking::get;
use sha1::{Digest, Sha1};
use static_files::resource_dir;
pub fn setup_mini_dashboard() -> anyhow::Result<()> {
let cargo_manifest_dir = PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap());


@@ -29,12 +29,12 @@ pub type SegmentAnalytics = segment_analytics::SegmentAnalytics;
#[cfg(all(not(debug_assertions), feature = "analytics"))]
pub type SearchAggregator = segment_analytics::SearchAggregator;
/// The MeiliSearch config dir:
/// `~/.config/MeiliSearch` on *NIX or *BSD.
/// The Meilisearch config dir:
/// `~/.config/Meilisearch` on *NIX or *BSD.
/// `~/Library/ApplicationSupport` on macOS.
/// `%APPDATA%` (= `C:\Users\%USERNAME%\AppData\Roaming`) on Windows.
static MEILISEARCH_CONFIG_PATH: Lazy<Option<PathBuf>> =
Lazy::new(|| AppDirs::new(Some("MeiliSearch"), false).map(|appdir| appdir.config_dir));
Lazy::new(|| AppDirs::new(Some("Meilisearch"), false).map(|appdir| appdir.config_dir));
fn config_user_id_path(db_path: &Path) -> Option<PathBuf> {
db_path
@@ -44,13 +44,13 @@ fn config_user_id_path(db_path: &Path) -> Option<PathBuf> {
path.join("instance-uid")
.display()
.to_string()
.replace("/", "-")
.replace('/', "-")
})
.zip(MEILISEARCH_CONFIG_PATH.as_ref())
.map(|(filename, config_path)| config_path.join(filename.trim_start_matches('-')))
}
/// Look for the instance-uid in the `data.ms` or in `~/.config/MeiliSearch/path-to-db-instance-uid`
/// Look for the instance-uid in the `data.ms` or in `~/.config/Meilisearch/path-to-db-instance-uid`
fn find_user_id(db_path: &Path) -> Option<String> {
fs::read_to_string(db_path.join("instance-uid"))
.ok()
@@ -61,7 +61,7 @@ pub trait Analytics: Sync + Send {
/// The method used to publish most analytics that do not need to be batched every hour
fn publish(&self, event_name: String, send: Value, request: Option<&HttpRequest>);
/// This method should be called to aggergate a get search
/// This method should be called to aggregate a get search
fn get_search(&self, aggregate: SearchAggregator);
/// This method should be called to aggregate a post search


@@ -1,13 +1,17 @@
use std::collections::{BinaryHeap, HashMap, HashSet};
use std::fs;
use std::path::Path;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::{Duration, Instant};
use actix_web::http::header::USER_AGENT;
use actix_web::HttpRequest;
use http::header::CONTENT_TYPE;
use meilisearch_lib::index::{SearchQuery, SearchResult};
use meilisearch_auth::SearchRules;
use meilisearch_lib::index::{
SearchQuery, SearchResult, DEFAULT_CROP_LENGTH, DEFAULT_CROP_MARKER,
DEFAULT_HIGHLIGHT_POST_TAG, DEFAULT_HIGHLIGHT_PRE_TAG,
};
use meilisearch_lib::index_controller::Stats;
use meilisearch_lib::MeiliSearch;
use once_cell::sync::Lazy;
@@ -16,6 +20,7 @@ use segment::message::{Identify, Track, User};
use segment::{AutoBatcher, Batcher, HttpClient};
use serde_json::{json, Value};
use sysinfo::{DiskExt, System, SystemExt};
use time::OffsetDateTime;
use tokio::select;
use tokio::sync::mpsc::{self, Receiver, Sender};
use uuid::Uuid;
@@ -26,6 +31,8 @@ use crate::Opt;
use super::{config_user_id_path, MEILISEARCH_CONFIG_PATH};
const ANALYTICS_HEADER: &str = "X-Meilisearch-Client";
/// Write the instance-uid in the `data.ms` and in `~/.config/MeiliSearch/path-to-db-instance-uid`. Ignore the errors.
fn write_user_id(db_path: &Path, user_id: &str) {
let _ = fs::write(db_path.join("instance-uid"), user_id.as_bytes());
@@ -43,7 +50,8 @@ const SEGMENT_API_KEY: &str = "P3FWhhEsJiEDCuEHpmcN9DHcK4hVfBvb";
pub fn extract_user_agents(request: &HttpRequest) -> Vec<String> {
request
.headers()
.get(USER_AGENT)
.get(ANALYTICS_HEADER)
.or_else(|| request.headers().get(USER_AGENT))
.map(|header| header.to_str().ok())
.flatten()
.unwrap_or("unknown")
@@ -73,7 +81,19 @@ impl SegmentAnalytics {
let user_id = user_id.unwrap_or_else(|| Uuid::new_v4().to_string());
write_user_id(&opt.db_path, &user_id);
let client = HttpClient::default();
let client = reqwest::Client::builder()
.connect_timeout(Duration::from_secs(10))
.build();
// if reqwest throws an error we won't be able to send analytics
if client.is_err() {
return super::MockAnalytics::new(opt);
}
let client = HttpClient::new(
client.unwrap(),
"https://telemetry.meilisearch.com".to_string(),
);
let user = User::UserId { user_id };
let mut batcher = AutoBatcher::new(client, Batcher::new(None), SEGMENT_API_KEY.to_string());
@@ -125,11 +145,7 @@ impl SegmentAnalytics {
impl super::Analytics for SegmentAnalytics {
fn publish(&self, event_name: String, mut send: Value, request: Option<&HttpRequest>) {
let user_agent = request
.map(|req| req.headers().get(USER_AGENT))
.flatten()
.map(|header| header.to_str().unwrap_or("unknown"))
.map(|s| s.split(';').map(str::trim).collect::<Vec<&str>>());
let user_agent = request.map(|req| extract_user_agents(req));
send["user-agent"] = json!(user_agent);
let event = Track {
@@ -210,10 +226,30 @@ impl Segment {
"server_provider": std::env::var("MEILI_SERVER_PROVIDER").ok(),
})
});
let infos = json!({
"env": opt.env.clone(),
"has_snapshot": opt.schedule_snapshot,
});
// The infos are all CLI options, except those containing sensitive information.
// We consider information sensitive if it contains a path, an address, or a key.
let infos = {
// First we see if any sensitive fields were used.
let db_path = opt.db_path != PathBuf::from("./data.ms");
let import_dump = opt.import_dump.is_some();
let dumps_dir = opt.dumps_dir != PathBuf::from("dumps/");
let import_snapshot = opt.import_snapshot.is_some();
let snapshots_dir = opt.snapshot_dir != PathBuf::from("snapshots/");
let http_addr = opt.http_addr != "127.0.0.1:7700";
let mut infos = serde_json::to_value(opt).unwrap();
// Then we overwrite every sensitive field with a boolean representing whether
// the feature was used or not.
infos["db_path"] = json!(db_path);
infos["import_dump"] = json!(import_dump);
infos["dumps_dir"] = json!(dumps_dir);
infos["import_snapshot"] = json!(import_snapshot);
infos["snapshot_dir"] = json!(snapshots_dir);
infos["http_addr"] = json!(http_addr);
infos
};
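For illustration only, the resulting payload keeps plain options as-is and replaces the sensitive ones with booleans; a hypothetical shape (all values invented):

use serde_json::json;

fn main() {
    let infos = json!({
        "env": "development",
        "db_path": true,       // a non-default path was configured
        "import_dump": false,  // no dump was imported
        "snapshot_dir": false, // default "snapshots/"
        "http_addr": false,    // default 127.0.0.1:7700
    });
    println!("{infos}");
}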
let number_of_documents = stats
.indexes
@@ -259,7 +295,7 @@ impl Segment {
}
async fn tick(&mut self, meilisearch: MeiliSearch) {
if let Ok(stats) = meilisearch.get_all_stats(&None).await {
if let Ok(stats) = meilisearch.get_all_stats(&SearchRules::default()).await {
let _ = self
.batcher
.push(Identify {
@@ -301,6 +337,8 @@ impl Segment {
#[derive(Default)]
pub struct SearchAggregator {
timestamp: Option<OffsetDateTime>,
// context
user_agents: HashSet<String>,
@@ -331,11 +369,20 @@ pub struct SearchAggregator {
// pagination
max_limit: usize,
max_offset: usize,
// formatting
highlight_pre_tag: bool,
highlight_post_tag: bool,
crop_marker: bool,
show_matches_position: bool,
crop_length: bool,
}
impl SearchAggregator {
pub fn from_query(query: &SearchQuery, request: &HttpRequest) -> Self {
let mut ret = Self::default();
ret.timestamp = Some(OffsetDateTime::now_utc());
ret.total_received = 1;
ret.user_agents = extract_user_agents(request).into_iter().collect();
@@ -379,6 +426,12 @@ impl SearchAggregator {
ret.max_limit = query.limit;
ret.max_offset = query.offset.unwrap_or_default();
ret.highlight_pre_tag = query.highlight_pre_tag != DEFAULT_HIGHLIGHT_PRE_TAG();
ret.highlight_post_tag = query.highlight_post_tag != DEFAULT_HIGHLIGHT_POST_TAG();
ret.crop_marker = query.crop_marker != DEFAULT_CROP_MARKER();
ret.crop_length = query.crop_length != DEFAULT_CROP_LENGTH();
ret.show_matches_position = query.show_matches_position;
ret
}
@@ -389,6 +442,10 @@ impl SearchAggregator {
/// Aggregate one [SearchAggregator] into another.
pub fn aggregate(&mut self, mut other: Self) {
if self.timestamp.is_none() {
self.timestamp = other.timestamp;
}
// context
for user_agent in other.user_agents.into_iter() {
self.user_agents.insert(user_agent);
@@ -422,6 +479,12 @@ impl SearchAggregator {
// pagination
self.max_limit = self.max_limit.max(other.max_limit);
self.max_offset = self.max_offset.max(other.max_offset);
self.highlight_pre_tag |= other.highlight_pre_tag;
self.highlight_post_tag |= other.highlight_post_tag;
self.crop_marker |= other.crop_marker;
self.show_matches_position |= other.show_matches_position;
self.crop_length |= other.crop_length;
}
pub fn into_event(self, user: &User, event_name: &str) -> Option<Track> {
@@ -432,7 +495,7 @@ impl SearchAggregator {
let percentile_99th = 0.99 * (self.total_succeeded as f64 - 1.) + 1.;
// we get all the values in a sorted manner
let time_spent = self.time_spent.into_sorted_vec();
// We are only intersted by the slowest value of the 99th fastest results
// We are only interested by the slowest value of the 99th fastest results
let time_spent = time_spent.get(percentile_99th as usize);
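A worked example of the index computed above: with 200 succeeded searches, 0.99 * (200 - 1) + 1 = 198.01, truncated to 198, so `get(198)` returns the 199th fastest response time, i.e. the slowest of the 99% fastest.

fn main() {
    let total_succeeded = 200usize;
    let percentile_99th = 0.99 * (total_succeeded as f64 - 1.) + 1.; // 198.01
    assert_eq!(percentile_99th as usize, 198);
}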
let properties = json!({
@@ -459,9 +522,17 @@ impl SearchAggregator {
"max_limit": self.max_limit,
"max_offset": self.max_offset,
},
"formatting": {
"highlight_pre_tag": self.highlight_pre_tag,
"highlight_post_tag": self.highlight_post_tag,
"crop_marker": self.crop_marker,
"show_matches_position": self.show_matches_position,
"crop_length": self.crop_length,
},
});
Some(Track {
timestamp: self.timestamp,
user: user.clone(),
event: event_name.to_string(),
properties,
@@ -473,6 +544,8 @@ impl SearchAggregator {
#[derive(Default)]
pub struct DocumentsAggregator {
timestamp: Option<OffsetDateTime>,
// set to true when at least one request was received
updated: bool,
@@ -491,6 +564,7 @@ impl DocumentsAggregator {
request: &HttpRequest,
) -> Self {
let mut ret = Self::default();
ret.timestamp = Some(OffsetDateTime::now_utc());
ret.updated = true;
ret.user_agents = extract_user_agents(request).into_iter().collect();
@@ -500,8 +574,8 @@ impl DocumentsAggregator {
let content_type = request
.headers()
.get(CONTENT_TYPE)
.map(|s| s.to_str().unwrap_or("unkown"))
.unwrap()
.and_then(|s| s.to_str().ok())
.unwrap_or("unknown")
.to_string();
ret.content_types.insert(content_type);
ret.index_creation = index_creation;
@@ -511,15 +585,19 @@ impl DocumentsAggregator {
/// Aggregate one [DocumentsAggregator] into another.
pub fn aggregate(&mut self, other: Self) {
if self.timestamp.is_none() {
self.timestamp = other.timestamp;
}
self.updated |= other.updated;
// we can't create a union because there is no `into_union` method
for user_agent in other.user_agents.into_iter() {
for user_agent in other.user_agents {
self.user_agents.insert(user_agent);
}
for primary_key in other.primary_keys.into_iter() {
for primary_key in other.primary_keys {
self.primary_keys.insert(primary_key);
}
for content_type in other.content_types.into_iter() {
for content_type in other.content_types {
self.content_types.insert(content_type);
}
self.index_creation |= other.index_creation;
@@ -537,6 +615,7 @@ impl DocumentsAggregator {
});
Some(Track {
timestamp: self.timestamp,
user: user.clone(),
event: event_name.to_string(),
properties,


@@ -1,6 +1,6 @@
use actix_web as aweb;
use aweb::error::{JsonPayloadError, QueryPayloadError};
use meilisearch_error::{Code, ErrorCode, ResponseError};
use meilisearch_types::error::{Code, ErrorCode, ResponseError};
#[derive(Debug, thiserror::Error)]
pub enum MeilisearchHttpError {


@@ -1,11 +1,11 @@
use meilisearch_error::{Code, ErrorCode};
use meilisearch_types::error::{Code, ErrorCode};
#[derive(Debug, thiserror::Error)]
pub enum AuthenticationError {
#[error("The Authorization header is missing. It must use the bearer authorization method.")]
MissingAuthorizationHeader,
#[error("The provided API key is invalid.")]
InvalidToken(String),
InvalidToken,
// Triggered on configuration error.
#[error("An internal error has occurred. `Irretrievable state`.")]
IrretrievableState,
@@ -15,7 +15,7 @@ impl ErrorCode for AuthenticationError {
fn error_code(&self) -> Code {
match self {
AuthenticationError::MissingAuthorizationHeader => Code::MissingAuthorizationHeader,
AuthenticationError::InvalidToken(_) => Code::InvalidToken,
AuthenticationError::InvalidToken => Code::InvalidToken,
AuthenticationError::IrretrievableState => Code::Internal,
}
}


@@ -2,28 +2,80 @@ mod error;
use std::marker::PhantomData;
use std::ops::Deref;
use std::pin::Pin;
use actix_web::FromRequest;
use futures::future::err;
use futures::future::{ok, Ready};
use meilisearch_error::ResponseError;
use error::AuthenticationError;
use futures::future::err;
use futures::Future;
use meilisearch_auth::{AuthController, AuthFilter};
use meilisearch_types::error::{Code, ResponseError};
pub struct GuardedData<T, D> {
pub struct GuardedData<P, D> {
data: D,
filters: AuthFilter,
_marker: PhantomData<T>,
_marker: PhantomData<P>,
}
impl<T, D> GuardedData<T, D> {
impl<P, D> GuardedData<P, D> {
pub fn filters(&self) -> &AuthFilter {
&self.filters
}
async fn auth_bearer(
auth: AuthController,
token: String,
index: Option<String>,
data: Option<D>,
) -> Result<Self, ResponseError>
where
P: Policy + 'static,
{
match Self::authenticate(auth, token, index).await? {
Some(filters) => match data {
Some(data) => Ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => Err(AuthenticationError::IrretrievableState.into()),
},
None => Err(AuthenticationError::InvalidToken.into()),
}
}
async fn auth_token(auth: AuthController, data: Option<D>) -> Result<Self, ResponseError>
where
P: Policy + 'static,
{
match Self::authenticate(auth, String::new(), None).await? {
Some(filters) => match data {
Some(data) => Ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => Err(AuthenticationError::IrretrievableState.into()),
},
None => Err(AuthenticationError::MissingAuthorizationHeader.into()),
}
}
async fn authenticate(
auth: AuthController,
token: String,
index: Option<String>,
) -> Result<Option<AuthFilter>, ResponseError>
where
P: Policy + 'static,
{
tokio::task::spawn_blocking(move || P::authenticate(auth, token.as_ref(), index.as_deref()))
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))
}
}
impl<T, D> Deref for GuardedData<T, D> {
impl<P, D> Deref for GuardedData<P, D> {
type Target = D;
fn deref(&self) -> &Self::Target {
@@ -32,11 +84,9 @@ impl<T, D> Deref for GuardedData<T, D> {
}
impl<P: Policy + 'static, D: 'static + Clone> FromRequest for GuardedData<P, D> {
type Config = ();
type Error = ResponseError;
type Future = Ready<Result<Self, Self::Error>>;
type Future = Pin<Box<dyn Future<Output = Result<Self, Self::Error>>>>;
fn from_request(
req: &actix_web::HttpRequest,
@@ -52,37 +102,23 @@ impl<P: Policy + 'static, D: 'static + Clone> FromRequest for GuardedData<P, D>
Some("Bearer") => {
// TODO: find a less hardcoded way?
let index = req.match_info().get("index_uid");
let token = type_token.next().unwrap_or("unknown");
match P::authenticate(auth, token, index) {
Some(filters) => match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
},
None => {
let token = token.to_string();
err(AuthenticationError::InvalidToken(token).into())
}
match type_token.next() {
Some(token) => Box::pin(Self::auth_bearer(
auth,
token.to_string(),
index.map(String::from),
req.app_data::<D>().cloned(),
)),
None => Box::pin(err(AuthenticationError::InvalidToken.into())),
}
}
_otherwise => err(AuthenticationError::MissingAuthorizationHeader.into()),
},
None => match P::authenticate(auth, "", None) {
Some(filters) => match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
filters,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
},
None => err(AuthenticationError::MissingAuthorizationHeader.into()),
_otherwise => {
Box::pin(err(AuthenticationError::MissingAuthorizationHeader.into()))
}
},
None => Box::pin(Self::auth_token(auth, req.app_data::<D>().cloned())),
},
None => err(AuthenticationError::IrretrievableState.into()),
None => Box::pin(err(AuthenticationError::IrretrievableState.into())),
}
}
}
@@ -92,27 +128,39 @@ pub trait Policy {
}
pub mod policies {
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;
use uuid::Uuid;
use crate::extractors::authentication::Policy;
use meilisearch_auth::{Action, AuthController, AuthFilter};
use meilisearch_auth::{Action, AuthController, AuthFilter, SearchRules};
// reexport actions in policies in order to be used in routes configuration.
pub use meilisearch_auth::actions;
pub struct MasterPolicy;
fn tenant_token_validation() -> Validation {
let mut validation = Validation::default();
validation.validate_exp = false;
validation.required_spec_claims.remove("exp");
validation.algorithms = vec![Algorithm::HS256, Algorithm::HS384, Algorithm::HS512];
validation
}
impl Policy for MasterPolicy {
fn authenticate(
auth: AuthController,
token: &str,
_index: Option<&str>,
) -> Option<AuthFilter> {
if let Some(master_key) = auth.get_master_key() {
if master_key == token {
return Some(AuthFilter::default());
}
}
/// Extracts the key id used to sign the payload, without performing any validation.
fn extract_key_id(token: &str) -> Option<Uuid> {
let mut validation = tenant_token_validation();
validation.insecure_disable_signature_validation();
let dummy_key = DecodingKey::from_secret(b"secret");
let token_data = decode::<Claims>(token, &dummy_key, &validation).ok()?;
None
}
// get token fields without validating it.
let Claims { api_key_uid, .. } = token_data.claims;
Some(api_key_uid)
}
fn is_keys_action(action: u8) -> bool {
use actions::*;
matches!(action, KEYS_GET | KEYS_CREATE | KEYS_UPDATE | KEYS_DELETE)
}
pub struct ActionPolicy<const A: u8>;
@@ -124,19 +172,83 @@ pub mod policies {
index: Option<&str>,
) -> Option<AuthFilter> {
// authenticate if token is the master key.
if auth.get_master_key().map_or(true, |mk| mk == token) {
// master key can only have access to keys routes.
// if master key is None only keys routes are inaccessible.
if auth
.get_master_key()
.map_or_else(|| !is_keys_action(A), |mk| mk == token)
{
return Some(AuthFilter::default());
}
// authenticate if token is allowed.
if let Some(action) = Action::from_repr(A) {
let index = index.map(|i| i.as_bytes());
if let Ok(true) = auth.authenticate(token.as_bytes(), action, index) {
return auth.get_key_filters(token).ok();
// Tenant token
if let Some(filters) = ActionPolicy::<A>::authenticate_tenant_token(&auth, token, index)
{
return Some(filters);
} else if let Some(action) = Action::from_repr(A) {
// API key
if let Ok(Some(uid)) = auth.get_optional_uid_from_encoded_key(token.as_bytes()) {
if let Ok(true) = auth.is_key_authorized(uid, action, index) {
return auth.get_key_filters(uid, None).ok();
}
}
}
None
}
}
impl<const A: u8> ActionPolicy<A> {
fn authenticate_tenant_token(
auth: &AuthController,
token: &str,
index: Option<&str>,
) -> Option<AuthFilter> {
// Only the search action can be performed with a tenant token.
if A != actions::SEARCH {
return None;
}
let uid = extract_key_id(token)?;
// check if parent key is authorized to do the action.
if auth.is_key_authorized(uid, Action::Search, index).ok()? {
// Check if tenant token is valid.
let key = auth.generate_key(uid)?;
let data = decode::<Claims>(
token,
&DecodingKey::from_secret(key.as_bytes()),
&tenant_token_validation(),
)
.ok()?;
// Check index access if an index restriction is provided.
if let Some(index) = index {
if !data.claims.search_rules.is_index_authorized(index) {
return None;
}
}
// Check if token is expired.
if let Some(exp) = data.claims.exp {
if OffsetDateTime::now_utc().unix_timestamp() > exp {
return None;
}
}
return auth
.get_key_filters(uid, Some(data.claims.search_rules))
.ok();
}
None
}
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct Claims {
search_rules: SearchRules,
exp: Option<i64>,
api_key_uid: Uuid,
}
}
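To make the claims shape concrete, a hedged sketch of minting a tenant token client-side with the same `jsonwebtoken` crate; the search rules, expiry, and uid are invented, and the token is signed with the parent API key:

use jsonwebtoken::{encode, EncodingKey, Header};
use serde_json::json;

fn main() -> Result<(), jsonwebtoken::errors::Error> {
    // Claims mirror the struct above (camelCase): searchRules, exp, apiKeyUid.
    let claims = json!({
        "searchRules": ["movies"],                            // hypothetical rules
        "exp": 2_000_000_000i64,                              // optional unix timestamp
        "apiKeyUid": "9c2a43fc-9d3e-4c9f-8bcf-2b6f0f5f0a11",  // uid of the parent key
    });
    // Header::default() is HS256, one of the algorithms accepted by
    // tenant_token_validation above.
    let token = encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(b"the-parent-api-key"),
    )?;
    println!("{token}");
    Ok(())
}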


@@ -1,3 +1,4 @@
pub mod payload;
#[macro_use]
pub mod authentication;
pub mod sequential_extractor;


@@ -28,8 +28,6 @@ impl Default for PayloadConfig {
}
impl FromRequest for Payload {
type Config = PayloadConfig;
type Error = PayloadError;
type Future = Ready<Result<Payload, Self::Error>>;
@@ -39,7 +37,7 @@ impl FromRequest for Payload {
let limit = req
.app_data::<PayloadConfig>()
.map(|c| c.limit)
.unwrap_or(Self::Config::default().limit);
.unwrap_or(PayloadConfig::default().limit);
ready(Ok(Payload {
payload: payload.take(),
limit,


@@ -0,0 +1,148 @@
#![allow(non_snake_case)]
use std::{future::Future, pin::Pin, task::Poll};
use actix_web::{dev::Payload, FromRequest, Handler, HttpRequest};
use pin_project_lite::pin_project;
/// `SeqHandler` is an actix `Handler` that enforces that extractor errors are returned in the
/// same order as the extractors are defined in the wrapped handler. This is needed because, by
/// default, actix resolves the extractors concurrently, whereas we always need the authentication
/// extractor to throw first.
#[derive(Clone)]
pub struct SeqHandler<H>(pub H);
pub struct SeqFromRequest<T>(T);
/// This macro implements `FromRequest` for handlers of arbitrary arity, except arity one, which
/// is useless anyway.
macro_rules! gen_seq {
($ty:ident; $($T:ident)+) => {
pin_project! {
pub struct $ty<$($T: FromRequest), +> {
$(
#[pin]
$T: ExtractFuture<$T::Future, $T, $T::Error>,
)+
}
}
impl<$($T: FromRequest), +> Future for $ty<$($T),+> {
type Output = Result<SeqFromRequest<($($T),+)>, actix_web::Error>;
fn poll(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Self::Output> {
let mut this = self.project();
let mut count_fut = 0;
let mut count_finished = 0;
$(
count_fut += 1;
match this.$T.as_mut().project() {
ExtractProj::Future { fut } => match fut.poll(cx) {
Poll::Ready(Ok(output)) => {
count_finished += 1;
let _ = this
.$T
.as_mut()
.project_replace(ExtractFuture::Done { output });
}
Poll::Ready(Err(error)) => {
count_finished += 1;
let _ = this
.$T
.as_mut()
.project_replace(ExtractFuture::Error { error });
}
Poll::Pending => (),
},
ExtractProj::Done { .. } => count_finished += 1,
ExtractProj::Error { .. } => {
// short circuit if all previous are finished and we had an error.
if count_finished == count_fut {
match this.$T.project_replace(ExtractFuture::Empty) {
ExtractReplaceProj::Error { error } => {
return Poll::Ready(Err(error.into()))
}
_ => unreachable!("Invalid future state"),
}
} else {
count_finished += 1;
}
}
ExtractProj::Empty => unreachable!("From request polled after being finished. {}", stringify!($T)),
}
)+
if count_fut == count_finished {
let result = (
$(
match this.$T.project_replace(ExtractFuture::Empty) {
ExtractReplaceProj::Done { output } => output,
ExtractReplaceProj::Error { error } => return Poll::Ready(Err(error.into())),
_ => unreachable!("Invalid future state"),
},
)+
);
Poll::Ready(Ok(SeqFromRequest(result)))
} else {
Poll::Pending
}
}
}
impl<$($T: FromRequest,)+> FromRequest for SeqFromRequest<($($T,)+)> {
type Error = actix_web::Error;
type Future = $ty<$($T),+>;
fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future {
$ty {
$(
$T: ExtractFuture::Future {
fut: $T::from_request(req, payload),
},
)+
}
}
}
impl<Han, $($T: FromRequest),+> Handler<SeqFromRequest<($($T),+)>> for SeqHandler<Han>
where
Han: Handler<($($T),+)>,
{
type Output = Han::Output;
type Future = Han::Future;
fn call(&self, args: SeqFromRequest<($($T),+)>) -> Self::Future {
self.0.call(args.0)
}
}
};
}
// Not working for a single argument, but then, it is not really necessary.
// gen_seq! { SeqFromRequestFut1; A }
gen_seq! { SeqFromRequestFut2; A B }
gen_seq! { SeqFromRequestFut3; A B C }
gen_seq! { SeqFromRequestFut4; A B C D }
gen_seq! { SeqFromRequestFut5; A B C D E }
gen_seq! { SeqFromRequestFut6; A B C D E F }
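Wiring is then a one-word change at route-configuration time; a minimal sketch with a hypothetical two-extractor handler (the macro only generates support for arities 2 through 6):

use actix_web::{web, HttpRequest, HttpResponse};

// Wrapped in SeqHandler so the first extractor's error (typically
// authentication) is always the one returned.
async fn demo(_req: HttpRequest, body: web::Json<serde_json::Value>) -> HttpResponse {
    HttpResponse::Ok().json(body.into_inner())
}

pub fn configure(cfg: &mut web::ServiceConfig) {
    cfg.service(web::resource("/demo").route(web::post().to(SeqHandler(demo))));
}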
pin_project! {
#[project = ExtractProj]
#[project_replace = ExtractReplaceProj]
enum ExtractFuture<Fut, Res, Err> {
Future {
#[pin]
fut: Fut,
},
Done {
output: Res,
},
Error {
error: Err,
},
Empty,
}
}


@@ -1,10 +1,11 @@
use meilisearch_lib::heed::Env;
use walkdir::WalkDir;
pub trait EnvSizer {
fn size(&self) -> u64;
}
impl EnvSizer for heed::Env {
impl EnvSizer for Env {
fn size(&self) -> u64 {
WalkDir::new(self.path())
.into_iter()


@@ -2,14 +2,14 @@
#[macro_use]
pub mod error;
pub mod analytics;
mod task;
pub mod task;
#[macro_use]
pub mod extractors;
pub mod helpers;
pub mod option;
pub mod routes;
use std::sync::Arc;
use std::sync::{atomic::AtomicBool, Arc};
use std::time::Duration;
use crate::error::MeilisearchHttpError;
@@ -25,16 +25,29 @@ use extractors::payload::PayloadConfig;
use meilisearch_auth::AuthController;
use meilisearch_lib::MeiliSearch;
pub static AUTOBATCHING_ENABLED: AtomicBool = AtomicBool::new(false);
pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<MeiliSearch> {
let mut meilisearch = MeiliSearch::builder();
// enable autobatching?
AUTOBATCHING_ENABLED.store(
opt.scheduler_options.enable_auto_batching,
std::sync::atomic::Ordering::Relaxed,
);
meilisearch
.set_max_index_size(opt.max_index_size.get_bytes() as usize)
.set_max_task_store_size(opt.max_task_db_size.get_bytes() as usize)
// snapshot
.set_ignore_missing_snapshot(opt.ignore_missing_snapshot)
.set_ignore_snapshot_if_db_exists(opt.ignore_snapshot_if_db_exists)
.set_dump_dst(opt.dumps_dir.clone())
.set_snapshot_interval(Duration::from_secs(opt.snapshot_interval_sec))
.set_snapshot_dir(opt.snapshot_dir.clone());
.set_snapshot_dir(opt.snapshot_dir.clone())
// dump
.set_ignore_missing_dump(opt.ignore_missing_dump)
.set_ignore_dump_if_db_exists(opt.ignore_dump_if_db_exists)
.set_dump_dst(opt.dumps_dir.clone());
if let Some(ref path) = opt.import_snapshot {
meilisearch.set_import_snapshot(path.clone());
@@ -48,7 +61,11 @@ pub fn setup_meilisearch(opt: &Opt) -> anyhow::Result<MeiliSearch> {
meilisearch.set_schedule_snapshot();
}
meilisearch.build(opt.db_path.clone(), opt.indexer_options.clone())
meilisearch.build(
opt.db_path.clone(),
opt.indexer_options.clone(),
opt.scheduler_options.clone(),
)
}
pub fn configure_data(
@@ -90,7 +107,7 @@ pub fn configure_data(
#[cfg(feature = "mini-dashboard")]
pub fn dashboard(config: &mut web::ServiceConfig, enable_frontend: bool) {
use actix_web::HttpResponse;
use actix_web_static_files::Resource;
use static_files::Resource;
mod generated {
include!(concat!(env!("OUT_DIR"), "/generated.rs"));
@@ -105,13 +122,13 @@ pub fn dashboard(config: &mut web::ServiceConfig, enable_frontend: bool) {
} = resource;
// Redirect index.html to /
if path == "index.html" {
config.service(web::resource("/").route(
web::get().to(move || HttpResponse::Ok().content_type(mime_type).body(data)),
));
config.service(web::resource("/").route(web::get().to(move || async move {
HttpResponse::Ok().content_type(mime_type).body(data)
})));
} else {
config.service(web::resource(path).route(
web::get().to(move || HttpResponse::Ok().content_type(mime_type).body(data)),
));
config.service(web::resource(path).route(web::get().to(move || async move {
HttpResponse::Ok().content_type(mime_type).body(data)
})));
}
}
} else {
@@ -131,10 +148,10 @@ macro_rules! create_app {
use actix_web::middleware::TrailingSlash;
use actix_web::App;
use actix_web::{middleware, web};
use meilisearch_error::ResponseError;
use meilisearch_http::error::MeilisearchHttpError;
use meilisearch_http::routes;
use meilisearch_http::{configure_data, dashboard};
use meilisearch_types::error::ResponseError;
App::new()
.configure(|s| configure_data(s, $data.clone(), $auth.clone(), &$opt, $analytics))
@@ -143,6 +160,7 @@ macro_rules! create_app {
.wrap(
Cors::default()
.send_wildcard()
.allow_any_header()
.allow_any_origin()
.allow_any_method()
.max_age(86_400), // 24h


@@ -1,13 +1,14 @@
use std::env;
use std::sync::Arc;
use actix_web::http::KeepAlive;
use actix_web::HttpServer;
use clap::Parser;
use meilisearch_auth::AuthController;
use meilisearch_http::analytics;
use meilisearch_http::analytics::Analytics;
use meilisearch_http::{create_app, setup_meilisearch, Opt};
use meilisearch_lib::MeiliSearch;
use structopt::StructOpt;
#[cfg(target_os = "linux")]
#[global_allocator]
@@ -29,7 +30,7 @@ fn setup(opt: &Opt) -> anyhow::Result<()> {
#[actix_web::main]
async fn main() -> anyhow::Result<()> {
let opt = Opt::from_args();
let opt = Opt::parse();
setup(&opt)?;
@@ -50,7 +51,7 @@ async fn main() -> anyhow::Result<()> {
let auth_controller = AuthController::new(&opt.db_path, &opt.master_key)?;
#[cfg(all(not(debug_assertions), feature = "analytics"))]
let (analytics, user) = if opt.analytics() {
let (analytics, user) = if !opt.no_analytics {
analytics::SegmentAnalytics::new(&opt, &meilisearch).await
} else {
analytics::MockAnalytics::new(&opt)
@@ -83,7 +84,8 @@ async fn run_http(
)
})
// Disabling signals allows the server to terminate immediately when a user enters CTRL-C
.disable_signals();
.disable_signals()
.keep_alive(KeepAlive::Os);
if let Some(config) = opt.get_ssl_config()? {
http_server
@@ -101,14 +103,14 @@ pub fn print_launch_resume(opt: &Opt, user: &str) {
let commit_date = option_env!("VERGEN_GIT_COMMIT_TIMESTAMP").unwrap_or("unknown");
let ascii_name = r#"
888b d888 d8b 888 d8b .d8888b. 888
8888b d8888 Y8P 888 Y8P d88P Y88b 888
88888b.d88888 888 Y88b. 888
888Y88888P888 .d88b. 888 888 888 "Y888b. .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 "Y88b. d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "888 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 Y88b d88P Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 "Y8888P" "Y8888 "Y888888 888 "Y8888P 888 888
888b d888 d8b 888 d8b 888
8888b d8888 Y8P 888 Y8P 888
88888b.d88888 888 888
888Y88888P888 .d88b. 888 888 888 .d8888b .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 88K d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "Y8888b. 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 X88 Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 88888P' "Y8888 "Y888888 888 "Y8888P 888 888
"#;
eprintln!("{}", ascii_name);
@@ -125,10 +127,10 @@ pub fn print_launch_resume(opt: &Opt, user: &str) {
#[cfg(all(not(debug_assertions), feature = "analytics"))]
{
if opt.analytics() {
if !opt.no_analytics {
eprintln!(
"
Thank you for using MeiliSearch!
Thank you for using Meilisearch!
We collect anonymized analytics to improve our product and your experience. To learn more, including how to turn off analytics, visit our dedicated documentation page: https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html
@@ -146,7 +148,7 @@ Anonymous telemetry:\t\"Enabled\""
eprintln!();
if opt.master_key.is_some() {
eprintln!("A Master Key has been set. Requests to MeiliSearch won't be authorized unless you provide an authentication key.");
eprintln!("A Master Key has been set. Requests to Meilisearch won't be authorized unless you provide an authentication key.");
} else {
eprintln!("No master key found; The server will accept unidentified requests. \
If you need some protection in development mode, please export a key: export MEILI_MASTER_KEY=xxx");


@@ -4,144 +4,169 @@ use std::path::PathBuf;
use std::sync::Arc;
use byte_unit::Byte;
use meilisearch_lib::options::IndexerOpts;
use rustls::internal::pemfile::{certs, pkcs8_private_keys, rsa_private_keys};
use clap::Parser;
use meilisearch_lib::options::{IndexerOpts, SchedulerConfig};
use rustls::{
AllowAnyAnonymousOrAuthenticatedClient, AllowAnyAuthenticatedClient, NoClientAuth,
server::{
AllowAnyAnonymousOrAuthenticatedClient, AllowAnyAuthenticatedClient,
ServerSessionMemoryCache,
},
RootCertStore,
};
use structopt::StructOpt;
use rustls_pemfile::{certs, pkcs8_private_keys, rsa_private_keys};
use serde::Serialize;
const POSSIBLE_ENV: [&str; 2] = ["development", "production"];
#[derive(Debug, Clone, StructOpt)]
#[derive(Debug, Clone, Parser, Serialize)]
#[clap(version)]
pub struct Opt {
/// The destination where the database must be created.
#[structopt(long, env = "MEILI_DB_PATH", default_value = "./data.ms")]
#[clap(long, env = "MEILI_DB_PATH", default_value = "./data.ms")]
pub db_path: PathBuf,
/// The address on which the http server will listen.
#[structopt(long, env = "MEILI_HTTP_ADDR", default_value = "127.0.0.1:7700")]
#[clap(long, env = "MEILI_HTTP_ADDR", default_value = "127.0.0.1:7700")]
pub http_addr: String,
/// The master key allowing you to do everything on the server.
#[structopt(long, env = "MEILI_MASTER_KEY")]
#[serde(skip)]
#[clap(long, env = "MEILI_MASTER_KEY")]
pub master_key: Option<String>,
/// This environment variable must be set to `production` if you are running in production.
/// If the server is running in development mode, more logs will be displayed,
/// and the master key can be omitted, which implies that there is no security on the update routes.
/// This is useful for debugging when integrating the engine with another service.
#[structopt(long, env = "MEILI_ENV", default_value = "development", possible_values = &POSSIBLE_ENV)]
#[clap(long, env = "MEILI_ENV", default_value = "development", possible_values = &POSSIBLE_ENV)]
pub env: String,
/// Do not send analytics to Meili.
#[cfg(all(not(debug_assertions), feature = "analytics"))]
#[structopt(long, env = "MEILI_NO_ANALYTICS")]
pub no_analytics: Option<Option<bool>>,
#[serde(skip)] // we can't send true
#[clap(long, env = "MEILI_NO_ANALYTICS")]
pub no_analytics: bool,
/// The maximum size, in bytes, of the main lmdb database directory
#[structopt(long, env = "MEILI_MAX_INDEX_SIZE", default_value = "100 GiB")]
#[clap(long, env = "MEILI_MAX_INDEX_SIZE", default_value = "100 GiB")]
pub max_index_size: Byte,
/// The maximum size, in bytes, of the update lmdb database directory
#[structopt(long, env = "MEILI_MAX_TASK_DB_SIZE", default_value = "100 GiB")]
#[clap(long, env = "MEILI_MAX_TASK_DB_SIZE", default_value = "100 GiB")]
pub max_task_db_size: Byte,
/// The maximum size, in bytes, of accepted JSON payloads
#[structopt(long, env = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT", default_value = "100 MB")]
#[clap(long, env = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT", default_value = "100 MB")]
pub http_payload_size_limit: Byte,
/// Read server certificates from CERTFILE.
/// This should contain PEM-format certificates
/// in the right order (the first certificate should
/// certify KEYFILE, the last should be a root CA).
#[structopt(long, env = "MEILI_SSL_CERT_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_CERT_PATH", parse(from_os_str))]
pub ssl_cert_path: Option<PathBuf>,
/// Read private key from KEYFILE. This should be a RSA
/// private key or PKCS8-encoded private key, in PEM format.
#[structopt(long, env = "MEILI_SSL_KEY_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_KEY_PATH", parse(from_os_str))]
pub ssl_key_path: Option<PathBuf>,
/// Enable client authentication, and accept certificates
/// signed by those roots provided in CERTFILE.
#[structopt(long, env = "MEILI_SSL_AUTH_PATH", parse(from_os_str))]
#[clap(long, env = "MEILI_SSL_AUTH_PATH", parse(from_os_str))]
#[serde(skip)]
pub ssl_auth_path: Option<PathBuf>,
/// Read DER-encoded OCSP response from OCSPFILE and staple to certificate.
/// Optional
#[structopt(long, env = "MEILI_SSL_OCSP_PATH", parse(from_os_str))]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_OCSP_PATH", parse(from_os_str))]
pub ssl_ocsp_path: Option<PathBuf>,
/// Send a fatal alert if the client does not complete client authentication.
#[structopt(long, env = "MEILI_SSL_REQUIRE_AUTH")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_REQUIRE_AUTH")]
pub ssl_require_auth: bool,
/// SSL support session resumption
#[structopt(long, env = "MEILI_SSL_RESUMPTION")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_RESUMPTION")]
pub ssl_resumption: bool,
/// SSL support tickets.
#[structopt(long, env = "MEILI_SSL_TICKETS")]
#[serde(skip)]
#[clap(long, env = "MEILI_SSL_TICKETS")]
pub ssl_tickets: bool,
/// Defines the path of the snapshot file to import.
/// This option will, by default, stop the process if a database already exists or if no snapshot exists at
/// the given path. If this option is not specified, no snapshot is imported.
#[structopt(long)]
#[clap(long)]
pub import_snapshot: Option<PathBuf>,
/// The engine will ignore a missing snapshot and not return an error in such a case.
#[structopt(long, requires = "import-snapshot")]
#[clap(long, requires = "import-snapshot")]
pub ignore_missing_snapshot: bool,
/// The engine will skip snapshot importation and not return an error in such a case.
#[structopt(long, requires = "import-snapshot")]
#[clap(long, requires = "import-snapshot")]
pub ignore_snapshot_if_db_exists: bool,
/// Defines the directory path where meilisearch will create a snapshot each snapshot_time_gap.
#[structopt(long, env = "MEILI_SNAPSHOT_DIR", default_value = "snapshots/")]
#[clap(long, env = "MEILI_SNAPSHOT_DIR", default_value = "snapshots/")]
pub snapshot_dir: PathBuf,
/// Activate snapshot scheduling.
#[structopt(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
#[clap(long, env = "MEILI_SCHEDULE_SNAPSHOT")]
pub schedule_snapshot: bool,
/// Defines time interval, in seconds, between each snapshot creation.
#[structopt(long, env = "MEILI_SNAPSHOT_INTERVAL_SEC", default_value = "86400")] // 24h
#[clap(long, env = "MEILI_SNAPSHOT_INTERVAL_SEC", default_value = "86400")] // 24h
pub snapshot_interval_sec: u64,
/// Folder where dumps are created when the dump route is called.
#[structopt(long, env = "MEILI_DUMPS_DIR", default_value = "dumps/")]
pub dumps_dir: PathBuf,
/// Import a dump from the specified path, must be a `.dump` file.
#[structopt(long, conflicts_with = "import-snapshot")]
#[clap(long, conflicts_with = "import-snapshot")]
pub import_dump: Option<PathBuf>,
/// If the dump doesn't exist, load or create the database specified by `db-path` instead.
#[clap(long, requires = "import-dump")]
pub ignore_missing_dump: bool,
/// Ignore the dump if a database already exists, and load that database instead.
#[clap(long, requires = "import-dump")]
pub ignore_dump_if_db_exists: bool,
/// Folder where dumps are created when the dump route is called.
#[clap(long, env = "MEILI_DUMPS_DIR", default_value = "dumps/")]
pub dumps_dir: PathBuf,
/// Set the log level
#[structopt(long, env = "MEILI_LOG_LEVEL", default_value = "info")]
#[clap(long, env = "MEILI_LOG_LEVEL", default_value = "info")]
pub log_level: String,
#[structopt(skip)]
#[serde(flatten)]
#[clap(flatten)]
pub indexer_options: IndexerOpts,
#[serde(flatten)]
#[clap(flatten)]
pub scheduler_options: SchedulerConfig,
}
impl Opt {
/// Whether analytics should be enabled or not.
#[cfg(all(not(debug_assertions), feature = "analytics"))]
pub fn analytics(&self) -> bool {
match self.no_analytics {
None => true,
Some(None) => false,
Some(Some(disabled)) => !disabled,
}
!self.no_analytics
}
pub fn get_ssl_config(&self) -> anyhow::Result<Option<rustls::ServerConfig>> {
if let (Some(cert_path), Some(key_path)) = (&self.ssl_cert_path, &self.ssl_key_path) {
let client_auth = match &self.ssl_auth_path {
let config = rustls::ServerConfig::builder().with_safe_defaults();
let config = match &self.ssl_auth_path {
Some(auth_path) => {
let roots = load_certs(auth_path.to_path_buf())?;
let mut client_auth_roots = RootCertStore::empty();
@@ -149,30 +174,32 @@ impl Opt {
client_auth_roots.add(&root).unwrap();
}
if self.ssl_require_auth {
AllowAnyAuthenticatedClient::new(client_auth_roots)
let verifier = AllowAnyAuthenticatedClient::new(client_auth_roots);
config.with_client_cert_verifier(verifier)
} else {
AllowAnyAnonymousOrAuthenticatedClient::new(client_auth_roots)
let verifier =
AllowAnyAnonymousOrAuthenticatedClient::new(client_auth_roots);
config.with_client_cert_verifier(verifier)
}
}
None => NoClientAuth::new(),
None => config.with_no_client_auth(),
};
let mut config = rustls::ServerConfig::new(client_auth);
config.key_log = Arc::new(rustls::KeyLogFile::new());
let certs = load_certs(cert_path.to_path_buf())?;
let privkey = load_private_key(key_path.to_path_buf())?;
let ocsp = load_ocsp(&self.ssl_ocsp_path)?;
config
.set_single_cert_with_ocsp_and_sct(certs, privkey, ocsp, vec![])
let mut config = config
.with_single_cert_with_ocsp_and_sct(certs, privkey, ocsp, vec![])
.map_err(|_| anyhow::anyhow!("bad certificates/private key"))?;
config.key_log = Arc::new(rustls::KeyLogFile::new());
if self.ssl_resumption {
config.set_persistence(rustls::ServerSessionMemoryCache::new(256));
config.session_storage = ServerSessionMemoryCache::new(256);
}
if self.ssl_tickets {
config.ticketer = rustls::Ticketer::new();
config.ticketer = rustls::Ticketer::new().unwrap();
}
Ok(Some(config))
@@ -186,7 +213,9 @@ fn load_certs(filename: PathBuf) -> anyhow::Result<Vec<rustls::Certificate>> {
let certfile =
fs::File::open(filename).map_err(|_| anyhow::anyhow!("cannot open certificate file"))?;
let mut reader = BufReader::new(certfile);
certs(&mut reader).map_err(|_| anyhow::anyhow!("cannot read certificate file"))
certs(&mut reader)
.map(|certs| certs.into_iter().map(rustls::Certificate).collect())
.map_err(|_| anyhow::anyhow!("cannot read certificate file"))
}
fn load_private_key(filename: PathBuf) -> anyhow::Result<rustls::PrivateKey> {
@@ -211,10 +240,10 @@ fn load_private_key(filename: PathBuf) -> anyhow::Result<rustls::PrivateKey> {
// prefer to load pkcs8 keys
if !pkcs8_keys.is_empty() {
Ok(pkcs8_keys[0].clone())
Ok(rustls::PrivateKey(pkcs8_keys[0].clone()))
} else {
assert!(!rsa_keys.is_empty());
Ok(rsa_keys[0].clone())
Ok(rustls::PrivateKey(rsa_keys[0].clone()))
}
}
@@ -230,3 +259,13 @@ fn load_ocsp(filename: &Option<PathBuf>) -> anyhow::Result<Vec<u8>> {
Ok(ret)
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_valid_opt() {
assert!(Opt::try_parse_from(Some("")).is_ok());
}
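A second, hypothetical parse showing the clap derive picking up a flag and leaving defaults intact (the flag name comes from the attribute above):

#[test]
fn test_parse_db_path() {
    let opt = Opt::try_parse_from(["meilisearch", "--db-path", "/tmp/data.ms"]).unwrap();
    assert_eq!(opt.db_path, std::path::PathBuf::from("/tmp/data.ms"));
    assert_eq!(opt.http_addr, "127.0.0.1:7700"); // default value preserved
}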
}


@@ -1,127 +1,160 @@
use std::str;
use actix_web::{web, HttpRequest, HttpResponse};
use chrono::SecondsFormat;
use log::debug;
use meilisearch_auth::{generate_key, Action, AuthController, Key};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use time::OffsetDateTime;
use uuid::Uuid;
use crate::extractors::authentication::{policies::*, GuardedData};
use meilisearch_error::ResponseError;
use meilisearch_auth::{error::AuthControllerError, Action, AuthController, Key};
use meilisearch_types::error::{Code, ResponseError};
use crate::extractors::{
authentication::{policies::*, GuardedData},
sequential_extractor::SeqHandler,
};
use crate::routes::Pagination;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::post().to(create_api_key))
.route(web::get().to(list_api_keys)),
.route(web::post().to(SeqHandler(create_api_key)))
.route(web::get().to(SeqHandler(list_api_keys))),
)
.service(
web::resource("/{api_key}")
.route(web::get().to(get_api_key))
.route(web::patch().to(patch_api_key))
.route(web::delete().to(delete_api_key)),
web::resource("/{key}")
.route(web::get().to(SeqHandler(get_api_key)))
.route(web::patch().to(SeqHandler(patch_api_key)))
.route(web::delete().to(SeqHandler(delete_api_key))),
);
}
pub async fn create_api_key(
auth_controller: GuardedData<MasterPolicy, AuthController>,
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_CREATE }>, AuthController>,
body: web::Json<Value>,
_req: HttpRequest,
) -> Result<HttpResponse, ResponseError> {
let key = auth_controller.create_key(body.into_inner()).await?;
let res = KeyView::from_key(key, auth_controller.get_master_key());
let v = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let key = auth_controller.create_key(v)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
debug!("returns: {:?}", res);
Ok(HttpResponse::Created().json(res))
}
pub async fn list_api_keys(
auth_controller: GuardedData<MasterPolicy, AuthController>,
_req: HttpRequest,
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_GET }>, AuthController>,
paginate: web::Query<Pagination>,
) -> Result<HttpResponse, ResponseError> {
let keys = auth_controller.list_keys().await?;
let res: Vec<_> = keys
.into_iter()
.map(|k| KeyView::from_key(k, auth_controller.get_master_key()))
.collect();
let page_view = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let keys = auth_controller.list_keys()?;
let page_view = paginate.auto_paginate_sized(
keys.into_iter()
.map(|k| KeyView::from_key(k, &auth_controller)),
);
debug!("returns: {:?}", res);
Ok(HttpResponse::Ok().json(res))
Ok(page_view)
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::Ok().json(page_view))
}
pub async fn get_api_key(
auth_controller: GuardedData<MasterPolicy, AuthController>,
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_GET }>, AuthController>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
// keep 8 first characters that are the ID of the API key.
let key = auth_controller.get_key(&path.api_key).await?;
let res = KeyView::from_key(key, auth_controller.get_master_key());
let key = path.into_inner().key;
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.get_key(uid)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
debug!("returns: {:?}", res);
Ok(HttpResponse::Ok().json(res))
}
pub async fn patch_api_key(
auth_controller: GuardedData<MasterPolicy, AuthController>,
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_UPDATE }>, AuthController>,
body: web::Json<Value>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
let key = auth_controller
// keep 8 first characters that are the ID of the API key.
.update_key(&path.api_key, body.into_inner())
.await?;
let res = KeyView::from_key(key, auth_controller.get_master_key());
let key = path.into_inner().key;
let body = body.into_inner();
let res = tokio::task::spawn_blocking(move || -> Result<_, AuthControllerError> {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
let key = auth_controller.update_key(uid, body)?;
Ok(KeyView::from_key(key, &auth_controller))
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
debug!("returns: {:?}", res);
Ok(HttpResponse::Ok().json(res))
}
pub async fn delete_api_key(
auth_controller: GuardedData<MasterPolicy, AuthController>,
auth_controller: GuardedData<ActionPolicy<{ actions::KEYS_DELETE }>, AuthController>,
path: web::Path<AuthParam>,
) -> Result<HttpResponse, ResponseError> {
// keep 8 first characters that are the ID of the API key.
auth_controller.delete_key(&path.api_key).await?;
let key = path.into_inner().key;
tokio::task::spawn_blocking(move || {
let uid =
Uuid::parse_str(&key).or_else(|_| auth_controller.get_uid_from_encoded_key(&key))?;
auth_controller.delete_key(uid)
})
.await
.map_err(|e| ResponseError::from_msg(e.to_string(), Code::Internal))??;
Ok(HttpResponse::NoContent().finish())
}
#[derive(Deserialize)]
pub struct AuthParam {
key: String,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
struct KeyView {
name: Option<String>,
description: Option<String>,
key: String,
uid: Uuid,
actions: Vec<Action>,
indexes: Vec<String>,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
expires_at: Option<OffsetDateTime>,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
created_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
updated_at: OffsetDateTime,
}
impl KeyView {
fn from_key(key: Key, auth: &AuthController) -> Self {
let generated_key = auth.generate_key(key.uid).unwrap_or_default();
KeyView {
name: key.name,
description: key.description,
key: generated_key,
uid: key.uid,
actions: key.actions,
indexes: key.indexes.into_iter().map(String::from).collect(),
expires_at: key.expires_at,
created_at: key.created_at,
updated_at: key.updated_at,
}
}
}
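Every handler above follows the same shape: the auth store does blocking LMDB work, so it runs on Tokio's blocking pool, and the double `??` unwraps first the `JoinError`, then the store error. A minimal standalone sketch of that pattern, assuming only a Tokio runtime (`run_blocking` is a hypothetical helper, not part of this codebase):
async fn run_blocking<T, E, F>(f: F) -> Result<T, String>
where
    F: FnOnce() -> Result<T, E> + Send + 'static,
    T: Send + 'static,
    E: std::fmt::Display + Send + 'static,
{
    let joined = tokio::task::spawn_blocking(f)
        .await
        .map_err(|e| e.to_string())?; // first `?`: the task panicked or was cancelled
    joined.map_err(|e| e.to_string()) // second error: the closure's own failure
}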

View File

@@ -1,16 +1,16 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde_json::json;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::SummarizedTaskView;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::post().to(SeqHandler(create_dump))));
}
pub async fn create_dump(
@@ -20,29 +20,8 @@ pub async fn create_dump(
) -> Result<HttpResponse, ResponseError> {
analytics.publish("Dump Created".to_string(), json!({}), Some(&req));
let res: SummarizedTaskView = meilisearch.register_dump_task().await?.into();
debug!("returns: {:?}", res);
Ok(HttpResponse::Accepted().json(res))
}

View File

@@ -6,13 +6,15 @@ use actix_web::{web, HttpRequest, HttpResponse};
use bstr::ByteSlice;
use futures::{Stream, StreamExt};
use log::debug;
use meilisearch_lib::index_controller::{DocumentAdditionFormat, Update};
use meilisearch_lib::milli::update::IndexDocumentsMethod;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::star_or::StarOr;
use mime::Mime;
use once_cell::sync::Lazy;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::Value;
use tokio::sync::mpsc;
@@ -20,11 +22,10 @@ use crate::analytics::Analytics;
use crate::error::MeilisearchHttpError;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::payload::Payload;
use crate::extractors::sequential_extractor::SeqHandler;
use crate::routes::{fold_star_or, PaginationView};
use crate::task::SummarizedTaskView;
static ACCEPTED_CONTENT_TYPE: Lazy<Vec<String>> = Lazy::new(|| {
vec![
"application/json".to_string(),
@@ -45,7 +46,7 @@ fn payload_to_stream(mut payload: Payload) -> impl Stream<Item = Result<Bytes, P
}
/// Extracts the mime type from the content type and returns
/// a meilisearch error if anything bad happens.
fn extract_mime_type(req: &HttpRequest) -> Result<Option<Mime>, MeilisearchHttpError> {
match req.mime_type() {
Ok(Some(mime)) => Ok(Some(mime)),
@@ -71,28 +72,38 @@ pub struct DocumentParam {
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(SeqHandler(get_all_documents)))
.route(web::post().to(SeqHandler(add_documents)))
.route(web::put().to(SeqHandler(update_documents)))
.route(web::delete().to(SeqHandler(clear_all_documents))),
)
// this route needs to be before the /documents/{document_id} to match properly
.service(web::resource("/delete-batch").route(web::post().to(SeqHandler(delete_documents))))
.service(
web::resource("/{document_id}")
.route(web::get().to(SeqHandler(get_document)))
.route(web::delete().to(SeqHandler(delete_document))),
);
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct GetDocument {
fields: Option<CS<StarOr<String>>>,
}
pub async fn get_document(
meilisearch: GuardedData<ActionPolicy<{ actions::DOCUMENTS_GET }>, MeiliSearch>,
path: web::Path<DocumentParam>,
params: web::Query<GetDocument>,
) -> Result<HttpResponse, ResponseError> {
let index = path.index_uid.clone();
let id = path.document_id.clone();
let GetDocument { fields } = params.into_inner();
let attributes_to_retrieve = fields.and_then(fold_star_or);
let document = meilisearch
.document(index, id, attributes_to_retrieve)
.await?;
debug!("returns: {:?}", document);
Ok(HttpResponse::Ok().json(document))
@@ -115,9 +126,11 @@ pub async fn delete_document(
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct BrowseQuery {
#[serde(default)]
offset: usize,
#[serde(default = "crate::routes::PAGINATION_DEFAULT_LIMIT")]
limit: usize,
fields: Option<CS<StarOr<String>>>,
}
pub async fn get_all_documents(
@@ -126,27 +139,21 @@ pub async fn get_all_documents(
params: web::Query<BrowseQuery>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let BrowseQuery {
limit,
offset,
fields,
} = params.into_inner();
let attributes_to_retrieve = fields.and_then(fold_star_or);
let (total, documents) = meilisearch
.documents(path.into_inner(), offset, limit, attributes_to_retrieve)
.await?;
let ret = PaginationView::new(offset, limit, total as usize, documents);
debug!("returns: {:?}", ret);
Ok(HttpResponse::Ok().json(ret))
}
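// The JSON shape this returns (inferred from PaginationView's fields in routes/mod.rs):
// { "results": [...], "offset": 0, "limit": 20, "total": 1337 }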
#[derive(Deserialize, Debug)]
@@ -173,6 +180,7 @@ pub async fn add_documents(
&req,
);
let allow_index_creation = meilisearch.filters().allow_index_creation;
let task = document_addition(
extract_mime_type(&req)?,
meilisearch,
@@ -180,6 +188,7 @@ pub async fn add_documents(
params.primary_key,
body,
IndexDocumentsMethod::ReplaceDocuments,
allow_index_creation,
)
.await?;
@@ -203,6 +212,7 @@ pub async fn update_documents(
&req,
);
let allow_index_creation = meilisearch.filters().allow_index_creation;
let task = document_addition(
extract_mime_type(&req)?,
meilisearch,
@@ -210,6 +220,7 @@ pub async fn update_documents(
params.into_inner().primary_key,
body,
IndexDocumentsMethod::UpdateDocuments,
allow_index_creation,
)
.await?;
@@ -223,6 +234,7 @@ async fn document_addition(
primary_key: Option<String>,
body: Payload,
method: IndexDocumentsMethod,
allow_index_creation: bool,
) -> Result<SummarizedTaskView, ResponseError> {
let format = match mime_type
.as_ref()
@@ -250,6 +262,7 @@ async fn document_addition(
primary_key,
method,
format,
allow_index_creation,
};
let task = meilisearch.register_update(index_uid, update).await?.into();

View File

@@ -1,57 +1,60 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::index_controller::Update;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde::{Deserialize, Serialize};
use serde_json::json;
use time::OffsetDateTime;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::SummarizedTaskView;
use super::Pagination;
pub mod documents;
pub mod search;
pub mod settings;
pub mod tasks;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(list_indexes))
.route(web::post().to(SeqHandler(create_index))),
)
.service(
web::scope("/{index_uid}")
.service(
web::resource("")
.route(web::get().to(SeqHandler(get_index)))
.route(web::patch().to(SeqHandler(update_index)))
.route(web::delete().to(SeqHandler(delete_index))),
)
.service(web::resource("/stats").route(web::get().to(SeqHandler(get_index_stats))))
.service(web::scope("/documents").configure(documents::configure))
.service(web::scope("/search").configure(search::configure))
.service(web::scope("/tasks").configure(tasks::configure))
.service(web::scope("/settings").configure(settings::configure)),
);
}
pub async fn list_indexes(
data: GuardedData<ActionPolicy<{ actions::INDEXES_GET }>, MeiliSearch>,
paginate: web::Query<Pagination>,
) -> Result<HttpResponse, ResponseError> {
let search_rules = &data.filters().search_rules;
let indexes: Vec<_> = data.list_indexes().await?;
let nb_indexes = indexes.len();
let iter = indexes
.into_iter()
.filter(|i| search_rules.is_index_authorized(&i.uid));
let ret = paginate
.into_inner()
.auto_paginate_unsized(nb_indexes, iter);
debug!("returns: {:?}", ret);
Ok(HttpResponse::Ok().json(ret))
}
#[derive(Debug, Deserialize)]
@@ -96,9 +99,12 @@ pub struct UpdateIndexRequest {
pub struct UpdateIndexResponse {
name: String,
uid: String,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
created_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
updated_at: OffsetDateTime,
primary_key: Option<String>,
}
pub async fn get_index(

View File

@@ -1,19 +1,25 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_auth::IndexSearchRules;
use meilisearch_lib::index::{
SearchQuery, DEFAULT_CROP_LENGTH, DEFAULT_CROP_MARKER, DEFAULT_HIGHLIGHT_POST_TAG,
DEFAULT_HIGHLIGHT_PRE_TAG, DEFAULT_SEARCH_LIMIT,
};
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::Value;
use crate::analytics::{Analytics, SearchAggregator};
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(
web::resource("")
.route(web::get().to(SeqHandler(search_with_url_query)))
.route(web::post().to(SeqHandler(search_with_post))),
);
}
@@ -23,36 +29,26 @@ pub struct SearchQueryGet {
q: Option<String>,
offset: Option<usize>,
limit: Option<usize>,
attributes_to_retrieve: Option<CS<String>>,
attributes_to_crop: Option<CS<String>>,
#[serde(default = "DEFAULT_CROP_LENGTH")]
crop_length: usize,
attributes_to_highlight: Option<CS<String>>,
filter: Option<String>,
sort: Option<String>,
#[serde(default = "Default::default")]
show_matches_position: bool,
facets: Option<CS<String>>,
#[serde(default = "DEFAULT_HIGHLIGHT_PRE_TAG")]
highlight_pre_tag: String,
#[serde(default = "DEFAULT_HIGHLIGHT_POST_TAG")]
highlight_post_tag: String,
#[serde(default = "DEFAULT_CROP_MARKER")]
crop_marker: String,
}
impl From<SearchQueryGet> for SearchQuery {
fn from(other: SearchQueryGet) -> Self {
let filter = match other.filter {
Some(f) => match serde_json::from_str(&f) {
Ok(v) => Some(v),
@@ -61,20 +57,45 @@ impl From<SearchQueryGet> for SearchQuery {
None => None,
};
Self {
q: other.q,
offset: other.offset,
limit: other.limit.unwrap_or_else(DEFAULT_SEARCH_LIMIT),
attributes_to_retrieve: other
.attributes_to_retrieve
.map(|o| o.into_iter().collect()),
attributes_to_crop: other.attributes_to_crop.map(|o| o.into_iter().collect()),
crop_length: other.crop_length,
attributes_to_highlight: other
.attributes_to_highlight
.map(|o| o.into_iter().collect()),
filter,
sort: other.sort.map(|attr| fix_sort_query_parameters(&attr)),
show_matches_position: other.show_matches_position,
facets: other.facets.map(|o| o.into_iter().collect()),
highlight_pre_tag: other.highlight_pre_tag,
highlight_post_tag: other.highlight_post_tag,
crop_marker: other.crop_marker,
}
}
}
/// Incorporate search rules in search query
fn add_search_rules(query: &mut SearchQuery, rules: IndexSearchRules) {
query.filter = match (query.filter.take(), rules.filter) {
(None, rules_filter) => rules_filter,
(filter, None) => filter,
(Some(filter), Some(rules_filter)) => {
let filter = match filter {
Value::Array(filter) => filter,
filter => vec![filter],
};
let rules_filter = match rules_filter {
Value::Array(rules_filter) => rules_filter,
rules_filter => vec![rules_filter],
};
Some(Value::Array([filter, rules_filter].concat()))
}
}
}
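// Merging sketch with hypothetical values: an existing filter "genre = horror"
// combined with a tenant-token rule "user_id = 1" becomes
// Value::Array(["genre = horror", "user_id = 1"]), which the engine applies as
// an AND of both constraints.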
@@ -90,10 +111,9 @@ fn fix_sort_query_parameters(sort_query: &str) -> Vec<String> {
sort_parameters.push(current_sort.to_string());
merge = true;
} else if merge && !sort_parameters.is_empty() {
let s = sort_parameters.last_mut().unwrap();
s.push(',');
s.push_str(current_sort);
if current_sort.ends_with("):desc") || current_sort.ends_with("):asc") {
merge = false;
}
@@ -113,11 +133,21 @@ pub async fn search_with_url_query(
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
debug!("called with params: {:?}", params);
let mut query: SearchQuery = params.into_inner().into();
let index_uid = path.into_inner();
// Tenant token search_rules.
if let Some(search_rules) = meilisearch
.filters()
.search_rules
.get_index_search_rules(&index_uid)
{
add_search_rules(&mut query, search_rules);
}
let mut aggregate = SearchAggregator::from_query(&query, &req);
let search_result = meilisearch.search(index_uid, query).await;
if let Ok(ref search_result) = search_result {
aggregate.succeed(search_result);
}
@@ -125,10 +155,6 @@ pub async fn search_with_url_query(
let search_result = search_result?;
debug!("returns: {:?}", search_result);
Ok(HttpResponse::Ok().json(search_result))
}
@@ -140,12 +166,22 @@ pub async fn search_with_post(
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let mut query = params.into_inner();
debug!("search called with params: {:?}", query);
let index_uid = path.into_inner();
// Tenant token search_rules.
if let Some(search_rules) = meilisearch
.filters()
.search_rules
.get_index_search_rules(&index_uid)
{
add_search_rules(&mut query, search_rules);
}
let mut aggregate = SearchAggregator::from_query(&query, &req);
let search_result = meilisearch.search(index_uid, query).await;
if let Ok(ref search_result) = search_result {
aggregate.succeed(search_result);
}
@@ -153,10 +189,6 @@ pub async fn search_with_post(
let search_result = search_result?;
debug!("returns: {:?}", search_result);
Ok(HttpResponse::Ok().json(search_result))
}

View File

@@ -1,10 +1,10 @@
use actix_web::{web, HttpRequest, HttpResponse};
use log::debug;
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::index_controller::Update;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use serde_json::json;
use crate::analytics::Analytics;
@@ -13,7 +13,7 @@ use crate::task::SummarizedTaskView;
#[macro_export]
macro_rules! make_setting_route {
($route:literal, $update_verb:ident, $type:ty, $attr:ident, $camelcase_attr:literal, $analytics_var:ident, $analytics:expr) => {
pub mod $attr {
use actix_web::{web, HttpRequest, HttpResponse, Resource};
use log::debug;
@@ -21,10 +21,11 @@ macro_rules! make_setting_route {
use meilisearch_lib::milli::update::Setting;
use meilisearch_lib::{index::Settings, index_controller::Update, MeiliSearch};
use meilisearch_types::error::ResponseError;
use $crate::analytics::Analytics;
use $crate::extractors::authentication::{policies::*, GuardedData};
use $crate::extractors::sequential_extractor::SeqHandler;
use $crate::task::SummarizedTaskView;
pub async fn delete(
meilisearch: GuardedData<ActionPolicy<{ actions::SETTINGS_UPDATE }>, MeiliSearch>,
@@ -34,9 +35,12 @@ macro_rules! make_setting_route {
$attr: Setting::Reset,
..Default::default()
};
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
@@ -66,9 +70,11 @@ macro_rules! make_setting_route {
..Default::default()
};
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
@@ -93,19 +99,28 @@ macro_rules! make_setting_route {
pub fn resources() -> Resource {
Resource::new($route)
.route(web::get().to(SeqHandler(get)))
.route(web::$update_verb().to(SeqHandler(update)))
.route(web::delete().to(SeqHandler(delete)))
}
}
};
($route:literal, $update_verb:ident, $type:ty, $attr:ident, $camelcase_attr:literal) => {
make_setting_route!(
$route,
$update_verb,
$type,
$attr,
$camelcase_attr,
_analytics,
|_, _| {}
);
};
}
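// Illustrative expansion (simplified, not the literal output): an invocation like
//   make_setting_route!("/stop-words", put, std::collections::BTreeSet<String>, stop_words, "stopWords");
// generates a `stop_words` module with `get`, `update`, and `delete` handlers and a
// `resources()` function exposing them as GET/PUT/DELETE on "/stop-words".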
make_setting_route!(
"/filterable-attributes",
put,
std::collections::BTreeSet<String>,
filterable_attributes,
"filterableAttributes",
@@ -128,6 +143,7 @@ make_setting_route!(
make_setting_route!(
"/sortable-attributes",
put,
std::collections::BTreeSet<String>,
sortable_attributes,
"sortableAttributes",
@@ -139,8 +155,8 @@ make_setting_route!(
"SortableAttributes Updated".to_string(),
json!({
"sortable_attributes": {
"total": setting.as_ref().map(|sort| sort.len()).unwrap_or(0),
"has_geo": setting.as_ref().map(|sort| sort.contains("_geo")).unwrap_or(false),
"total": setting.as_ref().map(|sort| sort.len()),
"has_geo": setting.as_ref().map(|sort| sort.contains("_geo")),
},
}),
Some(req),
@@ -150,13 +166,57 @@ make_setting_route!(
make_setting_route!(
"/displayed-attributes",
put,
Vec<String>,
displayed_attributes,
"displayedAttributes"
);
make_setting_route!(
"/typo-tolerance",
patch,
meilisearch_lib::index::updates::TypoSettings,
typo_tolerance,
"typoTolerance",
analytics,
|setting: &Option<meilisearch_lib::index::updates::TypoSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"TypoTolerance Updated".to_string(),
json!({
"typo_tolerance": {
"enabled": setting.as_ref().map(|s| !matches!(s.enabled, Setting::Set(false))),
"disable_on_attributes": setting
.as_ref()
.and_then(|s| s.disable_on_attributes.as_ref().set().map(|m| !m.is_empty())),
"disable_on_words": setting
.as_ref()
.and_then(|s| s.disable_on_words.as_ref().set().map(|m| !m.is_empty())),
"min_word_size_for_one_typo": setting
.as_ref()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.one_typo.set()))
.flatten(),
"min_word_size_for_two_typos": setting
.as_ref()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.two_typos.set()))
.flatten(),
},
}),
Some(req),
);
}
);
make_setting_route!(
"/searchable-attributes",
put,
Vec<String>,
searchable_attributes,
"searchableAttributes",
@@ -168,7 +228,7 @@ make_setting_route!(
"SearchableAttributes Updated".to_string(),
json!({
"searchable_attributes": {
"total": setting.as_ref().map(|searchable| searchable.len()).unwrap_or(0),
"total": setting.as_ref().map(|searchable| searchable.len()),
},
}),
Some(req),
@@ -178,6 +238,7 @@ make_setting_route!(
make_setting_route!(
"/stop-words",
put,
std::collections::BTreeSet<String>,
stop_words,
"stopWords"
@@ -185,6 +246,7 @@ make_setting_route!(
make_setting_route!(
"/synonyms",
put,
std::collections::BTreeMap<String, Vec<String>>,
synonyms,
"synonyms"
@@ -192,6 +254,7 @@ make_setting_route!(
make_setting_route!(
"/distinct-attribute",
put,
String,
distinct_attribute,
"distinctAttribute"
@@ -199,6 +262,7 @@ make_setting_route!(
make_setting_route!(
"/ranking-rules",
put,
Vec<String>,
ranking_rules,
"rankingRules",
@@ -218,14 +282,59 @@ make_setting_route!(
}
);
make_setting_route!(
"/faceting",
patch,
meilisearch_lib::index::updates::FacetingSettings,
faceting,
"faceting",
analytics,
|setting: &Option<meilisearch_lib::index::updates::FacetingSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"Faceting Updated".to_string(),
json!({
"faceting": {
"max_values_per_facet": setting.as_ref().and_then(|s| s.max_values_per_facet.set()),
},
}),
Some(req),
);
}
);
make_setting_route!(
"/pagination",
patch,
meilisearch_lib::index::updates::PaginationSettings,
pagination,
"pagination",
analytics,
|setting: &Option<meilisearch_lib::index::updates::PaginationSettings>, req: &HttpRequest| {
use serde_json::json;
analytics.publish(
"Pagination Updated".to_string(),
json!({
"pagination": {
"max_total_hits": setting.as_ref().and_then(|s| s.max_total_hits.set()),
},
}),
Some(req),
);
}
);
macro_rules! generate_configure {
($($mod:ident),*) => {
pub fn configure(cfg: &mut web::ServiceConfig) {
use crate::extractors::sequential_extractor::SeqHandler;
cfg.service(
web::resource("")
.route(web::patch().to(SeqHandler(update_all)))
.route(web::get().to(SeqHandler(get_all)))
.route(web::delete().to(SeqHandler(delete_all))))
$(.service($mod::resources()))*;
}
};
@@ -239,7 +348,10 @@ generate_configure!(
distinct_attribute,
stop_words,
synonyms,
ranking_rules,
typo_tolerance,
pagination,
faceting
);
pub async fn update_all(
@@ -258,23 +370,68 @@ pub async fn update_all(
"sort_position": settings.ranking_rules.as_ref().set().map(|sort| sort.iter().position(|s| s == "sort")),
},
"searchable_attributes": {
"total": settings.searchable_attributes.as_ref().set().map(|searchable| searchable.len()).unwrap_or(0),
"total": settings.searchable_attributes.as_ref().set().map(|searchable| searchable.len()),
},
"sortable_attributes": {
"total": settings.sortable_attributes.as_ref().set().map(|sort| sort.len()).unwrap_or(0),
"has_geo": settings.sortable_attributes.as_ref().set().map(|sort| sort.iter().any(|s| s == "_geo")).unwrap_or(false),
"total": settings.sortable_attributes.as_ref().set().map(|sort| sort.len()),
"has_geo": settings.sortable_attributes.as_ref().set().map(|sort| sort.iter().any(|s| s == "_geo")),
},
"filterable_attributes": {
"total": settings.filterable_attributes.as_ref().set().map(|filter| filter.len()).unwrap_or(0),
"has_geo": settings.filterable_attributes.as_ref().set().map(|filter| filter.iter().any(|s| s == "_geo")).unwrap_or(false),
"total": settings.filterable_attributes.as_ref().set().map(|filter| filter.len()),
"has_geo": settings.filterable_attributes.as_ref().set().map(|filter| filter.iter().any(|s| s == "_geo")),
},
"typo_tolerance": {
"enabled": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.enabled.as_ref().set())
.copied(),
"disable_on_attributes": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.disable_on_attributes.as_ref().set().map(|m| !m.is_empty())),
"disable_on_words": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.disable_on_words.as_ref().set().map(|m| !m.is_empty())),
"min_word_size_for_one_typo": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.one_typo.set()))
.flatten(),
"min_word_size_for_two_typos": settings.typo_tolerance
.as_ref()
.set()
.and_then(|s| s.min_word_size_for_typos
.as_ref()
.set()
.map(|s| s.two_typos.set()))
.flatten(),
},
"faceting": {
"max_values_per_facet": settings.faceting
.as_ref()
.set()
.and_then(|s| s.max_values_per_facet.as_ref().set()),
},
"pagination": {
"max_total_hits": settings.pagination
.as_ref()
.set()
.and_then(|s| s.max_total_hits.as_ref().set()),
},
}),
Some(&req),
);
let allow_index_creation = meilisearch.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: false,
allow_index_creation,
};
let task: SummarizedTaskView = meilisearch
.register_update(index_uid.into_inner(), update)
@@ -300,9 +457,11 @@ pub async fn delete_all(
) -> Result<HttpResponse, ResponseError> {
let settings = Settings::cleared().into_unchecked();
let allow_index_creation = data.filters().allow_index_creation;
let update = Update::Settings {
settings,
is_deletion: true,
allow_index_creation,
};
let task: SummarizedTaskView = data
.register_update(index_uid.into_inner(), update)

View File

@@ -1,76 +0,0 @@
use actix_web::{web, HttpRequest, HttpResponse};
use chrono::{DateTime, Utc};
use log::debug;
use meilisearch_error::ResponseError;
use meilisearch_lib::MeiliSearch;
use serde::{Deserialize, Serialize};
use serde_json::json;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::task::{TaskListView, TaskView};
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::get().to(get_all_tasks_status)))
.service(web::resource("{task_id}").route(web::get().to(get_task_status)));
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdateIndexResponse {
name: String,
uid: String,
created_at: DateTime<Utc>,
updated_at: DateTime<Utc>,
primary_key: Option<String>,
}
#[derive(Deserialize)]
pub struct UpdateParam {
index_uid: String,
task_id: u64,
}
pub async fn get_task_status(
meilisearch: GuardedData<ActionPolicy<{ actions::TASKS_GET }>, MeiliSearch>,
index_uid: web::Path<UpdateParam>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
analytics.publish(
"Index Tasks Seen".to_string(),
json!({ "per_task_uid": true }),
Some(&req),
);
let UpdateParam { index_uid, task_id } = index_uid.into_inner();
let task: TaskView = meilisearch.get_index_task(index_uid, task_id).await?.into();
debug!("returns: {:?}", task);
Ok(HttpResponse::Ok().json(task))
}
pub async fn get_all_tasks_status(
meilisearch: GuardedData<ActionPolicy<{ actions::TASKS_GET }>, MeiliSearch>,
index_uid: web::Path<String>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
analytics.publish(
"Index Tasks Seen".to_string(),
json!({ "per_task_uid": false }),
Some(&req),
);
let tasks: TaskListView = meilisearch
.list_index_task(index_uid.into_inner(), None, None)
.await?
.into_iter()
.map(TaskView::from)
.collect::<Vec<_>>()
.into();
debug!("returns: {:?}", tasks);
Ok(HttpResponse::Ok().json(tasks))
}

View File

@@ -1,11 +1,13 @@
use actix_web::{web, HttpResponse};
use log::debug;
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::star_or::StarOr;
use crate::extractors::authentication::{policies::*, GuardedData};
@@ -24,6 +26,101 @@ pub fn configure(cfg: &mut web::ServiceConfig) {
.service(web::scope("/indexes").configure(indexes::configure));
}
/// Extracts the raw values from the `StarOr` types and
/// returns `None` if a `StarOr::Star` is encountered.
pub fn fold_star_or<T, O>(content: impl IntoIterator<Item = StarOr<T>>) -> Option<O>
where
O: FromIterator<T>,
{
content
.into_iter()
.map(|value| match value {
StarOr::Star => None,
StarOr::Other(val) => Some(val),
})
.collect()
}
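// Folding sketch: `fields=title,overview` deserializes to [Other("title"), Other("overview")]
// and folds to Some(vec!["title", "overview"]), while `fields=*` contains a Star and
// folds to None, meaning "no restriction".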
const PAGINATION_DEFAULT_LIMIT: fn() -> usize = || 20;
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Pagination {
#[serde(default)]
pub offset: usize,
#[serde(default = "PAGINATION_DEFAULT_LIMIT")]
pub limit: usize,
}
#[derive(Debug, Clone, Serialize)]
pub struct PaginationView<T> {
pub results: Vec<T>,
pub offset: usize,
pub limit: usize,
pub total: usize,
}
impl Pagination {
/// Given the full data to paginate, returns the selected section.
pub fn auto_paginate_sized<T>(
self,
content: impl IntoIterator<Item = T> + ExactSizeIterator,
) -> PaginationView<T>
where
T: Serialize,
{
let total = content.len();
let content: Vec<_> = content
.into_iter()
.skip(self.offset)
.take(self.limit)
.collect();
self.format_with(total, content)
}
/// Given an iterator and the total number of elements, returns the selected section.
pub fn auto_paginate_unsized<T>(
self,
total: usize,
content: impl IntoIterator<Item = T>,
) -> PaginationView<T>
where
T: Serialize,
{
let content: Vec<_> = content
.into_iter()
.skip(self.offset)
.take(self.limit)
.collect();
self.format_with(total, content)
}
/// Given the data already paginated + the total number of elements, it stores
/// everything in a [PaginationResult].
pub fn format_with<T>(self, total: usize, results: Vec<T>) -> PaginationView<T>
where
T: Serialize,
{
PaginationView {
results,
offset: self.offset,
limit: self.limit,
total,
}
}
}
impl<T> PaginationView<T> {
pub fn new(offset: usize, limit: usize, total: usize, results: Vec<T>) -> Self {
Self {
offset,
limit,
results,
total,
}
}
}
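// Paging sketch with hypothetical values: Pagination { offset: 1, limit: 2 } over
// ["a", "b", "c", "d", "e"] yields results == ["b", "c"] and total == 5
// (skip the offset, take the limit, keep the full count).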
#[derive(Debug, Clone, Serialize, Deserialize)]
#[allow(clippy::large_enum_variant)]
#[serde(tag = "name")]
@@ -54,8 +151,10 @@ pub struct ProcessedUpdateResult {
#[serde(rename = "type")]
pub update_type: UpdateType,
pub duration: f64, // in seconds
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -66,8 +165,10 @@ pub struct FailedUpdateResult {
pub update_type: UpdateType,
pub error: ResponseError,
pub duration: f64, // in seconds
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(with = "time::serde::rfc3339")]
pub processed_at: OffsetDateTime,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -76,9 +177,13 @@ pub struct EnqueuedUpdateResult {
pub update_id: u64,
#[serde(rename = "type")]
pub update_type: UpdateType,
#[serde(with = "time::serde::rfc3339")]
pub enqueued_at: OffsetDateTime,
#[serde(
skip_serializing_if = "Option::is_none",
with = "time::serde::rfc3339::option"
)]
pub started_processing_at: Option<OffsetDateTime>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -121,15 +226,14 @@ impl IndexUpdateResponse {
/// }
/// ```
pub async fn running() -> HttpResponse {
HttpResponse::Ok().json(serde_json::json!({ "status": "Meilisearch is running" }))
}
async fn get_stats(
meilisearch: GuardedData<ActionPolicy<{ actions::STATS_GET }>, MeiliSearch>,
) -> Result<HttpResponse, ResponseError> {
let search_rules = &meilisearch.filters().search_rules;
let response = meilisearch.get_all_stats(search_rules).await?;
debug!("returns: {:?}", response);
Ok(HttpResponse::Ok().json(response))

View File

@@ -1,45 +1,172 @@
use actix_web::{web, HttpRequest, HttpResponse};
use meilisearch_lib::tasks::task::{TaskContent, TaskEvent, TaskId};
use meilisearch_lib::tasks::TaskFilter;
use meilisearch_lib::MeiliSearch;
use meilisearch_types::error::ResponseError;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::star_or::StarOr;
use serde::Deserialize;
use serde_cs::vec::CS;
use serde_json::json;
use crate::analytics::Analytics;
use crate::extractors::authentication::{policies::*, GuardedData};
use crate::extractors::sequential_extractor::SeqHandler;
use crate::task::{TaskListView, TaskStatus, TaskType, TaskView};
use super::fold_star_or;
const DEFAULT_LIMIT: fn() -> usize = || 20;
pub fn configure(cfg: &mut web::ServiceConfig) {
cfg.service(web::resource("").route(web::get().to(SeqHandler(get_tasks))))
.service(web::resource("/{task_id}").route(web::get().to(SeqHandler(get_task))));
}
#[derive(Deserialize, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct TasksFilterQuery {
#[serde(rename = "type")]
type_: Option<CS<StarOr<TaskType>>>,
status: Option<CS<StarOr<TaskStatus>>>,
index_uid: Option<CS<StarOr<IndexUid>>>,
#[serde(default = "DEFAULT_LIMIT")]
limit: usize,
from: Option<TaskId>,
}
#[rustfmt::skip]
fn task_type_matches_content(type_: &TaskType, content: &TaskContent) -> bool {
matches!((type_, content),
(TaskType::IndexCreation, TaskContent::IndexCreation { .. })
| (TaskType::IndexUpdate, TaskContent::IndexUpdate { .. })
| (TaskType::IndexDeletion, TaskContent::IndexDeletion { .. })
| (TaskType::DocumentAdditionOrUpdate, TaskContent::DocumentAddition { .. })
| (TaskType::DocumentDeletion, TaskContent::DocumentDeletion{ .. })
| (TaskType::SettingsUpdate, TaskContent::SettingsUpdate { .. })
)
}
#[rustfmt::skip]
fn task_status_matches_events(status: &TaskStatus, events: &[TaskEvent]) -> bool {
events.last().map_or(false, |event| {
matches!((status, event),
(TaskStatus::Enqueued, TaskEvent::Created(_))
| (TaskStatus::Processing, TaskEvent::Processing(_) | TaskEvent::Batched { .. })
| (TaskStatus::Succeeded, TaskEvent::Succeeded { .. })
| (TaskStatus::Failed, TaskEvent::Failed { .. }),
)
})
}
async fn get_tasks(
meilisearch: GuardedData<ActionPolicy<{ actions::TASKS_GET }>, MeiliSearch>,
params: web::Query<TasksFilterQuery>,
req: HttpRequest,
analytics: web::Data<dyn Analytics>,
) -> Result<HttpResponse, ResponseError> {
let TasksFilterQuery {
type_,
status,
index_uid,
limit,
from,
} = params.into_inner();
let search_rules = &meilisearch.filters().search_rules;
// We first transform a potential indexUid=* into a "not specified indexUid filter"
// for every one of the filters: type, status, and indexUid.
let type_: Option<Vec<_>> = type_.and_then(fold_star_or);
let status: Option<Vec<_>> = status.and_then(fold_star_or);
let index_uid: Option<Vec<_>> = index_uid.and_then(fold_star_or);
analytics.publish(
"Tasks Seen".to_string(),
json!({
"filtered_by_index_uid": index_uid.as_ref().map_or(false, |v| !v.is_empty()),
"filtered_by_type": type_.as_ref().map_or(false, |v| !v.is_empty()),
"filtered_by_status": status.as_ref().map_or(false, |v| !v.is_empty()),
}),
Some(&req),
);
// Then we filter on potential indexes and make sure that the search filter
// restrictions are also applied.
let indexes_filters = match index_uid {
Some(indexes) => {
let mut filters = TaskFilter::default();
for name in indexes {
if search_rules.is_index_authorized(&name) {
filters.filter_index(name.to_string());
}
}
Some(filters)
}
None => {
if search_rules.is_index_authorized("*") {
None
} else {
let mut filters = TaskFilter::default();
for (index, _policy) in search_rules.clone() {
filters.filter_index(index);
}
Some(filters)
}
}
};
// Then we complete the task filter with other potential status and types filters.
let filters = if type_.is_some() || status.is_some() {
let mut filters = indexes_filters.unwrap_or_default();
filters.filter_fn(move |task| {
let matches_type = match &type_ {
Some(types) => types
.iter()
.any(|t| task_type_matches_content(t, &task.content)),
None => true,
};
let matches_status = match &status {
Some(statuses) => statuses
.iter()
.any(|t| task_status_matches_events(t, &task.events)),
None => true,
};
matches_type && matches_status
});
Some(filters)
} else {
indexes_filters
};
// We fetch `limit + 1` tasks just to know whether there is more after this "page" or not.
let limit = limit.saturating_add(1);
let mut tasks_results: Vec<_> = meilisearch
.list_tasks(filters, Some(limit), from)
.await?
.into_iter()
.map(TaskView::from)
.collect();
// If we were able to fetch the `limit + 1` tasks we asked for,
// it means that there is more to come.
let next = if tasks_results.len() == limit {
tasks_results.pop().map(|t| t.uid)
} else {
None
};
let from = tasks_results.first().map(|t| t.uid);
let tasks = TaskListView {
results: tasks_results,
limit: limit.saturating_sub(1),
from,
next,
};
Ok(HttpResponse::Ok().json(tasks))
}
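// Standalone sketch of the `limit + 1` trick above, on plain ids (a hypothetical
// helper, not part of this codebase): fetch one extra element; if it shows up,
// pop it and expose its uid as `next`.
fn paginate_ids(ids: &[u64], limit: usize) -> (Vec<u64>, Option<u64>) {
    let mut page: Vec<u64> = ids.iter().copied().take(limit + 1).collect();
    let next = if page.len() == limit + 1 { page.pop() } else { None };
    (page, next)
}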
@@ -56,13 +183,16 @@ async fn get_task(
Some(&req),
);
let search_rules = &meilisearch.filters().search_rules;
let filters = if search_rules.is_index_authorized("*") {
None
} else {
let mut filters = TaskFilter::default();
for (index, _policy) in search_rules.clone() {
filters.filter_index(index);
}
Some(filters)
};
let task: TaskView = meilisearch
.get_task(task_id.into_inner(), filters)

View File

@@ -1,56 +1,137 @@
use std::error::Error;
use std::fmt::{self, Write};
use std::str::FromStr;
use std::write;
use meilisearch_lib::index::{Settings, Unchecked};
use meilisearch_lib::milli::update::IndexDocumentsMethod;
use meilisearch_lib::tasks::batch::BatchId;
use meilisearch_lib::tasks::task::{
DocumentDeletion, Task, TaskContent, TaskEvent, TaskId, TaskResult,
};
use meilisearch_types::error::ResponseError;
use serde::{Deserialize, Serialize, Serializer};
use time::{Duration, OffsetDateTime};
use crate::AUTOBATCHING_ENABLED;
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum TaskType {
IndexCreation,
IndexUpdate,
IndexDeletion,
DocumentAdditionOrUpdate,
DocumentDeletion,
SettingsUpdate,
DumpCreation,
}
impl From<TaskContent> for TaskType {
fn from(other: TaskContent) -> Self {
match other {
TaskContent::IndexCreation { .. } => TaskType::IndexCreation,
TaskContent::IndexUpdate { .. } => TaskType::IndexUpdate,
TaskContent::IndexDeletion { .. } => TaskType::IndexDeletion,
TaskContent::DocumentAddition { .. } => TaskType::DocumentAdditionOrUpdate,
TaskContent::DocumentDeletion { .. } => TaskType::DocumentDeletion,
TaskContent::SettingsUpdate { .. } => TaskType::SettingsUpdate,
TaskContent::Dump { .. } => TaskType::DumpCreation,
}
}
}
#[derive(Debug)]
pub struct TaskTypeError {
invalid_type: String,
}
impl fmt::Display for TaskTypeError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid task type `{}`, expecting one of: \
indexCreation, indexUpdate, indexDeletion, documentAdditionOrUpdate, \
documentDeletion, settingsUpdate, dumpCreation",
self.invalid_type
)
}
}
impl Error for TaskTypeError {}
impl FromStr for TaskType {
type Err = TaskTypeError;
fn from_str(type_: &str) -> Result<Self, TaskTypeError> {
if type_.eq_ignore_ascii_case("indexCreation") {
Ok(TaskType::IndexCreation)
} else if type_.eq_ignore_ascii_case("indexUpdate") {
Ok(TaskType::IndexUpdate)
} else if type_.eq_ignore_ascii_case("indexDeletion") {
Ok(TaskType::IndexDeletion)
} else if type_.eq_ignore_ascii_case("documentAdditionOrUpdate") {
Ok(TaskType::DocumentAdditionOrUpdate)
} else if type_.eq_ignore_ascii_case("documentDeletion") {
Ok(TaskType::DocumentDeletion)
} else if type_.eq_ignore_ascii_case("settingsUpdate") {
Ok(TaskType::SettingsUpdate)
} else if type_.eq_ignore_ascii_case("dumpCreation") {
Ok(TaskType::DumpCreation)
} else {
Err(TaskTypeError {
invalid_type: type_.to_string(),
})
}
}
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub enum TaskStatus {
Enqueued,
Processing,
Succeeded,
Failed,
}
#[derive(Debug)]
pub struct TaskStatusError {
invalid_status: String,
}
impl fmt::Display for TaskStatusError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"invalid task status `{}`, expecting one of: \
enqueued, processing, succeeded, or failed",
self.invalid_status,
)
}
}
impl Error for TaskStatusError {}
impl FromStr for TaskStatus {
type Err = TaskStatusError;
fn from_str(status: &str) -> Result<Self, TaskStatusError> {
if status.eq_ignore_ascii_case("enqueued") {
Ok(TaskStatus::Enqueued)
} else if status.eq_ignore_ascii_case("processing") {
Ok(TaskStatus::Processing)
} else if status.eq_ignore_ascii_case("succeeded") {
Ok(TaskStatus::Succeeded)
} else if status.eq_ignore_ascii_case("failed") {
Ok(TaskStatus::Failed)
} else {
Err(TaskStatusError {
invalid_status: status.to_string(),
})
}
}
}
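// Usage sketch: these FromStr impls back the `type` and `status` query parameters,
// so "dumpCreation".parse::<TaskType>() succeeds (matching is ASCII-case-insensitive),
// while "dump".parse::<TaskType>() returns a TaskTypeError listing the accepted values.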
#[derive(Debug, Serialize)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
@@ -74,16 +155,56 @@ enum TaskDetails {
},
#[serde(rename_all = "camelCase")]
ClearAll { deleted_documents: Option<u64> },
#[serde(rename_all = "camelCase")]
Dump { dump_uid: String },
}
/// Serializes a `time::Duration` as a best-effort ISO 8601 duration while waiting for
/// https://github.com/time-rs/time/issues/378.
/// This code is a port of the old `time` code that was removed in 0.2.
fn serialize_duration<S: Serializer>(
duration: &Option<Duration>,
serializer: S,
) -> Result<S::Ok, S::Error> {
match duration {
Some(duration) => {
// technically speaking, negative duration is not valid ISO 8601
if duration.is_negative() {
return serializer.serialize_none();
}
const SECS_PER_DAY: i64 = Duration::DAY.whole_seconds();
let secs = duration.whole_seconds();
let days = secs / SECS_PER_DAY;
let secs = secs - days * SECS_PER_DAY;
let hasdate = days != 0;
let nanos = duration.subsec_nanoseconds();
let hastime = (secs != 0 || nanos != 0) || !hasdate;
// none of the following unwraps can fail
let mut res = String::new();
write!(&mut res, "P").unwrap();
if hasdate {
write!(&mut res, "{}D", days).unwrap();
}
const NANOS_PER_MILLI: i32 = Duration::MILLISECOND.subsec_nanoseconds();
const NANOS_PER_MICRO: i32 = Duration::MICROSECOND.subsec_nanoseconds();
if hastime {
if nanos == 0 {
write!(&mut res, "T{}S", secs).unwrap();
} else if nanos % NANOS_PER_MILLI == 0 {
write!(&mut res, "T{}.{:03}S", secs, nanos / NANOS_PER_MILLI).unwrap();
} else if nanos % NANOS_PER_MICRO == 0 {
write!(&mut res, "T{}.{:06}S", secs, nanos / NANOS_PER_MICRO).unwrap();
} else {
write!(&mut res, "T{}.{:09}S", secs, nanos).unwrap();
}
}
serializer.serialize_str(&res)
}
None => serializer.serialize_none(),
}
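// Sample outputs derived from the branches above (worth double-checking):
// 2 s -> "PT2S"; 90 061 s (1 day + 3 661 s) -> "P1DT3661S"; 1.5 s -> "PT1.500S";
// a negative duration serializes as JSON null.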
@@ -92,8 +213,8 @@ fn serialize_duration<S: Serializer>(
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskView {
pub uid: TaskId,
index_uid: Option<String>,
status: TaskStatus,
#[serde(rename = "type")]
task_type: TaskType,
@@ -103,53 +224,56 @@ pub struct TaskView {
error: Option<ResponseError>,
#[serde(serialize_with = "serialize_duration")]
duration: Option<Duration>,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
enqueued_at: OffsetDateTime,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
started_at: Option<OffsetDateTime>,
#[serde(serialize_with = "time::serde::rfc3339::option::serialize")]
finished_at: Option<OffsetDateTime>,
#[serde(skip_serializing_if = "Option::is_none")]
batch_uid: Option<Option<BatchId>>,
}
impl From<Task> for TaskView {
fn from(task: Task) -> Self {
let index_uid = task.index_uid().map(String::from);
let Task {
id,
content,
events,
} = task;
let (task_type, mut details) = match content {
TaskContent::DocumentAddition {
documents_count, ..
} => {
let details = TaskDetails::DocumentAddition {
received_documents: documents_count,
indexed_documents: None,
};
(TaskType::DocumentAdditionOrUpdate, Some(details))
}
TaskContent::DocumentDeletion {
deletion: DocumentDeletion::Ids(ids),
..
} => (
TaskType::DocumentDeletion,
Some(TaskDetails::DocumentDeletion {
received_document_ids: ids.len(),
deleted_documents: None,
}),
),
TaskContent::DocumentDeletion {
deletion: DocumentDeletion::Clear,
..
} => (
TaskType::DocumentDeletion,
Some(TaskDetails::ClearAll {
deleted_documents: None,
}),
),
TaskContent::IndexDeletion { .. } => (
TaskType::IndexDeletion,
Some(TaskDetails::ClearAll {
deleted_documents: None,
@@ -159,14 +283,18 @@ impl From<Task> for TaskView {
TaskType::SettingsUpdate,
Some(TaskDetails::Settings { settings }),
),
TaskContent::IndexCreation { primary_key, .. } => (
TaskType::IndexCreation,
Some(TaskDetails::IndexInfo { primary_key }),
),
TaskContent::IndexUpdate { primary_key, .. } => (
TaskType::IndexUpdate,
Some(TaskDetails::IndexInfo { primary_key }),
),
TaskContent::Dump { uid } => (
TaskType::DumpCreation,
Some(TaskDetails::Dump { dump_uid: uid }),
),
};
// A task always has at least one event: "Created".
@@ -174,7 +302,7 @@ impl From<Task> for TaskView {
TaskEvent::Created(_) => (TaskStatus::Enqueued, None, None),
TaskEvent::Batched { .. } => (TaskStatus::Enqueued, None, None),
TaskEvent::Processing(_) => (TaskStatus::Processing, None, None),
TaskEvent::Succeeded { timestamp, result } => {
match (result, &mut details) {
(
TaskResult::DocumentAddition {
@@ -215,6 +343,27 @@ impl From<Task> for TaskView {
(TaskStatus::Succeeded, None, Some(*timestamp))
}
TaskEvent::Failed { timestamp, error } => {
match details {
Some(TaskDetails::DocumentDeletion {
ref mut deleted_documents,
..
}) => {
deleted_documents.replace(0);
}
Some(TaskDetails::ClearAll {
ref mut deleted_documents,
..
}) => {
deleted_documents.replace(0);
}
Some(TaskDetails::DocumentAddition {
ref mut indexed_documents,
..
}) => {
indexed_documents.replace(0);
}
_ => (),
}
(TaskStatus::Failed, Some(error.clone()), Some(*timestamp))
}
};
@@ -224,16 +373,26 @@ impl From<Task> for TaskView {
_ => unreachable!("A task must always have a creation event."),
};
let started_at = events.iter().find_map(|e| match e {
TaskEvent::Processing(ts) => Some(*ts),
_ => None,
});
let duration = finished_at.zip(started_at).map(|(tf, ts)| (tf - ts));
let batch_uid = if AUTOBATCHING_ENABLED.load(std::sync::atomic::Ordering::Relaxed) {
let id = events.iter().find_map(|e| match e {
TaskEvent::Batched { batch_id, .. } => Some(*batch_id),
_ => None,
});
Some(id)
} else {
None
};
Self {
uid: id,
index_uid,
status,
task_type,
details,
@@ -242,30 +401,29 @@ impl From<Task> for TaskView {
enqueued_at,
started_at,
finished_at,
batch_uid,
}
}
}
#[derive(Debug, Serialize)]
pub struct TaskListView {
pub results: Vec<TaskView>,
pub limit: usize,
pub from: Option<TaskId>,
pub next: Option<TaskId>,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct SummarizedTaskView {
task_uid: TaskId,
index_uid: Option<String>,
status: TaskStatus,
#[serde(rename = "type")]
task_type: TaskType,
#[serde(serialize_with = "time::serde::rfc3339::serialize")]
enqueued_at: OffsetDateTime,
}
impl From<Task> for SummarizedTaskView {
@@ -282,8 +440,8 @@ impl From<Task> for SummarizedTaskView {
};
Self {
task_uid: other.id,
index_uid: other.index_uid().map(String::from),
status: TaskStatus::Enqueued,
task_type: other.content.into(),
enqueued_at,

Four binary files not shown; one file diff suppressed because it is too large.

View File

@@ -1,56 +1,67 @@
use crate::common::Server;
use ::time::format_description::well_known::Rfc3339;
use maplit::{hashmap, hashset};
use once_cell::sync::Lazy;
use serde_json::{json, Value};
use std::collections::{HashMap, HashSet};
use time::{Duration, OffsetDateTime};
pub static AUTHORIZATIONS: Lazy<HashMap<(&'static str, &'static str), HashSet<&'static str>>> =
Lazy::new(|| {
hashmap! {
("POST", "/indexes/products/search") => "search",
("GET", "/indexes/products/search") => "search",
("POST", "/indexes/products/documents") => "documents.add",
("GET", "/indexes/products/documents") => "documents.get",
("GET", "/indexes/products/documents/0") => "documents.get",
("DELETE", "/indexes/products/documents/0") => "documents.delete",
("GET", "/tasks") => "tasks.get",
("GET", "/indexes/products/tasks") => "tasks.get",
("GET", "/indexes/products/tasks/0") => "tasks.get",
("PUT", "/indexes/products/") => "indexes.update",
("GET", "/indexes/products/") => "indexes.get",
("DELETE", "/indexes/products/") => "indexes.delete",
("POST", "/indexes") => "indexes.create",
("GET", "/indexes") => "indexes.get",
("GET", "/indexes/products/settings") => "settings.get",
("GET", "/indexes/products/settings/displayed-attributes") => "settings.get",
("GET", "/indexes/products/settings/distinct-attribute") => "settings.get",
("GET", "/indexes/products/settings/filterable-attributes") => "settings.get",
("GET", "/indexes/products/settings/ranking-rules") => "settings.get",
("GET", "/indexes/products/settings/searchable-attributes") => "settings.get",
("GET", "/indexes/products/settings/sortable-attributes") => "settings.get",
("GET", "/indexes/products/settings/stop-words") => "settings.get",
("GET", "/indexes/products/settings/synonyms") => "settings.get",
("DELETE", "/indexes/products/settings") => "settings.update",
("POST", "/indexes/products/settings") => "settings.update",
("POST", "/indexes/products/settings/displayed-attributes") => "settings.update",
("POST", "/indexes/products/settings/distinct-attribute") => "settings.update",
("POST", "/indexes/products/settings/filterable-attributes") => "settings.update",
("POST", "/indexes/products/settings/ranking-rules") => "settings.update",
("POST", "/indexes/products/settings/searchable-attributes") => "settings.update",
("POST", "/indexes/products/settings/sortable-attributes") => "settings.update",
("POST", "/indexes/products/settings/stop-words") => "settings.update",
("POST", "/indexes/products/settings/synonyms") => "settings.update",
("GET", "/indexes/products/stats") => "stats.get",
("GET", "/stats") => "stats.get",
("POST", "/dumps") => "dumps.create",
("GET", "/dumps/0/status") => "dumps.get",
("GET", "/version") => "version",
("POST", "/indexes/products/search") => hashset!{"search", "*"},
("GET", "/indexes/products/search") => hashset!{"search", "*"},
("POST", "/indexes/products/documents") => hashset!{"documents.add", "documents.*", "*"},
("GET", "/indexes/products/documents") => hashset!{"documents.get", "documents.*", "*"},
("GET", "/indexes/products/documents/0") => hashset!{"documents.get", "documents.*", "*"},
("DELETE", "/indexes/products/documents/0") => hashset!{"documents.delete", "documents.*", "*"},
("GET", "/tasks") => hashset!{"tasks.get", "tasks.*", "*"},
("GET", "/tasks?indexUid=products") => hashset!{"tasks.get", "tasks.*", "*"},
("GET", "/tasks/0") => hashset!{"tasks.get", "tasks.*", "*"},
("PATCH", "/indexes/products/") => hashset!{"indexes.update", "indexes.*", "*"},
("GET", "/indexes/products/") => hashset!{"indexes.get", "indexes.*", "*"},
("DELETE", "/indexes/products/") => hashset!{"indexes.delete", "indexes.*", "*"},
("POST", "/indexes") => hashset!{"indexes.create", "indexes.*", "*"},
("GET", "/indexes") => hashset!{"indexes.get", "indexes.*", "*"},
("GET", "/indexes/products/settings") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/displayed-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/distinct-attribute") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/filterable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/ranking-rules") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/searchable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/sortable-attributes") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/stop-words") => hashset!{"settings.get", "settings.*", "*"},
("GET", "/indexes/products/settings/synonyms") => hashset!{"settings.get", "settings.*", "*"},
("DELETE", "/indexes/products/settings") => hashset!{"settings.update", "settings.*", "*"},
("PATCH", "/indexes/products/settings") => hashset!{"settings.update", "settings.*", "*"},
("PATCH", "/indexes/products/settings/typo-tolerance") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/displayed-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/distinct-attribute") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/filterable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/ranking-rules") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/searchable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/sortable-attributes") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/stop-words") => hashset!{"settings.update", "settings.*", "*"},
("PUT", "/indexes/products/settings/synonyms") => hashset!{"settings.update", "settings.*", "*"},
("GET", "/indexes/products/stats") => hashset!{"stats.get", "stats.*", "*"},
("GET", "/stats") => hashset!{"stats.get", "stats.*", "*"},
("POST", "/dumps") => hashset!{"dumps.create", "dumps.*", "*"},
("GET", "/version") => hashset!{"version", "*"},
("PATCH", "/keys/mykey/") => hashset!{"keys.update", "*"},
("GET", "/keys/mykey/") => hashset!{"keys.get", "*"},
("DELETE", "/keys/mykey/") => hashset!{"keys.delete", "*"},
("POST", "/keys") => hashset!{"keys.create", "*"},
("GET", "/keys") => hashset!{"keys.get", "*"},
}
});
static ALL_ACTIONS: Lazy<HashSet<&'static str>> =
Lazy::new(|| AUTHORIZATIONS.values().cloned().collect());
pub static ALL_ACTIONS: Lazy<HashSet<&'static str>> = Lazy::new(|| {
AUTHORIZATIONS
.values()
.cloned()
.reduce(|l, r| l.union(&r).cloned().collect())
.unwrap()
});
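Each route now maps to every action that unlocks it, including the scoped wildcards (`documents.*`, `settings.*`, ...) and the global `*`. A minimal sketch of the matching this table implies, assuming a key authorizes a request when it holds the exact action, its namespace wildcard, or `*` (illustrative only, not the server's actual implementation):
use std::collections::HashSet;
fn key_authorizes(key_actions: &HashSet<&str>, required: &str) -> bool {
    if key_actions.contains("*") || key_actions.contains(required) {
        return true;
    }
    // `documents.add` is covered by `documents.*`, `settings.get` by `settings.*`, etc.
    match required.split_once('.') {
        Some((namespace, _)) => key_actions.contains(format!("{}.*", namespace).as_str()),
        None => false,
    }
}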
static INVALID_RESPONSE: Lazy<Value> = Lazy::new(|| {
json!({"message": "The provided API key is invalid.",
@@ -61,6 +72,7 @@ static INVALID_RESPONSE: Lazy<Value> = Lazy::new(|| {
});
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_expired_key() {
use std::{thread, time};
@@ -70,11 +82,11 @@ async fn error_access_expired_key() {
let content = json!({
"indexes": ["products"],
"actions": ALL_ACTIONS.clone(),
"expiresAt": (Utc::now() + Duration::seconds(1)),
"expiresAt": (OffsetDateTime::now_utc() + Duration::seconds(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
@@ -86,12 +98,19 @@ async fn error_access_expired_key() {
for (method, route) in AUTHORIZATIONS.keys() {
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_unauthorized_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
@@ -99,11 +118,11 @@ async fn error_access_unauthorized_index() {
let content = json!({
"indexes": ["sales"],
"actions": ALL_ACTIONS.clone(),
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
@@ -116,172 +135,171 @@ async fn error_access_unauthorized_index() {
{
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_unauthorized_action() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["products"],
"actions": [],
"expiresAt": Utc::now() + Duration::hours(1),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
for ((method, route), action) in AUTHORIZATIONS.iter() {
// create a new API key allowing only the needed action.
server.use_api_key("MASTER_KEY");
// Patch the API key, granting all rights except the needed one.
let content = json!({
"actions": ALL_ACTIONS.iter().cloned().filter(|a| a != action).collect::<Vec<_>>(),
"indexes": ["products"],
"actions": ALL_ACTIONS.difference(action).collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (_, code) = server.patch_api_key(&key, content).await;
assert_eq!(code, 200);
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_eq!(403, code, "{:?}", &response);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_master_key() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// master key must have access to all routes.
for ((method, route), _) in AUTHORIZATIONS.iter() {
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?}",
method,
route
);
assert_ne!(code, 403);
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_restricted_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
for ((method, route), actions) in AUTHORIZATIONS.iter() {
for action in actions {
// create a new API key allowing only the needed action.
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["products"],
"actions": [],
"expiresAt": Utc::now() + Duration::hours(1),
});
let content = json!({
"indexes": ["products"],
"actions": [action],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
for ((method, route), action) in AUTHORIZATIONS.iter() {
// Patch the API key, allowing only the needed action.
let content = json!({
"actions": [action],
});
let (response, code) = server.dummy_request(method, route).await;
server.use_api_key("MASTER_KEY");
let (_, code) = server.patch_api_key(&key, content).await;
assert_eq!(code, 200);
server.use_api_key(&key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
// Patch the API key using the wildcard (`*`) action.
let content = json!({
"actions": ["*"],
});
server.use_api_key("MASTER_KEY");
let (_, code) = server.patch_api_key(&key, content).await;
assert_eq!(code, 200);
server.use_api_key(&key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?} with action: {:?}",
method,
route,
action
);
assert_ne!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": [],
"expiresAt": Utc::now() + Duration::hours(1),
});
for ((method, route), actions) in AUTHORIZATIONS.iter() {
for action in actions {
// create a new API key allowing only the needed action.
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let content = json!({
"indexes": ["*"],
"actions": [action],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
for ((method, route), action) in AUTHORIZATIONS.iter() {
server.use_api_key("MASTER_KEY");
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
// Patch the API key, allowing only the needed action.
let content = json!({
"actions": [action],
});
let (_, code) = server.patch_api_key(&key, content).await;
assert_eq!(code, 200);
let (response, code) = server.dummy_request(method, route).await;
server.use_api_key(&key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
// Patch the API key using the wildcard (`*`) action.
let content = json!({
"actions": ["*"],
});
server.use_api_key("MASTER_KEY");
let (_, code) = server.patch_api_key(&key, content).await;
assert_eq!(code, 200);
server.use_api_key(&key);
let (response, code) = server.dummy_request(method, route).await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
assert_ne!(
response,
INVALID_RESPONSE.clone(),
"on route: {:?} - {:?} with action: {:?}",
method,
route,
action
);
assert_ne!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_stats_restricted_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["stats.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
@@ -289,7 +307,7 @@ async fn access_authorized_stats_restricted_index() {
server.use_api_key(&key);
let (response, code) = server.stats().await;
assert_eq!(code, 200);
assert_eq!(200, code, "{:?}", &response);
// key should have access to the `products` index.
assert!(response["indexes"].get("products").is_some());
@@ -299,28 +317,29 @@ async fn access_authorized_stats_restricted_index() {
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn access_authorized_stats_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["stats.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
@@ -328,7 +347,7 @@ async fn access_authorized_stats_no_index_restriction() {
server.use_api_key(&key);
let (response, code) = server.stats().await;
assert_eq!(code, 200);
assert_eq!(200, code, "{:?}", &response);
// key should have access to the `products` index.
assert!(response["indexes"].get("products").is_some());
@@ -338,38 +357,39 @@ async fn access_authorized_stats_no_index_restriction() {
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn list_authorized_indexes_restricted_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["indexes.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let (response, code) = server.list_indexes().await;
assert_eq!(code, 200);
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(200, code, "{:?}", &response);
let response = response.as_array().unwrap();
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|index| index["uid"] == "products"));
@@ -378,38 +398,39 @@ async fn list_authorized_indexes_restricted_index() {
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn list_authorized_indexes_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["indexes.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let (response, code) = server.list_indexes().await;
assert_eq!(code, 200);
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(200, code, "{:?}", &response);
let response = response.as_array().unwrap();
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
assert!(response.iter().any(|index| index["uid"] == "products"));
@@ -420,26 +441,26 @@ async fn list_authorized_indexes_no_index_restriction() {
#[actix_rt::test]
async fn list_authorized_tasks_restricted_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to the `products` index only.
let content = json!({
"indexes": ["products"],
"actions": ["tasks.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
@@ -447,7 +468,7 @@ async fn list_authorized_tasks_restricted_index() {
server.use_api_key(&key);
let (response, code) = server.service.get("/tasks").await;
assert_eq!(code, 200);
assert_eq!(200, code, "{:?}", &response);
println!("{}", response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
@@ -460,26 +481,26 @@ async fn list_authorized_tasks_restricted_index() {
#[actix_rt::test]
async fn list_authorized_tasks_no_index_restriction() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
server.use_admin_key("MASTER_KEY").await;
// create index `test`
let index = server.index("test");
let (_, code) = index.create(Some("id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("id")).await;
assert_eq!(202, code, "{:?}", &response);
// create index `products`
let index = server.index("products");
let (_, code) = index.create(Some("product_id")).await;
assert_eq!(code, 202);
let (response, code) = index.create(Some("product_id")).await;
assert_eq!(202, code, "{:?}", &response);
index.wait_task(0).await;
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["tasks.get"],
"expiresAt": Utc::now() + Duration::hours(1),
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
@@ -487,7 +508,7 @@ async fn list_authorized_tasks_no_index_restriction() {
server.use_api_key(&key);
let (response, code) = server.service.get("/tasks").await;
assert_eq!(code, 200);
assert_eq!(200, code, "{:?}", &response);
let response = response["results"].as_array().unwrap();
// key should have access to the `products` index.
@@ -496,3 +517,136 @@ async fn list_authorized_tasks_no_index_restriction() {
// key should have access to the `test` index.
assert!(response.iter().any(|task| task["indexUid"] == "test"));
}
#[actix_rt::test]
async fn error_creating_index_without_action() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
// Give all actions except the ones that allow creating an index.
"actions": ALL_ACTIONS.iter().cloned().filter(|a| !AUTHORIZATIONS.get(&("POST","/indexes")).unwrap().contains(a)).collect::<Vec<_>>(),
"expiresAt": "2050-11-13T00:00:00Z"
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
let expected_error = json!({
"message": "Index `test` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
});
// try to create an index via the add documents route
let index = server.index("test");
let documents = json!([
{
"id": 1,
"content": "foo",
}
]);
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
// try to create an index via the update settings route
let settings = json!({ "distinctAttribute": "test"});
let (response, code) = index.update_settings(settings).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
// try to create an index via a specialized settings route
let (response, code) = index.update_distinct_attribute(json!("test")).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
let response = index.wait_task(task_id).await;
assert_eq!(response["status"], "failed");
assert_eq!(response["error"], expected_error.clone());
}
#[actix_rt::test]
async fn lazy_create_index() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
// create key with access to all indexes.
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": "2050-11-13T00:00:00Z"
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(201, code, "{:?}", &response);
assert!(response["key"].is_string());
// use created key.
let key = response["key"].as_str().unwrap();
server.use_api_key(&key);
// try to create an index via the add documents route
let index = server.index("test");
let documents = json!([
{
"id": 1,
"content": "foo",
}
]);
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
// try to create an index via the update settings route
let index = server.index("test1");
let settings = json!({ "distinctAttribute": "test"});
let (response, code) = index.update_settings(settings).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
// try to create an index via a specialized settings route
let index = server.index("test2");
let (response, code) = index.update_distinct_attribute(json!("test")).await;
assert_eq!(202, code, "{:?}", &response);
let task_id = response["taskUid"].as_u64().unwrap();
index.wait_task(task_id).await;
let (response, code) = index.get_task(task_id).await;
assert_eq!(200, code, "{:?}", &response);
assert_eq!(response["status"], "succeeded");
}

View File

@@ -1,34 +1,27 @@
mod api_keys;
mod authorization;
mod payload;
mod tenant_token;
use crate::common::server::default_settings;
use crate::common::server::TEST_TEMP_DIR;
use crate::common::Server;
use actix_web::http::StatusCode;
use serde_json::{json, Value};
use tempfile::TempDir;
impl Server {
pub async fn new_auth() -> Self {
let dir = TempDir::new().unwrap();
if cfg!(windows) {
std::env::set_var("TMP", TEST_TEMP_DIR.path());
} else {
std::env::set_var("TMPDIR", TEST_TEMP_DIR.path());
}
let mut options = default_settings(dir.path());
options.master_key = Some("MASTER_KEY".to_string());
Self::new_with_options(options).await
}
pub fn use_api_key(&mut self, api_key: impl AsRef<str>) {
self.service.api_key = Some(api_key.as_ref().to_string());
}
/// Fetch and use the default admin key for subsequent HTTP requests.
pub async fn use_admin_key(&mut self, master_key: impl AsRef<str>) {
self.use_api_key(master_key);
let (response, code) = self.list_api_keys().await;
assert_eq!(200, code, "{:?}", response);
let admin_key = &response["results"][1]["key"];
self.use_api_key(admin_key.as_str().unwrap());
}
pub async fn add_api_key(&self, content: Value) -> (Value, StatusCode) {
let url = "/keys";
self.service.post(url, content).await

View File

@@ -0,0 +1,584 @@
use crate::common::Server;
use ::time::format_description::well_known::Rfc3339;
use maplit::hashmap;
use once_cell::sync::Lazy;
use serde_json::{json, Value};
use std::collections::HashMap;
use time::{Duration, OffsetDateTime};
use super::authorization::{ALL_ACTIONS, AUTHORIZATIONS};
fn generate_tenant_token(
parent_uid: impl AsRef<str>,
parent_key: impl AsRef<str>,
mut body: HashMap<&str, Value>,
) -> String {
use jsonwebtoken::{encode, EncodingKey, Header};
let parent_uid = parent_uid.as_ref();
body.insert("apiKeyUid", json!(parent_uid));
encode(
&Header::default(),
&body,
&EncodingKey::from_secret(parent_key.as_ref().as_bytes()),
)
.unwrap()
}
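Verification is the server-side counterpart of `generate_tenant_token`. A hedged sketch, assuming the `jsonwebtoken` 8.x API (the same crate as the `encode` call above) and the HS256 algorithm implied by `Header::default()`; `exp` is left optional because several tokens below carry `"exp" => Value::Null`:
fn decode_tenant_token(
    token: &str,
    parent_key: impl AsRef<str>,
) -> jsonwebtoken::errors::Result<serde_json::Value> {
    use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
    let mut validation = Validation::new(Algorithm::HS256);
    // Some tenant tokens in these tests have no `exp`; don't require or check it here.
    validation.required_spec_claims.clear();
    validation.validate_exp = false;
    let data = decode::<serde_json::Value>(
        token,
        &DecodingKey::from_secret(parent_key.as_ref().as_bytes()),
        &validation,
    )?;
    Ok(data.claims)
}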
static DOCUMENTS: Lazy<Value> = Lazy::new(|| {
json!([
{
"title": "Shazam!",
"id": "287947",
"color": ["green", "blue"]
},
{
"title": "Captain Marvel",
"id": "299537",
"color": ["yellow", "blue"]
},
{
"title": "Escape Room",
"id": "522681",
"color": ["yellow", "red"]
},
{
"title": "How to Train Your Dragon: The Hidden World",
"id": "166428",
"color": ["green", "red"]
},
{
"title": "Glass",
"id": "450465",
"color": ["blue", "red"]
}
])
});
static INVALID_RESPONSE: Lazy<Value> = Lazy::new(|| {
json!({"message": "The provided API key is invalid.",
"code": "invalid_api_key",
"type": "auth",
"link": "https://docs.meilisearch.com/errors#invalid_api_key"
})
});
static ACCEPTED_KEYS: Lazy<Vec<Value>> = Lazy::new(|| {
vec![
json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["*"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
]
});
static REFUSED_KEYS: Lazy<Vec<Value>> = Lazy::new(|| {
vec![
// no search action
json!({
"indexes": ["*"],
"actions": ALL_ACTIONS.iter().cloned().filter(|a| *a != "search" && *a != "*").collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["sales"],
"actions": ALL_ACTIONS.iter().cloned().filter(|a| *a != "search" && *a != "*").collect::<Vec<_>>(),
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
// bad index
json!({
"indexes": ["products"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
json!({
"indexes": ["products"],
"actions": ["search"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::days(1)).format(&Rfc3339).unwrap()
}),
]
});
macro_rules! compute_authorized_search {
($tenant_tokens:expr, $filter:expr, $expected_count:expr) => {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
let index = server.index("sales");
let documents = DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(0).await;
index
.update_settings(json!({"filterableAttributes": ["color"]}))
.await;
index.wait_task(1).await;
drop(index);
for key_content in ACCEPTED_KEYS.iter() {
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(key_content.clone()).await;
assert_eq!(code, 201);
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
for tenant_token in $tenant_tokens.iter() {
let web_token = generate_tenant_token(&uid, &key, tenant_token.clone());
server.use_api_key(&web_token);
let index = server.index("sales");
index
.search(json!({ "filter": $filter }), |response, code| {
assert_eq!(
code, 200,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response, tenant_token, key_content
);
assert_eq!(
response["hits"].as_array().unwrap().len(),
$expected_count,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response,
tenant_token,
key_content
);
})
.await;
}
}
};
}
macro_rules! compute_forbidden_search {
($tenant_tokens:expr, $parent_keys:expr) => {
let mut server = Server::new_auth().await;
server.use_admin_key("MASTER_KEY").await;
let index = server.index("sales");
let documents = DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(0).await;
drop(index);
for key_content in $parent_keys.iter() {
server.use_api_key("MASTER_KEY");
let (response, code) = server.add_api_key(key_content.clone()).await;
assert_eq!(code, 201, "{:?}", response);
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
for tenant_token in $tenant_tokens.iter() {
let web_token = generate_tenant_token(&uid, &key, tenant_token.clone());
server.use_api_key(&web_token);
let index = server.index("sales");
index
.search(json!({}), |response, code| {
assert_eq!(
response,
INVALID_RESPONSE.clone(),
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response,
tenant_token,
key_content
);
assert_eq!(
code, 403,
"{} using tenant_token: {:?} generated with parent_key: {:?}",
response, tenant_token, key_content
);
})
.await;
}
}
};
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn search_authorized_simple_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => Value::Null
},
];
compute_authorized_search!(tenant_tokens, {}, 5);
}
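The tokens above exercise the two accepted shapes of `searchRules`: a map from index uid (or `*`) to an optional rule object, and a plain list of index uids. A sketch of that duality as an untagged serde enum (illustrative; not the server's real type):
use serde::Deserialize;
use std::collections::HashMap;
#[derive(Deserialize)]
#[serde(untagged)]
enum SearchRules {
    // e.g. {"sales": {"filter": "color = blue"}} or {"*": null}
    Map(HashMap<String, Option<serde_json::Value>>),
    // e.g. ["sales"] or ["*"]
    List(Vec<String>),
}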
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn search_authorized_filter_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
// filter on sales should override filters on *
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_authorized_search!(tenant_tokens, {}, 3);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn filter_search_authorized_filter_token() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": "color = blue"}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {"filter": ["color = blue"]}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
// filter on sales should override filters on *
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": "color = blue"}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {"filter": "color = green"},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({
"*": {},
"sales": {"filter": ["color = blue"]}
}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_authorized_search!(tenant_tokens, "color = yellow", 1);
}
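The "filter on sales should override filters on *" cases pin down a precedence rule: an index-specific entry wins over the `*` entry. A one-line sketch of that lookup over the raw JSON rules (assumed semantics, for illustration only):
fn resolve_rule<'a>(search_rules: &'a serde_json::Value, index: &str) -> Option<&'a serde_json::Value> {
    // Prefer the rule registered for the exact index, then fall back to `*`.
    search_rules.get(index).or_else(|| search_rules.get("*"))
}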
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_search_token_forbidden_parent_key() {
let tenant_tokens = vec![
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
];
compute_forbidden_search!(tenant_tokens, REFUSED_KEYS);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_search_forbidden_token() {
let tenant_tokens = vec![
// bad index
hashmap! {
"searchRules" => json!({"products": {}}),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["products"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"products": {}}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!({"products": Value::Null}),
"exp" => Value::Null
},
hashmap! {
"searchRules" => json!(["products"]),
"exp" => Value::Null
},
// expired token
hashmap! {
"searchRules" => json!({"*": {}}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"*": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": {}}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!({"sales": Value::Null}),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
hashmap! {
"searchRules" => json!(["sales"]),
"exp" => json!((OffsetDateTime::now_utc() - Duration::hours(1)).unix_timestamp())
},
];
compute_forbidden_search!(tenant_tokens, ACCEPTED_KEYS);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_forbidden_routes() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(&uid, &key, tenant_token);
server.use_api_key(&web_token);
for ((method, route), actions) in AUTHORIZATIONS.iter() {
if !actions.contains("search") {
let (response, code) = server.dummy_request(method, route).await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_expired_parent_key() {
use std::{thread, time};
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::seconds(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(&uid, &key, tenant_token);
server.use_api_key(&web_token);
// test search request while parent_key is not expired
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
// wait until the key is expired.
thread::sleep(time::Duration::new(1, 0));
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn error_access_modified_token() {
let mut server = Server::new_auth().await;
server.use_api_key("MASTER_KEY");
let content = json!({
"indexes": ["*"],
"actions": ["*"],
"expiresAt": (OffsetDateTime::now_utc() + Duration::hours(1)).format(&Rfc3339).unwrap(),
});
let (response, code) = server.add_api_key(content).await;
assert_eq!(code, 201);
assert!(response["key"].is_string());
let key = response["key"].as_str().unwrap();
let uid = response["uid"].as_str().unwrap();
let tenant_token = hashmap! {
"searchRules" => json!(["products"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let web_token = generate_tenant_token(&uid, &key, tenant_token);
server.use_api_key(&web_token);
// test search request while web_token is valid
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_ne!(response, INVALID_RESPONSE.clone());
assert_ne!(code, 403);
let tenant_token = hashmap! {
"searchRules" => json!(["*"]),
"exp" => json!((OffsetDateTime::now_utc() + Duration::hours(1)).unix_timestamp())
};
let alt = generate_tenant_token(&uid, &key, tenant_token);
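// Splice the three dot-separated JWT segments: keep the original header and
// signature but swap in the other token's payload. The signature no longer
// matches, so the server must reject the altered token below.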
let altered_token = [
web_token.split('.').next().unwrap(),
alt.split('.').nth(1).unwrap(),
web_token.split('.').nth(2).unwrap(),
]
.join(".");
server.use_api_key(&altered_token);
let (response, code) = server
.dummy_request("POST", "/indexes/products/search")
.await;
assert_eq!(response, INVALID_RESPONSE.clone());
assert_eq!(code, 403);
}

View File

@@ -1,32 +1,16 @@
use std::{
fmt::Write,
panic::{catch_unwind, resume_unwind, UnwindSafe},
time::Duration,
};
use actix_web::http::StatusCode;
use paste::paste;
use serde_json::{json, Value};
use tokio::time::sleep;
use urlencoding::encode;
use super::service::Service;
macro_rules! make_settings_test_routes {
($($name:ident),+) => {
$(paste! {
pub async fn [<update_$name>](&self, value: Value) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings/{}", encode(self.uid.as_ref()).to_string(), stringify!($name).replace("_", "-"));
self.service.post(url, value).await
}
pub async fn [<get_$name>](&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/settings/{}", encode(self.uid.as_ref()).to_string(), stringify!($name).replace("_", "-"));
self.service.get(url).await
}
})*
};
}
pub struct Index<'a> {
pub uid: String,
pub service: &'a Service,
@@ -35,21 +19,18 @@ pub struct Index<'a> {
#[allow(dead_code)]
impl Index<'_> {
pub async fn get(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}", encode(self.uid.as_ref()).to_string());
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.get(url).await
}
pub async fn load_test_set(&self) -> u64 {
let url = format!(
"/indexes/{}/documents",
encode(self.uid.as_ref()).to_string()
);
let url = format!("/indexes/{}/documents", encode(self.uid.as_ref()));
let (response, code) = self
.service
.post_str(url, include_str!("../assets/test_set.json"))
.await;
assert_eq!(code, 202);
let update_id = response["uid"].as_i64().unwrap();
let update_id = response["taskUid"].as_i64().unwrap();
self.wait_task(update_id as u64).await;
update_id as u64
}
@@ -66,13 +47,13 @@ impl Index<'_> {
let body = json!({
"primaryKey": primary_key,
});
let url = format!("/indexes/{}", encode(self.uid.as_ref()).to_string());
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.put(url, body).await
self.service.patch(url, body).await
}
pub async fn delete(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}", encode(self.uid.as_ref()).to_string());
let url = format!("/indexes/{}", encode(self.uid.as_ref()));
self.service.delete(url).await
}
@@ -84,13 +65,10 @@ impl Index<'_> {
let url = match primary_key {
Some(key) => format!(
"/indexes/{}/documents?primaryKey={}",
encode(self.uid.as_ref()).to_string(),
encode(self.uid.as_ref()),
key
),
None => format!(
"/indexes/{}/documents",
encode(self.uid.as_ref()).to_string()
),
None => format!("/indexes/{}/documents", encode(self.uid.as_ref())),
};
self.service.post(url, documents).await
}
@@ -103,100 +81,95 @@ impl Index<'_> {
let url = match primary_key {
Some(key) => format!(
"/indexes/{}/documents?primaryKey={}",
encode(self.uid.as_ref()).to_string(),
encode(self.uid.as_ref()),
key
),
None => format!(
"/indexes/{}/documents",
encode(self.uid.as_ref()).to_string()
),
None => format!("/indexes/{}/documents", encode(self.uid.as_ref())),
};
self.service.put(url, documents).await
}
pub async fn wait_task(&self, update_id: u64) -> Value {
// try 10 times to get status, or panic to not wait forever
// try several times to get the status, panicking rather than waiting forever
let url = format!("/tasks/{}", update_id);
for _ in 0..10 {
for _ in 0..100 {
let (response, status_code) = self.service.get(&url).await;
assert_eq!(status_code, 200, "response: {}", response);
assert_eq!(200, status_code, "response: {}", response);
if response["status"] == "succeeded" || response["status"] == "failed" {
return response;
}
sleep(Duration::from_secs(1)).await;
// wait 0.5 seconds between retries.
sleep(Duration::from_millis(500)).await;
}
panic!("Timeout waiting for update id");
}
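// Typical usage throughout these tests (a sketch, assuming an `index` handle
// and a `documents` payload in scope): enqueue a task, then block on it.
//
//     let (response, code) = index.add_documents(documents, None).await;
//     let task = index.wait_task(response["taskUid"].as_u64().unwrap()).await;
//     assert_eq!(task["status"], "succeeded");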
pub async fn get_task(&self, update_id: u64) -> (Value, StatusCode) {
let url = format!("/indexes/{}/tasks/{}", self.uid, update_id);
let url = format!("/tasks/{}", update_id);
self.service.get(url).await
}
pub async fn list_tasks(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/tasks", self.uid);
let url = format!("/tasks?indexUid={}", self.uid);
self.service.get(url).await
}
pub async fn filtered_tasks(&self, type_: &[&str], status: &[&str]) -> (Value, StatusCode) {
let mut url = format!("/tasks?indexUid={}", self.uid);
if !type_.is_empty() {
let _ = write!(url, "&type={}", type_.join(","));
}
if !status.is_empty() {
let _ = write!(url, "&status={}", status.join(","));
}
self.service.get(url).await
}
pub async fn get_document(
&self,
id: u64,
_options: Option<GetDocumentOptions>,
options: Option<GetDocumentOptions>,
) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/documents/{}",
encode(self.uid.as_ref()).to_string(),
id
);
let mut url = format!("/indexes/{}/documents/{}", encode(self.uid.as_ref()), id);
if let Some(fields) = options.and_then(|o| o.fields) {
let _ = write!(url, "?fields={}", fields.join(","));
}
self.service.get(url).await
}
pub async fn get_all_documents(&self, options: GetAllDocumentsOptions) -> (Value, StatusCode) {
let mut url = format!(
"/indexes/{}/documents?",
encode(self.uid.as_ref()).to_string()
);
let mut url = format!("/indexes/{}/documents?", encode(self.uid.as_ref()));
if let Some(limit) = options.limit {
url.push_str(&format!("limit={}&", limit));
let _ = write!(url, "limit={}&", limit);
}
if let Some(offset) = options.offset {
url.push_str(&format!("offset={}&", offset));
let _ = write!(url, "offset={}&", offset);
}
if let Some(attributes_to_retrieve) = options.attributes_to_retrieve {
url.push_str(&format!(
"attributesToRetrieve={}&",
attributes_to_retrieve.join(",")
));
let _ = write!(url, "fields={}&", attributes_to_retrieve.join(","));
}
self.service.get(url).await
}
pub async fn delete_document(&self, id: u64) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/documents/{}",
encode(self.uid.as_ref()).to_string(),
id
);
let url = format!("/indexes/{}/documents/{}", encode(self.uid.as_ref()), id);
self.service.delete(url).await
}
pub async fn clear_all_documents(&self) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/documents",
encode(self.uid.as_ref()).to_string()
);
let url = format!("/indexes/{}/documents", encode(self.uid.as_ref()));
self.service.delete(url).await
}
pub async fn delete_batch(&self, ids: Vec<u64>) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/documents/delete-batch",
encode(self.uid.as_ref()).to_string()
encode(self.uid.as_ref())
);
self.service
.post(url, serde_json::to_value(&ids).unwrap())
@@ -204,31 +177,22 @@ impl Index<'_> {
}
pub async fn settings(&self) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings",
encode(self.uid.as_ref()).to_string()
);
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.get(url).await
}
pub async fn update_settings(&self, settings: Value) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings",
encode(self.uid.as_ref()).to_string()
);
self.service.post(url, settings).await
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.patch(url, settings).await
}
pub async fn delete_settings(&self) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings",
encode(self.uid.as_ref()).to_string()
);
let url = format!("/indexes/{}/settings", encode(self.uid.as_ref()));
self.service.delete(url).await
}
pub async fn stats(&self) -> (Value, StatusCode) {
let url = format!("/indexes/{}/stats", encode(self.uid.as_ref()).to_string());
let url = format!("/indexes/{}/stats", encode(self.uid.as_ref()));
self.service.get(url).await
}
@@ -253,24 +217,38 @@ impl Index<'_> {
}
pub async fn search_post(&self, query: Value) -> (Value, StatusCode) {
let url = format!("/indexes/{}/search", encode(self.uid.as_ref()).to_string());
let url = format!("/indexes/{}/search", encode(self.uid.as_ref()));
self.service.post(url, query).await
}
pub async fn search_get(&self, query: Value) -> (Value, StatusCode) {
let params = serde_url_params::to_string(&query).unwrap();
let url = format!(
"/indexes/{}/search?{}",
encode(self.uid.as_ref()).to_string(),
params
);
let params = yaup::to_string(&query).unwrap();
let url = format!("/indexes/{}/search?{}", encode(self.uid.as_ref()), params);
self.service.get(url).await
}
make_settings_test_routes!(distinct_attribute);
pub async fn update_distinct_attribute(&self, value: Value) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings/{}",
encode(self.uid.as_ref()),
"distinct-attribute"
);
self.service.put(url, value).await
}
pub async fn get_distinct_attribute(&self) -> (Value, StatusCode) {
let url = format!(
"/indexes/{}/settings/{}",
encode(self.uid.as_ref()),
"distinct-attribute"
);
self.service.get(url).await
}
}
pub struct GetDocumentOptions;
pub struct GetDocumentOptions {
pub fields: Option<Vec<&'static str>>,
}
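// With `fields` now wired through `get_document`, a test can request a
// projected document; a sketch of the resulting call and URL:
//
//     // GET /indexes/{uid}/documents/0?fields=title,id
//     let (doc, code) = index
//         .get_document(0, Some(GetDocumentOptions { fields: Some(vec!["title", "id"]) }))
//         .await;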
#[derive(Debug, Default)]
pub struct GetAllDocumentsOptions {

View File

@@ -3,7 +3,7 @@ pub mod server;
pub mod service;
pub use index::{GetAllDocumentsOptions, GetDocumentOptions};
pub use server::Server;
pub use server::{default_settings, Server};
/// Performs a search test on both the POST and GET routes
#[macro_export]

View File

@@ -1,4 +1,6 @@
#![allow(dead_code)]
use clap::Parser;
use std::path::Path;
use actix_web::http::StatusCode;
@@ -50,7 +52,15 @@ impl Server {
}
}
pub async fn new_with_options(options: Opt) -> Self {
pub async fn new_auth_with_options(mut options: Opt, dir: TempDir) -> Self {
if cfg!(windows) {
std::env::set_var("TMP", TEST_TEMP_DIR.path());
} else {
std::env::set_var("TMPDIR", TEST_TEMP_DIR.path());
}
options.master_key = Some("MASTER_KEY".to_string());
let meilisearch = setup_meilisearch(&options).unwrap();
let auth = AuthController::new(&options.db_path, &options.master_key).unwrap();
let service = Service {
@@ -62,10 +72,32 @@ impl Server {
Server {
service,
_dir: None,
_dir: Some(dir),
}
}
pub async fn new_auth() -> Self {
let dir = TempDir::new().unwrap();
let options = default_settings(dir.path());
Self::new_auth_with_options(options, dir).await
}
pub async fn new_with_options(options: Opt) -> Result<Self, anyhow::Error> {
let meilisearch = setup_meilisearch(&options)?;
let auth = AuthController::new(&options.db_path, &options.master_key)?;
let service = Service {
meilisearch,
auth,
options,
api_key: None,
};
Ok(Server {
service,
_dir: None,
})
}
/// Returns a view of an index. There is no guarantee that the index exists.
pub fn index(&self, uid: impl AsRef<str>) -> Index<'_> {
Index {
@@ -74,8 +106,27 @@ impl Server {
}
}
pub async fn list_indexes(&self) -> (Value, StatusCode) {
self.service.get("/indexes").await
pub async fn list_indexes(
&self,
offset: Option<usize>,
limit: Option<usize>,
) -> (Value, StatusCode) {
let (offset, limit) = (
offset.map(|offset| format!("offset={offset}")),
limit.map(|limit| format!("limit={limit}")),
);
let query_parameter = offset
.as_ref()
.zip(limit.as_ref())
.map(|(offset, limit)| format!("{offset}&{limit}"))
.or_else(|| offset.xor(limit));
if let Some(query_parameter) = query_parameter {
self.service
.get(format!("/indexes?{query_parameter}"))
.await
} else {
self.service.get("/indexes").await
}
}
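// The zip/xor combinator above yields, for each input shape:
//   list_indexes(None, None)        -> GET /indexes
//   list_indexes(Some(5), None)     -> GET /indexes?offset=5
//   list_indexes(None, Some(10))    -> GET /indexes?limit=10
//   list_indexes(Some(5), Some(10)) -> GET /indexes?offset=5&limit=10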
pub async fn version(&self) -> (Value, StatusCode) {
@@ -99,33 +150,18 @@ pub fn default_settings(dir: impl AsRef<Path>) -> Opt {
Opt {
db_path: dir.as_ref().join("db"),
dumps_dir: dir.as_ref().join("dump"),
http_addr: "127.0.0.1:7700".to_owned(),
master_key: None,
env: "development".to_owned(),
#[cfg(all(not(debug_assertions), feature = "analytics"))]
no_analytics: Some(Some(true)),
max_index_size: Byte::from_unit(4.0, ByteUnit::GiB).unwrap(),
max_task_db_size: Byte::from_unit(4.0, ByteUnit::GiB).unwrap(),
no_analytics: true,
max_index_size: Byte::from_unit(100.0, ByteUnit::MiB).unwrap(),
max_task_db_size: Byte::from_unit(1.0, ByteUnit::GiB).unwrap(),
http_payload_size_limit: Byte::from_unit(10.0, ByteUnit::MiB).unwrap(),
ssl_cert_path: None,
ssl_key_path: None,
ssl_auth_path: None,
ssl_ocsp_path: None,
ssl_require_auth: false,
ssl_resumption: false,
ssl_tickets: false,
import_snapshot: None,
ignore_missing_snapshot: false,
ignore_snapshot_if_db_exists: false,
snapshot_dir: ".".into(),
schedule_snapshot: false,
snapshot_interval_sec: 0,
import_dump: None,
indexer_options: IndexerOpts {
// memory has to be unlimited because several meilisearch are running in test context.
max_memory: MaxMemory::unlimited(),
..Default::default()
max_indexing_memory: MaxMemory::unlimited(),
..Parser::parse_from(None as Option<&str>)
},
log_level: "off".into(),
..Parser::parse_from(None as Option<&str>)
}
}
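`Parser::parse_from(None as Option<&str>)` asks clap to parse an empty argument list, so every remaining field is filled with its declared default and the test config only overrides what it cares about. A minimal, self-contained illustration with a hypothetical `DemoOpts` struct (not from this codebase):
use clap::Parser;
#[derive(Parser, Debug)]
struct DemoOpts {
    #[clap(long, default_value = "info")]
    log_level: String,
}
fn demo_defaults() {
    // An empty iterator means the real process arguments are never consulted.
    let opts = DemoOpts::parse_from(None as Option<&str>);
    assert_eq!(opts.log_level, "info");
}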

View File

@@ -7,23 +7,45 @@ use actix_web::test;
use meilisearch_http::{analytics, create_app};
use serde_json::{json, Value};
enum HttpVerb {
Put,
Patch,
Post,
Get,
Delete,
}
impl HttpVerb {
fn test_request(&self) -> test::TestRequest {
match self {
HttpVerb::Put => test::TestRequest::put(),
HttpVerb::Patch => test::TestRequest::patch(),
HttpVerb::Post => test::TestRequest::post(),
HttpVerb::Get => test::TestRequest::get(),
HttpVerb::Delete => test::TestRequest::delete(),
}
}
}
#[actix_rt::test]
async fn error_json_bad_content_type() {
use HttpVerb::{Patch, Post, Put};
let routes = [
// all the POST routes except the dumps that can be created without any body or content-type
// all the routes except dumps, which can be created without any body or content-type,
// and search, whose payload is not strict json
"/indexes",
"/indexes/doggo/documents/delete-batch",
"/indexes/doggo/search",
"/indexes/doggo/settings",
"/indexes/doggo/settings/displayed-attributes",
"/indexes/doggo/settings/distinct-attribute",
"/indexes/doggo/settings/filterable-attributes",
"/indexes/doggo/settings/ranking-rules",
"/indexes/doggo/settings/searchable-attributes",
"/indexes/doggo/settings/sortable-attributes",
"/indexes/doggo/settings/stop-words",
"/indexes/doggo/settings/synonyms",
(Post, "/indexes"),
(Post, "/indexes/doggo/documents/delete-batch"),
(Post, "/indexes/doggo/search"),
(Patch, "/indexes/doggo/settings"),
(Put, "/indexes/doggo/settings/displayed-attributes"),
(Put, "/indexes/doggo/settings/distinct-attribute"),
(Put, "/indexes/doggo/settings/filterable-attributes"),
(Put, "/indexes/doggo/settings/ranking-rules"),
(Put, "/indexes/doggo/settings/searchable-attributes"),
(Put, "/indexes/doggo/settings/sortable-attributes"),
(Put, "/indexes/doggo/settings/stop-words"),
(Put, "/indexes/doggo/settings/synonyms"),
];
let bad_content_types = [
"application/csv",
@@ -45,10 +67,11 @@ async fn error_json_bad_content_type() {
analytics::MockAnalytics::new(&server.service.options).0
))
.await;
for (verb, route) in routes {
// Good content-type, we probably have an error since we didn't send anything in the json
// so we only ensure we didn't get a bad media type error.
let req = verb
.test_request()
.uri(route)
.set_payload(document)
.insert_header(("content-type", "application/json"))
@@ -59,7 +82,8 @@ async fn error_json_bad_content_type() {
"calling the route `{}` with a content-type of json isn't supposed to throw a bad media type error", route);
// No content-type.
let req = verb
.test_request()
.uri(route)
.set_payload(document)
.to_request();
@@ -82,7 +106,8 @@ async fn error_json_bad_content_type() {
for bad_content_type in bad_content_types {
// Always bad content-type
let req = verb
.test_request()
.uri(route)
.set_payload(document.to_string())
.insert_header(("content-type", bad_content_type))

View File

@@ -1,8 +1,8 @@
use crate::common::{GetAllDocumentsOptions, Server};
use actix_web::test;
use meilisearch_http::{analytics, create_app};
use serde_json::{json, Value};
use time::{format_description::well_known::Rfc3339, OffsetDateTime};
/// This is the basic usage of our API and every other test uses the content-type application/json
#[actix_rt::test]
@@ -35,7 +35,7 @@ async fn add_documents_test_json_content_types() {
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response["uid"], 0);
assert_eq!(response["taskUid"], 0);
// put
let req = test::TestRequest::put()
@@ -48,7 +48,7 @@ async fn add_documents_test_json_content_types() {
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 202);
assert_eq!(response["uid"], 1);
assert_eq!(response["taskUid"], 1);
}
/// any other content-type must be refused
@@ -212,7 +212,7 @@ async fn error_add_malformed_csv_documents() {
assert_eq!(
response["message"],
json!(
r#"The `csv` payload provided is malformed. `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
r#"The `csv` payload provided is malformed: `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
@@ -236,7 +236,7 @@ async fn error_add_malformed_csv_documents() {
assert_eq!(
response["message"],
json!(
r#"The `csv` payload provided is malformed. `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
r#"The `csv` payload provided is malformed: `CSV error: record 1 (line: 2, byte: 12): found record with 3 fields, but the previous record has 2 fields`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
@@ -307,6 +307,58 @@ async fn error_add_malformed_json_documents() {
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
// truncate
// length = 100
let long = "0123456789".repeat(10);
let document = format!("\"{}\"", long);
let req = test::TestRequest::put()
.uri("/indexes/dog/documents")
.set_payload(document)
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(
response["message"],
json!(
r#"The `json` payload provided is malformed. `Couldn't serialize document value: invalid type: string "0123456789012345678901234567...890123456789", expected a documents, or a sequence of documents. at line 1 column 102`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
// add one more char to the long string to test that the truncation works.
let document = format!("\"{}m\"", long);
let req = test::TestRequest::put()
.uri("/indexes/dog/documents")
.set_payload(document)
.insert_header(("content-type", "application/json"))
.to_request();
let res = test::call_service(&app, req).await;
let body = test::read_body(res).await;
let response: Value = serde_json::from_slice(&body).unwrap_or_default();
assert_eq!(status_code, 400);
assert_eq!(
response["message"],
json!(
r#"The `json` payload provided is malformed. `Couldn't serialize document value: invalid type: string "0123456789012345678901234567...90123456789m", expected a documents, or a sequence of documents. at line 1 column 103`."#
)
);
assert_eq!(response["code"], json!("malformed_payload"));
assert_eq!(response["type"], json!("invalid_request"));
assert_eq!(
response["link"],
json!("https://docs.meilisearch.com/errors#malformed_payload")
);
}
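// A minimal sketch, assuming a middle-truncation strategy like the one the error
// messages above exhibit (this is not Meilisearch's actual helper): long string values
// keep a prefix and a suffix around an ellipsis so error messages stay readable.
#[test]
fn truncate_middle_sketch() {
    fn truncate_middle(s: &str, max: usize) -> String {
        // assumes ASCII input, as in the test payloads above
        if s.len() <= max {
            return s.to_string();
        }
        let head = (max - 3) / 2;
        let tail = max - 3 - head;
        format!("{}...{}", &s[..head], &s[s.len() - tail..])
    }

    let long = "0123456789".repeat(10); // length 100, as in the test above
    let shown = truncate_middle(&long, 43);
    assert_eq!(shown.len(), 43);
    assert!(shown.starts_with("01234567890123456789"));
    assert!(shown.ends_with("0123456789"));
}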
#[actix_rt::test]
@@ -547,7 +599,7 @@ async fn add_documents_no_index_creation() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
assert_eq!(response["uid"], 0);
assert_eq!(response["taskUid"], 0);
/*
* currently we don't check these fields to stay ISO with meilisearch
* assert_eq!(response["status"], "pending");
@@ -563,17 +615,17 @@ async fn add_documents_no_index_creation() {
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentAddition");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["receivedDocuments"], 1);
assert_eq!(response["details"]["indexedDocuments"], 1);
let processed_at =
DateTime::parse_from_rfc3339(response["finishedAt"].as_str().unwrap()).unwrap();
OffsetDateTime::parse(response["finishedAt"].as_str().unwrap(), &Rfc3339).unwrap();
let enqueued_at =
DateTime::parse_from_rfc3339(response["enqueuedAt"].as_str().unwrap()).unwrap();
OffsetDateTime::parse(response["enqueuedAt"].as_str().unwrap(), &Rfc3339).unwrap();
assert!(processed_at > enqueued_at);
// index was created, and primary key was inferred.
let (response, code) = index.get().await;
assert_eq!(code, 200);
assert_eq!(response["primaryKey"], "id");
@@ -586,7 +638,7 @@ async fn error_document_add_create_index_bad_uid() {
let (response, code) = index.add_documents(json!([{"id": 1}]), None).await;
let expected_response = json!({
"message": "`883 fj!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `883 fj!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"
@@ -603,7 +655,7 @@ async fn error_document_update_create_index_bad_uid() {
let (response, code) = index.update_documents(json!([{"id": 1}]), None).await;
let expected_response = json!({
"message": "`883 fj!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `883 fj!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"
@@ -633,7 +685,7 @@ async fn document_addition_with_primary_key() {
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentAddition");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["receivedDocuments"], 1);
assert_eq!(response["details"]["indexedDocuments"], 1);
@@ -662,7 +714,7 @@ async fn document_update_with_primary_key() {
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["uid"], 0);
assert_eq!(response["type"], "documentPartial");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 1);
assert_eq!(response["details"]["receivedDocuments"], 1);
@@ -710,20 +762,11 @@ async fn replace_document() {
}
#[actix_rt::test]
async fn add_no_documents() {
let server = Server::new().await;
let index = server.index("test");
let (_response, code) = index.add_documents(json!([]), None).await;
assert_eq!(code, 202);
}
#[actix_rt::test]
@@ -775,7 +818,7 @@ async fn add_larger_dataset() {
let (response, code) = index.get_task(update_id).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "succeeded");
assert_eq!(response["type"], "documentAddition");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 77);
assert_eq!(response["details"]["receivedDocuments"], 77);
let (response, code) = index
@@ -784,8 +827,8 @@ async fn add_larger_dataset() {
..Default::default()
})
.await;
assert_eq!(code, 200, "failed with `{}`", response);
assert_eq!(response["results"].as_array().unwrap().len(), 77);
}
#[actix_rt::test]
@@ -797,7 +840,7 @@ async fn update_larger_dataset() {
index.wait_task(0).await;
let (response, code) = index.get_task(0).await;
assert_eq!(code, 200);
assert_eq!(response["type"], "documentPartial");
assert_eq!(response["type"], "documentAdditionOrUpdate");
assert_eq!(response["details"]["indexedDocuments"], 77);
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -806,7 +849,7 @@ async fn update_larger_dataset() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 77);
}
#[actix_rt::test]
@@ -825,7 +868,12 @@ async fn error_add_documents_bad_document_id() {
let (response, code) = index.get_task(1).await;
assert_eq!(code, 200);
assert_eq!(response["status"], json!("failed"));
assert_eq!(response["error"]["message"], json!("Document identifier `foo & bar` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."));
assert_eq!(
response["error"]["message"],
json!(
r#"Document identifier `"foo & bar"` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."#
)
);
assert_eq!(response["error"]["code"], json!("invalid_document_id"));
assert_eq!(response["error"]["type"], json!("invalid_request"));
assert_eq!(
@@ -848,7 +896,12 @@ async fn error_update_documents_bad_document_id() {
index.update_documents(documents, None).await;
let response = index.wait_task(1).await;
assert_eq!(response["status"], json!("failed"));
assert_eq!(response["error"]["message"], json!("Document identifier `foo & bar` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."));
assert_eq!(
response["error"]["message"],
json!(
r#"Document identifier `"foo & bar"` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_)."#
)
);
assert_eq!(response["error"]["code"], json!("invalid_document_id"));
assert_eq!(response["error"]["type"], json!("invalid_request"));
assert_eq!(
@@ -948,7 +1001,7 @@ async fn error_document_field_limit_reached() {
}
#[actix_rt::test]
async fn add_documents_invalid_geo_field() {
let server = Server::new().await;
let index = server.index("test");
index.create(Some("id")).await;
@@ -967,16 +1020,7 @@ async fn error_add_documents_invalid_geo_field() {
index.wait_task(2).await;
let (response, code) = index.get_task(2).await;
assert_eq!(code, 200);
assert_eq!(response["status"], "failed");
let expected_error = json!({
"message": r#"The document with the id: `11` contains an invalid _geo field: `foobar`."#,
"code": "invalid_geo_field",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_geo_field"
});
assert_eq!(response["error"], expected_error);
assert_eq!(response["status"], "succeeded");
}
#[actix_rt::test]

View File

@@ -72,7 +72,7 @@ async fn clear_all_documents() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@@ -89,7 +89,7 @@ async fn clear_all_documents_empty_index() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@@ -125,8 +125,8 @@ async fn delete_batch() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 1);
assert_eq!(response["results"][0]["id"], json!(3));
}
#[actix_rt::test]
@@ -143,5 +143,5 @@ async fn delete_no_document_batch() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 3);
}

View File

@@ -1,5 +1,4 @@
use crate::common::{GetAllDocumentsOptions, GetDocumentOptions, Server};
use serde_json::json;
@@ -39,19 +38,51 @@ async fn get_document() {
let documents = serde_json::json!([
{
"id": 0,
"content": "foobar",
"nested": { "content": "foobar" },
}
]);
let (_, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_task(1).await;
let (response, code) = index.get_document(0, None).await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!({
"id": 0,
"content": "foobar",
"nested": { "content": "foobar" },
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["id"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!({
"id": 0,
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["nested.content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
serde_json::json!({
"nested": { "content": "foobar" },
})
);
}
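// A hypothetical sketch of the `fields` option used above (the struct shape and
// `to_query` here are illustrative, not the real tests/common helper): the selected
// fields end up as a comma-separated `?fields=` query parameter on the GET document route.
#[test]
fn fields_query_parameter_sketch() {
    struct GetDocumentOptions {
        fields: Option<Vec<&'static str>>,
    }

    fn to_query(options: &GetDocumentOptions) -> String {
        match &options.fields {
            Some(fields) => format!("?fields={}", fields.join(",")),
            None => String::new(),
        }
    }

    let options = GetDocumentOptions {
        fields: Some(vec!["id", "nested.content"]),
    };
    assert_eq!(to_query(&options), "?fields=id,nested.content");
}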
@@ -88,7 +119,7 @@ async fn get_no_document() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@@ -101,7 +132,7 @@ async fn get_all_documents_no_options() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 20);
let first = serde_json::json!({
"id":0,
@@ -137,8 +168,11 @@ async fn test_get_all_documents_limit() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 5);
assert_eq!(response["results"][0]["id"], json!(0));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(5));
assert_eq!(response["total"], json!(77));
}
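// A sketch of the paginated envelope asserted above: the documents route now wraps
// the hits in `results` and reports `offset`, `limit` and `total` alongside them.
// The values are illustrative.
#[test]
fn paginated_envelope_shape_sketch() {
    use serde_json::json;

    let response = json!({
        "results": [{ "id": 0 }, { "id": 1 }],
        "offset": 0,
        "limit": 5,
        "total": 77
    });
    assert_eq!(response["results"].as_array().unwrap().len(), 2);
    assert_eq!(response["offset"], json!(0));
    assert_eq!(response["limit"], json!(5));
    assert_eq!(response["total"], json!(77));
}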
#[actix_rt::test]
@@ -154,8 +188,11 @@ async fn test_get_all_documents_offset() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
assert_eq!(response["results"][0]["id"], json!(5));
assert_eq!(response["offset"], json!(5));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
}
#[actix_rt::test]
@@ -171,20 +208,14 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 1);
assert!(results["name"] != json!(null));
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -193,15 +224,13 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 0);
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -210,15 +239,13 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 0);
}
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -227,15 +254,12 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 2);
assert!(results["name"] != json!(null));
assert!(results["tags"] != json!(null));
}
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -244,15 +268,10 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 16);
}
let (response, code) = index
.get_all_documents(GetAllDocumentsOptions {
@@ -261,19 +280,99 @@ async fn test_get_all_documents_attributes_to_retrieve() {
})
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
for results in response["results"].as_array().unwrap() {
assert_eq!(results.as_object().unwrap().keys().count(), 16);
}
}
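// A sketch (assumed behaviour, not the real implementation) of what
// `attributes_to_retrieve` asserts above: only the requested top-level keys
// survive in each returned document.
#[test]
fn retain_attributes_sketch() {
    use serde_json::{json, Map, Value};

    fn retain_attributes(doc: &Value, attributes: &[&str]) -> Value {
        let filtered: Map<String, Value> = doc
            .as_object()
            .unwrap()
            .iter()
            .filter(|(key, _)| attributes.contains(&key.as_str()))
            .map(|(key, value)| (key.clone(), value.clone()))
            .collect();
        Value::Object(filtered)
    }

    let doc = json!({ "id": 0, "name": "kevin", "tags": ["doggo"] });
    let kept = retain_attributes(&doc, &["name", "tags"]);
    assert_eq!(kept.as_object().unwrap().keys().count(), 2);
    assert!(kept["name"] != json!(null));
}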
#[actix_rt::test]
async fn get_document_s_nested_attributes_to_retrieve() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
let documents = json!([
{
"id": 0,
"content.truc": "foobar",
},
{
"id": 1,
"content": {
"truc": "foobar",
"machin": "bidule",
},
},
]);
let (_, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
index.wait_task(1).await;
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(response, json!({}));
let (response, code) = index
.get_document(
1,
Some(GetDocumentOptions {
fields: Some(vec!["content"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
json!({
"content": {
"truc": "foobar",
"machin": "bidule",
},
})
);
let (response, code) = index
.get_document(
0,
Some(GetDocumentOptions {
fields: Some(vec!["content.truc"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
json!({
"content.truc": "foobar",
})
);
let (response, code) = index
.get_document(
1,
Some(GetDocumentOptions {
fields: Some(vec!["content.truc"]),
}),
)
.await;
assert_eq!(code, 200);
assert_eq!(
response,
json!({
"content": {
"truc": "foobar",
},
})
);
}
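// A sketch of the selection semantics exercised above (assumed behaviour, not the
// real implementation): a selector such as `content.truc` matches both a literal
// top-level key named "content.truc" and the nested path content -> truc.
#[test]
fn dotted_field_selection_sketch() {
    use serde_json::{json, Map, Value};

    fn select(doc: &Value, selector: &str) -> Value {
        let object = match doc.as_object() {
            Some(object) => object,
            None => return json!({}),
        };
        // literal key match first
        if let Some(value) = object.get(selector) {
            let mut out = Map::new();
            out.insert(selector.to_string(), value.clone());
            return Value::Object(out);
        }
        // then try the dotted path
        if let Some((head, tail)) = selector.split_once('.') {
            if let Some(inner) = object.get(head) {
                let nested = select(inner, tail);
                if nested != json!({}) {
                    let mut out = Map::new();
                    out.insert(head.to_string(), nested);
                    return Value::Object(out);
                }
            }
        }
        json!({})
    }

    let flat = json!({ "id": 0, "content.truc": "foobar" });
    let nested = json!({ "id": 1, "content": { "truc": "foobar" } });
    assert_eq!(select(&flat, "content.truc"), json!({ "content.truc": "foobar" }));
    assert_eq!(select(&nested, "content.truc"), json!({ "content": { "truc": "foobar" } }));
    assert_eq!(select(&flat, "content"), json!({}));
}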
#[actix_rt::test]
async fn get_documents_displayed_attributes_is_ignored() {
let server = Server::new().await;
let index = server.index("test");
index
@@ -285,23 +384,19 @@ async fn get_documents_displayed_attributes() {
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 20);
assert_eq!(
response["results"][0].as_object().unwrap().keys().count(),
16
);
assert!(response["results"][0]["gender"] != json!(null));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["limit"], json!(20));
assert_eq!(response["total"], json!(77));
let (response, code) = index.get_document(0, None).await;
assert_eq!(code, 200);
assert_eq!(response.as_object().unwrap().keys().count(), 16);
assert!(response.as_object().unwrap().get("gender").is_some());
}

View File

@@ -1,22 +0,0 @@
#![allow(dead_code)]
mod common;
use crate::common::Server;
use serde_json::json;
#[actix_rt::test]
async fn get_unexisting_dump_status() {
let server = Server::new().await;
let (response, code) = server.get_dump_status("foobar").await;
assert_eq!(code, 404);
let expected_response = json!({
"message": "Dump `foobar` not found.",
"code": "dump_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#dump_not_found"
});
assert_eq!(response, expected_response);
}

View File

@@ -0,0 +1,73 @@
use std::path::PathBuf;
use manifest_dir_macros::exist_relative_path;
pub enum GetDump {
MoviesRawV1,
MoviesWithSettingsV1,
RubyGemsWithSettingsV1,
MoviesRawV2,
MoviesWithSettingsV2,
RubyGemsWithSettingsV2,
MoviesRawV3,
MoviesWithSettingsV3,
RubyGemsWithSettingsV3,
MoviesRawV4,
MoviesWithSettingsV4,
RubyGemsWithSettingsV4,
TestV5,
}
impl GetDump {
pub fn path(&self) -> PathBuf {
match self {
GetDump::MoviesRawV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_movies.dump").into()
}
GetDump::MoviesWithSettingsV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV1 => {
exist_relative_path!("tests/assets/v1_v0.20.0_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_movies.dump").into()
}
GetDump::MoviesWithSettingsV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV2 => {
exist_relative_path!("tests/assets/v2_v0.21.1_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_movies.dump").into()
}
GetDump::MoviesWithSettingsV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV3 => {
exist_relative_path!("tests/assets/v3_v0.24.0_rubygems_with_settings.dump").into()
}
GetDump::MoviesRawV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_movies.dump").into()
}
GetDump::MoviesWithSettingsV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_movies_with_settings.dump").into()
}
GetDump::RubyGemsWithSettingsV4 => {
exist_relative_path!("tests/assets/v4_v0.25.2_rubygems_with_settings.dump").into()
}
GetDump::TestV5 => {
exist_relative_path!("tests/assets/v5_v0.28.0_test_dump.dump").into()
}
}
}
}
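// A small usage sketch for the helper above (assuming, as the `exist_` macro family
// suggests, that the asset's existence is checked at compile time): `path()` resolves
// a bundled dump to its file under `tests/assets`.
#[test]
fn dump_assets_resolve_sketch() {
    let path = GetDump::MoviesRawV2.path();
    assert!(path.to_string_lossy().ends_with("v2_v0.21.1_movies.dump"));
    assert!(GetDump::TestV5
        .path()
        .to_string_lossy()
        .contains("v5_v0.28.0_test_dump"));
}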

View File

@@ -0,0 +1,677 @@
mod data;
use crate::common::{default_settings, GetAllDocumentsOptions, Server};
use meilisearch_http::Opt;
use serde_json::json;
use self::data::GetDump;
// all the following tests are ignored on windows. See #2364
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v1() {
let temp = tempfile::tempdir().unwrap();
for path in [
GetDump::MoviesRawV1.path(),
GetDump::MoviesWithSettingsV1.path(),
GetDump::RubyGemsWithSettingsV1.path(),
] {
let options = Opt {
import_dump: Some(path),
..default_settings(temp.path())
};
let error = Server::new_with_options(options)
.await
.map(|_| ())
.unwrap_err();
assert_eq!(error.to_string(), "The version 1 of the dumps is not supported anymore. You can re-export your dump from a version between 0.21 and 0.24, or start fresh from a version 0.25 onwards.");
}
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_movie_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesWithSettingsV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v2_rubygems_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::RubyGemsWithSettingsV2.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("rubygems");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 }})
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks["results"][0],
json!({"uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(188040, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
);
let (document, code) = index.get_document(191940, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
);
let (document, code) = index.get_document(159227, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit": 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({"id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_movie_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesWithSettingsV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v3_rubygems_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::RubyGemsWithSettingsV3.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("rubygems");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({"displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks["results"][0],
json!({"uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(188040, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
);
let (document, code) = index.get_document(191940, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
);
let (document, code) = index.get_document(159227, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_movie_raw() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesRawV4.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["*"], "searchableAttributes": ["*"], "filterableAttributes": [], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{"uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT41.751156S", "enqueuedAt": "2021-09-08T08:30:30.550282Z", "startedAt": "2021-09-08T08:30:30.553012Z", "finishedAt": "2021-09-08T08:31:12.304168Z" }], "limit" : 20, "from": 0, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "genres": ["Comedy", "Crime"], "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000})
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "genres": ["Crime", "Thriller"], "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "genres": ["Action", "Crime", "Drama"], "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_movie_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::MoviesWithSettingsV4.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("indexUID"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("indexUID");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"genres": 53, "id": 53, "overview": 53, "poster": 53, "release_date": 53, "title": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "sortableAttributes": [], "rankingRules": ["words", "typo", "proximity", "attribute", "exactness"], "stopWords": ["of", "the"], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": { "oneTypo": 5, "twoTypos": 9 }, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks,
json!({ "results": [{ "uid": 1, "indexUid": "indexUID", "status": "succeeded", "type": "settingsUpdate", "details": { "displayedAttributes": ["title", "genres", "overview", "poster", "release_date"], "searchableAttributes": ["title", "overview"], "filterableAttributes": ["genres"], "stopWords": ["of", "the"] }, "duration": "PT37.488777S", "enqueuedAt": "2021-09-08T08:24:02.323444Z", "startedAt": "2021-09-08T08:24:02.324145Z", "finishedAt": "2021-09-08T08:24:39.812922Z" }, { "uid": 0, "indexUid": "indexUID", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": { "receivedDocuments": 0, "indexedDocuments": 31944 }, "duration": "PT39.941318S", "enqueuedAt": "2021-09-08T08:21:14.742672Z", "startedAt": "2021-09-08T08:21:14.750166Z", "finishedAt": "2021-09-08T08:21:54.691484Z" }], "limit": 20, "from": 1, "next": null })
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(100, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 100, "title": "Lock, Stock and Two Smoking Barrels", "genres": ["Comedy", "Crime"], "overview": "A card shark and his unwillingly-enlisted friends need to make a lot of cash quick after losing a sketchy poker match. To do this they decide to pull a heist on a small-time gang who happen to be operating out of the flat next door.", "poster": "https://image.tmdb.org/t/p/w500/8kSerJrhrJWKLk1LViesGcnrUPE.jpg", "release_date": 889056000 })
);
let (document, code) = index.get_document(500, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 500, "title": "Reservoir Dogs", "genres": ["Crime", "Thriller"], "overview": "A botched robbery indicates a police informant, and the pressure mounts in the aftermath at a warehouse. Crime begets violence as the survivors -- veteran Mr. White, newcomer Mr. Orange, psychopathic parolee Mr. Blonde, bickering weasel Mr. Pink and Nice Guy Eddie -- unravel.", "poster": "https://image.tmdb.org/t/p/w500/AjTtJNumZyUDz33VtMlF1K8JPsE.jpg", "release_date": 715392000})
);
let (document, code) = index.get_document(10006, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "id": 10006, "title": "Wild Seven", "genres": ["Action", "Crime", "Drama"], "overview": "In this darkly karmic vision of Arizona, a man who breathes nothing but ill will begins a noxious domino effect as quickly as an uncontrollable virus kills. As he exits Arizona State Penn after twenty-one long years, Wilson has only one thing on the brain, leveling the score with career criminal, Mackey Willis.", "poster": "https://image.tmdb.org/t/p/w500/y114dTPoqn8k2Txps4P2tI95YCS.jpg", "release_date": 1136073600})
);
}
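// An illustrative reading of the keyset pagination fields (`limit`, `from`,
// `next`) in the task list above, which is returned newest-first: `from` is
// the uid the page starts at and `next` is the uid to request the following
// page, or null when the queue is exhausted. A sketch under those
// assumptions, not Meilisearch's implementation.
fn page(uids_desc: &[u64], from: Option<u64>, limit: usize) -> (Vec<u64>, Option<u64>) {
    let start = match from {
        Some(f) => uids_desc.iter().position(|&u| u <= f).unwrap_or(uids_desc.len()),
        None => 0,
    };
    let end = (start + limit).min(uids_desc.len());
    (uids_desc[start..end].to_vec(), uids_desc.get(end).copied())
}

fn main() {
    // Two tasks, as in the dump above: uid 1 (settingsUpdate), uid 0 (documentAdditionOrUpdate).
    assert_eq!(page(&[1, 0], None, 20), (vec![1, 0], None));
    assert_eq!(page(&[1, 0], None, 1), (vec![1], Some(0)));
}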
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v4_rubygems_with_settings() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::RubyGemsWithSettingsV4.path()),
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await.unwrap();
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(indexes["results"].as_array().unwrap().len(), 1);
assert_eq!(indexes["results"][0]["uid"], json!("rubygems"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let index = server.index("rubygems");
let (stats, code) = index.stats().await;
assert_eq!(code, 200);
assert_eq!(
stats,
json!({ "numberOfDocuments": 53, "isIndexing": false, "fieldDistribution": {"description": 53, "id": 53, "name": 53, "summary": 53, "total_downloads": 53, "version": 53 }})
);
let (settings, code) = index.settings().await;
assert_eq!(code, 200);
assert_eq!(
settings,
json!({ "displayedAttributes": ["name", "summary", "description", "version", "total_downloads"], "searchableAttributes": ["name", "summary"], "filterableAttributes": ["version"], "sortableAttributes": [], "rankingRules": ["typo", "words", "fame:desc", "proximity", "attribute", "exactness", "total_downloads:desc"], "stopWords": [], "synonyms": {}, "distinctAttribute": null, "typoTolerance": {"enabled": true, "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9}, "disableOnWords": [], "disableOnAttributes": [] }, "faceting": { "maxValuesPerFacet": 100 }, "pagination": { "maxTotalHits": 1000 } })
);
let (tasks, code) = index.list_tasks().await;
assert_eq!(code, 200);
assert_eq!(
tasks["results"][0],
json!({ "uid": 92, "indexUid": "rubygems", "status": "succeeded", "type": "documentAdditionOrUpdate", "details": {"receivedDocuments": 0, "indexedDocuments": 1042}, "duration": "PT14.034672S", "enqueuedAt": "2021-09-08T08:40:31.390775Z", "startedAt": "2021-09-08T08:51:39.060642Z", "finishedAt": "2021-09-08T08:51:53.095314Z"})
);
// finally we're just going to check that we can still get a few documents by id
let (document, code) = index.get_document(188040, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "meilisearch", "summary": "An easy-to-use ruby client for Meilisearch API", "description": "An easy-to-use ruby client for Meilisearch API. See https://github.com/meilisearch/MeiliSearch", "id": "188040", "version": "0.15.2", "total_downloads": "7465"})
);
let (document, code) = index.get_document(191940, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "doggo", "summary": "RSpec 3 formatter - documentation, with progress indication", "description": "Similar to \"rspec -f d\", but also indicates progress by showing the current test number and total test count on each line.", "id": "191940", "version": "1.1.0", "total_downloads": "9394"})
);
let (document, code) = index.get_document(159227, None).await;
assert_eq!(code, 200);
assert_eq!(
document,
json!({ "name": "vortex-of-agony", "summary": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "description": "You dont need to use nodejs or go, just install this plugin. It will crash your application at random", "id": "159227", "version": "0.1.0", "total_downloads": "1007"})
);
}
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn import_dump_v5() {
let temp = tempfile::tempdir().unwrap();
let options = Opt {
import_dump: Some(GetDump::TestV5.path()),
..default_settings(temp.path())
};
let mut server = Server::new_auth_with_options(options, temp).await;
server.use_api_key("MASTER_KEY");
let (indexes, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200, "{indexes}");
assert_eq!(indexes["results"].as_array().unwrap().len(), 2);
assert_eq!(indexes["results"][0]["uid"], json!("test"));
assert_eq!(indexes["results"][1]["uid"], json!("test2"));
assert_eq!(indexes["results"][0]["primaryKey"], json!("id"));
let expected_stats = json!({
"numberOfDocuments": 10,
"isIndexing": false,
"fieldDistribution": {
"cast": 10,
"director": 10,
"genres": 10,
"id": 10,
"overview": 10,
"popularity": 10,
"poster_path": 10,
"producer": 10,
"production_companies": 10,
"release_date": 10,
"tagline": 10,
"title": 10,
"vote_average": 10,
"vote_count": 10
}
});
let index1 = server.index("test");
let index2 = server.index("test2");
let (stats, code) = index1.stats().await;
assert_eq!(code, 200);
assert_eq!(stats, expected_stats);
let (docs, code) = index2
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(docs["results"].as_array().unwrap().len(), 10);
let (docs, code) = index1
.get_all_documents(GetAllDocumentsOptions::default())
.await;
assert_eq!(code, 200);
assert_eq!(docs["results"].as_array().unwrap().len(), 10);
let (stats, code) = index2.stats().await;
assert_eq!(code, 200);
assert_eq!(stats, expected_stats);
let (keys, code) = server.list_api_keys().await;
assert_eq!(code, 200);
let key = &keys["results"][0];
assert_eq!(key["name"], "my key");
}

View File

@@ -102,7 +102,7 @@ async fn error_create_with_invalid_index_uid() {
let (response, code) = index.create(None).await;
let expected_response = json!({
"message": "`test test#!` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `test test#!`, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"

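// A minimal sketch of the uid rule the reworded message states (alphanumeric
// a-z A-Z 0-9, hyphens `-`, underscores `_`); illustrative only, the actual
// validator may add further constraints such as a length limit.
fn is_valid_index_uid(uid: &str) -> bool {
    !uid.is_empty() && uid.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
}

fn main() {
    assert!(is_valid_index_uid("movies_2022"));
    assert!(!is_valid_index_uid("test test#!")); // the space and `#!` are rejected
}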
View File

@@ -43,8 +43,8 @@ async fn error_delete_unexisting_index() {
assert_eq!(response["error"], expected_response);
}
#[cfg(not(windows))]
#[actix_rt::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn loop_delete_add_documents() {
let server = Server::new().await;
let index = server.index("test");
@@ -52,10 +52,10 @@ async fn loop_delete_add_documents() {
let mut tasks = Vec::new();
for _ in 0..50 {
let (response, code) = index.add_documents(documents.clone(), None).await;
tasks.push(response["uid"].as_u64().unwrap());
tasks.push(response["taskUid"].as_u64().unwrap());
assert_eq!(code, 202, "{}", response);
let (response, code) = index.delete().await;
tasks.push(response["uid"].as_u64().unwrap());
tasks.push(response["taskUid"].as_u64().unwrap());
assert_eq!(code, 202, "{}", response);
}

View File

@@ -16,12 +16,11 @@ async fn create_and_get_index() {
assert_eq!(code, 200);
assert_eq!(response["uid"], "test");
assert_eq!(response["name"], "test");
assert!(response.get("createdAt").is_some());
assert!(response.get("updatedAt").is_some());
assert_eq!(response["createdAt"], response["updatedAt"]);
assert_eq!(response["primaryKey"], Value::Null);
assert_eq!(response.as_object().unwrap().len(), 5);
assert_eq!(response.as_object().unwrap().len(), 4);
}
#[actix_rt::test]
@@ -45,10 +44,10 @@ async fn error_get_unexisting_index() {
#[actix_rt::test]
async fn no_index_return_empty_list() {
let server = Server::new().await;
let (response, code) = server.list_indexes().await;
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert!(response.is_array());
assert!(response.as_array().unwrap().is_empty());
assert!(response["results"].is_array());
assert!(response["results"].as_array().unwrap().is_empty());
}
#[actix_rt::test]
@@ -59,10 +58,10 @@ async fn list_multiple_indexes() {
server.index("test").wait_task(1).await;
let (response, code) = server.list_indexes().await;
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert!(response.is_array());
let arr = response.as_array().unwrap();
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 2);
assert!(arr
.iter()
@@ -72,6 +71,118 @@ async fn list_multiple_indexes() {
.any(|entry| entry["uid"] == "test1" && entry["primaryKey"] == "key"));
}
#[actix_rt::test]
async fn get_and_paginate_indexes() {
let server = Server::new().await;
const NB_INDEXES: usize = 50;
for i in 0..NB_INDEXES {
server.index(&format!("test_{i:02}")).create(None).await;
server
.index(&format!("test_{i:02}"))
.wait_task(i as u64)
.await;
}
// basic
let (response, code) = server.list_indexes(None, None).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(20));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 20);
// ensure we get all the indexes in alphabetical order
assert!((0..20)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with an offset
let (response, code) = server.list_indexes(Some(15), None).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(20));
assert_eq!(response["offset"], json!(15));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 20);
assert!((15..35)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with an offset and not enough elements
let (response, code) = server.list_indexes(Some(45), None).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(20));
assert_eq!(response["offset"], json!(45));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 5);
assert!((45..50)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with a limit lower than the default
let (response, code) = server.list_indexes(None, Some(5)).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(5));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 5);
assert!((0..5)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with a limit higher than the default
let (response, code) = server.list_indexes(None, Some(40)).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(40));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 40);
assert!((0..40)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with a limit higher than the total number of indexes
let (response, code) = server.list_indexes(None, Some(80)).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(80));
assert_eq!(response["offset"], json!(0));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 50);
assert!((0..50)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
// with a limit and an offset
let (response, code) = server.list_indexes(Some(20), Some(10)).await;
assert_eq!(code, 200);
assert_eq!(response["limit"], json!(10));
assert_eq!(response["offset"], json!(20));
assert_eq!(response["total"], json!(NB_INDEXES));
assert!(response["results"].is_array());
let arr = response["results"].as_array().unwrap();
assert_eq!(arr.len(), 10);
assert!((20..30)
.map(|idx| format!("test_{idx:02}"))
.zip(arr)
.all(|(expected, entry)| entry["uid"] == expected));
}
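// The offset/limit arithmetic the assertions above pin down, assuming the
// default limit of 20 that the responses confirm; a sketch, not the route's
// actual code.
fn page_bounds(total: usize, offset: Option<usize>, limit: Option<usize>) -> (usize, usize) {
    let offset = offset.unwrap_or(0).min(total);
    let len = limit.unwrap_or(20).min(total - offset);
    (offset, len)
}

fn main() {
    assert_eq!(page_bounds(50, None, None), (0, 20));
    assert_eq!(page_bounds(50, Some(45), None), (45, 5)); // offset with too few elements left
    assert_eq!(page_bounds(50, None, Some(80)), (0, 50)); // limit above the total
}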
#[actix_rt::test]
async fn get_invalid_index_uid() {
let server = Server::new().await;

View File

@@ -35,7 +35,7 @@ async fn stats() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
assert_eq!(response["uid"], 1);
assert_eq!(response["taskUid"], 1);
index.wait_task(1).await;

View File

@@ -1,6 +1,6 @@
use crate::common::Server;
use chrono::DateTime;
use serde_json::json;
use time::{format_description::well_known::Rfc3339, OffsetDateTime};
#[actix_rt::test]
async fn update_primary_key() {
@@ -21,16 +21,17 @@ async fn update_primary_key() {
assert_eq!(code, 200);
assert_eq!(response["uid"], "test");
assert_eq!(response["name"], "test");
assert!(response.get("createdAt").is_some());
assert!(response.get("updatedAt").is_some());
let created_at = DateTime::parse_from_rfc3339(response["createdAt"].as_str().unwrap()).unwrap();
let updated_at = DateTime::parse_from_rfc3339(response["updatedAt"].as_str().unwrap()).unwrap();
let created_at =
OffsetDateTime::parse(response["createdAt"].as_str().unwrap(), &Rfc3339).unwrap();
let updated_at =
OffsetDateTime::parse(response["updatedAt"].as_str().unwrap(), &Rfc3339).unwrap();
assert!(created_at < updated_at);
assert_eq!(response["primaryKey"], "primary");
assert_eq!(response.as_object().unwrap().len(), 5);
assert_eq!(response.as_object().unwrap().len(), 4);
}
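// The `time` calls the hunk above migrates to, in isolation (they require the
// crate's `parsing` feature, which the Cargo.toml diff further below enables):
use time::{format_description::well_known::Rfc3339, OffsetDateTime};

fn main() {
    let created_at = OffsetDateTime::parse("2021-09-08T08:24:02.323444Z", &Rfc3339).unwrap();
    let updated_at = OffsetDateTime::parse("2021-09-08T08:24:39.812922Z", &Rfc3339).unwrap();
    assert!(created_at < updated_at);
}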
#[actix_rt::test]

View File

@@ -2,6 +2,7 @@ mod auth;
mod common;
mod dashboard;
mod documents;
mod dumps;
mod index;
mod search;
mod settings;

View File

@@ -36,6 +36,30 @@ async fn search_unexisting_parameter() {
.await;
}
#[actix_rt::test]
async fn search_invalid_highlight_and_crop_tags() {
let server = Server::new().await;
let index = server.index("test");
let fields = &["cropMarker", "highlightPreTag", "highlightPostTag"];
for field in fields {
// object
let (response, code) = index
.search_post(json!({field.to_string(): {"marker": "<crop>"}}))
.await;
assert_eq!(code, 400, "field {} passing object: {}", &field, response);
assert_eq!(response["code"], "bad_request");
// array
let (response, code) = index
.search_post(json!({field.to_string(): ["marker", "<crop>"]}))
.await;
assert_eq!(code, 400, "field {} passing array: {}", &field, response);
assert_eq!(response["code"], "bad_request");
}
}
#[actix_rt::test]
async fn filter_invalid_syntax_object() {
let server = Server::new().await;
@@ -83,7 +107,7 @@ async fn filter_invalid_syntax_array() {
"link": "https://docs.meilisearch.com/errors#invalid_filter"
});
index
.search(json!({"filter": [["title & Glass"]]}), |response, code| {
.search(json!({"filter": ["title & Glass"]}), |response, code| {
assert_eq!(response, expected_response);
assert_eq!(code, 400);
})
@@ -140,7 +164,7 @@ async fn filter_invalid_attribute_array() {
"link": "https://docs.meilisearch.com/errors#invalid_filter"
});
index
.search(json!({"filter": [["many = Glass"]]}), |response, code| {
.search(json!({"filter": ["many = Glass"]}), |response, code| {
assert_eq!(response, expected_response);
assert_eq!(code, 400);
})
@@ -194,7 +218,7 @@ async fn filter_reserved_geo_attribute_array() {
"link": "https://docs.meilisearch.com/errors#invalid_filter"
});
index
.search(json!({"filter": [["_geo = Glass"]]}), |response, code| {
.search(json!({"filter": ["_geo = Glass"]}), |response, code| {
assert_eq!(response, expected_response);
assert_eq!(code, 400);
})
@@ -249,7 +273,7 @@ async fn filter_reserved_attribute_array() {
});
index
.search(
json!({"filter": [["_geoDistance = Glass"]]}),
json!({"filter": ["_geoDistance = Glass"]}),
|response, code| {
assert_eq!(response, expected_response);
assert_eq!(code, 400);

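// The hunks above flatten the test filters from `[["…"]]` to `["…"]`. Both
// shapes stay legal filter expressions: outer array entries are ANDed and
// inner arrays are ORed, so for a single condition the three bodies below
// should behave identically (a sketch of the request shapes only).
use serde_json::json;

fn main() {
    let as_string = json!({ "filter": "title = Glass" });
    let as_array = json!({ "filter": ["title = Glass"] });
    let as_nested = json!({ "filter": [["title = Glass"]] });
    for body in [as_string, as_array, as_nested] {
        println!("{body}");
    }
}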
View File

@@ -0,0 +1,471 @@
use super::*;
use crate::common::Server;
use serde_json::json;
#[actix_rt::test]
async fn formatted_contain_wildcard() {
let server = Server::new().await;
let index = server.index("test");
index
.update_settings(json!({ "displayedAttributes": ["id", "cattos"] }))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(1).await;
index.search(json!({ "q": "pesti", "attributesToRetrieve": ["father", "mother"], "attributesToHighlight": ["father", "mother", "*"], "attributesToCrop": ["doggos"], "showMatchesPosition": true }),
|response, code|
{
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
"cattos": "<em>pesti</em>",
},
"_matchesPosition": {"cattos": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
index
.search(
json!({ "q": "pesti", "attributesToRetrieve": ["*"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pesti",
})
);
},
)
.await;
index
.search(
json!({ "q": "pesti", "attributesToRetrieve": ["*"], "attributesToHighlight": ["id"], "showMatchesPosition": true }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pesti",
"_formatted": {
"id": "852",
"cattos": "pesti",
},
"_matchesPosition": {"cattos": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
index
.search(
json!({ "q": "pesti", "attributesToRetrieve": ["*"], "attributesToCrop": ["*"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pesti",
"_formatted": {
"id": "852",
"cattos": "pesti",
}
})
);
},
)
.await;
index
.search(
json!({ "q": "pesti", "attributesToCrop": ["*"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pesti",
"_formatted": {
"id": "852",
"cattos": "pesti",
}
})
);
},
)
.await;
}
#[actix_rt::test]
async fn format_nested() {
let server = Server::new().await;
let index = server.index("test");
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(0).await;
index
.search(
json!({ "q": "pesti", "attributesToRetrieve": ["doggos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
"age": 2,
},
{
"name": "buddy",
"age": 4,
},
],
})
);
},
)
.await;
index
.search(
json!({ "q": "pesti", "attributesToRetrieve": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
})
);
},
)
.await;
index
.search(
json!({ "q": "bobby", "attributesToRetrieve": ["doggos.name"], "showMatchesPosition": true }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
"_matchesPosition": {"doggos.name": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
index
.search(json!({ "q": "pesti", "attributesToRetrieve": [], "attributesToHighlight": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
},
})
);
})
.await;
index
.search(json!({ "q": "pesti", "attributesToRetrieve": [], "attributesToCrop": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
},
})
);
})
.await;
index
.search(json!({ "q": "pesti", "attributesToRetrieve": ["doggos.name"], "attributesToHighlight": ["doggos.age"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
"_formatted": {
"doggos": [
{
"name": "bobby",
"age": "2",
},
{
"name": "buddy",
"age": "4",
},
],
},
})
);
})
.await;
index
.search(json!({ "q": "pesti", "attributesToRetrieve": [], "attributesToHighlight": ["doggos.age"], "attributesToCrop": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"name": "bobby",
"age": "2",
},
{
"name": "buddy",
"age": "4",
},
],
},
})
);
}
)
.await;
}
#[actix_rt::test]
async fn displayedattr_2_smol() {
let server = Server::new().await;
let index = server.index("test");
// not enough displayed for the other settings
index
.update_settings(json!({ "displayedAttributes": ["id"] }))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(1).await;
index
.search(json!({ "attributesToRetrieve": ["father", "id"], "attributesToHighlight": ["mother"], "attributesToCrop": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
})
.await;
index
.search(
json!({ "attributesToRetrieve": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
},
)
.await;
index
.search(
json!({ "attributesToHighlight": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
index
.search(json!({ "attributesToCrop": ["id"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
})
.await;
index
.search(
json!({ "attributesToHighlight": ["id"], "attributesToCrop": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
index
.search(
json!({ "attributesToHighlight": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
},
)
.await;
index
.search(
json!({ "attributesToCrop": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
},
)
.await;
index
.search(
json!({ "attributesToRetrieve": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"][0], json!({}));
},
)
.await;
index
.search(
json!({ "attributesToRetrieve": ["cattos"], "attributesToHighlight": ["cattos"], "attributesToCrop": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"][0], json!({}));
}
)
.await;
index
.search(
json!({ "attributesToRetrieve": ["cattos"], "attributesToHighlight": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
index
.search(
json!({ "attributesToRetrieve": ["cattos"], "attributesToCrop": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
}
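// A concrete pair summarizing the rule these tests pin down: `_formatted`
// only ever contains displayed attributes selected by `attributesToHighlight`
// / `attributesToCrop` (with `*` expanding to all displayed attributes), and
// every value in it is stringified ("age": "2"). Hypothetical request and
// response, copied from the format_nested expectations above.
use serde_json::json;

fn main() {
    let request = json!({
        "q": "pesti",
        "attributesToRetrieve": ["doggos.name"],
        "attributesToHighlight": ["doggos.age"]
    });
    let expected_hit = json!({
        "doggos": [{ "name": "bobby" }, { "name": "buddy" }],
        "_formatted": {
            "doggos": [
                { "name": "bobby", "age": "2" },
                { "name": "buddy", "age": "4" }
            ]
        }
    });
    println!("{request} => {expected_hit}");
}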

View File

@@ -1,39 +1,97 @@
// This module contains all the tests concerning search. Each particular feture of the search
// This module contains all the tests concerning search. Each particular feature of the search
// should be tested in its own module to isolate tests and keep the tests readable.
mod errors;
mod formatted;
use crate::common::Server;
use once_cell::sync::Lazy;
use serde_json::{json, Value};
static DOCUMENTS: Lazy<Value> = Lazy::new(|| {
pub(self) static DOCUMENTS: Lazy<Value> = Lazy::new(|| {
json!([
{
"title": "Shazam!",
"id": "287947"
"id": "287947",
},
{
"title": "Captain Marvel",
"id": "299537"
"id": "299537",
},
{
"title": "Escape Room",
"id": "522681"
"id": "522681",
},
{ "title": "How to Train Your Dragon: The Hidden World", "id": "166428"
{
"title": "How to Train Your Dragon: The Hidden World",
"id": "166428",
},
{
"title": "Glass",
"id": "450465"
"id": "450465",
}
])
});
pub(self) static NESTED_DOCUMENTS: Lazy<Value> = Lazy::new(|| {
json!([
{
"id": 852,
"father": "jean",
"mother": "michelle",
"doggos": [
{
"name": "bobby",
"age": 2,
},
{
"name": "buddy",
"age": 4,
},
],
"cattos": "pesti",
},
{
"id": 654,
"father": "pierre",
"mother": "sabine",
"doggos": [
{
"name": "gros bill",
"age": 8,
},
],
"cattos": ["simba", "pestiféré"],
},
{
"id": 750,
"father": "romain",
"mother": "michelle",
"cattos": ["enigma"],
},
{
"id": 951,
"father": "jean-baptiste",
"mother": "sophie",
"doggos": [
{
"name": "turbo",
"age": 5,
},
{
"name": "fast",
"age": 6,
},
],
"cattos": ["moumoute", "gomez"],
},
])
});
#[actix_rt::test]
async fn simple_placeholder_search() {
let server = Server::new().await;
let index = server.index("test");
let index = server.index("basic");
let documents = DOCUMENTS.clone();
index.add_documents(documents, None).await;
@@ -45,6 +103,18 @@ async fn simple_placeholder_search() {
assert_eq!(response["hits"].as_array().unwrap().len(), 5);
})
.await;
let index = server.index("nested");
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(1).await;
index
.search(json!({}), |response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 4);
})
.await;
}
#[actix_rt::test]
@@ -62,6 +132,18 @@ async fn simple_search() {
assert_eq!(response["hits"].as_array().unwrap().len(), 1);
})
.await;
let index = server.index("nested");
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(1).await;
index
.search(json!({"q": "pesti"}), |response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 2);
})
.await;
}
#[actix_rt::test]
@@ -88,6 +170,27 @@ async fn search_multiple_params() {
},
)
.await;
let index = server.index("nested");
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(1).await;
index
.search(
json!({
"q": "pesti",
"attributesToCrop": ["catto:2"],
"attributesToHighlight": ["catto"],
"limit": 2,
"offset": 0,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 2);
},
)
.await;
}
#[actix_rt::test]
@@ -114,6 +217,43 @@ async fn search_with_filter_string_notation() {
},
)
.await;
let index = server.index("nested");
index
.update_settings(json!({"filterableAttributes": ["cattos", "doggos.age"]}))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(3).await;
index
.search(
json!({
"filter": "cattos = pesti"
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 1);
assert_eq!(response["hits"][0]["id"], json!(852));
},
)
.await;
index
.search(
json!({
"filter": "doggos.age > 5"
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 2);
assert_eq!(response["hits"][0]["id"], json!(654));
assert_eq!(response["hits"][1]["id"], json!(951));
},
)
.await;
}
#[actix_rt::test]
@@ -170,6 +310,28 @@ async fn search_with_sort_on_numbers() {
},
)
.await;
let index = server.index("nested");
index
.update_settings(json!({"sortableAttributes": ["doggos.age"]}))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(3).await;
index
.search(
json!({
"sort": ["doggos.age:asc"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 4);
},
)
.await;
}
#[actix_rt::test]
@@ -196,6 +358,28 @@ async fn search_with_sort_on_strings() {
},
)
.await;
let index = server.index("nested");
index
.update_settings(json!({"sortableAttributes": ["doggos.name"]}))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(3).await;
index
.search(
json!({
"sort": ["doggos.name:asc"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 4);
},
)
.await;
}
#[actix_rt::test]
@@ -236,16 +420,94 @@ async fn search_facet_distribution() {
index
.search(
json!({
"facetsDistribution": ["title"]
"facets": ["title"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let dist = response["facetsDistribution"].as_object().unwrap();
let dist = response["facetDistribution"].as_object().unwrap();
assert_eq!(dist.len(), 1);
assert!(dist.get("title").is_some());
},
)
.await;
let index = server.index("nested");
index
.update_settings(json!({"filterableAttributes": ["father", "doggos.name"]}))
.await;
let documents = NESTED_DOCUMENTS.clone();
index.add_documents(documents, None).await;
index.wait_task(3).await;
// TODO: TAMO: fix the test
index
.search(
json!({
// "facets": ["father", "doggos.name"]
"facets": ["father"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let dist = response["facetDistribution"].as_object().unwrap();
assert_eq!(dist.len(), 1);
assert_eq!(
dist["father"],
json!({ "jean": 1, "pierre": 1, "romain": 1, "jean-baptiste": 1})
);
/*
assert_eq!(
dist["doggos.name"],
json!({ "bobby": 1, "buddy": 1, "gros bill": 1, "turbo": 1, "fast": 1})
);
*/
},
)
.await;
index
.update_settings(json!({"filterableAttributes": ["doggos"]}))
.await;
index.wait_task(4).await;
index
.search(
json!({
"facets": ["doggos.name"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let dist = response["facetDistribution"].as_object().unwrap();
assert_eq!(dist.len(), 1);
assert_eq!(
dist["doggos.name"],
json!({ "bobby": 1, "buddy": 1, "gros bill": 1, "turbo": 1, "fast": 1})
);
},
)
.await;
index
.search(
json!({
"facets": ["doggos"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let dist = response["facetDistribution"].as_object().unwrap();
assert_eq!(dist.len(), 3);
assert_eq!(
dist["doggos.name"],
json!({ "bobby": 1, "buddy": 1, "gros bill": 1, "turbo": 1, "fast": 1})
);
assert_eq!(
dist["doggos.age"],
json!({ "2": 1, "4": 1, "5": 1, "6": 1, "8": 1})
);
},
)
.await;
}
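// Side by side, the renames this hunk applies (old names in comments); the
// distribution values are a hypothetical count over the DOCUMENTS fixture.
use serde_json::json;

fn main() {
    // request: "facetsDistribution" is now "facets"
    let request = json!({ "q": "", "facets": ["title"] });
    // response: "facetsDistribution" is now "facetDistribution"
    let response = json!({ "facetDistribution": { "title": { "Glass": 1, "Shazam!": 1 } } });
    println!("{request} => {response}");
}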
#[actix_rt::test]
@@ -265,5 +527,192 @@ async fn displayed_attributes() {
.search_post(json!({ "attributesToRetrieve": ["title", "id"] }))
.await;
assert_eq!(code, 200, "{}", response);
assert!(response["hits"].get("title").is_none());
assert!(response["hits"][0].get("title").is_some());
}
#[actix_rt::test]
async fn placeholder_search_is_hard_limited() {
let server = Server::new().await;
let index = server.index("test");
let documents: Vec<_> = (0..1200)
.map(|i| json!({ "id": i, "text": "I am unique!" }))
.collect();
index.add_documents(documents.into(), None).await;
index.wait_task(0).await;
index
.search(
json!({
"limit": 1500,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 1000);
},
)
.await;
index
.search(
json!({
"offset": 800,
"limit": 400,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 200);
},
)
.await;
index
.update_settings(json!({ "pagination": { "maxTotalHits": 10_000 } }))
.await;
index.wait_task(1).await;
index
.search(
json!({
"limit": 1500,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 1200);
},
)
.await;
index
.search(
json!({
"offset": 1000,
"limit": 400,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 200);
},
)
.await;
}
#[actix_rt::test]
async fn search_is_hard_limited() {
let server = Server::new().await;
let index = server.index("test");
let documents: Vec<_> = (0..1200)
.map(|i| json!({ "id": i, "text": "I am unique!" }))
.collect();
index.add_documents(documents.into(), None).await;
index.wait_task(0).await;
index
.search(
json!({
"q": "unique",
"limit": 1500,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 1000);
},
)
.await;
index
.search(
json!({
"q": "unique",
"offset": 800,
"limit": 400,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 200);
},
)
.await;
index
.update_settings(json!({ "pagination": { "maxTotalHits": 10_000 } }))
.await;
index.wait_task(1).await;
index
.search(
json!({
"q": "unique",
"limit": 1500,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 1200);
},
)
.await;
index
.search(
json!({
"q": "unique",
"offset": 1000,
"limit": 400,
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
assert_eq!(response["hits"].as_array().unwrap().len(), 200);
},
)
.await;
}
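// The cap these two tests exercise, as arithmetic: whatever limit/offset ask
// for, only min(matching, maxTotalHits) documents are reachable. A sketch of
// the expected counts, not engine code.
fn hits_returned(matching: usize, max_total_hits: usize, offset: usize, limit: usize) -> usize {
    matching.min(max_total_hits).saturating_sub(offset).min(limit)
}

fn main() {
    assert_eq!(hits_returned(1200, 1000, 0, 1500), 1000); // default cap
    assert_eq!(hits_returned(1200, 1000, 800, 400), 200); // offset eats into the cap
    assert_eq!(hits_returned(1200, 10_000, 0, 1500), 1200); // cap raised above corpus size
    assert_eq!(hits_returned(1200, 10_000, 1000, 400), 200);
}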
#[actix_rt::test]
async fn faceting_max_values_per_facet() {
let server = Server::new().await;
let index = server.index("test");
index
.update_settings(json!({ "filterableAttributes": ["number"] }))
.await;
let documents: Vec<_> = (0..10_000)
.map(|id| json!({ "id": id, "number": id * 10 }))
.collect();
index.add_documents(json!(documents), None).await;
index.wait_task(1).await;
index
.search(
json!({
"facets": ["number"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let numbers = response["facetDistribution"]["number"].as_object().unwrap();
assert_eq!(numbers.len(), 100);
},
)
.await;
index
.update_settings(json!({ "faceting": { "maxValuesPerFacet": 10_000 } }))
.await;
index.wait_task(2).await;
index
.search(
json!({
"facets": ["number"]
}),
|response, code| {
assert_eq!(code, 200, "{}", response);
let numbers = dbg!(&response)["facetDistribution"]["number"]
.as_object()
.unwrap();
assert_eq!(numbers.len(), 10_000);
},
)
.await;
}

View File

@@ -24,6 +24,18 @@ static DEFAULT_SETTINGS_VALUES: Lazy<HashMap<&'static str, Value>> = Lazy::new(|
);
map.insert("stop_words", json!([]));
map.insert("synonyms", json!({}));
map.insert(
"faceting",
json!({
"maxValuesPerFacet": json!(100),
}),
);
map.insert(
"pagination",
json!({
"maxTotalHits": json!(1000),
}),
);
map
});
@@ -43,7 +55,7 @@ async fn get_settings() {
let (response, code) = index.settings().await;
assert_eq!(code, 200);
let settings = response.as_object().unwrap();
assert_eq!(settings.keys().len(), 8);
assert_eq!(settings.keys().len(), 11);
assert_eq!(settings["displayedAttributes"], json!(["*"]));
assert_eq!(settings["searchableAttributes"], json!(["*"]));
assert_eq!(settings["filterableAttributes"], json!([]));
@@ -61,6 +73,18 @@ async fn get_settings() {
])
);
assert_eq!(settings["stopWords"], json!([]));
assert_eq!(
settings["faceting"],
json!({
"maxValuesPerFacet": 100,
})
);
assert_eq!(
settings["pagination"],
json!({
"maxTotalHits": 1000,
})
);
}
#[actix_rt::test]
@@ -122,7 +146,7 @@ async fn reset_all_settings() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202);
assert_eq!(response["uid"], 0);
assert_eq!(response["taskUid"], 0);
index.wait_task(0).await;
index
@@ -179,7 +203,7 @@ async fn error_update_setting_unexisting_index_invalid_uid() {
assert_eq!(code, 400);
let expected = json!({
"message": "`test##! ` is not a valid index uid. Index uid can be an integer or a string containing only alphanumeric characters, hyphens (-) and underscores (_).",
"message": "invalid index uid `test##! `, the uid must be an integer or a string containing only alphanumeric characters a-z A-Z 0-9, hyphens - and underscores _.",
"code": "invalid_index_uid",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_index_uid"});
@@ -188,7 +212,7 @@ async fn error_update_setting_unexisting_index_invalid_uid() {
}
macro_rules! test_setting_routes {
($($setting:ident), *) => {
($($setting:ident $write_method:ident), *) => {
$(
mod $setting {
use crate::common::Server;
@@ -214,7 +238,7 @@ macro_rules! test_setting_routes {
.chars()
.map(|c| if c == '_' { '-' } else { c })
.collect::<String>());
let (response, code) = server.service.post(url, serde_json::Value::Null).await;
let (response, code) = server.service.$write_method(url, serde_json::Value::Null).await;
assert_eq!(code, 202, "{}", response);
server.index("").wait_task(0).await;
let (response, code) = server.index("test").get().await;
@@ -258,13 +282,15 @@ macro_rules! test_setting_routes {
}
test_setting_routes!(
filterable_attributes,
displayed_attributes,
searchable_attributes,
distinct_attribute,
stop_words,
ranking_rules,
synonyms
filterable_attributes put,
displayed_attributes put,
searchable_attributes put,
distinct_attribute put,
stop_words put,
ranking_rules put,
synonyms put,
pagination patch,
faceting patch
);
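// The macro now takes the HTTP verb used to write each setting. A reading of
// the invocation above, assuming the new object-valued settings are why they
// use PATCH (this mapping is inferred, not code from the diff):
fn write_method(setting: &str) -> &'static str {
    match setting {
        "pagination" | "faceting" => "PATCH", // object-valued: updated field by field
        _ => "PUT", // list/scalar settings keep full replacement
    }
}

fn main() {
    assert_eq!(write_method("ranking-rules"), "PUT");
    assert_eq!(write_method("faceting"), "PATCH");
}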
#[actix_rt::test]
@@ -283,7 +309,7 @@ async fn error_set_invalid_ranking_rules() {
assert_eq!(response["status"], "failed");
let expected_error = json!({
"message": r#"`manyTheFish` ranking rule is invalid. Valid ranking rules are Words, Typo, Sort, Proximity, Attribute, Exactness and custom ranking rules."#,
"message": r#"`manyTheFish` ranking rule is invalid. Valid ranking rules are words, typo, sort, proximity, attribute, exactness and custom ranking rules."#,
"code": "invalid_ranking_rule",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_ranking_rule"

View File

@@ -41,7 +41,7 @@ async fn perform_snapshot() {
..default_settings(temp.path())
};
let server = Server::new_with_options(options).await;
let server = Server::new_with_options(options).await.unwrap();
let index = server.index("test");
index
@@ -60,20 +60,17 @@ async fn perform_snapshot() {
let temp = tempfile::tempdir().unwrap();
let snapshot_path = snapshot_dir
.path()
.to_owned()
.join("db.snapshot".to_string());
let snapshot_path = snapshot_dir.path().to_owned().join("db.snapshot");
let options = Opt {
import_snapshot: Some(snapshot_path),
..default_settings(temp.path())
};
let snapshot_server = Server::new_with_options(options).await;
let snapshot_server = Server::new_with_options(options).await.unwrap();
verify_snapshot!(server, snapshot_server, |server| =>
server.list_indexes(),
server.list_indexes(None, None),
// for some reason the db sizes differ. this may be due to the compaction options we have
// set when performing the snapshot
//server.stats(),

View File

@@ -1,4 +1,5 @@
use serde_json::json;
use time::{format_description::well_known::Rfc3339, OffsetDateTime};
use crate::common::Server;
@@ -53,15 +54,19 @@ async fn stats() {
let (response, code) = index.add_documents(documents, None).await;
assert_eq!(code, 202, "{}", response);
assert_eq!(response["uid"], 1);
assert_eq!(response["taskUid"], 1);
index.wait_task(1).await;
let timestamp = OffsetDateTime::now_utc();
let (response, code) = server.stats().await;
assert_eq!(code, 200);
assert!(response["databaseSize"].as_u64().unwrap() > 0);
assert!(response.get("lastUpdate").is_some());
let last_update =
OffsetDateTime::parse(response["lastUpdate"].as_str().unwrap(), &Rfc3339).unwrap();
assert!(last_update - timestamp < time::Duration::SECOND);
assert_eq!(response["indexes"]["test"]["numberOfDocuments"], 2);
assert!(response["indexes"]["test"]["isIndexing"] == false);
assert_eq!(response["indexes"]["test"]["fieldDistribution"]["id"], 2);

View File

@@ -1,22 +1,7 @@
use crate::common::Server;
use chrono::{DateTime, Utc};
use serde_json::json;
#[actix_rt::test]
async fn error_get_task_unexisting_index() {
let server = Server::new().await;
let (response, code) = server.service.get("/indexes/test/tasks").await;
let expected_response = json!({
"message": "Index `test` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
});
assert_eq!(response, expected_response);
assert_eq!(code, 404);
}
use time::format_description::well_known::Rfc3339;
use time::OffsetDateTime;
#[actix_rt::test]
async fn error_get_unexisting_task_status() {
@@ -54,23 +39,7 @@ async fn get_task_status() {
index.wait_task(0).await;
let (_response, code) = index.get_task(1).await;
assert_eq!(code, 200);
// TODO check resonse format, as per #48
}
#[actix_rt::test]
async fn error_list_tasks_unexisting_index() {
let server = Server::new().await;
let (response, code) = server.index("test").list_tasks().await;
let expected_response = json!({
"message": "Index `test` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
});
assert_eq!(response, expected_response);
assert_eq!(code, 404);
// TODO check response format, as per #48
}
#[actix_rt::test]
@@ -90,15 +59,146 @@ async fn list_tasks() {
assert_eq!(response["results"].as_array().unwrap().len(), 2);
}
#[actix_rt::test]
async fn list_tasks_with_star_filters() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
index.wait_task(0).await;
index
.add_documents(
serde_json::from_str(include_str!("../assets/test_set.json")).unwrap(),
None,
)
.await;
let (response, code) = index.service.get("/tasks?indexUid=test").await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index.service.get("/tasks?indexUid=*").await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index.service.get("/tasks?indexUid=*,pasteque").await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index.service.get("/tasks?type=*").await;
assert_eq!(code, 200);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index
.service
.get("/tasks?type=*,documentAdditionOrUpdate&status=*")
.await;
assert_eq!(code, 200, "{:?}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index
.service
.get("/tasks?type=*,documentAdditionOrUpdate&status=*,failed&indexUid=test")
.await;
assert_eq!(code, 200, "{:?}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
let (response, code) = index
.service
.get("/tasks?type=*,documentAdditionOrUpdate&status=*,failed&indexUid=test,*")
.await;
assert_eq!(code, 200, "{:?}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
}
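// Illustrative parsing of the `*` filters exercised above: a `*` anywhere in
// a comma-separated value disables filtering on that dimension. A sketch of
// the observed behaviour, not route code.
fn parse_star_filter(raw: &str) -> Option<Vec<&str>> {
    let values: Vec<&str> = raw.split(',').collect();
    if values.contains(&"*") {
        None // match every task on this dimension
    } else {
        Some(values)
    }
}

fn main() {
    assert_eq!(parse_star_filter("test"), Some(vec!["test"]));
    assert_eq!(parse_star_filter("*,pasteque"), None); // `*` wins, as the test expects
    assert_eq!(parse_star_filter("*"), None);
}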
#[actix_rt::test]
async fn list_tasks_status_filtered() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
index.wait_task(0).await;
index
.add_documents(
serde_json::from_str(include_str!("../assets/test_set.json")).unwrap(),
None,
)
.await;
let (response, code) = index.filtered_tasks(&[], &["succeeded"]).await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 1);
// We can't be sure that the update isn't already processed so we can't test this
// let (response, code) = index.filtered_tasks(&[], &["processing"]).await;
// assert_eq!(code, 200, "{}", response);
// assert_eq!(response["results"].as_array().unwrap().len(), 1);
index.wait_task(1).await;
let (response, code) = index.filtered_tasks(&[], &["succeeded"]).await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
}
#[actix_rt::test]
async fn list_tasks_type_filtered() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
index.wait_task(0).await;
index
.add_documents(
serde_json::from_str(include_str!("../assets/test_set.json")).unwrap(),
None,
)
.await;
let (response, code) = index.filtered_tasks(&["indexCreation"], &[]).await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 1);
let (response, code) = index
.filtered_tasks(&["indexCreation", "documentAdditionOrUpdate"], &[])
.await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
}
#[actix_rt::test]
async fn list_tasks_status_and_type_filtered() {
let server = Server::new().await;
let index = server.index("test");
index.create(None).await;
index.wait_task(0).await;
index
.add_documents(
serde_json::from_str(include_str!("../assets/test_set.json")).unwrap(),
None,
)
.await;
let (response, code) = index.filtered_tasks(&["indexCreation"], &["failed"]).await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 0);
let (response, code) = index
.filtered_tasks(
&["indexCreation", "documentAdditionOrUpdate"],
&["succeeded", "processing", "enqueued"],
)
.await;
assert_eq!(code, 200, "{}", response);
assert_eq!(response["results"].as_array().unwrap().len(), 2);
}
macro_rules! assert_valid_summarized_task {
($response:expr, $task_type:literal, $index:literal) => {{
assert_eq!($response.as_object().unwrap().len(), 5);
assert!($response["uid"].as_u64().is_some());
assert!($response["taskUid"].as_u64().is_some());
assert_eq!($response["indexUid"], $index);
assert_eq!($response["status"], "enqueued");
assert_eq!($response["type"], $task_type);
let date = $response["enqueuedAt"].as_str().expect("missing date");
date.parse::<DateTime<Utc>>().unwrap();
OffsetDateTime::parse(date, &Rfc3339).unwrap();
}};
}
@@ -117,16 +217,16 @@ async fn test_summarized_task_view() {
assert_valid_summarized_task!(response, "settingsUpdate", "test");
let (response, _) = index.update_documents(json!([{"id": 1}]), None).await;
assert_valid_summarized_task!(response, "documentPartial", "test");
assert_valid_summarized_task!(response, "documentAdditionOrUpdate", "test");
let (response, _) = index.add_documents(json!([{"id": 1}]), None).await;
assert_valid_summarized_task!(response, "documentAddition", "test");
assert_valid_summarized_task!(response, "documentAdditionOrUpdate", "test");
let (response, _) = index.delete_document(1).await;
assert_valid_summarized_task!(response, "documentDeletion", "test");
let (response, _) = index.clear_all_documents().await;
assert_valid_summarized_task!(response, "clearAll", "test");
assert_valid_summarized_task!(response, "documentDeletion", "test");
let (response, _) = index.delete().await;
assert_valid_summarized_task!(response, "indexDeletion", "test");
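// A concrete instance (hypothetical values) of the 5-field summarized body
// the macro above validates, reflecting the uid -> taskUid rename and the
// merged documentAdditionOrUpdate type:
use serde_json::json;

fn main() {
    let summarized = json!({
        "taskUid": 0,
        "indexUid": "test",
        "status": "enqueued",
        "type": "documentAdditionOrUpdate",
        "enqueuedAt": "2021-09-08T08:21:14.742672Z"
    });
    assert_eq!(summarized.as_object().unwrap().len(), 5);
}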

View File

@@ -1,68 +1,65 @@
[package]
name = "meilisearch-lib"
version = "0.25.0"
edition = "2018"
resolver = "2"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
version = "0.28.1"
edition = "2021"
[dependencies]
actix-web = { version = "4.0.0-beta.9", features = ["rustls"] }
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "39d8006", optional = true }
anyhow = { version = "1.0.43", features = ["backtrace"] }
async-stream = "0.3.2"
async-trait = "0.1.51"
byte-unit = { version = "4.0.12", default-features = false, features = ["std"] }
actix-web = { version = "4.0.1", default-features = false }
anyhow = { version = "1.0.56", features = ["backtrace"] }
async-stream = "0.3.3"
async-trait = "0.1.52"
atomic_refcell = "0.1.8"
byte-unit = { version = "4.0.14", default-features = false, features = ["std", "serde"] }
bytes = "1.1.0"
chrono = { version = "0.4.19", features = ["serde"] }
clap = { version = "3.1.6", features = ["derive", "env"] }
crossbeam-channel = "0.5.2"
csv = "1.1.6"
crossbeam-channel = "0.5.1"
derivative = "2.2.0"
either = "1.6.1"
flate2 = "1.0.21"
flate2 = "1.0.22"
fs_extra = "1.2.0"
fst = "0.4.7"
futures = "0.3.17"
futures-util = "0.3.17"
heed = { git = "https://github.com/Kerollmops/heed", tag = "v0.12.1" }
http = "0.2.4"
indexmap = { version = "1.7.0", features = ["serde-1"] }
itertools = "0.10.1"
futures = "0.3.21"
futures-util = "0.3.21"
http = "0.2.6"
indexmap = { version = "1.8.0", features = ["serde-1"] }
itertools = "0.10.3"
lazy_static = "1.4.0"
log = "0.4.14"
meilisearch-error = { path = "../meilisearch-error" }
meilisearch-auth = { path = "../meilisearch-auth" }
milli = { git = "https://github.com/meilisearch/milli.git", tag = "v0.21.0" }
meilisearch-types = { path = "../meilisearch-types" }
milli = { git = "https://github.com/meilisearch/milli.git", tag = "v0.31.2" }
mime = "0.3.16"
num_cpus = "1.13.0"
once_cell = "1.8.0"
parking_lot = "0.11.2"
rand = "0.8.4"
rayon = "1.5.1"
regex = "1.5.4"
rustls = "0.19.1"
serde = { version = "1.0.130", features = ["derive"] }
serde_json = { version = "1.0.67", features = ["preserve_order"] }
siphasher = "0.3.7"
slice-group-by = "0.2.6"
structopt = "0.3.23"
tar = "0.4.37"
tempfile = "3.2.0"
thiserror = "1.0.28"
tokio = { version = "1.11.0", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde"] }
walkdir = "2.3.2"
num_cpus = "1.13.1"
obkv = "0.2.0"
pin-project = "1.0.8"
whoami = { version = "1.1.3", optional = true }
reqwest = { version = "0.11.4", features = ["json", "rustls-tls"], default-features = false, optional = true }
sysinfo = "0.20.2"
derivative = "2.2.0"
fs_extra = "1.2.0"
once_cell = "1.10.0"
parking_lot = "0.12.0"
permissive-json-pointer = { path = "../permissive-json-pointer" }
rand = "0.8.5"
rayon = "1.5.1"
regex = "1.5.5"
reqwest = { version = "0.11.9", features = ["json", "rustls-tls"], default-features = false, optional = true }
roaring = "0.9.0"
rustls = "0.20.4"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = { version = "1.0.79", features = ["preserve_order"] }
siphasher = "0.3.10"
slice-group-by = "0.3.0"
sysinfo = "0.23.5"
tar = "0.4.38"
tempfile = "3.3.0"
thiserror = "1.0.30"
time = { version = "0.3.7", features = ["serde-well-known", "formatting", "parsing", "macros"] }
tokio = { version = "1.17.0", features = ["full"] }
uuid = { version = "1.1.2", features = ["serde", "v4"] }
walkdir = "2.3.2"
whoami = { version = "1.2.1", optional = true }
[dev-dependencies]
actix-rt = "2.2.0"
mockall = "0.10.2"
paste = "1.0.5"
nelson = { git = "https://github.com/MarinPostma/nelson.git", rev = "e5f4ff046c21e7e986c7cb31550d1c9e7f0b693b"}
meilisearch-error = { path = "../meilisearch-error", features = ["test-traits"] }
actix-rt = "2.7.0"
meilisearch-types = { path = "../meilisearch-types", features = ["test-traits"] }
mockall = "0.11.0"
nelson = { git = "https://github.com/meilisearch/nelson.git", rev = "675f13885548fb415ead8fbb447e9e6d9314000a"}
paste = "1.0.6"
proptest = "1.0.0"
proptest-derive = "0.3.0"

View File

@@ -1,7 +1,9 @@
use std::fmt;
use std::borrow::Borrow;
use std::fmt::{self, Debug, Display};
use std::io::{self, BufRead, BufReader, BufWriter, Cursor, Read, Seek, Write};
use meilisearch_error::{internal_error, Code, ErrorCode};
use meilisearch_types::error::{Code, ErrorCode};
use meilisearch_types::internal_error;
use milli::documents::DocumentBatchBuilder;
type Result<T> = std::result::Result<T, DocumentFormatError>;
@@ -23,19 +25,40 @@ impl fmt::Display for PayloadType {
}
}
#[derive(thiserror::Error, Debug)]
#[derive(Debug)]
pub enum DocumentFormatError {
#[error("An internal error has occurred. `{0}`.")]
Internal(Box<dyn std::error::Error + Send + Sync + 'static>),
#[error("The `{1}` payload provided is malformed. `{0}`.")]
MalformedPayload(
Box<dyn std::error::Error + Send + Sync + 'static>,
PayloadType,
),
#[error("The `{0}` payload must contain at least one document.")]
EmptyPayload(PayloadType),
MalformedPayload(Box<milli::documents::Error>, PayloadType),
}
impl Display for DocumentFormatError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Internal(e) => write!(f, "An internal error has occurred: `{}`.", e),
Self::MalformedPayload(me, b) => match me.borrow() {
milli::documents::Error::JsonError(se) => {
// https://github.com/meilisearch/meilisearch/issues/2107
// The user input may be insanely long. We need to truncate it.
let mut serde_msg = se.to_string();
let ellipsis = "...";
if serde_msg.len() > 100 + ellipsis.len() {
serde_msg.replace_range(50..serde_msg.len() - 85, ellipsis);
}
write!(
f,
"The `{}` payload provided is malformed. `Couldn't serialize document value: {}`.",
b, serde_msg
)
}
_ => write!(f, "The `{}` payload provided is malformed: `{}`.", b, me),
},
}
}
}
impl std::error::Error for DocumentFormatError {}
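// A panic-safe, character-based variant of the truncation idea in the
// Display impl above (keep the head and tail of an overly long serde error
// and elide the middle); constants and boundary handling here are
// illustrative, not Meilisearch's exact code.
fn truncate_middle(msg: &str, head: usize, tail: usize) -> String {
    let n = msg.chars().count();
    if n <= head + tail + 3 {
        return msg.to_string(); // short enough: leave untouched
    }
    let start: String = msg.chars().take(head).collect();
    let end: String = msg.chars().skip(n - tail).collect();
    format!("{start}...{end}")
}

fn main() {
    let long = "x".repeat(300);
    assert_eq!(truncate_middle("tiny error", 50, 85), "tiny error");
    assert_eq!(truncate_middle(&long, 50, 85).chars().count(), 50 + 3 + 85);
}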
impl From<(PayloadType, milli::documents::Error)> for DocumentFormatError {
fn from((ty, error): (PayloadType, milli::documents::Error)) -> Self {
match error {
@@ -50,7 +73,6 @@ impl ErrorCode for DocumentFormatError {
match self {
DocumentFormatError::Internal(_) => Code::Internal,
DocumentFormatError::MalformedPayload(_, _) => Code::MalformedPayload,
DocumentFormatError::EmptyPayload(_) => Code::MalformedPayload,
}
}
}
@@ -63,10 +85,6 @@ pub fn read_csv(input: impl Read, writer: impl Write + Seek) -> Result<usize> {
let builder =
DocumentBatchBuilder::from_csv(input, writer).map_err(|e| (PayloadType::Csv, e))?;
if builder.len() == 0 {
return Err(DocumentFormatError::EmptyPayload(PayloadType::Csv));
}
let count = builder.finish().map_err(|e| (PayloadType::Csv, e))?;
Ok(count)
@@ -81,16 +99,17 @@ pub fn read_ndjson(input: impl Read, writer: impl Write + Seek) -> Result<usize>
let mut buf = String::new();
while reader.read_line(&mut buf)? > 0 {
// skip empty lines
if buf == "\n" {
buf.clear();
continue;
}
builder
.extend_from_json(Cursor::new(&buf.as_bytes()))
.map_err(|e| (PayloadType::Ndjson, e))?;
buf.clear();
}
if builder.len() == 0 {
return Err(DocumentFormatError::EmptyPayload(PayloadType::Ndjson));
}
let count = builder.finish().map_err(|e| (PayloadType::Ndjson, e))?;
Ok(count)
@@ -104,10 +123,6 @@ pub fn read_json(input: impl Read, writer: impl Write + Seek) -> Result<usize> {
.extend_from_json(input)
.map_err(|e| (PayloadType::Json, e))?;
if builder.len() == 0 {
return Err(DocumentFormatError::EmptyPayload(PayloadType::Json));
}
let count = builder.finish().map_err(|e| (PayloadType::Json, e))?;
Ok(count)
