Compare commits

...

2088 Commits

Author SHA1 Message Date
dd645e6da4 Merge #1605
1605: Fix panic when decoding r=curquiza a=curquiza

Update milli to fix the panic during document deletion

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-23 11:06:45 +00:00
149f46c184 Fix panic when decoding 2021-08-23 12:37:51 +02:00
3e27d5e885 Merge #1596
1596: Update milli and tokenizer version: fix panic during indexation r=curquiza a=curquiza

Fixes #1590 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-18 13:44:30 +00:00
38fc876704 Update tokenizer and new milli version with new tags 2021-08-18 14:55:10 +02:00
39d5a99095 Update milli and tokenizer version 2021-08-18 12:09:34 +02:00
2beb306834 Merge #1577
1577: Update milli dependency: fix facet values bugs r=Kerollmops a=curquiza

Fixes #1576 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-16 16:13:42 +00:00
f3e595e2f0 Update milli dependency 2021-08-16 13:36:42 +02:00
5d80d11b23 Merge #1580
1580: Update telemetry link r=curquiza a=curquiza

Here is the page the user will have: https://dev.docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html
instead of: https://docs.meilisearch.com/reference/features/configuration.html#disable-analytics

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-12 17:11:30 +00:00
621529e9dc Update telemetry link 2021-08-12 18:58:07 +02:00
535aff8f7e Merge #1578
1578: Update tokenizer version to v0.2.4 r=ManyTheFish a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-12 15:27:12 +00:00
7531280764 Update tokenizer version to v0.2.4 2021-08-12 13:55:47 +02:00
7e3b2ddff2 Merge #1554
1554: Fix dump v1 (attributesForFaceting, and criteria) r=curquiza a=MarinPostma

close #1553


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-08-05 19:45:52 +00:00
312d93961a Merge #1556
1556: Update milli to v0.9.0 r=MarinPostma a=curquiza

Fixes #1552 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-05 14:04:55 +00:00
8f05d8d546 fix clippy warnings 2021-08-05 16:00:47 +02:00
f5ddea481a reintroduce exactness 2021-08-05 15:59:39 +02:00
29ca8271b3 test dumpv1 format regression 2021-08-05 15:59:39 +02:00
3084537d1e restore attributes for faceting in dump v1 2021-08-05 15:59:39 +02:00
86ac994543 Merge #1557
1557: Fix docs link anchor r=MarinPostma a=curquiza

thank you `@guimachiavelli` 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-08-05 13:34:48 +00:00
992b082c6f Fix docs link anchor 2021-08-05 13:28:32 +02:00
31fe263356 Update milli to v0.9.0 2021-08-05 13:08:27 +02:00
7a0b20c740 Merge #1532
1532: Start writing documentation for newcomers r=MarinPostma a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2021-08-03 09:26:45 +00:00
9810f6b695 Merge #1540
1540: Update milli to version 0.8.1 r=curquiza a=curquiza

Integrates this fix into MeiliSearch https://github.com/meilisearch/milli/pull/296

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-07-29 17:15:52 +00:00
09c74c04a0 Merge #1539
1539: Use serdeval for validating json format. r=curquiza a=MarinPostma

uses [serdeval](https://github.com/MarinPostma/serdeval) to validate that the json payload is valid json, and in the correct format.

fix #1535


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-29 17:05:13 +00:00
b6cc932c09 Merge #1541
1541: Make clippy happy r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-07-29 16:53:53 +00:00
1b5d918cb9 Fix rustfmt 2021-07-29 18:32:09 +02:00
bf76d4a43c Make clippy happy 2021-07-29 18:14:36 +02:00
53b4b2fcbc Use serdeval for validating json format. 2021-07-29 18:02:54 +02:00
9a8629a6a9 Update milli 2021-07-29 17:45:31 +02:00
78308365ec fix typos 2021-07-29 14:40:41 +02:00
976075578f Merge #1537
1537: Import `.git` to the docker build image to fix vergen r=curquiza a=irevoire

I observed a small difference in the size of the build image, but I think we can allow it:
![image](https://user-images.githubusercontent.com/7032172/127369567-d03f9a41-3ad5-4933-888e-a3777df8c6cf.png)

I was not able to see any difference in build time, though.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-29 12:31:55 +00:00
243233f652 import .git to docker to fix vergen 2021-07-28 19:12:40 +02:00
d66eea42bb Merge #1536
1536: Remove ARMv7 binary publish r=MarinPostma a=curquiza

Fixes #1315 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-07-28 15:39:34 +00:00
c55f73bbc3 Remove ARMv7 support 2021-07-28 17:29:40 +02:00
3e30d4270b Merge #1533
1533: Update milli version to v0.8.0 r=MarinPostma a=curquiza

- Update milli, heed and obkv
- fix relevancy issue and the `facetsDistribution` display

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-07-28 11:15:31 +00:00
80916baa21 Add FieldId in import 2021-07-28 12:25:13 +02:00
1df8f041bd Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:10:25 +02:00
6a6e2a8cd1 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:08:51 +02:00
f9d337b320 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:08:36 +02:00
feb069f604 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:08:28 +02:00
7e0eed5772 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:08:24 +02:00
9bdd040dd0 Update meilisearch-http/src/index/mod.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-07-28 12:08:19 +02:00
e5dabf265a Update milli version to v0.8.0 2021-07-28 10:52:47 +02:00
1a1046a0ef start writing some documentation for newcomers 2021-07-27 16:35:42 +02:00
dd18319b44 Merge #1530
1530: Update mini-dashboard version to v0.1.4 r=irevoire a=mdubus



Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2021-07-27 10:11:02 +00:00
d3cd7e92d1 Update mini-dashboard version to v0.1.4 2021-07-27 11:44:20 +02:00
553e7d8aaa Merge #1528
1528: Update of the Date Time Format in commitDate  r=MarinPostma a=irevoire

Since we were relying on a [super old version of `vergen`](https://docs.rs/crate/vergen/3.0.1), we could not get the `commit timestamp`, so I updated `vergen` to the latest version.
This also allows us to remove all the features we don't use.

closes #1522

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-27 07:49:31 +00:00
f79b8287f5 update vergen 2021-07-26 15:25:30 +02:00
b4c98f6cc3 Merge #1521
1521: Sentry was never sending anything r=Kerollmops a=irevoire

@Kerollmops noticed that we had no log of this release in Sentry, and it looks like I badly tested my code after ignoring the “No space left on device” errors.

Now it should be fixed.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-21 14:46:56 +00:00
5d4a0ac844 sentry was never sending anything 2021-07-21 11:50:54 +02:00
0136b02e5b Merge #1498
1498: Show the filterable and not the faceted attributes in the settings r=Kerollmops a=Kerollmops

Fixes #1497

Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-07-13 07:27:14 +00:00
f49a01703a Show the filterable and not the faceted attributes in the settings 2021-07-09 16:11:37 +02:00
e4f82aa441 Merge #1494
1494: Add cache to the ci r=irevoire a=irevoire

closes #1446 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-08 14:06:03 +00:00
751d1af2a6 Merge #1492
1492: auth tests r=irevoire a=MarinPostma

add regression tests on route authentication.


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-08 13:13:55 +00:00
076d8fbb84 add cache to the ci 2021-07-08 11:19:12 +02:00
4b2d01a453 Merge #1484
1484: Add MeiliSearch version to issue template r=irevoire a=bidoubiwa

It is relevant to know the version of MeiliSearch before any of the other additional information that might be important.

We could also reduce the amount of required information asked of the user. I would like to suggest the following:

Instead of the `Desktop` and `Smartphone` sections, I would just improve the last section:

```
**Additional context**
Additional information that may be relevant to the issue.
[e.g. architecture, device, OS, browser]
```

By applying this, the template's final look will be the following:

-----

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**MeiliSearch version:** [e.g. v0.20.0]

**Additional context**
Additional information that may be relevant to the issue.
[e.g. architecture, device, OS, browser]

Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2021-07-08 08:49:35 +00:00
a71fa25ebe auth tests 2021-07-07 17:47:48 +02:00
b4db54cb1f Merge #1488
1488: fix search permissions r=MarinPostma a=MarinPostma

fix #1485


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-07 13:16:07 +00:00
b2ca600e79 Remove unnecessary questions 2021-07-07 11:15:08 +02:00
83725a1330 fix search permissions 2021-07-07 10:39:04 +02:00
587b837a6c Add MeiliSearch version to issue template 2021-07-06 22:04:15 +02:00
2844fe959f Merge #1483
1483: search tests r=MarinPostma a=MarinPostma

adds search tests from #1440.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-06 15:13:07 +00:00
41e271974a add tests 2021-07-06 16:21:15 +02:00
520d37983c implement index search methods 2021-07-06 11:54:09 +02:00
487d82773a Merge #1481
1481: fix bug in index deletion r=Kerollmops a=MarinPostma

this bug was caused by a heed iterator entry being deleted while still holding a reference to it.


close #1333


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-06 08:07:30 +00:00
066085f6f5 fix index deletion bug 2021-07-05 18:42:13 +02:00
0d1f5b7193 Merge #1469
1469: Return 201 on index creation r=Kerollmops a=MarinPostma

fix #1467


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-05 15:42:13 +00:00
2f3a439566 fix tests 2021-07-05 16:31:52 +02:00
9681ffca52 change index create http code 2021-07-05 16:31:51 +02:00
fddc60f893 Merge #1471
1471: Bump milli to 0.7.2 r=irevoire a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-05 13:29:38 +00:00
0f024cc225 Merge #1478
1478: refactor routes r=irevoire a=MarinPostma

refactor the route directory, so the module tree follows the route structure


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-05 12:55:39 +00:00
575ec2a06f refactor routes 2021-07-05 14:33:48 +02:00
83aef0a27d Merge #1473
1473: Update loop r=MarinPostma a=irevoire



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-07-05 12:32:29 +00:00
bc85d30076 add test 2021-07-05 12:33:28 +02:00
bc417726fc fix update loop bug 2021-07-05 12:33:22 +02:00
9949a2a930 bump milli to 0.7.2 2021-07-05 12:19:27 +02:00
71e1cb472f Merge #1457
1457: Hotfix highlight on emojis panic r=Kerollmops a=ManyTheFish

When the highlight bound is in the middle of a character
or if we are out of bounds, we highlight the complete matching word.

note: we should enhance the tokenizer and the Highlighter to match char indices.

Fix #1368

Co-authored-by: many <maxime@meilisearch.com>
2021-07-01 14:48:18 +00:00
38161ede33 Add test with special characters 2021-07-01 16:44:17 +02:00
70dd1e6263 Merge #1456
1456: Fix update loop timeout r=Kerollmops a=Kerollmops

This PR fixes a wrong fix of the update loop introduced in #1429.

Co-authored-by: Kerollmops <clement@meilisearch.com>
2021-07-01 13:53:47 +00:00
e626c9c8b9 Merge #1448
1448: Enable the tests on windows in the CI r=curquiza a=irevoire

closes #1443 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-07-01 13:12:09 +00:00
fa5f8f9531 Fix an issue with the update loop falsely breaking 2021-07-01 14:53:31 +02:00
acfe31151e Hotfix panic for unicode characters
When the highlight bound is in the middle of a character
or if we are out of bounds, we highlight the complete matching word.

note: we should enhance the tokenizer and the Highlighter to match char indices.

Fix #1368
2021-07-01 14:49:22 +02:00
cb71b714d7 fix bors 2021-07-01 14:43:54 +02:00
4c6655f68c ci: enable tests on windows 2021-07-01 14:43:54 +02:00
490836a7b3 ignore the snapshots and dumps in the gitignore (#1449) 2021-07-01 14:41:53 +02:00
c11c909bad update bors 2021-07-01 12:02:22 +02:00
5c9401ad94 Merge #1438
1438: Update milli to 0.7.1 r=curquiza a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-30 18:49:41 +00:00
768987583a Merge #1428
1428: Accept any content type as json r=curquiza a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-30 18:29:57 +00:00
cb58a8c776 Merge #1429
1429: Do not block when sending update notifications r=curquiza a=irevoire

transplant this [PR](https://github.com/meilisearch/transplant/pull/260) from @Kerollmops 🎉 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-30 17:21:56 +00:00
4f0d3b065f Update milli 2021-06-30 18:39:06 +02:00
a95c44193d Do not block when sending update notifications 2021-06-30 17:29:22 +02:00
2830853665 accept any content type as json 2021-06-30 17:05:59 +02:00
a4ca79c9b3 Merge #1427
1427: Update README.md r=curquiza a=tpayet

Update quickstart & examples for rc0.21

Co-authored-by: Thomas Payet <thomas@meilisearch.com>
2021-06-30 15:00:42 +00:00
85b0878334 Update README.md
Update quickstart & examples for rc0.21
2021-06-30 16:58:02 +02:00
d61852a73f Merge #1421
1421: Transplant the new search engine r=tpayet a=curquiza



Co-authored-by: tamo <tamo@meilisearch.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2021-06-30 14:14:11 +00:00
14b6224de7 Update docker CIs 2021-06-30 16:08:01 +02:00
f0958c7d9b Remove useless CI 2021-06-30 16:00:25 +02:00
01de7f9e36 Update version 2021-06-30 15:59:59 +02:00
9f9148a1c6 Remove legacy test CI 2021-06-30 15:50:20 +02:00
73db1b3822 Merge remote-tracking branch 'transplant/main' 2021-06-30 15:30:08 +02:00
abca68bf24 Remove legacy source code 2021-06-30 15:20:17 +02:00
eeca841a21 Merge #259
259: Run rustfmt on the whole project and add it to the CI r=curquiza a=irevoire

Since there is currently no other PR modifying the code, I think it's a good time to reformat everything and add rustfmt to the ci.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-30 11:55:30 +00:00
3a9b86ad55 add rustfmt to bors 2021-06-30 10:49:10 +02:00
f1cc141f6c Merge #258
258: Use rustls instead of openssl r=curquiza a=irevoire

I also removed all the `default-features` of reqwest since we are only using the JSON one.
Fix #255

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-29 14:42:25 +00:00
3011209e28 bump alpine version 2021-06-29 16:36:41 +02:00
29bf6a8d42 run rustfmt on the whole project and add it to the CI 2021-06-29 15:25:18 +02:00
c282466750 remove the libressl dependency from our docker file 2021-06-29 15:22:11 +02:00
de9ea94f57 Merge #257
257: Accept no content-type as json r=curquiza a=irevoire

closes #253 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-29 12:54:33 +00:00
fe7640555d fix the content-type 2021-06-29 13:16:56 +02:00
ec809ca487 use rustls instead of openssl and remove all default-features of reqwest 2021-06-29 13:07:40 +02:00
1dc99ea451 accept no content-type as json 2021-06-29 11:59:25 +02:00
f12ace3fbf Merge #256
256: Update heed and milli r=irevoire a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-29 08:49:22 +00:00
c09e610bb5 Update heed and milli 2021-06-29 10:25:47 +02:00
712abf4c5f Merge #246
246: Stop logging the no space left on device error r=curquiza a=irevoire

closes #208
@qdequele what do you think of that?
Are there any other errors we need to ignore?

As you can see in the code, once we are in `Sentry` the error has already been converted to a `String`, so the only way to know whether we need to send the error or not is to match the `String` against our error message. 
If we have a lot of other logs we want to ignore, I would suggest prefixing them all with something like:
```
User error: No space left on device
```
So in Sentry, we could just check whether the log starts with `User error:` and ignore all these errors at once.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-29 08:20:49 +00:00
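
Since the error only reaches the Sentry hook as a `String`, the filtering described in #246 comes down to string matching. A minimal sketch of that idea, with hypothetical names and the suggested `User error:` prefix (not the actual MeiliSearch code):

```rust
/// Decide whether an error that has already been turned into a `String`
/// should still be forwarded to Sentry.
fn should_report_to_sentry(message: &str) -> bool {
    // Errors caused by the user's environment rather than by a bug in MeiliSearch.
    const IGNORED_MESSAGES: &[&str] = &["No space left on device"];
    // Hypothetical prefix from the discussion above, used to ignore whole classes of logs at once.
    const IGNORED_PREFIX: &str = "User error:";

    if message.starts_with(IGNORED_PREFIX) {
        return false;
    }
    !IGNORED_MESSAGES.iter().any(|ignored| message.contains(*ignored))
}

fn main() {
    assert!(!should_report_to_sentry("No space left on device (os error 28)"));
    assert!(!should_report_to_sentry("User error: something the user can fix"));
    assert!(should_report_to_sentry("index `movies` not found"));
}
```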
261df4b386 Merge #252
252: Fix docker run r=curquiza a=curquiza

Not the most beautiful fix since I cannot update alpine to version 3.14 without being flooded with errors.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-28 15:47:24 +00:00
b0f399a51d Merge #249
249: Use half of the computer threads for the indexing process by default r=Kerollmops a=irevoire

closes #241 
By default, we use only half of the CPU threads when indexing documents; this allows the user to use the search while indexing. Also, the machine will not appear unresponsive when indexing a large batch of documents.

In the special case where a user only has one core, we use it entirely 😄 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-28 15:25:11 +00:00
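
A rough sketch of the default described in #249, assuming the thread count is derived from the number of available cores (the real code configures the indexer, which is omitted here):

```rust
use std::thread;

/// Use half of the available CPU threads for indexing, but never fewer than one,
/// so a single-core machine still gets its only core.
fn default_indexing_threads() -> usize {
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    (cores / 2).max(1)
}

fn main() {
    println!("indexing with {} thread(s)", default_indexing_threads());
}
```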
348d112388 Fix docker run 2021-06-28 16:55:29 +02:00
5c35a5d9fc Merge #250
250: Update mini-dashboard to v.0.1.3 r=curquiza a=mdubus

Should fix #245 

Co-authored-by: Morgane Dubus <30866152+mdubus@users.noreply.github.com>
2021-06-28 13:42:34 +00:00
a26bb50d62 Update mini-dashboard to v.0.1.3 2021-06-28 15:13:52 +02:00
a59f437ee3 use only half of the computer threads for the indexation by default 2021-06-28 14:35:50 +02:00
d74c698adc stop logging the no space left on device error 2021-06-28 13:59:48 +02:00
8d8fe8fd29 Merge #248
248: Unused borrow that must be used r=curquiza a=irevoire

I noticed #228 introduced a warning while compiling

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-28 11:53:22 +00:00
c1c50f6714 unused borrow that must be used 2021-06-28 13:35:25 +02:00
d7ca68d8e9 Merge #228
228: Authentication rework r=curquiza a=MarinPostma

In an attempt to fix #201, I ended up completely rewriting the authentication system we use. This is because actix doesn't allow wrapping a single route in a middleware, so we initially put each route into its own service to use the authentication middleware. Routes are now grouped in resources, fixing #201.

As for the authentication, I decided to take a very different approach and ditch middleware altogether. Instead, I decided to use actix's [extractors](https://actix.rs/docs/extractors/). `Data` is now wrapped in a `GuardedData<P: Policy, T>` (where `T` is `Data`) in each route. The `Policy` trait, thanks to its `authenticate` method, tells whether a request is authorized to access the resources in the route. Concretely, before the server starts, it is configured with an `AuthConfig` instance that can either be `AuthConfig::NoAuth` when no auth is required at runtime, or `AuthConfig::Auth(Policies)`, where `Policies` maps each `Policy` type to its singleton instance.

In the current implementation, and this is to match the legacy meilisearch behaviour, each policy implementation contains a `HashSet` of tokens (`Vec<u8>` for now) that represents the users it can authenticate. When starting the program, each key (identified as a user) is given a set of `Policy`, representing its roles. The latter is facilitated by the `create_users` macro, like so:

```rust
create_users!(
    policies,
    master_key.as_bytes() => { Admin, Private, Public },
    private_key.as_bytes() => { Private, Public },
    public_key.as_bytes() => { Public }
);
```

This is some groundwork for later development on a full-fledged authentication system for meilisearch.


fix #201

Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-28 08:38:59 +00:00
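
A heavily simplified sketch of the extractor idea described in #228, in plain Rust and without the actix plumbing; the names follow the description above but the code is illustrative, not the actual implementation:

```rust
use std::collections::HashSet;

/// A policy answers a single question: is this token allowed in?
trait Policy {
    fn authenticate(&self, token: &[u8]) -> bool;
}

/// Example policy that knows the set of tokens it can authenticate,
/// mirroring the per-role `HashSet` of keys mentioned above.
struct Private {
    tokens: HashSet<Vec<u8>>,
}

impl Policy for Private {
    fn authenticate(&self, token: &[u8]) -> bool {
        self.tokens.contains(token)
    }
}

/// Simplified stand-in for `GuardedData<P, T>`: the inner data is only
/// handed out when the policy accepts the request's token.
struct GuardedData<T> {
    data: T,
}

fn guard<P: Policy, T>(policy: &P, token: &[u8], data: T) -> Result<GuardedData<T>, &'static str> {
    if policy.authenticate(token) {
        Ok(GuardedData { data })
    } else {
        Err("invalid API key")
    }
}

fn main() {
    let mut tokens = HashSet::new();
    tokens.insert(b"private-key".to_vec());
    let policy = Private { tokens };

    let guarded = guard(&policy, b"private-key", "some index data").unwrap();
    println!("authorized, got access to: {}", guarded.data);
    assert!(guard(&policy, b"wrong-key", "some index data").is_err());
}
```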
01b09c065b change route to service<resource> 2021-06-24 19:02:28 +02:00
08104fd49c Merge #242
242: Fix docker build r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-24 15:30:27 +00:00
3b601f615a declare new authentication related errors 2021-06-24 16:53:20 +02:00
b1f7fe24f6 Fix docker build 2021-06-24 16:45:51 +02:00
fbd58f2eec clippy 2021-06-24 16:36:22 +02:00
79fc3bb84e fmt 2021-06-24 16:36:22 +02:00
8e4928c7ea fix tests 2021-06-24 16:36:22 +02:00
d078cbf39b remove authentication middleware 2021-06-24 16:36:21 +02:00
561596d8bc update stats routes 2021-06-24 16:36:18 +02:00
549b489c8a update settings routes 2021-06-24 16:35:48 +02:00
1e9f374ff8 update running route 2021-06-24 16:35:12 +02:00
817fcfdd88 update keys route 2021-06-24 16:35:12 +02:00
fab50256bc update index routes 2021-06-24 16:35:04 +02:00
b044608b25 update health route 2021-06-24 16:32:45 +02:00
ce4fb8ce20 update dump route 2021-06-24 16:32:43 +02:00
adf91d286b update documents and search routes 2021-06-24 16:32:15 +02:00
0c1c7a3dd9 implement authentication policies 2021-06-24 16:31:30 +02:00
5b71751391 policies macros 2021-06-24 16:31:30 +02:00
12f6709e1c move authentication to extractor mod 2021-06-24 16:31:28 +02:00
5229f1e220 experimental auth extractor 2021-06-24 16:30:15 +02:00
b6ca7929eb Merge #240
240: Rework error messages r=irevoire a=MarinPostma

Simplify the error messages, and make them more compliant with legacy Meilisearch.

Basically, stop composing the messages, and simply forward the message of inner errors.


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-24 11:36:11 +00:00
43204ca67b Merge #230
230: Logs r=MarinPostma a=irevoire

closes #193 

Since we can't really print the body of requests in actix-web, I logged the parameters of every request and what we were returning to the client.

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-24 09:23:24 +00:00
ad8d9a97d6 debug the body of every http request 2021-06-24 11:22:11 +02:00
36f32f58d4 add the log_level variable to the cli and reduce the log level of milli and grenad 2021-06-24 11:20:52 +02:00
b4fd4212ad reduce the log level of some info! 2021-06-24 11:20:52 +02:00
a1d34faaad decompose error messages 2021-06-24 10:57:28 +02:00
a2368db154 Merge #239
239: Bump milli to 0.6.0 r=MarinPostma a=MarinPostma

fix #231


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-24 08:08:41 +00:00
381e07b7b6 Merge #1415
1415: Fix README.md typos r=curquiza a=dichotommy

Just fixing some typos and such.
Kanji -> Hanzi
Kanji refers only to the Japanese versions of Chinese characters, and since we don't have a Japanese tokenization pipeline I think it could be misunderstood.

Co-authored-by: Tommy <68053732+dichotommy@users.noreply.github.com>
2021-06-24 07:46:28 +00:00
74bb748a4e bump milli to 0.6.0 2021-06-23 18:40:19 +02:00
09113fc73c Update README.md
Just fixing some typos and such.
Kanji refers only to Japanese versions of the Chinese characters, and since we don't have a Japanese tokenization pipeline I think it could be misleading.
2021-06-23 18:30:48 +02:00
8638c9ab77 Merge #232
232: Fix payload size limit r=MarinPostma a=MarinPostma

Fix #223

This was due to the fact that Payload ignores the payload size limit. I fixed it by implementing my own `Payload` extractor that checks that the size of the payload is not too large.

I also refactored the `create_app` a bit.

Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-23 16:06:08 +00:00
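
A minimal sketch of the idea behind the custom `Payload` extractor (illustrative only: the real extractor works on an async byte stream, which is reduced here to a slice of chunks):

```rust
/// Accumulate body chunks and fail as soon as the configured limit is exceeded,
/// instead of silently reading an arbitrarily large payload.
fn read_body(chunks: &[&[u8]], limit: usize) -> Result<Vec<u8>, String> {
    let mut body = Vec::new();
    for chunk in chunks {
        if body.len() + chunk.len() > limit {
            return Err(format!("payload too large: the limit is {limit} bytes"));
        }
        body.extend_from_slice(chunk);
    }
    Ok(body)
}

fn main() {
    let chunks: &[&[u8]] = &[br#"{"id": 1, "title": "Carol"}"#, b", ..."];
    assert!(read_body(chunks, 1024).is_ok());
    assert!(read_body(chunks, 16).is_err());
}
```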
b676b10cfe Merge #238
238: Fix settings subroutes get r=MarinPostma a=MarinPostma

Fix #225 

Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-23 15:45:50 +00:00
f68c257452 move flush in write_to_file function 2021-06-23 16:49:25 +02:00
880fc069bd remove dbg 2021-06-23 16:49:25 +02:00
a838238a63 move payload to own module 2021-06-23 16:49:25 +02:00
834995b130 clippy + fmt 2021-06-23 16:49:23 +02:00
b000ae7614 remove file if write to update file fails 2021-06-23 16:48:33 +02:00
f62779671b change error message for payload size limit 2021-06-23 16:48:33 +02:00
4b292c6e9b add payload limit to app config 2021-06-23 16:48:33 +02:00
1c13100948 implement custom payload 2021-06-23 16:48:31 +02:00
71226feb74 refactor create_app macro 2021-06-23 16:47:15 +02:00
b9b4feada8 add tests 2021-06-23 16:21:32 +02:00
3175f09989 Merge #235
235: Fix dump not found error r=MarinPostma a=MarinPostma

fix #233


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-23 14:21:07 +00:00
322d6b8cfe fix serialization bug in settings 2021-06-23 15:25:56 +02:00
da36a6b5cd fix not found error 2021-06-23 15:06:36 +02:00
f2b2ca6d55 Merge #227
227: improve mini dashboard routing r=MarinPostma a=MarinPostma

The dependency we use to statically serve the mini-dashboard used globbing to serve the mini-dashboard files. This caused all unmatched routes to be caught by the "/" route serving the dashboard assets. This fix makes it so that the assets have a dedicated route, and any unmatched route is caught by the default service and returns a 404.


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-23 13:01:40 +00:00
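
A small sketch of the routing fix, assuming actix-web 4: the dashboard gets an explicit route, and everything else falls through to a default service that returns a 404 (handler names and content are placeholders):

```rust
use actix_web::{web, App, HttpResponse, HttpServer};

async fn dashboard_index() -> HttpResponse {
    // Stand-in for serving a bundled mini-dashboard asset.
    HttpResponse::Ok().body("<html>mini-dashboard</html>")
}

async fn not_found() -> HttpResponse {
    HttpResponse::NotFound().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Dashboard assets get an explicit route instead of a glob...
            .route("/", web::get().to(dashboard_index))
            // ...so anything that matches no registered route falls through
            // to the default service and returns a 404.
            .default_service(web::route().to(not_found))
    })
    .bind(("127.0.0.1", 7700))?
    .run()
    .await
}
```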
0ebe3900e0 Merge #229
229: Add exhaustiveFacetsCount r=MarinPostma a=curquiza

I completely forgot this one 😅

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-23 09:29:54 +00:00
ec3140a29e Fix clippy 2021-06-23 11:23:57 +02:00
00b0a00fc5 Add exhaustiveFacetsCount 2021-06-23 11:05:30 +02:00
adb970edcc Merge #226
226: Make facetsDistribution name iso r=MarinPostma a=curquiza

Even if there is an English mistake in `facets_distribution` (because of the `s`), @gmourier asked me to keep the typo: the name of `facetsDistribution` might change completely in the future, and he wants to avoid two breaking changes.

@gmourier can you confirm before we merge this PR?

Sorry I left this update in the code (I'm confused because no issue was open to update `facetsDistribution`); there might have been a confusion with `fieldsDistribution`, which has been renamed into `fieldDistribution`. Sorry!

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-23 08:14:12 +00:00
6d24a4744f Roll back facetsDistribution 2021-06-23 10:04:01 +02:00
b1a5ef0aab improve mini dashboard routing 2021-06-22 21:49:05 +02:00
7ec752ed1c Merge #224
224: Update version for alpha 6 r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-22 18:20:09 +00:00
0de696feaf Update version for alpha 6 2021-06-22 18:40:51 +02:00
d6b53c5e7a Merge #220
220: Implement `matches` r=irevoire a=MarinPostma

implement `_matchesInfo`. I initially thought we could factor it into the highlighting, but they are unrelated features after all, and it needed a dedicated pass to handle.

Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-22 16:29:07 +00:00
3456a78552 refactor formatter
share the analyzer instance between the formatter and the
compute_matches function
2021-06-22 18:28:20 +02:00
eb3d63691a add tests 2021-06-22 18:12:53 +02:00
c4ee937635 optimize format string 2021-06-22 18:12:53 +02:00
f6d1fb7ac2 fmt 2021-06-22 18:12:53 +02:00
97ef4a6c22 implement matches 2021-06-22 18:12:52 +02:00
db7215eaa9 Merge #213
213: Implement all the CLI options r=MarinPostma a=irevoire

closes #206 
And I looked into #204: I fixed some default values and tried to test as many options as possible, and I think the CLI is already mostly working.
If someone knows of any issues with it, I would like to hear more 🙂 

Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-22 15:04:05 +00:00
4b37a4a415 Merge #211 #218
211: fix index deletion race condition r=MarinPostma a=MarinPostma

Make the update store block if the currently processing update is from an index we are trying to delete. This ensures that no write to the index can occur after it has been deleted.

218: Update milli version to v0.5.0 r=MarinPostma a=curquiza



Co-authored-by: marin postma <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-22 14:36:34 +00:00
d1ad23e2d8 Merge #221
221: fix get search crop len r=irevoire a=MarinPostma

Fix bug where crop length was mandatory when performing a GET search.


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-22 14:13:52 +00:00
caa231aebe fix race condition 2021-06-22 16:09:07 +02:00
9cc31c2258 fix get search crop len 2021-06-22 16:01:40 +02:00
e2844f3a92 Update tokenizer version to v0.2.3 2021-06-22 15:57:47 +02:00
2e3d85c31a Update milli version to v0.5.0 2021-06-22 15:57:46 +02:00
25af262e79 Merge #210
210: Error handling r=MarinPostma a=MarinPostma

This pr implements the error handling for meilisearch.

Rather than grouping errors by type, this implementation groups them by scope, each scope enclosing errors from a scope further down, or new errors within this scope. This makes tracking the origins of errors easier, and error handling easier at the module level.

All errors that are eventually returned to the user implement the `Into<ResponseError>` trait. `ResponseError` in turn implements the `ErrorCode` trait from `meilisearch-error`.

Some new errors have been introduced with the new engine for which we haven't defined error codes yet. It has been decided with @gmourier that those would return the `internal-error` code until the correct error code is specified.


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-22 13:21:33 +00:00
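
A condensed sketch of the scoped error layout described in #210; the enum and code names are illustrative, not the real MeiliSearch types:

```rust
use std::fmt;

// Errors local to the index scope.
#[derive(Debug)]
enum IndexError {
    DocumentNotFound(String),
}

// The scope above wraps the errors of the scope below it, or adds its own.
#[derive(Debug)]
enum IndexControllerError {
    IndexNotFound(String),
    Index(IndexError),
}

/// What is eventually returned to the user: a message plus an error code.
struct ResponseError {
    code: &'static str, // e.g. a code defined in `meilisearch-error`
    message: String,
}

impl From<IndexControllerError> for ResponseError {
    fn from(err: IndexControllerError) -> Self {
        match err {
            IndexControllerError::IndexNotFound(uid) => ResponseError {
                code: "index_not_found",
                message: format!("index `{}` not found", uid),
            },
            // Errors without a specified code fall back to `internal-error`.
            IndexControllerError::Index(inner) => ResponseError {
                code: "internal-error",
                message: format!("{:?}", inner),
            },
        }
    }
}

impl fmt::Display for ResponseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[{}] {}", self.code, self.message)
    }
}

fn main() {
    let err: ResponseError = IndexControllerError::IndexNotFound("movies".into()).into();
    println!("{}", err);
    let inner: ResponseError =
        IndexControllerError::Index(IndexError::DocumentNotFound("42".into())).into();
    println!("{}", inner);
}
```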
d0ef1ef174 change errors codes 2021-06-22 11:58:01 +02:00
905ace3e13 fix test 2021-06-22 11:10:57 +02:00
9092d35a3c fix payload error handler 2021-06-21 21:51:38 +02:00
2bdaa70f31 invalid update payload returns bad_request 2021-06-21 18:56:22 +02:00
f91a3bc6ab set error content type to json 2021-06-21 18:48:05 +02:00
1e4592dd7e enable errors in updates 2021-06-21 18:42:47 +02:00
50dc2fc7a5 Merge #219
219: Run cargo flaky only 100 times r=irevoire a=irevoire

Looks like the CI was not able to run cargo flaky 1000 times in 6 hours, so I guess, for now, we can go back to 100 times.

https://github.com/meilisearch/transplant/runs/2858159390


Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-21 16:29:27 +00:00
76727455ca ignore all the options related to the indexer 2021-06-21 18:13:00 +02:00
cf94b8e6e0 run cargo flaky only 100 times 2021-06-21 17:36:54 +02:00
1cf9f43dfe fix the tests 2021-06-21 16:34:49 +02:00
2097554c09 fix the cli 2021-06-21 16:34:49 +02:00
56686dee40 review changes 2021-06-21 13:57:32 +02:00
763ee521be fix rebase errors 2021-06-21 12:11:09 +02:00
0bfdf9a785 bump milli 2021-06-21 12:11:09 +02:00
fa573dabf0 fmt 2021-06-21 12:11:09 +02:00
abdf642d68 integrate milli errors 2021-06-21 12:11:08 +02:00
0dfd1b74c8 fix tests 2021-06-21 12:11:08 +02:00
0d3fb5ee0d factorize internal error macro 2021-06-21 12:11:08 +02:00
02277ec2cf reintroduce anyhow 2021-06-21 12:11:06 +02:00
70661ce50d Merge #216
216: optimize cropping r=MarinPostma a=MarinPostma

Optimize cropping as per @kerollmops suggestion.


Co-authored-by: marin postma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2021-06-21 10:00:45 +00:00
8fc12b1526 Update meilisearch-http/src/index/search.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-06-21 11:06:06 +02:00
439db1aae0 enable response error for search routes 2021-06-21 11:00:14 +02:00
8afbb9c462 enable response error for documents routes 2021-06-21 10:59:41 +02:00
5c52a1393f enable response error for settings routes 2021-06-21 10:59:41 +02:00
112cd1787c change error message for uuid resolver 2021-06-21 10:59:40 +02:00
d1550670a8 enable response error for index routes 2021-06-21 10:59:40 +02:00
58f9974be4 remove anyhow refs & implement missing errors 2021-06-21 10:59:38 +02:00
3a2e7d3c3b optimize cropping 2021-06-20 16:59:31 +02:00
c1b6f0e833 Merge #183
183: Add cropping and update `_formatted` behavior r=curquiza a=MarinPostma

TODO:
- [x] Solves #5 
- [x] Solves #203 
- [x] integrate the new milli highlight (according to the query words)

Co-authored-by: Marin Postma <postma.marin@protonmail.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-18 11:18:37 +00:00
5f08e41a85 Merge #215
215: Fix Clippy errors r=irevoire a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-17 17:05:11 +00:00
5d8a21b0de Fix clippy errors 2021-06-17 18:51:07 +02:00
9e8888b603 Fix clippy errors 2021-06-17 18:50:18 +02:00
623b71e81e Fix clippy errors 2021-06-17 18:02:25 +02:00
c5c7e76805 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-06-17 18:00:02 +02:00
e4b3d35ed8 Fix clippy errors 2021-06-17 17:03:43 +02:00
33e55bd82e Refactor the crop 2021-06-17 16:59:01 +02:00
9543ab4db6 Use mut instead of returning the hashmap 2021-06-17 13:51:27 +02:00
97909ce56e Use BTreeMap and remove ids_in_formatted 2021-06-16 19:30:06 +02:00
2f2484e186 Merge #212
212: bump milli to 0.4.0 r=MarinPostma a=MarinPostma



Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-16 15:42:34 +00:00
2062b10b79 Merge #209
209: Integrate amplitude r=MarinPostma a=irevoire

And merge the sentry and amplitude usage under one “Enable analytics” flag

closes #180


Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-06-16 15:25:31 +00:00
a0b022afee Add Cow 2021-06-16 17:25:02 +02:00
5a47cef9a8 bump milli to 0.4.0 2021-06-16 17:15:56 +02:00
9538790b33 Decompose into two functions 2021-06-16 17:13:21 +02:00
4e2568fd6e disable amplitude on debug build 2021-06-16 17:12:49 +02:00
dc5a3d4a62 Use BTreeSet instead of HashSet 2021-06-16 16:20:10 +02:00
7b02fdaddc Rename functions 2021-06-16 14:23:08 +02:00
c0d169e79e Apply suggestions from code review
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-16 11:12:46 +02:00
9840b5c7fb Refacto 2021-06-15 18:44:56 +02:00
1ef061d92b Fix clippy errors 2021-06-15 17:40:45 +02:00
79a1212ebe Do intersection with displayed ids instead of checking in loop 2021-06-15 17:40:45 +02:00
8d0269fcc4 Create function to create formatted_options 2021-06-15 17:40:45 +02:00
5e656bb58a Rename parse_facets into parse_filter 2021-06-15 17:40:45 +02:00
d9c0190497 Redo to_retrieve_ids 2021-06-15 17:40:45 +02:00
5dffe566fd Remove useless comments 2021-06-15 17:40:45 +02:00
b769877183 Make it compatible with the new milli highlighting 2021-06-15 17:40:44 +02:00
446b66b0fe Fix cargo clippy error 2021-06-15 17:40:44 +02:00
d0ec081e49 Refacto 2021-06-15 17:40:44 +02:00
65130d9ee7 Change crop_length type from Option<usize> to usize 2021-06-15 17:40:44 +02:00
638009fb2b Rename highlighter variable into formatter 2021-06-15 17:40:44 +02:00
7f84f59472 Reorganize imports 2021-06-15 17:40:44 +02:00
4f8c771bb5 Add new line 2021-06-15 17:40:43 +02:00
9e69f33f3c Fix clippy errors 2021-06-15 17:40:43 +02:00
0da8fa115e Add custom croplength for attributes to crop 2021-06-15 17:40:43 +02:00
811bc2f421 Around to previous word 2021-06-15 17:40:43 +02:00
caaf8d3f40 Fix tests 2021-06-15 17:40:43 +02:00
7473cc6e27 implement crop around 2021-06-15 17:40:43 +02:00
56c9633c53 simple crop before 2021-06-15 17:40:43 +02:00
93002e734c Fix tests 2021-06-15 17:40:42 +02:00
60f6d1c373 First version of highlight after refacto 2021-06-15 17:40:42 +02:00
a03d9d496e Fix compilation errors 2021-06-15 17:40:42 +02:00
7904637893 crop skeleton 2021-06-15 17:40:42 +02:00
def1596eaf Integrate amplitude
And merge the sentry and amplitude usage under one “Enable analytics”
flag
2021-06-15 15:36:30 +02:00
5795254b2a Merge #207
207: Update alpha for the next release r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-14 16:40:26 +00:00
fe5a494035 Update alpha for the next release 2021-06-14 17:55:04 +02:00
13e864d29f Merge #196
196: Implements the synonyms in transplant r=MarinPostma a=irevoire

closes #18 

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-14 14:09:08 +00:00
a780cff8fd fix clippy warning 2021-06-14 14:53:47 +02:00
7cb2dcbdf8 add a comment 2021-06-14 14:47:53 +02:00
f068d7f978 makes clippy happy 2021-06-14 14:47:53 +02:00
18d4d6097a implements the synonyms in transplant 2021-06-14 14:47:51 +02:00
b119bb4ab0 Merge #197
197: Update milli (v0.3.1) with filterable attributes r=MarinPostma a=curquiza

Fixes #187 and #70
Also fixes #195 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-14 12:19:42 +00:00
d65b5db97f Merge #144 #173
144: Concurrent update run loop (refactor) r=MarinPostma a=MarinPostma

This PR allows multiple requests to the update store to be performed concurrently (i.e., one can list updates while an update is being written to the update store).


173: Convert UpdateStatus to legacy meilisearch format r=MarinPostma a=MarinPostma

Returns the update statuses with the same format as legacy meilisearch.

The number of documents in a document addition/deletion is not known before processing, so it is only returned when the update is `processed`.

close #78 

associated milli PR: https://github.com/meilisearch/milli/pull/178


Co-authored-by: marin postma <postma.marin@protonmail.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-06-14 11:30:44 +00:00
d4be4d80db Fix after rebase 2021-06-14 13:27:18 +02:00
9996c59183 Update with milli 0.3.1 2021-06-14 13:20:43 +02:00
88bf867a3e Rename attributes for faceting into filterable attributes 2021-06-14 13:20:43 +02:00
7009906d55 Update reset-all-settings test 2021-06-14 13:20:43 +02:00
ca1bb7dc1c Fix tests 2021-06-14 13:20:43 +02:00
aa04124bfc Add changes according to milli update 2021-06-14 13:20:37 +02:00
2be834fced Merge #205
205: Fix the cron syntax to effectively run the test once every friday r=MarinPostma a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2021-06-14 11:11:03 +00:00
11c81ab4cb fix tests 2021-06-14 11:17:49 +02:00
0f767e3743 concurrent update run loop 2021-06-14 10:57:14 +02:00
92d954ddfe Fix the cron syntax to effectively run the test once every friday 2021-06-14 10:48:59 +02:00
1e659bb17b Merge #194
194: Bump sentry version r=MarinPostma a=irevoire

closes #102 

Co-authored-by: tamo <tamo@meilisearch.com>
2021-06-14 08:34:04 +00:00
e8bd5ea4e0 convert UpdateStatus to legacy meilisearch format 2021-06-14 10:21:57 +02:00
d765397c82 Merge #179
179: Enable filter parameter during search r=MarinPostma a=MarinPostma

This pr makes the necessary changes to transplant in accordance with the specification on filters.

More precisely, it:
- Removes the `filters` parameter
- Renames `facetFilters` to `filter`
- Allows either a string or an array to be passed to the filter param.

It doesn't allow the mixed syntax; that needs to be handled by milli.

close #81
close #140


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-06-14 08:11:30 +00:00
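
The "either a string or an array" part can be expressed with an untagged serde enum. A sketch assuming serde and serde_json are available (illustrative only: the real `SearchQuery` has many more fields and the array form can be nested):

```rust
use serde::Deserialize;

/// The `filter` parameter accepts either a single string or an array of strings.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum Filter {
    String(String),
    Array(Vec<String>),
}

#[derive(Debug, Deserialize)]
struct SearchQuery {
    q: String,
    filter: Option<Filter>,
}

fn main() {
    let as_string: SearchQuery =
        serde_json::from_str(r#"{ "q": "botman", "filter": "genre = comics" }"#).unwrap();
    let as_array: SearchQuery =
        serde_json::from_str(r#"{ "q": "botman", "filter": ["genre = comics", "year > 2000"] }"#)
            .unwrap();
    println!("{:?}\n{:?}", as_string, as_array);
}
```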
d46a2713d2 Merge #202
202: Add a github action to run cargo-flaky 1000 times r=curquiza a=irevoire

I don’t know how to ensure the CI works, so it’s just a first version; do not hesitate to update the code.

Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-06-10 22:04:57 +00:00
8932f302ce Merge #1403
1403: fix amount of time r=curquiza a=TheTechRobo

The new MeiliSearch sandbox website says "48 hours" rather than 72, so I updated the readme to reflect that

Co-authored-by: TheTechRobo <52163910+TheTechRobo@users.noreply.github.com>
2021-06-10 15:21:23 +00:00
51105d3b1c run the tests in release mode 2021-06-10 17:12:07 +02:00
efc1225cd8 Update .github/workflows/flaky.yml
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-10 17:07:23 +02:00
41220a7f96 Apply suggestions from code review
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-06-10 17:02:06 +02:00
7312c13665 add a github action to run cargo-flaky 1000 times 2021-06-10 16:53:30 +02:00
e6220a1346 Merge #199
199: fix flaky tests r=irevoire a=MarinPostma

fixes:
- index creation bug
- fix store lock
- fix decoding error
- fix stats


Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-06-10 14:13:25 +00:00
3ef0830c5d review changes 2021-06-10 16:11:52 +02:00
eb7616ca0f remove dbg 2021-06-10 16:03:48 +02:00
592fcbc71f fix stats test 2021-06-10 16:03:48 +02:00
20e1caef47 makes clippy happy 2021-06-10 16:03:48 +02:00
2d19b78dd8 fix stats test 2021-06-10 16:03:48 +02:00
99551fc21b fix encoding bug 2021-06-10 16:03:48 +02:00
d30641e9ca fix amount of time 2021-06-10 08:55:05 -04:00
2716c1aebb fix update store lock 2021-06-09 16:19:45 +02:00
1a65eed724 fix index creation bug 2021-06-09 11:52:36 +02:00
a26a0a4eec Merge pull request #1401 from meilisearch/remove-stop-words
Remove stop-word datasets
2021-06-08 17:56:07 +02:00
a56ac66e6c Remove stop-word datasets 2021-06-08 16:38:53 +02:00
7e2d7601f2 Merge #1399
1399: Update download-latest.sh r=curquiza a=94noni

Hey, PR of the weekend :)
Kidding, I began to use MeiliSearch recently for fun&personal usage, wishing you good luck for your next v0.21|v1.0 releases
Cheers

Co-authored-by: Antoine Makdessi <amakdessi@me.com>
2021-06-07 15:22:26 +00:00
1550b7d6ba Update download-latest.sh 2021-06-05 16:45:13 +02:00
9f40896f4a Merge #175
175: Fix update loop infinite loop r=irevoire a=MarinPostma

fix the update loop infinite loop in case of an update error.

close #169


Co-authored-by: marin postma <postma.marin@protonmail.com>
2021-06-02 23:02:10 +00:00
75c0718691 fix update loop infinite loop 2021-06-02 17:29:50 +02:00
509a56a43d Merge #158
158: Implements the dumps r=irevoire a=irevoire

closes #20

divergence from legacy meilisearch:
- dump v2 added, supports loading of pending updates (only works for dumps created with v2)
- added timestamps to the dump info
- Dump infos are only persisted in an internal data structure and are not fetched from the fs on demand anymore. This was a potential security flaw. This means that the dump infos are flushed on every restart.

Co-authored-by: tamo <tamo@meilisearch.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-06-02 12:06:47 +00:00
2d7785ae0c remove the dump_batch_size option from the CLI 2021-06-01 20:42:06 +02:00
d0552e765e forbid deserialization of Setting<Checked> 2021-06-01 20:41:45 +02:00
3a7c1f2469 Merge #191
191: dumps v2 r=irevoire a=MarinPostma



Co-authored-by: Marin Postma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2021-06-01 09:46:31 +00:00
df6ba0e824 Apply suggestions from code review
Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-06-01 11:18:37 +02:00
6609f9e3be review edits 2021-05-31 18:41:37 +02:00
1c4f0b2ccf clippy, fmt & tests 2021-05-31 16:03:39 +02:00
10fc870684 improve dump info reports 2021-05-31 15:49:04 +02:00
dffbaca63b bump sentry version 2021-05-31 13:59:31 +02:00
b3c8f0e1f6 fix empty index error 2021-05-31 10:58:51 +02:00
bc5a5e37ea fix dump v1 2021-05-31 10:42:31 +02:00
33c6c4f0ee add timestamps to dump info 2021-05-30 15:55:17 +02:00
39c16c0fe4 fix dump import 2021-05-30 12:35:17 +02:00
1cb64caae4 dump content is now only uuid 2021-05-29 00:08:17 +02:00
b258f4f394 fix dump import 2021-05-27 14:30:20 +02:00
c47369839b dump meta 2021-05-27 10:51:19 +02:00
b924e897f1 load index dump 2021-05-27 10:27:47 +02:00
e818c33fec implement load uuid_resolver 2021-05-26 20:42:09 +02:00
9278a6fe59 integrate in dump actor 2021-05-25 18:14:11 +02:00
3593ebb8aa dump updates 2021-05-25 16:44:58 +02:00
464639aa0f update actor error improvements 2021-05-25 16:44:58 +02:00
4acbe8e473 implement index dump 2021-05-25 16:44:58 +02:00
7ad553670f index error handling 2021-05-25 16:44:58 +02:00
2185fb8367 dump uuid resolver 2021-05-25 16:44:54 +02:00
cbcf50960f Merge pull request #192 from meilisearch/dumps-tasks
Dumps tasks
2021-05-25 15:49:15 +02:00
89846d1656 improve panic message 2021-05-25 15:47:57 +02:00
e5175f5dc1 merge 2021-05-25 15:24:39 +02:00
1a6dcec83a crash when the actor has no inbox 2021-05-25 15:23:13 +02:00
fe260f1330 Update meilisearch-http/src/index_controller/dump_actor/actor.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-05-25 15:13:47 +02:00
991d8e1ec6 fix the error printing 2021-05-25 10:48:57 +02:00
49a0e8aa19 use a RwLock instead of a Mutex 2021-05-24 18:19:34 +02:00
912f0286b3 remove the dump_inner trickery 2021-05-24 18:06:20 +02:00
dcf29e1081 fix the error handling in case there is a panic while creating a dump 2021-05-24 17:33:42 +02:00
529f7962f4 handle parallel requests for the dump actor 2021-05-24 15:42:12 +02:00
8a11c6c429 Implements the legacy behaviour of the dump
When asked if a dump exists we check if it's the current dump, and if
it's not then we check on the filesystem for any file matching our
`uid.dump`
2021-05-24 12:35:46 +02:00
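
A small sketch of that legacy lookup (hypothetical function and paths): a dump "exists" either if it is the dump currently being created, or if a matching `<uid>.dump` file is found on disk.

```rust
use std::path::{Path, PathBuf};

/// Check whether a dump with the given uid is known, either in memory or on disk.
fn dump_exists(current_dump_uid: Option<&str>, dumps_dir: &Path, uid: &str) -> bool {
    if current_dump_uid == Some(uid) {
        return true;
    }
    let path: PathBuf = dumps_dir.join(format!("{}.dump", uid));
    path.exists()
}

fn main() {
    let dir = Path::new("./dumps");
    println!("{}", dump_exists(Some("20210524-123456"), dir, "20210524-123456"));
    println!("{}", dump_exists(None, dir, "20200101-000000"));
}
```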
4cbf866821 merge with main 2021-05-12 18:12:37 +02:00
e0e23636c6 fix the serializer + reformat the file 2021-05-12 17:04:24 +02:00
295f496e8a atomic index dump load 2021-05-12 16:21:37 +02:00
47a1bc34de Merge #189
189: Fix snapshots r=irevoire a=MarinPostma



Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-05-12 09:28:50 +00:00
6d837e3e07 the route to create a dump must return a 202 2021-05-11 17:34:34 +02:00
1b671d4302 fix-snapshot 2021-05-11 13:57:18 +02:00
c30b32e173 add the criterion attribute when importing dumps from the v1 2021-05-11 13:21:36 +02:00
9e798fea75 fix the import of dumps without unprocessed updates 2021-05-11 13:03:47 +02:00
384afb3455 fix the way we return the settings 2021-05-11 11:47:04 +02:00
92a7c8cd17 make clippy happy 2021-05-11 00:27:22 +02:00
8b7735c20a move the import of the updates in the v2 and ignore the v1 for now 2021-05-11 00:20:55 +02:00
7d748fa384 integrate the new Settings in the dumps 2021-05-10 20:48:06 +02:00
d767990424 fix the import of the updates in the dump 2021-05-10 20:25:12 +02:00
ef438852cd fix the v1 2021-05-10 20:25:12 +02:00
40ced3ff8d first working version 2021-05-10 20:25:12 +02:00
5f5402a3ab provide a way to access the internal content path of all processing State 2021-05-10 20:25:12 +02:00
26dcb9e66d bump milli version and fix a performance issue for large dumps 2021-05-10 20:25:12 +02:00
956012da95 fix dump lock 2021-05-10 20:25:12 +02:00
24192fc550 fix tests 2021-05-10 20:25:12 +02:00
efca63f9ce [WIP] rebase on main 2021-05-10 20:25:09 +02:00
c3552cecdf WIP rebase on main 2021-05-10 20:24:18 +02:00
0f94ef8abc WIP: dump 2021-05-10 20:24:18 +02:00
0275b36fb0 [WIP] rebase on main 2021-05-10 20:24:14 +02:00
1b5fc61eb6 [WIP] rebase on main 2021-05-10 20:23:12 +02:00
0fee81678e [WIP] rebase on main 2021-05-10 20:22:18 +02:00
c4d898a265 split the dumps between v1 and v2 2021-05-10 20:20:57 +02:00
e389c088eb WIP: rebasing on master 2021-05-10 20:20:57 +02:00
ceb8d6e1c9 Merge #186
186: settings fix r=MarinPostma a=MarinPostma

add type-checked settings validation. For now it only transforms the settings accepting wildcards


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-05-10 16:42:12 +00:00
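
A sketch of the type-checked validation idea from #186, using a typestate so that only checked settings can be consumed; field and type names are illustrative, not the actual MeiliSearch types:

```rust
use std::marker::PhantomData;

struct Unchecked;
struct Checked;

/// Settings are parameterized by their validation state: an `Unchecked`
/// value must go through `check()` before being used.
struct Settings<S> {
    displayed_attributes: Option<Vec<String>>,
    _state: PhantomData<S>,
}

impl Settings<Unchecked> {
    fn check(self) -> Settings<Checked> {
        // A lone "*" wildcard means "all attributes", encoded here as `None`.
        let displayed_attributes = match self.displayed_attributes {
            Some(attrs) if attrs.iter().any(|a| a == "*") => None,
            other => other,
        };
        Settings { displayed_attributes, _state: PhantomData }
    }
}

fn main() {
    let raw = Settings::<Unchecked> {
        displayed_attributes: Some(vec!["*".to_string()]),
        _state: PhantomData,
    };
    let checked = raw.check();
    println!("{:?}", checked.displayed_attributes);
}
```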
0cc79d414f add test 2021-05-10 18:34:25 +02:00
8d11b368d1 implement check 2021-05-10 18:22:41 +02:00
706643dfed type setting struct 2021-05-10 17:30:09 +02:00
b192cb9c1f enable string syntax for the filters 2021-05-06 12:48:31 +02:00
998d5ead34 Merge #182
182: remove facet setting r=MarinPostma a=MarinPostma

remove useless code


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-05-05 11:22:12 +00:00
ec7eb7798f remove facet setting 2021-05-04 22:36:31 +02:00
a717925caa remove filters, rename facet_filters to filter 2021-05-04 18:20:56 +02:00
88ae02f8d9 Merge #174
174: Upgrade Tokenizer r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-05-04 15:57:07 +00:00
eb03a3ccb1 Upgrade Milli and Tokenizer 2021-05-04 17:56:19 +02:00
77740829bd Merge #177
177: bump milli r=MarinPostma a=MarinPostma



Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-05-04 13:49:37 +00:00
928fb34eff bump milli and fix tests 2021-05-04 15:10:22 +02:00
1e6b40a24b Merge #172
172: Fix cors authentication issue r=MarinPostma a=MarinPostma

The error was due to the middleware returning an error, instead of a response containing the error.

close #110


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-05-03 08:38:42 +00:00
78217bcf18 Fix cors authentication issue 2021-04-29 16:28:12 +02:00
53c88d9fa3 Merge #170
170: Improve CI r=MarinPostma a=curquiza

Checked with @Kerollmops to improve (a little bit) the CI execution time.

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-29 14:08:33 +00:00
b14fdb1163 Merge #171
171: Update mini-dashboard with version 0.1.2 r=MarinPostma a=mdubus

Update of the mini-dashboard sha1 & assets-url, due to a new release

Co-authored-by: Morgane Dubus <morgane.d@meilisearch.com>
2021-04-29 13:48:54 +00:00
3d5fba94c2 Update mini-dashboard with version 0.1.2 2021-04-29 15:22:41 +02:00
3ee2b07918 Improve CI 2021-04-29 15:19:48 +02:00
8bc7dd8b03 Merge #143
143: Shared update store r=irevoire a=MarinPostma

This PR changes the updates process so that only one instance of an update store is shared among indexes.

This allows updates to always be processed sequentially without additional synchronization, and fixes the bug where the first pending update of every index was reported as processing whereas only one was.

EDIT:

I ended up having to rewrite the whole `UpdateStore` so that updates are really queued and processed sequentially in the order they were added. For that purpose I created a `pending_queue` that orders the updates by a global update id.

To find the next `update_id` to use, both globally and for each index, I have created another database that contains the next id to use.

Finally, all updates that have been processed (successfully or otherwise) are stored in an `updates` database.

The layout of the keys of these databases is such that it is easy to iterate over the elements for a particular index, and it greatly reduces the amount of code needed to do so, compared to the former implementation.

I have also simplified the locking mechanism for the update store, thanks to the StateLock data structure, which allows both an arbitrary number of readers and a single writer to concurrently access the state. The current state can be either Idle, Processing, or Snapshotting. When an update or a snapshot is ongoing, the process holds the state lock until it is done processing its task. When it is done, it sets the state back to Idle.

I have made other small improvements here and there, and have left some others for later work, such as:
- When creating an update file to hold a request's content, it would be preferable to first create a temporary file, and then atomically persist it once we have written to it. This would simplify the case where there is no data to be written to the file, since we wouldn't have to take care of cleaning up after ourselves.
- The logic for content validation must be factored.
- Some more tests related to error handling in the process_pending_update function.
- The issue #159

close #114


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-27 18:41:55 +00:00
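
A rough sketch of the `StateLock` idea from #143, showing the three states mentioned above; this version is built on a plain `RwLock`, whereas the actual data structure differs:

```rust
use std::sync::{Arc, RwLock};

/// The update store is always in exactly one of these states.
#[derive(Debug, Clone)]
enum State {
    Idle,
    Processing { update_id: u64 },
    Snapshotting,
}

/// Many readers can observe the current state while a single writer
/// (the update loop or the snapshotter) holds it for the duration of its task.
struct StateLock(Arc<RwLock<State>>);

impl StateLock {
    fn new() -> Self {
        StateLock(Arc::new(RwLock::new(State::Idle)))
    }

    fn read(&self) -> State {
        self.0.read().unwrap().clone()
    }

    fn set(&self, state: State) {
        *self.0.write().unwrap() = state;
    }
}

fn main() {
    let state = StateLock::new();
    state.set(State::Processing { update_id: 42 });
    println!("current state: {:?}", state.read());
    // Once the task is done, the state is set back to Idle.
    state.set(State::Idle);
    println!("current state: {:?}", state.read());
}
```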
e6fd1afc3d Merge pull request #163 from meilisearch/curquiza-patch-1
Update README.md
2021-04-27 18:51:04 +02:00
a961f0ce75 fix clippy warnings 2021-04-27 18:28:46 +02:00
cea0c1f41d Update README.md 2021-04-27 16:33:22 +02:00
703d2026e4 Update README.md 2021-04-27 16:33:00 +02:00
3d85b2d854 Merge #162
162: Re-enable ranking rules route r=MarinPostma a=MarinPostma

re-enable ranking rules setting route


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-27 13:55:40 +00:00
bb79a15c04 reenable ranking rules route 2021-04-27 15:29:00 +02:00
4fe2a13c71 rewrite update store 2021-04-27 15:20:52 +02:00
51829ad85e review fixes 2021-04-27 15:10:57 +02:00
c78f351300 fix tests 2021-04-27 15:10:57 +02:00
ee675eadf1 fix stats 2021-04-27 15:10:55 +02:00
33830d5ecf fix snapshots 2021-04-27 15:09:55 +02:00
2b154524bb fix filtered out pending update 2021-04-27 15:09:23 +02:00
b626d02ffe simplify index actor run loop 2021-04-27 15:09:22 +02:00
9ce68d11a7 single update store instance 2021-04-27 15:09:21 +02:00
5a38f13cae multi_index udpate store 2021-04-27 15:07:13 +02:00
7055384aeb Merge #116
116: Add tests for every platform + clippy r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-27 11:07:58 +00:00
0c41adf868 Update CI 2021-04-27 12:43:00 +02:00
1ba46f8f77 Disable clippy rule 2021-04-27 12:43:00 +02:00
f80ea24d2b Add tests on every platform and fix clippy errors 2021-04-27 12:42:59 +02:00
d34d7cbc37 Merge #161
161: put mini-dashboard in out-dir r=MarinPostma a=MarinPostma

This PR puts the mini-dashboard, during build, in the `OUT_DIR` specified by cargo. This allows the mini-dashboard artifacts to be cleaned when `cargo clean` is run, and keeps the working directory from being polluted with unwanted files.


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-27 07:40:23 +00:00
5014f74649 put mini-dashboard in out-dir 2021-04-27 09:32:17 +02:00
1f32f35d9e Merge #160
160: Update version for the next release (alpha4) r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-26 19:09:08 +00:00
f3b6bf55a6 Update version for the next release (alpha4) 2021-04-26 19:05:16 +02:00
9e6a7e3aa9 Merge #153
153: integrate mini dashboard r=MarinPostma a=MarinPostma

This PR integrate the [mini dashboard](https://github.com/meilisearch/mini-dashboard) to transplant.

It adds a build feature `mini-dashboard` to statically add the mini-dashboard to the MeiliSearch binary. The mini-dashboard build feature is enabled by default and can be disabled by building MeiliSearch with `cargo build --no-default-features`.

- [x] Fetch the mini-dashboard from the Github release
- [x] Check that the SHA1 on the downloaded payload matches the one in the metadata
- [x] Unpack the mini dashboard in `meilisearch-http/mini-dashboard`
- [x] serve the mini-dashboard if the `mini-dashboard` feature is enabled
- [x] Update CI to build MeiliSearch with mini-dashboard for releases

close #87

## Shasum check and build optimizations.

In order to make sure that the right bundle for the mini-dashboard is downloaded, its shasum is computed and compared to the one specified in the `Cargo.toml`. If the shasums match, then the shasum is written to the `.mini-dashboard.sha1` file for later comparison. On subsequent builds, the build script will check that both the mini-dashboard assets and the shasum file are found and that the shasum file content matches the one from the toml file. It will only perform a regeneration of the static dashboard files if it finds that either the dashboard is not present where it expects it to be, or that it is outdated, by comparing the shasums.

## Notes

I had to rely on a [custom patch](https://github.com/MarinPostma/actix-web-static-files/tree/actix-web-4) of actix-web-static-files to support actix-web 4 beta 6. There is currently a [PR on the official repo](https://github.com/kilork/actix-web-static-files/pull/35) to support actix-web 4, but it most likely won't be merged until actix is stabilized.


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-26 16:22:20 +00:00
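
A sketch of the shasum-based cache check described above, with the hashing of the downloaded bundle itself left out; the paths and the expected value are placeholders, not the real build script:

```rust
use std::fs;
use std::path::Path;

/// The dashboard is only (re)generated if it is missing or if the stored
/// shasum no longer matches the one pinned in Cargo.toml.
fn dashboard_is_up_to_date(dashboard_dir: &Path, sha1_file: &Path, expected_sha1: &str) -> bool {
    if !dashboard_dir.exists() {
        return false;
    }
    match fs::read_to_string(sha1_file) {
        Ok(stored) => stored.trim() == expected_sha1,
        Err(_) => false,
    }
}

fn main() {
    let expected = "74b7cd5c01"; // placeholder for the shasum pinned in Cargo.toml
    let up_to_date = dashboard_is_up_to_date(
        Path::new("mini-dashboard"),
        Path::new(".mini-dashboard.sha1"),
        expected,
    );
    if up_to_date {
        println!("mini-dashboard already generated, skipping download");
    } else {
        println!("(re)generating the mini-dashboard assets");
        // ...download the bundle, verify its sha1 against `expected`,
        // unpack it, then write `expected` to `.mini-dashboard.sha1`.
    }
}
```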
77481d7c76 update gitignore 2021-04-26 18:21:09 +02:00
c2461e5066 review fixes 2021-04-26 10:20:46 +02:00
e4bd1bc5ce update actix-web-static-file rev 2021-04-22 11:42:41 +02:00
90f57c1329 update CI & Dockerfile 2021-04-22 11:22:09 +02:00
6af769af20 bump mini-dashboard 2021-04-22 10:45:05 +02:00
6bcf20c70e serve static site 2021-04-22 10:26:54 +02:00
bb79695e44 load mini-dashboard assets 2021-04-22 10:26:54 +02:00
ea5517bc8c add mini-dashboard feature 2021-04-22 10:26:54 +02:00
da08a1f25c Merge #157
157: Use <em> tags instead of <mark> tags for highlighting r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-22 08:11:07 +00:00
a72d2f66cd use <em> tags instead of <mark> tags for highlighting 2021-04-21 19:14:55 +02:00
e5df58bc04 Merge #150
150: add _formated field to search result r=MarinPostma a=MarinPostma

close #75 

Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-21 16:33:30 +00:00
662ffc8fa5 Merge #155
155: Fix dockerfile r=MarinPostma a=curquiza

docker build and run works now :)

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-21 10:22:01 +00:00
ce5e4743e6 Fix dockerfile 2021-04-21 11:00:04 +02:00
dd2914873b fix document fields order 2021-04-20 21:30:30 +02:00
d9a29cae60 fix ignored displayed attributes 2021-04-20 21:23:35 +02:00
7a737d2bd3 support wildcard 2021-04-20 21:23:35 +02:00
881b099c8e add tests 2021-04-20 21:23:34 +02:00
c6bb36efa5 implement _formated 2021-04-20 21:23:28 +02:00
526a05565e add SearchHit structure 2021-04-20 21:22:48 +02:00
09f13823f4 Merge #154
154: Update version for the next release (alpha3) r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-20 14:21:18 +00:00
b8e535579f Update version for the next release (alpha3) 2021-04-20 16:11:07 +02:00
63d443deb8 Merge #124
124: enable distinct r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-20 13:52:00 +00:00
f8c338e3a7 add test for dedicated distinct route 2021-04-20 15:49:17 +02:00
6c470cf687 enable distinct-attribute setting route 2021-04-20 11:34:18 +02:00
ec63e13896 bump actix 2021-04-20 11:29:32 +02:00
1746132c7d add test set/reset distinct attribute 2021-04-20 11:29:08 +02:00
ec230c2835 enable distinct 2021-04-20 11:29:06 +02:00
bf3c04f2dc Merge #152
152: bump actix r=irevoire a=MarinPostma



Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-20 09:16:15 +00:00
45665245dc bump actix 2021-04-20 11:07:23 +02:00
94c5c5843b Merge #149
149: Handle star in attributes_to_retrieve r=MarinPostma a=curquiza

Closes #147

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-19 17:29:21 +00:00
c05d260d9a Merge #148
148: Update milli version to v0.1.1 r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-19 17:22:20 +00:00
8eceba98d3 Handle star in attributes_to_retrieve 2021-04-19 18:20:19 +02:00
2c380731b9 Update milli version to v0.1.1 2021-04-19 16:03:39 +02:00
7ce74f95a2 Merge #146
146: Remove another unused legacy file r=MarinPostma a=irevoire

When doing #135 I missed an old useless file in the src/routes directory

Co-authored-by: tamo <tamo@meilisearch.com>
2021-04-15 18:05:28 +00:00
a3813dd453 Merge #145
145: Update tokenizer to v0.2.1 r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-15 17:56:47 +00:00
ec3a08ea0c remove another unused legacy file 2021-04-15 14:44:43 +02:00
b0717b75d9 Update tokenizer to v0.2.1 2021-04-14 19:06:18 +02:00
6359a08cfe Merge #139
139: Fix commit date & SHA in startup message r=MarinPostma a=shekhirin

Resolves https://github.com/meilisearch/transplant/issues/137
Resolves https://github.com/meilisearch/transplant/issues/138

---
I ran a GitHub Action towards my own dockerhub: https://github.com/shekhirin/transplant/actions/runs/732666353

Startup message now shows correct `Commit SHA` and `Commit date` (changed from `Build date`).
```console
➜ transplant (shekhirin/startup-git-vars) ✔ docker run -it -p 7700:7700 shekhirin/meilisearch:v0.21.0-alpha.2 ./meilisearch --no-analytics=true
Unable to find image 'shekhirin/meilisearch:v0.21.0-alpha.2' locally
v0.21.0-alpha.2: Pulling from shekhirin/meilisearch
bfdacc68c91b: Already exists 
73b1ed30fa0b: Pull complete 
6607217ed754: Pull complete 
Digest: sha256:31bd6ac37e8711ab9d4123cf2ba2f942686569f08d68cfed8643752f381bfb74
Status: Downloaded newer image for shekhirin/meilisearch:v0.21.0-alpha.2

888b     d888          d8b 888 d8b  .d8888b.                                    888
8888b   d8888          Y8P 888 Y8P d88P  Y88b                                   888
88888b.d88888              888     Y88b.                                        888
888Y88888P888  .d88b.  888 888 888  "Y888b.    .d88b.   8888b.  888d888 .d8888b 88888b.
888 Y888P 888 d8P  Y8b 888 888 888     "Y88b. d8P  Y8b     "88b 888P"  d88P"    888 "88b
888  Y8P  888 88888888 888 888 888       "888 88888888 .d888888 888    888      888  888
888   "   888 Y8b.     888 888 888 Y88b  d88P Y8b.     888  888 888    Y88b.    888  888
888       888  "Y8888  888 888 888  "Y8888P"   "Y8888  "Y888888 888     "Y8888P 888  888

Database path:          "./data.ms"
Server listening on:    "http://0.0.0.0:7700"
Environment:            "development"
Commit SHA:             "038f1c740198f974743ba87fce7b74a8d0b71b5c"
Commit date:            "2021-04-09"
Package version:        "0.21.0-alpha.2"
Sentry DSN:             "https://5ddfa22b95f241198be2271aaf028653@sentry.io/3060337"
Anonymous telemetry:    "Disabled"

No master key found; The server will accept unidentified requests. If you need some protection in development mode, please export a key: export MEILI_MASTER_KEY=xxx

Documentation:          https://docs.meilisearch.com
Source code:            https://github.com/meilisearch/meilisearch
Contact:                https://docs.meilisearch.com/resources/contact.html or bonjour@meilisearch.com

[2021-04-09T10:29:49Z INFO  actix_server::builder] Starting 2 workers
[2021-04-09T10:29:49Z INFO  actix_server::builder] Starting "actix-web-service-0.0.0.0:7700" service on 0.0.0.0:7700
[2021-04-09T10:29:49Z INFO  meilisearch_http::index_controller::uuid_resolver::actor] uuid resolver started
[2021-04-09T10:29:49Z INFO  meilisearch_http::index_controller::update_actor::actor] Started update actor.
```

Endpoint also works as expected (`buildDate` -> `commitDate`)
```console
➜ transplant (shekhirin/startup-git-vars) ✔ curl http://localhost:7700/version
{"commitSha":"038f1c740198f974743ba87fce7b74a8d0b71b5c","commitDate":"2021-04-09","pkgVersion":"0.21.0-alpha.2"}
```

Co-authored-by: Alexey Shekhirin <a.shekhirin@gmail.com>
2021-04-13 17:38:47 +00:00
f87afbc558 fix(http): commit date & SHA in startup message 2021-04-13 20:16:18 +03:00
8df5f73706 Merge #133
133: Implement stats route r=MarinPostma a=shekhirin

Resolves https://github.com/meilisearch/transplant/issues/73

Co-authored-by: Alexey Shekhirin <a.shekhirin@gmail.com>
2021-04-13 17:03:33 +00:00
9eaf048a06 fix(http): use BTreeMap instead of HashMap to preserve stats order 2021-04-13 11:59:07 +03:00
adfdb99abc feat(http): calculate updates' and uuids' dbs size 2021-04-09 15:59:12 +03:00
ae1655586c fixes after review 2021-04-09 14:40:48 +03:00
698a1ea582 feat(http): store processing as RwLock<Option<Uuid>> in index_actor 2021-04-09 14:34:43 +03:00
87412f63ef feat(http): implement is_indexing for stats 2021-04-09 14:34:42 +03:00
09d9a29176 test(http): server & index stats 2021-04-09 14:34:42 +03:00
dd9eae8c26 feat(http): stats route 2021-04-09 14:34:42 +03:00
a1d04fbff5 Merge #136
136: Rename update status "pending" into "enqueued" r=curquiza a=curquiza

Closes #107 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-08 16:46:12 +00:00
dd1a08087b Merge #134
134: fix(http, index): init analyzer with optional stop words r=MarinPostma a=shekhirin

Also bump `milli` and `meilisearch-tokenizer` packages versions

Co-authored-by: Alexey Shekhirin <a.shekhirin@gmail.com>
2021-04-08 16:13:15 +00:00
51ba1bd7d3 fix(http, index): init analyzer with optional stop words
Next release

update tokenizer
2021-04-08 17:16:13 +03:00
f881e8691e Merge #135
135: Add stop words r=curquiza a=irevoire

closes #21 

Co-authored-by: tamo <tamo@meilisearch.com>
2021-04-08 11:29:00 +00:00
94c0858c27 Merge #1327
1327: Update link after branch renaming r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-08 05:47:20 +00:00
6aaa4a8e19 Update link after branch renaming 2021-04-07 19:47:48 +02:00
cb23775d18 Rename pending into enqueued 2021-04-07 19:46:36 +02:00
0344cf5874 Merge #122
122: Update display r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-07 12:33:25 +00:00
4a1b033765 Merge #1318
1318: Update README.md for contributions r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-06 23:11:29 +00:00
dcd60a5b45 add more tests for the stop_words 2021-04-06 18:29:38 +02:00
b1962c8e02 remove legacy files from meilisearch that have been replaced by a macro in routes/settings/mod.rs 2021-04-06 16:29:04 +02:00
40ef9a3c6a push a first implementation of the stop_words 2021-04-06 16:29:04 +02:00
2206a44baf Merge #132
132: Next release (alpha2) r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-04-01 15:25:45 +00:00
4ee6ce7871 Next release 2021-04-01 17:16:16 +02:00
6cb8052d3d Merge #104
104: Update all the response format (issue #64) r=MarinPostma a=irevoire

closes #64 

Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: tamo <tamo@meilisearch.com>
2021-04-01 14:22:57 +00:00
73973e2b9e fix more settings routes 2021-04-01 15:50:45 +02:00
89e05fc6c5 Merge #113
113: snapshots r=MarinPostma a=MarinPostma

This PR adds support for snapshotting.

The snapshotting process for an index requires that no other update is being processed at the same time. A mutex lock has been added to prevent a snapshot from occurring at the same time as an update, while still permitting updates to be pushed.

The list of the indexes to snapshot is first retrieved from the `UuidResolver`, which also performs its own snapshot.

This list is passed to the update store, which attempts to acquire a lock on itself while it snapshots itself and its associated index store.

This means that a snapshot can only be completed once all indexes have finished their ongoing update.

This PR also refactors the code to allow unit testing and mocking, and adds unit tests for snapshot creation.
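
For illustration, a minimal sketch of the locking idea, with a hypothetical `UpdateStore` stand-in (the real implementation differs):

```rust
use std::sync::Mutex;

// Illustrative stand-in for the update store: the mutex guards the
// "processing an update" critical section, not the queue itself, so new
// updates can still be pushed while a snapshot waits for its turn.
struct UpdateStore {
    snapshot_lock: Mutex<()>,
}

impl UpdateStore {
    fn process_update(&self) {
        let _guard = self.snapshot_lock.lock().unwrap();
        // apply the update to the index while holding the lock
    }

    fn snapshot(&self) {
        // Waits until no update is being processed, then snapshots the
        // update store and its associated index store.
        let _guard = self.snapshot_lock.lock().unwrap();
        // copy the environments to the snapshot directory here
    }
}

fn main() {
    let store = UpdateStore { snapshot_lock: Mutex::new(()) };
    store.process_update();
    store.snapshot();
}
```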

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: tamo <irevoire@protonmail.ch>
Co-authored-by: marin <postma.marin@protonmail.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-04-01 13:16:00 +00:00
248e9b3808 Merge remote-tracking branch 'origin/main' into snapshots 2021-04-01 15:10:33 +02:00
79c63049d7 update the settings routes 2021-04-01 11:52:26 +02:00
96cffeab1e update all the response format to be ISO with meilisearch, see #64 2021-04-01 11:43:03 +02:00
39a18d4edc Update README.md 2021-04-01 00:00:21 +02:00
6e1ddfea5a Merge pull request #129 from shekhirin/fix-docker-commit-sha
fix(ci, http): commit_sha and commit_date in docker builds
2021-03-31 21:46:17 +02:00
d8af4a7202 ignore snapshot test (#130) 2021-03-31 20:07:52 +02:00
3d51db5929 fix(ci, http): commit_sha and commit_date in docker builds
chore(ci): cache dependencies in Docker build
2021-03-31 13:56:28 +03:00
b0956c09c1 Merge pull request #127 from shekhirin/docker-deps-cache
chore(ci): cache dependencies in Docker build
2021-03-31 12:48:57 +02:00
a294462a06 Merge #1319
1319: Stable into master r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-03-31 09:32:48 +00:00
5bc464dc53 chore(ci): cache dependencies in Docker build 2021-03-31 11:23:09 +03:00
7807a8dcff Merge #1315
1315: fix armv7 r=MarinPostma a=MarinPostma

fix armv7 build

This was caused by `usize` being 32 bits on armv7 and 64 bits on all other targeted architectures.
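
For illustration, a tiny standalone example of the pitfall: `usize` changes width across targets, so anything serialized to disk or exchanged between machines should use a fixed-width type instead. This is a generic sketch, not the actual fix:

```rust
// On armv7, usize is 32 bits; on x86_64/aarch64 it is 64 bits, so any
// on-disk or over-the-wire encoding must pick a fixed-width type instead.
fn encode_len(len: usize) -> [u8; 8] {
    (len as u64).to_le_bytes() // always 8 bytes, regardless of target
}

fn main() {
    println!("usize is {} bits here", usize::BITS);
    assert_eq!(encode_len(42).len(), 8);
}
```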


Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-03-29 17:20:50 +00:00
0bad5529d8 Merge #1309
1309: fix snapshot r=MarinPostma a=MarinPostma

fix snapshot broken by #1238.

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Marin Postma <postma.marin@protonmail.com>
2021-03-29 15:20:46 +00:00
4fe885408b fix arm 2021-03-29 17:19:31 +02:00
9a1ab4e69f fix test 2021-03-29 14:10:37 +02:00
e0b3c4f82f Merge #1310
1310: Fix display of http address r=MarinPostma a=curquiza

Wrong display introduced by https://github.com/meilisearch/MeiliSearch/pull/1206

Now displaying:

<img width="968" alt="Capture d’écran 2021-03-26 à 12 04 59" src="https://user-images.githubusercontent.com/20380692/112622594-8c173080-8e2b-11eb-81c3-5876d273e5fa.png">


Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-03-29 11:04:49 +00:00
ac858d9800 Remove clippy warnings in CI 2021-03-29 12:01:26 +02:00
7050236a93 Merge pull request #123 from irevoire/snapshots
remove the now useless dead_code flags
2021-03-26 17:54:38 +01:00
0f2143e7fd remove the now useless dead_code flags 2021-03-26 14:15:12 +01:00
b9f79c8df0 Update display 2021-03-26 12:12:55 +01:00
9587ea7f06 Fix display of http address 2021-03-26 12:04:22 +01:00
7f68b83cb7 fix snapshot 2021-03-26 11:34:37 +01:00
d7c077cffb atomic snapshot import 2021-03-25 14:48:51 +01:00
7d6ec7f3d3 resolve merge 2021-03-25 14:21:05 +01:00
f3dc853be3 Merge remote-tracking branch 'origin/main' into snapshots 2021-03-25 13:45:07 +01:00
28095c6454 Merge #1307
1307: change ubuntu version r=MarinPostma a=MarinPostma

Change the CI Ubuntu version from `latest` to `18.04`, because `latest` uses too recent a version of glibc, preventing MeiliSearch from running on the Debian version of the DO image.


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-25 11:42:13 +00:00
48507460b2 add snapshot tests 2021-03-25 12:02:10 +01:00
bb7d3be1b8 change ubuntu version 2021-03-25 10:44:40 +01:00
d029464de8 fix snapshot path 2021-03-25 10:23:31 +01:00
79d09705d8 perform snapshot on startup 2021-03-25 09:35:15 +01:00
868658f3d8 Merge #109
109: Make updates atomic r=curquiza a=MarinPostma

Until now, in the case of automatic index creation, the index_uid->uuid mapping was done before the update was written to disk. This was an issue when the update failed, because the index would still exist in the uuid resolver.

This PR fixes that by first creating the update with a uuid if the index does not exist, and then registering this uuid with the uuid resolver.

This is preliminary work for the implementation of snapshots (#19).

This PR also renames the `resolve` method on the `UuidResolver` to `get` to make it clearer.


The `create_uuid` method is likely to disappear when index name resolution is handled by a remote machine.
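
A minimal sketch of that ordering, with hypothetical names (`register_update`, a plain `HashMap` as the resolver) standing in for the real actors:

```rust
use std::collections::HashMap;
use uuid::Uuid;

// Hypothetical simplified flow: the uuid is minted and the update is
// persisted first; the index_uid -> uuid mapping is registered only once
// the update has safely been written, so a failed write leaves no
// dangling entry in the uuid resolver.
fn register_update(
    index_uid: &str,
    resolver: &mut HashMap<String, Uuid>,
    write_update: impl Fn(Uuid) -> Result<(), std::io::Error>,
) -> Result<Uuid, std::io::Error> {
    let uuid = resolver.get(index_uid).copied().unwrap_or_else(Uuid::new_v4);
    write_update(uuid)?; // persist the update to disk first
    resolver.entry(index_uid.to_string()).or_insert(uuid); // then register the mapping
    Ok(uuid)
}

fn main() {
    let mut resolver = HashMap::new();
    let uuid = register_update("movies", &mut resolver, |_uuid| Ok(())).unwrap();
    assert_eq!(resolver.get("movies"), Some(&uuid));
}
```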

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-24 12:24:32 +00:00
fe87477238 Merge #115
115: Add the exhaustiveNbHits in search response body (returns always false) r=curquiza a=irevoire

closes #103 

Co-authored-by: tamo <irevoire@protonmail.ch>
Co-authored-by: Irevoire <irevoire@protonmail.ch>
2021-03-24 12:16:53 +00:00
d892a2643e fix clippy 2021-03-24 12:38:59 +01:00
83ffdc888a remove bad file name test 2021-03-24 12:38:59 +01:00
4041d9dc48 format code 2021-03-24 12:38:59 +01:00
1f16c8d224 integration test snapshot 2021-03-24 12:38:59 +01:00
06f9dae0f3 remove prints 2021-03-24 12:38:59 +01:00
48d5f88c1a fix snapshot dir already exists 2021-03-24 12:38:59 +01:00
eb53ed4cc1 load snapshot 2021-03-24 12:38:59 +01:00
46293546f3 add tests and mocks 2021-03-24 12:38:59 +01:00
3cc3637e2d refactor for tests 2021-03-24 12:38:56 +01:00
1f51fc8baf create indexes snapshots concurrently 2021-03-24 12:38:12 +01:00
e9da191b7d fix snapshot bugs 2021-03-24 12:38:12 +01:00
d73fbdef2e remove from snapshot 2021-03-24 12:38:12 +01:00
44dcfe29aa clean snapshot creation 2021-03-24 12:38:12 +01:00
a85e7abb0c fix snapshot creation 2021-03-24 12:38:12 +01:00
4847884165 restore snapshots 2021-03-24 12:38:12 +01:00
7f6a54cb12 add lock to prevent snapshot during update 2021-03-24 12:38:12 +01:00
520f7c09ba sequential index snapshot 2021-03-24 12:38:12 +01:00
35a7b800eb snapshot indexes 2021-03-24 12:38:12 +01:00
c966b1dd94 use options to schedule snapshot 2021-03-24 12:38:11 +01:00
ee838be41b implement snapshot scheduler 2021-03-24 12:38:11 +01:00
127e944866 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-03-23 19:13:22 +01:00
cc81aca6a4 Update meilisearch-http/src/index/search.rs
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-03-23 10:47:19 +01:00
46d7cedb18 Update meilisearch-http/src/index/search.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-03-23 10:46:59 +01:00
5f33672f0e change payload send to use stream methods 2021-03-22 19:49:21 +01:00
b690f1103a fix typos 2021-03-22 19:25:56 +01:00
91089db444 add the exhaustive nb hits to be ISO, currently it's always set to false 2021-03-22 18:41:33 +01:00
70fd4f109d Merge #1299
1299: bump meilisearch r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-22 15:14:11 +00:00
186b0869df edit changelog 2021-03-22 16:10:53 +01:00
7652fc1a04 bump meilisearch 2021-03-22 16:03:19 +01:00
2f418ee767 Merge #108
108: use write senders for updates r=MarinPostma a=MarinPostma

 Use write senders to send updates to the `IndexActor`, so updates are performed sequentially on all indexes.
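
For illustration, a minimal tokio sketch of the write-sender idea (the message type and channel size are assumptions, not the actual `IndexActor` protocol):

```rust
use tokio::sync::mpsc;

#[derive(Debug)]
enum IndexMsg {
    Update { payload: String },
}

#[tokio::main]
async fn main() {
    // A single receiver drains the write channel, so updates are applied
    // to an index one at a time, in the order they were sent.
    let (write_tx, mut write_rx) = mpsc::channel::<IndexMsg>(100);

    let index_actor = tokio::spawn(async move {
        while let Some(IndexMsg::Update { payload }) = write_rx.recv().await {
            // apply the update sequentially here
            println!("applying update: {payload}");
        }
    });

    write_tx
        .send(IndexMsg::Update { payload: "add documents".into() })
        .await
        .unwrap();
    drop(write_tx); // closing the channel lets the actor loop end
    index_actor.await.unwrap();
}
```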

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-22 14:18:43 +00:00
2ecde74fa4 Merge #112
112: fix root route r=MarinPostma a=irevoire

closes #93

Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-03-22 14:08:59 +00:00
7ecefe37da fix root route 2021-03-19 11:34:54 +01:00
89d13706f1 Merge #1291
1291: Use 200 status code for healthcheck endpoint  r=MarinPostma a=irevoire

closes  #1282

Co-authored-by: tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-03-18 11:02:45 +00:00
d4b1331a0a use the json method instead of the body method in the creation of the response 2021-03-18 11:54:10 +01:00
147756750b create uuid on successful update addition
also change resolve to get in uuid resolver
2021-03-18 09:09:26 +01:00
8b99860e85 use write sender for updates 2021-03-18 08:32:05 +01:00
a2c8dae914 Merge #1292
1292: return a 200 on / when meilisearch is running in production r=MarinPostma a=irevoire

close #1235

Co-authored-by: tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <irevoire@protonmail.ch>
2021-03-18 06:09:21 +00:00
1640d9ea91 Merge #106
106: return 202 on settings update / reset r=MarinPostma a=irevoire

closes #105

Co-authored-by: Irevoire <tamo@meilisearch.com>
2021-03-18 06:06:35 +00:00
6b4ea7f594 ensure the reset_settings also return a 202 2021-03-17 15:09:13 +01:00
c8b05712fa return 202 on settings update / reset 2021-03-17 14:44:32 +01:00
56b4782ee1 Merge #1293
1293: stable to master r=curquiza a=MarinPostma

replace & close #1239


Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
Co-authored-by: Many <legendre.maxime.isn@gmail.com>
Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
2021-03-17 13:25:21 +00:00
b6831320f9 Merge pull request #100 from meilisearch/next-release
Update Cargo.toml for the next release
2021-03-16 20:18:37 +01:00
8a52979ffa Update Cargo.toml 2021-03-16 19:54:34 +01:00
ca3b343b1f Merge #96
96: Check json payload on document addition r=curquiza a=MarinPostma

Checks whether the JSON payload in updates is valid. It uses a JSON validator to avoid allocations, and only serializes the JSON in case of error, to return a pretty message.
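
For illustration, a minimal sketch of validating a payload without building a value tree, here using `serde::de::IgnoredAny` as a stand-in for whatever validator the PR actually uses:

```rust
use serde::de::IgnoredAny;

// Validating a payload without building a DOM: IgnoredAny walks the input
// and fails on malformed JSON, but allocates nothing for the values.
fn check_json(payload: &[u8]) -> Result<(), String> {
    match serde_json::from_slice::<IgnoredAny>(payload) {
        Ok(_) => Ok(()),
        // Only on error do we pay for building a readable message.
        Err(e) => Err(format!("invalid JSON payload: {e}")),
    }
}

fn main() {
    assert!(check_json(br#"[{"id": 1, "title": "Carol"}]"#).is_ok());
    assert!(check_json(b"{not json").is_err());
}
```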

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 17:20:44 +00:00
f8ea081df5 Merge #98
98: replace body with json r=curquiza a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 17:12:30 +00:00
588bc8f9ef Merge #99
99: return a 200 on health check r=MarinPostma a=irevoire

closes #92 

Co-authored-by: tamo <tamo@meilisearch.com>
2021-03-16 16:47:44 +00:00
233c1e304d use json instead of body when crafting the request 2021-03-16 17:45:59 +01:00
a268d0e283 return a 200 on health check 2021-03-16 17:42:01 +01:00
9992c36ced Merge branch 'stable'
fix conflict with master
2021-03-16 16:59:39 +01:00
81255814b1 Update meilisearch-http/src/routes/mod.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-03-16 16:57:29 +01:00
764ced8b5c Merge #88
88: restore name field in index meta r=MarinPostma a=MarinPostma

Makes the IndexMetadata payload iso with legacy meilisearch and closes #67 


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 15:50:08 +00:00
3c25ab0d50 replace body with json 2021-03-16 16:46:07 +01:00
63a3a1fd90 Merge pull request #97 from meilisearch/improve-release-drafter
Update release-draft-template.yml
2021-03-16 16:00:28 +01:00
761c2b0639 Update release-draft-template.yml 2021-03-16 15:16:33 +01:00
c6dbd81823 Merge #90
90: restore version route r=MarinPostma a=MarinPostma

close #74


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 13:53:23 +00:00
13c5289ff1 Update release-drafter.yml 2021-03-16 14:46:08 +01:00
23fae3328b Merge pull request #77 from meilisearch/release-drafter
Add release drafter file
2021-03-16 14:43:27 +01:00
85f3b192d5 Update release-draft-template.yml 2021-03-16 14:33:52 +01:00
204c743bcc add json payload check on document addition 2021-03-16 14:28:13 +01:00
4aaa561147 Add release drafter file 2021-03-16 14:17:08 +01:00
018cadc598 follow the IBM convention 2021-03-16 14:02:14 +01:00
2138f54954 Merge #89
89: delete index returns 204 instead of 200 r=curquiza a=MarinPostma

 close #63

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 13:01:32 +00:00
0a0eee4993 Merge #1238
1238: fix snapshot temp file r=curquiza a=MarinPostma

fix snapshot creating a temp file in /tmp, and create the temp file in the snapshot directory instead.

close #1237


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 13:00:21 +00:00
e0c5740050 Merge #94
94: remove guard on document addition routes r=curquiza a=MarinPostma

 Remove `application/json` guards on document addition routes.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-03-16 12:52:43 +00:00
0c27bea135 return a 400 on / when meilisearch is running in production 2021-03-16 13:38:43 +01:00
1145599c04 Merge #91
91: Add bors configuration r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-03-16 12:09:11 +00:00
9dd1ecdc2a Add bors configuration 2021-03-16 13:08:26 +01:00
f4cf96915a remove guard on add document route 2021-03-16 12:04:32 +01:00
f6d0689967 add a body to be fully compliant with the http spec 2021-03-16 11:40:51 +01:00
a2ac2de011 Use 200 status code for healthcheck endpoint 2021-03-16 11:22:00 +01:00
6a742ee62c restore version route 2021-03-15 19:11:27 +01:00
58fab035bb delete index returns 204 instead of 200 2021-03-15 18:44:33 +01:00
07bb1e2c4e fix tests 2021-03-15 18:38:13 +01:00
94bd14ede3 add name to index_metadata 2021-03-15 18:35:16 +01:00
0c17b166df Merge pull request #58 from meilisearch/actor-index-controller
actor index controller
2021-03-15 18:25:35 +01:00
dd324807f9 last review edits + fmt 2021-03-15 18:11:10 +01:00
c29b86849b use actix cors git dependency 2021-03-15 17:40:20 +01:00
abbea59732 fix clippy warnings 2021-03-15 16:52:05 +01:00
01479dcf99 rename name to uid in code 2021-03-15 14:43:47 +01:00
0c80d891c0 clean Cargo.toml 2021-03-15 14:29:30 +01:00
f727dcc8c6 update milli 2021-03-15 14:26:59 +01:00
55fadd7f87 change facetedAttributes to attributesForFaceting 2021-03-15 13:53:50 +01:00
fcf1d4e922 fix displayed attributes in search 2021-03-15 12:20:33 +01:00
c079f60346 fixup! fix displayed attributes in document retrieval 2021-03-15 11:01:14 +01:00
77c0a0fba5 add test get document displayed attributes 2021-03-15 10:36:12 +01:00
adc71a70ce fix displayed attributes in document retrieval 2021-03-15 10:17:41 +01:00
99c89cf2ba use options max db sizes 2021-03-13 10:09:10 +01:00
49b74b587a enable jemalloc only on linux 2021-03-12 17:47:40 +01:00
c61fab1435 Merge branch 'main' into actor-index-controller 2021-03-12 15:14:20 +01:00
2ee2e6a9b2 clean project 2021-03-12 14:57:24 +01:00
c4846dafca implement update index 2021-03-12 14:48:43 +01:00
77d5dd452f remove open_or_create 2021-03-12 14:16:54 +01:00
e4d45b0500 fix various bugs 2021-03-12 00:37:43 +01:00
7d9637861f fix add primary key on index creation 2021-03-11 22:55:29 +01:00
271c8ba991 change index name to uid 2021-03-11 22:47:29 +01:00
8617bcf8bd add ranking rules 2021-03-11 22:39:16 +01:00
66b64c1f80 correct error on settings delete unexisting index 2021-03-11 22:33:31 +01:00
30dd790884 handle badly formatted index uid 2021-03-11 22:23:48 +01:00
40b3451a4e fix unexisting update store + race conditions 2021-03-11 22:11:58 +01:00
3f68460d6c fix update dedup 2021-03-11 20:58:51 +01:00
79a4bc8129 use meta from milli 2021-03-11 19:40:18 +01:00
1fad72e019 fix test bug with tempdir 2021-03-11 17:59:47 +01:00
2ae90f9c5d lazy load update store 2021-03-11 14:23:11 +01:00
53cf500e36 uuid resolver hard state 2021-03-10 18:04:20 +01:00
a56e8c1a0c fix tests 2021-03-10 14:47:04 +01:00
0cd8869349 update relevant changes from master 2021-03-10 14:43:10 +01:00
5ca3382f5c Merge #1286
1286: Timestamp changelog r=curquiza a=sandstrom

A timestamped changelog makes it easier to track progress, understand velocity, see if something has recently changed, etc.

https://keepachangelog.com/en/1.0.0/

Co-authored-by: sandstrom <mail+github@a16m.se>
2021-03-10 12:57:31 +00:00
dcc6f20f31 Timestamp changelog 2021-03-10 13:47:48 +01:00
5ecf514d28 restructure project 2021-03-10 13:46:49 +01:00
8061a04661 add test assets 2021-03-10 13:38:30 +01:00
562da9dd3f fix test compilation 2021-03-10 11:56:51 +01:00
f475385788 Merge #1113
1113: [ci] Add all target to  check r=MarinPostma a=woshilapin

Follow-up on https://github.com/meilisearch/MeiliSearch/pull/1100#issuecomment-735828974. If you disagree to add this, I'm totally fine to close this PR without merging (related to #1099).

Co-authored-by: Jean SIMARD <woshilapin@tuziwo.info>
2021-03-09 14:27:21 +00:00
9661ee5d64 Merge pull request #76 from meilisearch/no-jemalloc-macos
Make sure that we do not use jemalloc on macos
2021-03-09 09:57:39 +01:00
4a0f5f1b03 Make sure that we do not use jemalloc on macos 2021-03-08 21:22:30 +01:00
ce652fc8df Merge #1252
1252: change the wording of Amplify to make it clearer r=curquiza a=fharper



Co-authored-by: Frédéric Harper <hi@fred.dev>
2021-03-08 19:42:13 +00:00
07e7acc35d Merge #1280
1280: Make sure that we do not use jemalloc on macos r=MarinPostma a=Kerollmops

We were wrongly compiling jemalloc on macOS even though we did use it only on Linux.

Fixes #1136.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-03-08 19:10:21 +00:00
51e0d6d5ee remove word on 2021-03-08 11:41:09 -05:00
4e1597bd1d clean Uuid resolver actor 2021-03-08 16:28:27 +01:00
06403a5708 clean index actor unwraps 2021-03-08 15:53:16 +01:00
9d421d5ed4 Merge pull request #72 from meilisearch/enable-criterion
enable criterion setting
2021-03-08 14:08:16 +01:00
e9b90d5380 fixes from review 2021-03-08 13:51:33 +01:00
944a5bb36e update milli 2021-03-08 13:46:30 +01:00
2f93cce7aa auto index creation 2021-03-08 10:48:34 +01:00
ac4d795eff update created at when updating index 2021-03-08 10:21:12 +01:00
ced32afd9f implement get single index 2021-03-06 20:17:58 +01:00
281a445998 implement list indexes 2021-03-06 20:12:20 +01:00
d9254c4355 implement index delete 2021-03-06 12:57:56 +01:00
86211b1ddd import routes modules in main 2021-03-06 10:53:11 +01:00
7d28f8cff0 implement get single update 2021-03-06 10:51:52 +01:00
f4f42ec441 add tests 2021-03-05 20:06:10 +01:00
3992d917ec Merge pull request #55 from meilisearch/fix-settings-delete
fix settings delete
2021-03-05 19:57:43 +01:00
964e52ef08 Merge pull request #56 from meilisearch/fix-bad-index-uid
Fix bad index uid
2021-03-05 19:57:31 +01:00
65ca80bdde enable criterion setting 2021-03-05 19:31:49 +01:00
b8ebf07555 Merge pull request #57 from meilisearch/remove-duplicated-pending-update
remove duplicated pending update
2021-03-05 19:17:57 +01:00
f04dd2af39 enable tests delete settings 2021-03-05 19:14:45 +01:00
d52e6fc21e fix settings delete bug 2021-03-05 19:14:45 +01:00
561f29042c add tests 2021-03-05 19:12:35 +01:00
3987d17e40 add index uid format guard on create ops 2021-03-05 19:10:24 +01:00
c0515bcfe2 Update src/index_controller/local_index_controller/mod.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-03-05 19:08:32 +01:00
7d2ae9089e restore test 2021-03-05 19:08:32 +01:00
4552c42f88 deduplicate pending and processing updates 2021-03-05 19:08:32 +01:00
a9c7b73744 implement list all updates 2021-03-05 18:34:04 +01:00
c2282ab5cb non local update actor 2021-03-04 19:30:13 +01:00
f090f42e7a multi index store
create two channels for Index handler, one for writes and one for reads,
so writes are processed one at a time, while reads are processed in
parallel.
2021-03-04 19:18:01 +01:00
6a0a9fec6b async update store 2021-03-04 17:25:02 +01:00
a955e04ab6 implement clear documents 2021-03-04 16:04:12 +01:00
ae5581d37c implement delete documents 2021-03-04 15:59:18 +01:00
181eaf95f5 restore update documents 2021-03-04 15:10:58 +01:00
581dcd5735 implement retrieve one document 2021-03-04 15:09:00 +01:00
f3d65ec5e9 implement retrieve documents 2021-03-04 14:20:19 +01:00
17b84691f2 list settings 2021-03-04 12:38:55 +01:00
47138c7632 update settings 2021-03-04 12:20:14 +01:00
8432c8584a refactor index controller 2021-03-04 12:03:06 +01:00
a56db854a2 refactor update handler 2021-03-04 11:58:15 +01:00
9e2a95b1a3 refactor search 2021-03-04 11:23:41 +01:00
ae3c8af56c enable faceted search 2021-03-04 10:42:44 +01:00
70dce6cc0b Make sure that we do not use jemalloc on macos 2021-03-04 09:17:46 +01:00
77083d9e80 Merge #1279
1279: fix Docker volume path r=MarinPostma a=fharper

essential if `$(pwd)` returns a path with spaces

Co-authored-by: Frédéric Harper <hi@fred.dev>
2021-03-03 21:15:16 +00:00
4a66803d76 fix Docker volume path
essential if pwd returns a path with spaces
2021-03-03 13:18:07 -05:00
eff8570f59 handle ctrl-c shutdown 2021-03-03 15:10:00 +01:00
3cd799a744 fix update files created in the wrong place 2021-03-03 14:39:44 +01:00
e285404c3e handle errors when sending payload to actor 2021-03-03 12:16:16 +01:00
70d935a2da refactor index search for better error handling 2021-03-03 11:53:01 +01:00
7c7143d435 remove IndexController interface 2021-03-03 11:43:51 +01:00
9aca6fab88 completely file backed updates 2021-03-03 11:01:15 +01:00
d1f34f926e [ci] Add all target to check 2021-03-02 20:48:57 +01:00
62532b8f79 WIP concurent index store 2021-03-02 14:05:03 +01:00
402203aa2a Merge pull request #62 from meilisearch/fix-ci-2
Fix CI artefacts
2021-03-02 13:25:16 +01:00
cf97b9ff2b Update create_artifacts.yml 2021-03-02 12:06:38 +01:00
e7b541a2af Merge pull request #61 from meilisearch/fix-ci
Add checkout to docker CI
2021-03-02 11:43:45 +01:00
4cf66831d4 Update publish_to_docker.yml 2021-03-02 11:38:39 +01:00
f41284a133 Merge pull request #60 from meilisearch/prepare-for-ci
Prepare for ci
2021-03-02 10:53:15 +01:00
a77d517ac1 Merge #1206
1206: fix running URL display r=curquiza a=fharper

by doing that you can just click on it in the terminal if you want

Co-authored-by: Frédéric Harper <hi@fred.dev>
2021-03-02 09:51:32 +00:00
fc351b54d9 change milli revision 2021-03-01 20:09:23 +01:00
c2fdb0ad4d Update .github/workflows/create_artifacts.yml
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-03-01 19:59:54 +01:00
1968bfac4d remove legacy tests 2021-03-01 15:48:42 +01:00
c4dfd5f0c3 implement search and fix document addition 2021-03-01 15:45:05 +01:00
ac2af4354d remove actor index controller 2021-03-01 15:35:32 +01:00
9227b7cb2f remove data.ms 2021-03-01 15:30:48 +01:00
e1e5935e3c CI recipes 2021-03-01 14:44:55 +01:00
4316d991a2 add docker recipe 2021-03-01 14:41:57 +01:00
d1be3d60df run tests on all pushed 2021-03-01 14:41:57 +01:00
a9a9ed6318 create workspace with meilisearch-error 2021-03-01 14:41:55 +01:00
79708aeb67 add milli as git dep 2021-03-01 14:41:20 +01:00
0c2777dfd5 Merge pull request #59 from meilisearch/license
license
2021-02-28 10:11:33 +01:00
5ba58c1e9c add Marin to authors 2021-02-28 10:09:56 +01:00
c994fe4609 add license 2021-02-28 10:08:36 +01:00
658166c05e implement document push 2021-02-26 18:11:43 +01:00
6bcc302950 receive update 2021-02-26 17:14:11 +01:00
d8a337fcac Merge #1265
1265: Inferring whether to show or Hide API Key box r=curquiza a=sanders41

Relates to #1261

This is one potential solution for inferring whether an instance has an API key and showing or hiding the text input box accordingly. When the page first loads, a request is sent to the server with no API key. If that request succeeds, no API key is needed, so the box is hidden. If the request returns a 401 status, an API key is needed, so the box is shown.


Co-authored-by: Paul Sanders <psanders1@gmail.com>
2021-02-26 10:27:37 +00:00
672a4b5400 add actors/ support index creation 2021-02-26 09:10:36 +01:00
61ce749122 update tokio and disable all routes 2021-02-26 09:10:04 +01:00
ee02d55e67 Merge #1266
1266: Simplify compile and run from sources r=curquiza a=tpayet

Related to #1136, I just saw that compile & run instructions from sources were not up to date

Co-authored-by: Thomas Payet <thomas@meilisearch.com>
2021-02-25 15:47:11 +00:00
417d0ae92a Simplify compile and run from sources 2021-02-25 11:52:08 +01:00
22108f9f90 Specifying a 401 status code to show API Key 2021-02-25 01:07:18 -05:00
101e050746 Show or hide the API key text input box when needed 2021-02-25 00:56:08 -05:00
45d8f36f5e Merge pull request #49 from meilisearch/tests
tests
2021-02-24 10:41:55 +01:00
caaaf15fd6 Create rust.yml 2021-02-24 10:31:28 +01:00
60a42bc511 reset settings 2021-02-24 10:19:22 +01:00
3f939f3ccf test delete settings 2021-02-24 10:14:36 +01:00
7d9c5f64aa test partial update 2021-02-24 09:42:36 +01:00
c7ab4dccc3 test get settings 2021-02-24 09:30:51 +01:00
ac89c35edc add settings routes errors 2021-02-23 19:46:18 +01:00
af2cbd0258 test get updates 2021-02-23 19:15:42 +01:00
0a3e946726 test delete batches 2021-02-23 14:13:43 +01:00
d3758b6f76 test delete documents 2021-02-22 16:03:17 +01:00
c95bf0cdf0 test badly formatted primary key 2021-02-22 15:13:10 +01:00
4bca26298e test add document bad primary key 2021-02-22 14:55:40 +01:00
ded6483173 tests get one document 2021-02-22 14:32:48 +01:00
097cae90a7 tests get documents limit, offset, attr to retrieve 2021-02-22 14:23:17 +01:00
739c860cfd Merge #1260
1260: README.md: typos r=Kerollmops a=skerkour

Hey, I think I've noticed small typos. Feel free to close if I'm wrong :)

Co-authored-by: Sylvain Kerkour <6172808+skerkour@users.noreply.github.com>
2021-02-22 08:59:58 +00:00
f01bb9cee3 README.md: typos 2021-02-20 17:49:59 +00:00
b8b8cc1312 get all documents, no options 2021-02-19 19:55:44 +01:00
27a7238d3f test list no documents 2021-02-19 19:46:45 +01:00
ec9dcd3285 test get add documents 2021-02-19 19:43:32 +01:00
ba2cfcc72d test delete index 2021-02-19 19:26:56 +01:00
5270cc0eae test update index 2021-02-19 19:26:42 +01:00
2bb695d60f test list all indexes 2021-02-19 19:23:58 +01:00
556ba956b8 test get empty index list 2021-02-19 19:14:25 +01:00
b1226be2c8 test document addition 2021-02-19 13:16:41 +01:00
b293948d36 test index delete 2021-02-18 20:44:33 +01:00
ed3f8f5cc0 test create multiple indexes 2021-02-18 20:32:34 +01:00
4c5effe714 test index update 2021-02-18 20:28:10 +01:00
68692a256e test get index 2021-02-18 20:24:40 +01:00
72eed0e369 test create index 2021-02-18 19:50:52 +01:00
588add8bec rename update fields to camel case 2021-02-18 19:11:19 +01:00
a7bd0681a0 Merge pull request #45 from meilisearch/facet-distributions
facets distribution
2021-02-17 15:03:38 +01:00
999758f7a1 facets distribution 2021-02-17 14:59:32 +01:00
2d7b2e651d Merge pull request #43 from meilisearch/facet-filters
enable faceted searches
2021-02-17 14:11:10 +01:00
b723f23f14 Merge pull request #44 from meilisearch/fix-fill-buffer-error
fix error message when empty payload
2021-02-17 14:02:39 +01:00
ae9a41a19f fix error message when empty payload 2021-02-17 14:00:42 +01:00
86f32e4ee4 Merge #1253
1253: fix line break r=Kerollmops a=fharper



Co-authored-by: Frédéric Harper <hi@fred.dev>
2021-02-17 10:57:16 +00:00
1873c0399a fix line break 2021-02-16 16:21:50 -05:00
47eeed0a4c change the wording of Amplify to make it clearer 2021-02-16 16:09:26 -05:00
91d6e90d5d enable faceted searches 2021-02-16 19:20:39 +01:00
4d08f04db2 Update movie posters (#1219)
* Update movie posters

* Remove last comma
2021-02-16 11:06:53 -05:00
93ce32d94d Merge pull request #39 from meilisearch/fix-attributes-to-retrieve
fix attributes to retrieve
2021-02-16 16:52:47 +01:00
4fe90a1a1c fix attributes to retrieve in search 2021-02-16 16:51:00 +01:00
22c204fea6 Merge pull request #40 from meilisearch/search-get
search get
2021-02-16 16:49:56 +01:00
e1253b6969 enable search with get route 2021-02-16 16:48:05 +01:00
f175d20599 Merge pull request #41 from meilisearch/list-keys
list keys
2021-02-16 16:39:24 +01:00
4d9819f6ef Merge pull request #42 from meilisearch/basic-error-handling
basic error handling
2021-02-16 16:38:25 +01:00
bead4075d8 implement list api keys 2021-02-16 16:38:20 +01:00
1823fa18c9 add basic error handling 2021-02-16 16:36:57 +01:00
4738fa94d0 Merge pull request #38 from meilisearch/index-deletion
implement index deletion
2021-02-16 16:36:20 +01:00
aad5b789a7 review edits 2021-02-15 23:40:53 +01:00
5c0b541248 delete db files on deletion 2021-02-15 23:32:38 +01:00
a9e9e72840 implement index deletion 2021-02-15 23:24:28 +01:00
a580a6a44d Merge pull request #37 from meilisearch/update-documents
Update documents
2021-02-15 23:22:02 +01:00
1eaf28f823 add primary key and update documents 2021-02-15 23:21:01 +01:00
3a634cb583 Merge pull request #35 from meilisearch/retrieve-documents
implemement retrieve documents
2021-02-15 23:11:34 +01:00
8bb1b6146f make retrieval non blocking 2021-02-15 23:02:20 +01:00
6c7175dfc2 Merge pull request #36 from meilisearch/delete-documents
delete documents
2021-02-15 22:39:00 +01:00
28b9c158b1 implement delete single document 2021-02-15 22:37:56 +01:00
4ea0e0fc05 Merge #1220
1220: Update Contact section of README.md r=Kerollmops a=react-learner

- Remove reference to Crisp chatbox (currently deactivated on docs site and homepage)
- Remove bonjour @ meilisearch.com email address, in order to concentrate communications in visible locations such as Slack and forums. @fharper

Co-authored-by: Tommy <68053732+react-learner@users.noreply.github.com>
2021-02-15 20:52:18 +00:00
b28be43cc6 Remove bonjour email from readme.md
Remove email address from README to concentrate communications in visible locations.
2021-02-15 09:19:23 -05:00
4a71861066 Revert link 2021-02-15 09:19:23 -05:00
5f25703d44 Update README.md
Fix docs links, remove reference to Crisp chatbox
2021-02-15 09:19:23 -05:00
c317af58bc implement delete document batches 2021-02-12 17:39:14 +01:00
a8ba809656 implement clear all documents 2021-02-11 12:03:00 +01:00
6766de437f implement get document 2021-02-11 11:20:39 +01:00
fa7379e129 Merge pull request #30 from meilisearch/update-index
implement update index
2021-02-11 11:03:25 +01:00
9fb0d94fc3 add tests 2021-02-11 11:02:27 +01:00
8fd9dc231c implement retrieve all documents 2021-02-10 17:08:37 +01:00
4ca46b9e5f fix bug in error message 2021-02-09 14:32:28 +01:00
90b930ed7f implement update index
implement update index
2021-02-09 14:32:26 +01:00
f44f8a823a Merge pull request #27 from meilisearch/create-index
Implement create index
2021-02-09 14:26:59 +01:00
e89b11b1fa create IndexSetting struct
need to stabilize the create index trait interface
2021-02-09 11:41:26 +01:00
e0976d10ba Merge branch 'release-v0.19.0' into stable 2021-02-09 11:11:33 +01:00
ea681026f7 fix snapshot temp file 2021-02-09 11:08:30 +01:00
759f6b48ee Merge #1233
1233: Fix link in launched resume r=Kerollmops a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-02-08 19:04:09 +00:00
ec047eefd2 implement create index 2021-02-08 12:28:45 +01:00
811426b161 Update main.rs 2021-02-06 15:53:40 +01:00
b1d9ad7134 Merge #1224
1224: fix synonyms normalization r=MarinPostma a=LegendreM

Synonyms need to be indexed in ascending order,
and the new normalization step for synonyms potentially changes this order,
which breaks the indexation process,
because "Harry Potter" > "HP" but "harry potter" < "hp".

Co-authored-by: many <maxime@meilisearch.com>
2021-02-04 15:37:33 +00:00
e000e10e01 Merge #1229
1229: Fix links in CONTRIBUTING.md r=Kerollmops a=curquiza

Closes #1228 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-02-04 15:00:26 +00:00
8dea9662dc Fix links in CONTRIBUTING.md 2021-02-04 15:56:06 +01:00
ed44e684cc review fixes 2021-02-04 15:28:52 +01:00
f18e795124 fix rebase 2021-02-04 15:09:43 +01:00
f1c09a54be implement get index meta 2021-02-04 14:56:37 +01:00
8d462afb79 add tests for list index and create index. 2021-02-04 14:56:36 +01:00
f988306691 implement create index 2021-02-04 14:56:34 +01:00
d43dc4824c implement list indexes 2021-02-04 14:54:48 +01:00
482f734e53 Merge pull request #24 from meilisearch/index-controller
Index controller
2021-02-04 14:51:21 +01:00
f8f02af23e incorporate review changes 2021-02-04 13:21:15 +01:00
cb50781d2d Merge #1222
1222: Ignore existing primary key r=Kerollmops a=MarinPostma

Fixing the bug in #1176 made it a hard error to try to re-set the primary key on a document addition. This PR makes MeiliSearch ignore a primary key passed as an argument to a document addition. This has been decided after a discussion with @curquiza, in order to make the bug fix non-breaking.

Turns out it was a good catch too, since contrary to what I thought the error was not caught asynchronously, thank you @curquiza 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-02-04 08:08:09 +00:00
1df0fdf3e2 fix synonyms normalization
Synonyms need to be indexed in ascending order,
and the new normalization step for synonyms potentially changes this order,
which breaks the indexation process,
because "Harry Potter" > "HP" but "harry potter" < "hp".
2021-02-03 15:21:06 +01:00
a95a18afe4 ignore primary key if it is already set 2021-02-03 11:59:29 +01:00
9af0a08122 post review fixes 2021-02-02 17:34:06 +01:00
69c91d2b56 Merge #1218
1218: bump meilisearch version 0.19.0 r=LegendreM a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-02-02 13:45:28 +00:00
97ba5e97c6 update changelog 2021-02-02 14:32:04 +01:00
8760beed1c bump meilisearch 2021-02-02 14:23:33 +01:00
15464e57af Merge #1172
1172: Fix atomic snapshot creation r=MarinPostma a=raszi

Compress gzip files to a temporary file first and then do an atomic rename.

In our setup we have an indexer which does the snapshotting for the instances serving the requests. Since the current snapshotting mechanism replaces the file in place, the indexer could not share the snapshot with a live instance.

With this small patch we first create a new temporary file in the same directory as the snapshot dir and then do an atomic rename, so the snapshot path always contains a valid snapshot.
After applying this change it is enough to simply restart the serving instances to pick up the new snapshot from shared storage, without worrying that they will die because of an incomplete snapshot.
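
A minimal std-only sketch of the atomic-rename pattern described above (the helper name and the `.tmp` extension are illustrative):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write the compressed snapshot next to its final destination, then rename.
// Because the temp file lives in the same directory (hence the same
// filesystem), the rename is atomic and readers never see a partial file.
fn write_snapshot_atomically(snapshot_path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp_path = snapshot_path.with_extension("tmp");
    let mut tmp = fs::File::create(&tmp_path)?;
    tmp.write_all(data)?;
    tmp.sync_all()?; // make sure the bytes hit the disk first
    fs::rename(&tmp_path, snapshot_path) // atomic replace on the same filesystem
}

fn main() -> std::io::Result<()> {
    write_snapshot_atomically(Path::new("data.ms.snapshot"), b"snapshot bytes")
}
```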

Co-authored-by: KARASZI István <ikaraszi@gmail.com>
2021-02-02 12:37:33 +00:00
c984fa1071 Merge #1176
1176: fix race condition in  document addition r=Kerollmops a=MarinPostma

As described in #1160, there was a race condition when updating settings and adding documents simultaneously. This was due to the schema being updated and the document addition being processed in two different transactions. This PR moves the schema update logic for the primary key into the same transaction as the document addition, while maintaining the input checks for the validity of the primary key in the HTTP route, in order not to break the error reporting for the document addition route.

close #1160.

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2021-02-02 09:26:32 +00:00
97f35de41f fix flaky test 2021-02-01 18:59:22 +01:00
81e9fd8933 Merge #1184
1184: normalize synonyms during indexation r=MarinPostma a=LegendreM

fix #1135 #964

Normalizes the synonyms before indexing them, so they are not case-sensitive anymore. The normalization also involves deunicoding in some cases, such as accents, so `été` and `ete` are considered equivalent in a search for synonyms.

Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: Many <legendre.maxime.isn@gmail.com>
2021-02-01 14:12:57 +00:00
17c463ca61 remove unused deps 2021-02-01 13:32:21 +01:00
f0ca193122 Merge branch 'master' into atomic-rename 2021-02-01 13:30:51 +01:00
940f83698c Update meilisearch-core/src/update/settings_update.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-02-01 12:06:48 +01:00
ccb7104dee add tests for IndexStore 2021-01-29 19:14:23 +01:00
da056a6877 comment tests out 2021-01-28 20:55:29 +01:00
e9c95f6623 remove useless files 2021-01-28 19:43:54 +01:00
f37a420a04 Merge #1174
1174: Limit query words number r=MarinPostma a=MarinPostma

This PR adds a limit to the number of words taken into account in a search query. Using query strings that are too long leads to huge performance hits and resource consumption, and occasionally crashes the machine. The limit has been hard-set to 10, and tests have been added to make sure that it is taken into account.
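
A minimal sketch of such a word limit (the constant and helper are illustrative, not the actual query parser):

```rust
// Keeping only the first N words of a query bounds the work done per search.
const MAX_QUERY_WORDS: usize = 10; // the hard limit mentioned above

fn truncate_query(query: &str) -> String {
    query
        .split_whitespace()
        .take(MAX_QUERY_WORDS)
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let long = "one two three four five six seven eight nine ten eleven twelve";
    assert_eq!(truncate_query(long).split_whitespace().count(), 10);
}
```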

close #941

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-28 17:38:34 +00:00
6c63ee6798 implement list all indexes 2021-01-28 18:32:24 +01:00
60371b9dcf get update id 2021-01-28 17:20:51 +01:00
4119ae8655 settings update 2021-01-28 16:57:53 +01:00
8183202868 document addition and search 2021-01-28 15:14:48 +01:00
74410d8c6b architecture rework 2021-01-28 14:12:34 +01:00
c1808513fe Merge #1211
1211: update tokenizer to v0.1.3 r=MarinPostma a=LegendreM

fix #1188

Co-authored-by: many <maxime@meilisearch.com>
2021-01-28 09:50:38 +00:00
eeccdce33a update tokenizer to v0.1.3 2021-01-28 10:33:44 +01:00
a6667b14df Merge #1193
1193: Update LICENSE year r=MarinPostma a=curquiza



Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2021-01-28 09:17:55 +00:00
62e908264e Merge #1207
1207: fix homebrew name r=MarinPostma a=fharper

brew is the command, the package manager name is homebrew

Co-authored-by: Frédéric Harper <hi@fred.dev>
2021-01-28 08:45:07 +00:00
2fe52d0a4f fix homebrew name
brew is the command, the package manager name is homebrew
2021-01-26 15:14:53 -05:00
d01c93aeee fix running URL display
by doing that you can just click on it in the terminal if you want
2021-01-26 15:11:46 -05:00
c75ffbf3d5 Merge branch 'master' into atomic-rename 2021-01-19 13:04:31 +01:00
e3e475c5b1 Update LICENSE 2021-01-19 00:18:52 +01:00
6a3f625e11 WIP: refactor IndexController
change the architecture of the index controller to allow it to own an
index store.
2021-01-16 15:09:48 +01:00
1d910dbb42 Update meilisearch-core/src/update/documents_addition.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2021-01-15 00:55:31 +01:00
bf3f36b46e Merge pull request #1191 from meilisearch/release-v0.18.1
Release v0.18.1
2021-01-14 14:11:19 +01:00
686f987180 fix compile errors 2021-01-14 11:27:07 +01:00
334933b874 fix search 2021-01-13 18:29:17 +01:00
d22fab5bae implement open index 2021-01-13 18:20:14 +01:00
ddd7789713 WIP: IndexController 2021-01-13 17:50:36 +01:00
ff38220b68 Merge #1190
1190: Bump meilisearch 0 18 1 r=LegendreM a=LegendreM

- bump version to `0.18.1`
- update `CHANGELOG.md`

Co-authored-by: many <maxime@meilisearch.com>
2021-01-13 15:35:28 +00:00
7a7cb9bcbf update dependencies 2021-01-13 15:48:53 +01:00
fe9c99a11b update changelog 2021-01-13 15:38:54 +01:00
9b47bbc1ac bump meilisearch 2021-01-13 15:37:15 +01:00
430a5f902b fix race condition in document addition 2021-01-13 13:17:52 +01:00
bc0d53e819 Update meilisearch-core/src/update/settings_update.rs
Co-authored-by: marin <postma.marin@protonmail.com>
2021-01-13 13:17:19 +01:00
0bb8b3a68d Merge #1185
1185: fix cors issue r=MarinPostma a=MarinPostma

This PR fixes a bug where foreign origins were not accepted.
This was due to an update to actix-cors.

It also fixes the CORS bug when authentication fails, with the caveat that requests that are denied for permission reasons are not logged.

it introduces a bug described in  #1186

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-13 10:56:25 +00:00
e5c220b82c fix authentication cors bug 2021-01-12 18:08:16 +01:00
60c636738b fix cors error 2021-01-12 16:46:53 +01:00
06b2a587af normalize synonyms during indexation 2021-01-12 13:53:32 +01:00
26b1e5a51b Merge pull request #1171 from meilisearch/fix-changelog-typo
fix changelog typo
2021-01-11 14:13:30 +01:00
81f343a46a add word limit to search queries 2021-01-08 16:23:23 +01:00
956adfc90a Replace in-place compression
Compress gzip files to a temporary file first and then do an atomic
rename.
2021-01-07 17:36:42 +01:00
c7c8ca63b6 fix changelog typo 2021-01-07 12:38:24 +01:00
fa40c6e3d4 Merge #1168
1168: Bump meilisearch r=LegendreM a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-06 11:02:16 +00:00
7ccbbb7a75 update changelog 2021-01-06 11:54:06 +01:00
948c89c26f bump meilisearch 2021-01-06 11:41:44 +01:00
768791440a Merge #1167
1167: Update dumps ci r=LegendreM a=MarinPostma

Now that the dump tests are re-entrant, they can be run from a multithreaded context, whereas they used to be run from a single-threaded context, in a separate CI task.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-06 09:42:59 +00:00
08a8dc0d0d Merge #1091
1091: New tokenizer r=LegendreM a=MarinPostma

Integration of the new tokenizer into MeiliSearch.

- Tokenizes and normalizes the query string for better search results
- Language-sensitive tokenization and normalization during indexation
- Better support for Chinese thanks to jieba (when Chinese characters are detected)

To do in a later PR:
- Use a common tokenization instance
- Use tokenization for synonyms

close #624

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: many <maxime@meilisearch.com>
2021-01-06 08:47:53 +00:00
0675ecdd73 remove specific task for dump in ci 2021-01-05 21:55:14 +01:00
08c160c178 un-ignore dump tests 2021-01-05 21:54:14 +01:00
677627586c fix test set
fix dump tests
2021-01-05 21:37:05 +01:00
0731971300 fix style 2021-01-05 15:21:06 +01:00
c290719984 remove byte offset in index_seq 2021-01-05 15:21:06 +01:00
2a145e288c fix style 2021-01-05 15:21:06 +01:00
aeb676e757 skip indexation while token is not a word 2021-01-05 15:21:06 +01:00
2852349e68 update tokenizer version 2021-01-05 15:21:06 +01:00
0447594e02 add search test on chinese scripts 2021-01-05 15:21:05 +01:00
748a8240dd fix highlight shifting bug 2021-01-05 15:21:05 +01:00
808be4678a fix style 2021-01-05 15:21:05 +01:00
398577f116 bump tokenizer 2021-01-05 15:21:05 +01:00
8e64a24d19 fix suggestions 2021-01-05 15:21:05 +01:00
8b149c9aa3 update tokenizer dep to release 2021-01-05 15:21:05 +01:00
a7c88c7951 restore synonyms tests 2021-01-05 15:21:05 +01:00
db64e19b8d all tests pass 2021-01-05 15:21:05 +01:00
b574960755 fix split_query_string 2021-01-05 15:21:05 +01:00
c6434f609c fix indexing length 2021-01-05 15:21:05 +01:00
206308c1aa replace hashset with fst::Set 2021-01-05 15:21:05 +01:00
6527d3e492 better separator handling 2021-01-05 15:21:05 +01:00
e616b1e356 hard separator offset 2021-01-05 15:21:05 +01:00
8843062604 fix indexer tests 2021-01-05 15:21:05 +01:00
5e00842087 integration with new tokenizer wip 2021-01-05 15:21:05 +01:00
8a4d05b7bb remove meilisearch tokenizer 2021-01-05 15:21:05 +01:00
061832af7f Merge #1163
1163: remove benches r=LegendreM a=MarinPostma

remove unused benches, that did not compile either


Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-05 13:27:42 +00:00
9dd818ed7b Merge #1165
1165: Bumps r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2021-01-05 12:55:50 +00:00
0e04c90abe remove benches 2021-01-05 10:54:19 +01:00
b07e21ab3c temp 2021-01-05 00:21:42 +01:00
83ea088bf7 fix incompatible deps 2021-01-04 18:33:22 +01:00
48eb78b14d bump deps 2021-01-04 16:56:28 +01:00
e3d1314bd8 Merge #1147
1147: Increasing payload default size r=LegendreM a=sanders41

References issue #1137

Increasing the default payload size from 10 MB to 100 MB.
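
For illustration, a hedged actix-web sketch of raising a body size limit; the `PayloadConfig` wiring and the port are assumptions and may not match how MeiliSearch actually exposes this setting:

```rust
use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Raise the maximum accepted request body from the previous 10 MB
    // default to 100 MB (numbers shown for illustration).
    let payload_limit = 100 * 1024 * 1024;
    HttpServer::new(move || {
        App::new().app_data(web::PayloadConfig::new(payload_limit))
    })
    .bind(("127.0.0.1", 7700))?
    .run()
    .await
}
```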

Co-authored-by: Paul Sanders <psanders1@gmail.com>
2021-01-04 12:47:06 +00:00
b4d447b5cb temp 2021-01-01 16:59:49 +01:00
a05aef5c14 Merge #1151
1151: Fixing a comment typo r=MarinPostma a=sanders41

Fixed a typo in a code comment.

Co-authored-by: Paul Sanders <psanders1@gmail.com>
2020-12-31 15:18:40 +00:00
3de5161dd8 Fixing a comment typo 2020-12-31 07:32:27 -05:00
d1e9ded76f setting builder takes ownership 2020-12-31 00:50:30 +01:00
12ee7b9b13 impl get all updates 2020-12-30 19:17:13 +01:00
d9dc2036a7 support error & return document count on addition 2020-12-30 18:44:33 +01:00
54861335a0 retrieve update status 2020-12-30 18:16:07 +01:00
8e0d8f4533 Increasing payload default size 2020-12-29 16:55:35 -05:00
0cd9e62fc6 search first iteration 2020-12-24 12:58:34 +01:00
02ef1d41d7 route document add json 2020-12-23 16:12:37 +01:00
1a38bfd31f data add documents 2020-12-23 13:52:28 +01:00
0d7c4beecd reimplement Data 2020-12-22 17:53:13 +01:00
55e1552957 update queue refactor, first iteration 2020-12-22 17:13:50 +01:00
7c9eaaeadb clean code, and fix errors 2020-12-22 14:02:41 +01:00
d12ef576fc Merge #1142
1142: Update interface.html r=Kerollmops a=curquiza

😇

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2020-12-21 10:58:35 +00:00
a05eea3a11 Update interface.html 2020-12-21 10:15:19 +01:00
446b2e7058 Merge #1128
1128: Settings consistency r=MarinPostma a=MarinPostma

- close #1124, fix #761 
- fix some clippy warnings
- makes dump process reentrant

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2020-12-16 14:12:09 +00:00
e06f3808c0 requested changes
Co-authored-by: Clément Renault <clement@meilisearch.com>

Update meilisearch-http/src/routes/setting.rs

Co-authored-by: Clément Renault <clement@meilisearch.com>

Update meilisearch-schema/src/schema.rs

Update meilisearch-schema/src/schema.rs
2020-12-16 15:08:36 +01:00
6d79107b14 make dumps reentrant 2020-12-15 13:05:01 +01:00
5fe0e06342 fix clippy warnings 2020-12-15 12:42:19 +01:00
6eb7843858 fix tests 2020-12-15 12:05:17 +01:00
2904ca7f57 update codebase with schema refactor 2020-12-15 12:04:51 +01:00
54686b0505 refactor schema 2020-12-15 12:04:33 +01:00
861c6fec06 Merge #1126
1126: Bumps r=MarinPostma a=MarinPostma

bump various meilisearch dependencies

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-12-14 19:03:59 +00:00
eec954ede1 Merge #1134
1134: Add Roadmap to README r=MarinPostma a=curquiza



Co-authored-by: Clementine Urquizar <clementine@meilisearch.com>
2020-12-14 14:59:38 +00:00
aa99c1ba55 Add Roadmap in README 2020-12-14 15:38:47 +01:00
29b1f55bb0 prepare boilerplate code for new api 2020-12-12 16:04:37 +01:00
8c0ab106c7 initial commit 2020-12-12 13:32:06 +01:00
dec0e2545d Merge #1131
1131: fix attributes to retrieve bug r=Kerollmops a=MarinPostma

fix bug when using empty `attributeToRetrieve`

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-12-10 22:36:42 +00:00
90cf4b9462 test attributesToRetrieve 2020-12-10 16:15:12 +01:00
2bd5d2474e fix attributes to retrieve bug 2020-12-10 15:58:24 +01:00
a6e08a83a7 bump whoami 2020-12-09 13:44:35 +01:00
ed11dd62da bump serde_qs 2020-12-09 13:41:43 +01:00
c977b70921 bump actix-web 2020-12-09 12:49:21 +01:00
31c9ccd8be bump bytes 2020-12-09 12:44:45 +01:00
044dbb0333 bump actix cors 2020-12-09 12:44:02 +01:00
d45c794a9e bump rustyline 2020-12-09 12:41:36 +01:00
c9dd7e10b9 bump ordered floats 2020-12-09 12:40:24 +01:00
56ad400c49 update heed 2020-12-09 11:27:38 +01:00
e2b0402cf5 bump regex 2020-12-09 10:28:22 +01:00
0c7fffeaf6 update env-logger 2020-12-09 10:25:17 +01:00
5f8dc21dd2 bump once-cell 2020-12-09 10:22:14 +01:00
7a27f9b610 Merge #1108
1108: [UI] Optimisation of bulma use and accessibility r=Kerollmops a=JoffreyGe

Fixes #1107

Co-authored-by: Joffrey Gentreau <13904635+JoffreyGe@users.noreply.github.com>
Co-authored-by: JoffreyGe <joffrey.gentrau@gmail.com>
2020-12-01 13:01:07 +00:00
1944dd70c7 Merge #1112
1112: Bump meilisearch r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-30 15:45:52 +00:00
3ec76ac33d bump meilisearch 2020-11-30 16:35:56 +01:00
72bc22dfd1 update changelog 2020-11-30 16:30:33 +01:00
b8e677efd2 Merge #1100
1100: [fix] Remove some clippy warnings r=MarinPostma a=woshilapin

fix #1099 

I'm also wondering if I should add `-- --deny warnings` to the modified line in `test.yml`.

Co-authored-by: Jean SIMARD <woshilapin@tuziwo.info>
2020-11-30 15:02:26 +00:00
65079f5e2e Merge #1097
1097: disable frontend in production r=LegendreM a=MarinPostma

disable frontend in production as per #411 and https://github.com/meilisearch/specifications/blob/master/text/0001-frontend-disable-prod.md

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-30 14:38:48 +00:00
cfb21b94e8 fix tests 2020-11-30 15:35:28 +01:00
cf74cfed15 Merge branch 'master' into UI-optimisations 2020-11-27 15:14:57 +01:00
f564a9ce51 Merge #849
849: Update nbHits count with filtered documents r=MarinPostma a=balajisivaraman

Closes #764 
close #1039

After discussing with @MarinPostma on Slack, this is my first attempt at implementing this for the basic flow that will go through `bucket_sort_with_distinct`.

A few thoughts here: 

- For getting the count of filtered documents alone, I originally thought of using `filter_map.values().filter(|&&v| !v).count()`. In a few cases, this was the same as what I have now implemented. But I realised I couldn't do something similar for `distinct`, so, for consistency, I have implemented both in a similar fashion.
- I also needed the `contains_key` check to ensure we're not counting the same document ID twice.

@MarinPostma also mentioned that this will be an approximation since the sort is lazy. In the test example that I've updated, the actual filtered count will be just 19 (for `male` records), but due to the `limit` in play, it returns 32 (filtering out 11 records overall).

Please let me know if this is the kind of fix we are looking for, and I can implement it in the placeholder search also.

Co-authored-by: Balaji Sivaraman <balaji@balajisivaraman.com>
2020-11-26 09:53:13 +00:00
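A minimal Rust sketch of the counting idea described in #849 above, assuming a `filter_map: HashMap<DocumentId, bool>` shape; the `DocumentId` alias, the `already_counted` set, and the function name are invented for illustration and are not the actual `bucket_sort_with_distinct` code:

```rust
use std::collections::{HashMap, HashSet};

type DocumentId = u64;

/// Count documents rejected by the filter, counting each document id only
/// once (this is what the `contains_key`-style check guards against).
fn count_filtered_out(
    filter_map: &HashMap<DocumentId, bool>, // id -> true if the document passed the filter
    already_counted: &mut HashSet<DocumentId>,
) -> usize {
    let mut count = 0;
    for (&id, &passed) in filter_map {
        if !passed && !already_counted.contains(&id) {
            already_counted.insert(id);
            count += 1;
        }
    }
    count
}

fn main() {
    let filter_map = HashMap::from([(1, true), (2, false), (3, false)]);
    let mut seen = HashSet::new();
    // Documents 2 and 3 were filtered out, so nbHits would be reduced by 2.
    assert_eq!(count_filtered_out(&filter_map, &mut seen), 2);
}
```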
cd1a3ad7c9 [UI] Optimisation of bulma use and accessibility 2020-11-26 10:43:34 +01:00
85d0a914ac [fix] Remove some clippy warnings 2020-11-23 23:24:40 +01:00
d3e7e18b7d disable frontend in production 2020-11-23 13:13:10 +01:00
d6c76b02e3 Merge #1090
1090: remove update changelog ci check r=Kerollmops a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-20 09:49:48 +00:00
fe3e20751c Merge #1089
1089: Fix clear bug r=Kerollmops a=MarinPostma

close #1088 

The placeholder data was not cleared when deleting all documents.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-20 09:24:24 +00:00
aab041e692 Merge #1082
1082: remove maintenance error from http r=MarinPostma a=MarinPostma

remove the maintenance error from `meilisearch-http`

close #1061 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-19 15:42:33 +00:00
75e22fc7f5 feat(search): update nbHits count with filtered docs for placeholder search 2020-11-19 21:02:47 +05:30
6fff49b33b Merge #1087
1087: Add deploy on Platform.sh option to README r=Kerollmops a=chadwcarlson

We have had a lot of success using Meilisearch on our public documentation, and I've put together the "movies" demo to quickly show it off. Included in our template README are instructions for modifying the template deployment to make it production-ready. 

All the best.

As per CONTRIBUTING, related to https://github.com/meilisearch/MeiliSearch/issues/1086

Co-authored-by: chadcarlson <chad.carlson@platform.sh>
2020-11-19 15:10:13 +00:00
2eaab48532 remove Maintenance error for error lib 2020-11-19 15:12:12 +01:00
43df4a56c4 feat(search): update nbHits count with filtered docs for core flow 2020-11-19 19:35:37 +05:30
680756500c remove update changelog ci check 2020-11-19 14:27:48 +01:00
0645a6568e add test clear all documents 2020-11-19 14:13:27 +01:00
3a0861694d fix clear document bug 2020-11-19 14:04:07 +01:00
0f4182bddf Uncenter to match existing. 2020-11-17 15:06:04 -05:00
cc4284b89e Add Deploy on Platform.sh button. 2020-11-17 15:05:17 -05:00
a326466f32 remove maintenance error from http 2020-11-16 17:30:37 +01:00
5a67862e00 Merge #1077
1077: Change movie gifs r=MarinPostma a=bidoubiwa

Remove old movie gif that showed some misleading information
- Typo on first letter
- `word` ranking rules implemented

Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-11-12 13:07:01 +00:00
201bb3f80a Add loop to gif 2020-11-12 10:05:39 +01:00
49afe7d89f Change movie gifs 2020-11-12 09:58:24 +01:00
f968d039f7 Merge #1065
1065: Stable -> master r=Kerollmops a=MarinPostma

~waiting for release~ OK

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
2020-11-04 21:22:08 +00:00
705669ddf8 Merge #1056
1056: Bump actix-http from 2.0.0 to 2.1.0 r=MarinPostma a=dependabot[bot]

Bumps [actix-http](https://github.com/actix/actix-web) from 2.0.0 to 2.1.0.
Release notes (from actix-http's releases):

actix-http v2.1.0
- Added: more flexible `on_connect_ext` methods for on-connect handling (actix/actix-web#1754)
- Changed: upgrade `base64` to `0.13` (actix/actix-web#1744)
- Changed: upgrade `pin-project` to `1.0` (actix/actix-web#1733)
- Changed: deprecate `ResponseBuilder::{if_some, if_true}` (actix/actix-web#1760)

awc v2.0.1
- Changed: upgrade `base64` to `0.13` (actix/actix-web#1744)
- Changed: deprecate `ClientRequest::{if_some, if_true}` (actix/actix-web#1760)
- Fixed: use `Accept-Encoding: identity` instead of `Accept-Encoding: br` when no compression feature is enabled (actix/actix-web#1737)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-03 12:59:41 +00:00
73dd345cda Bump actix-http from 2.0.0 to 2.1.0
Bumps [actix-http](https://github.com/actix/actix-web) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/actix/actix-web/releases)
- [Changelog](https://github.com/actix/actix-web/blob/master/CHANGES.md)
- [Commits](https://github.com/actix/actix-web/compare/awc-v2.0.0...http-v2.1.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-03 12:36:05 +00:00
65c6e46775 Merge #1054
1054: Make small improvements r=Kerollmops a=whoan

Thanks for this great tool!

Co-authored-by: Juan Eugenio Abadie <juaneabadie@gmail.com>
2020-11-03 12:35:18 +00:00
7a1d003341 Merge #1057
1057: Bump futures from 0.3.6 to 0.3.7 r=LegendreM a=dependabot[bot]

Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.6 to 0.3.7.
Release notes (futures 0.3.7):
- Fixed unsoundness in `MappedMutexGuard` (rust-lang/futures-rs#2240)
- Re-exported `TakeUntil` (rust-lang/futures-rs#2235)
- futures-test: prevent double panic in `panic_waker` (rust-lang/futures-rs#2236)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-03 12:10:15 +00:00
6a2a56d48f Bump futures from 0.3.6 to 0.3.7
Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.6 to 0.3.7.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.6...0.3.7)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-03 08:39:09 +00:00
9ff5bdd297 Merge #1059
1059: Bump serde from 1.0.116 to 1.0.117 r=MarinPostma a=dependabot[bot]

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.116 to 1.0.117.
Release notes (serde v1.0.117):
- Allow serialization of `std::net::SocketAddrV6` to include a scope id if present (based on rust-lang/rust#77426)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-03 08:17:32 +00:00
4ba5e22f64 Merge #1052
1052: Revert "Merge #1001" r=Kerollmops a=MarinPostma

This reverts commit 690eab4a25, reversing
changes made to 086020e543.

After discussion with @curquiza and @eskombro, this fix would introduce a relevancy bug that cannot be circumvented, whereas the previous bug was only a settings bug with a workaround. We need to discuss this issue further to provide a fix that meets our expectations.

related to #1050 

This will be merged directly into the release branch as a hotfix.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-11-02 14:43:56 +00:00
a8ab15d65d Revert "Merge #1001"
This reverts commit 690eab4a25, reversing
changes made to 086020e543.

update changelog
2020-11-02 15:10:09 +01:00
93953103ad Bump serde from 1.0.116 to 1.0.117
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.116 to 1.0.117.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.116...v1.0.117)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-01 05:40:44 +00:00
f25890c140 Make small improvements 2020-10-30 23:48:23 -03:00
39cf1931ae Merge #1047
1047: bump meilisearch r=Kerollmops a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-10-28 11:42:24 +00:00
bbb6771625 bump meilisearch 2020-10-28 12:36:52 +01:00
e9f9f270e1 Merge #1045
1045: Revert "Merge #1037" r=MarinPostma a=MarinPostma

This reverts commit 257f9fb2b2, reversing
changes made to 9bae7a35bf.

The reason for this is that de-unicoding is not always desirable (for example, in the case of CJK documents). This cannot be handled correctly for now and will require work on the tokenizer.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-10-27 17:16:27 +00:00
190b78b7be Revert "Merge #1037"
This reverts commit 257f9fb2b2, reversing
changes made to 9bae7a35bf.
2020-10-27 17:27:47 +01:00
257f9fb2b2 Merge #1037
1037: Synonym unidecode r=Kerollmops a=MarinPostma

fix #964 

- unidecodes all synonyms before adding them to the synonyms fst
- stores a copy of the original synonyms (before unidecoding) for later retrieval

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-10-27 10:57:40 +00:00
d35a104ad3 requested changes 2020-10-27 11:53:24 +01:00
9bae7a35bf Merge #1032
1032: Remove not maintained csv movies dataset r=MarinPostma a=bidoubiwa

Remove `movies.csv` from the dataset folder as it is not updated and not usable with MeiliSearch without converting it to json.

Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-10-27 08:18:20 +00:00
33c7c5a7e3 remove del_synonyms function 2020-10-26 21:33:39 +01:00
91363daeaa add tests 2020-10-26 17:48:13 +01:00
f9ab85adbe deunicase synonyms 2020-10-26 17:47:55 +01:00
9dbf43d3e7 Update readme accordingly 2020-10-22 20:33:20 +02:00
772f4d6671 Remove not maintained csv movies dataset 2020-10-22 20:33:20 +02:00
1b57218739 Merge #1040
1040: Update movie posters r=Kerollmops a=bidoubiwa

This PR resolves 3 issues:

1. Update poster URLs that changed.
2. Make all posters point to a smaller image (roughly 20 KB instead of 500 KB each) by changing the width from 1280 px to 500 px.
3. Remove films that are not in the TMDB database.

Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-10-22 16:38:41 +00:00
8767269b47 Update movie posters 2020-10-22 18:07:57 +02:00
baceaed582 Merge #1038
1038: Add Sandbox section to README.md r=LegendreM a=eskombro

This PR adds a link to [MeiliSearch Sandbox](https://sandbox.meilisearch.com/) in the README.md

Co-authored-by: Samuel Jimenez <sjimenezre@gmail.com>
2020-10-22 15:25:23 +00:00
62a28bc2a1 Add Sandbox section to README.md 2020-10-22 17:04:45 +02:00
f83caa6c40 Merge #1008
1008: Dump info r=Kerollmops a=LegendreM

fix #998 
fix #988 
fix #1009
fix #1010
fix #1033


Co-authored-by: many <maxime@meilisearch.com>
2020-10-22 14:23:50 +00:00
53b1483e71 fix pr comments 2020-10-22 16:12:55 +02:00
a0eafea200 fix tests 2020-10-22 15:46:20 +02:00
10dace305d snapshot at start 2020-10-22 15:46:20 +02:00
1eace79f77 change error message to be absolute 2020-10-22 15:46:20 +02:00
e6033e174d fix #1010 2020-10-22 15:46:20 +02:00
f1925b8f71 fix #1009 2020-10-22 15:46:20 +02:00
834f3cc192 rename folder to dir 2020-10-22 15:46:20 +02:00
e049aead16 improve dump status 2020-10-22 15:46:20 +02:00
0a9c9670e7 Merge #1028
1028: Clean external contributions r=Kerollmops a=LegendreM

We accepted some imperfect external PRs; this one is here to clean them up:
- clean PR #946 (remove changelog line and add forgotten newline)
- remove useless function after the health route refactor #1026

Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: Many <legendre.maxime.isn@gmail.com>
2020-10-22 10:46:19 +00:00
1744dcebfe Merge branch 'master' into clean_external_contributions 2020-10-22 12:23:51 +02:00
29712916e6 Merge #1034
1034: Remove outdated settings file r=Kerollmops a=bidoubiwa

Unnecessary settings files in the dataset folder should be removed. 

Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-10-21 15:42:48 +00:00
4d2783bb04 Remove outdated settings file 2020-10-21 17:12:10 +02:00
50f0fbb05c remove useless function after health route refactor #1026 2020-10-20 16:21:46 +02:00
5a842ec94a clean PR #946 2020-10-19 17:16:25 +02:00
372680e2ab Merge #1026
1026: refactor /health  r=LegendreM a=frbimo

Fixes: #940 

Testing:
`cargo test` and `cargo build --release` passed

Co-authored-by: frbimo <fr.bimo@gmail.com>
2020-10-19 13:57:15 +00:00
6465a3f549 refactor /health on meilisearch-http so that it complies with the following:
1. NEEDS to ensure that the service is completely up if it returns 204
2. DOES NOT block the service process (write transaction)
3. NEEDS to use as little network bandwidth as possible when triggered
4. NEEDS to use as few service resources as possible when triggered
5. DOES NOT NEED any authentication
6. MAY be named /health
2020-10-19 14:30:43 +08:00
690eab4a25 Merge #1001
1001: Fix settings bug r=MarinPostma a=MarinPostma

fix #942, see https://github.com/meilisearch/MeiliSearch/issues/942#issuecomment-706266440

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: many <maxime@meilisearch.com>
2020-10-16 13:25:32 +00:00
dc2e5ceed2 fix bug 2020-10-16 14:16:12 +02:00
1639a7338d add test to reproduce #891 bug report
fix bug
2020-10-16 13:35:11 +02:00
ac7226bb27 fix deserializer 2020-10-16 13:02:44 +02:00
086020e543 Merge #1020
1020: Apply recommended updates from dependabot r=LegendreM a=qdequele



Co-authored-by: qdequele <quentin@dequelen.me>
2020-10-15 17:05:31 +00:00
452d456fad Merge #997
997: fix(core): fix benchmark in core with types r=LegendreM a=neeldug

forces a dereference onto query and then creates an option to wrap the
query

Closes #994 

Co-authored-by: nd419 <5161147+neeldug@users.noreply.github.com>
2020-10-15 16:41:58 +00:00
f741942226 Remove redundant black_box import 2020-10-15 15:57:34 +01:00
a27399cf65 apply recommended updates from dependabot 2020-10-15 13:26:52 +02:00
29b8810db8 Merge #914
914: lazily create an index on documents push r=LegendreM a=qdequele

Create an index, when possible, if a user tries to send data to a non-existing index. https://github.com/meilisearch/MeiliSearch/issues/918

Co-authored-by: qdequele <quentin@meilisearch.com>
Co-authored-by: qdequele <quentin@dequelen.me>
2020-10-15 09:37:15 +00:00
a5a47911d1 add tests 2020-10-15 09:43:54 +02:00
7bf6a3d7b2 Merge #984
984: Add test search r=LegendreM a=LegendreM

- Get an error if the index does not exist
- Get an error if a parameter is not expected (e.g.: "lol")
- Check a basic search with no parameter
- Check a basic search with only a q parameter

issue #814

Co-authored-by: many <maxime@meilisearch.com>
2020-10-14 16:22:10 +00:00
0cabcb7c79 Merge #979
979: Add dependabot with a monthly update r=LegendreM a=qdequele



Co-authored-by: qdequele <quentin@dequelen.me>
2020-10-14 09:15:48 +00:00
f359b64d59 Merge #946
946: Sort displayedAttributes field r=MarinPostma a=gorogoroumaru

Fix #943

displayedAttributes uses the HashSet struct, which is an unordered structure, so I changed the implementation from HashSet to BTreeSet.

Co-authored-by: gorogoroumaru <zokutyou2@gmail.com>
2020-10-13 14:37:47 +00:00
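A minimal illustration of the data-structure change described in #946 above (the attribute names are invented for the example): a `BTreeSet` iterates in sorted key order, whereas a `HashSet` has no guaranteed iteration order.

```rust
use std::collections::BTreeSet;

fn main() {
    // Collecting into a BTreeSet keeps the attributes in sorted order,
    // so the settings route returns them deterministically.
    let displayed_attributes: BTreeSet<&str> =
        ["title", "overview", "id", "poster"].into_iter().collect();

    let in_order: Vec<&str> = displayed_attributes.into_iter().collect();
    assert_eq!(in_order, vec!["id", "overview", "poster", "title"]);
}
```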
2f3ecab8d9 Merge #978
978: Add code coverage r=MarinPostma a=qdequele



Co-authored-by: qdequele <quentin@dequelen.me>
2020-10-13 14:12:53 +00:00
17f71a1a55 add lazy create index on settings handlers 2020-10-13 10:54:02 +02:00
bfe3bb0eeb create an helper to allow to delete the index on error 2020-10-13 10:54:02 +02:00
0a67248bfe cargo fmt 2020-10-13 10:54:02 +02:00
2644f087d0 add tests 2020-10-13 10:54:02 +02:00
91c8c7a2e3 lazily create an index during document addition 2020-10-13 10:54:02 +02:00
029abd3413 add code coverage 2020-10-13 10:53:26 +02:00
726756bad4 add dependabot with a monthly update 2020-10-13 10:52:17 +02:00
10c56d9919 Add test on search
related to SEARCH part in #814
2020-10-13 10:38:22 +02:00
5f59f93804 Merge #1007
1007: fix clippy errors r=MarinPostma a=qdequele

I fixed clippy warnings and errors. It will allow us to avoid future issues when bors tries to merge a branch.

Co-authored-by: qdequele <quentin@dequelen.me>
2020-10-13 08:29:49 +00:00
704defea78 fix clippy 2020-10-13 10:01:57 +02:00
eb240c8b60 update test 2020-10-10 06:13:27 +00:00
c3bcd7a410 Merge branch 'issue943' of https://github.com/gorogoroumaru/MeiliSearch into issue943 2020-10-10 02:58:16 +00:00
26124e6436 update test 2020-10-10 02:56:44 +00:00
3cd6f5c7ea Merge branch 'master' into issue943 2020-10-10 11:50:45 +09:00
7c646e031c update test 2020-10-10 02:43:09 +00:00
0a2ca075d3 fix(core): fix benchmark in core with types
forces a dereference onto query and then creates an option to wrap the
query

Closes 994
2020-10-08 13:37:58 +01:00
b406b6ee44 Merge #989
989: URL encode search in web UI r=LegendreM a=akrantz01

Fixes #986 

Co-authored-by: Alex Krantz <alex@krantz.dev>
2020-10-06 15:28:46 +00:00
726e867058 URL encode search in web UI
Fixes #986
2020-10-05 11:57:52 -07:00
f4d918d22a Merge branch 'master' into issue943 2020-10-02 21:01:31 +09:00
5ef3a01b6c Merge branch 'issue943' of https://github.com/gorogoroumaru/MeiliSearch into issue943 2020-10-02 20:01:13 +09:00
5a98f1f076 sort facetsDistribution attribute 2020-10-02 20:00:55 +09:00
4398f2c023 Merge #982
982: fix backups r=MarinPostma a=LegendreM

* pluralize variable `backup_folder` -> `backups_folder`
* change env case `MEILI_backup_folder` -> `MEILI_BACKUPS_FOLDER`
* add milliseconds to backup ID to reduce collisions

Co-authored-by: many <maxime@meilisearch.com>
2020-09-30 17:02:34 +00:00
afc3b0915b fix backups
* pluralize variable `backup_folder` -> `backups_folder`
* change env case `MEILI_backup_folder` -> `MEILI_BACKUPS_FOLDER`
* add milliseconds to backup ID to reduce collisions
* fix forgotten stats synchronization
2020-09-30 13:20:40 +02:00
f313de98c8 Merge #980
980: bump meilisearch to v0.15.0 r=Kerollmops a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-09-28 15:09:26 +00:00
03d4651077 bump meilisearch 2020-09-28 16:56:05 +02:00
32f6a9a457 Merge #976
976: Revert 944 r=MarinPostma a=MarinPostma

revert #944 
@bidoubiwa @curquiza @eskombro, this was a misunderstanding from our side. Doing this would in fact be an error, and would prevent us from doing this: https://github.com/meilisearch/MeiliSearch/issues/945#issuecomment-685526678, which is what we are really after. We are resetting this to its default behaviour before it goes into production. Sorry for the confusion.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-09-28 13:38:46 +00:00
099a0802fc Merge #916
916: Consider an empty query search as a placeholder search r=MarinPostma a=qdequele

Fix #856; related tracking issue: #729

Co-authored-by: qdequele <quentin@meilisearch.com>
2020-09-28 13:13:47 +00:00
e258e0b2c2 Merge #887
887: backup r=Kerollmops a=LegendreM

[Tracking Issue](https://github.com/meilisearch/MeiliSearch/issues/840)
[Documentation PR](https://github.com/meilisearch/documentation/pull/468)
[Other relevant issue](https://github.com/meilisearch/MeiliSearch/issues/884)

Co-authored-by: many <maxime@meilisearch.com>
2020-09-28 12:47:08 +00:00
c254320860 Implement backups
* trigger backup import via an HTTP route
* follow backup progress with a status route
* import a backup via the command line
* let the user choose the batch size of documents to import (command line)

closes #884
closes #840
2020-09-28 14:40:06 +02:00
51fd849852 cargo fmt 2020-09-28 14:23:32 +02:00
ab170ce4fd add test 2020-09-28 14:19:45 +02:00
90226dc8a9 Consider an empty query search as a placeholder search #916 2020-09-28 14:19:45 +02:00
63868b2600 Merge #977
977: update pest dependency r=Kerollmops a=MarinPostma

update pest dependency to official repo

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-09-25 19:35:25 +00:00
22d439f682 update pest dependency 2020-09-24 18:36:38 +02:00
394f2abd49 Merge #971
971: Meili tests r=MarinPostma a=MarinPostma

#869 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-09-24 16:06:35 +00:00
030bcd8b05 Revert "facet count more tests"
This reverts commit 954f572e79.
2020-09-24 16:40:18 +02:00
d8d29d3615 Revert "fix facet count bug"
This reverts commit 733c02dd7c.
2020-09-24 16:39:42 +02:00
efe5984d54 Merge #963
963: upgrade actix-web to v3 r=Kerollmops a=robjtede

Test failures are the same before and after upgrade.

Co-authored-by: Rob Ede <robjtede@icloud.com>
2020-09-22 15:30:21 +00:00
63260e6443 add tests for documents 2020-09-22 16:05:40 +02:00
a794970b72 additional tests for index 2020-09-22 10:51:34 +02:00
ba0f44e361 upgrade actix-web to v3 2020-09-21 22:37:54 +01:00
4acaecd921 Merge #749
749: Contributor guidelines r=Kerollmops a=erlend-sh

Preliminary contributor guidelines, heavily based on the [Vector doc](https://github.com/timberio/vector/blob/master/CONTRIBUTING.md).

Co-authored-by: Erlend Sogge Heggen <e.soghe@gmail.com>
2020-09-21 09:51:56 +00:00
84a3e95fa4 Merge branch 'stable' 2020-09-11 12:08:20 +02:00
f045e111ea Merge #960
960: bump version and update changelog r=MarinPostma a=LegendreM

* bump to 0.14.1
* update CHANGELOG.md file

Co-authored-by: many <maxime@meilisearch.com>
2020-09-08 16:11:53 +00:00
87a76c2a60 bump version and update changelog 2020-09-08 18:11:03 +02:00
4edaebab90 Merge #959
959: add version guard in copy_and_compact_to_path function r=MarinPostma a=LegendreM

fix #958

need to create 0.14.1

Co-authored-by: many <maxime@meilisearch.com>
2020-09-08 08:35:49 +00:00
b43137b508 add version guard in copy_and_compact_to_path function 2020-09-07 18:21:04 +02:00
0ca44b6a82 Merge branch 'master' into issue943 2020-09-02 13:09:37 +09:00
ae2de4d0c4 added changelog 2020-09-02 11:21:58 +09:00
e47b4acd08 changed the implementation of displayedAttributes from HashSet into BtreeSet 2020-09-02 11:13:16 +09:00
a07c3743f0 Merge #944
944: Fix facet count r=MarinPostma a=MarinPostma

fix bug reported in: https://github.com/meilisearch/MeiliSearch/issues/929#issuecomment-683683728

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-09-01 08:43:47 +00:00
954f572e79 facet count more tests 2020-09-01 10:27:50 +02:00
733c02dd7c fix facet count bug 2020-09-01 10:12:00 +02:00
c94daf8c3d Merge #933
933: README.md - Fixed Small Typo r=MarinPostma a=LiamRiddell



Co-authored-by: Liam Riddell <3812154+LiamRiddell@users.noreply.github.com>
2020-08-28 13:09:34 +00:00
6db51ed8b2 README.md - Fixed Small Typo 2020-08-28 13:44:53 +01:00
118c673eaf Merge #927
927: Bump meilisearch r=Kerollmops a=MarinPostma

bump meilisearch version 0.14.0

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-24 14:36:21 +00:00
a9a2d3bca3 update changelog 2020-08-24 15:49:24 +02:00
4a9e56aa4f bump meilisearch version 0.14.0 2020-08-24 15:49:09 +02:00
14bb9505eb Merge #926
926: Update genre field with genres r=MarinPostma a=bidoubiwa

Most code samples are made with the assumption that the field is named `genres`, with an `s`. I'm updating the dataset to match those code samples.


Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-08-24 12:48:08 +00:00
d937aeac0a Update genre field with genres 2020-08-24 14:22:33 +02:00
dd540d2540 Merge #924
924: change max db size opt name r=Kerollmops a=MarinPostma

fix #867

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-24 12:18:17 +00:00
4ecaf99047 fix test option test 2020-08-24 14:14:11 +02:00
445a6c9ea2 update options name 2020-08-21 14:42:20 +02:00
67b7d60cb0 Merge #920
920: fix bug and add tests r=MarinPostma a=LegendreM

- add tests about updates
- fix select bug

fix #896

Co-authored-by: many <maxime@meilisearch.com>
2020-08-19 07:56:27 +00:00
94b3e8e56e fix bug and add tests
- add tests about updates
- fix select bug

fix #896
2020-08-19 09:51:57 +02:00
89b5ae63fc Merge #915
915: fix unwrap bug r=Kerollmops a=MarinPostma

fix #912.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-18 12:50:10 +00:00
2a79dc9ded log error on unwrap error 2020-08-17 16:32:40 +02:00
5ed62dbf76 fix unwrap bug 2020-08-14 12:16:48 +02:00
cb267b68ed Merge #910
910: Fix typo in error message r=MarinPostma a=curquiza

Thanks to @ppamorim for reporting the typos to me!

Co-authored-by: Clementine Urquizar <clementine@meilisearch.com>
2020-08-13 15:43:58 +00:00
6539be6c46 Fix typo in error message 2020-08-13 17:13:19 +02:00
a23bdb31a3 Merge #829
829: implement snapshoting r=MarinPostma a=LegendreM

related to #551.

This pull request lets the user periodically create a snapshot of the MeiliSearch database via a command-line option and launch MeiliSearch from a snapshot with another command.

## Documentation

### schedule a snapshot
`--snapshot-path <DIRECTORY_PATH>`:
this will periodically create a snapshot `<DB_NAME>.tar.gz` in the specified directory

### change the period between 2 snapshot creations
`--snapshot-interval-sec <GAP_IN_SEC>`
choose the time gap between 2 snapshots

### start meilisearch from a snapshot
`--load-from-snapshot <FILE_PATH>`
this will use the snapshot stored at `<FILE_PATH>` to initialize the MeiliSearch database

`--ignore-snapshot-if-db-exists` if set and a db already exists,
this will skip the snapshot import and continue with the existing db instead of quitting the process by returning an error

`--ignore-missing-snapshot` if set and no snapshot exists at the provided path,
this will skip the snapshot import and continue with the existing db instead of quitting the process by returning an error

Co-authored-by: many <maxime@meilisearch.com>
2020-08-12 16:37:31 +00:00
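As an illustrative combination of the options documented in #829 above (not a command taken from the PR, and with placeholder paths and binary name), one could schedule a daily snapshot with `./meilisearch --snapshot-path /path/to/snapshots --snapshot-interval-sec 86400`, and later restart from it with `./meilisearch --load-from-snapshot /path/to/snapshots/<DB_NAME>.tar.gz --ignore-missing-snapshot`.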
9014290875 implement snapshot 2020-08-12 17:46:28 +02:00
1903302a74 Merge #906
906: Facet distribution correct case r=LegendreM a=MarinPostma

~

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2020-08-12 09:04:36 +00:00
75c3cb4bb6 fix compile error 2020-08-12 10:31:11 +02:00
bfd0f806f8 requested changes
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2020-08-12 10:31:11 +02:00
afab8a7846 clean facet result types 2020-08-12 10:31:11 +02:00
afacdbc7a0 update tests for facets distribution case 2020-08-12 10:31:11 +02:00
18a50b4dac fix facet distribution case 2020-08-12 10:31:10 +02:00
fb69769991 Merge #889
889: Fix clippy warnings r=MarinPostma a=TaKO8Ki

Good day!

Since `cargo clippy` showed two warnings like the following, I've fixed them. This is a small PR.

```sh
warning: use of `ok_or` followed by a function call
   --> meilisearch-core/src/database.rs:185:18
    |
185 |                 .ok_or(Error::VersionMismatch("bad VERSION file".to_string()))?;
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try this: `ok_or_else(|| Error::VersionMismatch("bad VERSION file".to_string()))`
    |
    = note: `#[warn(clippy::or_fun_call)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#or_fun_call

warning: useless use of `format!`
   --> meilisearch-core/src/database.rs:208:59
    |
208 |                         return Err(Error::VersionMismatch(format!("<0.12.0")));
    |                                                           ^^^^^^^^^^^^^^^^^^ help: consider using `.to_string()`: `"<0.12.0".to_string()`
    |
    = note: `#[warn(clippy::useless_format)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_format

warning: 2 warnings emitted
```

Co-authored-by: Takayuki Maeda <41065217+TaKO8Ki@users.noreply.github.com>
2020-07-29 11:40:08 +00:00
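For context on the first warning in #889 above: `ok_or` evaluates its argument eagerly, so the error `String` is allocated even when the `Option` is `Some`, whereas `ok_or_else` takes a closure that only runs on `None`. A minimal, self-contained sketch (the `version` helper is invented for illustration, not taken from `database.rs`):

```rust
fn version(contents: Option<&str>) -> Result<&str, String> {
    // `ok_or(...)` would build the error value even on the happy path;
    // `ok_or_else` defers that work until the Option is actually None.
    contents.ok_or_else(|| "bad VERSION file".to_string())
}

fn main() {
    assert_eq!(version(Some("0.12.0")), Ok("0.12.0"));
    assert!(version(None).is_err());
}
```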
750e7382c6 fix clippy warnings 2020-07-29 11:32:34 +09:00
2464cc7a6d Merge #888
888: Remove schema mention in error message r=MarinPostma a=curquiza

We avoid mentioning the schema since MeiliSearch is schemaless for the user 🙂

Co-authored-by: Clementine Urquizar <clementine@meilisearch.com>
2020-07-28 15:20:59 +00:00
f078cbac4d Remove schema mention in error message 2020-07-28 15:18:05 +02:00
aa545e5386 Merge #638 #828 #865
638: Update requisites for source build (rust version) r=MarinPostma a=djKooks

Hello,
I just found that compiling from source fails with the following issue:
```
error[E0658]: the `#[non_exhaustive]` attribute is an experimental feature
  --> /Users/kwangin.jung/.cargo/registry/src/github.com-1ecc6299db9ec823/whoami-0.8.1/src/lib.rs:40:1
   |
40 | #[non_exhaustive]
   | ^^^^^^^^^^^^^^^^^
   |
   = note: for more information, see https://github.com/rust-lang/rust/issues/44109

error[E0658]: the `#[non_exhaustive]` attribute is an experimental feature
   --> /Users/kwangin.jung/.cargo/registry/src/github.com-1ecc6299db9ec823/whoami-0.8.1/src/lib.rs:102:1
    |
102 | #[non_exhaustive]
    | ^^^^^^^^^^^^^^^^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/44109
```
Seems `#[non_exhaustive]` is a new feature in Rust 1.40.0, so I added it to the prerequisites.


828: Cleanup readme r=MarinPostma a=tpayet

Closes #613 

865: Update movie dataset with genre field r=MarinPostma a=bidoubiwa

Updated the movie dataset by adding the `genre` field to each movie where the genre could be fetched.
The `genre` was fetched for each movie by making a search request on the bigger movie dataset (200 MB) using MeiliSearch.

I'm making this proposal to make testing and experimenting more accessible.

```json
{
  "id": "323661",
  "title": "Mune: Guardian of the Moon",
  "poster": "https://image.tmdb.org/t/p/w1280/4vzqow7mVUahqA4hHoe2UpQOxy.jpg",
  "overview": "When a faun named Mune becomes the Guardian of the Moon, little did he had unprepared experience with the Moon and an accident that could put both the Moon and the Sun in danger, including a corrupt titan named Necross who wants the Sun for himself and placing the balance of night and day in great peril. Now with the help of a wax-child named Glim and the warrior, Sohone who also became the Sun Guardian, they go out on an exciting journey to get the Sun back and restore the Moon to their rightful place in the sky.",
  "release_date": 1423094400,
  "genre": [
    "Animation",
    "Family",
    "Adventure",
    "Fantasy",
    "Comedy"
  ]
}
{
  "id": "306",
  "title": "Beverly Hills Cop III",
  "poster": "https://image.tmdb.org/t/p/w1280/tw9gAhqQcBFX0X0XfVbWqUsmzoU.jpg",
  "overview": "Back in sunny southern California and on the trail of two murderers, Axel Foley again teams up with LA cop Billy Rosewood. Soon, they discover that an amusement park is being used as a front for a massive counterfeiting ring – and it's run by the same gang that shot Billy's boss.",
  "release_date": 769741200,
  "genre": [
    "Action",
    "Comedy",
    "Crime"
  ]
}
```

Co-authored-by: kwangin.jung <inylove82@gmail.com>
Co-authored-by: Thomas Payet <thomas@meilisearch.com>
Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-07-24 09:45:01 +00:00
9711100ff1 Merge #874
874: Fixes default values on web interface r=MarinPostma a=tpayet



Co-authored-by: Thomas Payet <thomas@meilisearch.com>
2020-07-24 09:20:33 +00:00
8c49ee1b3b Fixes default values on web interface 2020-07-22 14:42:34 +02:00
44cb7f68f9 Merge #878
878: Bump meilisearch v0.13.0 r=MarinPostma a=MarinPostma



Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-22 09:18:56 +00:00
25dc2ad66f update changelog 2020-07-22 10:56:19 +02:00
624bd56459 bump meilisearch version 2020-07-22 10:56:19 +02:00
7a6615cfa7 Merge #785
785: Adding a tracking issue template r=MarinPostma a=qdequele



Co-authored-by: Quentin de Quelen <quentin@dequelen.me>
2020-07-22 08:49:27 +00:00
bcad3ffd7c Merge #873
873: Update CI for new workflow r=MarinPostma a=MarinPostma

This pr implements the necessary automation for our new release workflow.

## Pre-releases

Whenever something is pushed to a branch `release-v*`, tests are triggered. If all tests pass, the current reference is checked to see if it's a release branch. If it is, a pre-release is created for this branch and assets are automatically generated for it. The pre-release has the tag `vx.x.xrcn`, where `x.x.x` is the version extracted from the branch name and `n` is the number of commits since the branch was forked from master (starting from rc0).

## Releases

Whenever something is pushed to stable and tagged `vx.x.x` where `x.x.x` is the version, tests are run and a release is generated containing the assets, and binaries are published to docker, brew, apt, etc.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-22 08:24:24 +00:00
98d87fa1ff Merge #868
868: Update error.rs r=MarinPostma a=tpayet



Co-authored-by: Thomas Payet <thomas@meilisearch.com>
2020-07-21 16:54:56 +00:00
7e00bf4bfa update ci to new workflow 2020-07-21 16:52:01 +02:00
476aecf86d Cleanup readme 2020-07-20 16:03:25 +02:00
c39b358518 Update error.rs 2020-07-20 14:42:47 +02:00
bd5d25429b Update movie dataset with genre field 2020-07-20 10:39:29 +02:00
982fb7b786 Merge #858
858: update error url r=LegendreM a=MarinPostma

@bidoubiwa 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-16 14:55:52 +00:00
7dc628965c Merge #846
846: Change settings behavior r=LegendreM a=MarinPostma

partially implements #824.

Returning the field distribution for all known fields is more complicated than anticipated, see https://github.com/meilisearch/MeiliSearch/issues/824#issuecomment-657656561

If we decide to do it anyway, and find a reasonable solution, I will make another PR.

fix #853 by resetting displayed and searchable attributes to wildcard when attributes are set to `[]` in the all-settings route. @curquiza @bidoubiwa can you confirm that this is the expected behavior?

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-16 14:31:06 +00:00
d114250ebb requested changes 2020-07-16 16:19:15 +02:00
8eec3bcdc2 update error url 2020-07-16 15:14:53 +02:00
0583cd8e5d Merge pull request #810 from MarinPostma/remove-sys-info
remove the sys-info routes
2020-07-15 20:24:18 +02:00
83b6fc48e1 remove the sys-info routes 2020-07-15 19:33:29 +02:00
4b5437a882 fix displayed attrs empty array bug 2020-07-15 19:25:24 +02:00
de4caef468 test reset attributes to wildcard 2020-07-15 18:56:19 +02:00
36b763b84e test setting attributes before adding documents 2020-07-15 18:56:19 +02:00
c06dd35af1 fix tests 2020-07-15 18:56:19 +02:00
51b7cb2722 remove accept new fields / add indexed * 2020-07-15 18:56:19 +02:00
7f5fb50307 add displayed attributes wildcard 2020-07-15 18:56:19 +02:00
4262561596 Merge #819
819: run clippy during tests r=MarinPostma a=MarinPostma



Co-authored-by: marin <postma.marin@protonmail.com>
Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-15 08:07:42 +00:00
8471796987 add clippy component 2020-07-13 18:53:19 +02:00
2775aeb6ac Merge #794
794: Check database version mismatch r=MarinPostma a=MarinPostma

Checks if the versions of the database and the engine are compatible.

The database and the engine are compatible if they share the same major and minor version.

The engine will refuse to start if there is a mismatch.

@bidoubiwa do we need to document this?

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-13 15:08:33 +00:00
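A minimal sketch of the compatibility rule described in #794 above (representing versions as `(major, minor, patch)` tuples is an assumption for illustration, not the actual check):

```rust
/// The engine and the database are compatible when they share the same
/// major and minor version numbers; the patch number may differ.
fn versions_compatible(engine: (u32, u32, u32), database: (u32, u32, u32)) -> bool {
    engine.0 == database.0 && engine.1 == database.1
}

fn main() {
    assert!(versions_compatible((0, 12, 1), (0, 12, 0))); // patch differs: still compatible
    assert!(!versions_compatible((0, 13, 0), (0, 12, 0))); // minor differs: refuse to start
}
```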
a747e79e5d run clippy during tests 2020-07-13 16:15:32 +02:00
5773c5c865 check version file against regex 2020-07-13 16:06:28 +02:00
51d7c84e73 better exit on error
Update meilisearch-core/src/database.rs

Co-authored-by: Clément Renault <renault.cle@gmail.com>

Update meilisearch-core/src/database.rs

Co-authored-by: Clément Renault <renault.cle@gmail.com>
2020-07-13 16:06:28 +02:00
6f0b6933e6 update changelog 2020-07-13 16:05:56 +02:00
f5a936614a error on meili database version mismatch 2020-07-13 16:05:08 +02:00
308630c094 Merge #841
841: Unique docid bugfix r=LegendreM a=MarinPostma

fix #827 

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-13 13:36:32 +00:00
f54397e0cf test unique document id bug 2020-07-13 15:14:07 +02:00
754efe1f42 fix document id uniqueness bug 2020-07-13 15:14:07 +02:00
05c30c879f Merge #791
791: Create tests for error codes r=LegendreM a=MarinPostma

- create tests for error codes
- fix the primary key error that returned an internal error instead of the correct error
- bits of documentation for errors
- change a bunch of error types for better accuracy; @curquiza, @eskombro, @bidoubiwa you may want to take a look at `meilisearch-error/src/lib.rs`
- fix #836

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-13 13:12:21 +00:00
99e8d4adae fix missing primary key 2020-07-13 14:54:25 +02:00
ac63f1cd7a fix typo in error code 2020-07-13 14:54:25 +02:00
169749396b update error types to be more accurate 2020-07-13 14:54:25 +02:00
a0637c2c6d Merge #842
842: bors setup r=LegendreM a=MarinPostma

set up bors to run the tests and merge automatically.

the tests are now run only on staging and trying branches

you can use `bors r+` to test and merge the branch into master if the tests succeed

or

you can just use `bors try` to run the test on the trying branch (synced with master)

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-07-10 13:27:21 +00:00
edbba64711 fix bors.yaml 2020-07-08 21:04:07 +02:00
9ba711dfe5 update readme with bors badge 2020-07-08 14:33:15 +02:00
6bce83dde8 set bors timeout 2020-07-08 13:36:33 +02:00
629a658c75 bors setup 2020-07-08 09:50:07 +02:00
2f6c55ef78 Merge pull request #771 from MarinPostma/placeholder-search
Placeholder search
2020-07-03 18:56:55 +02:00
a6457718f2 update changelog 2020-07-03 17:17:28 +02:00
3bf23a7c59 test placeholder search
move search test macro to common module
2020-07-03 17:17:28 +02:00
bbe3a10107 implement placeholder search 2020-07-03 17:17:28 +02:00
37ee0f36c1 Merge pull request #792 from MarinPostma/error-codes-in-updates
Error codes in updates
2020-07-02 16:17:57 +02:00
e92f544fd1 add test for update errors 2020-07-02 15:18:30 +02:00
d7b49fa671 fix potential infinite loop 2020-07-02 15:18:30 +02:00
41707e3245 fix error on missing document id in document 2020-07-02 15:18:30 +02:00
3c51e9f5ed Enable error code reporting for update errors 2020-07-02 15:18:30 +02:00
7d3e937134 add tests for error codes 2020-07-02 15:18:30 +02:00
6445eea946 update error types to be more accurate 2020-07-02 15:18:28 +02:00
ced6cc0e23 fix bad error report when primary key exists 2020-07-02 15:16:48 +02:00
944a3943e5 Merge pull request #820 from MarinPostma/readme-update
update readme
2020-07-02 15:16:37 +02:00
d419f151a0 update readme 2020-07-02 15:14:05 +02:00
b2124822a3 Merge pull request #825 from Rio/log-analytics-usage
feat(analytics): log if analytics are enabled
2020-07-02 15:02:19 +02:00
f60b912f12 feat(analytics): log if analytics are enabled 2020-07-02 14:33:25 +02:00
e1f956ce18 Merge pull request #821 from aeriksson/patch-1
Fix typo in option.rs
2020-07-02 14:05:00 +02:00
ab16e2eff1 fix merge error 2020-07-02 14:04:15 +02:00
3da607749f Merge branch 'master' into patch-1 2020-07-02 13:57:52 +02:00
a626e5e935 Merge pull request #737 from balajisivaraman/wip_655
Improve test suite performance using Test Dataset
2020-07-02 13:51:38 +02:00
3d73a4895e cleanup movies dataset and related functions 2020-07-02 16:52:39 +05:30
979b01a1c0 update index status test to use the test dataset 2020-07-02 16:52:39 +05:30
38cf489acf update remaining search tests to use the test dataset 2020-07-02 16:52:39 +05:30
60264763f4 update search_settings tests to use the test dataset 2020-07-02 16:52:39 +05:30
d55124e524 update settings_ranking_rules tests to use the test dataset 2020-07-02 16:52:39 +05:30
643933c3b0 update settings tests to use the test dataset 2020-07-02 16:52:39 +05:30
44fd9384bd update stop_words tests to use the test dataset 2020-07-02 16:52:39 +05:30
75d0d2df6c update documents_delete tests to use the test dataset 2020-07-02 16:52:39 +05:30
92d9283d1a Merge pull request #823 from Rio/public-health-endpoint
chore(http): do not require auth on /health endpoint
2020-07-01 17:01:23 +02:00
9b46887f75 chore(http): do not require auth on /health endpoint
This makes it easier to determine the health of the server using http.

closes #822
2020-07-01 16:33:01 +02:00
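As a rough illustration of the change described above (not the actual MeiliSearch source), a health route can be registered as a plain actix-web service outside any authentication middleware, so a bare HTTP GET is enough to probe liveness; the handler name, response code and bind address below are illustrative assumptions.

```rust
// Hedged sketch: an unauthenticated /health handler in actix-web.
use actix_web::{get, App, HttpResponse, HttpServer, Responder};

#[get("/health")]
async fn health() -> impl Responder {
    // No auth guard here: anyone can probe the server over plain HTTP.
    HttpResponse::NoContent().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(health))
        .bind("127.0.0.1:7700")?
        .run()
        .await
}
```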
ad267cbe59 Merge pull request #813 from Rio/remove-hardcoded-sentry-dsn
feat(sentry): make sentry dsn customizable
2020-07-01 16:15:21 +02:00
029772e11f Fix typo in option.rs 2020-07-01 13:45:00 +02:00
2ef888d100 chore(sentry): make sentry dsn customizable
By removing the hardcoded value the sentry client will fall back to pulling
it from the SENTRY_DSN environment variable. The hardcoded value has been
moved to the default value of the commandline options so the default
behavior will be the same.

A `--no-sentry` and `MEILI_NO_SENTRY` option has also been introduced
that effectively disables sentry reporting.
2020-07-01 12:55:14 +02:00
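A hedged sketch of the command-line option shape described in the commit above, written with structopt; the struct, flag names and default DSN are illustrative placeholders, not the exact contents of MeiliSearch's option.rs.

```rust
// Illustrative only: a configurable sentry DSN plus an opt-out flag.
use structopt::StructOpt;

#[derive(Debug, StructOpt)]
struct Opt {
    /// Sentry DSN used for error reporting. With no hardcoded value in the
    /// code, the sentry client can also fall back to the SENTRY_DSN
    /// environment variable.
    #[structopt(long, default_value = "https://public-key@sentry.example.com/1")]
    sentry_dsn: String,

    /// Disable sentry reporting entirely (the commit also mentions a
    /// MEILI_NO_SENTRY environment variable for the same purpose).
    #[structopt(long)]
    no_sentry: bool,
}

fn main() {
    let opt = Opt::from_args();
    if opt.no_sentry {
        println!("sentry reporting disabled");
    } else {
        println!("sentry reporting to {}", opt.sentry_dsn);
    }
}
```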
4e1e41994c Merge pull request #817 from meilisearch/bump-version
Bump meilisearch to version 0.12.0
2020-06-30 21:24:47 +02:00
0545424781 update changelog 2020-06-30 20:47:00 +02:00
69af8e9e3d bump meilisearch to 0.12.0 2020-06-30 20:42:19 +02:00
9c7abebde4 Merge pull request #816 from MarinPostma/fix-index-length
Fix long documents not being indexed completely bug
2020-06-30 19:19:07 +02:00
e240591128 add test document over 1000 words 2020-06-30 18:49:33 +02:00
0bceaa5669 add test for long document indexing 2020-06-30 17:46:23 +02:00
3423c0b246 fix indexed document length bug 2020-06-30 17:46:23 +02:00
0953d99198 Merge pull request #809 from MarinPostma/bump-script
Bump script
2020-06-30 13:54:07 +02:00
7ad835baf5 add bump script 2020-06-30 13:45:39 +02:00
8309e00ed3 Merge pull request #801 from MarinPostma/make-clippy-happy
Make clippy happy
2020-06-30 12:25:33 +02:00
4f6a6b1359 make clippy happy 2 2020-06-30 11:01:07 +02:00
21253a2bcb make setting enums more balanced 2020-06-30 11:01:07 +02:00
8e9296c66f simplify bucket sort signature 2020-06-30 11:01:07 +02:00
641d12fb2d make clippy happy 1 2020-06-30 11:01:07 +02:00
2019db972d Merge pull request #805 from MarinPostma/error-code-rename
rename error codes
2020-06-30 10:33:16 +02:00
0d2f5d3fe0 rename error codes 2020-06-29 14:37:51 +02:00
21567eeb8f Merge pull request #800 from MarinPostma/distinct-attribute-return-correct-name
Fix distinct attribute returning id instead of name
2020-06-29 10:42:57 +02:00
b1272d05b4 Test get distinct attribute 2020-06-27 10:38:08 +02:00
feb12a581e fix distinct attribute returning id instead of name 2020-06-27 10:30:27 +02:00
4ad4d7cf34 Merge pull request #796 from meilisearch/bump-version
Bump meilisearch version
2020-06-25 15:19:06 +02:00
a38498fe1e update changelog 2020-06-25 14:31:45 +02:00
8ea6ef1e90 bump meilisearch version 2020-06-25 14:28:50 +02:00
4f2b68eef1 Update CONTRIBUTING.md
Change Git links to chris.beams post
2020-06-24 19:49:20 +02:00
f1d55314d5 Merge pull request #793 from MarinPostma/fix-sysinfos
Fix sysinfos
2020-06-23 19:13:04 +02:00
c7701ebd19 partial sysinfo fix 2020-06-23 14:37:29 +02:00
05c3f598ac Merge pull request #778 from MarinPostma/consistent-settings
Make settings more consistent
2020-06-22 15:32:50 +02:00
3d771f2289 test distinct attribute 2020-06-22 12:16:35 +02:00
8035ca7138 fix distinct attribute behavior 2020-06-22 12:16:35 +02:00
60a90e96f3 add test for ranking rules settings 2020-06-22 12:16:35 +02:00
6167a10e5e change ranking rule addition behavior 2020-06-22 12:16:35 +02:00
ce28567dda Merge pull request #789 from MarinPostma/facet-distribution-update
Fix facet cache on document update
2020-06-22 12:14:01 +02:00
179942b07a test facet document fix 2020-06-22 11:40:08 +02:00
fabb1985ca recompute all facets during document addition 2020-06-22 11:40:08 +02:00
33bfcbeba7 Merge pull request #781 from MarinPostma/fix-benchmarks
Fix benchmarks and remove unused dependencies
2020-06-19 17:13:32 +02:00
3143ffe208 remove unused dependencies 2020-06-19 13:59:40 +02:00
c52d6d0741 fix broken benchmarks 2020-06-19 13:59:40 +02:00
ce7a9073e1 Adding a tracking issue template 2020-06-18 11:09:00 +02:00
95d1762f19 Merge pull request #735 from MarinPostma/post-search-route
Post search route
2020-06-15 22:32:12 +02:00
e5079004e1 adds SearchQueryPost 2020-06-15 16:28:08 +02:00
f6795775e2 update changelog 2020-06-15 16:28:08 +02:00
2d31371975 fix style 2020-06-15 16:28:08 +02:00
26d29783ce add tests for post search route 2020-06-15 16:28:08 +02:00
0ebf7b6214 fix CORS config error in actix 2020-06-15 16:28:08 +02:00
6add10b18f add search post route 2020-06-15 16:28:08 +02:00
940105efb3 change cors max age 2020-06-15 16:28:08 +02:00
3e13e728aa add post method 2020-06-15 16:28:08 +02:00
8cd224899c move search logic out of search route 2020-06-15 16:28:08 +02:00
35605c9f57 Merge pull request #777 from curquiza/hotfix-is-latest-script
Hotfix: Fix syntax error in is-latest-release.sh script
2020-06-15 14:57:44 +02:00
c6e68c87cd Fix syntax error in is-latest-release.sh script 2020-06-15 14:27:34 +02:00
7685165089 Merge pull request #775 from meilisearch/bump-version
Bump Meilisearch to v0.11.0
2020-06-15 11:21:38 +02:00
c6bad90c79 Mark unreleased changes as released in the changelog 2020-06-15 10:56:13 +02:00
8aeeea8382 Bump the Meilisearch crates version to 0.11.0 2020-06-15 10:54:16 +02:00
0ee46f773e Merge pull request #766 from MarinPostma/empty-facet-attributes-error
Empty facet attributes error
2020-06-10 14:04:48 +02:00
ff2490ca8b fix tests 2020-06-10 12:30:33 +02:00
2ada9c5d72 add error on search with empty facets 2020-06-10 12:30:33 +02:00
18b56c6af8 Merge pull request #760 from MarinPostma/typo-update-id
fix typo in error message
2020-06-06 11:02:52 +02:00
6fee7e638c fix typo in error message 2020-06-06 09:05:28 +02:00
f0822a86e1 Merge pull request #757 from MarinPostma/auth-status-code
change error status codes for auth
2020-06-05 20:57:08 +02:00
d007bf13f1 change missing headers & auth status code 2020-06-05 15:44:38 +02:00
cff9e1fd94 Merge pull request #759 from MarinPostma/document-delete-error
return error on deleting unexisting index
2020-06-05 12:33:06 +02:00
56b01ba440 test error delete unexisting index 2020-06-05 11:40:18 +02:00
11e00c906f error when deleting unexisting index 2020-06-05 11:33:59 +02:00
32843e9ade Merge pull request #751 from MarinPostma/handle-path-error
Handle url params errors
2020-06-04 15:22:54 +02:00
cf6c6eb117 test invalid query params 2020-06-04 14:48:37 +02:00
6df56c4ec5 add error handler for query params error 2020-06-04 14:48:37 +02:00
aabfe73b38 Merge pull request #756 from meilisearch/cleanup-dependencies
Cleanup the dependency tree
2020-06-04 14:39:04 +02:00
263583c118 Remove http-service/-mock from the dependencies 2020-06-04 14:04:18 +02:00
3ab8baa1b4 Merge pull request #755 from VerKnowSys/master
new: Updated sysinfo dependency of meilisearch-http/Cargo.toml. This…
2020-06-04 13:37:00 +02:00
73c60d7768 new: Updated sysinfo dependency of meilisearch-http/Cargo.toml. This fixes #740 2020-06-04 13:08:12 +02:00
987a60a6c0 Merge pull request #748 from MarinPostma/missing-primary-key-message
error message for missing primary key
2020-06-04 10:52:05 +02:00
ae6a92f89a error message for missing primary key 2020-06-03 17:38:39 +02:00
0fc624aa81 Merge pull request #750 from meilisearch/issue-templates
Update issue templates
2020-06-03 16:09:02 +02:00
af50a5528f Update issue templates
Feel free to close this PR and just go through the settings yourself:

https://github.com/meilisearch/MeiliSearch/issues/templates/edit

Once the new folder has been set up we also need a config.yml file like [this one](https://github.com/vercel/next.js/blob/canary/.github/ISSUE_TEMPLATE/config.yml) that will create the same type of discussion link that you see [here](https://github.com/vercel/next.js/issues/new/choose).

blank_issues_enabled: false
contact_links:
  - name: Ask a question
    url: https://github.com/meilisearch/MeiliSearch/discussions
    about: Ask questions and discuss with other community members
2020-06-03 13:57:01 +02:00
b2877b3549 Merge pull request #747 from MarinPostma/facets-settings-subroutes
Facets settings subroutes
2020-06-03 13:45:40 +02:00
5f1ca15a7c Update CONTRIBUTING.md 2020-06-03 13:37:46 +02:00
e1002862a9 Create CONTRIBUTING.md 2020-06-03 13:31:21 +02:00
3fe3c8cf02 test attributes_for_faceting subroutes 2020-06-03 11:31:58 +02:00
ed051b65ad default attributes_for_faceting to [] 2020-06-03 11:31:32 +02:00
8f0d9ccd87 add subroutes for attributes_for_faceting 2020-06-03 11:31:32 +02:00
adaf74bc87 Merge pull request #718 from meilisearch/add-more-analytics-reporting
Add more analytics
2020-06-02 17:05:09 +02:00
a2321d1562 update changelog and readme 2020-06-02 15:40:33 +02:00
e51ea55ae3 add more analytics 2020-06-02 15:40:31 +02:00
3af2f8b344 Merge pull request #733 from curquiza/fix-welcome-message
Change http into https in welcoming message links
2020-06-02 14:53:34 +02:00
f6c531a5a8 Change http into https in welcoming message links 2020-06-02 14:20:08 +02:00
2ae05d9fd1 Merge pull request #734 from MarinPostma/index-already-exist-code
Index already exist code
2020-06-01 11:43:29 +02:00
e95cec7ea6 add test for error_code 2020-06-01 11:06:57 +02:00
3bd5a90976 rename error types 2020-05-30 12:10:35 +02:00
68ad570cfc replace existing_index with index_already_exists 2020-05-30 12:10:35 +02:00
db45826232 take existing_index out of create_index error 2020-05-30 12:10:35 +02:00
df7284a4df Merge pull request #732 from meilisearch/api-key-dashboard
Allow users to input an API Key to search into private data
2020-05-29 17:53:36 +02:00
b327442eb6 Update the changelog 2020-05-29 12:22:23 +02:00
1370b19402 Allow users to input an API Key to search into private data 2020-05-29 12:22:23 +02:00
5ee4a1e954 Merge pull request #703 from MarinPostma/error-code
Error code support
2020-05-29 11:26:14 +02:00
8a2e60dc09 requested changes 2020-05-28 19:19:26 +02:00
2a32ad39a0 move filter parse error display to core 2020-05-28 16:32:17 +02:00
2bf82b3198 update error codes 2020-05-28 16:32:14 +02:00
c9f10432b8 update changelog 2020-05-28 16:28:41 +02:00
fb6a9ea280 remove unnecessary errors 2020-05-28 16:28:41 +02:00
05344043b2 style fixes 2020-05-28 16:28:37 +02:00
d9e2e1a177 ErrorCode improvements 2020-05-28 16:23:46 +02:00
51b3139c0b fix status code 2020-05-28 16:23:46 +02:00
4254cfbce5 response error payload 2020-05-28 16:23:46 +02:00
e2546f2646 error codes for schema 2020-05-28 16:23:46 +02:00
9c58ca7ce5 error codes for core 2020-05-28 16:23:46 +02:00
0e20ac28e5 Change ErrorCategory to ErrorType 2020-05-28 16:23:46 +02:00
30fd24aa47 fix details 2020-05-28 16:23:46 +02:00
3bd15a4195 fix tests, restore behavior 2020-05-28 16:23:46 +02:00
c771694623 remove heed from http dependencies 2020-05-28 16:23:46 +02:00
d69180ec67 refactor errors / isolate core/http errors 2020-05-28 16:23:46 +02:00
e2db197b3f change ResponseError to Error 2020-05-28 16:23:46 +02:00
4c2af8e515 add error code abstractions 2020-05-28 16:23:46 +02:00
81b1aed7a1 Merge pull request #726 from MarinPostma/exhaustive-facet-count
Return the exhaustive facets count field
2020-05-28 12:39:00 +02:00
7c7f753463 add facet count in response 2020-05-28 12:08:38 +02:00
f1ac76a283 Merge pull request #725 from MarinPostma/fix-test-warnings
fix test warnings
2020-05-28 11:49:42 +02:00
2b7d614e84 fix test warnings 2020-05-27 19:32:55 +02:00
b859477ffd Merge pull request #716 from MarinPostma/rename-facet
rename facets to facetsDistribution
2020-05-27 18:29:21 +02:00
b6570f7016 rename facets to facetsDistribution 2020-05-27 17:35:33 +02:00
c1a2c7b610 Merge pull request #719 from eskombro/rename_fieldfrequency_to_fielddistribution
Rename fields_frequency into fields_distribution (and fieldsFrequency into fieldsDistribution)
2020-05-27 09:24:07 +02:00
b16088eec1 Update CHANGELOG.md 2020-05-26 20:44:06 +02:00
8438ac9756 Rename fields_frequency into fields_distribution 2020-05-26 20:40:49 +02:00
a3a389cae6 Merge pull request #715 from meilisearch/bump-heed
Bump heed to 0.8.0 and handle abort errors
2020-05-26 17:39:10 +02:00
8cebf78485 Bump heed to 0.8.0 and handle abort errors 2020-05-26 17:04:13 +02:00
166a301c7f Merge pull request #714 from MarinPostma/fix-null-facet-response
fix null facets in response
2020-05-26 17:02:23 +02:00
fac35e34e9 fix null facets in response 2020-05-26 16:30:27 +02:00
0883e345d0 Merge pull request #669 from meilisearch/add-ssl
Add ssl support
2020-05-26 16:24:22 +02:00
7096fdb56b update changelog 2020-05-26 14:16:40 +02:00
a5ab4b3f64 update tests 2020-05-26 14:16:25 +02:00
7e6f068b18 add ssl support
format code

remove expects and unwrap
2020-05-26 14:16:25 +02:00
dc246b97e6 Merge pull request #699 from mattjtodd/add-tini-process-manager
Added tini process manager and entrypoint decl.
2020-05-26 11:20:56 +02:00
1ce7e09a44 Added tini process manager and entrypoint decl. 2020-05-26 08:52:22 +01:00
690023baff Merge pull request #705 from tpayet/add-docker-test-on-pr
Add docker test on pr
2020-05-25 14:04:33 +02:00
ea4c3b613a update sentry features to remove openssl
update changelog

Add docker build test on PR
2020-05-25 12:24:10 +02:00
8f990b2079 Merge pull request #702 from meilisearch/remove-open-ssl
Update sentry features to remove openssl
2020-05-25 12:22:22 +02:00
82fa060bc8 update changelog 2020-05-25 11:30:31 +02:00
a7cda7f950 update sentry features to remove openssl 2020-05-25 11:29:59 +02:00
59ed3e88b3 Merge pull request #695 from meilisearch/fix-dashboard
update normalize_path middleware
2020-05-23 15:19:08 +02:00
6d33376595 update Changelog 2020-05-23 12:20:28 +02:00
92897e7ad0 add test 2020-05-23 12:20:28 +02:00
92ce0f5c2b update normalize_path middleware 2020-05-23 12:20:27 +02:00
c946d144ce Merge pull request #706 from meilisearch/bump-fst-version
Bump the fst crate version to 0.4
2020-05-22 21:49:27 +02:00
bc7b0a38fd Use fst 0.4.4 in the project 2020-05-22 15:01:55 +02:00
6c87723b19 Bump the fst crate to 0.4.4 2020-05-22 15:01:35 +02:00
cd1679dea7 Merge pull request #684 from MarinPostma/max-payload-size
allow max payload size override
2020-05-22 11:35:15 +02:00
c5daa4a256 fix tests 2020-05-22 10:38:14 +02:00
df2eed1be3 update changelog 2020-05-22 10:38:12 +02:00
5193382b07 allow max payload size override 2020-05-22 10:37:41 +02:00
e40d9e7462 Merge pull request #696 from meilisearch/reduce-document-id-size
Reduce document id size from 64bits to 32bits
2020-05-20 18:58:12 +02:00
ddeb5745be Refactor a little bit 2020-05-20 17:01:57 +02:00
a60e3fb1cb Rename user ids into external docids 2020-05-20 15:08:56 +02:00
7bbb101555 Prefix the attributes_for_faceting key name 2020-05-20 14:19:00 +02:00
788e2202c9 Reduce the DocumentId size from 64 to 32bits 2020-05-20 14:19:00 +02:00
3bca31856d Discover and remove documents ids 2020-05-20 14:18:59 +02:00
5bf15a4190 Compute and merge discovered ids 2020-05-20 14:18:59 +02:00
016bfa391b Introduce internal and user ids put and get methods 2020-05-20 14:18:59 +02:00
e6a7521610 Introduce the DiscoverIds and DocumentsIds types 2020-05-20 14:18:59 +02:00
3e84f916b6 Merge pull request #697 from ndudnicz/typo/route-health-healtbody
typo in route/health.rs: HealtBody -> HealthBody
2020-05-20 14:18:38 +02:00
2d2c933611 typo in route/health.rs: HealtBody -> HealthBody 2020-05-20 11:57:44 +02:00
d30874c912 Merge pull request #691 from meilisearch/rewrite-indexer
Rewrite and simplify every indexer function
2020-05-19 17:13:53 +02:00
e2b115f3a9 Improve Number extraction/conversion function 2020-05-19 16:51:33 +02:00
ae30ee2ade Clean up some comments and variable names 2020-05-19 16:51:33 +02:00
3026840530 Introduce an index_document helper function 2020-05-19 16:51:33 +02:00
d300d788c7 Make the compute_document_id validate the id 2020-05-19 16:51:33 +02:00
2828b5fa19 Move the helper function to their own module 2020-05-19 16:51:33 +02:00
25b3c9a057 Remove the serde ExtractDocumentId struct 2020-05-19 16:51:33 +02:00
2558ce9a00 Export the value_to_string helper function 2020-05-19 16:51:33 +02:00
65ed2dcc1b Remove the serde ConvertToNumber 2020-05-19 16:51:32 +02:00
5e063da14f Remove the serde Indexer 2020-05-19 16:51:32 +02:00
615825b9fd Remove the serde Serializer 2020-05-19 16:51:32 +02:00
3502d8b48c Merge pull request #680 from MarinPostma/better-welcome
improve welcome message
2020-05-19 15:59:36 +02:00
a1d20ea8c8 remove keys in welcome message 2020-05-19 15:32:49 +02:00
ef7b1cc829 update changelog 2020-05-19 15:32:49 +02:00
2c9776c3e8 improve welcome message 2020-05-19 15:32:49 +02:00
3743d8ca5b Merge pull request #690 from MarinPostma/bump-sentry
bump sentry
2020-05-19 14:30:27 +02:00
e222e20517 update changelog 2020-05-19 10:29:38 +02:00
10d7dc75f3 update sentry 2020-05-19 10:27:55 +02:00
f6300497f7 Merge pull request #694 from curquiza/arm
Take architecture into account in download-latest
2020-05-18 22:15:56 +02:00
1cae6c18b2 Take architecture into account in download-latest 2020-05-18 18:15:50 +02:00
1fef613024 Merge pull request #685 from curquiza/hotfix-download-script
HOTFIX: the link in download-latest.sh
2020-05-15 22:37:49 +02:00
047407342b Fix the link in download-latest.sh 2020-05-15 17:49:33 +02:00
e2b71b0e57 Merge pull request #679 from MarinPostma/highlight-align-fix
Highlight align fix
2020-05-14 14:57:54 +02:00
9c1de3adfc add tests 2020-05-14 12:57:38 +02:00
54707e4e24 update changelog 2020-05-14 12:57:36 +02:00
a94ee167fc fix unaligned highlight 2020-05-14 12:56:15 +02:00
ce789682cc remove unnecessary clone 2020-05-14 12:56:15 +02:00
c95d4e48a5 Merge pull request #681 from MarinPostma/sentry-release-only
enables debug without sentry
2020-05-14 11:33:22 +02:00
1f35db2ddc update changelog 2020-05-14 10:56:57 +02:00
be1320d21d enables debug without sentry 2020-05-14 10:54:15 +02:00
308c652b30 Merge pull request #678 from erlend-sh/do-button
DigitalOcean button
2020-05-13 16:08:40 +02:00
80ab82897e DigitalOcean button 2020-05-13 15:41:31 +02:00
71578a5462 Merge pull request #676 from MarinPostma/facet-count
Facet count
2020-05-13 12:14:39 +02:00
eca39ad7bf update changelog 2020-05-13 11:48:34 +02:00
28a3e4005a adds test 2020-05-13 11:48:34 +02:00
f38d0d731f style fix 2020-05-13 11:48:34 +02:00
5051a796a0 error handling 2020-05-13 11:48:34 +02:00
869b6019c6 fix tests 2020-05-13 11:48:34 +02:00
347045adf2 smarter field_id name passing 2020-05-13 11:29:46 +02:00
e5126af458 enables facet count 2020-05-13 11:29:46 +02:00
effbb7f7f1 add sort result struct 2020-05-12 18:22:24 +02:00
a88f6c3241 Merge pull request #661 from meilisearch/add-actix-middleware
Add actix middleware
2020-05-12 16:04:29 +02:00
b96da94f92 fix issues from review
Co-authored-by: Clément Renault <clement@meilisearch.com>
2020-05-12 15:42:17 +02:00
305665cd42 Update CHANGELOG.md
Co-authored-by: Clément Renault <clement@meilisearch.com>
2020-05-12 15:34:08 +02:00
f2b7aea16c add tests 2020-05-12 15:34:08 +02:00
71e3b5bc11 update changelog 2020-05-12 15:34:08 +02:00
cd12e2717c add errors on content-type and add more serde debug 2020-05-12 15:34:08 +02:00
7a8e64be30 add normalize_slashes middleware 2020-05-12 15:34:07 +02:00
36abcb3976 Merge pull request #660 from curquiza/fix-release-process
Update release process for stable releases
2020-05-12 11:50:04 +02:00
5dc7d498bd Update release process for stable releases 2020-05-12 11:10:55 +02:00
e9c5928fd3 Merge pull request #674 from meilisearch/fix-windows-ci
Fix the Windows CI
2020-05-11 22:45:59 +02:00
48e94b4372 Enable jemalloc only on linux 2020-05-11 21:24:35 +02:00
e3e32e7f2b Fix the Windows CI by using .exe 2020-05-11 18:19:12 +02:00
b215e9e848 Merge pull request #631 from MarinPostma/facet-filters
Facet filters
2020-05-11 18:16:34 +02:00
44ae21671c update changelog 2020-05-11 17:42:33 +02:00
0ce2666d2f tests 2020-05-11 17:38:52 +02:00
d7f099d3ba enables faceted search 2020-05-11 17:38:52 +02:00
e07fe017c1 document update 2020-05-11 17:38:52 +02:00
270c7b0288 facet settings 2020-05-11 16:12:13 +02:00
59c67f6bc8 setting up facets 2020-05-11 16:12:13 +02:00
dd08cfc6a3 Merge pull request #664 from meilisearch/add-sentry-probe
add sentry probe
2020-05-07 18:16:42 +02:00
b89e76ccb4 add sentry as default feature 2020-05-07 17:36:33 +02:00
57e515d5e2 update changelog 2020-05-07 17:36:33 +02:00
b62945961f add sentry probe 2020-05-07 17:36:33 +02:00
61ce9486fc Merge pull request #662 from meilisearch/database-option-default
implement default on DatabaseOptions
2020-05-07 17:09:13 +02:00
2e55457ecc implement default on DatabaseOptions 2020-05-07 15:40:44 +02:00
fe21a43364 Merge pull request #654 from tpayet/fix-docker-expose-port
Add EXPOSE port to Dockerfile
2020-05-04 17:15:07 +02:00
dee12c9c4d Add EXPOSE port to Dockerfile 2020-05-04 12:11:16 +02:00
bd1929695c Merge pull request #651 from meilisearch/add-code-of-conduct-1
Create CODE_OF_CONDUCT.md
2020-05-01 11:47:26 +02:00
7ba92da5e5 Create CODE_OF_CONDUCT.md 2020-04-30 20:16:02 +02:00
4ae2097cdc Merge branch 'update/readme-rust-ver' of https://github.com/djKooks/MeiliSearch into update/readme-rust-ver 2020-04-30 21:09:38 +09:00
1f2ab71bb6 Update requisites for source build
Update requisites for source build (Rust version)

Fix README
2020-04-30 21:08:55 +09:00
f3b1261e2f Merge pull request #649 from hkrutzer/patch-1
Update the link to FAQ in README
2020-04-30 13:58:43 +02:00
b47f7dd4c7 Update the link to FAQ in README 2020-04-30 13:12:55 +02:00
674476155a Merge pull request #647 from MarinPostma/master
fix database options
2020-04-29 23:00:34 +02:00
2e3a765dac fix database options 2020-04-29 22:29:09 +02:00
382e300326 Merge pull request #646 from Wazner/configurable-map-size
Add support for configuring lmdb map size
2020-04-29 14:32:03 +02:00
dff36eaef4 Fix example not compiling 2020-04-29 11:04:09 +02:00
bdd088830a Add DatabaseOptions arg to query_builder test 2020-04-29 10:12:25 +02:00
17401cfbe9 Fix compilation error in unit tests 2020-04-29 09:21:07 +02:00
c4287cdfac Add support for configuring lmdb map size 2020-04-29 09:21:07 +02:00
9c0956049a Update requisites for source build
Update requisites for source build (Rust version)

Fix README
2020-04-29 08:48:17 +09:00
899559a060 Merge pull request #601 from meilisearch/tide-to-actix-web
Change tide to actix-web
2020-04-28 18:43:06 +02:00
99866ba484 fix test after rebase 2020-04-28 17:54:50 +02:00
36c7fd0cf1 fix requested changes 2020-04-28 17:47:04 +02:00
ea308eb798 remove timeout search query parameter
fix requested changes
2020-04-28 17:46:03 +02:00
bc8ff49de3 update authorization middleware with actix-web-macros 2020-04-28 17:46:03 +02:00
e74d2c1872 simplify error handling by impl errors traits on ResponseError 2020-04-28 17:46:03 +02:00
4bd7e46ba6 revert get document method 2020-04-28 17:46:03 +02:00
ff3149f6fa remove search multi index 2020-04-28 17:46:03 +02:00
27b3b53bc5 update tests & fix the broken code 2020-04-28 17:46:03 +02:00
5e2861ff55 prepare architecture for tests 2020-04-28 17:45:22 +02:00
38d41252e6 add authentication middleware 2020-04-28 17:45:22 +02:00
5fed155f15 add middleware 2020-04-28 17:45:22 +02:00
6a1f73a304 clippy + fmt 2020-04-28 17:45:22 +02:00
22fbff98d4 add stop-word and synonym endpoints 2020-04-28 17:45:22 +02:00
85833e3a0a add setting endpoint 2020-04-28 17:45:22 +02:00
b08f6737ac change param tuples by struct
add settings endpoint; wip
2020-04-28 17:45:22 +02:00
5ec130e6dc cleanup 2020-04-28 17:45:22 +02:00
6c581fb3bd add index endpoint & key endpoint & stats endpoint 2020-04-28 17:45:21 +02:00
73b5c87cbb add search endpoint; warn unwrap 2020-04-28 17:45:21 +02:00
0aa16dd3b1 add key endpoint 2020-04-28 17:45:21 +02:00
540308dc63 add interface endpoint & health endpoint 2020-04-28 17:45:21 +02:00
6d6c8e8fb2 Start change http server; finish document endpoint 2020-04-28 17:45:20 +02:00
6cc80d2565 Merge pull request #641 from meilisearch/bump-version
Bump version to v0.10.1
2020-04-28 16:12:01 +02:00
5265fafd7a Update the changelog for the release 2020-04-28 15:55:29 +02:00
287226b609 Bump crates versions to v0.10.1 2020-04-28 15:55:29 +02:00
7119b21b46 Merge pull request #640 from MarinPostma/fix_filter_parenthesis
fixes parenthesis
2020-04-28 11:10:45 +02:00
d1f1bfe071 fix floats bug
Update CHANGELOG.md

Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-04-28 10:44:07 +02:00
812465e014 fixes parenthesis
adds tests
2020-04-27 22:29:29 +02:00
86bab04997 Merge pull request #635 from lironhl/bug_fix/highlight_longest_area
Bug fix/highlight longest area
2020-04-27 19:34:34 +02:00
867bd1ffd7 Tests for the new highlight algorithm 2020-04-27 20:10:40 +03:00
16e075983d Highlights result with longest match 2020-04-27 20:09:12 +03:00
1b7a6687c8 Update README.md (#630)
* Update README.md

* Update README.md

Co-Authored-By: Clément Renault <renault.cle@gmail.com>

Co-authored-by: Clément Renault <renault.cle@gmail.com>
2020-04-24 10:11:27 +02:00
8c41fb2b49 Merge pull request #623 from lironhl/bug_fix/chrome-content-overflow
Fixes the content overflow in the web interface in chrome.
2020-04-22 13:47:33 +02:00
c1797c4e75 add overflow-wrap css property to content class 2020-04-22 11:33:18 +03:00
1c094346e2 Merge pull request #616 from MarinPostma/array-filter
filters on arrays
2020-04-21 10:58:21 +02:00
cd3c0d750c Add support for filtering on arrays of strings
update changelog

Update CHANGELOG.md

Co-Authored-By: Clément Renault <renault.cle@gmail.com>

fix requested changes
2020-04-21 10:33:57 +02:00
3d2f04a7af Added GitHub discussions 2020-04-20 10:54:08 +02:00
10d047a636 Merge pull request #607 from tpayet/add-separators-tokenizer
Add '@' char as a tokenizer separator
2020-04-16 12:18:11 +02:00
10211737c5 Add '@' char as a tokenizer separator
Update CHANGELOG.md

Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-04-16 11:04:03 +02:00
45e55bc054 Merge pull request #608 from matboivin/minor-changes
Minor changes
2020-04-15 20:32:25 +02:00
1892ba8973 Minor changes 2020-04-15 16:04:50 +02:00
b7c287ffb7 Merge pull request #604 from meilisearch/personal-token-binaries
Use a personal access token to publish release binaries
2020-04-10 22:51:30 +02:00
457b645f3c Use a personal access token to publish bins
The default GITHUB_TOKEN expires after 1h
2020-04-10 18:28:28 +02:00
0185ffad89 Merge pull request #603 from meilisearch/bump-version
Bump version to v0.10
2020-04-10 15:56:56 +02:00
08edc9d5d0 Update the changelog to refer to the v0.10 2020-04-10 15:43:20 +02:00
979bea0327 Bump MeiliSearch version to v0.10 2020-04-10 15:43:03 +02:00
c7ea9f4cf3 Merge pull request #580 from meilisearch/rework-highlight-crop
Rework query highlight/crop parameters
2020-04-10 13:27:35 +02:00
233651bef8 update changelog 2020-04-10 12:26:53 +02:00
c6fb591348 add * on attributesToRetrieve 2020-04-10 12:26:34 +02:00
644e78df89 Add some tests 2020-04-10 12:26:34 +02:00
500eeca3fb Rework query highlight/crop parameters 2020-04-10 11:12:58 +02:00
c418abe92d Merge pull request #602 from meilisearch/fix-tide-cors
fix tide cors
2020-04-10 10:29:55 +02:00
2fdf33a006 update changelog 2020-04-10 10:13:43 +02:00
c3cf0cade9 fix tide cors 2020-04-10 10:13:43 +02:00
210bc68ced Merge pull request #592 from MarinPostma/query-filters
Implements query filters
2020-04-09 18:43:11 +02:00
193bded4b7 fixes broken tests 2020-04-09 18:26:48 +02:00
8f4d090f34 update changelog 2020-04-09 17:20:37 +02:00
a0a481697b replace lazy_static with once_cell 2020-04-09 17:13:34 +02:00
c3d5778aae allows to get names from schema 2020-04-09 17:13:34 +02:00
3e031d8297 adds error handling and integration 2020-04-09 17:13:34 +02:00
83f50914ec tests 2020-04-09 17:13:34 +02:00
d3916f28aa implements filter logic 2020-04-09 17:13:34 +02:00
dcf1096ac3 implements parser 2020-04-09 17:13:31 +02:00
66568a913c logic skeleton for filter and parser 2020-04-09 16:08:05 +02:00
6db6b40659 Merge pull request #594 from meilisearch/fix-stop-words
Fixes the stop words and words fst generation
2020-04-07 11:06:39 +02:00
780ac5cfd3 Update the CHANGELOG.md 2020-04-06 19:47:57 +02:00
d24209f5a7 Adds a test to check that stop words are correctly handled 2020-04-06 19:47:57 +02:00
29d021ad4d Fixes the stop words and words fst generation 2020-04-06 18:53:02 +02:00
eb28276923 Merge pull request #589 from meilisearch/change-logo
change logo format
2020-04-05 12:18:36 +02:00
0679ec4f41 change logo format 2020-04-05 11:09:38 +02:00
1b5b71869f Merge pull request #588 from techieshark/patch-1
Fix typo in README
2020-04-05 10:35:30 +02:00
6681681a76 Merge branch 'master' into patch-1 2020-04-05 10:34:10 +02:00
83d8dc0d2b Merge pull request #587 from sgummaluri/fix_first_all_updates_call_after_indexing
Fix for 'Update Status after the first update comes up to be empty (#542)'
2020-04-05 10:32:27 +02:00
49499ca54d Fix typo in README
Non-plural would be more usual in English. I assume "performances" was a typo.
2020-04-05 17:34:12 +10:00
16a63c74ea Modifying the test name for better readability 2020-04-05 00:26:09 +05:30
b4df54197b Slight grammar modification to the changelog message 2020-04-05 00:17:47 +05:30
a28b428074 Update changelog to make the message more readable 2020-04-05 00:14:58 +05:30
e5a336a042 Fix for 'First update does not appear before being processed' #542 2020-04-04 23:18:43 +05:30
5e5702833c Merge pull request #583 from meilisearch/gha-ignore-changelog
Ignores the CHANGELOG when a specific label is set
2020-04-03 15:47:20 +02:00
03063cf349 Ignores the CHANGELOG when the label asks for it 2020-04-03 15:06:25 +02:00
241b842ef7 Merge pull request #581 from meilisearch/publish-armv8-binary
Publish an aarch64 binary on releases
2020-04-03 11:56:35 +02:00
184c290773 Update the CHANGELOG 2020-04-03 10:42:19 +02:00
5c638184e9 Publish an aarch64 (aka ARMv8) binary on releases 2020-04-03 10:39:28 +02:00
3a88910a24 Merge pull request #579 from meilisearch/update-deps
Update dependencies
2020-04-02 20:24:23 +02:00
eddd453564 Makes http-service a dev-dependency 2020-04-02 18:36:35 +02:00
38c43759bb Update most of the dependencies 2020-04-02 18:36:04 +02:00
26225a2fdf Merge pull request #576 from ppamorim/fix-bench
Fix benchmark
2020-04-02 12:23:31 +02:00
9950fffb6f Simplify imports of std::fs and std::io, remove unneeded space, remove UpdateState 2020-04-02 11:02:19 +01:00
f5d57c9dce Replace the toml reader with the JSON settings reader, directly parse the data to SettingsUpdate, Update CHANGELOG 2020-04-02 11:01:56 +01:00
bc9c80a5ee Merge pull request #577 from meilisearch/change-slogan
Change the slogan
2020-04-01 16:35:59 +02:00
702f7445ec Change the slogan 2020-04-01 16:34:24 +02:00
dcb93e3166 Merge pull request #575 from ppamorim/nested-seq
Support nested-seq
2020-04-01 14:16:47 +02:00
02b79e0040 Modified JSON to add move conditions 2020-04-01 12:59:40 +01:00
88b71fb6c4 Update CHANGELOG to add seq support 2020-04-01 12:59:40 +01:00
95bb443430 Add empty seq 2020-04-01 12:59:40 +01:00
1b47a10e89 Add support for seq values 2020-04-01 12:59:40 +01:00
006e54109b Merge pull request #570 from tpayet/clean-readme-heroku
Removing Heroku deployment from README
2020-04-01 11:35:29 +02:00
7eb6333933 Removing Heroku deployment from README 2020-04-01 11:04:16 +02:00
065da3d613 Merge pull request #572 from ppamorim/ignore-null-nested-obj
Add support of nested null
2020-03-31 16:33:16 +02:00
e698fa0b63 Add issue index in the CHANGELOG 2020-03-31 15:06:04 +01:00
8b662be42b Update CHANGELOG.md
Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-03-31 15:03:35 +01:00
52a4f7cd23 Update readme 2020-03-31 14:41:22 +01:00
690b8e0dd0 Replace .toString with String::new() 2020-03-31 14:01:44 +01:00
bc6d86c8ce serialize_unit returns an empty string 2020-03-31 13:51:12 +01:00
fbf7117d6a Rename function, add trailing line, replace JSON string with macro 2020-03-31 13:13:09 +01:00
51472142c6 Add test to check if nested null will be ignored 2020-03-31 12:00:13 +01:00
91d1bd5903 Merge pull request #569 from meilisearch/ignore-bool-nested-obj
Make the engine index booleans
2020-03-31 11:01:26 +02:00
69aee870da Make the engine index booleans
The engine will see the values as the text "true" and "false"
2020-03-31 10:39:58 +02:00
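As an illustrative sketch of the behaviour described above (not MeiliSearch's actual indexer code), a boolean field can be flattened to the strings "true"/"false" before indexing; the helper name and handling of the other variants are assumptions made for the example.

```rust
// Hedged sketch: turn a JSON boolean into indexable text.
use serde_json::{json, Value};

fn value_to_indexable_string(value: &Value) -> Option<String> {
    match value {
        Value::Bool(b) => Some(b.to_string()),   // booleans become "true"/"false"
        Value::String(s) => Some(s.clone()),
        Value::Number(n) => Some(n.to_string()),
        _ => None,                               // null/arrays/objects handled elsewhere
    }
}

fn main() {
    let doc = json!({ "id": 1, "released": true });
    assert_eq!(
        value_to_indexable_string(&doc["released"]).as_deref(),
        Some("true")
    );
}
```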
3b25bd71ab Merge pull request #567 from meilisearch/fix-not-dedup-matches
Construct a Set using the from_dirty method
2020-03-31 10:15:03 +02:00
c18e907f96 Construct a Set using the from_dirty method
This commit fixes #566 by ensuring that the slice of matches is
ordered and deduplicated.
2020-03-30 20:56:30 +02:00
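To make the ordering and deduplication guarantee concrete, here is a small, self-contained sketch using the sdset crate the engine depends on; the numeric "matches" are made up for illustration and are not the engine's real match type.

```rust
// Illustrative only: `from_dirty` sorts and deduplicates an arbitrary Vec
// before wrapping it, so later set operations can rely on that invariant.
use sdset::SetBuf;

fn main() {
    // Hypothetical matches collected in arbitrary order, with duplicates.
    let dirty = vec![3u32, 1, 2, 3, 1];

    let set = SetBuf::from_dirty(dirty);

    let items: Vec<u32> = set.iter().copied().collect();
    assert_eq!(items, vec![1, 2, 3]);
    println!("{:?}", items);
}
```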
e3808b8694 Merge pull request #558 from matboivin/update-readme
Update readme
2020-03-28 10:46:00 +01:00
116b301359 Add Slack 2020-03-28 10:28:48 +01:00
3ed510b78e Minor fix 2020-03-28 10:28:30 +01:00
565c46fdd4 Merge pull request #548 from tendant/master
Stringify nested JSON object
2020-03-27 19:57:34 +01:00
b0255076de Merge branch 'master' into master 2020-03-27 19:43:02 +01:00
67348f2251 Merge pull request #555 from meilisearch/add-changelog
Add a CHANGELOG.md file
2020-03-27 19:33:39 +01:00
227bc716d8 Add a Github Action to ensure the CHANGELOG is updated in PRs 2020-03-27 19:12:50 +01:00
c3467313e5 Add a CHANGELOG to help the documentation follow the engine updates 2020-03-27 19:01:46 +01:00
c82eed010a Merge pull request #543 from MarinPostma/aligned-search-crops
adds support for aligned crop in search result
2020-03-27 18:58:45 +01:00
158c2b5382 tests aligned crop 2020-03-27 18:38:41 +01:00
2d1d59acb7 adds support for aligned cropping with cjk 2020-03-27 18:38:41 +01:00
0088de9802 adds support for aligned crop in search result 2020-03-27 18:38:41 +01:00
f49d2bca64 Merge branch 'master' into master 2020-03-27 17:07:06 +01:00
b7273c450f Merge pull request #545 from matboivin/update-readme
Update readme
2020-03-27 11:49:11 +01:00
4130fddcc8 Center-align crates demo gif 2020-03-27 11:28:57 +01:00
4f05045acb Center-align web interface gif 2020-03-27 11:20:30 +01:00
bc16c9beb7 Update gif links 2020-03-27 11:17:31 +01:00
0af9f6cf6e Add movies gif and move crates demo gif 2020-03-27 11:17:17 +01:00
022aeac808 Stringify nested JSON object 2020-03-26 18:45:57 -07:00
20461ccf36 Add gif
Co-Authored-By: cvermand <33010418+bidoubiwa@users.noreply.github.com>
2020-03-26 21:56:27 +01:00
7297396162 Update performance 2020-03-26 19:22:59 +01:00
c15deb41b0 Remove How it works (deep dive) section 2020-03-26 16:26:43 +01:00
cb2a08db7e Center-align badges 2020-03-26 16:24:03 +01:00
67703b5ea2 Remove Notes about system allocator 2020-03-26 16:17:47 +01:00
c445abb982 Replace a by an
Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-03-26 16:14:52 +01:00
38d97fa339 Change phrasing 2020-03-26 13:48:08 +01:00
d45f0819be Remove repetitive word 2020-03-26 13:25:57 +01:00
9375d0efbe Fix details 2020-03-26 13:23:20 +01:00
2291c33074 Align with quick start guide 2020-03-26 13:18:11 +01:00
0a216066f4 Split commands 2020-03-26 13:13:02 +01:00
eea2a9cfc3 Add contact 2020-03-26 13:10:44 +01:00
33c2b9c5ff Add social 2020-03-26 13:04:23 +01:00
1129812e6e Update link formatting 2020-03-26 12:42:41 +01:00
b1b0c6b4b3 Add useful links 2020-03-26 12:31:58 +01:00
6ae3f2f8b9 Remove line under logo 2020-03-26 12:24:02 +01:00
f8d594e7ea Update formatting and add logo 2020-03-26 12:23:09 +01:00
38c3aa542f Add logo image 2020-03-26 12:05:53 +01:00
f3382125e1 Merge branch 'master' of git://github.com/meilisearch/MeiliSearch into update-readme 2020-03-26 12:01:40 +01:00
592a438ae8 Rephrase the readme 2020-03-26 11:59:40 +01:00
d84a86897c Merge pull request #540 from meilisearch/publish-arm-binaries
Publish an ARMv7 binary for the releases
2020-03-26 11:14:48 +01:00
88c063e887 Publish an ARMv7 binary for the releases 2020-03-26 10:51:47 +01:00
ba8a410d4c Merge pull request #539 from emresaglam/html-sanitize
html sanitize
2020-03-25 21:33:03 +01:00
451061f4b8 Merge branch 'master' into html-sanitize 2020-03-25 13:06:18 -07:00
ae17aa4955 Update meilisearch-http/public/interface.html
bypassing the <em> tag after encoding the "<>"

Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-03-25 12:48:59 -07:00
f589d07706 Merge pull request #544 from meilisearch/add-slack-link
Add a slack badge on readme
2020-03-25 20:29:00 +01:00
3f343ebfdb Update README.md 2020-03-25 20:22:04 +01:00
95ea3e39d2 Merge pull request #541 from MarinPostma/search-result-count
Adds number of hits in search result
2020-03-25 15:34:06 +01:00
a6dcd7a421 fixes tests
fixes tests impacted by the signature change of query
2020-03-25 15:17:20 +01:00
fa9b7dd29f removes useless deserializer for SearchResult 2020-03-25 13:59:15 +01:00
fd65cf9dcb populates exhaustive number of hits 2020-03-25 12:44:38 +01:00
6e9d7f94d4 adds exhaustive number hits to search result 2020-03-25 12:11:37 +01:00
6151bc262f Added the missing function call 2020-03-24 11:03:16 -07:00
b62f9fabf2 Update meilisearch-http/public/interface.html
Co-Authored-By: Clément Renault <renault.cle@gmail.com>
2020-03-24 10:39:53 -07:00
86e1ba871f html sanitize
Added a function to sanitize the html
This is for browser side only.
2020-03-24 08:37:56 -07:00
a6ac902bf4 Merge pull request #534 from curquiza/homebrew-automatization
Automate homebrew publish
2020-03-20 16:14:41 +01:00
4cdb67c249 Automate homebrew publish 2020-03-20 12:14:08 +01:00
29622e11f5 Merge pull request #533 from meilisearch/bump-to-v0.9.0
Bump the workspace crates to 0.9.0
2020-03-19 13:50:55 +01:00
3ca8db2cc1 Bump the workspace crates to 0.9.0 2020-03-19 11:56:23 +01:00
cc5eb885ea Merge pull request #531 from meilisearch/bump-rc
Bump the workspace crates to 0.9.0-rc.1
2020-03-16 18:09:11 +01:00
f6972ec682 Bump the workspace crates to 0.9.0-rc.1 2020-03-16 16:58:20 +01:00
cfe21f7b02 Merge pull request #530 from meilisearch/fix-ranking-rules-inference
Ranking fields should be stored and indexed by default
2020-03-16 16:53:06 +01:00
2d82f1b655 ranking fields should be stored and indexed by default; fix #521 2020-03-16 16:19:23 +01:00
cf6e481c14 Merge pull request #520 from meilisearch/fix-http-issues
Fix http issues
2020-03-11 15:21:50 +01:00
7be376721c global settings update makes a partial update; fix #516 2020-03-11 14:42:58 +01:00
ce0e8415d5 adding primary-key when adding documents does not work; fix #519 2020-03-11 14:12:38 +01:00
4ccf1d10bd error message when impossible to infer the primary-key; fix #517 2020-03-11 12:27:42 +01:00
c25641ff2d fix that AcceptNewFields does not take into account the primary-key; fix #518 2020-03-11 12:00:40 +01:00
14c1aba6c7 Merge pull request #509 from meilisearch/fix-internal-schema
Fix internal schema
2020-03-10 16:25:36 +01:00
8204d961de allow api key in header when no master-key is set; fix #515 2020-03-10 15:59:16 +01:00
ef3bcd65ab fix comments from review 2020-03-10 15:59:11 +01:00
b06e33f3d3 fix errors on http parameter naming 2020-03-10 12:08:10 +01:00
179969a9e2 fix tests + fmt 2020-03-10 11:29:56 +01:00
c984d8d5a5 rename identifier into primaryKey; fix #514 2020-03-09 18:45:29 +01:00
8ffa80883a remove the unused function 2020-03-09 18:45:29 +01:00
86c3482cbd review the internal schema to allow creating a schema without an identifier; fix #513 2020-03-09 18:45:20 +01:00
16a99aa95e update to infer identifier; fix #498 2020-03-06 10:55:25 +01:00
6d86968c4c Merge pull request #496 from meilisearch/small-fixes-before-0.9
Fix some issues before v0.9
2020-03-06 10:28:45 +01:00
8df6d6e954 fix error 500 when sending bad rankingRules; fix #500 2020-03-06 10:15:19 +01:00
8aeddec982 remove the route to get identifier on settings; fix #502 2020-03-06 10:15:19 +01:00
f4ae0844ab replace the index-new-field route with accept-new-fields; fix #503 2020-03-06 10:15:19 +02:00
d56968cb23 default values of synonyms and stop-words; fix #499 fix #504 2020-03-06 10:15:19 +01:00
c5b6e641a4 index UID format; fix #497 2020-03-06 10:15:19 +01:00
041eed2a06 no id returned; fix #492 2020-03-06 10:15:19 +01:00
54c675e195 fix delete-batch route; #493 2020-03-06 10:15:19 +01:00
81ce90e57f update test 2020-03-06 10:15:19 +01:00
6016f2e941 change wording of custom ranking rules dsc -> desc; #490 2020-03-06 10:15:19 +01:00
4d27318b72 remove unnecessary comment on env Opt; #491 2020-03-06 10:15:11 +01:00
decce4d8e4 change route /keys/ -> /keys; #495 2020-03-05 15:33:02 +01:00
1cb9f75026 Merge pull request #507 from meilisearch/fix-documents-fields-order-inference
Fix the inference of the documents searchable fields
2020-03-04 14:16:36 +01:00
5e31d28759 Fix the inference of the documents searchable fields 2020-03-03 20:54:17 +01:00
2b780ab2c5 Merge pull request #489 from meilisearch/fix-rank-distinct
Use distinct on search
2020-03-02 16:34:27 +01:00
a2f0f95337 use distinct on search 2020-03-02 16:19:41 +01:00
72450c765d Merge pull request #484 from meilisearch/fix-reindex-by-chunk
Stop reindexing by chunk during complete reindexing
2020-02-28 18:29:25 +01:00
250aeaa86c stop reindexing by chunk during complete reindexing 2020-02-28 11:49:12 +01:00
06ace88901 Merge pull request #482 from meilisearch/review-settings-endpoint
Review settings endpoint
2020-02-28 11:39:38 +01:00
47009615ee rename words_position to wordsPosition; fix #483 2020-02-27 16:24:49 +01:00
dda08d60d2 cargo fmt 2020-02-27 14:33:57 +01:00
f182afc50b update tests 2020-02-27 11:30:23 +01:00
bb5d931f16 rename criterions on settings route; fix #480 2020-02-27 11:30:22 +01:00
3c74e71d4f show default ranking rules if user reset them; fix #476 2020-02-27 11:30:17 +01:00
79e07fa852 reset value of searchable and displayed attributes; fix #473 2020-02-27 11:04:39 +01:00
aa95c26e07 update tests 2020-02-27 11:04:39 +01:00
2eb6f81c58 rename ranking_distinct to distinct_attribute; fix #474 2020-02-27 11:04:39 +01:00
a067a1b16b replace index_new_fields to accept_new_fields; fix #475 2020-02-27 11:04:38 +01:00
1df51c52e0 Merge pull request #458 from meilisearch/rename-exactness-criterion
Rename the Exact criterion into Exactness
2020-02-25 16:23:57 +01:00
96248d9bfa Change the exactness criterion in the tests 2020-02-25 14:24:15 +01:00
9d167c08f4 Rename the Exact criterion into Exactness 2020-02-25 14:16:55 +01:00
8e6560d102 Merge pull request #464 from meilisearch/simplify-keys
Simplify keys & add launcher resume
2020-02-17 13:59:41 +01:00
ad83c3ab5a add launch resume & environment 2020-02-17 10:13:08 +01:00
257b7b4df4 introduce new key management 2020-02-14 12:54:07 +01:00
5ac757a5fd Merge pull request #465 from meilisearch/fix-un-rankable-fields
fix un-rankable fields errors
2020-02-14 11:27:12 +01:00
2d7a1bfce0 fix un-rankable fields errors; fix #463 2020-02-14 10:34:33 +01:00
3845b89a16 Merge pull request #441 from meilisearch/issues-0.9.0
Stabilize http endpoint
2020-02-13 15:57:37 +01:00
ce8e12c7c5 update tests 2020-02-13 12:24:30 +01:00
4986adc186 move identifier from settings to index; fix #470 2020-02-12 17:00:14 +01:00
dc9ca2ebc9 fixes for review 2020-02-12 16:51:14 +01:00
40d7396d90 update tests for settings 2020-02-11 15:28:01 +01:00
559c2f8907 Add stop words on query 2020-02-11 15:28:00 +01:00
dc6907e748 rebase from master 2020-02-11 15:28:00 +01:00
2143226f04 setup clippy and make a pass on code 2020-02-11 15:28:00 +01:00
ea2a64a504 remove unnecessary settings routes 2020-02-11 15:28:00 +01:00
a5b0e468ee fix for review 2020-02-11 15:28:00 +01:00
14b5fc4d6c cargo fmt 2020-02-11 15:28:00 +01:00
f498bfed51 add test on /settings/ranking 2020-02-11 15:27:59 +01:00
50a9825a0f fix some uses cases on settings 2020-02-11 15:27:59 +01:00
5c49f08bb2 update settings routes 2020-02-11 15:27:59 +01:00
bbf9f41a04 add cors 2020-02-11 15:27:59 +01:00
6a32432b01 add /settings/index-new-fields routes 2020-02-11 15:27:59 +01:00
037724576e update tests 2020-02-11 15:27:59 +01:00
10b8a0ab00 add request middleware 2020-02-11 15:27:59 +01:00
faf0dd2f44 do not show matches on undesired fields 2020-02-11 15:27:58 +01:00
585bba43a0 set new attributes indexed if needed 2020-02-11 15:27:58 +01:00
b1528f9466 allow seeing highlights with matches and crop; fix #450 #449 2020-02-11 15:27:58 +01:00
7a491a64c0 add test 2020-02-11 15:27:58 +01:00
57503ad9bf add test on search 2020-02-11 15:27:58 +01:00
c276dda305 run cargo fmt 2020-02-11 15:27:58 +01:00
9c0497c419 change the way settings are show in updates 2020-02-11 15:27:58 +01:00
b33dac9faa add test for search + update ci for test in release 2020-02-11 15:27:57 +01:00
f77f38dfa0 fix update system 2020-02-11 15:27:57 +01:00
58fe87067b finish settings 2020-02-11 15:27:57 +01:00
dbba310770 squash me 2020-02-11 15:27:57 +01:00
6deb481589 definitely remove attributes_ranked on settings; auto create it with ranking_rules 2020-02-11 15:27:57 +01:00
036977bfe4 add the possibility to totally clear the schema 2020-02-11 15:27:57 +01:00
d280848ff6 add test for settings 2020-02-11 15:27:56 +01:00
7a6f583b1f fix issue on ranking rules 2020-02-11 15:27:56 +01:00
e078eafb1f clean unused functions 2020-02-11 15:27:56 +01:00
6f534540a6 fix error on stop words fst 2020-02-11 15:27:56 +01:00
38d57d213f expose api for new settings 2020-02-11 15:27:56 +01:00
7c14769226 add test for index creation 2020-02-11 15:27:56 +01:00
b71bbcffaa simplify error handling 2020-02-11 15:27:56 +01:00
f83e874e35 return the good created_at and updated_at on index creation 2020-02-11 15:27:55 +01:00
ae0a11e422 fix schema & fix tests 2020-02-11 15:27:55 +01:00
116a637cfd set test for healthyness 2020-02-11 15:27:55 +01:00
83cf683db4 introduce test for meilisearch-http 2020-02-11 15:27:55 +01:00
1b3312871e set name optional during index creation 2020-02-11 15:27:55 +01:00
0e12920910 bump tide version 2020-02-11 15:27:55 +01:00
a35eb16a2a store the schema after each document updates 2020-02-11 15:27:54 +01:00
4f0ead625b adapt meilisearch-http to the new schemaless option 2020-02-11 15:27:54 +01:00
21d122a870 rewrite indexed_pos -> field_id for highlights 2020-02-11 15:27:54 +01:00
130fb74928 introduce a new schemaless way 2020-02-11 15:27:54 +01:00
bbe1845f66 squash-me 2020-02-11 15:27:54 +01:00
2ee90a891c introduce a new settings update system 2020-02-11 15:27:54 +01:00
203c83bdb4 Remove SearchableAttributes; fix #429 2020-02-11 15:27:53 +01:00
73918d803c Rename AttributesToSearchIn into SearchableAttributes; fix #428 2020-02-11 15:27:53 +01:00
110adcae85 Remove the schema; fix #422 2020-02-11 15:27:53 +01:00
c536ea64c3 Change the indexes stats HTTP route; fix #423 2020-02-11 15:27:53 +01:00
aa7a6d5f8c Rewrite the synonyms endpoint; fix #418 2020-02-11 15:27:53 +01:00
91c6539baf Rewrite the stop-words endpoint; fix #417 2020-02-11 15:27:53 +01:00
f0590d3301 Change documents routes; fix #416 2020-02-11 15:27:53 +01:00
a5c5df0290 Merge pull request #443 from curquiza/brew
Add Brew installation in README
2020-02-10 16:36:33 +01:00
f0c2913dcf Add Brew installation in README 2020-02-10 16:26:50 +01:00
9c6d590950 Merge pull request #442 from curquiza/docker-github-action
Change github action for docker latest image
2020-02-10 16:26:14 +01:00
ab3339f5a1 Change github action for docker latest image 2020-02-10 16:11:45 +01:00
43ce45f62b Merge pull request #456 from djKooks/update/cjk-filter-ko-ja
Update CJK filter
2020-01-30 09:46:08 +01:00
2b5d153361 Update cjk filter 2020-01-30 09:55:16 +09:00
cde8845143 Merge pull request #454 from meilisearch/fix-db-compaction
Support compaction with the new split database
2020-01-24 17:45:34 +01:00
7c0d8f073b Support compaction with multi database 2020-01-24 17:38:14 +01:00
69adb1d771 Merge pull request #453 from meilisearch/introduce-query-tree
Introduce a query tree structure
2020-01-23 10:40:53 +01:00
a2bc689b92 Fix the tests a little bit 2020-01-22 18:12:56 +01:00
a9adbda2cd Make the engine support non-exact multi-word synonyms 2020-01-22 18:11:58 +01:00
0b9fe2c072 Introduce the new Query Tree creation supporting more operations 2020-01-22 17:46:46 +01:00
789e05304c Replace prints by debug logs 2020-01-21 11:05:34 +01:00
7604387701 Clean up the dependencies 2020-01-21 11:04:25 +01:00
daffcaf4c6 Make the docids OR operation method conditional 2020-01-19 12:29:06 +01:00
ff1ec599e0 Try a better version of sdset 2020-01-19 12:01:24 +01:00
e44d498c94 Display more debug info for prefix tolerant fetches 2020-01-19 11:07:32 +01:00
c334d6b7fe Avoid sorting sorted sequences, prefer using set operations 2020-01-19 10:58:01 +01:00
5465e401bb Catch query tree related errors 2020-01-17 10:41:27 +01:00
9cc3c56c9c Fix the prefix system 2020-01-16 18:41:27 +01:00
d7a7560220 Use an union instead of a sort for prefix fetching 2020-01-16 17:09:27 +01:00
70a529d197 Reduce the number of args of update functions 2020-01-16 16:29:50 +01:00
be31a14326 Make the clear all operation clear caches 2020-01-16 16:19:04 +01:00
96139da0d2 Reintroduce the distinct search system 2020-01-16 15:55:55 +01:00
74fa9ee4df Introduce a better highlighting system 2020-01-16 14:56:16 +01:00
00336c5154 Reintroduce a basic highlight display 2020-01-16 14:24:45 +01:00
3912d1ec4b Improve query parsing and interpretation 2020-01-16 14:11:17 +01:00
70d4f47f37 Differentiate short words as prefix or exact matches 2020-01-16 12:01:51 +01:00
9809ded23d Implement synonym fetching 2020-01-16 11:38:23 +01:00
5f9a3546e0 Use an union instead of a sort for OR ops 2020-01-15 15:14:24 +01:00
db625a08f7 Update lock file 2020-01-15 12:25:14 +01:00
44fec1b6c9 Cache prefixes of a length of 2 2020-01-14 18:17:52 +01:00
54dacb362d Use different algorithms for different documents ratios 2020-01-14 17:51:08 +01:00
6edb460bea Try with an exponential search 2020-01-14 16:52:24 +01:00
40dab80dfa Change the way we filter the documents 2020-01-14 14:18:01 +01:00
681711fced Fix query ids to be usize 2020-01-14 13:12:42 +01:00
21c1473e0c Introduce the distance data 2020-01-14 11:38:04 +01:00
8acbdcbbad wip: Make the new query tree work with the criteria 2020-01-13 14:36:06 +01:00
da8abebfa2 Introduce the query words mapping along with the query tree 2020-01-13 13:29:47 +01:00
4f7a7ea0bb Faster intersection group by 2020-01-09 16:30:03 +01:00
d6c9ba8f08 Store the postings lists 2020-01-09 15:04:53 +01:00
ec8916bf54 Change the debug outputs 2020-01-09 12:05:39 +01:00
81c573ec92 Add the raw document IDs to the postings lists 2020-01-08 15:30:43 +01:00
9420edadf4 Introduce the Postings type to decorrelate the DocumentIds 2020-01-08 14:48:23 +01:00
d724a7659e Introduce a query tree context struct 2020-01-08 13:37:22 +01:00
887c212b49 Add more logs about the docids construction 2020-01-08 13:22:42 +01:00
07937ed6d7 Use the prefix caches 2020-01-08 13:14:07 +01:00
a262c67ec3 limit the search in the FST 2020-01-08 13:06:12 +01:00
13ca30c4d8 WIP: Made the query tree traversing support prefix search 2020-01-08 12:02:58 +01:00
fbcec2975d wip: Impl a basic tree traversing 2020-01-07 18:24:13 +01:00
6e1f4af833 wip: Create a tree from query but need to show synonyms 2020-01-07 18:24:13 +01:00
856c5c4214 Fix group offset computing 2019-12-31 14:24:10 +01:00
670e80c151 Use the cached postings lists in the query system 2019-12-31 13:32:36 +01:00
eed07c724f Add more logging for postings lists fetching by word 2019-12-31 13:32:36 +01:00
99d35fb940 Introduce a first version of a number-of-candidates reducer
It works by ignoring the postings lists associated with documents that the previous words did not return
2019-12-31 13:32:36 +01:00
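The reducer idea reads roughly like the following standalone sketch (plain std types, not the engine's real postings structures): entries whose document id never appeared for the previous query words are dropped before they reach the ranking step.

```rust
// Hedged sketch: keep only postings entries whose docid was already
// produced by the previous query words. Types are illustrative only.
use std::collections::HashSet;

fn reduce_candidates(postings: &[(u32, u16)], previous_docids: &HashSet<u32>) -> Vec<(u32, u16)> {
    postings
        .iter()
        .copied()
        .filter(|(docid, _position)| previous_docids.contains(docid))
        .collect()
}

fn main() {
    let previous: HashSet<u32> = [1u32, 4, 7].iter().copied().collect();
    let postings = vec![(1u32, 10u16), (2, 3), (4, 8), (9, 1)];
    assert_eq!(reduce_candidates(&postings, &previous), vec![(1, 10), (4, 8)]);
}
```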
106b886873 Cache the prefix postings lists 2019-12-30 18:01:32 +01:00
928876b553 Introduce the postings lists caching stores
Currently not used
2019-12-30 18:01:27 +01:00
58836d89aa Rename the PrefixCache into PrefixDocumentsCache 2019-12-30 15:42:09 +01:00
1a5a104f13 Display proximity evaluation number of calls 2019-12-30 15:42:09 +01:00
9790c393a0 Change the time measurement of the query 2019-12-30 15:42:08 +01:00
064cfa4755 Add more debug, where are those 100ms 2019-12-30 15:42:08 +01:00
ed6172aa94 Add a time measurement of the criterion loop 2019-12-30 15:42:08 +01:00
8c140f6bcd Increase the disk usage limit 2019-12-30 15:42:08 +01:00
1e1f0fcaf5 Introduce a basic cache system for first letters 2019-12-30 15:42:08 +01:00
d21352a109 Change the time measurement of the FST 2019-12-30 15:42:08 +01:00
4be11f961b Use an ugly trick to avoid cloning the FST 2019-12-30 15:42:07 +01:00
1163f390b3 Restrict FST search to the first letter of the word 2019-12-30 15:42:07 +01:00
534143e91d Merge pull request #439 from meilisearch/fix-update-deadlock
Fix a blocking channel, appearing like a deadlock
2019-12-30 15:41:26 +01:00
691e2a3c1d Fix a blocking channel, appearing like a deadlock 2019-12-30 15:28:28 +01:00
20b92fcb4c Merge pull request #435 from meilisearch/debug-missing-measurements
Add more debug timings
2019-12-20 18:04:21 +01:00
04bb49989f Add more debug timings 2019-12-20 14:18:48 +01:00
2aa7cb9d20 Merge pull request #433 from meilisearch/fix-index-creation
Set the indexes info in the create_index function
2019-12-19 10:59:47 +01:00
d12ff15ee3 Set the indexes info in the create_index function 2019-12-19 10:38:56 +01:00
11b684114d Merge pull request #431 from curquiza/web-interface-readme
Update README with the Web Interface introduction
2019-12-18 13:50:12 +01:00
1bf177f81a Update README with the Web Interface introduction
Co-Authored-By: cvermand <33010418+bidoubiwa@users.noreply.github.com>
2019-12-18 13:41:15 +01:00
df7dc54409 Merge pull request #415 from meilisearch/fix-blocking-settings
Use a main read transaction instead of a write one
2019-12-17 16:21:41 +01:00
7e86056a27 Use a main read transaction instead of a write one 2019-12-17 15:48:06 +01:00
59f74dabe7 Merge pull request #407 from meilisearch/friendly-web-interface
Friendly web interface
2019-12-17 14:47:24 +01:00
4610198ba2 Introduce a Bulma based web interface 2019-12-17 14:36:26 +01:00
3d19f566b6 Merge pull request #406 from bidoubiwa/remove_nsfw_movie
Removed nsfw movie from movies.json dataset
2019-12-13 17:56:09 +01:00
8d90cd8e35 Removed nsfw movie from movies.json dataset 2019-12-13 17:21:46 +01:00
610d44e703 Merge pull request #401 from tpayet/feat/heroku-button
Add heroku one-click deploy
2019-12-13 16:26:31 +01:00
0272b44d7e Add heroku one-click deploy 2019-12-13 16:03:00 +01:00
3eccf2fd76 Merge pull request #405 from meilisearch/disable-bench-workflow
Disable the benchmarks github workflow
2019-12-13 15:56:16 +01:00
736f285092 Disable the benchmarks github workflow 2019-12-13 15:37:24 +01:00
020cd7f9e8 Merge pull request #403 from meilisearch/lazy-data-fetching
Criteria lazy data preparation
2019-12-13 14:57:19 +01:00
40c0b14d1c Reintroduce searchable attributes and reordering 2019-12-13 14:38:25 +01:00
a4dd033ccf Rename raw_matches into bare_matches 2019-12-13 14:38:25 +01:00
48e8778881 Clean up the modules declarations 2019-12-13 14:38:25 +01:00
4be23efe66 Remove the AttrCount type
Could probably be reintroduced later
2019-12-13 14:38:25 +01:00
7d67750865 Reintroduce exactness for one-word document field 2019-12-13 14:38:25 +01:00
746e6e170c Make the test pass again 2019-12-13 14:38:24 +01:00
d93e35cace Introduce ContextMut and Context structs 2019-12-13 14:38:24 +01:00
d75339a271 Prefer summing the attribute 2019-12-13 14:38:24 +01:00
86ee0cbd6e Introduce bucket_sort_with_distinct function 2019-12-13 14:38:24 +01:00
248ccfc0d8 Update the criteria to the new ones 2019-12-13 14:38:24 +01:00
ea148575cf Remove the raw_query functions 2019-12-13 14:38:23 +01:00
efc2be0b7b Bump the sdset dependency to 0.3.6 2019-12-13 14:38:23 +01:00
8d71112dcb Rewrite the phrase query postings lists
This simplified the multiword_rewrite_matches function a little bit.
2019-12-13 14:38:23 +01:00
dd03a6256a Debug pre filtered number of documents 2019-12-13 14:38:23 +01:00
9c03bb3428 First probably working phrase query doc filtering 2019-12-13 14:38:23 +01:00
22b19c0d93 Fix the processed distance algorithm 2019-12-13 14:38:22 +01:00
0f698d6bd9 Work in progress: Bad Typo detection
I have an issue where "speakers" is split into "speaker" and "s":
when I compute the distances for the Typo criterion,
it takes "s" into account and puts a distance of zero in bucket 0
(the "speakers" bucket), therefore it reports any document matching "s"
without typos among the best results.

I need to make sure to ignore "s" when its associated part "speaker"
doesn't even exist in the document and is not in the place
it should be ("speaker" followed by "s").

I find it hard to believe this will add much computation time to
the Typo criterion, as in the previous algorithm where I computed
the real query/word indexes and removed the invalid ones
before sending the documents to the bucket sort.
2019-12-13 14:38:22 +01:00
4e91b31b1f Make the Typo and Words work with synonyms 2019-12-13 14:38:22 +01:00
f87c67fcad Improve the QueryEnhancer by doing a single lookup 2019-12-13 14:38:22 +01:00
902625601a Work in progress: It seems like we support synonyms, split and concat words 2019-12-13 14:38:22 +01:00
d17d4dc5ec Add more debug infos 2019-12-13 14:38:21 +01:00
ef6a4db182 Before improving fields AttrCount
Removing the fields_count fetching reduced the search time by a factor of two; we should look at lazily pulling them from the criteria that need them

ugly-test: Make the fields_count fetching lazy

Just before running the exactness criterion
2019-12-13 14:38:21 +01:00
11f3d7782d Introduce the AttrCount type 2019-12-13 14:38:21 +01:00
5b9fff6636 Merge pull request #352 from meilisearch/add-search-benchmarks
Add some criterion benchmarks to help detect regressions
2019-12-13 14:37:48 +01:00
a8272f0eef Add a benchmark github workflow 2019-12-13 14:17:40 +01:00
951f0bcb10 squash-me: Improve benchmarks naming 2019-12-13 14:17:40 +01:00
d8ba405baf Add some criterion benchmarks to help measure improvements 2019-12-13 14:17:40 +01:00
70f18a8086 Merge pull request #400 from meilisearch/fix-issues
Close multiples issues on HTTP behavior
2019-12-13 10:30:42 +01:00
0b5db77511 Fix erase setting option 2019-12-13 10:22:35 +01:00
3a4130f344 Allow to index files with null or boolean 2019-12-12 19:25:05 +01:00
1ea29bb92e Fix unwrap if schema does not contain ranked attributes on a custom ranking setting 2019-12-12 16:37:46 +01:00
04d34cb8aa Search; return formatted section only if it's necessary 2019-12-12 16:36:42 +01:00
bf80729e17 Update message on access forbidden 2019-12-12 15:39:32 +01:00
88b3c05155 Stop words; Do not reindex all documents if there is no documents 2019-12-12 15:31:39 +01:00
6edef07e29 HTTP delete index route; Fix error on index not found 2019-12-12 14:06:16 +01:00
5ad73fe08b Merge pull request #399 from meilisearch/rewrite-synonym-endpoint
Rewrite the synonym endpoint
2019-12-12 12:58:14 +01:00
a4f26e8e48 Rewrite the synonym endpoint 2019-12-12 12:47:02 +01:00
cc10804607 Merge pull request #395 from meilisearch/update-bitly-link
Update the bit.ly movies.json link
2019-12-10 18:13:52 +01:00
f959cd76ae Update the bit.ly movies.json link 2019-12-10 18:07:14 +01:00
dcd332e2e4 Merge pull request #396 from meilisearch/disable-windows-tests
Disable windows tests
2019-12-10 18:03:13 +01:00
f3a276d1e1 Update the workflow README.md 2019-12-10 17:56:24 +01:00
640d21a7d2 Disable the Windows tests workflow 2019-12-10 17:53:26 +01:00
216cccbfba Merge pull request #391 from meilisearch/fix-one-document-route
Do not expect a JSON value as a document identifier
2019-12-09 21:53:04 +01:00
04d1da11f7 Do not expect a JSON value as a document identifier 2019-12-09 21:34:40 +01:00
ee4e9dcc74 Merge pull request #388 from meilisearch/remove-synonyms-unwraps
Remove unsound unwraps from the synonym routes
2019-12-09 17:06:02 +01:00
6fef04be20 Remove unsound unwraps from the synonym routes 2019-12-09 16:54:54 +01:00
86347bff3a Merge pull request #384 from curquiza/install-script-prereleases
Change regexp in install script
2019-12-09 15:28:19 +01:00
e291d9954a Change regexp in install script to not take into account pre-releases 2019-12-09 15:14:25 +01:00
7a548467b9 Merge pull request #382 from curquiza/health-routes
Keep only useful routes for /health
2019-12-08 18:11:19 +01:00
06d8e00ff3 Keep only useful routes for /health 2019-12-08 17:56:33 +01:00
225f5a172d Merge pull request #381 from curquiza/update-index-httpstatus
Change HTTP status of update index route
2019-12-08 17:53:01 +01:00
e531ff2e98 Change HTTP status of update index route 2019-12-08 17:10:21 +01:00
8c8040884e Merge pull request #376 from meilisearch/windows-support
Update the actions to support Windows
2019-12-07 12:07:27 +01:00
e3611ad0e4 Update the action to test on more platforms 2019-12-07 11:57:33 +01:00
289bc6570b Update the action to publish windows binaries 2019-12-07 11:52:14 +01:00
dc1849d291 Bump heed to 0.6.1 2019-12-07 11:49:45 +01:00
17a66227f4 Merge pull request #375 from nithinkashyapn/master
Docker command updated
2019-12-06 12:11:56 +01:00
0e8b95f4bf Docker command updated
Docker does not allow uppercase letters and throws this error:

`docker: invalid reference format: repository name must be lowercase.`
2019-12-06 16:30:37 +05:30
5b8344cfc3 Merge pull request #373 from curquiza/stop-words-deletion
Use POST instead of DELETE method to delete stop-words
2019-12-05 23:06:15 +01:00
075f4034d9 Use POST instead of DELETE method to delete stop-words 2019-12-05 18:07:56 +01:00
c616ce99a8 Merge pull request #368 from tpayet/add-push-debpkg
Add publish action to gemfury for apt pkg
2019-12-05 15:35:12 +01:00
6b9b5fda7e Add publish action to gemfury for apt pkg 2019-12-05 14:54:57 +01:00
b756fc382a Merge pull request #367 from meilisearch/support-stdin-example
Allow users to send csv files from stdin in examples
2019-12-05 12:33:18 +01:00
29fd54dcfa Allow users to send csv files from stdin in examples 2019-12-05 12:23:56 +01:00
d664e97104 Merge pull request #365 from meilisearch/update-readme
Reorder "Deploy the server" options on the README
2019-12-04 18:37:40 +01:00
4466097d44 Update readme.md; Deploy part 2019-12-04 18:16:56 +01:00
60b94d2dc1 Merge pull request #366 from tpayet/cargo-deb
Add debian package in CI
2019-12-04 18:14:10 +01:00
51636402c2 Add debian package in CI 2019-12-04 18:02:30 +01:00
fc8182d7d3 Merge pull request #363 from meilisearch/bump-version
Bump meilisearch crates to v0.8.4
2019-12-03 17:30:31 +01:00
4f87465f18 Bump meilisearch crates to v0.8.4 2019-12-03 17:22:45 +01:00
5f1586ae85 Merge pull request #360 from meilisearch/fix-readme-broken-links
Fix README broken links
2019-12-02 19:10:40 +01:00
8d3161a2cf Reorder README parts 2019-12-02 18:29:53 +01:00
8bc8214279 Fix README broken links
Thanks to @baptistejamin!
2019-12-02 16:45:27 +01:00
3ea5aa18a2 Merge pull request #359 from bidoubiwa/fix_wording_in_readme
Fix bad wording in readme file
2019-12-02 14:06:49 +01:00
c4845b78a9 Fix bad wording in readme file 2019-12-02 11:15:39 +01:00
530e913e2f Merge pull request #356 from tpayet/fix-port-readme
Fix port in README & Dockerfile
2019-11-29 19:21:55 +01:00
5917f212ba Fix port in README & Dockerfile 2019-11-29 18:03:54 +01:00
d2b1690191 Merge pull request #355 from tpayet/master
Update binary default settings
2019-11-29 15:47:04 +01:00
710b7ea091 Update default listening port to 7700 2019-11-29 15:25:26 +01:00
089579d835 Update default database directory to working directory 2019-11-29 15:25:26 +01:00
7780293ddb Merge pull request #354 from meilisearch/camelcase-updates-result
Fix updates formattings and namings
2019-11-29 15:19:45 +01:00
773a51e7d0 Rename 'update_type' to 'type' on EnqueuedUpdateResult 2019-11-29 15:09:48 +01:00
7923752513 Serialize updates results to camelCase 2019-11-29 15:05:54 +01:00
9a48091b21 Merge pull request #353 from meilisearch/bump-version
Bump meilisearch crates to v0.8.3
2019-11-29 14:13:37 +01:00
30cb60f679 Bump meilisearch crates to v0.8.3 2019-11-29 14:06:17 +01:00
08687d8dab Merge pull request #351 from meilisearch/status-failed-updates-status
Add status failed on UpdateStatus
2019-11-28 18:53:31 +01:00
3a90233a3d Add status failed on UpdateStatus 2019-11-28 18:41:11 +01:00
32483cae2d Merge pull request #347 from curquiza/installation-script
Add script for binary installation
2019-11-28 18:34:58 +01:00
d7f28e0260 Add script for binary installation 2019-11-28 18:34:12 +01:00
9640c2aaa6 Merge pull request #349 from meilisearch/bump-version
Bump meilisearch crates to v0.8.2
2019-11-28 17:23:40 +01:00
9a2b4d08e1 Bump meilisearch crates to v0.8.2 2019-11-28 17:15:13 +01:00
e91615fe59 Merge pull request #348 from meilisearch/replace-isahc-by-ureq
Replace isahc by ureq
2019-11-28 17:14:32 +01:00
aed02b2e19 Remove many dependencies from the Dockerfile 2019-11-28 17:04:01 +01:00
83ad80d9db Replace isahc by ureq 2019-11-28 16:41:42 +01:00
abdb7793fb Merge pull request #345 from tpayet/readme_changes
Clarification of readme file
2019-11-28 16:35:44 +01:00
387eb3fde3 Clarification of readme file 2019-11-28 16:28:25 +01:00
e640bc90b4 Merge pull request #343 from meilisearch/explicit-index-clear
Change the update loop to be more explicit on index clear
2019-11-28 14:48:37 +01:00
3978378152 Merge pull request #344 from tpayet/patch-1
Update README license badge
2019-11-28 14:35:50 +01:00
61e3e4f0b9 Update README license badge 2019-11-28 14:28:30 +01:00
1def56ea11 Change the update loop to be more explicit on index clear 2019-11-27 13:43:28 +01:00
6d686ac14f Merge pull request #342 from meilisearch/update-lock
Update the lock file
2019-11-27 12:49:47 +01:00
641e0d15f5 Make sure the lock file is up to date 2019-11-27 12:06:14 +01:00
71b39426c0 Update the lock file 2019-11-27 12:01:22 +01:00
57584eaccc Merge pull request #341 from meilisearch/bump-version
Bump meilisearch crates to v0.8.1
2019-11-27 11:54:39 +01:00
f6fb31c531 Bump meilisearch crates to v0.8.1 2019-11-27 11:47:27 +01:00
0cea8ce5b5 Merge pull request #340 from meilisearch/separate-updates-kvstore
Separate the update and main databases
2019-11-27 11:39:14 +01:00
d08b76a323 Separate the update and main databases
We used the heed typed transaction to make it safe (https://github.com/Kerollmops/heed/pull/27).
2019-11-27 11:29:06 +01:00
86a87d6032 Merge pull request #339 from tpayet/action-docker-tag
Update action workflow for docker tagged image
2019-11-26 19:17:19 +01:00
e534929f80 Update action workflow for docker tagged image 2019-11-26 18:18:51 +01:00
fcc154da1c Merge pull request #336 from meilisearch/rename-to-meilisearch
Rename MeiliDB into MeiliSearch
2019-11-26 14:06:01 +01:00
00d1200704 Rename the meilisearch-http binary into meilisearch 2019-11-26 11:17:30 +01:00
7cc096e0a2 Rename MeiliDB into MeiliSearch 2019-11-26 11:12:30 +01:00
58eaf78dc4 Merge pull request #335 from tpayet/github-release-action
GitHub release action
2019-11-25 19:19:08 +01:00
3be2281483 Update workflows README 2019-11-25 18:14:21 +01:00
cc06d96993 Add gh actions to release binaries 2019-11-25 17:27:15 +01:00
93c7e700bc Merge pull request #333 from tpayet/update-dockerfile
Add meilihttp_addr env variable in docker build
2019-11-25 16:41:52 +01:00
97c6757fc7 Add meilihttp_addr env variable in docker build 2019-11-25 16:30:07 +01:00
276d3f8e22 Merge pull request #332 from meilisearch/jemalloc-only-on-linux
Make jemalloc only used on linux
2019-11-25 16:13:54 +01:00
4869a88ae2 Make jemalloc only used on linux 2019-11-25 15:35:13 +01:00
ae88bc31bc Merge pull request #331 from meilisearch/enable-jemalloc-linux-only
Enable jemalloc only on linux OSs
2019-11-25 14:59:56 +01:00
8aed1d96c5 Enable jemalloc only on linux OSs 2019-11-25 14:51:47 +01:00
c93949474c Merge pull request #330 from tpayet/fix-actions-badge-link
Update action badge link
2019-11-25 13:51:07 +01:00
8cf19f1c6b Update action badge link 2019-11-25 13:44:20 +01:00
a82ecb3cef Merge pull request #324 from tpayet/gh-actions
Replace Azure CI by Github Actions
2019-11-25 13:31:15 +01:00
04c2b37d82 Remove Azure CI
Add gh actions for cargo check using rust nightly

Add readme about actions workflows

Add basic Dockerfile

Add action workflow for docker publish

Change check action to test action

Update workflow readme without rust nightly

Rename test action file

Add gh actions to push latest docker image from master

Update github action for publish docker image

Add 2 steps dockerfile based on alpine

Update readme badges to match new CI
2019-11-25 13:20:54 +01:00
ab3e8d6537 Merge pull request #314 from meilisearch/fix-number-ord
Fix the ordering functions of the Number type
2019-11-22 15:14:05 +01:00
fd185a5e6b Add a test for the SortByAttr criterion 2019-11-22 15:04:23 +01:00
d9678f0040 Fix the ordering functions of the Number type 2019-11-22 14:44:02 +01:00
840217b111 Merge pull request #321 from meilisearch/fix-create-index
Fix index creation
2019-11-22 14:10:05 +01:00
9605a2cd88 Make possible to use a custom uid and simplify the usage 2019-11-22 14:01:00 +01:00
0f86ccc035 Index UID generation makes sure to not generate the same number 2019-11-22 14:01:00 +01:00
b3b73e2276 Merge pull request #323 from meilisearch/fix-index-deletion
Fix index deletion once again
2019-11-22 14:00:19 +01:00
f241c999ad Make the CI use rust stable 2019-11-22 13:47:29 +01:00
d4d2a2303a Fix a typo on timeout_ms used for multi index search 2019-11-22 13:47:29 +01:00
c8832409ad Fix the dead lock on index deletion once again 2019-11-22 13:47:29 +01:00
98f76aa952 Merge pull request #320 from meilisearch/send-amplitude-events
Add an Amplitude analysis loop tick
2019-11-22 10:52:29 +01:00
4236632af6 Add an amplitude analysis loop tick 2019-11-21 20:28:58 +01:00
e2c98244ec Merge pull request #313 from meilisearch/fix-dead-lock
Fix dead locks when deleting indexes
2019-11-21 12:42:40 +01:00
c1cf67c008 Join updates threads after dropping the indexes lock and avoid deadlocks 2019-11-21 12:01:46 +01:00
4abea919b2 Merge pull request #311 from meilisearch/add-index-name-and-id
Add index name and change some routes request body & response
2019-11-21 11:59:14 +01:00
d60aa722c0 Allow to update expireAt and revoked on token 2019-11-21 11:49:49 +01:00
055368acd8 Fix for review 2019-11-21 11:49:49 +01:00
7f2e5d091a Rename routes /synonym to /synonyms 2019-11-20 15:33:42 +01:00
c69ae8154f Allow to receive schema update formatted as SchemaBuilder 2019-11-20 15:25:34 +01:00
cd95b243bb Add the update index route 2019-11-20 15:00:06 +01:00
1f1cb1f501 Rename browse_documents into get_all_documents and always respond HTTP Ok 2019-11-20 14:18:21 +01:00
530738cfe9 Format code 2019-11-20 14:12:12 +01:00
878dd6912e Return a HTTP 401 instead of 404 if token is not found 2019-11-20 14:06:56 +01:00
5f0f699f37 Move route to clear all synonyms on DELETE /synonyms 2019-11-20 14:03:55 +01:00
ca13900699 Async routes should return an ACCEPTED status code response 2019-11-20 14:03:19 +01:00
cc97889b37 Adding a stop-word is now a PATCH method 2019-11-20 13:56:43 +01:00
45ded0498b Format code with cargo fmt 2019-11-20 11:45:23 +01:00
d01a3944c1 Add last_update information on global /stats route 2019-11-20 11:45:22 +01:00
a0caf0d6d7 Remove unused result response on indexes_uids function 2019-11-20 11:45:22 +01:00
e22debb994 Update index updated_at information at each update callback 2019-11-20 11:45:22 +01:00
1b8df0ed8b Remove last_update from stats 2019-11-20 11:45:22 +01:00
3286a5213c Move fields frequency from common store to index main store 2019-11-20 11:45:22 +01:00
394976d330 Update list_index route to return all index information, not only list of uid 2019-11-20 11:45:22 +01:00
b95acbece0 Function generate_uid now returns a lowercased uid 2019-11-20 11:45:22 +01:00
c94f4dff71 Do not return update_id on IndexCreateResponse if it's none 2019-11-20 11:45:22 +01:00
e6465f4ea1 Create a new specific route for schema 2019-11-20 11:45:22 +01:00
2b3c91aabd Update get_index_schema to allow raw response 2019-11-20 11:45:22 +01:00
e97e13ce9f Rename index_name to index_uids 2019-11-20 11:45:22 +01:00
39e2b73718 Add updatedAt on main index store 2019-11-20 11:45:22 +01:00
a90facaa41 Rename index_name by index_uid 2019-11-20 11:45:22 +01:00
5527457655 Rewrite create_index route new path, body request and response 2019-11-20 11:45:21 +01:00
076e781810 Add name, created_at and updated_at informations into main index 2019-11-20 11:45:21 +01:00
750d336018 Bump Cargo.lock meili versions 2019-11-20 11:45:21 +01:00
e8251ad45b Merge pull request #310 from meilisearch/unify-crates-version
Unify the crates versions to 0.8.0
2019-11-20 11:05:54 +01:00
963ca1e2c7 Unify the crates versions to 0.8.0 2019-11-20 10:47:32 +01:00
12a6c7d54d Merge pull request #298 from bidoubiwa/add_ranked_movies_dataset
Create a dataset where the release_date is a numeric timestamp
2019-11-20 10:46:24 +01:00
2d0fc3f9d3 Create a dataset where the release_date is a numeric timestamp 2019-11-20 10:44:32 +01:00
e554784527 Merge pull request #309 from bidoubiwa/remove_stop_words_from_settings
Removed stop words from settings route
2019-11-19 18:35:27 +01:00
2cb43fa638 Removed stop words from settings route 2019-11-19 18:21:44 +01:00
66d5309a51 Merge pull request #308 from meilisearch/improve-structopt
Introduce better argument names
2019-11-19 18:09:44 +01:00
7eeedec7eb Bump meilidb-http to v0.3.0 2019-11-19 17:50:01 +01:00
4b798c71ae Introduce new arguments and understand env vars 2019-11-19 17:50:01 +01:00
685016bfec Bump meilidb-core to v0.7.0 and meilidb-http to v0.2.0 2019-11-18 15:49:23 +01:00
d30e5f6231 Merge pull request #299 from meilisearch/default-update-callbacks
Prefer using a global update callback common to all indexes
2019-11-18 15:05:21 +01:00
e854d67a55 Remove useless routes and checks 2019-11-18 14:41:49 +01:00
23a89732a5 Prefer using a global update callback common to all indexes 2019-11-18 14:41:49 +01:00
3a1f41ebdb Merge pull request #305 from meilisearch/fix-example
Make it easier to interact with compacted databases
2019-11-17 20:31:06 +01:00
f873761a27 Make it easier to interact with compacted databases 2019-11-17 20:01:02 +01:00
ebf620c7f9 Merge pull request #302 from meilisearch/fix-dataset-schema
Rename the movies dataset schema file
2019-11-17 17:17:33 +01:00
8b92bc3421 Rename the movies dataset schema file 2019-11-17 16:45:13 +01:00
70a5aa61e9 Merge pull request #301 from meilisearch/separate-types
Move the main types to a separate library
2019-11-17 12:45:25 +01:00
a76169042f Make the serde and zerocopy meilidb-types dependencies optional 2019-11-17 12:30:39 +01:00
c9c3cfcee9 Move the main types to a separate library 2019-11-17 12:19:36 +01:00
2e60ac5359 Merge pull request #300 from meilisearch/update-dependencies
Do not use a forked fst dependency
2019-11-17 12:19:08 +01:00
2dd7751e09 Disable the fst MemMap feature 2019-11-17 11:43:00 +01:00
26bdabcdec Do not use a forked fst dependency 2019-11-17 11:14:01 +01:00
fc8c7ed77e Merge pull request #297 from meilisearch/improve-highlights
Improve the highlight formatted outputs
2019-11-15 14:28:27 +01:00
521c96354f Improve the highlight formatted outputs 2019-11-15 14:16:21 +01:00
9788779894 Merge pull request #296 from meilisearch/update-readme
Update the README
2019-11-14 21:32:32 +01:00
9b965764ab Update the README 2019-11-14 19:09:04 +01:00
9a5a543311 Merge pull request #290 from curquiza/deploy-doc
Add information in documentation in Deploy Server part
2019-11-13 16:06:27 +01:00
b18fb868e8 Add information in documentation in Deploy Server part 2019-11-13 15:37:21 +01:00
c734af55c0 Merge pull request #289 from curquiza/status204-delete-index
Change the HTTP status code on index deletion
2019-11-13 15:33:27 +01:00
810b328ad2 Change the HTTP status code on index deletion 2019-11-13 15:14:23 +01:00
0a8039d8d8 Merge pull request #285 from bidoubiwa/remove_catching_same_index_creation
Change the error catching on the index creation route
2019-11-13 15:13:51 +01:00
e51704c09a Remove the error catching on the index creation route when the index already exist 2019-11-13 14:42:59 +01:00
623a9012d5 Merge pull request #279 from bidoubiwa/new_slogan_and_resume
Slogan and Resume proposition
2019-11-13 14:41:21 +01:00
b9a185634f Slogan and Resume proposition 2019-11-13 14:31:22 +01:00
b46889b5f0 Merge pull request #282 from meilisearch/fix-ci-artifacts
Add the meilidb-http binary to the artifacts
2019-11-13 11:39:00 +01:00
ef9a0c07db Add the meilidb-http binary to the artifacts 2019-11-13 11:15:39 +01:00
3a6f3947c9 Merge pull request #281 from meilisearch/fix-attributes-to-search-in
Take attributes to search in into account
2019-11-12 18:45:40 +01:00
5c5f41d755 Take attributes to search in into account 2019-11-12 18:35:58 +01:00
6803a8fad0 Merge pull request #280 from meilisearch/format-updates-json
Format updates json
2019-11-12 18:35:25 +01:00
8e4b362e4d Fixed the display of enqueued updates 2019-11-12 18:21:59 +01:00
acb5e624c6 Add enqueued and processed datetimes 2019-11-12 18:21:59 +01:00
a98949ff1d Improve updates JSON format 2019-11-12 16:57:22 +01:00
f355280250 Merge pull request #278 from meilisearch/mit-license
Change the license to an MIT one
2019-11-12 14:35:32 +01:00
cee8d6a8d9 Change the license to an MIT one 2019-11-12 14:24:28 +01:00
27326ea069 Merge pull request #277 from bidoubiwa/add_cmd_to_compile
Add cmd line to compile binary
2019-11-12 13:55:54 +01:00
7bbe5aca5b Add cmd line to compile binary 2019-11-12 10:57:03 +01:00
1c4afe6d0f Merge pull request #276 from meilisearch/support-slash-tokenizer
Add support for back/slashes
2019-11-11 21:46:14 +01:00
2d8f9a9849 Add support for back/slashes 2019-11-11 21:23:08 +01:00
3f41681b18 Merge pull request #274 from meilisearch/enable-env-logger
Add env logger to enable logging
2019-11-11 19:13:33 +01:00
64791815fa Add env logger to enable logging 2019-11-11 19:03:38 +01:00
8a36571a74 Merge pull request #272 from meilisearch/fix-long-words
Ignore words that are too long
2019-11-10 20:07:22 +01:00
d18e775bec Ignore words that are too long 2019-11-10 17:44:27 +01:00
78381f1818 Merge pull request #271 from meilisearch/update-dependencies
Update Dependencies
2019-11-10 11:17:09 +01:00
7f33a01ae1 Update dependencies 2019-11-10 11:04:56 +01:00
d07d14d33a Update crossbeam-channel to 0.4.0 2019-11-10 11:03:22 +01:00
540d7886ab Merge pull request #266 from meilisearch/update-readme
Update the readme and add a Quick Start section
2019-11-09 13:21:22 +01:00
5a5d10af52 Add an image description of the gif 2019-11-09 13:12:01 +01:00
f95d077ef8 Improve the README a little bit by adding a quick start section 2019-11-09 13:12:01 +01:00
05dd99936f Add a gif to show a demo using crates.io 2019-11-09 12:59:39 +01:00
c086625773 Merge pull request #269 from meilisearch/repo-became-binary
Make the repository be a binary and version the Cargo.lock
2019-11-09 12:58:52 +01:00
dc17bebf4a Make the repository be a binary and version the Cargo.lock 2019-11-09 12:13:28 +01:00
026464b2e4 Bump meilidb-core to v0.6.5 2019-11-06 11:52:34 +01:00
bd42158a70 Merge pull request #264 from meilisearch/index-soft-deletion
Index soft deletion
2019-11-06 11:51:50 +01:00
df066f4321 Introduce a new add or update documents PUT route 2019-11-06 11:42:41 +01:00
69832e8c70 Update the http index deletion route 2019-11-06 11:42:41 +01:00
95eb6ad09a Add a test to check index soft deletion works correctly 2019-11-06 11:02:30 +01:00
f3fc0bed45 Introduce index soft deletion 2019-11-06 11:02:30 +01:00
5dd6b697b9 Bump meilidb-core to v0.6.4 2019-11-05 18:46:16 +01:00
b7d170c7d1 Merge pull request #262 from meilisearch/fix-unidecoded-emojis
Fix a highlighting problem
2019-11-05 17:04:35 +01:00
7541172d12 Make the example show highlighted areas more explicitly 2019-11-05 16:40:48 +01:00
85bf5d113c Fix a highlighting problem when the query was longer than the original text 2019-11-05 16:40:34 +01:00
89fd397903 Bump meilidb-core to v0.6.3 2019-11-05 15:40:04 +01:00
d8392f2f18 Merge pull request #261 from meilisearch/partial-updates
Introduce the support of partial updates
2019-11-05 15:39:02 +01:00
36b74f0efe Introduce partial updates to the update system 2019-11-05 15:23:41 +01:00
68c0a36b00 Make the deserialization correctly support optional documents 2019-11-05 15:03:18 +01:00
a127b72a74 Merge pull request #259 from meilisearch/allow-add-schema-attributes-at-end
Allow to introduce attributes only at the end of a schema
2019-11-05 12:34:11 +01:00
5782fb9e52 Test the add of attributes only at the end of a schema 2019-11-05 12:09:52 +01:00
20319f7974 Allow to introduce attributes only at the end of a schema 2019-11-05 12:09:52 +01:00
c4087e2ec2 Merge pull request #258 from meilisearch/debug-schema
Implement a better debug for the schema
2019-11-05 11:35:02 +01:00
b1d1f2f627 Implement a better debug system for the schema 2019-11-05 11:21:07 +01:00
62fe6a8263 Merge pull request #257 from meilisearch/bump-version
Bump meilidb-core/tokenizer versions
2019-11-04 17:26:01 +01:00
d88c10f3b4 Bump meilidb-tokenizer to v0.6.1 2019-11-04 17:17:06 +01:00
00f49990c7 Bump meilidb-core to v0.6.2 2019-11-04 17:16:50 +01:00
89f30ad47e Merge pull request #256 from meilisearch/fix-tokenizer
Fix the tokenizer to make it work with unicode chars
2019-11-04 17:15:17 +01:00
3b1cbed238 Check that the unidecoded words are not empty 2019-11-04 17:03:11 +01:00
4571b80a49 Update the tests 2019-11-04 16:41:58 +01:00
de2b8672d4 Make the tokenizer understand strange whitespaces/quotes 2019-11-04 16:41:58 +01:00
ccded7b429 Improve the indexer to not deunicode before indexing
Revert of #179
2019-11-04 16:41:58 +01:00
1d4e98410a Merge pull request #255 from meilisearch/bump-version
Bump meilidb-core to v0.6.1
2019-11-04 14:47:53 +01:00
e493b27ef1 Bump meilidb-core to v0.6.1 2019-11-04 14:22:08 +01:00
70589c136f Merge pull request #253 from meilisearch/fix-updates-system
Fix the updates system
2019-11-04 13:46:37 +01:00
1c3620a7d4 Add tests to the update system 2019-11-04 13:18:07 +01:00
c2cc0704d7 Clean up the update_awaiter function 2019-11-04 11:11:58 +01:00
2a50e08bb8 Moving to heed v0.5.0 2019-11-04 10:49:27 +01:00
6b326a45d7 Fix the update system to always consume updates even if failing 2019-10-31 17:44:13 +01:00
b73874bf24 Merge pull request #252 from meilisearch/examples-specify-index-name
Allow users to specify the index name to use with examples bins
2019-10-31 17:02:00 +01:00
95c8ad0f80 Allow users to specify the index name to use with examples bins 2019-10-31 16:20:31 +01:00
996763cc52 Merge pull request #251 from meilisearch/update-heed
Moving to heed 0.3.0
2019-10-31 16:20:07 +01:00
6a8171d335 Moving to heed 0.3.0 2019-10-31 16:11:02 +01:00
2f32586dab Merge pull request #250 from meilisearch/new-http-server
Introduce a brand new HTTP server
2019-10-31 16:07:52 +01:00
db898001eb Get rid of rust-crypto and uuid 2019-10-31 15:28:37 +01:00
c2a12b661a Make it a runnable server 2019-10-31 15:27:21 +01:00
f51c49db93 Introduce the HTTP tide based library 2019-10-31 15:02:34 +01:00
1be5b0f327 Bump the meili-core/schema/tokenizer crates to 0.6.0 2019-10-31 14:05:59 +01:00
a136c62208 Merge pull request #249 from meilisearch/display-all-updates
Display enqueued along with processed updates
2019-10-31 13:53:46 +01:00
cc461b1331 Display enqueued along with processed updates 2019-10-31 12:25:52 +01:00
dbe5363672 Merge pull request #248 from meilisearch/fix-highlight-too-long
Correctly highlight when query string is too long
2019-10-30 18:19:06 +01:00
45d4361e7d Correctly highlight when query string is longer 2019-10-30 17:49:50 +01:00
b28c44cc6b Merge pull request #247 from meilisearch/bump-meilidb
Bump the meili-core/schema/tokenizer crates to 0.5.11
2019-10-30 17:48:26 +01:00
b709a7a30a Bump the meili-core/schema/tokenizer crates to 0.5.11 2019-10-30 17:40:31 +01:00
64c25bdb40 Merge pull request #246 from meilisearch/better-highlighting-area
Make the highlight system much better
2019-10-30 17:39:12 +01:00
c230f244be Make the highlight system much better 2019-10-30 17:32:29 +01:00
02af4ff113 Merge pull request #245 from meilisearch/reindex-all-documents-reduce-memory-usage
Reduce the ram consumption when re-indexing all the documents
2019-10-29 17:54:47 +01:00
4dff8a215e Reduce the ram consumption when re-indexing all the documents 2019-10-29 17:46:23 +01:00
41065305aa Merge pull request #244 from meilisearch/reintroduce-stop-words
Reintroduce stop words
2019-10-29 16:35:03 +01:00
e9dce3ce81 Add a test to ensure that the indexer support stop words 2019-10-29 16:18:06 +01:00
ff7dde7522 Make the RawIndexer support stop words 2019-10-29 16:18:06 +01:00
a226fd23c3 Introduce the stop words deletion update type 2019-10-29 16:18:06 +01:00
776673ebae Introduce the stop words addition update type 2019-10-29 15:24:09 +01:00
32d2cc3aea Merge pull request #243 from meilisearch/all-updates-results
Introduce a function to get all updates results
2019-10-29 11:45:55 +01:00
8a17fcdda5 Introduce a function to get all updates results 2019-10-29 11:37:40 +01:00
9602d7a960 Merge pull request #242 from meilisearch/accept-dup-documents
Make documents additions accept only the last duplicate document
2019-10-28 20:52:40 +01:00
ac12a4b9c9 Make documents additions accept only the last duplicate document 2019-10-28 20:40:33 +01:00
af96050944 Merge pull request #241 from meilisearch/fix-dead-locks
Fix dead locks
2019-10-28 18:20:01 +01:00
a43b37dfc1 Send channel notification when clearing documents 2019-10-28 17:58:22 +01:00
c08dcac1d4 Abort the update transaction before calling the update callback 2019-10-28 17:55:43 +01:00
a17dccd84e Merge pull request #237 from meilisearch/fix-exactness-criterion
Fix the exactness criterion algorithm
2019-10-26 18:43:10 +02:00
9a57cab3ee Fix the exactness criterion algorithm 2019-10-26 18:34:40 +02:00
751b060320 Merge pull request #238 from meilisearch/improve-highlighting
Only highlight query words areas not the whole words
2019-10-26 18:23:19 +02:00
4111b99a6d Only highlight query words areas not the whole words 2019-10-26 15:56:34 +02:00
d6fb2b56d1 Merge pull request #236 from meilisearch/reorder-automatons
Make sure that automaton groups with more automatons are better
2019-10-24 15:29:16 +02:00
cb5c77e536 Make sure that automaton groups with more automatons are better 2019-10-24 15:18:53 +02:00
44c89b1ea2 Merge pull request #235 from meilisearch/readme-concat-split-query-words
Add information about search concat and split query words support
2019-10-23 18:20:59 +02:00
26a285053b Add information about search concat and split query words support 2019-10-23 18:19:15 +02:00
1446a6a2d2 Merge pull request #234 from meilisearch/clear-all-update-variant
Introduce a clear all documents update
2019-10-23 16:45:37 +02:00
047eba3ff3 Introduce a clear all documents update 2019-10-23 16:39:10 +02:00
8d9d183ce6 Merge pull request #233 from meilisearch/commit-when-update-ok
Commit an update only when it is Ok
2019-10-23 16:07:48 +02:00
eb67195840 Commit an update only when it is Ok 2019-10-23 15:52:40 +02:00
93306c2326 Merge pull request #232 from meilisearch/support-splitted-words
Support splitted words
2019-10-23 13:38:16 +02:00
7d9cf8d713 Clean up the fetch algorithm 2019-10-23 12:06:21 +02:00
03eb7898e7 Introduce a basic working version of phrase query for splitting words 2019-10-23 11:40:13 +02:00
0fbd4cd632 Merge pull request #231 from meilisearch/recursive-object-indexing
Make it possible to convert recursive objects into strings
2019-10-22 16:20:10 +02:00
858bf359b8 Make it possible to convert recursive objects into strings 2019-10-22 16:02:02 +02:00
5dc8465ebd Merge pull request #181 from meilisearch/diff-schema
Make possible to update an index schema
2019-10-22 14:23:43 +02:00
0f30a221fa Introduce the reindex_all_documents indexing function 2019-10-22 14:07:27 +02:00
e86a547e93 Introduce a basic schema diff function 2019-10-21 17:57:32 +02:00
32d8b4b83f Merge pull request #230 from meilisearch/moving-to-heed
Move to heed 0.1.0
2019-10-21 13:34:06 +02:00
78535b3e33 Move to heed 0.1.0 2019-10-21 12:05:53 +02:00
183 changed files with 38332 additions and 29896 deletions

4
.dockerignore Normal file

@ -0,0 +1,4 @@
target
Dockerfile
.dockerignore
.gitignore

30
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**MeiliSearch version:** [e.g. v0.20.0]
**Additional context**
Additional information that may be relevant to the issue.
[e.g. architecture, device, OS, browser]

View File

@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -0,0 +1,40 @@
---
name: Tracking issue
about: Template for a tracking issue
title: ''
labels: tracking-issue
assignees: ''
---
# Summary
One paragraph to explain the feature.
# Motivations
Why are we doing this? What use cases does it support? What is the expected outcome?
# Explanation
Explain the proposal like it was the final documentation of this proposal.
- What is changing for end-users.
- How it works.
- What is breaking?
- Examples.
# Implementation
Explain the technical specificities that will need to be known or done in order to implement this proposal.
## Steps
Describe each step to create the feature with its associated issue/PR.
# Related
- [ ] Validated by the team (@people needed)
- [ ] Test added
- [ ] [Documentation](https://github.com/meilisearch/documentation/issues/#xxx) //Change xxx or remove the line
- [ ] [SDK/Integrations](https://github.com/meilisearch/integration-guides/issues/#xxx) //Change xxx or remove the line

6
.github/dependabot.yml vendored Normal file

@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "cargo"
directory: "/"
schedule:
interval: "monthly"

132
.github/is-latest-release.sh vendored Normal file

@ -0,0 +1,132 @@
#!/bin/sh
# Checks if the current tag should be the latest (in terms of semver and not of release date).
# Ex: previous tag -> v0.10.1
# new tag -> v0.8.12
# The new tag should not be the latest
# So it returns "false", and the CI should not run for the release v0.8.12
# Used in GHA in publish-docker-latest.yml
# Returns "true" or "false" (as a string) to be used in the `if` in GHA
# GLOBAL
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
# FUNCTIONS
# semverParseInto and semverLT from https://github.com/cloudflare/semver_bash/blob/master/semver.sh
# usage: semverParseInto version major minor patch special
# version: the string version
# major, minor, patch, special: will be assigned by the function
semverParseInto() {
local RE='[^0-9]*\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)\([0-9A-Za-z-]*\)'
#MAJOR
eval $2=`echo $1 | sed -e "s#$RE#\1#"`
#MINOR
eval $3=`echo $1 | sed -e "s#$RE#\2#"`
#PATCH
eval $4=`echo $1 | sed -e "s#$RE#\3#"`
#SPECIAL
eval $5=`echo $1 | sed -e "s#$RE#\4#"`
}
# usage: semverLT version1 version2
semverLT() {
local MAJOR_A=0
local MINOR_A=0
local PATCH_A=0
local SPECIAL_A=0
local MAJOR_B=0
local MINOR_B=0
local PATCH_B=0
local SPECIAL_B=0
semverParseInto $1 MAJOR_A MINOR_A PATCH_A SPECIAL_A
semverParseInto $2 MAJOR_B MINOR_B PATCH_B SPECIAL_B
if [ $MAJOR_A -lt $MAJOR_B ]; then
return 0
fi
if [ $MAJOR_A -le $MAJOR_B ] && [ $MINOR_A -lt $MINOR_B ]; then
return 0
fi
if [ $MAJOR_A -le $MAJOR_B ] && [ $MINOR_A -le $MINOR_B ] && [ $PATCH_A -lt $PATCH_B ]; then
return 0
fi
if [ "_$SPECIAL_A" == "_" ] && [ "_$SPECIAL_B" == "_" ] ; then
return 1
fi
if [ "_$SPECIAL_A" == "_" ] && [ "_$SPECIAL_B" != "_" ] ; then
return 1
fi
if [ "_$SPECIAL_A" != "_" ] && [ "_$SPECIAL_B" == "_" ] ; then
return 0
fi
if [ "_$SPECIAL_A" < "_$SPECIAL_B" ]; then
return 0
fi
return 1
}
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file"
releases=$(cat "$temp_file" | \
grep -E "tag_name|draft|prerelease" \
| tr -d ',"' | cut -d ':' -f2 | tr -d ' ')
# Returns a list of [tag_name draft_boolean prerelease_boolean ...]
# Ex: v0.10.1 false false v0.9.1-rc.1 false true v0.9.0 false false...
i=0
latest=""
current_tag=""
for release_info in $releases; do
if [ $i -eq 0 ]; then # Checking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
current_tag=$release_info
else
current_tag=""
fi
i=1
elif [ $i -eq 1 ]; then # Checking draft boolean
if [ "$release_info" = "true" ]; then
current_tag=""
fi
i=2
elif [ $i -eq 2 ]; then # Checking prerelease boolean
if [ "$release_info" = "true" ]; then
current_tag=""
fi
i=0
if [ "$current_tag" != "" ]; then # If the current_tag is valid
if [ "$latest" = "" ]; then # If there is no latest yet
latest="$current_tag"
else
semverLT $current_tag $latest # Comparing latest and the current tag
if [ $? -eq 1 ]; then
latest="$current_tag"
fi
fi
fi
fi
done
rm -f "$temp_file"
echo $latest
}
# MAIN
current_tag="$(echo $GITHUB_REF | tr -d 'refs/tags/')"
latest="$(get_latest)"
if [ "$current_tag" != "$latest" ]; then
# The current release tag is not the latest
echo "false"
else
# The current release tag is the latest
echo "true"
fi

13
.github/release-draft-template.yml vendored Normal file

@ -0,0 +1,13 @@
name-template: 'v$RESOLVED_VERSION'
tag-template: 'v$RESOLVED_VERSION'
version-template: '0.21.0-alpha.$PATCH'
exclude-labels:
- 'skip-changelog'
template: |
## Changes
$CHANGES
no-changes-template: 'Changes are coming soon 😎'
sort-direction: 'ascending'
version-resolver:
default: patch

20
.github/workflows/README.md vendored Normal file

@ -0,0 +1,20 @@
# GitHub Actions Workflow for MeiliSearch
> **Note:**
> - We do not use [cache](https://github.com/actions/cache) yet but we could use it to speed up CI
## Workflow
- On each pull request, we trigger `cargo test`.
- On each tag, we build:
- the tagged Docker image and publish it to Docker Hub
- the binaries for MacOS, Ubuntu, and Windows
- the Debian package
- On each stable release (`v*.*.*` tag):
- we build the `latest` Docker image and publish it to Docker Hub
- we publish the binary to Homebrew and Gemfury
## Problems
- We do not test on Windows because we are unable to make it work; there is a disk space problem.

34
.github/workflows/coverage.yml vendored Normal file

@ -0,0 +1,34 @@
---
on:
pull_request:
types: [review_requested, ready_for_review]
name: Execute code coverage
jobs:
nightly-coverage:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
override: true
- uses: actions-rs/cargo@v1
with:
command: clean
- uses: actions-rs/cargo@v1
with:
command: test
args: --all-features --no-fail-fast
env:
CARGO_INCREMENTAL: "0"
RUSTFLAGS: "-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Cpanic=unwind -Zpanic_abort_tests"
- uses: actions-rs/grcov@v0.1
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ${{ steps.coverage.outputs.report }}
yml: ./codecov.yml
fail_ci_if_error: true

15
.github/workflows/flaky.yml vendored Normal file

@ -0,0 +1,15 @@
name: Look for flaky tests
on:
schedule:
- cron: "0 12 * * FRI" # every friday at 12:00PM
jobs:
flaky:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Install cargo-flaky
run: cargo install cargo-flaky
- name: Run cargo flaky 100 times
run: cargo flaky -i 100 --release

62
.github/workflows/publish-binaries.yml vendored Normal file

@ -0,0 +1,62 @@
on:
release:
types: [published]
name: Publish binaries to release
jobs:
publish:
name: Publish for ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
include:
- os: ubuntu-18.04
artifact_name: meilisearch
asset_name: meilisearch-linux-amd64
- os: macos-latest
artifact_name: meilisearch
asset_name: meilisearch-macos-amd64
- os: windows-latest
artifact_name: meilisearch.exe
asset_name: meilisearch-windows-amd64.exe
steps:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- uses: actions/checkout@v1
- name: Build
run: cargo build --release --locked
- name: Upload binaries to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/release/${{ matrix.artifact_name }}
asset_name: ${{ matrix.asset_name }}
tag: ${{ github.ref }}
publish-armv8:
name: Publish for ARMv8
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v1.0.0
- uses: uraimo/run-on-arch-action@v1.0.7
id: runcmd
with:
architecture: aarch64 # aka ARMv8
distribution: ubuntu18.04
run: |
apt update
apt install -y curl gcc make
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain stable
source $HOME/.cargo/env
cargo build --release --locked
- name: Upload the binary to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.PUBLISH_TOKEN }}
file: target/release/meilisearch
asset_name: meilisearch-linux-armv8
tag: ${{ github.ref }}

View File

@ -0,0 +1,39 @@
name: Publish deb pkg to GitHub release & APT repository & Homebrew
on:
release:
types: [released]
jobs:
debian:
name: Publish debian package
runs-on: ubuntu-18.04
steps:
- uses: hecrj/setup-rust-action@master
with:
rust-version: stable
- name: Install cargo-deb
run: cargo install cargo-deb
- uses: actions/checkout@v1
- name: Build deb package
run: cargo deb -p meilisearch-http -o target/debian/meilisearch.deb
- name: Upload debian pkg to release
uses: svenstaro/upload-release-action@v1-release
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: target/debian/meilisearch.deb
asset_name: meilisearch.deb
tag: ${{ github.ref }}
- name: Upload debian pkg to apt repository
run: curl -F package=@target/debian/meilisearch.deb https://${{ secrets.GEMFURY_PUSH_TOKEN }}@push.fury.io/meilisearch/
homebrew:
name: Bump Homebrew formula
runs-on: ubuntu-18.04
steps:
- name: Create PR to Homebrew
uses: mislav/bump-homebrew-formula-action@v1
with:
formula-name: meilisearch
env:
COMMITTER_TOKEN: ${{ secrets.HOMEBREW_COMMITTER_TOKEN }}

View File

@ -0,0 +1,27 @@
---
on:
release:
types: [released]
name: Publish latest image to Docker Hub
jobs:
build:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Check if current release is latest
run: echo "##[set-output name=is_latest;]$(sh .github/is-latest-release.sh)"
id: release
- name: Set COMMIT_DATE env variable
run: |
echo "COMMIT_DATE=$( git log --pretty=format:'%ad' -n1 --date=short )" >> $GITHUB_ENV
- name: Publish to Registry
if: steps.release.outputs.is_latest == 'true'
uses: elgohr/Publish-Docker-Github-Action@master
with:
name: getmeili/meilisearch
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
tag_names: true
buildargs: COMMIT_SHA,COMMIT_DATE

View File

@ -0,0 +1,26 @@
---
on:
push:
tags:
- '*'
name: Publish tagged image to Docker Hub
jobs:
build:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v1
- name: Set COMMIT_DATE env variable
run: |
echo "COMMIT_DATE=$( git log --pretty=format:'%ad' -n1 --date=short )" >> $GITHUB_ENV
- name: Publish to Registry
uses: elgohr/Publish-Docker-Github-Action@master
env:
COMMIT_SHA: ${{ github.sha }}
with:
name: getmeili/meilisearch
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
tag_names: true
buildargs: COMMIT_SHA,COMMIT_DATE

16
.github/workflows/release-drafter.yml vendored Normal file

@ -0,0 +1,16 @@
name: Release Drafter
on:
push:
branches:
- main
jobs:
update_release_draft:
runs-on: ubuntu-latest
steps:
- uses: release-drafter/release-drafter@v5
with:
config-name: release-draft-template.yml
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_DRAFTER_TOKEN }}

86
.github/workflows/rust.yml vendored Normal file

@ -0,0 +1,86 @@
name: Rust
on:
workflow_dispatch:
pull_request:
push:
# trying and staging branches are for Bors config
branches:
- trying
- staging
env:
CARGO_TERM_COLOR: always
jobs:
tests:
name: Tests on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-18.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v2
- name: Cache dependencies
uses: actions/cache@v2
with:
path: |
~/.cargo
./target
key: ${{ matrix.os }}-${{ hashFiles('Cargo.lock') }}
- name: Run cargo check without any default features
uses: actions-rs/cargo@v1
with:
command: build
args: --locked --release --no-default-features
- name: Run cargo test
uses: actions-rs/cargo@v1
with:
command: test
args: --locked --release
clippy:
name: Run Clippy
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Cache dependencies
uses: actions/cache@v2
with:
path: |
~/.cargo
./target
key: ${{ matrix.os }}-${{ hashFiles('Cargo.lock') }}
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: clippy
- name: Run cargo clippy
uses: actions-rs/cargo@v1
with:
command: clippy
args: --all-targets -- --deny warnings
fmt:
name: Run Rustfmt
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Cache dependencies
uses: actions/cache@v2
with:
path: |
~/.cargo
./target
key: ${{ matrix.os }}-${{ hashFiles('Cargo.lock') }}
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
override: true
components: rustfmt
- name: Run cargo fmt
run: cargo fmt --all -- --check

4
.gitignore vendored

@ -1,7 +1,9 @@
/target
Cargo.lock
**/*.csv
**/*.json_lines
**/*.rs.bk
/*.mdb
/query-history.txt
/data.ms
/snapshots
/dumps

76
CODE_OF_CONDUCT.md Normal file

@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at bonjour@meilisearch.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

112
CONTRIBUTING.md Normal file

@ -0,0 +1,112 @@
# Contributing
First, thank you for contributing to MeiliSearch! The goal of this document is to
provide everything you need to start contributing to MeiliSearch. The
following TOC is sorted progressively, starting with the basics and
expanding into more specifics.
<!-- MarkdownTOC autolink="true" style="ordered" indent=" " -->
1. [Assumptions](#assumptions)
1. [Your First Contribution](#your-first-contribution)
1. [Change Control](#change-control)
1. [Git Branches](#git-branches)
1. [Git Commits](#git-commits)
1. [Style](#style)
1. [Github Pull Requests](#github-pull-requests)
1. [Reviews & Approvals](#reviews--approvals)
1. [Merge Style](#merge-style)
1. [CI](#ci)
1. [Development](#development)
1. [Setup](#setup)
1. [Testing](#testing)
1. [Benchmarking](#benchmarking--profiling)
1. [Humans](#humans)
1. [Documentation](#documentation)
1. [Changelog](#changelog)
<!-- /MarkdownTOC -->
## Assumptions
1. **You're familiar with [Github](https://github.com) and the [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests)
workflow.**
2. **You've read the MeiliSearch [docs](https://docs.meilisearch.com).**
3. **You know about the [MeiliSearch community](https://docs.meilisearch.com/learn/what_is_meilisearch/contact.html).
Please use this for help.**
## Your First Contribution
1. Ensure your change has an issue! Find an
[existing issue](https://github.com/meilisearch/meilisearch/issues/) or [open a new issue](https://github.com/meilisearch/meilisearch/issues/new).
* This is where you can get a feel for whether the change will be accepted or not.
2. Once approved, [fork the MeiliSearch repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) in your own
Github account.
3. [Create a new Git branch](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository)
4. Review the MeiliSearch [workflow](#workflow) and [development](#development).
5. Make your changes.
6. [Submit the branch as a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) to the main MeiliSearch
repo. A MeiliSearch team member should comment and/or review your pull request
within a few days, although, depending on the circumstances, it may take
longer.
## Change Control
### Git Branches
_All_ changes must be made in a branch and submitted as [pull requests](#pull-requests).
MeiliSearch does not adopt any type of branch naming style, but please use something
descriptive of your changes.
### Git Commits
#### Style
Please ensure your commits are small and focused; they should tell a story of
your change. This helps reviewers to follow your changes, especially for more
complex changes.
Familiarise yourself with [How to Write a Git Commit Message](https://chris.beams.io/posts/git-commit/).
### Github Pull Requests
Once your changes are ready you must submit your branch as a pull request.
#### Reviews & Approvals
All pull requests must be reviewed and approved by at least one MeiliSearch team
member.
#### Merge Style
All pull requests are squashed and merged. We generally discourage large pull
requests that are over 300-500 lines of diff. If you would like to propose
a change that is larger, we suggest coming onto our chat channel and
discussing it with one of our engineers. This way we can talk through the
solution and discuss if a change that large is even needed! This overall
will produce a quicker response to the change and likely produce code that
aligns better with our process.
## Development
### Setup
See the [MeiliSearch Docs](https://docs.meilisearch.com/reference/features/installation.html) for how to set up a development environment.
### Benchmarking & Profiling
We do not yet do any benchmarking, nor have we formalised our profiling. If you'd like to work on this please get in touch!
## Humans
After making your change, you'll want to prepare it for MeiliSearch users (mostly humans). This usually entails updating documentation and announcing your feature.
### Documentation
Documentation is very important to MeiliSearch. All contributions that
alter user-facing behavior MUST include documentation changes. Please see
[GitHub.com/meilisearch/documentation](https://github.com/meilisearch/documentation) for more info.
### Changelog
Until we have guidelines in place, updating the [`Changelog`](/CHANGELOG.md) is solely the responsibility of MeiliSearch team members.

3673
Cargo.lock generated Normal file

File diff suppressed because it is too large

View File

@ -1,8 +1,7 @@
[workspace]
members = [
"meilidb-core",
"meilidb-schema",
"meilidb-tokenizer",
"meilisearch-http",
"meilisearch-error",
]
[profile.release]

45
Dockerfile Normal file

@ -0,0 +1,45 @@
# Compile
FROM alpine:3.14 AS compiler
RUN apk update --quiet
RUN apk add curl
RUN apk add build-base
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
WORKDIR /meilisearch
COPY Cargo.lock .
COPY Cargo.toml .
COPY meilisearch-error/Cargo.toml meilisearch-error/
COPY meilisearch-http/Cargo.toml meilisearch-http/
ENV RUSTFLAGS="-C target-feature=-crt-static"
# Create dummy main.rs files for each workspace member to be able to compile all the dependencies
RUN find . -type d -name "meilisearch-*" | xargs -I{} sh -c 'mkdir {}/src; echo "fn main() { }" > {}/src/main.rs;'
# Use `cargo build` instead of `cargo vendor` because we need to not only download but compile dependencies too
RUN $HOME/.cargo/bin/cargo build --release
# Cleanup dummy main.rs files
RUN find . -path "*/src/main.rs" -delete
ARG COMMIT_SHA
ARG COMMIT_DATE
ENV COMMIT_SHA=${COMMIT_SHA} COMMIT_DATE=${COMMIT_DATE}
COPY . .
RUN $HOME/.cargo/bin/cargo build --release
# Run
FROM alpine:3.14
RUN apk add -q --no-cache libgcc tini
COPY --from=compiler /meilisearch/target/release/meilisearch .
ENV MEILI_HTTP_ADDR 0.0.0.0:7700
EXPOSE 7700/tcp
ENTRYPOINT ["tini", "--"]
CMD ./meilisearch

26
LICENSE

@ -1,13 +1,21 @@
“Commons Clause” License Condition v1.0
MIT License
The Software is provided to you by the Licensor under the License, as defined below, subject to the following condition.
Copyright (c) 2019-2021 Meili SAS
Without limiting other conditions in the License, the grant of rights under the License will not include, and the License does not grant to you, the right to Sell the Software.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
For purposes of the foregoing, “Sell” means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/ support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software. Any license notice or attribution required by the License must also include this Commons Clause License Condition notice.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
Software: MeiliDB
License: MIT
Licensor: MEILI SAS
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

236
README.md

@ -1,78 +1,192 @@
# MeiliDB
<p align="center">
<img src="assets/logo.svg" alt="MeiliSearch" width="200" height="200" />
</p>
[![Build Status](https://dev.azure.com/thomas0884/thomas/_apis/build/status/meilisearch.MeiliDB?branchName=master)](https://dev.azure.com/thomas0884/thomas/_build/latest?definitionId=1&branchName=master)
[![dependency status](https://deps.rs/repo/github/meilisearch/MeiliDB/status.svg)](https://deps.rs/repo/github/meilisearch/MeiliDB)
[![License](https://img.shields.io/badge/license-commons%20clause-lightgrey)](https://commonsclause.com/)
<h1 align="center">MeiliSearch</h1>
A _full-text search database_ based on the fast [LMDB key-value store](https://en.wikipedia.org/wiki/Lightning_Memory-Mapped_Database).
<h4 align="center">
<a href="https://www.meilisearch.com">Website</a> |
<a href="https://roadmap.meilisearch.com/tabs/1-under-consideration">Roadmap</a> |
<a href="https://blog.meilisearch.com">Blog</a> |
<a href="https://fr.linkedin.com/company/meilisearch">LinkedIn</a> |
<a href="https://twitter.com/meilisearch">Twitter</a> |
<a href="https://docs.meilisearch.com">Documentation</a> |
<a href="https://docs.meilisearch.com/faq/">FAQ</a>
</h4>
## Features
<p align="center">
<a href="https://github.com/meilisearch/MeiliSearch/actions"><img src="https://github.com/meilisearch/MeiliSearch/workflows/Cargo%20test/badge.svg" alt="Build Status"></a>
<a href="https://deps.rs/repo/github/meilisearch/MeiliSearch"><img src="https://deps.rs/repo/github/meilisearch/MeiliSearch/status.svg" alt="Dependency status"></a>
<a href="https://github.com/meilisearch/MeiliSearch/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-informational" alt="License"></a>
<a href="https://slack.meilisearch.com"><img src="https://img.shields.io/badge/slack-MeiliSearch-blue.svg?logo=slack" alt="Slack"></a>
<a href="https://github.com/meilisearch/MeiliSearch/discussions" alt="Discussions"><img src="https://img.shields.io/badge/github-discussions-red" /></a>
<a href="https://app.bors.tech/repositories/26457"><img src="https://bors.tech/images/badge_small.svg" alt="Bors enabled"></a>
</p>
- Provides [6 default ranking criteria](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/criterion/mod.rs#L107-L113) used to [bucket sort](https://en.wikipedia.org/wiki/Bucket_sort) documents
- Accepts [custom criteria](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/criterion/mod.rs#L24-L33) and can apply them in any custom order
- Support [ranged queries](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/query_builder.rs#L283), useful for paginating results
- Can [distinct](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/query_builder.rs#L265-L270) and [filter](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/query_builder.rs#L246-L259) returned documents based on context defined rules
- Can store complete documents or only [user schema specified fields](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-schema/src/lib.rs#L265-L279)
- The [default tokenizer](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-tokenizer/src/lib.rs) can index latin and kanji based languages
- Returns [the matching text areas](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/lib.rs#L66-L88), useful to highlight matched words in results
- Accepts query time search config like the [searchable attributes](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/query_builder.rs#L272-L275)
- Supports [runtime incremental indexing](https://github.com/meilisearch/MeiliDB/blob/dc5c42821e1340e96cb90a3da472264624a26326/meilidb-core/src/store/mod.rs#L143-L173)
<p align="center">⚡ Lightning Fast, Ultra Relevant, and Typo-Tolerant Search Engine 🔍</p>
**MeiliSearch** is a powerful, fast, open-source, easy to use and deploy search engine. Both searching and indexing are highly customizable. Features such as typo-tolerance, filters, and synonyms are provided out-of-the-box.
For more information about features go to [our documentation](https://docs.meilisearch.com/).
<p align="center">
<img src="assets/trumen_quick_loop.gif" alt="Web interface gif" />
</p>
It uses [LMDB](https://en.wikipedia.org/wiki/Lightning_Memory-Mapped_Database) as the internal key-value store. The key-value store allows us to handle updates and queries with small memory and CPU overheads. The whole ranking system is [data-oriented](https://github.com/meilisearch/MeiliDB/issues/82) and provides great performance.
## ✨ Features
* Search-as-you-type experience (answers < 50 milliseconds)
* Full-text search
* Typo tolerant (understands typos and misspelling)
* Faceted search and filters
* Supports hanzi (Chinese characters)
* Supports synonyms
* Easy to install, deploy, and maintain
* Whole documents are returned
* Highly customizable
* RESTful API
You can [read the deep dive](deep-dive.md) if you want more information on the engine; it describes the whole process of generating updates and handling queries. You can also take a look at the [typos and ranking rules](typos-ranking-rules.md) if you want to know the default rules used to sort the documents.
## Getting started
We will be proud if you submit issues and pull requests. You can help to grow this project and start contributing by checking [issues tagged "good-first-issue"](https://github.com/meilisearch/MeiliDB/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22). It is a good start!
### Deploy the Server
The project is still only a library, which means that no binary is provided yet. To get started, you can check the examples, which are made to work with the data located in the `datasets/` folder.
MeiliDB will become a binary in the near future so you will be able to use it as a database out-of-the-box. We should be able to query it using HTTP. This is our current goal, [see the milestones](https://github.com/meilisearch/MeiliDB/milestones). In the end, the binary will be a bunch of network protocols and wrappers around the library, which will also be published on [crates.io](https://crates.io). Both the binary and the library will follow the same update cycle.
## Performances
With a database composed of _100 353_ documents with _352_ attributes each and _3_ of them indexed, i.e. more than _300 000_ fields indexed out of _35 million_ stored, we can handle more than _2.8k req/sec_ with an average response time of _9 ms_ on an Intel i7-7700 (8) @ 4.2GHz.
Requests are made using [wrk](https://github.com/wg/wrk) and scripted to simulate real users queries.
```
Running 10s test @ http://localhost:2230
2 threads and 25 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 9.52ms 7.61ms 99.25ms 84.58%
Req/Sec 1.41k 119.11 1.78k 64.50%
28080 requests in 10.01s, 7.42MB read
Requests/sec: 2806.46
Transfer/sec: 759.17KB
```
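The load-generation script itself is not included here; an invocation along the following lines would reproduce the thread and connection counts shown above (the `queries.lua` script name is illustrative):
```bash
# 2 threads, 25 connections, 10 seconds, queries driven by a Lua script.
wrk -t2 -c25 -d10s -s queries.lua http://localhost:2230
```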
### Notes
With Rust 1.32 the allocator has been [changed to use the system allocator](https://blog.rust-lang.org/2019/01/17/Rust-1.32.0.html#jemalloc-is-removed-by-default).
We have seen much better performance when [using jemalloc as the global allocator](https://github.com/alexcrichton/jemallocator#documentation).
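As a sketch of what opting back into jemalloc looks like in a binary crate (assuming the `jemallocator` crate is added as a dependency; this is not the project's own setup code):
```rust
// Route all allocations through jemalloc instead of the system allocator.
use jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Every allocation from here on uses jemalloc.
    let numbers: Vec<u32> = (0..1_000).collect();
    println!("allocated {} numbers", numbers.len());
}
```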
## Usage and examples
Currently MeiliDB does not provide an HTTP server, but you can run the example binary.
The _index_ subcommand creates an index and injects documents into it. Using the command line below, the index will be named _movies_ and the _19 700_ movies of the `datasets/` folder will be injected into MeiliDB.
#### Homebrew (Mac OS)
```bash
cargo run --release --example from_file -- \
index example.mdb datasets/movies/data.csv \
--schema datasets/movies/schema.toml
brew update && brew install meilisearch
meilisearch
```
Once the first command is done, you can query the freshly created _movies_ index using the _search_ subcommand. In this example we filtered the dataset to only show _non-adult_ movies using the non-definitive `!adult` syntax filter.
#### Docker
```bash
cargo run --release --example from_file -- \
search example.mdb
--number 4 \
--filter '!adult' \
id popularity adult original_title
docker run -p 7700:7700 -v "$(pwd)/data.ms:/data.ms" getmeili/meilisearch
```
#### Try MeiliSearch in our Sandbox
Create a MeiliSearch instance in [MeiliSearch Sandbox](https://sandbox.meilisearch.com/). This instance is free, and will be active for 48 hours.
#### Run on Digital Ocean
[![DigitalOcean Marketplace](assets/do-btn-blue.svg)](https://marketplace.digitalocean.com/apps/meilisearch?action=deploy&refcode=7c67bd97e101)
#### Deploy on Platform.sh
<a href="https://console.platform.sh/projects/create-project?template=https://raw.githubusercontent.com/platformsh/template-builder/master/templates/meilisearch/.platform.template.yaml&utm_content=meilisearch&utm_source=github&utm_medium=button&utm_campaign=deploy_on_platform">
<img src="https://platform.sh/images/deploy/lg-blue.svg" alt="Deploy on Platform.sh" width="180px" />
</a>
#### APT (Debian & Ubuntu)
```bash
echo "deb [trusted=yes] https://apt.fury.io/meilisearch/ /" > /etc/apt/sources.list.d/fury.list
apt update && apt install meilisearch-http
meilisearch
```
#### Download the binary (Linux & Mac OS)
```bash
curl -L https://install.meilisearch.com | sh
./meilisearch
```
#### Compile and run it from sources
If you have the latest stable Rust toolchain installed on your local system, clone the repository and change it to your working directory.
```bash
git clone https://github.com/meilisearch/MeiliSearch.git
cd MeiliSearch
cargo run --release
```
### Create an Index and Upload Some Documents
Let's create an index! If you need a sample dataset, use [this movie database](https://www.notion.so/meilisearch/A-movies-dataset-to-test-Meili-1cbf7c9cfa4247249c40edfa22d7ca87#b5ae399b81834705ba5420ac70358a65). You can also find it in the `datasets/` directory.
```bash
curl -L 'https://bit.ly/2PAcw9l' -o movies.json
```
Now, you're ready to index some data.
```bash
curl -i -X POST 'http://127.0.0.1:7700/indexes/movies/documents' \
--header 'content-type: application/json' \
--data-binary @movies.json
```
### Search for Documents
#### In command line
The search engine is now aware of your documents and can serve those via an HTTP server.
The [`jq` command-line tool](https://stedolan.github.io/jq/) can greatly help you read the server responses.
```bash
curl 'http://127.0.0.1:7700/indexes/movies/search?q=botman+robin&limit=2' | jq
```
```json
{
"hits": [
{
"id": "415",
"title": "Batman & Robin",
"poster": "https://image.tmdb.org/t/p/w1280/79AYCcxw3kSKbhGpx1LiqaCAbwo.jpg",
"overview": "Along with crime-fighting partner Robin and new recruit Batgirl, Batman battles the dual threat of frosty genius Mr. Freeze and homicidal horticulturalist Poison Ivy. Freeze plans to put Gotham City on ice, while Ivy tries to drive a wedge between the dynamic duo.",
"release_date": 866768400
},
{
"id": "411736",
"title": "Batman: Return of the Caped Crusaders",
"poster": "https://image.tmdb.org/t/p/w1280/GW3IyMW5Xgl0cgCN8wu96IlNpD.jpg",
"overview": "Adam West and Burt Ward returns to their iconic roles of Batman and Robin. Featuring the voices of Adam West, Burt Ward, and Julie Newmar, the film sees the superheroes going up against classic villains like The Joker, The Riddler, The Penguin and Catwoman, both in Gotham City… and in space.",
"release_date": 1475888400
}
],
"nbHits": 8,
"exhaustiveNbHits": false,
"query": "botman robin",
"limit": 2,
"offset": 0,
"processingTimeMs": 2
}
```
#### Use the Web Interface
We also deliver an **out-of-the-box [web interface](https://github.com/meilisearch/mini-dashboard)** in which you can test MeiliSearch interactively.
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter MeiliSearch's address to visit it. This will lead you to a web page with a search bar that allows you to search in the selected index.
| [See the gif above](#demo)
## Documentation
Now that your MeiliSearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
## Contributing
Hey! We're glad you're thinking about contributing to MeiliSearch! However, we are currently working on a huge refactor and accepting PRs on this repository wouldn't be productive. We are sorry about this! Be sure we are doing our best so that you can contribute to MeiliSearch again as soon as possible ❤️
## Telemetry
MeiliSearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of MeiliSearch features.
To see what information we're retrieving, please see the complete list [on the dedicated issue](https://github.com/meilisearch/MeiliSearch/issues/720).
We also use Sentry to send us crash and error reports. If you want to know more about what Sentry collects, please visit their [privacy policy website](https://sentry.io/privacy/).
This program is optional; you can disable these analytics by using the `MEILI_NO_ANALYTICS` env variable.
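For example, one way to launch with analytics disabled (the value shown is illustrative; only the variable named above is assumed):
```bash
MEILI_NO_ANALYTICS=true ./meilisearch
```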
## 💌 Contact
Feel free to contact us with any questions you may have:
* 🆕 Join our [GitHub Discussions forum](https://github.com/meilisearch/MeiliSearch/discussions)
* Join our [Slack community](https://slack.meilisearch.com/).
* By opening an issue.
MeiliSearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!

BIN
assets/crates-io-demo.gif Normal file

Binary file not shown.  |  After  |  Size: 7.2 MiB

23
assets/do-btn-blue.svg Normal file

@ -0,0 +1,23 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="200px" height="42px" viewBox="0 0 200 42" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 52.5 (67469) - http://www.bohemiancoding.com/sketch -->
<title>do-btn-blue</title>
<desc>Created with Sketch.</desc>
<g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="Partner-welcome-kit-Copy-3" transform="translate(-651.000000, -762.000000)">
<g id="do-btn-blue" transform="translate(651.000000, 763.000000)">
<rect id="Rectangle-Copy" fill="#0069FF" x="0" y="0" width="200" height="40" rx="6"></rect>
<path d="M45,0 L45,40" id="Line-2" stroke="#FFFFFF" stroke-linecap="square"></path>
<g id="DO_Logo_horizontal_blue-Copy" transform="translate(13.000000, 11.000000)" fill="#FFFFFF">
<path d="M10.0098493,20 L10.0098493,16.1262429 C14.12457,16.1262429 17.2897398,12.0548452 15.7269372,7.74627862 C15.1334679,6.14538921 13.8674,4.86072487 12.2650328,4.28756693 C7.952489,2.72620566 3.87733294,5.88845634 3.87733294,9.99938223 C3.87733294,9.99938223 3.87733294,9.99938223 3.87733294,9.99938223 L0,9.99938223 C0,3.45747613 6.3303395,-1.64165309 13.1948014,0.492866119 C16.2017127,1.42177726 18.57559,3.81322933 19.5053586,6.79760341 C21.6418482,13.6754986 16.5577943,20 10.0098493,20 Z" id="XMLID_49_"></path>
<polygon id="XMLID_47_" points="9.56521739 15.6521739 6.08695652 15.6521739 6.08695652 12.173913 6.08695652 12.173913 9.56521739 12.173913 9.56521739 12.173913"></polygon>
<polygon id="XMLID_46_" points="6.08695652 19.1304348 3.47826087 19.1304348 3.47826087 19.1304348 3.47826087 16.5217391 6.08695652 16.5217391"></polygon>
<polygon id="XMLID_45_" points="3.47826087 16.5217391 0.869565217 16.5217391 0.869565217 16.5217391 0.869565217 13.9130435 0.869565217 13.9130435 3.47826087 13.9130435 3.47826087 13.9130435"></polygon>
</g>
<text id="Create-a-Droplet-Copy" font-family="Sailec-Medium, Sailec" font-size="16" font-weight="400" fill="#FFFFFF">
<tspan x="58" y="26">Create a Droplet</tspan>
</text>
</g>
</g>
</g>
</svg>

After  |  Size: 2.3 KiB

17
assets/logo.svg Normal file

@ -0,0 +1,17 @@
<svg width="360" height="360" viewBox="0 0 360 360" fill="none" xmlns="http://www.w3.org/2000/svg">
<g id="logo_main">
<rect id="Rectangle" x="107.333" y="0.150146" width="274.315" height="274.315" rx="98.8334" transform="rotate(23 107.333 0.150146)" fill="url(#paint0_linear)"/>
<path id="Rectangle_2" fill-rule="evenodd" clip-rule="evenodd" d="M61.3296 230.199C46.2224 194.608 38.6688 176.813 38.208 160.329C37.5286 136.025 47.0175 112.539 64.3891 95.5282C76.1718 83.9904 93.9669 76.4368 129.557 61.3296C165.147 46.2224 182.943 38.6688 199.427 38.208C223.731 37.5286 247.217 47.0175 264.228 64.3891C275.766 76.1718 283.319 93.9669 298.426 129.557C313.534 165.147 321.087 182.943 321.548 199.427C322.227 223.731 312.738 247.217 295.367 264.228C283.584 275.766 265.789 283.319 230.199 298.426C194.608 313.534 176.813 321.087 160.329 321.548C136.025 322.227 112.539 312.738 95.5282 295.367C83.9903 283.584 76.4368 265.789 61.3296 230.199Z" fill="url(#paint1_linear)"/>
<path id="m" fill-rule="evenodd" clip-rule="evenodd" d="M219.568 130.748C242.363 130.748 259.263 147.451 259.263 174.569V229.001H227.232V179.678C227.232 166.119 220.747 159.634 210.136 159.634C205.223 159.634 200.311 161.796 195.595 167.494C195.791 169.852 195.988 172.21 195.988 174.569V229.001H164.154V179.678C164.154 166.119 157.472 159.634 147.057 159.634C142.145 159.634 137.429 161.992 132.712 168.084V229.001H100.878V133.695H132.712V139.394C139.197 133.892 145.878 130.748 156.49 130.748C168.477 130.748 178.695 135.267 185.769 143.52C195.791 134.678 205.42 130.748 219.568 130.748Z" fill="white"/>
</g>
<defs>
<linearGradient id="paint0_linear" x1="-13.6248" y1="129.208" x2="244.49" y2="403.522" gradientUnits="userSpaceOnUse">
<stop stop-color="#E41359"/>
<stop offset="1" stop-color="#F23C79"/>
</linearGradient>
<linearGradient id="paint1_linear" x1="11.0088" y1="111.65" x2="111.65" y2="348.747" gradientUnits="userSpaceOnUse">
<stop stop-color="#24222F"/>
<stop offset="1" stop-color="#2B2937"/>
</linearGradient>
</defs>
</svg>

After  |  Size: 2.0 KiB

Binary file not shown.  |  After  |  Size: 1.2 MiB


@ -1,52 +0,0 @@
---
trigger:
branches:
include: [ master ]
pr: [ master ]
jobs:
- job: test
pool:
vmImage: 'Ubuntu 16.04'
container: tpayet/chiquitita:latest
steps:
- script: |
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly
$HOME/.cargo/bin/rustup component add rustfmt
displayName: 'Install rustc and components'
- script: |
$HOME/.cargo/bin/cargo check
displayName: 'Check MeiliDB'
- script: |
$HOME/.cargo/bin/cargo test
displayName: 'Test MeiliDB'
- script: |
$HOME/.cargo/bin/cargo fmt --all -- --check
displayName: 'Fmt MeiliDB'
- job: build
dependsOn:
- test
condition: succeeded()
pool:
vmImage: 'Ubuntu 16.04'
container: tpayet/chiquitita:latest
steps:
- script: |
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly
$HOME/.cargo/bin/rustup component add rustfmt
displayName: 'Install rustc and components'
- script: |
$HOME/.cargo/bin/cargo build --release
displayName: 'Build MeiliDB'
- task: CopyFiles@2
inputs:
contents: '$(System.DefaultWorkingDirectory)/target/release/libmeilidb.rlib'
targetFolder: $(Build.ArtifactStagingDirectory)
displayName: 'Copy build'
- task: PublishBuildArtifacts@1
inputs:
artifactName: libmeilidb.rlib
displayName: 'Upload artifacts'

9
bors.toml Normal file

@ -0,0 +1,9 @@
status = [
'Tests on ubuntu-18.04',
'Tests on macos-latest',
'Tests on windows-latest',
'Run Clippy',
'Run Rustfmt'
]
# 3 hours timeout
timeout-sec = 10800


@ -1 +1 @@
_data in movies.csv is from https://www.themoviedb.org/_
_data in movies.json is from https://www.themoviedb.org/_

File diff suppressed because it is too large.

19549
datasets/movies/movies.json Normal file

File diff suppressed because it is too large.


@ -1,21 +0,0 @@
# This schema has been generated ...
# The order in which the attributes are declared is important,
# it specifies the attribute xxx...
identifier = "id"
[attributes.id]
displayed = true
[attributes.title]
displayed = true
indexed = true
[attributes.overview]
displayed = true
indexed = true
[attributes.release_date]
displayed = true
[attributes.poster]
displayed = true


@ -1,95 +0,0 @@
# A deep dive in MeiliDB
On the 15 of May 2019.
MeiliDB is a full-text search engine based on a finite-state transducer named [fst](https://github.com/BurntSushi/fst) and a key-value store named [sled](https://github.com/spacejam/sled). The goal of a search engine is to store data and to respond to queries as accurately and as quickly as possible. To achieve this it must save the matching words in an [inverted index](https://en.wikipedia.org/wiki/Inverted_index).
<!-- MarkdownTOC autolink="true" -->
- [Where is the data stored?](#where-is-the-data-stored)
- [What does the key-value store contain?](#what-does-the-key-value-store-contain)
- [The inverted word index](#the-inverted-word-index)
- [A final state transducer](#a-final-state-transducer)
- [Document indexes](#document-indexes)
- [The schema](#the-schema)
- [Document attributes](#document-attributes)
- [How is a request processed?](#how-is-a-request-processed)
- [Query lexemes](#query-lexemes)
- [Automatons and query index](#automatons-and-query-index)
- [Sort by criteria](#sort-by-criteria)
<!-- /MarkdownTOC -->
## Where is the data stored?
MeiliDB is entirely backed by a key-value store like any good database (i.e. Postgres, MySQL). This brings great flexibility in the way documents can be stored and updates handled over time.
[sled will bring some](https://github.com/spacejam/sled/tree/434533332a3f485e6d2e467023be0a0b55d3a1af#plans) of the [A.C.I.D. properties](https://en.wikipedia.org/wiki/ACID_(computer_science)) to help us be sure the saved data is consistent.
## What does the key-value store contain?
It contains the inverted word index, the schema and the document fields.
### The inverted word index
[The inverted word index](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/database/words_index.rs) is a sled Tree dedicated to storing and giving access to all documents that contain a specific word. The information stored under a word is simply a big ordered array of where in the document the word has been found. In other words, a big list of [`DocIndex`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/lib.rs#L35-L51).
#### A final state transducer
_...also abbreviated fst_
This is the first entry point of the engine; you can read more about how it works in the beautiful blog post by @BurntSushi, [Index 1,600,000,000 Keys with Automata and Rust](https://blog.burntsushi.net/transducers/).
In short, it is a powerful way to store all the words that are present in the indexed documents. You construct it by giving it all the words you want to index. When you want to search in it you can provide any automaton you want; in MeiliDB [a custom levenshtein automaton](https://github.com/tantivy-search/levenshtein-automata/) is used.
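As a minimal, self-contained sketch of that idea, using the `fst` crate's built-in Levenshtein automaton (its `levenshtein` feature) instead of MeiliDB's custom one, with an inlined word list for illustration:
```rust
use fst::{automaton::Levenshtein, IntoStreamer, Set};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build an fst set from words (they must be inserted in lexicographic order).
    let words = Set::from_iter(vec!["batgirl", "batman", "robin"])?;

    // Accept anything within an edit distance of 1 from the (misspelled) query word.
    let query = Levenshtein::new("botman", 1)?;

    // Stream every indexed word matched by the automaton.
    let matches = words.search(query).into_stream().into_strs()?;
    assert_eq!(matches, vec!["batman"]);
    Ok(())
}
```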
#### Document indexes
The `fst` will only return the words that match the search automaton, but the goal of the search engine is to retrieve all matches in all the documents when a query is made. You want it to return some sort of position in an attribute in a document: information about where the given word matched.
To make this possible we retrieve all of the `DocIndex` entries corresponding to the matching words in the fst; we use the [`WordsIndex`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/database/words_index.rs#L11-L21) Tree to get the `DocIndexes` corresponding to the words.
### The schema
The schema is a data structure that represents which document attributes should be stored and which should be indexed. It is stored under the [`MainIndex`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/database/main_index.rs#L12) Tree and given to MeiliDB only at the creation of an index.
Each document attribute is associated with a unique 16-bit number named [`SchemaAttr`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/schema.rs#L186).
In the future, this schema type could be given along with updates; the database would then be able to handle a new schema and reindex the database according to the new one.
### Document attributes
When the engine handles a query, the result that the requester wants is a document; not only the [`Matches`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/lib.rs#L62-L88) associated with it, but also fields of the original document must be returned.
So MeiliDB again uses the power of the underlying key-value store and saves the document attributes marked as _STORE_ in the schema. The dedicated Tree for this information is the [`DocumentsIndex`](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/database/documents_index.rs#L11).
When a document field is saved in the key-value store, its value is binary encoded using [MessagePack](https://github.com/3Hren/msgpack-rust), so a document must be serializable using serde.
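A small sketch of that serialization step, not taken from the engine, using `rmp-serde` (the serde binding from the msgpack-rust project linked above):
```rust
use serde::Serialize;

#[derive(Serialize)]
struct Movie {
    id: u64,
    title: String,
}

fn main() -> Result<(), rmp_serde::encode::Error> {
    let movie = Movie { id: 415, title: "Batman & Robin".into() };
    // These bytes are what a stored field value would look like in the key-value store.
    let bytes = rmp_serde::to_vec(&movie)?;
    println!("{} bytes of MessagePack", bytes.len());
    Ok(())
}
```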
## How is a request processed?
Now that we have our inverted index, we are able to return results based on a query. In the MeiliDB universe, a query is a simple string containing words.
### Query lexemes
The first step, to be able to call the underlying structures, is to split the query into words; for that we use a [custom tokenizer](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-tokenizer/src/lib.rs#L82-L84). Note that a tokenizer is specialized for a human language; this is the hard part.
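As a deliberately naive illustration of that step (the real tokenizer handles separators, word positions and CJK; this stand-in only lowercases and splits on whitespace):
```rust
fn split_query_string(query: &str) -> impl Iterator<Item = String> + '_ {
    // Lowercase each whitespace-separated word of the query.
    query.split_whitespace().map(str::to_lowercase)
}

fn main() {
    let words: Vec<String> = split_query_string("Botman Robin").collect();
    assert_eq!(words, vec!["botman", "robin"]);
}
```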
### Automatons and query index
To query the fst we need an automaton; in MeiliDB we use a [levenshtein automaton](https://en.wikipedia.org/wiki/Levenshtein_automaton), constructed from a string and a maximum distance. Following [Algolia's blog post](https://blog.algolia.com/inside-the-algolia-engine-part-3-query-processing/#algolia%e2%80%99s-way-of-searching-for-alternatives) we [created the DFAs](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/automaton.rs#L59-L78) with different settings.
Thanks to the power of the fst library [it is possible to union multiple automatons](https://docs.rs/fst/0.3.2/fst/map/struct.OpBuilder.html#method.union) on the same fst set. The `Stream` is able to return all the matching words. We use these words to find the whole list of associated `DocIndexes`.
With all this information it is possible [to reconstruct a list of all the `DocIndexes` associated](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/query_builder.rs#L103-L130) with the words queried.
### Sort by criteria
Now that we are able to get a big list of [DocIndexes](https://github.com/Kerollmops/MeiliDB/blob/550dc1e99224e386516877450320f694947332d4/src/lib.rs#L21-L36), it is not enough to sort them by criteria; we need more information like the levenshtein distance or the fact that a query word matches exactly the word stored in the fst. So [we stuff it a little bit](https://github.com/Kerollmops/MeiliDB/blob/550dc1e99224e386516877450320f694947332d4/src/rank/query_builder.rs#L86-L93), and aggregate all these [Matches](https://github.com/Kerollmops/MeiliDB/blob/550dc1e99224e386516877450320f694947332d4/src/lib.rs#L47-L74) for each document. This way it will be easy to sort a simple vector of documents using a bunch of functions.
With this big list of documents and associated matches [we are able to sort only the part of the slice that we want](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/query_builder.rs#L160-L188) using bucket sorting. [Each criterion](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-core/src/criterion/mod.rs#L95-L101) is evaluated on each subslice without copying, thanks to [GroupByMut](https://docs.rs/slice-group-by/0.2.4/slice_group_by/) which, I hope, [will soon be merged](https://github.com/rust-lang/rfcs/pull/2477).
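A toy illustration of that bucket sort using only the standard library (the two criteria and their ordering are invented for the example; the engine's version works in place on subslices via `GroupByMut`):
```rust
use std::cmp::Reverse;

#[derive(Debug)]
struct Doc { typos: u32, matched_words: u32 }

fn main() {
    let mut docs = vec![
        Doc { typos: 1, matched_words: 2 },
        Doc { typos: 0, matched_words: 1 },
        Doc { typos: 0, matched_words: 3 },
    ];

    // First criterion: fewer typos wins.
    docs.sort_by_key(|d| d.typos);

    // Second criterion: only re-sort the groups that are still tied on the first one.
    let mut start = 0;
    while start < docs.len() {
        let key = docs[start].typos;
        let end = start + docs[start..].iter().take_while(|d| d.typos == key).count();
        docs[start..end].sort_by_key(|d| Reverse(d.matched_words));
        start = end;
    }

    // Expected order: (0 typos, 3 words), (0 typos, 1 word), (1 typo, 2 words).
    println!("{:?}", docs);
}
```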
Note that it is possible to customize the criteria used with the `QueryBuilder::with_criteria` constructor; this way you can implement custom ranking based on the document attributes using the appropriate structure and the [`document` method](https://github.com/meilisearch/MeiliDB/blob/3db823de002243004612e36a19b4578d800dab97/meilidb-data/src/database/index.rs#L86).
At this point, MeiliDB's work is over 🎉

188
download-latest.sh Normal file

@ -0,0 +1,188 @@
#!/bin/sh
# COLORS
RED='\033[31m'
GREEN='\033[32m'
DEFAULT='\033[0m'
# GLOBALS
BINARY_NAME='meilisearch'
GREP_SEMVER_REGEXP='v\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)$' # i.e. v[number].[number].[number]
# FUNCTIONS
# semverParseInto and semverLT from https://github.com/cloudflare/semver_bash/blob/master/semver.sh
# usage: semverParseInto version major minor patch special
# version: the string version
# major, minor, patch, special: will be assigned by the function
semverParseInto() {
local RE='[^0-9]*\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)\([0-9A-Za-z-]*\)'
#MAJOR
eval $2=`echo $1 | sed -e "s#$RE#\1#"`
#MINOR
eval $3=`echo $1 | sed -e "s#$RE#\2#"`
#PATCH
eval $4=`echo $1 | sed -e "s#$RE#\3#"`
#SPECIAL
eval $5=`echo $1 | sed -e "s#$RE#\4#"`
}
# usage: semverLT version1 version2
semverLT() {
local MAJOR_A=0
local MINOR_A=0
local PATCH_A=0
local SPECIAL_A=0
local MAJOR_B=0
local MINOR_B=0
local PATCH_B=0
local SPECIAL_B=0
semverParseInto $1 MAJOR_A MINOR_A PATCH_A SPECIAL_A
semverParseInto $2 MAJOR_B MINOR_B PATCH_B SPECIAL_B
if [ $MAJOR_A -lt $MAJOR_B ]; then
return 0
fi
if [ $MAJOR_A -le $MAJOR_B ] && [ $MINOR_A -lt $MINOR_B ]; then
return 0
fi
if [ $MAJOR_A -le $MAJOR_B ] && [ $MINOR_A -le $MINOR_B ] && [ $PATCH_A -lt $PATCH_B ]; then
return 0
fi
if [ "_$SPECIAL_A" == "_" ] && [ "_$SPECIAL_B" == "_" ] ; then
return 1
fi
if [ "_$SPECIAL_A" == "_" ] && [ "_$SPECIAL_B" != "_" ] ; then
return 1
fi
if [ "_$SPECIAL_A" != "_" ] && [ "_$SPECIAL_B" == "_" ] ; then
return 0
fi
if [ "_$SPECIAL_A" < "_$SPECIAL_B" ]; then
return 0
fi
return 1
}
# Returns the tag of the latest stable release (in terms of semver and not of release date)
get_latest() {
temp_file='temp_file' # temp_file needed because the grep would start before the download is over
curl -s 'https://api.github.com/repos/meilisearch/MeiliSearch/releases' > "$temp_file" || return 1
releases=$(cat "$temp_file" | \
grep -E "tag_name|draft|prerelease" \
| tr -d ',"' | cut -d ':' -f2 | tr -d ' ')
# Returns a list of [tag_name draft_boolean prerelease_boolean ...]
# Ex: v0.10.1 false false v0.9.1-rc.1 false true v0.9.0 false false...
i=0
latest=""
current_tag=""
for release_info in $releases; do
if [ $i -eq 0 ]; then # Checking tag_name
if echo "$release_info" | grep -q "$GREP_SEMVER_REGEXP"; then # If it's not an alpha or beta release
current_tag=$release_info
else
current_tag=""
fi
i=1
elif [ $i -eq 1 ]; then # Checking draft boolean
if [ "$release_info" = "true" ]; then
current_tag=""
fi
i=2
elif [ $i -eq 2 ]; then # Checking prerelease boolean
if [ "$release_info" = "true" ]; then
current_tag=""
fi
i=0
if [ "$current_tag" != "" ]; then # If the current_tag is valid
if [ "$latest" = "" ]; then # If there is no latest yet
latest="$current_tag"
else
semverLT $current_tag $latest # Comparing latest and the current tag
if [ $? -eq 1 ]; then
latest="$current_tag"
fi
fi
fi
fi
done
rm -f "$temp_file"
echo $latest
}
# Gets the OS by setting the $os variable
# Returns 0 in case of success, 1 otherwise.
get_os() {
os_name=$(uname -s)
case "$os_name" in
'Darwin')
os='macos'
;;
'Linux')
os='linux'
;;
*)
return 1
esac
return 0
}
# Gets the architecture by setting the $archi variable
# Returns 0 in case of success, 1 otherwise.
get_archi() {
architecture=$(uname -m)
case "$architecture" in
'x86_64' | 'amd64')
archi='amd64'
;;
'aarch64')
archi='armv8'
;;
*)
return 1
esac
return 0
}
success_usage() {
printf "$GREEN%s\n$DEFAULT" "MeiliSearch binary successfully downloaded as '$BINARY_NAME' file."
echo ''
echo 'Run it:'
echo ' $ ./meilisearch'
echo 'Usage:'
echo ' $ ./meilisearch --help'
}
failure_usage() {
printf "$RED%s\n$DEFAULT" 'ERROR: MeiliSearch binary is not available for your OS distribution or your architecture yet.'
echo ''
echo 'However, you can easily compile the binary from the source files.'
echo 'Follow the steps at the page ("Source" tab): https://docs.meilisearch.com/guides/advanced_guides/installation.html'
}
# MAIN
latest="$(get_latest)"
if ! get_os; then
failure_usage
exit 1
fi
if ! get_archi; then
failure_usage
exit 1
fi
echo "Downloading MeiliSearch binary $latest for $os, architecture $archi..."
release_file="meilisearch-$os-$archi"
link="https://github.com/meilisearch/MeiliSearch/releases/download/$latest/$release_file"
curl -OL "$link"
mv "$release_file" "$BINARY_NAME"
chmod 744 "$BINARY_NAME"
success_usage


@ -1,49 +0,0 @@
[package]
name = "meilidb-core"
version = "0.1.0"
authors = ["Kerollmops <clement@meilisearch.com>"]
edition = "2018"
[dependencies]
arc-swap = "0.4.3"
bincode = "1.1.4"
byteorder = "1.3.2"
crossbeam-channel = "0.3.9"
deunicode = "1.0.0"
env_logger = "0.7.0"
hashbrown = { version = "0.6.0", features = ["serde"] }
log = "0.4.8"
meilidb-schema = { path = "../meilidb-schema", version = "0.1.0" }
meilidb-tokenizer = { path = "../meilidb-tokenizer", version = "0.1.0" }
once_cell = "1.2.0"
ordered-float = { version = "1.0.2", features = ["serde"] }
sdset = "0.3.3"
serde = { version = "1.0.101", features = ["derive"] }
serde_json = "1.0.41"
siphasher = "0.3.0"
slice-group-by = "0.2.6"
zerocopy = "0.2.8"
[dependencies.zlmdb]
package = "zerocopy-lmdb"
git = "https://github.com/Kerollmops/zerocopy-lmdb.git"
branch = "master"
[dependencies.levenshtein_automata]
git = "https://github.com/Kerollmops/levenshtein-automata.git"
branch = "arc-byte-slice"
features = ["fst_automaton"]
[dependencies.fst]
git = "https://github.com/Kerollmops/fst.git"
branch = "arc-byte-slice"
[dev-dependencies]
assert_matches = "1.3"
csv = "1.0.7"
indexmap = { version = "1.2.0", features = ["serde-1"] }
rustyline = { version = "5.0.0", default-features = false }
structopt = "0.3.2"
tempfile = "3.1.0"
termcolor = "1.0.4"
toml = "0.5.3"


@ -1,431 +0,0 @@
use std::collections::btree_map::{BTreeMap, Entry};
use std::collections::HashSet;
use std::error::Error;
use std::io::Write;
use std::iter::FromIterator;
use std::path::{Path, PathBuf};
use std::time::{Duration, Instant};
use std::{fs, io, sync::mpsc};
use rustyline::{Config, Editor};
use serde::{Deserialize, Serialize};
use structopt::StructOpt;
use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};
use meilidb_core::{Database, Highlight, UpdateResult};
use meilidb_schema::SchemaAttr;
const INDEX_NAME: &str = "default";
#[derive(Debug, StructOpt)]
struct IndexCommand {
/// The destination where the database must be created.
#[structopt(parse(from_os_str))]
database_path: PathBuf,
/// The csv file to index.
#[structopt(parse(from_os_str))]
csv_data_path: PathBuf,
/// The path to the schema.
#[structopt(long, parse(from_os_str))]
schema: PathBuf,
#[structopt(long)]
update_group_size: Option<usize>,
#[structopt(long, parse(from_os_str))]
compact_to_path: Option<PathBuf>,
}
#[derive(Debug, StructOpt)]
struct SearchCommand {
/// The destination where the database must be created.
#[structopt(parse(from_os_str))]
database_path: PathBuf,
/// Timeout after which the search will return results.
#[structopt(long)]
fetch_timeout_ms: Option<u64>,
/// The number of returned results
#[structopt(short, long, default_value = "10")]
number_results: usize,
/// The number of characters before and after the first match
#[structopt(short = "C", long, default_value = "35")]
char_context: usize,
/// A filter string that can be `!adult` or `adult` to
/// filter documents on this specified field
#[structopt(short, long)]
filter: Option<String>,
/// Fields that must be displayed.
displayed_fields: Vec<String>,
}
#[derive(Debug, StructOpt)]
enum Command {
Index(IndexCommand),
Search(SearchCommand),
}
impl Command {
fn path(&self) -> &Path {
match self {
Command::Index(command) => &command.database_path,
Command::Search(command) => &command.database_path,
}
}
}
#[derive(Serialize, Deserialize)]
#[serde(transparent)]
struct Document(indexmap::IndexMap<String, String>);
fn index_command(command: IndexCommand, database: Database) -> Result<(), Box<dyn Error>> {
let start = Instant::now();
let (sender, receiver) = mpsc::sync_channel(100);
let update_fn = move |update: UpdateResult| sender.send(update.update_id).unwrap();
let index = match database.open_index(INDEX_NAME) {
Some(index) => index,
None => database.create_index(INDEX_NAME).unwrap(),
};
let done = database.set_update_callback(INDEX_NAME, Box::new(update_fn));
assert!(done, "could not set the index update function");
let env = &database.env;
let schema = {
let string = fs::read_to_string(&command.schema)?;
toml::from_str(&string).unwrap()
};
let mut writer = env.write_txn().unwrap();
match index.main.schema(&writer)? {
Some(current_schema) => {
if current_schema != schema {
return Err(meilidb_core::Error::SchemaDiffer.into());
}
writer.abort();
}
None => {
index.schema_update(&mut writer, schema)?;
writer.commit().unwrap();
}
}
let mut rdr = csv::Reader::from_path(command.csv_data_path)?;
let mut raw_record = csv::StringRecord::new();
let headers = rdr.headers()?.clone();
let mut max_update_id = 0;
let mut i = 0;
let mut end_of_file = false;
while !end_of_file {
let mut additions = index.documents_addition();
loop {
end_of_file = !rdr.read_record(&mut raw_record)?;
if end_of_file {
break;
}
let document: Document = match raw_record.deserialize(Some(&headers)) {
Ok(document) => document,
Err(e) => {
eprintln!("{:?}", e);
continue;
}
};
additions.update_document(document);
print!("\rindexing document {}", i);
i += 1;
if let Some(group_size) = command.update_group_size {
if i % group_size == 0 {
break;
}
}
}
println!();
let mut writer = env.write_txn().unwrap();
println!("committing update...");
let update_id = additions.finalize(&mut writer)?;
writer.commit().unwrap();
max_update_id = max_update_id.max(update_id);
println!("committed update {}", update_id);
}
println!("Waiting for update {}", max_update_id);
for id in receiver {
if id == max_update_id {
break;
}
}
println!(
"database created in {:.2?} at: {:?}",
start.elapsed(),
command.database_path
);
if let Some(path) = command.compact_to_path {
let start = Instant::now();
let _file = database.copy_and_compact_to_path(&path)?;
println!(
"database compacted in {:.2?} at: {:?}",
start.elapsed(),
path
);
}
Ok(())
}
fn display_highlights(text: &str, ranges: &[usize]) -> io::Result<()> {
let mut stdout = StandardStream::stdout(ColorChoice::Always);
let mut highlighted = false;
for range in ranges.windows(2) {
let [start, end] = match range {
[start, end] => [*start, *end],
_ => unreachable!(),
};
if highlighted {
stdout.set_color(ColorSpec::new().set_fg(Some(Color::Yellow)))?;
}
write!(&mut stdout, "{}", &text[start..end])?;
stdout.reset()?;
highlighted = !highlighted;
}
Ok(())
}
fn char_to_byte_range(index: usize, length: usize, text: &str) -> (usize, usize) {
let mut byte_index = 0;
let mut byte_length = 0;
for (n, (i, c)) in text.char_indices().enumerate() {
if n == index {
byte_index = i;
}
if n + 1 == index + length {
byte_length = i - byte_index + c.len_utf8();
break;
}
}
(byte_index, byte_length)
}
fn create_highlight_areas(text: &str, highlights: &[Highlight]) -> Vec<usize> {
let mut byte_indexes = BTreeMap::new();
for highlight in highlights {
let char_index = highlight.char_index as usize;
let char_length = highlight.char_length as usize;
let (byte_index, byte_length) = char_to_byte_range(char_index, char_length, text);
match byte_indexes.entry(byte_index) {
Entry::Vacant(entry) => {
entry.insert(byte_length);
}
Entry::Occupied(mut entry) => {
if *entry.get() < byte_length {
entry.insert(byte_length);
}
}
}
}
let mut title_areas = Vec::new();
title_areas.push(0);
for (byte_index, length) in byte_indexes {
title_areas.push(byte_index);
title_areas.push(byte_index + length);
}
title_areas.push(text.len());
title_areas.sort_unstable();
title_areas
}
/// note: matches must have been sorted by `char_index` and `char_length` before being passed.
///
/// ```no_run
/// matches.sort_unstable_by_key(|m| (m.char_index, m.char_length));
///
/// let matches = matches.matches.iter().filter(|m| SchemaAttr::new(m.attribute) == attr).cloned();
///
/// let (text, matches) = crop_text(&text, matches, 35);
/// ```
fn crop_text(
text: &str,
highlights: impl IntoIterator<Item = Highlight>,
context: usize,
) -> (String, Vec<Highlight>) {
let mut highlights = highlights.into_iter().peekable();
let char_index = highlights
.peek()
.map(|m| m.char_index as usize)
.unwrap_or(0);
let start = char_index.saturating_sub(context);
let text = text.chars().skip(start).take(context * 2).collect();
let highlights = highlights
.take_while(|m| (m.char_index as usize) + (m.char_length as usize) <= start + (context * 2))
.map(|highlight| Highlight {
char_index: highlight.char_index - start as u16,
..highlight
})
.collect();
(text, highlights)
}
fn search_command(command: SearchCommand, database: Database) -> Result<(), Box<dyn Error>> {
let env = &database.env;
let index = database
.open_index(INDEX_NAME)
.expect("Could not find index");
let reader = env.read_txn().unwrap();
let schema = index.main.schema(&reader)?;
reader.abort();
let schema = schema.ok_or(meilidb_core::Error::SchemaMissing)?;
let fields = command.displayed_fields.iter().map(String::as_str);
let fields = HashSet::from_iter(fields);
let config = Config::builder().auto_add_history(true).build();
let mut readline = Editor::<()>::with_config(config);
let _ = readline.load_history("query-history.txt");
for result in readline.iter("Searching for: ") {
match result {
Ok(query) => {
let start_total = Instant::now();
let reader = env.read_txn().unwrap();
let ref_index = &index;
let ref_reader = &reader;
let mut builder = index.query_builder();
if let Some(timeout) = command.fetch_timeout_ms {
builder.with_fetch_timeout(Duration::from_millis(timeout));
}
if let Some(ref filter) = command.filter {
let filter = filter.as_str();
let (positive, filter) = if filter.chars().next() == Some('!') {
(false, &filter[1..])
} else {
(true, filter)
};
let attr = schema
.attribute(&filter)
.expect("Could not find filtered attribute");
builder.with_filter(move |document_id| {
let string: String = ref_index
.document_attribute(ref_reader, document_id, attr)
.unwrap()
.unwrap();
(string == "true") == positive
});
}
let documents = builder.query(ref_reader, &query, 0..command.number_results)?;
let mut retrieve_duration = Duration::default();
let number_of_documents = documents.len();
for mut doc in documents {
doc.highlights
.sort_unstable_by_key(|m| (m.char_index, m.char_length));
let start_retrieve = Instant::now();
let result = index.document::<Document>(&reader, Some(&fields), doc.id);
retrieve_duration += start_retrieve.elapsed();
match result {
Ok(Some(document)) => {
println!("raw-id: {:?}", doc.id);
for (name, text) in document.0 {
print!("{}: ", name);
let attr = schema.attribute(&name).unwrap();
let highlights = doc
.highlights
.iter()
.filter(|m| SchemaAttr::new(m.attribute) == attr)
.cloned();
let (text, highlights) =
crop_text(&text, highlights, command.char_context);
let areas = create_highlight_areas(&text, &highlights);
display_highlights(&text, &areas)?;
println!();
}
}
Ok(None) => eprintln!("missing document"),
Err(e) => eprintln!("{}", e),
}
let mut matching_attributes = HashSet::new();
for highlight in doc.highlights {
let attr = SchemaAttr::new(highlight.attribute);
let name = schema.attribute_name(attr);
matching_attributes.insert(name);
}
let matching_attributes = Vec::from_iter(matching_attributes);
println!("matching in: {:?}", matching_attributes);
println!();
}
eprintln!(
"whole documents fields retrieve took {:.2?}",
retrieve_duration
);
eprintln!(
"===== Found {} results in {:.2?} =====",
number_of_documents,
start_total.elapsed()
);
}
Err(err) => {
println!("Error: {:?}", err);
break;
}
}
}
readline.save_history("query-history.txt").unwrap();
Ok(())
}
fn main() -> Result<(), Box<dyn Error>> {
env_logger::init();
let opt = Command::from_args();
let database = Database::open_or_create(opt.path())?;
match opt {
Command::Index(command) => index_command(command, database),
Command::Search(command) => search_command(command, database),
}
}


@ -1,48 +0,0 @@
use levenshtein_automata::{LevenshteinAutomatonBuilder as LevBuilder, DFA};
use once_cell::sync::OnceCell;
static LEVDIST0: OnceCell<LevBuilder> = OnceCell::new();
static LEVDIST1: OnceCell<LevBuilder> = OnceCell::new();
static LEVDIST2: OnceCell<LevBuilder> = OnceCell::new();
#[derive(Copy, Clone)]
enum PrefixSetting {
Prefix,
NoPrefix,
}
fn build_dfa_with_setting(query: &str, setting: PrefixSetting) -> DFA {
use PrefixSetting::{NoPrefix, Prefix};
match query.len() {
0..=4 => {
let builder = LEVDIST0.get_or_init(|| LevBuilder::new(0, true));
match setting {
Prefix => builder.build_prefix_dfa(query),
NoPrefix => builder.build_dfa(query),
}
}
5..=8 => {
let builder = LEVDIST1.get_or_init(|| LevBuilder::new(1, true));
match setting {
Prefix => builder.build_prefix_dfa(query),
NoPrefix => builder.build_dfa(query),
}
}
_ => {
let builder = LEVDIST2.get_or_init(|| LevBuilder::new(2, true));
match setting {
Prefix => builder.build_prefix_dfa(query),
NoPrefix => builder.build_dfa(query),
}
}
}
}
pub fn build_prefix_dfa(query: &str) -> DFA {
build_dfa_with_setting(query, PrefixSetting::Prefix)
}
pub fn build_dfa(query: &str) -> DFA {
build_dfa_with_setting(query, PrefixSetting::NoPrefix)
}


@ -1,220 +0,0 @@
mod dfa;
mod query_enhancer;
use std::cmp::Reverse;
use std::vec;
use fst::{IntoStreamer, Streamer};
use levenshtein_automata::DFA;
use meilidb_tokenizer::{is_cjk, split_query_string};
use crate::error::MResult;
use crate::store;
use self::dfa::{build_dfa, build_prefix_dfa};
pub use self::query_enhancer::QueryEnhancer;
use self::query_enhancer::QueryEnhancerBuilder;
const NGRAMS: usize = 3;
pub struct AutomatonProducer {
automatons: Vec<Vec<Automaton>>,
}
impl AutomatonProducer {
pub fn new(
reader: &zlmdb::RoTxn,
query: &str,
main_store: store::Main,
synonyms_store: store::Synonyms,
) -> MResult<(AutomatonProducer, QueryEnhancer)> {
let (automatons, query_enhancer) =
generate_automatons(reader, query, main_store, synonyms_store)?;
Ok((AutomatonProducer { automatons }, query_enhancer))
}
pub fn into_iter(self) -> vec::IntoIter<Vec<Automaton>> {
self.automatons.into_iter()
}
}
#[derive(Debug)]
pub struct Automaton {
pub index: usize,
pub ngram: usize,
pub query_len: usize,
pub is_exact: bool,
pub is_prefix: bool,
pub query: String,
}
impl Automaton {
pub fn dfa(&self) -> DFA {
if self.is_prefix {
build_prefix_dfa(&self.query)
} else {
build_dfa(&self.query)
}
}
fn exact(index: usize, ngram: usize, query: &str) -> Automaton {
Automaton {
index,
ngram,
query_len: query.len(),
is_exact: true,
is_prefix: false,
query: query.to_string(),
}
}
fn prefix_exact(index: usize, ngram: usize, query: &str) -> Automaton {
Automaton {
index,
ngram,
query_len: query.len(),
is_exact: true,
is_prefix: true,
query: query.to_string(),
}
}
fn non_exact(index: usize, ngram: usize, query: &str) -> Automaton {
Automaton {
index,
ngram,
query_len: query.len(),
is_exact: false,
is_prefix: false,
query: query.to_string(),
}
}
}
pub fn normalize_str(string: &str) -> String {
let mut string = string.to_lowercase();
if !string.contains(is_cjk) {
string = deunicode::deunicode_with_tofu(&string, "");
}
string
}
fn generate_automatons(
reader: &zlmdb::RoTxn,
query: &str,
main_store: store::Main,
synonym_store: store::Synonyms,
) -> MResult<(Vec<Vec<Automaton>>, QueryEnhancer)> {
let has_end_whitespace = query.chars().last().map_or(false, char::is_whitespace);
let query_words: Vec<_> = split_query_string(query).map(str::to_lowercase).collect();
let synonyms = match main_store.synonyms_fst(reader)? {
Some(synonym) => synonym,
None => fst::Set::default(),
};
let mut automaton_index = 0;
let mut automatons = Vec::new();
let mut enhancer_builder = QueryEnhancerBuilder::new(&query_words);
// We must not declare the original words to the query enhancer
// *but* we need to push them in the automatons list first
let mut original_automatons = Vec::new();
let mut original_words = query_words.iter().peekable();
while let Some(word) = original_words.next() {
let has_following_word = original_words.peek().is_some();
let not_prefix_dfa = has_following_word || has_end_whitespace || word.chars().all(is_cjk);
let automaton = if not_prefix_dfa {
Automaton::exact(automaton_index, 1, word)
} else {
Automaton::prefix_exact(automaton_index, 1, word)
};
automaton_index += 1;
original_automatons.push(automaton);
}
automatons.push(original_automatons);
for n in 1..=NGRAMS {
let mut ngrams = query_words.windows(n).enumerate().peekable();
while let Some((query_index, ngram_slice)) = ngrams.next() {
let query_range = query_index..query_index + n;
let ngram_nb_words = ngram_slice.len();
let ngram = ngram_slice.join(" ");
let has_following_word = ngrams.peek().is_some();
let not_prefix_dfa =
has_following_word || has_end_whitespace || ngram.chars().all(is_cjk);
// automaton of synonyms of the ngrams
let normalized = normalize_str(&ngram);
let lev = if not_prefix_dfa {
build_dfa(&normalized)
} else {
build_prefix_dfa(&normalized)
};
let mut stream = synonyms.search(&lev).into_stream();
while let Some(base) = stream.next() {
// only trigger alternatives when the last word has been typed
// i.e. "new " do not but "new yo" triggers alternatives to "new york"
let base = std::str::from_utf8(base).unwrap();
let base_nb_words = split_query_string(base).count();
if ngram_nb_words != base_nb_words {
continue;
}
if let Some(synonyms) = synonym_store.synonyms(reader, base.as_bytes())? {
let mut stream = synonyms.into_stream();
while let Some(synonyms) = stream.next() {
let synonyms = std::str::from_utf8(synonyms).unwrap();
let synonyms_words: Vec<_> = split_query_string(synonyms).collect();
let nb_synonym_words = synonyms_words.len();
let real_query_index = automaton_index;
enhancer_builder.declare(
query_range.clone(),
real_query_index,
&synonyms_words,
);
for synonym in synonyms_words {
let automaton = if nb_synonym_words == 1 {
Automaton::exact(automaton_index, n, synonym)
} else {
Automaton::non_exact(automaton_index, n, synonym)
};
automaton_index += 1;
automatons.push(vec![automaton]);
}
}
}
}
if n != 1 {
// automaton of concatenation of query words
let concat = ngram_slice.concat();
let normalized = normalize_str(&concat);
let real_query_index = automaton_index;
enhancer_builder.declare(query_range.clone(), real_query_index, &[&normalized]);
let automaton = Automaton::exact(automaton_index, n, &normalized);
automaton_index += 1;
automatons.push(vec![automaton]);
}
}
}
// order automatons, the most important first,
// we keep the original automatons at the front.
automatons[1..].sort_by_key(|a| {
let a = a.first().unwrap();
(Reverse(a.is_exact), a.ngram)
});
Ok((automatons, enhancer_builder.build()))
}


@ -1,423 +0,0 @@
use std::cmp::Ordering::{Equal, Greater, Less};
use std::ops::Range;
/// Return `true` if the specified range can accept the given replacements words.
/// Returns `false` if the replacement words are already present in the original query
/// or if there are fewer replacement words than the range to replace.
//
//
// ## Ignored because already present in original
//
// new york city subway
// -------- ^^^^
// / \
// [new york city]
//
//
// ## Ignored because smaller than the original
//
// new york city subway
// -------------
// \ /
// [new york]
//
//
// ## Accepted because bigger than the original
//
// NYC subway
// ---
// / \
// / \
// / \
// / \
// / \
// [new york city]
//
fn rewrite_range_with<S, T>(query: &[S], range: Range<usize>, words: &[T]) -> bool
where
S: AsRef<str>,
T: AsRef<str>,
{
if words.len() <= range.len() {
// there are fewer or equally many replacement words
// than there already are in the replaced range
return false;
}
// retrieve the part to rewrite but with the length
// of the replacement part
let original = query.iter().skip(range.start).take(words.len());
// check if the original query doesn't already contain
// the replacement words
!original
.map(AsRef::as_ref)
.eq(words.iter().map(AsRef::as_ref))
}
type Origin = usize;
type RealLength = usize;
struct FakeIntervalTree {
intervals: Vec<(Range<usize>, (Origin, RealLength))>,
}
impl FakeIntervalTree {
fn new(mut intervals: Vec<(Range<usize>, (Origin, RealLength))>) -> FakeIntervalTree {
intervals.sort_unstable_by_key(|(r, _)| (r.start, r.end));
FakeIntervalTree { intervals }
}
fn query(&self, point: usize) -> Option<(Range<usize>, (Origin, RealLength))> {
let element = self.intervals.binary_search_by(|(r, _)| {
if point >= r.start {
if point < r.end {
Equal
} else {
Less
}
} else {
Greater
}
});
let n = match element {
Ok(n) => n,
Err(n) => n,
};
match self.intervals.get(n) {
Some((range, value)) if range.contains(&point) => Some((range.clone(), *value)),
_otherwise => None,
}
}
}
pub struct QueryEnhancerBuilder<'a, S> {
query: &'a [S],
origins: Vec<usize>,
real_to_origin: Vec<(Range<usize>, (Origin, RealLength))>,
}
impl<S: AsRef<str>> QueryEnhancerBuilder<'_, S> {
pub fn new(query: &[S]) -> QueryEnhancerBuilder<S> {
// we initialize origins query indices based on their positions
let origins: Vec<_> = (0..=query.len()).collect();
let real_to_origin = origins.iter().map(|&o| (o..o + 1, (o, 1))).collect();
QueryEnhancerBuilder {
query,
origins,
real_to_origin,
}
}
/// Update the final real to origin query indices mapping.
///
/// `range` is the original words range that this `replacement` words replace
/// and `real` is the first real query index of these replacement words.
pub fn declare<T>(&mut self, range: Range<usize>, real: usize, replacement: &[T])
where
T: AsRef<str>,
{
// check if the range of original words
// can be rewritten with the replacement words
if rewrite_range_with(self.query, range.clone(), replacement) {
// this range can be replaced so we need to
// modify the origins accordingly
let offset = replacement.len() - range.len();
let previous_padding = self.origins[range.end - 1];
let current_offset = (self.origins[range.end] - 1) - previous_padding;
let diff = offset.saturating_sub(current_offset);
self.origins[range.end] += diff;
for r in &mut self.origins[range.end + 1..] {
*r += diff;
}
}
// we need to store the real number and origins relations
// this way it will be possible to know by how many
// we need to pad real query indices
let real_range = real..real + replacement.len().max(range.len());
let real_length = replacement.len();
self.real_to_origin
.push((real_range, (range.start, real_length)));
}
pub fn build(self) -> QueryEnhancer {
QueryEnhancer {
origins: self.origins,
real_to_origin: FakeIntervalTree::new(self.real_to_origin),
}
}
}
pub struct QueryEnhancer {
origins: Vec<usize>,
real_to_origin: FakeIntervalTree,
}
impl QueryEnhancer {
/// Returns the query indices to use to replace this real query index.
pub fn replacement(&self, real: u32) -> Range<u32> {
let real = real as usize;
// query the fake interval tree with the real query index
let (range, (origin, real_length)) = self
.real_to_origin
.query(real)
.expect("real has never been declared");
// if `real` is the end bound of the range
if (range.start + real_length - 1) == real {
let mut count = range.len();
let mut new_origin = origin;
for (i, slice) in self.origins[new_origin..].windows(2).enumerate() {
let len = slice[1] - slice[0];
count = count.saturating_sub(len);
if count == 0 {
new_origin = origin + i;
break;
}
}
let n = real - range.start;
let start = self.origins[origin];
let end = self.origins[new_origin + 1];
let remaining = (end - start) - n;
Range {
start: (start + n) as u32,
end: (start + n + remaining) as u32,
}
} else {
// just return the origin along with
// the real position of the word
let n = real as usize - range.start;
let origin = self.origins[origin];
Range {
start: (origin + n) as u32,
end: (origin + n + 1) as u32,
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn original_unmodified() {
let query = ["new", "york", "city", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// new york = new york city
builder.declare(0..2, 4, &["new", "york", "city"]);
// ^ 4 5 6
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // new
assert_eq!(enhancer.replacement(1), 1..2); // york
assert_eq!(enhancer.replacement(2), 2..3); // city
assert_eq!(enhancer.replacement(3), 3..4); // subway
assert_eq!(enhancer.replacement(4), 0..1); // new
assert_eq!(enhancer.replacement(5), 1..2); // york
assert_eq!(enhancer.replacement(6), 2..3); // city
}
#[test]
fn simple_growing() {
let query = ["new", "york", "subway"];
// 0 1 2
let mut builder = QueryEnhancerBuilder::new(&query);
// new york = new york city
builder.declare(0..2, 3, &["new", "york", "city"]);
// ^ 3 4 5
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // new
assert_eq!(enhancer.replacement(1), 1..3); // york
assert_eq!(enhancer.replacement(2), 3..4); // subway
assert_eq!(enhancer.replacement(3), 0..1); // new
assert_eq!(enhancer.replacement(4), 1..2); // york
assert_eq!(enhancer.replacement(5), 2..3); // city
}
#[test]
fn same_place_growings() {
let query = ["NY", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NY = new york
builder.declare(0..1, 2, &["new", "york"]);
// ^ 2 3
// NY = new york city
builder.declare(0..1, 4, &["new", "york", "city"]);
// ^ 4 5 6
// NY = NYC
builder.declare(0..1, 7, &["NYC"]);
// ^ 7
// NY = new york city
builder.declare(0..1, 8, &["new", "york", "city"]);
// ^ 8 9 10
// subway = underground train
builder.declare(1..2, 11, &["underground", "train"]);
// ^ 11 12
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..3); // NY
assert_eq!(enhancer.replacement(1), 3..5); // subway
assert_eq!(enhancer.replacement(2), 0..1); // new
assert_eq!(enhancer.replacement(3), 1..3); // york
assert_eq!(enhancer.replacement(4), 0..1); // new
assert_eq!(enhancer.replacement(5), 1..2); // york
assert_eq!(enhancer.replacement(6), 2..3); // city
assert_eq!(enhancer.replacement(7), 0..3); // NYC
assert_eq!(enhancer.replacement(8), 0..1); // new
assert_eq!(enhancer.replacement(9), 1..2); // york
assert_eq!(enhancer.replacement(10), 2..3); // city
assert_eq!(enhancer.replacement(11), 3..4); // underground
assert_eq!(enhancer.replacement(12), 4..5); // train
}
#[test]
fn bigger_growing() {
let query = ["NYC", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(0..1, 2, &["new", "york", "city"]);
// ^ 2 3 4
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..3); // NYC
assert_eq!(enhancer.replacement(1), 3..4); // subway
assert_eq!(enhancer.replacement(2), 0..1); // new
assert_eq!(enhancer.replacement(3), 1..2); // york
assert_eq!(enhancer.replacement(4), 2..3); // city
}
#[test]
fn middle_query_growing() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..6); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
}
#[test]
fn end_query_growing() {
let query = ["NYC", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(1..2, 2, &["underground", "train"]);
// ^ 2 3
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // NYC
assert_eq!(enhancer.replacement(1), 1..3); // subway
assert_eq!(enhancer.replacement(2), 1..2); // underground
assert_eq!(enhancer.replacement(3), 2..3); // train
}
#[test]
fn multiple_growings() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
// subway = underground train
builder.declare(3..4, 7, &["underground", "train"]);
// ^ 7 8
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..7); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
assert_eq!(enhancer.replacement(7), 5..6); // underground
assert_eq!(enhancer.replacement(8), 6..7); // train
}
#[test]
fn multiple_probable_growings() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
// subway = underground train
builder.declare(3..4, 7, &["underground", "train"]);
// ^ 7 8
// great awesome = good
builder.declare(0..2, 9, &["good"]);
// ^ 9
// awesome NYC = NY
builder.declare(1..3, 10, &["NY"]);
// ^^ 10
// NYC subway = metro
builder.declare(2..4, 11, &["metro"]);
// ^^ 11
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..7); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
assert_eq!(enhancer.replacement(7), 5..6); // underground
assert_eq!(enhancer.replacement(8), 6..7); // train
assert_eq!(enhancer.replacement(9), 0..2); // good
assert_eq!(enhancer.replacement(10), 1..5); // NY
assert_eq!(enhancer.replacement(11), 2..5); // metro
}
}
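A minimal usage sketch of the builder and enhancer above; the query, synonym words and real indices are illustrative and mirror the `bigger_growing` test rather than the real search pipeline:
let query = ["NYC", "subway"];
let mut builder = QueryEnhancerBuilder::new(&query);
// assume the synonym store appended "new york city" at real indices 2, 3 and 4
builder.declare(0..1, 2, &["new", "york", "city"]);
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..3); // "NYC" now covers three original positions
assert_eq!(enhancer.replacement(3), 1..2); // real index 3 ("york") maps back to position 1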

View File

@ -1,16 +0,0 @@
use crate::criterion::Criterion;
use crate::RawDocument;
use std::cmp::Ordering;
#[derive(Debug, Clone, Copy)]
pub struct DocumentId;
impl Criterion for DocumentId {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
lhs.id.cmp(&rhs.id)
}
fn name(&self) -> &str {
"DocumentId"
}
}

View File

@ -1,133 +0,0 @@
use std::cmp::Ordering;
use meilidb_schema::SchemaAttr;
use sdset::Set;
use slice_group_by::GroupBy;
use crate::criterion::Criterion;
use crate::RawDocument;
#[inline]
fn number_exact_matches(
query_index: &[u32],
attribute: &[u16],
is_exact: &[bool],
fields_counts: &Set<(SchemaAttr, u64)>,
) -> usize {
let mut count = 0;
let mut index = 0;
for group in query_index.linear_group() {
let len = group.len();
let mut found_exact = false;
for (pos, _) in is_exact[index..index + len]
.iter()
.filter(|x| **x)
.enumerate()
{
found_exact = true;
if let Ok(pos) = fields_counts.binary_search_by_key(&attribute[pos], |(a, _)| a.0) {
let (_, count) = fields_counts[pos];
if count == 1 {
return usize::max_value();
}
}
}
count += found_exact as usize;
index += len;
}
count
}
#[derive(Debug, Clone, Copy)]
pub struct Exact;
impl Criterion for Exact {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
let is_exact = lhs.is_exact();
let attribute = lhs.attribute();
let fields_counts = &lhs.fields_counts;
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
let rhs = {
let query_index = rhs.query_index();
let is_exact = rhs.is_exact();
let attribute = rhs.attribute();
let fields_counts = &rhs.fields_counts;
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
lhs.cmp(&rhs).reverse()
}
fn name(&self) -> &str {
"Exact"
}
}
#[cfg(test)]
mod tests {
use super::*;
// typing: "soulier"
//
// doc0: "Soulier bleu"
// doc1: "souliereres rouge"
#[test]
fn easy_case() {
let doc0 = {
let query_index = &[0];
let attribute = &[0];
let is_exact = &[true];
let fields_counts = Set::new(&[(SchemaAttr(0), 2)]).unwrap();
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
let doc1 = {
let query_index = &[0];
let attribute = &[0];
let is_exact = &[false];
let fields_counts = Set::new(&[(SchemaAttr(0), 2)]).unwrap();
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
assert_eq!(doc0.cmp(&doc1).reverse(), Ordering::Less);
}
// typing: "soulier"
//
// doc0: { 0. "soulier" }
// doc1: { 0. "soulier bleu et blanc" }
#[test]
fn basic() {
let doc0 = {
let query_index = &[0];
let attribute = &[0];
let is_exact = &[true];
let fields_counts = Set::new(&[(SchemaAttr(0), 1)]).unwrap();
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
let doc1 = {
let query_index = &[0];
let attribute = &[0];
let is_exact = &[true];
let fields_counts = Set::new(&[(SchemaAttr(0), 4)]).unwrap();
number_exact_matches(query_index, attribute, is_exact, fields_counts)
};
assert_eq!(doc0.cmp(&doc1).reverse(), Ordering::Less);
}
}

View File

@ -1,121 +0,0 @@
mod document_id;
mod exact;
mod number_of_words;
mod sort_by_attr;
mod sum_of_typos;
mod sum_of_words_attribute;
mod sum_of_words_position;
mod words_proximity;
use crate::RawDocument;
use std::cmp::Ordering;
pub use self::{
document_id::DocumentId, exact::Exact, number_of_words::NumberOfWords,
sort_by_attr::SortByAttr, sum_of_typos::SumOfTypos,
sum_of_words_attribute::SumOfWordsAttribute, sum_of_words_position::SumOfWordsPosition,
words_proximity::WordsProximity,
};
pub trait Criterion: Send + Sync {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering;
fn name(&self) -> &str;
#[inline]
fn eq(&self, lhs: &RawDocument, rhs: &RawDocument) -> bool {
self.evaluate(lhs, rhs) == Ordering::Equal
}
}
impl<'a, T: Criterion + ?Sized + Send + Sync> Criterion for &'a T {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
(**self).evaluate(lhs, rhs)
}
fn name(&self) -> &str {
(**self).name()
}
fn eq(&self, lhs: &RawDocument, rhs: &RawDocument) -> bool {
(**self).eq(lhs, rhs)
}
}
impl<T: Criterion + ?Sized> Criterion for Box<T> {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
(**self).evaluate(lhs, rhs)
}
fn name(&self) -> &str {
(**self).name()
}
fn eq(&self, lhs: &RawDocument, rhs: &RawDocument) -> bool {
(**self).eq(lhs, rhs)
}
}
#[derive(Default)]
pub struct CriteriaBuilder<'a> {
inner: Vec<Box<dyn Criterion + 'a>>,
}
impl<'a> CriteriaBuilder<'a> {
pub fn new() -> CriteriaBuilder<'a> {
CriteriaBuilder { inner: Vec::new() }
}
pub fn with_capacity(capacity: usize) -> CriteriaBuilder<'a> {
CriteriaBuilder {
inner: Vec::with_capacity(capacity),
}
}
pub fn reserve(&mut self, additional: usize) {
self.inner.reserve(additional)
}
pub fn add<C: 'a>(mut self, criterion: C) -> CriteriaBuilder<'a>
where
C: Criterion,
{
self.push(criterion);
self
}
pub fn push<C: 'a>(&mut self, criterion: C)
where
C: Criterion,
{
self.inner.push(Box::new(criterion));
}
pub fn build(self) -> Criteria<'a> {
Criteria { inner: self.inner }
}
}
pub struct Criteria<'a> {
inner: Vec<Box<dyn Criterion + 'a>>,
}
impl<'a> Default for Criteria<'a> {
fn default() -> Self {
CriteriaBuilder::with_capacity(7)
.add(SumOfTypos)
.add(NumberOfWords)
.add(WordsProximity)
.add(SumOfWordsAttribute)
.add(SumOfWordsPosition)
.add(Exact)
.add(DocumentId)
.build()
}
}
impl<'a> AsRef<[Box<dyn Criterion + 'a>]> for Criteria<'a> {
fn as_ref(&self) -> &[Box<dyn Criterion + 'a>] {
&self.inner
}
}
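A hedged sketch of how a custom criterion plugs into the builder above; `AlwaysEqual` is an illustrative name, not a type provided by the crate:
use std::cmp::Ordering;
struct AlwaysEqual;
impl Criterion for AlwaysEqual {
    fn evaluate(&self, _lhs: &RawDocument, _rhs: &RawDocument) -> Ordering {
        Ordering::Equal // never discriminates, so the following criteria decide
    }
    fn name(&self) -> &str {
        "AlwaysEqual"
    }
}
let criteria = CriteriaBuilder::with_capacity(3)
    .add(SumOfTypos)
    .add(AlwaysEqual)
    .add(DocumentId)
    .build();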

View File

@ -1,31 +0,0 @@
use crate::criterion::Criterion;
use crate::RawDocument;
use slice_group_by::GroupBy;
use std::cmp::Ordering;
#[inline]
fn number_of_query_words(query_index: &[u32]) -> usize {
query_index.linear_group().count()
}
#[derive(Debug, Clone, Copy)]
pub struct NumberOfWords;
impl Criterion for NumberOfWords {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
number_of_query_words(query_index)
};
let rhs = {
let query_index = rhs.query_index();
number_of_query_words(query_index)
};
lhs.cmp(&rhs).reverse()
}
fn name(&self) -> &str {
"NumberOfWords"
}
}

View File

@ -1,130 +0,0 @@
use std::cmp::Ordering;
use std::error::Error;
use std::fmt;
use crate::criterion::Criterion;
use crate::{RankedMap, RawDocument};
use meilidb_schema::{Schema, SchemaAttr};
/// A helper struct that permits sorting documents by
/// some of their stored attributes.
///
/// # Note
///
/// If a document cannot be deserialized it will be considered [`None`][].
///
/// Deserialized documents are compared like `Some(doc0).cmp(&Some(doc1))`,
/// so you should check the [`Ord`] implementation of `Option`.
///
/// [`None`]: https://doc.rust-lang.org/std/option/enum.Option.html#variant.None
/// [`Ord`]: https://doc.rust-lang.org/std/option/enum.Option.html#impl-Ord
///
/// # Example
///
/// ```ignore
/// use serde_derive::Deserialize;
/// use meilidb::rank::criterion::*;
///
/// let custom_ranking = SortByAttr::lower_is_better(&ranked_map, &schema, "published_at")?;
///
/// let builder = CriteriaBuilder::with_capacity(8)
/// .add(SumOfTypos)
/// .add(NumberOfWords)
/// .add(WordsProximity)
/// .add(SumOfWordsAttribute)
/// .add(SumOfWordsPosition)
/// .add(Exact)
/// .add(custom_ranking)
/// .add(DocumentId);
///
/// let criterion = builder.build();
///
/// ```
pub struct SortByAttr<'a> {
ranked_map: &'a RankedMap,
attr: SchemaAttr,
reversed: bool,
}
impl<'a> SortByAttr<'a> {
pub fn lower_is_better(
ranked_map: &'a RankedMap,
schema: &Schema,
attr_name: &str,
) -> Result<SortByAttr<'a>, SortByAttrError> {
SortByAttr::new(ranked_map, schema, attr_name, false)
}
pub fn higher_is_better(
ranked_map: &'a RankedMap,
schema: &Schema,
attr_name: &str,
) -> Result<SortByAttr<'a>, SortByAttrError> {
SortByAttr::new(ranked_map, schema, attr_name, true)
}
fn new(
ranked_map: &'a RankedMap,
schema: &Schema,
attr_name: &str,
reversed: bool,
) -> Result<SortByAttr<'a>, SortByAttrError> {
let attr = match schema.attribute(attr_name) {
Some(attr) => attr,
None => return Err(SortByAttrError::AttributeNotFound),
};
if !schema.props(attr).is_ranked() {
return Err(SortByAttrError::AttributeNotRegisteredForRanking);
}
Ok(SortByAttr {
ranked_map,
attr,
reversed,
})
}
}
impl<'a> Criterion for SortByAttr<'a> {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = self.ranked_map.get(lhs.id, self.attr);
let rhs = self.ranked_map.get(rhs.id, self.attr);
match (lhs, rhs) {
(Some(lhs), Some(rhs)) => {
let order = lhs.cmp(&rhs);
if self.reversed {
order.reverse()
} else {
order
}
}
(None, Some(_)) => Ordering::Greater,
(Some(_), None) => Ordering::Less,
(None, None) => Ordering::Equal,
}
}
fn name(&self) -> &str {
"SortByAttr"
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum SortByAttrError {
AttributeNotFound,
AttributeNotRegisteredForRanking,
}
impl fmt::Display for SortByAttrError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
use SortByAttrError::*;
match self {
AttributeNotFound => f.write_str("attribute not found in the schema"),
AttributeNotRegisteredForRanking => f.write_str("attribute not registered for ranking"),
}
}
}
impl Error for SortByAttrError {}

View File

@ -1,116 +0,0 @@
use std::cmp::Ordering;
use slice_group_by::GroupBy;
use crate::criterion::Criterion;
use crate::RawDocument;
// This function is a rough base-10 logarithm of (n + 1).
// It is safe to panic on inputs higher than 3,
// as the number of typos is never bigger than that.
#[inline]
fn custom_log10(n: u8) -> f32 {
match n {
0 => 0.0, // log(1)
1 => 0.30102, // log(2)
2 => 0.47712, // log(3)
3 => 0.60205, // log(4)
_ => panic!("invalid number"),
}
}
#[inline]
fn sum_matches_typos(query_index: &[u32], distance: &[u8]) -> usize {
let mut number_words: usize = 0;
let mut sum_typos = 0.0;
let mut index = 0;
for group in query_index.linear_group() {
sum_typos += custom_log10(distance[index]);
number_words += 1;
index += group.len();
}
(number_words as f32 / (sum_typos + 1.0) * 1000.0) as usize
}
#[derive(Debug, Clone, Copy)]
pub struct SumOfTypos;
impl Criterion for SumOfTypos {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
let distance = lhs.distance();
sum_matches_typos(query_index, distance)
};
let rhs = {
let query_index = rhs.query_index();
let distance = rhs.distance();
sum_matches_typos(query_index, distance)
};
lhs.cmp(&rhs).reverse()
}
fn name(&self) -> &str {
"SumOfTypos"
}
}
#[cfg(test)]
mod tests {
use super::*;
// typing: "Geox CEO"
//
// doc0: "Geox SpA: CEO and Executive"
// doc1: "Mt. Gox CEO Resigns From Bitcoin Foundation"
#[test]
fn one_typo_reference() {
let query_index0 = &[0, 1];
let distance0 = &[0, 0];
let query_index1 = &[0, 1];
let distance1 = &[1, 0];
let doc0 = sum_matches_typos(query_index0, distance0);
let doc1 = sum_matches_typos(query_index1, distance1);
assert_eq!(doc0.cmp(&doc1).reverse(), Ordering::Less);
}
// typing: "bouton manchette"
//
// doc0: "bouton manchette"
// doc1: "bouton"
#[test]
fn no_typo() {
let query_index0 = &[0, 1];
let distance0 = &[0, 0];
let query_index1 = &[0];
let distance1 = &[0];
let doc0 = sum_matches_typos(query_index0, distance0);
let doc1 = sum_matches_typos(query_index1, distance1);
assert_eq!(doc0.cmp(&doc1).reverse(), Ordering::Less);
}
// typing: "bouton manchztte"
//
// doc0: "bouton manchette"
// doc1: "bouton"
#[test]
fn one_typo() {
let query_index0 = &[0, 1];
let distance0 = &[0, 1];
let query_index1 = &[0];
let distance1 = &[0];
let doc0 = sum_matches_typos(query_index0, distance0);
let doc1 = sum_matches_typos(query_index1, distance1);
assert_eq!(doc0.cmp(&doc1).reverse(), Ordering::Less);
}
}
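As a worked example of the score computed above, consistent with the `one_typo` test: two matched words with distances [0, 1] give sum_typos ≈ 0.30102, so the score is 2 / 1.30102 * 1000 ≈ 1537, while a single exact word scores 1 / 1.0 * 1000 = 1000; the comparison is reversed, so the higher-scoring document ranks first.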

View File

@ -1,64 +0,0 @@
use crate::criterion::Criterion;
use crate::RawDocument;
use slice_group_by::GroupBy;
use std::cmp::Ordering;
#[inline]
fn sum_matches_attributes(query_index: &[u32], attribute: &[u16]) -> usize {
let mut sum_attributes = 0;
let mut index = 0;
for group in query_index.linear_group() {
sum_attributes += attribute[index] as usize;
index += group.len();
}
sum_attributes
}
#[derive(Debug, Clone, Copy)]
pub struct SumOfWordsAttribute;
impl Criterion for SumOfWordsAttribute {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
let attribute = lhs.attribute();
sum_matches_attributes(query_index, attribute)
};
let rhs = {
let query_index = rhs.query_index();
let attribute = rhs.attribute();
sum_matches_attributes(query_index, attribute)
};
lhs.cmp(&rhs)
}
fn name(&self) -> &str {
"SumOfWordsAttribute"
}
}
#[cfg(test)]
mod tests {
use super::*;
// typing: "soulier"
//
// doc0: { 0. "Soulier bleu", 1. "bla bla bla" }
// doc1: { 0. "Botte rouge", 1. "Soulier en cuir" }
#[test]
fn title_vs_description() {
let query_index0 = &[0];
let attribute0 = &[0];
let query_index1 = &[0];
let attribute1 = &[1];
let doc0 = sum_matches_attributes(query_index0, attribute0);
let doc1 = sum_matches_attributes(query_index1, attribute1);
assert_eq!(doc0.cmp(&doc1), Ordering::Less);
}
}

View File

@ -1,64 +0,0 @@
use crate::criterion::Criterion;
use crate::RawDocument;
use slice_group_by::GroupBy;
use std::cmp::Ordering;
#[inline]
fn sum_matches_attribute_index(query_index: &[u32], word_index: &[u16]) -> usize {
let mut sum_word_index = 0;
let mut index = 0;
for group in query_index.linear_group() {
sum_word_index += word_index[index] as usize;
index += group.len();
}
sum_word_index
}
#[derive(Debug, Clone, Copy)]
pub struct SumOfWordsPosition;
impl Criterion for SumOfWordsPosition {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
let word_index = lhs.word_index();
sum_matches_attribute_index(query_index, word_index)
};
let rhs = {
let query_index = rhs.query_index();
let word_index = rhs.word_index();
sum_matches_attribute_index(query_index, word_index)
};
lhs.cmp(&rhs)
}
fn name(&self) -> &str {
"SumOfWordsPosition"
}
}
#[cfg(test)]
mod tests {
use super::*;
// typing: "soulier"
//
// doc0: "Soulier bleu"
// doc1: "Botte rouge et soulier noir"
#[test]
fn easy_case() {
let query_index0 = &[0];
let word_index0 = &[0];
let query_index1 = &[0];
let word_index1 = &[3];
let doc0 = sum_matches_attribute_index(query_index0, word_index0);
let doc1 = sum_matches_attribute_index(query_index1, word_index1);
assert_eq!(doc0.cmp(&doc1), Ordering::Less);
}
}

View File

@ -1,164 +0,0 @@
use crate::criterion::Criterion;
use crate::RawDocument;
use slice_group_by::GroupBy;
use std::cmp::{self, Ordering};
const MAX_DISTANCE: u16 = 8;
#[inline]
fn clone_tuple<T: Clone, U: Clone>((a, b): (&T, &U)) -> (T, U) {
(a.clone(), b.clone())
}
fn index_proximity(lhs: u16, rhs: u16) -> u16 {
if lhs < rhs {
cmp::min(rhs - lhs, MAX_DISTANCE)
} else {
cmp::min(lhs - rhs, MAX_DISTANCE) + 1
}
}
fn attribute_proximity((lattr, lwi): (u16, u16), (rattr, rwi): (u16, u16)) -> u16 {
if lattr != rattr {
return MAX_DISTANCE;
}
index_proximity(lwi, rwi)
}
fn min_proximity((lattr, lwi): (&[u16], &[u16]), (rattr, rwi): (&[u16], &[u16])) -> u16 {
let mut min_prox = u16::max_value();
for a in lattr.iter().zip(lwi) {
for b in rattr.iter().zip(rwi) {
let a = clone_tuple(a);
let b = clone_tuple(b);
min_prox = cmp::min(min_prox, attribute_proximity(a, b));
}
}
min_prox
}
fn matches_proximity(
query_index: &[u32],
distance: &[u8],
attribute: &[u16],
word_index: &[u16],
) -> u16 {
let mut query_index_groups = query_index.linear_group();
let mut proximity = 0;
let mut index = 0;
let get_attr_wi = |index: usize, group_len: usize| {
// retrieve the first distance group (with the lowest values)
let len = distance[index..index + group_len]
.linear_group()
.next()
.unwrap()
.len();
let rattr = &attribute[index..index + len];
let rwi = &word_index[index..index + len];
(rattr, rwi)
};
let mut last = query_index_groups.next().map(|group| {
let attr_wi = get_attr_wi(index, group.len());
index += group.len();
attr_wi
});
// iter by windows of size 2
while let (Some(lhs), Some(rhs)) = (last, query_index_groups.next()) {
let attr_wi = get_attr_wi(index, rhs.len());
proximity += min_proximity(lhs, attr_wi);
last = Some(attr_wi);
index += rhs.len();
}
proximity
}
#[derive(Debug, Clone, Copy)]
pub struct WordsProximity;
impl Criterion for WordsProximity {
fn evaluate(&self, lhs: &RawDocument, rhs: &RawDocument) -> Ordering {
let lhs = {
let query_index = lhs.query_index();
let distance = lhs.distance();
let attribute = lhs.attribute();
let word_index = lhs.word_index();
matches_proximity(query_index, distance, attribute, word_index)
};
let rhs = {
let query_index = rhs.query_index();
let distance = rhs.distance();
let attribute = rhs.attribute();
let word_index = rhs.word_index();
matches_proximity(query_index, distance, attribute, word_index)
};
lhs.cmp(&rhs)
}
fn name(&self) -> &str {
"WordsProximity"
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn three_different_attributes() {
// "soup" "of the" "the day"
//
// { id: 0, attr: 0, attr_index: 0 }
// { id: 1, attr: 1, attr_index: 0 }
// { id: 2, attr: 1, attr_index: 1 }
// { id: 2, attr: 2, attr_index: 0 }
// { id: 3, attr: 3, attr_index: 1 }
let query_index = &[0, 1, 2, 2, 3];
let distance = &[0, 0, 0, 0, 0];
let attribute = &[0, 1, 1, 2, 3];
let word_index = &[0, 0, 1, 0, 1];
// soup -> of = 8
// + of -> the = 1
// + the -> day = 8 (not 1)
assert_eq!(
matches_proximity(query_index, distance, attribute, word_index),
17
);
}
#[test]
fn two_different_attributes() {
// "soup day" "soup of the day"
//
// { id: 0, attr: 0, attr_index: 0 }
// { id: 0, attr: 1, attr_index: 0 }
// { id: 1, attr: 1, attr_index: 1 }
// { id: 2, attr: 1, attr_index: 2 }
// { id: 3, attr: 0, attr_index: 1 }
// { id: 3, attr: 1, attr_index: 3 }
let query_index = &[0, 0, 1, 2, 3, 3];
let distance = &[0, 0, 0, 0, 0, 0];
let attribute = &[0, 1, 1, 1, 0, 1];
let word_index = &[0, 0, 1, 2, 1, 3];
// soup -> of = 1
// + of -> the = 1
// + the -> day = 1
assert_eq!(
matches_proximity(query_index, distance, attribute, word_index),
3
);
}
}
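A worked note on the helpers above: index_proximity(2, 4) returns min(4 - 2, 8) = 2, while index_proximity(4, 2) returns min(4 - 2, 8) + 1 = 3, so words matched in reverse order cost one extra unit; any distance is capped at MAX_DISTANCE = 8, which is also the fixed cost when the two matches come from different attributes.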

View File

@ -1,205 +0,0 @@
use std::collections::hash_map::{Entry, HashMap};
use std::fs::File;
use std::path::Path;
use std::sync::{Arc, RwLock};
use std::{fs, thread};
use crossbeam_channel::Receiver;
use log::{debug, error};
use zlmdb::types::{Str, Unit};
use zlmdb::{CompactionOption, Result as ZResult};
use crate::{store, update, Index, MResult};
pub type BoxUpdateFn = Box<dyn Fn(update::UpdateResult) + Send + Sync + 'static>;
type ArcSwapFn = arc_swap::ArcSwapOption<BoxUpdateFn>;
pub struct Database {
pub env: zlmdb::Env,
common_store: zlmdb::DynDatabase,
indexes_store: zlmdb::Database<Str, Unit>,
indexes: RwLock<HashMap<String, (Index, Arc<ArcSwapFn>, thread::JoinHandle<()>)>>,
}
fn update_awaiter(
receiver: Receiver<()>,
env: zlmdb::Env,
update_fn: Arc<ArcSwapFn>,
index: Index,
) {
for () in receiver {
// consume all updates in order (oldest first)
loop {
let mut writer = match env.write_txn() {
Ok(writer) => writer,
Err(e) => {
error!("LMDB writer transaction begin failed: {}", e);
break;
}
};
match update::update_task(&mut writer, index.clone()) {
Ok(Some(status)) => {
if let Err(e) = writer.commit() {
error!("update transaction failed: {}", e)
}
if let Some(ref callback) = *update_fn.load() {
(callback)(status);
}
}
// no more updates to handle for now
Ok(None) => {
debug!("no more updates");
writer.abort();
break;
}
Err(e) => {
error!("update task failed: {}", e);
writer.abort()
}
}
}
}
}
impl Database {
pub fn open_or_create(path: impl AsRef<Path>) -> MResult<Database> {
fs::create_dir_all(path.as_ref())?;
let env = zlmdb::EnvOpenOptions::new()
.map_size(10 * 1024 * 1024 * 1024) // 10GB
.max_dbs(3000)
.open(path)?;
let common_store = env.create_dyn_database(Some("common"))?;
let indexes_store = env.create_database::<Str, Unit>(Some("indexes"))?;
// list all indexes that need to be opened
let mut must_open = Vec::new();
let reader = env.read_txn()?;
for result in indexes_store.iter(&reader)? {
let (index_name, _) = result?;
must_open.push(index_name.to_owned());
}
reader.abort();
// open the previously aggregated indexes
let mut indexes = HashMap::new();
for index_name in must_open {
let (sender, receiver) = crossbeam_channel::bounded(100);
let index = match store::open(&env, &index_name, sender.clone())? {
Some(index) => index,
None => {
log::warn!(
"the index {} doesn't exist or has not all the databases",
index_name
);
continue;
}
};
let update_fn = Arc::new(ArcSwapFn::empty());
let env_clone = env.clone();
let index_clone = index.clone();
let update_fn_clone = update_fn.clone();
let handle = thread::spawn(move || {
update_awaiter(receiver, env_clone, update_fn_clone, index_clone)
});
// send an update notification to make sure that
// possible pre-boot updates are consumed
sender.send(()).unwrap();
let result = indexes.insert(index_name, (index, update_fn, handle));
assert!(
result.is_none(),
"The index should not have been already open"
);
}
Ok(Database {
env,
common_store,
indexes_store,
indexes: RwLock::new(indexes),
})
}
pub fn open_index(&self, name: impl AsRef<str>) -> Option<Index> {
let indexes_lock = self.indexes.read().unwrap();
match indexes_lock.get(name.as_ref()) {
Some((index, ..)) => Some(index.clone()),
None => None,
}
}
pub fn create_index(&self, name: impl AsRef<str>) -> MResult<Index> {
let name = name.as_ref();
let mut indexes_lock = self.indexes.write().unwrap();
match indexes_lock.entry(name.to_owned()) {
Entry::Occupied(_) => Err(crate::Error::IndexAlreadyExists),
Entry::Vacant(entry) => {
let (sender, receiver) = crossbeam_channel::bounded(100);
let index = store::create(&self.env, name, sender)?;
let mut writer = self.env.write_txn()?;
self.indexes_store.put(&mut writer, name, &())?;
let env_clone = self.env.clone();
let index_clone = index.clone();
let no_update_fn = Arc::new(ArcSwapFn::empty());
let no_update_fn_clone = no_update_fn.clone();
let handle = thread::spawn(move || {
update_awaiter(receiver, env_clone, no_update_fn_clone, index_clone)
});
writer.commit()?;
entry.insert((index.clone(), no_update_fn, handle));
Ok(index)
}
}
}
pub fn set_update_callback(&self, name: impl AsRef<str>, update_fn: BoxUpdateFn) -> bool {
let indexes_lock = self.indexes.read().unwrap();
match indexes_lock.get(name.as_ref()) {
Some((_, current_update_fn, _)) => {
let update_fn = Some(Arc::new(update_fn));
current_update_fn.swap(update_fn);
true
}
None => false,
}
}
pub fn unset_update_callback(&self, name: impl AsRef<str>) -> bool {
let indexes_lock = self.indexes.read().unwrap();
match indexes_lock.get(name.as_ref()) {
Some((_, current_update_fn, _)) => {
current_update_fn.swap(None);
true
}
None => false,
}
}
pub fn copy_and_compact_to_path<P: AsRef<Path>>(&self, path: P) -> ZResult<File> {
self.env.copy_to_path(path, CompactionOption::Enabled)
}
pub fn indexes_names(&self) -> MResult<Vec<String>> {
let indexes = self.indexes.read().unwrap();
Ok(indexes.keys().cloned().collect())
}
pub fn common_store(&self) -> zlmdb::DynDatabase {
self.common_store
}
}
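A hedged sketch of the lifecycle this struct exposes, written inside a function returning MResult<()>; the path and index name are illustrative:
let database = Database::open_or_create("target/example.mdb")?;
let index = database.create_index("movies")?;
// register a callback invoked after each processed update of this index
database.set_update_callback("movies", Box::new(|_status| {
    // react to the update result here
}));
assert!(database.open_index("movies").is_some());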

View File

@ -1,103 +0,0 @@
use hashbrown::HashMap;
use std::hash::Hash;
pub struct DistinctMap<K> {
inner: HashMap<K, usize>,
limit: usize,
len: usize,
}
impl<K: Hash + Eq> DistinctMap<K> {
pub fn new(limit: usize) -> Self {
DistinctMap {
inner: HashMap::new(),
limit,
len: 0,
}
}
pub fn len(&self) -> usize {
self.len
}
}
pub struct BufferedDistinctMap<'a, K> {
internal: &'a mut DistinctMap<K>,
inner: HashMap<K, usize>,
len: usize,
}
impl<'a, K: Hash + Eq> BufferedDistinctMap<'a, K> {
pub fn new(internal: &'a mut DistinctMap<K>) -> BufferedDistinctMap<'a, K> {
BufferedDistinctMap {
internal,
inner: HashMap::new(),
len: 0,
}
}
pub fn register(&mut self, key: K) -> bool {
let internal_seen = self.internal.inner.get(&key).unwrap_or(&0);
let inner_seen = self.inner.entry(key).or_insert(0);
let seen = *internal_seen + *inner_seen;
if seen < self.internal.limit {
*inner_seen += 1;
self.len += 1;
true
} else {
false
}
}
pub fn register_without_key(&mut self) -> bool {
self.len += 1;
true
}
pub fn transfert_to_internal(&mut self) {
for (k, v) in self.inner.drain() {
let value = self.internal.inner.entry(k).or_insert(0);
*value += v;
}
self.internal.len += self.len;
self.len = 0;
}
pub fn len(&self) -> usize {
self.internal.len() + self.len
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn easy_distinct_map() {
let mut map = DistinctMap::new(2);
let mut buffered = BufferedDistinctMap::new(&mut map);
for x in &[1, 1, 1, 2, 3, 4, 5, 6, 6, 6, 6, 6] {
buffered.register(x);
}
buffered.transfert_to_internal();
assert_eq!(map.len(), 8);
let mut map = DistinctMap::new(2);
let mut buffered = BufferedDistinctMap::new(&mut map);
assert_eq!(buffered.register(1), true);
assert_eq!(buffered.register(1), true);
assert_eq!(buffered.register(1), false);
assert_eq!(buffered.register(1), false);
assert_eq!(buffered.register(2), true);
assert_eq!(buffered.register(3), true);
assert_eq!(buffered.register(2), true);
assert_eq!(buffered.register(2), false);
buffered.transfert_to_internal();
assert_eq!(map.len(), 5);
}
}

View File

@ -1,107 +0,0 @@
use crate::serde::{DeserializerError, SerializerError};
use serde_json::Error as SerdeJsonError;
use std::{error, fmt, io};
pub type MResult<T> = Result<T, Error>;
#[derive(Debug)]
pub enum Error {
Io(io::Error),
IndexAlreadyExists,
SchemaDiffer,
SchemaMissing,
WordIndexMissing,
MissingDocumentId,
Zlmdb(zlmdb::Error),
Fst(fst::Error),
SerdeJson(SerdeJsonError),
Bincode(bincode::Error),
Serializer(SerializerError),
Deserializer(DeserializerError),
UnsupportedOperation(UnsupportedOperation),
}
impl From<io::Error> for Error {
fn from(error: io::Error) -> Error {
Error::Io(error)
}
}
impl From<zlmdb::Error> for Error {
fn from(error: zlmdb::Error) -> Error {
Error::Zlmdb(error)
}
}
impl From<fst::Error> for Error {
fn from(error: fst::Error) -> Error {
Error::Fst(error)
}
}
impl From<SerdeJsonError> for Error {
fn from(error: SerdeJsonError) -> Error {
Error::SerdeJson(error)
}
}
impl From<bincode::Error> for Error {
fn from(error: bincode::Error) -> Error {
Error::Bincode(error)
}
}
impl From<SerializerError> for Error {
fn from(error: SerializerError) -> Error {
Error::Serializer(error)
}
}
impl From<DeserializerError> for Error {
fn from(error: DeserializerError) -> Error {
Error::Deserializer(error)
}
}
impl From<UnsupportedOperation> for Error {
fn from(op: UnsupportedOperation) -> Error {
Error::UnsupportedOperation(op)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
use self::Error::*;
match self {
Io(e) => write!(f, "{}", e),
IndexAlreadyExists => write!(f, "index already exists"),
SchemaDiffer => write!(f, "schemas differ"),
SchemaMissing => write!(f, "this index does not have a schema"),
WordIndexMissing => write!(f, "this index does not have a word index"),
MissingDocumentId => write!(f, "document id is missing"),
Zlmdb(e) => write!(f, "zlmdb error; {}", e),
Fst(e) => write!(f, "fst error; {}", e),
SerdeJson(e) => write!(f, "serde json error; {}", e),
Bincode(e) => write!(f, "bincode error; {}", e),
Serializer(e) => write!(f, "serializer error; {}", e),
Deserializer(e) => write!(f, "deserializer error; {}", e),
UnsupportedOperation(op) => write!(f, "unsupported operation; {}", op),
}
}
}
impl error::Error for Error {}
#[derive(Debug)]
pub enum UnsupportedOperation {
SchemaAlreadyExists,
}
impl fmt::Display for UnsupportedOperation {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
use self::UnsupportedOperation::*;
match self {
SchemaAlreadyExists => write!(f, "Cannot update an index which already has a schema"),
}
}
}

View File

@ -1,168 +0,0 @@
#[cfg(test)]
#[macro_use]
extern crate assert_matches;
mod automaton;
pub mod criterion;
mod database;
mod distinct_map;
mod error;
mod number;
mod query_builder;
mod ranked_map;
mod raw_document;
pub mod raw_indexer;
mod reordered_attrs;
pub mod serde;
pub mod store;
mod update;
pub use self::database::{BoxUpdateFn, Database};
pub use self::error::{Error, MResult};
pub use self::number::{Number, ParseNumberError};
pub use self::ranked_map::RankedMap;
pub use self::raw_document::RawDocument;
pub use self::store::Index;
pub use self::update::{UpdateResult, UpdateStatus, UpdateType};
use ::serde::{Deserialize, Serialize};
use zerocopy::{AsBytes, FromBytes};
/// Represents an internally generated unique document identifier.
///
/// It is used to tell the database which document you want to deserialize.
/// Helpful for custom ranking.
#[derive(
Debug,
Copy,
Clone,
Eq,
PartialEq,
PartialOrd,
Ord,
Hash,
Serialize,
Deserialize,
AsBytes,
FromBytes,
)]
#[repr(C)]
pub struct DocumentId(pub u64);
/// This structure represents the position of a word
/// in a document and its attributes.
///
/// This is stored in the map, generated at index time,
/// extracted and interpreted at search time.
#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, AsBytes, FromBytes)]
#[repr(C)]
pub struct DocIndex {
/// The document identifier where the word was found.
pub document_id: DocumentId,
/// The attribute in the document where the word was found
/// along with the index in it.
pub attribute: u16,
pub word_index: u16,
/// The position in bytes where the word was found
/// along with the length of it.
///
/// It informs on the original word area in the text indexed
/// without needing to run the tokenizer again.
pub char_index: u16,
pub char_length: u16,
}
/// This structure represents a matching word with information
/// on the location of the word in the document.
///
/// The order of the fields is important because it defines
/// the way these structures are ordered among themselves.
#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Highlight {
/// The attribute in the document where the word was found
/// along with the index in it.
pub attribute: u16,
/// The position in bytes where the word was found.
///
/// It informs on the original word area in the text indexed
/// without needing to run the tokenizer again.
pub char_index: u16,
/// The length in bytes of the found word.
///
/// It informs on the original word area in the text indexed
/// without needing to run the tokenizer again.
pub char_length: u16,
}
#[doc(hidden)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct TmpMatch {
pub query_index: u32,
pub distance: u8,
pub attribute: u16,
pub word_index: u16,
pub is_exact: bool,
}
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Document {
pub id: DocumentId,
pub highlights: Vec<Highlight>,
#[cfg(test)]
pub matches: Vec<TmpMatch>,
}
impl Document {
#[cfg(not(test))]
fn from_raw(raw: RawDocument) -> Document {
Document {
id: raw.id,
highlights: raw.highlights,
}
}
#[cfg(test)]
fn from_raw(raw: RawDocument) -> Document {
let len = raw.query_index().len();
let mut matches = Vec::with_capacity(len);
let query_index = raw.query_index();
let distance = raw.distance();
let attribute = raw.attribute();
let word_index = raw.word_index();
let is_exact = raw.is_exact();
for i in 0..len {
let match_ = TmpMatch {
query_index: query_index[i],
distance: distance[i],
attribute: attribute[i],
word_index: word_index[i],
is_exact: is_exact[i],
};
matches.push(match_);
}
Document {
id: raw.id,
matches,
highlights: raw.highlights,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::mem;
#[test]
fn docindex_mem_size() {
assert_eq!(mem::size_of::<DocIndex>(), 16);
}
}
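The `docindex_mem_size` test above checks out by simple addition: `document_id` wraps a u64 (8 bytes) and the four u16 fields contribute 2 bytes each, so with `#[repr(C)]` and no padding `DocIndex` occupies 8 + 4 * 2 = 16 bytes.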

View File

@ -1,65 +0,0 @@
use std::fmt;
use std::num::{ParseFloatError, ParseIntError};
use std::str::FromStr;
use ordered_float::OrderedFloat;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum Number {
Unsigned(u64),
Signed(i64),
Float(OrderedFloat<f64>),
}
impl FromStr for Number {
type Err = ParseNumberError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let uint_error = match u64::from_str(s) {
Ok(unsigned) => return Ok(Number::Unsigned(unsigned)),
Err(error) => error,
};
let int_error = match i64::from_str(s) {
Ok(signed) => return Ok(Number::Signed(signed)),
Err(error) => error,
};
let float_error = match f64::from_str(s) {
Ok(float) => return Ok(Number::Float(OrderedFloat(float))),
Err(error) => error,
};
Err(ParseNumberError {
uint_error,
int_error,
float_error,
})
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct ParseNumberError {
uint_error: ParseIntError,
int_error: ParseIntError,
float_error: ParseFloatError,
}
impl fmt::Display for ParseNumberError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.uint_error == self.int_error {
write!(
f,
"can not parse number: {}, {}",
self.uint_error, self.float_error
)
} else {
write!(
f,
"can not parse number: {}, {}, {}",
self.uint_error, self.int_error, self.float_error
)
}
}
}
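Illustrative parses following the priority order of the `FromStr` implementation above (unsigned first, then signed, then float):
assert_eq!("42".parse::<Number>(), Ok(Number::Unsigned(42)));
assert_eq!("-7".parse::<Number>(), Ok(Number::Signed(-7)));
assert_eq!("1.5".parse::<Number>(), Ok(Number::Float(OrderedFloat(1.5))));
assert!("not a number".parse::<Number>().is_err());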

File diff suppressed because it is too large

View File

@ -1,398 +0,0 @@
use std::ops::Range;
use std::cmp::Ordering::{Less, Greater, Equal};
/// Returns `true` if the specified range can accept the given replacement words.
/// Returns `false` if the replacement words are already present in the original query
/// or if there are fewer replacement words than words in the range to replace.
//
//
// ## Ignored because already present in original
//
// new york city subway
// -------- ^^^^
// / \
// [new york city]
//
//
// ## Ignored because smaller than the original
//
// new york city subway
// -------------
// \ /
// [new york]
//
//
// ## Accepted because bigger than the original
//
// NYC subway
// ---
// / \
// / \
// / \
// / \
// / \
// [new york city]
//
fn rewrite_range_with<S, T>(query: &[S], range: Range<usize>, words: &[T]) -> bool
where S: AsRef<str>,
T: AsRef<str>,
{
if words.len() <= range.len() {
// there are fewer or as many replacement words
// as there already are in the replaced range
return false
}
// retrieve the part to rewrite but with the length
// of the replacement part
let original = query.iter().skip(range.start).take(words.len());
// check if the original query doesn't already contain
// the replacement words
!original.map(AsRef::as_ref).eq(words.iter().map(AsRef::as_ref))
}
type Origin = usize;
type RealLength = usize;
struct FakeIntervalTree {
intervals: Vec<(Range<usize>, (Origin, RealLength))>,
}
impl FakeIntervalTree {
fn new(mut intervals: Vec<(Range<usize>, (Origin, RealLength))>) -> FakeIntervalTree {
intervals.sort_unstable_by_key(|(r, _)| (r.start, r.end));
FakeIntervalTree { intervals }
}
fn query(&self, point: usize) -> Option<(Range<usize>, (Origin, RealLength))> {
let element = self.intervals.binary_search_by(|(r, _)| {
if point >= r.start {
if point < r.end { Equal } else { Less }
} else { Greater }
});
let n = match element { Ok(n) => n, Err(n) => n };
match self.intervals.get(n) {
Some((range, value)) if range.contains(&point) => Some((range.clone(), *value)),
_otherwise => None,
}
}
}
pub struct QueryEnhancerBuilder<'a, S> {
query: &'a [S],
origins: Vec<usize>,
real_to_origin: Vec<(Range<usize>, (Origin, RealLength))>,
}
impl<S: AsRef<str>> QueryEnhancerBuilder<'_, S> {
pub fn new(query: &[S]) -> QueryEnhancerBuilder<S> {
// we initialize origins query indices based on their positions
let origins: Vec<_> = (0..query.len() + 1).collect();
let real_to_origin = origins.iter().map(|&o| (o..o+1, (o, 1))).collect();
QueryEnhancerBuilder { query, origins, real_to_origin }
}
/// Update the final real to origin query indices mapping.
///
/// `range` is the range of original words that these `replacement` words replace
/// and `real` is the first real query index of these replacement words.
pub fn declare<T>(&mut self, range: Range<usize>, real: usize, replacement: &[T])
where T: AsRef<str>,
{
// check if the range of original words
// can be rewritten with the replacement words
if rewrite_range_with(self.query, range.clone(), replacement) {
// this range can be replaced so we need to
// modify the origins accordingly
let offset = replacement.len() - range.len();
let previous_padding = self.origins[range.end - 1];
let current_offset = (self.origins[range.end] - 1) - previous_padding;
let diff = offset.saturating_sub(current_offset);
self.origins[range.end] += diff;
for r in &mut self.origins[range.end + 1..] {
*r += diff;
}
}
// we need to store the real number and origins relations
// this way it will be possible to know by how many
// we need to pad real query indices
let real_range = real..real + replacement.len().max(range.len());
let real_length = replacement.len();
self.real_to_origin.push((real_range, (range.start, real_length)));
}
pub fn build(self) -> QueryEnhancer {
QueryEnhancer {
origins: self.origins,
real_to_origin: FakeIntervalTree::new(self.real_to_origin),
}
}
}
pub struct QueryEnhancer {
origins: Vec<usize>,
real_to_origin: FakeIntervalTree,
}
impl QueryEnhancer {
/// Returns the query indices to use to replace this real query index.
pub fn replacement(&self, real: u32) -> Range<u32> {
let real = real as usize;
// query the fake interval tree with the real query index
let (range, (origin, real_length)) =
self.real_to_origin
.query(real)
.expect("real has never been declared");
// if `real` is the end bound of the range
if (range.start + real_length - 1) == real {
let mut count = range.len();
let mut new_origin = origin;
for (i, slice) in self.origins[new_origin..].windows(2).enumerate() {
let len = slice[1] - slice[0];
count = count.saturating_sub(len);
if count == 0 { new_origin = origin + i; break }
}
let n = real - range.start;
let start = self.origins[origin];
let end = self.origins[new_origin + 1];
let remaining = (end - start) - n;
Range { start: (start + n) as u32, end: (start + n + remaining) as u32 }
} else {
// just return the origin along with
// the real position of the word
let n = real as usize - range.start;
let origin = self.origins[origin];
Range { start: (origin + n) as u32, end: (origin + n + 1) as u32 }
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn original_unmodified() {
let query = ["new", "york", "city", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// new york = new york city
builder.declare(0..2, 4, &["new", "york", "city"]);
// ^ 4 5 6
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // new
assert_eq!(enhancer.replacement(1), 1..2); // york
assert_eq!(enhancer.replacement(2), 2..3); // city
assert_eq!(enhancer.replacement(3), 3..4); // subway
assert_eq!(enhancer.replacement(4), 0..1); // new
assert_eq!(enhancer.replacement(5), 1..2); // york
assert_eq!(enhancer.replacement(6), 2..3); // city
}
#[test]
fn simple_growing() {
let query = ["new", "york", "subway"];
// 0 1 2
let mut builder = QueryEnhancerBuilder::new(&query);
// new york = new york city
builder.declare(0..2, 3, &["new", "york", "city"]);
// ^ 3 4 5
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // new
assert_eq!(enhancer.replacement(1), 1..3); // york
assert_eq!(enhancer.replacement(2), 3..4); // subway
assert_eq!(enhancer.replacement(3), 0..1); // new
assert_eq!(enhancer.replacement(4), 1..2); // york
assert_eq!(enhancer.replacement(5), 2..3); // city
}
#[test]
fn same_place_growings() {
let query = ["NY", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NY = new york
builder.declare(0..1, 2, &["new", "york"]);
// ^ 2 3
// NY = new york city
builder.declare(0..1, 4, &["new", "york", "city"]);
// ^ 4 5 6
// NY = NYC
builder.declare(0..1, 7, &["NYC"]);
// ^ 7
// NY = new york city
builder.declare(0..1, 8, &["new", "york", "city"]);
// ^ 8 9 10
// subway = underground train
builder.declare(1..2, 11, &["underground", "train"]);
// ^ 11 12
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..3); // NY
assert_eq!(enhancer.replacement(1), 3..5); // subway
assert_eq!(enhancer.replacement(2), 0..1); // new
assert_eq!(enhancer.replacement(3), 1..3); // york
assert_eq!(enhancer.replacement(4), 0..1); // new
assert_eq!(enhancer.replacement(5), 1..2); // york
assert_eq!(enhancer.replacement(6), 2..3); // city
assert_eq!(enhancer.replacement(7), 0..3); // NYC
assert_eq!(enhancer.replacement(8), 0..1); // new
assert_eq!(enhancer.replacement(9), 1..2); // york
assert_eq!(enhancer.replacement(10), 2..3); // city
assert_eq!(enhancer.replacement(11), 3..4); // underground
assert_eq!(enhancer.replacement(12), 4..5); // train
}
#[test]
fn bigger_growing() {
let query = ["NYC", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(0..1, 2, &["new", "york", "city"]);
// ^ 2 3 4
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..3); // NYC
assert_eq!(enhancer.replacement(1), 3..4); // subway
assert_eq!(enhancer.replacement(2), 0..1); // new
assert_eq!(enhancer.replacement(3), 1..2); // york
assert_eq!(enhancer.replacement(4), 2..3); // city
}
#[test]
fn middle_query_growing() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..6); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
}
#[test]
fn end_query_growing() {
let query = ["NYC", "subway"];
// 0 1
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(1..2, 2, &["underground", "train"]);
// ^ 2 3
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // NYC
assert_eq!(enhancer.replacement(1), 1..3); // subway
assert_eq!(enhancer.replacement(2), 1..2); // underground
assert_eq!(enhancer.replacement(3), 2..3); // train
}
#[test]
fn multiple_growings() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
// subway = underground train
builder.declare(3..4, 7, &["underground", "train"]);
// ^ 7 8
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..7); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
assert_eq!(enhancer.replacement(7), 5..6); // underground
assert_eq!(enhancer.replacement(8), 6..7); // train
}
#[test]
fn multiple_probable_growings() {
let query = ["great", "awesome", "NYC", "subway"];
// 0 1 2 3
let mut builder = QueryEnhancerBuilder::new(&query);
// NYC = new york city
builder.declare(2..3, 4, &["new", "york", "city"]);
// ^ 4 5 6
// subway = underground train
builder.declare(3..4, 7, &["underground", "train"]);
// ^ 7 8
// great awesome = good
builder.declare(0..2, 9, &["good"]);
// ^ 9
// awesome NYC = NY
builder.declare(1..3, 10, &["NY"]);
// ^^ 10
// NYC subway = metro
builder.declare(2..4, 11, &["metro"]);
// ^^ 11
let enhancer = builder.build();
assert_eq!(enhancer.replacement(0), 0..1); // great
assert_eq!(enhancer.replacement(1), 1..2); // awesome
assert_eq!(enhancer.replacement(2), 2..5); // NYC
assert_eq!(enhancer.replacement(3), 5..7); // subway
assert_eq!(enhancer.replacement(4), 2..3); // new
assert_eq!(enhancer.replacement(5), 3..4); // york
assert_eq!(enhancer.replacement(6), 4..5); // city
assert_eq!(enhancer.replacement(7), 5..6); // underground
assert_eq!(enhancer.replacement(8), 6..7); // train
assert_eq!(enhancer.replacement(9), 0..2); // good
assert_eq!(enhancer.replacement(10), 1..5); // NY
assert_eq!(enhancer.replacement(11), 2..5); // metro
}
}

View File

@ -1,41 +0,0 @@
use std::io::{Read, Write};
use hashbrown::HashMap;
use meilidb_schema::SchemaAttr;
use serde::{Deserialize, Serialize};
use crate::{DocumentId, Number};
#[derive(Debug, Default, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(transparent)]
pub struct RankedMap(HashMap<(DocumentId, SchemaAttr), Number>);
impl RankedMap {
pub fn len(&self) -> usize {
self.0.len()
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
pub fn insert(&mut self, document: DocumentId, attribute: SchemaAttr, number: Number) {
self.0.insert((document, attribute), number);
}
pub fn remove(&mut self, document: DocumentId, attribute: SchemaAttr) {
self.0.remove(&(document, attribute));
}
pub fn get(&self, document: DocumentId, attribute: SchemaAttr) -> Option<Number> {
self.0.get(&(document, attribute)).cloned()
}
pub fn read_from_bin<R: Read>(reader: R) -> bincode::Result<RankedMap> {
bincode::deserialize_from(reader).map(RankedMap)
}
pub fn write_to_bin<W: Write>(&self, writer: W) -> bincode::Result<()> {
bincode::serialize_into(writer, &self.0)
}
}

View File

@ -1,186 +0,0 @@
use std::fmt;
use std::sync::Arc;
use meilidb_schema::SchemaAttr;
use sdset::SetBuf;
use slice_group_by::GroupBy;
use crate::{DocumentId, Highlight, TmpMatch};
#[derive(Clone)]
pub struct RawDocument {
pub id: DocumentId,
pub matches: SharedMatches,
pub highlights: Vec<Highlight>,
pub fields_counts: SetBuf<(SchemaAttr, u64)>,
}
impl RawDocument {
pub fn query_index(&self) -> &[u32] {
let r = self.matches.range;
// it is safe because construction/modifications
// can only be done in this module
unsafe {
&self
.matches
.matches
.query_index
.get_unchecked(r.start..r.end)
}
}
pub fn distance(&self) -> &[u8] {
let r = self.matches.range;
// it is safe because construction/modifications
// can only be done in this module
unsafe { &self.matches.matches.distance.get_unchecked(r.start..r.end) }
}
pub fn attribute(&self) -> &[u16] {
let r = self.matches.range;
// it is safe because construction/modifications
// can only be done in this module
unsafe { &self.matches.matches.attribute.get_unchecked(r.start..r.end) }
}
pub fn word_index(&self) -> &[u16] {
let r = self.matches.range;
// it is safe because construction/modifications
// can only be done in this module
unsafe {
&self
.matches
.matches
.word_index
.get_unchecked(r.start..r.end)
}
}
pub fn is_exact(&self) -> &[bool] {
let r = self.matches.range;
// it is safe because construction/modifications
// can only be done in this module
unsafe { &self.matches.matches.is_exact.get_unchecked(r.start..r.end) }
}
}
impl fmt::Debug for RawDocument {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str("RawDocument {\r\n")?;
f.write_fmt(format_args!("{:>15}: {:?},\r\n", "id", self.id))?;
f.write_fmt(format_args!(
"{:>15}: {:^5?},\r\n",
"query_index",
self.query_index()
))?;
f.write_fmt(format_args!(
"{:>15}: {:^5?},\r\n",
"distance",
self.distance()
))?;
f.write_fmt(format_args!(
"{:>15}: {:^5?},\r\n",
"attribute",
self.attribute()
))?;
f.write_fmt(format_args!(
"{:>15}: {:^5?},\r\n",
"word_index",
self.word_index()
))?;
f.write_fmt(format_args!(
"{:>15}: {:^5?},\r\n",
"is_exact",
self.is_exact()
))?;
f.write_str("}")?;
Ok(())
}
}
pub fn raw_documents_from(
matches: SetBuf<(DocumentId, TmpMatch)>,
highlights: SetBuf<(DocumentId, Highlight)>,
fields_counts: SetBuf<(DocumentId, SchemaAttr, u64)>,
) -> Vec<RawDocument> {
let mut docs_ranges: Vec<(_, Range, _, _)> = Vec::new();
let mut matches2 = Matches::with_capacity(matches.len());
let matches = matches.linear_group_by_key(|(id, _)| *id);
let highlights = highlights.linear_group_by_key(|(id, _)| *id);
let fields_counts = fields_counts.linear_group_by_key(|(id, _, _)| *id);
for ((mgroup, hgroup), fgroup) in matches.zip(highlights).zip(fields_counts) {
debug_assert_eq!(mgroup[0].0, hgroup[0].0);
debug_assert_eq!(mgroup[0].0, fgroup[0].0);
let document_id = mgroup[0].0;
let start = docs_ranges.last().map(|(_, r, _, _)| r.end).unwrap_or(0);
let end = start + mgroup.len();
let highlights = hgroup.iter().map(|(_, h)| *h).collect();
let fields_counts = SetBuf::new(fgroup.iter().map(|(_, a, c)| (*a, *c)).collect()).unwrap();
docs_ranges.push((document_id, Range { start, end }, highlights, fields_counts));
matches2.extend_from_slice(mgroup);
}
let matches = Arc::new(matches2);
docs_ranges
.into_iter()
.map(|(id, range, highlights, fields_counts)| {
let matches = SharedMatches {
range,
matches: matches.clone(),
};
RawDocument {
id,
matches,
highlights,
fields_counts,
}
})
.collect()
}
#[derive(Debug, Copy, Clone)]
struct Range {
start: usize,
end: usize,
}
#[derive(Clone)]
pub struct SharedMatches {
range: Range,
matches: Arc<Matches>,
}
#[derive(Clone)]
struct Matches {
query_index: Vec<u32>,
distance: Vec<u8>,
attribute: Vec<u16>,
word_index: Vec<u16>,
is_exact: Vec<bool>,
}
impl Matches {
fn with_capacity(cap: usize) -> Matches {
Matches {
query_index: Vec::with_capacity(cap),
distance: Vec::with_capacity(cap),
attribute: Vec::with_capacity(cap),
word_index: Vec::with_capacity(cap),
is_exact: Vec::with_capacity(cap),
}
}
fn extend_from_slice(&mut self, matches: &[(DocumentId, TmpMatch)]) {
for (_, match_) in matches {
self.query_index.push(match_.query_index);
self.distance.push(match_.distance);
self.attribute.push(match_.attribute);
self.word_index.push(match_.word_index);
self.is_exact.push(match_.is_exact);
}
}
}

View File

@ -1,255 +0,0 @@
use std::collections::{BTreeMap, HashMap};
use std::convert::TryFrom;
use crate::{DocIndex, DocumentId};
use deunicode::deunicode_with_tofu;
use meilidb_schema::SchemaAttr;
use meilidb_tokenizer::{is_cjk, SeqTokenizer, Token, Tokenizer};
use sdset::SetBuf;
type Word = Vec<u8>; // TODO make it a SmallVec
pub struct RawIndexer {
word_limit: usize, // the maximum number of indexed words
words_doc_indexes: BTreeMap<Word, Vec<DocIndex>>,
docs_words: HashMap<DocumentId, Vec<Word>>,
}
pub struct Indexed {
pub words_doc_indexes: BTreeMap<Word, SetBuf<DocIndex>>,
pub docs_words: HashMap<DocumentId, fst::Set>,
}
impl RawIndexer {
pub fn new() -> RawIndexer {
RawIndexer::with_word_limit(1000)
}
pub fn with_word_limit(limit: usize) -> RawIndexer {
RawIndexer {
word_limit: limit,
words_doc_indexes: BTreeMap::new(),
docs_words: HashMap::new(),
}
}
pub fn index_text(&mut self, id: DocumentId, attr: SchemaAttr, text: &str) -> usize {
let mut number_of_words = 0;
let lowercase_text = text.to_lowercase();
let deunicoded = deunicode_with_tofu(&lowercase_text, "");
// TODO compute the deunicoded version after the cjk check
let next = if !lowercase_text.contains(is_cjk) && lowercase_text != deunicoded {
Some(deunicoded)
} else {
None
};
let iter = Some(lowercase_text).into_iter().chain(next);
for text in iter {
// we must not count the same words twice
number_of_words = 0;
for token in Tokenizer::new(&text) {
let must_continue = index_token(
token,
id,
attr,
self.word_limit,
&mut self.words_doc_indexes,
&mut self.docs_words,
);
if !must_continue {
break;
}
number_of_words += 1;
}
}
number_of_words
}
pub fn index_text_seq<'a, I, IT>(&mut self, id: DocumentId, attr: SchemaAttr, iter: I)
where
I: IntoIterator<Item = &'a str, IntoIter = IT>,
IT: Iterator<Item = &'a str> + Clone,
{
// TODO serialize this to one call to the SeqTokenizer loop
let lowercased: Vec<_> = iter.into_iter().map(str::to_lowercase).collect();
let iter = lowercased.iter().map(|t| t.as_str());
for token in SeqTokenizer::new(iter) {
let must_continue = index_token(
token,
id,
attr,
self.word_limit,
&mut self.words_doc_indexes,
&mut self.docs_words,
);
if !must_continue {
break;
}
}
let deunicoded: Vec<_> = lowercased
.into_iter()
.map(|lowercase_text| {
if lowercase_text.contains(is_cjk) {
return lowercase_text;
}
let deunicoded = deunicode_with_tofu(&lowercase_text, "");
if lowercase_text != deunicoded {
deunicoded
} else {
lowercase_text
}
})
.collect();
let iter = deunicoded.iter().map(|t| t.as_str());
for token in SeqTokenizer::new(iter) {
let must_continue = index_token(
token,
id,
attr,
self.word_limit,
&mut self.words_doc_indexes,
&mut self.docs_words,
);
if !must_continue {
break;
}
}
}
pub fn build(self) -> Indexed {
let words_doc_indexes = self
.words_doc_indexes
.into_iter()
.map(|(word, indexes)| (word, SetBuf::from_dirty(indexes)))
.collect();
let docs_words = self
.docs_words
.into_iter()
.map(|(id, mut words)| {
words.sort_unstable();
words.dedup();
(id, fst::Set::from_iter(words).unwrap())
})
.collect();
Indexed {
words_doc_indexes,
docs_words,
}
}
}
impl Default for RawIndexer {
fn default() -> Self {
Self::new()
}
}
fn index_token(
token: Token,
id: DocumentId,
attr: SchemaAttr,
word_limit: usize,
words_doc_indexes: &mut BTreeMap<Word, Vec<DocIndex>>,
docs_words: &mut HashMap<DocumentId, Vec<Word>>,
) -> bool {
if token.word_index >= word_limit {
return false;
}
match token_to_docindex(id, attr, token) {
Some(docindex) => {
let word = Vec::from(token.word);
words_doc_indexes
.entry(word.clone())
.or_insert_with(Vec::new)
.push(docindex);
docs_words.entry(id).or_insert_with(Vec::new).push(word);
}
None => return false,
}
true
}
fn token_to_docindex(id: DocumentId, attr: SchemaAttr, token: Token) -> Option<DocIndex> {
let word_index = u16::try_from(token.word_index).ok()?;
let char_index = u16::try_from(token.char_index).ok()?;
let char_length = u16::try_from(token.word.chars().count()).ok()?;
let docindex = DocIndex {
document_id: id,
attribute: attr.0,
word_index,
char_index,
char_length,
};
Some(docindex)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn strange_apostrophe() {
let mut indexer = RawIndexer::new();
let docid = DocumentId(0);
let attr = SchemaAttr(0);
let text = "Zut, laspirateur, jai oublié de léteindre !";
indexer.index_text(docid, attr, text);
let Indexed {
words_doc_indexes, ..
} = indexer.build();
assert!(words_doc_indexes.get(&b"l"[..]).is_some());
assert!(words_doc_indexes.get(&b"aspirateur"[..]).is_some());
assert!(words_doc_indexes.get(&b"ai"[..]).is_some());
assert!(words_doc_indexes.get(&b"eteindre"[..]).is_some());
// with the ugly apostrophe...
assert!(words_doc_indexes
.get(&"léteindre".to_owned().into_bytes())
.is_some());
}
#[test]
fn strange_apostrophe_in_sequence() {
let mut indexer = RawIndexer::new();
let docid = DocumentId(0);
let attr = SchemaAttr(0);
let text = vec!["Zut, laspirateur, jai oublié de léteindre !"];
indexer.index_text_seq(docid, attr, text);
let Indexed {
words_doc_indexes, ..
} = indexer.build();
assert!(words_doc_indexes.get(&b"l"[..]).is_some());
assert!(words_doc_indexes.get(&b"aspirateur"[..]).is_some());
assert!(words_doc_indexes.get(&b"ai"[..]).is_some());
assert!(words_doc_indexes.get(&b"eteindre"[..]).is_some());
// with the ugly apostrophe...
assert!(words_doc_indexes
.get(&"léteindre".to_owned().into_bytes())
.is_some());
}
}
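As a rough usage sketch (grounded in the tests above; the use paths are assumptions mirroring this file's imports), indexing one attribute of one document and freezing the result looks like this:
use meilidb_schema::SchemaAttr;
use crate::raw_indexer::{Indexed, RawIndexer}; // assumed crate-local path
use crate::DocumentId;

fn index_one_field() {
    // Same 1000-word cap per attribute as RawIndexer::new().
    let mut indexer = RawIndexer::with_word_limit(1000);
    indexer.index_text(DocumentId(0), SchemaAttr(0), "The quick brown fox");
    // build() sorts and deduplicates the postings and freezes each
    // document's word list into an fst::Set.
    let Indexed { words_doc_indexes, docs_words } = indexer.build();
    assert!(words_doc_indexes.get(&b"quick"[..]).is_some());
    assert!(docs_words.contains_key(&DocumentId(0)));
}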

@ -1,27 +0,0 @@
#[derive(Default, Clone)]
pub struct ReorderedAttrs {
count: usize,
reorders: Vec<Option<u16>>,
}
impl ReorderedAttrs {
pub fn new() -> ReorderedAttrs {
ReorderedAttrs {
count: 0,
reorders: Vec::new(),
}
}
pub fn insert_attribute(&mut self, attribute: u16) {
self.reorders.resize(attribute as usize + 1, None);
self.reorders[attribute as usize] = Some(self.count as u16);
self.count += 1;
}
pub fn get(&self, attribute: u16) -> Option<u16> {
match self.reorders.get(attribute as usize) {
Some(Some(attribute)) => Some(*attribute),
_ => None,
}
}
}
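A small sketch of how this remapping behaves; note that insert_attribute resizes the table to attribute + 1, so attributes are expected to be inserted in increasing id order:
let mut reorders = ReorderedAttrs::new();
reorders.insert_attribute(1); // attribute 1 gets rank 0
reorders.insert_attribute(4); // attribute 4 gets rank 1
assert_eq!(reorders.get(1), Some(0));
assert_eq!(reorders.get(4), Some(1));
assert_eq!(reorders.get(2), None); // never inserted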

@ -1,198 +0,0 @@
use std::str::FromStr;
use ordered_float::OrderedFloat;
use serde::ser;
use serde::Serialize;
use super::SerializerError;
use crate::Number;
pub struct ConvertToNumber;
impl ser::Serializer for ConvertToNumber {
type Ok = Number;
type Error = SerializerError;
type SerializeSeq = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTuple = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleVariant = ser::Impossible<Self::Ok, Self::Error>;
type SerializeMap = ser::Impossible<Self::Ok, Self::Error>;
type SerializeStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeStructVariant = ser::Impossible<Self::Ok, Self::Error>;
fn serialize_bool(self, value: bool) -> Result<Self::Ok, Self::Error> {
Ok(Number::Unsigned(u64::from(value)))
}
fn serialize_char(self, _value: char) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType { type_name: "char" })
}
fn serialize_i8(self, value: i8) -> Result<Self::Ok, Self::Error> {
Ok(Number::Signed(i64::from(value)))
}
fn serialize_i16(self, value: i16) -> Result<Self::Ok, Self::Error> {
Ok(Number::Signed(i64::from(value)))
}
fn serialize_i32(self, value: i32) -> Result<Self::Ok, Self::Error> {
Ok(Number::Signed(i64::from(value)))
}
fn serialize_i64(self, value: i64) -> Result<Self::Ok, Self::Error> {
Ok(Number::Signed(value))
}
fn serialize_u8(self, value: u8) -> Result<Self::Ok, Self::Error> {
Ok(Number::Unsigned(u64::from(value)))
}
fn serialize_u16(self, value: u16) -> Result<Self::Ok, Self::Error> {
Ok(Number::Unsigned(u64::from(value)))
}
fn serialize_u32(self, value: u32) -> Result<Self::Ok, Self::Error> {
Ok(Number::Unsigned(u64::from(value)))
}
fn serialize_u64(self, value: u64) -> Result<Self::Ok, Self::Error> {
Ok(Number::Unsigned(value))
}
fn serialize_f32(self, value: f32) -> Result<Self::Ok, Self::Error> {
Ok(Number::Float(OrderedFloat(f64::from(value))))
}
fn serialize_f64(self, value: f64) -> Result<Self::Ok, Self::Error> {
Ok(Number::Float(OrderedFloat(value)))
}
fn serialize_str(self, value: &str) -> Result<Self::Ok, Self::Error> {
Ok(Number::from_str(value)?)
}
fn serialize_bytes(self, _v: &[u8]) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType { type_name: "&[u8]" })
}
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "Option",
})
}
fn serialize_some<T: ?Sized>(self, _value: &T) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnrankableType {
type_name: "Option",
})
}
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType { type_name: "()" })
}
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "unit struct",
})
}
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "unit variant",
})
}
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
value.serialize(self)
}
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnrankableType {
type_name: "newtype variant",
})
}
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "sequence",
})
}
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
Err(SerializerError::UnrankableType { type_name: "tuple" })
}
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "tuple struct",
})
}
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "tuple variant",
})
}
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
Err(SerializerError::UnrankableType { type_name: "map" })
}
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "struct",
})
}
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
Err(SerializerError::UnrankableType {
type_name: "struct variant",
})
}
}

@ -1,196 +0,0 @@
use serde::ser;
use serde::Serialize;
use super::SerializerError;
pub struct ConvertToString;
impl ser::Serializer for ConvertToString {
type Ok = String;
type Error = SerializerError;
type SerializeSeq = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTuple = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleVariant = ser::Impossible<Self::Ok, Self::Error>;
type SerializeMap = ser::Impossible<Self::Ok, Self::Error>;
type SerializeStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeStructVariant = ser::Impossible<Self::Ok, Self::Error>;
fn serialize_bool(self, _value: bool) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "boolean",
})
}
fn serialize_char(self, value: char) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_i8(self, value: i8) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_i16(self, value: i16) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_i32(self, value: i32) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_i64(self, value: i64) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_u8(self, value: u8) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_u16(self, value: u16) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_u32(self, value: u32) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_u64(self, value: u64) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_f32(self, value: f32) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_f64(self, value: f64) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_str(self, value: &str) -> Result<Self::Ok, Self::Error> {
Ok(value.to_string())
}
fn serialize_bytes(self, _v: &[u8]) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "&[u8]" })
}
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_some<T: ?Sized>(self, _value: &T) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "()" })
}
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit struct",
})
}
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit variant",
})
}
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
value.serialize(self)
}
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "newtype variant",
})
}
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "sequence",
})
}
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "tuple" })
}
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple struct",
})
}
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple variant",
})
}
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "map" })
}
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "struct",
})
}
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "struct variant",
})
}
}
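Both of these single-value serializers operate on one field value at a time: ConvertToNumber produces the Number stored in the ranked map, while ConvertToString produces the text handed to indexing and document-id extraction. A hedged sketch of what they accept (the use paths assume this sits next to the two modules above):
use serde::Serialize;
use super::{ConvertToNumber, ConvertToString, SerializerError};
use crate::Number;

// Scalars convert; sequences, maps and other composite types are rejected
// with an UnrankableType / UnserializableType error.
fn convert_scalars() -> Result<(), SerializerError> {
    let n: Number = 12u8.serialize(ConvertToNumber)?;  // Number::Unsigned(12)
    let s: String = 42u32.serialize(ConvertToString)?; // "42"
    let t: String = "abc".serialize(ConvertToString)?; // "abc"
    let _ = (n, s, t);
    Ok(())
}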

@ -1,144 +0,0 @@
use std::collections::HashSet;
use std::io::Cursor;
use std::{error::Error, fmt};
use meilidb_schema::{Schema, SchemaAttr};
use serde::{de, forward_to_deserialize_any};
use serde_json::de::IoRead as SerdeJsonIoRead;
use serde_json::Deserializer as SerdeJsonDeserializer;
use serde_json::Error as SerdeJsonError;
use crate::store::DocumentsFields;
use crate::DocumentId;
#[derive(Debug)]
pub enum DeserializerError {
SerdeJson(SerdeJsonError),
Zlmdb(zlmdb::Error),
Custom(String),
}
impl de::Error for DeserializerError {
fn custom<T: fmt::Display>(msg: T) -> Self {
DeserializerError::Custom(msg.to_string())
}
}
impl fmt::Display for DeserializerError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
DeserializerError::SerdeJson(e) => write!(f, "serde json related error: {}", e),
DeserializerError::Zlmdb(e) => write!(f, "zlmdb related error: {}", e),
DeserializerError::Custom(s) => f.write_str(s),
}
}
}
impl Error for DeserializerError {}
impl From<SerdeJsonError> for DeserializerError {
fn from(error: SerdeJsonError) -> DeserializerError {
DeserializerError::SerdeJson(error)
}
}
impl From<zlmdb::Error> for DeserializerError {
fn from(error: zlmdb::Error) -> DeserializerError {
DeserializerError::Zlmdb(error)
}
}
pub struct Deserializer<'a> {
pub document_id: DocumentId,
pub reader: &'a zlmdb::RoTxn,
pub documents_fields: DocumentsFields,
pub schema: &'a Schema,
pub attributes: Option<&'a HashSet<SchemaAttr>>,
}
impl<'de, 'a, 'b> de::Deserializer<'de> for &'b mut Deserializer<'a> {
type Error = DeserializerError;
fn deserialize_any<V>(self, visitor: V) -> Result<V::Value, Self::Error>
where
V: de::Visitor<'de>,
{
self.deserialize_map(visitor)
}
forward_to_deserialize_any! {
bool i8 i16 i32 i64 i128 u8 u16 u32 u64 u128 f32 f64 char str string
bytes byte_buf option unit unit_struct newtype_struct seq tuple
tuple_struct struct enum identifier ignored_any
}
fn deserialize_map<V>(self, visitor: V) -> Result<V::Value, Self::Error>
where
V: de::Visitor<'de>,
{
let mut error = None;
let iter = self
.documents_fields
.document_fields(self.reader, self.document_id)?
.filter_map(|result| {
let (attr, value) = match result {
Ok(value) => value,
Err(e) => {
error = Some(e);
return None;
}
};
let is_displayed = self.schema.props(attr).is_displayed();
if is_displayed && self.attributes.map_or(true, |f| f.contains(&attr)) {
let attribute_name = self.schema.attribute_name(attr);
let cursor = Cursor::new(value.to_owned());
let ioread = SerdeJsonIoRead::new(cursor);
let value = Value(SerdeJsonDeserializer::new(ioread));
Some((attribute_name, value))
} else {
None
}
});
let map_deserializer = de::value::MapDeserializer::new(iter);
let result = visitor
.visit_map(map_deserializer)
.map_err(DeserializerError::from);
match error.take() {
Some(error) => Err(error.into()),
None => result,
}
}
}
struct Value(SerdeJsonDeserializer<SerdeJsonIoRead<Cursor<Vec<u8>>>>);
impl<'de> de::IntoDeserializer<'de, SerdeJsonError> for Value {
type Deserializer = Self;
fn into_deserializer(self) -> Self::Deserializer {
self
}
}
impl<'de> de::Deserializer<'de> for Value {
type Error = SerdeJsonError;
fn deserialize_any<V>(mut self, visitor: V) -> Result<V::Value, Self::Error>
where
V: de::Visitor<'de>,
{
self.0.deserialize_any(visitor)
}
forward_to_deserialize_any! {
bool i8 i16 i32 i64 i128 u8 u16 u32 u64 u128 f32 f64 char str string
bytes byte_buf option unit unit_struct newtype_struct seq tuple
tuple_struct map struct enum identifier ignored_any
}
}

@ -1,295 +0,0 @@
use std::hash::{Hash, Hasher};
use crate::DocumentId;
use serde::{ser, Serialize};
use serde_json::Value;
use siphasher::sip::SipHasher;
use super::{ConvertToString, SerializerError};
pub fn extract_document_id<D>(
identifier: &str,
document: &D,
) -> Result<Option<DocumentId>, SerializerError>
where
D: serde::Serialize,
{
let serializer = ExtractDocumentId { identifier };
document.serialize(serializer)
}
pub fn value_to_string(value: &Value) -> Option<String> {
match value {
Value::Null => None,
Value::Bool(_) => None,
Value::Number(value) => Some(value.to_string()),
Value::String(value) => Some(value.to_string()),
Value::Array(_) => None,
Value::Object(_) => None,
}
}
pub fn compute_document_id<H: Hash>(t: H) -> DocumentId {
let mut s = SipHasher::new();
t.hash(&mut s);
let hash = s.finish();
DocumentId(hash)
}
struct ExtractDocumentId<'a> {
identifier: &'a str,
}
impl<'a> ser::Serializer for ExtractDocumentId<'a> {
type Ok = Option<DocumentId>;
type Error = SerializerError;
type SerializeSeq = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTuple = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleVariant = ser::Impossible<Self::Ok, Self::Error>;
type SerializeMap = ExtractDocumentIdMapSerializer<'a>;
type SerializeStruct = ExtractDocumentIdStructSerializer<'a>;
type SerializeStructVariant = ser::Impossible<Self::Ok, Self::Error>;
forward_to_unserializable_type! {
bool => serialize_bool,
char => serialize_char,
i8 => serialize_i8,
i16 => serialize_i16,
i32 => serialize_i32,
i64 => serialize_i64,
u8 => serialize_u8,
u16 => serialize_u16,
u32 => serialize_u32,
u64 => serialize_u64,
f32 => serialize_f32,
f64 => serialize_f64,
}
fn serialize_str(self, _value: &str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "str" })
}
fn serialize_bytes(self, _value: &[u8]) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "&[u8]" })
}
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_some<T: ?Sized>(self, _value: &T) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "()" })
}
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit struct",
})
}
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit variant",
})
}
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
value.serialize(self)
}
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "newtype variant",
})
}
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "sequence",
})
}
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "tuple" })
}
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple struct",
})
}
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple variant",
})
}
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
let serializer = ExtractDocumentIdMapSerializer {
identifier: self.identifier,
document_id: None,
current_key_name: None,
};
Ok(serializer)
}
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
let serializer = ExtractDocumentIdStructSerializer {
identifier: self.identifier,
document_id: None,
};
Ok(serializer)
}
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "struct variant",
})
}
}
pub struct ExtractDocumentIdMapSerializer<'a> {
identifier: &'a str,
document_id: Option<DocumentId>,
current_key_name: Option<String>,
}
impl<'a> ser::SerializeMap for ExtractDocumentIdMapSerializer<'a> {
type Ok = Option<DocumentId>;
type Error = SerializerError;
fn serialize_key<T: ?Sized>(&mut self, key: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let key = key.serialize(ConvertToString)?;
self.current_key_name = Some(key);
Ok(())
}
fn serialize_value<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let key = self.current_key_name.take().unwrap();
self.serialize_entry(&key, value)
}
fn serialize_entry<K: ?Sized, V: ?Sized>(
&mut self,
key: &K,
value: &V,
) -> Result<(), Self::Error>
where
K: Serialize,
V: Serialize,
{
let key = key.serialize(ConvertToString)?;
if self.identifier == key {
let value = serde_json::to_string(value).and_then(|s| serde_json::from_str(&s))?;
match value_to_string(&value).map(|s| compute_document_id(&s)) {
Some(document_id) => self.document_id = Some(document_id),
None => return Err(SerializerError::InvalidDocumentIdType),
}
}
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.document_id)
}
}
pub struct ExtractDocumentIdStructSerializer<'a> {
identifier: &'a str,
document_id: Option<DocumentId>,
}
impl<'a> ser::SerializeStruct for ExtractDocumentIdStructSerializer<'a> {
type Ok = Option<DocumentId>;
type Error = SerializerError;
fn serialize_field<T: ?Sized>(
&mut self,
key: &'static str,
value: &T,
) -> Result<(), Self::Error>
where
T: Serialize,
{
if self.identifier == key {
let value = serde_json::to_string(value).and_then(|s| serde_json::from_str(&s))?;
match value_to_string(&value).map(compute_document_id) {
Some(document_id) => self.document_id = Some(document_id),
None => return Err(SerializerError::InvalidDocumentIdType),
}
}
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(self.document_id)
}
}
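A short sketch of the exported helpers at work (the derive and paths are assumptions): the identifier field is serialized to a JSON value, stringified, then hashed with SipHash to produce the DocumentId.
use serde::Serialize;

#[derive(Serialize)]
struct Movie {
    id: u64,
    title: String,
}

fn movie_docid() -> Result<Option<DocumentId>, SerializerError> {
    let movie = Movie { id: 7, title: "Carol".to_string() };
    // Finds the "id" field, stringifies its JSON value ("7") and hashes it,
    // so the returned DocumentId is a SipHash of "7", not the raw 7.
    extract_document_id("id", &movie)
}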

@ -1,365 +0,0 @@
use meilidb_schema::SchemaAttr;
use serde::ser;
use serde::Serialize;
use super::{ConvertToString, SerializerError};
use crate::raw_indexer::RawIndexer;
use crate::DocumentId;
pub struct Indexer<'a> {
pub attribute: SchemaAttr,
pub indexer: &'a mut RawIndexer,
pub document_id: DocumentId,
}
impl<'a> ser::Serializer for Indexer<'a> {
type Ok = Option<usize>;
type Error = SerializerError;
type SerializeSeq = SeqIndexer<'a>;
type SerializeTuple = TupleIndexer<'a>;
type SerializeTupleStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleVariant = ser::Impossible<Self::Ok, Self::Error>;
type SerializeMap = MapIndexer<'a>;
type SerializeStruct = StructSerializer<'a>;
type SerializeStructVariant = ser::Impossible<Self::Ok, Self::Error>;
fn serialize_bool(self, _value: bool) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "boolean",
})
}
fn serialize_char(self, value: char) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_i8(self, value: i8) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_i16(self, value: i16) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_i32(self, value: i32) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_i64(self, value: i64) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_u8(self, value: u8) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_u16(self, value: u16) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_u32(self, value: u32) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_u64(self, value: u64) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_f32(self, value: f32) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_f64(self, value: f64) -> Result<Self::Ok, Self::Error> {
let text = value.serialize(ConvertToString)?;
self.serialize_str(&text)
}
fn serialize_str(self, text: &str) -> Result<Self::Ok, Self::Error> {
let number_of_words = self
.indexer
.index_text(self.document_id, self.attribute, text);
Ok(Some(number_of_words))
}
fn serialize_bytes(self, _v: &[u8]) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType { type_name: "&[u8]" })
}
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "Option",
})
}
fn serialize_some<T: ?Sized>(self, value: &T) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
let text = value.serialize(ConvertToString)?;
let number_of_words = self
.indexer
.index_text(self.document_id, self.attribute, &text);
Ok(Some(number_of_words))
}
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType { type_name: "()" })
}
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "unit struct",
})
}
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "unit variant",
})
}
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
value.serialize(self)
}
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
Err(SerializerError::UnindexableType {
type_name: "newtype variant",
})
}
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
let indexer = SeqIndexer {
attribute: self.attribute,
document_id: self.document_id,
indexer: self.indexer,
texts: Vec::new(),
};
Ok(indexer)
}
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
let indexer = TupleIndexer {
attribute: self.attribute,
document_id: self.document_id,
indexer: self.indexer,
texts: Vec::new(),
};
Ok(indexer)
}
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "tuple struct",
})
}
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "tuple variant",
})
}
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
let indexer = MapIndexer {
attribute: self.attribute,
document_id: self.document_id,
indexer: self.indexer,
texts: Vec::new(),
};
Ok(indexer)
}
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "struct",
})
}
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
Err(SerializerError::UnindexableType {
type_name: "struct variant",
})
}
}
pub struct SeqIndexer<'a> {
attribute: SchemaAttr,
document_id: DocumentId,
indexer: &'a mut RawIndexer,
texts: Vec<String>,
}
impl<'a> ser::SerializeSeq for SeqIndexer<'a> {
type Ok = Option<usize>;
type Error = SerializerError;
fn serialize_element<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let text = value.serialize(ConvertToString)?;
self.texts.push(text);
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
let texts = self.texts.iter().map(String::as_str);
self.indexer
.index_text_seq(self.document_id, self.attribute, texts);
Ok(None)
}
}
pub struct MapIndexer<'a> {
attribute: SchemaAttr,
document_id: DocumentId,
indexer: &'a mut RawIndexer,
texts: Vec<String>,
}
impl<'a> ser::SerializeMap for MapIndexer<'a> {
type Ok = Option<usize>;
type Error = SerializerError;
fn serialize_key<T: ?Sized>(&mut self, key: &T) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let text = key.serialize(ConvertToString)?;
self.texts.push(text);
Ok(())
}
fn serialize_value<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let text = value.serialize(ConvertToString)?;
self.texts.push(text);
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
let texts = self.texts.iter().map(String::as_str);
self.indexer
.index_text_seq(self.document_id, self.attribute, texts);
Ok(None)
}
}
pub struct StructSerializer<'a> {
attribute: SchemaAttr,
document_id: DocumentId,
indexer: &'a mut RawIndexer,
texts: Vec<String>,
}
impl<'a> ser::SerializeStruct for StructSerializer<'a> {
type Ok = Option<usize>;
type Error = SerializerError;
fn serialize_field<T: ?Sized>(
&mut self,
key: &'static str,
value: &T,
) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let key_text = key.to_owned();
let value_text = value.serialize(ConvertToString)?;
self.texts.push(key_text);
self.texts.push(value_text);
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
let texts = self.texts.iter().map(String::as_str);
self.indexer
.index_text_seq(self.document_id, self.attribute, texts);
Ok(None)
}
}
pub struct TupleIndexer<'a> {
attribute: SchemaAttr,
document_id: DocumentId,
indexer: &'a mut RawIndexer,
texts: Vec<String>,
}
impl<'a> ser::SerializeTuple for TupleIndexer<'a> {
type Ok = Option<usize>;
type Error = SerializerError;
fn serialize_element<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: Serialize,
{
let text = value.serialize(ConvertToString)?;
self.texts.push(text);
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
let texts = self.texts.iter().map(String::as_str);
self.indexer
.index_text_seq(self.document_id, self.attribute, texts);
Ok(None)
}
}
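A hedged sketch of how this serde bridge is driven (the same construction appears in serialize_value of the serializer file later in this diff): a field value is serialized with an Indexer, which tokenizes its textual form into the underlying RawIndexer and reports the number of indexed words.
use serde::Serialize;

fn index_field_value(raw_indexer: &mut RawIndexer) -> Result<Option<usize>, SerializerError> {
    let indexer = Indexer {
        attribute: SchemaAttr(0),
        indexer: raw_indexer,
        document_id: DocumentId(0),
    };
    // Strings are indexed directly; sequences, maps and structs are
    // flattened to text first.
    "hello world".serialize(indexer) // Ok(Some(2))
}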

@ -1,127 +0,0 @@
macro_rules! forward_to_unserializable_type {
($($ty:ident => $se_method:ident,)*) => {
$(
fn $se_method(self, _v: $ty) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "$ty" })
}
)*
}
}
mod convert_to_number;
mod convert_to_string;
mod deserializer;
mod extract_document_id;
mod indexer;
mod serializer;
pub use self::convert_to_number::ConvertToNumber;
pub use self::convert_to_string::ConvertToString;
pub use self::deserializer::{Deserializer, DeserializerError};
pub use self::extract_document_id::{compute_document_id, extract_document_id, value_to_string};
pub use self::indexer::Indexer;
pub use self::serializer::Serializer;
use std::collections::BTreeMap;
use std::{error::Error, fmt};
use meilidb_schema::SchemaAttr;
use serde::ser;
use serde_json::Error as SerdeJsonError;
use crate::{DocumentId, ParseNumberError};
#[derive(Debug)]
pub enum SerializerError {
DocumentIdNotFound,
InvalidDocumentIdType,
Zlmdb(zlmdb::Error),
SerdeJson(SerdeJsonError),
ParseNumber(ParseNumberError),
UnserializableType { type_name: &'static str },
UnindexableType { type_name: &'static str },
UnrankableType { type_name: &'static str },
Custom(String),
}
impl ser::Error for SerializerError {
fn custom<T: fmt::Display>(msg: T) -> Self {
SerializerError::Custom(msg.to_string())
}
}
impl fmt::Display for SerializerError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
SerializerError::DocumentIdNotFound => {
f.write_str("serialized document does not have an id according to the schema")
}
SerializerError::InvalidDocumentIdType => {
f.write_str("document identifier can only be of type string or number")
}
SerializerError::Zlmdb(e) => write!(f, "zlmdb related error: {}", e),
SerializerError::SerdeJson(e) => write!(f, "serde json error: {}", e),
SerializerError::ParseNumber(e) => {
write!(f, "error while trying to parse a number: {}", e)
}
SerializerError::UnserializableType { type_name } => {
write!(f, "{} is not a serializable type", type_name)
}
SerializerError::UnindexableType { type_name } => {
write!(f, "{} is not an indexable type", type_name)
}
SerializerError::UnrankableType { type_name } => {
write!(f, "{} types can not be used for ranking", type_name)
}
SerializerError::Custom(s) => f.write_str(s),
}
}
}
impl Error for SerializerError {}
impl From<String> for SerializerError {
fn from(value: String) -> SerializerError {
SerializerError::Custom(value)
}
}
impl From<SerdeJsonError> for SerializerError {
fn from(error: SerdeJsonError) -> SerializerError {
SerializerError::SerdeJson(error)
}
}
impl From<zlmdb::Error> for SerializerError {
fn from(error: zlmdb::Error) -> SerializerError {
SerializerError::Zlmdb(error)
}
}
impl From<ParseNumberError> for SerializerError {
fn from(error: ParseNumberError) -> SerializerError {
SerializerError::ParseNumber(error)
}
}
pub struct RamDocumentStore(BTreeMap<(DocumentId, SchemaAttr), Vec<u8>>);
impl RamDocumentStore {
pub fn new() -> RamDocumentStore {
RamDocumentStore(BTreeMap::new())
}
pub fn set_document_field(&mut self, id: DocumentId, attr: SchemaAttr, value: Vec<u8>) {
self.0.insert((id, attr), value);
}
pub fn into_inner(self) -> BTreeMap<(DocumentId, SchemaAttr), Vec<u8>> {
self.0
}
}
impl Default for RamDocumentStore {
fn default() -> Self {
Self::new()
}
}
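RamDocumentStore is an in-memory staging area keyed by (DocumentId, SchemaAttr): during a documents addition the Serializer parks each field's raw serde_json bytes here (see set_document_field in the serializer below), presumably before the update writes them to the LMDB stores. A tiny sketch:
let mut store = RamDocumentStore::new();
store.set_document_field(DocumentId(0), SchemaAttr(0), b"\"Carol\"".to_vec());
store.set_document_field(DocumentId(0), SchemaAttr(1), b"7".to_vec());
assert_eq!(store.into_inner().len(), 2);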

@ -1,325 +0,0 @@
use meilidb_schema::{Schema, SchemaAttr};
use serde::ser;
use std::collections::HashMap;
use crate::raw_indexer::RawIndexer;
use crate::serde::RamDocumentStore;
use crate::{DocumentId, RankedMap};
use super::{ConvertToNumber, ConvertToString, Indexer, SerializerError};
pub struct Serializer<'a> {
pub schema: &'a Schema,
pub document_store: &'a mut RamDocumentStore,
pub document_fields_counts: &'a mut HashMap<(DocumentId, SchemaAttr), u64>,
pub indexer: &'a mut RawIndexer,
pub ranked_map: &'a mut RankedMap,
pub document_id: DocumentId,
}
impl<'a> ser::Serializer for Serializer<'a> {
type Ok = ();
type Error = SerializerError;
type SerializeSeq = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTuple = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleStruct = ser::Impossible<Self::Ok, Self::Error>;
type SerializeTupleVariant = ser::Impossible<Self::Ok, Self::Error>;
type SerializeMap = MapSerializer<'a>;
type SerializeStruct = StructSerializer<'a>;
type SerializeStructVariant = ser::Impossible<Self::Ok, Self::Error>;
forward_to_unserializable_type! {
bool => serialize_bool,
char => serialize_char,
i8 => serialize_i8,
i16 => serialize_i16,
i32 => serialize_i32,
i64 => serialize_i64,
u8 => serialize_u8,
u16 => serialize_u16,
u32 => serialize_u32,
u64 => serialize_u64,
f32 => serialize_f32,
f64 => serialize_f64,
}
fn serialize_str(self, _v: &str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "str" })
}
fn serialize_bytes(self, _v: &[u8]) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "&[u8]" })
}
fn serialize_none(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_some<T: ?Sized>(self, _value: &T) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "Option",
})
}
fn serialize_unit(self) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "()" })
}
fn serialize_unit_struct(self, _name: &'static str) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit struct",
})
}
fn serialize_unit_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
) -> Result<Self::Ok, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "unit variant",
})
}
fn serialize_newtype_struct<T: ?Sized>(
self,
_name: &'static str,
value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
value.serialize(self)
}
fn serialize_newtype_variant<T: ?Sized>(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_value: &T,
) -> Result<Self::Ok, Self::Error>
where
T: ser::Serialize,
{
Err(SerializerError::UnserializableType {
type_name: "newtype variant",
})
}
fn serialize_seq(self, _len: Option<usize>) -> Result<Self::SerializeSeq, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "sequence",
})
}
fn serialize_tuple(self, _len: usize) -> Result<Self::SerializeTuple, Self::Error> {
Err(SerializerError::UnserializableType { type_name: "tuple" })
}
fn serialize_tuple_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleStruct, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple struct",
})
}
fn serialize_tuple_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeTupleVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "tuple variant",
})
}
fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap, Self::Error> {
Ok(MapSerializer {
schema: self.schema,
document_id: self.document_id,
document_store: self.document_store,
document_fields_counts: self.document_fields_counts,
indexer: self.indexer,
ranked_map: self.ranked_map,
current_key_name: None,
})
}
fn serialize_struct(
self,
_name: &'static str,
_len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
Ok(StructSerializer {
schema: self.schema,
document_id: self.document_id,
document_store: self.document_store,
document_fields_counts: self.document_fields_counts,
indexer: self.indexer,
ranked_map: self.ranked_map,
})
}
fn serialize_struct_variant(
self,
_name: &'static str,
_variant_index: u32,
_variant: &'static str,
_len: usize,
) -> Result<Self::SerializeStructVariant, Self::Error> {
Err(SerializerError::UnserializableType {
type_name: "struct variant",
})
}
}
pub struct MapSerializer<'a> {
schema: &'a Schema,
document_id: DocumentId,
document_store: &'a mut RamDocumentStore,
document_fields_counts: &'a mut HashMap<(DocumentId, SchemaAttr), u64>,
indexer: &'a mut RawIndexer,
ranked_map: &'a mut RankedMap,
current_key_name: Option<String>,
}
impl<'a> ser::SerializeMap for MapSerializer<'a> {
type Ok = ();
type Error = SerializerError;
fn serialize_key<T: ?Sized>(&mut self, key: &T) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let key = key.serialize(ConvertToString)?;
self.current_key_name = Some(key);
Ok(())
}
fn serialize_value<T: ?Sized>(&mut self, value: &T) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
let key = self.current_key_name.take().unwrap();
self.serialize_entry(&key, value)
}
fn serialize_entry<K: ?Sized, V: ?Sized>(
&mut self,
key: &K,
value: &V,
) -> Result<(), Self::Error>
where
K: ser::Serialize,
V: ser::Serialize,
{
let key = key.serialize(ConvertToString)?;
serialize_value(
self.schema,
self.document_id,
self.document_store,
self.document_fields_counts,
self.indexer,
self.ranked_map,
&key,
value,
)
}
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(())
}
}
pub struct StructSerializer<'a> {
schema: &'a Schema,
document_id: DocumentId,
document_store: &'a mut RamDocumentStore,
document_fields_counts: &'a mut HashMap<(DocumentId, SchemaAttr), u64>,
indexer: &'a mut RawIndexer,
ranked_map: &'a mut RankedMap,
}
impl<'a> ser::SerializeStruct for StructSerializer<'a> {
type Ok = ();
type Error = SerializerError;
fn serialize_field<T: ?Sized>(
&mut self,
key: &'static str,
value: &T,
) -> Result<(), Self::Error>
where
T: ser::Serialize,
{
serialize_value(
self.schema,
self.document_id,
self.document_store,
self.document_fields_counts,
self.indexer,
self.ranked_map,
key,
value,
)
}
fn end(self) -> Result<Self::Ok, Self::Error> {
Ok(())
}
}
fn serialize_value<T: ?Sized>(
schema: &Schema,
document_id: DocumentId,
document_store: &mut RamDocumentStore,
documents_fields_counts: &mut HashMap<(DocumentId, SchemaAttr), u64>,
indexer: &mut RawIndexer,
ranked_map: &mut RankedMap,
key: &str,
value: &T,
) -> Result<(), SerializerError>
where
T: ser::Serialize,
{
if let Some(attribute) = schema.attribute(key) {
let props = schema.props(attribute);
let serialized = serde_json::to_vec(value)?;
document_store.set_document_field(document_id, attribute, serialized);
if props.is_indexed() {
let indexer = Indexer {
attribute,
indexer,
document_id,
};
if let Some(number_of_words) = value.serialize(indexer)? {
documents_fields_counts.insert((document_id, attribute), number_of_words as u64);
}
}
if props.is_ranked() {
let number = value.serialize(ConvertToNumber)?;
ranked_map.insert(document_id, attribute, number);
}
}
Ok(())
}

@ -1,49 +0,0 @@
use super::BEU64;
use crate::DocumentId;
use std::sync::Arc;
use zlmdb::types::{ByteSlice, OwnedType};
use zlmdb::Result as ZResult;
#[derive(Copy, Clone)]
pub struct DocsWords {
pub(crate) docs_words: zlmdb::Database<OwnedType<BEU64>, ByteSlice>,
}
impl DocsWords {
pub fn put_doc_words(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
words: &fst::Set,
) -> ZResult<()> {
let document_id = BEU64::new(document_id.0);
let bytes = words.as_fst().as_bytes();
self.docs_words.put(writer, &document_id, bytes)
}
pub fn del_doc_words(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
) -> ZResult<bool> {
let document_id = BEU64::new(document_id.0);
self.docs_words.delete(writer, &document_id)
}
pub fn doc_words(
self,
reader: &zlmdb::RoTxn,
document_id: DocumentId,
) -> ZResult<Option<fst::Set>> {
let document_id = BEU64::new(document_id.0);
match self.docs_words.get(reader, &document_id)? {
Some(bytes) => {
let len = bytes.len();
let bytes = Arc::from(bytes);
let fst = fst::raw::Fst::from_shared_bytes(bytes, 0, len).unwrap();
Ok(Some(fst::Set::from(fst)))
}
None => Ok(None),
}
}
}

@ -1,74 +0,0 @@
use meilidb_schema::SchemaAttr;
use zlmdb::types::{ByteSlice, OwnedType};
use zlmdb::Result as ZResult;
use super::DocumentAttrKey;
use crate::DocumentId;
#[derive(Copy, Clone)]
pub struct DocumentsFields {
pub(crate) documents_fields: zlmdb::Database<OwnedType<DocumentAttrKey>, ByteSlice>,
}
impl DocumentsFields {
pub fn put_document_field(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
attribute: SchemaAttr,
value: &[u8],
) -> ZResult<()> {
let key = DocumentAttrKey::new(document_id, attribute);
self.documents_fields.put(writer, &key, value)
}
pub fn del_all_document_fields(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
) -> ZResult<usize> {
let start = DocumentAttrKey::new(document_id, SchemaAttr::min());
let end = DocumentAttrKey::new(document_id, SchemaAttr::max());
self.documents_fields.delete_range(writer, start..=end)
}
pub fn document_attribute<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
document_id: DocumentId,
attribute: SchemaAttr,
) -> ZResult<Option<&'txn [u8]>> {
let key = DocumentAttrKey::new(document_id, attribute);
self.documents_fields.get(reader, &key)
}
pub fn document_fields<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
document_id: DocumentId,
) -> ZResult<DocumentFieldsIter<'txn>> {
let start = DocumentAttrKey::new(document_id, SchemaAttr::min());
let end = DocumentAttrKey::new(document_id, SchemaAttr::max());
let iter = self.documents_fields.range(reader, start..=end)?;
Ok(DocumentFieldsIter { iter })
}
}
pub struct DocumentFieldsIter<'txn> {
iter: zlmdb::RoRange<'txn, OwnedType<DocumentAttrKey>, ByteSlice>,
}
impl<'txn> Iterator for DocumentFieldsIter<'txn> {
type Item = ZResult<(SchemaAttr, &'txn [u8])>;
fn next(&mut self) -> Option<Self::Item> {
match self.iter.next() {
Some(Ok((key, bytes))) => {
let attr = SchemaAttr(key.attr.get());
Some(Ok((attr, bytes)))
}
Some(Err(e)) => Some(Err(e)),
None => None,
}
}
}

@ -1,141 +0,0 @@
use super::DocumentAttrKey;
use crate::DocumentId;
use meilidb_schema::SchemaAttr;
use zlmdb::types::OwnedType;
use zlmdb::Result as ZResult;
#[derive(Copy, Clone)]
pub struct DocumentsFieldsCounts {
pub(crate) documents_fields_counts: zlmdb::Database<OwnedType<DocumentAttrKey>, OwnedType<u64>>,
}
impl DocumentsFieldsCounts {
pub fn put_document_field_count(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
attribute: SchemaAttr,
value: u64,
) -> ZResult<()> {
let key = DocumentAttrKey::new(document_id, attribute);
self.documents_fields_counts.put(writer, &key, &value)
}
pub fn del_all_document_fields_counts(
self,
writer: &mut zlmdb::RwTxn,
document_id: DocumentId,
) -> ZResult<usize> {
let start = DocumentAttrKey::new(document_id, SchemaAttr::min());
let end = DocumentAttrKey::new(document_id, SchemaAttr::max());
self.documents_fields_counts
.delete_range(writer, start..=end)
}
pub fn document_field_count(
self,
reader: &zlmdb::RoTxn,
document_id: DocumentId,
attribute: SchemaAttr,
) -> ZResult<Option<u64>> {
let key = DocumentAttrKey::new(document_id, attribute);
match self.documents_fields_counts.get(reader, &key)? {
Some(count) => Ok(Some(count)),
None => Ok(None),
}
}
pub fn document_fields_counts<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
document_id: DocumentId,
) -> ZResult<DocumentFieldsCountsIter<'txn>> {
let start = DocumentAttrKey::new(document_id, SchemaAttr::min());
let end = DocumentAttrKey::new(document_id, SchemaAttr::max());
let iter = self.documents_fields_counts.range(reader, start..=end)?;
Ok(DocumentFieldsCountsIter { iter })
}
pub fn documents_ids<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
) -> ZResult<DocumentsIdsIter<'txn>> {
let iter = self.documents_fields_counts.iter(reader)?;
Ok(DocumentsIdsIter {
last_seen_id: None,
iter,
})
}
pub fn all_documents_fields_counts<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
) -> ZResult<AllDocumentsFieldsCountsIter<'txn>> {
let iter = self.documents_fields_counts.iter(reader)?;
Ok(AllDocumentsFieldsCountsIter { iter })
}
}
pub struct DocumentFieldsCountsIter<'txn> {
iter: zlmdb::RoRange<'txn, OwnedType<DocumentAttrKey>, OwnedType<u64>>,
}
impl Iterator for DocumentFieldsCountsIter<'_> {
type Item = ZResult<(SchemaAttr, u64)>;
fn next(&mut self) -> Option<Self::Item> {
match self.iter.next() {
Some(Ok((key, count))) => {
let attr = SchemaAttr(key.attr.get());
Some(Ok((attr, count)))
}
Some(Err(e)) => Some(Err(e)),
None => None,
}
}
}
pub struct DocumentsIdsIter<'txn> {
last_seen_id: Option<DocumentId>,
iter: zlmdb::RoIter<'txn, OwnedType<DocumentAttrKey>, OwnedType<u64>>,
}
impl Iterator for DocumentsIdsIter<'_> {
type Item = ZResult<DocumentId>;
fn next(&mut self) -> Option<Self::Item> {
for result in &mut self.iter {
match result {
Ok((key, _)) => {
let document_id = DocumentId(key.docid.get());
if Some(document_id) != self.last_seen_id {
self.last_seen_id = Some(document_id);
return Some(Ok(document_id));
}
}
Err(e) => return Some(Err(e)),
}
}
None
}
}
pub struct AllDocumentsFieldsCountsIter<'txn> {
iter: zlmdb::RoIter<'txn, OwnedType<DocumentAttrKey>, OwnedType<u64>>,
}
impl<'r> Iterator for AllDocumentsFieldsCountsIter<'r> {
type Item = ZResult<(DocumentId, SchemaAttr, u64)>;
fn next(&mut self) -> Option<Self::Item> {
match self.iter.next() {
Some(Ok((key, count))) => {
let docid = DocumentId(key.docid.get());
let attr = SchemaAttr(key.attr.get());
Some(Ok((docid, attr, count)))
}
Some(Err(e)) => Some(Err(e)),
None => None,
}
}
}

@ -1,101 +0,0 @@
use crate::RankedMap;
use meilidb_schema::Schema;
use std::sync::Arc;
use zlmdb::types::{ByteSlice, OwnedType, Serde, Str};
use zlmdb::Result as ZResult;
const CUSTOMS_KEY: &str = "customs-key";
const NUMBER_OF_DOCUMENTS_KEY: &str = "number-of-documents";
const RANKED_MAP_KEY: &str = "ranked-map";
const SCHEMA_KEY: &str = "schema";
const SYNONYMS_KEY: &str = "synonyms";
const WORDS_KEY: &str = "words";
#[derive(Copy, Clone)]
pub struct Main {
pub(crate) main: zlmdb::DynDatabase,
}
impl Main {
pub fn put_words_fst(self, writer: &mut zlmdb::RwTxn, fst: &fst::Set) -> ZResult<()> {
let bytes = fst.as_fst().as_bytes();
self.main.put::<Str, ByteSlice>(writer, WORDS_KEY, bytes)
}
pub fn words_fst(self, reader: &zlmdb::RoTxn) -> ZResult<Option<fst::Set>> {
match self.main.get::<Str, ByteSlice>(reader, WORDS_KEY)? {
Some(bytes) => {
let len = bytes.len();
let bytes = Arc::from(bytes);
let fst = fst::raw::Fst::from_shared_bytes(bytes, 0, len).unwrap();
Ok(Some(fst::Set::from(fst)))
}
None => Ok(None),
}
}
pub fn put_schema(self, writer: &mut zlmdb::RwTxn, schema: &Schema) -> ZResult<()> {
self.main
.put::<Str, Serde<Schema>>(writer, SCHEMA_KEY, schema)
}
pub fn schema(self, reader: &zlmdb::RoTxn) -> ZResult<Option<Schema>> {
self.main.get::<Str, Serde<Schema>>(reader, SCHEMA_KEY)
}
pub fn put_ranked_map(self, writer: &mut zlmdb::RwTxn, ranked_map: &RankedMap) -> ZResult<()> {
self.main
.put::<Str, Serde<RankedMap>>(writer, RANKED_MAP_KEY, &ranked_map)
}
pub fn ranked_map(self, reader: &zlmdb::RoTxn) -> ZResult<Option<RankedMap>> {
self.main
.get::<Str, Serde<RankedMap>>(reader, RANKED_MAP_KEY)
}
pub fn put_synonyms_fst(self, writer: &mut zlmdb::RwTxn, fst: &fst::Set) -> ZResult<()> {
let bytes = fst.as_fst().as_bytes();
self.main.put::<Str, ByteSlice>(writer, SYNONYMS_KEY, bytes)
}
pub fn synonyms_fst(self, reader: &zlmdb::RoTxn) -> ZResult<Option<fst::Set>> {
match self.main.get::<Str, ByteSlice>(reader, SYNONYMS_KEY)? {
Some(bytes) => {
let len = bytes.len();
let bytes = Arc::from(bytes);
let fst = fst::raw::Fst::from_shared_bytes(bytes, 0, len).unwrap();
Ok(Some(fst::Set::from(fst)))
}
None => Ok(None),
}
}
pub fn put_number_of_documents<F>(self, writer: &mut zlmdb::RwTxn, f: F) -> ZResult<u64>
where
F: Fn(u64) -> u64,
{
let new = self.number_of_documents(writer).map(f)?;
self.main
.put::<Str, OwnedType<u64>>(writer, NUMBER_OF_DOCUMENTS_KEY, &new)?;
Ok(new)
}
pub fn number_of_documents(self, reader: &zlmdb::RoTxn) -> ZResult<u64> {
match self
.main
.get::<Str, OwnedType<u64>>(reader, NUMBER_OF_DOCUMENTS_KEY)?
{
Some(value) => Ok(value),
None => Ok(0),
}
}
pub fn put_customs(self, writer: &mut zlmdb::RwTxn, customs: &[u8]) -> ZResult<()> {
self.main
.put::<Str, ByteSlice>(writer, CUSTOMS_KEY, customs)
}
pub fn customs<'txn>(self, reader: &'txn zlmdb::RoTxn) -> ZResult<Option<&'txn [u8]>> {
self.main.get::<Str, ByteSlice>(reader, CUSTOMS_KEY)
}
}

@ -1,325 +0,0 @@
mod docs_words;
mod documents_fields;
mod documents_fields_counts;
mod main;
mod postings_lists;
mod synonyms;
mod updates;
mod updates_results;
pub use self::docs_words::DocsWords;
pub use self::documents_fields::{DocumentFieldsIter, DocumentsFields};
pub use self::documents_fields_counts::{
DocumentFieldsCountsIter, DocumentsFieldsCounts, DocumentsIdsIter,
};
pub use self::main::Main;
pub use self::postings_lists::PostingsLists;
pub use self::synonyms::Synonyms;
pub use self::updates::Updates;
pub use self::updates_results::UpdatesResults;
use std::collections::HashSet;
use meilidb_schema::{Schema, SchemaAttr};
use serde::de;
use zerocopy::{AsBytes, FromBytes};
use zlmdb::Result as ZResult;
use crate::criterion::Criteria;
use crate::serde::Deserializer;
use crate::{query_builder::QueryBuilder, update, DocumentId, Error, MResult};
type BEU64 = zerocopy::U64<byteorder::BigEndian>;
type BEU16 = zerocopy::U16<byteorder::BigEndian>;
#[derive(Debug, Copy, Clone, AsBytes, FromBytes)]
#[repr(C)]
pub struct DocumentAttrKey {
docid: BEU64,
attr: BEU16,
}
impl DocumentAttrKey {
fn new(docid: DocumentId, attr: SchemaAttr) -> DocumentAttrKey {
DocumentAttrKey {
docid: BEU64::new(docid.0),
attr: BEU16::new(attr.0),
}
}
}
fn main_name(name: &str) -> String {
format!("store-{}", name)
}
fn postings_lists_name(name: &str) -> String {
format!("store-{}-postings-lists", name)
}
fn documents_fields_name(name: &str) -> String {
format!("store-{}-documents-fields", name)
}
fn documents_fields_counts_name(name: &str) -> String {
format!("store-{}-documents-fields-counts", name)
}
fn synonyms_name(name: &str) -> String {
format!("store-{}-synonyms", name)
}
fn docs_words_name(name: &str) -> String {
format!("store-{}-docs-words", name)
}
fn updates_name(name: &str) -> String {
format!("store-{}-updates", name)
}
fn updates_results_name(name: &str) -> String {
format!("store-{}-updates-results", name)
}
#[derive(Clone)]
pub struct Index {
pub main: Main,
pub postings_lists: PostingsLists,
pub documents_fields: DocumentsFields,
pub documents_fields_counts: DocumentsFieldsCounts,
pub synonyms: Synonyms,
pub docs_words: DocsWords,
pub updates: Updates,
pub updates_results: UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
}
impl Index {
pub fn document<T: de::DeserializeOwned>(
&self,
reader: &zlmdb::RoTxn,
attributes: Option<&HashSet<&str>>,
document_id: DocumentId,
) -> MResult<Option<T>> {
let schema = self.main.schema(reader)?;
let schema = schema.ok_or(Error::SchemaMissing)?;
let attributes = match attributes {
Some(attributes) => attributes
.iter()
.map(|name| schema.attribute(name))
.collect(),
None => None,
};
let mut deserializer = Deserializer {
document_id,
reader,
documents_fields: self.documents_fields,
schema: &schema,
attributes: attributes.as_ref(),
};
// TODO: currently we return an error if all document fields are missing,
// returning None would have been better
Ok(T::deserialize(&mut deserializer).map(Some)?)
}
pub fn document_attribute<T: de::DeserializeOwned>(
&self,
reader: &zlmdb::RoTxn,
document_id: DocumentId,
attribute: SchemaAttr,
) -> MResult<Option<T>> {
let bytes = self
.documents_fields
.document_attribute(reader, document_id, attribute)?;
match bytes {
Some(bytes) => Ok(Some(serde_json::from_slice(bytes)?)),
None => Ok(None),
}
}
pub fn schema_update(&self, writer: &mut zlmdb::RwTxn, schema: Schema) -> MResult<u64> {
let _ = self.updates_notifier.send(());
update::push_schema_update(writer, self.updates, self.updates_results, schema)
}
pub fn customs_update(&self, writer: &mut zlmdb::RwTxn, customs: Vec<u8>) -> ZResult<u64> {
let _ = self.updates_notifier.send(());
update::push_customs_update(writer, self.updates, self.updates_results, customs)
}
pub fn documents_addition<D>(&self) -> update::DocumentsAddition<D> {
update::DocumentsAddition::new(
self.updates,
self.updates_results,
self.updates_notifier.clone(),
)
}
pub fn documents_deletion(&self) -> update::DocumentsDeletion {
update::DocumentsDeletion::new(
self.updates,
self.updates_results,
self.updates_notifier.clone(),
)
}
pub fn synonyms_addition(&self) -> update::SynonymsAddition {
update::SynonymsAddition::new(
self.updates,
self.updates_results,
self.updates_notifier.clone(),
)
}
pub fn synonyms_deletion(&self) -> update::SynonymsDeletion {
update::SynonymsDeletion::new(
self.updates,
self.updates_results,
self.updates_notifier.clone(),
)
}
pub fn current_update_id(&self, reader: &zlmdb::RoTxn) -> MResult<Option<u64>> {
match self.updates.last_update_id(reader)? {
Some((id, _)) => Ok(Some(id)),
None => Ok(None),
}
}
pub fn update_status(
&self,
reader: &zlmdb::RoTxn,
update_id: u64,
) -> MResult<update::UpdateStatus> {
update::update_status(reader, self.updates, self.updates_results, update_id)
}
pub fn query_builder(&self) -> QueryBuilder {
QueryBuilder::new(
self.main,
self.postings_lists,
self.documents_fields_counts,
self.synonyms,
)
}
pub fn query_builder_with_criteria<'c, 'f, 'd>(
&self,
criteria: Criteria<'c>,
) -> QueryBuilder<'c, 'f, 'd> {
QueryBuilder::with_criteria(
self.main,
self.postings_lists,
self.documents_fields_counts,
self.synonyms,
criteria,
)
}
}
pub fn create(
env: &zlmdb::Env,
name: &str,
updates_notifier: crossbeam_channel::Sender<()>,
) -> MResult<Index> {
// create all the store names
let main_name = main_name(name);
let postings_lists_name = postings_lists_name(name);
let documents_fields_name = documents_fields_name(name);
let documents_fields_counts_name = documents_fields_counts_name(name);
let synonyms_name = synonyms_name(name);
let docs_words_name = docs_words_name(name);
let updates_name = updates_name(name);
let updates_results_name = updates_results_name(name);
// open all the stores
let main = env.create_dyn_database(Some(&main_name))?;
let postings_lists = env.create_database(Some(&postings_lists_name))?;
let documents_fields = env.create_database(Some(&documents_fields_name))?;
let documents_fields_counts = env.create_database(Some(&documents_fields_counts_name))?;
let synonyms = env.create_database(Some(&synonyms_name))?;
let docs_words = env.create_database(Some(&docs_words_name))?;
let updates = env.create_database(Some(&updates_name))?;
let updates_results = env.create_database(Some(&updates_results_name))?;
Ok(Index {
main: Main { main },
postings_lists: PostingsLists { postings_lists },
documents_fields: DocumentsFields { documents_fields },
documents_fields_counts: DocumentsFieldsCounts {
documents_fields_counts,
},
synonyms: Synonyms { synonyms },
docs_words: DocsWords { docs_words },
updates: Updates { updates },
updates_results: UpdatesResults { updates_results },
updates_notifier,
})
}
pub fn open(
env: &zlmdb::Env,
name: &str,
updates_notifier: crossbeam_channel::Sender<()>,
) -> MResult<Option<Index>> {
// create all the store names
let main_name = main_name(name);
let postings_lists_name = postings_lists_name(name);
let documents_fields_name = documents_fields_name(name);
let documents_fields_counts_name = documents_fields_counts_name(name);
let synonyms_name = synonyms_name(name);
let docs_words_name = docs_words_name(name);
let updates_name = updates_name(name);
let updates_results_name = updates_results_name(name);
// open all the stores
let main = match env.open_dyn_database(Some(&main_name))? {
Some(main) => main,
None => return Ok(None),
};
let postings_lists = match env.open_database(Some(&postings_lists_name))? {
Some(postings_lists) => postings_lists,
None => return Ok(None),
};
let documents_fields = match env.open_database(Some(&documents_fields_name))? {
Some(documents_fields) => documents_fields,
None => return Ok(None),
};
let documents_fields_counts = match env.open_database(Some(&documents_fields_counts_name))? {
Some(documents_fields_counts) => documents_fields_counts,
None => return Ok(None),
};
let synonyms = match env.open_database(Some(&synonyms_name))? {
Some(synonyms) => synonyms,
None => return Ok(None),
};
let docs_words = match env.open_database(Some(&docs_words_name))? {
Some(docs_words) => docs_words,
None => return Ok(None),
};
let updates = match env.open_database(Some(&updates_name))? {
Some(updates) => updates,
None => return Ok(None),
};
let updates_results = match env.open_database(Some(&updates_results_name))? {
Some(updates_results) => updates_results,
None => return Ok(None),
};
Ok(Some(Index {
main: Main { main },
postings_lists: PostingsLists { postings_lists },
documents_fields: DocumentsFields { documents_fields },
documents_fields_counts: DocumentsFieldsCounts {
documents_fields_counts,
},
synonyms: Synonyms { synonyms },
docs_words: DocsWords { docs_words },
updates: Updates { updates },
updates_results: UpdatesResults { updates_results },
updates_notifier,
}))
}
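Taken together, `create` and `open` lend themselves to an open-or-create pattern. A minimal sketch, assuming the caller already owns the `zlmdb::Env` and the crossbeam notifier channel (the `open_or_create` helper itself is hypothetical):
fn open_or_create(
    env: &zlmdb::Env,
    name: &str,
    updates_notifier: crossbeam_channel::Sender<()>,
) -> MResult<Index> {
    // try to open the existing stores first; fall back to creating them
    match open(env, name, updates_notifier.clone())? {
        Some(index) => Ok(index),
        None => create(env, name, updates_notifier),
    }
}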

View File

@ -1,37 +0,0 @@
use crate::DocIndex;
use sdset::{Set, SetBuf};
use std::borrow::Cow;
use zlmdb::types::{ByteSlice, CowSlice};
use zlmdb::Result as ZResult;
#[derive(Copy, Clone)]
pub struct PostingsLists {
pub(crate) postings_lists: zlmdb::Database<ByteSlice, CowSlice<DocIndex>>,
}
impl PostingsLists {
pub fn put_postings_list(
self,
writer: &mut zlmdb::RwTxn,
word: &[u8],
words_indexes: &Set<DocIndex>,
) -> ZResult<()> {
self.postings_lists.put(writer, word, words_indexes)
}
pub fn del_postings_list(self, writer: &mut zlmdb::RwTxn, word: &[u8]) -> ZResult<bool> {
self.postings_lists.delete(writer, word)
}
pub fn postings_list<'txn>(
self,
reader: &'txn zlmdb::RoTxn,
word: &[u8],
) -> ZResult<Option<Cow<'txn, Set<DocIndex>>>> {
match self.postings_lists.get(reader, word)? {
Some(Cow::Borrowed(slice)) => Ok(Some(Cow::Borrowed(Set::new_unchecked(slice)))),
Some(Cow::Owned(vec)) => Ok(Some(Cow::Owned(SetBuf::new_unchecked(vec)))),
None => Ok(None),
}
}
}
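For reference, a read-side sketch of this store, using only the `postings_list` accessor defined above and assuming the caller already holds a read transaction (the helper name is hypothetical):
fn doc_index_count(reader: &zlmdb::RoTxn, store: PostingsLists, word: &str) -> ZResult<usize> {
    // a postings list is a sorted set of DocIndex entries for one word
    let postings = store.postings_list(reader, word.as_bytes())?;
    Ok(postings.map(|set| set.len()).unwrap_or(0))
}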

View File

@ -1,36 +0,0 @@
use std::sync::Arc;
use zlmdb::types::ByteSlice;
use zlmdb::Result as ZResult;
#[derive(Copy, Clone)]
pub struct Synonyms {
pub(crate) synonyms: zlmdb::Database<ByteSlice, ByteSlice>,
}
impl Synonyms {
pub fn put_synonyms(
self,
writer: &mut zlmdb::RwTxn,
word: &[u8],
synonyms: &fst::Set,
) -> ZResult<()> {
let bytes = synonyms.as_fst().as_bytes();
self.synonyms.put(writer, word, bytes)
}
pub fn del_synonyms(self, writer: &mut zlmdb::RwTxn, word: &[u8]) -> ZResult<bool> {
self.synonyms.delete(writer, word)
}
pub fn synonyms(self, reader: &zlmdb::RoTxn, word: &[u8]) -> ZResult<Option<fst::Set>> {
match self.synonyms.get(reader, word)? {
Some(bytes) => {
let len = bytes.len();
let bytes = Arc::from(bytes);
let fst = fst::raw::Fst::from_shared_bytes(bytes, 0, len).unwrap();
Ok(Some(fst::Set::from(fst)))
}
None => Ok(None),
}
}
}

View File

@ -1,81 +0,0 @@
use super::BEU64;
use crate::update::Update;
use serde::{Deserialize, Serialize};
use std::borrow::Cow;
use zlmdb::types::OwnedType;
use zlmdb::{BytesDecode, BytesEncode, Result as ZResult};
pub struct SerdeJson<T>(std::marker::PhantomData<T>);
impl<T> BytesEncode for SerdeJson<T>
where
T: Serialize,
{
type EItem = T;
fn bytes_encode(item: &Self::EItem) -> Option<Cow<[u8]>> {
serde_json::to_vec(item).map(Cow::Owned).ok()
}
}
impl<'a, T: 'a> BytesDecode<'a> for SerdeJson<T>
where
T: Deserialize<'a> + Clone,
{
type DItem = T;
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
serde_json::from_slice(bytes).ok()
}
}
#[derive(Copy, Clone)]
pub struct Updates {
pub(crate) updates: zlmdb::Database<OwnedType<BEU64>, SerdeJson<Update>>,
}
impl Updates {
// TODO do not trigger deserialize if possible
pub fn last_update_id(self, reader: &zlmdb::RoTxn) -> ZResult<Option<(u64, Update)>> {
match self.updates.last(reader)? {
Some((key, data)) => Ok(Some((key.get(), data))),
None => Ok(None),
}
}
// TODO do not trigger deserialize if possible
fn first_update_id(self, reader: &zlmdb::RoTxn) -> ZResult<Option<(u64, Update)>> {
match self.updates.first(reader)? {
Some((key, data)) => Ok(Some((key.get(), data))),
None => Ok(None),
}
}
// TODO do not trigger deserialize if possible
pub fn contains(self, reader: &zlmdb::RoTxn, update_id: u64) -> ZResult<bool> {
let update_id = BEU64::new(update_id);
self.updates.get(reader, &update_id).map(|v| v.is_some())
}
pub fn put_update(
self,
writer: &mut zlmdb::RwTxn,
update_id: u64,
update: &Update,
) -> ZResult<()> {
// TODO prefer using serde_json?
let update_id = BEU64::new(update_id);
self.updates.put(writer, &update_id, update)
}
pub fn pop_front(self, writer: &mut zlmdb::RwTxn) -> ZResult<Option<(u64, Update)>> {
match self.first_update_id(writer)? {
Some((update_id, update)) => {
let key = BEU64::new(update_id);
self.updates.delete(writer, &key)?;
Ok(Some((update_id, update)))
}
None => Ok(None),
}
}
}
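A small sketch of the consumer side of this queue, draining every pending update inside one write transaction; it relies only on the `pop_front` method defined above (the helper name is hypothetical):
fn drain_updates(writer: &mut zlmdb::RwTxn, updates: Updates) -> ZResult<Vec<(u64, Update)>> {
    let mut drained = Vec::new();
    // pop_front deletes each entry as it is returned, so the queue empties as we iterate
    while let Some(entry) = updates.pop_front(writer)? {
        drained.push(entry);
    }
    Ok(drained)
}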

View File

@ -1,37 +0,0 @@
use super::BEU64;
use crate::update::UpdateResult;
use zlmdb::types::{OwnedType, Serde};
use zlmdb::Result as ZResult;
#[derive(Copy, Clone)]
pub struct UpdatesResults {
pub(crate) updates_results: zlmdb::Database<OwnedType<BEU64>, Serde<UpdateResult>>,
}
impl UpdatesResults {
pub fn last_update_id(self, reader: &zlmdb::RoTxn) -> ZResult<Option<(u64, UpdateResult)>> {
match self.updates_results.last(reader)? {
Some((key, data)) => Ok(Some((key.get(), data))),
None => Ok(None),
}
}
pub fn put_update_result(
self,
writer: &mut zlmdb::RwTxn,
update_id: u64,
update_result: &UpdateResult,
) -> ZResult<()> {
let update_id = BEU64::new(update_id);
self.updates_results.put(writer, &update_id, update_result)
}
pub fn update_result(
self,
reader: &zlmdb::RoTxn,
update_id: u64,
) -> ZResult<Option<UpdateResult>> {
let update_id = BEU64::new(update_id);
self.updates_results.get(reader, &update_id)
}
}

View File

@ -1,25 +0,0 @@
use crate::store;
use crate::update::{next_update_id, Update};
use zlmdb::Result as ZResult;
pub fn apply_customs_update(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
customs: &[u8],
) -> ZResult<()> {
main_store.put_customs(writer, customs)
}
pub fn push_customs_update(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
customs: Vec<u8>,
) -> ZResult<u64> {
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::Customs(customs);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}

View File

@ -1,194 +0,0 @@
use std::collections::{HashMap, HashSet};
use fst::{set::OpBuilder, SetBuilder};
use sdset::{duo::Union, SetOperation};
use serde::Serialize;
use crate::raw_indexer::RawIndexer;
use crate::serde::{extract_document_id, RamDocumentStore, Serializer};
use crate::store;
use crate::update::{apply_documents_deletion, next_update_id, Update};
use crate::{Error, MResult, RankedMap};
pub struct DocumentsAddition<D> {
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
documents: Vec<D>,
}
impl<D> DocumentsAddition<D> {
pub fn new(
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
) -> DocumentsAddition<D> {
DocumentsAddition {
updates_store,
updates_results_store,
updates_notifier,
documents: Vec::new(),
}
}
pub fn update_document(&mut self, document: D) {
self.documents.push(document);
}
pub fn finalize(self, writer: &mut zlmdb::RwTxn) -> MResult<u64>
where
D: serde::Serialize,
{
let _ = self.updates_notifier.send(());
let update_id = push_documents_addition(
writer,
self.updates_store,
self.updates_results_store,
self.documents,
)?;
Ok(update_id)
}
}
impl<D> Extend<D> for DocumentsAddition<D> {
fn extend<T: IntoIterator<Item = D>>(&mut self, iter: T) {
self.documents.extend(iter)
}
}
pub fn push_documents_addition<D: serde::Serialize>(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
addition: Vec<D>,
) -> MResult<u64> {
let mut values = Vec::with_capacity(addition.len());
for add in addition {
let vec = serde_json::to_vec(&add)?;
let add = serde_json::from_slice(&vec)?;
values.push(add);
}
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::DocumentsAddition(values);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}
pub fn apply_documents_addition(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
documents_fields_store: store::DocumentsFields,
documents_fields_counts_store: store::DocumentsFieldsCounts,
postings_lists_store: store::PostingsLists,
docs_words_store: store::DocsWords,
mut ranked_map: RankedMap,
addition: Vec<serde_json::Value>,
) -> MResult<()> {
let mut document_ids = HashSet::new();
let mut document_store = RamDocumentStore::new();
let mut document_fields_counts = HashMap::new();
let mut indexer = RawIndexer::new();
let schema = match main_store.schema(writer)? {
Some(schema) => schema,
None => return Err(Error::SchemaMissing),
};
let identifier = schema.identifier_name();
for document in addition {
let document_id = match extract_document_id(identifier, &document)? {
Some(id) => id,
None => return Err(Error::MissingDocumentId),
};
// 1. store the document id for future deletion
document_ids.insert(document_id);
// 2. index the document fields in ram stores
let serializer = Serializer {
schema: &schema,
document_store: &mut document_store,
document_fields_counts: &mut document_fields_counts,
indexer: &mut indexer,
ranked_map: &mut ranked_map,
document_id,
};
document.serialize(serializer)?;
}
// 1. remove the previous documents' match indexes
let documents_to_insert = document_ids.iter().cloned().collect();
apply_documents_deletion(
writer,
main_store,
documents_fields_store,
documents_fields_counts_store,
postings_lists_store,
docs_words_store,
ranked_map.clone(),
documents_to_insert,
)?;
// 2. insert new document attributes in the database
for ((id, attr), value) in document_store.into_inner() {
documents_fields_store.put_document_field(writer, id, attr, &value)?;
}
// 3. insert new document attributes counts
for ((id, attr), count) in document_fields_counts {
documents_fields_counts_store.put_document_field_count(writer, id, attr, count)?;
}
let indexed = indexer.build();
let mut delta_words_builder = SetBuilder::memory();
for (word, delta_set) in indexed.words_doc_indexes {
delta_words_builder.insert(&word).unwrap();
let set = match postings_lists_store.postings_list(writer, &word)? {
Some(set) => Union::new(&set, &delta_set).into_set_buf(),
None => delta_set,
};
postings_lists_store.put_postings_list(writer, &word, &set)?;
}
for (id, words) in indexed.docs_words {
docs_words_store.put_doc_words(writer, id, &words)?;
}
let delta_words = delta_words_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap();
let words = match main_store.words_fst(writer)? {
Some(words) => {
let op = OpBuilder::new()
.add(words.stream())
.add(delta_words.stream())
.r#union();
let mut words_builder = SetBuilder::memory();
words_builder.extend_stream(op).unwrap();
words_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap()
}
None => delta_words,
};
main_store.put_words_fst(writer, &words)?;
main_store.put_ranked_map(writer, &ranked_map)?;
let inserted_documents_len = document_ids.len() as u64;
main_store.put_number_of_documents(writer, |old| old + inserted_documents_len)?;
Ok(())
}
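On the caller side, the `DocumentsAddition` builder is used roughly as follows; this is a sketch that assumes the caller manages the write transaction itself, and the helper name is hypothetical:
fn enqueue_documents<D: serde::Serialize>(
    index: &store::Index,
    writer: &mut zlmdb::RwTxn,
    documents: Vec<D>,
) -> MResult<u64> {
    let mut addition = index.documents_addition();
    for document in documents {
        addition.update_document(document);
    }
    // finalize serializes the documents into an Update::DocumentsAddition
    // and returns the id of the enqueued update
    addition.finalize(writer)
}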

View File

@ -1,188 +0,0 @@
use std::collections::{BTreeSet, HashMap, HashSet};
use fst::{SetBuilder, Streamer};
use meilidb_schema::Schema;
use sdset::{duo::DifferenceByKey, SetBuf, SetOperation};
use crate::serde::extract_document_id;
use crate::store;
use crate::update::{next_update_id, Update};
use crate::{DocumentId, Error, MResult, RankedMap};
pub struct DocumentsDeletion {
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
documents: Vec<DocumentId>,
}
impl DocumentsDeletion {
pub fn new(
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
) -> DocumentsDeletion {
DocumentsDeletion {
updates_store,
updates_results_store,
updates_notifier,
documents: Vec::new(),
}
}
pub fn delete_document_by_id(&mut self, document_id: DocumentId) {
self.documents.push(document_id);
}
pub fn delete_document<D>(&mut self, schema: &Schema, document: D) -> MResult<()>
where
D: serde::Serialize,
{
let identifier = schema.identifier_name();
let document_id = match extract_document_id(identifier, &document)? {
Some(id) => id,
None => return Err(Error::MissingDocumentId),
};
self.delete_document_by_id(document_id);
Ok(())
}
pub fn finalize(self, writer: &mut zlmdb::RwTxn) -> MResult<u64> {
let _ = self.updates_notifier.send(());
let update_id = push_documents_deletion(
writer,
self.updates_store,
self.updates_results_store,
self.documents,
)?;
Ok(update_id)
}
}
impl Extend<DocumentId> for DocumentsDeletion {
fn extend<T: IntoIterator<Item = DocumentId>>(&mut self, iter: T) {
self.documents.extend(iter)
}
}
pub fn push_documents_deletion(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
deletion: Vec<DocumentId>,
) -> MResult<u64> {
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::DocumentsDeletion(deletion);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}
pub fn apply_documents_deletion(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
documents_fields_store: store::DocumentsFields,
documents_fields_counts_store: store::DocumentsFieldsCounts,
postings_lists_store: store::PostingsLists,
docs_words_store: store::DocsWords,
mut ranked_map: RankedMap,
deletion: Vec<DocumentId>,
) -> MResult<()> {
let idset = SetBuf::from_dirty(deletion);
let schema = match main_store.schema(writer)? {
Some(schema) => schema,
None => return Err(Error::SchemaMissing),
};
// collect the ranked attributes according to the schema
let ranked_attrs: Vec<_> = schema
.iter()
.filter_map(
|(_, attr, prop)| {
if prop.is_ranked() {
Some(attr)
} else {
None
}
},
)
.collect();
let mut words_document_ids = HashMap::new();
for id in idset {
// remove all the ranked attributes from the ranked_map
for ranked_attr in &ranked_attrs {
ranked_map.remove(id, *ranked_attr);
}
if let Some(words) = docs_words_store.doc_words(writer, id)? {
let mut stream = words.stream();
while let Some(word) = stream.next() {
let word = word.to_vec();
words_document_ids
.entry(word)
.or_insert_with(Vec::new)
.push(id);
}
}
}
let mut deleted_documents = HashSet::new();
let mut removed_words = BTreeSet::new();
for (word, document_ids) in words_document_ids {
let document_ids = SetBuf::from_dirty(document_ids);
if let Some(doc_indexes) = postings_lists_store.postings_list(writer, &word)? {
let op = DifferenceByKey::new(&doc_indexes, &document_ids, |d| d.document_id, |id| *id);
let doc_indexes = op.into_set_buf();
if !doc_indexes.is_empty() {
postings_lists_store.put_postings_list(writer, &word, &doc_indexes)?;
} else {
postings_lists_store.del_postings_list(writer, &word)?;
removed_words.insert(word);
}
}
for id in document_ids {
documents_fields_counts_store.del_all_document_fields_counts(writer, id)?;
if documents_fields_store.del_all_document_fields(writer, id)? != 0 {
deleted_documents.insert(id);
}
}
}
let deleted_documents_len = deleted_documents.len() as u64;
for id in deleted_documents {
docs_words_store.del_doc_words(writer, id)?;
}
let removed_words = fst::Set::from_iter(removed_words).unwrap();
let words = match main_store.words_fst(writer)? {
Some(words_set) => {
let op = fst::set::OpBuilder::new()
.add(words_set.stream())
.add(removed_words.stream())
.difference();
let mut words_builder = SetBuilder::memory();
words_builder.extend_stream(op).unwrap();
words_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap()
}
None => fst::Set::default(),
};
main_store.put_words_fst(writer, &words)?;
main_store.put_ranked_map(writer, &ranked_map)?;
main_store.put_number_of_documents(writer, |old| old - deleted_documents_len)?;
Ok(())
}
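The deletion path mirrors the addition path; a sketch of enqueuing a deletion by document ids, assuming the caller manages the write transaction (the helper name is hypothetical):
fn enqueue_deletion(
    index: &store::Index,
    writer: &mut zlmdb::RwTxn,
    document_ids: Vec<DocumentId>,
) -> MResult<u64> {
    let mut deletion = index.documents_deletion();
    // DocumentsDeletion implements Extend<DocumentId>, so ids can be pushed in bulk
    deletion.extend(document_ids);
    deletion.finalize(writer)
}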

View File

@ -1,223 +0,0 @@
mod customs_update;
mod documents_addition;
mod documents_deletion;
mod schema_update;
mod synonyms_addition;
mod synonyms_deletion;
pub use self::customs_update::{apply_customs_update, push_customs_update};
pub use self::documents_addition::{apply_documents_addition, DocumentsAddition};
pub use self::documents_deletion::{apply_documents_deletion, DocumentsDeletion};
pub use self::schema_update::{apply_schema_update, push_schema_update};
pub use self::synonyms_addition::{apply_synonyms_addition, SynonymsAddition};
pub use self::synonyms_deletion::{apply_synonyms_deletion, SynonymsDeletion};
use std::cmp;
use std::collections::BTreeMap;
use std::time::{Duration, Instant};
use log::debug;
use serde::{Deserialize, Serialize};
use zlmdb::Result as ZResult;
use crate::{store, DocumentId, MResult, RankedMap};
use meilidb_schema::Schema;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Update {
Schema(Schema),
Customs(Vec<u8>),
DocumentsAddition(Vec<serde_json::Value>),
DocumentsDeletion(Vec<DocumentId>),
SynonymsAddition(BTreeMap<String, Vec<String>>),
SynonymsDeletion(BTreeMap<String, Option<Vec<String>>>),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum UpdateType {
Schema { schema: Schema },
Customs,
DocumentsAddition { number: usize },
DocumentsDeletion { number: usize },
SynonymsAddition { number: usize },
SynonymsDeletion { number: usize },
}
#[derive(Clone, Serialize, Deserialize)]
pub struct DetailedDuration {
pub main: Duration,
}
#[derive(Clone, Serialize, Deserialize)]
pub struct UpdateResult {
pub update_id: u64,
pub update_type: UpdateType,
pub result: Result<(), String>,
pub detailed_duration: DetailedDuration,
}
#[derive(Clone, Serialize, Deserialize)]
pub enum UpdateStatus {
Enqueued,
Processed(UpdateResult),
Unknown,
}
pub fn update_status(
reader: &zlmdb::RoTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
update_id: u64,
) -> MResult<UpdateStatus> {
match updates_results_store.update_result(reader, update_id)? {
Some(result) => Ok(UpdateStatus::Processed(result)),
None => {
if updates_store.contains(reader, update_id)? {
Ok(UpdateStatus::Enqueued)
} else {
Ok(UpdateStatus::Unknown)
}
}
}
}
pub fn next_update_id(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
) -> ZResult<u64> {
let last_update_id = updates_store.last_update_id(writer)?;
let last_update_id = last_update_id.map(|(n, _)| n);
let last_update_results_id = updates_results_store.last_update_id(writer)?;
let last_update_results_id = last_update_results_id.map(|(n, _)| n);
let max_update_id = cmp::max(last_update_id, last_update_results_id);
let new_update_id = max_update_id.map_or(0, |n| n + 1);
Ok(new_update_id)
}
pub fn update_task(
writer: &mut zlmdb::RwTxn,
index: store::Index,
) -> MResult<Option<UpdateResult>> {
let (update_id, update) = match index.updates.pop_front(writer)? {
Some(value) => value,
None => return Ok(None),
};
debug!("Processing update number {}", update_id);
let (update_type, result, duration) = match update {
Update::Schema(schema) => {
let start = Instant::now();
let update_type = UpdateType::Schema {
schema: schema.clone(),
};
let result = apply_schema_update(writer, index.main, &schema);
(update_type, result, start.elapsed())
}
Update::Customs(customs) => {
let start = Instant::now();
let update_type = UpdateType::Customs;
let result = apply_customs_update(writer, index.main, &customs).map_err(Into::into);
(update_type, result, start.elapsed())
}
Update::DocumentsAddition(documents) => {
let start = Instant::now();
let ranked_map = match index.main.ranked_map(writer)? {
Some(ranked_map) => ranked_map,
None => RankedMap::default(),
};
let update_type = UpdateType::DocumentsAddition {
number: documents.len(),
};
let result = apply_documents_addition(
writer,
index.main,
index.documents_fields,
index.documents_fields_counts,
index.postings_lists,
index.docs_words,
ranked_map,
documents,
);
(update_type, result, start.elapsed())
}
Update::DocumentsDeletion(documents) => {
let start = Instant::now();
let ranked_map = match index.main.ranked_map(writer)? {
Some(ranked_map) => ranked_map,
None => RankedMap::default(),
};
let update_type = UpdateType::DocumentsDeletion {
number: documents.len(),
};
let result = apply_documents_deletion(
writer,
index.main,
index.documents_fields,
index.documents_fields_counts,
index.postings_lists,
index.docs_words,
ranked_map,
documents,
);
(update_type, result, start.elapsed())
}
Update::SynonymsAddition(synonyms) => {
let start = Instant::now();
let update_type = UpdateType::SynonymsAddition {
number: synonyms.len(),
};
let result = apply_synonyms_addition(writer, index.main, index.synonyms, synonyms);
(update_type, result, start.elapsed())
}
Update::SynonymsDeletion(synonyms) => {
let start = Instant::now();
let update_type = UpdateType::SynonymsDeletion {
number: synonyms.len(),
};
let result = apply_synonyms_deletion(writer, index.main, index.synonyms, synonyms);
(update_type, result, start.elapsed())
}
};
debug!(
"Processed update number {} {:?} {:?}",
update_id, update_type, result
);
let detailed_duration = DetailedDuration { main: duration };
let status = UpdateResult {
update_id,
update_type,
result: result.map_err(|e| e.to_string()),
detailed_duration,
};
index
.updates_results
.put_update_result(writer, update_id, &status)?;
Ok(Some(status))
}
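The `updates_notifier` channel only signals that work is pending; whoever owns a write transaction then drains the queue through `update_task`. A minimal sketch of that loop, assuming the caller creates and commits the transaction itself (the helper name is hypothetical):
fn process_pending_updates(writer: &mut zlmdb::RwTxn, index: store::Index) -> MResult<Vec<UpdateResult>> {
    let mut results = Vec::new();
    // update_task pops one update, applies it and records its UpdateResult,
    // returning None once the queue is empty
    while let Some(status) = update_task(writer, index.clone())? {
        results.push(status);
    }
    Ok(results)
}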

View File

@ -1,31 +0,0 @@
use crate::update::{next_update_id, Update};
use crate::{error::UnsupportedOperation, store, MResult};
use meilidb_schema::Schema;
pub fn apply_schema_update(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
new_schema: &Schema,
) -> MResult<()> {
if main_store.schema(writer)?.is_some() {
return Err(UnsupportedOperation::SchemaAlreadyExists.into());
}
main_store
.put_schema(writer, new_schema)
.map_err(Into::into)
}
pub fn push_schema_update(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
schema: Schema,
) -> MResult<u64> {
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::Schema(schema);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}

View File

@ -1,118 +0,0 @@
use std::collections::BTreeMap;
use fst::{set::OpBuilder, SetBuilder};
use sdset::SetBuf;
use crate::automaton::normalize_str;
use crate::update::{next_update_id, Update};
use crate::{store, MResult};
pub struct SynonymsAddition {
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
synonyms: BTreeMap<String, Vec<String>>,
}
impl SynonymsAddition {
pub fn new(
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
) -> SynonymsAddition {
SynonymsAddition {
updates_store,
updates_results_store,
updates_notifier,
synonyms: BTreeMap::new(),
}
}
pub fn add_synonym<S, T, I>(&mut self, synonym: S, alternatives: I)
where
S: AsRef<str>,
T: AsRef<str>,
I: IntoIterator<Item = T>,
{
let synonym = normalize_str(synonym.as_ref());
let alternatives = alternatives.into_iter().map(|s| s.as_ref().to_lowercase());
self.synonyms
.entry(synonym)
.or_insert_with(Vec::new)
.extend(alternatives);
}
pub fn finalize(self, writer: &mut zlmdb::RwTxn) -> MResult<u64> {
let _ = self.updates_notifier.send(());
let update_id = push_synonyms_addition(
writer,
self.updates_store,
self.updates_results_store,
self.synonyms,
)?;
Ok(update_id)
}
}
pub fn push_synonyms_addition(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
addition: BTreeMap<String, Vec<String>>,
) -> MResult<u64> {
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::SynonymsAddition(addition);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}
pub fn apply_synonyms_addition(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
synonyms_store: store::Synonyms,
addition: BTreeMap<String, Vec<String>>,
) -> MResult<()> {
let mut synonyms_builder = SetBuilder::memory();
for (word, alternatives) in addition {
synonyms_builder.insert(&word).unwrap();
let alternatives = {
let alternatives = SetBuf::from_dirty(alternatives);
let mut alternatives_builder = SetBuilder::memory();
alternatives_builder.extend_iter(alternatives).unwrap();
let bytes = alternatives_builder.into_inner().unwrap();
fst::Set::from_bytes(bytes).unwrap()
};
synonyms_store.put_synonyms(writer, word.as_bytes(), &alternatives)?;
}
let delta_synonyms = synonyms_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap();
let synonyms = match main_store.synonyms_fst(writer)? {
Some(synonyms) => {
let op = OpBuilder::new()
.add(synonyms.stream())
.add(delta_synonyms.stream())
.r#union();
let mut synonyms_builder = SetBuilder::memory();
synonyms_builder.extend_stream(op).unwrap();
synonyms_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap()
}
None => delta_synonyms,
};
main_store.put_synonyms_fst(writer, &synonyms)?;
Ok(())
}
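A usage sketch for the builder above, registering a synonym and its alternatives and enqueuing the update; the caller is assumed to manage the write transaction, and the helper name is hypothetical:
fn add_car_synonyms(index: &store::Index, writer: &mut zlmdb::RwTxn) -> MResult<u64> {
    let mut addition = index.synonyms_addition();
    // the synonym is normalized and every alternative is lowercased by add_synonym
    addition.add_synonym("car", vec!["automobile", "vehicle"]);
    addition.finalize(writer)
}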

View File

@ -1,156 +0,0 @@
use std::collections::BTreeMap;
use std::iter::FromIterator;
use fst::{set::OpBuilder, SetBuilder};
use sdset::SetBuf;
use crate::automaton::normalize_str;
use crate::update::{next_update_id, Update};
use crate::{store, MResult};
pub struct SynonymsDeletion {
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
synonyms: BTreeMap<String, Option<Vec<String>>>,
}
impl SynonymsDeletion {
pub fn new(
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
updates_notifier: crossbeam_channel::Sender<()>,
) -> SynonymsDeletion {
SynonymsDeletion {
updates_store,
updates_results_store,
updates_notifier,
synonyms: BTreeMap::new(),
}
}
pub fn delete_all_alternatives_of<S: AsRef<str>>(&mut self, synonym: S) {
let synonym = normalize_str(synonym.as_ref());
self.synonyms.insert(synonym, None);
}
pub fn delete_specific_alternatives_of<S, T, I>(&mut self, synonym: S, alternatives: I)
where
S: AsRef<str>,
T: AsRef<str>,
I: Iterator<Item = T>,
{
let synonym = normalize_str(synonym.as_ref());
let value = self.synonyms.entry(synonym).or_insert(None);
let alternatives = alternatives.map(|s| s.as_ref().to_lowercase());
match value {
Some(v) => v.extend(alternatives),
None => *value = Some(Vec::from_iter(alternatives)),
}
}
pub fn finalize(self, writer: &mut zlmdb::RwTxn) -> MResult<u64> {
let _ = self.updates_notifier.send(());
let update_id = push_synonyms_deletion(
writer,
self.updates_store,
self.updates_results_store,
self.synonyms,
)?;
Ok(update_id)
}
}
pub fn push_synonyms_deletion(
writer: &mut zlmdb::RwTxn,
updates_store: store::Updates,
updates_results_store: store::UpdatesResults,
deletion: BTreeMap<String, Option<Vec<String>>>,
) -> MResult<u64> {
let last_update_id = next_update_id(writer, updates_store, updates_results_store)?;
let update = Update::SynonymsDeletion(deletion);
updates_store.put_update(writer, last_update_id, &update)?;
Ok(last_update_id)
}
pub fn apply_synonyms_deletion(
writer: &mut zlmdb::RwTxn,
main_store: store::Main,
synonyms_store: store::Synonyms,
deletion: BTreeMap<String, Option<Vec<String>>>,
) -> MResult<()> {
let mut delete_whole_synonym_builder = SetBuilder::memory();
for (synonym, alternatives) in deletion {
match alternatives {
Some(alternatives) => {
let prev_alternatives = synonyms_store.synonyms(writer, synonym.as_bytes())?;
let prev_alternatives = match prev_alternatives {
Some(alternatives) => alternatives,
None => continue,
};
let delta_alternatives = {
let alternatives = SetBuf::from_dirty(alternatives);
let mut builder = SetBuilder::memory();
builder.extend_iter(alternatives).unwrap();
builder.into_inner().and_then(fst::Set::from_bytes).unwrap()
};
let op = OpBuilder::new()
.add(prev_alternatives.stream())
.add(delta_alternatives.stream())
.difference();
let (alternatives, empty_alternatives) = {
let mut builder = SetBuilder::memory();
let len = builder.get_ref().len();
builder.extend_stream(op).unwrap();
let is_empty = len == builder.get_ref().len();
let bytes = builder.into_inner().unwrap();
let alternatives = fst::Set::from_bytes(bytes).unwrap();
(alternatives, is_empty)
};
if empty_alternatives {
delete_whole_synonym_builder.insert(synonym.as_bytes())?;
} else {
synonyms_store.put_synonyms(writer, synonym.as_bytes(), &alternatives)?;
}
}
None => {
delete_whole_synonym_builder.insert(&synonym).unwrap();
synonyms_store.del_synonyms(writer, synonym.as_bytes())?;
}
}
}
let delta_synonyms = delete_whole_synonym_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap();
let synonyms = match main_store.synonyms_fst(writer)? {
Some(synonyms) => {
let op = OpBuilder::new()
.add(synonyms.stream())
.add(delta_synonyms.stream())
.difference();
let mut synonyms_builder = SetBuilder::memory();
synonyms_builder.extend_stream(op).unwrap();
synonyms_builder
.into_inner()
.and_then(fst::Set::from_bytes)
.unwrap()
}
None => fst::Set::default(),
};
main_store.put_synonyms_fst(writer, &synonyms)?;
Ok(())
}

View File

@ -1,12 +0,0 @@
[package]
name = "meilidb-schema"
version = "0.1.0"
authors = ["Kerollmops <renault.cle@gmail.com>"]
edition = "2018"
[dependencies]
bincode = "1.1.2"
indexmap = { version = "1.1.0", features = ["serde-1"] }
serde = { version = "1.0.91", features = ["derive"] }
serde_json = { version = "1.0.39", features = ["preserve_order"] }
toml = { version = "0.5.0", features = ["preserve_order"] }

View File

@ -1,306 +0,0 @@
use std::collections::{BTreeMap, HashMap};
use std::ops::BitOr;
use std::sync::Arc;
use std::{fmt, u16};
use indexmap::IndexMap;
use serde::{Deserialize, Serialize};
pub const DISPLAYED: SchemaProps = SchemaProps {
displayed: true,
indexed: false,
ranked: false,
};
pub const INDEXED: SchemaProps = SchemaProps {
displayed: false,
indexed: true,
ranked: false,
};
pub const RANKED: SchemaProps = SchemaProps {
displayed: false,
indexed: false,
ranked: true,
};
#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct SchemaProps {
#[serde(default)]
pub displayed: bool,
#[serde(default)]
pub indexed: bool,
#[serde(default)]
pub ranked: bool,
}
impl SchemaProps {
pub fn is_displayed(self) -> bool {
self.displayed
}
pub fn is_indexed(self) -> bool {
self.indexed
}
pub fn is_ranked(self) -> bool {
self.ranked
}
}
impl BitOr for SchemaProps {
type Output = Self;
fn bitor(self, other: Self) -> Self::Output {
SchemaProps {
displayed: self.displayed | other.displayed,
indexed: self.indexed | other.indexed,
ranked: self.ranked | other.ranked,
}
}
}
#[derive(Serialize, Deserialize)]
pub struct SchemaBuilder {
identifier: String,
attributes: IndexMap<String, SchemaProps>,
}
impl SchemaBuilder {
pub fn with_identifier<S: Into<String>>(name: S) -> SchemaBuilder {
SchemaBuilder {
identifier: name.into(),
attributes: IndexMap::new(),
}
}
pub fn new_attribute<S: Into<String>>(&mut self, name: S, props: SchemaProps) -> SchemaAttr {
let len = self.attributes.len();
if self.attributes.insert(name.into(), props).is_some() {
panic!("Field already inserted.")
}
SchemaAttr(len as u16)
}
pub fn build(self) -> Schema {
let mut attrs = HashMap::new();
let mut props = Vec::new();
for (i, (name, prop)) in self.attributes.into_iter().enumerate() {
attrs.insert(name.clone(), SchemaAttr(i as u16));
props.push((name, prop));
}
let identifier = self.identifier;
Schema {
inner: Arc::new(InnerSchema {
identifier,
attrs,
props,
}),
}
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Schema {
inner: Arc<InnerSchema>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct InnerSchema {
identifier: String,
attrs: HashMap<String, SchemaAttr>,
props: Vec<(String, SchemaProps)>,
}
impl Schema {
fn to_builder(&self) -> SchemaBuilder {
let identifier = self.inner.identifier.clone();
let attributes = self.attributes_ordered();
SchemaBuilder {
identifier,
attributes,
}
}
fn attributes_ordered(&self) -> IndexMap<String, SchemaProps> {
let mut ordered = BTreeMap::new();
for (name, attr) in &self.inner.attrs {
let (_, props) = self.inner.props[attr.0 as usize];
ordered.insert(attr.0, (name, props));
}
let mut attributes = IndexMap::with_capacity(ordered.len());
for (_, (name, props)) in ordered {
attributes.insert(name.clone(), props);
}
attributes
}
pub fn props(&self, attr: SchemaAttr) -> SchemaProps {
let (_, props) = self.inner.props[attr.0 as usize];
props
}
pub fn identifier_name(&self) -> &str {
&self.inner.identifier
}
pub fn attribute<S: AsRef<str>>(&self, name: S) -> Option<SchemaAttr> {
self.inner.attrs.get(name.as_ref()).cloned()
}
pub fn attribute_name(&self, attr: SchemaAttr) -> &str {
let (name, _) = &self.inner.props[attr.0 as usize];
name
}
pub fn iter<'a>(&'a self) -> impl Iterator<Item = (&str, SchemaAttr, SchemaProps)> + 'a {
self.inner.props.iter().map(move |(name, prop)| {
let attr = self.inner.attrs.get(name).unwrap();
(name.as_str(), *attr, *prop)
})
}
}
impl Serialize for Schema {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::ser::Serializer,
{
self.to_builder().serialize(serializer)
}
}
impl<'de> Deserialize<'de> for Schema {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::de::Deserializer<'de>,
{
let builder = SchemaBuilder::deserialize(deserializer)?;
Ok(builder.build())
}
}
#[derive(Serialize, Deserialize, Debug, Copy, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
pub struct SchemaAttr(pub u16);
impl SchemaAttr {
pub const fn new(value: u16) -> SchemaAttr {
SchemaAttr(value)
}
pub const fn min() -> SchemaAttr {
SchemaAttr(u16::min_value())
}
pub const fn max() -> SchemaAttr {
SchemaAttr(u16::max_value())
}
pub fn next(self) -> Option<SchemaAttr> {
self.0.checked_add(1).map(SchemaAttr)
}
pub fn prev(self) -> Option<SchemaAttr> {
self.0.checked_sub(1).map(SchemaAttr)
}
}
impl fmt::Display for SchemaAttr {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.0.fmt(f)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::error::Error;
#[test]
fn serialize_deserialize() -> bincode::Result<()> {
let mut builder = SchemaBuilder::with_identifier("id");
builder.new_attribute("alpha", DISPLAYED);
builder.new_attribute("beta", DISPLAYED | INDEXED);
builder.new_attribute("gamma", INDEXED);
let schema = builder.build();
let mut buffer = Vec::new();
bincode::serialize_into(&mut buffer, &schema)?;
let schema2 = bincode::deserialize_from(buffer.as_slice())?;
assert_eq!(schema, schema2);
Ok(())
}
#[test]
fn serialize_deserialize_toml() -> Result<(), Box<dyn Error>> {
let mut builder = SchemaBuilder::with_identifier("id");
builder.new_attribute("alpha", DISPLAYED);
builder.new_attribute("beta", DISPLAYED | INDEXED);
builder.new_attribute("gamma", INDEXED);
let schema = builder.build();
let buffer = toml::to_vec(&schema)?;
let schema2 = toml::from_slice(buffer.as_slice())?;
assert_eq!(schema, schema2);
let data = r#"
identifier = "id"
[attributes."alpha"]
displayed = true
[attributes."beta"]
displayed = true
indexed = true
[attributes."gamma"]
indexed = true
"#;
let schema2 = toml::from_str(data)?;
assert_eq!(schema, schema2);
Ok(())
}
#[test]
fn serialize_deserialize_json() -> Result<(), Box<dyn Error>> {
let mut builder = SchemaBuilder::with_identifier("id");
builder.new_attribute("alpha", DISPLAYED);
builder.new_attribute("beta", DISPLAYED | INDEXED);
builder.new_attribute("gamma", INDEXED);
let schema = builder.build();
let buffer = serde_json::to_vec(&schema)?;
let schema2 = serde_json::from_slice(buffer.as_slice())?;
assert_eq!(schema, schema2);
let data = r#"
{
"identifier": "id",
"attributes": {
"alpha": {
"displayed": true
},
"beta": {
"displayed": true,
"indexed": true
},
"gamma": {
"indexed": true
}
}
}"#;
let schema2 = serde_json::from_str(data)?;
assert_eq!(schema, schema2);
Ok(())
}
}
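As a small illustration of the public API above, this sketch collects the names of all attributes a schema marks as indexed, using only `Schema::iter` and `SchemaProps::is_indexed` (the helper name is hypothetical):
fn indexed_attribute_names(schema: &Schema) -> Vec<&str> {
    schema
        .iter()
        .filter(|(_, _, props)| props.is_indexed())
        .map(|(name, _, _)| name)
        .collect()
}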

View File

@ -1,8 +0,0 @@
[package]
name = "meilidb-tokenizer"
version = "0.1.0"
authors = ["Kerollmops <renault.cle@gmail.com>"]
edition = "2018"
[dependencies]
slice-group-by = "0.2.4"

View File

@ -1,500 +0,0 @@
use self::SeparatorCategory::*;
use slice_group_by::StrGroupBy;
use std::iter::Peekable;
pub fn is_cjk(c: char) -> bool {
(c >= '\u{2e80}' && c <= '\u{2eff}')
|| (c >= '\u{2f00}' && c <= '\u{2fdf}')
|| (c >= '\u{3040}' && c <= '\u{309f}')
|| (c >= '\u{30a0}' && c <= '\u{30ff}')
|| (c >= '\u{3100}' && c <= '\u{312f}')
|| (c >= '\u{3200}' && c <= '\u{32ff}')
|| (c >= '\u{3400}' && c <= '\u{4dbf}')
|| (c >= '\u{4e00}' && c <= '\u{9fff}')
|| (c >= '\u{f900}' && c <= '\u{faff}')
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum SeparatorCategory {
Soft,
Hard,
}
impl SeparatorCategory {
fn merge(self, other: SeparatorCategory) -> SeparatorCategory {
if let (Soft, Soft) = (self, other) {
Soft
} else {
Hard
}
}
fn to_usize(self) -> usize {
match self {
Soft => 1,
Hard => 8,
}
}
}
fn is_separator(c: char) -> bool {
classify_separator(c).is_some()
}
fn classify_separator(c: char) -> Option<SeparatorCategory> {
match c {
' ' | '-' | '_' | '\'' | ':' | '"' => Some(Soft),
'.' | ';' | ',' | '!' | '?' | '(' | ')' => Some(Hard),
_ => None,
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum CharCategory {
Separator(SeparatorCategory),
Cjk,
Other,
}
fn classify_char(c: char) -> CharCategory {
if let Some(category) = classify_separator(c) {
CharCategory::Separator(category)
} else if is_cjk(c) {
CharCategory::Cjk
} else {
CharCategory::Other
}
}
fn is_str_word(s: &str) -> bool {
!s.chars().any(is_separator)
}
fn same_group_category(a: char, b: char) -> bool {
match (classify_char(a), classify_char(b)) {
(CharCategory::Cjk, _) | (_, CharCategory::Cjk) => false,
(CharCategory::Separator(_), CharCategory::Separator(_)) => true,
(a, b) => a == b,
}
}
// fold the number of chars along with the index position
fn chars_count_index((n, _): (usize, usize), (i, c): (usize, char)) -> (usize, usize) {
(n + 1, i + c.len_utf8())
}
pub fn split_query_string(query: &str) -> impl Iterator<Item = &str> {
Tokenizer::new(query).map(|t| t.word)
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct Token<'a> {
pub word: &'a str,
pub word_index: usize,
pub char_index: usize,
}
pub struct Tokenizer<'a> {
inner: &'a str,
word_index: usize,
char_index: usize,
}
impl<'a> Tokenizer<'a> {
pub fn new(string: &str) -> Tokenizer {
// skip every separator and set `char_index`
// to the number of chars trimmed
let (count, index) = string
.char_indices()
.take_while(|(_, c)| is_separator(*c))
.fold((0, 0), chars_count_index);
Tokenizer {
inner: &string[index..],
word_index: 0,
char_index: count,
}
}
}
impl<'a> Iterator for Tokenizer<'a> {
type Item = Token<'a>;
fn next(&mut self) -> Option<Self::Item> {
let mut iter = self.inner.linear_group_by(same_group_category).peekable();
while let (Some(string), next_string) = (iter.next(), iter.peek()) {
let (count, index) = string.char_indices().fold((0, 0), chars_count_index);
if !is_str_word(string) {
self.word_index += string
.chars()
.filter_map(classify_separator)
.fold(Soft, |a, x| a.merge(x))
.to_usize();
self.char_index += count;
self.inner = &self.inner[index..];
continue;
}
let token = Token {
word: string,
word_index: self.word_index,
char_index: self.char_index,
};
if next_string.filter(|s| is_str_word(s)).is_some() {
self.word_index += 1;
}
self.char_index += count;
self.inner = &self.inner[index..];
return Some(token);
}
self.inner = "";
None
}
}
pub struct SeqTokenizer<'a, I>
where
I: Iterator<Item = &'a str>,
{
inner: I,
current: Option<Peekable<Tokenizer<'a>>>,
word_offset: usize,
char_offset: usize,
}
impl<'a, I> SeqTokenizer<'a, I>
where
I: Iterator<Item = &'a str>,
{
pub fn new(mut iter: I) -> SeqTokenizer<'a, I> {
let current = iter.next().map(|s| Tokenizer::new(s).peekable());
SeqTokenizer {
inner: iter,
current,
word_offset: 0,
char_offset: 0,
}
}
}
impl<'a, I> Iterator for SeqTokenizer<'a, I>
where
I: Iterator<Item = &'a str>,
{
type Item = Token<'a>;
fn next(&mut self) -> Option<Self::Item> {
match &mut self.current {
Some(current) => {
match current.next() {
Some(token) => {
// we must apply the word and char offsets
// to the token before returning it
let token = Token {
word: token.word,
word_index: token.word_index + self.word_offset,
char_index: token.char_index + self.char_offset,
};
// if this is the last iteration on this text
// we must save the offsets for next texts
if current.peek().is_none() {
let hard_space = SeparatorCategory::Hard.to_usize();
self.word_offset = token.word_index + hard_space;
self.char_offset = token.char_index + hard_space;
}
Some(token)
}
None => {
// no more words in this text, we must
// start tokenizing the next text
self.current = self.inner.next().map(|s| Tokenizer::new(s).peekable());
self.next()
}
}
}
// no more texts available
None => None,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn easy() {
let mut tokenizer = Tokenizer::new("salut");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "salut",
word_index: 0,
char_index: 0
})
);
assert_eq!(tokenizer.next(), None);
let mut tokenizer = Tokenizer::new("yo ");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "yo",
word_index: 0,
char_index: 0
})
);
assert_eq!(tokenizer.next(), None);
}
#[test]
fn hard() {
let mut tokenizer = Tokenizer::new(" .? yo lolo. aïe (ouch)");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "yo",
word_index: 0,
char_index: 4
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lolo",
word_index: 1,
char_index: 7
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "aïe",
word_index: 9,
char_index: 13
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "ouch",
word_index: 17,
char_index: 18
})
);
assert_eq!(tokenizer.next(), None);
let mut tokenizer = Tokenizer::new("yo ! lolo ? wtf - lol . aïe ,");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "yo",
word_index: 0,
char_index: 0
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lolo",
word_index: 8,
char_index: 5
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "wtf",
word_index: 16,
char_index: 12
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lol",
word_index: 17,
char_index: 18
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "aïe",
word_index: 25,
char_index: 24
})
);
assert_eq!(tokenizer.next(), None);
}
#[test]
fn hard_long_chars() {
let mut tokenizer = Tokenizer::new(" .? yo 😂. aïe");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "yo",
word_index: 0,
char_index: 4
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "😂",
word_index: 1,
char_index: 7
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "aïe",
word_index: 9,
char_index: 10
})
);
assert_eq!(tokenizer.next(), None);
let mut tokenizer = Tokenizer::new("yo ! lolo ? 😱 - lol . 😣 ,");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "yo",
word_index: 0,
char_index: 0
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lolo",
word_index: 8,
char_index: 5
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "😱",
word_index: 16,
char_index: 12
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lol",
word_index: 17,
char_index: 16
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "😣",
word_index: 25,
char_index: 22
})
);
assert_eq!(tokenizer.next(), None);
}
#[test]
fn hard_kanjis() {
let mut tokenizer = Tokenizer::new("\u{2ec4}lolilol\u{2ec7}");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ec4}",
word_index: 0,
char_index: 0
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lolilol",
word_index: 1,
char_index: 1
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ec7}",
word_index: 2,
char_index: 8
})
);
assert_eq!(tokenizer.next(), None);
let mut tokenizer = Tokenizer::new("\u{2ec4}\u{2ed3}\u{2ef2} lolilol - hello \u{2ec7}");
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ec4}",
word_index: 0,
char_index: 0
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ed3}",
word_index: 1,
char_index: 1
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ef2}",
word_index: 2,
char_index: 2
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "lolilol",
word_index: 3,
char_index: 4
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "hello",
word_index: 4,
char_index: 14
})
);
assert_eq!(
tokenizer.next(),
Some(Token {
word: "\u{2ec7}",
word_index: 5,
char_index: 23
})
);
assert_eq!(tokenizer.next(), None);
}
}
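For completeness, the public `split_query_string` helper defined above simply yields the words of a query with separators dropped; a tiny sketch (the helper name is hypothetical):
fn query_terms(query: &str) -> Vec<&str> {
    split_query_string(query).collect()
}
// query_terms("salut, le monde !") yields ["salut", "le", "monde"]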

View File

@ -0,0 +1,8 @@
[package]
name = "meilisearch-error"
version = "0.21.0"
authors = ["marin <postma.marin@protonmail.com>"]
edition = "2018"
[dependencies]
actix-http = "=3.0.0-beta.6"

View File

@ -0,0 +1,200 @@
use std::fmt;
use actix_http::http::StatusCode;
pub trait ErrorCode: std::error::Error {
fn error_code(&self) -> Code;
/// returns the HTTP status code associated with the error
fn http_status(&self) -> StatusCode {
self.error_code().http()
}
/// returns the doc URL associated with the error
fn error_url(&self) -> String {
self.error_code().url()
}
/// returns error name, used as error code
fn error_name(&self) -> String {
self.error_code().name()
}
/// return the error type
fn error_type(&self) -> String {
self.error_code().type_()
}
}
#[allow(clippy::enum_variant_names)]
enum ErrorType {
InternalError,
InvalidRequestError,
AuthenticationError,
}
impl fmt::Display for ErrorType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use ErrorType::*;
match self {
InternalError => write!(f, "internal_error"),
InvalidRequestError => write!(f, "invalid_request_error"),
AuthenticationError => write!(f, "authentication_error"),
}
}
}
pub enum Code {
// index related error
CreateIndex,
IndexAlreadyExists,
IndexNotFound,
InvalidIndexUid,
OpenIndex,
// invalid state error
InvalidState,
MissingPrimaryKey,
PrimaryKeyAlreadyPresent,
MaxFieldsLimitExceeded,
MissingDocumentId,
Facet,
Filter,
BadParameter,
BadRequest,
DocumentNotFound,
Internal,
InvalidToken,
MissingAuthorizationHeader,
NotFound,
PayloadTooLarge,
RetrieveDocument,
SearchDocuments,
UnsupportedMediaType,
DumpAlreadyInProgress,
DumpProcessFailed,
}
impl Code {
/// associate a `Code` variant with the actual ErrCode
fn err_code(&self) -> ErrCode {
use Code::*;
match self {
// index related errors
// CreateIndex is thrown on an internal error while creating an index.
CreateIndex => ErrCode::internal("index_creation_failed", StatusCode::BAD_REQUEST),
IndexAlreadyExists => ErrCode::invalid("index_already_exists", StatusCode::BAD_REQUEST),
// thrown when requesting a nonexistent index
IndexNotFound => ErrCode::invalid("index_not_found", StatusCode::NOT_FOUND),
InvalidIndexUid => ErrCode::invalid("invalid_index_uid", StatusCode::BAD_REQUEST),
OpenIndex => {
ErrCode::internal("index_not_accessible", StatusCode::INTERNAL_SERVER_ERROR)
}
// invalid state error
InvalidState => ErrCode::internal("invalid_state", StatusCode::INTERNAL_SERVER_ERROR),
// thrown when no primary key has been set
MissingPrimaryKey => ErrCode::invalid("missing_primary_key", StatusCode::BAD_REQUEST),
// error thrown when trying to set an already existing primary key
PrimaryKeyAlreadyPresent => {
ErrCode::invalid("primary_key_already_present", StatusCode::BAD_REQUEST)
}
// invalid document
MaxFieldsLimitExceeded => {
ErrCode::invalid("max_fields_limit_exceeded", StatusCode::BAD_REQUEST)
}
MissingDocumentId => ErrCode::invalid("missing_document_id", StatusCode::BAD_REQUEST),
// error related to facets
Facet => ErrCode::invalid("invalid_facet", StatusCode::BAD_REQUEST),
// error related to filters
Filter => ErrCode::invalid("invalid_filter", StatusCode::BAD_REQUEST),
BadParameter => ErrCode::invalid("bad_parameter", StatusCode::BAD_REQUEST),
BadRequest => ErrCode::invalid("bad_request", StatusCode::BAD_REQUEST),
DocumentNotFound => ErrCode::invalid("document_not_found", StatusCode::NOT_FOUND),
Internal => ErrCode::internal("internal", StatusCode::INTERNAL_SERVER_ERROR),
InvalidToken => ErrCode::authentication("invalid_token", StatusCode::FORBIDDEN),
MissingAuthorizationHeader => {
ErrCode::authentication("missing_authorization_header", StatusCode::UNAUTHORIZED)
}
NotFound => ErrCode::invalid("not_found", StatusCode::NOT_FOUND),
PayloadTooLarge => ErrCode::invalid("payload_too_large", StatusCode::PAYLOAD_TOO_LARGE),
RetrieveDocument => {
ErrCode::internal("unretrievable_document", StatusCode::BAD_REQUEST)
}
SearchDocuments => ErrCode::internal("search_error", StatusCode::BAD_REQUEST),
UnsupportedMediaType => {
ErrCode::invalid("unsupported_media_type", StatusCode::UNSUPPORTED_MEDIA_TYPE)
}
// error related to dump
DumpAlreadyInProgress => {
ErrCode::invalid("dump_already_in_progress", StatusCode::CONFLICT)
}
DumpProcessFailed => {
ErrCode::internal("dump_process_failed", StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
/// return the HTTP status code associated with the `Code`
fn http(&self) -> StatusCode {
self.err_code().status_code
}
/// return error name, used as error code
fn name(&self) -> String {
self.err_code().error_name.to_string()
}
/// return the error type
fn type_(&self) -> String {
self.err_code().error_type.to_string()
}
/// return the doc URL associated with the error
fn url(&self) -> String {
format!("https://docs.meilisearch.com/errors#{}", self.name())
}
}
/// Internal structure providing a convenient way to create error codes
struct ErrCode {
status_code: StatusCode,
error_type: ErrorType,
error_name: &'static str,
}
impl ErrCode {
fn authentication(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::AuthenticationError,
}
}
fn internal(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::InternalError,
}
}
fn invalid(error_name: &'static str, status_code: StatusCode) -> ErrCode {
ErrCode {
status_code,
error_name,
error_type: ErrorType::InvalidRequestError,
}
}
}
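A sketch of how a downstream crate plugs its own error type into this module; `MyError` is hypothetical, only the `ErrorCode` trait and the `Code` enum above are real, and the `fmt` import at the top of the file is reused:
#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "document not found")
    }
}

impl std::error::Error for MyError {}

impl ErrorCode for MyError {
    fn error_code(&self) -> Code {
        Code::DocumentNotFound
    }
    // http_status(), error_name(), error_type() and error_url() now come from
    // the default trait methods and resolve to 404 / "document_not_found"
}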

meilisearch-http/Cargo.toml Normal file
View File

@ -0,0 +1,123 @@
[package]
authors = ["Quentin de Quelen <quentin@dequelen.me>", "Clément Renault <clement@meilisearch.com>"]
description = "MeiliSearch HTTP server"
edition = "2018"
license = "MIT"
name = "meilisearch-http"
version = "0.21.0"
[[bin]]
name = "meilisearch"
path = "src/main.rs"
[build-dependencies]
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "6db8c3e", optional = true }
anyhow = { version = "*", optional = true }
cargo_toml = { version = "0.9.0", optional = true }
hex = { version = "0.4.3", optional = true }
reqwest = { version = "0.11.3", features = ["blocking", "rustls-tls"], default-features = false, optional = true }
sha-1 = { version = "0.9.4", optional = true }
tempfile = { version = "3.1.0", optional = true }
vergen = { version = "5.1.13", default-features = false, features = ["build", "git"] }
zip = { version = "0.5.12", optional = true }
[dependencies]
actix-cors = { git = "https://github.com/MarinPostma/actix-extras.git", rev = "2dac1a4"}
actix-http = { version = "=3.0.0-beta.6" }
actix-service = "2.0.0"
actix-web = { version = "=4.0.0-beta.6", features = ["rustls"] }
actix-web-static-files = { git = "https://github.com/MarinPostma/actix-web-static-files.git", rev = "6db8c3e", optional = true }
anyhow = "1.0.36"
async-stream = "0.3.0"
async-trait = "0.1.42"
arc-swap = "1.2.0"
byte-unit = { version = "4.0.9", default-features = false, features = ["std"] }
bytes = "0.6.0"
chrono = { version = "0.4.19", features = ["serde"] }
crossbeam-channel = "0.5.0"
either = "1.6.1"
env_logger = "0.8.2"
flate2 = "1.0.19"
fst = "0.4.5"
futures = "0.3.7"
futures-util = "0.3.8"
grenad = { git = "https://github.com/Kerollmops/grenad.git", rev = "3adcb26" }
heed = { git = "https://github.com/Kerollmops/heed", tag = "v0.12.1" }
http = "0.2.1"
indexmap = { version = "1.3.2", features = ["serde-1"] }
itertools = "0.10.0"
log = "0.4.8"
main_error = "0.1.0"
meilisearch-error = { path = "../meilisearch-error" }
meilisearch-tokenizer = { git = "https://github.com/meilisearch/tokenizer.git", tag = "v0.2.5" }
memmap = "0.7.0"
milli = { git = "https://github.com/meilisearch/milli.git", tag = "v0.10.2" }
mime = "0.3.16"
num_cpus = "1.13.0"
once_cell = "1.5.2"
parking_lot = "0.11.1"
rand = "0.7.3"
rayon = "1.5.0"
regex = "1.4.2"
rustls = "0.19"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0.59", features = ["preserve_order"] }
sha2 = "0.9.1"
siphasher = "0.3.2"
slice-group-by = "0.2.6"
structopt = "0.3.20"
tar = "0.4.29"
tempfile = "3.1.0"
thiserror = "1.0.24"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde"] }
walkdir = "2.3.2"
obkv = "0.2.0"
pin-project = "1.0.7"
whoami = { version = "1.1.2", optional = true }
reqwest = { version = "0.11.3", features = ["json", "rustls-tls"], default-features = false, optional = true }
serdeval = "0.1.0"
[dependencies.sentry]
default-features = false
features = [
"backtrace",
"contexts",
"panic",
"reqwest",
"rustls",
"log",
]
optional = true
version = "0.22.0"
[dev-dependencies]
actix-rt = "2.1.0"
assert-json-diff = { branch = "master", git = "https://github.com/qdequele/assert-json-diff" }
mockall = "0.9.1"
paste = "1.0.5"
serde_url_params = "0.2.1"
tempdir = "0.3.7"
urlencoding = "1.1.1"
[features]
mini-dashboard = [
"actix-web-static-files",
"anyhow",
"cargo_toml",
"hex",
"reqwest",
"sha-1",
"tempfile",
"zip",
]
analytics = ["sentry", "whoami", "reqwest"]
default = ["analytics", "mini-dashboard"]
[target.'cfg(target_os = "linux")'.dependencies]
jemallocator = "0.3.2"
[package.metadata.mini-dashboard]
assets-url = "https://github.com/meilisearch/mini-dashboard/releases/download/v0.1.4/build.zip"
sha1 = "750e8a8e56cfa61fbf9ead14b08a5f17ad3f3d37"

meilisearch-http/build.rs Normal file
View File

@ -0,0 +1,84 @@
use vergen::{vergen, Config};
fn main() {
vergen(Config::default()).unwrap();
#[cfg(feature = "mini-dashboard")]
mini_dashboard::setup_mini_dashboard().expect("Could not load the mini-dashboard assets");
}
#[cfg(feature = "mini-dashboard")]
mod mini_dashboard {
use std::env;
use std::fs::{create_dir_all, File, OpenOptions};
use std::io::{Cursor, Read, Write};
use std::path::PathBuf;
use actix_web_static_files::resource_dir;
use anyhow::Context;
use cargo_toml::Manifest;
use reqwest::blocking::get;
use sha1::{Digest, Sha1};
pub fn setup_mini_dashboard() -> anyhow::Result<()> {
let cargo_manifest_dir = PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap());
let cargo_toml = cargo_manifest_dir.join("Cargo.toml");
let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
let sha1_path = out_dir.join(".mini-dashboard.sha1");
let dashboard_dir = out_dir.join("mini-dashboard");
let manifest = Manifest::from_path(cargo_toml).unwrap();
let meta = &manifest
.package
.as_ref()
.context("package not specified in Cargo.toml")?
.metadata
.as_ref()
.context("no metadata specified in Cargo.toml")?["mini-dashboard"];
// Check if there already is a dashboard built, and if it is up to date.
if sha1_path.exists() && dashboard_dir.exists() {
let mut sha1_file = File::open(&sha1_path)?;
let mut sha1 = String::new();
sha1_file.read_to_string(&mut sha1)?;
if sha1 == meta["sha1"].as_str().unwrap() {
// Nothing to do.
return Ok(());
}
}
let url = meta["assets-url"].as_str().unwrap();
let dashboard_assets_bytes = get(url)?.bytes()?;
let mut hasher = Sha1::new();
hasher.update(&dashboard_assets_bytes);
let sha1 = hex::encode(hasher.finalize());
assert_eq!(
meta["sha1"].as_str().unwrap(),
sha1,
"Downloaded mini-dashboard shasum differs from the one specified in the Cargo.toml"
);
create_dir_all(&dashboard_dir)?;
let cursor = Cursor::new(&dashboard_assets_bytes);
let mut zip = zip::read::ZipArchive::new(cursor)?;
zip.extract(&dashboard_dir)?;
resource_dir(&dashboard_dir).build()?;
// Write the sha1 for the dashboard back to file.
let mut file = OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.open(sha1_path)?;
file.write_all(sha1.as_bytes())?;
file.flush()?;
Ok(())
}
}
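The build script above only downloads, verifies, and unpacks the dashboard; serving it relies on the `generated.rs` that `resource_dir(&dashboard_dir).build()?` writes into `OUT_DIR`. Below is a minimal sketch of how a binary typically consumes that output with actix-web-static-files; the mount path, port, and the `ResourceFiles`/`generate()` names follow the upstream crate's conventions and are assumptions, not taken from this diff.

// Sketch only: include the code emitted by the build script and mount the assets.
include!(concat!(env!("OUT_DIR"), "/generated.rs"));

use actix_web::{App, HttpServer};
use actix_web_static_files::ResourceFiles;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // `generate()` is the function emitted into generated.rs by the build step.
    HttpServer::new(|| App::new().service(ResourceFiles::new("/", generate())))
        .bind(("127.0.0.1", 7700))?
        .run()
        .await
}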


@@ -0,0 +1,126 @@
use std::hash::{Hash, Hasher};
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use log::debug;
use serde::Serialize;
use siphasher::sip::SipHasher;
use crate::Data;
use crate::Opt;
const AMPLITUDE_API_KEY: &str = "f7fba398780e06d8fe6666a9be7e3d47";
#[derive(Debug, Serialize)]
struct EventProperties {
database_size: u64,
last_update_timestamp: Option<i64>, //timestamp
number_of_documents: Vec<u64>,
}
impl EventProperties {
async fn from(data: Data) -> anyhow::Result<EventProperties> {
let stats = data.index_controller.get_all_stats().await?;
let database_size = stats.database_size;
let last_update_timestamp = stats.last_update.map(|u| u.timestamp());
let number_of_documents = stats
.indexes
.values()
.map(|index| index.number_of_documents)
.collect();
Ok(EventProperties {
database_size,
last_update_timestamp,
number_of_documents,
})
}
}
#[derive(Debug, Serialize)]
struct UserProperties<'a> {
env: &'a str,
start_since_days: u64,
user_email: Option<String>,
server_provider: Option<String>,
}
#[derive(Debug, Serialize)]
struct Event<'a> {
user_id: &'a str,
event_type: &'a str,
device_id: &'a str,
time: u64,
app_version: &'a str,
user_properties: UserProperties<'a>,
event_properties: Option<EventProperties>,
}
#[derive(Debug, Serialize)]
struct AmplitudeRequest<'a> {
api_key: &'a str,
events: Vec<Event<'a>>,
}
pub async fn analytics_sender(data: Data, opt: Opt) {
let username = whoami::username();
let hostname = whoami::hostname();
let platform = whoami::platform();
let uid = username + &hostname + &platform.to_string();
let mut hasher = SipHasher::new();
uid.hash(&mut hasher);
let hash = hasher.finish();
let uid = format!("{:X}", hash);
let platform = platform.to_string();
let first_start = Instant::now();
loop {
let n = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
let user_id = &uid;
let device_id = &platform;
let time = n.as_secs();
let event_type = "runtime_tick";
let elapsed_since_start = first_start.elapsed().as_secs() / 86_400; // One day
let event_properties = EventProperties::from(data.clone()).await.ok();
let app_version = env!("CARGO_PKG_VERSION").to_string();
let app_version = app_version.as_str();
let user_email = std::env::var("MEILI_USER_EMAIL").ok();
let server_provider = std::env::var("MEILI_SERVER_PROVIDER").ok();
let user_properties = UserProperties {
env: &opt.env,
start_since_days: elapsed_since_start,
user_email,
server_provider,
};
let event = Event {
user_id,
event_type,
device_id,
time,
app_version,
user_properties,
event_properties,
};
let request = AmplitudeRequest {
api_key: AMPLITUDE_API_KEY,
events: vec![event],
};
let response = reqwest::Client::new()
.post("https://api2.amplitude.com/2/httpapi")
.timeout(Duration::from_secs(60)) // 1 minute max
.json(&request)
.send()
.await;
if let Err(e) = response {
debug!("Unsuccessful call to Amplitude: {}", e);
}
tokio::time::sleep(Duration::from_secs(3600)).await;
}
}
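Since the loop above never returns, `analytics_sender` is meant to be detached onto the tokio runtime at startup so it ticks once per hour in the background. A minimal sketch, assuming `data: Data` and `opt: Opt` are already built and that the user's analytics opt-out has been checked beforehand:

// Sketch only: spawn the sender as a background task; both types implement Clone.
tokio::spawn(analytics_sender(data.clone(), opt.clone()));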


@@ -0,0 +1,133 @@
use std::ops::Deref;
use std::sync::Arc;
use sha2::Digest;
use crate::index::{Checked, Settings};
use crate::index_controller::{
error::Result, DumpInfo, IndexController, IndexMetadata, IndexSettings, IndexStats, Stats,
};
use crate::option::Opt;
pub mod search;
mod updates;
#[derive(Clone)]
pub struct Data {
inner: Arc<DataInner>,
}
impl Deref for Data {
type Target = DataInner;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
pub struct DataInner {
pub index_controller: IndexController,
pub api_keys: ApiKeys,
options: Opt,
}
#[derive(Clone)]
pub struct ApiKeys {
pub public: Option<String>,
pub private: Option<String>,
pub master: Option<String>,
}
impl ApiKeys {
pub fn generate_missing_api_keys(&mut self) {
if let Some(master_key) = &self.master {
if self.private.is_none() {
let key = format!("{}-private", master_key);
let sha = sha2::Sha256::digest(key.as_bytes());
self.private = Some(format!("{:x}", sha));
}
if self.public.is_none() {
let key = format!("{}-public", master_key);
let sha = sha2::Sha256::digest(key.as_bytes());
self.public = Some(format!("{:x}", sha));
}
}
}
}
impl Data {
pub fn new(options: Opt) -> anyhow::Result<Data> {
let path = options.db_path.clone();
let index_controller = IndexController::new(&path, &options)?;
let mut api_keys = ApiKeys {
master: options.clone().master_key,
private: None,
public: None,
};
api_keys.generate_missing_api_keys();
let inner = DataInner {
index_controller,
api_keys,
options,
};
let inner = Arc::new(inner);
Ok(Data { inner })
}
pub async fn settings(&self, uid: String) -> Result<Settings<Checked>> {
self.index_controller.settings(uid).await
}
pub async fn list_indexes(&self) -> Result<Vec<IndexMetadata>> {
self.index_controller.list_indexes().await
}
pub async fn index(&self, uid: String) -> Result<IndexMetadata> {
self.index_controller.get_index(uid).await
}
pub async fn create_index(
&self,
uid: String,
primary_key: Option<String>,
) -> Result<IndexMetadata> {
let settings = IndexSettings {
uid: Some(uid),
primary_key,
};
let meta = self.index_controller.create_index(settings).await?;
Ok(meta)
}
pub async fn get_index_stats(&self, uid: String) -> Result<IndexStats> {
Ok(self.index_controller.get_index_stats(uid).await?)
}
pub async fn get_all_stats(&self) -> Result<Stats> {
Ok(self.index_controller.get_all_stats().await?)
}
pub async fn create_dump(&self) -> Result<DumpInfo> {
Ok(self.index_controller.create_dump().await?)
}
pub async fn dump_status(&self, uid: String) -> Result<DumpInfo> {
Ok(self.index_controller.dump_info(uid).await?)
}
#[inline]
pub fn http_payload_size_limit(&self) -> usize {
self.options.http_payload_size_limit.get_bytes() as usize
}
#[inline]
pub fn api_keys(&self) -> &ApiKeys {
&self.api_keys
}
}
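A short usage sketch of this module: parse `Opt` (structopt is in the dependency list, so `Opt::from_args()` is assumed to exist), build the shared `Data`, and read back the derived keys. The surrounding function and the assertion are illustrative only; the assertion just restates the invariant enforced by `generate_missing_api_keys`.

// Sketch, not part of this diff: wire Opt into Data at startup.
use structopt::StructOpt;

fn init_data() -> anyhow::Result<Data> {
    let opt = Opt::from_args(); // assumes `Opt` derives `StructOpt`
    let data = Data::new(opt)?; // opens the IndexController and derives missing API keys
    if data.api_keys().master.is_some() {
        assert!(data.api_keys().private.is_some() && data.api_keys().public.is_some());
    }
    Ok(data)
}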


@@ -0,0 +1,34 @@
use serde_json::{Map, Value};
use super::Data;
use crate::index::{SearchQuery, SearchResult};
use crate::index_controller::error::Result;
impl Data {
pub async fn search(&self, index: String, search_query: SearchQuery) -> Result<SearchResult> {
self.index_controller.search(index, search_query).await
}
pub async fn retrieve_documents(
&self,
index: String,
offset: usize,
limit: usize,
attributes_to_retrieve: Option<Vec<String>>,
) -> Result<Vec<Map<String, Value>>> {
self.index_controller
.documents(index, offset, limit, attributes_to_retrieve)
.await
}
pub async fn retrieve_document(
&self,
index: String,
document_id: String,
attributes_to_retrieve: Option<Vec<String>>,
) -> Result<Map<String, Value>> {
self.index_controller
.document(index, document_id, attributes_to_retrieve)
.await
}
}


@@ -0,0 +1,80 @@
use milli::update::{IndexDocumentsMethod, UpdateFormat};
use crate::extractors::payload::Payload;
use crate::index::{Checked, Settings};
use crate::index_controller::{error::Result, IndexMetadata, IndexSettings, UpdateStatus};
use crate::Data;
impl Data {
pub async fn add_documents(
&self,
index: String,
method: IndexDocumentsMethod,
format: UpdateFormat,
stream: Payload,
primary_key: Option<String>,
) -> Result<UpdateStatus> {
let update_status = self
.index_controller
.add_documents(index, method, format, stream, primary_key)
.await?;
Ok(update_status)
}
pub async fn update_settings(
&self,
index: String,
settings: Settings<Checked>,
create: bool,
) -> Result<UpdateStatus> {
let update = self
.index_controller
.update_settings(index, settings, create)
.await?;
Ok(update)
}
pub async fn clear_documents(&self, index: String) -> Result<UpdateStatus> {
let update = self.index_controller.clear_documents(index).await?;
Ok(update)
}
pub async fn delete_documents(
&self,
index: String,
document_ids: Vec<String>,
) -> Result<UpdateStatus> {
let update = self
.index_controller
.delete_documents(index, document_ids)
.await?;
Ok(update)
}
pub async fn delete_index(&self, index: String) -> Result<()> {
self.index_controller.delete_index(index).await?;
Ok(())
}
pub async fn get_update_status(&self, index: String, uid: u64) -> Result<UpdateStatus> {
self.index_controller.update_status(index, uid).await
}
pub async fn get_updates_status(&self, index: String) -> Result<Vec<UpdateStatus>> {
self.index_controller.all_update_status(index).await
}
pub async fn update_index(
&self,
uid: String,
primary_key: Option<String>,
new_uid: Option<String>,
) -> Result<IndexMetadata> {
let settings = IndexSettings {
uid: new_uid,
primary_key,
};
self.index_controller.update_index(uid, settings).await
}
}


@@ -0,0 +1,167 @@
use std::error::Error;
use std::fmt;
use actix_web as aweb;
use actix_web::body::Body;
use actix_web::dev::BaseHttpResponseBuilder;
use actix_web::http::StatusCode;
use aweb::error::{JsonPayloadError, QueryPayloadError};
use meilisearch_error::{Code, ErrorCode};
use milli::UserError;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(rename_all = "camelCase")]
pub struct ResponseError {
#[serde(skip)]
code: StatusCode,
message: String,
error_code: String,
error_type: String,
error_link: String,
}
impl fmt::Display for ResponseError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.message.fmt(f)
}
}
impl<T> From<T> for ResponseError
where
T: ErrorCode,
{
fn from(other: T) -> Self {
Self {
code: other.http_status(),
message: other.to_string(),
error_code: other.error_name(),
error_type: other.error_type(),
error_link: other.error_url(),
}
}
}
impl aweb::error::ResponseError for ResponseError {
fn error_response(&self) -> aweb::BaseHttpResponse<Body> {
let json = serde_json::to_vec(self).unwrap();
BaseHttpResponseBuilder::new(self.status_code())
.content_type("application/json")
.body(json)
}
fn status_code(&self) -> StatusCode {
self.code
}
}
macro_rules! internal_error {
($target:ty : $($other:path), *) => {
$(
impl From<$other> for $target {
fn from(other: $other) -> Self {
Self::Internal(Box::new(other))
}
}
)*
}
}
#[derive(Debug)]
pub struct MilliError<'a>(pub &'a milli::Error);
impl Error for MilliError<'_> {}
impl fmt::Display for MilliError<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.0.fmt(f)
}
}
impl ErrorCode for MilliError<'_> {
fn error_code(&self) -> Code {
match self.0 {
milli::Error::InternalError(_) => Code::Internal,
milli::Error::IoError(_) => Code::Internal,
milli::Error::UserError(ref error) => {
match error {
// TODO: wait for spec for new error codes.
UserError::Csv(_)
| UserError::SerdeJson(_)
| UserError::MaxDatabaseSizeReached
| UserError::InvalidCriterionName { .. }
| UserError::InvalidDocumentId { .. }
| UserError::InvalidStoreFile
| UserError::NoSpaceLeftOnDevice
| UserError::DocumentLimitReached => Code::Internal,
UserError::AttributeLimitReached => Code::MaxFieldsLimitExceeded,
UserError::InvalidFilter(_) => Code::Filter,
UserError::InvalidFilterAttribute(_) => Code::Filter,
UserError::MissingDocumentId { .. } => Code::MissingDocumentId,
UserError::MissingPrimaryKey => Code::MissingPrimaryKey,
UserError::PrimaryKeyCannotBeChanged => Code::PrimaryKeyAlreadyPresent,
UserError::PrimaryKeyCannotBeReset => Code::PrimaryKeyAlreadyPresent,
UserError::UnknownInternalDocumentId { .. } => Code::DocumentNotFound,
UserError::InvalidFacetsDistribution { .. } => Code::BadRequest,
}
}
}
}
}
impl fmt::Display for PayloadError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
PayloadError::Json(e) => e.fmt(f),
PayloadError::Query(e) => e.fmt(f),
}
}
}
#[derive(Debug)]
pub enum PayloadError {
Json(JsonPayloadError),
Query(QueryPayloadError),
}
impl Error for PayloadError {}
impl ErrorCode for PayloadError {
fn error_code(&self) -> Code {
match self {
PayloadError::Json(err) => match err {
JsonPayloadError::Overflow => Code::PayloadTooLarge,
JsonPayloadError::ContentType => Code::UnsupportedMediaType,
JsonPayloadError::Payload(aweb::error::PayloadError::Overflow) => {
Code::PayloadTooLarge
}
JsonPayloadError::Deserialize(_) | JsonPayloadError::Payload(_) => Code::BadRequest,
JsonPayloadError::Serialize(_) => Code::Internal,
_ => Code::Internal,
},
PayloadError::Query(err) => match err {
QueryPayloadError::Deserialize(_) => Code::BadRequest,
_ => Code::Internal,
},
}
}
}
impl From<JsonPayloadError> for PayloadError {
fn from(other: JsonPayloadError) -> Self {
Self::Json(other)
}
}
impl From<QueryPayloadError> for PayloadError {
fn from(other: QueryPayloadError) -> Self {
Self::Query(other)
}
}
pub fn payload_error_handler<E>(err: E) -> ResponseError
where
E: Into<PayloadError>,
{
err.into().into()
}
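Because `payload_error_handler` accepts anything convertible into `PayloadError`, it can back both the JSON and query-string extractor configurations. A hedged sketch of that wiring follows; where exactly these configs are registered is not shown in this diff.

// Sketch: surface malformed payloads as the structured ResponseError body
// instead of actix-web's default plain-text error.
use actix_web::web;

fn extractor_configs() -> (web::JsonConfig, web::QueryConfig) {
    let json = web::JsonConfig::default()
        .error_handler(|err, _req| payload_error_handler(err).into());
    let query = web::QueryConfig::default()
        .error_handler(|err, _req| payload_error_handler(err).into());
    (json, query)
}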


@@ -0,0 +1,25 @@
use meilisearch_error::{Code, ErrorCode};
#[derive(Debug, thiserror::Error)]
pub enum AuthenticationError {
#[error("You must have an authorization token")]
MissingAuthorizationHeader,
#[error("Invalid API key")]
InvalidToken(String),
// Triggered on configuration error.
#[error("Irretrievable state")]
IrretrievableState,
#[error("Unknown authentication policy")]
UnknownPolicy,
}
impl ErrorCode for AuthenticationError {
fn error_code(&self) -> Code {
match self {
AuthenticationError::MissingAuthorizationHeader => Code::MissingAuthorizationHeader,
AuthenticationError::InvalidToken(_) => Code::InvalidToken,
AuthenticationError::IrretrievableState => Code::Internal,
AuthenticationError::UnknownPolicy => Code::Internal,
}
}
}


@@ -0,0 +1,182 @@
mod error;
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::marker::PhantomData;
use std::ops::Deref;
use actix_web::FromRequest;
use futures::future::err;
use futures::future::{ok, Ready};
use crate::error::ResponseError;
use error::AuthenticationError;
macro_rules! create_policies {
($($name:ident), *) => {
pub mod policies {
use std::collections::HashSet;
use crate::extractors::authentication::Policy;
$(
#[derive(Debug, Default)]
pub struct $name {
inner: HashSet<Vec<u8>>
}
impl $name {
pub fn new() -> Self {
Self { inner: HashSet::new() }
}
pub fn add(&mut self, token: Vec<u8>) {
self.inner.insert(token);
}
}
impl Policy for $name {
fn authenticate(&self, token: &[u8]) -> bool {
self.inner.contains(token)
}
}
)*
}
};
}
create_policies!(Public, Private, Admin);
/// Instantiate a `Policies`, filled with the given policies.
macro_rules! init_policies {
($($name:ident), *) => {
{
let mut policies = crate::extractors::authentication::Policies::new();
$(
let policy = $name::new();
policies.insert(policy);
)*
policies
}
};
}
/// Adds user to all specified policies.
macro_rules! create_users {
($policies:ident, $($user:expr => { $($policy:ty), * }), *) => {
{
$(
$(
$policies.get_mut::<$policy>().map(|p| p.add($user.to_owned()));
)*
)*
}
};
}
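// A sketch (not part of this file) of how the three macros above are expected
// to compose at startup: build one instance of every policy, then register each
// configured API key with the policies that should accept it. The key values
// and the surrounding function are hypothetical.
//
// use crate::extractors::authentication::policies::*;
//
// fn auth_config(private_key: &str, public_key: &str) -> AuthConfig {
//     let mut policies = init_policies!(Public, Private, Admin);
//     create_users!(
//         policies,
//         private_key.as_bytes().to_vec() => { Admin, Private },
//         public_key.as_bytes().to_vec() => { Admin, Private, Public }
//     );
//     AuthConfig::Auth(policies)
// }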
pub struct GuardedData<T, D> {
data: D,
_marker: PhantomData<T>,
}
impl<T, D> Deref for GuardedData<T, D> {
type Target = D;
fn deref(&self) -> &Self::Target {
&self.data
}
}
pub trait Policy {
fn authenticate(&self, token: &[u8]) -> bool;
}
#[derive(Debug)]
pub struct Policies {
inner: HashMap<TypeId, Box<dyn Any>>,
}
impl Policies {
pub fn new() -> Self {
Self {
inner: HashMap::new(),
}
}
pub fn insert<S: Policy + 'static>(&mut self, policy: S) {
self.inner.insert(TypeId::of::<S>(), Box::new(policy));
}
pub fn get<S: Policy + 'static>(&self) -> Option<&S> {
self.inner
.get(&TypeId::of::<S>())
.and_then(|p| p.downcast_ref::<S>())
}
pub fn get_mut<S: Policy + 'static>(&mut self) -> Option<&mut S> {
self.inner
.get_mut(&TypeId::of::<S>())
.and_then(|p| p.downcast_mut::<S>())
}
}
impl Default for Policies {
fn default() -> Self {
Self::new()
}
}
pub enum AuthConfig {
NoAuth,
Auth(Policies),
}
impl Default for AuthConfig {
fn default() -> Self {
Self::NoAuth
}
}
impl<P: Policy + 'static, D: 'static + Clone> FromRequest for GuardedData<P, D> {
type Config = AuthConfig;
type Error = ResponseError;
type Future = Ready<Result<Self, Self::Error>>;
fn from_request(
req: &actix_web::HttpRequest,
_payload: &mut actix_http::Payload,
) -> Self::Future {
match req.app_data::<Self::Config>() {
Some(config) => match config {
AuthConfig::NoAuth => match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
},
AuthConfig::Auth(policies) => match policies.get::<P>() {
Some(policy) => match req.headers().get("x-meili-api-key") {
Some(token) => {
if policy.authenticate(token.as_bytes()) {
match req.app_data::<D>().cloned() {
Some(data) => ok(Self {
data,
_marker: PhantomData,
}),
None => err(AuthenticationError::IrretrievableState.into()),
}
} else {
err(AuthenticationError::InvalidToken(String::from("hello")).into())
}
}
None => err(AuthenticationError::MissingAuthorizationHeader.into()),
},
None => err(AuthenticationError::UnknownPolicy.into()),
},
},
None => err(AuthenticationError::IrretrievableState.into()),
}
}
}
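With the extractor in place, route handlers simply take a `GuardedData<Policy, D>` parameter, and authentication runs before the handler body. A hedged sketch of such a handler; the route, the `list_indexes` call, and the serializability of its return type are assumptions, not part of this file.

// Sketch: only requests whose `x-meili-api-key` header matches a key registered
// with the Private policy reach the handler body. `GuardedData` derefs to the
// inner `Data`, so its methods can be called directly.
use actix_web::HttpResponse;

async fn list_indexes(
    data: GuardedData<policies::Private, crate::Data>,
) -> Result<HttpResponse, ResponseError> {
    let indexes = data.list_indexes().await?;
    Ok(HttpResponse::Ok().json(indexes))
}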


@@ -0,0 +1,3 @@
pub mod payload;
#[macro_use]
pub mod authentication;

Some files were not shown because too many files have changed in this diff.