Compare commits

...

39 Commits

Author SHA1 Message Date
f045e111ea Merge #960
960: bump version and update changelog r=MarinPostma a=LegendreM

* bump to 0.14.1
* update CHANGELOG.md file

Co-authored-by: many <maxime@meilisearch.com>
2020-09-08 16:11:53 +00:00
87a76c2a60 bump version and update changelog 2020-09-08 18:11:03 +02:00
4edaebab90 Merge #959
959: add version guard in copy_and_compact_to_path function r=MarinPostma a=LegendreM

fix #958

need to create 0.14.1

Co-authored-by: many <maxime@meilisearch.com>
2020-09-08 08:35:49 +00:00
b43137b508 add version guard in copy_and_compact_to_path function 2020-09-07 18:21:04 +02:00
118c673eaf Merge #927
927: Bump meilisearch r=Kerollmops a=MarinPostma

bump meilisearch version 0.14.0

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-24 14:36:21 +00:00
a9a2d3bca3 update changelog 2020-08-24 15:49:24 +02:00
4a9e56aa4f bump meilisearch version 0.14.0 2020-08-24 15:49:09 +02:00
14bb9505eb Merge #926
926: Update genre field with genres r=MarinPostma a=bidoubiwa

Most code samples are written with the assumption that the `genres` field takes an `s`. I'm updating the dataset to match those code samples.


Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-08-24 12:48:08 +00:00
d937aeac0a Update genre field with genres 2020-08-24 14:22:33 +02:00
dd540d2540 Merge #924
924: change max db size opt name r=Kerollmops a=MarinPostma

fix #867
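
For reference, a hypothetical invocation with the renamed options (flag and environment variable names follow the `max_mdb_size`/`max_udb_size` fields in the diff below; the sizes are illustrative, not defaults):

```bash
# Cap the main and update LMDB databases at 10 GiB each.
meilisearch --max-mdb-size 10737418240 --max-udb-size 10737418240

# Equivalent configuration through environment variables.
MEILI_MAX_MDB_SIZE=10737418240 MEILI_MAX_UDB_SIZE=10737418240 meilisearch
```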

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-24 12:18:17 +00:00
4ecaf99047 fix options test 2020-08-24 14:14:11 +02:00
445a6c9ea2 update options name 2020-08-21 14:42:20 +02:00
67b7d60cb0 Merge #920
920: fix bug and add tests r=MarinPostma a=LegendreM

- add tests about updates
- fix select bug

fix #896

Co-authored-by: many <maxime@meilisearch.com>
2020-08-19 07:56:27 +00:00
94b3e8e56e fix bug and add tests
- add tests about updates
- fix select bug

fix #896
2020-08-19 09:51:57 +02:00
89b5ae63fc Merge #915
915: fix unwrap bug r=Kerollmops a=MarinPostma

fix #912.

Co-authored-by: mpostma <postma.marin@protonmail.com>
2020-08-18 12:50:10 +00:00
2a79dc9ded log error on unwrap error 2020-08-17 16:32:40 +02:00
5ed62dbf76 fix unwrap bug 2020-08-14 12:16:48 +02:00
cb267b68ed Merge #910
910: Fix typo in error message r=MarinPostma a=curquiza

Thanks to @ppamorim for reporting the typos to me!

Co-authored-by: Clementine Urquizar <clementine@meilisearch.com>
2020-08-13 15:43:58 +00:00
6539be6c46 Fix typo in error message 2020-08-13 17:13:19 +02:00
a23bdb31a3 Merge #829
829: implement snapshoting r=MarinPostma a=LegendreM

related to #551.

This pull request permits the user to periodically create a snapshot of the MeiliSearch database via a command-line option, and to launch MeiliSearch from a snapshot with another.

## Documentation

### schedule a snapshot
`--snapshot-path <DIRECTORY_PATH>`:
this will periodically create a snapshot `<DB_NAME>.tar.gz` in the specified directory

### change the period between two snapshot creations
`--snapshot-interval-sec <GAP_IN_SEC>`:
this sets the time gap, in seconds, between two snapshot creations

### start MeiliSearch from a snapshot
`--load-from-snapshot <FILE_PATH>`:
this will use the snapshot stored at `<FILE_PATH>` to initialize the MeiliSearch database

`--ignore-snapshot-if-db-exists`: if set and a database already exists,
this will skip the snapshot import and continue with the existing database instead of exiting with an error

`--ignore-missing-snapshot`: if set and no snapshot exists at the provided path,
this will skip the snapshot import and continue with the existing database instead of exiting with an error
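
Putting these flags together, a minimal usage sketch (the paths, database name, and interval below are illustrative, not defaults):

```bash
# Create a snapshot of ./data.ms in ./snapshots every hour.
meilisearch --db-path ./data.ms --snapshot-path ./snapshots --snapshot-interval-sec 3600

# Later, initialize a fresh instance from the resulting archive, tolerating
# a missing snapshot or an already-existing database.
meilisearch --db-path ./data.ms \
    --load-from-snapshot ./snapshots/data.ms.tar.gz \
    --ignore-missing-snapshot \
    --ignore-snapshot-if-db-exists
```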

Co-authored-by: many <maxime@meilisearch.com>
2020-08-12 16:37:31 +00:00
9014290875 implement snapshot 2020-08-12 17:46:28 +02:00
1903302a74 Merge #906
906: Facet distribution correct case r=LegendreM a=MarinPostma

~

Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
2020-08-12 09:04:36 +00:00
75c3cb4bb6 fix compile error 2020-08-12 10:31:11 +02:00
bfd0f806f8 requested changes
Co-authored-by: Clément Renault <renault.cle@gmail.com>
2020-08-12 10:31:11 +02:00
afab8a7846 clean facet result types 2020-08-12 10:31:11 +02:00
afacdbc7a0 update tests for facets distribution case 2020-08-12 10:31:11 +02:00
18a50b4dac fix facet distribution case 2020-08-12 10:31:10 +02:00
fb69769991 Merge #889
889: Fix clippy warnings r=MarinPostma a=TaKO8Ki

Good day!

Since `cargo clippy` showed the two warnings below, I've fixed them. This is a small PR.

```sh
warning: use of `ok_or` followed by a function call
   --> meilisearch-core/src/database.rs:185:18
    |
185 |                 .ok_or(Error::VersionMismatch("bad VERSION file".to_string()))?;
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try this: `ok_or_else(|| Error::VersionMismatch("bad VERSION file".to_string()))`
    |
    = note: `#[warn(clippy::or_fun_call)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#or_fun_call

warning: useless use of `format!`
   --> meilisearch-core/src/database.rs:208:59
    |
208 |                         return Err(Error::VersionMismatch(format!("<0.12.0")));
    |                                                           ^^^^^^^^^^^^^^^^^^ help: consider using `.to_string()`: `"<0.12.0".to_string()`
    |
    = note: `#[warn(clippy::useless_format)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#useless_format

warning: 2 warnings emitted
```
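
To reproduce the warnings (or verify the fix) locally, a standard clippy invocation is enough; the `-D warnings` variant is an optional stricter check, not part of this PR:

```bash
# Surface the lints across every target in the workspace.
cargo clippy --all-targets

# Optionally, turn any remaining warning into a hard error.
cargo clippy --all-targets -- -D warnings
```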

Co-authored-by: Takayuki Maeda <41065217+TaKO8Ki@users.noreply.github.com>
2020-07-29 11:40:08 +00:00
750e7382c6 fix clippy warnings 2020-07-29 11:32:34 +09:00
2464cc7a6d Merge #888
888: Remove schema mention in error message r=MarinPostma a=curquiza

We avoid mentioning the schema since MeiliSearch is schemaless for the user 🙂

Co-authored-by: Clementine Urquizar <clementine@meilisearch.com>
2020-07-28 15:20:59 +00:00
f078cbac4d Remove schema mention in error message 2020-07-28 15:18:05 +02:00
aa545e5386 Merge #638 #828 #865
638: Update requisites for source build (Rust version) r=MarinPostma a=djKooks

Hello,
I just found that compiling from source fails with the following error:
```
error[E0658]: the `#[non_exhaustive]` attribute is an experimental feature
  --> /Users/kwangin.jung/.cargo/registry/src/github.com-1ecc6299db9ec823/whoami-0.8.1/src/lib.rs:40:1
   |
40 | #[non_exhaustive]
   | ^^^^^^^^^^^^^^^^^
   |
   = note: for more information, see https://github.com/rust-lang/rust/issues/44109

error[E0658]: the `#[non_exhaustive]` attribute is an experimental feature
   --> /Users/kwangin.jung/.cargo/registry/src/github.com-1ecc6299db9ec823/whoami-0.8.1/src/lib.rs:102:1
    |
102 | #[non_exhaustive]
    | ^^^^^^^^^^^^^^^^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/44109
```
It seems `#[non_exhaustive]` was stabilized in Rust 1.40.0, so I added it to the prerequisites.
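
To satisfy the new prerequisite, updating the stable toolchain is sufficient; a quick check, assuming `rustup` is installed:

```bash
# `#[non_exhaustive]` was stabilized in Rust 1.40.0, so the compiler
# must be at least that version.
rustc --version

# Update the stable toolchain if it is older.
rustup update stable
```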


828: Cleanup readme r=MarinPostma a=tpayet

Closes #613 

865: Update movie dataset with genre field r=MarinPostma a=bidoubiwa

Updated the movie dataset by adding the `genre` field to each movie where the genre could be fetched.
The `genre` was fetched for each movie by making a search request on the bigger movie dataset (200 MB) using MeiliSearch.

I'm proposing this to make testing and experimenting more accessible.

```json
{
  "id": "323661",
  "title": "Mune: Guardian of the Moon",
  "poster": "https://image.tmdb.org/t/p/w1280/4vzqow7mVUahqA4hHoe2UpQOxy.jpg",
  "overview": "When a faun named Mune becomes the Guardian of the Moon, little did he had unprepared experience with the Moon and an accident that could put both the Moon and the Sun in danger, including a corrupt titan named Necross who wants the Sun for himself and placing the balance of night and day in great peril. Now with the help of a wax-child named Glim and the warrior, Sohone who also became the Sun Guardian, they go out on an exciting journey to get the Sun back and restore the Moon to their rightful place in the sky.",
  "release_date": 1423094400,
  "genre": [
    "Animation",
    "Family",
    "Adventure",
    "Fantasy",
    "Comedy"
  ]
}
{
  "id": "306",
  "title": "Beverly Hills Cop III",
  "poster": "https://image.tmdb.org/t/p/w1280/tw9gAhqQcBFX0X0XfVbWqUsmzoU.jpg",
  "overview": "Back in sunny southern California and on the trail of two murderers, Axel Foley again teams up with LA cop Billy Rosewood. Soon, they discover that an amusement park is being used as a front for a massive counterfeiting ring – and it's run by the same gang that shot Billy's boss.",
  "release_date": 769741200,
  "genre": [
    "Action",
    "Comedy",
    "Crime"
  ]
}
```
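
To try the updated dataset, it can be pushed to a running MeiliSearch instance through the documents route; the `movies` index name and file path here are assumptions for illustration, and the index must exist beforehand:

```bash
# Index the updated movie dataset into a hypothetical `movies` index.
curl -X POST 'http://127.0.0.1:7700/indexes/movies/documents' \
    -H 'Content-Type: application/json' \
    --data @datasets/movies/movies.json
```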

Co-authored-by: kwangin.jung <inylove82@gmail.com>
Co-authored-by: Thomas Payet <thomas@meilisearch.com>
Co-authored-by: Charlotte Vermandel <charlottevermandel@gmail.com>
2020-07-24 09:45:01 +00:00
9711100ff1 Merge #874
874: Fixes default values on web interface r=MarinPostma a=tpayet



Co-authored-by: Thomas Payet <thomas@meilisearch.com>
2020-07-24 09:20:33 +00:00
8c49ee1b3b Fixes default values on web interface 2020-07-22 14:42:34 +02:00
476aecf86d Cleanup readme 2020-07-20 16:03:25 +02:00
bd5d25429b Update movie dataset with genre field 2020-07-20 10:39:29 +02:00
4ae2097cdc Merge branch 'update/readme-rust-ver' of https://github.com/djKooks/MeiliSearch into update/readme-rust-ver 2020-04-30 21:09:38 +09:00
1f2ab71bb6 Update requisites for source build
Update requisites for source build (Rust version)

Fix README
2020-04-30 21:08:55 +09:00
9c0956049a Update requisites for source build
Update requisites for source build (Rust version)

Fix README
2020-04-29 08:48:17 +09:00
30 changed files with 20235 additions and 19772 deletions

View File

@ -1,15 +1,25 @@
## v0.14.1
- Fix version mismatch in snapshot importation (#959)
## v0.14.0
- Fix facet distribution case (#797)
- Snapshotting (#839)
- Fix bucket-sort unwrap bug (#915)
## v0.13.0
- placeholder search (#771)
- Add database version mismatch check (#794)
- Displayed and searchable attributes wildcard (#846)
- Remove sys-info route (#810)
- Fix facet distribution case (#797)
- Check database version mismatch (#794)
- Fix unique docid bug (#841)
- Error codes in updates (#792)
- Sentry disable argument (#813)
- Log analytics if enabled (#825)
- Fix default values displayed on web interface (#874)
## v0.12.0

Cargo.lock generated
View File

@ -301,10 +301,10 @@ dependencies = [
]
[[package]]
name = "adler32"
version = "1.0.4"
name = "adler"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d2e7343e7fc9de883d1b0341e0b13970f764c14101234857d2ddafa1cb1cac2"
checksum = "ccc9a9dd069569f212bc4330af9f17c4afb5e8ce185e83dbb14f1349dda18b10"
[[package]]
name = "ahash"
@ -889,10 +889,22 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e88a8acf291dafb59c2d96e8f59828f3838bb1a70398823ade51a84de6a6deed"
[[package]]
name = "flate2"
version = "1.0.14"
name = "filetime"
version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2cfff41391129e0a856d6d822600b8d71179d46879e310417eb9c762eb178b42"
checksum = "affc17579b132fc2461adf7c575cc6e8b134ebca52c51f5411388965227dc695"
dependencies = [
"cfg-if",
"libc",
"redox_syscall",
"winapi 0.3.8",
]
[[package]]
name = "flate2"
version = "1.0.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68c90b0fc46cf89d227cc78b40e494ff81287a92dd07631e5af0d06fe3cf885e"
dependencies = [
"cfg-if",
"crc32fast",
@ -1481,7 +1493,7 @@ checksum = "60302e4db3a61da70c0cb7991976248362f30319e88850c487b9b95bbf059e00"
[[package]]
name = "meilisearch-core"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"arc-swap",
"assert_matches",
@ -1528,14 +1540,14 @@ dependencies = [
[[package]]
name = "meilisearch-error"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"actix-http",
]
[[package]]
name = "meilisearch-http"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"actix-cors",
"actix-http",
@ -1548,6 +1560,7 @@ dependencies = [
"chrono",
"crossbeam-channel",
"env_logger",
"flate2",
"futures",
"http 0.1.21",
"indexmap",
@ -1571,7 +1584,9 @@ dependencies = [
"siphasher",
"slice-group-by",
"structopt",
"tar",
"tempdir",
"tempfile",
"tokio",
"ureq",
"vergen",
@ -1581,7 +1596,7 @@ dependencies = [
[[package]]
name = "meilisearch-schema"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"indexmap",
"meilisearch-error",
@ -1592,7 +1607,7 @@ dependencies = [
[[package]]
name = "meilisearch-tokenizer"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"deunicode",
"slice-group-by",
@ -1600,7 +1615,7 @@ dependencies = [
[[package]]
name = "meilisearch-types"
version = "0.13.0"
version = "0.14.1"
dependencies = [
"serde",
"zerocopy",
@ -1639,11 +1654,11 @@ dependencies = [
[[package]]
name = "miniz_oxide"
version = "0.3.6"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa679ff6578b1cddee93d7e82e263b94a575e0bfced07284eb0c037c1d2416a5"
checksum = "be0f75932c1f6cfae3c04000e40114adf955636e19040f9c0a2c380702aa1c7f"
dependencies = [
"adler32",
"adler",
]
[[package]]
@ -2585,6 +2600,18 @@ dependencies = [
"unicode-xid",
]
[[package]]
name = "tar"
version = "0.4.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8a4c1d0bee3230179544336c15eefb563cf0302955d962e456542323e8c2e8a"
dependencies = [
"filetime",
"libc",
"redox_syscall",
"xattr",
]
[[package]]
name = "tempdir"
version = "0.3.7"
@ -3162,6 +3189,15 @@ dependencies = [
"winapi-build",
]
[[package]]
name = "xattr"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "244c3741f4240ef46274860397c7c74e50eb23624996930e484c16679633a54c"
dependencies = [
"libc",
]
[[package]]
name = "zerocopy"
version = "0.3.0"

View File

@ -2,7 +2,6 @@
<img src="assets/logo.svg" alt="MeiliSearch" width="200" height="200" />
</p>
<h1 align="center">MeiliSearch</h1>
<h4 align="center">
@ -29,45 +28,43 @@
For more information about features go to [our documentation](https://docs.meilisearch.com/).
<p align="center">
<a href="https://crates.meilisearch.com"><img src="assets/crates-io-demo.gif" alt="crates.io demo gif" /></a>
<img src="assets/movies-web-demo.gif" alt="Web interface gif" />
</p>
> MeiliSearch helps the Rust community find crates on [crates.meilisearch.com](https://crates.meilisearch.com)
## Features
## ✨ Features
* Search as-you-type experience (answers < 50 milliseconds)
* Full-text search
* Typo tolerant (understands typos and miss-spelling)
* Faceted search and filters
* Supports Kanji characters
* Supports Synonym
* Easy to install, deploy, and maintain
* Whole documents are returned
* Highly customizable
* RESTful API
* Faceted search and filtering
## Get started
## Getting started
### Deploy the Server
#### Run it using Digital Ocean
[![DigitalOcean Marketplace](assets/do-btn-blue.svg)](https://marketplace.digitalocean.com/apps/meilisearch?action=deploy&refcode=7c67bd97e101)
#### Run it using Docker
```bash
docker run -p 7700:7700 -v $(pwd)/data.ms:/data.ms getmeili/meilisearch
```
#### Installing with Homebrew
#### Brew (Mac OS)
```bash
brew update && brew install meilisearch
meilisearch
```
#### Installing with APT
#### Docker
```bash
docker run -p 7700:7700 -v $(pwd)/data.ms:/data.ms getmeili/meilisearch
```
#### Run on Digital Ocean
[![DigitalOcean Marketplace](assets/do-btn-blue.svg)](https://marketplace.digitalocean.com/apps/meilisearch?action=deploy&refcode=7c67bd97e101)
#### APT (Debian & Ubuntu)
```bash
echo "deb [trusted=yes] https://apt.fury.io/meilisearch/ /" > /etc/apt/sources.list.d/fury.list
@ -75,7 +72,7 @@ apt update && apt install meilisearch-http
meilisearch
```
#### Download the binary
#### Download the binary (Linux & Mac OS)
```bash
curl -L https://install.meilisearch.com | sh
@ -84,7 +81,7 @@ curl -L https://install.meilisearch.com | sh
#### Compile and run it from sources
If you have the Rust toolchain already installed on your local system, clone the repository and change it to your working directory.
If you have the latest stable Rust toolchain installed on your local system, clone the repository and make it your working directory.
```bash
git clone https://github.com/meilisearch/MeiliSearch.git
@ -165,33 +162,31 @@ We also deliver an **out-of-the-box web interface** in which you can test MeiliS
You can access the web interface in your web browser at the root of the server. The default URL is [http://127.0.0.1:7700](http://127.0.0.1:7700). All you need to do is open your web browser and enter MeiliSearch’s address to visit it. This will lead you to a web page with a search bar that will allow you to search in the selected index.
<p align="center">
<img src="assets/movies-web-demo.gif" alt="Web interface gif" />
</p>
| [See the gif above](#demo)
### Documentation
## Documentation
Now that your MeiliSearch server is up and running, you can learn more about how to tune your search engine in [the documentation](https://docs.meilisearch.com).
## Contributing
Hey! We're glad you're thinking about contributing to MeiliSearch! If you think something is missing or could be improved, please open issues and pull requests. If you'd like to help this project grow, we'd love to have you! To start contributing, checking [issues tagged as "good-first-issue"](https://github.com/meilisearch/MeiliSearch/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) is a good start!
### Analytic Events
## Telemetry
Every hour, events are being sent to our Amplitude instance so we can know how many people are using MeiliSearch.<br/>
MeiliSearch collects anonymous data regarding general usage.
This helps us better understand developers' usage of MeiliSearch features.<br/>
To see what information we're retrieving, please see the complete list [on the dedicated issue](https://github.com/meilisearch/MeiliSearch/issues/720).<br/>
We also use Sentry to collect crash and error reports. If you want to know more about what Sentry collects, please visit their [privacy policy website](https://sentry.io/privacy/).<br/>
If this doesn't suit you, you can disable these analytics by using the `MEILI_NO_ANALYTICS` env variable.
This program is optional; you can disable these analytics by using the `MEILI_NO_ANALYTICS` env variable.
## Contact
## đź’Ś Contact
Feel free to contact us about any questions you may have:
* At [bonjour@meilisearch.com](mailto:bonjour@meilisearch.com): English or French is welcome! 🇬🇧 🇫🇷
* At [bonjour@meilisearch.com](mailto:bonjour@meilisearch.com)
* Via the chat box available on every page of [our documentation](https://docs.meilisearch.com/) and on [our landing page](https://www.meilisearch.com/).
* 🆕 Join our [GitHub Discussions forum](https://github.com/meilisearch/MeiliSearch/discussions) (BETA hype!)
* 🆕 Join our [GitHub Discussions forum](https://github.com/meilisearch/MeiliSearch/discussions)
* Join our [Slack community](https://slack.meilisearch.com/).
* By opening an issue.
Any suggestion or feedback is highly appreciated. Thank you for your support!
MeiliSearch is developed by [Meili](https://www.meilisearch.com), a young company. To know more about us, you can [read our blog](https://blog.meilisearch.com). Any suggestion or feedback is highly appreciated. Thank you for your support!

File diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
[package]
name = "meilisearch-core"
version = "0.13.0"
version = "0.14.1"
license = "MIT"
authors = ["Kerollmops <clement@meilisearch.com>"]
edition = "2018"
@ -24,10 +24,10 @@ intervaltree = "0.2.5"
itertools = "0.9.0"
levenshtein_automata = { version = "0.2.0", features = ["fst_automaton"] }
log = "0.4.8"
meilisearch-error = { path = "../meilisearch-error", version = "0.13.0" }
meilisearch-schema = { path = "../meilisearch-schema", version = "0.13.0" }
meilisearch-tokenizer = { path = "../meilisearch-tokenizer", version = "0.13.0" }
meilisearch-types = { path = "../meilisearch-types", version = "0.13.0" }
meilisearch-error = { path = "../meilisearch-error", version = "0.14.1" }
meilisearch-schema = { path = "../meilisearch-schema", version = "0.14.1" }
meilisearch-tokenizer = { path = "../meilisearch-tokenizer", version = "0.14.1" }
meilisearch-types = { path = "../meilisearch-types", version = "0.14.1" }
once_cell = "1.3.1"
ordered-float = { version = "1.0.2", features = ["serde"] }
pest = { git = "https://github.com/MarinPostma/pest.git", tag = "meilisearch-patch1" }

View File

@ -9,7 +9,7 @@ use std::time::Instant;
use std::fmt;
use compact_arena::{SmallArena, Idx32, mk_arena};
use log::debug;
use log::{debug, error};
use sdset::{Set, SetBuf, exponential_search, SetOperation, Counter, duo::OpBuilder};
use slice_group_by::{GroupBy, GroupByMut};
@ -39,7 +39,7 @@ pub fn bucket_sort<'c, FI>(
query: &str,
range: Range<usize>,
facets_docids: Option<SetBuf<DocumentId>>,
facet_count_docids: Option<HashMap<String, HashMap<String, Cow<Set<DocumentId>>>>>,
facet_count_docids: Option<HashMap<String, HashMap<String, (&str, Cow<Set<DocumentId>>)>>>,
filter: Option<FI>,
criteria: Criteria<'c>,
searchable_attrs: Option<ReorderedAttrs>,
@ -199,7 +199,7 @@ pub fn bucket_sort_with_distinct<'c, FI, FD>(
query: &str,
range: Range<usize>,
facets_docids: Option<SetBuf<DocumentId>>,
facet_count_docids: Option<HashMap<String, HashMap<String, Cow<Set<DocumentId>>>>>,
facet_count_docids: Option<HashMap<String, HashMap<String, (&str, Cow<Set<DocumentId>>)>>>,
filter: Option<FI>,
distinct: FD,
distinct_size: usize,
@ -370,12 +370,18 @@ where
let mut documents = Vec::with_capacity(range.len());
for raw_document in raw_documents.into_iter().skip(distinct_raw_offset) {
let filter_accepted = match &filter {
Some(_) => filter_map.remove(&raw_document.id).unwrap(),
Some(_) => filter_map.remove(&raw_document.id).unwrap_or_else(|| {
error!("error during filtering: expected value for document id {}", &raw_document.id.0);
Default::default()
}),
None => true,
};
if filter_accepted {
let key = key_cache.remove(&raw_document.id).unwrap();
let key = key_cache.remove(&raw_document.id).unwrap_or_else(|| {
error!("error during distinct: expected value for document id {}", &raw_document.id.0);
Default::default()
});
let distinct_accepted = match key {
Some(key) => seen.register(key),
None => seen.register_without_key(),
@ -637,17 +643,17 @@ pub fn placeholder_document_sort(
/// For each entry in facet_docids, calculates the number of documents in the intersection with candidate_docids.
pub fn facet_count(
facet_docids: HashMap<String, HashMap<String, Cow<Set<DocumentId>>>>,
facet_docids: HashMap<String, HashMap<String, (&str, Cow<Set<DocumentId>>)>>,
candidate_docids: &Set<DocumentId>,
) -> HashMap<String, HashMap<String, usize>> {
let mut facets_counts = HashMap::with_capacity(facet_docids.len());
for (key, doc_map) in facet_docids {
let mut count_map = HashMap::with_capacity(doc_map.len());
for (value, docids) in doc_map {
for (_, (value, docids)) in doc_map {
let mut counter = Counter::new();
let op = OpBuilder::new(docids.as_ref(), candidate_docids).intersection();
SetOperation::<DocumentId>::extend_collection(op, &mut counter);
count_map.insert(value, counter.0);
count_map.insert(value.to_string(), counter.0);
}
facets_counts.insert(key, count_map);
}

View File

@ -40,6 +40,7 @@ pub struct Database {
indexes_store: heed::Database<Str, Unit>,
indexes: RwLock<HashMap<String, (Index, thread::JoinHandle<MResult<()>>)>>,
update_fn: Arc<ArcSwapFn>,
database_version: (u32, u32, u32),
}
pub struct DatabaseOptions {
@ -165,7 +166,7 @@ fn update_awaiter(
/// Ensures the Meilisearch version is compatible with the database; returns an error on version mismatch.
/// If create is set to true, a VERSION file is created with the current version.
fn version_guard(path: &Path, create: bool) -> MResult<()> {
fn version_guard(path: &Path, create: bool) -> MResult<(u32, u32, u32)> {
let current_version_major = env!("CARGO_PKG_VERSION_MAJOR");
let current_version_minor = env!("CARGO_PKG_VERSION_MINOR");
let current_version_patch = env!("CARGO_PKG_VERSION_PATCH");
@ -182,13 +183,20 @@ fn version_guard(path: &Path, create: bool) -> MResult<()> {
let version = re
.captures_iter(&version)
.next()
.ok_or(Error::VersionMismatch("bad VERSION file".to_string()))?;
.ok_or_else(|| Error::VersionMismatch("bad VERSION file".to_string()))?;
// the first is always the complete match, safe to unwrap because we have a match
let version_major = version.get(1).unwrap().as_str();
let version_minor = version.get(2).unwrap().as_str();
let version_patch = version.get(3).unwrap().as_str();
if version_major != current_version_major || version_minor != current_version_minor {
return Err(Error::VersionMismatch(format!("{}.{}.XX", version_major, version_minor)));
Err(Error::VersionMismatch(format!("{}.{}.XX", version_major, version_minor)))
} else {
Ok((
version_major.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?,
version_minor.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?,
version_patch.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?
))
}
}
Err(error) => {
@ -202,17 +210,22 @@ fn version_guard(path: &Path, create: bool) -> MResult<()> {
current_version_major,
current_version_minor,
current_version_patch).as_bytes())?;
Ok((
current_version_major.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?,
current_version_minor.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?,
current_version_patch.parse().or_else(|e| Err(Error::VersionMismatch(format!("error parsing database version: {}", e))))?
))
} else {
// when no version file is found and we were not told to create one, this
// means that the version is inferior to the one this feature was added in.
return Err(Error::VersionMismatch(format!("<0.12.0")));
Err(Error::VersionMismatch("<0.12.0".to_string()))
}
}
_ => return Err(error.into())
_ => Err(error.into())
}
}
}
Ok(())
}
impl Database {
@ -224,7 +237,7 @@ impl Database {
fs::create_dir_all(&path)?;
// create file only if main db wasn't created before (first run)
version_guard(path.as_ref(), !main_path.exists() && !update_path.exists())?;
let database_version = version_guard(path.as_ref(), !main_path.exists() && !update_path.exists())?;
fs::create_dir_all(&main_path)?;
let env = heed::EnvOpenOptions::new()
@ -302,6 +315,7 @@ impl Database {
indexes_store,
indexes: RwLock::new(indexes),
update_fn,
database_version,
})
}
@ -469,9 +483,18 @@ impl Database {
let env_path = path.join("main");
let env_update_path = path.join("update");
let env_version_path = path.join("VERSION");
fs::create_dir(&env_path)?;
fs::create_dir(&env_update_path)?;
// write Database Version
let (current_version_major, current_version_minor, current_version_patch) = self.database_version;
let mut version_file = File::create(&env_version_path)?;
version_file.write_all(format!("{}.{}.{}",
current_version_major,
current_version_minor,
current_version_patch).as_bytes())?;
let env_path = env_path.join("data.mdb");
let env_file = self.env.copy_to_path(&env_path, CompactionOption::Enabled)?;

View File

@ -164,7 +164,7 @@ impl<'a> heed::BytesDecode<'a> for FacetKey {
}
pub fn add_to_facet_map(
facet_map: &mut HashMap<FacetKey, Vec<DocumentId>>,
facet_map: &mut HashMap<FacetKey, (String, Vec<DocumentId>)>,
field_id: FieldId,
value: Value,
document_id: DocumentId,
@ -175,8 +175,8 @@ pub fn add_to_facet_map(
Value::Null => return Ok(()),
value => return Err(FacetError::InvalidDocumentAttribute(value.to_string())),
};
let key = FacetKey::new(field_id, value);
facet_map.entry(key).or_insert_with(Vec::new).push(document_id);
let key = FacetKey::new(field_id, value.clone());
facet_map.entry(key).or_insert_with(|| (value, Vec::new())).1.push(document_id);
Ok(())
}
@ -185,8 +185,10 @@ pub fn facet_map_from_docids(
index: &crate::Index,
document_ids: &[DocumentId],
attributes_for_facetting: &[FieldId],
) -> MResult<HashMap<FacetKey, Vec<DocumentId>>> {
let mut facet_map = HashMap::new();
) -> MResult<HashMap<FacetKey, (String, Vec<DocumentId>)>> {
// A hashmap that associates a facet key with a pair containing the original facet attribute
// string with its case preserved, and a list of document ids for that facet attribute.
let mut facet_map: HashMap<FacetKey, (String, Vec<DocumentId>)> = HashMap::new();
for document_id in document_ids {
for result in index
.documents_fields
@ -212,7 +214,7 @@ pub fn facet_map_from_docs(
schema: &Schema,
documents: &HashMap<DocumentId, IndexMap<String, Value>>,
attributes_for_facetting: &[FieldId],
) -> MResult<HashMap<FacetKey, Vec<DocumentId>>> {
) -> MResult<HashMap<FacetKey, (String, Vec<DocumentId>)>> {
let mut facet_map = HashMap::new();
let attributes_for_facetting = attributes_for_facetting
.iter()

View File

@ -97,16 +97,14 @@ impl<'c, 'f, 'd, 'i> QueryBuilder<'c, 'f, 'd, 'i> {
.unwrap_or_default();
ors.push(docids);
}
let sets: Vec<_> = ors.iter().map(Cow::deref).collect();
let or_result = sdset::multi::OpBuilder::from_vec(sets)
.union()
.into_set_buf();
let sets: Vec<_> = ors.iter().map(|(_, i)| i).map(Cow::deref).collect();
let or_result = sdset::multi::OpBuilder::from_vec(sets).union().into_set_buf();
ands.push(Cow::Owned(or_result));
ors.clear();
}
Either::Right(key) => {
match self.index.facets.facet_document_ids(reader, &key)? {
Some(docids) => ands.push(docids),
Some((_name, docids)) => ands.push(docids),
// no candidates for search, early return.
None => return Ok(Some(SetBuf::default())),
}
@ -206,7 +204,7 @@ impl<'c, 'f, 'd, 'i> QueryBuilder<'c, 'f, 'd, 'i> {
}
}
fn facet_count_docids<'a>(&self, reader: &'a MainReader) -> MResult<Option<HashMap<String, HashMap<String, Cow<'a, Set<DocumentId>>>>>> {
fn facet_count_docids<'a>(&self, reader: &'a MainReader) -> MResult<Option<HashMap<String, HashMap<String, (&'a str, Cow<'a, Set<DocumentId>>)>>>> {
match self.facets {
Some(ref field_ids) => {
let mut facet_count_map = HashMap::new();

View File

@ -1,12 +1,14 @@
use std::borrow::Cow;
use std::collections::HashMap;
use std::mem;
use heed::{RwTxn, RoTxn, Result as ZResult, RoRange};
use heed::{RwTxn, RoTxn, RoRange, types::Str, BytesEncode, BytesDecode};
use sdset::{SetBuf, Set, SetOperation};
use meilisearch_types::DocumentId;
use meilisearch_schema::FieldId;
use crate::MResult;
use crate::database::MainT;
use crate::facets::FacetKey;
use super::cow_set::CowSet;
@ -14,45 +16,82 @@ use super::cow_set::CowSet;
/// contains facet info
#[derive(Clone, Copy)]
pub struct Facets {
pub(crate) facets: heed::Database<FacetKey, CowSet<DocumentId>>,
pub(crate) facets: heed::Database<FacetKey, FacetData>,
}
pub struct FacetData;
impl<'a> BytesEncode<'a> for FacetData {
type EItem = (&'a str, &'a Set<DocumentId>);
fn bytes_encode(item: &'a Self::EItem) -> Option<Cow<'a, [u8]>> {
// get size of the first item
let first_size = item.0.as_bytes().len();
let size = mem::size_of::<u64>()
+ first_size
+ item.1.len() * mem::size_of::<DocumentId>();
let mut buffer = Vec::with_capacity(size);
// encode the length of the first item
buffer.extend_from_slice(&first_size.to_be_bytes());
buffer.extend_from_slice(Str::bytes_encode(&item.0)?.as_ref());
let second_slice = CowSet::bytes_encode(&item.1)?;
buffer.extend_from_slice(second_slice.as_ref());
Some(Cow::Owned(buffer))
}
}
impl<'a> BytesDecode<'a> for FacetData {
type DItem = (&'a str, Cow<'a, Set<DocumentId>>);
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
const LEN: usize = mem::size_of::<u64>();
let mut size_buf = [0; LEN];
size_buf.copy_from_slice(bytes.get(0..LEN)?);
// decode size of the first item from the bytes
let first_size = usize::from_be_bytes(size_buf);
// decode first and second items
let first_item = Str::bytes_decode(bytes.get(LEN..(LEN + first_size))?)?;
let second_item = CowSet::bytes_decode(bytes.get((LEN + first_size)..)?)?;
Some((first_item, second_item))
}
}
impl Facets {
// we use sdset::SetBuf to ensure the docids are sorted.
pub fn put_facet_document_ids(&self, writer: &mut RwTxn<MainT>, facet_key: FacetKey, doc_ids: &Set<DocumentId>) -> ZResult<()> {
self.facets.put(writer, &facet_key, doc_ids)
pub fn put_facet_document_ids(&self, writer: &mut RwTxn<MainT>, facet_key: FacetKey, doc_ids: &Set<DocumentId>, facet_value: &str) -> MResult<()> {
Ok(self.facets.put(writer, &facet_key, &(facet_value, doc_ids))?)
}
pub fn field_document_ids<'txn>(&self, reader: &'txn RoTxn<MainT>, field_id: FieldId) -> ZResult<RoRange<'txn, FacetKey, CowSet<DocumentId>>> {
self.facets.prefix_iter(reader, &FacetKey::new(field_id, String::new()))
pub fn field_document_ids<'txn>(&self, reader: &'txn RoTxn<MainT>, field_id: FieldId) -> MResult<RoRange<'txn, FacetKey, FacetData>> {
Ok(self.facets.prefix_iter(reader, &FacetKey::new(field_id, String::new()))?)
}
pub fn facet_document_ids<'txn>(&self, reader: &'txn RoTxn<MainT>, facet_key: &FacetKey) -> ZResult<Option<Cow<'txn, Set<DocumentId>>>> {
self.facets.get(reader, &facet_key)
pub fn facet_document_ids<'txn>(&self, reader: &'txn RoTxn<MainT>, facet_key: &FacetKey) -> MResult<Option<(&'txn str, Cow<'txn, Set<DocumentId>>)>> {
Ok(self.facets.get(reader, &facet_key)?)
}
/// updates the facets store, removing the documents from the facets provided in the
/// `facet_map` argument
pub fn remove(&self, writer: &mut RwTxn<MainT>, facet_map: HashMap<FacetKey, Vec<DocumentId>>) -> ZResult<()> {
for (key, document_ids) in facet_map {
if let Some(old) = self.facets.get(writer, &key)? {
pub fn remove(&self, writer: &mut RwTxn<MainT>, facet_map: HashMap<FacetKey, (String, Vec<DocumentId>)>) -> MResult<()> {
for (key, (name, document_ids)) in facet_map {
if let Some((_, old)) = self.facets.get(writer, &key)? {
let to_remove = SetBuf::from_dirty(document_ids);
let new = sdset::duo::OpBuilder::new(old.as_ref(), to_remove.as_set()).difference().into_set_buf();
self.facets.put(writer, &key, new.as_set())?;
self.facets.put(writer, &key, &(&name, new.as_set()))?;
}
}
Ok(())
}
pub fn add(&self, writer: &mut RwTxn<MainT>, facet_map: HashMap<FacetKey, Vec<DocumentId>>) -> ZResult<()> {
for (key, document_ids) in facet_map {
pub fn add(&self, writer: &mut RwTxn<MainT>, facet_map: HashMap<FacetKey, (String, Vec<DocumentId>)>) -> MResult<()> {
for (key, (facet_name, document_ids)) in facet_map {
let set = SetBuf::from_dirty(document_ids);
self.put_facet_document_ids(writer, key, set.as_set())?;
self.put_facet_document_ids(writer, key, set.as_set(), &facet_name)?;
}
Ok(())
}
pub fn clear(self, writer: &mut heed::RwTxn<MainT>) -> ZResult<()> {
self.facets.clear(writer)
pub fn clear(self, writer: &mut heed::RwTxn<MainT>) -> MResult<()> {
Ok(self.facets.clear(writer)?)
}
}

View File

@ -1,6 +1,6 @@
[package]
name = "meilisearch-error"
version = "0.13.0"
version = "0.14.1"
authors = ["marin <postma.marin@protonmail.com>"]
edition = "2018"

View File

@ -1,7 +1,7 @@
[package]
name = "meilisearch-http"
description = "MeiliSearch HTTP server"
version = "0.13.0"
version = "0.14.1"
license = "MIT"
authors = [
"Quentin de Quelen <quentin@dequelen.me>",
@ -27,15 +27,16 @@ bytes = "0.5.4"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.4.2"
env_logger = "0.7.1"
flate2 = "1.0.16"
futures = "0.3.4"
http = "0.1.19"
indexmap = { version = "1.3.2", features = ["serde-1"] }
log = "0.4.8"
main_error = "0.1.0"
meilisearch-core = { path = "../meilisearch-core", version = "0.13.0" }
meilisearch-error = { path = "../meilisearch-error", version = "0.13.0" }
meilisearch-schema = { path = "../meilisearch-schema", version = "0.13.0" }
meilisearch-tokenizer = {path = "../meilisearch-tokenizer", version = "0.13.0"}
meilisearch-core = { path = "../meilisearch-core", version = "0.14.1" }
meilisearch-error = { path = "../meilisearch-error", version = "0.14.1" }
meilisearch-schema = { path = "../meilisearch-schema", version = "0.14.1" }
meilisearch-tokenizer = {path = "../meilisearch-tokenizer", version = "0.14.1"}
mime = "0.3.16"
rand = "0.7.3"
regex = "1.3.6"
@ -47,6 +48,8 @@ sha2 = "0.8.1"
siphasher = "0.3.2"
slice-group-by = "0.2.6"
structopt = "0.3.12"
tar = "0.4.29"
tempfile = "3.1.0"
tokio = { version = "0.2.18", features = ["macros"] }
ureq = { version = "0.12.0", features = ["tls"], default-features = false }
walkdir = "2.3.1"

View File

@ -136,13 +136,13 @@
<div class="level-item has-text-centered">
<div>
<p class="heading">Documents</p>
<p id="count" class="title">25</p>
<p id="count" class="title">0</p>
</div>
</div>
<div class="level-item has-text-centered">
<div>
<p class="heading">Time Spent</p>
<p id="time" class="title">4ms</p>
<p id="time" class="title">N/A</p>
</div>
</div>
</nav>
@ -221,7 +221,7 @@
results.innerHTML = '';
let processingTimeMs = httpResults.processingTimeMs;
let numberOfDocuments = httpResults.hits.length;
let numberOfDocuments = httpResults.nbHits;
time.innerHTML = `${processingTimeMs}ms`;
count.innerHTML = `${numberOfDocuments}`;
@ -299,6 +299,8 @@
refreshIndexList();
search.oninput = triggerSearch;
let select = document.getElementById("index");
select.onchange = triggerSearch;
triggerSearch();

View File

@ -60,8 +60,8 @@ impl Data {
let server_pid = std::process::id();
let db_opt = DatabaseOptions {
main_map_size: opt.main_map_size,
update_map_size: opt.update_map_size,
main_map_size: opt.max_mdb_size,
update_map_size: opt.max_udb_size,
};
let http_payload_size_limit = opt.http_payload_size_limit;

View File

@ -114,10 +114,10 @@ impl fmt::Display for FacetCountError {
use FacetCountError::*;
match self {
AttributeNotSet(attr) => write!(f, "attribute {} is not set as facet", attr),
SyntaxError(msg) => write!(f, "syntax error: {}", msg),
UnexpectedToken { expected, found } => write!(f, "unexpected {} found, expected {:?}", found, expected),
NoFacetSet => write!(f, "can't perform facet count, as no facet is set"),
AttributeNotSet(attr) => write!(f, "Attribute {} is not set as facet", attr),
SyntaxError(msg) => write!(f, "Syntax error: {}", msg),
UnexpectedToken { expected, found } => write!(f, "Unexpected {} found, expected {:?}", found, expected),
NoFacetSet => write!(f, "Can't perform facet count, as no facet is set"),
}
}
}
@ -195,9 +195,9 @@ impl fmt::Display for Error {
Self::MissingAuthorizationHeader => f.write_str("You must have an authorization token"),
Self::NotFound(err) => write!(f, "{} not found", err),
Self::OpenIndex(err) => write!(f, "Impossible to open index; {}", err),
Self::RetrieveDocument(id, err) => write!(f, "impossible to retrieve the document with id: {}; {}", id, err),
Self::SearchDocuments(err) => write!(f, "impossible to search documents; {}", err),
Self::PayloadTooLarge => f.write_str("Payload to large"),
Self::RetrieveDocument(id, err) => write!(f, "Impossible to retrieve the document with id: {}; {}", id, err),
Self::SearchDocuments(err) => write!(f, "Impossible to search documents; {}", err),
Self::PayloadTooLarge => f.write_str("Payload too large"),
Self::UnsupportedMediaType => f.write_str("Unsupported media type"),
}
}
@ -236,6 +236,18 @@ impl From<actix_http::Error> for Error {
}
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Error {
Error::Internal(err.to_string())
}
}
impl From<meilisearch_core::Error> for Error {
fn from(err: meilisearch_core::Error) -> Error {
Error::Internal(err.to_string())
}
}
impl From<FacetCountError> for ResponseError {
fn from(err: FacetCountError) -> ResponseError {
ResponseError { inner: Box::new(err) }

View File

@ -7,6 +7,7 @@ pub mod models;
pub mod option;
pub mod routes;
pub mod analytics;
pub mod snapshot;
use actix_http::Error;
use actix_service::ServiceFactory;

View File

@ -6,6 +6,7 @@ use main_error::MainError;
use meilisearch_http::helpers::NormalizePath;
use meilisearch_http::{create_app, index_update_callback, Data, Opt};
use structopt::StructOpt;
use meilisearch_http::snapshot;
mod analytics;
@ -51,6 +52,10 @@ async fn main() -> Result<(), MainError> {
_ => unreachable!(),
}
if let Some(path) = &opt.load_from_snapshot {
snapshot::load_snapshot(&opt.db_path, path, opt.ignore_snapshot_if_db_exists, opt.ignore_missing_snapshot)?;
}
let data = Data::new(opt.clone())?;
if !opt.no_analytics {
@ -64,6 +69,10 @@ async fn main() -> Result<(), MainError> {
index_update_callback(name, &data_cloned, status);
}));
if let Some(path) = &opt.snapshot_path {
snapshot::schedule_snapshot(data.clone(), &path, opt.snapshot_interval_sec.unwrap_or(86400))?;
}
print_launch_resume(&opt, &data);
let http_server = HttpServer::new(move || {

View File

@ -1,7 +1,7 @@
use std::{error, fs};
use std::io::{BufReader, Read};
use std::path::PathBuf;
use std::sync::Arc;
use std::{error, fs};
use rustls::internal::pemfile::{certs, pkcs8_private_keys, rsa_private_keys};
use rustls::{
@ -49,12 +49,12 @@ pub struct Opt {
pub no_analytics: bool,
/// The maximum size, in bytes, of the main lmdb database directory
#[structopt(long, env = "MEILI_MAIN_MAP_SIZE", default_value = "107374182400")] // 100GB
pub main_map_size: usize,
#[structopt(long, env = "MEILI_MAX_MDB_SIZE", default_value = "107374182400")] // 100GB
pub max_mdb_size: usize,
/// The maximum size, in bytes, of the update lmdb database directory
#[structopt(long, env = "MEILI_UPDATE_MAP_SIZE", default_value = "107374182400")] // 100GB
pub update_map_size: usize,
#[structopt(long, env = "MEILI_MAX_UDB_SIZE", default_value = "107374182400")] // 100GB
pub max_udb_size: usize,
/// The maximum size, in bytes, of accepted JSON payloads
#[structopt(long, env = "MEILI_HTTP_PAYLOAD_SIZE_LIMIT", default_value = "10485760")] // 10MB
@ -93,6 +93,28 @@ pub struct Opt {
/// SSL support tickets.
#[structopt(long, env = "MEILI_SSL_TICKETS")]
pub ssl_tickets: bool,
/// Defines the path of the snapshot file to import.
/// This option will, by default, stop the process if a database already exists or if no snapshot exists at
/// the given path. If this option is not specified, no snapshot is imported.
#[structopt(long, env = "MEILI_LOAD_FROM_SNAPSHOT")]
pub load_from_snapshot: Option<PathBuf>,
/// The engine will ignore a missing snapshot and not return an error in such a case.
#[structopt(long, requires = "load-from-snapshot", env = "MEILI_IGNORE_MISSING_SNAPSHOT")]
pub ignore_missing_snapshot: bool,
/// The engine will skip the snapshot import and not return an error in such a case.
#[structopt(long, requires = "load-from-snapshot", env = "MEILI_IGNORE_SNAPSHOT_IF_DB_EXISTS")]
pub ignore_snapshot_if_db_exists: bool,
/// Defines the directory path where MeiliSearch will create a snapshot every snapshot_interval_sec seconds.
#[structopt(long, env = "MEILI_SNAPSHOT_PATH")]
pub snapshot_path: Option<PathBuf>,
/// Defines the time interval, in seconds, between each snapshot creation.
#[structopt(long, requires = "snapshot-path", env = "MEILI_SNAPSHOT_INTERVAL_SEC")]
pub snapshot_interval_sec: Option<u64>,
}
impl Opt {

View File

@ -0,0 +1,124 @@
use crate::Data;
use crate::error::Error;
use flate2::Compression;
use flate2::read::GzDecoder;
use flate2::write::GzEncoder;
use log::error;
use std::fs::{create_dir_all, File};
use std::io;
use std::path::Path;
use std::thread;
use std::time::{Duration};
use tar::{Builder, Archive};
use tempfile::TempDir;
fn pack(src: &Path, dest: &Path) -> io::Result<()> {
let f = File::create(dest)?;
let gz_encoder = GzEncoder::new(f, Compression::default());
let mut tar_encoder = Builder::new(gz_encoder);
tar_encoder.append_dir_all(".", src)?;
let gz_encoder = tar_encoder.into_inner()?;
gz_encoder.finish()?;
Ok(())
}
fn unpack(src: &Path, dest: &Path) -> Result<(), Error> {
let f = File::open(src)?;
let gz = GzDecoder::new(f);
let mut ar = Archive::new(gz);
create_dir_all(dest)?;
ar.unpack(dest)?;
Ok(())
}
pub fn load_snapshot(
db_path: &str,
snapshot_path: &Path,
ignore_snapshot_if_db_exists: bool,
ignore_missing_snapshot: bool
) -> Result<(), Error> {
let db_path = Path::new(db_path);
if !db_path.exists() && snapshot_path.exists() {
unpack(snapshot_path, db_path)
} else if db_path.exists() && !ignore_snapshot_if_db_exists {
Err(Error::Internal(format!("database already exists at {:?}", db_path)))
} else if !snapshot_path.exists() && !ignore_missing_snapshot {
Err(Error::Internal(format!("snapshot doesn't exist at {:?}", snapshot_path)))
} else {
Ok(())
}
}
pub fn create_snapshot(data: &Data, snapshot_path: &Path) -> Result<(), Error> {
let tmp_dir = TempDir::new()?;
data.db.copy_and_compact_to_path(tmp_dir.path())?;
pack(tmp_dir.path(), snapshot_path).or_else(|e| Err(Error::Internal(format!("something went wrong during snapshot compression: {}", e))))
}
pub fn schedule_snapshot(data: Data, snapshot_dir: &Path, time_gap_s: u64) -> Result<(), Error> {
if snapshot_dir.file_name().is_none() {
return Err(Error::Internal("invalid snapshot file path".to_string()));
}
let db_name = Path::new(&data.db_path).file_name().ok_or_else(|| Error::Internal("invalid database name".to_string()))?;
create_dir_all(snapshot_dir)?;
let snapshot_path = snapshot_dir.join(format!("{}.tar.gz", db_name.to_str().unwrap_or("data.ms")));
thread::spawn(move || loop {
thread::sleep(Duration::from_secs(time_gap_s));
if let Err(e) = create_snapshot(&data, &snapshot_path) {
error!("Unsuccessful snapshot creation: {}", e);
}
});
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::prelude::*;
use std::fs;
#[test]
fn test_pack_unpack() {
let tempdir = TempDir::new().unwrap();
let test_dir = tempdir.path();
let src_dir = test_dir.join("src");
let dest_dir = test_dir.join("complex/destination/path/");
let archive_path = test_dir.join("archive.tar.gz");
let file_1_relative = Path::new("file1.txt");
let subfolder_relative = Path::new("subfolder/");
let file_2_relative = Path::new("subfolder/file2.txt");
create_dir_all(src_dir.join(subfolder_relative)).unwrap();
File::create(src_dir.join(file_1_relative)).unwrap().write_all(b"Hello_file_1").unwrap();
File::create(src_dir.join(file_2_relative)).unwrap().write_all(b"Hello_file_2").unwrap();
assert!(pack(&src_dir, &archive_path).is_ok());
assert!(archive_path.exists());
assert!(load_snapshot(&dest_dir.to_str().unwrap(), &archive_path, false, false).is_ok());
assert!(dest_dir.exists());
assert!(dest_dir.join(file_1_relative).exists());
assert!(dest_dir.join(subfolder_relative).exists());
assert!(dest_dir.join(file_2_relative).exists());
let contents = fs::read_to_string(dest_dir.join(file_1_relative)).unwrap();
assert_eq!(contents, "Hello_file_1");
let contents = fs::read_to_string(dest_dir.join(file_2_relative)).unwrap();
assert_eq!(contents, "Hello_file_2");
}
}

View File

@ -5,7 +5,7 @@
"balance": "$2,668.55",
"picture": "http://placehold.it/32x32",
"age": 36,
"color": "green",
"color": "Green",
"name": "Lucas Hess",
"gender": "male",
"email": "lucashess@chorizon.com",
@ -26,7 +26,7 @@
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
"color": "green",
"color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@ -90,7 +90,7 @@
"balance": "$2,575.78",
"picture": "http://placehold.it/32x32",
"age": 39,
"color": "green",
"color": "Green",
"name": "Mariana Pacheco",
"gender": "female",
"email": "marianapacheco@chorizon.com",
@ -110,7 +110,7 @@
"balance": "$3,793.09",
"picture": "http://placehold.it/32x32",
"age": 20,
"color": "green",
"color": "Green",
"name": "Warren Watson",
"gender": "male",
"email": "warrenwatson@chorizon.com",
@ -155,7 +155,7 @@
"balance": "$1,349.50",
"picture": "http://placehold.it/32x32",
"age": 28,
"color": "green",
"color": "Green",
"name": "Chrystal Boyd",
"gender": "female",
"email": "chrystalboyd@chorizon.com",
@ -235,7 +235,7 @@
"balance": "$1,351.43",
"picture": "http://placehold.it/32x32",
"age": 28,
"color": "green",
"color": "Green",
"name": "Evans Wagner",
"gender": "male",
"email": "evanswagner@chorizon.com",
@ -431,7 +431,7 @@
"balance": "$1,986.48",
"picture": "http://placehold.it/32x32",
"age": 38,
"color": "green",
"color": "Green",
"name": "Florence Long",
"gender": "female",
"email": "florencelong@chorizon.com",
@ -530,7 +530,7 @@
"balance": "$3,973.43",
"picture": "http://placehold.it/32x32",
"age": 29,
"color": "green",
"color": "Green",
"name": "Sykes Conley",
"gender": "male",
"email": "sykesconley@chorizon.com",
@ -813,7 +813,7 @@
"balance": "$1,992.38",
"picture": "http://placehold.it/32x32",
"age": 40,
"color": "green",
"color": "Green",
"name": "Christina Short",
"gender": "female",
"email": "christinashort@chorizon.com",
@ -944,7 +944,7 @@
"balance": "$2,893.45",
"picture": "http://placehold.it/32x32",
"age": 22,
"color": "green",
"color": "Green",
"name": "Joni Spears",
"gender": "female",
"email": "jonispears@chorizon.com",
@ -988,7 +988,7 @@
"balance": "$1,348.04",
"picture": "http://placehold.it/32x32",
"age": 34,
"color": "green",
"color": "Green",
"name": "Lawson Curtis",
"gender": "male",
"email": "lawsoncurtis@chorizon.com",
@ -1006,7 +1006,7 @@
"balance": "$1,132.41",
"picture": "http://placehold.it/32x32",
"age": 38,
"color": "green",
"color": "Green",
"name": "Goff May",
"gender": "male",
"email": "goffmay@chorizon.com",
@ -1026,7 +1026,7 @@
"balance": "$1,201.87",
"picture": "http://placehold.it/32x32",
"age": 38,
"color": "green",
"color": "Green",
"name": "Goodman Becker",
"gender": "male",
"email": "goodmanbecker@chorizon.com",
@ -1069,7 +1069,7 @@
"balance": "$1,947.08",
"picture": "http://placehold.it/32x32",
"age": 21,
"color": "green",
"color": "Green",
"name": "Guerra Mcintyre",
"gender": "male",
"email": "guerramcintyre@chorizon.com",
@ -1153,7 +1153,7 @@
"balance": "$2,113.29",
"picture": "http://placehold.it/32x32",
"age": 28,
"color": "green",
"color": "Green",
"name": "Richards Walls",
"gender": "male",
"email": "richardswalls@chorizon.com",
@ -1211,7 +1211,7 @@
"balance": "$1,844.56",
"picture": "http://placehold.it/32x32",
"age": 20,
"color": "green",
"color": "Green",
"name": "Kaitlin Conner",
"gender": "female",
"email": "kaitlinconner@chorizon.com",
@ -1229,7 +1229,7 @@
"balance": "$2,876.10",
"picture": "http://placehold.it/32x32",
"age": 38,
"color": "green",
"color": "Green",
"name": "Mamie Fischer",
"gender": "female",
"email": "mamiefischer@chorizon.com",
@ -1252,7 +1252,7 @@
"balance": "$1,921.58",
"picture": "http://placehold.it/32x32",
"age": 31,
"color": "green",
"color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@ -1291,7 +1291,7 @@
"balance": "$2,813.41",
"picture": "http://placehold.it/32x32",
"age": 37,
"color": "green",
"color": "Green",
"name": "Charles Castillo",
"gender": "male",
"email": "charlescastillo@chorizon.com",
@ -1433,7 +1433,7 @@
"balance": "$1,539.98",
"picture": "http://placehold.it/32x32",
"age": 24,
"color": "green",
"color": "Green",
"name": "Angelina Dyer",
"gender": "female",
"email": "angelinadyer@chorizon.com",
@ -1493,7 +1493,7 @@
"balance": "$3,381.63",
"picture": "http://placehold.it/32x32",
"age": 38,
"color": "green",
"color": "Green",
"name": "Candace Sawyer",
"gender": "female",
"email": "candacesawyer@chorizon.com",
@ -1514,7 +1514,7 @@
"balance": "$1,640.98",
"picture": "http://placehold.it/32x32",
"age": 27,
"color": "green",
"color": "Green",
"name": "Hendricks Martinez",
"gender": "male",
"email": "hendricksmartinez@chorizon.com",
@ -1557,7 +1557,7 @@
"balance": "$1,180.90",
"picture": "http://placehold.it/32x32",
"age": 36,
"color": "green",
"color": "Green",
"name": "Stark Wong",
"gender": "male",
"email": "starkwong@chorizon.com",
@ -1577,7 +1577,7 @@
"balance": "$1,913.42",
"picture": "http://placehold.it/32x32",
"age": 24,
"color": "green",
"color": "Green",
"name": "Emma Jacobs",
"gender": "female",
"email": "emmajacobs@chorizon.com",
@ -1595,7 +1595,7 @@
"balance": "$1,274.29",
"picture": "http://placehold.it/32x32",
"age": 25,
"color": "green",
"color": "Green",
"name": "Clarice Gardner",
"gender": "female",
"email": "claricegardner@chorizon.com",

View File

@ -44,8 +44,8 @@ impl Server {
master_key: None,
env: "development".to_owned(),
no_analytics: true,
main_map_size: default_db_options.main_map_size,
update_map_size: default_db_options.update_map_size,
max_mdb_size: default_db_options.main_map_size,
max_udb_size: default_db_options.update_map_size,
http_payload_size_limit: 10000000,
..Opt::default()
};

View File

@ -1,6 +1,5 @@
use assert_json_diff::assert_json_eq;
use serde_json::json;
use serde_json::Value;
mod common;
@ -663,30 +662,3 @@ async fn check_add_documents_without_primary_key() {
assert_eq!(status_code, 400);
}
#[actix_rt::test]
async fn check_first_update_should_bring_up_processed_status_after_first_docs_addition() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
let dataset = include_bytes!("assets/test_set.json");
let body: Value = serde_json::from_slice(dataset).unwrap();
// 2. Index the documents from movies.json, present inside of assets directory
server.add_or_replace_multiple_documents(body).await;
// 3. Fetch the status of the indexing done above.
let (response, status_code) = server.get_all_updates_status().await;
// 4. Verify the fetch is successful and indexing status is 'processed'
assert_eq!(status_code, 200);
assert_eq!(response[0]["status"], "processed");
}

View File

@ -0,0 +1,200 @@
use serde_json::json;
use serde_json::Value;
use assert_json_diff::assert_json_include;
mod common;
#[actix_rt::test]
async fn check_first_update_should_bring_up_processed_status_after_first_docs_addition() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
let dataset = include_bytes!("assets/test_set.json");
let body: Value = serde_json::from_slice(dataset).unwrap();
// 2. Index the documents from movies.json, present inside of assets directory
server.add_or_replace_multiple_documents(body).await;
// 3. Fetch the status of the indexing done above.
let (response, status_code) = server.get_all_updates_status().await;
// 4. Verify the fetch is successful and indexing status is 'processed'
assert_eq!(status_code, 200);
assert_eq!(response[0]["status"], "processed");
}
#[actix_rt::test]
async fn return_error_when_get_update_status_of_unexisting_index() {
let mut server = common::Server::with_uid("test");
// 1. Fetch the status of unexisting index.
let (_, status_code) = server.get_all_updates_status().await;
// 2. Verify the fetch returned 404
assert_eq!(status_code, 404);
}
#[actix_rt::test]
async fn return_empty_when_get_update_status_of_empty_index() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
// 2. Fetch the status of empty index.
let (response, status_code) = server.get_all_updates_status().await;
// 3. Verify the fetch is successful, and no document are returned
assert_eq!(status_code, 200);
assert_eq!(response, json!([]));
}
#[actix_rt::test]
async fn return_update_status_of_pushed_documents() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
let bodies = vec![
json!([{
"title": "Test",
"comment": "comment test"
}]),
json!([{
"title": "Test1",
"comment": "comment test1"
}]),
json!([{
"title": "Test2",
"comment": "comment test2"
}]),
];
let mut update_ids = Vec::new();
let url = "/indexes/test/documents?primaryKey=title";
for body in bodies {
let (response, status_code) = server.post_request(&url, body).await;
assert_eq!(status_code, 202);
let update_id = response["updateId"].as_u64().unwrap();
update_ids.push(update_id);
}
// 2. Fetch the status of index.
let (response, status_code) = server.get_all_updates_status().await;
// 3. Verify the fetch is successful, and updates are returned
let expected = json!([{
"type": {
"name": "DocumentsAddition",
"number": 1,
},
"updateId": update_ids[0]
},{
"type": {
"name": "DocumentsAddition",
"number": 1,
},
"updateId": update_ids[1]
},{
"type": {
"name": "DocumentsAddition",
"number": 1,
},
"updateId": update_ids[2]
},]);
assert_eq!(status_code, 200);
assert_json_include!(actual: json!(response), expected: expected);
}
#[actix_rt::test]
async fn return_error_if_index_does_not_exist() {
let mut server = common::Server::with_uid("test");
let (response, status_code) = server.get_update_status(42).await;
assert_eq!(status_code, 404);
assert_eq!(response["errorCode"], "index_not_found");
}
#[actix_rt::test]
async fn return_error_if_update_does_not_exist() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
let (response, status_code) = server.get_update_status(42).await;
assert_eq!(status_code, 404);
assert_eq!(response["errorCode"], "not_found");
}
#[actix_rt::test]
async fn should_return_existing_update() {
let mut server = common::Server::with_uid("test");
let body = json!({
"uid": "test",
});
// 1. Create Index
let (response, status_code) = server.create_index(body).await;
assert_eq!(status_code, 201);
assert_eq!(response["primaryKey"], json!(null));
let body = json!([{
"title": "Test",
"comment": "comment test"
}]);
let url = "/indexes/test/documents?primaryKey=title";
let (response, status_code) = server.post_request(&url, body).await;
assert_eq!(status_code, 202);
let update_id = response["updateId"].as_u64().unwrap();
let expected = json!({
"type": {
"name": "DocumentsAddition",
"number": 1,
},
"updateId": update_id
});
let (response, status_code) = server.get_update_status(update_id).await;
assert_eq!(status_code, 200);
assert_json_include!(actual: json!(response), expected: expected);
}


@@ -156,7 +156,7 @@ async fn placeholder_search_with_filter() {
test_post_get_search!(server, query, |response, _status_code| {
let hits = response["hits"].as_array().unwrap();
- assert!(hits.iter().all(|v| v["color"].as_str().unwrap() == "green"));
+ assert!(hits.iter().all(|v| v["color"].as_str().unwrap() == "Green"));
});
let query = json!({
@@ -177,7 +177,7 @@ async fn placeholder_search_with_filter() {
let bug = Value::String(String::from("bug"));
let wontfix = Value::String(String::from("wontfix"));
assert!(hits.iter().all(|v|
- v["color"].as_str().unwrap() == "green" &&
+ v["color"].as_str().unwrap() == "Green" &&
v["tags"].as_array().unwrap().contains(&bug) ||
v["tags"].as_array().unwrap().contains(&wontfix)));
});
@@ -206,7 +206,7 @@ async fn placeholder_test_faceted_search_valid() {
.as_array()
.unwrap()
.iter()
- .all(|value| value.get("color").unwrap() == "green"));
+ .all(|value| value.get("color").unwrap() == "Green"));
});
let query = json!({
@@ -296,7 +296,7 @@ async fn placeholder_test_faceted_search_valid() {
.unwrap() == "blue"
|| value
.get("color")
- .unwrap() == "green"));
+ .unwrap() == "Green"));
});
// test and-or: ["tags:bug", ["color:blue", "color:green"]]
let query = json!({
@@ -322,7 +322,7 @@ async fn placeholder_test_faceted_search_valid() {
.unwrap() == "blue"
|| value
.get("color")
- .unwrap() == "green")));
+ .unwrap() == "Green")));
});
}


@@ -21,7 +21,7 @@ async fn search_with_limit() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -42,7 +42,7 @@ async fn search_with_limit() {
"balance": "$1,921.58",
"picture": "http://placehold.it/32x32",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -101,7 +101,7 @@ async fn search_with_offset() {
"balance": "$1,921.58",
"picture": "http://placehold.it/32x32",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -142,7 +142,7 @@ async fn search_with_offset() {
"balance": "$2,668.55",
"picture": "http://placehold.it/32x32",
"age": 36,
- "color": "green",
+ "color": "Green",
"name": "Lucas Hess",
"gender": "male",
"email": "lucashess@chorizon.com",
@@ -181,7 +181,7 @@ async fn search_with_attribute_to_highlight_wildcard() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -201,7 +201,7 @@ async fn search_with_attribute_to_highlight_wildcard() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "<em>Cherry</em> Orr",
"gender": "female",
"email": "<em>cherry</em>orr@chorizon.com",
@@ -241,7 +241,7 @@ async fn search_with_attribute_to_highlight_1() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -261,7 +261,7 @@ async fn search_with_attribute_to_highlight_1() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "<em>Cherry</em> Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -301,7 +301,7 @@ async fn search_with_matches() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -355,7 +355,7 @@ async fn search_with_crop() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -375,7 +375,7 @@ async fn search_with_crop() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -413,7 +413,7 @@ async fn search_with_attributes_to_retrieve() {
{
"name": "Cherry Orr",
"age": 27,
- "color": "green",
+ "color": "Green",
"gender": "female"
}
]);
@@ -440,7 +440,7 @@ async fn search_with_attributes_to_retrieve_wildcard() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -478,7 +478,7 @@ async fn search_with_filter() {
"balance": "$1,921.58",
"picture": "http://placehold.it/32x32",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -499,7 +499,7 @@ async fn search_with_filter() {
"balance": "$2,668.55",
"picture": "http://placehold.it/32x32",
"age": 36,
- "color": "green",
+ "color": "Green",
"name": "Lucas Hess",
"gender": "male",
"email": "lucashess@chorizon.com",
@@ -547,7 +547,7 @@ async fn search_with_filter() {
"balance": "$2,668.55",
"picture": "http://placehold.it/32x32",
"age": 36,
- "color": "green",
+ "color": "Green",
"name": "Lucas Hess",
"gender": "male",
"email": "lucashess@chorizon.com",
@@ -601,7 +601,7 @@ async fn search_with_filter() {
"balance": "$1,913.42",
"picture": "http://placehold.it/32x32",
"age": 24,
- "color": "green",
+ "color": "Green",
"name": "Emma Jacobs",
"gender": "female",
"email": "emmajacobs@chorizon.com",
@@ -705,7 +705,7 @@ async fn search_with_filter() {
"balance": "$1,921.58",
"picture": "http://placehold.it/32x32",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -726,7 +726,7 @@ async fn search_with_filter() {
"balance": "$2,668.55",
"picture": "http://placehold.it/32x32",
"age": 36,
- "color": "green",
+ "color": "Green",
"name": "Lucas Hess",
"gender": "male",
"email": "lucashess@chorizon.com",
@@ -779,7 +779,7 @@ async fn search_with_filter() {
"balance": "$1,351.43",
"picture": "http://placehold.it/32x32",
"age": 28,
- "color": "green",
+ "color": "Green",
"name": "Evans Wagner",
"gender": "male",
"email": "evanswagner@chorizon.com",
@@ -823,7 +823,7 @@ async fn search_with_attributes_to_highlight_and_matches() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -843,7 +843,7 @@ async fn search_with_attributes_to_highlight_and_matches() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "<em>Cherry</em> Orr",
"gender": "female",
"email": "<em>cherry</em>orr@chorizon.com",
@@ -900,7 +900,7 @@ async fn search_with_attributes_to_highlight_and_matches_and_crop() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -920,7 +920,7 @@ async fn search_with_attributes_to_highlight_and_matches_and_crop() {
"balance": "$1,706.13",
"picture": "http://placehold.it/32x32",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -1223,7 +1223,7 @@ async fn test_faceted_search_valid() {
.as_array()
.unwrap()
.iter()
- .all(|value| value.get("color").unwrap() == "green"));
+ .all(|value| value.get("color").unwrap() == "Green"));
});
let query = json!({
@@ -1318,7 +1318,7 @@ async fn test_faceted_search_valid() {
.unwrap() == "blue"
|| value
.get("color")
- .unwrap() == "green"));
+ .unwrap() == "Green"));
});
// test and-or: ["tags:bug", ["color:blue", "color:green"]]
let query = json!({
@@ -1345,7 +1345,7 @@ async fn test_faceted_search_valid() {
.unwrap() == "blue"
|| value
.get("color")
- .unwrap() == "green")));
+ .unwrap() == "Green")));
});
}
@@ -1469,6 +1469,14 @@ async fn test_facet_count() {
println!("{}", response);
assert!(response.get("exhaustiveFacetsCount").is_some());
assert_eq!(response.get("facetsDistribution").unwrap().as_object().unwrap().values().count(), 1);
+ // assert that case is preserved
+ assert!(response["facetsDistribution"]
+ .as_object()
+ .unwrap()["color"]
+ .as_object()
+ .unwrap()
+ .get("Green")
+ .is_some());
});
// searching on color and tags
let query = json!({


@@ -130,7 +130,7 @@ async fn search_with_settings_stop_words() {
{
"balance": "$1,921.58",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -140,7 +140,7 @@ async fn search_with_settings_stop_words() {
{
"balance": "$1,706.13",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -213,7 +213,7 @@ async fn search_with_settings_synonyms() {
{
"balance": "$1,921.58",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -223,7 +223,7 @@ async fn search_with_settings_synonyms() {
{
"balance": "$1,706.13",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -292,7 +292,7 @@ async fn search_with_settings_ranking_rules() {
{
"balance": "$1,921.58",
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -302,7 +302,7 @@ async fn search_with_settings_ranking_rules() {
{
"balance": "$1,706.13",
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",
@@ -438,7 +438,7 @@ async fn search_with_settings_displayed_attributes() {
let expect = json!([
{
"age": 31,
- "color": "green",
+ "color": "Green",
"name": "Harper Carson",
"gender": "male",
"email": "harpercarson@chorizon.com",
@@ -446,7 +446,7 @@ async fn search_with_settings_displayed_attributes() {
},
{
"age": 27,
- "color": "green",
+ "color": "Green",
"name": "Cherry Orr",
"gender": "female",
"email": "cherryorr@chorizon.com",


@@ -1,13 +1,13 @@
[package]
name = "meilisearch-schema"
- version = "0.13.0"
+ version = "0.14.1"
license = "MIT"
authors = ["Kerollmops <renault.cle@gmail.com>"]
edition = "2018"
[dependencies]
indexmap = { version = "1.3.2", features = ["serde-1"] }
- meilisearch-error = { path = "../meilisearch-error", version = "0.13.0" }
+ meilisearch-error = { path = "../meilisearch-error", version = "0.14.1" }
serde = { version = "1.0.105", features = ["derive"] }
serde_json = { version = "1.0.50", features = ["preserve_order"] }
zerocopy = "0.3.0"


@@ -16,7 +16,7 @@ impl fmt::Display for Error {
use self::Error::*;
match self {
FieldNameNotFound(field) => write!(f, "The field {:?} doesn't exist", field),
- PrimaryKeyAlreadyPresent => write!(f, "The schema already have an primary key. It's impossible to update it"),
+ PrimaryKeyAlreadyPresent => write!(f, "A primary key is already present. It's impossible to update it"),
MaxFieldsLimitExceeded => write!(f, "The maximum of possible reattributed field id has been reached"),
}
}


@@ -1,6 +1,6 @@
[package]
name = "meilisearch-tokenizer"
- version = "0.13.0"
+ version = "0.14.1"
license = "MIT"
authors = ["Kerollmops <renault.cle@gmail.com>"]
edition = "2018"


@@ -1,6 +1,6 @@
[package]
name = "meilisearch-types"
- version = "0.13.0"
+ version = "0.14.1"
license = "MIT"
authors = ["Clément Renault <renault.cle@gmail.com>"]
edition = "2018"