Compare commits

..

3 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Kerollmops | 7065ffb081 | Improve the prototype guide | 2025-12-02 18:06:05 +01:00 |
| Kerollmops | 31d13d23a1 | Update the prototype format | 2025-12-02 17:57:39 +01:00 |
| Kerollmops | 4010e1315e | Introduce the first working version of the tool | 2025-12-02 17:34:42 +01:00 |
61 changed files with 636 additions and 3981 deletions

View File

@@ -24,11 +24,6 @@ TBD
- [ ] If not, add the `no db change` label to your PR, and you're good to merge.
- [ ] If yes, add the `db change` label to your PR. You'll receive a message explaining what to do.
### Reminders when adding features
- [ ] Write unit tests using insta
- [ ] Write declarative integration tests in [workloads/tests](https://github.com/meilisearch/meilisearch/tree/main/workloads/test). Specify the routes to call and then call `cargo xtask test workloads/tests/YOUR_TEST.json --update-responses` so that responses are automatically filled.
### Reminders when modifying the API
- [ ] Update the OpenAPI file with utoipa:

View File

@@ -159,6 +159,8 @@ jobs:
steps:
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@1.89
- name: Cache dependencies
uses: Swatinem/rust-cache@v2.8.0
- name: Run tests in debug
uses: actions-rs/cargo@v1
with:

View File

@@ -124,7 +124,6 @@ They are JSON files with the following structure (comments are not actually supp
{
// Name of the workload. Must be unique to the workload, as it will be used to group results on the dashboard.
"name": "hackernews.ndjson_1M,no-threads",
"type": "bench",
// Number of consecutive runs of the commands that should be performed.
// Each run uses a fresh instance of Meilisearch and a fresh database.
// Each run produces its own report file.

Cargo.lock generated
View File

@@ -6072,20 +6072,6 @@ name = "similar"
version = "2.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbbb5d9659141646ae647b42fe094daf6c6192d1620870b449d9557f748b2daa"
dependencies = [
"bstr",
"unicode-segmentation",
]
[[package]]
name = "similar-asserts"
version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b441962c817e33508847a22bd82f03a30cff43642dc2fae8b050566121eb9a"
dependencies = [
"console",
"similar",
]
[[package]]
name = "simple_asn1"
@@ -7806,10 +7792,10 @@ dependencies = [
"futures-core",
"futures-util",
"reqwest",
"semver",
"serde",
"serde_json",
"sha2",
"similar-asserts",
"sysinfo",
"time",
"tokio",

View File

@@ -1,326 +0,0 @@
# Declarative tests
Declarative tests ensure that Meilisearch features remain stable across versions.
While we already have unit tests, those are run against **temporary databases** that are created fresh each time and therefore never risk corruption.
Declarative tests instead **simulate the lifetime of a database**: they chain together API commands and binary-change instructions, verifying that database state and API responses remain consistent.
## Basic example
```jsonc
{
"type": "test",
"name": "api-keys",
"binary": { // the first command will run on the binary following this specification.
"source": "release", // get the binary as a release from GitHub
"version": "1.19.0", // version to fetch
"edition": "community" // edition to fetch
},
"commands": []
}
```
This example defines a no-op test (it does nothing).
If the file is saved at `workloads/tests/example.json`, you can run it with:
```bash
cargo xtask test workloads/tests/example.json
```
## Commands
Commands represent API requests sent to Meilisearch endpoints during a test.
They are executed sequentially, and their responses can be validated to ensure consistent behavior across upgrades.
```jsonc
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [ "movies" ]
}
}
}
```
This command issues a `POST /keys` request, creating an API key with permissions to search and add documents in the `movies` index.
### Using assets in commands
To keep tests concise and reusable, you can define **assets** at the root of the workload file.
Assets are external data sources (such as datasets) that are cached between runs, making tests faster and easier to read.
```jsonc
{
"type": "test",
"name": "movies",
"binary": {
"source": "release",
"version": "1.19.0",
"edition": "community"
},
"assets": {
"movies.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
"sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
}
},
"commands": [
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
}
}
]
}
```
In this example:
- The `movies.json` dataset is defined as an asset, pointing to a remote URL.
- The SHA-256 checksum ensures integrity.
- The `POST /indexes/movies/documents` command uses this asset as the request body.
This makes the test much cleaner than inlining a large dataset directly into the command.
For asset handling, please refer to the [declarative benchmarks documentation](/BENCHMARKS.md#adding-new-assets).
### Asserting responses
Commands can specify both the **expected status code** and the **expected response body**.
```jsonc
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]", // Set to a bracketed string to ignore the value
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"synchronous": "WaitForTask"
}
```
Manually writing `expectedResponse` fields can be tedious.
Instead, you can let the test runner populate them automatically:
```bash
# Run the workload to populate expected fields. Only adds the missing ones, doesn't change existing data
cargo xtask test workloads/tests/example.json --add-missing-responses
# OR
# Run the workload to populate expected fields. Updates all fields including existing ones
cargo xtask test workloads/tests/example.json --update-responses
```
The recommended workflow is:
1. Write the test without expected fields.
2. Run it with `--add-missing-responses` to capture the actual responses (see the sketch below).
3. Review and commit the generated expectations.
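To illustrate step 2, here is a rough sketch of what the runner adds to a command; the settings route and the filled-in values are hypothetical, not actual runner output:
```jsonc
// Before: only the request is described.
{
  "route": "indexes/movies/settings",
  "method": "PATCH",
  "body": {
    "inline": { "searchableAttributes": [ "title" ] }
  },
  "synchronous": "WaitForTask"
}

// After `--add-missing-responses`: the observed status and response are recorded.
{
  "route": "indexes/movies/settings",
  "method": "PATCH",
  "body": {
    "inline": { "searchableAttributes": [ "title" ] }
  },
  "expectedStatus": 202,
  "expectedResponse": {
    "enqueuedAt": "[timestamp]", // non-deterministic values can stay bracketed so they are ignored
    "indexUid": "movies",
    "status": "enqueued",
    "taskUid": 0,
    "type": "settingsUpdate"
  },
  "synchronous": "WaitForTask"
}
```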
## Changing binary
A test can include an instruction that switches the running Meilisearch instance from one binary specification to another.
When executed, such an instruction will:
1. Stop the current Meilisearch instance.
2. Fetch the binary specified by the instruction.
3. Restart the server with the specified binary on the same database.
```jsonc
{
"type": "test",
"name": "movies",
"binary": {
"source": "release",
"version": "1.19.0", // start with version v1.19.0
"edition": "community"
},
"assets": {
"movies.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
"sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
}
},
"commands": [
// setup some data
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
}
},
// switch binary to v1.24.0
{
"binary": {
"source": "release",
"version": "1.24.0",
"edition": "community"
}
}
]
}
```
### Typical usage
In most cases, the binary-change instruction is used to upgrade a database.
- **Set up** some data using commands on an older version.
- **Upgrade** to the latest version.
- **Assert** that the data and API behavior remain correct after the upgrade.
To properly test the dumpless upgrade, one should typically:
1. Open the database without processing the update task: Use a `binary` instruction to switch to the desired version, passing `--experimental-dumpless-upgrade` and `--experimental-max-number-of-batched-tasks=0` as extra CLI arguments.
2. Check that the search, stats and task queue still work.
3. Open the database and process the update task: Use a `binary` instruction to switch to the desired version, passing `--experimental-dumpless-upgrade` as the extra CLI argument. Use a `health` command to wait for the upgrade task to finish.
4. Check that the indexing, search, stats, and task queue still work.
```jsonc
{
"type": "test",
"name": "movies",
"binary": {
"source": "release",
"version": "1.12.0",
"edition": "community"
},
"commands": [
// 0. Run commands to populate the database
{
// ..
},
// 1. Open the database with the new MS without processing the update task
{
"binary": {
"source": "build", // build the binary from the sources in the current git repository
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade", // allows to open with a newer MS
"--experimental-max-number-of-batched-tasks=0" // prevent processing of the update task
]
}
},
// 2. Check the search etc.
{
// ..
},
// 3. Open the database with the new MS and process the update task
{
"binary": {
"source": "build", // build the binary from the sources in the current git repository
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade" // allows to open with a newer MS
// no `--experimental-max-number-of-batched-tasks=0`
]
}
},
// 4. Check the indexing, search, etc.
{
// ..
}
]
}
```
This ensures backward compatibility: databases created with older Meilisearch versions should remain functional and consistent after an upgrade.
## Variables
Sometimes a command needs to use a value returned by a **previous response**.
These values can be captured and reused using the `register` field.
```jsonc
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [ "movies" ]
}
},
"expectedResponse": {
"key": "c6f64630bad2996b1f675007c8800168e14adf5d6a7bb1a400a6d2b158050eaf",
// ...
},
"register": {
"key": "/key"
},
"synchronous": "WaitForResponse"
}
```
The `register` field captures the value at the JSON path `/key` from the response.
Paths follow the **JSON Pointer (RFC 6901)** format.
Registered variables are available for all subsequent commands.
Registered variables can be referenced by wrapping their name in double curly braces:
In the route/path:
```jsonc
{
"route": "tasks/{{ task_id }}",
"method": "GET"
}
```
In the request body:
```jsonc
{
"route": "indexes/movies/documents",
"method": "PATCH",
"body": {
"inline": {
"id": "{{ document_id }}",
"overview": "Shazam turns evil and the world is in danger.",
}
}
}
```
Or they can be referenced by their name (**without curly braces**) as an API key:
```jsonc
{
"route": "indexes/movies/documents",
"method": "POST",
"body": { /* ... */ },
"apiKeyVariable": "key" // The **content** of the key variable will be used as an API key
}
```
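Putting these pieces together, here is a minimal, hypothetical command sequence that creates a key, registers it, and reuses it to authenticate a later search request:
```jsonc
"commands": [
  {
    "route": "keys",
    "method": "POST",
    "body": {
      "inline": {
        "actions": [ "search" ],
        "description": "Registered test key",
        "expiresAt": null,
        "indexes": [ "movies" ]
      }
    },
    "register": { "key": "/key" }, // capture the generated key from the response
    "synchronous": "WaitForResponse"
  },
  {
    "route": "indexes/movies/search",
    "method": "POST",
    "body": { "inline": { "q": "shazam" } },
    "apiKeyVariable": "key" // authenticate the request with the registered key
  }
]
```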

View File

@@ -2,7 +2,6 @@ mod chat;
mod distinct;
mod errors;
mod get_settings;
mod parent_seachable_fields;
mod prefix_search_settings;
mod proximity_settings;
mod tokenizer_customization;

View File

@@ -1,114 +0,0 @@
use meili_snap::{json_string, snapshot};
use once_cell::sync::Lazy;
use crate::common::Server;
use crate::json;
static DOCUMENTS: Lazy<crate::common::Value> = Lazy::new(|| {
json!([
{
"id": 1,
"meta": {
"title": "Soup of the day",
"description": "many the fish",
}
},
{
"id": 2,
"meta": {
"title": "Soup of day",
"description": "many the lazy fish",
}
},
{
"id": 3,
"meta": {
"title": "the Soup of day",
"description": "many the fish",
}
},
])
});
#[actix_rt::test]
async fn nested_field_becomes_searchable() {
let server = Server::new_shared();
let index = server.unique_index();
let (task, _status_code) = index.add_documents(DOCUMENTS.clone(), None).await;
server.wait_task(task.uid()).await.succeeded();
let (response, code) = index
.update_settings(json!({
"searchableAttributes": ["meta.title"]
}))
.await;
assert_eq!("202", code.as_str(), "{response:?}");
server.wait_task(response.uid()).await.succeeded();
// We expect no documents when searching for
// a nested non-searchable field
index
.search(json!({"q": "many fish"}), |response, code| {
snapshot!(code, @"200 OK");
snapshot!(json_string!(response["hits"]), @r###"[]"###);
})
.await;
let (response, code) = index
.update_settings(json!({
"searchableAttributes": ["meta.title", "meta.description"]
}))
.await;
assert_eq!("202", code.as_str(), "{response:?}");
server.wait_task(response.uid()).await.succeeded();
// We expect all the documents when the nested field becomes searchable
index
.search(json!({"q": "many fish"}), |response, code| {
snapshot!(code, @"200 OK");
snapshot!(json_string!(response["hits"]), @r###"
[
{
"id": 1,
"meta": {
"title": "Soup of the day",
"description": "many the fish"
}
},
{
"id": 3,
"meta": {
"title": "the Soup of day",
"description": "many the fish"
}
},
{
"id": 2,
"meta": {
"title": "Soup of day",
"description": "many the lazy fish"
}
}
]
"###);
})
.await;
let (response, code) = index
.update_settings(json!({
"searchableAttributes": ["meta.title"]
}))
.await;
assert_eq!("202", code.as_str(), "{response:?}");
server.wait_task(response.uid()).await.succeeded();
// We expect no documents when searching for
// a nested non-searchable field
index
.search(json!({"q": "many fish"}), |response, code| {
snapshot!(code, @"200 OK");
snapshot!(json_string!(response["hits"]), @r###"[]"###);
})
.await;
}

View File

@@ -18,8 +18,6 @@ use crate::{
pub struct Metadata {
/// The weight as defined in the FieldidsWeightsMap of the searchable attribute if it is searchable.
pub searchable: Option<Weight>,
/// The field is part of the exact attributes.
pub exact: bool,
/// The field is part of the sortable attributes.
pub sortable: bool,
/// The field is defined as the distinct attribute.
@@ -211,7 +209,6 @@ impl Metadata {
#[derive(Debug, Clone)]
pub struct MetadataBuilder {
searchable_attributes: Option<Vec<String>>,
exact_searchable_attributes: Vec<String>,
filterable_attributes: Vec<FilterableAttributesRule>,
sortable_attributes: HashSet<String>,
localized_attributes: Option<Vec<LocalizedAttributesRule>>,
@@ -223,18 +220,15 @@ impl MetadataBuilder {
pub fn from_index(index: &Index, rtxn: &RoTxn) -> Result<Self> {
let searchable_attributes = index
.user_defined_searchable_fields(rtxn)?
.map(|fields| fields.into_iter().map(String::from).collect());
let exact_searchable_attributes =
index.exact_attributes(rtxn)?.into_iter().map(String::from).collect();
.map(|fields| fields.into_iter().map(|s| s.to_string()).collect());
let filterable_attributes = index.filterable_attributes_rules(rtxn)?;
let sortable_attributes = index.sortable_fields(rtxn)?;
let localized_attributes = index.localized_attributes_rules(rtxn)?;
let distinct_attribute = index.distinct_field(rtxn)?.map(String::from);
let distinct_attribute = index.distinct_field(rtxn)?.map(|s| s.to_string());
let asc_desc_attributes = index.asc_desc_fields(rtxn)?;
Ok(Self::new(
searchable_attributes,
exact_searchable_attributes,
filterable_attributes,
sortable_attributes,
localized_attributes,
@@ -248,7 +242,6 @@ impl MetadataBuilder {
/// This is used for testing, prefer using `MetadataBuilder::from_index` instead.
pub fn new(
searchable_attributes: Option<Vec<String>>,
exact_searchable_attributes: Vec<String>,
filterable_attributes: Vec<FilterableAttributesRule>,
sortable_attributes: HashSet<String>,
localized_attributes: Option<Vec<LocalizedAttributesRule>>,
@@ -263,7 +256,6 @@ impl MetadataBuilder {
Self {
searchable_attributes,
exact_searchable_attributes,
filterable_attributes,
sortable_attributes,
localized_attributes,
@@ -277,7 +269,6 @@ impl MetadataBuilder {
// Vectors fields are not searchable, filterable, distinct or asc_desc
return Metadata {
searchable: None,
exact: false,
sortable: false,
distinct: false,
asc_desc: false,
@@ -305,7 +296,6 @@ impl MetadataBuilder {
// Geo fields are not searchable, distinct or asc_desc
return Metadata {
searchable: None,
exact: false,
sortable,
distinct: false,
asc_desc: false,
@@ -319,7 +309,6 @@ impl MetadataBuilder {
debug_assert!(!sortable, "geojson fields should not be sortable");
return Metadata {
searchable: None,
exact: false,
sortable,
distinct: false,
asc_desc: false,
@@ -340,8 +329,6 @@ impl MetadataBuilder {
None => Some(0),
};
let exact = self.exact_searchable_attributes.iter().any(|attr| is_faceted_by(field, attr));
let distinct =
self.distinct_attribute.as_ref().is_some_and(|distinct_field| field == distinct_field);
let asc_desc = self.asc_desc_attributes.contains(field);
@@ -356,7 +343,6 @@ impl MetadataBuilder {
Metadata {
searchable,
exact,
sortable,
distinct,
asc_desc,

View File

@@ -8,26 +8,17 @@ use bumpalo::Bump;
use super::match_searchable_field;
use super::tokenize_document::{tokenizer_builder, DocumentTokenizer};
use crate::fields_ids_map::metadata::Metadata;
use crate::update::new::document::DocumentContext;
use crate::update::new::extract::cache::BalancedCaches;
use crate::update::new::extract::perm_json_p::contained_in;
use crate::update::new::extract::searchable::has_searchable_children;
use crate::update::new::indexer::document_changes::{
extract, DocumentChanges, Extractor, IndexingContext,
};
use crate::update::new::indexer::settings_changes::{
settings_change_extract, DocumentsIndentifiers, SettingsChangeExtractor,
};
use crate::update::new::ref_cell_ext::RefCellExt as _;
use crate::update::new::steps::IndexingStep;
use crate::update::new::thread_local::{FullySend, MostlySend, ThreadLocal};
use crate::update::new::{DocumentChange, DocumentIdentifiers};
use crate::update::settings::SettingsDelta;
use crate::{
bucketed_position, DocumentId, FieldId, PatternMatch, Result, UserError,
MAX_POSITION_PER_ATTRIBUTE,
};
use crate::update::new::DocumentChange;
use crate::{bucketed_position, DocumentId, FieldId, Result, MAX_POSITION_PER_ATTRIBUTE};
const MAX_COUNTED_WORDS: usize = 30;
@@ -43,15 +34,6 @@ pub struct WordDocidsBalancedCaches<'extractor> {
unsafe impl MostlySend for WordDocidsBalancedCaches<'_> {}
/// Whether to extract or skip fields during word extraction.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FieldDbExtraction {
/// Extract the word and put it in to the fid-based databases.
Extract,
/// Do not store the word in the fid-based databases.
Skip,
}
impl<'extractor> WordDocidsBalancedCaches<'extractor> {
pub fn new_in(buckets: usize, max_memory: Option<usize>, alloc: &'extractor Bump) -> Self {
Self {
@@ -65,14 +47,12 @@ impl<'extractor> WordDocidsBalancedCaches<'extractor> {
}
}
#[allow(clippy::too_many_arguments)]
fn insert_add_u32(
&mut self,
field_id: FieldId,
position: u16,
word: &str,
exact: bool,
field_db_extraction: FieldDbExtraction,
docid: u32,
bump: &Bump,
) -> Result<()> {
@@ -86,13 +66,11 @@ impl<'extractor> WordDocidsBalancedCaches<'extractor> {
let buffer_size = word_bytes.len() + 1 + size_of::<FieldId>();
let mut buffer = BumpVec::with_capacity_in(buffer_size, bump);
if field_db_extraction == FieldDbExtraction::Extract {
buffer.clear();
buffer.extend_from_slice(word_bytes);
buffer.push(0);
buffer.extend_from_slice(&field_id.to_be_bytes());
self.word_fid_docids.insert_add_u32(&buffer, docid)?;
}
buffer.clear();
buffer.extend_from_slice(word_bytes);
buffer.push(0);
buffer.extend_from_slice(&field_id.to_be_bytes());
self.word_fid_docids.insert_add_u32(&buffer, docid)?;
let position = bucketed_position(position);
buffer.clear();
@@ -105,26 +83,21 @@ impl<'extractor> WordDocidsBalancedCaches<'extractor> {
self.flush_fid_word_count(&mut buffer)?;
}
if field_db_extraction == FieldDbExtraction::Extract {
self.fid_word_count
.entry(field_id)
.and_modify(|(_current_count, new_count)| *new_count.get_or_insert(0) += 1)
.or_insert((None, Some(1)));
}
self.fid_word_count
.entry(field_id)
.and_modify(|(_current_count, new_count)| *new_count.get_or_insert(0) += 1)
.or_insert((None, Some(1)));
self.current_docid = Some(docid);
Ok(())
}
#[allow(clippy::too_many_arguments)]
fn insert_del_u32(
&mut self,
field_id: FieldId,
position: u16,
word: &str,
exact: bool,
field_db_extraction: FieldDbExtraction,
docid: u32,
bump: &Bump,
) -> Result<()> {
@@ -138,13 +111,11 @@ impl<'extractor> WordDocidsBalancedCaches<'extractor> {
let buffer_size = word_bytes.len() + 1 + size_of::<FieldId>();
let mut buffer = BumpVec::with_capacity_in(buffer_size, bump);
if field_db_extraction == FieldDbExtraction::Extract {
buffer.clear();
buffer.extend_from_slice(word_bytes);
buffer.push(0);
buffer.extend_from_slice(&field_id.to_be_bytes());
self.word_fid_docids.insert_del_u32(&buffer, docid)?;
}
buffer.clear();
buffer.extend_from_slice(word_bytes);
buffer.push(0);
buffer.extend_from_slice(&field_id.to_be_bytes());
self.word_fid_docids.insert_del_u32(&buffer, docid)?;
let position = bucketed_position(position);
buffer.clear();
@@ -157,12 +128,10 @@ impl<'extractor> WordDocidsBalancedCaches<'extractor> {
self.flush_fid_word_count(&mut buffer)?;
}
if field_db_extraction == FieldDbExtraction::Extract {
self.fid_word_count
.entry(field_id)
.and_modify(|(current_count, _new_count)| *current_count.get_or_insert(0) += 1)
.or_insert((Some(1), None));
}
self.fid_word_count
.entry(field_id)
.and_modify(|(current_count, _new_count)| *current_count.get_or_insert(0) += 1)
.or_insert((Some(1), None));
self.current_docid = Some(docid);
@@ -356,24 +325,6 @@ impl WordDocidsExtractors {
exact_attributes.iter().any(|attr| contained_in(fname, attr))
|| disabled_typos_terms.is_exact(word)
};
let mut should_tokenize = |field_name: &str| {
let Some((field_id, meta)) = new_fields_ids_map.id_with_metadata_or_insert(field_name)
else {
return Err(UserError::AttributeLimitReached.into());
};
let pattern_match = if meta.is_searchable() {
PatternMatch::Match
} else {
// TODO: should be a match on the field_name using `match_field_legacy` function,
// but for legacy reasons we iterate over all the fields to fill the field_id_map.
PatternMatch::Parent
};
Ok((field_id, pattern_match))
};
match document_change {
DocumentChange::Deletion(inner) => {
let mut token_fn = |fname: &str, fid, pos, word: &str| {
@@ -382,14 +333,13 @@ impl WordDocidsExtractors {
pos,
word,
is_exact(fname, word),
FieldDbExtraction::Extract,
inner.docid(),
doc_alloc,
)
};
document_tokenizer.tokenize_document(
inner.current(rtxn, index, context.db_fields_ids_map)?,
&mut should_tokenize,
new_fields_ids_map,
&mut token_fn,
)?;
}
@@ -411,14 +361,13 @@ impl WordDocidsExtractors {
pos,
word,
is_exact(fname, word),
FieldDbExtraction::Extract,
inner.docid(),
doc_alloc,
)
};
document_tokenizer.tokenize_document(
inner.current(rtxn, index, context.db_fields_ids_map)?,
&mut should_tokenize,
new_fields_ids_map,
&mut token_fn,
)?;
@@ -428,14 +377,13 @@ impl WordDocidsExtractors {
pos,
word,
is_exact(fname, word),
FieldDbExtraction::Extract,
inner.docid(),
doc_alloc,
)
};
document_tokenizer.tokenize_document(
inner.merged(rtxn, index, context.db_fields_ids_map)?,
&mut should_tokenize,
new_fields_ids_map,
&mut token_fn,
)?;
}
@@ -446,14 +394,13 @@ impl WordDocidsExtractors {
pos,
word,
is_exact(fname, word),
FieldDbExtraction::Extract,
inner.docid(),
doc_alloc,
)
};
document_tokenizer.tokenize_document(
inner.inserted(),
&mut should_tokenize,
new_fields_ids_map,
&mut token_fn,
)?;
}
@@ -464,292 +411,3 @@ impl WordDocidsExtractors {
cached_sorter.flush_fid_word_count(&mut buffer)
}
}
pub struct WordDocidsSettingsExtractorsData<'a, SD> {
tokenizer: DocumentTokenizer<'a>,
max_memory_by_thread: Option<usize>,
buckets: usize,
settings_delta: &'a SD,
}
impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
for WordDocidsSettingsExtractorsData<'_, SD>
{
type Data = RefCell<Option<WordDocidsBalancedCaches<'extractor>>>;
fn init_data<'doc>(&'doc self, extractor_alloc: &'extractor Bump) -> crate::Result<Self::Data> {
Ok(RefCell::new(Some(WordDocidsBalancedCaches::new_in(
self.buckets,
self.max_memory_by_thread,
extractor_alloc,
))))
}
fn process<'doc>(
&'doc self,
documents: impl Iterator<Item = crate::Result<DocumentIdentifiers<'doc>>>,
context: &'doc DocumentContext<Self::Data>,
) -> crate::Result<()> {
for document in documents {
let document = document?;
SettingsChangeWordDocidsExtractors::extract_document_from_settings_change(
document,
context,
&self.tokenizer,
self.settings_delta,
)?;
}
Ok(())
}
}
pub struct SettingsChangeWordDocidsExtractors;
impl SettingsChangeWordDocidsExtractors {
pub fn run_extraction<'fid, 'indexer, 'index, 'extractor, SD, MSP>(
settings_delta: &SD,
documents: &'indexer DocumentsIndentifiers<'indexer>,
indexing_context: IndexingContext<'fid, 'indexer, 'index, MSP>,
extractor_allocs: &'extractor mut ThreadLocal<FullySend<Bump>>,
step: IndexingStep,
) -> Result<WordDocidsCaches<'extractor>>
where
SD: SettingsDelta + Sync,
MSP: Fn() -> bool + Sync,
{
// Warning: this is duplicated code from extract_word_pair_proximity_docids.rs
// TODO we need to read the new AND old settings to support changing global parameters
let rtxn = indexing_context.index.read_txn()?;
let stop_words = indexing_context.index.stop_words(&rtxn)?;
let allowed_separators = indexing_context.index.allowed_separators(&rtxn)?;
let allowed_separators: Option<Vec<_>> =
allowed_separators.as_ref().map(|s| s.iter().map(String::as_str).collect());
let dictionary = indexing_context.index.dictionary(&rtxn)?;
let dictionary: Option<Vec<_>> =
dictionary.as_ref().map(|s| s.iter().map(String::as_str).collect());
let mut builder = tokenizer_builder(
stop_words.as_ref(),
allowed_separators.as_deref(),
dictionary.as_deref(),
);
let tokenizer = builder.build();
let localized_attributes_rules =
indexing_context.index.localized_attributes_rules(&rtxn)?.unwrap_or_default();
let document_tokenizer = DocumentTokenizer {
tokenizer: &tokenizer,
localized_attributes_rules: &localized_attributes_rules,
max_positions_per_attributes: MAX_POSITION_PER_ATTRIBUTE,
};
let extractor_data = WordDocidsSettingsExtractorsData {
tokenizer: document_tokenizer,
max_memory_by_thread: indexing_context.grenad_parameters.max_memory_by_thread(),
buckets: rayon::current_num_threads(),
settings_delta,
};
let datastore = ThreadLocal::new();
{
let span = tracing::debug_span!(target: "indexing::documents::extract", "vectors");
let _entered = span.enter();
settings_change_extract(
documents,
&extractor_data,
indexing_context,
extractor_allocs,
&datastore,
step,
)?;
}
let mut merger = WordDocidsCaches::new();
for cache in datastore.into_iter().flat_map(RefCell::into_inner) {
merger.push(cache)?;
}
Ok(merger)
}
/// Extracts document words from a settings change.
fn extract_document_from_settings_change<SD: SettingsDelta>(
document: DocumentIdentifiers<'_>,
context: &DocumentContext<RefCell<Option<WordDocidsBalancedCaches>>>,
document_tokenizer: &DocumentTokenizer,
settings_delta: &SD,
) -> Result<()> {
let mut cached_sorter_ref = context.data.borrow_mut_or_yield();
let cached_sorter = cached_sorter_ref.as_mut().unwrap();
let doc_alloc = &context.doc_alloc;
let new_fields_ids_map = settings_delta.new_fields_ids_map();
let old_fields_ids_map = context.index.fields_ids_map_with_metadata(&context.rtxn)?;
let old_searchable = settings_delta.old_searchable_attributes().as_ref();
let new_searchable = settings_delta.new_searchable_attributes().as_ref();
let current_document = document.current(
&context.rtxn,
context.index,
old_fields_ids_map.as_fields_ids_map(),
)?;
#[derive(Debug, Clone, Copy, PartialEq)]
enum ActionToOperate {
ReindexAllFields,
// TODO improve by listing field prefixes
IndexAddedFields,
SkipDocument,
}
let mut action = ActionToOperate::SkipDocument;
// Here we do a preliminary check to determine the action to take.
// This check doesn't trigger the tokenizer as we never return
// PatternMatch::Match.
document_tokenizer.tokenize_document(
current_document,
&mut |field_name| {
let fid = new_fields_ids_map.id(field_name).expect("All fields IDs must exist");
// If the document must be reindexed, early return NoMatch to stop the scanning process.
if action == ActionToOperate::ReindexAllFields {
return Ok((fid, PatternMatch::NoMatch));
}
let old_field_metadata = old_fields_ids_map.metadata(fid).unwrap();
let new_field_metadata = new_fields_ids_map.metadata(fid).unwrap();
action = match (old_field_metadata, new_field_metadata) {
// At least one field is added or removed from the exact fields => ReindexAllFields
(Metadata { exact: old_exact, .. }, Metadata { exact: new_exact, .. })
if old_exact != new_exact =>
{
ActionToOperate::ReindexAllFields
}
// At least one field is removed from the searchable fields => ReindexAllFields
(Metadata { searchable: Some(_), .. }, Metadata { searchable: None, .. }) => {
ActionToOperate::ReindexAllFields
}
// At least one field is added in the searchable fields => IndexAddedFields
(Metadata { searchable: None, .. }, Metadata { searchable: Some(_), .. }) => {
// We can safely overwrite the action, because we early return when action is ReindexAllFields.
ActionToOperate::IndexAddedFields
}
_ => action,
};
Ok((fid, PatternMatch::Parent))
},
&mut |_, _, _, _| Ok(()),
)?;
// Early return when we don't need to index the document
if action == ActionToOperate::SkipDocument {
return Ok(());
}
let mut should_tokenize = |field_name: &str| {
let field_id = new_fields_ids_map.id(field_name).expect("All fields IDs must exist");
let old_field_metadata = old_fields_ids_map.metadata(field_id).unwrap();
let new_field_metadata = new_fields_ids_map.metadata(field_id).unwrap();
let pattern_match = match action {
ActionToOperate::ReindexAllFields => {
if old_field_metadata.is_searchable() || new_field_metadata.is_searchable() {
PatternMatch::Match
// If any old or new field is searchable then we need to iterate over all fields
// else if any field matches we need to iterate over all fields
} else if has_searchable_children(
field_name,
old_searchable.zip(new_searchable).map(|(old, new)| old.iter().chain(new)),
) {
PatternMatch::Parent
} else {
PatternMatch::NoMatch
}
}
ActionToOperate::IndexAddedFields => {
// Was not searchable but now is
if !old_field_metadata.is_searchable() && new_field_metadata.is_searchable() {
PatternMatch::Match
// If the field is now a parent of a searchable field
} else if has_searchable_children(field_name, new_searchable) {
PatternMatch::Parent
} else {
PatternMatch::NoMatch
}
}
ActionToOperate::SkipDocument => unreachable!(),
};
Ok((field_id, pattern_match))
};
let old_disabled_typos_terms = settings_delta.old_disabled_typos_terms();
let new_disabled_typos_terms = settings_delta.new_disabled_typos_terms();
let mut token_fn = |_field_name: &str, field_id, pos, word: &str| {
let old_field_metadata = old_fields_ids_map.metadata(field_id).unwrap();
let new_field_metadata = new_fields_ids_map.metadata(field_id).unwrap();
match (old_field_metadata, new_field_metadata) {
(
Metadata { searchable: Some(_), exact: old_exact, .. },
Metadata { searchable: None, .. },
) => cached_sorter.insert_del_u32(
field_id,
pos,
word,
old_exact || old_disabled_typos_terms.is_exact(word),
// We deleted the field globally
FieldDbExtraction::Skip,
document.docid(),
doc_alloc,
),
(
Metadata { searchable: None, .. },
Metadata { searchable: Some(_), exact: new_exact, .. },
) => cached_sorter.insert_add_u32(
field_id,
pos,
word,
new_exact || new_disabled_typos_terms.is_exact(word),
FieldDbExtraction::Extract,
document.docid(),
doc_alloc,
),
(Metadata { searchable: None, .. }, Metadata { searchable: None, .. }) => {
unreachable!()
}
(Metadata { exact: old_exact, .. }, Metadata { exact: new_exact, .. }) => {
cached_sorter.insert_del_u32(
field_id,
pos,
word,
old_exact || old_disabled_typos_terms.is_exact(word),
// The field has already been extracted
FieldDbExtraction::Skip,
document.docid(),
doc_alloc,
)?;
cached_sorter.insert_add_u32(
field_id,
pos,
word,
new_exact || new_disabled_typos_terms.is_exact(word),
// The field has already been extracted
FieldDbExtraction::Skip,
document.docid(),
doc_alloc,
)
}
}
};
// TODO we must tokenize twice when we change global parameters like stop words,
// the language settings, dictionary, separators, non-separators...
document_tokenizer.tokenize_document(
current_document,
&mut should_tokenize,
&mut token_fn,
)?;
Ok(())
}
}

View File

@@ -6,24 +6,17 @@ use bumpalo::Bump;
use super::match_searchable_field;
use super::tokenize_document::{tokenizer_builder, DocumentTokenizer};
use crate::fields_ids_map::metadata::Metadata;
use crate::proximity::ProximityPrecision::*;
use crate::proximity::{index_proximity, MAX_DISTANCE};
use crate::update::new::document::{Document, DocumentContext};
use crate::update::new::extract::cache::BalancedCaches;
use crate::update::new::indexer::document_changes::{
extract, DocumentChanges, Extractor, IndexingContext,
};
use crate::update::new::indexer::settings_change_extract;
use crate::update::new::indexer::settings_changes::{
DocumentsIndentifiers, SettingsChangeExtractor,
};
use crate::update::new::ref_cell_ext::RefCellExt as _;
use crate::update::new::steps::IndexingStep;
use crate::update::new::thread_local::{FullySend, ThreadLocal};
use crate::update::new::{DocumentChange, DocumentIdentifiers};
use crate::update::settings::SettingsDelta;
use crate::{FieldId, PatternMatch, Result, UserError, MAX_POSITION_PER_ATTRIBUTE};
use crate::update::new::DocumentChange;
use crate::{FieldId, GlobalFieldsIdsMap, Result, MAX_POSITION_PER_ATTRIBUTE};
pub struct WordPairProximityDocidsExtractorData<'a> {
tokenizer: DocumentTokenizer<'a>,
@@ -123,7 +116,7 @@ impl WordPairProximityDocidsExtractor {
// and to store the docids of the documents that have a number of words in a given field
// equal to or under than MAX_COUNTED_WORDS.
fn extract_document_change(
context: &DocumentContext<RefCell<BalancedCaches<'_>>>,
context: &DocumentContext<RefCell<BalancedCaches>>,
document_tokenizer: &DocumentTokenizer,
searchable_attributes: Option<&[&str]>,
document_change: DocumentChange,
@@ -154,12 +147,8 @@ impl WordPairProximityDocidsExtractor {
process_document_tokens(
document,
document_tokenizer,
new_fields_ids_map,
&mut word_positions,
&mut |field_name| {
new_fields_ids_map
.id_with_metadata_or_insert(field_name)
.ok_or(UserError::AttributeLimitReached.into())
},
&mut |(w1, w2), prox| {
del_word_pair_proximity.push(((w1, w2), prox));
},
@@ -181,12 +170,8 @@ impl WordPairProximityDocidsExtractor {
process_document_tokens(
document,
document_tokenizer,
new_fields_ids_map,
&mut word_positions,
&mut |field_name| {
new_fields_ids_map
.id_with_metadata_or_insert(field_name)
.ok_or(UserError::AttributeLimitReached.into())
},
&mut |(w1, w2), prox| {
del_word_pair_proximity.push(((w1, w2), prox));
},
@@ -195,12 +180,8 @@ impl WordPairProximityDocidsExtractor {
process_document_tokens(
document,
document_tokenizer,
new_fields_ids_map,
&mut word_positions,
&mut |field_name| {
new_fields_ids_map
.id_with_metadata_or_insert(field_name)
.ok_or(UserError::AttributeLimitReached.into())
},
&mut |(w1, w2), prox| {
add_word_pair_proximity.push(((w1, w2), prox));
},
@@ -211,12 +192,8 @@ impl WordPairProximityDocidsExtractor {
process_document_tokens(
document,
document_tokenizer,
new_fields_ids_map,
&mut word_positions,
&mut |field_name| {
new_fields_ids_map
.id_with_metadata_or_insert(field_name)
.ok_or(UserError::AttributeLimitReached.into())
},
&mut |(w1, w2), prox| {
add_word_pair_proximity.push(((w1, w2), prox));
},
@@ -280,8 +257,8 @@ fn drain_word_positions(
fn process_document_tokens<'doc>(
document: impl Document<'doc>,
document_tokenizer: &DocumentTokenizer,
fields_ids_map: &mut GlobalFieldsIdsMap,
word_positions: &mut VecDeque<(Rc<str>, u16)>,
field_id_and_metadata: &mut impl FnMut(&str) -> Result<(FieldId, Metadata)>,
word_pair_proximity: &mut impl FnMut((Rc<str>, Rc<str>), u8),
) -> Result<()> {
let mut field_id = None;
@@ -302,248 +279,8 @@ fn process_document_tokens<'doc>(
word_positions.push_back((Rc::from(word), pos));
Ok(())
};
let mut should_tokenize = |field_name: &str| {
let (field_id, meta) = field_id_and_metadata(field_name)?;
let pattern_match = if meta.is_searchable() {
PatternMatch::Match
} else {
// TODO: should be a match on the field_name using `match_field_legacy` function,
// but for legacy reasons we iterate over all the fields to fill the field_id_map.
PatternMatch::Parent
};
Ok((field_id, pattern_match))
};
document_tokenizer.tokenize_document(document, &mut should_tokenize, &mut token_fn)?;
document_tokenizer.tokenize_document(document, fields_ids_map, &mut token_fn)?;
drain_word_positions(word_positions, word_pair_proximity);
Ok(())
}
pub struct WordPairProximityDocidsSettingsExtractorsData<'a, SD> {
tokenizer: DocumentTokenizer<'a>,
max_memory_by_thread: Option<usize>,
buckets: usize,
settings_delta: &'a SD,
}
impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
for WordPairProximityDocidsSettingsExtractorsData<'_, SD>
{
type Data = RefCell<BalancedCaches<'extractor>>;
fn init_data<'doc>(&'doc self, extractor_alloc: &'extractor Bump) -> crate::Result<Self::Data> {
Ok(RefCell::new(BalancedCaches::new_in(
self.buckets,
self.max_memory_by_thread,
extractor_alloc,
)))
}
fn process<'doc>(
&'doc self,
documents: impl Iterator<Item = crate::Result<DocumentIdentifiers<'doc>>>,
context: &'doc DocumentContext<Self::Data>,
) -> crate::Result<()> {
for document in documents {
let document = document?;
SettingsChangeWordPairProximityDocidsExtractors::extract_document_from_settings_change(
document,
context,
&self.tokenizer,
self.settings_delta,
)?;
}
Ok(())
}
}
pub struct SettingsChangeWordPairProximityDocidsExtractors;
impl SettingsChangeWordPairProximityDocidsExtractors {
pub fn run_extraction<'fid, 'indexer, 'index, 'extractor, SD, MSP>(
settings_delta: &SD,
documents: &'indexer DocumentsIndentifiers<'indexer>,
indexing_context: IndexingContext<'fid, 'indexer, 'index, MSP>,
extractor_allocs: &'extractor mut ThreadLocal<FullySend<Bump>>,
step: IndexingStep,
) -> Result<Vec<BalancedCaches<'extractor>>>
where
SD: SettingsDelta + Sync,
MSP: Fn() -> bool + Sync,
{
// Warning: this is duplicated code from extract_word_docids.rs
let rtxn = indexing_context.index.read_txn()?;
let stop_words = indexing_context.index.stop_words(&rtxn)?;
let allowed_separators = indexing_context.index.allowed_separators(&rtxn)?;
let allowed_separators: Option<Vec<_>> =
allowed_separators.as_ref().map(|s| s.iter().map(String::as_str).collect());
let dictionary = indexing_context.index.dictionary(&rtxn)?;
let dictionary: Option<Vec<_>> =
dictionary.as_ref().map(|s| s.iter().map(String::as_str).collect());
let mut builder = tokenizer_builder(
stop_words.as_ref(),
allowed_separators.as_deref(),
dictionary.as_deref(),
);
let tokenizer = builder.build();
let localized_attributes_rules =
indexing_context.index.localized_attributes_rules(&rtxn)?.unwrap_or_default();
let document_tokenizer = DocumentTokenizer {
tokenizer: &tokenizer,
localized_attributes_rules: &localized_attributes_rules,
max_positions_per_attributes: MAX_POSITION_PER_ATTRIBUTE,
};
let extractor_data = WordPairProximityDocidsSettingsExtractorsData {
tokenizer: document_tokenizer,
max_memory_by_thread: indexing_context.grenad_parameters.max_memory_by_thread(),
buckets: rayon::current_num_threads(),
settings_delta,
};
let datastore = ThreadLocal::new();
{
let span = tracing::trace_span!(target: "indexing::documents::extract", "word_pair_proximity_docids_extraction");
let _entered = span.enter();
settings_change_extract(
documents,
&extractor_data,
indexing_context,
extractor_allocs,
&datastore,
step,
)?;
}
Ok(datastore.into_iter().map(RefCell::into_inner).collect())
}
/// Extracts document words from a settings change.
fn extract_document_from_settings_change<SD: SettingsDelta>(
document: DocumentIdentifiers<'_>,
context: &DocumentContext<RefCell<BalancedCaches<'_>>>,
document_tokenizer: &DocumentTokenizer,
settings_delta: &SD,
) -> Result<()> {
let mut cached_sorter = context.data.borrow_mut_or_yield();
let doc_alloc = &context.doc_alloc;
let new_fields_ids_map = settings_delta.new_fields_ids_map();
let old_fields_ids_map = settings_delta.old_fields_ids_map();
let old_proximity_precision = *settings_delta.old_proximity_precision();
let new_proximity_precision = *settings_delta.new_proximity_precision();
let current_document = document.current(
&context.rtxn,
context.index,
old_fields_ids_map.as_fields_ids_map(),
)?;
#[derive(Debug, Clone, Copy, PartialEq)]
enum ActionToOperate {
ReindexAllFields,
SkipDocument,
}
// TODO prefix_fid delete_old_fid_based_databases
let mut action = match (old_proximity_precision, new_proximity_precision) {
(ByAttribute, ByWord) => ActionToOperate::ReindexAllFields,
(_, _) => ActionToOperate::SkipDocument,
};
// Here we do a preliminary check to determine the action to take.
// This check doesn't trigger the tokenizer as we never return
// PatternMatch::Match.
if action != ActionToOperate::ReindexAllFields {
document_tokenizer.tokenize_document(
current_document,
&mut |field_name| {
let fid = new_fields_ids_map.id(field_name).expect("All fields IDs must exist");
// If the document must be reindexed, early return NoMatch to stop the scanning process.
if action == ActionToOperate::ReindexAllFields {
return Ok((fid, PatternMatch::NoMatch));
}
let old_field_metadata = old_fields_ids_map.metadata(fid).unwrap();
let new_field_metadata = new_fields_ids_map.metadata(fid).unwrap();
action = match (old_field_metadata, new_field_metadata) {
// At least one field is removed or added from the searchable fields
(
Metadata { searchable: Some(_), .. },
Metadata { searchable: None, .. },
)
| (
Metadata { searchable: None, .. },
Metadata { searchable: Some(_), .. },
) => ActionToOperate::ReindexAllFields,
_ => action,
};
Ok((fid, PatternMatch::Parent))
},
&mut |_, _, _, _| Ok(()),
)?;
}
// Early return when we don't need to index the document
if action == ActionToOperate::SkipDocument {
return Ok(());
}
let mut del_word_pair_proximity = bumpalo::collections::Vec::new_in(doc_alloc);
let mut add_word_pair_proximity = bumpalo::collections::Vec::new_in(doc_alloc);
// is a vecdequeue, and will be smol, so can stay on the heap for now
let mut word_positions: VecDeque<(Rc<str>, u16)> =
VecDeque::with_capacity(MAX_DISTANCE as usize);
process_document_tokens(
current_document,
// TODO Tokenize must be based on old settings
document_tokenizer,
&mut word_positions,
&mut |field_name| {
Ok(old_fields_ids_map.id_with_metadata(field_name).expect("All fields must exist"))
},
&mut |(w1, w2), prox| {
del_word_pair_proximity.push(((w1, w2), prox));
},
)?;
process_document_tokens(
current_document,
// TODO Tokenize must be based on new settings
document_tokenizer,
&mut word_positions,
&mut |field_name| {
Ok(new_fields_ids_map.id_with_metadata(field_name).expect("All fields must exist"))
},
&mut |(w1, w2), prox| {
add_word_pair_proximity.push(((w1, w2), prox));
},
)?;
let mut key_buffer = bumpalo::collections::Vec::new_in(doc_alloc);
del_word_pair_proximity.sort_unstable();
del_word_pair_proximity.dedup_by(|(k1, _), (k2, _)| k1 == k2);
for ((w1, w2), prox) in del_word_pair_proximity.iter() {
let key = build_key(*prox, w1, w2, &mut key_buffer);
cached_sorter.insert_del_u32(key, document.docid())?;
}
add_word_pair_proximity.sort_unstable();
add_word_pair_proximity.dedup_by(|(k1, _), (k2, _)| k1 == k2);
for ((w1, w2), prox) in add_word_pair_proximity.iter() {
let key = build_key(*prox, w1, w2, &mut key_buffer);
cached_sorter.insert_add_u32(key, document.docid())?;
}
Ok(())
}
}

View File

@@ -2,12 +2,8 @@ mod extract_word_docids;
mod extract_word_pair_proximity_docids;
mod tokenize_document;
pub use extract_word_docids::{
SettingsChangeWordDocidsExtractors, WordDocidsCaches, WordDocidsExtractors,
};
pub use extract_word_pair_proximity_docids::{
SettingsChangeWordPairProximityDocidsExtractors, WordPairProximityDocidsExtractor,
};
pub use extract_word_docids::{WordDocidsCaches, WordDocidsExtractors};
pub use extract_word_pair_proximity_docids::WordPairProximityDocidsExtractor;
use crate::attribute_patterns::{match_field_legacy, PatternMatch};
@@ -31,17 +27,3 @@ pub fn match_searchable_field(
selection
}
/// return `true` if the provided `field_name` is a parent of at least one of the fields contained in `searchable`,
/// or if `searchable` is `None`.
fn has_searchable_children<I, A>(field_name: &str, searchable: Option<I>) -> bool
where
I: IntoIterator<Item = A>,
A: AsRef<str>,
{
searchable.is_none_or(|fields| {
fields
.into_iter()
.any(|attr| match_field_legacy(attr.as_ref(), field_name) == PatternMatch::Parent)
})
}

View File

@@ -8,7 +8,10 @@ use crate::update::new::document::Document;
use crate::update::new::extract::perm_json_p::{
seek_leaf_values_in_array, seek_leaf_values_in_object, Depth,
};
use crate::{FieldId, InternalError, LocalizedAttributesRule, Result, MAX_WORD_LENGTH};
use crate::{
FieldId, GlobalFieldsIdsMap, InternalError, LocalizedAttributesRule, Result, UserError,
MAX_WORD_LENGTH,
};
// todo: should be crate::proximity::MAX_DISTANCE but it has been forgotten
const MAX_DISTANCE: u32 = 8;
@@ -23,25 +26,26 @@ impl DocumentTokenizer<'_> {
pub fn tokenize_document<'doc>(
&self,
document: impl Document<'doc>,
should_tokenize: &mut impl FnMut(&str) -> Result<(FieldId, PatternMatch)>,
field_id_map: &mut GlobalFieldsIdsMap,
token_fn: &mut impl FnMut(&str, FieldId, u16, &str) -> Result<()>,
) -> Result<()> {
let mut field_position = HashMap::new();
for entry in document.iter_top_level_fields() {
let (field_name, value) = entry?;
if let (_, PatternMatch::NoMatch) = should_tokenize(field_name)? {
continue;
}
let mut tokenize_field = |field_name: &str, _depth, value: &Value| {
let (fid, pattern_match) = should_tokenize(field_name)?;
if pattern_match == PatternMatch::Match {
self.tokenize_field(fid, field_name, value, token_fn, &mut field_position)?;
}
Ok(pattern_match)
let mut tokenize_field = |field_name: &str, _depth, value: &Value| {
let Some((field_id, meta)) = field_id_map.id_with_metadata_or_insert(field_name) else {
return Err(UserError::AttributeLimitReached.into());
};
if meta.is_searchable() {
self.tokenize_field(field_id, field_name, value, token_fn, &mut field_position)?;
}
// todo: should be a match on the field_name using `match_field_legacy` function,
// but for legacy reasons we iterate over all the fields to fill the field_id_map.
Ok(PatternMatch::Match)
};
for entry in document.iter_top_level_fields() {
let (field_name, value) = entry?;
// parse json.
match serde_json::to_value(value).map_err(InternalError::SerdeJson)? {
Value::Object(object) => seek_leaf_values_in_object(
@@ -188,7 +192,7 @@ mod test {
use super::*;
use crate::fields_ids_map::metadata::{FieldIdMapWithMetadata, MetadataBuilder};
use crate::update::new::document::{DocumentFromVersions, Versions};
use crate::{FieldsIdsMap, GlobalFieldsIdsMap, UserError};
use crate::FieldsIdsMap;
#[test]
fn test_tokenize_document() {
@@ -227,7 +231,6 @@ mod test {
Default::default(),
Default::default(),
Default::default(),
Default::default(),
None,
None,
Default::default(),
@@ -248,19 +251,15 @@ mod test {
let document = Versions::single(document);
let document = DocumentFromVersions::new(&document);
let mut should_tokenize = |field_name: &str| {
let Some(field_id) = global_fields_ids_map.id_or_insert(field_name) else {
return Err(UserError::AttributeLimitReached.into());
};
Ok((field_id, PatternMatch::Match))
};
document_tokenizer
.tokenize_document(document, &mut should_tokenize, &mut |_fname, fid, pos, word| {
words.insert([fid, pos], word.to_string());
Ok(())
})
.tokenize_document(
document,
&mut global_fields_ids_map,
&mut |_fname, fid, pos, word| {
words.insert([fid, pos], word.to_string());
Ok(())
},
)
.unwrap();
snapshot!(format!("{:#?}", words), @r###"

View File

@@ -1,6 +1,5 @@
use std::cell::RefCell;
use std::fmt::Debug;
use std::sync::RwLock;
use bumpalo::collections::Vec as BVec;
use bumpalo::Bump;
@@ -28,10 +27,7 @@ use crate::vector::extractor::{
use crate::vector::session::{EmbedSession, Input, Metadata, OnEmbed};
use crate::vector::settings::ReindexAction;
use crate::vector::{Embedding, RuntimeEmbedder, RuntimeEmbedders, RuntimeFragment};
use crate::{
DocumentId, FieldDistribution, GlobalFieldsIdsMap, InternalError, Result, ThreadPoolNoAbort,
UserError,
};
use crate::{DocumentId, FieldDistribution, InternalError, Result, ThreadPoolNoAbort, UserError};
pub struct EmbeddingExtractor<'a, 'b> {
embedders: &'a RuntimeEmbedders,
@@ -325,15 +321,6 @@ impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
let old_embedders = self.settings_delta.old_embedders();
let unused_vectors_distribution = UnusedVectorsDistributionBump::new_in(&context.doc_alloc);
// We get a reference to the new and old fields ids maps but
// note that those are local versions where updates to them
// will not be reflected in the database. It's not an issue
// because new settings do not generate new fields.
let new_fields_ids_map = RwLock::new(self.settings_delta.new_fields_ids_map().clone());
let new_fields_ids_map = RefCell::new(GlobalFieldsIdsMap::new(&new_fields_ids_map));
let old_fields_ids_map = RwLock::new(self.settings_delta.old_fields_ids_map().clone());
let old_fields_ids_map = RefCell::new(GlobalFieldsIdsMap::new(&old_fields_ids_map));
let mut all_chunks = BVec::with_capacity_in(embedders.len(), &context.doc_alloc);
let embedder_configs = context.index.embedding_configs();
for (embedder_name, action) in self.settings_delta.embedder_actions().iter() {
@@ -409,7 +396,6 @@ impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
if !must_regenerate {
continue;
}
// we need to regenerate the prompts for the document
chunks.settings_change_autogenerated(
document.docid(),
@@ -420,8 +406,7 @@ impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
context.db_fields_ids_map,
)?,
self.settings_delta,
&old_fields_ids_map,
&new_fields_ids_map,
context.new_fields_ids_map,
&unused_vectors_distribution,
old_is_user_provided,
fragments_changed,
@@ -457,8 +442,7 @@ impl<'extractor, SD: SettingsDelta + Sync> SettingsChangeExtractor<'extractor>
context.db_fields_ids_map,
)?,
self.settings_delta,
&old_fields_ids_map,
&new_fields_ids_map,
context.new_fields_ids_map,
&unused_vectors_distribution,
old_is_user_provided,
true,
@@ -654,8 +638,7 @@ impl<'a, 'b, 'extractor> Chunks<'a, 'b, 'extractor> {
external_docid: &'a str,
document: D,
settings_delta: &SD,
old_fields_ids_map: &'a RefCell<GlobalFieldsIdsMap<'a>>,
new_fields_ids_map: &'a RefCell<GlobalFieldsIdsMap<'a>>,
fields_ids_map: &'a RefCell<crate::GlobalFieldsIdsMap>,
unused_vectors_distribution: &UnusedVectorsDistributionBump<'a>,
old_is_user_provided: bool,
full_reindex: bool,
@@ -750,17 +733,10 @@ impl<'a, 'b, 'extractor> Chunks<'a, 'b, 'extractor> {
old_embedder.as_ref().map(|old_embedder| &old_embedder.document_template)
};
let extractor = DocumentTemplateExtractor::new(
document_template,
doc_alloc,
new_fields_ids_map,
);
let extractor =
DocumentTemplateExtractor::new(document_template, doc_alloc, fields_ids_map);
let old_extractor = old_document_template.map(|old_document_template| {
DocumentTemplateExtractor::new(
old_document_template,
doc_alloc,
old_fields_ids_map,
)
DocumentTemplateExtractor::new(old_document_template, doc_alloc, fields_ids_map)
});
let metadata =
Metadata { docid, external_docid, extractor_id: extractor.extractor_id() };

View File

@@ -372,10 +372,11 @@ where
SD: SettingsDelta + Sync,
{
// Create the list of document ids to extract
let index = indexing_context.index;
let rtxn = index.read_txn()?;
let all_document_ids = index.documents_ids(&rtxn)?.into_iter().collect::<Vec<_>>();
let primary_key = primary_key_from_db(index, &rtxn, &indexing_context.db_fields_ids_map)?;
let rtxn = indexing_context.index.read_txn()?;
let all_document_ids =
indexing_context.index.documents_ids(&rtxn)?.into_iter().collect::<Vec<_>>();
let primary_key =
primary_key_from_db(indexing_context.index, &rtxn, &indexing_context.db_fields_ids_map)?;
let documents = DocumentsIndentifiers::new(&all_document_ids, primary_key);
let span =
@@ -390,133 +391,6 @@ where
extractor_allocs,
)?;
{
let WordDocidsCaches {
word_docids,
word_fid_docids,
exact_word_docids,
word_position_docids,
fid_word_count_docids,
} = {
let span = tracing::trace_span!(target: "indexing::documents::extract", "word_docids");
let _entered = span.enter();
SettingsChangeWordDocidsExtractors::run_extraction(
settings_delta,
&documents,
indexing_context,
extractor_allocs,
IndexingStep::ExtractingWords,
)?
};
indexing_context.progress.update_progress(IndexingStep::MergingWordCaches);
{
let span = tracing::trace_span!(target: "indexing::documents::merge", "word_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(MergingWordCache::WordDocids);
merge_and_send_docids(
word_docids,
index.word_docids.remap_types(),
index,
extractor_sender.docids::<WordDocids>(),
&indexing_context.must_stop_processing,
)?;
}
{
let span =
tracing::trace_span!(target: "indexing::documents::merge", "word_fid_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(MergingWordCache::WordFieldIdDocids);
merge_and_send_docids(
word_fid_docids,
index.word_fid_docids.remap_types(),
index,
extractor_sender.docids::<WordFidDocids>(),
&indexing_context.must_stop_processing,
)?;
}
{
let span =
tracing::trace_span!(target: "indexing::documents::merge", "exact_word_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(MergingWordCache::ExactWordDocids);
merge_and_send_docids(
exact_word_docids,
index.exact_word_docids.remap_types(),
index,
extractor_sender.docids::<ExactWordDocids>(),
&indexing_context.must_stop_processing,
)?;
}
{
let span =
tracing::trace_span!(target: "indexing::documents::merge", "word_position_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(MergingWordCache::WordPositionDocids);
merge_and_send_docids(
word_position_docids,
index.word_position_docids.remap_types(),
index,
extractor_sender.docids::<WordPositionDocids>(),
&indexing_context.must_stop_processing,
)?;
}
{
let span =
tracing::trace_span!(target: "indexing::documents::merge", "fid_word_count_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(MergingWordCache::FieldIdWordCountDocids);
merge_and_send_docids(
fid_word_count_docids,
index.field_id_word_count_docids.remap_types(),
index,
extractor_sender.docids::<FidWordCountDocids>(),
&indexing_context.must_stop_processing,
)?;
}
}
// Run the proximity extraction only if the precision is ByWord.
let new_proximity_precision = settings_delta.new_proximity_precision();
if *new_proximity_precision == ProximityPrecision::ByWord {
let caches = {
let span = tracing::trace_span!(target: "indexing::documents::extract", "word_pair_proximity_docids");
let _entered = span.enter();
SettingsChangeWordPairProximityDocidsExtractors::run_extraction(
settings_delta,
&documents,
indexing_context,
extractor_allocs,
IndexingStep::ExtractingWordProximity,
)?
};
{
let span = tracing::trace_span!(target: "indexing::documents::merge", "word_pair_proximity_docids");
let _entered = span.enter();
indexing_context.progress.update_progress(IndexingStep::MergingWordProximity);
merge_and_send_docids(
caches,
index.word_pair_proximity_docids.remap_types(),
index,
extractor_sender.docids::<WordPairProximityDocids>(),
&indexing_context.must_stop_processing,
)?;
}
}
'vectors: {
if settings_delta.embedder_actions().is_empty() {
break 'vectors;

View File

@@ -1,4 +1,4 @@
use std::collections::{BTreeMap, BTreeSet};
use std::collections::BTreeMap;
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, Once, RwLock};
use std::thread::{self, Builder};
@@ -8,11 +8,9 @@ use document_changes::{DocumentChanges, IndexingContext};
pub use document_deletion::DocumentDeletion;
pub use document_operation::{DocumentOperation, PayloadStats};
use hashbrown::HashMap;
use heed::types::DecodeIgnore;
use heed::{BytesDecode, Database, RoTxn, RwTxn};
use heed::{RoTxn, RwTxn};
pub use partial_dump::PartialDump;
pub use post_processing::recompute_word_fst_from_word_docids_database;
pub use settings_changes::settings_change_extract;
pub use update_by_function::UpdateByFunction;
pub use write::ChannelCongestion;
use write::{build_vectors, update_index, write_to_db};
@@ -22,18 +20,12 @@ use super::steps::IndexingStep;
use super::thread_local::ThreadLocal;
use crate::documents::PrimaryKey;
use crate::fields_ids_map::metadata::{FieldIdMapWithMetadata, MetadataBuilder};
use crate::heed_codec::StrBEU16Codec;
use crate::progress::{EmbedderStats, Progress};
use crate::proximity::ProximityPrecision;
use crate::update::new::steps::SettingsIndexerStep;
use crate::update::new::FacetFieldIdsDelta;
use crate::update::settings::SettingsDelta;
use crate::update::GrenadParameters;
use crate::vector::settings::{EmbedderAction, RemoveFragments, WriteBackToDocuments};
use crate::vector::{Embedder, RuntimeEmbedders, VectorStore};
use crate::{
Error, FieldsIdsMap, GlobalFieldsIdsMap, Index, InternalError, Result, ThreadPoolNoAbort,
};
use crate::{FieldsIdsMap, GlobalFieldsIdsMap, Index, InternalError, Result, ThreadPoolNoAbort};
#[cfg(not(feature = "enterprise"))]
pub mod community_edition;
@@ -250,20 +242,6 @@ where
SD: SettingsDelta + Sync,
{
delete_old_embedders_and_fragments(wtxn, index, settings_delta)?;
delete_old_fid_based_databases(wtxn, index, settings_delta, must_stop_processing, progress)?;
// Clear word_pair_proximity if byWord to byAttribute
let old_proximity_precision = settings_delta.old_proximity_precision();
let new_proximity_precision = settings_delta.new_proximity_precision();
if *old_proximity_precision == ProximityPrecision::ByWord
&& *new_proximity_precision == ProximityPrecision::ByAttribute
{
index.word_pair_proximity_docids.clear(wtxn)?;
}
// TODO delete useless searchable databases
// - Clear fid_prefix_* in the post processing
// - clear the prefix + fid_prefix if setting `PrefixSearch` is enabled
let mut bbbuffers = Vec::new();
let finished_extraction = AtomicBool::new(false);
@@ -322,8 +300,6 @@ where
.unwrap()
})?;
let global_fields_ids_map = GlobalFieldsIdsMap::new(&new_fields_ids_map);
let new_embedders = settings_delta.new_embedders();
let embedder_actions = settings_delta.embedder_actions();
let index_embedder_category_ids = settings_delta.new_embedder_category_id();
@@ -358,18 +334,6 @@ where
})
.unwrap()?;
pool.install(|| {
// WARN When implementing the facets don't forget this
let facet_field_ids_delta = FacetFieldIdsDelta::new(0, 0);
post_processing::post_process(
indexing_context,
wtxn,
global_fields_ids_map,
facet_field_ids_delta,
)
})
.unwrap()?;
indexing_context.progress.update_progress(IndexingStep::BuildingGeoJson);
index.cellulite.build(
wtxn,
@@ -499,106 +463,6 @@ where
Ok(())
}
/// Deletes entries referring to the provided
/// fids from the fid-based databases.
fn delete_old_fid_based_databases<SD, MSP>(
wtxn: &mut RwTxn<'_>,
index: &Index,
settings_delta: &SD,
must_stop_processing: &MSP,
progress: &Progress,
) -> Result<()>
where
SD: SettingsDelta + Sync,
MSP: Fn() -> bool + Sync,
{
let fids_to_delete: Option<BTreeSet<_>> = {
let rtxn = index.read_txn()?;
let fields_ids_map = index.fields_ids_map(&rtxn)?;
let old_searchable_attributes = settings_delta.old_searchable_attributes().as_ref();
let new_searchable_attributes = settings_delta.new_searchable_attributes().as_ref();
old_searchable_attributes.zip(new_searchable_attributes).map(|(old, new)| {
old.iter()
// Ignore the field if it is not searchable anymore
// or if it was never referenced in any document
.filter_map(|name| if new.contains(name) { None } else { fields_ids_map.id(name) })
.collect()
})
};
let Some(fids_to_delete) = fids_to_delete else {
return Ok(());
};
progress.update_progress(SettingsIndexerStep::DeletingOldWordFidDocids);
delete_old_word_fid_docids(wtxn, index.word_fid_docids, must_stop_processing, &fids_to_delete)?;
progress.update_progress(SettingsIndexerStep::DeletingOldFidWordCountDocids);
delete_old_fid_word_count_docids(wtxn, index, must_stop_processing, &fids_to_delete)?;
progress.update_progress(SettingsIndexerStep::DeletingOldWordPrefixFidDocids);
delete_old_word_fid_docids(
wtxn,
index.word_prefix_fid_docids,
must_stop_processing,
&fids_to_delete,
)?;
Ok(())
}
fn delete_old_word_fid_docids<'txn, MSP, DC>(
wtxn: &mut RwTxn<'txn>,
database: Database<StrBEU16Codec, DC>,
must_stop_processing: &MSP,
fids_to_delete: &BTreeSet<u16>,
) -> Result<(), Error>
where
MSP: Fn() -> bool + Sync,
DC: BytesDecode<'txn>,
{
let mut iter = database.iter_mut(wtxn)?.remap_data_type::<DecodeIgnore>();
while let Some(((_word, fid), ())) = iter.next().transpose()? {
// TODO should I call it that often?
if must_stop_processing() {
return Err(Error::InternalError(InternalError::AbortedIndexation));
}
if fids_to_delete.contains(&fid) {
// safety: We don't keep any references to the data.
unsafe { iter.del_current()? };
}
}
Ok(())
}
fn delete_old_fid_word_count_docids<MSP>(
wtxn: &mut RwTxn<'_>,
index: &Index,
must_stop_processing: &MSP,
fids_to_delete: &BTreeSet<u16>,
) -> Result<(), Error>
where
MSP: Fn() -> bool + Sync,
{
let db = index.field_id_word_count_docids.remap_data_type::<DecodeIgnore>();
for &fid_to_delete in fids_to_delete {
if must_stop_processing() {
return Err(Error::InternalError(InternalError::AbortedIndexation));
}
let mut iter = db.prefix_iter_mut(wtxn, &(fid_to_delete, 0))?;
while let Some(((fid, _word_count), ())) = iter.next().transpose()? {
debug_assert_eq!(fid, fid_to_delete);
// safety: We don't keep any references to the data.
unsafe { iter.del_current()? };
}
}
Ok(())
}
fn indexer_memory_settings(
current_num_threads: usize,
grenad_parameters: GrenadParameters,

View File

@@ -28,9 +28,6 @@ make_enum_progress! {
ChangingVectorStore,
UsingStableIndexer,
UsingExperimentalIndexer,
DeletingOldWordFidDocids,
DeletingOldFidWordCountDocids,
DeletingOldWordPrefixFidDocids,
}
}

View File

@@ -1589,33 +1589,33 @@ impl<'a, 't, 'i> Settings<'a, 't, 'i> {
// only use the new indexer when only the embedder possibly changed
if let Self {
searchable_fields: _,
searchable_fields: Setting::NotSet,
displayed_fields: Setting::NotSet,
filterable_fields: Setting::NotSet,
sortable_fields: Setting::NotSet,
criteria: Setting::NotSet,
stop_words: Setting::NotSet, // TODO (require force reindexing of searchables)
non_separator_tokens: Setting::NotSet, // TODO (require force reindexing of searchables)
separator_tokens: Setting::NotSet, // TODO (require force reindexing of searchables)
dictionary: Setting::NotSet, // TODO (require force reindexing of searchables)
stop_words: Setting::NotSet,
non_separator_tokens: Setting::NotSet,
separator_tokens: Setting::NotSet,
dictionary: Setting::NotSet,
distinct_field: Setting::NotSet,
synonyms: Setting::NotSet,
primary_key: Setting::NotSet,
authorize_typos: Setting::NotSet,
min_word_len_two_typos: Setting::NotSet,
min_word_len_one_typo: Setting::NotSet,
exact_words: Setting::NotSet, // TODO (require force reindexing of searchables)
exact_attributes: _,
exact_words: Setting::NotSet,
exact_attributes: Setting::NotSet,
max_values_per_facet: Setting::NotSet,
sort_facet_values_by: Setting::NotSet,
pagination_max_total_hits: Setting::NotSet,
proximity_precision: _,
proximity_precision: Setting::NotSet,
embedder_settings: _,
search_cutoff: Setting::NotSet,
localized_attributes_rules: Setting::NotSet, // TODO to start with
prefix_search: Setting::NotSet, // TODO continue with this
localized_attributes_rules: Setting::NotSet,
prefix_search: Setting::NotSet,
facet_search: Setting::NotSet,
disable_on_numbers: Setting::NotSet, // TODO (require force reindexing of searchables)
disable_on_numbers: Setting::NotSet,
chat: Setting::NotSet,
vector_store: Setting::NotSet,
wtxn: _,
@@ -1632,12 +1632,10 @@ impl<'a, 't, 'i> Settings<'a, 't, 'i> {
// Update index settings
let embedding_config_updates = self.update_embedding_configs()?;
self.update_user_defined_searchable_attributes()?;
self.update_exact_attributes()?;
self.update_proximity_precision()?;
// Note that we don't need to update the searchables here,
// as it will be done after the settings update.
let new_inner_settings = InnerIndexSettings::from_index(self.index, self.wtxn, None)?;
let mut new_inner_settings =
InnerIndexSettings::from_index(self.index, self.wtxn, None)?;
new_inner_settings.recompute_searchables(self.wtxn, self.index)?;
let primary_key_id = self
.index
@@ -2064,12 +2062,9 @@ impl InnerIndexSettings {
let sortable_fields = index.sortable_fields(rtxn)?;
let asc_desc_fields = index.asc_desc_fields(rtxn)?;
let distinct_field = index.distinct_field(rtxn)?.map(|f| f.to_string());
let user_defined_searchable_attributes = match index.user_defined_searchable_fields(rtxn)? {
Some(fields) if fields.contains(&"*") => None,
Some(fields) => Some(fields.into_iter().map(|f| f.to_string()).collect()),
None => None,
};
let user_defined_searchable_attributes = index
.user_defined_searchable_fields(rtxn)?
.map(|fields| fields.into_iter().map(|f| f.to_string()).collect());
let builder = MetadataBuilder::from_index(index, rtxn)?;
let fields_ids_map = FieldIdMapWithMetadata::new(fields_ids_map, builder);
let disabled_typos_terms = index.disabled_typos_terms(rtxn)?;
@@ -2583,20 +2578,8 @@ fn deserialize_sub_embedder(
/// Implement this trait for the settings delta type.
/// This is used in the new settings update flow and will allow to easily replace the old settings delta type: `InnerIndexSettingsDiff`.
pub trait SettingsDelta {
fn old_fields_ids_map(&self) -> &FieldIdMapWithMetadata;
fn new_fields_ids_map(&self) -> &FieldIdMapWithMetadata;
fn old_searchable_attributes(&self) -> &Option<Vec<String>>;
fn new_searchable_attributes(&self) -> &Option<Vec<String>>;
fn old_disabled_typos_terms(&self) -> &DisabledTyposTerms;
fn new_disabled_typos_terms(&self) -> &DisabledTyposTerms;
fn old_proximity_precision(&self) -> &ProximityPrecision;
fn new_proximity_precision(&self) -> &ProximityPrecision;
fn old_embedders(&self) -> &RuntimeEmbedders;
fn new_embedders(&self) -> &RuntimeEmbedders;
fn old_embedders(&self) -> &RuntimeEmbedders;
fn new_embedder_category_id(&self) -> &HashMap<String, u8>;
fn embedder_actions(&self) -> &BTreeMap<String, EmbedderAction>;
fn try_for_each_fragment_diff<F, E>(
@@ -2606,6 +2589,7 @@ pub trait SettingsDelta {
) -> std::result::Result<(), E>
where
F: FnMut(FragmentDiff) -> std::result::Result<(), E>;
fn new_fields_ids_map(&self) -> &FieldIdMapWithMetadata;
}
pub struct FragmentDiff<'a> {
@@ -2614,47 +2598,26 @@ pub struct FragmentDiff<'a> {
}
impl SettingsDelta for InnerIndexSettingsDiff {
fn old_fields_ids_map(&self) -> &FieldIdMapWithMetadata {
&self.old.fields_ids_map
}
fn new_fields_ids_map(&self) -> &FieldIdMapWithMetadata {
&self.new.fields_ids_map
}
fn old_searchable_attributes(&self) -> &Option<Vec<String>> {
&self.old.user_defined_searchable_attributes
}
fn new_searchable_attributes(&self) -> &Option<Vec<String>> {
&self.new.user_defined_searchable_attributes
}
fn old_disabled_typos_terms(&self) -> &DisabledTyposTerms {
&self.old.disabled_typos_terms
}
fn new_disabled_typos_terms(&self) -> &DisabledTyposTerms {
&self.new.disabled_typos_terms
}
fn old_proximity_precision(&self) -> &ProximityPrecision {
&self.old.proximity_precision
}
fn new_proximity_precision(&self) -> &ProximityPrecision {
&self.new.proximity_precision
fn new_embedders(&self) -> &RuntimeEmbedders {
&self.new.runtime_embedders
}
fn old_embedders(&self) -> &RuntimeEmbedders {
&self.old.runtime_embedders
}
fn new_embedders(&self) -> &RuntimeEmbedders {
&self.new.runtime_embedders
}
fn new_embedder_category_id(&self) -> &HashMap<String, u8> {
&self.new.embedder_category_id
}
fn embedder_actions(&self) -> &BTreeMap<String, EmbedderAction> {
&self.embedding_config_updates
}
fn new_fields_ids_map(&self) -> &FieldIdMapWithMetadata {
&self.new.fields_ids_map
}
fn try_for_each_fragment_diff<F, E>(
&self,
embedder_name: &str,

View File

@@ -14,21 +14,28 @@ fn set_and_reset_searchable_fields() {
let index = TempIndex::new();
// First we send 3 documents with ids from 1 to 3.
let mut wtxn = index.write_txn().unwrap();
index
.add_documents(documents!([
{ "id": 1, "name": "kevin", "age": 23 },
{ "id": 2, "name": "kevina", "age": 21},
{ "id": 3, "name": "benoit", "age": 34 }
]))
.add_documents_using_wtxn(
&mut wtxn,
documents!([
{ "id": 1, "name": "kevin", "age": 23 },
{ "id": 2, "name": "kevina", "age": 21},
{ "id": 3, "name": "benoit", "age": 34 }
]),
)
.unwrap();
// We change the searchable fields to be the "name" field only.
index
.update_settings(|settings| {
.update_settings_using_wtxn(&mut wtxn, |settings| {
settings.set_searchable_fields(vec!["name".into()]);
})
.unwrap();
wtxn.commit().unwrap();
db_snap!(index, fields_ids_map, @r###"
0 id |
1 name |

View File

@@ -22,6 +22,7 @@ reqwest = { version = "0.12.24", features = [
"json",
"rustls-tls",
], default-features = false }
semver = "1.0.27"
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.145"
sha2 = "0.10.9"
@@ -42,4 +43,3 @@ tracing = "0.1.41"
tracing-subscriber = "0.3.20"
tracing-trace = { version = "0.1.0", path = "../tracing-trace" }
uuid = { version = "1.18.1", features = ["v7", "serde"] }
similar-asserts = "1.7.0"

View File

@@ -3,22 +3,21 @@ use std::io::{Read as _, Seek as _, Write as _};
use anyhow::{bail, Context};
use futures_util::TryStreamExt as _;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
use sha2::Digest;
use super::client::Client;
#[derive(Serialize, Deserialize, Clone, Debug)]
#[derive(Deserialize, Clone)]
pub struct Asset {
pub local_location: Option<String>,
pub remote_location: Option<String>,
#[serde(default, skip_serializing_if = "AssetFormat::is_default")]
#[serde(default)]
pub format: AssetFormat,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub sha256: Option<String>,
}
#[derive(Serialize, Deserialize, Default, Copy, Clone, Debug)]
#[derive(Deserialize, Default, Copy, Clone)]
pub enum AssetFormat {
#[default]
Auto,
@@ -28,10 +27,6 @@ pub enum AssetFormat {
}
impl AssetFormat {
fn is_default(&self) -> bool {
matches!(self, AssetFormat::Auto)
}
pub fn to_content_type(self, filename: &str) -> &'static str {
match self {
AssetFormat::Auto => Self::auto_detect(filename).to_content_type(filename),
@@ -171,14 +166,7 @@ fn check_sha256(name: &str, asset: &Asset, mut file: std::fs::File) -> anyhow::R
}
}
None => {
let msg = match name.starts_with("meilisearch-v") {
true => "Please add it to xtask/src/test/versions.rs",
false => "Please add it to workload file",
};
tracing::warn!(
sha256 = file_hash,
"Skipping hash for asset {name} that doesn't have one. {msg}"
);
tracing::warn!(sha256 = file_hash, "Skipping hash for asset {name} that doesn't have one. Please add it to workload file");
true
}
})

View File

@@ -1,5 +1,5 @@
use anyhow::Context;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
#[derive(Debug, Clone)]
pub struct Client {
@@ -61,7 +61,7 @@ impl Client {
}
}
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
pub enum Method {
Get,

View File

@@ -0,0 +1,194 @@
use std::collections::BTreeMap;
use std::fmt::Display;
use std::io::Read as _;
use anyhow::{bail, Context as _};
use serde::Deserialize;
use super::assets::{fetch_asset, Asset};
use super::client::{Client, Method};
#[derive(Clone, Deserialize)]
pub struct Command {
pub route: String,
pub method: Method,
#[serde(default)]
pub body: Body,
#[serde(default)]
pub synchronous: SyncMode,
}
#[derive(Default, Clone, Deserialize)]
#[serde(untagged)]
pub enum Body {
Inline {
inline: serde_json::Value,
},
Asset {
asset: String,
},
#[default]
Empty,
}
impl Body {
pub fn get(
self,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<Option<(Vec<u8>, &'static str)>> {
Ok(match self {
Body::Inline { inline: body } => Some((
serde_json::to_vec(&body)
.context("serializing to bytes")
.context("while getting inline body")?,
"application/json",
)),
Body::Asset { asset: name } => Some({
let context = || format!("while getting body from asset '{name}'");
let (mut file, format) =
fetch_asset(&name, assets, asset_folder).with_context(context)?;
let mut buf = Vec::new();
file.read_to_end(&mut buf).with_context(context)?;
(buf, format.to_content_type(&name))
}),
Body::Empty => None,
})
}
}
impl Display for Command {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?} {} ({:?})", self.method, self.route, self.synchronous)
}
}
#[derive(Default, Debug, Clone, Copy, Deserialize)]
pub enum SyncMode {
DontWait,
#[default]
WaitForResponse,
WaitForTask,
}
pub async fn run_batch(
client: &Client,
batch: &[Command],
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
let [.., last] = batch else { return Ok(()) };
let sync = last.synchronous;
let mut tasks = tokio::task::JoinSet::new();
for command in batch {
// FIXME: you probably don't want to copy assets every time here
tasks.spawn({
let client = client.clone();
let command = command.clone();
let assets = assets.clone();
let asset_folder = asset_folder.to_owned();
async move { run(client, command, &assets, &asset_folder).await }
});
}
while let Some(result) = tasks.join_next().await {
result
.context("panicked while executing command")?
.context("error while executing command")?;
}
match sync {
SyncMode::DontWait => {}
SyncMode::WaitForResponse => {}
SyncMode::WaitForTask => wait_for_tasks(client).await?,
}
Ok(())
}
async fn wait_for_tasks(client: &Client) -> anyhow::Result<()> {
loop {
let response = client
.get("tasks?statuses=enqueued,processing")
.send()
.await
.context("could not wait for tasks")?;
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response to JSON")
.context("could not wait for tasks")?;
match response.get("total") {
Some(serde_json::Value::Number(number)) => {
let number = number.as_u64().with_context(|| {
format!("waiting for tasks: could not parse 'total' as integer, got {}", number)
})?;
if number == 0 {
break;
} else {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
}
Some(thing_else) => {
bail!(format!(
"waiting for tasks: could not parse 'total' as a number, got '{thing_else}'"
))
}
None => {
bail!(format!(
"waiting for tasks: expected response to contain 'total', got '{response}'"
))
}
}
}
Ok(())
}
#[tracing::instrument(skip(client, command, assets, asset_folder), fields(command = %command))]
pub async fn run(
client: Client,
mut command: Command,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
// mem::take the body here to leave an empty body in its place, so that `command` is not partially moved out
let body = std::mem::take(&mut command.body)
.get(assets, asset_folder)
.with_context(|| format!("while getting body for command {command}"))?;
let request = client.request(command.method.into(), &command.route);
let request = if let Some((body, content_type)) = body {
request.body(body).header(reqwest::header::CONTENT_TYPE, content_type)
} else {
request
};
let response =
request.send().await.with_context(|| format!("error sending command: {}", command))?;
let code = response.status();
if code.is_client_error() {
tracing::error!(%command, %code, "error in workload file");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing error in workload file when sending command")?;
bail!("error in workload file: server responded with error code {code} and '{response}'")
} else if code.is_server_error() {
tracing::error!(%command, %code, "server error");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing server error when sending command")?;
bail!("server error: server responded with error code {code} and '{response}'")
}
Ok(())
}
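
The `run_batch` helper above always drives the commands of a batch concurrently and only consults the `synchronous` mode of the batch's last command; the grouping itself is done by the caller with `split_inclusive`, so a batch is a run of `DontWait` commands closed by the first command that waits. A minimal, self-contained sketch of that grouping rule (the command list below is illustrative, not taken from this changeset):

```rust
#[derive(Debug, Clone, Copy)]
enum SyncMode {
    DontWait,
    WaitForResponse,
    WaitForTask,
}

fn main() {
    let modes = [
        SyncMode::DontWait,
        SyncMode::DontWait,
        SyncMode::WaitForTask,
        SyncMode::WaitForResponse,
        SyncMode::DontWait,
    ];

    // Same predicate as in the workload runner: a batch is a run of `DontWait`
    // commands terminated by the first command that waits (if any).
    let batches: Vec<&[SyncMode]> =
        modes.split_inclusive(|mode| !matches!(mode, SyncMode::DontWait)).collect();

    // Prints three batches:
    // [DontWait, DontWait, WaitForTask], [WaitForResponse], [DontWait]
    for batch in &batches {
        println!("{batch:?}");
    }
}
```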

View File

@@ -7,9 +7,9 @@ use tokio::task::AbortHandle;
use tracing_trace::processor::span_stats::CallStats;
use uuid::Uuid;
use super::client::Client;
use super::env_info;
use super::workload::BenchWorkload;
use crate::common::client::Client;
use super::workload::Workload;
#[derive(Debug, Clone)]
pub enum DashboardClient {
@@ -89,7 +89,7 @@ impl DashboardClient {
pub async fn create_workload(
&self,
invocation_uuid: Uuid,
workload: &BenchWorkload,
workload: &Workload,
) -> anyhow::Result<Uuid> {
let Self::Client(dashboard_client) = self else { return Ok(Uuid::now_v7()) };

View File

@@ -1,18 +1,18 @@
use std::collections::{BTreeMap, HashMap};
use std::path::Path;
use std::collections::BTreeMap;
use std::time::Duration;
use anyhow::{bail, Context as _};
use tokio::process::Command as TokioCommand;
use tokio::process::Command;
use tokio::time;
use crate::common::client::Client;
use crate::common::command::{health_command, run as run_command};
use super::assets::Asset;
use super::client::Client;
use super::workload::Workload;
pub async fn kill_meili(mut meilisearch: tokio::process::Child) {
pub async fn kill(mut meilisearch: tokio::process::Child) {
let Some(id) = meilisearch.id() else { return };
match TokioCommand::new("kill").args(["--signal=TERM", &id.to_string()]).spawn() {
match Command::new("kill").args(["--signal=TERM", &id.to_string()]).spawn() {
Ok(mut cmd) => {
let Err(error) = cmd.wait().await else { return };
tracing::warn!(
@@ -49,8 +49,8 @@ pub async fn kill_meili(mut meilisearch: tokio::process::Child) {
}
#[tracing::instrument]
async fn build() -> anyhow::Result<()> {
let mut command = TokioCommand::new("cargo");
pub async fn build() -> anyhow::Result<()> {
let mut command = Command::new("cargo");
command.arg("build").arg("--release").arg("-p").arg("meilisearch");
command.kill_on_drop(true);
@@ -64,60 +64,29 @@ async fn build() -> anyhow::Result<()> {
Ok(())
}
#[tracing::instrument(skip(client, master_key))]
pub async fn start_meili(
#[tracing::instrument(skip(client, master_key, workload), fields(workload = workload.name))]
pub async fn start(
client: &Client,
master_key: Option<&str>,
extra_cli_args: &[String],
binary_path: Option<&Path>,
workload: &Workload,
asset_folder: &str,
mut command: Command,
) -> anyhow::Result<tokio::process::Child> {
let mut command = match binary_path {
Some(binary_path) => tokio::process::Command::new(binary_path),
None => {
build().await?;
let mut command = tokio::process::Command::new("cargo");
command
.arg("run")
.arg("--release")
.arg("-p")
.arg("meilisearch")
.arg("--bin")
.arg("meilisearch")
.arg("--");
command
}
};
command.arg("--db-path").arg("./_xtask_benchmark.ms");
if let Some(master_key) = master_key {
command.arg("--master-key").arg(master_key);
}
command.arg("--experimental-enable-logs-route");
for extra_arg in extra_cli_args.iter() {
for extra_arg in workload.extra_cli_args.iter() {
command.arg(extra_arg);
}
command.kill_on_drop(true);
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
if let Some(binary_path) = binary_path {
let mut perms = tokio::fs::metadata(binary_path)
.await
.with_context(|| format!("could not get metadata for {binary_path:?}"))?
.permissions();
perms.set_mode(perms.mode() | 0o111);
tokio::fs::set_permissions(binary_path, perms)
.await
.with_context(|| format!("could not set permissions for {binary_path:?}"))?;
}
}
let mut meilisearch = command.spawn().context("Error starting Meilisearch")?;
wait_for_health(client, &mut meilisearch).await?;
wait_for_health(client, &mut meilisearch, &workload.assets, asset_folder).await?;
Ok(meilisearch)
}
@@ -125,11 +94,11 @@ pub async fn start_meili(
async fn wait_for_health(
client: &Client,
meilisearch: &mut tokio::process::Child,
assets: &BTreeMap<String, Asset>,
asset_folder: &str,
) -> anyhow::Result<()> {
for i in 0..100 {
let res =
run_command(client, &health_command(), 0, &BTreeMap::new(), HashMap::new(), "", false)
.await;
let res = super::command::run(client.clone(), health_command(), assets, asset_folder).await;
if res.is_ok() {
// check that this is actually the current Meilisearch instance that answered us
if let Some(exit_code) =
@@ -153,6 +122,15 @@ async fn wait_for_health(
bail!("meilisearch is not responding")
}
pub async fn delete_db() {
let _ = tokio::fs::remove_dir_all("./_xtask_benchmark.ms").await;
fn health_command() -> super::command::Command {
super::command::Command {
route: "/health".into(),
method: super::client::Method::Get,
body: Default::default(),
synchronous: super::command::SyncMode::WaitForResponse,
}
}
pub fn delete_db() {
let _ = std::fs::remove_dir_all("./_xtask_benchmark.ms");
}

View File

@@ -1,36 +1,51 @@
mod assets;
mod client;
mod command;
mod dashboard;
mod env_info;
mod meili_process;
mod workload;
use crate::common::args::CommonArgs;
use crate::common::logs::setup_logs;
use crate::common::workload::Workload;
use std::{path::PathBuf, sync::Arc};
use std::io::LineWriter;
use std::path::PathBuf;
use anyhow::{bail, Context};
use anyhow::Context;
use clap::Parser;
use tracing_subscriber::fmt::format::FmtSpan;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::Layer;
use crate::common::client::Client;
pub use workload::BenchWorkload;
use self::client::Client;
use self::workload::Workload;
pub fn default_http_addr() -> String {
"127.0.0.1:7700".to_string()
}
pub fn default_report_folder() -> String {
"./bench/reports/".into()
}
pub fn default_asset_folder() -> String {
"./bench/assets/".into()
}
pub fn default_log_filter() -> String {
"info".into()
}
pub fn default_dashboard_url() -> String {
"http://localhost:9001".into()
}
/// Run benchmarks from a workload
#[derive(Parser, Debug)]
pub struct BenchDeriveArgs {
/// Common arguments shared with other commands
#[command(flatten)]
common: CommonArgs,
/// Meilisearch master keys
#[arg(long)]
pub master_key: Option<String>,
pub struct BenchArgs {
/// Filename of the workload file, pass multiple filenames
/// to run multiple workloads in the specified order.
///
/// Each workload run will get its own report file.
#[arg(value_name = "WORKLOAD_FILE", last = false)]
workload_file: Vec<PathBuf>,
/// URL of the dashboard.
#[arg(long, default_value_t = default_dashboard_url())]
@@ -44,14 +59,34 @@ pub struct BenchDeriveArgs {
#[arg(long, default_value_t = default_report_folder())]
report_folder: String,
/// Directory to store the remote assets.
#[arg(long, default_value_t = default_asset_folder())]
asset_folder: String,
/// Log directives
#[arg(short, long, default_value_t = default_log_filter())]
log_filter: String,
/// Benchmark dashboard API key
#[arg(long)]
api_key: Option<String>,
/// Meilisearch master keys
#[arg(long)]
master_key: Option<String>,
/// Authentication bearer for fetching assets
#[arg(long)]
assets_key: Option<String>,
/// Reason for the benchmark invocation
#[arg(short, long)]
reason: Option<String>,
/// The maximum time in seconds we allow for fetching the task queue before timing out.
#[arg(long, default_value_t = 60)]
tasks_queue_timeout_secs: u64,
/// The path to the binary to run.
///
/// If unspecified, runs `cargo run` after building Meilisearch with `cargo build`.
@@ -59,8 +94,18 @@ pub struct BenchDeriveArgs {
binary_path: Option<PathBuf>,
}
pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
setup_logs(&args.common.log_filter)?;
pub fn run(args: BenchArgs) -> anyhow::Result<()> {
// setup logs
let filter: tracing_subscriber::filter::Targets =
args.log_filter.parse().context("invalid --log-filter")?;
let subscriber = tracing_subscriber::registry().with(
tracing_subscriber::fmt::layer()
.with_writer(|| LineWriter::new(std::io::stderr()))
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(filter),
);
tracing::subscriber::set_global_default(subscriber).context("could not setup logging")?;
// fetch environment and build info
let env = env_info::Environment::generate_from_current_config();
@@ -71,11 +116,8 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
let _scope = rt.enter();
// setup clients
let assets_client = Client::new(
None,
args.common.assets_key.as_deref(),
Some(std::time::Duration::from_secs(3600)), // 1h
)?;
let assets_client =
Client::new(None, args.assets_key.as_deref(), Some(std::time::Duration::from_secs(3600)))?; // 1h
let dashboard_client = if args.no_dashboard {
dashboard::DashboardClient::new_dry()
@@ -92,11 +134,11 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
None,
)?;
let meili_client = Arc::new(Client::new(
let meili_client = Client::new(
Some("http://127.0.0.1:7700".into()),
args.master_key.as_deref(),
Some(std::time::Duration::from_secs(args.common.tasks_queue_timeout_secs)),
)?);
Some(std::time::Duration::from_secs(args.tasks_queue_timeout_secs)),
)?;
// enter runtime
@@ -104,11 +146,11 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
dashboard_client.send_machine_info(&env).await?;
let commit_message = build_info.commit_msg.unwrap_or_default().split('\n').next().unwrap();
let max_workloads = args.common.workload_file.len();
let max_workloads = args.workload_file.len();
let reason: Option<&str> = args.reason.as_deref();
let invocation_uuid = dashboard_client.create_invocation(build_info.clone(), commit_message, env, max_workloads, reason).await?;
tracing::info!(workload_count = args.common.workload_file.len(), "handling workload files");
tracing::info!(workload_count = args.workload_file.len(), "handling workload files");
// main task
let workload_runs = tokio::spawn(
@@ -116,17 +158,13 @@ pub fn run(args: BenchDeriveArgs) -> anyhow::Result<()> {
let dashboard_client = dashboard_client.clone();
let mut dashboard_urls = Vec::new();
async move {
for workload_file in args.common.workload_file.iter() {
for workload_file in args.workload_file.iter() {
let workload: Workload = serde_json::from_reader(
std::fs::File::open(workload_file)
.with_context(|| format!("error opening {}", workload_file.display()))?,
)
.with_context(|| format!("error parsing {} as JSON", workload_file.display()))?;
let Workload::Bench(workload) = workload else {
bail!("workload file {} is not a bench workload", workload_file.display());
};
let workload_name = workload.name.clone();
workload::execute(

View File

@@ -1,27 +1,24 @@
use std::collections::{BTreeMap, HashMap};
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{Seek as _, Write as _};
use std::path::Path;
use std::sync::Arc;
use anyhow::{bail, Context as _};
use futures_util::TryStreamExt as _;
use serde::{Deserialize, Serialize};
use serde::Deserialize;
use serde_json::json;
use tokio::task::JoinHandle;
use uuid::Uuid;
use super::assets::Asset;
use super::client::Client;
use super::command::SyncMode;
use super::dashboard::DashboardClient;
use super::BenchDeriveArgs;
use crate::common::assets::{self, Asset};
use crate::common::client::Client;
use crate::common::command::{run_commands, Command};
use crate::common::process::{self, delete_db, start_meili};
use super::BenchArgs;
use crate::bench::{assets, meili_process};
/// A bench workload.
/// Not to be confused with [a test workload](crate::test::workload::Workload).
#[derive(Serialize, Deserialize, Debug)]
pub struct BenchWorkload {
#[derive(Deserialize)]
pub struct Workload {
pub name: String,
pub run_count: u16,
pub extra_cli_args: Vec<String>,
@@ -29,34 +26,30 @@ pub struct BenchWorkload {
#[serde(default)]
pub target: String,
#[serde(default)]
pub precommands: Vec<Command>,
pub commands: Vec<Command>,
pub precommands: Vec<super::command::Command>,
pub commands: Vec<super::command::Command>,
}
async fn run_workload_commands(
async fn run_commands(
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
workload_uuid: Uuid,
workload: &BenchWorkload,
args: &BenchDeriveArgs,
workload: &Workload,
args: &BenchArgs,
run_number: u16,
) -> anyhow::Result<JoinHandle<anyhow::Result<File>>> {
let report_folder = &args.report_folder;
let workload_name = &workload.name;
let assets = Arc::new(workload.assets.clone());
let asset_folder = args.common.asset_folder.clone().leak();
run_commands(
meili_client,
&workload.precommands,
0,
&assets,
asset_folder,
&mut HashMap::new(),
false,
)
.await?;
for batch in workload
.precommands
.as_slice()
.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
super::command::run_batch(meili_client, batch, &workload.assets, &args.asset_folder)
.await?;
}
std::fs::create_dir_all(report_folder)
.with_context(|| format!("could not create report directory at {report_folder}"))?;
@@ -66,16 +59,14 @@ async fn run_workload_commands(
let report_handle = start_report(logs_client, trace_filename, &workload.target).await?;
run_commands(
meili_client,
&workload.commands,
0,
&assets,
asset_folder,
&mut HashMap::new(),
false,
)
.await?;
for batch in workload
.commands
.as_slice()
.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
super::command::run_batch(meili_client, batch, &workload.assets, &args.asset_folder)
.await?;
}
let processor =
stop_report(dashboard_client, logs_client, workload_uuid, report_filename, report_handle)
@@ -90,14 +81,14 @@ pub async fn execute(
assets_client: &Client,
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
invocation_uuid: Uuid,
master_key: Option<&str>,
workload: BenchWorkload,
args: &BenchDeriveArgs,
workload: Workload,
args: &BenchArgs,
binary_path: Option<&Path>,
) -> anyhow::Result<()> {
assets::fetch_assets(assets_client, &workload.assets, &args.common.asset_folder).await?;
assets::fetch_assets(assets_client, &workload.assets, &args.asset_folder).await?;
let workload_uuid = dashboard_client.create_workload(invocation_uuid, &workload).await?;
@@ -138,20 +129,38 @@ pub async fn execute(
async fn execute_run(
dashboard_client: &DashboardClient,
logs_client: &Client,
meili_client: &Arc<Client>,
meili_client: &Client,
workload_uuid: Uuid,
master_key: Option<&str>,
workload: &BenchWorkload,
args: &BenchDeriveArgs,
workload: &Workload,
args: &BenchArgs,
binary_path: Option<&Path>,
run_number: u16,
) -> anyhow::Result<tokio::task::JoinHandle<anyhow::Result<std::fs::File>>> {
delete_db().await;
meili_process::delete_db();
let run_command = match binary_path {
Some(binary_path) => tokio::process::Command::new(binary_path),
None => {
meili_process::build().await?;
let mut command = tokio::process::Command::new("cargo");
command
.arg("run")
.arg("--release")
.arg("-p")
.arg("meilisearch")
.arg("--bin")
.arg("meilisearch")
.arg("--");
command
}
};
let meilisearch =
start_meili(meili_client, master_key, &workload.extra_cli_args, binary_path).await?;
meili_process::start(meili_client, master_key, workload, &args.asset_folder, run_command)
.await?;
let processor = run_workload_commands(
let processor = run_commands(
dashboard_client,
logs_client,
meili_client,
@@ -162,7 +171,7 @@ async fn execute_run(
)
.await?;
process::kill_meili(meilisearch).await;
meili_process::kill(meilisearch).await;
tracing::info!(run_number, "Successful run");

View File

@@ -1,36 +0,0 @@
use clap::Parser;
use std::path::PathBuf;
pub fn default_asset_folder() -> String {
"./bench/assets/".into()
}
pub fn default_log_filter() -> String {
"info".into()
}
#[derive(Parser, Debug, Clone)]
pub struct CommonArgs {
/// Filename of the workload file, pass multiple filenames
/// to run multiple workloads in the specified order.
///
/// For benches, each workload run will get its own report file.
#[arg(value_name = "WORKLOAD_FILE", last = false)]
pub workload_file: Vec<PathBuf>,
/// Directory to store the remote assets.
#[arg(long, default_value_t = default_asset_folder())]
pub asset_folder: String,
/// Log directives
#[arg(short, long, default_value_t = default_log_filter())]
pub log_filter: String,
/// Authentication bearer for fetching assets
#[arg(long)]
pub assets_key: Option<String>,
/// The maximum time in seconds we allow for fetching the task queue before timing out.
#[arg(long, default_value_t = 60)]
pub tasks_queue_timeout_secs: u64,
}

View File

@@ -1,420 +0,0 @@
use std::collections::{BTreeMap, HashMap};
use std::fmt::Display;
use std::io::Read as _;
use std::sync::Arc;
use anyhow::{bail, Context as _};
use reqwest::StatusCode;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use similar_asserts::SimpleDiff;
use crate::common::assets::{fetch_asset, Asset};
use crate::common::client::{Client, Method};
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Command {
pub route: String,
pub method: Method,
#[serde(default)]
pub body: Body,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expected_status: Option<u16>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expected_response: Option<serde_json::Value>,
#[serde(default, skip_serializing_if = "HashMap::is_empty")]
pub register: HashMap<String, String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub api_key_variable: Option<String>,
#[serde(default)]
pub synchronous: SyncMode,
}
#[derive(Default, Clone, Serialize, Deserialize, Debug)]
#[serde(untagged)]
pub enum Body {
Inline {
inline: serde_json::Value,
},
Asset {
asset: String,
},
#[default]
Empty,
}
impl Body {
pub fn get(
self,
assets: &BTreeMap<String, Asset>,
registered: &HashMap<String, Value>,
asset_folder: &str,
) -> anyhow::Result<Option<(Vec<u8>, &'static str)>> {
Ok(match self {
Body::Inline { inline: mut body } => {
fn insert_variables(value: &mut Value, registered: &HashMap<String, Value>) {
match value {
Value::Null | Value::Bool(_) | Value::Number(_) => (),
Value::String(s) => {
if s.starts_with("{{") && s.ends_with("}}") {
let name = s[2..s.len() - 2].trim();
if let Some(replacement) = registered.get(name) {
*value = replacement.clone();
}
}
}
Value::Array(values) => {
for value in values {
insert_variables(value, registered);
}
}
Value::Object(map) => {
for (_key, value) in map.iter_mut() {
insert_variables(value, registered);
}
}
}
}
if !registered.is_empty() {
insert_variables(&mut body, registered);
}
Some((
serde_json::to_vec(&body)
.context("serializing to bytes")
.context("while getting inline body")?,
"application/json",
))
}
Body::Asset { asset: name } => Some({
let context = || format!("while getting body from asset '{name}'");
let (mut file, format) =
fetch_asset(&name, assets, asset_folder).with_context(context)?;
let mut buf = Vec::new();
file.read_to_end(&mut buf).with_context(context)?;
(buf, format.to_content_type(&name))
}),
Body::Empty => None,
})
}
}
impl Display for Command {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?} {} ({:?})", self.method, self.route, self.synchronous)
}
}
#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
pub enum SyncMode {
DontWait,
#[default]
WaitForResponse,
WaitForTask,
}
async fn run_batch(
client: &Arc<Client>,
batch: &[Command],
first_command_index: usize,
assets: &Arc<BTreeMap<String, Asset>>,
asset_folder: &'static str,
registered: &mut HashMap<String, Value>,
return_response: bool,
) -> anyhow::Result<Vec<(Value, StatusCode)>> {
let [.., last] = batch else { return Ok(Vec::new()) };
let sync = last.synchronous;
let batch_len = batch.len();
let mut tasks = Vec::with_capacity(batch.len());
for (index, command) in batch.iter().cloned().enumerate() {
let client2 = Arc::clone(client);
let assets2 = Arc::clone(assets);
let needs_response = return_response || !command.register.is_empty();
let registered2 = registered.clone(); // FIXME: cloning the whole map for each command is inefficient
tasks.push(tokio::spawn(async move {
run(
&client2,
&command,
first_command_index + index,
&assets2,
registered2,
asset_folder,
needs_response,
)
.await
}));
}
let mut outputs = Vec::with_capacity(if return_response { batch_len } else { 0 });
for (task, command) in tasks.into_iter().zip(batch.iter()) {
let output = task.await.context("task panicked")??;
if let Some(output) = output {
for (name, path) in &command.register {
let value = output
.0
.pointer(path)
.with_context(|| format!("could not find path '{path}' in response (required to register '{name}')"))?
.clone();
registered.insert(name.clone(), value);
}
if return_response {
outputs.push(output);
}
}
}
match sync {
SyncMode::DontWait => {}
SyncMode::WaitForResponse => {}
SyncMode::WaitForTask => wait_for_tasks(client).await?,
}
Ok(outputs)
}
async fn wait_for_tasks(client: &Client) -> anyhow::Result<()> {
loop {
let response = client
.get("tasks?statuses=enqueued,processing")
.send()
.await
.context("could not wait for tasks")?;
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response to JSON")
.context("could not wait for tasks")?;
match response.get("total") {
Some(serde_json::Value::Number(number)) => {
let number = number.as_u64().with_context(|| {
format!("waiting for tasks: could not parse 'total' as integer, got {}", number)
})?;
if number == 0 {
break;
} else {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
continue;
}
}
Some(thing_else) => {
bail!(format!(
"waiting for tasks: could not parse 'total' as a number, got '{thing_else}'"
))
}
None => {
bail!(format!(
"waiting for tasks: expected response to contain 'total', got '{response}'"
))
}
}
}
Ok(())
}
fn json_eq_ignore(reference: &Value, value: &Value) -> bool {
match reference {
Value::Null | Value::Bool(_) | Value::Number(_) => reference == value,
Value::String(s) => (s.starts_with('[') && s.ends_with(']')) || reference == value,
Value::Array(values) => match value {
Value::Array(other_values) => {
if values.len() != other_values.len() {
return false;
}
for (value, other_value) in values.iter().zip(other_values.iter()) {
if !json_eq_ignore(value, other_value) {
return false;
}
}
true
}
_ => false,
},
Value::Object(map) => match value {
Value::Object(other_map) => {
if map.len() != other_map.len() {
return false;
}
for (key, value) in map.iter() {
match other_map.get(key) {
Some(other_value) => {
if !json_eq_ignore(value, other_value) {
return false;
}
}
None => return false,
}
}
true
}
_ => false,
},
}
}
#[tracing::instrument(skip(client, command, assets, registered, asset_folder), fields(command = %command))]
pub async fn run(
client: &Client,
command: &Command,
command_index: usize,
assets: &BTreeMap<String, Asset>,
registered: HashMap<String, Value>,
asset_folder: &str,
return_value: bool,
) -> anyhow::Result<Option<(Value, StatusCode)>> {
// Try to replace variables in the route
let mut route = &command.route;
let mut owned_route;
if !registered.is_empty() {
while let (Some(pos1), Some(pos2)) = (route.find("{{"), route.rfind("}}")) {
if pos2 > pos1 {
let name = route[pos1 + 2..pos2].trim();
if let Some(replacement) = registered.get(name).and_then(|r| r.as_str()) {
let mut new_route = String::new();
new_route.push_str(&route[..pos1]);
new_route.push_str(replacement);
new_route.push_str(&route[pos2 + 2..]);
owned_route = new_route;
route = &owned_route;
continue;
}
}
break;
}
}
// Clone the body here so that `command`, which is only borrowed, is not partially moved out
let body = command
.body
.clone()
.get(assets, &registered, asset_folder)
.with_context(|| format!("while getting body for command {command}"))?;
let mut request = client.request(command.method.into(), route);
// Replace the api key
if let Some(var_name) = &command.api_key_variable {
if let Some(api_key) = registered.get(var_name).and_then(|v| v.as_str()) {
request = request.header("Authorization", format!("Bearer {api_key}"));
} else {
bail!("could not find API key variable '{var_name}' in registered values");
}
}
let request = if let Some((body, content_type)) = body {
request.body(body).header(reqwest::header::CONTENT_TYPE, content_type)
} else {
request
};
let response =
request.send().await.with_context(|| format!("error sending command: {}", command))?;
let code = response.status();
if !return_value {
if let Some(expected_status) = command.expected_status {
if code.as_u16() != expected_status {
let response = response
.text()
.await
.context("could not read response body as text")
.context("reading response body when checking expected status")?;
bail!("unexpected status code: got {}, expected {expected_status}, response body: '{response}'", code.as_u16());
}
} else if code.is_client_error() {
tracing::error!(%command, %code, "error in workload file");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing error in workload file when sending command")?;
bail!(
"error in workload file: server responded with error code {code} and '{response}'"
)
} else if code.is_server_error() {
tracing::error!(%command, %code, "server error");
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing server error when sending command")?;
bail!("server error: server responded with error code {code} and '{response}'")
}
}
if let Some(expected_response) = &command.expected_response {
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing response when checking expected response")?;
if return_value {
return Ok(Some((response, code)));
}
if !json_eq_ignore(expected_response, &response) {
let expected_pretty = serde_json::to_string_pretty(expected_response)
.context("serializing expected response as pretty JSON")?;
let response_pretty = serde_json::to_string_pretty(&response)
.context("serializing response as pretty JSON")?;
let diff = SimpleDiff::from_str(&expected_pretty, &response_pretty, "expected", "got");
bail!("command #{command_index} unexpected response:\n{diff}");
}
} else if return_value {
let response: serde_json::Value = response
.json()
.await
.context("could not deserialize response as JSON")
.context("parsing response when recording expected response")?;
return Ok(Some((response, code)));
}
Ok(None)
}
pub async fn run_commands(
client: &Arc<Client>,
commands: &[Command],
mut first_command_index: usize,
assets: &Arc<BTreeMap<String, Asset>>,
asset_folder: &'static str,
registered: &mut HashMap<String, Value>,
return_response: bool,
) -> anyhow::Result<Vec<(Value, StatusCode)>> {
let mut responses = Vec::new();
for batch in
commands.split_inclusive(|command| !matches!(command.synchronous, SyncMode::DontWait))
{
let mut new_responses = run_batch(
client,
batch,
first_command_index,
assets,
asset_folder,
registered,
return_response,
)
.await?;
responses.append(&mut new_responses);
first_command_index += batch.len();
}
Ok(responses)
}
pub fn health_command() -> Command {
Command {
route: "/health".into(),
method: crate::common::client::Method::Get,
body: Default::default(),
register: HashMap::new(),
synchronous: SyncMode::WaitForResponse,
expected_status: None,
expected_response: None,
api_key_variable: None,
}
}
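
The comparison above treats any expected string of the form `"[...]"` as a placeholder that matches whatever value the server actually returned, presumably so workload files can pin down stable fields while ignoring values such as UUIDs or dates. A minimal standalone sketch of that rule (simplified, not the function from this changeset):

```rust
// Placeholder matching: an expected string like "[uuid]" matches anything,
// everything else must match exactly (including array lengths and object keys).
use serde_json::{json, Value};

fn matches_with_placeholders(expected: &Value, actual: &Value) -> bool {
    match (expected, actual) {
        (Value::String(s), _) if s.starts_with('[') && s.ends_with(']') => true,
        (Value::Array(a), Value::Array(b)) => {
            a.len() == b.len() && a.iter().zip(b).all(|(x, y)| matches_with_placeholders(x, y))
        }
        (Value::Object(a), Value::Object(b)) => {
            a.len() == b.len()
                && a.iter().all(|(k, v)| b.get(k).is_some_and(|w| matches_with_placeholders(v, w)))
        }
        _ => expected == actual,
    }
}

fn main() {
    let expected = json!({ "uid": "[uuid]", "status": "succeeded" });
    let actual = json!({ "uid": "0192e5d3-ab12-7cde-9f00-1234567890ab", "status": "succeeded" });
    assert!(matches_with_placeholders(&expected, &actual));
}
```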

View File

@@ -1,113 +0,0 @@
use std::fmt::Display;
use std::path::PathBuf;
use serde::{Deserialize, Serialize};
mod release;
pub use release::{add_releases_to_assets, Release};
/// A binary to execute on a temporary DB.
///
/// - The URL of the binary will be in the form <http://localhost:PORT>, where `PORT`
/// is selected by the runner.
/// - The database will be temporary, cleaned before use, and will be selected by the runner.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Binary {
/// Describes how this binary should be instantiated
#[serde(flatten)]
pub source: BinarySource,
/// Extra CLI arguments to pass to the binary.
///
/// Should be Meilisearch CLI options.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub extra_cli_args: Vec<String>,
}
impl Display for Binary {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.source)?;
if !self.extra_cli_args.is_empty() {
write!(f, "with arguments: {:?}", self.extra_cli_args)?;
}
Ok(())
}
}
impl Binary {
pub fn as_release(&self) -> Option<&Release> {
if let BinarySource::Release(release) = &self.source {
Some(release)
} else {
None
}
}
pub fn binary_path(&self, asset_folder: &str) -> anyhow::Result<Option<PathBuf>> {
self.source.binary_path(asset_folder)
}
}
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields, tag = "source")]
/// Description of how to get a binary to instantiate.
pub enum BinarySource {
/// Compile and run the binary from the current repository.
Build {
#[serde(default)]
edition: Edition,
},
/// Get a release from GitHub
Release(Release),
/// Run the binary from the specified local path.
Path(PathBuf),
}
impl Display for BinarySource {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
BinarySource::Build { edition: Edition::Community } => {
f.write_str("git with community edition")
}
BinarySource::Build { edition: Edition::Enterprise } => {
f.write_str("git with enterprise edition")
}
BinarySource::Release(release) => write!(f, "{release}"),
BinarySource::Path(path) => write!(f, "binary at `{}`", path.display()),
}
}
}
impl Default for BinarySource {
fn default() -> Self {
Self::Build { edition: Default::default() }
}
}
impl BinarySource {
fn binary_path(&self, asset_folder: &str) -> anyhow::Result<Option<PathBuf>> {
Ok(match self {
Self::Release(release) => Some(release.binary_path(asset_folder)?),
Self::Build { .. } => None,
Self::Path(path) => Some(path.clone()),
})
}
}
#[derive(Debug, Clone, Copy, Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub enum Edition {
#[default]
Community,
Enterprise,
}
impl Edition {
fn binary_base(&self) -> &'static str {
match self {
Edition::Community => "meilisearch",
Edition::Enterprise => "meilisearch-enterprise",
}
}
}

View File

@@ -1,190 +0,0 @@
use std::collections::BTreeMap;
use std::fmt::Display;
use std::path::PathBuf;
use anyhow::Context;
use cargo_metadata::semver::Version;
use serde::{Deserialize, Serialize};
use super::Edition;
use crate::common::assets::{Asset, AssetFormat};
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", deny_unknown_fields)]
pub struct Release {
#[serde(default)]
pub edition: Edition,
pub version: Version,
}
impl Display for Release {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "v{}", self.version)?;
match self.edition {
Edition::Community => f.write_str(" Community Edition"),
Edition::Enterprise => f.write_str(" Enterprise Edition"),
}
}
}
impl Release {
pub fn binary_path(&self, asset_folder: &str) -> anyhow::Result<PathBuf> {
let mut asset_folder: PathBuf = asset_folder
.parse()
.with_context(|| format!("parsing asset folder `{asset_folder}` as a path"))?;
asset_folder.push(self.local_filename()?);
Ok(asset_folder)
}
fn local_filename(&self) -> anyhow::Result<String> {
let version = &self.version;
let arch = get_arch()?;
let base = self.edition.binary_base();
Ok(format!("{base}-{version}-{arch}"))
}
fn remote_filename(&self) -> anyhow::Result<String> {
let arch = get_arch()?;
let base = self.edition.binary_base();
Ok(format!("{base}-{arch}"))
}
async fn fetch_sha256(&self) -> anyhow::Result<String> {
let version = &self.version;
let asset_name = self.remote_filename()?;
// If version is lower than 1.15 there is no point in trying to get the sha256, GitHub didn't support it
if *version < Version::parse("1.15.0")? {
anyhow::bail!("version is lower than 1.15, sha256 not available");
}
#[derive(Deserialize)]
struct GithubReleaseAsset {
name: String,
digest: Option<String>,
}
#[derive(Deserialize)]
struct GithubRelease {
assets: Vec<GithubReleaseAsset>,
}
let url = format!(
"https://api.github.com/repos/meilisearch/meilisearch/releases/tags/v{version}"
);
let client = reqwest::Client::builder()
.user_agent("Meilisearch bench xtask")
.build()
.context("failed to build reqwest client")?;
let body = client.get(url).send().await?.text().await?;
let data: GithubRelease = serde_json::from_str(&body)?;
let digest = data
.assets
.into_iter()
.find(|asset| asset.name.as_str() == asset_name.as_str())
.with_context(|| format!("asset {asset_name} not found in release {self}"))?
.digest
.with_context(|| format!("asset {asset_name} has no digest"))?;
let sha256 = digest
.strip_prefix("sha256:")
.map(|s| s.to_string())
.context("invalid sha256 format")?;
Ok(sha256)
}
async fn add_asset(&self, assets: &mut BTreeMap<String, Asset>) -> anyhow::Result<()> {
let local_filename = self.local_filename()?;
let version = &self.version;
if assets.contains_key(&local_filename) {
return Ok(());
}
let remote_filename = self.remote_filename()?;
// Try to get the sha256 but it may fail if Github is rate limiting us
// We hardcode some values to speed up tests and avoid hitting Github
// Also, versions prior to 1.15 don't have sha256 available anyway
let sha256 = match local_filename.as_str() {
"meilisearch-1.12.0-macos-apple-silicon" => Some(String::from(
"3b384707a5df9edf66f9157f0ddb70dcd3ac84d4887149169cf93067d06717b7",
)),
_ => match self.fetch_sha256().await {
Ok(sha256) => Some(sha256),
Err(err) => {
tracing::warn!("failed to get sha256 for release {self}: {err}");
None
}
},
};
let url = format!(
"https://github.com/meilisearch/meilisearch/releases/download/v{version}/{remote_filename}"
);
let asset = Asset {
local_location: Some(local_filename.clone()),
remote_location: Some(url),
format: AssetFormat::Raw,
sha256,
};
assets.insert(local_filename, asset);
Ok(())
}
}
pub fn get_arch() -> anyhow::Result<&'static str> {
// linux-aarch64
#[cfg(all(target_os = "linux", target_arch = "aarch64"))]
{
Ok("linux-aarch64")
}
// linux-amd64
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
{
Ok("linux-amd64")
}
// macos-amd64
#[cfg(all(target_os = "macos", target_arch = "x86_64"))]
{
Ok("macos-amd64")
}
// macos-apple-silicon
#[cfg(all(target_os = "macos", target_arch = "aarch64"))]
{
Ok("macos-apple-silicon")
}
// windows-amd64
#[cfg(all(target_os = "windows", target_arch = "x86_64"))]
{
Ok("windows-amd64")
}
#[cfg(not(all(target_os = "windows", target_arch = "x86_64")))]
#[cfg(not(all(target_os = "linux", target_arch = "aarch64")))]
#[cfg(not(all(target_os = "linux", target_arch = "x86_64")))]
#[cfg(not(all(target_os = "macos", target_arch = "aarch64")))]
anyhow::bail!("unsupported platform")
}
pub async fn add_releases_to_assets(
assets: &mut BTreeMap<String, Asset>,
releases: impl IntoIterator<Item = &Release>,
) -> anyhow::Result<()> {
for release in releases {
release.add_asset(assets).await?;
}
Ok(())
}
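
For a concrete sense of the naming scheme above: the local asset name embeds the version, the GitHub asset name does not, and the download URL combines the version tag with the remote name. A minimal sketch using an illustrative version and platform:

```rust
// Filename/URL derivation mirroring `local_filename`, `remote_filename` and the
// download URL above; the version and platform are illustrative only.
fn main() {
    let (base, version, arch) = ("meilisearch", "1.12.0", "macos-apple-silicon");

    let local_filename = format!("{base}-{version}-{arch}");
    let remote_filename = format!("{base}-{arch}");
    let url = format!(
        "https://github.com/meilisearch/meilisearch/releases/download/v{version}/{remote_filename}"
    );

    assert_eq!(local_filename, "meilisearch-1.12.0-macos-apple-silicon");
    assert_eq!(remote_filename, "meilisearch-macos-apple-silicon");
    assert_eq!(
        url,
        "https://github.com/meilisearch/meilisearch/releases/download/v1.12.0/meilisearch-macos-apple-silicon"
    );
}
```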

View File

@@ -1,18 +0,0 @@
use anyhow::Context;
use std::io::LineWriter;
use tracing_subscriber::{fmt::format::FmtSpan, layer::SubscriberExt, Layer};
pub fn setup_logs(log_filter: &str) -> anyhow::Result<()> {
let filter: tracing_subscriber::filter::Targets =
log_filter.parse().context("invalid --log-filter")?;
let subscriber = tracing_subscriber::registry().with(
tracing_subscriber::fmt::layer()
.with_writer(|| LineWriter::new(std::io::stderr()))
.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
.with_filter(filter),
);
tracing::subscriber::set_global_default(subscriber).context("could not setup logging")?;
Ok(())
}

View File

@@ -1,8 +0,0 @@
pub mod args;
pub mod assets;
pub mod client;
pub mod command;
pub mod instance;
pub mod logs;
pub mod process;
pub mod workload;

View File

@@ -1,11 +0,0 @@
use serde::{Deserialize, Serialize};
use crate::{bench::BenchWorkload, test::TestWorkload};
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
pub enum Workload {
Bench(BenchWorkload),
Test(TestWorkload),
}

View File

@@ -1,3 +1 @@
pub mod bench;
pub mod common;
pub mod test;

View File

@@ -1,16 +1,34 @@
use std::collections::HashSet;
use std::{collections::HashSet, process::Stdio};
use anyhow::Context;
use clap::Parser;
use xtask::{bench::BenchDeriveArgs, test::TestDeriveArgs};
use semver::{Prerelease, Version};
use xtask::bench::BenchArgs;
/// This is the version of the crate but also the current Meilisearch version
pub const VERSION: &str = env!("CARGO_PKG_VERSION");
/// List features available in the workspace
#[derive(Parser, Debug)]
struct ListFeaturesDeriveArgs {
struct ListFeaturesArgs {
/// Feature to exclude from the list. Use a comma to separate multiple features.
#[arg(short, long, value_delimiter = ',')]
exclude_feature: Vec<String>,
}
/// Create a git tag for the current version
///
/// The tag will be of the form prototype-v<version>-<name>.<increment>
#[derive(Parser, Debug)]
struct PrototypeArgs {
/// Name of the prototype to generate
name: String,
/// If set, creates a new prototype tag (increment 0) and fails if one with this name already exists;
/// otherwise increments the existing prototype tag and fails if none exists.
#[arg(long)]
generate_new: bool,
}
/// Utility commands
#[derive(Parser, Debug)]
#[command(author, version, about, long_about)]
@@ -18,9 +36,9 @@ struct ListFeaturesDeriveArgs {
#[command(bin_name = "cargo xtask")]
#[allow(clippy::large_enum_variant)] // please, that's enough...
enum Command {
ListFeatures(ListFeaturesDeriveArgs),
Bench(BenchDeriveArgs),
Test(TestDeriveArgs),
ListFeatures(ListFeaturesArgs),
Bench(BenchArgs),
GeneratePrototype(PrototypeArgs),
}
fn main() -> anyhow::Result<()> {
@@ -28,12 +46,12 @@ fn main() -> anyhow::Result<()> {
match args {
Command::ListFeatures(args) => list_features(args),
Command::Bench(args) => xtask::bench::run(args)?,
Command::Test(args) => xtask::test::run(args)?,
Command::GeneratePrototype(args) => generate_prototype(args)?,
}
Ok(())
}
fn list_features(args: ListFeaturesDeriveArgs) {
fn list_features(args: ListFeaturesArgs) {
let exclude_features: HashSet<_> = args.exclude_feature.into_iter().collect();
let metadata = cargo_metadata::MetadataCommand::new().no_deps().exec().unwrap();
let features: Vec<String> = metadata
@@ -46,3 +64,106 @@ fn list_features(args: ListFeaturesDeriveArgs) {
let features = features.join(" ");
println!("{features}")
}
fn generate_prototype(args: PrototypeArgs) -> anyhow::Result<()> {
let PrototypeArgs { name, generate_new: create_new } = args;
if name.rsplit_once(['.', '-']).filter(|(_, t)| t.chars().all(char::is_numeric)).is_some() {
anyhow::bail!(
"The increment must not be part of the name and will be rather incremented by this command."
);
}
// 1. Fetch the crate version
let version = Version::parse(VERSION).context("while semver-parsing the crate version")?;
// 2. Pull tags from remote and retrieve last prototype tag
std::process::Command::new("git")
.arg("fetch")
.arg("--tags")
.stderr(Stdio::inherit())
.stdout(Stdio::inherit())
.status()?;
let output = std::process::Command::new("git")
.arg("tag")
.args(["--list", "prototype-v*"])
.stderr(Stdio::inherit())
.output()?;
let output =
String::try_from(output.stdout).context("while converting the tag list into a string")?;
let mut highest_increment = None;
for tag in output.lines() {
let Some(version) = tag.strip_prefix("prototype-v") else {
continue;
};
let Ok(version) = Version::parse(version) else {
continue;
};
let Ok(proto) = PrototypePrerelease::from_str(version.pre.as_str()) else {
continue;
};
if proto.name() == name {
highest_increment = match highest_increment {
Some(last) if last < proto.increment() => Some(proto.increment()),
Some(last) => Some(last),
None => Some(proto.increment()),
};
}
}
// 3. Generate the new tag name (without git, just a string)
let increment = match (create_new, highest_increment) {
(true, None) => 0,
(true, Some(increment)) => anyhow::bail!(
"A prototype with the name `{name}` already exists with increment `{increment}`"
),
(false, None) => anyhow::bail!(
"Prototype `{name}` is missing and must exist to be incremented.\n\
Use the --generate-new flag to create a new prototype with an increment at 0."
),
(false, Some(increment)) => {
increment.checked_add(1).context("While incrementing by one the increment")?
}
};
// Note that we cannot have leading zeros in the increment
let pre = format!("{name}.{increment}").parse().context("while parsing pre-release name")?;
let tag_name = Version { pre, ..version };
println!("prototype-v{tag_name}");
Ok(())
}
#[derive(Debug, Clone, PartialEq, Eq)]
struct PrototypePrerelease {
pre: Prerelease,
}
impl PrototypePrerelease {
fn from_str(s: &str) -> anyhow::Result<Self> {
Prerelease::new(s)
.map_err(Into::into)
.and_then(|pre| {
if pre.rsplit_once('.').is_some() {
Ok(pre)
} else {
Err(anyhow::anyhow!("Invalid prototype name, missing name or increment"))
}
})
.map(|pre| PrototypePrerelease { pre })
}
fn name(&self) -> &str {
self.pre.rsplit_once('.').expect("Missing prototype name").0
}
fn increment(&self) -> u32 {
self.pre
.as_str()
.rsplit_once('.')
.map(|(_, tail)| tail.parse().expect("Invalid increment"))
.expect("Missing increment")
}
}

View File

@@ -1,95 +0,0 @@
use std::sync::Arc;
use std::time::Duration;
use anyhow::{bail, Context};
use clap::Parser;
use crate::common::args::CommonArgs;
use crate::common::client::Client;
use crate::common::command::SyncMode;
use crate::common::logs::setup_logs;
use crate::common::workload::Workload;
use crate::test::workload::CommandOrBinary;
mod workload;
pub use workload::TestWorkload;
/// Run tests from a workload
#[derive(Parser, Debug)]
pub struct TestDeriveArgs {
/// Common arguments shared with other commands
#[command(flatten)]
common: CommonArgs,
/// Enables workloads to be rewritten in place to update expected responses.
#[arg(short, long, default_value_t = false)]
pub update_responses: bool,
/// Enables workloads to be rewritten in place to add missing expected responses.
#[arg(short, long, default_value_t = false)]
pub add_missing_responses: bool,
}
pub fn run(args: TestDeriveArgs) -> anyhow::Result<()> {
let rt = tokio::runtime::Builder::new_current_thread().enable_io().enable_time().build()?;
let _scope = rt.enter();
rt.block_on(async { run_inner(args).await })?;
Ok(())
}
async fn run_inner(args: TestDeriveArgs) -> anyhow::Result<()> {
setup_logs(&args.common.log_filter)?;
// setup clients
let assets_client = Arc::new(Client::new(
None,
args.common.assets_key.as_deref(),
Some(Duration::from_secs(3600)), // 1h
)?);
let meili_client = Arc::new(Client::new(
Some("http://127.0.0.1:7700".into()),
Some("masterKey"),
Some(Duration::from_secs(args.common.tasks_queue_timeout_secs)),
)?);
let asset_folder = args.common.asset_folder.clone().leak();
for workload_file in &args.common.workload_file {
let string = tokio::fs::read_to_string(workload_file)
.await
.with_context(|| format!("error reading {}", workload_file.display()))?;
let workload: Workload = serde_json::from_str(string.trim())
.with_context(|| format!("error parsing {} as JSON", workload_file.display()))?;
let Workload::Test(workload) = workload else {
bail!("workload file {} is not a test workload", workload_file.display());
};
let has_faulty_register = workload.commands.iter().any(|c| {
matches!(c, CommandOrBinary::Command(cmd) if cmd.synchronous == SyncMode::DontWait && !cmd.register.is_empty())
});
if has_faulty_register {
bail!("workload {} contains commands that register values but are marked as --dont-wait. This is not supported because we cannot guarantee the value will be registered before the next command runs.", workload.name);
}
let name = workload.name.clone();
match workload.run(&args, &assets_client, &meili_client, asset_folder).await {
Ok(_) => match args.update_responses || args.add_missing_responses {
true => println!(
"🛠️ Workload {name} was updated, please check the output and restart the test"
),
false => println!("✅ Workload {name} passed"),
},
Err(error) => {
println!("❌ Workload {name} failed: {error}");
println!("💡 Is this intentional? If so, rerun with --update-responses to update the workload files.");
return Err(error);
}
}
}
Ok(())
}

View File

@@ -1,203 +0,0 @@
use std::collections::{BTreeMap, HashMap};
use std::io::Write;
use std::sync::Arc;
use anyhow::Context;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use crate::common::assets::{fetch_assets, Asset};
use crate::common::client::Client;
use crate::common::command::{run_commands, Command};
use crate::common::instance::Binary;
use crate::common::process::{self, delete_db, kill_meili};
use crate::common::workload::Workload;
use crate::test::TestDeriveArgs;
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
#[allow(clippy::large_enum_variant)]
pub enum CommandOrBinary {
Command(Command),
Binary { binary: Binary },
}
enum CommandOrBinaryVec<'a> {
Commands(Vec<&'a mut Command>),
Binary(Binary),
}
fn produce_reference_value(value: &mut Value) {
match value {
Value::Null | Value::Bool(_) | Value::Number(_) => (),
Value::String(string) => {
if time::OffsetDateTime::parse(
string.as_str(),
&time::format_description::well_known::Rfc3339,
)
.is_ok()
{
*string = String::from("[timestamp]");
} else if uuid::Uuid::parse_str(string).is_ok() {
*string = String::from("[uuid]");
}
}
Value::Array(values) => {
for value in values {
produce_reference_value(value);
}
}
Value::Object(map) => {
for (key, value) in map.iter_mut() {
match key.as_str() {
"duration" => {
*value = Value::String(String::from("[duration]"));
}
"processingTimeMs" => {
*value = Value::String(String::from("[duration]"));
}
_ => produce_reference_value(value),
}
}
}
}
}
/// A test workload.
/// Not to be confused with [a bench workload](crate::bench::workload::Workload).
#[derive(Serialize, Deserialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct TestWorkload {
pub name: String,
pub binary: Binary,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub master_key: Option<String>,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub assets: BTreeMap<String, Asset>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub commands: Vec<CommandOrBinary>,
}
impl TestWorkload {
pub async fn run(
mut self,
args: &TestDeriveArgs,
assets_client: &Client,
meili_client: &Arc<Client>,
asset_folder: &'static str,
) -> anyhow::Result<()> {
// Group commands between upgrades
let mut commands_or_instance = Vec::new();
let mut current_commands = Vec::new();
let mut all_releases = Vec::new();
if let Some(release) = self.binary.as_release() {
all_releases.push(release);
}
for command_or_upgrade in &mut self.commands {
match command_or_upgrade {
CommandOrBinary::Command(command) => current_commands.push(command),
CommandOrBinary::Binary { binary: instance } => {
if !current_commands.is_empty() {
commands_or_instance.push(CommandOrBinaryVec::Commands(current_commands));
current_commands = Vec::new();
}
commands_or_instance.push(CommandOrBinaryVec::Binary(instance.clone()));
if let Some(release) = instance.as_release() {
all_releases.push(release);
}
}
}
}
if !current_commands.is_empty() {
commands_or_instance.push(CommandOrBinaryVec::Commands(current_commands));
}
// Fetch assets
crate::common::instance::add_releases_to_assets(&mut self.assets, all_releases).await?;
fetch_assets(assets_client, &self.assets, &args.common.asset_folder).await?;
// Run server
delete_db().await;
let binary_path = self.binary.binary_path(&args.common.asset_folder)?;
let mut process = process::start_meili(
meili_client,
Some("masterKey"),
&self.binary.extra_cli_args,
binary_path.as_deref(),
)
.await?;
let assets = Arc::new(self.assets.clone());
let return_responses = args.add_missing_responses || args.update_responses;
let mut registered = HashMap::new();
let mut first_command_index = 0;
for command_or_upgrade in commands_or_instance {
match command_or_upgrade {
CommandOrBinaryVec::Commands(commands) => {
let cloned: Vec<_> = commands.iter().map(|c| (*c).clone()).collect();
let responses = run_commands(
meili_client,
&cloned,
first_command_index,
&assets,
asset_folder,
&mut registered,
return_responses,
)
.await?;
first_command_index += cloned.len();
if return_responses {
assert_eq!(responses.len(), cloned.len());
for (command, (mut response, status)) in commands.into_iter().zip(responses)
{
if args.update_responses
|| (args.add_missing_responses
&& command.expected_response.is_none())
{
produce_reference_value(&mut response);
command.expected_response = Some(response);
command.expected_status = Some(status.as_u16());
}
}
}
}
CommandOrBinaryVec::Binary(binary) => {
kill_meili(process).await;
let binary_path = binary.binary_path(&args.common.asset_folder)?;
process = process::start_meili(
meili_client,
Some("masterKey"),
&binary.extra_cli_args,
binary_path.as_deref(),
)
.await?;
tracing::info!("Restarted instance with {binary}");
}
}
}
// Write back the workload if needed
if return_responses {
// Filter out the assets we added for the versions
self.assets.retain(|_, asset| {
asset.local_location.as_ref().is_none_or(|a| !a.starts_with("meilisearch-"))
});
let workload = Workload::Test(self);
let mut file =
std::fs::File::create(&args.common.workload_file[0]).with_context(|| {
format!("could not open {}", args.common.workload_file[0].display())
})?;
serde_json::to_writer_pretty(&file, &workload).with_context(|| {
format!("could not write to {}", args.common.workload_file[0].display())
})?;
file.write_all(b"\n").with_context(|| {
format!("could not write to {}", args.common.workload_file[0].display())
})?;
tracing::info!("Updated workload file {}", args.common.workload_file[0].display());
}
Ok(())
}
}

View File

@@ -20,29 +20,33 @@ These make us iterate fast before stabilizing it for the current release.
### Release steps
The prototype name must follow this convention: `prototype-v<version>.<name>-<number>` where
The prototype name must [follow this convention](https://semver.org/#spec-item-11): `prototype-v<version>-<name>.<iteration>` where
- `version` is the version of Meilisearch on which the prototype is based.
- `name` is the feature name formatted in `kebab-case`. It should not end with a single number.
- `Y` is the version of the prototype, starting from `0`.
- `name` is the feature name formatted in `kebab-case`.
- `iteration` is the iteration of the prototype, starting from `0`.
✅ Example: `prototype-v1.23.0.search-personalization-0`. </br>
✅ Example: `prototype-v1.23.0-search-personalization.1`. </br>
❌ Bad example: `prototype-v1.23.0-search-personalization-0`: a dash separates the name and the iteration. </br>
❌ Bad example: `prototype-v1.23.0.search-personalization.0`: a dot separates the version and name. </br>
❌ Bad example: `prototype-search-personalization-0`: version is missing.</br>
❌ Bad example: `v1.23.0.auto-resize-0`: lacks the `prototype` prefix. </br>
❌ Bad example: `prototype-v1.23.0.auto-resize`: lacks the version suffix. </br>
❌ Bad example: `prototype-v1.23.0.auto-resize-0-0`: feature name ends with a single number.
❌ Bad example: `v1.23.0-auto-resize-0`: lacks the `prototype-` prefix. </br>
❌ Bad example: `prototype-v1.23.0-auto-resize`: lacks the iteration suffix. </br>
❌ Bad example: `prototype-v1.23.0-auto-resize.0-0`: the iteration is not a number.
Steps to create a prototype:
1. In your terminal, go to the last commit of your branch (the one you want to provide as a prototype).
2. Create a tag following the convention: `git tag prototype-X-Y`
3. Run Meilisearch and check that its launch summary features a line: `Prototype: prototype-X-Y` (you may need to switch branches and back after tagging for this to work).
3. Push the tag: `git push origin prototype-X-Y`
4. Check the [Docker CI](https://github.com/meilisearch/meilisearch/actions/workflows/publish-docker-images.yml) is now running.
2. Use the `cargo xtask generate-prototype` command to generate the prototype name (see the example after this list).
3. Create the tag using the `git tag` command.
4. Check out the tag, run Meilisearch, and check that its launch summary features a line: `Prototype: prototype-v<version>-<name>.<iteration>`.
5. Switch back to your branch: `git checkout -`.
6. Push the tag: `git push origin prototype-v<version>-<name>.<iteration>`
7. Check that the [Docker CI](https://github.com/meilisearch/meilisearch/actions/workflows/publish-docker-images.yml) is now running.
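For reference, the whole flow could look like the sketch below (the `1.23.0` version and the `search-personalization` feature name are placeholders; your actual version and feature name will differ):
```bash
# Ask xtask for the next prototype tag name.
# Use --generate-new only for the very first iteration of a prototype;
# omit it to increment an existing prototype.
cargo xtask generate-prototype search-personalization --generate-new
# prints: prototype-v1.23.0-search-personalization.0

# Create the tag on the current commit, verify the launch summary, then push it.
git tag prototype-v1.23.0-search-personalization.0
git push origin prototype-v1.23.0-search-personalization.0
```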
🐳 Once the CI has finished to run (~1h30), a Docker image named `prototype-X-Y` will be available on [DockerHub](https://hub.docker.com/repository/docker/getmeili/meilisearch/general). People can use it with the following command: `docker run -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-X-Y`. <br>
🐳 Once the CI has finished running, a Docker image named `prototype-v<version>-<name>.<iteration>` will be available on [DockerHub](https://hub.docker.com/repository/docker/getmeili/meilisearch/general). People can use it with the following command: `docker run -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-v<version>-<name>.<iteration>`. <br>
More information about [how to run Meilisearch with Docker](https://docs.meilisearch.com/learn/cookbooks/docker.html#download-meilisearch-with-docker).
⚠️ However, no binaries will be created. If the users do not use Docker, they can go to the `prototype-X-Y` tag in the Meilisearch repository and compile it from the source code.
⚠️ However, no binaries will be created. Users who do not use Docker can check out the `prototype-v<version>-<name>.<iteration>` tag in the Meilisearch repository and compile it from source.
### Communication
@@ -63,7 +67,7 @@ Here is an example of messages to share on GitHub:
> How to run the prototype?
> You need to start from a fresh new database (remove the previous used `data.ms`) and use the following Docker image:
> ```bash
> docker run -it --rm -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-X-Y
> docker run -it --rm -p 7700:7700 -v $(pwd)/meili_data:/meili_data getmeili/meilisearch:prototype-v<version>-<name>.<iteration>
> ```
>
> You can use the feature this way:

View File

@@ -1,6 +1,5 @@
{
"name": "movies-subset-hf-embeddings",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-add-embeddings-hf",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.add_new_documents",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.ndjson_1M_ignore_first_100k",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_facet_numbers",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_facet_strings",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.modify_searchables",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "hackernews.ndjson_1M",
"type": "bench",
"run_count": 3,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "movies.json,no-threads",
"type": "bench",
"run_count": 2,
"extra_cli_args": [
"--max-indexing-threads=1"

View File

@@ -1,6 +1,5 @@
{
"name": "movies.json",
"type": "bench",
"run_count": 10,
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "search-movies-subset-hf-embeddings",
"type": "bench",
"run_count": 2,
"target": "search::=trace",
"extra_cli_args": [

View File

@@ -1,6 +1,5 @@
{
"name": "search-filterable-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,7 +1,6 @@
{
"name": "search-geosort.jsonl_1M",
"type": "bench",
"run_count": 3,
"run_count": 3,
"target": "search::=trace",
"extra_cli_args": [],
"assets": {

View File

@@ -1,6 +1,5 @@
{
"name": "search-hackernews.ndjson_1M",
"type": "bench",
"run_count": 3,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "search-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "search-sortable-movies.json",
"type": "bench",
"run_count": 10,
"target": "search::=trace",
"extra_cli_args": [],

View File

@@ -1,6 +1,5 @@
{
"name": "settings-add-remove-filters.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-proximity-precision.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-remove-add-swap-searchable.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,6 +1,5 @@
{
"name": "settings-typo.json",
"type": "bench",
"run_count": 5,
"extra_cli_args": [
"--max-indexing-threads=4"

View File

@@ -1,369 +0,0 @@
{
"type": "test",
"name": "api-keys",
"binary": {
"source": "release",
"edition": "community",
"version": "1.12.0"
},
"commands": [
{
"route": "keys",
"method": "POST",
"body": {
"inline": {
"actions": [
"search",
"documents.add"
],
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
],
"uid": "9e053497-b180-4b9f-bf10-a4a6fc4ca1b2"
}
},
"expectedStatus": 201,
"expectedResponse": {
"actions": [
"search",
"documents.add"
],
"createdAt": "[timestamp]",
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
],
"key": "b387a8dabdc80d4d2069718ca43bad8bcb1ce5d8bb85b31af17a5ea6348317dc",
"name": null,
"uid": "9e053497-b180-4b9f-bf10-a4a6fc4ca1b2",
"updatedAt": "[timestamp]"
},
"register": {
"key": "/key"
},
"synchronous": "WaitForResponse"
},
{
"route": "keys/{{ key }}",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"actions": [
"search",
"documents.add"
],
"createdAt": "[timestamp]",
"description": "Test API Key",
"expiresAt": null,
"indexes": [
"movies"
],
"key": "b387a8dabdc80d4d2069718ca43bad8bcb1ce5d8bb85b31af17a5ea6348317dc",
"name": null,
"uid": "[uuid]",
"updatedAt": "[timestamp]"
},
"synchronous": "WaitForResponse"
},
{
"route": "/indexes",
"method": "POST",
"body": {
"inline": {
"primaryKey": "id",
"uid": "movies"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 0,
"type": "indexCreation"
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"inline": {
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"apiKeyVariable": "key",
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search?q=shazam",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 1,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shazam"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"binary": {
"source": "build",
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade",
"--experimental-max-number-of-batched-tasks=0"
]
}
},
{
"route": "indexes/movies/search?q=shazam",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 1,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shazam",
"requestUid": "[uid]"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"binary": {
"source": "build",
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade"
]
}
},
{
"route": "health",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"status": "available"
},
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search?q=shazam",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 1,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shazam",
"requestUid": "[uid]"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents/287947",
"method": "DELETE",
"body": null,
"expectedStatus": 403,
"expectedResponse": {
"code": "invalid_api_key",
"link": "https://docs.meilisearch.com/errors#invalid_api_key",
"message": "The provided API key is invalid.",
"type": "auth"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"inline": {
"id": 287948,
"overview": "Shazam turns evil and the world is in danger.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2032-03-23",
"title": "Shazam 2"
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 3,
"type": "documentAdditionOrUpdate"
},
"apiKeyVariable": "key",
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search?q=shaza",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 2,
"hits": [
{
"id": 287947,
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2019-03-23",
"title": "Shazam"
},
{
"id": 287948,
"overview": "Shazam turns evil and the world is in danger.",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"release_date": "2032-03-23",
"title": "Shazam 2"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "shaza",
"requestUid": "[uid]"
},
"apiKeyVariable": "key",
"synchronous": "WaitForResponse"
},
{
"route": "tasks",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"from": 3,
"limit": 20,
"next": null,
"results": [
{
"batchUid": 3,
"canceledBy": null,
"details": {
"indexedDocuments": 1,
"receivedDocuments": 1
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"uid": 3
},
{
"batchUid": 2,
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "[latest]"
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": null,
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "upgradeDatabase",
"uid": 2
},
{
"batchUid": 1,
"canceledBy": null,
"details": {
"indexedDocuments": 1,
"receivedDocuments": 1
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"uid": 1
},
{
"batchUid": 0,
"canceledBy": null,
"details": {
"primaryKey": "id"
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "indexCreation",
"uid": 0
}
],
"total": 4
},
"synchronous": "WaitForResponse"
}
]
}

View File

@@ -1,450 +0,0 @@
{
"type": "test",
"name": "hf-embed",
"binary": {
"source": "release",
"edition": "community",
"version": "1.14.0"
},
"assets": {
"movies-100.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies-100.json",
"sha256": "d215e395e4240f12f03b8f1f68901eac82d9e7ded5b462cbf4a6b8efde76c6c6"
}
},
"commands": [
{
"route": "indexes/movies/settings",
"method": "PATCH",
"body": {
"inline": {
"filterableAttributes": [
"genres",
"release_date"
],
"searchableAttributes": [
"title",
"overview"
],
"sortableAttributes": [
"release_date"
]
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 0,
"type": "settingsUpdate"
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/settings",
"method": "PATCH",
"body": {
"inline": {
"embedders": {
"default": {
"source": "huggingFace"
}
}
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "settingsUpdate"
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies-100.json"
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 2,
"type": "documentAdditionOrUpdate"
},
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search",
"method": "POST",
"body": {
"inline": {
"attributesToRetrieve": [
"title",
"overview"
],
"hybrid": {
"embedder": "default",
"semanticRatio": 1.0
},
"limit": 5,
"q": "Police"
}
},
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 99,
"hits": [
{
"overview": "A hard-nosed cop reluctantly teams up with a wise-cracking criminal temporarily paroled to him, in order to track down a killer.",
"title": "48 Hrs."
},
{
"overview": "Young Treasury Agent Elliot Ness arrives in Chicago and is determined to take down Al Capone, but it is not going to be easy because Capone has the police in his pocket. Ness meets Jimmy Malone, a veteran patrolman and probably the most honorable one on the force. He asks Malone to help him get Capone, but Malone warns him that if he goes after Capone, he is going to war.",
"title": "The Untouchables"
},
{
"overview": "Axel heads for the land of sunshine and palm trees to find out who shot police Captain Andrew Bogomil. Thanks to a couple of old friends, Axel is investigation uncovers a series of robberies masterminded by a heartless weapons kingpin—and the chase is on.",
"title": "Beverly Hills Cop II"
},
{
"overview": "An average family is thrust into the spotlight after the father commits a seemingly self-defense murder at his diner.",
"title": "A History of Violence"
},
{
"overview": "One man defeated three assassins who sought to murder the most powerful warlord in pre-unified China.",
"title": "Hero"
}
],
"limit": 5,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "Police",
"semanticHitCount": 5
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/settings/embedders",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"default": {
"documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}",
"documentTemplateMaxBytes": 400,
"model": "BAAI/bge-base-en-v1.5",
"pooling": "useModel",
"revision": "617ca489d9e86b49b8167676d8220688b99db36e",
"source": "huggingFace"
}
},
"synchronous": "WaitForResponse"
},
{
"binary": {
"source": "build",
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade",
"--experimental-max-number-of-batched-tasks=0"
]
}
},
{
"route": "experimental-features",
"method": "PATCH",
"body": {
"inline": {
"vectorStoreSetting": true
}
},
"expectedStatus": 200,
"expectedResponse": {
"chatCompletions": false,
"compositeEmbedders": false,
"containsFilter": false,
"editDocumentsByFunction": false,
"getTaskDocumentsRoute": false,
"logsRoute": true,
"metrics": false,
"multimodal": false,
"network": false,
"vectorStoreSetting": true
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/search",
"method": "POST",
"body": {
"inline": {
"attributesToRetrieve": [
"title",
"overview"
],
"hybrid": {
"embedder": "default",
"semanticRatio": 1.0
},
"limit": 5,
"q": "Police"
}
},
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 99,
"hits": [
{
"overview": "A hard-nosed cop reluctantly teams up with a wise-cracking criminal temporarily paroled to him, in order to track down a killer.",
"title": "48 Hrs."
},
{
"overview": "Young Treasury Agent Elliot Ness arrives in Chicago and is determined to take down Al Capone, but it is not going to be easy because Capone has the police in his pocket. Ness meets Jimmy Malone, a veteran patrolman and probably the most honorable one on the force. He asks Malone to help him get Capone, but Malone warns him that if he goes after Capone, he is going to war.",
"title": "The Untouchables"
},
{
"overview": "Axel heads for the land of sunshine and palm trees to find out who shot police Captain Andrew Bogomil. Thanks to a couple of old friends, Axel is investigation uncovers a series of robberies masterminded by a heartless weapons kingpin—and the chase is on.",
"title": "Beverly Hills Cop II"
},
{
"overview": "An average family is thrust into the spotlight after the father commits a seemingly self-defense murder at his diner.",
"title": "A History of Violence"
},
{
"overview": "One man defeated three assassins who sought to murder the most powerful warlord in pre-unified China.",
"title": "Hero"
}
],
"limit": 5,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "Police",
"requestUid": "[uuid]",
"semanticHitCount": 5
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/settings/embedders",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"default": {
"documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}",
"documentTemplateMaxBytes": 400,
"model": "BAAI/bge-base-en-v1.5",
"pooling": "useModel",
"revision": "617ca489d9e86b49b8167676d8220688b99db36e",
"source": "huggingFace"
}
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/settings/vector-store",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": null,
"synchronous": "WaitForResponse"
},
{
"binary": {
"source": "build",
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade"
]
}
},
{
"route": "health",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"status": "available"
},
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/settings/vector-store",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": null,
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/search",
"method": "POST",
"body": {
"inline": {
"attributesToRetrieve": [
"title",
"overview"
],
"hybrid": {
"embedder": "default",
"semanticRatio": 1.0
},
"limit": 5,
"q": "Police"
}
},
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 99,
"hits": [
{
"overview": "A hard-nosed cop reluctantly teams up with a wise-cracking criminal temporarily paroled to him, in order to track down a killer.",
"title": "48 Hrs."
},
{
"overview": "Young Treasury Agent Elliot Ness arrives in Chicago and is determined to take down Al Capone, but it is not going to be easy because Capone has the police in his pocket. Ness meets Jimmy Malone, a veteran patrolman and probably the most honorable one on the force. He asks Malone to help him get Capone, but Malone warns him that if he goes after Capone, he is going to war.",
"title": "The Untouchables"
},
{
"overview": "Axel heads for the land of sunshine and palm trees to find out who shot police Captain Andrew Bogomil. Thanks to a couple of old friends, Axel is investigation uncovers a series of robberies masterminded by a heartless weapons kingpin—and the chase is on.",
"title": "Beverly Hills Cop II"
},
{
"overview": "An average family is thrust into the spotlight after the father commits a seemingly self-defense murder at his diner.",
"title": "A History of Violence"
},
{
"overview": "One man defeated three assassins who sought to murder the most powerful warlord in pre-unified China.",
"title": "Hero"
}
],
"limit": 5,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "Police",
"requestUid": "[uuid]",
"semanticHitCount": 5
},
"synchronous": "WaitForResponse"
},
{
"route": "indexes/movies/settings/embedders",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"default": {
"documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}",
"documentTemplateMaxBytes": 400,
"model": "BAAI/bge-base-en-v1.5",
"pooling": "useModel",
"revision": "617ca489d9e86b49b8167676d8220688b99db36e",
"source": "huggingFace"
}
},
"synchronous": "WaitForResponse"
},
{
"route": "tasks",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"from": 3,
"limit": 20,
"next": null,
"results": [
{
"batchUid": 3,
"canceledBy": null,
"details": {
"upgradeFrom": "v1.14.0",
"upgradeTo": "[latest]"
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": null,
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "upgradeDatabase",
"uid": 3
},
{
"batchUid": 2,
"canceledBy": null,
"details": {
"indexedDocuments": 99,
"receivedDocuments": 99
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"uid": 2
},
{
"batchUid": 1,
"canceledBy": null,
"details": {
"embedders": {
"default": {
"source": "huggingFace"
}
}
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "settingsUpdate",
"uid": 1
},
{
"batchUid": 0,
"canceledBy": null,
"details": {
"filterableAttributes": [
"genres",
"release_date"
],
"searchableAttributes": [
"title",
"overview"
],
"sortableAttributes": [
"release_date"
]
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": "movies",
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "settingsUpdate",
"uid": 0
}
],
"total": 4
},
"synchronous": "WaitForResponse"
}
]
}

View File

@@ -1,326 +0,0 @@
{
"type": "test",
"name": "movies",
"binary": {
"source": "release",
"edition": "community",
"version": "1.12.0"
},
"assets": {
"movies.json": {
"local_location": null,
"remote_location": "https://milli-benchmarks.fra1.digitaloceanspaces.com/bench/datasets/movies.json",
"sha256": "5b6e4cb660bc20327776e8a33ea197b43d9ec84856710ead1cc87ab24df77de1"
}
},
"commands": [
{
"route": "indexes/movies/settings",
"method": "PATCH",
"body": {
"inline": {
"filterableAttributes": [
"genres",
"release_date"
],
"searchableAttributes": [
"title",
"overview"
],
"sortableAttributes": [
"release_date"
]
}
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 0,
"type": "settingsUpdate"
},
"synchronous": "DontWait"
},
{
"route": "indexes/movies/documents",
"method": "POST",
"body": {
"asset": "movies.json"
},
"expectedStatus": 202,
"expectedResponse": {
"enqueuedAt": "[timestamp]",
"indexUid": "movies",
"status": "enqueued",
"taskUid": 1,
"type": "documentAdditionOrUpdate"
},
"synchronous": "WaitForTask"
},
{
"binary": {
"source": "build",
"extraCliArgs": [
"--experimental-dumpless-upgrade",
"--experimental-max-number-of-batched-tasks=0"
]
}
},
{
"route": "indexes/movies/search?q=bitcoin",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 6,
"hits": [
{
"genres": [
"Documentary"
],
"id": 349086,
"overview": "A documentary exploring how money and the trading of value has evolved, culminating in Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/A82oxum0dTL71N0cjD0F66S9gdt.jpg",
"release_date": 1437177600,
"title": "Bitcoin: The End of Money as We Know It"
},
{
"genres": [
"Documentary",
"History"
],
"id": 427451,
"overview": "Not since the invention of the Internet has there been such a disruptive technology as Bitcoin. Bitcoin's early pioneers sought to blur the lines of sovereignty and the financial status quo. After years of underground development Bitcoin grabbed the attention of a curious public, and the ire of the regulators the technology had subverted. After landmark arrests of prominent cyber criminals Bitcoin faces its most severe adversary yet, the very banks it was built to destroy.",
"poster": "https://image.tmdb.org/t/p/w500/qW3vsno24UBawZjnrKfQ1qHRPD6.jpg",
"release_date": 1483056000,
"title": "Banking on Bitcoin"
},
{
"genres": [
"Documentary",
"History"
],
"id": 292607,
"overview": "A documentary about the development and spread of the virtual currency called Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/nUzeZupwmEOoddQIDAq10Gyifk0.jpg",
"release_date": 1412294400,
"title": "The Rise and Rise of Bitcoin"
},
{
"genres": [
"Documentary"
],
"id": 321769,
"overview": "Deep Web gives the inside story of one of the most important and riveting digital crime sagas of the century -- the arrest of Ross William Ulbricht, the 30-year-old entrepreneur convicted of being 'Dread Pirate Roberts,' creator and operator of online black market Silk Road. As the only film with exclusive access to the Ulbricht family, Deep Web explores how the brightest minds and thought leaders behind the Deep Web and Bitcoin are now caught in the crosshairs of the battle for control of a future inextricably linked to technology, with our digital rights hanging in the balance.",
"poster": "https://image.tmdb.org/t/p/w500/dtSOFZ7ioDSaJxPzORaplqo8QZ2.jpg",
"release_date": 1426377600,
"title": "Deep Web"
},
{
"genres": [
"Comedy",
"Horror"
],
"id": 179538,
"overview": "A gang of gold thieves lands in a coven of witches who are preparing for an ancient ritual... and in need of a sacrifice.",
"poster": "https://image.tmdb.org/t/p/w500/u7w6vghlbz8xDUZRayOXma3Ax96.jpg",
"release_date": 1379635200,
"title": "Witching & Bitching"
},
{
"genres": [
"Comedy"
],
"id": 70882,
"overview": "Roseanne Barr is back with an all-new HBO comedy special! Filmed live at the Comedy Store in Los Angeles, Roseanne returns to her stand-up roots for the first time in 14 years, as she tackles hot issues of today - from gay marriage to President Bush.",
"poster": "https://image.tmdb.org/t/p/w500/cUkQQnfPTonMXRroZzCyw11eKXr.jpg",
"release_date": 1162598400,
"title": "Roseanne Barr: Blonde and Bitchin'"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "bitcoin",
"requestUid": "[uuid]"
},
"synchronous": "DontWait"
},
{
"route": "indexes/movies/stats",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"avgDocumentSize": 0,
"fieldDistribution": {
"genres": 31944,
"id": 31944,
"overview": 31944,
"poster": 31944,
"release_date": 31944,
"title": 31944
},
"isIndexing": false,
"numberOfDocuments": 31944,
"rawDocumentDbSize": 0
},
"synchronous": "DontWait"
},
{
"binary": {
"source": "build",
"edition": "community",
"extraCliArgs": [
"--experimental-dumpless-upgrade"
]
}
},
{
"route": "health",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"status": "available"
},
"synchronous": "WaitForTask"
},
{
"route": "indexes/movies/search?q=bitcoin",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"estimatedTotalHits": 6,
"hits": [
{
"genres": [
"Documentary"
],
"id": 349086,
"overview": "A documentary exploring how money and the trading of value has evolved, culminating in Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/A82oxum0dTL71N0cjD0F66S9gdt.jpg",
"release_date": 1437177600,
"title": "Bitcoin: The End of Money as We Know It"
},
{
"genres": [
"Documentary",
"History"
],
"id": 427451,
"overview": "Not since the invention of the Internet has there been such a disruptive technology as Bitcoin. Bitcoin's early pioneers sought to blur the lines of sovereignty and the financial status quo. After years of underground development Bitcoin grabbed the attention of a curious public, and the ire of the regulators the technology had subverted. After landmark arrests of prominent cyber criminals Bitcoin faces its most severe adversary yet, the very banks it was built to destroy.",
"poster": "https://image.tmdb.org/t/p/w500/qW3vsno24UBawZjnrKfQ1qHRPD6.jpg",
"release_date": 1483056000,
"title": "Banking on Bitcoin"
},
{
"genres": [
"Documentary",
"History"
],
"id": 292607,
"overview": "A documentary about the development and spread of the virtual currency called Bitcoin.",
"poster": "https://image.tmdb.org/t/p/w500/nUzeZupwmEOoddQIDAq10Gyifk0.jpg",
"release_date": 1412294400,
"title": "The Rise and Rise of Bitcoin"
},
{
"genres": [
"Documentary"
],
"id": 321769,
"overview": "Deep Web gives the inside story of one of the most important and riveting digital crime sagas of the century -- the arrest of Ross William Ulbricht, the 30-year-old entrepreneur convicted of being 'Dread Pirate Roberts,' creator and operator of online black market Silk Road. As the only film with exclusive access to the Ulbricht family, Deep Web explores how the brightest minds and thought leaders behind the Deep Web and Bitcoin are now caught in the crosshairs of the battle for control of a future inextricably linked to technology, with our digital rights hanging in the balance.",
"poster": "https://image.tmdb.org/t/p/w500/dtSOFZ7ioDSaJxPzORaplqo8QZ2.jpg",
"release_date": 1426377600,
"title": "Deep Web"
},
{
"genres": [
"Comedy",
"Horror"
],
"id": 179538,
"overview": "A gang of gold thieves lands in a coven of witches who are preparing for an ancient ritual... and in need of a sacrifice.",
"poster": "https://image.tmdb.org/t/p/w500/u7w6vghlbz8xDUZRayOXma3Ax96.jpg",
"release_date": 1379635200,
"title": "Witching & Bitching"
},
{
"genres": [
"Comedy"
],
"id": 70882,
"overview": "Roseanne Barr is back with an all-new HBO comedy special! Filmed live at the Comedy Store in Los Angeles, Roseanne returns to her stand-up roots for the first time in 14 years, as she tackles hot issues of today - from gay marriage to President Bush.",
"poster": "https://image.tmdb.org/t/p/w500/cUkQQnfPTonMXRroZzCyw11eKXr.jpg",
"release_date": 1162598400,
"title": "Roseanne Barr: Blonde and Bitchin'"
}
],
"limit": 20,
"offset": 0,
"processingTimeMs": "[duration]",
"query": "bitcoin",
"requestUid": "[uuid]"
},
"synchronous": "DontWait"
},
{
"route": "indexes/movies/stats",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"avgDocumentSize": "[avgDocSize]",
"fieldDistribution": {
"genres": 31944,
"id": 31944,
"overview": 31944,
"poster": 31944,
"release_date": 31944,
"title": 31944
},
"isIndexing": false,
"numberOfDocuments": 31944,
"numberOfEmbeddedDocuments": 0,
"numberOfEmbeddings": 0,
"rawDocumentDbSize": "[rawDbSize]"
},
"synchronous": "DontWait"
},
{
"route": "tasks?types=upgradeDatabase",
"method": "GET",
"body": null,
"expectedStatus": 200,
"expectedResponse": {
"from": 2,
"limit": 20,
"next": null,
"results": [
{
"batchUid": 2,
"canceledBy": null,
"details": {
"upgradeFrom": "v1.12.0",
"upgradeTo": "[latest]"
},
"duration": "[duration]",
"enqueuedAt": "[timestamp]",
"error": null,
"finishedAt": "[timestamp]",
"indexUid": null,
"startedAt": "[timestamp]",
"status": "succeeded",
"type": "upgradeDatabase",
"uid": 2
}
],
"total": 1
},
"synchronous": "WaitForResponse"
}
]
}