Compare commits

..

25 Commits

Author SHA1 Message Date
a41c0ba755 Fix the legend 2023-06-24 14:53:32 +02:00
ef9875256b Create a small tool to measure the size of internal databases 2023-06-23 22:57:57 +02:00
040b5a5b6f Merge #3842
3842: fix some typos r=dureuill a=cuishuang

# Pull Request

## Related issue
Fixes #<issue_number>

## What does this PR do?
- fix some typos

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: cui fliter <imcusg@gmail.com>
2023-06-22 18:01:10 +00:00
530a3e2df3 fix some typos
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-06-22 21:59:00 +08:00
28404d56b7 Merge #3799
3799: Fix error messages in `check-release.sh` r=curquiza a=vvv

- `check_tag`: Report file name correctly. Use named variables.
- Introduce `read_version` helper function. Simplify the implementation.
- Show meaningful error message if `GITHUB_REF` is not set or its format is incorrect.

Co-authored-by: Valeriy V. Vorotyntsev <valery.vv@gmail.com>
2023-06-20 13:35:33 +00:00
262c1f2baf Merge #3844
3844: Fix SDK CI (again) r=curquiza a=curquiza

Following this PR: https://github.com/meilisearch/meilisearch/pull/3813

Sorry `@Kerollmops`, here is (I hope) the latest fix 🙏 The tests I ran last time were not sufficient; I did a lot more testing this time and hope I have not missed anything.



Co-authored-by: curquiza <clementine@meilisearch.com>
2023-06-20 13:01:07 +00:00
cfed349aa3 Fix error messages in check-release.sh
- `check_tag`: Report file name correctly. Use named variables.
- Introduce `read_version` helper function. Simplify the implementation.
- Show meaningful error message if `GITHUB_REF` is not set or its format
  is incorrect.
2023-06-20 13:58:09 +03:00
bbc9f68ff5 Use the input from the previous job instead of the workflow dispatch 2023-06-19 18:49:15 +02:00
45636d315c Merge #3670
3670: Fix addition deletion bug r=irevoire a=irevoire

The first commit of this PR is a revert of https://github.com/meilisearch/meilisearch/pull/3667. It re-enables the auto-batching of addition and deletion tasks. No new changes have been introduced outside of `milli`, so all the changes you see in the autobatcher have already been reviewed.

It fixes https://github.com/meilisearch/meilisearch/issues/3440.

### What was happening?

The issue was that the `external_documents_ids` generated in the `transform` were used in a very strange way that wasn’t compatible with the deletion of documents.
Instead of doing a clean merge between the external document IDs of the DB and the ones returned by the transform, and then writing the result to disk, we were doing some weird tricks with the soft-deleted documents to avoid writing the fst to disk as much as possible.
The new algorithm may be a bit slower but is much more straightforward and behaves the same whether or not soft deletion was used. Here is a list of the changes introduced:
1. We now draw a clear distinction between the `new_external_documents_ids` coming from the transform and held only in RAM, and the `external_documents_ids` coming from the DB.
2. The `new_external_documents_ids` (coming out of the transform) are now represented as an `fst`. We no longer need to juggle the hard/soft distinction and the soft-deleted documents, which makes the code easier to understand.
3. When indexing documents, we merge the `external_documents_ids` coming from the DB with the `new_external_documents_ids` coming from the transform (a minimal sketch of such a merge is shown below).
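
To make step 3 concrete, here is a minimal, hypothetical sketch of such a merge using the `fst` crate: it unions two external-document-id maps and, when an external id exists on both sides, keeps the value coming from the transform. The function name and the `Map<Vec<u8>>` representation are assumptions made for this illustration and are not the actual `milli` code.

```rust
use fst::{Map, MapBuilder, Streamer};

/// Hypothetical helper: merge the external document ids already stored in the DB
/// with the ones produced by the transform, preferring the transform's value
/// when an external id exists in both maps.
fn merge_external_ids(
    db_ids: &Map<Vec<u8>>,
    new_ids: &Map<Vec<u8>>,
) -> Result<Map<Vec<u8>>, fst::Error> {
    let mut builder = MapBuilder::memory();
    // `union` streams the keys of both maps in lexicographic order and tells us,
    // for each key, which input(s) it came from and with which value.
    let mut union = db_ids.op().add(new_ids).union();
    while let Some((external_id, indexed_values)) = union.next() {
        // Input 1 is `new_ids`: when the id exists in both maps, its value wins.
        // The slice is never empty, so the unwrap cannot fail.
        let value = indexed_values.iter().max_by_key(|iv| iv.index).unwrap().value;
        builder.insert(external_id, value)?;
    }
    Ok(builder.into_map())
}
```

With a merge like this, the resulting map can be written to disk in a single place, regardless of whether soft deletion was ever involved.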

### Other things introduced in this PR

Since we constantly have to write small, very specialized fuzzers for this kind of bug, we decided to push the one used to reproduce this bug.
It's not perfect, but it's easy to improve in the future.
It'll also run for as long as possible on every merge on the main branch.

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic.lecrenier@icloud.com>
2023-06-19 09:09:30 +00:00
cb9d78fc7f Merge #3835
3835: Add more documentation to graph-based ranking rule algorithms + comment cleanup r=Kerollmops a=loiclec

In addition to documenting the `cheapest_path.rs` file, this PR cleans up a few outdated comments as well as some TODOs. These TODOs have been moved to https://github.com/meilisearch/meilisearch/issues/3776
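
For readers unfamiliar with the file: `cheapest_path.rs` is, as its name suggests, about finding lowest-cost paths through the graph that the graph-based ranking rules operate on. Purely as a generic, textbook illustration of that idea (not the `milli` implementation), a cheapest-path search over a small adjacency-list graph could look like the following sketch; the graph representation and the function name are assumptions made for this example.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Generic illustration only (not the `milli` code): cost of the cheapest path
/// from `start` to `end`, where `graph[u]` lists the `(target, cost)` edges of `u`.
fn cheapest_path_cost(graph: &[Vec<(usize, u64)>], start: usize, end: usize) -> Option<u64> {
    let mut best = vec![u64::MAX; graph.len()];
    let mut heap = BinaryHeap::new();
    best[start] = 0;
    heap.push(Reverse((0u64, start)));
    while let Some(Reverse((cost, node))) = heap.pop() {
        if node == end {
            return Some(cost);
        }
        if cost > best[node] {
            continue; // stale queue entry, a cheaper path was already found
        }
        for &(next, edge_cost) in &graph[node] {
            let candidate = cost + edge_cost;
            if candidate < best[next] {
                best[next] = candidate;
                heap.push(Reverse((candidate, next)));
            }
        }
    }
    None
}
```

For example, `cheapest_path_cost(&graph, 0, 3)` would return the cost of the cheapest route from node 0 to node 3, or `None` if node 3 is unreachable.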



Co-authored-by: Loïc Lecrenier <loic.lecrenier@icloud.com>
2023-06-15 15:30:24 +00:00
2da86b31a6 Remove comments and add documentation 2023-06-14 12:39:42 +02:00
4e81445d42 Stop the fuzzer after an hour 2023-06-12 15:30:51 +02:00
f03d99690d run the indexing fuzzer on every merge for as long as possible 2023-05-29 14:56:15 +02:00
23a5b45ebf drop the old fuzz file 2023-05-29 14:02:37 +02:00
46fa99f486 make the fuzzer stop if an error occurs 2023-05-29 13:44:32 +02:00
67a583bedf handle the panic happening in milli 2023-05-29 13:39:26 +02:00
99e9057684 rename the indexing fuzzer to fuzz-indexing so it doesn't collide with other binary names when called from the root of the workspace 2023-05-29 13:07:06 +02:00
8d40d300a5 rename the fuzzer to indexing 2023-05-29 12:37:24 +02:00
6c6387d05e move the fuzzer to its own crate 2023-05-29 12:27:39 +02:00
002f42875f fix the fuzzer 2023-05-23 11:42:40 +02:00
22213dc604 push the fuzzer 2023-05-23 09:14:26 +02:00
602ad98cb8 improve the way we handle the fsts 2023-05-22 11:15:14 +02:00
7f619ff0e4 get rid of the now unused soft_deletion_used parameter 2023-05-22 10:33:49 +02:00
4391cba6ca fix the addition + deletion bug 2023-05-17 18:28:57 +02:00
d7ddf4925e Revert "Disable autobatching of additions and deletions"
This reverts commit a94e78ffb0.
2023-05-17 14:25:50 +02:00
151 changed files with 2171 additions and 11263 deletions


@ -1,24 +1,41 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu -o pipefail
# check_tag $current_tag $file_tag $file_name
function check_tag {
if [[ "$1" != "$2" ]]; then
echo "Error: the current tag does not match the version in Cargo.toml: found $2 - expected $1"
ret=1
fi
check_tag() {
local expected=$1
local actual=$2
local filename=$3
if [[ $actual != $expected ]]; then
echo >&2 "Error: the current tag does not match the version in $filename: found $actual, expected $expected"
return 1
fi
}
read_version() {
grep '^version = ' | cut -d \" -f 2
}
if [[ -z "${GITHUB_REF:-}" ]]; then
echo >&2 "Error: GITHUB_REF is not set"
exit 1
fi
if [[ ! "$GITHUB_REF" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(-[a-z0-9]+)?$ ]]; then
echo >&2 "Error: GITHUB_REF is not a valid tag: $GITHUB_REF"
exit 1
fi
current_tag=${GITHUB_REF#refs/tags/v}
ret=0
current_tag=${GITHUB_REF#'refs/tags/v'}
file_tag="$(grep '^version = ' Cargo.toml | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')"
check_tag $current_tag $file_tag
toml_tag="$(cat Cargo.toml | read_version)"
check_tag "$current_tag" "$toml_tag" Cargo.toml || ret=1
lock_file='Cargo.lock'
lock_tag=$(grep -A 1 'name = "meilisearch-auth"' $lock_file | grep version | cut -d '=' -f 2 | tr -d '"' | tr -d ' ')
check_tag $current_tag $lock_tag $lock_file
lock_tag=$(grep -A 1 '^name = "meilisearch-auth"' Cargo.lock | read_version)
check_tag "$current_tag" "$lock_tag" Cargo.lock || ret=1
if [[ "$ret" -eq 0 ]] ; then
echo 'OK'
if (( ret == 0 )); then
echo 'OK'
fi
exit $ret

.github/workflows/fuzzer-indexing.yml (new file, 24 lines, vendored)

@ -0,0 +1,24 @@
name: Run the indexing fuzzer
on:
push:
branches:
- main
jobs:
fuzz:
name: Setup the action
runs-on: ubuntu-latest
timeout-minutes: 4320 # 72h
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
# Run benchmarks
- name: Run the fuzzer
run: |
cargo run --release --bin fuzz-indexing


@ -25,7 +25,7 @@ jobs:
- name: Define the Docker image we need to use
id: define-image
run: |
event=${{ github.event.action }}
event=${{ github.event_name }}
echo "docker-image=nightly" >> $GITHUB_OUTPUT
if [[ $event == 'workflow_dispatch' ]]; then
echo "docker-image=${{ github.event.inputs.docker_image }}" >> $GITHUB_OUTPUT
@ -37,7 +37,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -72,7 +72,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -99,7 +99,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -130,7 +130,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -155,7 +155,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -185,7 +185,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}
@ -210,7 +210,7 @@ jobs:
runs-on: ubuntu-latest
services:
meilisearch:
image: getmeili/meilisearch:${{ github.event.inputs.docker_image }}
image: getmeili/meilisearch:${{ needs.define-docker-image.outputs.docker-image }}
env:
MEILI_MASTER_KEY: ${{ env.MEILI_MASTER_KEY }}
MEILI_NO_ANALYTICS: ${{ env.MEILI_NO_ANALYTICS }}

Cargo.lock (generated, 756 lines changed): diff suppressed because it is too large


@ -10,10 +10,12 @@ members = [
"file-store",
"permissive-json-pointer",
"milli",
"index-stats",
"filter-parser",
"flatten-serde-json",
"json-depth-checker",
"benchmarks"
"benchmarks",
"fuzzers",
]
[workspace.package]

fuzzers/Cargo.toml (new file, 20 lines)

@ -0,0 +1,20 @@
[package]
name = "fuzzers"
publish = false
version.workspace = true
authors.workspace = true
description.workspace = true
homepage.workspace = true
readme.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
arbitrary = { version = "1.3.0", features = ["derive"] }
clap = { version = "4.3.0", features = ["derive"] }
fastrand = "1.9.0"
milli = { path = "../milli" }
serde = { version = "1.0.160", features = ["derive"] }
serde_json = { version = "1.0.95", features = ["preserve_order"] }
tempfile = "3.5.0"

fuzzers/README.md (new file, 3 lines)

@ -0,0 +1,3 @@
# Fuzzers
The purpose of this crate is to contain all the handmade "fuzzers" we may need.


@ -0,0 +1,152 @@
use std::num::NonZeroUsize;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::time::Duration;
use arbitrary::{Arbitrary, Unstructured};
use clap::Parser;
use fuzzers::Operation;
use milli::heed::EnvOpenOptions;
use milli::update::{IndexDocuments, IndexDocumentsConfig, IndexerConfig};
use milli::Index;
use tempfile::TempDir;
#[derive(Debug, Arbitrary)]
struct Batch([Operation; 5]);
#[derive(Debug, Clone, Parser)]
struct Opt {
/// The number of fuzzers to run in parallel.
#[clap(long)]
par: Option<NonZeroUsize>,
// We need to put a lot of newlines in the following documentation or else everything gets collapsed on one line
/// The path in which the databases will be created.
/// Using a ramdisk is recommended.
///
/// Linux:
///
/// sudo mount -t tmpfs -o size=2g tmpfs ramdisk # to create it
///
/// sudo umount ramdisk # to remove it
///
/// MacOS:
///
/// diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://4194304` # create it
///
/// hdiutil detach /dev/:the_disk
#[clap(long)]
path: Option<PathBuf>,
}
fn main() {
let opt = Opt::parse();
let progression: &'static AtomicUsize = Box::leak(Box::new(AtomicUsize::new(0)));
let stop: &'static AtomicBool = Box::leak(Box::new(AtomicBool::new(false)));
let par = opt.par.unwrap_or_else(|| std::thread::available_parallelism().unwrap()).get();
let mut handles = Vec::with_capacity(par);
for _ in 0..par {
let opt = opt.clone();
let handle = std::thread::spawn(move || {
let mut options = EnvOpenOptions::new();
options.map_size(1024 * 1024 * 1024 * 1024);
let tempdir = match opt.path {
Some(path) => TempDir::new_in(path).unwrap(),
None => TempDir::new().unwrap(),
};
let index = Index::new(options, tempdir.path()).unwrap();
let indexer_config = IndexerConfig::default();
let index_documents_config = IndexDocumentsConfig::default();
std::thread::scope(|s| {
loop {
if stop.load(Ordering::Relaxed) {
return;
}
let v: Vec<u8> =
std::iter::repeat_with(|| fastrand::u8(..)).take(1000).collect();
let mut data = Unstructured::new(&v);
let batches = <[Batch; 5]>::arbitrary(&mut data).unwrap();
// will be used to display the error once a thread crashes
let dbg_input = format!("{:#?}", batches);
let handle = s.spawn(|| {
let mut wtxn = index.write_txn().unwrap();
for batch in batches {
let mut builder = IndexDocuments::new(
&mut wtxn,
&index,
&indexer_config,
index_documents_config.clone(),
|_| (),
|| false,
)
.unwrap();
for op in batch.0 {
match op {
Operation::AddDoc(doc) => {
let documents =
milli::documents::objects_from_json_value(doc.to_d());
let documents =
milli::documents::documents_batch_reader_from_objects(
documents,
);
let (b, _added) = builder.add_documents(documents).unwrap();
builder = b;
}
Operation::DeleteDoc(id) => {
let (b, _removed) =
builder.remove_documents(vec![id.to_s()]).unwrap();
builder = b;
}
}
}
builder.execute().unwrap();
// after executing a batch we check if the database is corrupted
let res = index.search(&wtxn).execute().unwrap();
index.documents(&wtxn, res.documents_ids).unwrap();
progression.fetch_add(1, Ordering::Relaxed);
}
wtxn.abort().unwrap();
});
if let err @ Err(_) = handle.join() {
stop.store(true, Ordering::Relaxed);
err.expect(&dbg_input);
}
}
});
});
handles.push(handle);
}
std::thread::spawn(|| {
let mut last_value = 0;
let start = std::time::Instant::now();
loop {
let total = progression.load(Ordering::Relaxed);
let elapsed = start.elapsed().as_secs();
if elapsed > 3600 {
// after 1 hour, stop the fuzzer, success
std::process::exit(0);
}
println!(
"Has been running for {:?} seconds. Tested {} new values for a total of {}.",
elapsed,
total - last_value,
total
);
last_value = total;
std::thread::sleep(Duration::from_secs(1));
}
});
for handle in handles {
handle.join().unwrap();
}
}

fuzzers/src/lib.rs (new file, 46 lines)

@ -0,0 +1,46 @@
use arbitrary::Arbitrary;
use serde_json::{json, Value};
#[derive(Debug, Arbitrary)]
pub enum Document {
One,
Two,
Three,
Four,
Five,
Six,
}
impl Document {
pub fn to_d(&self) -> Value {
match self {
Document::One => json!({ "id": 0, "doggo": "bernese" }),
Document::Two => json!({ "id": 0, "doggo": "golden" }),
Document::Three => json!({ "id": 0, "catto": "jorts" }),
Document::Four => json!({ "id": 1, "doggo": "bernese" }),
Document::Five => json!({ "id": 1, "doggo": "golden" }),
Document::Six => json!({ "id": 1, "catto": "jorts" }),
}
}
}
#[derive(Debug, Arbitrary)]
pub enum DocId {
Zero,
One,
}
impl DocId {
pub fn to_s(&self) -> String {
match self {
DocId::Zero => "0".to_string(),
DocId::One => "1".to_string(),
}
}
}
#[derive(Debug, Arbitrary)]
pub enum Operation {
AddDoc(Document),
DeleteDoc(DocId),
}


@ -160,7 +160,7 @@ impl BatchKind {
impl BatchKind {
/// Returns a `ControlFlow::Break` if you must stop right now.
/// The boolean tell you if an index has been created by the batched task.
/// To ease the writting of the code. `true` can be returned when you don't need to create an index
/// To ease the writing of the code. `true` can be returned when you don't need to create an index
/// but false can't be returned if you needs to create an index.
// TODO use an AutoBatchKind as input
pub fn new(
@ -214,7 +214,7 @@ impl BatchKind {
/// Returns a `ControlFlow::Break` if you must stop right now.
/// The boolean tell you if an index has been created by the batched task.
/// To ease the writting of the code. `true` can be returned when you don't need to create an index
/// To ease the writing of the code. `true` can be returned when you don't need to create an index
/// but false can't be returned if you needs to create an index.
#[rustfmt::skip]
fn accumulate(self, id: TaskId, kind: AutobatchKind, index_already_exists: bool, primary_key: Option<&str>) -> ControlFlow<BatchKind, BatchKind> {
@ -321,9 +321,18 @@ impl BatchKind {
})
}
(
this @ BatchKind::DocumentOperation { .. },
BatchKind::DocumentOperation { method, allow_index_creation, primary_key, mut operation_ids },
K::DocumentDeletion,
) => Break(this),
) => {
operation_ids.push(id);
Continue(BatchKind::DocumentOperation {
method,
allow_index_creation,
primary_key,
operation_ids,
})
}
// but we can't autobatch documents if it's not the same kind
// this match branch MUST be AFTER the previous one
(
@ -346,7 +355,35 @@ impl BatchKind {
deletion_ids.push(id);
Continue(BatchKind::DocumentClear { ids: deletion_ids })
}
// we can't autobatch a deletion and an import
// we can autobatch the deletion and import if the index already exists
(
BatchKind::DocumentDeletion { mut deletion_ids },
K::DocumentImport { method, allow_index_creation, primary_key }
) if index_already_exists => {
deletion_ids.push(id);
Continue(BatchKind::DocumentOperation {
method,
allow_index_creation,
primary_key,
operation_ids: deletion_ids,
})
}
// we can autobatch the deletion and import if both can't create an index
(
BatchKind::DocumentDeletion { mut deletion_ids },
K::DocumentImport { method, allow_index_creation, primary_key }
) if !allow_index_creation => {
deletion_ids.push(id);
Continue(BatchKind::DocumentOperation {
method,
allow_index_creation,
primary_key,
operation_ids: deletion_ids,
})
}
// we can't autobatch a deletion and an import if the index does not exists but would be created by an addition
(
this @ BatchKind::DocumentDeletion { .. },
K::DocumentImport { .. }
@ -648,36 +685,36 @@ mod tests {
debug_snapshot!(autobatch_from(false,None, [settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false,None, [settings(false), settings(false), settings(false)]), @"Some((Settings { allow_index_creation: false, settings_ids: [0, 1, 2] }, false))");
// We can't autobatch document addition with document deletion
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0] }, true))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0] }, true))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0] }, true))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0] }, false))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0] }, true))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0] }, true))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0] }, true))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0] }, false))"###);
// we also can't do the only way around
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, true, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, true, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, false, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, false, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, true, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, true, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, false, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, false, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(ReplaceDocuments, false, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(UpdateDocuments, false, None)]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(ReplaceDocuments, false, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(UpdateDocuments, false, Some("catto"))]), @"Some((DocumentDeletion { deletion_ids: [0] }, false))");
// We can autobatch document addition with document deletion
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, true))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, true))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(ReplaceDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(true, None, [doc_imp(UpdateDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, true, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, true))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, false, None), doc_del()]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, true))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, true, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, true))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(ReplaceDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_imp(UpdateDocuments, false, Some("catto")), doc_del()]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
// And the other way around
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, true, None)]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, true, None)]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, false, None)]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, false, None)]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, true, Some("catto"))]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, true, Some("catto"))]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: true, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(ReplaceDocuments, false, Some("catto"))]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(true, None, [doc_del(), doc_imp(UpdateDocuments, false, Some("catto"))]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(ReplaceDocuments, false, None)]), @"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(UpdateDocuments, false, None)]), @"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: None, operation_ids: [0, 1] }, false))");
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(ReplaceDocuments, false, Some("catto"))]), @r###"Some((DocumentOperation { method: ReplaceDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
debug_snapshot!(autobatch_from(false, None, [doc_del(), doc_imp(UpdateDocuments, false, Some("catto"))]), @r###"Some((DocumentOperation { method: UpdateDocuments, allow_index_creation: false, primary_key: Some("catto"), operation_ids: [0, 1] }, false))"###);
}
#[test]


@ -998,7 +998,7 @@ impl IndexScheduler {
}()
.unwrap_or_default();
// The write transaction is directly owned and commited inside.
// The write transaction is directly owned and committed inside.
match self.index_mapper.delete_index(wtxn, &index_uid) {
Ok(()) => (),
Err(Error::IndexNotFound(_)) if index_has_been_created => (),


@ -1785,7 +1785,7 @@ mod tests {
assert_eq!(task.kind.as_kind(), k);
}
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "everything_is_succesfully_registered");
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "everything_is_successfully_registered");
}
#[test]
@ -2075,6 +2075,105 @@ mod tests {
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "both_task_succeeded");
}
#[test]
fn document_addition_and_document_deletion() {
let (index_scheduler, mut handle) = IndexScheduler::test(true, vec![]);
let content = r#"[
{ "id": 1, "doggo": "jean bob" },
{ "id": 2, "catto": "jorts" },
{ "id": 3, "doggo": "bork" }
]"#;
let (uuid, mut file) = index_scheduler.create_update_file_with_uuid(0).unwrap();
let documents_count = read_json(content.as_bytes(), file.as_file_mut()).unwrap();
file.persist().unwrap();
index_scheduler
.register(KindWithContent::DocumentAdditionOrUpdate {
index_uid: S("doggos"),
primary_key: Some(S("id")),
method: ReplaceDocuments,
content_file: uuid,
documents_count,
allow_index_creation: true,
})
.unwrap();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "registered_the_first_task");
index_scheduler
.register(KindWithContent::DocumentDeletion {
index_uid: S("doggos"),
documents_ids: vec![S("1"), S("2")],
})
.unwrap();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "registered_the_second_task");
handle.advance_one_successful_batch(); // The addition AND deletion should've been batched together
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "after_processing_the_batch");
let index = index_scheduler.index("doggos").unwrap();
let rtxn = index.read_txn().unwrap();
let field_ids_map = index.fields_ids_map(&rtxn).unwrap();
let field_ids = field_ids_map.ids().collect::<Vec<_>>();
let documents = index
.all_documents(&rtxn)
.unwrap()
.map(|ret| obkv_to_json(&field_ids, &field_ids_map, ret.unwrap().1).unwrap())
.collect::<Vec<_>>();
snapshot!(serde_json::to_string_pretty(&documents).unwrap(), name: "documents");
}
#[test]
fn document_deletion_and_document_addition() {
let (index_scheduler, mut handle) = IndexScheduler::test(true, vec![]);
index_scheduler
.register(KindWithContent::DocumentDeletion {
index_uid: S("doggos"),
documents_ids: vec![S("1"), S("2")],
})
.unwrap();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "registered_the_first_task");
let content = r#"[
{ "id": 1, "doggo": "jean bob" },
{ "id": 2, "catto": "jorts" },
{ "id": 3, "doggo": "bork" }
]"#;
let (uuid, mut file) = index_scheduler.create_update_file_with_uuid(0).unwrap();
let documents_count = read_json(content.as_bytes(), file.as_file_mut()).unwrap();
file.persist().unwrap();
index_scheduler
.register(KindWithContent::DocumentAdditionOrUpdate {
index_uid: S("doggos"),
primary_key: Some(S("id")),
method: ReplaceDocuments,
content_file: uuid,
documents_count,
allow_index_creation: true,
})
.unwrap();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "registered_the_second_task");
// The deletion should have failed because it can't create an index
handle.advance_one_failed_batch();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "after_failing_the_deletion");
// The addition should work
handle.advance_one_successful_batch();
snapshot!(snapshot_index_scheduler(&index_scheduler), name: "after_last_successful_addition");
let index = index_scheduler.index("doggos").unwrap();
let rtxn = index.read_txn().unwrap();
let field_ids_map = index.fields_ids_map(&rtxn).unwrap();
let field_ids = field_ids_map.ids().collect::<Vec<_>>();
let documents = index
.all_documents(&rtxn)
.unwrap()
.map(|ret| obkv_to_json(&field_ids, &field_ids_map, ret.unwrap().1).unwrap())
.collect::<Vec<_>>();
snapshot!(serde_json::to_string_pretty(&documents).unwrap(), name: "documents");
}
#[test]
fn do_not_batch_task_of_different_indexes() {
let (index_scheduler, mut handle) = IndexScheduler::test(true, vec![]);


@ -0,0 +1,43 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: succeeded, details: { received_documents: 3, indexed_documents: Some(3) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
1 {uid: 1, status: succeeded, details: { received_document_ids: 2, deleted_documents: Some(2) }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"documentDeletion" [1,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
doggos: { number_of_documents: 1, field_distribution: {"doggo": 1, "id": 1} }
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@ -0,0 +1,9 @@
---
source: index-scheduler/src/lib.rs
---
[
{
"id": 3,
"doggo": "bork"
}
]


@ -0,0 +1,37 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 3, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_documents: 3, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
1 {uid: 1, status: enqueued, details: { received_document_ids: 2, deleted_documents: None }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [0,]
"documentDeletion" [1,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@ -0,0 +1,43 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: failed, error: ResponseError { code: 200, message: "Index `doggos` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { received_document_ids: 2, deleted_documents: Some(0) }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
1 {uid: 1, status: enqueued, details: { received_documents: 3, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [1,]
failed [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"documentDeletion" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------


@ -0,0 +1,46 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: failed, error: ResponseError { code: 200, message: "Index `doggos` not found.", error_code: "index_not_found", error_type: "invalid_request", error_link: "https://docs.meilisearch.com/errors#index_not_found" }, details: { received_document_ids: 2, deleted_documents: Some(0) }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
1 {uid: 1, status: succeeded, details: { received_documents: 3, indexed_documents: Some(3) }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued []
succeeded [1,]
failed [0,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"documentDeletion" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
doggos: { number_of_documents: 3, field_distribution: {"catto": 1, "doggo": 2, "id": 3} }
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Finished At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@ -0,0 +1,17 @@
---
source: index-scheduler/src/lib.rs
---
[
{
"id": 1,
"doggo": "jean bob"
},
{
"id": 2,
"catto": "jorts"
},
{
"id": 3,
"doggo": "bork"
}
]


@ -0,0 +1,36 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_document_ids: 2, deleted_documents: None }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
----------------------------------------------------------------------
### Status:
enqueued [0,]
----------------------------------------------------------------------
### Kind:
"documentDeletion" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
----------------------------------------------------------------------


@ -0,0 +1,40 @@
---
source: index-scheduler/src/lib.rs
---
### Autobatching Enabled = true
### Processing Tasks:
[]
----------------------------------------------------------------------
### All Tasks:
0 {uid: 0, status: enqueued, details: { received_document_ids: 2, deleted_documents: None }, kind: DocumentDeletion { index_uid: "doggos", documents_ids: ["1", "2"] }}
1 {uid: 1, status: enqueued, details: { received_documents: 3, indexed_documents: None }, kind: DocumentAdditionOrUpdate { index_uid: "doggos", primary_key: Some("id"), method: ReplaceDocuments, content_file: 00000000-0000-0000-0000-000000000000, documents_count: 3, allow_index_creation: true }}
----------------------------------------------------------------------
### Status:
enqueued [0,1,]
----------------------------------------------------------------------
### Kind:
"documentAdditionOrUpdate" [1,]
"documentDeletion" [0,]
----------------------------------------------------------------------
### Index Tasks:
doggos [0,1,]
----------------------------------------------------------------------
### Index Mapper:
----------------------------------------------------------------------
### Canceled By:
----------------------------------------------------------------------
### Enqueued At:
[timestamp] [0,]
[timestamp] [1,]
----------------------------------------------------------------------
### Started At:
----------------------------------------------------------------------
### Finished At:
----------------------------------------------------------------------
### File Store:
00000000-0000-0000-0000-000000000000
----------------------------------------------------------------------

index-stats/Cargo.toml (new file, 12 lines)

@ -0,0 +1,12 @@
[package]
name = "index-stats"
description = "A small program that computes internal stats of a Meilisearch index"
version = "0.1.0"
edition = "2021"
publish = false
[dependencies]
anyhow = "1.0.71"
clap = { version = "4.3.5", features = ["derive"] }
milli = { path = "../milli" }
piechart = "1.0.0"

index-stats/src/main.rs (new file, 224 lines)

@ -0,0 +1,224 @@
use std::cmp::Reverse;
use std::path::PathBuf;
use clap::Parser;
use milli::heed::{types::ByteSlice, EnvOpenOptions, PolyDatabase, RoTxn};
use milli::index::db_name::*;
use milli::index::Index;
use piechart::{Chart, Color, Data};
/// A small program that computes internal stats of a Meilisearch index
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
/// The path to the LMDB Meilisearch index database.
path: PathBuf,
/// The radius of the graphs
#[clap(long, default_value_t = 10)]
graph_radius: u16,
/// The aspect ratio of the graphs
#[clap(long, default_value_t = 6)]
graph_aspect_ratio: u16,
}
fn main() -> anyhow::Result<()> {
let Args { path, graph_radius, graph_aspect_ratio } = Args::parse();
let env = EnvOpenOptions::new().max_dbs(24).open(path)?;
// TODO not sure to keep that...
// if removed put the pub(crate) back in the Index struct
matches!(
Option::<Index>::None,
Some(Index {
env: _,
main: _,
word_docids: _,
exact_word_docids: _,
word_prefix_docids: _,
exact_word_prefix_docids: _,
word_pair_proximity_docids: _,
word_prefix_pair_proximity_docids: _,
prefix_word_pair_proximity_docids: _,
word_position_docids: _,
word_fid_docids: _,
field_id_word_count_docids: _,
word_prefix_position_docids: _,
word_prefix_fid_docids: _,
script_language_docids: _,
facet_id_exists_docids: _,
facet_id_is_null_docids: _,
facet_id_is_empty_docids: _,
facet_id_f64_docids: _,
facet_id_string_docids: _,
field_id_docid_facet_f64s: _,
field_id_docid_facet_strings: _,
documents: _,
})
);
let mut wtxn = env.write_txn()?;
let main = env.create_poly_database(&mut wtxn, Some(MAIN))?;
let word_docids = env.create_poly_database(&mut wtxn, Some(WORD_DOCIDS))?;
let exact_word_docids = env.create_poly_database(&mut wtxn, Some(EXACT_WORD_DOCIDS))?;
let word_prefix_docids = env.create_poly_database(&mut wtxn, Some(WORD_PREFIX_DOCIDS))?;
let exact_word_prefix_docids =
env.create_poly_database(&mut wtxn, Some(EXACT_WORD_PREFIX_DOCIDS))?;
let word_pair_proximity_docids =
env.create_poly_database(&mut wtxn, Some(WORD_PAIR_PROXIMITY_DOCIDS))?;
let script_language_docids =
env.create_poly_database(&mut wtxn, Some(SCRIPT_LANGUAGE_DOCIDS))?;
let word_prefix_pair_proximity_docids =
env.create_poly_database(&mut wtxn, Some(WORD_PREFIX_PAIR_PROXIMITY_DOCIDS))?;
let prefix_word_pair_proximity_docids =
env.create_poly_database(&mut wtxn, Some(PREFIX_WORD_PAIR_PROXIMITY_DOCIDS))?;
let word_position_docids = env.create_poly_database(&mut wtxn, Some(WORD_POSITION_DOCIDS))?;
let word_fid_docids = env.create_poly_database(&mut wtxn, Some(WORD_FIELD_ID_DOCIDS))?;
let field_id_word_count_docids =
env.create_poly_database(&mut wtxn, Some(FIELD_ID_WORD_COUNT_DOCIDS))?;
let word_prefix_position_docids =
env.create_poly_database(&mut wtxn, Some(WORD_PREFIX_POSITION_DOCIDS))?;
let word_prefix_fid_docids =
env.create_poly_database(&mut wtxn, Some(WORD_PREFIX_FIELD_ID_DOCIDS))?;
let facet_id_f64_docids = env.create_poly_database(&mut wtxn, Some(FACET_ID_F64_DOCIDS))?;
let facet_id_string_docids =
env.create_poly_database(&mut wtxn, Some(FACET_ID_STRING_DOCIDS))?;
let facet_id_exists_docids =
env.create_poly_database(&mut wtxn, Some(FACET_ID_EXISTS_DOCIDS))?;
let facet_id_is_null_docids =
env.create_poly_database(&mut wtxn, Some(FACET_ID_IS_NULL_DOCIDS))?;
let facet_id_is_empty_docids =
env.create_poly_database(&mut wtxn, Some(FACET_ID_IS_EMPTY_DOCIDS))?;
let field_id_docid_facet_f64s =
env.create_poly_database(&mut wtxn, Some(FIELD_ID_DOCID_FACET_F64S))?;
let field_id_docid_facet_strings =
env.create_poly_database(&mut wtxn, Some(FIELD_ID_DOCID_FACET_STRINGS))?;
let documents = env.create_poly_database(&mut wtxn, Some(DOCUMENTS))?;
wtxn.commit()?;
let list = [
(main, MAIN),
(word_docids, WORD_DOCIDS),
(exact_word_docids, EXACT_WORD_DOCIDS),
(word_prefix_docids, WORD_PREFIX_DOCIDS),
(exact_word_prefix_docids, EXACT_WORD_PREFIX_DOCIDS),
(word_pair_proximity_docids, WORD_PAIR_PROXIMITY_DOCIDS),
(script_language_docids, SCRIPT_LANGUAGE_DOCIDS),
(word_prefix_pair_proximity_docids, WORD_PREFIX_PAIR_PROXIMITY_DOCIDS),
(prefix_word_pair_proximity_docids, PREFIX_WORD_PAIR_PROXIMITY_DOCIDS),
(word_position_docids, WORD_POSITION_DOCIDS),
(word_fid_docids, WORD_FIELD_ID_DOCIDS),
(field_id_word_count_docids, FIELD_ID_WORD_COUNT_DOCIDS),
(word_prefix_position_docids, WORD_PREFIX_POSITION_DOCIDS),
(word_prefix_fid_docids, WORD_PREFIX_FIELD_ID_DOCIDS),
(facet_id_f64_docids, FACET_ID_F64_DOCIDS),
(facet_id_string_docids, FACET_ID_STRING_DOCIDS),
(facet_id_exists_docids, FACET_ID_EXISTS_DOCIDS),
(facet_id_is_null_docids, FACET_ID_IS_NULL_DOCIDS),
(facet_id_is_empty_docids, FACET_ID_IS_EMPTY_DOCIDS),
(field_id_docid_facet_f64s, FIELD_ID_DOCID_FACET_F64S),
(field_id_docid_facet_strings, FIELD_ID_DOCID_FACET_STRINGS),
(documents, DOCUMENTS),
];
let rtxn = env.read_txn()?;
let result: Result<Vec<_>, _> =
list.into_iter().map(|(db, name)| compute_stats(&rtxn, db).map(|s| (s, name))).collect();
let mut stats = result?;
println!("{:1$} Number of Entries", "", graph_radius as usize * 2);
stats.sort_by_key(|(s, _)| Reverse(s.number_of_entries));
let data = compute_graph_data(stats.iter().map(|(s, n)| (s.number_of_entries as f32, *n)));
Chart::new().radius(graph_radius).aspect_ratio(graph_aspect_ratio).draw(&data);
display_legend(&data);
print!("\r\n");
println!("{:1$} Size of Entries", "", graph_radius as usize * 2);
stats.sort_by_key(|(s, _)| Reverse(s.size_of_entries));
let data = compute_graph_data(stats.iter().map(|(s, n)| (s.size_of_entries as f32, *n)));
Chart::new().radius(graph_radius).aspect_ratio(graph_aspect_ratio).draw(&data);
display_legend(&data);
print!("\r\n");
println!("{:1$} Size of Data", "", graph_radius as usize * 2);
stats.sort_by_key(|(s, _)| Reverse(s.size_of_data));
let data = compute_graph_data(stats.iter().map(|(s, n)| (s.size_of_data as f32, *n)));
Chart::new().radius(graph_radius).aspect_ratio(graph_aspect_ratio).draw(&data);
display_legend(&data);
print!("\r\n");
println!("{:1$} Size of Keys", "", graph_radius as usize * 2);
stats.sort_by_key(|(s, _)| Reverse(s.size_of_keys));
let data = compute_graph_data(stats.iter().map(|(s, n)| (s.size_of_keys as f32, *n)));
Chart::new().radius(graph_radius).aspect_ratio(graph_aspect_ratio).draw(&data);
display_legend(&data);
Ok(())
}
fn display_legend(data: &[Data]) {
let total: f32 = data.iter().map(|d| d.value).sum();
for Data { label, value, color, fill } in data {
println!(
"{} {} {:.02}%",
color.unwrap().paint(fill.to_string()),
label,
value / total * 100.0
);
}
}
fn compute_graph_data<'a>(stats: impl IntoIterator<Item = (f32, &'a str)>) -> Vec<Data> {
let mut colors = [
Color::Red,
Color::Green,
Color::Yellow,
Color::Blue,
Color::Purple,
Color::Cyan,
Color::White,
]
.into_iter()
.cycle();
let mut characters = ['▴', '▵', '▾', '▿', '▪', '▫', '•', '◦'].into_iter().cycle();
stats
.into_iter()
.map(|(value, name)| Data {
label: (*name).into(),
value,
color: Some(colors.next().unwrap().into()),
fill: characters.next().unwrap(),
})
.collect()
}
#[derive(Debug)]
pub struct Stats {
pub number_of_entries: u64,
pub size_of_keys: u64,
pub size_of_data: u64,
pub size_of_entries: u64,
}
fn compute_stats(rtxn: &RoTxn, db: PolyDatabase) -> anyhow::Result<Stats> {
let mut number_of_entries = 0;
let mut size_of_keys = 0;
let mut size_of_data = 0;
for result in db.iter::<_, ByteSlice, ByteSlice>(rtxn)? {
let (key, data) = result?;
number_of_entries += 1;
size_of_keys += key.len() as u64;
size_of_data += data.len() as u64;
}
Ok(Stats {
number_of_entries,
size_of_keys,
size_of_data,
size_of_entries: size_of_keys + size_of_data,
})
}


@ -240,8 +240,6 @@ InvalidSearchOffset , InvalidRequest , BAD_REQUEST ;
InvalidSearchPage , InvalidRequest , BAD_REQUEST ;
InvalidSearchQ , InvalidRequest , BAD_REQUEST ;
InvalidSearchShowMatchesPosition , InvalidRequest , BAD_REQUEST ;
InvalidSearchShowRankingScore , InvalidRequest , BAD_REQUEST ;
InvalidSearchShowRankingScoreDetails , InvalidRequest , BAD_REQUEST ;
InvalidSearchSort , InvalidRequest , BAD_REQUEST ;
InvalidSettingsDisplayedAttributes , InvalidRequest , BAD_REQUEST ;
InvalidSettingsDistinctAttribute , InvalidRequest , BAD_REQUEST ;

View File

@ -56,10 +56,6 @@ pub struct SearchQueryGet {
sort: Option<String>,
#[deserr(default, error = DeserrQueryParamError<InvalidSearchShowMatchesPosition>)]
show_matches_position: Param<bool>,
#[deserr(default, error = DeserrQueryParamError<InvalidSearchShowRankingScore>)]
show_ranking_score: Param<bool>,
#[deserr(default, error = DeserrQueryParamError<InvalidSearchShowRankingScoreDetails>)]
show_ranking_score_details: Param<bool>,
#[deserr(default, error = DeserrQueryParamError<InvalidSearchFacets>)]
facets: Option<CS<String>>,
#[deserr( default = DEFAULT_HIGHLIGHT_PRE_TAG(), error = DeserrQueryParamError<InvalidSearchHighlightPreTag>)]
@ -95,8 +91,6 @@ impl From<SearchQueryGet> for SearchQuery {
filter,
sort: other.sort.map(|attr| fix_sort_query_parameters(&attr)),
show_matches_position: other.show_matches_position.0,
show_ranking_score: other.show_ranking_score.0,
show_ranking_score_details: other.show_ranking_score_details.0,
facets: other.facets.map(|o| o.into_iter().collect()),
highlight_pre_tag: other.highlight_pre_tag,
highlight_post_tag: other.highlight_post_tag,

View File

@ -9,7 +9,6 @@ use meilisearch_auth::IndexSearchRules;
use meilisearch_types::deserr::DeserrJsonError;
use meilisearch_types::error::deserr_codes::*;
use meilisearch_types::index_uid::IndexUid;
use meilisearch_types::milli::score_details::{ScoreDetails, ScoringStrategy};
use meilisearch_types::settings::DEFAULT_PAGINATION_MAX_TOTAL_HITS;
use meilisearch_types::{milli, Document};
use milli::tokenizer::TokenizerBuilder;
@ -55,10 +54,6 @@ pub struct SearchQuery {
pub attributes_to_highlight: Option<HashSet<String>>,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowMatchesPosition>, default)]
pub show_matches_position: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowRankingScore>, default)]
pub show_ranking_score: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowRankingScoreDetails>, default)]
pub show_ranking_score_details: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchFilter>)]
pub filter: Option<Value>,
#[deserr(default, error = DeserrJsonError<InvalidSearchSort>)]
@ -108,10 +103,6 @@ pub struct SearchQueryWithIndex {
pub crop_length: usize,
#[deserr(default, error = DeserrJsonError<InvalidSearchAttributesToHighlight>)]
pub attributes_to_highlight: Option<HashSet<String>>,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowRankingScore>, default)]
pub show_ranking_score: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowRankingScoreDetails>, default)]
pub show_ranking_score_details: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchShowMatchesPosition>, default)]
pub show_matches_position: bool,
#[deserr(default, error = DeserrJsonError<InvalidSearchFilter>)]
@ -143,8 +134,6 @@ impl SearchQueryWithIndex {
attributes_to_crop,
crop_length,
attributes_to_highlight,
show_ranking_score,
show_ranking_score_details,
show_matches_position,
filter,
sort,
@ -166,8 +155,6 @@ impl SearchQueryWithIndex {
attributes_to_crop,
crop_length,
attributes_to_highlight,
show_ranking_score,
show_ranking_score_details,
show_matches_position,
filter,
sort,
@ -207,7 +194,7 @@ impl From<MatchingStrategy> for TermsMatchingStrategy {
}
}
#[derive(Debug, Clone, Serialize, PartialEq)]
#[derive(Debug, Clone, Serialize, PartialEq, Eq)]
pub struct SearchHit {
#[serde(flatten)]
pub document: Document,
@ -215,10 +202,6 @@ pub struct SearchHit {
pub formatted: Document,
#[serde(rename = "_matchesPosition", skip_serializing_if = "Option::is_none")]
pub matches_position: Option<MatchesPosition>,
#[serde(rename = "_rankingScore", skip_serializing_if = "Option::is_none")]
pub ranking_score: Option<f64>,
#[serde(rename = "_rankingScoreDetails", skip_serializing_if = "Option::is_none")]
pub ranking_score_details: Option<serde_json::Map<String, serde_json::Value>>,
}
#[derive(Serialize, Debug, Clone, PartialEq)]
@ -300,11 +283,6 @@ pub fn perform_search(
.unwrap_or(DEFAULT_PAGINATION_MAX_TOTAL_HITS);
search.exhaustive_number_hits(is_finite_pagination);
search.scoring_strategy(if query.show_ranking_score || query.show_ranking_score_details {
ScoringStrategy::Detailed
} else {
ScoringStrategy::Skip
});
// compute the offset on the limit depending on the pagination mode.
let (offset, limit) = if is_finite_pagination {
@ -342,8 +320,7 @@ pub fn perform_search(
search.sort_criteria(sort);
}
let milli::SearchResult { documents_ids, matching_words, candidates, document_scores, .. } =
search.execute()?;
let milli::SearchResult { documents_ids, matching_words, candidates, .. } = search.execute()?;
let fields_ids_map = index.fields_ids_map(&rtxn).unwrap();
@ -415,7 +392,7 @@ pub fn perform_search(
let documents_iter = index.documents(&rtxn, documents_ids)?;
for ((_id, obkv), score) in documents_iter.into_iter().zip(document_scores.into_iter()) {
for (_id, obkv) in documents_iter {
// First generate a document with all the displayed fields
let displayed_document = make_document(&displayed_ids, &fields_ids_map, obkv)?;
@ -439,18 +416,7 @@ pub fn perform_search(
insert_geo_distance(sort, &mut document);
}
let ranking_score =
query.show_ranking_score.then(|| ScoreDetails::global_score(score.iter()));
let ranking_score_details =
query.show_ranking_score_details.then(|| ScoreDetails::to_json_map(score.iter()));
let hit = SearchHit {
document,
formatted,
matches_position,
ranking_score_details,
ranking_score,
};
let hit = SearchHit { document, formatted, matches_position };
documents.push(hit);
}

View File

@ -1,4 +1,3 @@
use insta::{allow_duplicates, assert_json_snapshot};
use serde_json::json;
use super::*;
@ -19,43 +18,30 @@ async fn formatted_contain_wildcard() {
|response, code|
{
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"_formatted": {
"id": "852",
"cattos": "<em>pésti</em>"
},
"_matchesPosition": {
"cattos": [
{
"start": 0,
"length": 5
}
]
}
}
"###);
}
}
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
"cattos": "<em>pésti</em>",
},
"_matchesPosition": {"cattos": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
index
.search(json!({ "q": "pésti", "attributesToRetrieve": ["*"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"cattos": "pésti"
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pésti",
})
);
})
.await;
@ -64,29 +50,20 @@ async fn formatted_contain_wildcard() {
json!({ "q": "pésti", "attributesToRetrieve": ["*"], "attributesToHighlight": ["id"], "showMatchesPosition": true }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti"
},
"_matchesPosition": {
"cattos": [
{
"start": 0,
"length": 5
}
]
}
}
"###)
}
})
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti",
},
"_matchesPosition": {"cattos": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
index
@ -94,20 +71,17 @@ async fn formatted_contain_wildcard() {
json!({ "q": "pésti", "attributesToRetrieve": ["*"], "attributesToCrop": ["*"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti"
}
}
"###);
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti",
}
})
);
},
)
.await;
@ -115,20 +89,17 @@ async fn formatted_contain_wildcard() {
index
.search(json!({ "q": "pésti", "attributesToCrop": ["*"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"cattos": "pésti",
"_formatted": {
"id": "852",
"cattos": "pésti",
}
})
);
})
.await;
}
@ -145,24 +116,21 @@ async fn format_nested() {
index
.search(json!({ "q": "pésti", "attributesToRetrieve": ["doggos"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"doggos": [
{
"name": "bobby",
"age": 2
},
{
"name": "buddy",
"age": 4
}
]
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
"age": 2,
},
{
"name": "buddy",
"age": 4,
},
],
})
);
})
.await;
@ -171,22 +139,19 @@ async fn format_nested() {
json!({ "q": "pésti", "attributesToRetrieve": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"doggos": [
{
"name": "bobby"
},
{
"name": "buddy"
}
]
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
})
);
},
)
.await;
@ -196,30 +161,20 @@ async fn format_nested() {
json!({ "q": "bobby", "attributesToRetrieve": ["doggos.name"], "showMatchesPosition": true }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"doggos": [
{
"name": "bobby"
},
{
"name": "buddy"
}
],
"_matchesPosition": {
"doggos.name": [
{
"start": 0,
"length": 5
}
]
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
"_matchesPosition": {"doggos.name": [{"start": 0, "length": 5}]},
})
);
}
)
.await;
@ -228,24 +183,21 @@ async fn format_nested() {
.search(json!({ "q": "pésti", "attributesToRetrieve": [], "attributesToHighlight": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"_formatted": {
"doggos": [
{
"name": "bobby"
},
{
"name": "buddy"
}
]
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
},
})
);
})
.await;
@ -253,24 +205,21 @@ async fn format_nested() {
.search(json!({ "q": "pésti", "attributesToRetrieve": [], "attributesToCrop": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"_formatted": {
"doggos": [
{
"name": "bobby"
},
{
"name": "buddy"
}
]
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"name": "bobby",
},
{
"name": "buddy",
},
],
},
})
);
})
.await;
@ -278,61 +227,55 @@ async fn format_nested() {
.search(json!({ "q": "pésti", "attributesToRetrieve": ["doggos.name"], "attributesToHighlight": ["doggos.age"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"doggos": [
{
"name": "bobby"
},
{
"name": "buddy"
}
],
"_formatted": {
assert_eq!(
response["hits"][0],
json!({
"doggos": [
{
"name": "bobby",
"age": "2"
},
{
"name": "buddy",
"age": "4"
}
]
}
}
"###)
}
})
{
"name": "bobby",
},
{
"name": "buddy",
},
],
"_formatted": {
"doggos": [
{
"name": "bobby",
"age": "2",
},
{
"name": "buddy",
"age": "4",
},
],
},
})
);
})
.await;
index
.search(json!({ "q": "pésti", "attributesToRetrieve": [], "attributesToHighlight": ["doggos.age"], "attributesToCrop": ["doggos.name"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"doggos": [
{
"_formatted": {
"doggos": [
{
"name": "bobby",
"age": "2"
},
{
"name": "buddy",
"age": "4"
}
]
}
}
"###)
}
"name": "bobby",
"age": "2",
},
{
"name": "buddy",
"age": "4",
},
],
},
})
);
}
)
.await;
@ -354,66 +297,54 @@ async fn displayedattr_2_smol() {
.search(json!({ "attributesToRetrieve": ["father", "id"], "attributesToHighlight": ["mother"], "attributesToCrop": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
})
.await;
index
.search(json!({ "attributesToRetrieve": ["id"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
})
.await;
index
.search(json!({ "attributesToHighlight": ["id"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"_formatted": {
"id": "852"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
})
.await;
index
.search(json!({ "attributesToCrop": ["id"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"_formatted": {
"id": "852"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
})
.await;
@ -422,18 +353,15 @@ async fn displayedattr_2_smol() {
json!({ "attributesToHighlight": ["id"], "attributesToCrop": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852,
"_formatted": {
"id": "852"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
@ -441,41 +369,31 @@ async fn displayedattr_2_smol() {
index
.search(json!({ "attributesToHighlight": ["cattos"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
})
.await;
index
.search(json!({ "attributesToCrop": ["cattos"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"id": 852
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"id": 852,
})
);
})
.await;
index
.search(json!({ "attributesToRetrieve": ["cattos"] }), |response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@"{}")
}
assert_eq!(response["hits"][0], json!({}));
})
.await;
@ -484,11 +402,7 @@ async fn displayedattr_2_smol() {
json!({ "attributesToRetrieve": ["cattos"], "attributesToHighlight": ["cattos"], "attributesToCrop": ["cattos"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@"{}")
}
assert_eq!(response["hits"][0], json!({}));
}
)
@ -499,17 +413,14 @@ async fn displayedattr_2_smol() {
json!({ "attributesToRetrieve": ["cattos"], "attributesToHighlight": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"_formatted": {
"id": "852"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
}
})
);
},
)
.await;
@ -519,17 +430,14 @@ async fn displayedattr_2_smol() {
json!({ "attributesToRetrieve": ["cattos"], "attributesToCrop": ["id"] }),
|response, code| {
assert_eq!(code, 200, "{}", response);
allow_duplicates! {
assert_json_snapshot!(response["hits"][0],
{ "._rankingScore" => "[score]" },
@r###"
{
"_formatted": {
"id": "852"
}
}
"###)
}
assert_eq!(
response["hits"][0],
json!({
"_formatted": {
"id": "852",
}
})
);
},
)
.await;

View File

@ -65,7 +65,7 @@ async fn simple_search_single_index() {
]}))
.await;
snapshot!(code, @"200 OK");
insta::assert_json_snapshot!(response["results"], { "[].processingTimeMs" => "[time]", ".**._rankingScore" => "[score]" }, @r###"
insta::assert_json_snapshot!(response["results"], { "[].processingTimeMs" => "[time]" }, @r###"
[
{
"indexUid": "test",
@ -170,7 +170,7 @@ async fn simple_search_two_indexes() {
]}))
.await;
snapshot!(code, @"200 OK");
insta::assert_json_snapshot!(response["results"], { "[].processingTimeMs" => "[time]", ".**._rankingScore" => "[score]" }, @r###"
insta::assert_json_snapshot!(response["results"], { "[].processingTimeMs" => "[time]" }, @r###"
[
{
"indexUid": "test",

View File

@ -75,9 +75,6 @@ maplit = "1.0.2"
md5 = "0.7.0"
rand = { version = "0.8.5", features = ["small_rng"] }
[target.'cfg(fuzzing)'.dev-dependencies]
fuzzcheck = "0.12.1"
[features]
all-tokenizations = ["charabia/default"]

View File

@ -53,7 +53,6 @@ fn main() -> Result<(), Box<dyn Error>> {
&mut ctx,
&(!query.trim().is_empty()).then(|| query.trim().to_owned()),
TermsMatchingStrategy::Last,
milli::score_details::ScoringStrategy::Skip,
false,
&None,
&None,

View File

@ -111,7 +111,6 @@ pub enum Error {
Io(#[from] io::Error),
}
#[cfg(test)]
pub fn objects_from_json_value(json: serde_json::Value) -> Vec<crate::Object> {
let documents = match json {
object @ serde_json::Value::Object(_) => vec![object],
@ -141,7 +140,6 @@ macro_rules! documents {
}};
}
#[cfg(test)]
pub fn documents_batch_reader_from_objects(
objects: impl IntoIterator<Item = Object>,
) -> DocumentsBatchReader<std::io::Cursor<Vec<u8>>> {

View File

@ -106,22 +106,30 @@ impl<'a> ExternalDocumentsIds<'a> {
map
}
/// Return an fst of the combined hard and soft deleted ID.
pub fn to_fst<'b>(&'b self) -> fst::Result<Cow<'b, fst::Map<Cow<'a, [u8]>>>> {
if self.soft.is_empty() {
return Ok(Cow::Borrowed(&self.hard));
}
let union_op = self.hard.op().add(&self.soft).r#union();
let mut iter = union_op.into_stream();
let mut new_hard_builder = fst::MapBuilder::memory();
while let Some((external_id, marked_docids)) = iter.next() {
let value = indexed_last_value(marked_docids).unwrap();
if value != DELETED_ID {
new_hard_builder.insert(external_id, value)?;
}
}
drop(iter);
Ok(Cow::Owned(new_hard_builder.into_map().map_data(Cow::Owned)?))
}
fn merge_soft_into_hard(&mut self) -> fst::Result<()> {
if self.soft.len() >= self.hard.len() / 2 {
let union_op = self.hard.op().add(&self.soft).r#union();
let mut iter = union_op.into_stream();
let mut new_hard_builder = fst::MapBuilder::memory();
while let Some((external_id, marked_docids)) = iter.next() {
let value = indexed_last_value(marked_docids).unwrap();
if value != DELETED_ID {
new_hard_builder.insert(external_id, value)?;
}
}
drop(iter);
self.hard = new_hard_builder.into_map().map_data(Cow::Owned)?;
self.hard = self.to_fst()?.into_owned();
self.soft = fst::Map::default().map_data(Cow::Owned)?;
}
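The union-then-rebuild pattern above is standard `fst` usage; a minimal sketch of the same idea on two in-memory maps, where the value from the second map wins for duplicate keys (the `merge_last_wins` name is illustrative, and the real code additionally drops entries whose value is `DELETED_ID`):
use fst::{IntoStreamer, Streamer};

fn merge_last_wins(hard: &fst::Map<Vec<u8>>, soft: &fst::Map<Vec<u8>>) -> fst::Result<fst::Map<Vec<u8>>> {
    let mut builder = fst::MapBuilder::memory();
    // Stream the union of both maps; keys come out in lexicographic order,
    // which is exactly what MapBuilder::insert requires.
    let mut stream = hard.op().add(soft).union().into_stream();
    while let Some((key, values)) = stream.next() {
        // `values` holds one IndexedValue per source map containing the key;
        // pick the one with the highest index, i.e. the most recently added map.
        let value = values.iter().max_by_key(|v| v.index).unwrap().value;
        builder.insert(key, value)?;
    }
    Ok(builder.into_map())
}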

View File

@ -93,10 +93,10 @@ pub mod db_name {
#[derive(Clone)]
pub struct Index {
/// The LMDB environment which this index is associated with.
pub(crate) env: heed::Env,
pub env: heed::Env,
/// Contains many different types (e.g. the fields ids map).
pub(crate) main: PolyDatabase,
pub main: PolyDatabase,
/// A word and all the documents ids containing the word.
pub word_docids: Database<Str, RoaringBitmapCodec>,
@ -150,7 +150,7 @@ pub struct Index {
pub field_id_docid_facet_strings: Database<FieldDocIdFacetStringCodec, Str>,
/// Maps the document id to the document as an obkv store.
pub(crate) documents: Database<OwnedType<BEU32>, ObkvCodec>,
pub documents: Database<OwnedType<BEU32>, ObkvCodec>,
}
impl Index {
@ -2488,12 +2488,8 @@ pub(crate) mod tests {
let rtxn = index.read_txn().unwrap();
let search = Search::new(&rtxn, &index);
let SearchResult {
matching_words: _,
candidates: _,
document_scores: _,
mut documents_ids,
} = search.execute().unwrap();
let SearchResult { matching_words: _, candidates: _, mut documents_ids } =
search.execute().unwrap();
let primary_key_id = index.fields_ids_map(&rtxn).unwrap().id("primary_key").unwrap();
documents_ids.sort_unstable();
let docs = index.documents(&rtxn, documents_ids).unwrap();

View File

@ -17,7 +17,6 @@ mod fields_ids_map;
pub mod heed_codec;
pub mod index;
pub mod proximity;
pub mod score_details;
mod search;
pub mod update;

View File

@ -1,316 +0,0 @@
use serde::Serialize;
use crate::distance_between_two_points;
#[derive(Debug, Clone, PartialEq)]
pub enum ScoreDetails {
Words(Words),
Typo(Typo),
Proximity(Rank),
Fid(Rank),
Position(Rank),
ExactAttribute(ExactAttribute),
Exactness(Rank),
Sort(Sort),
GeoSort(GeoSort),
}
impl ScoreDetails {
pub fn local_score(&self) -> Option<f64> {
self.rank().map(Rank::local_score)
}
pub fn rank(&self) -> Option<Rank> {
match self {
ScoreDetails::Words(details) => Some(details.rank()),
ScoreDetails::Typo(details) => Some(details.rank()),
ScoreDetails::Proximity(details) => Some(*details),
ScoreDetails::Fid(details) => Some(*details),
ScoreDetails::Position(details) => Some(*details),
ScoreDetails::ExactAttribute(details) => Some(details.rank()),
ScoreDetails::Exactness(details) => Some(*details),
ScoreDetails::Sort(_) => None,
ScoreDetails::GeoSort(_) => None,
}
}
pub fn global_score<'a>(details: impl Iterator<Item = &'a Self>) -> f64 {
Rank::global_score(details.filter_map(Self::rank))
}
/// Panics
///
/// - If Position is not preceded by Fid
/// - If Exactness is not preceded by ExactAttribute
pub fn to_json_map<'a>(
details: impl Iterator<Item = &'a Self>,
) -> serde_json::Map<String, serde_json::Value> {
let mut order = 0;
let mut fid_details = None;
let mut details_map = serde_json::Map::default();
for details in details {
match details {
ScoreDetails::Words(words) => {
let words_details = serde_json::json!({
"order": order,
"matchingWords": words.matching_words,
"maxMatchingWords": words.max_matching_words,
"score": words.rank().local_score(),
});
details_map.insert("words".into(), words_details);
order += 1;
}
ScoreDetails::Typo(typo) => {
let typo_details = serde_json::json!({
"order": order,
"typoCount": typo.typo_count,
"maxTypoCount": typo.max_typo_count,
"score": typo.rank().local_score(),
});
details_map.insert("typo".into(), typo_details);
order += 1;
}
ScoreDetails::Proximity(proximity) => {
let proximity_details = serde_json::json!({
"order": order,
"score": proximity.local_score(),
});
details_map.insert("proximity".into(), proximity_details);
order += 1;
}
ScoreDetails::Fid(fid) => {
// copy the rank for future use in Position.
fid_details = Some(*fid);
// For now, fid is a virtual rule always followed by the "position" rule
let fid_details = serde_json::json!({
"order": order,
"attributes_ranking_order": fid.local_score(),
});
details_map.insert("attribute".into(), fid_details);
order += 1;
}
ScoreDetails::Position(position) => {
// For now, position is a virtual rule always preceded by the "fid" rule
let attribute_details = details_map
.get_mut("attribute")
.expect("position not preceded by attribute");
let attribute_details = attribute_details
.as_object_mut()
.expect("attribute details was not an object");
let Some(fid_details) = fid_details
else {
panic!("position not preceded by attribute");
};
attribute_details.insert(
"attributes_query_word_order".into(),
position.local_score().into(),
);
let score = Rank::global_score([fid_details, *position].iter().copied());
attribute_details.insert("score".into(), score.into());
// do not update the order since this was already done by fid
}
ScoreDetails::ExactAttribute(exact_attribute) => {
let exactness_details = serde_json::json!({
"order": order,
"matchType": exact_attribute,
"score": exact_attribute.rank().local_score(),
});
details_map.insert("exactness".into(), exactness_details);
order += 1;
}
ScoreDetails::Exactness(details) => {
// For now, exactness is a virtual rule always preceded by the "ExactAttribute" rule
let exactness_details = details_map
.get_mut("exactness")
.expect("Exactness not preceded by exactAttribute");
let exactness_details = exactness_details
.as_object_mut()
.expect("exactness details was not an object");
if exactness_details.get("matchType").expect("missing 'matchType'")
== &serde_json::json!(ExactAttribute::NoExactMatch)
{
let score = Rank::global_score(
[ExactAttribute::NoExactMatch.rank(), *details].iter().copied(),
);
*exactness_details.get_mut("score").expect("missing score") = score.into();
}
// do not update the order since this was already done by exactAttribute
}
ScoreDetails::Sort(details) => {
let sort = if details.redacted {
format!("<hidden-rule-{order}>")
} else {
format!(
"{}:{}",
details.field_name,
if details.ascending { "asc" } else { "desc" }
)
};
let value =
if details.redacted { "<hidden>".into() } else { details.value.clone() };
let sort_details = serde_json::json!({
"order": order,
"value": value,
});
details_map.insert(sort, sort_details);
order += 1;
}
ScoreDetails::GeoSort(details) => {
let sort = format!(
"_geoPoint({}, {}):{}",
details.target_point[0],
details.target_point[1],
if details.ascending { "asc" } else { "desc" }
);
let point = if let Some(value) = details.value {
serde_json::json!({ "lat": value[0], "lng": value[1]})
} else {
serde_json::Value::Null
};
let sort_details = serde_json::json!({
"order": order,
"value": point,
"distance": details.distance(),
});
details_map.insert(sort, sort_details);
order += 1;
}
}
}
details_map
}
}
/// The strategy to compute scores.
///
/// It makes sense to pass down this strategy to the internals of the search, because
/// some optimizations (today, mainly skipping ranking rules for universes of a single document)
/// are not correct to do when computing the scores.
///
/// This strategy could feasibly be extended to differentiate between the normalized score and the
/// detailed scores, but it is not useful today as the normalized score is *derived from* the
/// detailed scores.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum ScoringStrategy {
/// Don't compute scores
#[default]
Skip,
/// Compute detailed scores
Detailed,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Words {
pub matching_words: u32,
pub max_matching_words: u32,
}
impl Words {
pub fn rank(&self) -> Rank {
Rank { rank: self.matching_words, max_rank: self.max_matching_words }
}
pub(crate) fn from_rank(rank: Rank) -> Words {
Words { matching_words: rank.rank, max_matching_words: rank.max_rank }
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Typo {
pub typo_count: u32,
pub max_typo_count: u32,
}
impl Typo {
pub fn rank(&self) -> Rank {
Rank {
rank: self.max_typo_count - self.typo_count + 1,
max_rank: (self.max_typo_count + 1),
}
}
// max_rank = max_typo + 1
// max_typo = max_rank - 1
//
// rank = max_typo - typo + 1
// rank = max_rank - 1 - typo + 1
// rank + typo = max_rank
// typo = max_rank - rank
pub fn from_rank(rank: Rank) -> Typo {
Typo { typo_count: rank.max_rank - rank.rank, max_typo_count: rank.max_rank - 1 }
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Rank {
/// The ordinal rank, such that `max_rank` is the first rank, and 0 is the last rank.
///
/// The higher the better. Documents with a rank of 0 have a score of 0 and are typically never returned
/// (they don't match the query).
pub rank: u32,
/// The maximum possible rank. Documents with this rank have a score of 1.
///
/// The max rank should not be 0.
pub max_rank: u32,
}
impl Rank {
pub fn local_score(self) -> f64 {
self.rank as f64 / self.max_rank as f64
}
pub fn global_score(details: impl Iterator<Item = Self>) -> f64 {
let mut rank = Rank { rank: 1, max_rank: 1 };
for inner_rank in details {
rank.rank -= 1;
rank.rank *= inner_rank.max_rank;
rank.max_rank *= inner_rank.max_rank;
rank.rank += inner_rank.rank;
}
rank.local_score()
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum ExactAttribute {
ExactMatch,
MatchesStart,
NoExactMatch,
}
impl ExactAttribute {
pub fn rank(&self) -> Rank {
let rank = match self {
ExactAttribute::ExactMatch => 3,
ExactAttribute::MatchesStart => 2,
ExactAttribute::NoExactMatch => 1,
};
Rank { rank, max_rank: 3 }
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct Sort {
pub field_name: String,
pub ascending: bool,
pub redacted: bool,
pub value: serde_json::Value,
}
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct GeoSort {
pub target_point: [f64; 2],
pub ascending: bool,
pub value: Option<[f64; 2]>,
}
impl GeoSort {
pub fn distance(&self) -> Option<f64> {
self.value.map(|value| distance_between_two_points(&self.target_point, &value))
}
}
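For context on the deleted scoring code: `Rank::global_score` folds the per-rule ranks together like the digits of a mixed-radix number, so earlier ranking rules always dominate later ones. A small hand-worked check of that reading, with made-up ranks:
#[test]
fn global_score_is_a_mixed_radix_fold() {
    // First rule: rank 2 out of 3; second rule: rank 1 out of 4.
    let score = Rank::global_score(
        [Rank { rank: 2, max_rank: 3 }, Rank { rank: 1, max_rank: 4 }].into_iter(),
    );
    // Fold: start (1, 1) -> (2, 3) -> ((2 - 1) * 4 + 1, 3 * 4) = (5, 12).
    assert_eq!(score, 5.0 / 12.0);
}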

View File

@ -7,7 +7,6 @@ use roaring::bitmap::RoaringBitmap;
pub use self::facet::{FacetDistribution, Filter, DEFAULT_VALUES_PER_FACET};
pub use self::new::matches::{FormatOptions, MatchBounds, Matcher, MatcherBuilder, MatchingWords};
use self::new::PartialSearchResult;
use crate::score_details::{ScoreDetails, ScoringStrategy};
use crate::{
execute_search, AscDesc, DefaultSearchLogger, DocumentId, Index, Result, SearchContext,
};
@ -30,7 +29,6 @@ pub struct Search<'a> {
sort_criteria: Option<Vec<AscDesc>>,
geo_strategy: new::GeoSortStrategy,
terms_matching_strategy: TermsMatchingStrategy,
scoring_strategy: ScoringStrategy,
words_limit: usize,
exhaustive_number_hits: bool,
rtxn: &'a heed::RoTxn<'a>,
@ -47,7 +45,6 @@ impl<'a> Search<'a> {
sort_criteria: None,
geo_strategy: new::GeoSortStrategy::default(),
terms_matching_strategy: TermsMatchingStrategy::default(),
scoring_strategy: Default::default(),
exhaustive_number_hits: false,
words_limit: 10,
rtxn,
@ -80,11 +77,6 @@ impl<'a> Search<'a> {
self
}
pub fn scoring_strategy(&mut self, value: ScoringStrategy) -> &mut Search<'a> {
self.scoring_strategy = value;
self
}
pub fn words_limit(&mut self, value: usize) -> &mut Search<'a> {
self.words_limit = value;
self
@ -101,7 +93,7 @@ impl<'a> Search<'a> {
self
}
/// Forces the search to exhaustively compute the number of candidates,
/// Force the search to exhastivelly compute the number of candidates,
/// this will increase the search time but allows finite pagination.
pub fn exhaustive_number_hits(&mut self, exhaustive_number_hits: bool) -> &mut Search<'a> {
self.exhaustive_number_hits = exhaustive_number_hits;
@ -110,12 +102,11 @@ impl<'a> Search<'a> {
pub fn execute(&self) -> Result<SearchResult> {
let mut ctx = SearchContext::new(self.index, self.rtxn);
let PartialSearchResult { located_query_terms, candidates, documents_ids, document_scores } =
let PartialSearchResult { located_query_terms, candidates, documents_ids } =
execute_search(
&mut ctx,
&self.query,
self.terms_matching_strategy,
self.scoring_strategy,
self.exhaustive_number_hits,
&self.filter,
&self.sort_criteria,
@ -133,7 +124,7 @@ impl<'a> Search<'a> {
None => MatchingWords::default(),
};
Ok(SearchResult { matching_words, candidates, document_scores, documents_ids })
Ok(SearchResult { matching_words, candidates, documents_ids })
}
}
@ -147,7 +138,6 @@ impl fmt::Debug for Search<'_> {
sort_criteria,
geo_strategy: _,
terms_matching_strategy,
scoring_strategy,
words_limit,
exhaustive_number_hits,
rtxn: _,
@ -160,7 +150,6 @@ impl fmt::Debug for Search<'_> {
.field("limit", limit)
.field("sort_criteria", sort_criteria)
.field("terms_matching_strategy", terms_matching_strategy)
.field("scoring_strategy", scoring_strategy)
.field("exhaustive_number_hits", exhaustive_number_hits)
.field("words_limit", words_limit)
.finish()
@ -171,8 +160,8 @@ impl fmt::Debug for Search<'_> {
pub struct SearchResult {
pub matching_words: MatchingWords,
pub candidates: RoaringBitmap,
// TODO those documents ids should be associated with their criteria scores.
pub documents_ids: Vec<DocumentId>,
pub document_scores: Vec<Vec<ScoreDetails>>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]

View File

@ -3,18 +3,14 @@ use roaring::RoaringBitmap;
use super::logger::SearchLogger;
use super::ranking_rules::{BoxRankingRule, RankingRuleQueryTrait};
use super::SearchContext;
use crate::score_details::{ScoreDetails, ScoringStrategy};
use crate::search::new::distinct::{apply_distinct_rule, distinct_single_docid, DistinctOutput};
use crate::Result;
pub struct BucketSortOutput {
pub docids: Vec<u32>,
pub scores: Vec<Vec<ScoreDetails>>,
pub all_candidates: RoaringBitmap,
}
// TODO: would probably be good to regroup some of these inside of a struct?
#[allow(clippy::too_many_arguments)]
pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
ctx: &mut SearchContext<'ctx>,
mut ranking_rules: Vec<BoxRankingRule<'ctx, Q>>,
@ -22,7 +18,6 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
universe: &RoaringBitmap,
from: usize,
length: usize,
scoring_strategy: ScoringStrategy,
logger: &mut dyn SearchLogger<Q>,
) -> Result<BucketSortOutput> {
logger.initial_query(query);
@ -36,11 +31,7 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
};
if universe.len() < from as u64 {
return Ok(BucketSortOutput {
docids: vec![],
scores: vec![],
all_candidates: universe.clone(),
});
return Ok(BucketSortOutput { docids: vec![], all_candidates: universe.clone() });
}
if ranking_rules.is_empty() {
if let Some(distinct_fid) = distinct_fid {
@ -58,32 +49,22 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
}
let mut all_candidates = universe - excluded;
all_candidates.extend(results.iter().copied());
return Ok(BucketSortOutput {
scores: vec![Default::default(); results.len()],
docids: results,
all_candidates,
});
return Ok(BucketSortOutput { docids: results, all_candidates });
} else {
let docids: Vec<u32> = universe.iter().skip(from).take(length).collect();
return Ok(BucketSortOutput {
scores: vec![Default::default(); docids.len()],
docids,
all_candidates: universe.clone(),
});
let docids = universe.iter().skip(from).take(length).collect();
return Ok(BucketSortOutput { docids, all_candidates: universe.clone() });
};
}
let ranking_rules_len = ranking_rules.len();
logger.start_iteration_ranking_rule(0, ranking_rules[0].as_ref(), query, universe);
ranking_rules[0].start_iteration(ctx, logger, universe, query)?;
let mut ranking_rule_scores: Vec<ScoreDetails> = vec![];
let mut ranking_rule_universes: Vec<RoaringBitmap> =
vec![RoaringBitmap::default(); ranking_rules_len];
ranking_rule_universes[0] = universe.clone();
let mut cur_ranking_rule_index = 0;
/// Finish iterating over the current ranking rule, yielding
@ -108,16 +89,11 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
} else {
cur_ranking_rule_index -= 1;
}
// FIXME: check off by one
if ranking_rule_scores.len() > cur_ranking_rule_index {
ranking_rule_scores.pop();
}
};
}
let mut all_candidates = universe.clone();
let mut valid_docids = vec![];
let mut valid_scores = vec![];
let mut cur_offset = 0usize;
macro_rules! maybe_add_to_results {
@ -128,26 +104,21 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
length,
logger,
&mut valid_docids,
&mut valid_scores,
&mut all_candidates,
&mut ranking_rule_universes,
&mut ranking_rules,
cur_ranking_rule_index,
&mut cur_offset,
distinct_fid,
&ranking_rule_scores,
$candidates,
)?;
};
}
while valid_docids.len() < length {
// The universe for this bucket is zero, so we don't need to sort
// anything, just go back to the parent ranking rule.
if ranking_rule_universes[cur_ranking_rule_index].is_empty()
|| (scoring_strategy == ScoringStrategy::Skip
&& ranking_rule_universes[cur_ranking_rule_index].len() == 1)
{
// The universe for this bucket is zero or one element, so we don't need to sort
// anything, just extend the results and go back to the parent ranking rule.
if ranking_rule_universes[cur_ranking_rule_index].len() <= 1 {
let bucket = std::mem::take(&mut ranking_rule_universes[cur_ranking_rule_index]);
maybe_add_to_results!(bucket);
back!();
@ -159,8 +130,6 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
continue;
};
ranking_rule_scores.push(next_bucket.score);
logger.next_bucket_ranking_rule(
cur_ranking_rule_index,
ranking_rules[cur_ranking_rule_index].as_ref(),
@ -174,12 +143,10 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
ranking_rule_universes[cur_ranking_rule_index] -= &next_bucket.candidates;
if cur_ranking_rule_index == ranking_rules_len - 1
|| (scoring_strategy == ScoringStrategy::Skip && next_bucket.candidates.len() <= 1)
|| next_bucket.candidates.len() <= 1
|| cur_offset + (next_bucket.candidates.len() as usize) < from
{
maybe_add_to_results!(next_bucket.candidates);
// FIXME: use index based logic like all the other rules so that you don't have to maintain the pop/push?
ranking_rule_scores.pop();
continue;
}
@ -199,7 +166,7 @@ pub fn bucket_sort<'ctx, Q: RankingRuleQueryTrait>(
)?;
}
Ok(BucketSortOutput { docids: valid_docids, scores: valid_scores, all_candidates })
Ok(BucketSortOutput { docids: valid_docids, all_candidates })
}
/// Add the candidates to the results. Take `distinct`, `from`, `length`, and `cur_offset`
@ -212,18 +179,14 @@ fn maybe_add_to_results<'ctx, Q: RankingRuleQueryTrait>(
logger: &mut dyn SearchLogger<Q>,
valid_docids: &mut Vec<u32>,
valid_scores: &mut Vec<Vec<ScoreDetails>>,
all_candidates: &mut RoaringBitmap,
ranking_rule_universes: &mut [RoaringBitmap],
ranking_rules: &mut [BoxRankingRule<'ctx, Q>],
cur_ranking_rule_index: usize,
cur_offset: &mut usize,
distinct_fid: Option<u16>,
ranking_rule_scores: &[ScoreDetails],
candidates: RoaringBitmap,
) -> Result<()> {
// First apply the distinct rule on the candidates, reducing the universes if necessary
@ -268,17 +231,13 @@ fn maybe_add_to_results<'ctx, Q: RankingRuleQueryTrait>(
let candidates =
candidates.iter().take(length - valid_docids.len()).copied().collect::<Vec<_>>();
logger.add_to_results(&candidates);
valid_docids.extend_from_slice(&candidates);
valid_scores
.extend(std::iter::repeat(ranking_rule_scores.to_owned()).take(candidates.len()));
valid_docids.extend(&candidates);
}
} else {
// if we have passed the offset already, add some of the documents (up to the limit)
let candidates = candidates.iter().take(length - valid_docids.len()).collect::<Vec<u32>>();
logger.add_to_results(&candidates);
valid_docids.extend_from_slice(&candidates);
valid_scores
.extend(std::iter::repeat(ranking_rule_scores.to_owned()).take(candidates.len()));
valid_docids.extend(&candidates);
}
*cur_offset += candidates.len() as usize;

View File

@ -26,7 +26,6 @@ pub fn apply_distinct_rule(
ctx: &mut SearchContext,
field_id: u16,
candidates: &RoaringBitmap,
// TODO: add a universe here, such that the `excluded` are a subset of the universe?
) -> Result<DistinctOutput> {
let mut excluded = RoaringBitmap::new();
let mut remaining = RoaringBitmap::new();

View File

@ -2,7 +2,6 @@ use roaring::{MultiOps, RoaringBitmap};
use super::query_graph::QueryGraph;
use super::ranking_rules::{RankingRule, RankingRuleOutput};
use crate::score_details::{self, ScoreDetails};
use crate::search::new::query_graph::QueryNodeData;
use crate::search::new::query_term::ExactTerm;
use crate::{Result, SearchContext, SearchLogger};
@ -207,7 +206,7 @@ impl State {
)?;
intersection &= &candidates;
if !intersection.is_empty() {
// TODO: although not really worth it in terms of performance,
// Although not really worth it in terms of performance,
// it would be good to put this in cache for the sake of consistency
let candidates_with_exact_word_count = if count_all_positions < u8::MAX as usize {
ctx.index
@ -245,13 +244,7 @@ impl State {
candidates &= universe;
(
State::AttributeStarts(query_graph.clone(), candidates_per_attribute),
Some(RankingRuleOutput {
query: query_graph,
candidates,
score: ScoreDetails::ExactAttribute(
score_details::ExactAttribute::ExactMatch,
),
}),
Some(RankingRuleOutput { query: query_graph, candidates }),
)
}
State::AttributeStarts(query_graph, candidates_per_attribute) => {
@ -264,24 +257,12 @@ impl State {
candidates &= universe;
(
State::Empty(query_graph.clone()),
Some(RankingRuleOutput {
query: query_graph,
candidates,
score: ScoreDetails::ExactAttribute(
score_details::ExactAttribute::MatchesStart,
),
}),
Some(RankingRuleOutput { query: query_graph, candidates }),
)
}
State::Empty(query_graph) => (
State::Empty(query_graph.clone()),
Some(RankingRuleOutput {
query: query_graph,
candidates: universe.clone(),
score: ScoreDetails::ExactAttribute(
score_details::ExactAttribute::NoExactMatch,
),
}),
Some(RankingRuleOutput { query: query_graph, candidates: universe.clone() }),
),
};
(state, output)

View File

@ -8,7 +8,6 @@ use rstar::RTree;
use super::ranking_rules::{RankingRule, RankingRuleOutput, RankingRuleQueryTrait};
use crate::heed_codec::facet::{FieldDocIdFacetCodec, OrderedF64Codec};
use crate::score_details::{self, ScoreDetails};
use crate::{
distance_between_two_points, lat_lng_to_xyz, GeoPoint, Index, Result, SearchContext,
SearchLogger,
@ -81,7 +80,7 @@ pub struct GeoSort<Q: RankingRuleQueryTrait> {
field_ids: Option<[u16; 2]>,
rtree: Option<RTree<GeoPoint>>,
cached_sorted_docids: VecDeque<(u32, [f64; 2])>,
cached_sorted_docids: VecDeque<u32>,
geo_candidates: RoaringBitmap,
}
@ -131,7 +130,7 @@ impl<Q: RankingRuleQueryTrait> GeoSort<Q> {
let point = lat_lng_to_xyz(&self.point);
for point in rtree.nearest_neighbor_iter(&point) {
if self.geo_candidates.contains(point.data.0) {
self.cached_sorted_docids.push_back(point.data);
self.cached_sorted_docids.push_back(point.data.0);
if self.cached_sorted_docids.len() >= cache_size {
break;
}
@ -143,7 +142,7 @@ impl<Q: RankingRuleQueryTrait> GeoSort<Q> {
let point = lat_lng_to_xyz(&opposite_of(self.point));
for point in rtree.nearest_neighbor_iter(&point) {
if self.geo_candidates.contains(point.data.0) {
self.cached_sorted_docids.push_front(point.data);
self.cached_sorted_docids.push_front(point.data.0);
if self.cached_sorted_docids.len() >= cache_size {
break;
}
@ -178,7 +177,7 @@ impl<Q: RankingRuleQueryTrait> GeoSort<Q> {
// computing the distance between two points is expensive thus we cache the result
documents
.sort_by_cached_key(|(_, p)| distance_between_two_points(&self.point, p) as usize);
self.cached_sorted_docids.extend(documents.into_iter());
self.cached_sorted_docids.extend(documents.into_iter().map(|(doc_id, _)| doc_id));
};
Ok(())
@ -221,19 +220,12 @@ impl<'ctx, Q: RankingRuleQueryTrait> RankingRule<'ctx, Q> for GeoSort<Q> {
logger: &mut dyn SearchLogger<Q>,
universe: &RoaringBitmap,
) -> Result<Option<RankingRuleOutput<Q>>> {
assert!(universe.len() > 1);
let query = self.query.as_ref().unwrap().clone();
self.geo_candidates &= universe;
if self.geo_candidates.is_empty() {
return Ok(Some(RankingRuleOutput {
query,
candidates: universe.clone(),
score: ScoreDetails::GeoSort(score_details::GeoSort {
target_point: self.point,
ascending: self.ascending,
value: None,
}),
}));
return Ok(Some(RankingRuleOutput { query, candidates: universe.clone() }));
}
let ascending = self.ascending;
@ -244,16 +236,11 @@ impl<'ctx, Q: RankingRuleQueryTrait> RankingRule<'ctx, Q> for GeoSort<Q> {
cache.pop_back()
}
};
while let Some((id, point)) = next(&mut self.cached_sorted_docids) {
while let Some(id) = next(&mut self.cached_sorted_docids) {
if self.geo_candidates.contains(id) {
return Ok(Some(RankingRuleOutput {
query,
candidates: RoaringBitmap::from_iter([id]),
score: ScoreDetails::GeoSort(score_details::GeoSort {
target_point: self.point,
ascending: self.ascending,
value: Some(point),
}),
}));
}
}

View File

@ -50,7 +50,6 @@ use super::ranking_rule_graph::{
};
use super::small_bitmap::SmallBitmap;
use super::{QueryGraph, RankingRule, RankingRuleOutput, SearchContext};
use crate::score_details::Rank;
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::ranking_rule_graph::PathVisitor;
use crate::{Result, TermsMatchingStrategy};
@ -119,8 +118,6 @@ pub struct GraphBasedRankingRuleState<G: RankingRuleGraphTrait> {
all_costs: MappedInterner<QueryNode, Vec<u64>>,
/// An index in the first element of `all_distances`, giving the cost of the next bucket
cur_cost: u64,
/// One above the highest possible cost for this rule
next_max_cost: u64,
}
impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBasedRankingRule<G> {
@ -134,20 +131,7 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
_universe: &RoaringBitmap,
query_graph: &QueryGraph,
) -> Result<()> {
// the `next_max_cost` is the successor integer to the maximum cost of the paths in the graph.
//
// When there is a matching strategy, it also factors the additional costs of:
// 1. The words that are matched in phrases
// 2. Skipping words (by adding them to the paths with a cost)
let mut next_max_cost = 1;
let removal_cost = if let Some(terms_matching_strategy) = self.terms_matching_strategy {
// add the cost of the phrase to the next_max_cost
next_max_cost += query_graph
.words_in_phrases_count(ctx)
// remove 1 from the words in phrases count, because when there is a phrase we can now have a document
// where only the phrase is matching, and none of the non-phrase words.
// With the `1` that `next_max_cost` is initialized with, this gets counted twice.
.saturating_sub(1) as u64;
match terms_matching_strategy {
TermsMatchingStrategy::Last => {
let removal_order =
@ -155,12 +139,13 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
let mut forbidden_nodes =
SmallBitmap::for_interned_values_in(&query_graph.nodes);
let mut costs = query_graph.nodes.map(|_| None);
// FIXME: this works because only words uses termsmatchingstrategy at the moment.
let mut cost = 100;
for ns in removal_order {
for n in ns.iter() {
*costs.get_mut(n) = Some((1, forbidden_nodes.clone()));
*costs.get_mut(n) = Some((cost, forbidden_nodes.clone()));
}
forbidden_nodes.union(&ns);
cost += 100;
}
costs
}
@ -177,16 +162,12 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
// Then pre-compute the cost of all paths from each node to the end node
let all_costs = graph.find_all_costs_to_end();
next_max_cost +=
all_costs.get(graph.query_graph.root_node).iter().copied().max().unwrap_or(0);
let state = GraphBasedRankingRuleState {
graph,
conditions_cache: condition_docids_cache,
dead_ends_cache,
all_costs,
cur_cost: 0,
next_max_cost,
};
self.state = Some(state);
@ -200,13 +181,17 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
logger: &mut dyn SearchLogger<QueryGraph>,
universe: &RoaringBitmap,
) -> Result<Option<RankingRuleOutput<QueryGraph>>> {
// If universe.len() <= 1, the bucket sort algorithm
// should not have called this function.
assert!(universe.len() > 1);
// Will crash if `next_bucket` is called before `start_iteration` or after `end_iteration`,
// should never happen
let mut state = self.state.take().unwrap();
let all_costs = state.all_costs.get(state.graph.query_graph.root_node);
// Retrieve the cost of the paths to compute
let Some(&cost) = all_costs
let Some(&cost) = state
.all_costs
.get(state.graph.query_graph.root_node)
.iter()
.find(|c| **c >= state.cur_cost) else {
self.state = None;
@ -222,12 +207,8 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
dead_ends_cache,
all_costs,
cur_cost: _,
next_max_cost,
} = &mut state;
let rank = *next_max_cost - cost;
let score = G::rank_to_score(Rank { rank: rank as u32, max_rank: *next_max_cost as u32 });
let mut universe = universe.clone();
let mut used_conditions = SmallBitmap::for_interned_values_in(&graph.conditions_interner);
@ -314,6 +295,8 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
// We modify the next query graph so that it only contains the subgraph
// that was used to compute this bucket
// But we only do it in case the bucket length is >1, because otherwise
// we know the child ranking rule won't be called anyway
let paths: Vec<Vec<(Option<LocatedQueryTermSubset>, LocatedQueryTermSubset)>> = good_paths
.into_iter()
@ -342,7 +325,7 @@ impl<'ctx, G: RankingRuleGraphTrait> RankingRule<'ctx, QueryGraph> for GraphBase
self.state = Some(state);
Ok(Some(RankingRuleOutput { query: next_query_graph, candidates: bucket, score }))
Ok(Some(RankingRuleOutput { query: next_query_graph, candidates: bucket }))
}
fn end_iteration(

View File

@ -32,7 +32,7 @@ impl<T> Interned<T> {
#[derive(Clone)]
pub struct DedupInterner<T> {
stable_store: Vec<T>,
lookup: FxHashMap<T, Interned<T>>, // TODO: Arc
lookup: FxHashMap<T, Interned<T>>,
}
impl<T> Default for DedupInterner<T> {
fn default() -> Self {

View File

@ -1,5 +1,4 @@
/// Maximum number of tokens we consider in a single search.
// TODO: Loic, find proper value here so we don't overflow the interner.
pub const MAX_TOKEN_COUNT: usize = 1_000;
/// Maximum number of prefixes that can be derived from a single word.

View File

@ -510,7 +510,6 @@ mod tests {
&mut ctx,
&Some(query.to_string()),
crate::TermsMatchingStrategy::default(),
crate::score_details::ScoringStrategy::Skip,
false,
&None,
&None,

View File

@ -44,7 +44,6 @@ use self::geo_sort::GeoSort;
pub use self::geo_sort::Strategy as GeoSortStrategy;
use self::graph_based_ranking_rule::Words;
use self::interner::Interned;
use crate::score_details::{ScoreDetails, ScoringStrategy};
use crate::search::new::distinct::apply_distinct_rule;
use crate::{AscDesc, DocumentId, Filter, Index, Member, Result, TermsMatchingStrategy, UserError};
@ -351,7 +350,6 @@ pub fn execute_search(
ctx: &mut SearchContext,
query: &Option<String>,
terms_matching_strategy: TermsMatchingStrategy,
scoring_strategy: ScoringStrategy,
exhaustive_number_hits: bool,
filters: &Option<Filter>,
sort_criteria: &Option<Vec<AscDesc>>,
@ -413,16 +411,7 @@ pub fn execute_search(
universe =
resolve_universe(ctx, &universe, &graph, terms_matching_strategy, query_graph_logger)?;
bucket_sort(
ctx,
ranking_rules,
&graph,
&universe,
from,
length,
scoring_strategy,
query_graph_logger,
)?
bucket_sort(ctx, ranking_rules, &graph, &universe, from, length, query_graph_logger)?
} else {
let ranking_rules =
get_ranking_rules_for_placeholder_search(ctx, sort_criteria, geo_strategy)?;
@ -433,20 +422,17 @@ pub fn execute_search(
&universe,
from,
length,
scoring_strategy,
placeholder_search_logger,
)?
};
let BucketSortOutput { docids, scores, mut all_candidates } = bucket_sort_output;
let fields_ids_map = ctx.index.fields_ids_map(ctx.txn)?;
let BucketSortOutput { docids, mut all_candidates } = bucket_sort_output;
// The candidates is the universe unless the exhaustive number of hits
// is requested and a distinct attribute is set.
if exhaustive_number_hits {
if let Some(f) = ctx.index.distinct_field(ctx.txn)? {
if let Some(distinct_fid) = fields_ids_map.id(f) {
if let Some(distinct_fid) = ctx.index.fields_ids_map(ctx.txn)?.id(f) {
all_candidates = apply_distinct_rule(ctx, distinct_fid, &all_candidates)?.remaining;
}
}
@ -454,7 +440,6 @@ pub fn execute_search(
Ok(PartialSearchResult {
candidates: all_candidates,
document_scores: scores,
documents_ids: docids,
located_query_terms,
})
@ -506,5 +491,4 @@ pub struct PartialSearchResult {
pub located_query_terms: Option<Vec<LocatedQueryTerm>>,
pub candidates: RoaringBitmap,
pub documents_ids: Vec<DocumentId>,
pub document_scores: Vec<Vec<ScoreDetails>>,
}

View File

@ -92,7 +92,7 @@ impl QueryGraph {
/// which contains ngrams.
pub fn from_query(
ctx: &mut SearchContext,
// NOTE: the terms here must be consecutive
// The terms here must be consecutive
terms: &[LocatedQueryTerm],
) -> Result<(QueryGraph, Vec<LocatedQueryTerm>)> {
let mut new_located_query_terms = terms.to_vec();
@ -103,7 +103,7 @@ impl QueryGraph {
let root_node = 0;
let end_node = 1;
// TODO: we could consider generalizing to 4,5,6,7,etc. ngrams
// We could consider generalizing to 4,5,6,7,etc. ngrams
let (mut prev2, mut prev1, mut prev0): (Vec<u16>, Vec<u16>, Vec<u16>) =
(vec![], vec![], vec![root_node]);
@ -342,25 +342,6 @@ impl QueryGraph {
}
res
}
/// Number of words in the phrases in this query graph
pub(crate) fn words_in_phrases_count(&self, ctx: &SearchContext) -> usize {
let mut word_count = 0;
for (_, node) in self.nodes.iter() {
match &node.data {
QueryNodeData::Term(term) => {
let Some(phrase) = term.term_subset.original_phrase(ctx)
else {
continue
};
let phrase = ctx.phrase_interner.get(phrase);
word_count += phrase.words.iter().copied().filter(|a| a.is_some()).count()
}
_ => continue,
}
}
word_count
}
}
fn add_node(nodes_data: &mut Vec<QueryNodeData>, node_data: QueryNodeData) -> u16 {

View File

@ -132,7 +132,6 @@ impl QueryTermSubset {
if full_query_term.ngram_words.is_some() {
return None;
}
// TODO: included in subset
if let Some(phrase) = full_query_term.zero_typo.phrase {
self.zero_typo_subset.contains_phrase(phrase).then_some(ExactTerm::Phrase(phrase))
} else if let Some(word) = full_query_term.zero_typo.exact {
@ -182,7 +181,6 @@ impl QueryTermSubset {
let word = match &self.zero_typo_subset {
NTypoTermSubset::All => Some(use_prefix_db),
NTypoTermSubset::Subset { words, phrases: _ } => {
// TODO: use a subset of prefix words instead
if words.contains(&use_prefix_db) {
Some(use_prefix_db)
} else {
@ -204,7 +202,6 @@ impl QueryTermSubset {
ctx: &mut SearchContext,
) -> Result<BTreeSet<Word>> {
let mut result = BTreeSet::default();
// TODO: a compute_partially funtion
if !self.one_typo_subset.is_empty() || !self.two_typo_subset.is_empty() {
self.original.compute_fully_if_needed(ctx)?;
}
@ -300,7 +297,6 @@ impl QueryTermSubset {
let mut result = BTreeSet::default();
if !self.one_typo_subset.is_empty() {
// TODO: compute less than fully if possible
self.original.compute_fully_if_needed(ctx)?;
}
let original = ctx.term_interner.get_mut(self.original);

View File

@ -139,7 +139,6 @@ pub fn number_of_typos_allowed<'ctx>(
let min_len_one_typo = ctx.index.min_word_len_one_typo(ctx.txn)?;
let min_len_two_typos = ctx.index.min_word_len_two_typos(ctx.txn)?;
// TODO: should `exact_words` also disable prefix search, ngrams, split words, or synonyms?
let exact_words = ctx.index.exact_words(ctx.txn)?;
Ok(Box::new(move |word: &str| {
@ -250,8 +249,6 @@ impl PhraseBuilder {
} else {
// token has kind Word
let word = ctx.word_interner.insert(token.lemma().to_string());
// TODO: in a phrase, check that every word exists
// otherwise return an empty term
self.words.push(Some(word));
}
}

View File

@ -49,15 +49,10 @@ impl<G: RankingRuleGraphTrait> RankingRuleGraph<G> {
if let Some((cost_of_ignoring, forbidden_nodes)) =
cost_of_ignoring_node.get(dest_idx)
{
let dest = graph_nodes.get(dest_idx);
let dest_size = match &dest.data {
QueryNodeData::Term(term) => term.term_ids.len(),
_ => panic!(),
};
let new_edge_id = edges_store.insert(Some(Edge {
source_node: source_id,
dest_node: dest_idx,
cost: *cost_of_ignoring * dest_size as u32,
cost: *cost_of_ignoring,
condition: None,
nodes_to_skip: forbidden_nodes.clone(),
}));

View File

@ -1,5 +1,48 @@
#![allow(clippy::too_many_arguments)]
/** Implements a "PathVisitor" which finds all paths of a certain cost
from the START to END node of a ranking rule graph.
A path is a list of conditions. A condition is the data associated with
an edge, given by the ranking rule. Some edges don't have a condition associated
with them, they are "unconditional". These kinds of edges are used to "skip" a node.
The algorithm uses a depth-first search. It benefits from two main optimisations:
- The list of all possible costs to go from any node to the END node is precomputed
- The `DeadEndsCache` reduces the number of valid paths drastically, by making some edges
untraversable depending on what other edges were selected.
These two optimisations are meant to avoid traversing edges that wouldn't lead
to a valid path. In practically all cases, we avoid the exponential complexity
that is inherent to depth-first search in a large ranking rule graph.
The DeadEndsCache is a sort of prefix tree which associates a list of forbidden
conditions to a list of traversed conditions.
For example, the DeadEndsCache could say the following:
- Immediately, from the start, the conditions `[a,b]` are forbidden
- if we take the condition `c`, then the conditions `[e]` are also forbidden
- and if after that, we take `f`, then `[h,i]` are also forbidden
- etc.
- if we take `g`, then `[f]` is also forbidden
- etc.
- etc.
As we traverse the graph, we also traverse the `DeadEndsCache` and keep a list of forbidden
conditions in memory. Then, we know to avoid all edges which have a condition that is forbidden.
When a path is found from START to END, we give it to the `visit` closure.
This closure takes a mutable reference to the `DeadEndsCache`. This means that
the caller can update this cache. Therefore, we must handle the case where the
DeadEndsCache has been updated. This means potentially backtracking up to the point
where the traversed conditions are all allowed by the new DeadEndsCache.
The algorithm also implements the `TermsMatchingStrategy` logic.
Some edges are augmented with a list of "nodes_to_skip". Skipping
a node means "reaching this node through an unconditional edge". If we have
already traversed (i.e. not skipped) a node that is in this list, then we know that we
can't traverse this edge. Otherwise, we traverse the edge but make sure to skip any
future node that was present in the "nodes_to_skip" list.
The caller can decide to stop the path finding algorithm
by returning a `ControlFlow::Break` from the `visit` closure.
*/
use std::collections::{BTreeSet, VecDeque};
use std::iter::FromIterator;
use std::ops::ControlFlow;
@ -12,30 +55,41 @@ use crate::search::new::query_graph::QueryNode;
use crate::search::new::small_bitmap::SmallBitmap;
use crate::Result;
/// Closure which processes a path found by the `PathVisitor`
type VisitFn<'f, G> = &'f mut dyn FnMut(
// the path as a list of conditions
&[Interned<<G as RankingRuleGraphTrait>::Condition>],
&mut RankingRuleGraph<G>,
// a mutable reference to the DeadEndsCache, to update it in case the given
// path doesn't resolve to any valid document ids
&mut DeadEndsCache<<G as RankingRuleGraphTrait>::Condition>,
) -> Result<ControlFlow<()>>;
/// A structure which is kept but not updated during the traversal of the graph.
/// It can however be updated by the `visit` closure once a valid path has been found.
struct VisitorContext<'a, G: RankingRuleGraphTrait> {
graph: &'a mut RankingRuleGraph<G>,
all_costs_from_node: &'a MappedInterner<QueryNode, Vec<u64>>,
dead_ends_cache: &'a mut DeadEndsCache<G::Condition>,
}
/// The internal state of the traversal algorithm
struct VisitorState<G: RankingRuleGraphTrait> {
/// Budget from the current node to the end node
remaining_cost: u64,
/// Previously visited conditions, in order.
path: Vec<Interned<G::Condition>>,
/// Previously visited conditions, as an efficient and compact set.
visited_conditions: SmallBitmap<G::Condition>,
/// Previously visited (i.e. not skipped) nodes, as an efficient and compact set.
visited_nodes: SmallBitmap<QueryNode>,
/// The conditions that cannot be visited anymore
forbidden_conditions: SmallBitmap<G::Condition>,
forbidden_conditions_to_nodes: SmallBitmap<QueryNode>,
/// The nodes that cannot be visited anymore (they must be skipped)
nodes_to_skip: SmallBitmap<QueryNode>,
}
/// See module documentation
pub struct PathVisitor<'a, G: RankingRuleGraphTrait> {
state: VisitorState<G>,
ctx: VisitorContext<'a, G>,
@ -56,14 +110,13 @@ impl<'a, G: RankingRuleGraphTrait> PathVisitor<'a, G> {
forbidden_conditions: SmallBitmap::for_interned_values_in(
&graph.conditions_interner,
),
forbidden_conditions_to_nodes: SmallBitmap::for_interned_values_in(
&graph.query_graph.nodes,
),
nodes_to_skip: SmallBitmap::for_interned_values_in(&graph.query_graph.nodes),
},
ctx: VisitorContext { graph, all_costs_from_node, dead_ends_cache },
}
}
/// See module documentation
pub fn visit_paths(mut self, visit: VisitFn<G>) -> Result<()> {
let _ =
self.state.visit_node(self.ctx.graph.query_graph.root_node, visit, &mut self.ctx)?;
@ -72,22 +125,31 @@ impl<'a, G: RankingRuleGraphTrait> PathVisitor<'a, G> {
}
impl<G: RankingRuleGraphTrait> VisitorState<G> {
/// Visits a node: traverse all its valid conditional and unconditional edges.
///
/// Returns ControlFlow::Break if the path finding algorithm should stop.
/// Returns whether a valid path was found from this node otherwise.
fn visit_node(
&mut self,
from_node: Interned<QueryNode>,
visit: VisitFn<G>,
ctx: &mut VisitorContext<G>,
) -> Result<ControlFlow<(), bool>> {
// whether any valid path was found from this node
// if a valid path was found, then we know that the DeadEndsCache may have been updated,
// and we will need to do more work to potentially backtrack
let mut any_valid = false;
let edges = ctx.graph.edges_of_node.get(from_node).clone();
for edge_idx in edges.iter() {
// could be None if the edge was deleted
let Some(edge) = ctx.graph.edges_store.get(edge_idx).clone() else { continue };
if self.remaining_cost < edge.cost as u64 {
continue;
}
self.remaining_cost -= edge.cost as u64;
let cf = match edge.condition {
Some(condition) => self.visit_condition(
condition,
@ -119,6 +181,10 @@ impl<G: RankingRuleGraphTrait> VisitorState<G> {
Ok(ControlFlow::Continue(any_valid))
}
/// Visits an unconditional edge.
///
/// Returns ControlFlow::Break if the path finding algorithm should stop.
/// Returns whether a valid path was found from this node otherwise.
fn visit_no_condition(
&mut self,
dest_node: Interned<QueryNode>,
@ -134,20 +200,29 @@ impl<G: RankingRuleGraphTrait> VisitorState<G> {
{
return Ok(ControlFlow::Continue(false));
}
// We've reached the END node!
if dest_node == ctx.graph.query_graph.end_node {
let control_flow = visit(&self.path, ctx.graph, ctx.dead_ends_cache)?;
// We could change the return type of the visit closure such that the caller
// tells us whether the dead ends cache was updated or not.
// Alternatively, maybe the DeadEndsCache should have a generation number
// attached to it, so that we don't need to play with these booleans at all.
match control_flow {
ControlFlow::Continue(_) => Ok(ControlFlow::Continue(true)),
ControlFlow::Break(_) => Ok(ControlFlow::Break(())),
}
} else {
let old_fbct = self.forbidden_conditions_to_nodes.clone();
self.forbidden_conditions_to_nodes.union(edge_new_nodes_to_skip);
let old_fbct = self.nodes_to_skip.clone();
self.nodes_to_skip.union(edge_new_nodes_to_skip);
let cf = self.visit_node(dest_node, visit, ctx)?;
self.forbidden_conditions_to_nodes = old_fbct;
self.nodes_to_skip = old_fbct;
Ok(cf)
}
}
/// Visits a conditional edge.
///
/// Returns ControlFlow::Break if the path finding algorithm should stop.
/// Returns whether a valid path was found from this node otherwise.
fn visit_condition(
&mut self,
condition: Interned<G::Condition>,
@ -159,7 +234,7 @@ impl<G: RankingRuleGraphTrait> VisitorState<G> {
assert!(dest_node != ctx.graph.query_graph.end_node);
if self.forbidden_conditions.contains(condition)
|| self.forbidden_conditions_to_nodes.contains(dest_node)
|| self.nodes_to_skip.contains(dest_node)
|| edge_new_nodes_to_skip.intersects(&self.visited_nodes)
{
return Ok(ControlFlow::Continue(false));
@ -180,19 +255,19 @@ impl<G: RankingRuleGraphTrait> VisitorState<G> {
self.visited_nodes.insert(dest_node);
self.visited_conditions.insert(condition);
let old_fc = self.forbidden_conditions.clone();
let old_forb_cond = self.forbidden_conditions.clone();
if let Some(next_forbidden) =
ctx.dead_ends_cache.forbidden_conditions_after_prefix(self.path.iter().copied())
{
self.forbidden_conditions.union(&next_forbidden);
}
let old_fctn = self.forbidden_conditions_to_nodes.clone();
self.forbidden_conditions_to_nodes.union(edge_new_nodes_to_skip);
let old_nodes_to_skip = self.nodes_to_skip.clone();
self.nodes_to_skip.union(edge_new_nodes_to_skip);
let cf = self.visit_node(dest_node, visit, ctx)?;
self.forbidden_conditions_to_nodes = old_fctn;
self.forbidden_conditions = old_fc;
self.nodes_to_skip = old_nodes_to_skip;
self.forbidden_conditions = old_forb_cond;
self.visited_conditions.remove(condition);
self.visited_nodes.remove(dest_node);

View File
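The doc-comment in the hunk above describes the path-finding algorithm: a depth-first search whose branches are pruned by a prefix tree of forbidden conditions (the `DeadEndsCache`) and by the list of nodes to skip. As a rough illustration of that pruning idea only — not the milli implementation, all names below are invented — here is a minimal, self-contained sketch where conditions are plain integers, the graph is a `HashMap`, and the dead-ends information is a map from a traversed prefix to the conditions that become forbidden after it.

```rust
use std::collections::{HashMap, HashSet};

type Cond = u8;

/// Enumerate every path of conditions from `node` to `end`, skipping any
/// condition that the `forbidden` prefix map bans after the current path.
fn visit(
    node: Cond,
    end: Cond,
    graph: &HashMap<Cond, Vec<Cond>>,
    forbidden: &HashMap<Vec<Cond>, HashSet<Cond>>,
    path: &mut Vec<Cond>,
    out: &mut Vec<Vec<Cond>>,
) {
    if node == end {
        out.push(path.clone());
        return;
    }
    // Conditions that the "dead ends" map forbids once `path` has been taken.
    let banned = forbidden.get(path.as_slice()).cloned().unwrap_or_default();
    for &next in graph.get(&node).map(Vec::as_slice).unwrap_or(&[]) {
        if banned.contains(&next) {
            continue; // known dead end after this prefix: don't traverse it
        }
        path.push(next);
        visit(next, end, graph, forbidden, path, out);
        path.pop(); // backtrack: restore the state exactly as it was before the edge
    }
}

fn main() {
    // Tiny graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}; node 3 plays the END role.
    let graph = HashMap::from([(0, vec![1, 2]), (1, vec![3]), (2, vec![3])]);
    // After taking condition 1 from the start, condition 3 becomes forbidden,
    // so the only surviving path is [2, 3].
    let forbidden = HashMap::from([(vec![1], HashSet::from([3]))]);
    let mut out = Vec::new();
    visit(0, 3, &graph, &forbidden, &mut Vec::new(), &mut out);
    assert_eq!(out, vec![vec![2, 3]]);
    println!("{out:?}");
}
```

The push/recurse/pop shape is the same state-saving pattern the `VisitorState` in the diff uses when it clones and restores its bitmaps around each recursive `visit_node` call.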

@ -9,12 +9,8 @@ use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::SearchContext;
use crate::Result;
// TODO: give a generation to each universe, then be able to get the exact
// delta of docids between two universes of different generations!
/// A cache storing the document ids associated with each ranking rule edge
pub struct ConditionDocIdsCache<G: RankingRuleGraphTrait> {
// TODO: should be a mapped interner?
pub cache: FxHashMap<Interned<G::Condition>, ComputedCondition>,
_phantom: PhantomData<G>,
}
@ -54,7 +50,7 @@ impl<G: RankingRuleGraphTrait> ConditionDocIdsCache<G> {
}
let condition = graph.conditions_interner.get_mut(interned_condition);
let computed = G::resolve_condition(ctx, condition, universe)?;
// TODO: if computed.universe_len != universe.len() ?
// Can we put an assert here for computed.universe_len == universe.len() ?
let _ = self.cache.insert(interned_condition, computed);
let computed = &self.cache[&interned_condition];
Ok(computed)

View File

@ -2,6 +2,7 @@ use crate::search::new::interner::{FixedSizeInterner, Interned};
use crate::search::new::small_bitmap::SmallBitmap;
pub struct DeadEndsCache<T> {
// conditions and next could/should be part of the same vector
conditions: Vec<Interned<T>>,
next: Vec<Self>,
pub forbidden: SmallBitmap<T>,
@ -27,7 +28,7 @@ impl<T> DeadEndsCache<T> {
self.forbidden.insert(condition);
}
pub fn advance(&mut self, condition: Interned<T>) -> Option<&mut Self> {
fn advance(&mut self, condition: Interned<T>) -> Option<&mut Self> {
if let Some(idx) = self.conditions.iter().position(|c| *c == condition) {
Some(&mut self.next[idx])
} else {

View File

@ -1,7 +1,6 @@
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::{ExactTerm, LocatedQueryTermSubset};
use crate::search::new::resolve_query_graph::compute_query_term_subset_docids;
@ -85,8 +84,4 @@ impl RankingRuleGraphTrait for ExactnessGraph {
Ok(vec![(0, exact_condition), (dest_node.term_ids.len() as u32, skip_condition)])
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Exactness(rank)
}
}

View File

@ -2,7 +2,6 @@ use fxhash::FxHashSet;
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::resolve_query_graph::compute_query_term_subset_docids_within_field_id;
@ -69,47 +68,13 @@ impl RankingRuleGraphTrait for FidGraph {
}
let mut edges = vec![];
for fid in all_fields.iter().copied() {
// TODO: We can improve performance and relevancy by storing
// the term subsets associated with each field id fetched.
for fid in all_fields {
edges.push((
fid as u32 * term.term_ids.len() as u32, // TODO improve the fid score i.e. fid^10.
conditions_interner.insert(FidCondition {
term: term.clone(), // TODO remove this ugly clone
fid,
}),
fid as u32 * term.term_ids.len() as u32,
conditions_interner.insert(FidCondition { term: term.clone(), fid }),
));
}
// always look up the max_fid if we haven't already, and add an artificial condition for max scoring
let max_fid: Option<u16> = {
if let Some(max_fid) = ctx
.index
.searchable_fields_ids(ctx.txn)?
.map(|field_ids| field_ids.into_iter().max())
{
max_fid
} else {
ctx.index.fields_ids_map(ctx.txn)?.ids().max()
}
};
if let Some(max_fid) = max_fid {
if !all_fields.contains(&max_fid) {
edges.push((
max_fid as u32 * term.term_ids.len() as u32, // TODO improve the fid score i.e. fid^10.
conditions_interner.insert(FidCondition {
term: term.clone(), // TODO remove this ugly clone
fid: max_fid,
}),
));
}
}
Ok(edges)
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Fid(rank)
}
}

View File

@ -41,7 +41,6 @@ use super::interner::{DedupInterner, FixedSizeInterner, Interned, MappedInterner
use super::query_term::LocatedQueryTermSubset;
use super::small_bitmap::SmallBitmap;
use super::{QueryGraph, QueryNode, SearchContext};
use crate::score_details::{Rank, ScoreDetails};
use crate::Result;
pub struct ComputedCondition {
@ -111,9 +110,6 @@ pub trait RankingRuleGraphTrait: Sized + 'static {
source_node: Option<&LocatedQueryTermSubset>,
dest_node: &LocatedQueryTermSubset,
) -> Result<Vec<(u32, Interned<Self::Condition>)>>;
/// Convert the rank of a path to its corresponding score for the ranking rule
fn rank_to_score(rank: Rank) -> ScoreDetails;
}
/// The graph used by graph-based ranking rules.

View File

@ -2,7 +2,6 @@ use fxhash::{FxHashMap, FxHashSet};
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::resolve_query_graph::compute_query_term_subset_docids_within_position;
@ -78,8 +77,6 @@ impl RankingRuleGraphTrait for PositionGraph {
let mut positions_for_costs = FxHashMap::<u32, Vec<u16>>::default();
for position in all_positions {
// FIXME: bucketed position???
let distance = position.abs_diff(*term.positions.start());
let cost = {
let mut cost = 0;
for i in 0..term.term_ids.len() {
@ -87,48 +84,28 @@ impl RankingRuleGraphTrait for PositionGraph {
// Because if two words are in the same bucketed position (e.g. 32) and consecutive,
// then their position cost will be 32+32=64, but an ngram of these two words at the
// same position will have a cost of 32+32+1=65
cost += cost_from_distance(distance as u32 + i as u32);
cost += cost_from_position(position as u32 + i as u32);
}
cost
};
positions_for_costs.entry(cost).or_default().push(position);
}
let max_cost = term.term_ids.len() as u32 * 10;
let max_cost_exists = positions_for_costs.contains_key(&max_cost);
let mut edges = vec![];
for (cost, positions) in positions_for_costs {
// TODO: We can improve performance and relevancy by storing
// the term subsets associated with each position fetched
edges.push((
cost,
conditions_interner.insert(PositionCondition {
term: term.clone(), // TODO remove this ugly clone
positions,
}),
));
}
if !max_cost_exists {
// artificial empty condition for computing max cost
edges.push((
max_cost,
conditions_interner
.insert(PositionCondition { term: term.clone(), positions: Vec::default() }),
conditions_interner.insert(PositionCondition { term: term.clone(), positions }),
));
}
Ok(edges)
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Position(rank)
}
}
fn cost_from_distance(distance: u32) -> u32 {
match distance {
fn cost_from_position(sum_positions: u32) -> u32 {
match sum_positions {
0 => 0,
1 => 1,
2..=4 => 2,

View File
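The bucketed-position comment above boils down to a small piece of arithmetic: an ngram spans several term ids, so its cost is summed over `position + i` for each term id, which leaves it exactly one step behind two consecutive single words sharing the same bucketed position. Here is a toy version of that calculation, for illustration only; the real `cost_from_position` bucketing is skipped and `toy_position_cost` is an invented name.

```rust
// Toy version of the cost summation discussed above; `position` stands in for
// an already-bucketed position and the bucketing function itself is skipped.
fn toy_position_cost(position: u32, term_ids_in_term: u32) -> u32 {
    (0..term_ids_in_term).map(|i| position + i).sum()
}

fn main() {
    // Two consecutive single words, both at bucketed position 32: 32 + 32 = 64.
    let two_single_words = toy_position_cost(32, 1) + toy_position_cost(32, 1);
    // The 2-gram built from them sits at the same position but spans two
    // term ids: 32 + (32 + 1) = 65, so it ranks just after the separate words.
    let one_two_gram = toy_position_cost(32, 2);
    assert_eq!((two_single_words, one_two_gram), (64, 65));
}
```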

@ -12,11 +12,11 @@ pub fn build_edges(
left_term: Option<&LocatedQueryTermSubset>,
right_term: &LocatedQueryTermSubset,
) -> Result<Vec<(u32, Interned<ProximityCondition>)>> {
let right_ngram_max = right_term.term_ids.len().saturating_sub(1);
let right_ngram_length = right_term.term_ids.len();
let Some(left_term) = left_term else {
return Ok(vec![(
right_ngram_max as u32,
(right_ngram_length - 1) as u32,
conditions_interner.insert(ProximityCondition::Term { term: right_term.clone() }),
)])
};
@ -29,25 +29,25 @@ pub fn build_edges(
// The remaining query graph represents `the sun .. are beautiful`
// but `sun` and `are` have no proximity condition between them
return Ok(vec![(
right_ngram_max as u32,
(right_ngram_length - 1) as u32,
conditions_interner.insert(ProximityCondition::Term { term: right_term.clone() }),
)]);
}
let mut conditions = vec![];
for cost in right_ngram_max..(7 + right_ngram_max) {
for cost in right_ngram_length..(7 + right_ngram_length) {
conditions.push((
cost as u32,
conditions_interner.insert(ProximityCondition::Uninit {
left_term: left_term.clone(),
right_term: right_term.clone(),
cost: (cost + 1) as u8,
cost: cost as u8,
}),
))
}
conditions.push((
(7 + right_ngram_max) as u32,
(7 + right_ngram_length) as u32,
conditions_interner.insert(ProximityCondition::Term { term: right_term.clone() }),
));

View File
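The hunk above replaces `right_ngram_max` (the number of term ids minus one) with `right_ngram_length` and drops the `cost + 1` adjustment on the stored proximity. A hedged reconstruction of the two enumerations — invented helper names, plain tuples instead of real `ProximityCondition` values, and only the `Uninit` loop — suggests that the stored proximities stay identical while each looped edge cost moves up by one:

```rust
// Hypothetical reconstruction of the two cost enumerations discussed above,
// for a right-hand term made of `len` term ids. Illustration only; the real
// code builds ProximityCondition values instead of (edge cost, proximity) tuples.
fn old_edges(len: usize) -> Vec<(u32, u8)> {
    let right_ngram_max = len.saturating_sub(1);
    (right_ngram_max..right_ngram_max + 7).map(|cost| (cost as u32, (cost + 1) as u8)).collect()
}

fn new_edges(len: usize) -> Vec<(u32, u8)> {
    let right_ngram_length = len;
    (right_ngram_length..right_ngram_length + 7).map(|cost| (cost as u32, cost as u8)).collect()
}

fn main() {
    let (old, new) = (old_edges(2), new_edges(2));
    // The stored proximity (second element) is identical in both versions...
    assert!(old.iter().map(|(_, p)| p).eq(new.iter().map(|(_, p)| p)));
    // ...but every edge cost (first element) is one higher in the new version.
    assert!(old.iter().zip(&new).all(|((c_old, _), (c_new, _))| c_old + 1 == *c_new));
    println!("old: {old:?}\nnew: {new:?}");
}
```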

@ -65,13 +65,6 @@ pub fn compute_docids(
}
}
// TODO: add safeguard in case the cartesian product is too large!
// even if we restrict the word derivations to a maximum of 100, the size of the
// cartesian product could reach a maximum of 10_000 derivations, which is way too much.
// Maybe prioritise the product of zero typo derivations, then the product of zero-typo/one-typo
// + one-typo/zero-typo, then one-typo/one-typo, then ... until an arbitrary limit has been
// reached
for (left_phrase, left_word) in last_words_of_term_derivations(ctx, &left_term.term_subset)? {
// Before computing the edges, check that the left word and left phrase
// aren't disjoint with the universe, but only do it if there is more than
@ -111,8 +104,6 @@ pub fn compute_docids(
Ok(ComputedCondition {
docids,
universe_len: universe.len(),
// TODO: think about whether we want to reduce the subset,
// we probably should!
start_term_subset: Some(left_term.clone()),
end_term_subset: right_term.clone(),
})
@ -203,12 +194,7 @@ fn compute_non_prefix_edges(
*docids |= new_docids;
}
}
if backward_proximity >= 1
// TODO: for now, we don't do any swapping when either term is a phrase
// but maybe we should. We'd need to look at the first/last word of the phrase
// depending on the context.
&& left_phrase.is_none() && right_phrase.is_none()
{
if backward_proximity >= 1 && left_phrase.is_none() && right_phrase.is_none() {
if let Some(new_docids) =
ctx.get_db_word_pair_proximity_docids(word2, word1, backward_proximity)?
{

View File

@ -4,7 +4,6 @@ pub mod compute_docids;
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::SearchContext;
@ -37,8 +36,4 @@ impl RankingRuleGraphTrait for ProximityGraph {
) -> Result<Vec<(u32, Interned<Self::Condition>)>> {
build::build_edges(ctx, conditions_interner, source_term, dest_term)
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Proximity(rank)
}
}

View File

@ -1,7 +1,6 @@
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{self, Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::resolve_query_graph::compute_query_term_subset_docids;
@ -76,8 +75,4 @@ impl RankingRuleGraphTrait for TypoGraph {
}
Ok(edges)
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Typo(score_details::Typo::from_rank(rank))
}
}

View File

@ -1,7 +1,6 @@
use roaring::RoaringBitmap;
use super::{ComputedCondition, RankingRuleGraphTrait};
use crate::score_details::{self, Rank, ScoreDetails};
use crate::search::new::interner::{DedupInterner, Interned};
use crate::search::new::query_term::LocatedQueryTermSubset;
use crate::search::new::resolve_query_graph::compute_query_term_subset_docids;
@ -42,10 +41,9 @@ impl RankingRuleGraphTrait for WordsGraph {
_from: Option<&LocatedQueryTermSubset>,
to_term: &LocatedQueryTermSubset,
) -> Result<Vec<(u32, Interned<Self::Condition>)>> {
Ok(vec![(0, conditions_interner.insert(WordsCondition { term: to_term.clone() }))])
}
fn rank_to_score(rank: Rank) -> ScoreDetails {
ScoreDetails::Words(score_details::Words::from_rank(rank))
Ok(vec![(
to_term.term_ids.len() as u32,
conditions_interner.insert(WordsCondition { term: to_term.clone() }),
)])
}
}

View File

@ -2,7 +2,6 @@ use roaring::RoaringBitmap;
use super::logger::SearchLogger;
use super::{QueryGraph, SearchContext};
use crate::score_details::ScoreDetails;
use crate::Result;
/// An internal trait implemented by only [`PlaceholderQuery`] and [`QueryGraph`]
@ -67,6 +66,4 @@ pub struct RankingRuleOutput<Q> {
pub query: Q,
/// The allowed candidates for the child ranking rule
pub candidates: RoaringBitmap,
/// The score for the candidates of the current bucket
pub score: ScoreDetails,
}

View File

@ -33,8 +33,6 @@ pub fn compute_query_term_subset_docids(
ctx: &mut SearchContext,
term: &QueryTermSubset,
) -> Result<RoaringBitmap> {
// TODO Use the roaring::MultiOps trait
let mut docids = RoaringBitmap::new();
for word in term.all_single_words_except_prefix_db(ctx)? {
if let Some(word_docids) = ctx.word_docids(word)? {
@ -59,8 +57,6 @@ pub fn compute_query_term_subset_docids_within_field_id(
term: &QueryTermSubset,
fid: u16,
) -> Result<RoaringBitmap> {
// TODO Use the roaring::MultiOps trait
let mut docids = RoaringBitmap::new();
for word in term.all_single_words_except_prefix_db(ctx)? {
if let Some(word_fid_docids) = ctx.get_db_word_fid_docids(word.interned(), fid)? {
@ -71,7 +67,6 @@ pub fn compute_query_term_subset_docids_within_field_id(
for phrase in term.all_phrases(ctx)? {
// There may be false positives when resolving a phrase, so we're not
// guaranteed that all of its words are within a single fid.
// TODO: fix this?
if let Some(word) = phrase.words(ctx).iter().flatten().next() {
if let Some(word_fid_docids) = ctx.get_db_word_fid_docids(*word, fid)? {
docids |= ctx.get_phrase_docids(phrase)? & word_fid_docids;
@ -95,7 +90,6 @@ pub fn compute_query_term_subset_docids_within_position(
term: &QueryTermSubset,
position: u16,
) -> Result<RoaringBitmap> {
// TODO Use the roaring::MultiOps trait
let mut docids = RoaringBitmap::new();
for word in term.all_single_words_except_prefix_db(ctx)? {
if let Some(word_position_docids) =
@ -108,7 +102,6 @@ pub fn compute_query_term_subset_docids_within_position(
for phrase in term.all_phrases(ctx)? {
// It's difficult to know the expected position of the words in the phrase,
// so instead we just check the first one.
// TODO: fix this?
if let Some(word) = phrase.words(ctx).iter().flatten().next() {
if let Some(word_position_docids) = ctx.get_db_word_position_docids(*word, position)? {
docids |= ctx.get_phrase_docids(phrase)? & word_position_docids
@ -132,9 +125,6 @@ pub fn compute_query_graph_docids(
q: &QueryGraph,
universe: &RoaringBitmap,
) -> Result<RoaringBitmap> {
// TODO: there must be a faster way to compute this big
// roaring bitmap expression
let mut nodes_resolved = SmallBitmap::for_interned_values_in(&q.nodes);
let mut path_nodes_docids = q.nodes.map(|_| RoaringBitmap::new());

View File

@ -1,11 +1,9 @@
use heed::BytesDecode;
use roaring::RoaringBitmap;
use super::logger::SearchLogger;
use super::{RankingRule, RankingRuleOutput, RankingRuleQueryTrait, SearchContext};
use crate::heed_codec::facet::{FacetGroupKeyCodec, OrderedF64Codec};
use crate::heed_codec::{ByteSliceRefCodec, StrRefCodec};
use crate::score_details::{self, ScoreDetails};
use crate::heed_codec::facet::FacetGroupKeyCodec;
use crate::heed_codec::ByteSliceRefCodec;
use crate::search::facet::{ascending_facet_sort, descending_facet_sort};
use crate::{FieldId, Index, Result};
@ -51,7 +49,6 @@ pub struct Sort<'ctx, Query> {
is_ascending: bool,
original_query: Option<Query>,
iter: Option<RankingRuleOutputIterWrapper<'ctx, Query>>,
must_redact: bool,
}
impl<'ctx, Query> Sort<'ctx, Query> {
pub fn new(
@ -62,30 +59,15 @@ impl<'ctx, Query> Sort<'ctx, Query> {
) -> Result<Self> {
let fields_ids_map = index.fields_ids_map(rtxn)?;
let field_id = fields_ids_map.id(&field_name);
let must_redact = Self::must_redact(index, rtxn, &field_name)?;
Ok(Self {
field_name,
field_id,
is_ascending,
original_query: None,
iter: None,
must_redact,
})
}
fn must_redact(index: &Index, rtxn: &'ctx heed::RoTxn, field_name: &str) -> Result<bool> {
let Some(displayed_fields) = index.displayed_fields(rtxn)?
else { return Ok(false); };
Ok(!displayed_fields.iter().any(|&field| field == field_name))
Ok(Self { field_name, field_id, is_ascending, original_query: None, iter: None })
}
}
impl<'ctx, Query: RankingRuleQueryTrait> RankingRule<'ctx, Query> for Sort<'ctx, Query> {
fn id(&self) -> String {
let Self { field_name, is_ascending, .. } = self;
format!("{field_name}:{}", if *is_ascending { "asc" } else { "desc" })
format!("{field_name}:{}", if *is_ascending { "asc" } else { "desc " })
}
fn start_iteration(
&mut self,
@ -136,45 +118,12 @@ impl<'ctx, Query: RankingRuleQueryTrait> RankingRule<'ctx, Query> for Sort<'ctx,
(itertools::Either::Right(number_iter), itertools::Either::Right(string_iter))
};
let number_iter = number_iter.map(|r| -> Result<_> {
let (docids, bytes) = r?;
Ok((
docids,
serde_json::Value::Number(
serde_json::Number::from_f64(
OrderedF64Codec::bytes_decode(bytes).expect("some number"),
)
.expect("too big float"),
),
))
});
let string_iter = string_iter.map(|r| -> Result<_> {
let (docids, bytes) = r?;
Ok((
docids,
serde_json::Value::String(
StrRefCodec::bytes_decode(bytes).expect("some string").to_owned(),
),
))
});
let query_graph = parent_query.clone();
let ascending = self.is_ascending;
let field_name = self.field_name.clone();
let must_redact = self.must_redact;
RankingRuleOutputIterWrapper::new(Box::new(number_iter.chain(string_iter).map(
move |r| {
let (docids, value) = r?;
Ok(RankingRuleOutput {
query: query_graph.clone(),
candidates: docids,
score: ScoreDetails::Sort(score_details::Sort {
field_name: field_name.clone(),
ascending,
redacted: must_redact,
value,
}),
})
let (docids, _) = r?;
Ok(RankingRuleOutput { query: query_graph.clone(), candidates: docids })
},
)))
}
@ -192,25 +141,12 @@ impl<'ctx, Query: RankingRuleQueryTrait> RankingRule<'ctx, Query> for Sort<'ctx,
universe: &RoaringBitmap,
) -> Result<Option<RankingRuleOutput<Query>>> {
let iter = self.iter.as_mut().unwrap();
// TODO: we should make use of the universe in the function below
// good for correctness, but ideally iter.next_bucket would take the current universe into account,
// as right now it could return buckets that don't intersect with the universe, meaning we will make many
// unneeded calls.
if let Some(mut bucket) = iter.next_bucket()? {
bucket.candidates &= universe;
Ok(Some(bucket))
} else {
let query = self.original_query.as_ref().unwrap().clone();
Ok(Some(RankingRuleOutput {
query,
candidates: universe.clone(),
score: ScoreDetails::Sort(score_details::Sort {
field_name: self.field_name.clone(),
ascending: self.is_ascending,
redacted: self.must_redact,
value: serde_json::Value::Null,
}),
}))
Ok(Some(RankingRuleOutput { query, candidates: universe.clone() }))
}
}

View File
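The comment above points out that `iter.next_bucket()` is not universe-aware, so the caller compensates with `bucket.candidates &= universe` and may pay for buckets that turn out to be empty. Below is a tiny sketch of that compensation pattern with `RoaringBitmap`; the function and variable names are invented for the example.

```rust
use roaring::RoaringBitmap;

// Illustration only: drain pre-computed "buckets" and intersect each one with
// the current universe, skipping buckets that end up empty, the way the sort
// ranking rule intersects `bucket.candidates &= universe` in the diff above.
fn next_non_empty_bucket(
    buckets: &mut impl Iterator<Item = RoaringBitmap>,
    universe: &RoaringBitmap,
) -> Option<RoaringBitmap> {
    for mut bucket in buckets {
        bucket &= universe;
        if !bucket.is_empty() {
            return Some(bucket);
        }
    }
    None
}

fn main() {
    let universe: RoaringBitmap = (10..20).collect();
    let mut buckets = vec![
        (0..5).collect::<RoaringBitmap>(),   // disjoint from the universe: skipped
        (12..30).collect::<RoaringBitmap>(), // intersects: returned, clipped to 12..20
    ]
    .into_iter();
    let bucket = next_non_empty_bucket(&mut buckets, &universe).unwrap();
    assert_eq!(bucket, (12..20).collect::<RoaringBitmap>());
}
```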

@ -122,11 +122,8 @@ fn test_attribute_fid_simple() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 6, 5, 4, 3, 9, 7, 8, 11, 10, 12, 13, 14, 0]");
}
#[test]
@ -138,11 +135,6 @@ fn test_attribute_fid_ngrams() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 6, 5, 4, 3, 9, 7, 8, 11, 10, 12, 13, 14, 0]");
}

View File

@ -40,68 +40,68 @@ fn create_index() -> TempIndex {
},
{
"id": 5,
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
the quick brown fox",
},
{
"id": 6,
"text": "quick a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
"text": "quick a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
brown",
},
{
"id": 7,
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
quickbrown",
},
{
"id": 8,
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
quick brown",
},
{
"id": 9,
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
"text": "a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
quickbrown",
},
{
@ -137,13 +137,8 @@ fn test_attribute_position_simple() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("quick brown");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[10, 11, 12, 13, 2, 3, 4, 1, 0, 6, 8, 7, 9, 5]");
}
#[test]
fn test_attribute_position_repeated() {
@ -154,13 +149,8 @@ fn test_attribute_position_repeated() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("a a a a a");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[5, 7, 8, 9, 6]");
}
#[test]
@ -172,13 +162,8 @@ fn test_attribute_position_different_fields() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("quick brown");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[10, 11, 12, 13, 2, 3, 4, 1, 0, 6, 8, 7, 9, 5]");
}
#[test]
@ -190,11 +175,6 @@ fn test_attribute_position_ngrams() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("quick brown");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[10, 11, 12, 13, 2, 3, 4, 1, 0, 6, 8, 7, 9, 5]");
}

View File

@ -527,7 +527,7 @@ fn test_distinct_all_candidates() {
let SearchResult { documents_ids, candidates, .. } = s.execute().unwrap();
let candidates = candidates.iter().collect::<Vec<_>>();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[14, 26, 4, 7, 17, 23, 1, 19, 25, 8, 20, 24]");
// TODO: this is incorrect!
// This is incorrect, but unfortunately impossible to do better efficiently.
insta::assert_snapshot!(format!("{candidates:?}"), @"[1, 4, 7, 8, 14, 17, 19, 20, 23, 24, 25, 26]");
}

View File

@ -474,14 +474,8 @@ fn test_exactness_simple_ordered() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[9, 8, 7, 6, 5, 4, 3, 2, 1]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -507,14 +501,8 @@ fn test_exactness_simple_reversed() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[9, 8, 3, 4, 5, 6, 7]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -531,14 +519,8 @@ fn test_exactness_simple_reversed() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[9, 8, 3, 4, 5, 6, 7]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -562,14 +544,8 @@ fn test_exactness_simple_random() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[8, 7, 4, 6, 3, 5]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -592,14 +568,8 @@ fn test_exactness_attribute_starts_with_simple() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("this balcony");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 1, 0]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -619,14 +589,8 @@ fn test_exactness_attribute_starts_with_phrase() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("\"overlooking the sea\" is a beautiful balcony");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[6, 5, 4, 1]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -640,14 +604,8 @@ fn test_exactness_attribute_starts_with_phrase() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("overlooking the sea is a beautiful balcony");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[6, 5, 4, 3, 1, 7]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
@ -670,14 +628,8 @@ fn test_exactness_all_candidates_with_typo() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("overlocking the sea is a beautiful balcony");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[4, 5, 6, 1, 7]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// "overlooking" is returned here because the term matching strategy allows it
// but it has the worst exactness score (0 exact words)
@ -707,14 +659,8 @@ fn test_exactness_after_words() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[19, 9, 18, 8, 17, 16, 6, 7, 15, 5, 14, 4, 13, 3, 12, 2, 1, 11]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
@ -756,13 +702,7 @@ fn test_words_after_exactness() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[19, 9, 18, 8, 17, 16, 6, 7, 15, 5, 14, 4, 13, 3, 12, 2, 1, 11]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
@ -805,14 +745,7 @@ fn test_proximity_after_exactness() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 1, 0, 4, 5, 8, 7, 3, 6]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
@ -843,13 +776,7 @@ fn test_proximity_after_exactness() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("the quick brown fox jumps over the lazy dog");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[0, 1, 2]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
@ -877,13 +804,7 @@ fn test_exactness_followed_by_typo_prefer_no_typo_prefix() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("quick brown fox extra");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 1, 0, 4, 3]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
@ -913,13 +834,7 @@ fn test_typo_followed_by_exactness() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::Last);
s.query("extraordinarily quick brown fox");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let document_ids_scores: Vec<_> =
documents_ids.iter().zip(document_scores.into_iter()).collect();
insta::assert_snapshot!(format!("{document_ids_scores:#?}"));
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[1, 0, 4, 3]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);

View File

@ -7,7 +7,6 @@ use heed::RoTxn;
use maplit::hashset;
use crate::index::tests::TempIndex;
use crate::score_details::ScoreDetails;
use crate::search::new::tests::collect_field_values;
use crate::{AscDesc, Criterion, GeoSortStrategy, Member, Search, SearchResult};
@ -29,37 +28,30 @@ fn execute_iterative_and_rtree_returns_the_same<'a>(
rtxn: &RoTxn<'a>,
index: &TempIndex,
search: &mut Search<'a>,
) -> (Vec<usize>, Vec<Vec<ScoreDetails>>) {
) -> Vec<usize> {
search.geo_sort_strategy(GeoSortStrategy::AlwaysIterative(2));
let SearchResult { documents_ids, document_scores: iterative_scores_bucketed, .. } =
search.execute().unwrap();
let SearchResult { documents_ids, .. } = search.execute().unwrap();
let iterative_ids_bucketed = collect_field_values(index, rtxn, "id", &documents_ids);
search.geo_sort_strategy(GeoSortStrategy::AlwaysIterative(1000));
let SearchResult { documents_ids, document_scores: iterative_scores, .. } =
search.execute().unwrap();
let SearchResult { documents_ids, .. } = search.execute().unwrap();
let iterative_ids = collect_field_values(index, rtxn, "id", &documents_ids);
assert_eq!(iterative_ids_bucketed, iterative_ids, "iterative bucket");
assert_eq!(iterative_scores_bucketed, iterative_scores, "iterative bucket score");
search.geo_sort_strategy(GeoSortStrategy::AlwaysRtree(2));
let SearchResult { documents_ids, document_scores: rtree_scores_bucketed, .. } =
search.execute().unwrap();
let SearchResult { documents_ids, .. } = search.execute().unwrap();
let rtree_ids_bucketed = collect_field_values(index, rtxn, "id", &documents_ids);
search.geo_sort_strategy(GeoSortStrategy::AlwaysRtree(1000));
let SearchResult { documents_ids, document_scores: rtree_scores, .. } =
search.execute().unwrap();
let SearchResult { documents_ids, .. } = search.execute().unwrap();
let rtree_ids = collect_field_values(index, rtxn, "id", &documents_ids);
assert_eq!(rtree_ids_bucketed, rtree_ids, "rtree bucket");
assert_eq!(rtree_scores_bucketed, rtree_scores, "rtree bucket score");
assert_eq!(iterative_ids, rtree_ids, "iterative vs rtree");
assert_eq!(iterative_scores, rtree_scores, "iterative vs rtree scores");
(iterative_ids.into_iter().map(|id| id.parse().unwrap()).collect(), iterative_scores)
iterative_ids.into_iter().map(|id| id.parse().unwrap()).collect()
}
#[test]
@ -81,17 +73,14 @@ fn test_geo_sort() {
let rtxn = index.read_txn().unwrap();
let mut s = Search::new(&rtxn, &index);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 1, 2, 3, 4, 5, 6, 8, 7, 10, 9]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([0., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[5, 4, 3, 2, 1, 0, 6, 8, 7, 10, 9]");
insta::assert_snapshot!(format!("{scores:#?}"));
}
#[test]
@ -112,63 +101,52 @@ fn test_geo_sort_around_the_edge_of_the_flat_earth() {
let rtxn = index.read_txn().unwrap();
let mut s = Search::new(&rtxn, &index);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
// --- asc
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 1, 2, 3, 4]");
insta::assert_snapshot!(format!("{scores:#?}"));
// ensuring the lat doesn't wrap around
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([85., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[1, 0, 3, 4, 2]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([-85., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[2, 0, 3, 4, 1]");
insta::assert_snapshot!(format!("{scores:#?}"));
// ensuring the lng does wrap around
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., 175.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[3, 4, 2, 1, 0]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., -175.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[4, 3, 2, 1, 0]");
insta::assert_snapshot!(format!("{scores:#?}"));
// --- desc
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([0., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[4, 3, 2, 1, 0]");
insta::assert_snapshot!(format!("{scores:#?}"));
// ensuring the lat doesn't wrap around
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([85., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[2, 4, 3, 0, 1]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([-85., 0.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[1, 4, 3, 0, 2]");
insta::assert_snapshot!(format!("{scores:#?}"));
// ensuring the lng does wrap around
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([0., 175.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 1, 2, 4, 3]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.sort_criteria(vec![AscDesc::Desc(Member::Geo([0., -175.]))]);
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 1, 2, 3, 4]");
insta::assert_snapshot!(format!("{scores:#?}"));
}
#[test]
@ -188,98 +166,19 @@ fn geo_sort_mixed_with_words() {
let rtxn = index.read_txn().unwrap();
let mut s = Search::new(&rtxn, &index);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., 0.]))]);
s.query("jean");
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 2, 3]");
insta::assert_snapshot!(format!("{scores:#?}"));
s.query("bob");
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[2, 4]");
insta::assert_snapshot!(format!("{scores:#?}"), @r###"
[
[
Words(
Words {
matching_words: 1,
max_matching_words: 1,
},
),
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
[
Words(
Words {
matching_words: 1,
max_matching_words: 1,
},
),
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
]
"###);
s.query("intel");
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[1]");
insta::assert_snapshot!(format!("{scores:#?}"), @r###"
[
[
Words(
Words {
matching_words: 1,
max_matching_words: 1,
},
),
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
]
"###);
}
#[test]
@ -299,11 +198,9 @@ fn geo_sort_without_any_geo_faceted_documents() {
let rtxn = index.read_txn().unwrap();
let mut s = Search::new(&rtxn, &index);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.sort_criteria(vec![AscDesc::Asc(Member::Geo([0., 0.]))]);
s.query("jean");
let (ids, scores) = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
let ids = execute_iterative_and_rtree_returns_the_same(&rtxn, &index, &mut s);
insta::assert_snapshot!(format!("{ids:?}"), @"[0, 2, 3]");
insta::assert_snapshot!(format!("{scores:#?}"));
}


@ -80,13 +80,10 @@ fn test_2gram_simple() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("sun flower");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
// will also match documents with "sunflower" + prefix tolerance
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[0, 1, 2, 3, 5]");
// scores are empty because the only rule is Words with All matching strategy
insta::assert_snapshot!(format!("{document_scores:?}"), @"[[], [], [], [], []]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[

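The "will also match documents with "sunflower" + prefix tolerance" comment above refers to adjacent query words also being considered as their concatenation (a 2-gram). A rough sketch of that idea (illustrative only; two_gram_candidates is an invented helper, not milli's query-term pipeline):

fn two_gram_candidates(words: &[&str]) -> Vec<String> {
    // Each pair of adjacent query words also yields the concatenated 2-gram.
    words.windows(2).map(|pair| pair.concat()).collect()
}

fn main() {
    // The query "sun flower" therefore also produces the term "sunflower",
    // which prefix tolerance can then extend to longer words starting with it.
    assert_eq!(two_gram_candidates(&["sun", "flower"]), vec!["sunflower".to_string()]);
}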

@ -125,8 +125,8 @@ fn create_edge_cases_index() -> TempIndex {
// The next 5 documents lay out a trap with the split word, phrase search, or synonym `sun flower`.
// If the search query is "sunflower", the split word "Sun Flower" will match some documents.
// If the query is `sunflower wilting`, then we should make sure that
// the sprximity condition `flower wilting: sprx N` also comes with the condition
// `sun wilting: sprx N+1`. TODO: this is not the exact condition we use for now.
// the proximity condition `flower wilting: sprx N` also comes with the condition
// `sun wilting: sprx N+1`, but this is not the exact condition we use for now.
// We only check that the phrase `sun flower` exists and `flower wilting: sprx N`, which
// is better than nothing but not the best.
{
@ -270,13 +270,13 @@ fn test_proximity_simple() {
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("the quick brown fox jumps over the lazy dog");
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[9, 10, 4, 7, 6, 5, 2, 3, 0, 1]");
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[4, 9, 10, 7, 6, 5, 2, 3, 0, 1]");
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
[
"\"the quickbrown fox jumps over the lazy dog\"",
"\"the quack brown fox jumps over the lazy dog\"",
"\"the quick brown fox jumps over the lazy dog\"",
"\"the quickbrown fox jumps over the lazy dog\"",
"\"the really quick brown fox jumps over the lazy dog\"",
"\"the really quick brown fox jumps over the very lazy dog\"",
"\"brown quick fox jumps over the lazy dog\"",
@ -295,14 +295,11 @@ fn test_proximity_split_word() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("sunflower wilting");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 4, 5, 1, 3]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// TODO: "2" and "4" should be swapped ideally
// "2" and "4" should be swapped ideally
insta::assert_debug_snapshot!(texts, @r###"
[
"\"Sun Flower sounds like the title of a painting, maybe about a flower wilting under the heat.\"",
@ -315,13 +312,11 @@ fn test_proximity_split_word() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("\"sun flower\" wilting");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 4, 1]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// TODO: "2" and "4" should be swapped ideally
// "2" and "4" should be swapped ideally
insta::assert_debug_snapshot!(texts, @r###"
[
"\"Sun Flower sounds like the title of a painting, maybe about a flower wilting under the heat.\"",
@ -342,13 +337,11 @@ fn test_proximity_split_word() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("xyz wilting");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[2, 4, 1]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// TODO: "2" and "4" should be swapped ideally
// "2" and "4" should be swapped ideally
insta::assert_debug_snapshot!(texts, @r###"
[
"\"Sun Flower sounds like the title of a painting, maybe about a flower wilting under the heat.\"",
@ -365,11 +358,9 @@ fn test_proximity_prefix_db() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("best s");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[10, 13, 9, 12, 8, 6, 7, 11, 15]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// This test illustrates the loss of precision from using the prefix DB
@ -390,11 +381,9 @@ fn test_proximity_prefix_db() {
// Difference when using the `su` prefix, which is not in the prefix DB
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("best su");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[10, 13, 9, 12, 8, 11, 7, 6, 15]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
@ -417,11 +406,9 @@ fn test_proximity_prefix_db() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("best win");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[15, 16, 17, 18, 19, 20, 21, 22]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
@ -441,11 +428,9 @@ fn test_proximity_prefix_db() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("best wint");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[19, 22, 18, 21, 17, 20, 16, 15]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"
@ -465,11 +450,9 @@ fn test_proximity_prefix_db() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
s.query("best wi");
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[19, 22, 18, 21, 17, 15, 16, 20]");
insta::assert_snapshot!(format!("{document_scores:#?}"));
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
insta::assert_debug_snapshot!(texts, @r###"

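The "loss of precision" called out in test_proximity_prefix_db comes from resolving a trailing prefix such as "s" or "wi" through precomputed prefix pair data rather than expanding every completion of that prefix. A toy Rust sketch of the trade-off (the maps and numbers are invented for illustration and are not milli's actual databases):

use std::collections::HashMap;

fn main() {
    // Per-word pair proximities can tell completions apart...
    let word_pair_proximity: HashMap<(&str, &str), u8> =
        HashMap::from([(("best", "summer"), 1), (("best", "son"), 3)]);

    // ...while the prefix database only keeps the best proximity seen for any
    // word starting with "s", so every completion collapses onto one value.
    let prefix_pair_proximity: HashMap<(&str, &str), u8> =
        HashMap::from([(("best", "s"), 1)]);

    assert_eq!(word_pair_proximity[&("best", "son")], 3);
    assert_eq!(prefix_pair_proximity[&("best", "s")], 1);
}

Documents where the pair is actually far apart are then ranked as if it were close, which is the kind of imprecision the comment refers to.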

@ -2,9 +2,8 @@
This module tests the interactions between the proximity and typo ranking rules.
The proximity ranking rule should transform the query graph such that it
only contains the word pairs that it used to compute its bucket.
TODO: This is not currently implemented.
only contains the word pairs that it used to compute its bucket, but this is not currently
implemented.
*/
use crate::index::tests::TempIndex;
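In the module comment above, a "bucket" is the group of candidate documents that a ranking rule considers equally good and returns together, best group first. A toy Rust sketch of bucketing by a proximity cost (illustrative only; this is not milli's ranking-rule implementation or its query-graph handling):

use std::collections::BTreeMap;

// Group candidate document ids by a precomputed proximity cost; iterating the
// BTreeMap then yields the cheapest (best) bucket first.
fn proximity_buckets(costs: &[(u32, u8)]) -> BTreeMap<u8, Vec<u32>> {
    let mut buckets: BTreeMap<u8, Vec<u32>> = BTreeMap::new();
    for &(docid, cost) in costs {
        buckets.entry(cost).or_default().push(docid);
    }
    buckets
}

fn main() {
    // Documents 0 and 1 share a cost, so they land in the same bucket, the
    // same situation as the identical Proximity scores (rank 8 of 8) removed
    // from test_trap_basic just below.
    let buckets = proximity_buckets(&[(0, 8), (1, 8), (2, 3)]);
    assert_eq!(buckets[&3], vec![2]);
    assert_eq!(buckets[&8], vec![0, 1]);
}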
@ -61,43 +60,10 @@ fn test_trap_basic() {
let mut s = Search::new(&txn, &index);
s.terms_matching_strategy(TermsMatchingStrategy::All);
s.query("summer holiday");
s.scoring_strategy(crate::score_details::ScoringStrategy::Detailed);
let SearchResult { documents_ids, document_scores, .. } = s.execute().unwrap();
let SearchResult { documents_ids, .. } = s.execute().unwrap();
insta::assert_snapshot!(format!("{documents_ids:?}"), @"[0, 1]");
insta::assert_snapshot!(format!("{document_scores:#?}"), @r###"
[
[
Proximity(
Rank {
rank: 8,
max_rank: 8,
},
),
Typo(
Typo {
typo_count: 0,
max_typo_count: 2,
},
),
],
[
Proximity(
Rank {
rank: 8,
max_rank: 8,
},
),
Typo(
Typo {
typo_count: 0,
max_typo_count: 2,
},
),
],
]
"###);
let texts = collect_field_values(&index, &txn, "text", &documents_ids);
// TODO: this is incorrect, 1 should come before 0
// This is incorrect, 1 should come before 0
insta::assert_debug_snapshot!(texts, @r###"
[
"\"summer. holiday. sommer holidty\"",


@ -1,244 +0,0 @@
---
source: milli/src/search/new/tests/attribute_fid.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
2,
[
Fid(
Rank {
rank: 19,
max_rank: 19,
},
),
Position(
Rank {
rank: 91,
max_rank: 91,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 15,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
5,
[
Fid(
Rank {
rank: 14,
max_rank: 19,
},
),
Position(
Rank {
rank: 79,
max_rank: 91,
},
),
],
),
(
4,
[
Fid(
Rank {
rank: 13,
max_rank: 19,
},
),
Position(
Rank {
rank: 77,
max_rank: 91,
},
),
],
),
(
3,
[
Fid(
Rank {
rank: 12,
max_rank: 19,
},
),
Position(
Rank {
rank: 83,
max_rank: 91,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 11,
max_rank: 19,
},
),
Position(
Rank {
rank: 75,
max_rank: 91,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 10,
max_rank: 19,
},
),
Position(
Rank {
rank: 79,
max_rank: 91,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 10,
max_rank: 19,
},
),
Position(
Rank {
rank: 73,
max_rank: 91,
},
),
],
),
(
11,
[
Fid(
Rank {
rank: 7,
max_rank: 19,
},
),
Position(
Rank {
rank: 77,
max_rank: 91,
},
),
],
),
(
10,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
13,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
12,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 78,
max_rank: 91,
},
),
],
),
(
14,
[
Fid(
Rank {
rank: 5,
max_rank: 19,
},
),
Position(
Rank {
rank: 75,
max_rank: 91,
},
),
],
),
(
0,
[
Fid(
Rank {
rank: 1,
max_rank: 19,
},
),
Position(
Rank {
rank: 91,
max_rank: 91,
},
),
],
),
]


@ -1,244 +0,0 @@
---
source: milli/src/search/new/tests/attribute_fid.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
2,
[
Fid(
Rank {
rank: 19,
max_rank: 19,
},
),
Position(
Rank {
rank: 91,
max_rank: 91,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 15,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
5,
[
Fid(
Rank {
rank: 14,
max_rank: 19,
},
),
Position(
Rank {
rank: 79,
max_rank: 91,
},
),
],
),
(
4,
[
Fid(
Rank {
rank: 13,
max_rank: 19,
},
),
Position(
Rank {
rank: 77,
max_rank: 91,
},
),
],
),
(
3,
[
Fid(
Rank {
rank: 12,
max_rank: 19,
},
),
Position(
Rank {
rank: 83,
max_rank: 91,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 11,
max_rank: 19,
},
),
Position(
Rank {
rank: 75,
max_rank: 91,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 10,
max_rank: 19,
},
),
Position(
Rank {
rank: 79,
max_rank: 91,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 10,
max_rank: 19,
},
),
Position(
Rank {
rank: 73,
max_rank: 91,
},
),
],
),
(
11,
[
Fid(
Rank {
rank: 7,
max_rank: 19,
},
),
Position(
Rank {
rank: 77,
max_rank: 91,
},
),
],
),
(
10,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
13,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 81,
max_rank: 91,
},
),
],
),
(
12,
[
Fid(
Rank {
rank: 6,
max_rank: 19,
},
),
Position(
Rank {
rank: 78,
max_rank: 91,
},
),
],
),
(
14,
[
Fid(
Rank {
rank: 5,
max_rank: 19,
},
),
Position(
Rank {
rank: 75,
max_rank: 91,
},
),
],
),
(
0,
[
Fid(
Rank {
rank: 1,
max_rank: 19,
},
),
Position(
Rank {
rank: 91,
max_rank: 91,
},
),
],
),
]


@ -1,244 +0,0 @@
---
source: milli/src/search/new/tests/attribute_position.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
10,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
12,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
11,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
13,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
3,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
4,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
2,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 18,
max_rank: 21,
},
),
],
),
(
0,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
1,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 13,
max_rank: 21,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 5,
max_rank: 21,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
5,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 1,
max_rank: 21,
},
),
],
),
]


@ -1,244 +0,0 @@
---
source: milli/src/search/new/tests/attribute_position.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
10,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
12,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
11,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
13,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
3,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
4,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
2,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 18,
max_rank: 21,
},
),
],
),
(
0,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
1,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 13,
max_rank: 21,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 5,
max_rank: 21,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
5,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 1,
max_rank: 21,
},
),
],
),
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/attribute_position.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
5,
[
Fid(
Rank {
rank: 11,
max_rank: 11,
},
),
Position(
Rank {
rank: 51,
max_rank: 51,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 11,
max_rank: 11,
},
),
Position(
Rank {
rank: 51,
max_rank: 51,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 11,
max_rank: 11,
},
),
Position(
Rank {
rank: 51,
max_rank: 51,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 11,
max_rank: 11,
},
),
Position(
Rank {
rank: 51,
max_rank: 51,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 11,
max_rank: 11,
},
),
Position(
Rank {
rank: 50,
max_rank: 51,
},
),
],
),
]


@ -1,244 +0,0 @@
---
source: milli/src/search/new/tests/attribute_position.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
10,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
12,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 21,
max_rank: 21,
},
),
],
),
(
11,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
13,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 20,
max_rank: 21,
},
),
],
),
(
3,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
4,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 19,
max_rank: 21,
},
),
],
),
(
2,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 18,
max_rank: 21,
},
),
],
),
(
0,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
1,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 15,
max_rank: 21,
},
),
],
),
(
6,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 13,
max_rank: 21,
},
),
],
),
(
8,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 5,
max_rank: 21,
},
),
],
),
(
7,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
9,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 4,
max_rank: 21,
},
),
],
),
(
5,
[
Fid(
Rank {
rank: 5,
max_rank: 5,
},
),
Position(
Rank {
rank: 1,
max_rank: 21,
},
),
],
),
]


@ -1,366 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
19,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
9,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 10,
},
),
],
),
(
18,
[
Words(
Words {
matching_words: 8,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 9,
max_rank: 9,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 8,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 9,
},
),
],
),
(
17,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
16,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 8,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 8,
},
),
],
),
(
15,
[
Words(
Words {
matching_words: 5,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 5,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 6,
},
),
],
),
(
14,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 5,
},
),
],
),
(
13,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 4,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 4,
},
),
],
),
(
12,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
2,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 3,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
11,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,106 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
4,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 8,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 8,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 8,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 4,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 1,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 1,
max_rank: 2,
},
),
],
),
]


@ -1,126 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 8,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 4,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 1,
max_matching_words: 7,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,86 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 7,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 4,
max_matching_words: 7,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
]


@ -1,66 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
2,
[
Words(
Words {
matching_words: 2,
max_matching_words: 2,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 2,
max_matching_words: 2,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
0,
[
Words(
Words {
matching_words: 2,
max_matching_words: 2,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
]


@ -1,136 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
2,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Typo(
Typo {
typo_count: 0,
max_typo_count: 1,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
Typo(
Typo {
typo_count: 0,
max_typo_count: 2,
},
),
],
),
(
0,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
Typo(
Typo {
typo_count: 1,
max_typo_count: 2,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
Typo(
Typo {
typo_count: 1,
max_typo_count: 2,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 5,
},
),
Typo(
Typo {
typo_count: 2,
max_typo_count: 3,
},
),
],
),
]


@ -1,186 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
9,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 8,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 9,
max_rank: 9,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 5,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 4,
},
),
],
),
(
2,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,126 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
8,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 4,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,146 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
9,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,146 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
9,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,84 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
0,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 35,
max_rank: 57,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 35,
max_rank: 57,
},
),
],
),
(
2,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 35,
max_rank: 57,
},
),
],
),
]


@ -1,240 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
2,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 57,
max_rank: 57,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 56,
max_rank: 57,
},
),
],
),
(
0,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
Proximity(
Rank {
rank: 35,
max_rank: 57,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 22,
max_rank: 22,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 22,
max_rank: 22,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
MatchesStart,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 22,
max_rank: 22,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 21,
max_rank: 22,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 17,
max_rank: 22,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
Proximity(
Rank {
rank: 17,
max_rank: 22,
},
),
],
),
]


@ -1,110 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
1,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
Typo(
Typo {
typo_count: 0,
max_typo_count: 5,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
],
),
(
0,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
Typo(
Typo {
typo_count: 1,
max_typo_count: 5,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
Typo(
Typo {
typo_count: 2,
max_typo_count: 5,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 5,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 4,
max_matching_words: 4,
},
),
Typo(
Typo {
typo_count: 2,
max_typo_count: 5,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 5,
},
),
],
),
]


@ -1,366 +0,0 @@
---
source: milli/src/search/new/tests/exactness.rs
expression: "format!(\"{document_ids_scores:#?}\")"
---
[
(
19,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 10,
max_rank: 10,
},
),
],
),
(
9,
[
Words(
Words {
matching_words: 9,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 7,
max_rank: 10,
},
),
],
),
(
18,
[
Words(
Words {
matching_words: 8,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 9,
max_rank: 9,
},
),
],
),
(
8,
[
Words(
Words {
matching_words: 8,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 9,
},
),
],
),
(
17,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
16,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 8,
max_rank: 8,
},
),
],
),
(
6,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 8,
},
),
],
),
(
7,
[
Words(
Words {
matching_words: 7,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 8,
},
),
],
),
(
15,
[
Words(
Words {
matching_words: 5,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 6,
max_rank: 6,
},
),
],
),
(
5,
[
Words(
Words {
matching_words: 5,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 6,
},
),
],
),
(
14,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 5,
max_rank: 5,
},
),
],
),
(
4,
[
Words(
Words {
matching_words: 4,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 5,
},
),
],
),
(
13,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 4,
max_rank: 4,
},
),
],
),
(
3,
[
Words(
Words {
matching_words: 3,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 4,
},
),
],
),
(
12,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 3,
max_rank: 3,
},
),
],
),
(
2,
[
Words(
Words {
matching_words: 2,
max_matching_words: 9,
},
),
ExactAttribute(
NoExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 3,
},
),
],
),
(
1,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
(
11,
[
Words(
Words {
matching_words: 1,
max_matching_words: 9,
},
),
ExactAttribute(
ExactMatch,
),
Exactness(
Rank {
rank: 2,
max_rank: 2,
},
),
],
),
]


@ -1,168 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
1.0,
1.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
2.0,
-1.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
-2.0,
-2.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
3.0,
5.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: Some(
[
6.0,
-5.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: true,
value: None,
},
),
],
]


@ -1,168 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
6.0,
-5.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
3.0,
5.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
-2.0,
-2.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
2.0,
-1.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
1.0,
1.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: None,
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: None,
},
),
],
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
0.0,
-175.0,
],
ascending: true,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
-175.0,
],
ascending: true,
value: Some(
[
0.0,
178.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
-175.0,
],
ascending: true,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
-175.0,
],
ascending: true,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
-175.0,
],
ascending: true,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
178.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
85.0,
0.0,
],
ascending: false,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
178.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
85.0,
0.0,
],
ascending: false,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
-85.0,
0.0,
],
ascending: false,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
-85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
-85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
178.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
-85.0,
0.0,
],
ascending: false,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
-85.0,
0.0,
],
ascending: false,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
]


@ -1,91 +0,0 @@
---
source: milli/src/search/new/tests/geo_sort.rs
expression: "format!(\"{scores:#?}\")"
---
[
[
GeoSort(
GeoSort {
target_point: [
0.0,
175.0,
],
ascending: false,
value: Some(
[
0.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
175.0,
],
ascending: false,
value: Some(
[
88.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
175.0,
],
ascending: false,
value: Some(
[
-89.0,
0.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
175.0,
],
ascending: false,
value: Some(
[
0.0,
-179.0,
],
),
},
),
],
[
GeoSort(
GeoSort {
target_point: [
0.0,
175.0,
],
ascending: false,
value: Some(
[
0.0,
178.0,
],
),
},
),
],
]

Some files were not shown because too many files have changed in this diff.