Commit d3a6d60 (parent: d37633e), 36 changed files with 2,237 additions and 457 deletions.

- Removed a useless copy when ingesting JSON.
- Fixed a bug in phrase queries when field norms are missing.
- Disabled range queries on default fields.

Closes #1251
# Json

As of tantivy 0.17, tantivy supports a json object type.
This type can be used to allow for a schemaless search index.

When indexing a json object, we "flatten" the JSON. This operation emits terms that represent a triplet `(json_path, value_type, value)`.

For instance, if `user` is a json field, the following document:
```json
{
    "user": {
        "name": "Paul Masurel",
        "address": {
            "city": "Tokyo",
            "country": "Japan"
        },
        "created_at": "2018-11-12T23:20:50.52Z"
    }
}
```

emits the following tokens:
- ("name", Text, "Paul")
- ("name", Text, "Masurel")
- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("created_at", Date, 15420648505)

# Bytes-encoding and lexicographical sort.

Like any other term, these triplets are encoded into a binary format as follows.
- `json_path`: the json path is a sequence of "segments". In the example above, `address.city`
  is just a debug representation of the json path `["address", "city"]`.
  It is encoded by separating segments with the unicode char `\x01` and terminating the path with `\x00`.
- `value type`: one byte represents the `Value` type.
- `value`: the value representation is just the regular `Value` representation.
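
A minimal sketch of this layout, assuming a made-up `TYPE_TEXT` constant and helper name (the byte value assigned to each type is an implementation detail of tantivy):

```rust
const TYPE_TEXT: u8 = 1; // hypothetical type byte

fn encode_term(json_path: &[&str], value_type: u8, value: &[u8]) -> Vec<u8> {
    let mut bytes = Vec::new();
    for (i, segment) in json_path.iter().enumerate() {
        if i > 0 {
            bytes.push(0x01); // segment separator
        }
        bytes.extend_from_slice(segment.as_bytes());
    }
    bytes.push(0x00); // end of path
    bytes.push(value_type); // one byte for the value type
    bytes.extend_from_slice(value); // regular value representation
    bytes
}
```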

This representation is designed to align the natural sort of terms with the lexicographical sort
of their binary representation (tantivy's term dictionary, whether fst or sstable, is sorted and does prefix encoding).

In the example above, the terms will be sorted as
- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("created_at", Date, 15420648505)
- ("name", Text, "Masurel")
- ("name", Text, "Paul")

As discussed in the pitfalls section, we may end up having to search for the same path with several different value types. Putting the type code after the path maximizes compression opportunities, and also increases the chances for the two terms to end up in the same term dictionary block.

# Pitfalls, limitations and corner cases.

Json gives very little information about the type of the literals it stores.
All numeric types end up mapped as a "Number", and there is no type for dates.

At indexing time, tantivy will try to interpret numbers and strings as different types,
in a set priority order.

Numbers will be interpreted as u64, i64 and f64, in that order.
Strings will be interpreted as RFC 3339 dates, and fall back to simple strings otherwise.

The first type that matches is picked, and it is the only term emitted for indexing.
Note that this interpretation happens on a per-document basis: there is no effort to sniff
a consistent field type at the scale of a segment.
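
A sketch of this priority order, assuming `serde_json` for numbers and `chrono` for the RFC 3339 parse (the `InferredValue` enum and function names are invented for this document):

```rust
use chrono::DateTime;

enum InferredValue {
    U64(u64),
    I64(i64),
    F64(f64),
    Date(i64), // unix timestamp
    Str(String),
}

fn infer_number(raw: serde_json::Number) -> InferredValue {
    // u64, then i64, then f64: the first interpretation that works wins.
    if let Some(val) = raw.as_u64() {
        InferredValue::U64(val)
    } else if let Some(val) = raw.as_i64() {
        InferredValue::I64(val)
    } else {
        InferredValue::F64(raw.as_f64().unwrap_or(f64::NAN))
    }
}

fn infer_string(raw: &str) -> InferredValue {
    // An RFC 3339 date if it parses, a plain string otherwise.
    match DateTime::parse_from_rfc3339(raw) {
        Ok(date) => InferredValue::Date(date.timestamp()),
        Err(_) => InferredValue::Str(raw.to_string()),
    }
}
```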

On the query-parser side, on the other hand, we may end up emitting terms of more than one type.
For instance, we do not even know whether a literal is meant as a number or as a string.

So the query

```
my_path.my_segment:233
```

will be interpreted as
`(my_path.my_segment, String, 233) or (my_path.my_segment, u64, 233)`.

Likewise, we need to emit two tokens if the query contains an RFC 3339 date:
the date could actually have been a single token inside the text of a document at ingestion time.
Generally speaking, query parsing will always emit at least a string token, and sometimes more.
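
Reusing the hypothetical `InferredValue` enum from the sketch above, the query-side expansion could look like this: a string candidate is always emitted, plus the first numeric interpretation that parses and, if applicable, a date:

```rust
fn query_candidates(literal: &str) -> Vec<InferredValue> {
    // A string term is always emitted.
    let mut candidates = vec![InferredValue::Str(literal.to_string())];
    // Mirror the indexing priority order: u64, then i64, then f64.
    if let Ok(val) = literal.parse::<u64>() {
        candidates.push(InferredValue::U64(val));
    } else if let Ok(val) = literal.parse::<i64>() {
        candidates.push(InferredValue::I64(val));
    } else if let Ok(val) = literal.parse::<f64>() {
        candidates.push(InferredValue::F64(val));
    }
    // An RFC 3339 literal additionally yields a date term.
    if let Ok(date) = chrono::DateTime::parse_from_rfc3339(literal) {
        candidates.push(InferredValue::Date(date.timestamp()));
    }
    candidates
}
```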

If more than one json field is defined, things get even more complicated.

## Default json field

If the schema contains a text field called "text" and a json field called `json_dynamic` that is set as a default field,
`text:hello` could reasonably be interpreted as targeting the text field, or as targeting the json field `json_dynamic` with the json path "text".

When there is such an ambiguity, we decide to only search in the "text" field: `text:hello`.

In other words, the parser will not search in default json fields if there is a schema hit.
This is a product decision.

The user can still target the JSON field by specifying its name explicitly:
`json_dynamic.text:hello`.
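
A sketch of this resolution rule (the `FieldTarget` type and `resolve` function are hypothetical; the real logic lives in tantivy's `QueryParser`):

```rust
enum FieldTarget {
    Schema(String), // a field declared in the schema
    DefaultJson { field: String, json_path: String },
}

fn resolve(text_fields: &[&str], json_field: &str, path: &str) -> FieldTarget {
    let (head, rest) = match path.split_once('.') {
        Some((head, rest)) => (head, Some(rest)),
        None => (path, None),
    };
    if text_fields.contains(&head) {
        // Schema hit: the declared field wins over the default json field.
        FieldTarget::Schema(path.to_string())
    } else if head == json_field {
        // Explicit targeting, e.g. `json_dynamic.text:hello`.
        FieldTarget::DefaultJson {
            field: json_field.to_string(),
            json_path: rest.unwrap_or("").to_string(),
        }
    } else {
        // Otherwise, route the full path to the default json field.
        FieldTarget::DefaultJson {
            field: json_field.to_string(),
            json_path: path.to_string(),
        }
    }
}
```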

## Range queries are not supported.

Json fields do not support range queries.

## Arrays do not work like nested objects.

If a json object contains an array, a search query might return more documents
than expected.

Let's take an example.

```json
{
    "cart_id": 3234234,
    "cart": [
        {"product_type": "sneakers", "attributes": {"color": "white"}},
        {"product_type": "t-shirt", "attributes": {"color": "red"}}
    ]
}
```

Despite the array structure, a document in tantivy is a bag of terms.
The query:

```
cart.product_type:sneakers AND cart.attributes.color:red
```

actually matches the document above, even though no single cart item is both a pair of sneakers and red.
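
As a quick check of this behavior, here is a sketch using the same API as the example program shown further down; the `cart_doc` field name is arbitrary:

```rust
use tantivy::collector::Count;
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let cart_doc = schema_builder.add_json_field("cart_doc", TEXT);
    let schema = schema_builder.build();
    let index = Index::create_in_ram(schema.clone());
    let mut index_writer = index.writer(50_000_000)?;
    index_writer.add_document(schema.parse_document(
        r#"{"cart_doc": {"cart": [
            {"product_type": "sneakers", "attributes": {"color": "white"}},
            {"product_type": "t-shirt", "attributes": {"color": "red"}}
        ]}}"#,
    )?)?;
    index_writer.commit()?;
    let searcher = index.reader()?.searcher();
    let query_parser = QueryParser::for_index(&index, vec![cart_doc]);
    // No single cart item is both sneakers and red, yet the document matches.
    let query =
        query_parser.parse_query("cart.product_type:sneakers AND cart.attributes.color:red")?;
    assert_eq!(searcher.search(&*query, &Count)?, 1);
    Ok(())
}
```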
// # Json field example
//
// This example shows how the json field can be used
// to make tantivy partially schemaless.

use tantivy::collector::{Count, TopDocs};
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, FAST, STORED, STRING, TEXT};
use tantivy::Index;

fn main() -> tantivy::Result<()> {
    // # Defining the schema
    //
    // We need three fields:
    // - a timestamp
    // - an event type
    // - a json object field
    let mut schema_builder = Schema::builder();
    schema_builder.add_date_field("timestamp", FAST | STORED);
    let event_type = schema_builder.add_text_field("event_type", STRING | STORED);
    let attributes = schema_builder.add_json_field("attributes", STORED | TEXT);
    let schema = schema_builder.build();

    // # Indexing documents
    let index = Index::create_in_ram(schema.clone());

    let mut index_writer = index.writer(50_000_000)?;
    let doc = schema.parse_document(
        r#"{
            "timestamp": "2022-02-22T23:20:50.53Z",
            "event_type": "click",
            "attributes": {
                "target": "submit-button",
                "cart": {"product_id": 103},
                "description": "the best vacuum cleaner ever"
            }
        }"#,
    )?;
    index_writer.add_document(doc)?;
    let doc = schema.parse_document(
        r#"{
            "timestamp": "2022-02-22T23:20:51.53Z",
            "event_type": "click",
            "attributes": {
                "target": "submit-button",
                "cart": {"product_id": 133},
                "description": "das keyboard"
            }
        }"#,
    )?;
    index_writer.add_document(doc)?;
    index_writer.commit()?;

    // # Searching
    //
    // `attributes` is passed as a default field, so json paths like
    // `target` or `cart.product_id` are resolved against it.
    let reader = index.reader()?;
    let searcher = reader.searcher();

    let query_parser = QueryParser::for_index(&index, vec![event_type, attributes]);
    {
        // Exact token match inside the json field.
        let query = query_parser.parse_query("target:submit-button")?;
        let top_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
        assert_eq!(top_docs.len(), 2);
    }
    {
        // `submit-button` was tokenized, so `submit` matches as well.
        let query = query_parser.parse_query("target:submit")?;
        let top_docs = searcher.search(&*query, &TopDocs::with_limit(2))?;
        assert_eq!(top_docs.len(), 2);
    }
    {
        // Numeric value nested inside the json object.
        let query = query_parser.parse_query("cart.product_id:103")?;
        let count = searcher.search(&*query, &Count)?;
        assert_eq!(count, 1);
    }
    {
        // Mixing a regular schema field with a json path.
        let query = query_parser.parse_query("event_type:click AND cart.product_id:133")?;
        let hits = searcher.search(&*query, &TopDocs::with_limit(2))?;
        assert_eq!(hits.len(), 1);
    }
    Ok(())
}