From 73ab08c996ce741bb8510c941eeecc32c4728b5c Mon Sep 17 00:00:00 2001 From: Junqiu Lei Date: Thu, 1 Aug 2024 11:40:49 -0700 Subject: [PATCH] Add doc for binary format support in k-NN (#7840) * Add doc for binary format support in k-NN Signed-off-by: Junqiu Lei * Resolve tech feedback Signed-off-by: Junqiu Lei * Doc review Signed-off-by: Fanit Kolchina * Add newline Signed-off-by: Fanit Kolchina * Formatting Signed-off-by: Fanit Kolchina * Link fix Signed-off-by: Fanit Kolchina * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> * Add query results to examples Signed-off-by: Junqiu Lei * Rephrased sentences and changed vector field name Signed-off-by: Fanit Kolchina * Editorial review Signed-off-by: Fanit Kolchina * Remove details from one of the requests Signed-off-by: Fanit Kolchina --------- Signed-off-by: Junqiu Lei Signed-off-by: Fanit Kolchina Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Fanit Kolchina Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../supported-field-types/knn-vector.md | 416 +++++++++++++++++- _search-plugins/knn/approximate-knn.md | 18 +- _search-plugins/knn/knn-index.md | 13 +- _search-plugins/knn/knn-score-script.md | 14 +- _search-plugins/knn/painless-functions.md | 4 + _search-plugins/vector-search.md | 2 +- 6 files changed, 447 insertions(+), 20 deletions(-) diff --git a/_field-types/supported-field-types/knn-vector.md b/_field-types/supported-field-types/knn-vector.md index c7f9ec7f2b..a2a7137733 100644 --- a/_field-types/supported-field-types/knn-vector.md +++ b/_field-types/supported-field-types/knn-vector.md @@ -13,7 +13,7 @@ The [k-NN plugin]({{site.url}}{{site.baseurl}}/search-plugins/knn/index/) introd ## Example -For example, to map `my_vector1` as a `knn_vector`, use the following request: +For example, to map `my_vector` as a `knn_vector`, use the following request: ```json PUT test-index @@ -26,7 +26,7 @@ PUT test-index }, "mappings": { "properties": { - "my_vector1": { + "my_vector": { "type": "knn_vector", "dimension": 3, "method": { @@ -67,8 +67,7 @@ PUT test-index ## Model IDs -Model IDs are used when the underlying Approximate k-NN algorithm requires a training step. As a prerequisite, the -model has to be created with the [Train API]({{site.url}}{{site.baseurl}}/search-plugins/knn/api#train-a-model). The +Model IDs are used when the underlying Approximate k-NN algorithm requires a training step. As a prerequisite, the model must be created with the [Train API]({{site.url}}{{site.baseurl}}/search-plugins/knn/api#train-a-model). The model contains the information needed to initialize the native library segment files. ```json @@ -111,7 +110,7 @@ PUT test-index }, "mappings": { "properties": { - "my_vector1": { + "my_vector": { "type": "knn_vector", "dimension": 3, "data_type": "byte", @@ -136,7 +135,7 @@ Then ingest documents as usual. 
Make sure each dimension in the vector is in the ```json PUT test-index/_doc/1 { - "my_vector1": [-126, 28, 127] + "my_vector": [-126, 28, 127] } ``` {% include copy-curl.html %} @@ -144,7 +143,7 @@ PUT test-index/_doc/1 ```json PUT test-index/_doc/2 { - "my_vector1": [100, -128, 0] + "my_vector": [100, -128, 0] } ``` {% include copy-curl.html %} @@ -157,7 +156,7 @@ GET test-index/_search "size": 2, "query": { "knn": { - "my_vector1": { + "my_vector": { "vector": [26, -120, 99], "k": 2 } @@ -267,3 +266,404 @@ else: return Byte(bval) ``` {% include copy.html %} + +## Binary k-NN vectors + +You can reduce memory costs by a factor of 32 by switching from float to binary vectors. +Using binary vector indexes can lower operational costs while maintaining high recall performance, making large-scale deployment more economical and efficient. + +Binary format is available for the following k-NN search types: + +- [Approximate k-NN]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/): Supports binary vectors only for the Faiss engine with the HNSW and IVF algorithms. +- [Script score k-NN]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-score-script/): Enables the use of binary vectors in script scoring. +- [Painless extensions]({{site.url}}{{site.baseurl}}/search-plugins/knn/painless-functions/): Allows the use of binary vectors with Painless scripting extensions. + +### Requirements + +There are several requirements for using binary vectors in the OpenSearch k-NN plugin: + +- The `data_type` of the binary vector index must be `binary`. +- The `space_type` of the binary vector index must be `hamming`. +- The `dimension` of the binary vector index must be a multiple of 8. +- You must convert your binary data into 8-bit signed integers (`int8`) in the [-128, 127] range. For example, the binary sequence of 8 bits `0, 1, 1, 0, 0, 0, 1, 1` must be converted into its equivalent byte value of `99` to be used as a binary vector input. + +### Example: HNSW + +To create a binary vector index with the Faiss engine and HNSW algorithm, send the following request: + +```json +PUT /test-binary-hnsw +{ + "settings": { + "index": { + "knn": true + } + }, + "mappings": { + "properties": { + "my_vector": { + "type": "knn_vector", + "dimension": 8, + "data_type": "binary", + "method": { + "name": "hnsw", + "space_type": "hamming", + "engine": "faiss", + "parameters": { + "ef_construction": 128, + "m": 24 + } + } + } + } + } +} +``` +{% include copy-curl.html %} + +Then ingest some documents containing binary vectors: + +```json +PUT _bulk +{"index": {"_index": "test-binary-hnsw", "_id": "1"}} +{"my_vector": [7], "price": 4.4} +{"index": {"_index": "test-binary-hnsw", "_id": "2"}} +{"my_vector": [10], "price": 14.2} +{"index": {"_index": "test-binary-hnsw", "_id": "3"}} +{"my_vector": [15], "price": 19.1} +{"index": {"_index": "test-binary-hnsw", "_id": "4"}} +{"my_vector": [99], "price": 1.2} +{"index": {"_index": "test-binary-hnsw", "_id": "5"}} +{"my_vector": [80], "price": 16.5} +``` +{% include copy-curl.html %} + +When querying, be sure to use a binary vector: + +```json +GET /test-binary-hnsw/_search +{ + "size": 2, + "query": { + "knn": { + "my_vector": { + "vector": [9], + "k": 2 + } + } + } +} +``` +{% include copy-curl.html %} + +The response contains the two vectors closest to the query vector: + +
+<details markdown="block">
+  <summary>
+    Response
+  </summary>
+  {: .text-delta}
+
+```json
+{
+  "took": 8,
+  "timed_out": false,
+  "_shards": {
+    "total": 1,
+    "successful": 1,
+    "skipped": 0,
+    "failed": 0
+  },
+  "hits": {
+    "total": {
+      "value": 2,
+      "relation": "eq"
+    },
+    "max_score": 0.5,
+    "hits": [
+      {
+        "_index": "test-binary-hnsw",
+        "_id": "2",
+        "_score": 0.5,
+        "_source": {
+          "my_vector": [
+            10
+          ],
+          "price": 14.2
+        }
+      },
+      {
+        "_index": "test-binary-hnsw",
+        "_id": "5",
+        "_score": 0.25,
+        "_source": {
+          "my_vector": [
+            80
+          ],
+          "price": 16.5
+        }
+      }
+    ]
+  }
+}
+```
+</details>
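The requirement that binary input be packed into signed 8-bit integers is easy to satisfy in client code. The following sketch shows one way to do that in plain Python; the `bits_to_int8` helper is illustrative and not part of the k-NN plugin. It reproduces the `0, 1, 1, 0, 0, 0, 1, 1` to `99` conversion described in the requirements above:

```python
def bits_to_int8(bits):
    """Pack 0/1 values (most significant bit first) into signed 8-bit integers."""
    if len(bits) % 8 != 0:
        raise ValueError("The number of bits must be a multiple of 8.")
    packed = []
    for i in range(0, len(bits), 8):
        value = 0
        for bit in bits[i:i + 8]:
            value = (value << 1) | bit
        # Map the unsigned byte (0-255) into the signed [-128, 127] range.
        if value > 127:
            value -= 256
        packed.append(value)
    return packed

# The binary sequence from the requirements above: 0,1,1,0,0,0,1,1 becomes 99.
print(bits_to_int8([0, 1, 1, 0, 0, 0, 1, 1]))   # [99]
# A 16-dimensional binary vector packs into two int8 values.
print(bits_to_int8([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1]))  # [-1, 7]
```

The packed values are what you place in the `knn_vector` field, exactly as in the single-byte documents ingested above.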
+
+### Example: IVF
+
+The IVF method requires a training step that creates and trains the model used to initialize the native library index during segment creation. For more information, see [Building a k-NN index from a model]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model).
+
+First, create an index that will contain binary vector training data. Make sure that the `dimension` matches the dimension of the model you want to create; you'll specify the Faiss engine and IVF algorithm when you train the model in a later step:
+
+```json
+PUT train-index
+{
+  "mappings": {
+    "properties": {
+      "train-field": {
+        "type": "knn_vector",
+        "dimension": 8,
+        "data_type": "binary"
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
+Ingest training data containing binary vectors into the training index:
+
+<details markdown="block">
+  <summary>
+    Bulk ingest request
+  </summary>
+  {: .text-delta}
+
+```json
+PUT _bulk
+{ "index": { "_index": "train-index", "_id": "1" } }
+{ "train-field": [1] }
+{ "index": { "_index": "train-index", "_id": "2" } }
+{ "train-field": [2] }
+{ "index": { "_index": "train-index", "_id": "3" } }
+{ "train-field": [3] }
+{ "index": { "_index": "train-index", "_id": "4" } }
+{ "train-field": [4] }
+{ "index": { "_index": "train-index", "_id": "5" } }
+{ "train-field": [5] }
+{ "index": { "_index": "train-index", "_id": "6" } }
+{ "train-field": [6] }
+{ "index": { "_index": "train-index", "_id": "7" } }
+{ "train-field": [7] }
+{ "index": { "_index": "train-index", "_id": "8" } }
+{ "train-field": [8] }
+{ "index": { "_index": "train-index", "_id": "9" } }
+{ "train-field": [9] }
+{ "index": { "_index": "train-index", "_id": "10" } }
+{ "train-field": [10] }
+{ "index": { "_index": "train-index", "_id": "11" } }
+{ "train-field": [11] }
+{ "index": { "_index": "train-index", "_id": "12" } }
+{ "train-field": [12] }
+{ "index": { "_index": "train-index", "_id": "13" } }
+{ "train-field": [13] }
+{ "index": { "_index": "train-index", "_id": "14" } }
+{ "train-field": [14] }
+{ "index": { "_index": "train-index", "_id": "15" } }
+{ "train-field": [15] }
+{ "index": { "_index": "train-index", "_id": "16" } }
+{ "train-field": [16] }
+{ "index": { "_index": "train-index", "_id": "17" } }
+{ "train-field": [17] }
+{ "index": { "_index": "train-index", "_id": "18" } }
+{ "train-field": [18] }
+{ "index": { "_index": "train-index", "_id": "19" } }
+{ "train-field": [19] }
+{ "index": { "_index": "train-index", "_id": "20" } }
+{ "train-field": [20] }
+{ "index": { "_index": "train-index", "_id": "21" } }
+{ "train-field": [21] }
+{ "index": { "_index": "train-index", "_id": "22" } }
+{ "train-field": [22] }
+{ "index": { "_index": "train-index", "_id": "23" } }
+{ "train-field": [23] }
+{ "index": { "_index": "train-index", "_id": "24" } }
+{ "train-field": [24] }
+{ "index": { "_index": "train-index", "_id": "25" } }
+{ "train-field": [25] }
+{ "index": { "_index": "train-index", "_id": "26" } }
+{ "train-field": [26] }
+{ "index": { "_index": "train-index", "_id": "27" } }
+{ "train-field": [27] }
+{ "index": { "_index": "train-index", "_id": "28" } }
+{ "train-field": [28] }
+{ "index": { "_index": "train-index", "_id": "29" } }
+{ "train-field": [29] }
+{ "index": { "_index": "train-index", "_id": "30" } }
+{ "train-field": [30] }
+{ "index": { "_index": "train-index", "_id": "31" } }
+{ "train-field": [31] }
+{ "index": { "_index": "train-index", "_id": "32" } }
+{ "train-field": [32] }
+{ "index": { "_index": "train-index", "_id": "33" } }
+{ "train-field": [33] }
+{ "index": { "_index": "train-index", "_id": "34" } }
+{ "train-field": [34] }
+{ "index": { "_index": "train-index", "_id": "35" } }
+{ "train-field": [35] }
+{ "index": { "_index": "train-index", "_id": "36" } }
+{ "train-field": [36] }
+{ "index": { "_index": "train-index", "_id": "37" } }
+{ "train-field": [37] }
+{ "index": { "_index": "train-index", "_id": "38" } }
+{ "train-field": [38] }
+{ "index": { "_index": "train-index", "_id": "39" } }
+{ "train-field": [39] }
+{ "index": { "_index": "train-index", "_id": "40" } }
+{ "train-field": [40] }
+```
+{% include copy-curl.html %}
+</details>
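The 40 documents above are small enough to write by hand, but a realistic training set is usually generated programmatically. A minimal sketch of that idea, assuming plain Python and the same `train-index`/`train-field` names as the example (how you send the resulting bulk body — curl or an OpenSearch client — is left to you):

```python
import json
import random

def random_binary_vector(dimension):
    """Return a binary vector of `dimension` bits, packed into signed int8 values."""
    if dimension % 8 != 0:
        raise ValueError("dimension must be a multiple of 8")
    packed = []
    for _ in range(dimension // 8):
        byte = random.randint(0, 255)
        packed.append(byte - 256 if byte > 127 else byte)
    return packed

def build_bulk_body(index_name, field_name, dimension, count):
    """Build an NDJSON bulk body shaped like the request above."""
    lines = []
    for doc_id in range(1, count + 1):
        lines.append(json.dumps({"index": {"_index": index_name, "_id": str(doc_id)}}))
        lines.append(json.dumps({field_name: random_binary_vector(dimension)}))
    return "\n".join(lines) + "\n"  # Bulk bodies must end with a newline.

print(build_bulk_body("train-index", "train-field", 8, 40))
```

For IVF in particular, a larger and more representative training set generally produces better cluster assignments than the toy values shown here.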
+ +Then, create and train the model named `test-binary-model`. The model will be trained using the training data from the `train_field` in the `train-index`. Specify the `binary` data type and `hamming` space type: + +```json +POST _plugins/_knn/models/test-binary-model/_train +{ + "training_index": "train-index", + "training_field": "train-field", + "dimension": 8, + "description": "model with binary data", + "data_type": "binary", + "method": { + "name": "ivf", + "engine": "faiss", + "space_type": "hamming", + "parameters": { + "nlist": 1, + "nprobes": 1 + } + } +} +``` +{% include copy-curl.html %} + +To check the model training status, call the Get Model API: + +```json +GET _plugins/_knn/models/test-binary-model?filter_path=state +``` +{% include copy-curl.html %} + +Once the training is complete, the `state` changes to `created`. + +Next, create an index that will initialize its native library indexes using the trained model: + +```json +PUT test-binary-ivf +{ + "settings": { + "index": { + "knn": true + } + }, + "mappings": { + "properties": { + "my_vector": { + "type": "knn_vector", + "model_id": "test-binary-model" + } + } + } +} +``` +{% include copy-curl.html %} + +Ingest the data containing the binary vectors that you want to search into the created index: + +```json +PUT _bulk?refresh=true +{"index": {"_index": "test-binary-ivf", "_id": "1"}} +{"my_vector": [7], "price": 4.4} +{"index": {"_index": "test-binary-ivf", "_id": "2"}} +{"my_vector": [10], "price": 14.2} +{"index": {"_index": "test-binary-ivf", "_id": "3"}} +{"my_vector": [15], "price": 19.1} +{"index": {"_index": "test-binary-ivf", "_id": "4"}} +{"my_vector": [99], "price": 1.2} +{"index": {"_index": "test-binary-ivf", "_id": "5"}} +{"my_vector": [80], "price": 16.5} +``` +{% include copy-curl.html %} + +Finally, search the data. Be sure to provide a binary vector in the k-NN vector field: + +```json +GET test-binary-ivf/_search +{ + "size": 2, + "query": { + "knn": { + "my_vector": { + "vector": [8], + "k": 2 + } + } + } +} +``` +{% include copy-curl.html %} + +The response contains the two vectors closest to the query vector: + +
+<details markdown="block">
+  <summary>
+    Response
+  </summary>
+  {: .text-delta}
+
+```json
+{
+  "took": 7,
+  "timed_out": false,
+  "_shards": {
+    "total": 1,
+    "successful": 1,
+    "skipped": 0,
+    "failed": 0
+  },
+  "hits": {
+    "total": {
+      "value": 2,
+      "relation": "eq"
+    },
+    "max_score": 0.5,
+    "hits": [
+      {
+        "_index": "test-binary-ivf",
+        "_id": "2",
+        "_score": 0.5,
+        "_source": {
+          "my_vector": [
+            10
+          ],
+          "price": 14.2
+        }
+      },
+      {
+        "_index": "test-binary-ivf",
+        "_id": "3",
+        "_score": 0.25,
+        "_source": {
+          "my_vector": [
+            15
+          ],
+          "price": 19.1
+        }
+      }
+    ]
+  }
+}
+```
+</details>
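The scores in these responses follow directly from the `hamming` space type: the distance is the number of differing bits between the packed vectors, and the score is `1 / (1 + distance)`. A quick way to check this by hand (plain Python; the helper names are illustrative, not plugin APIs):

```python
def hamming_distance(a, b):
    """Number of differing bits between two packed binary vectors of equal length."""
    return sum(bin((x ^ y) & 0xFF).count("1") for x, y in zip(a, b))

def hamming_score(a, b):
    """OpenSearch score for the hamming space type: 1 / (1 + distance)."""
    return 1.0 / (1.0 + hamming_distance(a, b))

# Values from the IVF example above: query [8] against documents [10] and [15].
print(hamming_score([8], [10]))  # 0.5  -> matches _id 2 in the response
print(hamming_score([8], [15]))  # 0.25 -> matches _id 3 in the response
```

The same arithmetic explains the HNSW example earlier: the query `[9]` differs from `[10]` in one bit (score 0.5) and from `[80]` in three bits (score 0.25).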
diff --git a/_search-plugins/knn/approximate-knn.md b/_search-plugins/knn/approximate-knn.md
index fa1b4096c7..0b5a48059b 100644
--- a/_search-plugins/knn/approximate-knn.md
+++ b/_search-plugins/knn/approximate-knn.md
@@ -314,6 +314,10 @@ To learn about using k-NN search with nested fields, see [k-NN search with neste
 To learn more about the radial search feature, see [k-NN radial search]({{site.url}}{{site.baseurl}}/search-plugins/knn/radial-search-knn/).
 
+### Using approximate k-NN with binary vectors
+
+To learn more about using binary vectors with k-NN search, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors).
+
 ## Spaces
 
 A space corresponds to the function used to measure the distance between two points in order to determine the k-nearest neighbors. From the k-NN perspective, a lower score equates to a closer and better result. This is the opposite of how OpenSearch scores results, where a greater score equates to a better result. To convert distances to OpenSearch scores, we take 1 / (1 + distance). The k-NN plugin supports the following spaces.
 
@@ -325,9 +329,9 @@ Not every method supports each of these spaces. Be sure to check out [the method
-    <th>spaceType</th>
-    <th>Distance Function (d)</th>
-    <th>OpenSearch Score</th>
+    <th>Space type</th>
+    <th>Distance function (d)</th>
+    <th>OpenSearch score</th>
@@ -363,6 +367,11 @@ Not every method supports each of these spaces. Be sure to check out [the method
       \[ \text{If} d > 0, score = d + 1 \] \[\text{If} d \le 0\] \[score = {1 \over 1 + (-1 · d) }\]
+    <tr>
+      <td>hamming (supported for binary vectors in OpenSearch version 2.16 and later)</td>
+      <td>\[ d(\mathbf{x}, \mathbf{y}) = \text{countSetBits}(\mathbf{x} \oplus \mathbf{y})\]</td>
+      <td>\[ score = {1 \over 1 + d } \]</td>
+    </tr>
The cosine similarity formula does not include the `1 -` prefix. However, because similarity search libraries equates @@ -374,3 +383,6 @@ With cosine similarity, it is not valid to pass a zero vector (`[0, 0, ...]`) as such a vector is 0, which raises a `divide by 0` exception in the corresponding formula. Requests containing the zero vector will be rejected and a corresponding exception will be thrown. {: .note } + +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). +{: .note} diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index ed8b9217f5..a6ffd922eb 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -45,6 +45,10 @@ PUT /test-index Starting with k-NN plugin version 2.9, you can use `byte` vectors with the `lucene` engine to reduce the amount of storage space needed. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector). +## Binary vector + +Starting with k-NN plugin version 2.16, you can use `binary` vectors with the `faiss` engine to reduce the amount of required storage space. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). + ## SIMD optimization for the Faiss engine Starting with version 2.13, the k-NN plugin supports [Single Instruction Multiple Data (SIMD)](https://en.wikipedia.org/wiki/Single_instruction,_multiple_data) processing if the underlying hardware supports SIMD instructions (AVX2 on x64 architecture and Neon on ARM64 architecture). SIMD is supported by default on Linux machines only for the Faiss engine. SIMD architecture helps boost overall performance by improving indexing throughput and reducing search latency. @@ -105,13 +109,16 @@ An index created in OpenSearch version 2.11 or earlier will still use the old `e ### Supported Faiss methods Method name | Requires training | Supported spaces | Description -:--- | :--- | :--- | :--- -`hnsw` | false | l2, innerproduct | Hierarchical proximity graph approach to approximate k-NN search. -`ivf` | true | l2, innerproduct | Stands for _inverted file index_. Bucketing approach where vectors are assigned different buckets based on clustering and, during search, only a subset of the buckets is searched. +:--- | :--- |:---| :--- +`hnsw` | false | l2, innerproduct, hamming | Hierarchical proximity graph approach to approximate k-NN search. +`ivf` | true | l2, innerproduct, hamming | Stands for _inverted file index_. Bucketing approach where vectors are assigned different buckets based on clustering and, during search, only a subset of the buckets is searched. For hnsw, "innerproduct" is not available when PQ is used. {: .note} +The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors). 
+{: .note} + #### HNSW parameters Parameter name | Required | Default | Updatable | Description diff --git a/_search-plugins/knn/knn-score-script.md b/_search-plugins/knn/knn-score-script.md index 1696bd4cad..1a21f49513 100644 --- a/_search-plugins/knn/knn-score-script.md +++ b/_search-plugins/knn/knn-score-script.md @@ -319,7 +319,10 @@ A space corresponds to the function used to measure the distance between two poi - hammingbit + + hammingbit (supported for binary and long vectors)

+    hamming (supported for binary vectors in OpenSearch version 2.16 and later)
+
     \[ d(\mathbf{x}, \mathbf{y}) = \text{countSetBits}(\mathbf{x} \oplus \mathbf{y})\]
     \[ score = {1 \over 1 + d } \]
@@ -328,7 +331,8 @@ A space corresponds to the function used to measure the distance between two poi
 Cosine similarity returns a number between -1 and 1, and because OpenSearch relevance scores can't be below 0, the k-NN plugin adds 1 to get the final score.
 
-With cosine similarity, it is not valid to pass a zero vector (`[0, 0, ...`]) as input. This is because the magnitude of
-such a vector is 0, which raises a `divide by 0` exception in the corresponding formula. Requests
-containing the zero vector will be rejected and a corresponding exception will be thrown.
-{: .note }
\ No newline at end of file
+With cosine similarity, it is not valid to pass a zero vector (`[0, 0, ... ]`) as input. This is because the magnitude of such a vector is 0, which raises a `divide by 0` exception in the corresponding formula. Requests containing the zero vector will be rejected, and a corresponding exception will be thrown.
+{: .note }
+
+The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors).
+{: .note}
diff --git a/_search-plugins/knn/painless-functions.md b/_search-plugins/knn/painless-functions.md
index 09eb989702..85840ff535 100644
--- a/_search-plugins/knn/painless-functions.md
+++ b/_search-plugins/knn/painless-functions.md
@@ -52,6 +52,10 @@ Function name | Function signature | Description
 l2Squared | `float l2Squared (float[] queryVector, doc['vector field'])` | This function calculates the square of the L2 distance (Euclidean distance) between a given query vector and document vectors. The shorter the distance, the more relevant the document is, so this example inverts the return value of the l2Squared function. If the document vector matches the query vector, the result is 0, so this example also adds 1 to the distance to avoid divide by zero errors.
 l1Norm | `float l1Norm (float[] queryVector, doc['vector field'])` | This function calculates the L1 distance (the sum of the absolute differences of the vector components) between a given query vector and document vectors. The shorter the distance, the more relevant the document is, so this example inverts the return value of the l1Norm function. If the document vector matches the query vector, the result is 0, so this example also adds 1 to the distance to avoid divide by zero errors.
 cosineSimilarity | `float cosineSimilarity (float[] queryVector, doc['vector field'])` | Cosine similarity is an inner product of the query vector and document vector normalized to both have a length of 1. If the magnitude of the query vector doesn't change throughout the query, you can pass the magnitude of the query vector to improve performance, instead of calculating the magnitude every time for every filtered document:<br>`float cosineSimilarity (float[] queryVector, doc['vector field'], float normQueryVector)`<br>In general, the range of cosine similarity is [-1, 1]. However, in the case of information retrieval, the cosine similarity of two documents ranges from 0 to 1 because the tf-idf statistic can't be negative. Therefore, the k-NN plugin adds 1.0 in order to always yield a positive cosine similarity score.
+hamming | `float hamming (float[] queryVector, doc['vector field'])` | This function calculates the Hamming distance between a given query vector and document vectors. The Hamming distance is the number of positions at which the corresponding elements are different. The shorter the distance, the more relevant the document is, so this example inverts the return value of the Hamming distance.
+
+The `hamming` space type is supported for binary vectors in OpenSearch version 2.16 and later. For more information, see [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors).
+{: .note}
 
 ## Constraints
 
diff --git a/_search-plugins/vector-search.md b/_search-plugins/vector-search.md
index 862b26b375..68f6dea08c 100644
--- a/_search-plugins/vector-search.md
+++ b/_search-plugins/vector-search.md
@@ -57,7 +57,7 @@ PUT test-index
 You must designate the field that will store vectors as a [`knn_vector`]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/) field type. OpenSearch supports vectors of up to 16,000 dimensions, each of which is represented as a 32-bit or 16-bit float.
 
-To save storage space, you can use `byte` vectors. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector).
+To save storage space, you can use `byte` or `binary` vectors. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector) and [Binary k-NN vectors]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#binary-k-nn-vectors).
 
 ### k-NN vector search
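The storage arithmetic behind these `byte` and `binary` options is simple to check. Each float32 dimension takes 4 bytes, a byte dimension takes 1 byte, and a binary dimension takes a single bit, which is where the factor-of-32 saving quoted earlier for binary vectors comes from. A rough sketch (plain Python; the 768-dimensional size is an arbitrary example, and real indexes add graph and metadata overhead):

```python
def raw_vector_bytes(dimension, kind):
    """Approximate storage for one vector's raw data, ignoring index overhead."""
    sizes = {
        "float": dimension * 4,     # 32 bits per dimension
        "byte": dimension * 1,      # 8 bits per dimension
        "binary": dimension // 8,   # 1 bit per dimension (dimension is a multiple of 8)
    }
    return sizes[kind]

for kind in ("float", "byte", "binary"):
    print(kind, raw_vector_bytes(768, kind), "bytes per 768-dimensional vector")
# float 3072 bytes, byte 768 bytes, binary 96 bytes -> binary is 32x smaller than float
```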