Adds section on product quantization for docs #6926

Merged
38 commits merged on Apr 16, 2024
Changes from 19 commits
Commits
4f1bd63  Adds section on product quantization for docs (jmazanec15, Apr 9, 2024)
0370b85  Update knn-vector-quantization.md (vagimeli, Apr 10, 2024)
548805b  Update knn-vector-quantization.md (vagimeli, Apr 10, 2024)
167cb96  Update knn-vector-quantization.md (vagimeli, Apr 10, 2024)
be1c836  Update _search-plugins/knn/knn-vector-quantization.md (jmazanec15, Apr 10, 2024)
255a25b  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
08194cb  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 12, 2024)
0a0e105  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
83503a8  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
1e413bc  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
050064e  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
f2f42ee  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
10c35eb  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
22508e2  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
6d8e9d1  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
bda3d4f  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 12, 2024)
7e4e956  Update knn-index.md (vagimeli, Apr 12, 2024)
3cf22dd  Update knn-vector-quantization.md (vagimeli, Apr 15, 2024)
28acaac  Merge branch 'main' into knn-pq-improved-docs (vagimeli, Apr 15, 2024)
5231b87  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 16, 2024)
be1e447  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 16, 2024)
c78bc75  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 16, 2024)
036db27  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 16, 2024)
08a9857  Update _search-plugins/knn/knn-index.md (vagimeli, Apr 16, 2024)
55bf87c  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
38aca8d  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
5591d88  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
2b51288  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
2c389a3  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
5b0f258  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
d054f3f  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
9cc412f  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
ff0bebc  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
5b68721  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
b15ac62  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
072f1e6  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
ead048e  Update _search-plugins/knn/knn-vector-quantization.md (vagimeli, Apr 16, 2024)
2721679  Update knn-vector-quantization.md (vagimeli, Apr 16, 2024)
24 changes: 9 additions & 15 deletions _search-plugins/knn/knn-index.md
@@ -44,7 +44,7 @@ PUT /test-index

## Lucene byte vector

Starting with k-NN plugin version 2.9, you can use `byte` vectors with the `lucene` engine in order to reduce the amount of storage space needed. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector).
Starting with k-NN plugin version 2.9, you can use `byte` vectors with the `lucene` engine to reduce the amount of storage space needed. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector).

## SIMD optimization for the Faiss engine

@@ -137,10 +137,7 @@ For more information about setting these parameters, refer to the [Faiss documen

#### IVF training requirements

The IVF algorithm requires a training step. To create an index that uses IVF, you need to train a model with the
[Train API]({{site.url}}{{site.baseurl}}/search-plugins/knn/api#train-model), passing the IVF method definition. IVF requires that, at a minimum, there should be `nlist` training
data points, but it is [recommended that you use more](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset).
Training data can be composed of either the same data that is going to be ingested or a separate dataset.
The IVF algorithm requires a training step. To create an index that uses IVF, you need to train a model with the [Train API]({{site.url}}{{site.baseurl}}/search-plugins/knn/api#train-model), passing the IVF method definition. IVF requires that, at a minimum, there should be `nlist` training data points, but it is [recommended that you use more](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset). Training data can be composed of either the same data that is going to be ingested or a separate dataset.

### Supported Lucene methods

@@ -175,8 +172,7 @@ An index created in OpenSearch version 2.11 or earlier will still use the old `e

### Supported Faiss encoders

You can use encoders to reduce the memory footprint of a k-NN index at the expense of search accuracy. The k-NN plugin currently supports the
`flat`, `pq`, and `sq` encoders in the Faiss library.
You can use encoders to reduce the memory footprint of a k-NN index at the expense of search accuracy. The k-NN plugin currently supports the `flat`, `pq`, and `sq` encoders in the Faiss library.

The following example method definition specifies the `hnsw` method and a `pq` encoder:
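The full example block is collapsed in this diff view. As a hedged sketch (parameter values are illustrative, and the vector dimension must be divisible by the PQ `m` value), such a method definition might look like the following:

```json
"method": {
  "name": "hnsw",
  "engine": "faiss",
  "space_type": "l2",
  "parameters": {
    "encoder": {
      "name": "pq",
      "parameters": {
        "code_size": 8,
        "m": 8
      }
    },
    "ef_construction": 256,
    "m": 8
  }
}
```

Here, the outer `m` is the HNSW graph degree, while the `m` inside the encoder is the number of PQ subvectors; both values are placeholders rather than recommendations.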

@@ -204,7 +200,7 @@ Encoder name | Requires training | Description
:--- | :--- | :---
`flat` (Default) | false | Encode vectors as floating-point arrays. This encoding does not reduce memory footprint.
`pq` | true | An abbreviation for _product quantization_, it is a lossy compression technique that uses clustering to encode a vector into a fixed size of bytes, with the goal of minimizing the drop in k-NN search accuracy. At a high level, vectors are broken up into `m` subvectors, and then each subvector is represented by a `code_size` code obtained from a code book produced during training. For more information about product quantization, see [this blog post](https://medium.com/dotstar/understanding-faiss-part-2-79d90b1e5388).
`sq` | false | An abbreviation for _scalar quantization_. Starting with k-NN plugin version 2.13, you can use the `sq` encoder to quantize 32-bit floating-point vectors into 16-bit floats. In version 2.13, the built-in `sq` encoder is the SQFP16 Faiss encoder. The encoder reduces memory footprint with a minimal loss of precision and improves performance by using SIMD optimization (using AVX2 on x86 architecture or Neon on ARM64 architecture). For more information, see [Faiss scalar quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization#faiss-scalar-quantization).
`sq` | false | An abbreviation for _scalar quantization_. Starting with k-NN plugin version 2.13, you can use the `sq` encoder to quantize 32-bit floating-point vectors into 16-bit floats. In version 2.13, the built-in `sq` encoder is the SQFP16 Faiss encoder. The encoder reduces memory footprint with a minimal loss of precision and improves performance by using SIMD optimization (using AVX2 on x86 architecture or Neon on ARM64 architecture). For more information, see [Faiss scalar quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization#faiss-16-bit-scalar-quantization).

#### PQ parameters

@@ -314,21 +310,19 @@ The following example uses the `ivf` method with an `sq` encoder of type `fp16`:

### Choosing the right method

There are a lot of options to choose from when building your `knn_vector` field. To determine the correct methods and parameters to choose, you should first understand what requirements you have for your workload and what trade-offs you are willing to make. Factors to consider are (1) query latency, (2) query quality, (3) memory limits, (4) indexing latency.
There are several options to choose from when building your `knn_vector` field. To determine the correct methods and parameters to choose, you should first understand what requirements you have for your workload and what trade-offs you are willing to make. Factors to consider are (1) query latency, (2) query quality, (3) memory limits, and (4) indexing latency.

If memory is not a concern, HNSW offers a very strong query latency/query quality tradeoff.
If memory is not a concern, HNSW offers a strong query latency/query quality trade-off.

If you want to use less memory and index faster than HNSW, while maintaining similar query quality, you should evaluate IVF.
If you want to use less memory and index faster than HNSW while maintaining similar query quality, you should evaluate IVF.

If memory is a concern, consider adding a PQ encoder to your HNSW or IVF index. Because PQ is a lossy encoding, query quality will drop.

You can reduce the memory footprint by a factor of 2, with a minimal loss in search quality, by using the [`fp_16` encoder]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#faiss-scalar-quantization). If your vector dimensions are within the [-128, 127] byte range, we recommend using the [byte quantizer]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#lucene-byte-vector) in order to reduce the memory footprint by a factor of 4. To learn more about vector quantization options, see [k-NN vector quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/).
You can reduce the memory footprint by a factor of 2, with a minimal loss in search quality, by using the [`fp_16` encoder]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/#faiss-16-bit-scalar-quantization). If your vector dimensions are within the [-128, 127] byte range, using the [byte quantizer]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector/#lucene-byte-vector) is recommended in order to reduce the memory footprint by a factor of 4. To learn more about vector quantization options, see [k-NN vector quantization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-vector-quantization/).

### Memory estimation

In a typical OpenSearch cluster, a certain portion of RAM is set aside for the JVM heap. The k-NN plugin allocates
native library indexes to a portion of the remaining RAM. This portion's size is determined by
the `circuit_breaker_limit` cluster setting. By default, the limit is set at 50%.
In a typical OpenSearch cluster, a certain portion of RAM is set aside for the JVM heap. The k-NN plugin allocates native library indexes to a portion of the remaining RAM. This portion's size is determined by the `circuit_breaker_limit` cluster setting. By default, the limit is set at 50%.
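As a hedged sketch (assuming the full cluster setting key is `knn.memory.circuit_breaker.limit`, which corresponds to the limit described above), the limit could be adjusted as follows:

```json
PUT /_cluster/settings
{
  "persistent": {
    "knn.memory.circuit_breaker.limit": "60%"
  }
}
```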

Having a replica doubles the total number of vectors.
{: .note }
64 changes: 56 additions & 8 deletions _search-plugins/knn/knn-vector-quantization.md
@@ -12,13 +12,17 @@

By default, the k-NN plugin supports the indexing and querying of vectors of type `float`, where each dimension of the vector occupies 4 bytes of memory. For use cases that require ingestion on a large scale, keeping `float` vectors can be expensive because OpenSearch needs to construct, load, save, and search graphs (for native `nmslib` and `faiss` engines). To reduce the memory footprint, you can use vector quantization.

In OpenSearch, many varieties of quantization are supported. In general, the level of quantization will provide a trade-off between the accuracy of the nearest neighbor search and the size of the memory footprint the vector search system will consume. The supported types include byte vectors, 16-bit scalar quantization, and product quantization (PQ).

## Lucene byte vector

Starting with k-NN plugin version 2.9, you can use `byte` vectors with the `lucene` engine in order to reduce the amount of required memory. This requires quantizing the vectors outside of OpenSearch before ingesting them into an OpenSearch index. For more information, see [Lucene byte vector]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/knn-vector#lucene-byte-vector).
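As a minimal mapping sketch (index name, field name, and dimension are hypothetical, and the `data_type` parameter is assumed), a `byte` vector field on the `lucene` engine might be defined as follows:

```json
PUT /test-byte-index
{
  "settings": {
    "index": {
      "knn": true
    }
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "knn_vector",
        "dimension": 3,
        "data_type": "byte",
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "space_type": "l2"
        }
      }
    }
  }
}
```

Each ingested dimension must then be an integer in the [-128, 127] range.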

## Faiss scalar quantization
## Faiss 16-bit scalar quantization

Starting with version 2.13, the k-NN plugin supports performing scalar quantization for the Faiss engine within OpenSearch. Within the Faiss engine, a scalar quantizer (SQfp16) performs the conversion between 32-bit and 16-bit vectors. At ingestion time, when you upload 32-bit floating-point vectors to OpenSearch, SQfp16 quantizes them into 16-bit floating-point vectors and stores the quantized vectors in a k-NN index. At search time, SQfp16 decodes the vector values back into 32-bit floating-point values for distance computation. The SQfp16 quantization can decrease the memory footprint by a factor of 2. Additionally, it leads to a minimal loss in recall when differences between vector values are large compared to the error introduced by eliminating their two least significant bits. When used with [SIMD optimization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index#simd-optimization-for-the-faiss-engine), SQfp16 quantization can also significantly reduce search latencies and improve indexing throughput.
Starting with version 2.13, the k-NN plugin supports performing scalar quantization for the Faiss engine within OpenSearch. Within the Faiss engine, a scalar quantizer (SQfp16) performs the conversion between 32-bit and 16-bit vectors. At ingestion time, when you upload 32-bit floating-point vectors to OpenSearch, SQfp16 quantizes them into 16-bit floating-point vectors and stores the quantized vectors in a k-NN index.

At search time, SQfp16 decodes the vector values back into 32-bit floating-point values for distance computation. The SQfp16 quantization can decrease the memory footprint by a factor of 2. Additionally, it leads to a minimal loss in recall when differences between vector values are large compared to the error introduced by eliminating their two least significant bits. When used with [SIMD optimization]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index#simd-optimization-for-the-faiss-engine), SQfp16 quantization can also significantly reduce search latencies and improve indexing throughput.

SIMD optimization is not supported on Windows. Using Faiss scalar quantization on Windows can lead to a significant drop in performance, including decreased indexing throughput and increased search latencies.
{: .warning}
@@ -62,12 +66,14 @@

Optionally, you can specify the parameters in `method.parameters.encoder`. For more information about `encoder` object parameters, see [SQ parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#sq-parameters).

The `fp16` encoder converts 32-bit vectors into their 16-bit counterparts. For this encoder type, the vector values must be in the [-65504.0, 65504.0] range. To define how to handle out-of-range values, the preceding request specifies the `clip` parameter. By default, this parameter is `false`, and any vectors containing out-of-range values are rejected. When `clip` is set to `true` (as in the preceding request), out-of-range vector values are rounded up or down so that they are in the supported range. For example, if the original 32-bit vector is `[65510.82, -65504.1]`, the vector will be indexed as a 16-bit vector `[65504.0, -65504.0]`.
The `fp16` encoder converts 32-bit vectors into their 16-bit counterparts. For this encoder type, the vector values must be in the [-65504.0, 65504.0] range. To define how to handle out-of-range values, the preceding request specifies the `clip` parameter. By default, this parameter is `false`, and any vectors containing out-of-range values are rejected.

When `clip` is set to `true` (as in the preceding request), out-of-range vector values are rounded up or down so that they are in the supported range. For example, if the original 32-bit vector is `[65510.82, -65504.1]`, the vector will be indexed as a 16-bit vector `[65504.0, -65504.0]`.
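The request referenced above is collapsed in this diff view. A minimal sketch of the relevant `encoder` object with clipping enabled (values follow the SQ parameters described on this page) might look like the following:

```json
"encoder": {
  "name": "sq",
  "parameters": {
    "type": "fp16",
    "clip": true
  }
}
```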

We recommend setting `clip` to `true` only if very few elements lie outside of the supported range. Rounding the values may cause a drop in recall.
{: .note}

The following example method definition specifies the Faiss SQfp16 encoder, which rejects any indexing request that contains out-of-range vector values (because the `clip` parameter is `false` by default):
The following example method definition specifies the Faiss SQfp16 encoder, which rejects any indexing request that contains out-of-range vector values (because the `clip` parameter is `false` by default).
Review comment (Collaborator): Shouldn't this end in a colon because it introduces an example? (Resolved)

```json
PUT /test-index
@@ -105,7 +111,7 @@
```
{% include copy-curl.html %}

During ingestion, make sure each dimension of the vector is in the supported range ([-65504.0, 65504.0]):
During ingestion, make sure each dimension of the vector is in the supported range ([-65504.0, 65504.0]).
Review comment (Collaborator): Shouldn't this end in a colon because it introduces an example? (Resolved)

```json
PUT test-index/_doc/1
@@ -115,7 +121,7 @@
```
{% include copy-curl.html %}

During querying, there is no range limitation for the query vector:
During querying, the query vector has no range limitation.
Review comment (Collaborator): Shouldn't this end in a colon because it introduces an example? (Resolved)

```json
GET test-index/_search
@@ -133,13 +139,13 @@
```
{% include copy-curl.html %}

## Memory estimation
### Memory estimation

In the best-case scenario, 16-bit vectors produced by the Faiss SQfp16 quantizer require 50% of the memory that 32-bit vectors require.

#### HNSW memory estimation

The memory required for HNSW is estimated to be `1.1 * (2 * dimension + 8 * M)` bytes/vector.
The memory required for Hierarchical Navigable Small Worlds (HNSW) is estimated to be `1.1 * (2 * dimension + 8 * M)` bytes/vector.

As an example, assume that you have 1 million vectors with a dimension of 256 and M of 16. The memory requirement can be estimated as follows:

@@ -157,3 +163,45 @@
```bash
1.1 * (((2 * 256) * 1,000,000) + (4 * 128 * 256)) ~= 0.525 GB
```

## Faiss product quantization

PQ is a technique used to represent a vector in a configurable amount of bits. In general, it can be used to achieve a higher level of compression compared to byte and scalar quantization. PQ works by breaking up vectors into _m_ subvectors and encoding each subvector with _code_size_ bits. Thus, the total amount of memory for the vector ends up being `m*code_size` bits, plus overhead. For details about the parameters, see [PQ parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#pq-parameters). PQ is only supported for the _Faiss_ engine and can be used with either the _HNSW_ or the _IVF_ ANN (Approximate Nearest Neighbor) algorithms.
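For example, with a dimension of 256, `m` = 32, and `code_size` = 8, each vector shrinks from 1,024 bytes as 32-bit floats to 32 bytes once encoded (excluding overhead):

```bash
256 * 4 bytes = 1,024 bytes per 32-bit float vector
(32 * 8) / 8  = 32 bytes per PQ-encoded vector
```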

### Using Faiss product quantization

To minimize the loss in accuracy, PQ requires a _training_ step that builds a model based on the distribution of the data that will be searched over.

The product quantizer is trained by running k-means clustering on a set of training vectors for each subvector space and extracting the centroids to be used for encoding. The training vectors can be either a subset of the vectors to be ingested or vectors that have the same distribution and dimension as the vectors to be ingested.

In OpenSearch, the training vectors need to be present in an index. In general, the amount of training data will depend on which ANN algorithm will be used and how much data will go into the index. For IVF-based indices, a good number of training vectors to use is `max(1000*nlist, 2^code_size * 1000)`. For HNSW-based indexes, a good number is `2^code_size*1000` training vectors. See [Faiss's documentation](https://github.com/facebookresearch/faiss/wiki/FAQ#how-many-training-points-do-i-need-for-k-means) for more details about the methodology behind calculating these figures.
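For example, applying this guidance with `nlist` = 512 and `code_size` = 8:

```bash
IVF:  max(1000 * 512, 2^8 * 1000) = max(512,000, 256,000) = 512,000 training vectors
HNSW: 2^8 * 1000 = 256,000 training vectors
```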

Check failure on line 176 in _search-plugins/knn/knn-vector-quantization.md (GitHub Actions / style-job): [vale] reported by reviewdog: [OpenSearch.SubstitutionsError] Use 'indexes' instead of 'indices'. (Resolved)

For PQ, the two parameters that need to be selected are _m_ and _code_size_. _m_ determines how many sub-vectors the vectors should be split to encode separately. Consequently, the _dimension_ needs to be divisible by _m_. _code_size_ determines how many bits each sub-vector will be encoded with. In general, a good place to start is setting `code_size = 8` and then tuning _m_ to get the desired trade-off between memory footprint and recall.
Review comment (Collaborator): I'm not following the second sentence here. Do we mean something like "m determines the number of subvectors into which vectors should be split for separate encoding"? In the fourth sentence, is "with" the correct preposition, or should it be "into"?

Reply (Contributor): Yes, your rewrite is correct. I revised the following sentence to read: _code_size_ determines the number of bits used to encode each subvector. (Resolved)


For an example of setting up an index with PQ, see the [Building a k-NN index from a model]({{site.url}}{{site.baseurl}}/search-plugins/knn/approximate-knn/#building-a-k-nn-index-from-a-model) tutorial.
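As a hedged sketch of the training step (index, field, and model names are hypothetical; the request shape follows the Train API linked above), an IVF-with-PQ model might be trained as follows:

```json
POST /_plugins/_knn/models/my-pq-model/_train
{
  "training_index": "train-index",
  "training_field": "train-field",
  "dimension": 256,
  "description": "IVF model with a PQ encoder",
  "method": {
    "name": "ivf",
    "engine": "faiss",
    "space_type": "l2",
    "parameters": {
      "nlist": 512,
      "encoder": {
        "name": "pq",
        "parameters": {
          "code_size": 8,
          "m": 32
        }
      }
    }
  }
}
```

After training completes, a k-NN index can reference `my-pq-model` through the `model_id` mapping parameter, as shown in the linked tutorial.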

### Memory Estimation

Check failure on line 182 in _search-plugins/knn/knn-vector-quantization.md (GitHub Actions / style-job): [vale] reported by reviewdog: [OpenSearch.HeadingCapitalization] 'Memory Estimation' is a heading and should be in sentence case. (Resolved)

While PQ is meant to represent individual vectors with `m*code_size` bits, in reality the indexes take up more space. This is mainly due to the overhead of storing certain code tables and auxiliary data structures.

Some of the memory formulas depend on the number of segments present. Typically, this is not known beforehand but a good default value is 300.
{: .note}

#### HNSW memory estimation

The memory required for HNSW with PQ is estimated to be `1.1*(((pq_code_size / 8) * pq_m + 24 + 8 * hnsw_m) * num_vectors + num_segments * (2^pq_code_size * 4 * d))` bytes.

As an example, assume that you have 1 million vectors with a dimension of 256, `hnsw_m` of 16, `pq_m` of 32, `pq_code_size` of 8, and 100 segments. The memory requirement can be estimated as follows:

```bash
1.1*((8 / 8 * 32 + 24 + 8 * 16) * 1000000 + 100 * (2^8 * 4 * 256)) ~= 0.215 GB
```

#### IVF memory estimation

The memory required for IVF with PQ is estimated to be `1.1*(((pq_code_size / 8) * pq_m + 24) * num_vectors + num_segments * (2^code_size * 4 * d + 4 * ivf_nlist * d))` bytes.

For example, assume that you have 1 million vectors with a dimension of 256, `ivf_nlist` of 512, `pq_m` of 32, `pq_code_size` of 8, and 100 segments. The memory requirement can be estimated as follows:

```bash
1.1*((8 / 8 * 32 + 24) * 1000000 + 100 * (2^8 * 4 * 256 + 4 * 512 * 256)) ~= 0.138 GB
```