[RFC] Reduce Disk Usage By Reusing NativeEngine Files #2266
Comments
This is an interesting gain. When you say a 50% gain in disk space, I am wondering whether it only happens when source is not enabled for vectors. Cutting down the flat vectors and just reading them via the Faiss index has been discussed a couple of times. My only concern is: will reading flat vectors from the Faiss file be as efficient as reading them from the .vec file? Also, did we explore the option where we don't store/serialize flat vectors in Faiss and use the .vec file instead? It could also help this feature: #1693
This would be good savings! Like @navneet1v, I'm wondering whether it'll be easier to leverage .vec in Faiss as opposed to simulating .vec with Faiss. Also, for this plan, how will quantized vectors be handled, where we don't store the full-precision vectors in Faiss files?
After #1693, we discussed that the goal is to merge vectors into one storage; we took the following two options into consideration, as discussed with @jmazanec15 and @navneet1v.
With all of these options, ANN search in the native engine would have the same latency, because the vectors are all in memory; the only impacts are on query latency, and that is why I chose option 2.
@jmazanec15 As a first step, I skipped using the Faiss file as docValues when the index is quantized, because we cannot get full-precision vectors for exact search. But I think we can still use it for merge, saving the Faiss computation in sa_encode and sa_decode.
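To illustrate why a quantized index cannot serve exact search: scalar quantization is lossy, so decoding only returns an approximation of the original vector. The sketch below is a toy int8 scalar quantizer written for this thread; it mimics the idea behind Faiss's sa_encode/sa_decode but is not Faiss's actual codec.

```python
def sq8_encode(vec, vmin, vmax):
    """Toy 8-bit scalar quantizer: maps each float in [vmin, vmax] to 0..255.
    Analogue of sa_encode; NOT the real Faiss codec."""
    scale = (vmax - vmin) / 255.0
    return bytes(min(255, max(0, round((x - vmin) / scale))) for x in vec)

def sq8_decode(codes, vmin, vmax):
    """Inverse mapping (sa_decode analogue): recovers only an approximation."""
    scale = (vmax - vmin) / 255.0
    return [vmin + c * scale for c in codes]

vec = [0.1, 0.5, 0.9]
decoded = sq8_decode(sq8_encode(vec, 0.0, 1.0), 0.0, 1.0)
# decoded is within one quantization step of vec, but not exactly equal,
# which is why exact search needs the full-precision .vec values.
```

The round-trip error is bounded by one quantization step (about 1/255 of the value range here), which is fine for merge but not for exact re-scoring.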
@navneet1v Good question, I will run some benchmarks for the different types.
I ran a mini benchmark for file size and for iterating all docs; the results are shown below:
@navneet1v @jmazanec15 Because in Lucene99HnswVectorsFormat a dense vector field is stored as flat vectors, the file size equals Faiss flat for all options. I also traced the iterator latency: I think the Faiss file layout is simple enough that we can iterate faster than Lucene, and also because I keep the IDMap in memory.
Description
Earlier, in issue #1572 and PR #1571, we found that we can reuse the docValues field (e.g. KNNVectorFieldData) and synthesize the _source field from it, which saves about 1/3 of disk usage. @jmazanec15 also mentioned a great method to implement stored fields, as #1571 (comment) says. In this RFC, I propose a new method to reduce disk usage: we can read the native engine's files to create DocValues, saving the disk space otherwise spent by flatFieldVectorsWriter or BinaryDocValues.
I read the Faiss code (faiss/impl/index_write.cpp); it shows that the Faiss HNSW32,Flat file structure looks like the following. I implemented a FaissEngineFlatVectorValues which reads _0_2011_target_field.faissc files directly and wraps them in a DocIdSetIterator, instead of using FlatVectorsReader. The POC code shows that we can cut almost 50% of disk usage by skipping the flat vector write; and since the flat vectors are no longer written, write performance also improves a little. Next steps:
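To make the proposed wrapper concrete, here is a toy Python analogue of a FaissEngineFlatVectorValues-style class: a cursor over flat vectors that follows the contract of Lucene's DocIdSetIterator (advance with nextDoc, report the current docID, and signal exhaustion with the NO_MORE_DOCS sentinel). The class name and in-memory backing are illustrative only; the real implementation would read the Faiss file.

```python
NO_MORE_DOCS = 2**31 - 1  # Lucene's DocIdSetIterator exhaustion sentinel

class FlatVectorValues:
    """Toy analogue of the proposed FaissEngineFlatVectorValues:
    iterates dense docIDs 0..n-1 over a flat list of vectors."""

    def __init__(self, vectors):
        self._vectors = vectors
        self._doc = -1  # Lucene iterators start positioned before the first doc

    def doc_id(self):
        return self._doc

    def next_doc(self):
        self._doc += 1
        if self._doc >= len(self._vectors):
            self._doc = NO_MORE_DOCS
        return self._doc

    def vector_value(self):
        # Valid only while positioned on a live doc.
        return self._vectors[self._doc]
```

A consumer loops `while it.next_doc() != NO_MORE_DOCS: use(it.vector_value())`, exactly how Lucene consumes DocIdSetIterator-backed values, so substituting a Faiss-file-backed source is transparent to callers.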