[FEA] Add bounded_queue data structure #138
In work on rapidsai/cuml#3410, I introduced a bounded queue for performance reasons. This data structure is general enough, and its use case common enough, that it should probably be made available in RAFT.

The basic idea: when you know there is some reasonable upper bound on the size of your queue, and you can estimate that bound before you begin filling it (or refine the estimate periodically as you fill), it is better to store the data in a plain vector and track "front" and "back" indices into it than to use a stdlib queue. The C++ standard places requirements on when std::queue's underlying container (std::deque by default) deallocates and reallocates memory, and those requirements can force a runtime-suboptimal allocation strategy even when you know the underlying vector will never run out of room.
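For illustration, here is a minimal host-side sketch of the idea. The class name, member names, and growth policy are all hypothetical, not the actual RAFT or cuml API: storage is preallocated once for the expected maximum size, and push/pop only advance indices, so the hot path never touches the allocator.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch, not the actual RAFT/cuml implementation.
// Storage is preallocated up front; push/pop only bump indices,
// so the hot path never allocates.
template <typename T>
class bounded_queue {
 public:
  explicit bounded_queue(std::size_t max_size) : data_(max_size) {}

  void push(T const& val) {
    assert(tail_ < data_.size());  // caller guarantees the size bound holds
    data_[tail_++] = val;
  }

  T pop() {
    assert(head_ < tail_);  // caller guarantees the queue is non-empty
    return data_[head_++];
  }

  // If the size estimate is revised upward mid-fill, grow once here
  // instead of paying repeated reallocations inside push().
  void reserve(std::size_t new_max_size) {
    if (new_max_size > data_.size()) { data_.resize(new_max_size); }
  }

  bool empty() const noexcept { return head_ == tail_; }
  std::size_t size() const noexcept { return tail_ - head_; }

 private:
  std::vector<T> data_;
  std::size_t head_{0};  // index of the current front element
  std::size_t tail_{0};  // one past the current back element
};
```

Note that in this sketch the indices only advance and never wrap around, so the total number of pushes (not the instantaneous size) must stay within the bound; that trade-off suits fill-and-drain workloads of the kind the cuml PR targets, and is what lets push/pop reduce to a single indexed write or read.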
Comments

Still worth doing, still on my list; just haven't gotten around to it yet.