[FEA]: Have a less verbose pinned memory container #2485
There is ongoing work on providing a better replacement for Thrust's vectors that will support pinned memory; @miscco knows more about this. In the meantime, as a very simple fix, we could just add the following to Thrust:

```cpp
namespace thrust {
template <typename T>
using host_pinned_vector =
    host_vector<T, mr::stateless_resource_allocator<T, system::cuda::universal_host_pinned_memory_resource>>;
}
```
Some pieces were already present in Thrust, like …

Following existing naming conventions, the new facility will probably be called …
Is this a duplicate?
Area
Thrust
Is your feature request related to a problem? Please describe.
Currently, the least verbose way for users to have a container of pinned memory without having to write any custom code is this:
```cpp
thrust::host_vector<i_t, thrust::mr::stateless_resource_allocator<i_t, thrust::system::cuda::universal_host_pinned_memory_resource>>
```
See slack discussion here: https://nvidia.slack.com/archives/CCP05T27R/p1723208979166279
Describe the solution you'd like
I think there should be something less verbose, like:

```cpp
thrust::host_vector<i_t, thrust::host_pinned>
```

or

```cpp
thrust::pinned_host_vector<i_t>
```
Describe alternatives you've considered
A hand-written allocator, but users shouldn't have to do that.
libcudf's pinned host vector (rapidsai/cudf#16206, rapidsai/cudf#15895), which should become generally available in CCCL.
Additional context
No response