
memory batching #166

Open
Shayan-P opened this issue Aug 16, 2023 · 1 comment
Labels
GPU related to new gpu support addition

Comments

@Shayan-P (Collaborator)

Right now, we are loading the whole Stabilizer into GPU memory. Depending on the user's GPU, it might not always fit: a 2000-gate Pauli frame simulation with 2^19 trajectories takes about 2 GB of memory.

So we might want to split the memory into multiple batches and load them as needed.

Or

we could pin the host memory (I think this would let the GPU access it directly, without an explicit copy to device memory).
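To make the first option concrete, here is a minimal sketch of the batching arithmetic. Everything here is hypothetical (the helper name, the memory budget); the only figure taken from above is that 2^19 trajectories occupy about 2 GB, which works out to roughly 4 KB per trajectory:

```python
def trajectory_batches(n_trajectories, bytes_per_trajectory, gpu_budget_bytes):
    """Yield (start, stop) index ranges so each batch fits the GPU budget.

    Hypothetical helper: the batch size is the largest count such that
    batch_size * bytes_per_trajectory <= gpu_budget_bytes.
    """
    batch_size = max(1, gpu_budget_bytes // bytes_per_trajectory)
    for start in range(0, n_trajectories, batch_size):
        yield start, min(start + batch_size, n_trajectories)

# Example: 2^19 trajectories at ~4 KB each (~2 GB total)
# on a hypothetical 512 MB budget -> 4 batches of 131072 trajectories.
batches = list(trajectory_batches(2**19, 4096, 512 * 2**20))
```

Each batch would be uploaded, simulated, and downloaded before the next one is loaded; the actual frame layout in the simulator may differ.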

@Shayan-P Shayan-P added the GPU related to new gpu support addition label Aug 16, 2023
@Krastanov (Member)

I suspect the wins would not be very big compared to just running small batches like this on the GPU directly. It would be interesting to have, but there is no need to put it high on the priority list.

Development

No branches or pull requests

2 participants