I am trying to run the model on a custom dataset, which is significantly larger than the datasets used in this repo.
On line 520 of trainer.py, k-means is computed over the entire dataset at once. With a dataset this large, that single pass exhausts GPU memory and the run fails with a CUDA out-of-memory error.
Any leads on how this can be improved?
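As one possible direction (a minimal sketch, not the repo's actual code): the cluster-assignment step can be done in chunks, so that only a slice of the dataset sits on the GPU at a time while the centroids stay resident. The tensor names `features`, `centroids`, and the helper `chunked_kmeans_assign` below are hypothetical and would need to be adapted to whatever trainer.py actually passes around near line 520.

```python
import torch

def chunked_kmeans_assign(features, centroids, chunk_size=4096):
    """Assign each point to its nearest centroid, processing the data in chunks.

    features:  (N, D) tensor; can live on the CPU
    centroids: (K, D) tensor on the GPU
    Returns a (N,) LongTensor of cluster indices.
    """
    device = centroids.device
    labels = torch.empty(features.size(0), dtype=torch.long)
    for start in range(0, features.size(0), chunk_size):
        # Move only one slice of the data to the GPU at a time.
        chunk = features[start:start + chunk_size].to(device)
        # Pairwise distances between this chunk and all centroids: (chunk, K).
        dists = torch.cdist(chunk, centroids)
        labels[start:start + chunk_size] = dists.argmin(dim=1).cpu()
    return labels
```

An alternative, if staying on the CPU is acceptable, would be scikit-learn's `MiniBatchKMeans`, which avoids holding the whole dataset in GPU memory entirely.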