Multiple GPUs training #327
Comments
Even with a single GPU, it allocates the selected one but doesn't fully utilize it.
You need to increase your `batchSize`. Try …
But which size do you recommend? I have read some issues about batch size, and most people said that … So does this mean I should choose my batch size according to the number of GPUs? E.g., if I am using 2 GPUs, then … I also read something about … What's your opinion in general, @junyanz?
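The batch-size advice above can be sketched numerically. The repo wraps the model in `torch.nn.DataParallel`, which splits each batch into chunks, one per listed GPU; the helper below is hypothetical (written only for illustration, not the repo's code) but mimics that chunking to show why `batchSize` should be at least, and ideally a multiple of, the number of GPUs:

```python
def chunks_per_gpu(batch_size, gpu_ids):
    """Illustrative helper: how many samples each GPU receives when a batch
    is split torch.chunk-style (leading chunks get ceil(B/n) samples)."""
    n = len(gpu_ids)
    chunk = -(-batch_size // n)  # ceil division
    sizes = []
    remaining = batch_size
    for _ in range(n):
        take = min(chunk, remaining)
        sizes.append(take)
        remaining -= take
    return sizes

# CycleGAN's default batchSize is 1, so with --gpu_ids 6,7 the
# second GPU receives nothing:
print(chunks_per_gpu(1, [6, 7]))  # [1, 0] -> GPU 7 is idle
print(chunks_per_gpu(4, [6, 7]))  # [2, 2] -> both GPUs get work
```

So a `batchSize` that is a multiple of the GPU count keeps the per-GPU load balanced; a `batchSize` smaller than the GPU count leaves some GPUs idle entirely.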
Thanks for this question.
Yeah, we added it to the Q & A. Will add it to the training/testing tips soon.
Hello,
I am running on a server which has 8 GPUs.
I want to train CycleGAN on at least 2 GPUs, so I passed this flag:
`--gpu_ids 6,7`
It only trained on the 6th GPU and didn't allocate the other one.
Any help?
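For context, `--gpu_ids` takes a comma-separated list of device indices. A minimal sketch of the parsing, assuming the repo's convention that `-1` means CPU (the helper `parse_gpu_ids` is hypothetical, not the actual options code):

```python
def parse_gpu_ids(flag_value):
    """Turn a '--gpu_ids 6,7' string into a list of device indices,
    dropping negative entries (assumed convention: -1 means CPU)."""
    ids = []
    for tok in flag_value.split(','):
        idx = int(tok)
        if idx >= 0:
            ids.append(idx)
    return ids

print(parse_gpu_ids('6,7'))  # [6, 7]
print(parse_gpu_ids('-1'))   # [] -> run on CPU
```

With both ids parsed, the model is wrapped across the listed devices, but work only reaches the second GPU when each batch is large enough to be split; with the default `batchSize` of 1, GPU 7 has nothing to do, which matches the behavior described above.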