RuntimeError: CUDA Error: out of memory #27
Comments
Try a smaller batch size.
My batch size is 64, the input size is 256, and the output size is 242. By how much should I reduce it?
Try batch size 8, 16, or 32 and see if it works.
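The advice above can be sketched as a small loop that tries progressively smaller batch sizes until a forward pass fits in GPU memory. This is only an illustration, not code from this repo: the dataset, model, and `find_fitting_batch_size` helper are hypothetical stand-ins.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset standing in for the real chest X-ray images.
dataset = TensorDataset(torch.randn(64, 3, 224, 224), torch.zeros(64))
model = torch.nn.Conv2d(3, 8, 3)  # stand-in for the real model

def find_fitting_batch_size(candidates=(64, 32, 16, 8)):
    """Return the first batch size whose forward pass does not run out of memory."""
    for bs in candidates:
        loader = DataLoader(dataset, batch_size=bs)
        try:
            images, _ = next(iter(loader))
            with torch.no_grad():  # evaluation only, no activation storage
                model(images)
            return bs  # this batch size fits
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # a different error, re-raise it
            torch.cuda.empty_cache()  # release cached blocks before retrying
    return None
```

On a GPU that cannot hold batch 64, the loop would fall through to 32, 16, and so on; on a machine with enough memory it simply returns the first candidate.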
It is still showing me this error: |
@omrfrkmfy Were you ever able to figure out a solution to the problem? I'm dealing with the same issue |
The issue is that your graphics card's memory is too small. You need one with more memory.
With 4 worker cores on an NVIDIA P100, I had to use a batch size of 12. But the AUROC is 49%, maybe due to the small batch size.
Maybe you can try this idea:
I encountered the same issue and solved it by forcing no gradient computation when using model.eval().
I am dealing with the same issue, and when I try multiple times I get different results. Did you solve it or find out why?
# Problem
According to this [issue](arnoweng#27), I also forked this repo and tried running it in my Colab project. The same problem arose: `RuntimeError: CUDA Error: out of memory`.
# Solution
As far as I know, the problem happened because some sections did not require `grad` but still computed it anyway. Thus, [`with torch.no_grad()`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) should be used.
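The point above is easy to trip over: `model.eval()` only switches layers like dropout and batch norm into inference mode; it does not stop autograd from storing activations for backprop. A minimal sketch of the distinction, using stand-in layer sizes borrowed from this thread (256 in, 242 out) rather than the actual model:

```python
import torch

model = torch.nn.Linear(256, 242)  # hypothetical stand-in for the real network
model.eval()  # inference mode for layers, but gradient tracking is still ON

inputs = torch.randn(8, 256)

tracked = model(inputs)            # activations kept for a possible backward pass
with torch.no_grad():
    untracked = model(inputs)      # no graph built, so memory use stays flat

print(tracked.requires_grad)       # True: eval() alone did not disable autograd
print(untracked.requires_grad)     # False: no_grad() did
```

Wrapping the whole evaluation loop in `with torch.no_grad():` is what frees the memory that otherwise accumulates and triggers the CUDA out-of-memory error.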
Hello, I know this is very late, and it seems the owner has not maintained the code for years. But if you end up with this problem and somehow run into this issue, try my solution: https://github.com/arnoweng/CheXNet/pull/39. I just started learning.
Please help me resolve this issue