
GPU memory increasing, then eventually OOM #43

Closed
DangChuong-DC opened this issue Feb 28, 2022 · 4 comments

Comments

@DangChuong-DC
Contributor

DangChuong-DC commented Feb 28, 2022

I tried to run Rotated Faster R-CNN on the DOTA-1.5 dataset.
I found that GPU memory keeps increasing throughout training and eventually hits OOM.
The image below is from my run with 1 GPU.

[Screenshot from 2022-02-28 22-32-12: GPU memory usage climbing steadily over the course of training]

@yangxue0827
Collaborator

Please run python mmrotate/utils/collect_env.py to collect the necessary environment information and paste it here.

@yangxue0827
Collaborator

Also, please provide the config file saved under your work_dirs path.

@williamcorsel

I experienced the same issue with Oriented R-CNN. I fixed it by setting the gpu_assign_thr flag, following the suggestion here; a sketch of the change is below.
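
For reference, the change roughly amounts to adding gpu_assign_thr to the assigner entries in train_cfg. The snippet below is only a sketch: the IoU thresholds and other keys are the typical Rotated Faster R-CNN defaults, not values taken from the reporter's actual config, so adapt it to your own config.

```python
# Sketch only: excerpt of the assigner settings in train_cfg for a
# Rotated Faster R-CNN style config. Thresholds shown are typical defaults.
train_cfg = dict(
    rpn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.7,
            neg_iou_thr=0.3,
            min_pos_iou=0.3,
            match_low_quality=True,
            ignore_iof_thr=-1,
            # When an image contains more than 100 ground-truth boxes,
            # IoU assignment is computed on CPU instead of GPU, which
            # keeps GPU memory bounded on GT-dense DOTA images.
            gpu_assign_thr=100)),
    rcnn=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0.5,
            match_low_quality=False,
            ignore_iof_thr=-1,
            gpu_assign_thr=100)))
```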

@DangChuong-DC
Contributor Author

Thank you @yangxue0827 for the prompt response and your dedicated work.
@williamcorsel I confirm that setting gpu_assign_thr=100 resolved my issue.

Great work, thank you very much.
