Tiny fix of mmedit/apis/test.py
#261
Conversation
Nice. Is this a common problem for all repos?
Could you please test how this change affects the inference time?
Codecov Report
@@            Coverage Diff             @@
##           master     #261      +/-   ##
==========================================
- Coverage   81.34%   79.93%    -1.41%
==========================================
  Files         158      159        +1
  Lines        7773     7941      +168
  Branches     1152     1177       +25
==========================================
+ Hits         6323     6348       +25
- Misses       1306     1449      +143
  Partials      144      144
|
mmedit/apis/test.py
Outdated
@@ -82,6 +83,8 @@ def multi_gpu_test(model,
        save_path (str): The path to save image. Default: None.
        iteration (int): Iteration number. It is used for the save image name.
            Default: None.
+       limited_gpu (bool): Limited CUDA memory or not. Default: False.
limited_gpu -> empty_cache
Empty GPU cache in each iteration or not
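The suggestion amounts to optionally clearing the CUDA cache once per test iteration. A minimal, framework-free sketch of that control flow (the function name and the injectable cache_clear hook are hypothetical; in mmediting the hook would be torch.cuda.empty_cache):

```python
def run_test_loop(model, data_loader, empty_cache=False, cache_clear=None):
    """Sketch of a test loop with optional per-iteration cache clearing.

    `cache_clear` stands in for torch.cuda.empty_cache so the sketch
    runs without a GPU. With empty_cache=True, cached memory is
    released after every batch, lowering peak GPU memory usage on
    large test sets at the cost of some allocator overhead.
    """
    results = []
    for data in data_loader:
        results.append(model(data))
        if empty_cache and cache_clear is not None:
            cache_clear()  # e.g. torch.cuda.empty_cache()
    return results
```

The trade-off is the one the reviewer asked about above: clearing the cache every iteration forces reallocation on the next batch, so it can slow inference slightly in exchange for a smaller memory footprint.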
mmedit/apis/test.py
Outdated
@@ -83,8 +83,7 @@ def multi_gpu_test(model,
        save_path (str): The path to save image. Default: None.
        iteration (int): Iteration number. It is used for the save image name.
            Default: None.
-       limited_gpu (bool): Limited CUDA memory or not. Default: False.
-           If limited_gpu and not gpu_collect, empty cache in every batch.
+       empty_cache (bool): empty cache in every batch. Default: False.
batch -> iteration.
tools/test.py
Outdated
@@ -88,7 +88,7 @@ def main():
    model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)

    args.save_image = args.save_path is not None
-   limited_gpu = cfg.limited_gpu is not None and cfg.limited_gpu
+   empty_cache = cfg.empty_cache is not None and cfg.empty_cache
cfg.get('empty_cache', False)
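The suggested cfg.get('empty_cache', False) is safer than cfg.empty_cache is not None and cfg.empty_cache, because attribute access on an mmcv-style Config raises when the key is absent, which would break every existing config that does not define the new option. A small sketch with a dict-backed stand-in for Config (the stub class is illustrative, not mmcv's actual implementation):

```python
class Config(dict):
    """Tiny stand-in for an mmcv-style config object: keys are exposed
    as attributes, and a missing key raises AttributeError."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)


cfg = Config(model='some_model')  # old config without an 'empty_cache' key

# `cfg.empty_cache` would raise AttributeError here, so
# `cfg.empty_cache is not None and cfg.empty_cache` crashes on old configs.
# The reviewer's suggestion falls back to a default instead:
empty_cache = cfg.get('empty_cache', False)
```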
I suggest adding an item to the FAQ (https://github.com/open-mmlab/mmediting/blob/master/docs/faq.md) to demonstrate how to use this option.
* tiny fix
* Tiny Fix, add limited_gpu.
* Tiny Fix
* Tiny Fix

Co-authored-by: liyinshuo <[email protected]>
In the current test program (mmedit/apis/test.py), the GPU memory occupied by processed data is not released in time, which causes CUDA out of memory errors when the test data is large. This PR fixes that issue.
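Following the FAQ suggestion above, usage would presumably amount to adding the new option to a test config and running tools/test.py as usual (a hypothetical config fragment; the key name follows the final diff in this PR):

```python
# in the test config file: release cached GPU memory every iteration
# to avoid CUDA OOM on large test sets (at a small speed cost)
empty_cache = True
```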