
[MMSIG-80] Update and refine get_flops.py #2237

Merged: 8 commits merged into open-mmlab:dev-1.x on Apr 21, 2023

Conversation

xin-li-67 (Contributor)

Motivation

MMSIG-80: update and refine get_flops.py

Modification

tools/analysis_tools/get_flops.py

BC-breaking (Optional)

Use cases (Optional)

Checklist

Before PR:

  • I have read and followed the workflow indicated in CONTRIBUTING.md to create this PR.
  • Pre-commit or other linting tools indicated in CONTRIBUTING.md are used to fix potential lint issues.
  • Bug fixes are covered by unit tests; the case that causes the bug should be added to the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstrings and example tutorials.

After PR:

  • CLA has been signed and all committers have signed the CLA in this PR.

xin-li-67 changed the title from Update and refine get_flops.py to [WIP] Update and refine get_flops.py on Apr 16, 2023
codecov bot commented Apr 17, 2023

Codecov Report

Patch and project coverage show no change.

Comparison: base (896e9d5) is at 82.25% coverage; head (4e16835) is at 82.26%.

❗ Current head 4e16835 differs from the pull request's most recent head f21d3a2. Consider uploading reports for commit f21d3a2 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff            @@
##           dev-1.x    #2237    +/-   ##
=========================================
  Coverage    82.25%   82.26%            
=========================================
  Files          228      230     +2     
  Lines        13387    13552   +165     
  Branches      2268     2301    +33     
=========================================
+ Hits         11011    11148   +137     
- Misses        1862     1874    +12     
- Partials       514      530    +16     
Flag Coverage Δ
unittests 82.26% <ø> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

21 files have indirect coverage changes.


Comment on lines 41 to 44
'--batch-input',
'-c',
type=str,
choices=['none', 'batch'],
Collaborator

Perhaps the argument can be converted to a boolean type using the store_true action? Additionally, the option '-c' may no longer be intuitive, as the argument name has been altered.

Contributor Author

> Perhaps the argument can be converted to a boolean type using the store_true action? Additionally, the option '-c' may no longer be intuitive, as the argument name has been altered.

Agree! I will update this in the next commit.
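
For illustration, a minimal sketch of the store_true conversion suggested above (the flag name is kept from the snippet under review; the help text is an assumption and the final implementation in the PR may differ):

import argparse

parser = argparse.ArgumentParser(description='Get a FLOPs estimate for a model')
# store_true turns --batch-input into a boolean switch, so no value or choices are needed
parser.add_argument(
    '--batch-input',
    action='store_true',
    help='feed a batched input instead of a single sample when counting FLOPs')
args = parser.parse_args()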

xin-li-67 changed the title from [WIP] Update and refine get_flops.py to [MMSIG-80] Update and refine get_flops.py on Apr 20, 2023
Tau-J (Collaborator) commented Apr 21, 2023

  1. Having both --batch-input and --batch-size seems cumbersome. How about using batch_input = args.batch_size > 1?
  2. I found a bug in mmengine: when inputs=None, get_model_complexity_info() cannot run inference on cuda:0. I suggest setting CPU as the default device for the moment. I'll report this bug to mmengine, or maybe someone will raise a PR to fix it.
  3. The output is too long when printing both out_arch and out_table. I suggest only printing out_table and leaving out_arch as an optional argument.

xin-li-67 (Contributor Author)

> 1. Having both --batch-input and --batch-size seems cumbersome. How about using batch_input = args.batch_size > 1?
> 2. I found a bug in mmengine: when inputs=None, get_model_complexity_info() cannot run inference on cuda:0. I suggest setting CPU as the default device for the moment. I'll report this bug to mmengine, or maybe someone will raise a PR to fix it.
> 3. The output is too long when printing both out_arch and out_table. I suggest only printing out_table and leaving out_arch as an optional argument.

I made some changes in the latest commit:

  1. removed --batch-input and kept --batch-size;
  2. the default device is now CPU;
  3. out_table is now printed by default, and out_arch is not printed unless the user passes --show-arch-info (a sketch of the resulting flow follows this list). However, it seems that the get_model_complexity_info function in MMEngine still needs refinement; some models in MMPose are not well supported.
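
For reference, a rough sketch of the resulting flow, assuming MMEngine's get_model_complexity_info returns its usual result dict with 'out_table', 'out_arch', 'flops_str', and 'params_str' entries; the helper name and printing logic here are illustrative rather than the final get_flops.py code:

from mmengine.analysis import get_model_complexity_info

def report_complexity(model, input_shape, show_arch_info=False):
    # The model stays on CPU for now (see point 2 above).
    analysis = get_model_complexity_info(model, input_shape=input_shape)
    print(analysis['out_table'])          # complexity table, printed by default
    if show_arch_info:                    # mirrors the --show-arch-info flag
        print(analysis['out_arch'])       # per-layer breakdown only on request
    print(f"Flops: {analysis['flops_str']}")
    print(f"Params: {analysis['params_str']}")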

Tau-J (Collaborator) commented Apr 21, 2023

LGTM

Tau-J merged commit 470dce1 into open-mmlab:dev-1.x on Apr 21, 2023
xin-li-67 deleted the getflops_dev branch on April 21, 2023 at 12:24
Tau-J pushed a commit to Tau-J/mmpose that referenced this pull request on Apr 25, 2023
shuheilocale pushed a commit to shuheilocale/mmpose that referenced this pull request on May 6, 2023