
(Speed up TopDown Inference) modified inference_top_down_model, make model able to run on batches of bounding box #560

Merged · 5 commits · Apr 12, 2021

Conversation

namirinz (Contributor) commented Apr 7, 2021

I modified `_inference_single_pose_model` to preprocess (center, scale) all bounding boxes (say N bboxes) in a for-loop, stack them into `batch_data`, and then feed that `batch_data` to `pose_model` to get all N keypoint predictions at once.

I also modified `inference_top_down_pose_model` so that its output stays as close as possible to the old format.
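The batching idea can be sketched as follows. This is a minimal illustration of the preprocess-then-stack pattern, not the actual MMPose code: `stack_person_crops` is a hypothetical stand-in for MMPose's affine center/scale preprocessing, and the `(x, y, w, h, score)` box layout is assumed for the example.

```python
import numpy as np

def stack_person_crops(img, bboxes, size=(192, 256)):
    """Crop each detected person and stack the crops into one batch array.

    Sketch only: real top-down preprocessing applies an affine transform
    derived from each box's center/scale; here we just crop and zero-pad.
    """
    w, h = size
    batch = []
    for x, y, bw, bh, *_ in bboxes:            # N boxes -> N samples
        crop = img[int(y):int(y + bh), int(x):int(x + bw)]
        canvas = np.zeros((h, w, 3), dtype=img.dtype)
        ch, cw = min(h, crop.shape[0]), min(w, crop.shape[1])
        canvas[:ch, :cw] = crop[:ch, :cw]
        batch.append(canvas)
    # One (N, H, W, 3) batch lets the pose model run a single forward pass
    # instead of N separate ones.
    return np.stack(batch)
```

A single forward pass over the stacked batch is what makes this faster than looping the model over one bounding box at a time.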

CLAassistant commented Apr 7, 2021

CLA assistant check
All committers have signed the CLA.

@namirinz namirinz changed the title (Speed up TopDown Inference) modified inference_top_down_model, make run on batches of bounding box (Speed up TopDown Inference) modified inference_top_down_model, make model able to run on batches of bounding box Apr 7, 2021
codecov bot commented Apr 7, 2021

Codecov Report

Merging #560 (c8a9f43) into master (8203f11) will decrease coverage by 0.87%.
The diff coverage is 70.76%.

❗ Current head c8a9f43 differs from pull request most recent head d69da36. Consider uploading reports for the commit d69da36 to get more accurate results
Impacted file tree graph

@@            Coverage Diff             @@
##           master     #560      +/-   ##
==========================================
- Coverage   82.25%   81.37%   -0.88%     
==========================================
  Files         154      163       +9     
  Lines       10436    11271     +835     
  Branches     1655     1810     +155     
==========================================
+ Hits         8584     9172     +588     
- Misses       1463     1662     +199     
- Partials      389      437      +48     
| Flag | Coverage Δ |
|---|---|
| unittests | 81.29% <70.76%> (-0.87%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|---|---|
| ...datasets/datasets/animal/animal_macaque_dataset.py | 12.13% <12.13%> (ø) |
| mmpose/apis/inference.py | 49.57% <35.18%> (-5.14%) ⬇️ |
| mmpose/models/detectors/pose_lifter.py | 74.62% <74.62%> (ø) |
| mmpose/datasets/pipelines/pose3d_transform.py | 89.31% <77.77%> (-1.68%) ⬇️ |
| ...se/datasets/datasets/body3d/body3d_h36m_dataset.py | 83.47% <78.94%> (-0.60%) ⬇️ |
| .../models/keypoint_heads/temporal_regression_head.py | 88.77% <79.62%> (-2.30%) ⬇️ |
| mmpose/core/utils/regularizations.py | 81.81% <81.81%> (ø) |
| ...ose/datasets/datasets/animal/animal_fly_dataset.py | 91.20% <91.20%> (ø) |
| .../datasets/datasets/animal/animal_locust_dataset.py | 91.20% <91.20%> (ø) |
| ...e/datasets/datasets/animal/animal_zebra_dataset.py | 91.20% <91.20%> (ø) |

... and 24 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8203f11...d69da36. Read the comment docs.

@innerlee innerlee requested a review from jin-s13 April 7, 2021 17:19
innerlee (Contributor) commented Apr 7, 2021

Thanks!

The linting fails. Please install pre-commit to do auto-formatting. Under the repo's root dir:

```shell
pip install -U pre-commit
pre-commit install
pre-commit run --all-files
```

namirinz (Contributor, Author) commented Apr 8, 2021

Video comparison of single inference vs. batch inference:
mmpose batch inference comparison

jin-s13 (Collaborator) commented Apr 8, 2021

Please sign CLA (Contributor License Agreement). @namirinz

namirinz (Contributor, Author) commented Apr 8, 2021

> Please sign CLA (Contributor License Agreement). @namirinz

Done.

mmpose/apis/inference.py — 5 outdated review threads (resolved)
@innerlee innerlee merged commit cd74bf1 into open-mmlab:master Apr 12, 2021
HoBeom (Contributor) left a comment

#560
IndexError: tuple index out of range

```python
# Select bboxes by score threshold
if bbox_thr is not None:
    assert bboxes.shape[1] == 5
```
A contributor commented:

```shell
python demo/top_down_pose_tracking_demo_with_mmdet.py \
    demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    configs/top_down/resnet/coco/res50_coco_256x192.py \
    https://download.openmmlab.com/mmpose/top_down/resnet/res50_coco_256x192-ec54d7f3_20200709.pth \
    --video-path /dev/video0 \
    --show
```

There is no exception handling for the case where no human bounding boxes are detected:

```
Use load_from_http loader
Use load_from_http loader
Traceback (most recent call last):
  File "demo/top_down_pose_tracking_demo_with_mmdet.py", line 177, in <module>
    main()
  File "demo/top_down_pose_tracking_demo_with_mmdet.py", line 132, in main
    pose_results, returned_outputs = inference_top_down_pose_model(
  File "/mmpose/mmpose/apis/inference.py", line 370, in inference_top_down_pose_model
    assert bboxes.shape[1] == 5
IndexError: tuple index out of range
```
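The failure is easy to reproduce outside MMPose: when the detector returns no persons, `np.array` of an empty list is 1-D, so `shape[1]` does not exist. A minimal repro in plain NumPy:

```python
import numpy as np

person_results = []  # detector found no people in the frame
bboxes = np.array([person['bbox'] for person in person_results])

print(bboxes.shape)  # (0,) -- a 1-D empty array, not (0, 5)

try:
    assert bboxes.shape[1] == 5
except IndexError as err:
    print('IndexError:', err)  # tuple index out of range, as in the traceback
```

The fix therefore has to check for emptiness before touching `shape[1]`.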

A contributor replied:

Thanks! Please open an issue so that it is easier to track progress

@@ -333,38 +345,42 @@ def inference_top_down_pose_model(model,
pose_results = []
returned_outputs = []

A contributor commented:

```python
    if len(person_results) == 0:
        return pose_results, returned_outputs
```
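Putting that early return together with the existing score-threshold check, the guarded selection would look roughly like this. This is a sketch with simplified names: `select_bboxes` is a hypothetical helper, not the actual function in mmpose/apis/inference.py.

```python
import numpy as np

def select_bboxes(person_results, bbox_thr=None):
    """Sketch of guarded bbox selection for top-down inference."""
    # Early return: avoids indexing shape[1] on a 1-D empty array.
    if len(person_results) == 0:
        return np.zeros((0, 5))
    bboxes = np.array([person['bbox'] for person in person_results])
    if bbox_thr is not None:
        assert bboxes.shape[1] == 5        # x1, y1, x2, y2, score
        bboxes = bboxes[bboxes[:, 4] > bbox_thr]
    return bboxes
```

With the guard in place, an empty detection list yields an empty `(0, 5)` array instead of raising `IndexError`.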

ly015 added a commit to ly015/mmpose that referenced this pull request Apr 20, 2021
* Reorganize demo folder
* Fix out-of-date mmdet/mmtrack configs
* Fix a bug in inference_top_down_pose_model which causes track_id missing (seems introduced by open-mmlab#560)

Remaining issues:
* Some video files used in example commands in demos do not exist
@ly015 ly015 mentioned this pull request Apr 20, 2021
@namirinz namirinz deleted the batch_inference branch April 21, 2021 13:15
shuheilocale pushed a commit to shuheilocale/mmpose that referenced this pull request May 6, 2023
…model able to run on batches of bounding box (open-mmlab#560)

* modified inference_top_down_model to make model-batch runnable

* formatting code by pre-commit

* Fix bug when bbox_thr make empty bbox

* resolve comments

* resolve comments

Co-authored-by: jinsheng <[email protected]>
ajgrafton pushed a commit to ajgrafton/mmpose that referenced this pull request Mar 6, 2024
…model able to run on batches of bounding box (open-mmlab#560)

* modified inference_top_down_model to make model-batch runnable

* formatting code by pre-commit

* Fix bug when bbox_thr make empty bbox

* resolve comments

* resolve comments

Co-authored-by: jinsheng <[email protected]>
5 participants