(Speed up TopDown Inference) Modified inference_top_down_model to make the model able to run on batches of bounding boxes #560
Conversation
Codecov Report
@@ Coverage Diff @@
## master #560 +/- ##
==========================================
- Coverage 82.25% 81.37% -0.88%
==========================================
Files 154 163 +9
Lines 10436 11271 +835
Branches 1655 1810 +155
==========================================
+ Hits 8584 9172 +588
- Misses 1463 1662 +199
- Partials 389 437 +48
Thanks! The linting fails. Please install pre-commit to do auto-formatting. Under the repo's root dir:
pip install -U pre-commit
pre-commit install
pre-commit run --all-files
Video comparison with single inference and batch inference
Please sign the CLA (Contributor License Agreement). @namirinz
Done.
#560
IndexError: tuple index out of range
# Select bboxes by score threshold
if bbox_thr is not None:
    assert bboxes.shape[1] == 5
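A minimal sketch of what this threshold step does, using made-up data (the array contents and bbox_thr value here are illustrative, not from the PR): each row is [x1, y1, x2, y2, score], and rows whose score does not exceed the threshold are dropped.

```python
import numpy as np

# Hypothetical detections: an (N, 5) array of [x1, y1, x2, y2, score],
# which is the layout the assert above checks for.
bboxes = np.array([[0, 0, 10, 10, 0.9],
                   [5, 5, 20, 20, 0.3]])
bbox_thr = 0.5

assert bboxes.shape[1] == 5              # score column must be present
kept = bboxes[bboxes[:, 4] > bbox_thr]   # keep rows above the threshold
print(kept.shape[0])                     # 1 box survives the threshold
```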
python demo/top_down_pose_tracking_demo_with_mmdet.py \
demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
configs/top_down/resnet/coco/res50_coco_256x192.py \
https://download.openmmlab.com/mmpose/top_down/resnet/res50_coco_256x192-ec54d7f3_20200709.pth \
--video-path /dev/video0 \
--show
There is no exception handling when no human bounding boxes are detected:
Use load_from_http loader
Use load_from_http loader
Traceback (most recent call last):
File "demo/top_down_pose_tracking_demo_with_mmdet.py", line 177, in <module>
main()
File "demo/top_down_pose_tracking_demo_with_mmdet.py", line 132, in main
pose_results, returned_outputs = inference_top_down_pose_model(
File "/mmpose/mmpose/apis/inference.py", line 370, in inference_top_down_pose_model
assert bboxes.shape[1] == 5
IndexError: tuple index out of range
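The crash above happens because with zero detections the stacked bbox array is one-dimensional, so its shape tuple has no second element. A small sketch of the failure mode and an early-exit guard (the guard mirrors the fix suggested later in this thread; the print messages are illustrative):

```python
import numpy as np

# With no detections, the bbox array ends up 1-D with shape (0,),
# so shape has no index 1 and the assert's subscript raises IndexError.
bboxes = np.array([])
print(bboxes.shape)        # (0,)
try:
    _ = bboxes.shape[1]    # same subscript as `bboxes.shape[1] == 5`
except IndexError as err:
    print(err)             # tuple index out of range

# Checking for emptiness before touching shape[1] avoids the crash.
if bboxes.shape[0] == 0:
    print('no person detected, skip pose estimation')
```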
Thanks! Please open an issue so that it is easier to track progress.
@@ -333,38 +345,42 @@ def inference_top_down_pose_model(model,
    pose_results = []
    returned_outputs = []
if len(person_results) == 0:
    return pose_results, returned_outputs
* Reorganize demo folder
* Fix out-of-date mmdet/mmtrack configs
* Fix a bug in inference_top_down_pose_model which causes track_id to go missing (seems introduced by open-mmlab#560)

Remaining issues:
* Some video files used in example commands in demos do not exist
…model able to run on batches of bounding box (open-mmlab#560)

* modified inference_top_down_model to make model-batch runnable
* formatting code by pre-commit
* Fix bug when bbox_thr makes empty bbox
* resolve comments

Co-authored-by: jinsheng <[email protected]>
I modified _inference_single_pose_model to preprocess (center, scale) all bounding boxes (say N bboxes) in a for-loop and stack them into batch_data, then feed this batch_data to pose_model to get all N keypoint predictions at once. I also modified inference_top_down_model to keep its output as close as possible to the old one.
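The batching idea described above can be sketched as follows. This is a simplified illustration, not mmpose's actual implementation: the helper names bbox_to_center_scale and batch_preprocess, and the padding factor, are assumptions made for this sketch; the real code also resizes and normalizes image crops before stacking.

```python
import numpy as np

def bbox_to_center_scale(bbox, padding=1.25):
    """Convert one xyxy bbox to (center, scale); a simplified stand-in
    for the per-bbox preprocessing done inside the inference function."""
    x1, y1, x2, y2 = bbox[:4]
    center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    scale = np.array([x2 - x1, y2 - y1]) * padding
    return center, scale

def batch_preprocess(bboxes):
    """For-loop over N bboxes, then stack the results into one batch,
    so the pose model can run a single forward pass instead of N."""
    centers, scales = [], []
    for bbox in bboxes:
        c, s = bbox_to_center_scale(bbox)
        centers.append(c)
        scales.append(s)
    return np.stack(centers), np.stack(scales)

# Two hypothetical detections: [x1, y1, x2, y2, score]
bboxes = np.array([[0, 0, 100, 200, 0.9],
                   [50, 50, 150, 250, 0.8]])
centers, scales = batch_preprocess(bboxes)
print(centers.shape, scales.shape)  # (2, 2) (2, 2)
```

Stacking the per-bbox tensors this way is what lets the model produce keypoints for all N people in one batched forward pass.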