diff --git a/demo/docs/en/webcam_api_demo.md b/demo/docs/en/webcam_api_demo.md
index 4bbc75c261..9869392171 100644
--- a/demo/docs/en/webcam_api_demo.md
+++ b/demo/docs/en/webcam_api_demo.md
@@ -1,104 +1,30 @@
## Webcam Demo
-We provide a webcam demo tool which integrartes detection and 2D pose estimation for humans and animals. It can also apply fun effects like putting on sunglasses or enlarging the eyes, based on the pose estimation results.
+The original Webcam API was deprecated in v1.1.0. Users can now perform pose estimation on webcam input with either the Inferencer or the demo scripts.
-
-
-
+### Webcam Demo with Inferencer
-### Get started
-
-Launch the demo from the mmpose root directory:
-
-```shell
-# Run webcam demo with GPU
-python demo/webcam_api_demo.py
-
-# Run webcam demo with CPU
-python demo/webcam_api_demo.py --cpu
-```
-
-The command above will use the default config file `demo/webcam_cfg/human_pose.py`. You can also specify the config file in the command:
+Users can estimate human poses from webcam input with the MMPose Inferencer by running the following command:
```shell
-python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
+python demo/inferencer_demo.py webcam --pose2d 'human'
```
-### Hotkeys
-
-| Hotkey | Function |
-| ------ | ------------------------------------- |
-| v | Toggle the pose visualization on/off. |
-| h | Show help information. |
-| m | Show the monitoring information. |
-| q | Exit. |
-
-Note that the demo will automatically save the output video into a file `webcam_api_demo.mp4`.
+For details on the Inferencer's arguments, please refer to the [Inferencer Documentation](/docs/en/user_guides/inference.md).
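+
+Alternatively, the same can be done from Python (a minimal sketch; it assumes the `MMPoseInferencer` class exposed by `mmpose.apis`):
+
+```python
+from mmpose.apis import MMPoseInferencer
+
+# 'webcam' makes the inferencer read frames from the default camera
+inferencer = MMPoseInferencer(pose2d='human')
+
+# Results are yielded frame by frame; `show=True` opens a display window.
+for result in inferencer('webcam', show=True):
+    pass  # each `result` holds the predictions for one frame
+```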
-### Usage and configuarations
+### Webcam Demo with Demo Script
-Detailed configurations can be found in the config file.
+All of the demo scripts, except for `demo/image_demo.py`, support webcam input.
-- **Configure detection models**
- Users can choose detection models from the [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html). Just set the `model_config` and `model_checkpoint` in the detector node accordingly, and the model will be automatically downloaded and loaded.
+Taking `demo/topdown_demo_with_mmdet.py` as an example, users can run this script with webcam input by specifying **`--input webcam`** in the command:
- ```python
- # 'DetectorNode':
- # This node performs object detection from the frame image using an
- # MMDetection model.
- dict(
- type='DetectorNode',
- name='detector',
- model_config='demo/mmdetection_cfg/'
- 'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
- model_checkpoint='https://download.openmmlab.com'
- '/mmdetection/v2.0/ssd/'
- 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
- 'scratch_600e_coco_20210629_110627-974d9307.pth',
- input_buffer='_input_',
- output_buffer='det_result'),
- ```
-
-- **Configure pose estimation models**
- In this demo we use two [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) pose estimation models for humans and animals respectively. Users can choose models from the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html). To apply different pose models on different instance types, you can add multiple pose estimator nodes with `cls_names` set accordingly.
-
- ```python
- # 'TopdownPoseEstimatorNode':
- # This node performs keypoint detection from the frame image using an
- # MMPose top-down model. Detection results is needed.
- dict(
- type='TopdownPoseEstimatorNode',
- name='human pose estimator',
- model_config='configs/wholebody_2d_keypoint/'
- 'topdown_heatmap/coco-wholebody/'
- 'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/'
- 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
- '-e2158108_20211205.pth',
- labels=['person'],
- input_buffer='det_result',
- output_buffer='human_pose'),
- dict(
- type='TopdownPoseEstimatorNode',
- name='animal pose estimator',
- model_config='configs/animal_2d_keypoint/topdown_heatmap/'
- 'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
- 'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
- labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
- input_buffer='human_pose',
- output_buffer='animal_pose'),
- ```
-
-- **Run the demo on a local video file**
- You can use local video files as the demo input by set `camera_id` to the file path.
-
-- **The computer doesn't have a camera?**
- A smart phone can serve as a webcam via apps like [Camo](https://reincubate.com/camo/) or [DroidCam](https://www.dev47apps.com/).
-
-- **Test the camera and display**
- Run follow command for a quick test of video capturing and displaying.
-
- ```shell
- python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
- ```
+```shell
+# inference with webcam
+python demo/topdown_demo_with_mmdet.py \
+ projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
+ https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
+ projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
+ https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
+ --input webcam \
+ --show
+```
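+
+If no GPU is available, inference can usually be switched to CPU via the script's `--device` argument (a sketch assuming the script exposes `--device`, as most MMPose demo scripts do):
+
+```shell
+# same detector / pose model arguments as above, plus `--device cpu`
+python demo/topdown_demo_with_mmdet.py \
+    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
+    projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
+    https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
+    --input webcam \
+    --device cpu \
+    --show
+```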
diff --git a/demo/docs/zh_cn/webcam_api_demo.md b/demo/docs/zh_cn/webcam_api_demo.md
index acc1aa9b0a..66099c9ca6 100644
--- a/demo/docs/zh_cn/webcam_api_demo.md
+++ b/demo/docs/zh_cn/webcam_api_demo.md
@@ -1,109 +1,30 @@
-## Webcam Demo
+## 摄像头推理
-我们提供了同时支持人体和动物的识别和 2D 姿态预估 webcam demo 工具,用户也可以用这个脚本在姿态预测结果上加入譬如大眼和戴墨镜等好玩的特效。
+从版本 v1.1.0 开始,原来的摄像头 API 已被弃用。用户现在可以选择使用推理器(Inferencer)或 Demo 脚本对摄像头读取的视频进行姿态估计。
-
-
-
+### 使用推理器进行摄像头推理
-### Get started
-
-脚本使用方式很简单,直接在 MMPose 根路径使用:
-
-```shell
-# 使用 GPU
-python demo/webcam_api_demo.py
-
-# 仅使用 CPU
-python demo/webcam_api_demo.py --cpu
-```
-
-该命令会使用默认的 `demo/webcam_cfg/human_pose.py` 作为配置文件,用户可以自行指定别的配置:
+用户可以通过执行以下命令来利用 MMPose Inferencer 对摄像头输入进行人体姿态估计:
```shell
-python demo/webcam_api_demo.py --config demo/webcam_cfg/human_pose.py
+python demo/inferencer_demo.py webcam --pose2d 'human'
```
-### Hotkeys
-
-| Hotkey | Function |
-| ------ | ------------------------------------- |
-| v | Toggle the pose visualization on/off. |
-| h | Show help information. |
-| m | Show the monitoring information. |
-| q | Exit. |
-
-注意:脚本会自动将实时结果保存成一个名为 `webcam_api_demo.mp4` 的视频文件。
-
-### 配置使用
-
-这里我们只进行一些基本的说明,更多的信息可以直接参考对应的配置文件。
-
-- **设置检测模型**
+有关推理器的参数详细信息,请参阅 [推理器文档](/docs/en/user_guides/inference.md)。
- 用户可以直接使用 [MMDetection Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html) 里的识别模型,需要注意的是确保配置文件中的 DetectorNode 里的 `model_config` 和 `model_checkpoint` 需要对应起来,这样模型就会被自动下载和加载,例如:
+### 使用 Demo 脚本进行摄像头推理
- ```python
- # 'DetectorNode':
- # This node performs object detection from the frame image using an
- # MMDetection model.
- dict(
- type='DetectorNode',
- name='detector',
- model_config='demo/mmdetection_cfg/'
- 'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
- model_checkpoint='https://download.openmmlab.com'
- '/mmdetection/v2.0/ssd/'
- 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
- 'scratch_600e_coco_20210629_110627-974d9307.pth',
- input_buffer='_input_',
- output_buffer='det_result'),
- ```
+除了 `demo/image_demo.py` 之外,所有的 Demo 脚本都支持摄像头输入。
-- **设置姿态预估模型**
+以 `demo/topdown_demo_with_mmdet.py` 为例,用户可以通过在命令中指定 **`--input webcam`** 来使用该脚本对摄像头输入进行推理:
- 这里我们用两个 [top-down](https://github.com/open-mmlab/mmpose/tree/latest/configs/body_2d_keypoint/topdown_heatmap) 结构的人体和动物姿态预估模型进行演示。用户可以自由使用 [MMPose Model Zoo](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo/body_2d_keypoint.html) 里的模型。需要注意的是,更换模型后用户需要在对应的 pose estimate node 里添加或修改对应的 `cls_names` ,例如:
-
- ```python
- # 'TopdownPoseEstimatorNode':
- # This node performs keypoint detection from the frame image using an
- # MMPose top-down model. Detection results is needed.
- dict(
- type='TopdownPoseEstimatorNode',
- name='human pose estimator',
- model_config='configs/wholebody_2d_keypoint/'
- 'topdown_heatmap/coco-wholebody/'
- 'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/'
- 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
- '-e2158108_20211205.pth',
- labels=['person'],
- input_buffer='det_result',
- output_buffer='human_pose'),
- dict(
- type='TopdownPoseEstimatorNode',
- name='animal pose estimator',
- model_config='configs/animal_2d_keypoint/topdown_heatmap/'
- 'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
- 'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
- labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
- input_buffer='human_pose',
- output_buffer='animal_pose'),
- ```
-
-- **使用本地视频文件**
-
- 如果想直接使用本地的视频文件,用户只需要把文件路径设置到 `camera_id` 就行。
-
-- **本机没有摄像头怎么办**
-
- 用户可以在自己手机安装上一些 app 就能替代摄像头,例如 [Camo](https://reincubate.com/camo/) 和 [DroidCam](https://www.dev47apps.com/) 。
-
-- **测试摄像头和显示器连接**
-
- 使用如下命令就能完成检测:
-
- ```shell
- python demo/webcam_api_demo.py --config demo/webcam_cfg/test_camera.py
- ```
+```shell
+# inference with webcam
+python demo/topdown_demo_with_mmdet.py \
+ projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
+ https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
+ projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
+ https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
+ --input webcam \
+ --show
+```
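+
+此外,也可以通过 Python API 实现同样的功能(以下为示意代码,假设可以从 `mmpose.apis` 导入 `MMPoseInferencer` 类):
+
+```python
+from mmpose.apis import MMPoseInferencer
+
+# 'webcam' 表示从默认摄像头读取视频帧
+inferencer = MMPoseInferencer(pose2d='human')
+
+# 结果逐帧生成;`show=True` 会打开可视化窗口
+for result in inferencer('webcam', show=True):
+    pass  # 每个 `result` 对应一帧的预测结果
+```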
diff --git a/demo/webcam_api_demo.py b/demo/webcam_api_demo.py
deleted file mode 100644
index 7d7ad263b1..0000000000
--- a/demo/webcam_api_demo.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-import logging
-import warnings
-from argparse import ArgumentParser
-
-from mmengine import Config, DictAction
-
-from mmpose.apis.webcam import WebcamExecutor
-from mmpose.apis.webcam.nodes import model_nodes
-
-
-def parse_args():
- parser = ArgumentParser('Webcam executor configs')
- parser.add_argument(
- '--config', type=str, default='demo/webcam_cfg/human_pose.py')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- default={},
- help='Override settings in the config. The key-value pair '
- 'in xxx=yyy format will be merged into config file. For example, '
- "'--cfg-options executor_cfg.camera_id=1'")
- parser.add_argument(
- '--debug', action='store_true', help='Show debug information.')
- parser.add_argument(
- '--cpu', action='store_true', help='Use CPU for model inference.')
- parser.add_argument(
- '--cuda', action='store_true', help='Use GPU for model inference.')
-
- return parser.parse_args()
-
-
-def set_device(cfg: Config, device: str):
- """Set model device in config.
-
- Args:
- cfg (Config): Webcam config
- device (str): device indicator like "cpu" or "cuda:0"
- """
-
- device = device.lower()
- assert device == 'cpu' or device.startswith('cuda:')
-
- for node_cfg in cfg.executor_cfg.nodes:
- if node_cfg.type in model_nodes.__all__:
- node_cfg.update(device=device)
-
- return cfg
-
-
-def run():
-
- warnings.warn('The Webcam API will be deprecated in future. ',
- DeprecationWarning)
-
- args = parse_args()
- cfg = Config.fromfile(args.config)
- cfg.merge_from_dict(args.cfg_options)
-
- if args.debug:
- logging.basicConfig(level=logging.DEBUG)
-
- if args.cpu:
- cfg = set_device(cfg, 'cpu')
-
- if args.cuda:
- cfg = set_device(cfg, 'cuda:0')
-
- webcam_exe = WebcamExecutor(**cfg.executor_cfg)
- webcam_exe.run()
-
-
-if __name__ == '__main__':
- run()
diff --git a/demo/webcam_cfg/human_animal_pose.py b/demo/webcam_cfg/human_animal_pose.py
deleted file mode 100644
index 5eedc7f216..0000000000
--- a/demo/webcam_cfg/human_animal_pose.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-executor_cfg = dict(
- # Basic configurations of the executor
- name='Pose Estimation',
- camera_id=0,
- # Define nodes.
- # The configuration of a node usually includes:
- # 1. 'type': Node class name
- # 2. 'name': Node name
- # 3. I/O buffers (e.g. 'input_buffer', 'output_buffer'): specify the
- # input and output buffer names. This may depend on the node class.
- # 4. 'enable_key': assign a hot-key to toggle enable/disable this node.
- # This may depend on the node class.
- # 5. Other class-specific arguments
- nodes=[
- # 'DetectorNode':
- # This node performs object detection from the frame image using an
- # MMDetection model.
- dict(
- type='DetectorNode',
- name='detector',
- model_config='demo/mmdetection_cfg/'
- 'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
- model_checkpoint='https://download.openmmlab.com'
- '/mmdetection/v2.0/ssd/'
- 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
- 'scratch_600e_coco_20210629_110627-974d9307.pth',
- input_buffer='_input_', # `_input_` is an executor-reserved buffer
- output_buffer='det_result'),
- # 'TopdownPoseEstimatorNode':
- # This node performs keypoint detection from the frame image using an
- # MMPose top-down model. Detection results is needed.
- dict(
- type='TopdownPoseEstimatorNode',
- name='human pose estimator',
- model_config='configs/wholebody_2d_keypoint/'
- 'topdown_heatmap/coco-wholebody/'
- 'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/'
- 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
- '-e2158108_20211205.pth',
- labels=['person'],
- input_buffer='det_result',
- output_buffer='human_pose'),
- dict(
- type='TopdownPoseEstimatorNode',
- name='animal pose estimator',
- model_config='configs/animal_2d_keypoint/topdown_heatmap/'
- 'animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/animal/'
- 'hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth',
- labels=['cat', 'dog', 'horse', 'sheep', 'cow'],
- input_buffer='human_pose',
- output_buffer='animal_pose'),
- # 'ObjectAssignerNode':
- # This node binds the latest model inference result with the current
- # frame. (This means the frame image and inference result may be
- # asynchronous).
- dict(
- type='ObjectAssignerNode',
- name='object assigner',
- frame_buffer='_frame_', # `_frame_` is an executor-reserved buffer
- object_buffer='animal_pose',
- output_buffer='frame'),
- # 'ObjectVisualizerNode':
- # This node draw the pose visualization result in the frame image.
- # Pose results is needed.
- dict(
- type='ObjectVisualizerNode',
- name='object visualizer',
- enable_key='v',
- enable=True,
- show_bbox=True,
- must_have_keypoint=False,
- show_keypoint=True,
- input_buffer='frame',
- output_buffer='vis'),
- # 'SunglassesNode':
- # This node draw the sunglasses effect in the frame image.
- # Pose results is needed.
- dict(
- type='SunglassesEffectNode',
- name='sunglasses',
- enable_key='s',
- enable=False,
- input_buffer='vis',
- output_buffer='vis_sunglasses'),
- # 'BigeyeEffectNode':
- # This node draw the big-eye effetc in the frame image.
- # Pose results is needed.
- dict(
- type='BigeyeEffectNode',
- name='big-eye',
- enable_key='b',
- enable=False,
- input_buffer='vis_sunglasses',
- output_buffer='vis_bigeye'),
- # 'NoticeBoardNode':
- # This node show a notice board with given content, e.g. help
- # information.
- dict(
- type='NoticeBoardNode',
- name='instruction',
- enable_key='h',
- enable=True,
- input_buffer='vis_bigeye',
- output_buffer='vis_notice',
- content_lines=[
- 'This is a demo for pose visualization and simple image '
- 'effects. Have fun!', '', 'Hot-keys:',
- '"v": Pose estimation result visualization',
- '"s": Sunglasses effect B-)', '"b": Big-eye effect 0_0',
- '"h": Show help information',
- '"m": Show diagnostic information', '"q": Exit'
- ],
- ),
- # 'MonitorNode':
- # This node show diagnostic information in the frame image. It can
- # be used for debugging or monitoring system resource status.
- dict(
- type='MonitorNode',
- name='monitor',
- enable_key='m',
- enable=False,
- input_buffer='vis_notice',
- output_buffer='display'),
- # 'RecorderNode':
- # This node save the output video into a file.
- dict(
- type='RecorderNode',
- name='recorder',
- out_video_file='webcam_api_demo.mp4',
- input_buffer='display',
- output_buffer='_display_'
- # `_display_` is an executor-reserved buffer
- )
- ])
diff --git a/demo/webcam_cfg/human_pose.py b/demo/webcam_cfg/human_pose.py
deleted file mode 100644
index d1bac5722a..0000000000
--- a/demo/webcam_cfg/human_pose.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-executor_cfg = dict(
- # Basic configurations of the executor
- name='Pose Estimation',
- camera_id=0,
- # Define nodes.
- # The configuration of a node usually includes:
- # 1. 'type': Node class name
- # 2. 'name': Node name
- # 3. I/O buffers (e.g. 'input_buffer', 'output_buffer'): specify the
- # input and output buffer names. This may depend on the node class.
- # 4. 'enable_key': assign a hot-key to toggle enable/disable this node.
- # This may depend on the node class.
- # 5. Other class-specific arguments
- nodes=[
- # 'DetectorNode':
- # This node performs object detection from the frame image using an
- # MMDetection model.
- dict(
- type='DetectorNode',
- name='detector',
- model_config='projects/rtmpose/rtmdet/person/'
- 'rtmdet_nano_320-8xb32_coco-person.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/v1/'
- 'projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth', # noqa
- input_buffer='_input_', # `_input_` is an executor-reserved buffer
- output_buffer='det_result'),
- # 'TopdownPoseEstimatorNode':
- # This node performs keypoint detection from the frame image using an
- # MMPose top-down model. Detection results is needed.
- dict(
- type='TopdownPoseEstimatorNode',
- name='human pose estimator',
- model_config='projects/rtmpose/rtmpose/body_2d_keypoint/'
- 'rtmpose-t_8xb256-420e_coco-256x192.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/v1/'
- 'projects/rtmpose/rtmpose-tiny_simcc-aic-coco_pt-aic-coco_420e-256x192-cfc8f33d_20230126.pth', # noqa
- labels=['person'],
- input_buffer='det_result',
- output_buffer='human_pose'),
- # 'ObjectAssignerNode':
- # This node binds the latest model inference result with the current
- # frame. (This means the frame image and inference result may be
- # asynchronous).
- dict(
- type='ObjectAssignerNode',
- name='object assigner',
- frame_buffer='_frame_', # `_frame_` is an executor-reserved buffer
- object_buffer='human_pose',
- output_buffer='frame'),
- # 'ObjectVisualizerNode':
- # This node draw the pose visualization result in the frame image.
- # Pose results is needed.
- dict(
- type='ObjectVisualizerNode',
- name='object visualizer',
- enable_key='v',
- enable=True,
- show_bbox=True,
- must_have_keypoint=False,
- show_keypoint=True,
- input_buffer='frame',
- output_buffer='vis'),
- # 'NoticeBoardNode':
- # This node show a notice board with given content, e.g. help
- # information.
- dict(
- type='NoticeBoardNode',
- name='instruction',
- enable_key='h',
- enable=True,
- input_buffer='vis',
- output_buffer='vis_notice',
- content_lines=[
- 'This is a demo for pose visualization and simple image '
- 'effects. Have fun!', '', 'Hot-keys:',
- '"v": Pose estimation result visualization',
- '"h": Show help information',
- '"m": Show diagnostic information', '"q": Exit'
- ],
- ),
- # 'MonitorNode':
- # This node show diagnostic information in the frame image. It can
- # be used for debugging or monitoring system resource status.
- dict(
- type='MonitorNode',
- name='monitor',
- enable_key='m',
- enable=False,
- input_buffer='vis_notice',
- output_buffer='display'),
- # 'RecorderNode':
- # This node save the output video into a file.
- dict(
- type='RecorderNode',
- name='recorder',
- out_video_file='webcam_api_demo.mp4',
- input_buffer='display',
- output_buffer='_display_'
- # `_display_` is an executor-reserved buffer
- )
- ])
diff --git a/demo/webcam_cfg/test_camera.py b/demo/webcam_cfg/test_camera.py
deleted file mode 100644
index e6d79cf6db..0000000000
--- a/demo/webcam_cfg/test_camera.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-executor_cfg = dict(
- name='Test Webcam',
- camera_id=0,
- camera_max_fps=30,
- nodes=[
- dict(
- type='MonitorNode',
- name='monitor',
- enable_key='m',
- enable=False,
- input_buffer='_frame_',
- output_buffer='display'),
- # 'RecorderNode':
- # This node save the output video into a file.
- dict(
- type='RecorderNode',
- name='recorder',
- out_video_file='webcam_api_output.mp4',
- input_buffer='display',
- output_buffer='_display_')
- ])
diff --git a/docs/en/api.rst b/docs/en/api.rst
index a75e4a451d..48819a2531 100644
--- a/docs/en/api.rst
+++ b/docs/en/api.rst
@@ -132,5 +132,3 @@ hooks
^^^^^^^^^^^
.. automodule:: mmpose.engine.hooks
:members:
-
-.. include:: webcam_api.rst
diff --git a/docs/en/webcam_api.rst b/docs/en/webcam_api.rst
deleted file mode 100644
index ff1c127515..0000000000
--- a/docs/en/webcam_api.rst
+++ /dev/null
@@ -1,112 +0,0 @@
-mmpose.apis.webcam
---------------------
-.. contents:: MMPose Webcam API: Tools to build simple interactive webcam applications and demos
- :depth: 2
- :local:
- :backlinks: top
-
-Executor
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- WebcamExecutor
-
-Nodes
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam.nodes
-
-Base Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- Node
- BaseVisualizerNode
-
-Model Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- DetectorNode
- TopdownPoseEstimatorNode
-
-Visualizer Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- ObjectVisualizerNode
- NoticeBoardNode
- SunglassesEffectNode
- BigeyeEffectNode
-
-Helper Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- ObjectAssignerNode
- MonitorNode
- RecorderNode
-
-Utils
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam.utils
-
-Buffer and Message
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- BufferManager
- Message
- FrameMessage
- VideoEndingMessage
-
-Pose
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- get_eye_keypoint_ids
- get_face_keypoint_ids
- get_hand_keypoint_ids
- get_mouth_keypoint_ids
- get_wrist_keypoint_ids
-
-Event
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- EventManager
-
-Misc
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- copy_and_paste
- screen_matting
- expand_and_clamp
- limit_max_fps
- is_image_file
- get_cached_file_path
- load_image_from_disk_or_url
- get_config_path
diff --git a/docs/zh_cn/api.rst b/docs/zh_cn/api.rst
index a75e4a451d..48819a2531 100644
--- a/docs/zh_cn/api.rst
+++ b/docs/zh_cn/api.rst
@@ -132,5 +132,3 @@ hooks
^^^^^^^^^^^
.. automodule:: mmpose.engine.hooks
:members:
-
-.. include:: webcam_api.rst
diff --git a/docs/zh_cn/webcam_api.rst b/docs/zh_cn/webcam_api.rst
deleted file mode 100644
index ff1c127515..0000000000
--- a/docs/zh_cn/webcam_api.rst
+++ /dev/null
@@ -1,112 +0,0 @@
-mmpose.apis.webcam
---------------------
-.. contents:: MMPose Webcam API: Tools to build simple interactive webcam applications and demos
- :depth: 2
- :local:
- :backlinks: top
-
-Executor
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- WebcamExecutor
-
-Nodes
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam.nodes
-
-Base Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- Node
- BaseVisualizerNode
-
-Model Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- DetectorNode
- TopdownPoseEstimatorNode
-
-Visualizer Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- ObjectVisualizerNode
- NoticeBoardNode
- SunglassesEffectNode
- BigeyeEffectNode
-
-Helper Nodes
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
- :template: webcam_node_class.rst
-
- ObjectAssignerNode
- MonitorNode
- RecorderNode
-
-Utils
-^^^^^^^^^^^^^^^^^^^^
-.. currentmodule:: mmpose.apis.webcam.utils
-
-Buffer and Message
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- BufferManager
- Message
- FrameMessage
- VideoEndingMessage
-
-Pose
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- get_eye_keypoint_ids
- get_face_keypoint_ids
- get_hand_keypoint_ids
- get_mouth_keypoint_ids
- get_wrist_keypoint_ids
-
-Event
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- EventManager
-
-Misc
-""""""""""""""""""""
-.. autosummary::
- :toctree: generated
- :nosignatures:
-
- copy_and_paste
- screen_matting
- expand_and_clamp
- limit_max_fps
- is_image_file
- get_cached_file_path
- load_image_from_disk_or_url
- get_config_path
diff --git a/mmpose/apis/webcam/__init__.py b/mmpose/apis/webcam/__init__.py
deleted file mode 100644
index 271b238c67..0000000000
--- a/mmpose/apis/webcam/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .webcam_executor import WebcamExecutor
-
-__all__ = ['WebcamExecutor']
diff --git a/mmpose/apis/webcam/nodes/__init__.py b/mmpose/apis/webcam/nodes/__init__.py
deleted file mode 100644
index 50f7c899d3..0000000000
--- a/mmpose/apis/webcam/nodes/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .base_visualizer_node import BaseVisualizerNode
-from .helper_nodes import MonitorNode, ObjectAssignerNode, RecorderNode
-from .model_nodes import DetectorNode, TopdownPoseEstimatorNode
-from .node import Node
-from .registry import NODES
-from .visualizer_nodes import (BigeyeEffectNode, NoticeBoardNode,
- ObjectVisualizerNode, SunglassesEffectNode)
-
-__all__ = [
- 'BaseVisualizerNode', 'NODES', 'MonitorNode', 'ObjectAssignerNode',
- 'RecorderNode', 'DetectorNode', 'TopdownPoseEstimatorNode', 'Node',
- 'BigeyeEffectNode', 'NoticeBoardNode', 'ObjectVisualizerNode',
- 'ObjectAssignerNode', 'SunglassesEffectNode'
-]
diff --git a/mmpose/apis/webcam/nodes/base_visualizer_node.py b/mmpose/apis/webcam/nodes/base_visualizer_node.py
deleted file mode 100644
index 0e0ba397d4..0000000000
--- a/mmpose/apis/webcam/nodes/base_visualizer_node.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from abc import abstractmethod
-from typing import Dict, List, Optional, Union
-
-import numpy as np
-
-from ..utils import FrameMessage, Message
-from .node import Node
-
-
-class BaseVisualizerNode(Node):
- """Base class for nodes whose function is to create visual effects, like
- visualizing model predictions, showing graphics or showing text messages.
-
- All subclass should implement the method ``draw()``.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str | list): The name(s) of the output buffer(s).
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ascii code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method need to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
- """
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True):
-
- super().__init__(name=name, enable_key=enable_key, enable=enable)
-
- # Register buffers
- self.register_input_buffer(input_buffer, 'input', trigger=True)
- self.register_output_buffer(output_buffer)
-
- def process(self, input_msgs: Dict[str, Message]) -> Union[Message, None]:
- input_msg = input_msgs['input']
-
- img = self.draw(input_msg)
- input_msg.set_image(img)
-
- return input_msg
-
- def bypass(self, input_msgs: Dict[str, Message]) -> Union[Message, None]:
- return input_msgs['input']
-
- @abstractmethod
- def draw(self, input_msg: FrameMessage) -> np.ndarray:
- """Draw on the frame image of the input FrameMessage.
-
- Args:
- input_msg (:obj:`FrameMessage`): The message of the frame to draw
- on
-
- Returns:
- np.array: The processed image.
- """
diff --git a/mmpose/apis/webcam/nodes/helper_nodes/__init__.py b/mmpose/apis/webcam/nodes/helper_nodes/__init__.py
deleted file mode 100644
index 8bb0ed9dd1..0000000000
--- a/mmpose/apis/webcam/nodes/helper_nodes/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .monitor_node import MonitorNode
-from .object_assigner_node import ObjectAssignerNode
-from .recorder_node import RecorderNode
-
-__all__ = ['MonitorNode', 'ObjectAssignerNode', 'RecorderNode']
diff --git a/mmpose/apis/webcam/nodes/helper_nodes/monitor_node.py b/mmpose/apis/webcam/nodes/helper_nodes/monitor_node.py
deleted file mode 100644
index 305490dc52..0000000000
--- a/mmpose/apis/webcam/nodes/helper_nodes/monitor_node.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Optional, Union
-
-import cv2
-import numpy as np
-from mmcv import color_val
-
-from ..node import Node
-from ..registry import NODES
-
-try:
- import psutil
- psutil_proc = psutil.Process()
-except (ImportError, ModuleNotFoundError):
- psutil_proc = None
-
-
-@NODES.register_module()
-class MonitorNode(Node):
- """Show diagnostic information.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ascii code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method need to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
- x_offset (int): The position of the text box's left border in
- pixels. Default: 20
- y_offset (int): The position of the text box's top border in
- pixels. Default: 20
- y_delta (int): The line height in pixels. Default: 15
- text_color (str|tuple): The font color represented in a color name or
- a BGR tuple. Default: ``'black'``
- backbround_color (str|tuple): The background color represented in a
- color name or a BGR tuple. Default: (255, 183, 0)
- text_scale (float): The font scale factor that is multiplied by the
- base size. Default: 0.4
- ignore_items (list[str], optional): Specify the node information items
- that will not be shown. See ``MonitorNode._default_ignore_items``
- for the default setting.
-
- Example::
- >>> cfg = dict(
- ... type='MonitorNode',
- ... name='monitor',
- ... enable_key='m',
- ... enable=False,
- ... input_buffer='vis_notice',
- ... output_buffer='display')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- _default_ignore_items = ['timestamp']
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = False,
- x_offset=20,
- y_offset=20,
- y_delta=15,
- text_color='black',
- background_color=(255, 183, 0),
- text_scale=0.4,
- ignore_items: Optional[List[str]] = None):
- super().__init__(name=name, enable_key=enable_key, enable=enable)
-
- self.x_offset = x_offset
- self.y_offset = y_offset
- self.y_delta = y_delta
- self.text_color = color_val(text_color)
- self.background_color = color_val(background_color)
- self.text_scale = text_scale
- if ignore_items is None:
- self.ignore_items = self._default_ignore_items
- else:
- self.ignore_items = ignore_items
-
- self.register_input_buffer(input_buffer, 'input', trigger=True)
- self.register_output_buffer(output_buffer)
-
- def process(self, input_msgs):
- input_msg = input_msgs['input']
-
- input_msg.update_route_info(
- node_name='System Info',
- node_type='none',
- info=self._get_system_info())
-
- img = input_msg.get_image()
- route_info = input_msg.get_route_info()
- img = self._show_route_info(img, route_info)
-
- input_msg.set_image(img)
- return input_msg
-
- def _get_system_info(self):
- """Get the system information including CPU and memory usage.
-
- Returns:
- dict: The system information items.
- """
- sys_info = {}
- if psutil_proc is not None:
- sys_info['CPU(%)'] = psutil_proc.cpu_percent()
- sys_info['Memory(%)'] = psutil_proc.memory_percent()
- return sys_info
-
- def _show_route_info(self, img: np.ndarray,
- route_info: List[Dict]) -> np.ndarray:
- """Show the route information in the frame.
-
- Args:
- img (np.ndarray): The frame image.
- route_info (list[dict]): The route information of the frame.
-
- Returns:
- np.ndarray: The processed image.
- """
- canvas = np.full(img.shape, self.background_color, dtype=img.dtype)
-
- x = self.x_offset
- y = self.y_offset
-
- max_len = 0
-
- def _put_line(line=''):
- nonlocal y, max_len
- cv2.putText(canvas, line, (x, y), cv2.FONT_HERSHEY_DUPLEX,
- self.text_scale, self.text_color, 1)
- y += self.y_delta
- max_len = max(max_len, len(line))
-
- for node_info in route_info:
- title = f'{node_info["node"]}({node_info["node_type"]})'
- _put_line(title)
- for k, v in node_info['info'].items():
- if k in self.ignore_items:
- continue
- if isinstance(v, float):
- v = f'{v:.1f}'
- _put_line(f' {k}: {v}')
-
- x1 = max(0, self.x_offset)
- x2 = min(img.shape[1], int(x + max_len * self.text_scale * 20))
- y1 = max(0, self.y_offset - self.y_delta)
- y2 = min(img.shape[0], y)
-
- src1 = canvas[y1:y2, x1:x2]
- src2 = img[y1:y2, x1:x2]
- img[y1:y2, x1:x2] = cv2.addWeighted(src1, 0.5, src2, 0.5, 0)
-
- return img
-
- def bypass(self, input_msgs):
- return input_msgs['input']
diff --git a/mmpose/apis/webcam/nodes/helper_nodes/object_assigner_node.py b/mmpose/apis/webcam/nodes/helper_nodes/object_assigner_node.py
deleted file mode 100644
index a1a7804ab4..0000000000
--- a/mmpose/apis/webcam/nodes/helper_nodes/object_assigner_node.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import time
-from typing import List, Union
-
-from mmpose.utils.timer import RunningAverage
-from ..node import Node
-from ..registry import NODES
-
-
-@NODES.register_module()
-class ObjectAssignerNode(Node):
- """Assign the object information to the frame message.
-
- :class:`ObjectAssignerNode` enables asynchronous processing of model
- inference and video I/O, so the video will be captured and displayed
- smoothly regardless of the model inference speed. Specifically,
- :class:`ObjectAssignerNode` takes messages from both model branch and
- video I/O branch as its input, indicated as "object message" and "frame
-    message" respectively. When an object message arrives, it updates the
-    latest object information; when a frame message arrives, the frame is
-    assigned the latest object information and then output.
-
-    Note that if the webcam executor is set to synchronous mode, the
-    behavior of :class:`ObjectAssignerNode` differs: when an object
-    message arrives, it triggers an output of itself, and frame
-    messages are ignored.
-
- Args:
- name (str): The node name (also thread name)
- frame_buffer (str): Buffer name for frame messages
- object_buffer (str): Buffer name for object messages
- output_buffer (str): The name(s) of the output buffer(s)
-
- Example::
- >>> cfg =dict(
- ... type='ObjectAssignerNode',
- ... name='object assigner',
- ... frame_buffer='_frame_',
- ... # `_frame_` is an executor-reserved buffer
- ... object_buffer='animal_pose',
- ... output_buffer='frame')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- def __init__(self, name: str, frame_buffer: str, object_buffer: str,
- output_buffer: Union[str, List[str]]):
- super().__init__(name=name, enable=True)
- self.synchronous = None
-
- # Cache the latest model result
- self.last_object_msg = None
- self.last_output_msg = None
-
- # Inference speed analysis
- self.frame_fps = RunningAverage(window=10)
- self.frame_lag = RunningAverage(window=10)
- self.object_fps = RunningAverage(window=10)
- self.object_lag = RunningAverage(window=10)
-
- # Register buffers
- # The trigger buffer depends on the executor.synchronous attribute,
- # so it will be set later after the executor is assigned in
- # ``set_executor``.
- self.register_input_buffer(object_buffer, 'object', trigger=False)
- self.register_input_buffer(frame_buffer, 'frame', trigger=False)
- self.register_output_buffer(output_buffer)
-
- def set_executor(self, executor):
- super().set_executor(executor)
- # Set synchronous according to the executor
- if executor.synchronous:
- self.synchronous = True
- trigger = 'object'
- else:
- self.synchronous = False
- trigger = 'frame'
-
- # Set trigger input buffer according to the synchronous setting
- for buffer_info in self._input_buffers:
- if buffer_info.input_name == trigger:
- buffer_info.trigger = True
-
- def process(self, input_msgs):
- object_msg = input_msgs['object']
-
- # Update last result
- if object_msg is not None:
- # Update result FPS
- if self.last_object_msg is not None:
- self.object_fps.update(
- 1.0 /
- (object_msg.timestamp - self.last_object_msg.timestamp))
- # Update inference latency
- self.object_lag.update(time.time() - object_msg.timestamp)
- # Update last inference result
- self.last_object_msg = object_msg
-
- if not self.synchronous:
- # Asynchronous mode:
- # Assign the latest object information to the
- # current frame.
- frame_msg = input_msgs['frame']
-
- self.frame_lag.update(time.time() - frame_msg.timestamp)
-
- # Assign objects to frame
- if self.last_object_msg is not None:
- frame_msg.update_objects(self.last_object_msg.get_objects())
- frame_msg.merge_route_info(
- self.last_object_msg.get_route_info())
-
- output_msg = frame_msg
-
- else:
- # Synchronous mode:
- # The current frame will be ignored. Instead,
- # the frame from which the latest object information is obtained
- # will be used.
- self.frame_lag.update(time.time() - object_msg.timestamp)
- output_msg = object_msg
-
- # Update frame fps and lag
- if self.last_output_msg is not None:
- self.frame_lag.update(time.time() - output_msg.timestamp)
- self.frame_fps.update(
- 1.0 / (output_msg.timestamp - self.last_output_msg.timestamp))
- self.last_output_msg = output_msg
-
- return output_msg
-
- def _get_node_info(self):
- info = super()._get_node_info()
- info['object_fps'] = self.object_fps.average()
- info['object_lag (ms)'] = self.object_lag.average() * 1000
- info['frame_fps'] = self.frame_fps.average()
- info['frame_lag (ms)'] = self.frame_lag.average() * 1000
- return info
diff --git a/mmpose/apis/webcam/nodes/helper_nodes/recorder_node.py b/mmpose/apis/webcam/nodes/helper_nodes/recorder_node.py
deleted file mode 100644
index b35a778692..0000000000
--- a/mmpose/apis/webcam/nodes/helper_nodes/recorder_node.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from queue import Full, Queue
-from threading import Thread
-from typing import List, Union
-
-import cv2
-
-from ..node import Node
-from ..registry import NODES
-
-
-@NODES.register_module()
-class RecorderNode(Node):
- """Record the video frames into a local file.
-
-    :class:`RecorderNode` uses the OpenCV backend to record the video.
-    Recording is performed in a separate thread to avoid blocking the data
-    stream. A buffer queue is used to cache the incoming frame images.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- out_video_file (str): The path of the output video file
- out_video_fps (int): The frame rate of the output video. Default: 30
- out_video_codec (str): The codec of the output video. Default: 'mp4v'
-        buffer_size (int): Size of the buffer queue that caches the incoming
-            frame images. Default: 30
- enable (bool): Default enable/disable status. Default: ``True``.
-
- Example::
- >>> cfg = dict(
- ... type='RecorderNode',
- ... name='recorder',
- ... out_video_file='webcam_demo.mp4',
- ... input_buffer='display',
- ... output_buffer='_display_'
- ... # `_display_` is an executor-reserved buffer
- ... )
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- def __init__(
- self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- out_video_file: str,
- out_video_fps: int = 30,
- out_video_codec: str = 'mp4v',
- buffer_size: int = 30,
- enable: bool = True,
- ):
- super().__init__(name=name, enable_key=None, enable=enable)
-
- self.queue = Queue(maxsize=buffer_size)
- self.out_video_file = out_video_file
- self.out_video_fps = out_video_fps
- self.out_video_codec = out_video_codec
- self.vwriter = None
-
- # Register buffers
- self.register_input_buffer(input_buffer, 'input', trigger=True)
- self.register_output_buffer(output_buffer)
-
- # Start a new thread to write frame
- self.t_record = Thread(target=self._record, args=(), daemon=True)
- self.t_record.start()
-
- def process(self, input_msgs):
-
- input_msg = input_msgs['input']
- img = input_msg.get_image() if input_msg is not None else None
- img_queued = False
-
- while not img_queued:
- try:
- self.queue.put(img, timeout=1)
- img_queued = True
- self.logger.info('Recorder received one frame.')
- except Full:
-                self.logger.warn('Recorder jammed!')
-
- return input_msg
-
- def _record(self):
-        """Target function of the recording thread, which fetches frame
-        images from the buffer queue and writes them into the file."""
-
- while True:
-
- img = self.queue.get()
-
- if img is None:
- break
-
- if self.vwriter is None:
- fourcc = cv2.VideoWriter_fourcc(*self.out_video_codec)
- fps = self.out_video_fps
- frame_size = (img.shape[1], img.shape[0])
- self.vwriter = cv2.VideoWriter(self.out_video_file, fourcc,
- fps, frame_size)
- assert self.vwriter.isOpened()
-
- self.vwriter.write(img)
-
- self.logger.info('Recorder released.')
- if self.vwriter is not None:
- self.vwriter.release()
-
- def on_exit(self):
- try:
-            # Try putting a None into the frame queue so that self.vwriter
-            # will be released after all queued frames are written to file.
- self.queue.put(None, timeout=1)
- self.t_record.join(timeout=1)
- except Full:
- pass
-
- if self.t_record.is_alive():
- # Force to release self.vwriter
- self.logger.warn('Recorder forced release!')
- if self.vwriter is not None:
- self.vwriter.release()
diff --git a/mmpose/apis/webcam/nodes/model_nodes/__init__.py b/mmpose/apis/webcam/nodes/model_nodes/__init__.py
deleted file mode 100644
index a9a116bfec..0000000000
--- a/mmpose/apis/webcam/nodes/model_nodes/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .detector_node import DetectorNode
-from .pose_estimator_node import TopdownPoseEstimatorNode
-
-__all__ = ['DetectorNode', 'TopdownPoseEstimatorNode']
diff --git a/mmpose/apis/webcam/nodes/model_nodes/detector_node.py b/mmpose/apis/webcam/nodes/model_nodes/detector_node.py
deleted file mode 100644
index 350831fe62..0000000000
--- a/mmpose/apis/webcam/nodes/model_nodes/detector_node.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Optional, Union
-
-import numpy as np
-
-from mmpose.utils import adapt_mmdet_pipeline
-from ...utils import get_config_path
-from ..node import Node
-from ..registry import NODES
-
-try:
- from mmdet.apis import inference_detector, init_detector
- has_mmdet = True
-except (ImportError, ModuleNotFoundError):
- has_mmdet = False
-
-
-@NODES.register_module()
-class DetectorNode(Node):
- """Detect objects from the frame image using MMDetection model.
-
- Note that MMDetection is required for this node. Please refer to
- `MMDetection documentation `_ for the installation guide.
-
- Parameters:
- name (str): The node name (also thread name)
-        model_config (str): The model config file
- model_checkpoint (str): The model checkpoint file
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ascii code of a key. Please note: (1) If ``enable_key`` is set,
-            the ``bypass()`` method needs to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
-        device (str): Specify the device to hold model weights and run
-            model inference. Default: ``'cuda:0'``
- bbox_thr (float): Set a threshold to filter out objects with low bbox
- scores. Default: 0.5
-        multi_input (bool): Whether to load all frames in the input buffer.
-            If True, all buffered frames are loaded and stacked, and the
-            latest frame is used to detect objects of interest. Default: False
-
- Example::
- >>> cfg = dict(
- ... type='DetectorNode',
- ... name='detector',
- ... model_config='demo/mmdetection_cfg/'
- ... 'ssdlite_mobilenetv2_scratch_600e_coco.py',
- ... model_checkpoint='https://download.openmmlab.com'
- ... '/mmdetection/v2.0/ssd/'
- ... 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
- ... 'scratch_600e_coco_20210629_110627-974d9307.pth',
- ... # `_input_` is an executor-reserved buffer
- ... input_buffer='_input_',
- ... output_buffer='det_result')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- def __init__(self,
- name: str,
- model_config: str,
- model_checkpoint: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- device: str = 'cuda:0',
- bbox_thr: float = 0.5,
- multi_input: bool = False):
- # Check mmdetection is installed
- assert has_mmdet, \
- f'MMDetection is required for {self.__class__.__name__}.'
-
- super().__init__(
- name=name,
- enable_key=enable_key,
- enable=enable,
- multi_input=multi_input)
-
- self.model_config = get_config_path(model_config, 'mmdet')
- self.model_checkpoint = model_checkpoint
- self.device = device.lower()
- self.bbox_thr = bbox_thr
-
- # Init model
- self.model = init_detector(
- self.model_config, self.model_checkpoint, device=self.device)
- self.model.cfg = adapt_mmdet_pipeline(self.model.cfg)
-
- # Register buffers
- self.register_input_buffer(input_buffer, 'input', trigger=True)
- self.register_output_buffer(output_buffer)
-
- def bypass(self, input_msgs):
- return input_msgs['input']
-
- def process(self, input_msgs):
- input_msg = input_msgs['input']
-
- if self.multi_input:
- imgs = [frame.get_image() for frame in input_msg]
- input_msg = input_msg[-1]
-
- img = input_msg.get_image()
-
- preds = inference_detector(self.model, img)
- objects = self._post_process(preds)
- input_msg.update_objects(objects)
-
- if self.multi_input:
- input_msg.set_image(np.stack(imgs, axis=0))
-
- return input_msg
-
- def _post_process(self, preds) -> List[Dict]:
- """Post-process the predictions of MMDetection model."""
- instances = preds.pred_instances.cpu().numpy()
-
- classes = self.model.dataset_meta['classes']
- if isinstance(classes, str):
- classes = (classes, )
-
- objects = []
- for i in range(len(instances)):
- if instances.scores[i] < self.bbox_thr:
- continue
- class_id = instances.labels[i]
- obj = {
- 'class_id': class_id,
- 'label': classes[class_id],
- 'bbox': instances.bboxes[i],
- 'det_model_cfg': self.model.cfg,
- 'dataset_meta': self.model.dataset_meta.copy(),
- }
- objects.append(obj)
- return objects
diff --git a/mmpose/apis/webcam/nodes/model_nodes/pose_estimator_node.py b/mmpose/apis/webcam/nodes/model_nodes/pose_estimator_node.py
deleted file mode 100644
index 64691cf560..0000000000
--- a/mmpose/apis/webcam/nodes/model_nodes/pose_estimator_node.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from dataclasses import dataclass
-from typing import List, Optional, Union
-
-import numpy as np
-
-from mmpose.apis import inference_topdown, init_model
-from ...utils import get_config_path
-from ..node import Node
-from ..registry import NODES
-
-
-@dataclass
-class TrackInfo:
- """Dataclass for object tracking information."""
- next_id: int = 0
- last_objects: List = None
-
-
-@NODES.register_module()
-class TopdownPoseEstimatorNode(Node):
- """Perform top-down pose estimation using MMPose model.
-
- The node should be placed after an object detection node.
-
- Parameters:
- name (str): The node name (also thread name)
- model_cfg (str): The model config file
-        model_config (str): The model config file
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ascii code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method need to be overridden to define the node
-            the ``bypass()`` method needs to be overridden to define the node
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
-        device (str): Specify the device to hold model weights and run
-            model inference. Default: ``'cuda:0'``
- class_ids (list[int], optional): Specify the object category indices
- to apply pose estimation. If both ``class_ids`` and ``labels``
- are given, ``labels`` will be ignored. If neither is given, pose
- estimation will be applied for all objects. Default: ``None``
- labels (list[str], optional): Specify the object category names to
- apply pose estimation. See also ``class_ids``. Default: ``None``
- bbox_thr (float): Set a threshold to filter out objects with low bbox
- scores. Default: 0.5
-
- Example::
- >>> cfg = dict(
- ... type='TopdownPoseEstimatorNode',
- ... name='human pose estimator',
- ... model_config='configs/wholebody/2d_kpt_sview_rgb_img/'
- ... 'topdown_heatmap/coco-wholebody/'
- ... 'vipnas_mbv3_coco_wholebody_256x192_dark.py',
- ... model_checkpoint='https://download.openmmlab.com/mmpose/'
- ... 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
- ... '-e2158108_20211205.pth',
- ... labels=['person'],
- ... input_buffer='det_result',
- ... output_buffer='human_pose')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- def __init__(self,
- name: str,
- model_config: str,
- model_checkpoint: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- device: str = 'cuda:0',
- class_ids: Optional[List[int]] = None,
- labels: Optional[List[str]] = None,
- bbox_thr: float = 0.5):
- super().__init__(name=name, enable_key=enable_key, enable=enable)
-
- # Init model
- self.model_config = get_config_path(model_config, 'mmpose')
- self.model_checkpoint = model_checkpoint
- self.device = device.lower()
-
- self.class_ids = class_ids
- self.labels = labels
- self.bbox_thr = bbox_thr
-
- # Init model
- self.model = init_model(
- self.model_config, self.model_checkpoint, device=self.device)
-
- # Register buffers
- self.register_input_buffer(input_buffer, 'input', trigger=True)
- self.register_output_buffer(output_buffer)
-
- def bypass(self, input_msgs):
- return input_msgs['input']
-
- def process(self, input_msgs):
-
- input_msg = input_msgs['input']
- img = input_msg.get_image()
-
- if self.class_ids:
- objects = input_msg.get_objects(
- lambda x: x.get('class_id') in self.class_ids)
- elif self.labels:
- objects = input_msg.get_objects(
- lambda x: x.get('label') in self.labels)
- else:
- objects = input_msg.get_objects()
-
- if len(objects) > 0:
- # Inference pose
- bboxes = np.stack([object['bbox'] for object in objects])
- pose_results = inference_topdown(self.model, img, bboxes)
-
- # Update objects
- for pose_result, object in zip(pose_results, objects):
- pred_instances = pose_result.pred_instances
- object['keypoints'] = pred_instances.keypoints[0]
- object['keypoint_scores'] = pred_instances.keypoint_scores[0]
-
- dataset_meta = self.model.dataset_meta.copy()
- dataset_meta.update(object.get('dataset_meta', dict()))
- object['dataset_meta'] = dataset_meta
- object['pose_model_cfg'] = self.model.cfg
-
- input_msg.update_objects(objects)
-
- return input_msg
diff --git a/mmpose/apis/webcam/nodes/node.py b/mmpose/apis/webcam/nodes/node.py
deleted file mode 100644
index 3d34ae1cc0..0000000000
--- a/mmpose/apis/webcam/nodes/node.py
+++ /dev/null
@@ -1,407 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-import time
-from abc import ABCMeta, abstractmethod
-from dataclasses import dataclass
-from threading import Thread
-from typing import Callable, Dict, List, Optional, Tuple, Union
-
-from mmengine import is_method_overridden
-
-from mmpose.utils import StopWatch
-from ..utils import Message, VideoEndingMessage, limit_max_fps
-
-
-@dataclass
-class BufferInfo():
- """Dataclass for buffer information."""
- buffer_name: str
- input_name: Optional[str] = None
- trigger: bool = False
-
-
-@dataclass
-class EventInfo():
- """Dataclass for event handler information."""
- event_name: str
- is_keyboard: bool = False
- handler_func: Optional[Callable] = None
-
-
-class Node(Thread, metaclass=ABCMeta):
- """Base class for node, which is the interface of basic function module.
-
- :class:`Node` inherits :class:`threading.Thread`. All subclasses should
-    override the following methods:
-
- - ``process()``
- - ``bypass()`` (optional)
-
-
- Parameters:
- name (str): The node name (also thread name)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ascii code of a key. Please note: (1) If ``enable_key`` is set,
-            the ``bypass()`` method needs to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
-        max_fps (int): Maximum FPS of the node. This is to avoid the node
-            running unrestrictedly and causing excessive resource
-            consumption. Default: 30
-        input_check_interval (float): Minimum interval (in seconds) between
-            checking whether the input is ready. Default: 0.01
- enable (bool): Default enable/disable status. Default: ``True``
-        daemon (bool): Whether node is a daemon. Default: ``False``
-        multi_input (bool): Whether to load all messages in the buffer. If
-            False, only one message will be loaded each time. Default: ``False``
- """
-
- def __init__(self,
- name: str,
- enable_key: Optional[Union[str, int]] = None,
- max_fps: int = 30,
- input_check_interval: float = 0.01,
- enable: bool = True,
- daemon: bool = False,
- multi_input: bool = False):
- super().__init__(name=name, daemon=daemon)
- self._executor = None
- self._enabled = enable
- self.enable_key = enable_key
- self.max_fps = max_fps
- self.input_check_interval = input_check_interval
- self.multi_input = multi_input
-
-        # A partitioned view of the executor's buffer manager that only
-        # accesses the buffers related to this node
- self._buffer_manager = None
-
- # Input/output buffers are a list of registered buffers' information
- self._input_buffers = []
- self._output_buffers = []
-
- # Event manager is a copy of assigned executor's event manager
- self._event_manager = None
-
- # A list of registered event information
- # See register_event() for more information
-        # Note that we recommend handling events in nodes by registering
-        # handlers, but one can still access raw events via _event_manager
- self._registered_events = []
-
- # A list of (listener_threads, event_info)
- # See set_executor() for more information
- self._event_listener_threads = []
-
- # A timer to calculate node FPS
- self._timer = StopWatch(window=10)
-
- # Register enable toggle key
- if self.enable_key:
- # If the node allows toggling enable, it should override the
- # `bypass` method to define the node behavior when disabled.
- if not is_method_overridden('bypass', Node, self.__class__):
-                raise NotImplementedError(
-                    f'The node {self.__class__} does not support toggling '
-                    'enable but got argument `enable_key`. To support '
-                    'toggling enable, please override the `bypass` method.')
-
- self.register_event(
- event_name=self.enable_key,
- is_keyboard=True,
- handler_func=self._toggle_enable,
- )
-
- # Logger
- self.logger = logging.getLogger(f'Node "{self.name}"')
-
- @property
- def registered_buffers(self):
- return self._input_buffers + self._output_buffers
-
- @property
- def registered_events(self):
- return self._registered_events.copy()
-
- def _toggle_enable(self):
- self._enabled = not self._enabled
-
- def register_input_buffer(self,
- buffer_name: str,
- input_name: str,
- trigger: bool = False):
- """Register an input buffer, so that Node can automatically check if
- data is ready, fetch data from the buffers and format the inputs to
-        feed into the ``process()`` method.
-
- The subclass of Node should invoke `register_input_buffer` in its
- `__init__` method. This method can be invoked multiple times to
- register multiple input buffers.
-
- Args:
- buffer_name (str): The name of the buffer
- input_name (str): The name of the fetched message from the
- corresponding buffer
-            trigger (bool): A trigger input means the node will wait
-                until the input is ready before processing. Otherwise, a
-                non-trigger input will not block the processing; instead,
-                ``None`` will be fetched if the buffer is not ready.
- """
- buffer_info = BufferInfo(buffer_name, input_name, trigger)
- self._input_buffers.append(buffer_info)
-
- def register_output_buffer(self, buffer_name: Union[str, List[str]]):
- """Register one or multiple output buffers, so that the Node can
- automatically send the output of the `process` method to these buffers.
-
- The subclass of Node should invoke `register_output_buffer` in its
- `__init__` method.
-
- Args:
- buffer_name (str|list): The name(s) of the output buffer(s).
- """
-
- if not isinstance(buffer_name, list):
- buffer_name = [buffer_name]
-
- for name in buffer_name:
- buffer_info = BufferInfo(name)
- self._output_buffers.append(buffer_info)
-
- def register_event(self,
- event_name: str,
- is_keyboard: bool = False,
- handler_func: Optional[Callable] = None):
- """Register an event. All events used in the node need to be registered
-        in __init__(). If a callable handler is given, a thread will be
-        created to listen for and handle the event when the node starts.
-
-        Args:
-
- event_name (str|int): The event name. If is_keyboard==True,
- event_name should be a str (as char) or an int (as ascii)
-            is_keyboard (bool): Indicate whether it is a keyboard
- event. If True, the argument event_name will be regarded as a
- key indicator.
- handler_func (callable, optional): The event handler function,
-                which should be a callable object with no arguments or
- return values. Default: ``None``.
- """
- event_info = EventInfo(event_name, is_keyboard, handler_func)
- self._registered_events.append(event_info)
-
- def set_executor(self, executor):
- """Assign the node to an executor so the node can access the buffers
- and event manager of the executor.
-
- This method should be invoked by the executor instance.
-
- Args:
- executor (:obj:`WebcamExecutor`): The executor to hold the node
- """
- # Get partitioned buffer manager
- buffer_names = [
- buffer.buffer_name
- for buffer in self._input_buffers + self._output_buffers
- ]
- self._buffer_manager = executor.buffer_manager.get_sub_manager(
- buffer_names)
-
- # Get event manager
- self._event_manager = executor.event_manager
-
- def _get_input_from_buffer(self) -> Tuple[bool, Optional[Dict]]:
- """Get and pack input data.
-
- The function returns a tuple (status, data). If the trigger buffers
-        are ready, the status flag will be True, and the packed data is a dict
-        whose keys are input names and values are the fetched messages (an
-        unready non-trigger buffer gives ``None``). Otherwise, the status flag is
- False and the packed data is None.
-
- Returns:
- tuple[bool, dict]: The first item is a bool value indicating
-            whether input is ready (i.e., all trigger buffers are ready). The
-            second value is a dict of input names and messages.
- """
- buffer_manager = self._buffer_manager
-
- if buffer_manager is None:
- raise ValueError(f'Node "{self.name}": not set to an executor.')
-
- # Check that trigger buffers are ready
- for buffer_info in self._input_buffers:
- if buffer_info.trigger and buffer_manager.is_empty(
- buffer_info.buffer_name):
- return False, None
-
- # Default input
- result = {
- buffer_info.input_name: None
- for buffer_info in self._input_buffers
- }
-
- for buffer_info in self._input_buffers:
-
- while not buffer_manager.is_empty(buffer_info.buffer_name):
- msg = buffer_manager.get(buffer_info.buffer_name, block=False)
- if self.multi_input:
- if result[buffer_info.input_name] is None:
- result[buffer_info.input_name] = []
- result[buffer_info.input_name].append(msg)
- else:
- result[buffer_info.input_name] = msg
- break
-
- # Return unsuccessful flag if any trigger input is unready
- if buffer_info.trigger and result[buffer_info.input_name] is None:
- return False, None
-
- return True, result
-
- def _send_output_to_buffers(self, output_msg):
- """Send output of ``process()`` to the registered output buffers.
-
- Args:
- output_msg (Message): output message
- """
- for buffer_info in self._output_buffers:
- buffer_name = buffer_info.buffer_name
- self._buffer_manager.put_force(buffer_name, output_msg)
-
- @abstractmethod
- def process(self, input_msgs: Dict[str, Message]) -> Union[Message, None]:
- """The method that implements the function of the node.
-
- This method will be invoked when the node is enabled and the input
- data is ready. All subclasses of Node should override this method.
-
- Args:
- input_msgs (dict[str, :obj:`Message`]): The input data collected
- from the buffers. For each item, the key is the `input_name`
- of the registered input buffer, and the value is a Message
- instance fetched from the buffer (or None if the buffer is
- non-trigger and not ready).
-
- Returns:
-            Message: The output message of the node which will be sent to all
- registered output buffers.
- """
-
- def bypass(self, input_msgs: Dict[str, Message]) -> Union[Message, None]:
- """The method that defines the node behavior when disabled.
-
- Note that a node must override this method if it has `enable_key`.
- This method has the same signature as ``process()``.
-
- Args:
- input_msgs (dict[str, :obj:`Message`]): The input data collected
- from the buffers. For each item, the key is the `input_name`
- of the registered input buffer, and the value is a Message
- instance fetched from the buffer (or None if the buffer is
- non-trigger and not ready).
-
- Returns:
- Message: The output message of the node, which will be sent to all
- registered output buffers.
- """
- raise NotImplementedError
-
- def _get_node_info(self) -> Dict:
- """Get route information of the node.
-
- Default information includes:
- - ``'fps'``: The processing speed of the node
- - ``'timestamp'``: The time that this method is invoked
-
- Subclasses can override this method to customize the node information.
-
- Returns:
- dict: The items of node information
- """
- info = {'fps': self._timer.report('_FPS_'), 'timestamp': time.time()}
- return info
-
- def on_exit(self):
- """This method will be invoked on event `_exit_`.
-
- Subclasses should override this method to specify the exiting
- behavior.
- """
-
- def run(self):
- """Method representing the Node's activity.
-
- This method overrides the standard ``run()`` method of ``Thread``.
- Subclasses of :class:`Node` should not override it.
- """
-
- self.logger.info('Process starts.')
-
- # Create event listener threads
- for event_info in self._registered_events:
-
- if event_info.handler_func is None:
- continue
-
- def event_listener():
- while True:
- with self._event_manager.wait_and_handle(
- event_info.event_name, event_info.is_keyboard):
- event_info.handler_func()
-
- t_listener = Thread(target=event_listener, args=(), daemon=True)
- t_listener.start()
- self._event_listener_threads.append(t_listener)
-
- # Loop
- while True:
- # Exit
- if self._event_manager.is_set('_exit_'):
- self.on_exit()
- break
-
- # Check if input is ready
- input_status, input_msgs = self._get_input_from_buffer()
-
- # Input is not ready
- if not input_status:
- time.sleep(self.input_check_interval)
- continue
-
- # If a VideoEndingMessage is received, broadcast the signal
- # without invoking process() or bypass()
- video_ending = False
- for _, msg in input_msgs.items():
- if isinstance(msg, VideoEndingMessage):
- self._send_output_to_buffers(msg)
- video_ending = True
- break
-
- if video_ending:
- self.on_exit()
- break
-
- # Check if enabled
- if not self._enabled:
- # Override bypass method to define node behavior when disabled
- output_msg = self.bypass(input_msgs)
- else:
- with self._timer.timeit():
- with limit_max_fps(self.max_fps):
- # Process
- output_msg = self.process(input_msgs)
-
- if output_msg:
- # Update route information
- node_info = self._get_node_info()
- output_msg.update_route_info(node=self, info=node_info)
-
- # Send output message
- if output_msg is not None:
- self._send_output_to_buffers(output_msg)
-
- self.logger.info('Process ends.')
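The deleted `_get_input_from_buffer()` logic can be sketched in isolation: drain each input buffer, keep one message per input, and fail fast when a trigger input has nothing ready. This is a minimal stand-alone sketch using plain `queue.Queue` objects in place of `BufferManager`; the function and buffer names are illustrative, not part of the removed API.

```python
from queue import Empty, Queue

def get_input_from_buffers(buffers, trigger_names):
    # One slot per registered input; None means "buffer not ready".
    result = {name: None for name in buffers}
    for name, buf in buffers.items():
        try:
            # Non-multi-input case: take the first available message.
            result[name] = buf.get(block=False)
        except Empty:
            pass
        # A trigger input with no message means the node cannot run yet.
        if name in trigger_names and result[name] is None:
            return False, None
    return True, result

buffers = {'frame': Queue(), 'detection': Queue()}
buffers['frame'].put('frame_0')
ok, msgs = get_input_from_buffers(buffers, trigger_names={'frame'})
```

Non-trigger inputs (here `'detection'`) may legitimately be empty; only a missing trigger input aborts the attempt, which matches the early-return on `buffer_info.trigger` above.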
diff --git a/mmpose/apis/webcam/nodes/registry.py b/mmpose/apis/webcam/nodes/registry.py
deleted file mode 100644
index 06d39fed63..0000000000
--- a/mmpose/apis/webcam/nodes/registry.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmengine.registry import Registry
-
-NODES = Registry('node')
diff --git a/mmpose/apis/webcam/nodes/visualizer_nodes/__init__.py b/mmpose/apis/webcam/nodes/visualizer_nodes/__init__.py
deleted file mode 100644
index fad7e30376..0000000000
--- a/mmpose/apis/webcam/nodes/visualizer_nodes/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .bigeye_effect_node import BigeyeEffectNode
-from .notice_board_node import NoticeBoardNode
-from .object_visualizer_node import ObjectVisualizerNode
-from .sunglasses_effect_node import SunglassesEffectNode
-
-__all__ = [
- 'ObjectVisualizerNode', 'NoticeBoardNode', 'SunglassesEffectNode',
- 'BigeyeEffectNode'
-]
diff --git a/mmpose/apis/webcam/nodes/visualizer_nodes/bigeye_effect_node.py b/mmpose/apis/webcam/nodes/visualizer_nodes/bigeye_effect_node.py
deleted file mode 100644
index 3bbec3d670..0000000000
--- a/mmpose/apis/webcam/nodes/visualizer_nodes/bigeye_effect_node.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from itertools import groupby
-from typing import Dict, List, Optional, Union
-
-import cv2
-import numpy as np
-
-from ...utils import get_eye_keypoint_ids
-from ..base_visualizer_node import BaseVisualizerNode
-from ..registry import NODES
-
-
-@NODES.register_module()
-class BigeyeEffectNode(BaseVisualizerNode):
- """Apply big-eye effect to the objects with eye keypoints in the frame.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ASCII code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method needs to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
- kpt_thr (float): The score threshold of valid keypoints. Default: 0.5
-
- Example::
- >>> cfg = dict(
- ... type='BigeyeEffectNode',
- ... name='big-eye',
- ... enable_key='b',
- ... enable=False,
- ... input_buffer='vis',
- ... output_buffer='vis_bigeye')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- kpt_thr: float = 0.5):
-
- super().__init__(
- name=name,
- input_buffer=input_buffer,
- output_buffer=output_buffer,
- enable_key=enable_key,
- enable=enable)
- self.kpt_thr = kpt_thr
-
- def draw(self, input_msg):
- canvas = input_msg.get_image()
-
- objects = input_msg.get_objects(lambda x:
- ('keypoints' in x and 'bbox' in x))
-
- for dataset_meta, group in groupby(objects,
- lambda x: x['dataset_meta']):
- left_eye_index, right_eye_index = get_eye_keypoint_ids(
- dataset_meta)
- canvas = self.apply_bigeye_effect(canvas, group, left_eye_index,
- right_eye_index)
- return canvas
-
- def apply_bigeye_effect(self, canvas: np.ndarray, objects: List[Dict],
- left_eye_index: int,
- right_eye_index: int) -> np.ndarray:
- """Apply big-eye effect.
-
- Args:
- canvas (np.ndarray): The image to apply the effect
- objects (list[dict]): The object list with bbox and keypoints
- - "bbox" ([K, 4(or 5)]): bbox in [x1, y1, x2, y2, (score)]
- - "keypoints" ([K,3]): keypoints in [x, y, score]
- left_eye_index (int): Keypoint index of left eye
- right_eye_index (int): Keypoint index of right eye
-
- Returns:
- np.ndarray: Processed image.
- """
-
- xx, yy = np.meshgrid(
- np.arange(canvas.shape[1]), np.arange(canvas.shape[0]))
- xx = xx.astype(np.float32)
- yy = yy.astype(np.float32)
-
- for obj in objects:
- bbox = obj['bbox']
- kpts = obj['keypoints']
- kpt_scores = obj['keypoint_scores']
-
- if kpt_scores[left_eye_index] < self.kpt_thr or kpt_scores[
- right_eye_index] < self.kpt_thr:
- continue
-
- kpt_leye = kpts[left_eye_index, :2]
- kpt_reye = kpts[right_eye_index, :2]
- for xc, yc in [kpt_leye, kpt_reye]:
-
- # distortion parameters
- k1 = 0.001
- epe = 1e-5
-
- scale = (bbox[2] - bbox[0])**2 + (bbox[3] - bbox[1])**2
- r2 = ((xx - xc)**2 + (yy - yc)**2)
- r2 = (r2 + epe) / scale # normalized by bbox scale
-
- xx = (xx - xc) / (1 + k1 / r2) + xc
- yy = (yy - yc) / (1 + k1 / r2) + yc
-
- canvas = cv2.remap(
- canvas,
- xx,
- yy,
- interpolation=cv2.INTER_AREA,
- borderMode=cv2.BORDER_REPLICATE)
-
- return canvas
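The distortion in `apply_bigeye_effect()` is a radial pull toward each eye center: source coordinates are contracted by a factor `1 + k1 / r2`, so `cv2.remap` samples pixels closer to the eye and magnifies it. A numpy-only sketch of the coordinate mapping (the constants mirror those above; no OpenCV is needed to see the geometry, and the function name is made up for illustration):

```python
import numpy as np

def bigeye_map(w, h, center, bbox_scale, k1=0.001, eps=1e-5):
    """Source-coordinate map for a magnifying 'big-eye' distortion."""
    xx, yy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    xc, yc = center
    # Squared distance to the eye center, normalized by bbox scale.
    r2 = ((xx - xc) ** 2 + (yy - yc) ** 2 + eps) / bbox_scale
    # Contract source coords toward the center; remap() then magnifies.
    xx = (xx - xc) / (1 + k1 / r2) + xc
    yy = (yy - yc) / (1 + k1 / r2) + yc
    return xx, yy

xx, yy = bigeye_map(8, 8, center=(4.0, 4.0), bbox_scale=100.0)
```

Every remapped coordinate lies strictly between the eye center and the original pixel position, with the strongest contraction near the center where `r2` is small.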
diff --git a/mmpose/apis/webcam/nodes/visualizer_nodes/notice_board_node.py b/mmpose/apis/webcam/nodes/visualizer_nodes/notice_board_node.py
deleted file mode 100644
index 0578ec38eb..0000000000
--- a/mmpose/apis/webcam/nodes/visualizer_nodes/notice_board_node.py
+++ /dev/null
@@ -1,128 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List, Optional, Tuple, Union
-
-import cv2
-import numpy as np
-from mmcv import color_val
-
-from ...utils import FrameMessage
-from ..base_visualizer_node import BaseVisualizerNode
-from ..registry import NODES
-
-
-@NODES.register_module()
-class NoticeBoardNode(BaseVisualizerNode):
- """Show text messages in the frame.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ASCII code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method needs to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
- content_lines (list[str], optional): The lines of text message to show
- in the frame. If not given, a default message will be shown.
- Default: ``None``
- x_offset (int): The position of the notice board's left border in
- pixels. Default: 20
- y_offset (int): The position of the notice board's top border in
- pixels. Default: 20
- y_delta (int): The line height in pixels. Default: 15
- text_color (str|tuple): The font color represented in a color name or
- a BGR tuple. Default: ``'black'``
- background_color (str|tuple): The background color represented in a
- color name or a BGR tuple. Default: (255, 183, 0)
- text_scale (float): The font scale factor that is multiplied by the
- base size. Default: 0.4
-
- Example::
- >>> cfg = dict(
- ... type='NoticeBoardNode',
- ... name='instruction',
- ... enable_key='h',
- ... enable=True,
- ... input_buffer='vis_bigeye',
- ... output_buffer='vis_notice',
- ... content_lines=[
- ... 'This is a demo for pose visualization and simple image '
- ... 'effects. Have fun!', '', 'Hot-keys:',
- ... '"v": Pose estimation result visualization',
- ... '"s": Sunglasses effect B-)', '"b": Big-eye effect 0_0',
- ... '"h": Show help information',
- ... '"m": Show diagnostic information', '"q": Exit'
- ... ],
- ... )
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- default_content_lines = ['This is a notice board!']
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- content_lines: Optional[List[str]] = None,
- x_offset: int = 20,
- y_offset: int = 20,
- y_delta: int = 15,
- text_color: Union[str, Tuple[int, int, int]] = 'black',
- background_color: Union[str, Tuple[int, int,
- int]] = (255, 183, 0),
- text_scale: float = 0.4):
- super().__init__(
- name=name,
- input_buffer=input_buffer,
- output_buffer=output_buffer,
- enable_key=enable_key,
- enable=enable)
-
- self.x_offset = x_offset
- self.y_offset = y_offset
- self.y_delta = y_delta
- self.text_color = color_val(text_color)
- self.background_color = color_val(background_color)
- self.text_scale = text_scale
-
- if content_lines:
- self.content_lines = content_lines
- else:
- self.content_lines = self.default_content_lines
-
- def draw(self, input_msg: FrameMessage) -> np.ndarray:
- img = input_msg.get_image()
- canvas = np.full(img.shape, self.background_color, dtype=img.dtype)
-
- x = self.x_offset
- y = self.y_offset
-
- max_len = max([len(line) for line in self.content_lines])
-
- def _put_line(line=''):
- nonlocal y
- cv2.putText(canvas, line, (x, y), cv2.FONT_HERSHEY_DUPLEX,
- self.text_scale, self.text_color, 1)
- y += self.y_delta
-
- for line in self.content_lines:
- _put_line(line)
-
- x1 = max(0, self.x_offset)
- x2 = min(img.shape[1], int(x + max_len * self.text_scale * 20))
- y1 = max(0, self.y_offset - self.y_delta)
- y2 = min(img.shape[0], y)
-
- src1 = canvas[y1:y2, x1:x2]
- src2 = img[y1:y2, x1:x2]
- img[y1:y2, x1:x2] = cv2.addWeighted(src1, 0.5, src2, 0.5, 0)
-
- return img
diff --git a/mmpose/apis/webcam/nodes/visualizer_nodes/object_visualizer_node.py b/mmpose/apis/webcam/nodes/visualizer_nodes/object_visualizer_node.py
deleted file mode 100644
index ef28a0804c..0000000000
--- a/mmpose/apis/webcam/nodes/visualizer_nodes/object_visualizer_node.py
+++ /dev/null
@@ -1,341 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-from itertools import groupby
-from typing import Dict, List, Optional, Tuple, Union
-
-import cv2
-import mmcv
-import numpy as np
-
-from ...utils import FrameMessage
-from ..base_visualizer_node import BaseVisualizerNode
-from ..registry import NODES
-
-
-def imshow_bboxes(img,
- bboxes,
- labels=None,
- colors='green',
- text_color='white',
- thickness=1,
- font_scale=0.5):
- """Draw bboxes with labels (optional) on an image. This is a wrapper of
- mmcv.imshow_bboxes.
-
- Args:
- img (str or ndarray): The image to be displayed.
- bboxes (ndarray): ndarray of shape (k, 4), each row is a bbox in
- format [x1, y1, x2, y2].
- labels (str or list[str], optional): labels of each bbox.
- colors (list[str or tuple or :obj:`Color`]): A list of colors.
- text_color (str or tuple or :obj:`Color`): Color of texts.
- thickness (int): Thickness of lines.
- font_scale (float): Font scales of texts.
-
- Returns:
- ndarray: The image with bboxes drawn on it.
- """
-
- # adapt to mmcv.imshow_bboxes input format
- bboxes = np.split(
- bboxes, bboxes.shape[0], axis=0) if bboxes.shape[0] > 0 else []
- if not isinstance(colors, list):
- colors = [colors for _ in range(len(bboxes))]
- colors = [mmcv.color_val(c) for c in colors]
- assert len(bboxes) == len(colors)
-
- img = mmcv.imshow_bboxes(
- img,
- bboxes,
- colors,
- top_k=-1,
- thickness=thickness,
- show=False,
- out_file=None)
-
- if labels is not None:
- if not isinstance(labels, list):
- labels = [labels for _ in range(len(bboxes))]
- assert len(labels) == len(bboxes)
-
- for bbox, label, color in zip(bboxes, labels, colors):
- if label is None:
- continue
- bbox_int = bbox[0, :4].astype(np.int32)
- # roughly estimate the proper font size
- text_size, text_baseline = cv2.getTextSize(label,
- cv2.FONT_HERSHEY_DUPLEX,
- font_scale, thickness)
- text_x1 = bbox_int[0]
- text_y1 = max(0, bbox_int[1] - text_size[1] - text_baseline)
- text_x2 = bbox_int[0] + text_size[0]
- text_y2 = text_y1 + text_size[1] + text_baseline
- cv2.rectangle(img, (text_x1, text_y1), (text_x2, text_y2), color,
- cv2.FILLED)
- cv2.putText(img, label, (text_x1, text_y2 - text_baseline),
- cv2.FONT_HERSHEY_DUPLEX, font_scale,
- mmcv.color_val(text_color), thickness)
-
- return img
-
-
-def imshow_keypoints(img,
- pose_result,
- skeleton=None,
- kpt_score_thr=0.3,
- pose_kpt_color=None,
- pose_link_color=None,
- radius=4,
- thickness=1,
- show_keypoint_weight=False):
- """Draw keypoints and links on an image.
-
- Args:
- img (str or Tensor): The image to draw poses on. If an image array
- is given, it will be modified in-place.
- pose_result (list[kpts]): The poses to draw. Each element kpts is
- a set of K keypoints as a Kx3 numpy.ndarray, where each
- keypoint is represented as x, y, score.
- kpt_score_thr (float, optional): Minimum score of keypoints
- to be shown. Default: 0.3.
- pose_kpt_color (np.array[Nx3]): Color of N keypoints. If None,
- the keypoint will not be drawn.
- pose_link_color (np.array[Mx3]): Color of M links. If None, the
- links will not be drawn.
- thickness (int): Thickness of lines.
- """
-
- img = mmcv.imread(img)
- img_h, img_w, _ = img.shape
-
- for kpts in pose_result:
-
- kpts = np.array(kpts, copy=False)
-
- # draw each point on image
- if pose_kpt_color is not None:
- assert len(pose_kpt_color) == len(kpts)
-
- for kid, kpt in enumerate(kpts):
- x_coord, y_coord, kpt_score = int(kpt[0]), int(kpt[1]), kpt[2]
-
- if kpt_score < kpt_score_thr or pose_kpt_color[kid] is None:
- # skip the point that should not be drawn
- continue
-
- color = tuple(int(c) for c in pose_kpt_color[kid])
- if show_keypoint_weight:
- img_copy = img.copy()
- cv2.circle(img_copy, (int(x_coord), int(y_coord)), radius,
- color, -1)
- transparency = max(0, min(1, kpt_score))
- cv2.addWeighted(
- img_copy,
- transparency,
- img,
- 1 - transparency,
- 0,
- dst=img)
- else:
- cv2.circle(img, (int(x_coord), int(y_coord)), radius,
- color, -1)
-
- # draw links
- if skeleton is not None and pose_link_color is not None:
- assert len(pose_link_color) == len(skeleton)
-
- for sk_id, sk in enumerate(skeleton):
- pos1 = (int(kpts[sk[0], 0]), int(kpts[sk[0], 1]))
- pos2 = (int(kpts[sk[1], 0]), int(kpts[sk[1], 1]))
-
- if (pos1[0] <= 0 or pos1[0] >= img_w or pos1[1] <= 0
- or pos1[1] >= img_h or pos2[0] <= 0 or pos2[0] >= img_w
- or pos2[1] <= 0 or pos2[1] >= img_h
- or kpts[sk[0], 2] < kpt_score_thr
- or kpts[sk[1], 2] < kpt_score_thr
- or pose_link_color[sk_id] is None):
- # skip the link that should not be drawn
- continue
- color = tuple(int(c) for c in pose_link_color[sk_id])
- if show_keypoint_weight:
- img_copy = img.copy()
- X = (pos1[0], pos2[0])
- Y = (pos1[1], pos2[1])
- mX = np.mean(X)
- mY = np.mean(Y)
- length = ((Y[0] - Y[1])**2 + (X[0] - X[1])**2)**0.5
- angle = math.degrees(math.atan2(Y[0] - Y[1], X[0] - X[1]))
- stickwidth = 2
- polygon = cv2.ellipse2Poly(
- (int(mX), int(mY)), (int(length / 2), int(stickwidth)),
- int(angle), 0, 360, 1)
- cv2.fillConvexPoly(img_copy, polygon, color)
- transparency = max(
- 0, min(1, 0.5 * (kpts[sk[0], 2] + kpts[sk[1], 2])))
- cv2.addWeighted(
- img_copy,
- transparency,
- img,
- 1 - transparency,
- 0,
- dst=img)
- else:
- cv2.line(img, pos1, pos2, color, thickness=thickness)
-
- return img
-
-
-@NODES.register_module()
-class ObjectVisualizerNode(BaseVisualizerNode):
- """Visualize the bounding box and keypoints of objects.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ASCII code of a key. Please note: (1) If ``enable_key`` is set,
- the ``bypass()`` method needs to be overridden to define the node
- behavior when disabled; (2) Some hot-keys are reserved for
- particular use. For example: 'q', 'Q' and 27 are used for exiting.
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``
- show_bbox (bool): Set ``True`` to show the bboxes of detection
- objects. Default: ``True``
- show_keypoint (bool): Set ``True`` to show the pose estimation
- results. Default: ``True``
- must_have_keypoint (bool): Only show objects that have keypoints.
- Default: ``False``
- kpt_thr (float): The threshold of keypoint score. Default: 0.3
- radius (int): The radius of keypoint. Default: 4
- thickness (int): The thickness of skeleton. Default: 2
- bbox_color (str|tuple|dict): The color of bboxes. If a single color is
- given (a str like 'green' or a BGR tuple like (0, 255, 0)), it
- will be used for all bboxes. If a dict is given, it will be used
- as a map from class labels to bbox colors. Default: ``'green'``
-
- Example::
- >>> cfg = dict(
- ... type='ObjectVisualizerNode',
- ... name='object visualizer',
- ... enable_key='v',
- ... enable=True,
- ... show_bbox=True,
- ... must_have_keypoint=False,
- ... show_keypoint=True,
- ... input_buffer='frame',
- ... output_buffer='vis')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- default_bbox_color = {
- 'person': (148, 139, 255),
- 'cat': (255, 255, 0),
- 'dog': (255, 255, 0),
- }
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- show_bbox: bool = True,
- show_keypoint: bool = True,
- must_have_keypoint: bool = False,
- kpt_thr: float = 0.3,
- radius: int = 4,
- thickness: int = 2,
- bbox_color: Optional[Union[str, Tuple, Dict]] = 'green'):
-
- super().__init__(
- name=name,
- input_buffer=input_buffer,
- output_buffer=output_buffer,
- enable_key=enable_key,
- enable=enable)
-
- self.kpt_thr = kpt_thr
- self.bbox_color = bbox_color
- self.show_bbox = show_bbox
- self.show_keypoint = show_keypoint
- self.must_have_keypoint = must_have_keypoint
- self.radius = radius
- self.thickness = thickness
-
- def _draw_bbox(self, canvas: np.ndarray, input_msg: FrameMessage):
- """Draw object bboxes."""
-
- if self.must_have_keypoint:
- objects = input_msg.get_objects(
- lambda x: 'bbox' in x and 'keypoints' in x)
- else:
- objects = input_msg.get_objects(lambda x: 'bbox' in x)
- # return if there are no detected objects
- if not objects:
- return canvas
-
- bboxes = [obj['bbox'] for obj in objects]
- labels = [obj.get('label', None) for obj in objects]
- default_color = (0, 255, 0)
-
- # Get bbox colors
- if isinstance(self.bbox_color, dict):
- colors = [
- self.bbox_color.get(label, default_color) for label in labels
- ]
- else:
- colors = self.bbox_color
-
- imshow_bboxes(
- canvas,
- np.vstack(bboxes),
- labels=labels,
- colors=colors,
- text_color='white',
- font_scale=0.5)
-
- return canvas
-
- def _draw_keypoint(self, canvas: np.ndarray, input_msg: FrameMessage):
- """Draw object keypoints."""
- objects = input_msg.get_objects(lambda x: 'pose_model_cfg' in x)
-
- # return if there is no object with keypoints
- if not objects:
- return canvas
-
- for model_cfg, group in groupby(objects,
- lambda x: x['pose_model_cfg']):
- dataset_info = objects[0]['dataset_meta']
- keypoints = [
- np.concatenate(
- (obj['keypoints'], obj['keypoint_scores'][:, None]),
- axis=1) for obj in group
- ]
- imshow_keypoints(
- canvas,
- keypoints,
- skeleton=dataset_info['skeleton_links'],
- kpt_score_thr=self.kpt_thr,
- pose_kpt_color=dataset_info['keypoint_colors'],
- pose_link_color=dataset_info['skeleton_link_colors'],
- radius=self.radius,
- thickness=self.thickness)
-
- return canvas
-
- def draw(self, input_msg: FrameMessage) -> np.ndarray:
- canvas = input_msg.get_image()
-
- if self.show_bbox:
- canvas = self._draw_bbox(canvas, input_msg)
-
- if self.show_keypoint:
- canvas = self._draw_keypoint(canvas, input_msg)
-
- return canvas
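One subtlety in the deleted `_draw_keypoint()` (and in the effect nodes) is that `itertools.groupby` only merges *consecutive* items with equal keys, so objects must already arrive grouped by model for the per-model loop to run once per model. A small illustration of that behavior, using made-up dicts as stand-ins for the message objects:

```python
from itertools import groupby

# groupby merges only consecutive equal keys; a non-adjacent repeat
# of 'pose_a' would start a second 'pose_a' group rather than
# extend the first.
objects = [
    {'model': 'pose_a', 'id': 0},
    {'model': 'pose_a', 'id': 1},
    {'model': 'pose_b', 'id': 2},
]
groups = [(key, [o['id'] for o in grp])
          for key, grp in groupby(objects, key=lambda o: o['model'])]
```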
diff --git a/mmpose/apis/webcam/nodes/visualizer_nodes/sunglasses_effect_node.py b/mmpose/apis/webcam/nodes/visualizer_nodes/sunglasses_effect_node.py
deleted file mode 100644
index 7c011177f5..0000000000
--- a/mmpose/apis/webcam/nodes/visualizer_nodes/sunglasses_effect_node.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from itertools import groupby
-from typing import Dict, List, Optional, Union
-
-import cv2
-import numpy as np
-
-from ...utils import get_eye_keypoint_ids, load_image_from_disk_or_url
-from ..base_visualizer_node import BaseVisualizerNode
-from ..registry import NODES
-
-
-@NODES.register_module()
-class SunglassesEffectNode(BaseVisualizerNode):
- """Apply sunglasses effect (draw sunglasses at the facial area)to the
- objects with eye keypoints in the frame.
-
- Args:
- name (str): The node name (also thread name)
- input_buffer (str): The name of the input buffer
- output_buffer (str|list): The name(s) of the output buffer(s)
- enable_key (str|int, optional): Set a hot-key to toggle enable/disable
- of the node. If an int value is given, it will be treated as an
- ASCII code of a key. Please note:
- 1. If ``enable_key`` is set, the ``bypass()`` method needs to be
- overridden to define the node behavior when disabled
- 2. Some hot-keys are reserved for particular use. For example:
- 'q', 'Q' and 27 are used for exiting
- Default: ``None``
- enable (bool): Default enable/disable status. Default: ``True``.
- kpt_thr (float): The score threshold of valid keypoints. Default: 0.5
- resource_img_path (str, optional): The resource image path or url.
- The image should be a pair of sunglasses with white background.
- If not specified, the url of a default image will be used. See
- ``SunglassesNode.default_resource_img_path``. Default: ``None``
-
- Example::
- >>> cfg = dict(
- ... type='SunglassesEffectNode',
- ... name='sunglasses',
- ... enable_key='s',
- ... enable=False,
- ... input_buffer='vis',
- ... output_buffer='vis_sunglasses')
-
- >>> from mmpose.apis.webcam.nodes import NODES
- >>> node = NODES.build(cfg)
- """
-
- # The image is attributed to:
- # "https://www.vecteezy.com/vector-art/1932353-summer-sunglasses-
- # accessory-isolated-icon" by Vecteezy
- default_resource_img_path = (
- 'https://user-images.githubusercontent.com/15977946/'
- '170850839-acc59e26-c6b3-48c9-a9ec-87556edb99ed.jpg')
-
- def __init__(self,
- name: str,
- input_buffer: str,
- output_buffer: Union[str, List[str]],
- enable_key: Optional[Union[str, int]] = None,
- enable: bool = True,
- kpt_thr: float = 0.5,
- resource_img_path: Optional[str] = None):
-
- super().__init__(
- name=name,
- input_buffer=input_buffer,
- output_buffer=output_buffer,
- enable_key=enable_key,
- enable=enable)
-
- if resource_img_path is None:
- resource_img_path = self.default_resource_img_path
-
- self.resource_img = load_image_from_disk_or_url(resource_img_path)
- self.kpt_thr = kpt_thr
-
- def draw(self, input_msg):
- canvas = input_msg.get_image()
-
- objects = input_msg.get_objects(lambda x: 'keypoints' in x)
-
- for dataset_meta, group in groupby(objects,
- lambda x: x['dataset_meta']):
- left_eye_index, right_eye_index = get_eye_keypoint_ids(
- dataset_meta)
- canvas = self.apply_sunglasses_effect(canvas, group,
- left_eye_index,
- right_eye_index)
- return canvas
-
- def apply_sunglasses_effect(self, canvas: np.ndarray, objects: List[Dict],
- left_eye_index: int,
- right_eye_index: int) -> np.ndarray:
- """Apply sunglasses effect.
-
- Args:
- canvas (np.ndarray): The image to apply the effect
- objects (list[dict]): The object list with keypoints
- - "keypoints" ([K,3]): keypoints in [x, y, score]
- left_eye_index (int): Keypoint index of the left eye
- right_eye_index (int): Keypoint index of the right eye
-
- Returns:
- np.ndarray: Processed image
- """
-
- hm, wm = self.resource_img.shape[:2]
- # anchor points in the sunglasses image
- pts_src = np.array([[0.3 * wm, 0.3 * hm], [0.3 * wm, 0.7 * hm],
- [0.7 * wm, 0.3 * hm], [0.7 * wm, 0.7 * hm]],
- dtype=np.float32)
-
- for obj in objects:
- kpts = obj['keypoints']
- kpt_scores = obj['keypoint_scores']
-
- if kpt_scores[left_eye_index] < self.kpt_thr or kpt_scores[
- right_eye_index] < self.kpt_thr:
- continue
-
- kpt_leye = kpts[left_eye_index, :2]
- kpt_reye = kpts[right_eye_index, :2]
- # orthogonal vector to the left-to-right eyes
- vo = 0.5 * (kpt_reye - kpt_leye)[::-1] * [-1, 1]
-
- # anchor points in the image by eye positions
- pts_tar = np.vstack(
- [kpt_reye + vo, kpt_reye - vo, kpt_leye + vo, kpt_leye - vo])
-
- h_mat, _ = cv2.findHomography(pts_src, pts_tar)
- patch = cv2.warpPerspective(
- self.resource_img,
- h_mat,
- dsize=(canvas.shape[1], canvas.shape[0]),
- borderValue=(255, 255, 255))
- # mask the white background area in the patch with a threshold 200
- mask = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
- mask = (mask < 200).astype(np.uint8)
- canvas = cv2.copyTo(patch, mask, canvas)
-
- return canvas
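The target anchor points in `apply_sunglasses_effect()` are built from the two eye keypoints plus a vector orthogonal to the inter-eye line, giving a quadrilateral that `cv2.findHomography` can map the resource image onto. The vector arithmetic alone, as a pure-numpy sketch (the helper name is invented for illustration):

```python
import numpy as np

def sunglasses_anchor_points(kpt_leye, kpt_reye):
    """Four target anchor points around the eyes for the homography."""
    kpt_leye = np.asarray(kpt_leye, dtype=np.float32)
    kpt_reye = np.asarray(kpt_reye, dtype=np.float32)
    # Vector orthogonal to the left-to-right eye direction, half its
    # length: reverse (dx, dy) to (dy, dx) and negate one component.
    vo = 0.5 * (kpt_reye - kpt_leye)[::-1] * [-1, 1]
    return np.vstack(
        [kpt_reye + vo, kpt_reye - vo, kpt_leye + vo, kpt_leye - vo])

pts = sunglasses_anchor_points((0.0, 0.0), (10.0, 0.0))
```

For horizontally aligned eyes 10 px apart, the quadrilateral extends 5 px above and below the eye line, matching the 0.3/0.7 anchor ratios in the resource image.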
diff --git a/mmpose/apis/webcam/utils/__init__.py b/mmpose/apis/webcam/utils/__init__.py
deleted file mode 100644
index 2911bcd5bf..0000000000
--- a/mmpose/apis/webcam/utils/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .buffer import BufferManager
-from .event import EventManager
-from .image_capture import ImageCapture
-from .message import FrameMessage, Message, VideoEndingMessage
-from .misc import (copy_and_paste, expand_and_clamp, get_cached_file_path,
- get_config_path, is_image_file, limit_max_fps,
- load_image_from_disk_or_url, screen_matting)
-from .pose import (get_eye_keypoint_ids, get_face_keypoint_ids,
- get_hand_keypoint_ids, get_mouth_keypoint_ids,
- get_wrist_keypoint_ids)
-
-__all__ = [
- 'BufferManager', 'EventManager', 'FrameMessage', 'Message',
- 'limit_max_fps', 'VideoEndingMessage', 'load_image_from_disk_or_url',
- 'get_cached_file_path', 'screen_matting', 'get_config_path',
- 'expand_and_clamp', 'copy_and_paste', 'is_image_file', 'ImageCapture',
- 'get_eye_keypoint_ids', 'get_face_keypoint_ids', 'get_wrist_keypoint_ids',
- 'get_mouth_keypoint_ids', 'get_hand_keypoint_ids'
-]
diff --git a/mmpose/apis/webcam/utils/buffer.py b/mmpose/apis/webcam/utils/buffer.py
deleted file mode 100644
index f7f8b9864e..0000000000
--- a/mmpose/apis/webcam/utils/buffer.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from functools import wraps
-from queue import Queue
-from typing import Any, Dict, List, Optional
-
-from mmengine import is_seq_of
-
-__all__ = ['BufferManager']
-
-
-def check_buffer_registered(exist=True):
- """A function wrapper to check the buffer existence before it is being used
- by the wrapped function.
-
- Args:
- exist (bool): If set to ``True``, assert the buffer exists; if set to
- ``False``, assert the buffer does not exist. Default: ``True``
- """
-
- def wrapper(func):
-
- @wraps(func)
- def wrapped(manager, name, *args, **kwargs):
- if exist:
- # Assert buffer exist
- if name not in manager:
- raise ValueError(f'Fail to call {func.__name__}: '
- f'buffer "{name}" is not registered.')
- else:
- # Assert buffer not exist
- if name in manager:
- raise ValueError(f'Fail to call {func.__name__}: '
- f'buffer "{name}" is already registered.')
- return func(manager, name, *args, **kwargs)
-
- return wrapped
-
- return wrapper
-
-
-class Buffer(Queue):
-
- def put_force(self, item: Any):
- """Force to put an item into the buffer.
-
- If the buffer is already full, the earliest item in the buffer will be
- removed to make room for the incoming item.
-
- Args:
- item (any): The item to put into the buffer
- """
- with self.mutex:
- if self.maxsize > 0:
- while self._qsize() >= self.maxsize:
- _ = self._get()
- self.unfinished_tasks -= 1
-
- self._put(item)
- self.unfinished_tasks += 1
- self.not_empty.notify()
-
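`Buffer.put_force()` above relies on `queue.Queue` internals (`mutex`, `_qsize`, `_get`, `_put`, `unfinished_tasks`). The same drop-oldest behavior can be reproduced in a self-contained sketch, useful when a producer (e.g. a webcam reader) must never block:

```python
from queue import Queue

class DropOldestQueue(Queue):
    """Bounded queue whose put_force never blocks: when full, the
    oldest item is discarded to make room (mirrors the removed
    Buffer.put_force)."""

    def put_force(self, item):
        with self.mutex:
            if self.maxsize > 0:
                while self._qsize() >= self.maxsize:
                    self._get()                 # drop the oldest item
                    self.unfinished_tasks -= 1
            self._put(item)
            self.unfinished_tasks += 1
            self.not_empty.notify()

q = DropOldestQueue(maxsize=2)
for frame in ('frame_0', 'frame_1', 'frame_2'):
    q.put_force(frame)   # 'frame_0' is evicted by the third put
```

Touching `unfinished_tasks` keeps `join()` consistent when evicted items are never matched by a `task_done()` call.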
-
-class BufferManager():
- """A helper class to manage multiple buffers.
-
- Args:
- buffer_type (type): The class to build buffer instances. Default:
- :class:`mmpose.apis.webcam.utils.buffer.Buffer`.
- buffers (dict, optional): Create :class:`BufferManager` from existing
- buffers. Each item should map a buffer name to a buffer instance.
- If not given, an empty buffer manager will be created. Default: ``None``
- """
-
- def __init__(self,
- buffer_type: type = Buffer,
- buffers: Optional[Dict] = None):
- self.buffer_type = buffer_type
- if buffers is None:
- self._buffers = {}
- else:
- if is_seq_of(list(buffers.values()), buffer_type):
- self._buffers = buffers.copy()
- else:
- raise ValueError('The values of buffers should be instance '
- f'of {buffer_type}')
-
- def __contains__(self, name):
- return name in self._buffers
-
- @check_buffer_registered(False)
- def register_buffer(self, name, maxsize: int = 0):
- """Register a buffer.
-
- If the buffer already exists, a ValueError will be raised.
-
- Args:
- name (any): The buffer name
- maxsize (int): The capacity of the buffer. If set to 0, the
- capacity is unlimited. Default: 0
- """
- self._buffers[name] = self.buffer_type(maxsize)
-
- @check_buffer_registered()
- def put(self, name, item, block: bool = True, timeout: float = None):
- """Put an item into specified buffer.
-
- Args:
- name (any): The buffer name
- item (any): The item to put into the buffer
- block (bool): If set to ``True``, block if necessary until a free
- slot is available in the target buffer. It blocks at most ``timeout``
- seconds, then raises the ``Full`` exception if no slot was freed.
- Otherwise, put an item on the queue if a free slot is
- immediately available, else raise the ``Full`` exception.
- Default: ``True``
- timeout (float, optional): The maximum waiting time in seconds if
- ``block`` is ``True``. Default: ``None``
- """
- self._buffers[name].put(item, block, timeout)
-
- @check_buffer_registered()
- def put_force(self, name, item):
- """Force to put an item into specified buffer. If the buffer was full,
- the earliest item within the buffer will be popped out to make a free
- slot.
-
- Args:
- name (any): The buffer name
- item (any): The item to put into the buffer
- """
- self._buffers[name].put_force(item)
-
- @check_buffer_registered()
- def get(self, name, block: bool = True, timeout: float = None) -> Any:
- """Remove an return an item from the specified buffer.
-
- Args:
- name (any): The buffer name
- block (bool): If set to ``True``, block if necessary until an item
- is available in the target buffer. It blocks at most
- ``timeout`` seconds and raises the ``Empty`` exception.
- Otherwise, return an item if one is immediately available,
- else raise the ``Empty`` exception. Default: ``True``
- timeout (float, optional): The maximum waiting time in seconds if
- ``block`` is ``True``. Default: ``None``
-
- Returns:
- any: The returned item.
- """
- return self._buffers[name].get(block, timeout)
-
- @check_buffer_registered()
- def is_empty(self, name) -> bool:
- """Check if a buffer is empty.
-
- Args:
- name (any): The buffer name
-
- Returns:
- bool: Whether the buffer is empty.
- """
- return self._buffers[name].empty()
-
- @check_buffer_registered()
- def is_full(self, name):
- """Check if a buffer is full.
-
- Args:
- name (any): The buffer name
-
- Returns:
- bool: Whether the buffer is full.
- """
- return self._buffers[name].full()
-
- def get_sub_manager(self, buffer_names: List[str]) -> 'BufferManager':
- """Return a :class:`BufferManager` instance that covers a subset of the
- buffers in the parent. The is usually used to partially share the
- buffers of the executor to the node.
-
- Args:
- buffer_names (list): The list of buffers to create the sub manager
-
- Returns:
- BufferManager: The created sub buffer manager.
- """
- buffers = {name: self._buffers[name] for name in buffer_names}
- return BufferManager(self.buffer_type, buffers)
-
- def get_info(self):
- """Returns the information of all buffers in the manager.
-
- Returns:
- dict[any, dict]: Each item is a buffer name and the information
- dict of that buffer.
- """
- buffer_info = {}
- for name, buffer in self._buffers.items():
- buffer_info[name] = {
- 'size': buffer.qsize(),
- 'maxsize': buffer.maxsize
- }
- return buffer_info
diff --git a/mmpose/apis/webcam/utils/event.py b/mmpose/apis/webcam/utils/event.py
deleted file mode 100644
index b8e88e1d8b..0000000000
--- a/mmpose/apis/webcam/utils/event.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-from collections import defaultdict
-from contextlib import contextmanager
-from threading import Event
-from typing import Optional
-
-logger = logging.getLogger('Event')
-
-
-class EventManager():
- """A helper class to manage events.
-
- :class:`EventManager` provides interfaces to register, set, clear and
- check events by name.
- """
-
- def __init__(self):
- self._events = defaultdict(Event)
-
- def register_event(self, event_name: str, is_keyboard: bool = False):
- """Register an event. A event must be registered first before being
- set, cleared or checked.
-
- Args:
- event_name (str): The indicator of the event. The name should be
- unique in one :class:`EventManager` instance
- is_keyboard (bool): Specify whether it is a keyboard event. If so,
- the ``event_name`` should be the key value, and the indicator
- will be set as ``'_keyboard_{event_name}'``. Otherwise, the
- ``event_name`` will be directly used as the indicator.
- Default: ``False``
- """
- if is_keyboard:
- event_name = self._get_keyboard_event_name(event_name)
- self._events[event_name] = Event()
-
- def set(self, event_name: str, is_keyboard: bool = False):
- """Set the internal flag of an event to ``True``.
-
- Args:
- event_name (str): The indicator of the event
- is_keyboard (bool): Specify whether it is a keyboard event. See
- ``register_event()`` for details. Default: False
- """
- if is_keyboard:
- event_name = self._get_keyboard_event_name(event_name)
- self._events[event_name].set()
- logger.info(f'Event {event_name} is set.')
-
- def wait(self,
- event_name: str = None,
- is_keyboard: bool = False,
- timeout: Optional[float] = None) -> bool:
- """Block until the internal flag of an event is ``True``.
-
- Args:
- event_name (str): The indicator of the event
- is_keyboard (bool): Specify whether it is a keyboard event. See
- ``register_event()`` for details. Default: False
- timeout (float, optional): The optional maximum blocking time in
- seconds. Default: ``None``
-
- Returns:
- bool: The internal event flag on exit.
- """
- if is_keyboard:
- event_name = self._get_keyboard_event_name(event_name)
- return self._events[event_name].wait(timeout)
-
- def is_set(self,
- event_name: str = None,
- is_keyboard: Optional[bool] = False) -> bool:
- """Check weather the internal flag of an event is ``True``.
-
- Args:
- event_name (str): The indicator of the event
- is_keyboard (bool): Specify whether it is a keyboard event. See
- ``register_event()`` for details. Default: False
- Returns:
- bool: The internal event flag.
- """
- if is_keyboard:
- event_name = self._get_keyboard_event_name(event_name)
- return self._events[event_name].is_set()
-
- def clear(self,
- event_name: str = None,
- is_keyboard: Optional[bool] = False):
- """Reset the internal flag of en event to False.
-
- Args:
- event_name (str): The indicator of the event
- is_keyboard (bool): Specify whether it is a keyboard event. See
- ``register_event()`` for details. Default: False
- """
- if is_keyboard:
- event_name = self._get_keyboard_event_name(event_name)
- self._events[event_name].clear()
- logger.info(f'Event {event_name} is cleared.')
-
- @staticmethod
- def _get_keyboard_event_name(key):
- """Get keyboard event name from the key value."""
- return f'_keyboard_{chr(key) if isinstance(key, int) else key}'
-
- @contextmanager
- def wait_and_handle(self,
- event_name: str = None,
- is_keyboard: Optional[bool] = False):
- """Context manager that blocks until an evenet is set ``True`` and then
- goes into the context.
-
- The internal event flag will be reset to ``False`` automatically
- before entering the context.
-
- Args:
- event_name (str): The indicator of the event
- is_keyboard (bool): Specify whether it is a keyboard event. See
- ``register_event()`` for details. Default: False
-
- Example::
- >>> from mmpose.apis.webcam.utils import EventManager
- >>> manager = EventManager()
- >>> manager.register_event('q', is_keyboard=True)
-
- >>> # Once the keyboard event `q` is set, ``wait_and_handle``
- >>> # will reset the event and enter the context to invoke
- >>> # ``foo()``
- >>> with manager.wait_and_handle('q', is_keyboard=True):
- ... foo()
- """
- self.wait(event_name, is_keyboard)
- try:
- yield
- finally:
- self.clear(event_name, is_keyboard)
diff --git a/mmpose/apis/webcam/utils/image_capture.py b/mmpose/apis/webcam/utils/image_capture.py
deleted file mode 100644
index fb28acff94..0000000000
--- a/mmpose/apis/webcam/utils/image_capture.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Union
-
-import cv2
-import numpy as np
-
-from .misc import load_image_from_disk_or_url
-
-
-class ImageCapture:
- """A mock-up of cv2.VideoCapture that always return a const image.
-
- Args:
- image (str | ndarray): The image path or image data
- """
-
- def __init__(self, image: Union[str, np.ndarray]):
- if isinstance(image, str):
- self.image = load_image_from_disk_or_url(image)
- else:
- self.image = image
-
- def isOpened(self):
- return (self.image is not None)
-
- def read(self):
- return True, self.image.copy()
-
- def release(self):
- pass
-
- def get(self, propId):
- if propId == cv2.CAP_PROP_FRAME_WIDTH:
- return self.image.shape[1]
- elif propId == cv2.CAP_PROP_FRAME_HEIGHT:
- return self.image.shape[0]
- elif propId == cv2.CAP_PROP_FPS:
- return np.nan
- else:
- raise NotImplementedError()
diff --git a/mmpose/apis/webcam/utils/message.py b/mmpose/apis/webcam/utils/message.py
deleted file mode 100644
index 8961ea39c2..0000000000
--- a/mmpose/apis/webcam/utils/message.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import time
-import uuid
-import warnings
-from typing import Callable, Dict, List, Optional
-
-import numpy as np
-
-Filter = Callable[[Dict], bool]
-
-
-class Message():
- """Message base class.
-
- All message classes should inherit this class. The basic use of a
- Message instance is to carry a piece of text message (self.msg) and a
- dict that stores structured data (self.data), e.g. frame image, model
- prediction, etc.
-
- A message may also hold route information, which is composed of
- information of all nodes the message has passed through.
-
- Parameters:
- msg (str): The text message.
- data (dict, optional): The structured data.
- """
-
- def __init__(self, msg: str = '', data: Optional[Dict] = None):
- self.msg = msg
- self.data = data if data else {}
- self.route_info = []
- self.timestamp = time.time()
- self.id = uuid.uuid1()
-
- def update_route_info(self,
- node=None,
- node_name: Optional[str] = None,
- node_type: Optional[str] = None,
- info: Optional[Dict] = None):
- """Append new node information to the route information.
-
- Args:
- node (Node, optional): An instance of Node that provides basic
- information like the node name and type. Default: ``None``.
- node_name (str, optional): The node name. If node is given,
- node_name will be ignored. Default: ``None``.
- node_type (str, optional): The class name of the node. If node
- is given, node_type will be ignored. Default: ``None``.
- info (dict, optional): The node information, which is usually
- given by node.get_node_info(). Default: ``None``.
- """
- if node is not None:
- if node_name is not None or node_type is not None:
- warnings.warn(
- '`node_name` and `node_type` will be overridden if node '
- 'is provided.')
- node_name = node.name
- node_type = node.__class__.__name__
-
- node_info = {'node': node_name, 'node_type': node_type, 'info': info}
- self.route_info.append(node_info)
-
- def set_route_info(self, route_info: List[Dict]):
- """Directly set the entire route information.
-
- Args:
- route_info (list): route information to set to the message.
- """
- self.route_info = route_info
-
- def merge_route_info(self, route_info: List[Dict]):
- """Merge the given route information into the original one of the
- message. This is used for combining route information from multiple
- messages. The node information in the route will be reordered according
- to their timestamps.
-
- Args:
- route_info (list): route information to merge.
- """
- self.route_info += route_info
- self.route_info.sort(key=lambda x: x.get('timestamp', np.inf))
-
- def get_route_info(self) -> List:
- return self.route_info.copy()
-
-
-class VideoEndingMessage(Message):
- """The special message to indicate the ending of the input video."""
-
-
-class FrameMessage(Message):
- """The message to store information of a video frame."""
-
- def __init__(self, img):
- super().__init__(data=dict(image=img, objects={}, model_cfgs={}))
-
- def get_image(self) -> np.ndarray:
- """Get the frame image.
-
- Returns:
- np.ndarray: The frame image.
- """
- return self.data.get('image', None)
-
- def set_image(self, img):
- """Set the frame image to the message.
-
- Args:
- img (np.ndarray): The frame image.
- """
- self.data['image'] = img
-
- def set_objects(self, objects: List[Dict]):
- """Set the object information. The old object information will be
- cleared.
-
- Args:
- objects (list[dict]): A list of object information
-
- See also :func:`update_objects`.
- """
- self.data['objects'] = {}
- self.update_objects(objects)
-
- def update_objects(self, objects: List[Dict]):
- """Update object information.
-
- Each object will be assigned a unique ID if it does not have one. If
- an object's ID already exists in ``self.data['objects']``, the object
- information will be updated; otherwise it will be added as a new
- object.
-
- Args:
- objects (list[dict]): A list of object information
- """
- for obj in objects:
- if '_id_' in obj:
- # get the object id if it exists
- obj_id = obj['_id_']
- else:
- # otherwise assign a new object id
- obj_id = uuid.uuid1()
- obj['_id_'] = obj_id
- self.data['objects'][obj_id] = obj
-
- def get_objects(self, obj_filter: Optional[Filter] = None) -> List[Dict]:
- """Get object information from the frame data.
-
- By default, all objects in the frame data are returned. Optionally, a
- filter function can be provided to retrieve only the objects that
- satisfy it.
-
- Args:
- obj_filter (callable, optional): A filter function that returns a
- bool value from an object (dict). If provided, only objects
- that return True will be retrieved. Otherwise all objects will
- be retrieved. Default: ``None``.
-
- Returns:
- list[dict]: A list of object information.
-
-
- Example::
- >>> objects = [
- ... {'_id_': 2, 'label': 'dog'},
- ... {'_id_': 1, 'label': 'cat'},
- ... ]
- >>> frame = FrameMessage(img)
- >>> frame.set_objects(objects)
- >>> frame.get_objects()
- [
- {'_id_': 1, 'label': 'cat'},
- {'_id_': 2, 'label': 'dog'}
- ]
- >>> frame.get_objects(obj_filter=lambda x:x['label'] == 'cat')
- [{'_id_': 1, 'label': 'cat'}]
- """
-
- objects = [
- obj.copy()
- for obj in filter(obj_filter, self.data['objects'].values())
- ]
-
- return objects
diff --git a/mmpose/apis/webcam/utils/misc.py b/mmpose/apis/webcam/utils/misc.py
deleted file mode 100644
index 6c6f5417ae..0000000000
--- a/mmpose/apis/webcam/utils/misc.py
+++ /dev/null
@@ -1,367 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import importlib
-import os
-import os.path as osp
-import sys
-import time
-from contextlib import contextmanager
-from typing import List, Optional, Tuple
-from urllib.parse import urlparse
-from urllib.request import urlopen
-
-import cv2
-import numpy as np
-from mmengine import mkdir_or_exist
-from torch.hub import HASH_REGEX, download_url_to_file
-
-
-@contextmanager
-def limit_max_fps(fps: float):
- """A context manager to limit maximum frequence of entering the context.
-
- Args:
- fps (float): The maximum frequency of entering the context
-
- Example::
- >>> from mmpose.apis.webcam.utils import limit_max_fps
- >>> import cv2
-
- >>> while True:
- ... with limit_max_fps(20):
- ... cv2.imshow('frame', img)  # display image at most 20 fps
- """
- t_start = time.time()
- try:
- yield
- finally:
- t_end = time.time()
- if fps is not None:
- t_sleep = 1.0 / fps - t_end + t_start
- if t_sleep > 0:
- time.sleep(t_sleep)
-
-
-def _is_url(filename: str) -> bool:
- """Check if the file is a url link.
-
- Args:
- filename (str): the file name or url link
-
- Returns:
- bool: is url or not.
- """
- prefixes = ['http://', 'https://']
- for p in prefixes:
- if filename.startswith(p):
- return True
- return False
-
-
-def load_image_from_disk_or_url(filename: str,
- readFlag: int = cv2.IMREAD_COLOR
- ) -> np.ndarray:
- """Load an image file, from disk or url.
-
- Args:
- filename (str): file name on the disk or url link
- readFlag (int): readFlag for imdecode. Default: cv2.IMREAD_COLOR
-
- Returns:
- np.ndarray: A loaded image
- """
- if _is_url(filename):
- # download the image, convert it to a NumPy array, and then read
- # it into OpenCV format
- resp = urlopen(filename)
- image = np.asarray(bytearray(resp.read()), dtype='uint8')
- image = cv2.imdecode(image, readFlag)
- return image
- else:
- image = cv2.imread(filename, readFlag)
- return image
-
-
-def get_cached_file_path(url: str,
- save_dir: str,
- progress: bool = True,
- check_hash: bool = False,
- file_name: Optional[str] = None) -> str:
- r"""Loads the Torch serialized object at the given URL.
-
- If the downloaded file is a zip file, it will be automatically
- decompressed.
-
- If the file is already present in ``save_dir``, its path is returned
- directly without downloading again.
-
- Args:
- url (str): URL of the object to download
- save_dir (str): directory in which to save the object
- progress (bool): whether or not to display a progress bar
- to stderr. Default: ``True``
- check_hash (bool): If True, the filename part of the URL
- should follow the naming convention ``filename-<sha256>.ext``
- where ``<sha256>`` is the first eight or more digits of the
- SHA256 hash of the contents of the file. The hash is used to
- ensure unique names and to verify the contents of the file.
- Default: ``False``
- file_name (str, optional): name for the downloaded file. Filename
- from ``url`` will be used if not set. Default: ``None``.
-
- Returns:
- str: The path to the cached file.
- """
-
- mkdir_or_exist(save_dir)
-
- parts = urlparse(url)
- filename = os.path.basename(parts.path)
- if file_name is not None:
- filename = file_name
- cached_file = os.path.join(save_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- hash_prefix = None
- if check_hash:
- r = HASH_REGEX.search(filename) # r is Optional[Match[str]]
- hash_prefix = r.group(1) if r else None
- download_url_to_file(url, cached_file, hash_prefix, progress=progress)
- return cached_file
-
-
-def screen_matting(img: np.ndarray,
- color_low: Optional[Tuple] = None,
- color_high: Optional[Tuple] = None,
- color: Optional[str] = None) -> np.ndarray:
- """Get screen matting mask.
-
- Args:
- img (np.ndarray): Image data.
- color_low (tuple): Lower limit (b, g, r).
- color_high (tuple): Higher limit (b, g, r).
- color (str): Support colors include:
-
- - 'green' or 'g'
- - 'blue' or 'b'
- - 'black' or 'k'
- - 'white' or 'w'
-
- Returns:
- np.ndarray: A mask with the same shape of the input image. The value
- is 0 at the pixels in the matting color range, and 1 everywhere else.
- """
-
- if color_high is None or color_low is None:
- if color is not None:
- if color.lower() == 'g' or color.lower() == 'green':
- color_low = (0, 200, 0)
- color_high = (60, 255, 60)
- elif color.lower() == 'b' or color.lower() == 'blue':
- color_low = (230, 0, 0)
- color_high = (255, 40, 40)
- elif color.lower() == 'k' or color.lower() == 'black':
- color_low = (0, 0, 0)
- color_high = (40, 40, 40)
- elif color.lower() == 'w' or color.lower() == 'white':
- color_low = (230, 230, 230)
- color_high = (255, 255, 255)
- else:
- raise NotImplementedError(f'Not supported color: {color}.')
- else:
- raise ValueError(
- 'color or color_high | color_low should be given.')
-
- mask = cv2.inRange(img, np.array(color_low), np.array(color_high)) == 0
-
- return mask.astype(np.uint8)
-
-
-def expand_and_clamp(box: List, im_shape: Tuple, scale: float = 1.25) -> List:
- """Expand the bbox and clip it to fit the image shape.
-
- Args:
- box (list): x1, y1, x2, y2
- im_shape (tuple): image shape (h, w, c)
- scale (float): expand ratio
-
- Returns:
- list: x1, y1, x2, y2
- """
-
- x1, y1, x2, y2 = box[:4]
- w = x2 - x1
- h = y2 - y1
- delta_w = w * (scale - 1) / 2
- delta_h = h * (scale - 1) / 2
-
- x1, y1, x2, y2 = x1 - delta_w, y1 - delta_h, x2 + delta_w, y2 + delta_h
-
- img_h, img_w = im_shape[:2]
-
- x1 = min(max(0, int(x1)), img_w - 1)
- y1 = min(max(0, int(y1)), img_h - 1)
- x2 = min(max(0, int(x2)), img_w - 1)
- y2 = min(max(0, int(y2)), img_h - 1)
-
- return [x1, y1, x2, y2]
-
-
-def _find_bbox(mask):
- """Find the bounding box for the mask.
-
- Args:
- mask (ndarray): Mask.
-
- Returns:
- list: The bounding box (x1, y1, x2, y2).
- """
- mask_shape = mask.shape
- if len(mask_shape) == 3:
- assert mask_shape[-1] == 1, 'the channel of the mask should be 1.'
- elif len(mask_shape) == 2:
- pass
- else:
- raise NotImplementedError()
-
- h, w = mask_shape[:2]
- mask_w = mask.sum(0)
- mask_h = mask.sum(1)
-
- left = 0
- right = w - 1
- up = 0
- down = h - 1
-
- for i in range(w):
- if mask_w[i] > 0:
- break
- left += 1
-
- for i in range(w - 1, left, -1):
- if mask_w[i] > 0:
- break
- right -= 1
-
- for i in range(h):
- if mask_h[i] > 0:
- break
- up += 1
-
- for i in range(h - 1, up, -1):
- if mask_h[i] > 0:
- break
- down -= 1
-
- return [left, up, right, down]
-
-
-def copy_and_paste(
- img: np.ndarray,
- background_img: np.ndarray,
- mask: np.ndarray,
- bbox: Optional[List] = None,
- effect_region: Tuple = (0.2, 0.2, 0.8, 0.8),
- min_size: Tuple = (20, 20)
-) -> np.ndarray:
- """Copy the image region and paste to the background.
-
- Args:
- img (np.ndarray): Image data.
- background_img (np.ndarray): Background image data.
- mask (ndarray): instance segmentation result.
- bbox (list, optional): instance bbox in (x1, y1, x2, y2). If not
- given, the bbox will be obtained by ``_find_bbox()``. Default:
- ``None``
- effect_region (tuple): The region to apply mask, the coordinates
- are normalized (x1, y1, x2, y2). Default: (0.2, 0.2, 0.8, 0.8)
- min_size (tuple): The minimum region size (w, h) in pixels.
- Default: (20, 20)
-
- Returns:
- np.ndarray: The background with pasted image region.
- """
- background_img = background_img.copy()
- background_h, background_w = background_img.shape[:2]
- region_h = (effect_region[3] - effect_region[1]) * background_h
- region_w = (effect_region[2] - effect_region[0]) * background_w
- region_aspect_ratio = region_w / region_h
-
- if bbox is None:
- bbox = _find_bbox(mask)
- instance_w = bbox[2] - bbox[0]
- instance_h = bbox[3] - bbox[1]
-
- if instance_w > min_size[0] and instance_h > min_size[1]:
- aspect_ratio = instance_w / instance_h
- if region_aspect_ratio > aspect_ratio:
- resize_rate = region_h / instance_h
- else:
- resize_rate = region_w / instance_w
-
- mask_inst = mask[int(bbox[1]):int(bbox[3]), int(bbox[0]):int(bbox[2])]
- img_inst = img[int(bbox[1]):int(bbox[3]), int(bbox[0]):int(bbox[2])]
- img_inst = cv2.resize(
- img_inst.astype('float32'),
- (int(resize_rate * instance_w), int(resize_rate * instance_h)))
- img_inst = img_inst.astype(background_img.dtype)
- mask_inst = cv2.resize(
- mask_inst.astype('float32'),
- (int(resize_rate * instance_w), int(resize_rate * instance_h)),
- interpolation=cv2.INTER_NEAREST)
-
- mask_ids = list(np.where(mask_inst == 1))
- mask_ids[1] += int(effect_region[0] * background_w)
- mask_ids[0] += int(effect_region[1] * background_h)
-
- background_img[tuple(mask_ids)] = img_inst[np.where(mask_inst == 1)]
-
- return background_img
-
-
-def is_image_file(path: str) -> bool:
- """Check if a path is an image file by its extension.
-
- Args:
- path (str): The image path.
-
- Returns:
- bool: Whether the path is an image file.
- """
- if isinstance(path, str):
- if path.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp')):
- return True
- return False
-
-
- def get_config_path(path: str, module_name: str) -> str:
- """Get config path from an OpenMMLab codebase.
-
- If the path is an existing file, it will be directly returned. If the file
- doesn't exist, it will be searched in the 'configs' folder of the
- specified module.
-
- Args:
- path (str): the path of the config file
- module_name (str): The module name of an OpenMMLab codebase
-
- Returns:
- str: The config file path.
-
- Example::
- >>> path = 'configs/_base_/filters/one_euro.py'
- >>> get_config_path(path, 'mmpose')
- '/home/mmpose/configs/_base_/filters/one_euro.py'
- """
-
- if osp.isfile(path):
- return path
-
- module = importlib.import_module(module_name)
- module_dir = osp.dirname(module.__file__)
- path_in_module = osp.join(module_dir, '.mim', path)
-
- if not osp.isfile(path_in_module):
- raise FileNotFoundError(f'Can not find the config file "{path}"')
-
- return path_in_module
diff --git a/mmpose/apis/webcam/utils/pose.py b/mmpose/apis/webcam/utils/pose.py
deleted file mode 100644
index 8ff32f9e16..0000000000
--- a/mmpose/apis/webcam/utils/pose.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Tuple
-
-
-def get_eye_keypoint_ids(dataset_meta: Dict) -> Tuple[int, int]:
- """A helper function to get the keypoint indices of left and right eyes
- from the dataset meta information.
-
- Args:
- dataset_meta (dict): dataset meta information.
-
- Returns:
- tuple[int, int]: The keypoint indices of left eye and right eye.
- """
- left_eye_idx = None
- right_eye_idx = None
-
- # try obtaining eye point ids from dataset_meta
- keypoint_name2id = dataset_meta.get('keypoint_name2id', {})
- left_eye_idx = keypoint_name2id.get('left_eye', None)
- right_eye_idx = keypoint_name2id.get('right_eye', None)
-
- if left_eye_idx is None or right_eye_idx is None:
- # Fall back to hard coded keypoint id
- dataset_name = dataset_meta.get('dataset_name', 'unknown dataset')
- if dataset_name in {'coco', 'coco_wholebody'}:
- left_eye_idx = 1
- right_eye_idx = 2
- elif dataset_name in {'animalpose', 'ap10k'}:
- left_eye_idx = 0
- right_eye_idx = 1
- else:
- raise ValueError('Can not determine the eye keypoint id of '
- f'{dataset_name}')
-
- return left_eye_idx, right_eye_idx
-
-
-def get_face_keypoint_ids(dataset_meta: Dict) -> List:
- """A helper function to get the keypoint indices of the face from the
- dataset meta information.
-
- Args:
- dataset_meta (dict): dataset meta information.
-
- Returns:
- list[int]: face keypoint indices. The length depends on the dataset.
- """
- face_indices = []
-
- # try obtaining nose point ids from dataset_meta
- keypoint_name2id = dataset_meta.get('keypoint_name2id', {})
- for id in range(68):
- face_indices.append(keypoint_name2id.get(f'face-{id}', None))
-
- if None in face_indices:
- # Fall back to hard coded keypoint id
- dataset_name = dataset_meta.get('dataset_name', 'unknown dataset')
- if dataset_name in {'coco_wholebody'}:
- face_indices = list(range(23, 91))
- else:
- raise ValueError('Can not determine the face id of '
- f'{dataset_name}')
-
- return face_indices
-
-
-def get_wrist_keypoint_ids(dataset_meta: Dict) -> Tuple[int, int]:
- """A helper function to get the keypoint indices of left and right wrists
- from the dataset meta information.
-
- Args:
- dataset_meta (dict): dataset meta information.
- Returns:
- tuple[int, int]: The keypoint indices of left and right wrists.
- """
-
- # try obtaining wrist point ids from dataset_meta
- keypoint_name2id = dataset_meta.get('keypoint_name2id', {})
- left_wrist_idx = keypoint_name2id.get('left_wrist', None)
- right_wrist_idx = keypoint_name2id.get('right_wrist', None)
-
- if left_wrist_idx is None or right_wrist_idx is None:
- # Fall back to hard coded keypoint id
- dataset_name = dataset_meta.get('dataset_name', 'unknown dataset')
- if dataset_name in {'coco', 'coco_wholebody'}:
- left_wrist_idx = 9
- right_wrist_idx = 10
- elif dataset_name == 'animalpose':
- left_wrist_idx = 16
- right_wrist_idx = 17
- elif dataset_name == 'ap10k':
- left_wrist_idx = 7
- right_wrist_idx = 10
- else:
- raise ValueError('Can not determine the wrist keypoint id of '
- f'{dataset_name}')
-
- return left_wrist_idx, right_wrist_idx
-
-
-def get_mouth_keypoint_ids(dataset_meta: Dict) -> int:
- """A helper function to get the mouth keypoint index from the dataset meta
- information.
-
- Args:
- dataset_meta (dict): dataset meta information.
- Returns:
- int: The mouth keypoint index
- """
- # try obtaining mouth point ids from dataset_info
- keypoint_name2id = dataset_meta.get('keypoint_name2id', {})
- mouth_index = keypoint_name2id.get('face-62', None)
-
- if mouth_index is None:
- # Fall back to hard coded keypoint id
- dataset_name = dataset_meta.get('dataset_name', 'unknown dataset')
- if dataset_name == 'coco_wholebody':
- mouth_index = 85
- else:
- raise ValueError('Can not determine the mouth keypoint id of '
- f'{dataset_name}')
-
- return mouth_index
-
-
-def get_hand_keypoint_ids(dataset_meta: Dict) -> List[int]:
- """A helper function to get the keypoint indices of left and right hand
- from the dataset meta information.
-
- Args:
- dataset_meta (dict): dataset meta information.
- Returns:
- list[int]: hand keypoint indices. The length depends on the dataset.
- """
- # try obtaining hand keypoint ids from dataset_meta
- keypoint_name2id = dataset_meta.get('keypoint_name2id', {})
- hand_indices = []
- hand_indices.append(keypoint_name2id.get('left_hand_root', None))
-
- for id in range(1, 5):
- hand_indices.append(keypoint_name2id.get(f'left_thumb{id}', None))
- for id in range(1, 5):
- hand_indices.append(keypoint_name2id.get(f'left_forefinger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'left_middle_finger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'left_ring_finger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'left_pinky_finger{id}', None))
-
- hand_indices.append(keypoint_name2id.get('right_hand_root', None))
-
- for id in range(1, 5):
- hand_indices.append(keypoint_name2id.get(f'right_thumb{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'right_forefinger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'right_middle_finger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'right_ring_finger{id}', None))
- for id in range(1, 5):
- hand_indices.append(
- keypoint_name2id.get(f'right_pinky_finger{id}', None))
-
- if None in hand_indices:
- # Fall back to hard coded keypoint id
- dataset_name = dataset_meta.get('dataset_name', 'unknown dataset')
- if dataset_name in {'coco_wholebody'}:
- hand_indices = list(range(91, 133))
- else:
- raise ValueError('Can not determine the hand id of '
- f'{dataset_name}')
-
- return hand_indices
diff --git a/mmpose/apis/webcam/webcam_executor.py b/mmpose/apis/webcam/webcam_executor.py
deleted file mode 100644
index f39aa4b847..0000000000
--- a/mmpose/apis/webcam/webcam_executor.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-import sys
-import time
-import warnings
-from threading import Thread
-from typing import Dict, List, Optional, Tuple, Union
-
-import cv2
-
-from .nodes import NODES
-from .utils import (BufferManager, EventManager, FrameMessage, ImageCapture,
- VideoEndingMessage, is_image_file, limit_max_fps)
-
-try:
- from contextlib import nullcontext
-except ImportError:
- # compatible with python3.6
- from contextlib import contextmanager
-
- @contextmanager
- def nullcontext(enter_result=None):
- yield enter_result
-
-
-DEFAULT_FRAME_BUFFER_SIZE = 1
-DEFAULT_INPUT_BUFFER_SIZE = 1
-DEFAULT_DISPLAY_BUFFER_SIZE = 0
-DEFAULT_USER_BUFFER_SIZE = 1
-
-logger = logging.getLogger('Executor')
-
-
-class WebcamExecutor():
- """The interface to build and execute webcam applications from configs.
-
- Parameters:
- nodes (list[dict]): Node configs. See :class:`webcam.nodes.Node` for
- details
- name (str): Executor name. Default: 'MMPose Webcam App'.
- camera_id (int | str): The camera ID (usually 0 for the default
- camera). Alternatively, a file path or URL can be given to load
- from a video or image file.
- camera_frame_shape (tuple, optional): Set the frame shape of the
- camera in (width, height). If not given, the default frame shape
- will be used. This argument is only valid when using a camera
- as the input source. Default: ``None``
- camera_max_fps (int): Video reading maximum FPS. Default: 30
- buffer_sizes (dict, optional): A dict to specify buffer sizes. The
- key is the buffer name and the value is the buffer size.
- Default: ``None``
-
- Example::
- >>> cfg = dict(
- >>> name='Test Webcam',
- >>> camera_id=0,
- >>> camera_max_fps=30,
- >>> nodes=[
- >>> dict(
- >>> type='MonitorNode',
- >>> name='monitor',
- >>> enable_key='m',
- >>> enable=False,
- >>> input_buffer='_frame_',
- >>> output_buffer='display'),
- >>> dict(
- >>> type='RecorderNode',
- >>> name='recorder',
- >>> out_video_file='webcam_output.mp4',
- >>> input_buffer='display',
- >>> output_buffer='_display_')
- >>> ])
-
- >>> executor = WebcamExecutor(**cfg)
- """
-
- def __init__(self,
- nodes: List[Dict],
- name: str = 'MMPose Webcam App',
- camera_id: Union[int, str] = 0,
- camera_max_fps: int = 30,
- camera_frame_shape: Optional[Tuple[int, int]] = None,
- synchronous: bool = False,
- buffer_sizes: Optional[Dict[str, int]] = None):
-
- # Basic parameters
- self.name = name
- self.camera_id = camera_id
- self.camera_max_fps = camera_max_fps
- self.camera_frame_shape = camera_frame_shape
- self.synchronous = synchronous
-
- # self.buffer_manager manages data flow between executor and nodes
- self.buffer_manager = BufferManager()
- # self.event_manager manages event-based asynchronous communication
- self.event_manager = EventManager()
- # self.node_list holds all node instance
- self.node_list = []
- # self.vcap is used to read camera frames. It will be built when the
- # executor starts running
- self.vcap = None
-
- # Register executor events
- self.event_manager.register_event('_exit_', is_keyboard=False)
- if self.synchronous:
- self.event_manager.register_event('_idle_', is_keyboard=False)
-
- # Register nodes
- if not nodes:
- raise ValueError('No node is registered to the executor.')
-
- # Register default buffers
- if buffer_sizes is None:
- buffer_sizes = {}
- # _frame_ buffer
- frame_buffer_size = buffer_sizes.get('_frame_',
- DEFAULT_FRAME_BUFFER_SIZE)
- self.buffer_manager.register_buffer('_frame_', frame_buffer_size)
- # _input_ buffer
- input_buffer_size = buffer_sizes.get('_input_',
- DEFAULT_INPUT_BUFFER_SIZE)
- self.buffer_manager.register_buffer('_input_', input_buffer_size)
- # _display_ buffer
- display_buffer_size = buffer_sizes.get('_display_',
- DEFAULT_DISPLAY_BUFFER_SIZE)
- self.buffer_manager.register_buffer('_display_', display_buffer_size)
-
- # Build all nodes:
- for node_cfg in nodes:
- logger.info(f'Create node: {node_cfg.name}({node_cfg.type})')
- node = NODES.build(node_cfg)
-
- # Register node
- self.node_list.append(node)
-
- # Register buffers
- for buffer_info in node.registered_buffers:
- buffer_name = buffer_info.buffer_name
- if buffer_name in self.buffer_manager:
- continue
- buffer_size = buffer_sizes.get(buffer_name,
- DEFAULT_USER_BUFFER_SIZE)
- self.buffer_manager.register_buffer(buffer_name, buffer_size)
- logger.info(
- f'Register user buffer: {buffer_name}({buffer_size})')
-
- # Register events
- for event_info in node.registered_events:
- self.event_manager.register_event(
- event_name=event_info.event_name,
- is_keyboard=event_info.is_keyboard)
- logger.info(f'Register event: {event_info.event_name}')
-
- # Set executor for nodes
- # This step is performed after node building when the executor has
- # create full buffer/event managers and can
- for node in self.node_list:
- logger.info(f'Set executor for node: {node.name})')
- node.set_executor(self)
-
- def _read_camera(self):
- """Read video frames from the caemra (or the source video/image) and
- put them into input buffers."""
-
- camera_id = self.camera_id
- fps = self.camera_max_fps
-
- # Build video capture
- if is_image_file(camera_id):
- self.vcap = ImageCapture(camera_id)
- else:
- self.vcap = cv2.VideoCapture(camera_id)
- if self.camera_frame_shape is not None:
- width, height = self.camera_frame_shape
- self.vcap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
- self.vcap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
-
- if not self.vcap.isOpened():
- warnings.warn(f'Cannot open camera (ID={camera_id})')
- sys.exit()
-
- # Read video frames in a loop
- first_frame = True
- while not self.event_manager.is_set('_exit_'):
- if self.synchronous:
- if first_frame:
- cm = nullcontext()
- else:
- # Read a new frame until the last frame has been processed
- cm = self.event_manager.wait_and_handle('_idle_')
- else:
- # Read frames with a maximum FPS
- cm = limit_max_fps(fps)
-
- first_frame = False
-
- with cm:
- # Read a frame
- ret_val, frame = self.vcap.read()
- if ret_val:
- # Put frame message (for display) into buffer `_frame_`
- frame_msg = FrameMessage(frame)
- self.buffer_manager.put('_frame_', frame_msg)
-
- # Put input message (for model inference or other use)
- # into buffer `_input_`
- input_msg = FrameMessage(frame.copy())
- input_msg.update_route_info(
- node_name='Camera Info',
- node_type='none',
- info=self._get_camera_info())
- self.buffer_manager.put_force('_input_', input_msg)
- logger.info('Read one frame.')
- else:
- logger.info('Reached the end of the video.')
- # Put a video ending signal
- self.buffer_manager.put_force('_frame_',
- VideoEndingMessage())
- self.buffer_manager.put_force('_input_',
- VideoEndingMessage())
- # Wait for `_exit_` event util a timeout occurs
- if not self.event_manager.wait('_exit_', timeout=5.0):
- break
-
- self.vcap.release()
-
- def _display(self):
- """Receive processed frames from the output buffer and display on
- screen."""
-
- output_msg = None
-
- while not self.event_manager.is_set('_exit_'):
- while self.buffer_manager.is_empty('_display_'):
- time.sleep(0.001)
-
- # Set _idle_ to allow reading next frame
- if self.synchronous:
- self.event_manager.set('_idle_')
-
- # acquire output from buffer
- output_msg = self.buffer_manager.get('_display_')
-
- # None indicates input stream ends
- if isinstance(output_msg, VideoEndingMessage):
- self.event_manager.set('_exit_')
- break
-
- img = output_msg.get_image()
-
- # show in a window
- cv2.imshow(self.name, img)
-
- # handle keyboard input
- key = cv2.waitKey(1)
- if key != -1:
- self._on_keyboard_input(key)
-
- cv2.destroyAllWindows()
-
- # Avoid dead lock
- if self.synchronous:
- self.event_manager.set('_idle_')
-
- def _on_keyboard_input(self, key):
- """Handle the keyboard input.
-
- The key 'Q' and `ESC` will trigger an '_exit_' event, which will be
- responded by all nodes and the executor itself to exit. Other keys will
- trigger keyboard event to be responded by the nodes which has
- registered corresponding event. See :class:`webcam.utils.EventManager`
- for details.
- """
-
- if key in (27, ord('q'), ord('Q')):
- logger.info(f'Exit event captured: {key}')
- self.event_manager.set('_exit_')
- else:
- logger.info(f'Keyboard event captured: {key}')
- self.event_manager.set(key, is_keyboard=True)
-
- def _get_camera_info(self):
- """Return the camera information in a dict."""
-
- frame_width = self.vcap.get(cv2.CAP_PROP_FRAME_WIDTH)
- frame_height = self.vcap.get(cv2.CAP_PROP_FRAME_HEIGHT)
- frame_rate = self.vcap.get(cv2.CAP_PROP_FPS)
-
- cam_info = {
- 'Camera ID': self.camera_id,
- 'Camera resolution': f'{frame_width}x{frame_height}',
- 'Camera FPS': frame_rate,
- }
-
- return cam_info
-
- def run(self):
- """Start the executor.
-
- This method starts all nodes as well as video I/O in separate threads.
- """
-
- try:
- # Start node threads
- non_daemon_nodes = []
- for node in self.node_list:
- node.start()
- if not node.daemon:
- non_daemon_nodes.append(node)
-
- # Create a thread to read video frames
- t_read = Thread(target=self._read_camera, args=())
- t_read.start()
-
- # Run display in the main thread
- self._display()
- logger.info('Display has stopped.')
-
- # joint non-daemon nodes and executor threads
- logger.info('Camera reading is about to join.')
- t_read.join()
-
- for node in non_daemon_nodes:
- logger.info(f'Node {node.name} is about to join.')
- node.join()
- logger.info('All nodes jointed successfully.')
-
- except KeyboardInterrupt:
- pass
diff --git a/requirements/mminstall.txt b/requirements/mminstall.txt
index 24be7462fc..30d8402a42 100644
--- a/requirements/mminstall.txt
+++ b/requirements/mminstall.txt
@@ -1,3 +1,3 @@
mmcv>=2.0.0,<2.1.0
-mmdet>=3.0.0,<3.1.0
+mmdet>=3.0.0,<3.2.0
mmengine>=0.4.0,<1.0.0
diff --git a/tests/test_apis/test_inferencers/test_mmpose_inferencer.py b/tests/test_apis/test_inferencers/test_mmpose_inferencer.py
index f679df27b6..8b8a4744b8 100644
--- a/tests/test_apis/test_inferencers/test_mmpose_inferencer.py
+++ b/tests/test_apis/test_inferencers/test_mmpose_inferencer.py
@@ -11,10 +11,15 @@
from mmpose.apis.inferencers import MMPoseInferencer
from mmpose.structures import PoseDataSample
+from mmpose.utils import register_all_modules
class TestMMPoseInferencer(TestCase):
+ def tearDown(self) -> None:
+ register_all_modules(init_default_scope=True)
+ return super().tearDown()
+
def test_pose2d_call(self):
try:
from mmdet.apis.det_inferencer import DetInferencer # noqa: F401
diff --git a/tests/test_apis/test_inferencers/test_pose2d_inferencer.py b/tests/test_apis/test_inferencers/test_pose2d_inferencer.py
index 63206631ba..b59232efac 100644
--- a/tests/test_apis/test_inferencers/test_pose2d_inferencer.py
+++ b/tests/test_apis/test_inferencers/test_pose2d_inferencer.py
@@ -13,10 +13,15 @@
from mmpose.apis.inferencers import Pose2DInferencer
from mmpose.structures import PoseDataSample
+from mmpose.utils import register_all_modules
class TestPose2DInferencer(TestCase):
+ def tearDown(self) -> None:
+ register_all_modules(init_default_scope=True)
+ return super().tearDown()
+
def _get_det_model_weights(self):
if platform.system().lower() == 'windows':
# the default human/animal pose estimator utilizes rtmdet-m
diff --git a/tests/test_apis/test_inferencers/test_pose3d_inferencer.py b/tests/test_apis/test_inferencers/test_pose3d_inferencer.py
index 4a3f5a613e..da4a34b160 100644
--- a/tests/test_apis/test_inferencers/test_pose3d_inferencer.py
+++ b/tests/test_apis/test_inferencers/test_pose3d_inferencer.py
@@ -12,10 +12,15 @@
from mmpose.apis.inferencers import Pose2DInferencer, Pose3DInferencer
from mmpose.structures import PoseDataSample
+from mmpose.utils import register_all_modules
class TestPose3DInferencer(TestCase):
+ def tearDown(self) -> None:
+ register_all_modules(init_default_scope=True)
+ return super().tearDown()
+
def _get_det_model_weights(self):
if platform.system().lower() == 'windows':
# the default human/animal pose estimator utilizes rtmdet-m
diff --git a/tests/test_apis/test_webcam/test_nodes/test_big_eye_effect_node.py b/tests/test_apis/test_webcam/test_nodes/test_big_eye_effect_node.py
deleted file mode 100644
index b5a8ee8f72..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_big_eye_effect_node.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-import numpy as np
-from mmengine import Config
-
-from mmpose.apis.webcam.nodes import BigeyeEffectNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-from mmpose.datasets.datasets.utils import parse_pose_metainfo
-
-
-class TestBigeyeEffectNode(unittest.TestCase):
-
- def setUp(self) -> None:
- self.node = BigeyeEffectNode(
- name='big-eye', input_buffer='vis', output_buffer='vis_bigeye')
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- h, w = image.shape[:2]
- msg.set_image(image)
-
- objects = [
- dict(
- bbox=np.array([285.1, 44.4, 510.2, 387.7]),
- keypoints=np.stack((np.random.rand(17) *
- (w - 1), np.random.rand(17) * (h - 1)),
- axis=1),
- keypoint_scores=np.ones(17),
- dataset_meta=parse_pose_metainfo(
- Config.fromfile('configs/_base_/datasets/coco.py')
- ['dataset_info']))
- ]
- msg.update_objects(objects)
-
- return msg
-
- def test_process(self):
- input_msg = self._get_input_msg()
- img_h, img_w = input_msg.get_image().shape[:2]
- self.assertEqual(len(input_msg.get_objects()), 1)
-
- output_msg = self.node.process(dict(input=input_msg))
- canvas = output_msg.get_image()
- self.assertIsInstance(canvas, np.ndarray)
- self.assertEqual(canvas.shape[0], img_h)
- self.assertEqual(canvas.shape[1], img_w)
-
- def test_bypass(self):
- input_msg = self._get_input_msg()
- img = input_msg.get_image().copy()
- output_msg = self.node.bypass(dict(input=input_msg))
- self.assertTrue((img == output_msg.get_image()).all())
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_detector_node.py b/tests/test_apis/test_webcam/test_nodes/test_detector_node.py
deleted file mode 100644
index b519744fee..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_detector_node.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-
-from mmpose.apis.webcam.nodes import DetectorNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestDetectorNode(unittest.TestCase):
- model_config = dict(
- name='detector',
- model_config='demo/mmdetection_cfg/'
- 'ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
- model_checkpoint='https://download.openmmlab.com'
- '/mmdetection/v2.0/ssd/'
- 'ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_'
- 'scratch_600e_coco_20210629_110627-974d9307.pth',
- device='cpu',
- input_buffer='_input_',
- output_buffer='det_result')
-
- def setUp(self) -> None:
- self._has_mmdet = True
- try:
- from mmdet.apis import init_detector # noqa: F401
- except (ImportError, ModuleNotFoundError):
- self._has_mmdet = False
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- msg.set_image(image)
-
- return msg
-
- def test_init(self):
-
- if not self._has_mmdet:
- return unittest.skip('mmdet is not installed')
-
- node = DetectorNode(**self.model_config)
-
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, '_input_')
- self.assertEqual(node._output_buffers[0].buffer_name, 'det_result')
- self.assertEqual(node.device, 'cpu')
-
- def test_process(self):
-
- if not self._has_mmdet:
- return unittest.skip('mmdet is not installed')
-
- node = DetectorNode(**self.model_config)
-
- input_msg = self._get_input_msg()
- self.assertEqual(len(input_msg.get_objects()), 0)
-
- output_msg = node.process(dict(input=input_msg))
- objects = output_msg.get_objects()
- # there is a person in the image
- self.assertGreaterEqual(len(objects), 1)
- self.assertIn('person', [obj['label'] for obj in objects])
- self.assertEqual(objects[0]['bbox'].shape, (4, ))
-
- def test_bypass(self):
-
- if not self._has_mmdet:
- return unittest.skip('mmdet is not installed')
-
- node = DetectorNode(**self.model_config)
-
- input_msg = self._get_input_msg()
- self.assertEqual(len(input_msg.get_objects()), 0)
-
- output_msg = node.bypass(dict(input=input_msg))
- self.assertEqual(len(output_msg.get_objects()), 0)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_monitor_node.py b/tests/test_apis/test_webcam/test_nodes/test_monitor_node.py
deleted file mode 100644
index d71654cc39..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_monitor_node.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-
-from mmpose.apis.webcam.nodes import MonitorNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestMonitorNode(unittest.TestCase):
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- msg.set_image(image)
-
- objects = [dict(label='human')]
- msg.update_objects(objects)
-
- return msg
-
- def test_init(self):
- node = MonitorNode(
- name='monitor', input_buffer='_frame_', output_buffer='display')
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, '_frame_')
- self.assertEqual(node._output_buffers[0].buffer_name, 'display')
-
- # test initialization with given ignore_items
- node = MonitorNode(
- name='monitor',
- input_buffer='_frame_',
- output_buffer='display',
- ignore_items=['ignore_item'])
- self.assertEqual(len(node.ignore_items), 1)
- self.assertEqual(node.ignore_items[0], 'ignore_item')
-
- def test_process(self):
- node = MonitorNode(
- name='monitor', input_buffer='_frame_', output_buffer='display')
-
- input_msg = self._get_input_msg()
- self.assertEqual(len(input_msg.get_route_info()), 0)
- img_shape = input_msg.get_image().shape
-
- output_msg = node.process(dict(input=input_msg))
- # 'System Info' will be added into route_info
- self.assertEqual(len(output_msg.get_route_info()), 1)
- self.assertEqual(output_msg.get_image().shape, img_shape)
-
- def test_bypass(self):
- node = MonitorNode(
- name='monitor', input_buffer='_frame_', output_buffer='display')
- input_msg = self._get_input_msg()
- self.assertEqual(len(input_msg.get_route_info()), 0)
-
- output_msg = node.bypass(dict(input=input_msg))
- # output_msg should be identity with input_msg
- self.assertEqual(len(output_msg.get_route_info()), 0)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_notice_board_node.py b/tests/test_apis/test_webcam/test_nodes/test_notice_board_node.py
deleted file mode 100644
index 31583bf815..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_notice_board_node.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-import numpy as np
-
-from mmpose.apis.webcam.nodes import NoticeBoardNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestNoticeBoardNode(unittest.TestCase):
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- h, w = image.shape[:2]
- msg.set_image(image)
-
- return msg
-
- def test_init(self):
- node = NoticeBoardNode(
- name='instruction', input_buffer='vis', output_buffer='vis_notice')
-
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, 'vis')
- self.assertEqual(node._output_buffers[0].buffer_name, 'vis_notice')
- self.assertEqual(len(node.content_lines), 1)
-
- node = NoticeBoardNode(
- name='instruction',
- input_buffer='vis',
- output_buffer='vis_notice',
- content_lines=[
- 'This is a demo for pose visualization and simple image '
- 'effects. Have fun!', '', 'Hot-keys:',
- '"v": Pose estimation result visualization',
- '"s": Sunglasses effect B-)', '"b": Big-eye effect 0_0',
- '"h": Show help information',
- '"m": Show diagnostic information', '"q": Exit'
- ])
- self.assertEqual(len(node.content_lines), 9)
-
- def test_draw(self):
- node = NoticeBoardNode(
- name='instruction', input_buffer='vis', output_buffer='vis_notice')
- input_msg = self._get_input_msg()
- img_h, img_w = input_msg.get_image().shape[:2]
-
- canvas = node.draw(input_msg)
- self.assertIsInstance(canvas, np.ndarray)
- self.assertEqual(canvas.shape[0], img_h)
- self.assertEqual(canvas.shape[1], img_w)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_object_assigner_node.py b/tests/test_apis/test_webcam/test_nodes/test_object_assigner_node.py
deleted file mode 100644
index 0405c885d7..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_object_assigner_node.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import time
-import unittest
-
-import mmcv
-import numpy as np
-
-from mmpose.apis.webcam.nodes import ObjectAssignerNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestObjectAssignerNode(unittest.TestCase):
-
- def _get_input_msg(self, with_object: bool = False):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- msg.set_image(image)
-
- if with_object:
- objects = [
- dict(
- label='person',
- class_id=0,
- bbox=np.array([285.1, 44.4, 510.2, 387.7]))
- ]
- msg.update_objects(objects)
-
- return msg
-
- def test_init(self):
- node = ObjectAssignerNode(
- name='object assigner',
- frame_buffer='_frame_',
- object_buffer='pred_result',
- output_buffer='frame')
-
- self.assertEqual(len(node._input_buffers), 2)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, 'pred_result')
- self.assertEqual(node._input_buffers[1].buffer_name, '_frame_')
- self.assertEqual(node._output_buffers[0].buffer_name, 'frame')
-
- def test_process(self):
- node = ObjectAssignerNode(
- name='object assigner',
- frame_buffer='_frame_',
- object_buffer='pred_result',
- output_buffer='frame')
-
- frame_msg = self._get_input_msg()
- object_msg = self._get_input_msg(with_object=True)
- self.assertEqual(len(frame_msg.get_objects()), 0)
- self.assertEqual(len(object_msg.get_objects()), 1)
-
- # node.synchronous is False
- output_msg = node.process(dict(frame=frame_msg, object=object_msg))
- objects = output_msg.get_objects()
- self.assertEqual(id(frame_msg), id(output_msg))
- self.assertEqual(objects[0]['_id_'],
- object_msg.get_objects()[0]['_id_'])
-
- # object_message is None
- # take a pause to increase the interval of messages' timestamp
- # to avoid ZeroDivisionError when computing fps in `process`
- time.sleep(1 / 30.0)
- frame_msg = self._get_input_msg()
- output_msg = node.process(dict(frame=frame_msg, object=None))
- objects = output_msg.get_objects()
- self.assertEqual(objects[0]['_id_'],
- object_msg.get_objects()[0]['_id_'])
-
- # node.synchronous is True
- node.synchronous = True
- time.sleep(1 / 30.0)
- frame_msg = self._get_input_msg()
- object_msg = self._get_input_msg(with_object=True)
- output_msg = node.process(dict(frame=frame_msg, object=object_msg))
- self.assertEqual(len(frame_msg.get_objects()), 0)
- self.assertEqual(id(object_msg), id(output_msg))
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_object_visualizer_node.py b/tests/test_apis/test_webcam/test_nodes/test_object_visualizer_node.py
deleted file mode 100644
index c55bc1eb8d..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_object_visualizer_node.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-import numpy as np
-from mmengine import Config
-
-from mmpose.apis.webcam.nodes import ObjectVisualizerNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-from mmpose.datasets.datasets.utils import parse_pose_metainfo
-
-
-class TestObjectVisualizerNode(unittest.TestCase):
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- h, w = image.shape[:2]
- msg.set_image(image)
-
- objects = [
- dict(
- label='person',
- class_id=0,
- bbox=np.array([285.1, 44.4, 510.2, 387.7]),
- keypoints=np.stack((np.random.rand(17) *
- (w - 1), np.random.rand(17) * (h - 1)),
- axis=1),
- keypoint_scores=np.ones(17),
- dataset_meta=parse_pose_metainfo(
- Config.fromfile('configs/_base_/datasets/coco.py')
- ['dataset_info']))
- ]
- msg.update_objects(objects)
-
- return msg
-
- def test_init(self):
- node = ObjectVisualizerNode(
- name='object visualizer',
- input_buffer='frame',
- output_buffer='vis')
-
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, 'frame')
- self.assertEqual(node._output_buffers[0].buffer_name, 'vis')
-
- def test_draw(self):
- # draw all objects with bounding box
- node = ObjectVisualizerNode(
- name='object visualizer',
- input_buffer='frame',
- output_buffer='vis')
- input_msg = self._get_input_msg()
- img_h, img_w = input_msg.get_image().shape[:2]
- self.assertEqual(len(input_msg.get_objects()), 1)
-
- canvas = node.draw(input_msg)
- self.assertIsInstance(canvas, np.ndarray)
- self.assertEqual(canvas.shape[0], img_h)
- self.assertEqual(canvas.shape[1], img_w)
-
- # draw all objects with keypoints
- node = ObjectVisualizerNode(
- name='object visualizer',
- input_buffer='frame',
- output_buffer='vis',
- must_have_keypoint=True)
- canvas = node.draw(input_msg)
- self.assertIsInstance(canvas, np.ndarray)
- self.assertEqual(canvas.shape[0], img_h)
- self.assertEqual(canvas.shape[1], img_w)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_pose_estimator_node.py b/tests/test_apis/test_webcam/test_nodes/test_pose_estimator_node.py
deleted file mode 100644
index 43345d116a..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_pose_estimator_node.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-from copy import deepcopy
-
-import mmcv
-import numpy as np
-
-from mmpose.apis.webcam.nodes import TopdownPoseEstimatorNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestTopdownPoseEstimatorNode(unittest.TestCase):
- model_config = dict(
- name='human pose estimator',
- model_config='configs/wholebody_2d_keypoint/'
- 'topdown_heatmap/coco-wholebody/'
- 'td-hm_vipnas-mbv3_dark-8xb64-210e_coco-wholebody-256x192.py',
- model_checkpoint='https://download.openmmlab.com/mmpose/'
- 'top_down/vipnas/vipnas_mbv3_coco_wholebody_256x192_dark'
- '-e2158108_20211205.pth',
- device='cpu',
- input_buffer='det_result',
- output_buffer='human_pose')
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- msg.set_image(image)
-
- objects = [
- dict(
- label='person',
- class_id=0,
- bbox=np.array([285.1, 44.4, 510.2, 387.7]))
- ]
- msg.update_objects(objects)
-
- return msg
-
- def test_init(self):
- node = TopdownPoseEstimatorNode(**self.model_config)
-
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, 'det_result')
- self.assertEqual(node._output_buffers[0].buffer_name, 'human_pose')
- self.assertEqual(node.device, 'cpu')
-
- def test_process(self):
- node = TopdownPoseEstimatorNode(**self.model_config)
-
- input_msg = self._get_input_msg()
- self.assertEqual(len(input_msg.get_objects()), 1)
-
- # run inference on all objects
- output_msg = node.process(dict(input=input_msg))
- objects = output_msg.get_objects()
-
- # there is a person in the image
- self.assertGreaterEqual(len(objects), 1)
- self.assertIn('person', [obj['label'] for obj in objects])
- self.assertEqual(objects[0]['keypoints'].shape, (133, 2))
- self.assertEqual(objects[0]['keypoint_scores'].shape, (133, ))
-
- # select objects by class_id
- model_config = self.model_config.copy()
- model_config['class_ids'] = [0]
- node = TopdownPoseEstimatorNode(**model_config)
- output_msg = node.process(dict(input=input_msg))
- self.assertGreaterEqual(len(objects), 1)
-
- # select objects by label
- model_config = self.model_config.copy()
- model_config['labels'] = ['cat']
- node = TopdownPoseEstimatorNode(**model_config)
- output_msg = node.process(dict(input=input_msg))
- self.assertGreaterEqual(len(objects), 0)
-
- def test_bypass(self):
- node = TopdownPoseEstimatorNode(**self.model_config)
-
- input_msg = self._get_input_msg()
- input_objects = input_msg.get_objects()
-
- output_msg = node.bypass(dict(input=deepcopy(input_msg)))
- output_objects = output_msg.get_objects()
- self.assertEqual(len(input_objects), len(output_objects))
- self.assertListEqual(
- list(input_objects[0].keys()), list(output_objects[0].keys()))
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_recorder_node.py b/tests/test_apis/test_webcam/test_nodes/test_recorder_node.py
deleted file mode 100644
index a646abb430..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_recorder_node.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import unittest
-
-import mmcv
-
-from mmpose.apis.webcam.nodes import RecorderNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-
-
-class TestMonitorNode(unittest.TestCase):
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- msg.set_image(image)
-
- objects = [dict(label='human')]
- msg.update_objects(objects)
-
- return msg
-
- def test_init(self):
- node = RecorderNode(
- name='recorder',
- out_video_file='webcam_output.mp4',
- input_buffer='display',
- output_buffer='_display_')
- self.assertEqual(len(node._input_buffers), 1)
- self.assertEqual(len(node._output_buffers), 1)
- self.assertEqual(node._input_buffers[0].buffer_name, 'display')
- self.assertEqual(node._output_buffers[0].buffer_name, '_display_')
- self.assertTrue(node.t_record.is_alive())
-
- def test_process(self):
- node = RecorderNode(
- name='recorder',
- out_video_file='webcam_output.mp4',
- input_buffer='display',
- output_buffer='_display_',
- buffer_size=1)
-
- if os.path.exists('webcam_output.mp4'):
- os.remove('webcam_output.mp4')
-
- input_msg = self._get_input_msg()
- node.process(dict(input=input_msg))
- self.assertEqual(node.queue.qsize(), 1)
-
- # process 5 frames in total.
- # the first frame has been processed above
- for _ in range(4):
- node.process(dict(input=input_msg))
- node.on_exit()
-
- # check the properties of output video
- self.assertTrue(os.path.exists('webcam_output.mp4'))
- video = mmcv.VideoReader('webcam_output.mp4')
- self.assertEqual(video.frame_cnt, 5)
- self.assertEqual(video.fps, 30)
- video.vcap.release()
- os.remove('webcam_output.mp4')
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_nodes/test_sunglasses_effect_node.py b/tests/test_apis/test_webcam/test_nodes/test_sunglasses_effect_node.py
deleted file mode 100644
index 1bf1c8199d..0000000000
--- a/tests/test_apis/test_webcam/test_nodes/test_sunglasses_effect_node.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-import numpy as np
-from mmengine import Config
-
-from mmpose.apis.webcam.nodes import SunglassesEffectNode
-from mmpose.apis.webcam.utils.message import FrameMessage
-from mmpose.datasets.datasets.utils import parse_pose_metainfo
-
-
-class TestSunglassesEffectNode(unittest.TestCase):
-
- def setUp(self) -> None:
- self.node = SunglassesEffectNode(
- name='sunglasses',
- input_buffer='vis',
- output_buffer='vis_sunglasses')
-
- def _get_input_msg(self):
-
- msg = FrameMessage(None)
-
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- h, w = image.shape[:2]
- msg.set_image(image)
-
- objects = [
- dict(
- keypoints=np.stack((np.random.rand(17) *
- (w - 1), np.random.rand(17) * (h - 1)),
- axis=1),
- keypoint_scores=np.ones(17),
- dataset_meta=parse_pose_metainfo(
- Config.fromfile('configs/_base_/datasets/coco.py')
- ['dataset_info']))
- ]
- msg.update_objects(objects)
-
- return msg
-
- def test_process(self):
- input_msg = self._get_input_msg()
- img_h, img_w = input_msg.get_image().shape[:2]
- self.assertEqual(len(input_msg.get_objects()), 1)
-
- output_msg = self.node.process(dict(input=input_msg))
- canvas = output_msg.get_image()
- self.assertIsInstance(canvas, np.ndarray)
- self.assertEqual(canvas.shape[0], img_h)
- self.assertEqual(canvas.shape[1], img_w)
-
- def test_bypass(self):
- input_msg = self._get_input_msg()
- img = input_msg.get_image().copy()
- output_msg = self.node.bypass(dict(input=input_msg))
- self.assertTrue((img == output_msg.get_image()).all())
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_utils/test_buffer.py b/tests/test_apis/test_webcam/test_utils/test_buffer.py
deleted file mode 100644
index 2708433ac1..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_buffer.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-from queue import Queue
-
-from mmpose.apis.webcam.utils.buffer import Buffer, BufferManager
-
-
-class TestBuffer(unittest.TestCase):
-
- def test_buffer(self):
-
- buffer = Buffer(maxsize=1)
- for i in range(3):
- buffer.put_force(i)
- item = buffer.get()
- self.assertEqual(item, 2)
-
-
-class TestBufferManager(unittest.TestCase):
-
- def _get_buffer_dict(self):
- return dict(example_buffer=Buffer())
-
- def test_init(self):
-
- # test default initialization
- buffer_manager = BufferManager()
- self.assertIn('_buffers', dir(buffer_manager))
- self.assertIsInstance(buffer_manager._buffers, dict)
-
- # test initialization with given buffers
- buffers = self._get_buffer_dict()
- buffer_manager = BufferManager(buffers=buffers)
- self.assertIn('_buffers', dir(buffer_manager))
- self.assertIsInstance(buffer_manager._buffers, dict)
- self.assertIn('example_buffer', buffer_manager._buffers.keys())
- # test __contains__
- self.assertIn('example_buffer', buffer_manager)
-
- # test initialization with incorrect buffers
- buffers['incorrect_buffer'] = Queue()
- with self.assertRaises(ValueError):
- buffer_manager = BufferManager(buffers=buffers)
-
- def test_buffer_operations(self):
- buffer_manager = BufferManager()
-
- # test register_buffer
- buffer_manager.register_buffer('example_buffer', 1)
- self.assertIn('example_buffer', buffer_manager)
- self.assertEqual(buffer_manager._buffers['example_buffer'].maxsize, 1)
-
- # test buffer operations
- buffer_manager.put('example_buffer', 0)
- item = buffer_manager.get('example_buffer')
- self.assertEqual(item, 0)
-
- buffer_manager.put('example_buffer', 0)
- self.assertTrue(buffer_manager.is_full('example_buffer'))
- buffer_manager.put_force('example_buffer', 1)
- item = buffer_manager.get('example_buffer')
- self.assertEqual(item, 1)
- self.assertTrue(buffer_manager.is_empty('example_buffer'))
-
- # test get_info
- buffer_info = buffer_manager.get_info()
- self.assertIn('example_buffer', buffer_info)
- self.assertEqual(buffer_info['example_buffer']['size'], 0)
- self.assertEqual(buffer_info['example_buffer']['maxsize'], 1)
-
- # test get_sub_manager
- buffer_manager = buffer_manager.get_sub_manager(['example_buffer'])
- self.assertIsInstance(buffer_manager, BufferManager)
- self.assertIn('example_buffer', buffer_manager)
- self.assertEqual(buffer_manager._buffers['example_buffer'].maxsize, 1)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_utils/test_event.py b/tests/test_apis/test_webcam/test_utils/test_event.py
deleted file mode 100644
index 7ff4b234bd..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_event.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-from threading import Event
-
-from mmpose.apis.webcam.utils.event import EventManager
-
-
-class TestEventManager(unittest.TestCase):
-
- def test_event_manager(self):
- event_manager = EventManager()
-
- # test register_event
- event_manager.register_event('example_event')
- self.assertIn('example_event', event_manager._events)
- self.assertIsInstance(event_manager._events['example_event'], Event)
- self.assertFalse(event_manager.is_set('example_event'))
-
- # test event operations
- event_manager.set('q', is_keyboard=True)
- self.assertIn('_keyboard_q', event_manager._events)
- self.assertTrue(event_manager.is_set('q', is_keyboard=True))
-
- flag = event_manager.wait('q', is_keyboard=True)
- self.assertTrue(flag)
-
- event_manager.wait_and_handle('q', is_keyboard=True)
- event_manager.clear('q', is_keyboard=True)
- self.assertFalse(event_manager._events['_keyboard_q']._flag)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_utils/test_image_capture.py b/tests/test_apis/test_webcam/test_utils/test_image_capture.py
deleted file mode 100644
index 8165299b89..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_image_capture.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import cv2
-import numpy as np
-
-from mmpose.apis.webcam.utils.image_capture import ImageCapture
-
-
-class TestImageCapture(unittest.TestCase):
-
- def setUp(self):
- self.image_path = 'tests/data/coco/000000000785.jpg'
- self.image = cv2.imread(self.image_path)
-
- def test_init(self):
- image_cap = ImageCapture(self.image_path)
- self.assertIsInstance(image_cap.image, np.ndarray)
-
- image_cap = ImageCapture(self.image)
- self.assertTrue((self.image == image_cap.image).all())
-
- def test_image_capture(self):
- image_cap = ImageCapture(self.image_path)
-
- # test operations
- self.assertTrue(image_cap.isOpened())
-
- flag, image_ = image_cap.read()
- self.assertTrue(flag)
- self.assertTrue((self.image == image_).all())
-
- image_cap.release()
- self.assertIsInstance(image_cap.image, np.ndarray)
-
- img_h = image_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
- self.assertAlmostEqual(img_h, self.image.shape[0])
- img_w = image_cap.get(cv2.CAP_PROP_FRAME_WIDTH)
- self.assertAlmostEqual(img_w, self.image.shape[1])
- fps = image_cap.get(cv2.CAP_PROP_FPS)
- self.assertTrue(np.isnan(fps))
-
- with self.assertRaises(NotImplementedError):
- _ = image_cap.get(-1)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_utils/test_message.py b/tests/test_apis/test_webcam/test_utils/test_message.py
deleted file mode 100644
index 536b672e78..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_message.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-import mmcv
-import numpy as np
-
-from mmpose.apis.webcam.nodes import MonitorNode
-from mmpose.apis.webcam.utils.message import FrameMessage, Message
-
-
-class TestMessage(unittest.TestCase):
-
- def _get_monitor_node(self):
- return MonitorNode(
- name='monitor', input_buffer='_frame_', output_buffer='display')
-
- def _get_image(self):
- image_path = 'tests/data/coco/000000000785.jpg'
- image = mmcv.imread(image_path)
- return image
-
- def test_message(self):
- msg = Message()
-
- with self.assertWarnsRegex(
- Warning, '`node_name` and `node_type` will be '
- 'overridden if node is provided.'):
- node = self._get_monitor_node()
- msg.update_route_info(node=node, node_name='monitor')
-
- route_info = msg.get_route_info()
- self.assertEqual(len(route_info), 1)
- self.assertEqual(route_info[0]['node'], 'monitor')
-
- msg.set_route_info([dict(node='recorder', node_type='RecorderNode')])
- msg.merge_route_info(route_info)
- route_info = msg.get_route_info()
- self.assertEqual(len(route_info), 2)
- self.assertEqual(route_info[1]['node'], 'monitor')
-
- def test_frame_message(self):
- msg = FrameMessage(None)
-
- # test set/get image
- self.assertIsInstance(msg.data, dict)
- self.assertIsNone(msg.get_image())
-
- msg.set_image(self._get_image())
- self.assertIsInstance(msg.get_image(), np.ndarray)
-
- # test set/get objects
- objects = msg.get_objects()
- self.assertEqual(len(objects), 0)
-
- objects = [dict(label='cat'), dict(label='dog')]
- msg.update_objects(objects)
- dog_objects = msg.get_objects(lambda x: x['label'] == 'dog')
- self.assertEqual(len(dog_objects), 1)
-
- msg.set_objects(objects[:1])
- dog_objects = msg.get_objects(lambda x: x['label'] == 'dog')
- self.assertEqual(len(dog_objects), 0)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_utils/test_misc.py b/tests/test_apis/test_webcam/test_utils/test_misc.py
deleted file mode 100644
index d60fdaa002..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_misc.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import tempfile
-import unittest
-
-import mmcv
-import numpy as np
-
-from mmpose.apis.webcam.utils.misc import (copy_and_paste, expand_and_clamp,
- get_cached_file_path,
- get_config_path, is_image_file,
- screen_matting)
-
-
-class TestMISC(unittest.TestCase):
-
- def test_get_cached_file_path(self):
- url = 'https://user-images.githubusercontent.com/15977946/' \
- '170850839-acc59e26-c6b3-48c9-a9ec-87556edb99ed.jpg'
- with tempfile.TemporaryDirectory() as tmpdir:
- cached_file = get_cached_file_path(
- url, save_dir=tmpdir, file_name='sunglasses.jpg')
- self.assertTrue(os.path.exists(cached_file))
- # check if image is successfully cached
- img = mmcv.imread(cached_file)
- self.assertIsNotNone(img)
-
- def test_get_config_path(self):
- cfg_path = 'configs/_base_/datasets/coco.py'
- path_in_module = get_config_path(cfg_path, 'mmpose')
- self.assertEqual(cfg_path, path_in_module)
-
- cfg_path = '_base_/datasets/coco.py'
- with self.assertRaises(FileNotFoundError):
- _ = get_config_path(cfg_path, 'mmpose')
-
- def test_is_image_file(self):
- self.assertTrue(is_image_file('example.png'))
- self.assertFalse(is_image_file('example.mp4'))
-
- def test_expand_and_clamp(self):
- img_shape = [125, 125, 3]
- bbox = [0, 0, 40, 40] # [x1, y1, x2, y2]
-
- expanded_bbox = expand_and_clamp(bbox, img_shape)
- self.assertListEqual(expanded_bbox, [0, 0, 45, 45])
-
- def test_screen_matting(self):
- img = np.random.randint(0, 256, size=(100, 100, 3))
-
- # test with supported colors
- for color in 'gbkw':
- img_mat = screen_matting(img, color=color)
- self.assertEqual(len(img_mat.shape), 2)
- self.assertTupleEqual(img_mat.shape, img.shape[:2])
-
- # test with unsupported arguments
- with self.assertRaises(ValueError):
- screen_matting(img)
-
- with self.assertRaises(NotImplementedError):
- screen_matting(img, color='r')
-
- def test_copy_and_paste(self):
- img = np.random.randint(0, 256, size=(50, 50, 3))
- background_img = np.random.randint(0, 256, size=(200, 200, 3))
- mask = screen_matting(background_img, color='b')
-
- output_img = copy_and_paste(img, background_img, mask)
- self.assertTupleEqual(output_img.shape, background_img.shape)
diff --git a/tests/test_apis/test_webcam/test_utils/test_pose.py b/tests/test_apis/test_webcam/test_utils/test_pose.py
deleted file mode 100644
index 06f4fc0e41..0000000000
--- a/tests/test_apis/test_webcam/test_utils/test_pose.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-from mmengine import Config
-
-from mmpose.apis.webcam.utils.pose import (get_eye_keypoint_ids,
- get_face_keypoint_ids,
- get_hand_keypoint_ids,
- get_mouth_keypoint_ids,
- get_wrist_keypoint_ids)
-from mmpose.datasets.datasets.utils import parse_pose_metainfo
-
-
-class TestGetKeypointIds(unittest.TestCase):
-
- def setUp(self) -> None:
- datasets_meta = dict(
- coco=Config.fromfile('configs/_base_/datasets/coco.py'),
- coco_wholebody=Config.fromfile(
- 'configs/_base_/datasets/coco_wholebody.py'),
- animalpose=Config.fromfile(
- 'configs/_base_/datasets/animalpose.py'),
- ap10k=Config.fromfile('configs/_base_/datasets/ap10k.py'),
- wflw=Config.fromfile('configs/_base_/datasets/wflw.py'),
- )
- self.datasets_meta = {
- key: parse_pose_metainfo(value['dataset_info'])
- for key, value in datasets_meta.items()
- }
-
- def test_get_eye_keypoint_ids(self):
-
- # coco dataset
- coco_dataset_meta = self.datasets_meta['coco'].copy()
- left_eye_idx, right_eye_idx = get_eye_keypoint_ids(coco_dataset_meta)
- self.assertEqual(left_eye_idx, 1)
- self.assertEqual(right_eye_idx, 2)
-
- del coco_dataset_meta['keypoint_name2id']['left_eye']
- left_eye_idx, right_eye_idx = get_eye_keypoint_ids(coco_dataset_meta)
- self.assertEqual(left_eye_idx, 1)
- self.assertEqual(right_eye_idx, 2)
-
- # animalpose dataset
- animalpose_dataset_meta = self.datasets_meta['animalpose'].copy()
- left_eye_idx, right_eye_idx = get_eye_keypoint_ids(
- animalpose_dataset_meta)
- self.assertEqual(left_eye_idx, 0)
- self.assertEqual(right_eye_idx, 1)
-
- # dataset without keys `'left_eye'` or `'right_eye'`
- wflw_dataset_meta = self.datasets_meta['wflw'].copy()
- with self.assertRaises(ValueError):
- _ = get_eye_keypoint_ids(wflw_dataset_meta)
-
- def test_get_face_keypoint_ids(self):
-
- # coco_wholebody dataset
- wholebody_dataset_meta = self.datasets_meta['coco_wholebody'].copy()
- face_indices = get_face_keypoint_ids(wholebody_dataset_meta)
- for i, ind in enumerate(range(23, 91)):
- self.assertEqual(face_indices[i], ind)
-
- del wholebody_dataset_meta['keypoint_name2id']['face-0']
- face_indices = get_face_keypoint_ids(wholebody_dataset_meta)
- for i, ind in enumerate(range(23, 91)):
- self.assertEqual(face_indices[i], ind)
-
- # dataset without keys `'face-x'`
- wflw_dataset_meta = self.datasets_meta['wflw'].copy()
- with self.assertRaises(ValueError):
- _ = get_face_keypoint_ids(wflw_dataset_meta)
-
- def test_get_wrist_keypoint_ids(self):
-
- # coco dataset
- coco_dataset_meta = self.datasets_meta['coco'].copy()
- left_wrist_idx, right_wrist_idx = get_wrist_keypoint_ids(
- coco_dataset_meta)
- self.assertEqual(left_wrist_idx, 9)
- self.assertEqual(right_wrist_idx, 10)
-
- del coco_dataset_meta['keypoint_name2id']['left_wrist']
- left_wrist_idx, right_wrist_idx = get_wrist_keypoint_ids(
- coco_dataset_meta)
- self.assertEqual(left_wrist_idx, 9)
- self.assertEqual(right_wrist_idx, 10)
-
- # animalpose dataset
- animalpose_dataset_meta = self.datasets_meta['animalpose'].copy()
- left_wrist_idx, right_wrist_idx = get_wrist_keypoint_ids(
- animalpose_dataset_meta)
- self.assertEqual(left_wrist_idx, 16)
- self.assertEqual(right_wrist_idx, 17)
-
- # ap10k
- ap10k_dataset_meta = self.datasets_meta['ap10k'].copy()
- left_wrist_idx, right_wrist_idx = get_wrist_keypoint_ids(
- ap10k_dataset_meta)
- self.assertEqual(left_wrist_idx, 7)
- self.assertEqual(right_wrist_idx, 10)
-
- # dataset without keys `'left_wrist'` or `'right_wrist'`
- wflw_dataset_meta = self.datasets_meta['wflw'].copy()
- with self.assertRaises(ValueError):
- _ = get_wrist_keypoint_ids(wflw_dataset_meta)
-
- def test_get_mouth_keypoint_ids(self):
-
- # coco_wholebody dataset
- wholebody_dataset_meta = self.datasets_meta['coco_wholebody'].copy()
- mouth_index = get_mouth_keypoint_ids(wholebody_dataset_meta)
- self.assertEqual(mouth_index, 85)
-
- del wholebody_dataset_meta['keypoint_name2id']['face-62']
- mouth_index = get_mouth_keypoint_ids(wholebody_dataset_meta)
- self.assertEqual(mouth_index, 85)
-
- # dataset without keys `'face-62'`
- wflw_dataset_meta = self.datasets_meta['wflw'].copy()
- with self.assertRaises(ValueError):
- _ = get_mouth_keypoint_ids(wflw_dataset_meta)
-
- def test_get_hand_keypoint_ids(self):
-
- # coco_wholebody dataset
- wholebody_dataset_meta = self.datasets_meta['coco_wholebody'].copy()
- hand_indices = get_hand_keypoint_ids(wholebody_dataset_meta)
- for i, ind in enumerate(range(91, 133)):
- self.assertEqual(hand_indices[i], ind)
-
- del wholebody_dataset_meta['keypoint_name2id']['left_hand_root']
- hand_indices = get_hand_keypoint_ids(wholebody_dataset_meta)
- for i, ind in enumerate(range(91, 133)):
- self.assertEqual(hand_indices[i], ind)
-
- # dataset without hand keys
- wflw_dataset_meta = self.datasets_meta['wflw'].copy()
- with self.assertRaises(ValueError):
- _ = get_hand_keypoint_ids(wflw_dataset_meta)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_apis/test_webcam/test_webcam_executor.py b/tests/test_apis/test_webcam/test_webcam_executor.py
deleted file mode 100644
index 0436308869..0000000000
--- a/tests/test_apis/test_webcam/test_webcam_executor.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import unittest
-
-from mmengine import Config
-
-from mmpose.apis.webcam import WebcamExecutor
-
-
-class TestWebcamExecutor(unittest.TestCase):
-
- def setUp(self) -> None:
- config = Config.fromfile('demo/webcam_cfg/test_camera.py').executor_cfg
- config.camera_id = 'tests/data/posetrack18/videos/' \
- '000001_mpiinew_test/000001_mpiinew_test.mp4'
- self.executor = WebcamExecutor(**config)
-
- def test_init(self):
-
- self.assertEqual(len(self.executor.node_list), 2)
- self.assertEqual(self.executor.node_list[0].name, 'monitor')
- self.assertEqual(self.executor.node_list[1].name, 'recorder')
-
-
-if __name__ == '__main__':
- unittest.main()