Merge branch 'main' into dev-1.x
cir7 committed May 6, 2023
2 parents 81766c7 + 718a17d commit 30c3380
Showing 30 changed files with 414 additions and 399 deletions.
1 change: 0 additions & 1 deletion .circleci/test.yml
@@ -114,7 +114,6 @@ jobs:
docker build .circleci/docker -t mmaction:gpu --build-arg PYTORCH=<< parameters.torch >> --build-arg CUDA=<< parameters.cuda >> --build-arg CUDNN=<< parameters.cudnn >>
docker run --gpus all -t -d -v /home/circleci/project:/mmaction -w /mmaction --name mmaction mmaction:gpu
docker exec mmaction apt-get update
- docker exec mmaction pip install "numpy==1.23"
docker exec mmaction apt-get upgrade -y
docker exec mmaction apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 libturbojpeg pkg-config
docker exec mmaction apt-get install -y libavdevice-dev libavfilter-dev libopus-dev libvpx-dev libsrtp2-dev libsndfile1
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/1-bug-report.yml
@@ -19,8 +19,8 @@ body:
label: Branch
description: Which branch/version are you using?
options:
- - master branch (0.x version, such as `v0.10.0`, or `dev` branch)
- - 1.x branch (1.x version, such as `v1.0.0rc2`, or `dev-1.x` branch)
+ - main branch (1.x version, such as `v1.0.0`, or `dev-1.x` branch)
+ - 0.x branch (0.x version, such as `v0.24.1`)
validations:
required: true

4 changes: 2 additions & 2 deletions .github/workflows/pr_stage_test.yml
@@ -141,10 +141,10 @@ jobs:
platform: [cpu, cu111]
steps:
- uses: actions/checkout@v3
- - name: Set up Python ${{ matrix.python }}
+ - name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
- python-version: ${{ matrix.python }}
+ python-version: ${{ matrix.python-version }}
- name: Upgrade pip
run: |
python -V
16 changes: 16 additions & 0 deletions .owners.yml
@@ -0,0 +1,16 @@
+ assign:
+   issues: enabled
+   pull_requests: disabled
+   strategy:
+     # random
+     daily-shift-based
+   scedule:
+     '*/1 * * * *'
+   assignees:
+     - hukkai
+     - Dai-Wenxun
+     - cir7
+     - Dai-Wenxun
+     - cir7
+     - hukkai
+     - hukkai
6 changes: 5 additions & 1 deletion .readthedocs.yml
@@ -1,10 +1,14 @@
version: 2

+ build:
+   os: ubuntu-22.04
+   tools:
+     python: "3.7"

formats:
- epub

python:
-   version: 3.7
install:
- requirements: requirements/docs.txt
- requirements: requirements/readthedocs.txt
172 changes: 85 additions & 87 deletions README.md

Large diffs are not rendered by default.

184 changes: 91 additions & 93 deletions README_zh-CN.md

Large diffs are not rendered by default.

@@ -58,7 +58,7 @@
]

train_dataloader = dict(
- batch_size=4,
+ batch_size=32,
num_workers=8,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
@@ -99,15 +99,4 @@
# - `enable` means enable scaling LR automatically
# or not by default.
# - `base_batch_size` = (8 GPUs) x (32 samples per GPU).
- auto_scale_lr = dict(enable=True, base_batch_size=256)
-
- train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=10, val_interval=3)
- param_scheduler = [
-     dict(
-         type='MultiStepLR',
-         begin=0,
-         end=10,
-         by_epoch=True,
-         milestones=[4, 8],
-         gamma=0.1)
- ]
+ auto_scale_lr = dict(enable=False, base_batch_size=256)
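For context, the `auto_scale_lr` option toggled above implements the linear scaling rule: the learning rate is multiplied by the ratio of the actual global batch size to `base_batch_size`. A minimal sketch of that rule (the helper name `scale_lr` is illustrative, not the MMEngine API):

```python
def scale_lr(base_lr, base_batch_size, actual_batch_size):
    """Linear scaling rule: the LR grows/shrinks with the effective batch size."""
    return base_lr * actual_batch_size / base_batch_size

# With base_batch_size=256 (8 GPUs x 32 samples) and one GPU at batch_size=32,
# a configured LR of 0.1 would be scaled down 8x when `enable=True`;
# with `enable=False` the configured LR is used unchanged.
lr = scale_lr(base_lr=0.1, base_batch_size=256, actual_batch_size=32)
```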
2 changes: 1 addition & 1 deletion demo/demo_skeleton.py
@@ -115,7 +115,7 @@ def visualize(args, frames, data_samples, action_label):
show=False,
wait_time=0,
out_file=None,
- kpt_score_thr=0.3)
+ kpt_thr=0.3)
vis_frame = visualizer.get_image()
cv2.putText(vis_frame, action_label, (10, 30), FONTFACE, FONTSCALE,
FONTCOLOR, THICKNESS, LINETYPE)
9 changes: 5 additions & 4 deletions demo/mmaction2_tutorial.ipynb
@@ -1,13 +1,14 @@
{
"cells": [
{
+ "attachments": {},
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
- "<a href=\"https://colab.research.google.com/github/open-mmlab/mmaction2/blob/dev-1.x/demo/mmaction2_tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+ "<a href=\"https://colab.research.google.com/github/open-mmlab/mmaction2/blob/main/demo/mmaction2_tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
@@ -332,14 +333,14 @@
"!pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html\n",
"\n",
"# Install mmcv\n",
- "!pip install 'mmcv>=2.0.0rc1' -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html\n",
+ "!pip install 'mmcv>=2.0.0' -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html\n",
"\n",
"# Install mmengine\n",
"!pip install mmengine\n",
"\n",
"# Install mmaction2\n",
"!rm -rf mmaction2\n",
- "!git clone https://github.com/open-mmlab/mmaction2.git -b dev-1.x\n",
+ "!git clone https://github.com/open-mmlab/mmaction2.git -b main\n",
"%cd mmaction2\n",
"\n",
"!pip install -e .\n",
@@ -1799,7 +1800,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.13 (default, Mar 29 2022, 02:18:16) \n[GCC 7.5.0]"
+ "version": "3.7.16"
},
"vscode": {
"interpreter": {
47 changes: 23 additions & 24 deletions docs/en/advanced_guides/customize_dataset.md
@@ -1,34 +1,34 @@
- # Customize Datasets
+ # Customize Dataset

In this tutorial, we will introduce some methods about how to customize your own dataset by online conversion.

- - [Customize Datasets](#customize-datasets)
+ - [Customize Dataset](#customize-dataset)
- [General understanding of the Dataset in MMAction2](#general-understanding-of-the-dataset-in-mmaction2)
- [Customize new datasets](#customize-new-datasets)
- [Customize keypoint format for PoseDataset](#customize-keypoint-format-for-posedataset)

## General understanding of the Dataset in MMAction2

- MMAction2 provides specific Dataset class according to the task, e.g. `VideoDataset`/`RawframeDataset` for action recognition, `AVADataset` for spatio-temporal action detection, `PoseDataset` for skeleton-based action recognition. All these specific datasets only need to implement `get_data_info(self, idx)` to build a data list from the annotation file, while other functions are handled by the superclass. The following table shows the inherent relationship and the main function of the modules.
+ MMAction2 provides task-specific `Dataset` class, e.g. `VideoDataset`/`RawframeDataset` for action recognition, `AVADataset` for spatio-temporal action detection, `PoseDataset` for skeleton-based action recognition. These task-specific datasets only require the implementation of `load_data_list(self)` for generating a data list from the annotation file. The remaining functions are automatically handled by the superclass (i.e., `BaseActionDataset` and `BaseDataset`). The following table shows the inherent relationship and the main method of the modules.

- | Class Name | Functions |
- | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
- | MMAction2::VideoDataset | `load_data_list(self)` <br> Build data list from the annotation file. |
- | MMAction2::BaseActionDataset | `get_data_info(self, idx)` <br> Given the `idx`, return the corresponding data sample from data list |
- | MMEngine::BaseDataset | `__getitem__(self, idx)` <br> Given the `idx`, call `get_data_info` to get data sample, then call the `pipeline` to perform transforms and augmentation in `train_pipeline` or `val_pipeline` |
+ | Class Name | Class Method |
+ | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | `MMAction2::VideoDataset` | `load_data_list(self)` <br> Build data list from the annotation file. |
+ | `MMAction2::BaseActionDataset` | `get_data_info(self, idx)` <br> Given the `idx`, return the corresponding data sample from the data list. |
+ | `MMEngine::BaseDataset` | `__getitem__(self, idx)` <br> Given the `idx`, call `get_data_info` to get the data sample, then call the `pipeline` to perform transforms and augmentation in `train_pipeline` or `val_pipeline` . |

## Customize new datasets

- For most scenarios, we don't need to customize a new dataset class, offline conversion is recommended way to use your data. But customizing a new dataset class is also easy in MMAction2. As above mentioned, a dataset for a specific task usually only needs to implement `load_data_list(self)` to generate the data list from the annotation file. It is worth noting that elements in the `data_list` are `dict` with fields required in the following pipeline.
+ Although offline conversion is the preferred method for utilizing your own data in most cases, MMAction2 offers a convenient process for creating a customized `Dataset` class. As mentioned previously, task-specific datasets only require the implementation of `load_data_list(self)` for generating a data list from the annotation file. It is noteworthy that the elements in the `data_list` are `dict` with fields that are essential for the subsequent processes in the `pipeline`.

- Take `VideoDataset` as an example, `train_pipeline`/`val_pipeline` requires `'filename'` in `DecordInit` and `'label'` in `PackActionInput`, so data samples in the data list have 2 fields: `'filename'` and `'label'`.
- you can refer to [customize pipeline](customize_pipeline.md) for more details about the pipeline.
+ Taking `VideoDataset` as an example, `train_pipeline`/`val_pipeline` require `'filename'` in `DecordInit` and `'label'` in `PackActionInputs`. Consequently, the data samples in the `data_list` must contain 2 fields: `'filename'` and `'label'`.
+ Please refer to [customize pipeline](customize_pipeline.md) for more details about the `pipeline`.

```
data_list.append(dict(filename=filename, label=label))
```
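To make the pattern above concrete, here is a rough, self-contained sketch of a `load_data_list` implementation. The `ToyVideoDataset` class is illustrative only: it skips the real `BaseActionDataset`/`BaseDataset` machinery, and it assumes an annotation format of one `<filename> <label>` pair per line.

```python
import os
import tempfile

class ToyVideoDataset:
    """Sketch: parse an annotation file into a list of per-sample dicts."""

    def __init__(self, ann_file):
        self.ann_file = ann_file
        self.data_list = self.load_data_list()

    def load_data_list(self):
        data_list = []
        with open(self.ann_file) as f:
            for line in f:
                if not line.strip():
                    continue
                filename, label = line.strip().rsplit(' ', 1)
                # Each sample carries exactly the fields the pipeline expects.
                data_list.append(dict(filename=filename, label=int(label)))
        return data_list

    def get_data_info(self, idx):
        # In MMAction2 this lookup is inherited from BaseActionDataset.
        return self.data_list[idx]

# Tiny usage example against a throwaway annotation file.
with tempfile.TemporaryDirectory() as tmpdir:
    ann = os.path.join(tmpdir, 'ann.txt')
    with open(ann, 'w') as f:
        f.write('some/path/video_1.mp4 3\nsome/path/video_2.mp4 7\n')
    ds = ToyVideoDataset(ann)
    assert ds.get_data_info(0) == dict(filename='some/path/video_1.mp4', label=3)
```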

- While `AVADataset` is more complex, elements in the data list consist of several fields about video data, and it further overwrites `get_data_info(self, idx)` to convert keys, which are required in spatio-temporal action detection pipeline.
+ However, `AVADataset` is more complex, data samples in the `data_list` consist of several fields about the video data. Moreover, it overwrites `get_data_info(self, idx)` to convert keys that are indispensable in the spatio-temporal action detection pipeline.

```python

class AVADataset(BaseActionDataset):
    ...
```

## Customize keypoint format for PoseDataset

- MMAction2 currently supports three kinds of keypoint formats: `coco`, `nturgb+d` and `openpose`. If your use one of them, just specify the corresponding format in the following modules:
+ MMAction2 currently supports three keypoint formats: `coco`, `nturgb+d` and `openpose`. If you use one of these formats, you may simply specify the corresponding format in the following modules:

- For Graph Convolutional Networks, such as AAGCN, STGCN...
+ For Graph Convolutional Networks, such as AAGCN, STGCN, ...

- - transform: argument `dataset` in `JointToBone`.
- - backbone: argument `graph_cfg` in Graph Convolutional Networks.
+ - `pipeline`: argument `dataset` in `JointToBone`.
+ - `backbone`: argument `graph_cfg` in Graph Convolutional Networks.

- And for PoseC3D:
+ For PoseC3D:

- - transform: In `Flip`, specify `left_kp` and `right_kp` according to the keypoint symmetrical relationship, or remove the transform for asymmetric keypoints structure.
- - transform: In `GeneratePoseTarget`, specify `skeletons`, `left_limb`, `right_limb` if `with_limb` is `true`, and `left_kp`, `right_kp` if `with_kp` is `true`.
+ - `pipeline`: In `Flip`, specify `left_kp` and `right_kp` based on the symmetrical relationship between keypoints.
+ - `pipeline`: In `GeneratePoseTarget`, specify `skeletons`, `left_limb`, `right_limb` if `with_limb` is `True`, and `left_kp`, `right_kp` if `with_kp` is `True`.

- For a custom format, you need to add a new graph layout into models and transforms, which defines the keypoints and their connection relationship.
+ If using a custom keypoint format, it is necessary to include a new graph layout in both the `backbone` and `pipeline`. This layout will define the keypoints and their connection relationship.

- Take the coco dataset as an example, we define a layout named `coco` in `Graph`, and set its `inward` as followed, which includes all connections between nodes, each connection is a pair of nodes from far to near. The order of connections does not matter. Other settings about coco are to set the number of nodes to 17, and set node 0 as the center node.
+ Taking the `coco` dataset as an example, we define a layout named `coco` in `Graph`. The `inward` connections of this layout comprise all node connections, with each **centripetal** connection consisting of a tuple of nodes. Additional settings for `coco` include specifying the number of nodes as `17` and the `node 0` as the central node.

```python

self.inward = [(15, 13), (13, 11), (16, 14), (14, 12), (11, 5),
self.center = 0
```

- Similarly, we define the `pairs` in `JointToBone`, adding a bone of `(0, 0)` to align the number of bones to the nodes. The `pairs` of coco dataset is as followed, same as above mentioned, the order of pairs does not matter.
+ Similarly, we define the `pairs` in `JointToBone`, adding a bone of `(0, 0)` to align the number of bones to the nodes. The `pairs` of the coco dataset are shown below, and the order of `pairs` in `JointToBone` is irrelevant.

```python

self.pairs = ((0, 0), (1, 0), (2, 0), (3, 1), (4, 2), (5, 0),
(12, 0), (13, 11), (14, 12), (15, 13), (16, 14))
```
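To make the role of `pairs` concrete, here is a small sketch of what a bone transform computes (illustrative only, not the actual `JointToBone` implementation): each bone is the offset between a joint and its paired joint, and the `(0, 0)` pair yields a zero-length bone so the bone count matches the joint count.

```python
def joints_to_bones(joints, pairs):
    """joints: list of (x, y) keypoints; returns one bone vector per pair."""
    return [
        (joints[a][0] - joints[b][0], joints[a][1] - joints[b][1])
        for a, b in pairs
    ]

pairs = ((0, 0), (1, 0), (2, 0))
joints = [(0.0, 0.0), (1.0, 2.0), (-1.0, 2.0)]
bones = joints_to_bones(joints, pairs)
assert bones[0] == (0.0, 0.0)  # the (0, 0) pair gives a zero-length bone
assert bones[1] == (1.0, 2.0)
```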

- For your custom format, just define the above setting as your graph structure, and specify in your config file as followed, we take `STGCN` as an example, assuming you already define a `custom_dataset` in `Graph` and `JointToBone`, and num_classes is n.
+ To use your custom keypoint format, simply define the aforementioned settings as your graph structure and specify them in your config file as shown below. In this example, we will use `STGCN`, with `n` denoting the number of classes and `custom_dataset` defined in `Graph` and `JointToBone`.

```python

model = dict(
type='RecognizerGCN',
backbone=dict(
```
16 changes: 8 additions & 8 deletions docs/en/advanced_guides/customize_logging.md
@@ -1,6 +1,6 @@
# Customize Logging

- MMAction2 produces a lot of logs during the running process, such as loss, iteration time, learning rate, etc. In this section, we will introduce you how to output custom log. More details about the logging system, please refer to [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/logging.html).
+ MMAction2 produces a lot of logs during the running process, such as loss, iteration time, learning rate, etc. In this section, we will introduce you how to output custom log. More details about the logging system, please refer to [MMEngine Tutorial](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/logging.html).

- [Customize Logging](#customize-logging)
- [Flexible Logging System](#flexible-logging-system)
@@ -9,13 +9,13 @@

## Flexible Logging System

- MMAction2 configures the logging system by LogProcessor in [default_runtime](/configs/_base_/default_runtime.py) in default, which is equivalent to:
+ The MMAction2 logging system is configured by the `LogProcessor` in [default_runtime](/configs/_base_/default_runtime.py) by default, which is equivalent to:

```python
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
```

- Defaultly, LogProcessor catches all filed start with `loss` return by `model.forward`. For example in the following model, `loss1` and `loss2` will be logged automatically without additional configuration.
+ By default, the `LogProcessor` captures all fields that begin with `loss` returned by `model.forward`. For instance, in the following model, `loss1` and `loss2` will be logged automatically without any additional configuration.

```python
from mmengine.model import BaseModel
class ToyModel(BaseModel):
    ...
        return dict(loss1=loss1, loss2=loss2)
```

- The format of the output log is as followed:
+ The output log follows the following format:

```
08/21 02:58:41 - mmengine - INFO - Epoch(train) [1][10/25] lr: 1.0000e-02 eta: 0:00:00 time: 0.0019 data_time: 0.0004 loss1: 0.8381 loss2: 0.9007 loss: 1.7388
08/21 02:58:41 - mmengine - INFO - Epoch(train) [1][20/25] lr: 1.0000e-02 eta: 0:00:00 time: 0.0029 data_time: 0.0010 loss1: 0.1978 loss2: 0.4312 loss: 0.6290
```

- LogProcessor will output the log in the following format:
+ `LogProcessor` will output the log in the following format:

- The prefix of the log:
- epoch mode(`by_epoch=True`): `Epoch(train) [{current_epoch}/{current_iteration}]/{dataloader_length}`
@@ -55,11 +55,11 @@
log_processor outputs the epoch based log by default(`by_epoch=True`). To get an expected log matched with the `train_cfg`, we should set the same value for `by_epoch` in `train_cfg` and `log_processor`.
```

- Based on the rules above, the code snippet will count the average value of the loss1 and the loss2 every 20 iterations. More types of statistical methods, please refer to [MMEngine.LogProcessor](mmengine.runner.LogProcessor).
+ Based on the rules above, the code snippet will count the average value of the loss1 and the loss2 every 20 iterations. More types of statistical methods, please refer to [mmengine.runner.LogProcessor](mmengine.runner.LogProcessor).
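The windowed statistic can also be customized per field through the `custom_cfg` argument; a hedged config sketch follows (key names are taken from the MMEngine `LogProcessor` docs; verify them against your MMEngine version):

```python
# Sketch: in addition to the default windowed mean of every `loss*` field,
# also log the minimum of `loss1` over the last 100 iterations as `min_loss1`.
log_processor = dict(
    type='LogProcessor',
    window_size=20,
    by_epoch=True,
    custom_cfg=[
        dict(data_src='loss1',       # which logged field to read
             log_name='min_loss1',   # name shown in the output log
             method_name='min',      # statistic to apply
             window_size=100)
    ])
```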

## Customize log

- The logging system could not only log the loss, lr, .etc but also collect and output the custom log. For example, if we want to statistic the intermediate loss:
+ The logging system could not only log the `loss`, `lr`, .etc but also collect and output the custom log. For example, if we want to statistic the intermediate loss:

The `ToyModel` calculate `loss_tmp` in forward, but don't save it into the return dict.

@@ -108,7 +108,7 @@ The `loss_tmp` will be added to the output log:

## Export the debug log

- To export the debug log to the `work_dir`, you can set log_level in config file as followed:
+ To export the debug log to the `work_dir`, you can set log_level in config file as follows:

```
log_level='DEBUG'
```