# Baseline Results

We recommend that participants use [United-Perception](https://github.com/ModelTC/United-Perception) to train their models. We provide two baseline models (resnet18 and resnet18c_x0_25); the `eql` column below indicates whether the EQL loss (see the EQL loss paper) was enabled. The results are as follows:

| model_id | backbone | batch size | epochs | bag of tricks | eql | top-1 (test1w) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | resnet18 | 4 * 64 | 100 | yes (strikes) | no | 86.88 |
| 2 | resnet18 | 4 * 64 | 100 | yes (strikes) | yes | 88.09 |
| 3 | resnet18c_x0_25 | 4 * 64 | 100 | yes (strikes) | no | 84.52 |
| 4 | resnet18c_x0_25 | 4 * 64 | 100 | yes (strikes) | yes | 87.01 |

You can reproduce the above results by following these steps:

## 1. Prepare the code base

```bash
git clone https://github.com/ModelTC/United-Perception
cd United-Perception
# in this challenge, we only need the python requirements
pip install --user -r requirements.txt
```
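Before going further, it can be worth checking that the `up` package imports cleanly from the repository root (a minimal sketch; it mirrors the `PYTHONPATH` setup that the `dist_*` scripts below perform):

```bash
# run from the United-Perception checkout
export PYTHONPATH=$(pwd):$PYTHONPATH
export DEFAULT_TASKS=cls   # this challenge only needs the cls task
python -c "import up; print('United-Perception imported OK')"
```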

## 2. Prepare your dataset

You can download the training dataset of the challenge from here. The training dataset is organized as follows:

```
your_data_path
├── data
│   ├── 0.png
│   ├── 1.png
│   ├── ...
│   └── 50002.png
└── train.txt
```
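Before pointing the config at the data, a quick check that the layout on disk matches the tree above can save a failed run (a minimal sketch; `your_data_path` is a placeholder for wherever you extracted the dataset):

```bash
DATA=your_data_path            # placeholder: your extracted dataset root
test -f "$DATA/train.txt" && echo "train.txt found"
# the tree above shows images 0.png .. 50002.png under data/
ls "$DATA/data" | grep -c '\.png$'
```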

Change the data path in the provided configuration file to your own data path:

```yaml
  train:
    dataset:
      type: cls
      kwargs:
        meta_file: your_data_path/train.txt
        image_reader:
          type: fs_pillow
          kwargs:
            image_dir: your_data_path/
            color_mode: RGB
        transformer: [*random_resized_crop, *random_horizontal_flip, *pil_color_jitter, *to_tensor, *normalize]
```
## 3. Train and evaluate the model

You can easily train and evaluate the model with the scripts we provide (see the usage example after the two scripts below).

Note that this challenge only needs the `cls` task; the scripts below take care of `export DEFAULT_TASKS=cls` for you.

- train

```bash
sh scripts/dist_train.sh num_gpus your_config_path
```

The content of `scripts/dist_train.sh`:

```bash
#!/bin/bash

ROOT=./  # your UP repository root
T=`date +%m%d%H%M`
export ROOT=$ROOT
cfg=$2
export PYTHONPATH=$ROOT:$PYTHONPATH
# in this challenge, we only need cls tasks
export DEFAULT_TASKS=cls
python -m up train \
  --ng=$1 \
  --launch=pytorch \
  --config=$cfg \
  --display=10 \
  2>&1 | tee log.train.$T.$(basename $cfg)
```
- eval or test

```bash
sh scripts/dist_test.sh num_gpus your_config_path
```

The content of `scripts/dist_test.sh`:

```bash
#!/bin/bash

ROOT=./  # your UP repository root
T=`date +%m%d%H%M`
export ROOT=$ROOT
cfg=$2
export PYTHONPATH=$ROOT:$PYTHONPATH
# in this challenge, we only need cls tasks
export DEFAULT_TASKS=cls
python -m up train \
  -e \
  --ng=$1 \
  --launch=pytorch \
  --config=$cfg \
  --display=10 \
  2>&1 | tee log.test.$T.$(basename $cfg)
```
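For example, to train a baseline on 4 GPUs and then evaluate it (a sketch; `configs/your_config.yaml` is a placeholder for whichever provided config you edited in step 2):

```bash
# train on 4 GPUs with your edited config
sh scripts/dist_train.sh 4 configs/your_config.yaml
# evaluate the trained model with the same config
sh scripts/dist_test.sh 4 configs/your_config.yaml
```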

## 4. Export the ONNX file

After training, you can easily export the ONNX file using the script provided by UP:

```bash
sh scripts/to_onnx.sh num_gpus your_config_path
```

The result will be saved in `./toonnx/`. Note that you need to change `--input_size` to match your model before running `to_onnx.sh`:

```bash
#!/bin/bash

ROOT=./  # your UP repository root
T=`date +%m%d%H%M`
export ROOT=$ROOT
cfg=$2
export PYTHONPATH=$ROOT:$PYTHONPATH
# in this challenge, we only need cls tasks
export DEFAULT_TASKS=cls
CPUS_PER_TASK=${CPUS_PER_TASK:-2}

# change the input size (CxHxW) below to match your model
python -m up to_onnx \
  --ng=$1 \
  --launch=pytorch \
  --config=$cfg \
  --save_prefix=toonnx \
  --input_size=3x112x112 \
  2>&1 | tee log.deploy.$(basename $cfg)
```
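Once the script finishes, a quick structural check of the exported model can catch export problems early (a minimal sketch; it assumes the `onnx` Python package is installed and that at least one `.onnx` file was written under `./toonnx/`):

```bash
# pick the first exported model and run the ONNX checker on it
f=$(ls toonnx/*.onnx | head -n 1)
python -c "import onnx, sys; onnx.checker.check_model(onnx.load(sys.argv[1])); print('valid onnx:', sys.argv[1])" "$f"
```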

## 5. More details

Please refer to the United-Perception docs.

## 6. Hardware Performance

*(Hardware performance charts were embedded here as images in the original page.)*

## 7. Supported Operator List

| Platform | Operator list |
| --- | --- |
| T4 | https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html |
| Atlas300 | https://support.huaweicloud.com/api-ge-atlas300app3000/atlasge_07_0094.html |
| rv1126 | https://github.com/rockchip-linux/rknn-toolkit/blob/master/doc/RKNN_OP_Support_V1.7.3.md |
| snpe-gpu/dsp | https://developer.qualcomm.com/sites/default/files/docs/snpe/supported_onnx_ops.html |
| Ti | https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/07_03_00_07/exports/docs/tidl_j7_02_00_00_07/ti_dl/docs/user_guide_html/md_tidl_layers_info.html |
| STPU | https://practical-dl.sensecore.cn/#/stpu |