
# SlowFast

SlowFast Networks for Video Recognition

## Abstract

We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA.

## Results and Models

### AVA2.1

| frame sampling strategy | gpus | backbone | pretrain | mAP | config | ckpt | log |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 4x16x1 | 8 | SlowFast ResNet50 | Kinetics-400 | 24.32 | config | ckpt | log |
| 4x16x1 | 8 | SlowFast ResNet50 (with context) | Kinetics-400 | 25.34 | config | ckpt | log |
| 8x8x1 | 8 | SlowFast ResNet50 | Kinetics-400 | 25.80 | config | ckpt | log |

### AVA2.2

| frame sampling strategy | gpus | backbone | pretrain | mAP | config | ckpt | log |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 8x8x1 | 8 | SlowFast ResNet50 | Kinetics-400 | 25.90 | config | ckpt | log |
| 8x8x1 | 8 | SlowFast ResNet50 (temporal-max) | Kinetics-400 | 26.41 | config | ckpt | log |
| 8x8x1 | 8 | SlowFast ResNet50 (temporal-max, focal loss) | Kinetics-400 | 26.65 | config | ckpt | log |

### MultiSports

| frame sampling strategy | gpus | backbone | pretrain | f-mAP | v-mAP@0.2 | v-mAP@0.5 | v-mAP@0.1:0.9 | gpu_mem(M) | config | ckpt | log |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 4x16x1 | 8 | SlowFast ResNet50 | Kinetics-400 | 36.88 | 22.83 | 16.9 | 14.74 | 18618 | config | ckpt | log |
1. The **gpus** column indicates the number of GPUs we used to obtain the checkpoint. If you want to use a different number of GPUs or videos per GPU, the best way is to set `--auto-scale-lr` when calling `tools/train.py`; this parameter auto-scales the learning rate according to the ratio between the actual batch size and the original batch size (see the sketch after this list).
2. **with context** means that both the RoI feature and the globally pooled feature are used for classification; **temporal-max** means that max pooling is applied to the feature along the temporal dimension.
3. The MultiSports dataset uses frame-mAP (f-mAP) and video-mAP (v-mAP) to evaluate performance. Frame-mAP evaluates the detection results of each frame, while video-mAP uses 3D IoU to evaluate tube-level results under several thresholds. You can refer to the competition page for details.
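
As a concrete illustration of note 1, here is a minimal sketch that reuses the AVA2.1 config from the Train section below and lets `--auto-scale-lr` (the flag mentioned above) rescale the learning rate; the exact GPU setup is up to you:

```shell
# Minimal sketch: train with --auto-scale-lr so the learning rate is
# rescaled to match the actual batch size, e.g. when using fewer GPUs
# than the 8 listed in the tables above.
python tools/train.py configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb16-4x16x1-20e_ava21-rgb.py \
    --auto-scale-lr
```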

For more details on data preparation, you can refer to the AVA and MultiSports sections of the data preparation documentation.

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the SlowFast model on AVA2.1 deterministically, with periodic validation.

```shell
python tools/train.py configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb16-4x16x1-20e_ava21-rgb.py \
    --seed 0 --deterministic
```
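
The checkpoints in the tables above were trained on 8 GPUs. A sketch of the equivalent distributed launch, assuming an OpenMMLab-style `tools/dist_train.sh` wrapper is available in this repository:

```shell
# Sketch, assuming an OpenMMLab-style tools/dist_train.sh launcher:
# train the same AVA2.1 config on 8 GPUs to match the tabled setting.
bash tools/dist_train.sh configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb16-4x16x1-20e_ava21-rgb.py 8 \
    --seed 0 --deterministic
```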

For more details, you can refer to the Training part in the Training and Test Tutorial.

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the SlowFast model on AVA2.1 and dump the result to a pkl file.

```shell
python tools/test.py configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb16-4x16x1-20e_ava21-rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --dump result.pkl
```
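
To speed up evaluation across multiple GPUs, a similar sketch applies, again assuming an OpenMMLab-style `tools/dist_test.sh` launcher:

```shell
# Sketch, assuming an OpenMMLab-style tools/dist_test.sh launcher:
# evaluate the same checkpoint on 8 GPUs and dump the results.
bash tools/dist_test.sh configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb16-4x16x1-20e_ava21-rgb.py \
    checkpoints/SOME_CHECKPOINT.pth 8 --dump result.pkl
```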

For more details, you can refer to the Test part in the Training and Test Tutorial.

## Citation

```BibTeX
@inproceedings{feichtenhofer2019slowfast,
  title={Slowfast networks for video recognition},
  author={Feichtenhofer, Christoph and Fan, Haoqi and Malik, Jitendra and He, Kaiming},
  booktitle={ICCV},
  pages={6202--6211},
  year={2019}
}
```

```BibTeX
@inproceedings{gu2018ava,
  title={Ava: A video dataset of spatio-temporally localized atomic visual actions},
  author={Gu, Chunhui and Sun, Chen and Ross, David A and Vondrick, Carl and Pantofaru, Caroline and Li, Yeqing and Vijayanarasimhan, Sudheendra and Toderici, George and Ricco, Susanna and Sukthankar, Rahul and others},
  booktitle={CVPR},
  pages={6047--6056},
  year={2018}
}
```