This repository contains the implementation of the paper Multimodal Distillation for Egocentric Action Recognition, published at ICCV 2023.
The main dependencies you need to install to reproduce the virtual environment are PyTorch and the following packages:
pip install accelerate tqdm h5py yacs timm einops
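As a quick sanity check (not part of the repository), you can verify that the main dependencies import cleanly in the environment:

```python
# Quick sanity check that the main dependencies are importable.
import accelerate
import einops
import h5py
import timm
import torch
import tqdm
import yacs.config

print("torch:", torch.__version__)
print("timm:", timm.__version__)
print("h5py:", h5py.__version__)
print("accelerate:", accelerate.__version__)
print("einops:", einops.__version__)
```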
Create a directory ./data/pretrained-backbones/
and download the Swin-T backbone into it:
wget https://github.com/SwinTransformer/storage/releases/download/v1.0.4/swin_tiny_patch244_window877_kinetics400_1k.pth -P ./data/pretrained-backbones/
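As a rough sanity check (again, not part of the codebase), you can confirm that the downloaded checkpoint loads; note that the "state_dict" key layout below is what Video Swin releases typically use, so treat it as an assumption:

```python
# Sanity-check sketch: confirm the downloaded Swin-T checkpoint loads.
# Assumption: Video Swin checkpoints usually store the weights under "state_dict".
import torch

ckpt_path = "./data/pretrained-backbones/swin_tiny_patch244_window877_kinetics400_1k.pth"
checkpoint = torch.load(ckpt_path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
print(f"Loaded {len(state_dict)} parameter tensors")
```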
We store all data (video frames, optical flow frames, audio, etc.) in efficient HDF5 files, where each video is a dataset within the HDF5 file and the n-th element of that dataset contains the bytes of the n-th frame of the video. Since these files are large, drop us an email and we can give you access to them.
Once we send you the datasets, place them inside ./data/:
- ./data/something-something/
- ./data/EPIC-KITCHENS/

Feel free to store the data wherever you see fit, just do not forget to modify the config.yaml files with the appropriate location. In this README.md, we assume that all data is placed inside ./data/ and all experiments inside ./experiments/.
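For illustration, a single frame can be read and decoded from one of these HDF5 files roughly as follows (a minimal sketch of the layout described above; the file name is a placeholder, and Pillow is assumed to be available for decoding):

```python
# Minimal sketch of reading one frame from the HDF5 layout described above.
# "video.h5" is a placeholder file name, not an actual dataset path.
import io

import h5py
from PIL import Image  # assumed available for decoding the frame bytes

with h5py.File("./data/EPIC-KITCHENS/video.h5", "r") as h5_file:
    video_key = next(iter(h5_file.keys()))  # each video is its own HDF5 dataset
    frame_bytes = h5_file[video_key][0]     # n-th element = bytes of the n-th frame
    frame = Image.open(io.BytesIO(bytes(frame_bytes)))
    print(video_key, frame.size)
```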
- Download our Epic-Kitchens distilled model from here, and place it in ./experiments/.
- Run inference as indicated below:

python src/inference.py --experiment_path "experiments/epic-kitchens-swint-distill-flow-audio" --opts DATASET_TYPE "video"

- Download our Something-Else distilled model from here, or the Something-Something distilled model from here, and place it in ./experiments/.
- Run inference as indicated below:

python src/inference.py --experiment_path "experiments/something-swint-distill-layout-flow" --opts DATASET_TYPE "video"

for Something-Something, and

python src/inference.py --experiment_path "experiments/something-else-swint-distill-layout-flow" --opts DATASET_TYPE "video"

for Something-Else.
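The --opts flag passes KEY VALUE pairs that override fields of the experiment's config.yaml at runtime. Below is a minimal sketch of how such an override typically works with yacs; the default values shown are placeholders, not the repository's actual schema:

```python
# Illustrative yacs override sketch; the defaults here are placeholders.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.DATASET_TYPE = "frames"                  # placeholder default
cfg.EXPERIMENT_PATH = "experiments/default"  # placeholder default

# --opts is forwarded as a flat list of KEY VALUE pairs:
cfg.merge_from_list(["DATASET_TYPE", "video"])
print(cfg.DATASET_TYPE)  # -> "video"
```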
To reproduce the experiments (i.e., using the identical hyperparameters, where only the random seed will vary), run:

python src/patient_distill.py --config "experiments/something-else-swint-distill-layout-flow/config.yaml" --opts EXPERIMENT_PATH "experiments/reproducing-the-something-else-experiment"

Note that this assumes access to the datasets for all modalities (video, optical flow, audio, object detections), as well as the individual (unimodal) models which constitute the multimodal ensemble teacher.
The unimodal teacher models used for each dataset are:
- Something-Else: Video; Object detections; Optical Flow.
- Something-Something: Video; Object detections; Optical Flow.
- Epic-Kitchens: Video; Audio; Optical Flow.
TODOs:
- Release Something-Something pretrained teachers for each modality.
- Test the codebase.
- Structure the Model Zoo part of the codebase.
If you find our code useful for your own research, please use the following BibTeX entry:
@inproceedings{radevski2023multimodal,
  title={Multimodal Distillation for Egocentric Action Recognition},
  author={Radevski, Gorjan and Grujicic, Dusan and Blaschko, Matthew and Moens, Marie-Francine and Tuytelaars, Tinne},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={5213--5224},
  year={2023}
}