This is the code repository for Learning to Super Resolve Intensity Images from Events (CVPR 2020 - Oral).
Mohammad Mostafavi, Jonghyun Choi and Kuk-Jin Yoon (Corresponding author)
Our extended and upgraded version, E2SRI: Learning to Super-Resolve Intensity Images from Events (TPAMI 2021), produces highly consistent videos and includes further details and experiments.
If you use any of this code, please cite both of the following publications:
@article{mostafaviisfahani2021e2sri,
title = {E2SRI: Learning to Super-Resolve Intensity Images from Events},
author = {Mostafaviisfahani, Sayed Mohammad and Nam, Yeongwoo and Choi, Jonghyun and Yoon, Kuk-Jin},
journal = {IEEE Transactions on Pattern Analysis \& Machine Intelligence},
number = {01},
pages = {1--1},
year = {2021},
publisher = {IEEE Computer Society}
}
@inproceedings{mostafavi2020e2sri,
author = {Mostafavi I., S. Mohammad and Choi, Jonghyun and Yoon, Kuk-Jin},
title = {Learning to Super Resolve Intensity Images from Events},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
month = {June},
year = {2020},
pages = {2768--2786}
}
- Make your own environment
python -m venv ./e2sri
source e2sri/bin/activate
- Install the requirements
cd e2sri
pip install -r requirements.txt
- Unzip pyflow
cd src
unzip pyflow.zip
cd pyflow
python3 setup.py build_ext -i
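To confirm the extension built correctly, you can run a quick flow computation on two dummy frames. This is a minimal sketch assuming the standard pyflow coarse2fine_flow interface from the bundled package, with the demo's default parameters:

```python
# Quick pyflow sanity check (assumes the standard coarse2fine_flow API
# from the bundled pyflow package; parameter values follow the pyflow demo).
import numpy as np
import pyflow

# Two dummy grayscale frames: float64 in [0, 1], shape (H, W, 1), C-contiguous.
im1 = np.ascontiguousarray(np.random.rand(128, 128, 1))
im2 = np.ascontiguousarray(np.roll(im1, 2, axis=1))  # shifted copy = fake motion

# Positional args: alpha, ratio, minWidth, nOuterFPIterations,
# nInnerFPIterations, nSORIterations, colType (1 = grayscale).
u, v, im2_warped = pyflow.coarse2fine_flow(
    im1, im2, 0.012, 0.75, 20, 7, 1, 30, 1)

print(u.shape, v.shape)  # per-pixel horizontal / vertical flow, both (128, 128)
```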
- Download the linked material below:
- Sample pretrained weights (2x_7s.pth) for 2x scale (2x width and 2x height) and sequences of 7 stacks (7S).
- Sample dataset for training and testing (dataset.zip).
- Unzip the dataset.zip file and put the .pth weight file in the main folder
unzip dataset.zip
cd src
- Run inference:
python test.py --data_dir ../dataset/slider_depth --checkpoint_path ../save_dir/2x_7s.pth --save_dir ../save_dir
Note that our code with the given weights (7S) consumes ~4753 MiB of GPU memory at inference.
From this sample event stack, you should produce a (resized) result similar to the example image in the repository.
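Optionally, you can sanity-check the downloaded weights before running test.py. The sketch below assumes 2x_7s.pth is a regular PyTorch checkpoint (a state dict, possibly wrapped in a dict); it only prints a few parameter names and shapes:

```python
# Inspect the downloaded checkpoint before running test.py.
# Minimal sketch: assumes 2x_7s.pth is a standard PyTorch state dict
# (or a dict wrapping one); the printed names are whatever the file contains.
import torch

ckpt = torch.load('../save_dir/2x_7s.pth', map_location='cpu')

# Some checkpoints wrap the weights, e.g. {'state_dict': ...}; unwrap if so.
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt

for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```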
- Run training:
python3 train.py --config_path ./configs/2x_3.yaml --data_dir ../dataset/Gray_5K_7s_tiny --save_dir ../save_dir
We provide a sample sequence (slider_depth.zip) made from the rosbags of the Event Camera Dataset and Simulator. A rosbag (.bag) is a file format in ROS (Robot Operating System) for storing ROS message data. You can make other sequences using the provided MATLAB m-file (/e2sri/stacking/make_stacks.m). The MATLAB code depends on matlab_rosbag, which is included in the stacking folder and needs to be unzipped.
Note: The output image quality depends on "events_per_stack" and "stack_shift". We used "events_per_stack"=5000; however, we did not rely on "stack_shift", as we synchronized with APS frames instead. The APS-synchronized stacking code, in which this 5000-event setting should be kept, will be released together with the training code.
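For intuition, the fixed-count stacking idea can be sketched in NumPy. This is an illustrative sketch only (the actual stacking is done by make_stacks.m); the array layout and function name here are assumptions:

```python
# Illustrative fixed-count stacking (events_per_stack=5000), mirroring the
# idea behind make_stacks.m; not the released code. Events are assumed to be
# parallel arrays: t (timestamps), x, y (pixel coords), p (polarity +/-1).
import numpy as np

def make_stacks(t, x, y, p, height, width, events_per_stack=5000):
    """Accumulate consecutive groups of events into 2D polarity histograms."""
    stacks = []
    for start in range(0, len(t) - events_per_stack + 1, events_per_stack):
        sl = slice(start, start + events_per_stack)
        frame = np.zeros((height, width), dtype=np.float32)
        # Sum polarities per pixel for this group of events.
        np.add.at(frame, (y[sl], x[sl]), p[sl])
        stacks.append(frame)
    return np.stack(stacks)  # (num_stacks, height, width)
```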
A list of publicly available event datasets for testing:
- Bardow et al., CVPR'16
- The Event Camera Dataset and Simulator
- Multi Vehicle Stereo Event Camera Dataset, RAL'18
- Scheerlinck et al., ACCV'18
- High Speed and HDR Dataset
- Color event sequences from the CED dataset (Scheerlinck et al., CVPRW'19)
Related publications:
- Stereo Depth from Event Cameras: Concentrate and Focus on the Future + Code - CVPR 2022 (TBU)
- Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds - Openaccess ICCV 2021 (PDF)
- E2SRI: Learning to Super-Resolve Intensity Images from Events - TPAMI 2021 (Link)
- Learning to Super Resolve Intensity Images from Events - Openaccess CVPR 2020 (PDF)
MIT license.