Radar-Diffusion: Towards Dense and Accurate Radar Perception Via Efficient Cross-modal Diffusion Model
- 25 June, 2024: Paper accepted by IEEE Robotics and Automation Letters (RA-L)!
- 27 July, 2024: Code and pre-trained models released!
- 29 August, 2024: Updated the ColoRadar dataset download link.
- 18 October, 2024: Updated the checkpoint download link in case you fail to download the checkpoints uploaded to this git repo.
- Release training and testing code for Radar-Diffusion.
- Release pre-trained models in diffusion_consistency_radar/checkpoint.
- Release user guide.
- Release data pre-processing code.
- Release performance evaluation code.
This repository contains the source code and pre-trained models of Radar-Diffusion, described in our paper "Towards Dense and Accurate Radar Perception Via Efficient Cross-modal Diffusion Model," accepted by IEEE Robotics and Automation Letters (RA-L), 2024.
Authors: Ruibin Zhang*, Donglai Xue*, Yuhan Wang, Ruixu Geng, and Fei Gao (*equal contributors)
Supplementary Video: YouTube, Bilibili.
Abstract: Millimeter wave (mmWave) radars have attracted significant attention from both academia and industry due to their capability to operate in extreme weather conditions. However, they face challenges in terms of sparsity and noise interference, which hinder their application in the field of micro aerial vehicle (MAV) autonomous navigation. To this end, this paper proposes a novel approach to dense and accurate mmWave radar point cloud construction via cross-modal learning. Specifically, we introduce diffusion models, which possess state-of-the-art performance in generative modeling, to predict LiDAR-like point clouds from paired raw radar data. We also incorporate the most recent diffusion model inference accelerating techniques to ensure that the proposed method can be implemented on MAVs with limited computing resources. We validate the proposed method through extensive benchmark comparisons and real-world experiments, demonstrating its superior performance and generalization ability.
git clone https://github.com/ZJU-FAST-Lab/Radar-Diffusion.git
cd diffusion_consistency_radar
pip install -e .
sh launch/inference_cd_example_batch.sh
In case of network issues, you can manually download the checkpoints and place them in diffusion_consistency_radar/checkpoint.
The above script runs consistency inference in a single step using the pre-trained checkpoint. Afterwards, you can find the predicted results and the ground-truth LiDAR BEV point clouds in diffusion_consistency_radar/inference_results.
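A common way to quantify how close the predicted BEV point clouds are to the LiDAR ground truth is the Chamfer distance between the two sets of occupied cells. The snippet below is a minimal NumPy sketch of such a comparison, not the repository's own evaluation code; the 0.5 occupancy threshold and the (row, col) point convention are assumptions for illustration.

```python
import numpy as np

def bev_to_points(bev: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Treat occupied BEV grid cells as 2-D points in (row, col) coordinates."""
    return np.argwhere(bev > threshold).astype(np.float64)

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 2) and (M, 2) point sets."""
    # Pairwise Euclidean distances between every predicted and GT point.
    diff = pred[:, None, :] - gt[None, :, :]   # (N, M, 2)
    d = np.linalg.norm(diff, axis=-1)          # (N, M)
    # Mean nearest-neighbour distance in both directions.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

For the real evaluation, the predicted and ground-truth BEV images in inference_results would each be loaded, thresholded with `bev_to_points`, and scored pairwise with `chamfer_distance`.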
- First, download the ColoRadar dataset (KITTI format). In case of network issues, we share a download link here.
- Unzip all the subsequences into a folder, then run:
python Coloradar_pre_processing/generate_coloradar_timestamp_index.py
- Download patchwork++ to Coloradar_pre_processing/patchwork-plusplus. Then install patchwork++ by running:
cd Coloradar_pre_processing/patchwork-plusplus
make pyinstall
- Generate pre-processed dataset by running:
python Coloradar_pre_processing/dataset_generation_coloradar.py
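The pre-processing step turns the ColoRadar raw radar measurements into image-like inputs for the network. As a rough illustration of the classic FMCW processing chain this kind of step builds on (a generic sketch, not the repository's actual pipeline, whose windowing, zero-padding, and axis conventions may differ):

```python
import numpy as np

def range_azimuth_heatmap(adc: np.ndarray, n_angle_bins: int = 64) -> np.ndarray:
    """Generic FMCW processing sketch.

    adc: complex ADC cube of shape (n_rx, n_chirps, n_samples).
    Returns a (n_angle_bins, n_samples) range-azimuth magnitude heatmap.
    """
    # Range FFT: window + FFT along the fast-time (sample) axis.
    win = np.hanning(adc.shape[-1])
    range_fft = np.fft.fft(adc * win, axis=-1)
    # Angle FFT: FFT across the RX array, zero-padded to a finer angle grid.
    angle_fft = np.fft.fftshift(np.fft.fft(range_fft, n=n_angle_bins, axis=0), axes=0)
    # Non-coherent integration: average magnitudes over chirps.
    return np.abs(angle_fft).mean(axis=1)
```

Heatmaps of this kind are what a cross-modal model can consume as the radar-side input, paired with LiDAR BEV images as supervision.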
- Train a regular EDM model:
sh diffusion_consistency_radar/launch/train_edm.sh
- Distill a CD model from the above EDM model:
sh diffusion_consistency_radar/launch/train_cd.sh
- Inference from an EDM model:
sh diffusion_consistency_radar/launch/inference_edm.sh
- Inference from a CD model in one step:
sh diffusion_consistency_radar/launch/inference_cd.sh
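The practical difference between the two inference modes is the number of network evaluations: the EDM model is sampled with an iterative solver over many noise levels, while the distilled consistency (CD) model maps noise to a sample in a single function call. A schematic sketch of one-step consistency sampling, with a toy stand-in for the trained network (the actual scripts' model interface and noise schedule will differ):

```python
import numpy as np

def one_step_consistency_sample(f, sigma_max: float, shape, seed=None):
    """Single-step sampling with a consistency model f(x, sigma):
    draw pure noise at the maximum noise level and map it directly
    to a clean sample with one network evaluation."""
    rng = np.random.default_rng(seed)
    x_T = rng.standard_normal(shape) * sigma_max  # sample from the noise prior
    return f(x_T, sigma_max)                      # one forward pass only

def toy_model(x, sigma):
    """Toy stand-in for the trained consistency network."""
    return x / (1.0 + sigma)
```

This single evaluation is what makes one-step CD inference feasible on MAV-class onboard computers, compared with the multi-step EDM sampler.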
The source code is released under MIT license.
- The diffusion-consistency model code is heavily based on consistency_models.
- The radar pre-processing code is heavily based on azinke/coloradar.
If you find this method and/or code useful, please consider citing:
@article{zhang2024towards,
title={Towards Dense and Accurate Radar Perception Via Efficient Cross-Modal Diffusion Model},
author={Zhang, Ruibin and Xue, Donglai and Wang, Yuhan and Geng, Ruixu and Gao, Fei},
journal={arXiv preprint arXiv:2403.08460},
year={2024}
}