This is the working repo for the following paper:
Ming-Fang Chang, Akash Sharma, Michael Kaess, and Simon Lucey. Neural Radiance Fields with LiDAR Maps. ICCV 2023. paper link
If you find our work useful, please consider citing:
@inproceedings{Chang2023iccv,
  title={{Neural Radiance Fields with LiDAR Maps}},
  author={Ming-Fang Chang and Akash Sharma and Michael Kaess and Simon Lucey},
  booktitle={International Conference on Computer Vision},
  year={2023}
}
- The code was implemented and tested with Python 3.7, PyTorch v1.12.1, and DGL 0.9.1post1.
- Training/val samples (preprocessed DGL graphs): preprocessed graphs. The preprocessed DGL graphs contain the geometric information needed for volume rendering (see the data visualization step below for examples).
- The LiDAR point cloud maps: maps
- Other dataset information (ground truth images, camera poses, etc.): dataset
- Masks for dynamic objects: masks
- Specify your local data folder path in `configs/config.ini`, or make a symlink named `data` pointing to your dataset folder.
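For example, the symlink can be created like this (the dataset path below is a placeholder — substitute your real location):

```shell
# Create a symlink named `data` that points at your local dataset folder.
# Replace /path/to/your/dataset with the actual path on your machine.
ln -sfn /path/to/your/dataset data
```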
- The visualization code was tested with pyvista v0.37.0.
- Run `python3 visualize_data.py --log_id=<log_id> --name_data=clean`.
- Expected outputs include (from log 2b044433-ddc1-3580-b560-d46474934089):
  - Camera rays (black), ray samples (red), and nearby LiDAR points (green) of subsampled pixels.
  - GT RGB and depth.
  - Train (blue) / val (red) camera poses on the map.
- Run `python3 train.py --name_data=clean --log_id=<log_id> --name_config=config.ini --eval_only`.
- Check the results with TensorBoard (e.g. run `tensorboard --logdir=logs` to see the visuals; the log path can be specified in `configs/config.ini`).
- You can download the trained weights from: weights (clean maps), weights (noisy maps).
- Expected outputs (from log 2b044433-ddc1-3580-b560-d46474934089):
- For network training, remove the `--eval_only` argument.
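The relevant `configs/config.ini` entries might look like the fragment below. The section and key names here are assumptions for illustration (only `path_preprocessed_graph` is confirmed by this README); use the keys in the config file shipped with the repo:

```ini
; Hypothetical sketch of configs/config.ini -- key names other than
; path_preprocessed_graph are placeholders, not the repo's actual keys.
[paths]
path_data = /path/to/your/dataset
path_preprocessed_graph = /path/to/preprocessed/graphs
path_logs = logs
```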
Besides the preprocessed graphs, we also provide a sample script generate_graphs.py for generating new graphs. This version generates slightly higher-quality graphs but runs a bit slower than the original version used in the paper. To use it:
- Modify `path_preprocessed_graph` in `configs/config.ini` to point to your target folder.
- Run `python3 generate_graphs.py --log_id=<log_id> --name_config=config.ini --name_data=clean`.
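To give an intuition for the geometric information such graphs encode (ray samples associated with nearby LiDAR map points), here is a minimal, self-contained sketch. It is not the repo's actual graph construction — `generate_graphs.py` builds DGL graphs with its own neighbor search — just a brute-force numpy illustration of the sample-to-point association:

```python
import numpy as np

def nearest_lidar_points(ray_samples, lidar_points, k=3):
    """For each 3D ray sample, return indices of the k nearest LiDAR map points.

    Brute-force O(S*P) lookup, purely illustrative of the kind of edges a
    preprocessed graph can store between ray samples and map points.
    """
    # Pairwise squared distances between samples (S, 3) and points (P, 3).
    d2 = ((ray_samples[:, None, :] - lidar_points[None, :, :]) ** 2).sum(-1)
    # Indices of the k closest map points for each sample.
    return np.argsort(d2, axis=1)[:, :k]

# Toy example: two samples along a camera ray, five map points.
samples = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, 2.0]])
points = np.array([[0.0, 0.0, 0.9],
                   [0.0, 0.0, 2.1],
                   [5.0, 0.0, 0.0],
                   [0.0, 0.0, 1.1],
                   [0.0, 5.0, 0.0]])
idx = nearest_lidar_points(samples, points, k=2)
# idx[0] holds the two map points nearest the first sample (indices 0 and 3).
```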