| Project Page | arXiv Paper |
Our method converges very quickly and achieves real-time rendering speed.
The following are the results of our method on the D-NeRF dataset. Our method can also be applied to static 3D scenes and achieves results consistent with 3D Gaussian Splatting.

You must have an NVIDIA GPU with CUDA installed on the system. This library has been tested with CUDA 11.8. You can find more information about installing CUDA here.
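To check that a CUDA-capable GPU and the CUDA toolkit are visible on your system (an optional sanity check; the versions reported may differ from 11.8 depending on your setup):
nvidia-smi
nvcc --version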
This codebase requires Python >= 3.8. We recommend using Conda to manage dependencies; make sure to install Conda before proceeding.
conda create --name 4drotorgs -y python=3.8
conda activate 4drotorgs
pip install --upgrade pip
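You can quickly confirm that the new environment is using Python 3.8:
python --version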
Install other packages including PyTorch with CUDA (this repo has been tested with CUDA 11.8), tiny-cuda-nn, and PyTorch3D.
cuda-toolkit is required for building tiny-cuda-nn.
For CUDA 11.8:
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu118_pyt200/download.html
pip install --upgrade pip setuptools
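A quick, optional check that PyTorch sees your GPU and that tiny-cuda-nn and PyTorch3D import correctly (the module names below are the standard import names for these packages):
python -c "import torch, tinycudann, pytorch3d; print(torch.__version__, torch.cuda.is_available())"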
If you run into any issues with the above installation, see the Installation documentation from nerfstudio for more details.
git clone https://github.com/weify627/4D-Rotor-Gaussians.git
cd 4D-Rotor-Gaussians; pip install -e .
cd libs/diff-gaussian-rasterization-confidence; pip install .
cd ../knn; pip install .
cd ../knn_ops_3_fwd_bwd_mask; pip install .
If you have successfully reached here, you are ready to run the code!
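As an optional check, you can confirm that the nerfstudio command-line tools installed above are on your path (the help output should list the registered training methods, including splatfacto):
ns-train --help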
We use the dataset provided by D-NeRF. Download the dataset from Dropbox and place it under $data_root$/dnerf.
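After downloading, the data tree should look roughly like the sketch below (an assumption based on the standard D-NeRF release; each scene folder holds the transforms_*.json files and the image splits):
$data_root$/dnerf/
└── bouncingballs/
    ├── transforms_train.json
    ├── transforms_val.json
    ├── transforms_test.json
    ├── train/
    ├── val/
    └── test/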
Download the Neural 3D Video dataset and preprocess the raw video by executing:
python scripts/n3v2blender.py $data_root$/N3V/$scene_name$
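For example, to preprocess the cook_spinach scene used below:
python scripts/n3v2blender.py $data_root$/N3V/cook_spinach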
For training synthetic scenes from the D-NeRF dataset such as bouncingballs, run
ns-train splatfacto --data $data_root$/dnerf/bouncingballs
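To train all D-NeRF scenes in one pass, a simple shell loop works (a sketch; the scene names below follow the standard D-NeRF release, so adjust them to match your download, and replace $data_root$ with your actual data root as elsewhere in this README):
for scene in bouncingballs hellwarrior hook jumpingjacks lego mutant standup trex; do
    ns-train splatfacto --data $data_root$/dnerf/$scene
done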
For training real dynamic scenes from the N3V dataset such as cook_spinach, run
ns-train splatfacto-big --data $data_root$/N3V/cook_spinach --pipeline.model.path $data_root$/N3V/cook_spinach
One exception is flame_salmon in the N3V dataset; for it, run
ns-train splatfacto-big --data $data_root$/N3V/flame_salmon --pipeline.model.path $data_root$/N3V/flame_salmon --max_num_iterations 16000
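Similarly, to train the remaining N3V scenes in one loop (a sketch; the list assumes the standard Neural 3D Video scenes and leaves out flame_salmon, which uses the separate command above):
for scene in coffee_martini cook_spinach cut_roasted_beef flame_steak sear_steak; do
    ns-train splatfacto-big --data $data_root$/N3V/$scene --pipeline.model.path $data_root$/N3V/$scene
done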
Run the following command to render the images.
ns-render dataset --load_config $path_to_your_experiment$/config.yml --output-path $path_to_your_experiment$ --split test
If you followed all the previous steps, $path_to_your_experiment$ should look something like outputs/bouncing_balls/splatfacto/2024-XX-XX_XXXXXX.
To compute evaluation metrics on the rendered images, run
python scripts/metrics.py $path_to_your_experiment$/test
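Putting the last two steps together with the example experiment directory above (substitute your actual timestamped folder for 2024-XX-XX_XXXXXX):
ns-render dataset --load_config outputs/bouncing_balls/splatfacto/2024-XX-XX_XXXXXX/config.yml --output-path outputs/bouncing_balls/splatfacto/2024-XX-XX_XXXXXX --split test
python scripts/metrics.py outputs/bouncing_balls/splatfacto/2024-XX-XX_XXXXXX/test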
This repository contains our PyTorch implementation to support related research. The FPS reported in the paper is measured using our highly optimized CUDA framework, which we plan to commercialize and are not releasing at this time. For inquiries regarding the CUDA-based implementation, please contact Yuanxing Duan at [email protected].
The codebase is based on Nerfstudio.
@inproceedings{duan:2024:4drotorgs,
author = "Yuanxing Duan and Fangyin Wei and Qiyu Dai and Yuhang He and Wenzheng Chen and Baoquan Chen",
title = "4D-Rotor Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes",
booktitle = "Proc. SIGGRAPH",
year = "2024",
month = July
}