Yitong Xia¹, Hao Tang¹, Radu Timofte¹², Luc Van Gool¹
¹CVL, D-ITET, ETH Zürich
²CVL, CAIDAS, University of Würzburg
Please set up the environment and code with the following commands (clone first, so that `requirements.txt` is available):

```bash
git clone https://github.com/yitongx/sinerf.git
cd sinerf
conda create -n sinerf python=3.8.12
conda activate sinerf
pip install -r requirements.txt
```
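A quick sanity check that the environment works; this assumes PyTorch is installed via `requirements.txt`, which is an assumption here rather than something this README documents:

```python
# Minimal environment check: confirm PyTorch imports and report
# whether CUDA is visible (training is GPU-heavy).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```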
The original LLFF dataset can be downloaded from its original release; SiNeRF uses the refined version provided by NeRFmm. Please download the data with the following commands:

```bash
wget https://www.robots.ox.ac.uk/~ryan/nerfmm2021/nerfmm_release_data.tar.gz
tar -xzvf nerfmm_release_data.tar.gz
```
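The training command below expects each scene in its own folder under a base directory such as `./data/LLFF/`. Here is a hypothetical check that your extraction matches that layout (the exact paths are assumptions; adjust them to wherever you extracted the archive):

```python
# Report which of the eight LLFF scenes are present under BASE_DIR.
from pathlib import Path

BASE_DIR = Path("./data/LLFF")  # hypothetical location; adjust as needed
scenes = ["fern", "flower", "fortress", "horns",
          "leaves", "orchids", "room", "trex"]
for scene in scenes:
    print(f"{scene}: {'found' if (BASE_DIR / scene).is_dir() else 'missing'}")
```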
We provide `train_eval_sinerf.py` for training SiNeRF and evaluating the test results. Please run the following command:
```bash
python tasks/nerfmm/train_eval_sinerf.py \
    --base_dir [BASE_DIR] \
    --root_dir [ROOT_DIR] \
    --scene_name [SCENE] \
    --model_type "sinerf" \
    --hidden_dims 256 \
    --pos_enc_levels 0 \
    --dir_enc_levels 0 \
    --epoch 10000 \
    --use_ROI \
    --ROI_schedule_head 0.0 \
    --ROI_schedule_tail [T_R] \
    --alias ""
```
where

- `base_dir`: where you store the dataset, e.g. `./data/LLFF/`.
- `root_dir`: where you wish to save outputs, e.g. `./outputs/sinerf`. Use the `--alias` argument to further specify this directory's name. A task directory `root_dir/[TASK_NAME]/` will be created automatically.
- `scene_name`: one of `fern`, `flower`, `fortress`, `horns`, `leaves`, `orchids`, `room`, `trex`.
- `model_type`: choose either `sinerf` or `official` for the official NeRFmm baseline.
- `pos_enc_levels` and `dir_enc_levels`: the Positional Encoding (PE) levels for position $\textbf{x}$ (0 or 10) and direction $d$ (0 or 4). For SiNeRF both are set to 0, i.e. no PE is applied to the inputs; see the sinusoidal-layer sketch further below.
- `use_ROI`: whether to use Mixed Region Sampling (MRS), described in Section 3.4 of the paper.
- `ROI_schedule_head`: starting time of MRS, a float ranging from 0 to 1. Set to 0 by default.
- `ROI_schedule_tail`: ending time of MRS, a float ranging from 0 to 1. Set to 0.05 for $\textit{Fortress}$ and $\textit{Trex}$ and 0.005 for the rest. You are also encouraged to try different values along with different random seeds to reproduce the results in the paper. A rough sketch of this schedule follows the list.
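For intuition, the two schedule flags can be read as fractions of total training time between which ROI-based sampling is active. The sketch below is an illustration under that assumption; the function name and gating logic are illustrative, not the repository's API:

```python
# Hypothetical gate for Mixed Region Sampling: ROI-based ray sampling
# is active while training progress lies inside [head, tail]; plain
# random sampling is used otherwise.
def roi_sampling_active(step: int, total_steps: int,
                        head: float = 0.0, tail: float = 0.005) -> bool:
    progress = step / total_steps
    return head <= progress <= tail
```

Under this reading, `--epoch 10000` with a tail of 0.005 would keep ROI-based sampling on for roughly the first 50 epochs.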
The model checkpoints will be saved in the `[ROOT_DIR]/[TASK_NAME]/` directory. The evaluation results will be saved in the `[ROOT_DIR]/[TASK_NAME]/render_eval/` directory.
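Recall that with `--pos_enc_levels 0` and `--dir_enc_levels 0` the network consumes raw coordinates and relies on sinusoidal activations instead of positional encoding. Here is a minimal sketch of such a layer; the frequency factor `w0` and its value of 30 follow the SIREN paper and are assumptions here, not necessarily the repository's exact settings:

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sinusoidal activation:
    y = sin(w0 * (W x + b))."""
    def __init__(self, in_dim: int, out_dim: int, w0: float = 30.0):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(x))
```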
To plot the aligned camera trajectories of COLMAP and your learned poses, as in Figure 2, please run the following command:
```bash
python tasks/nerfmm/vis_learned_poses.py \
    --base_dir [BASE_DIR] \
    --scene_name [SCENE] \
    --ckpt_dir [ROOT_DIR/CKPT_DIR]
```
This produces an image like the following:
To produce the spiral-like RGB and depth videos, please run the following command:
```bash
python tasks/nerfmm/spiral_sinerf.py \
    --base_dir [BASE_DIR] \
    --scene_name [SCENE] \
    --model_type "sinerf" \
    --pos_enc_levels 0 \
    --dir_enc_levels 0 \
    --ckpt_dir [ROOT_DIR/CKPT_DIR]
```
After concatenating the RGB and depth clips, you get an MP4 video like the following:
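The concatenation step is left to you; below is a hypothetical helper using OpenCV that places the two clips side by side. The file names are placeholders, and it assumes both clips share the same frame size and frame rate:

```python
import cv2
import numpy as np

rgb = cv2.VideoCapture("rgb.mp4")      # placeholder paths: adjust to the
depth = cv2.VideoCapture("depth.mp4")  # clips produced by spiral_sinerf.py
fps = rgb.get(cv2.CAP_PROP_FPS)
writer = None

while True:
    ok_rgb, frame_rgb = rgb.read()
    ok_depth, frame_depth = depth.read()
    if not (ok_rgb and ok_depth):
        break
    frame = np.hstack([frame_rgb, frame_depth])  # side-by-side layout
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("rgb_depth.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, h))
    writer.write(frame)

for handle in (rgb, depth, writer):
    if handle is not None:
        handle.release()
```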
This work was done by Yitong Xia during his semester project at the Computer Vision Laboratory (CVL), D-ITET, ETH Zürich, under the supervision of Dr. Hao Tang, Prof. Radu Timofte, and Prof. Luc Van Gool.
This work builds upon NeRFmm, whose code and experimental settings we reference.
If you find our paper or repository useful, please cite it as:
```bibtex
@inproceedings{Xia_2022_BMVC,
  author    = {Yitong Xia and Hao Tang and Radu Timofte and Luc Van Gool},
  title     = {SiNeRF: Sinusoidal Neural Radiance Fields for Joint Pose Estimation and Scene Reconstruction},
  booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
  publisher = {{BMVA} Press},
  year      = {2022},
  url       = {https://bmvc2022.mpi-inf.mpg.de/0131.pdf}
}
```