This is an unofficial implementation of the paper "NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors".
Demo renderings: `horns_test_inpainting_spiral_250000_rgb.mp4`, `room_test_inpainting_spiral_250000_rgb.mp4`
We annotated four scenes for NeRF inpainting. For each scene, we provide the annotated masks (generated with STCN), the inpainted frames (generated with LaMa), and the filled depth maps. Please download these four scenes and put them in the <Data directory>.
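As a rough guide, the downloaded scenes might be organized as sketched below. The folder names here are illustrative assumptions, not the repository's guaranteed layout; follow the structure of the downloaded archives.

```
<Data directory>/
├── horns/
│   ├── images/         # original RGB frames
│   ├── masks/          # free-form masks annotated with STCN
│   ├── inpainted/      # frames inpainted by LaMa
│   └── depth/          # filled depth maps
└── room/
    └── ...             # same structure for the other scenes
```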
Then, please download the pretrained NeRF models. These models were pretrained on the original data; our goal is to finetune them on the inpainted frames/depth maps to obtain an inpainted neural radiance field. You can download these models from the NeRF-pytorch repository or this page. Put the pretrained models in the <Model directory>.
For package installation, please refer to the original NeRF-pytorch repository.
We provide finetuned checkpoints that encode the inpainted scenes in their radiance fields. You can download them from this page and put them in the <Result directory>.
Then, modify `datadir`, `basedir`, `finetune_dir`, and `pre_ckpt` in the configuration files under `configs_inpainting` to point to the corresponding data paths and pretrained checkpoint paths.
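For reference, a configuration file in `configs_inpainting` might look like the sketch below. The paths and values are placeholders to replace with your own, and the non-path keys are assumed to follow the standard NeRF-pytorch LLFF settings rather than being copied from this repository:

```
expname = horns_inpainting
basedir = ./logs                       # output directory for checkpoints and renderings
datadir = ./data/horns                 # downloaded scene with masks/inpainted frames/depth
finetune_dir = ./results/horns         # finetuned checkpoint directory
pre_ckpt = ./models/horns.tar          # pretrained NeRF checkpoint (placeholder name)

# typical NeRF-pytorch LLFF settings (illustrative)
factor = 8
llffhold = 8
N_rand = 1024
N_samples = 64
N_importance = 64
use_viewdirs = True
```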
Finally, run the following command to render with the finetuned checkpoint:
```
bash inference.sh --config configs_inpainting/horns.txt
```
If everything works, you will find a rendered video in `basedir`, similar to `horns_test_inpainting_spiral_250000_rgb.mp4` shown above.
As for training, all you need to do is modify the corresponding items in the configuration files and run the following command to start the training process:
```
bash train.sh --config configs_inpainting/horns.txt
```