This repository is for our Pattern Recognition (PR) 2023 paper 'A Noising-Denoising Framework for Point Cloud Upsampling via Normalizing Flows'. In this paper, we present a novel noising-denoising framework for 3D point cloud upsampling (3DPU), which aims to generate dense points from a sparse input point cloud.
Install the common dependencies from the requirements.txt file:
pip install -r requirements.txt
We provide pre-processed supervised and self-supervised data for the PU-GAN, PU1K, and Sketchfab datasets.
Please place the datasets in ./data. You can store them elsewhere if you update the corresponding paths in args.py.
The directory structure of our project looks like this:
├── data                 <- Project data
│   ├── PU-GAN
│   │   ├── pointclouds
│   │   │   ├── train
│   │   │   └── test
│   │   └── meshes
│   │       ├── train
│   │       └── test
│   ├── PU1K
│   │   ├── pointclouds
│   │   │   ├── train
│   │   │   └── test
│   │   └── meshes
│   │       └── test
│   └── Sketchfab
│       ├── pointclouds
│       │   ├── train
│       │   └── test
│       └── meshes
│           ├── train
│           └── test
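As noted above, the dataset location is configurable through args.py. Assuming the data root is also exposed as a command-line flag (the flag name below is an illustrative guess, not confirmed from the repository), training from a custom location could look like the command below; otherwise, edit the default path in args.py directly.
python train.py --data_dir /path/to/your/data  # hypothetical flag; check args.py for the actual name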
Current settings in args.py are tested on a single NVIDIA GeForce RTX 3090. To reduce memory consumption, you can set batch_size or patch_size to a smaller value.
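For example, assuming batch_size and patch_size are exposed as command-line flags (the flag names and values below are illustrative, not confirmed from the repository), a lower-memory run could be launched as shown below; otherwise, lower the corresponding defaults in args.py.
python train.py --batch_size 16 --patch_size 256  # hypothetical flags and values; check args.py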
Train the model on the PU-GAN or Sketchfab dataset:
python train.py
Train the model on the PU1K dataset:
python train_pu1k.py
Test the model on the PU-GAN, PU1K, or Sketchfab dataset:
python test.py
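Which dataset is evaluated is controlled through args.py. If this is also exposed as a command-line option (the flag name below is an illustrative assumption, not confirmed from the repository), a test run might look like:
python test.py --dataset PU1K  # hypothetical flag; check args.py for the actual option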
If you find our code or paper useful, please cite:
@article{HU2023109569,
  title = {A Noising-Denoising Framework for Point Cloud Upsampling via Normalizing Flows},
  author = {Xin Hu and Xin Wei and Jian Sun},
  journal = {Pattern Recognition},
  volume = {140},
  pages = {109569},
  issn = {0031-3203},
  year = {2023}
}