
Data Setup

OcCo

We construct the training data based on ModelNet, in the same format as the data provided in PCN (which is based on ShapeNet). You can find our generated dataset based on ModelNet40 here; it is similar to the resources used in PCN and its follow-ups (summarised here).

If you want to generate your own data, please follow the instructions in render/readme.md.

Classification

In the classification tasks, we use the following benchmark datasets:

  • ModelNet10 [link]

  • ModelNet40 [link]

  • ShapeNet10 and ScanNet10 are from [PointDAN]

  • ScanObjectNN is obtained via enquiry to the authors of [paper]

  • ShapeNet/ModelNet Occluded are generated via utils/lmdb2hdf5.py on the OcCo pre-trained data:

     python lmdb2hdf5.py \
         --partial \
         --num_scan 10 \
         --fname train \
         --lmdb_path ../data/modelnet40_pcn \
         --hdf5_path ../data/modelnet40/hdf5_partial_1024 ;
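The converted HDF5 files can then be consumed by a PointNet-style dataloader. As an assumption based on the common PointNet/DGCNN HDF5 convention (verify against your own generated files), each file holds a `data` array of shape (N, num_points, 3) and a `label` array of shape (N,). A minimal sketch of writing and reading a toy file in that layout:

```python
import h5py
import numpy as np

# Create a tiny stand-in file in the assumed layout: 'data' holds point
# clouds of shape (N, num_points, 3), 'label' holds integer class ids.
with h5py.File("toy_partial.h5", "w") as f:
    f.create_dataset("data", data=np.random.rand(4, 1024, 3).astype(np.float32))
    f.create_dataset("label", data=np.array([0, 1, 2, 3], dtype=np.int64))

# Load it back the way a PointNet-style dataloader would.
with h5py.File("toy_partial.h5", "r") as f:
    points = f["data"][:]
    labels = f["label"][:]

print(points.shape, labels.shape)  # (4, 1024, 3) (4,)
```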

For ModelNet40, we noticed that the newer source provided in PointNet++ results in performance gains, yet we stick to the original data used in PointNet and DGCNN to make a fair comparison.

Semantic Segmentation

We use the provided S3DIS data from PointNet, which is also used in DGCNN.

Please see here for the download details. Note that if you download the original S3DIS and preprocess it via utils/collect_indoor3d_data.py and utils/gen_indoor3d_h5.py, you need to delete an extra symbol in the raw file first (reference).
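Locating that stray symbol by hand is tedious. The helper below is a hedged sketch (not part of this repo): it scans a raw annotation file for lines whose tokens do not all parse as the expected numeric `x y z r g b` columns, so you can find and delete the offending character.

```python
# Hypothetical helper: report line numbers in a raw S3DIS annotation file
# whose tokens do not all parse as numbers (the expected "x y z r g b" form).
def find_bad_lines(path):
    bad = []
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                # every token on a well-formed line should be numeric
                for token in raw.decode("utf-8").split():
                    float(token)
            except (UnicodeDecodeError, ValueError):
                bad.append(lineno)
    return bad

# Example: a toy file with one corrupted line
with open("toy_annotation.txt", "w") as f:
    f.write("1.0 2.0 3.0 255 255 255\n")
    f.write("4.0 5.0 \x10 6.0 128 128 128\n")  # stray control character
print(find_bad_lines("toy_annotation.txt"))  # [2]
```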

Part Segmentation

We use the data provided in PointNet, which is also used in DGCNN.

Jigsaw Puzzles

Please check utils/3DPC_Data_Gen.py for details, as well as the original paper.
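To make the pretext task concrete, here is a hedged sketch of the labelling idea (an assumption based on the referenced paper, not the repo's exact code in utils/3DPC_Data_Gen.py): each point cloud is partitioned into a k × k × k voxel grid, and the network predicts, per point, which voxel it came from; the flattened voxel index serves as the label.

```python
import numpy as np

# Hedged sketch: assign each point the index of its k x k x k voxel.
def jigsaw_labels(points, k=3):
    """points: (N, 3) array assumed normalised into the unit cube [0, 1]^3."""
    bins = np.clip((points * k).astype(int), 0, k - 1)  # per-axis voxel ids
    return bins[:, 0] * k * k + bins[:, 1] * k + bins[:, 2]  # flatten to one id

pts = np.array([[0.0, 0.0, 0.0],
                [0.99, 0.99, 0.99]])
print(jigsaw_labels(pts))  # [ 0 26]
```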