## Dataset

The overall directory structure should be:

```
│Point-BERT/
├──cfgs/
├──datasets/
├──data/
│   ├──ModelNet/
│   ├──ModelNetFewshot/
│   ├──ScanObjectNN/
│   ├──ShapeNet55-34/
├──.......
```
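
As a quick sanity check (a minimal sketch; run it from the repository root), you can confirm that the `data/` subfolders listed above are in place:

```python
# Sketch: verify that the expected dataset folders exist under data/.
import os

expected = ['ModelNet', 'ModelNetFewshot', 'ScanObjectNN', 'ShapeNet55-34']
for name in expected:
    path = os.path.join('data', name)
    print(f"{path}: {'found' if os.path.isdir(path) else 'MISSING'}")
```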

**ModelNet Dataset**: You can download the processed ModelNet data from [Google Drive] / [Tsinghua Cloud] / [BaiDuYun] (code: 4u1e) and save it in `data/ModelNet/modelnet40_normal_resampled/`. (Alternatively, you can download the official ModelNet from here and process it yourself.) The directory structure should then be:

```
│ModelNet/
├──modelnet40_normal_resampled/
│  ├── modelnet40_shape_names.txt
│  ├── modelnet40_train.txt
│  ├── modelnet40_test.txt
│  ├── modelnet40_train_8192pts_fps.dat
│  ├── modelnet40_test_8192pts_fps.dat
```
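
For a quick check of the processed files, a minimal loading sketch is below. It assumes each `*_8192pts_fps.dat` file is a pickled pair of point and label lists (as produced by the common ModelNetDataLoader preprocessing); verify against the dataloader in `datasets/` if your files were built differently.

```python
# Sketch: inspect the processed ModelNet cache.
# Assumption: each .dat file is a pickled [list_of_points, list_of_labels] pair.
import pickle

with open('data/ModelNet/modelnet40_normal_resampled/modelnet40_test_8192pts_fps.dat', 'rb') as f:
    list_of_points, list_of_labels = pickle.load(f)

print(len(list_of_points))      # number of test shapes (2468 in the standard ModelNet40 split)
print(list_of_points[0].shape)  # e.g. (8192, 6): xyz + normals per point
print(list_of_labels[0])        # class index into modelnet40_shape_names.txt
```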

**ModelNet Few-shot Dataset**: We follow previous work to split the original ModelNet40 into pairs of support and query sets. The split used in our experiments is publicly available at [Google Drive] / [Tsinghua Cloud] / [BaiDuYun] (code: bjbq). Download the split files and put them into `data/ModelNetFewshot/`; the structure should then be:

```
│ModelNetFewshot/
├──5way10shot/
│  ├── 0.pkl
│  ├── ...
│  ├── 9.pkl
├──5way20shot/
│  ├── ...
├──10way10shot/
│  ├── ...
├──10way20shot/
│  ├── ...
```
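
To peek at one few-shot fold, the sketch below can be used. The internal layout of each `.pkl` is an assumption here (a dict with `train` and `test` lists of `(points, label)` pairs); check the few-shot dataloader in `datasets/` for the exact format.

```python
# Sketch: load one few-shot split file.
# Assumption: each fold .pkl stores {'train': [(points, label), ...], 'test': [...]}.
import pickle

with open('data/ModelNetFewshot/5way10shot/0.pkl', 'rb') as f:
    fold = pickle.load(f)

print(type(fold))          # expected: dict with 'train' and 'test' entries
print(len(fold['train']))  # expected 5 ways x 10 shots = 50 support samples
```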

**ShapeNet55/34 Dataset**: You can download the processed ShapeNet55/34 dataset from [BaiduCloud] (code: le04) or [Google Drive]. Unzip the file under `ShapeNet55-34/`. The directory structure should be:

```
│ShapeNet55-34/
├──shapenet_pc/
│  ├── 02691156-1a04e3eab45ca15dd86060f189eb133.npy
│  ├── 02691156-1a6ad7a24bb89733f412783097373bdc.npy
│  ├── .......
├──ShapeNet-55/
│  ├── train.txt
│  └── test.txt
```
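
A minimal sketch for inspecting one sample and the split list is below; the `data/ShapeNet55-34/` prefix follows the overall layout at the top of this page, and the expected 8192-point size per shape should be verified against the dataloader in `datasets/`.

```python
# Sketch: inspect one ShapeNet55 point cloud and the train split list.
import numpy as np

pc = np.load('data/ShapeNet55-34/shapenet_pc/02691156-1a6ad7a24bb89733f412783097373bdc.npy')
print(pc.shape)  # expected (8192, 3): xyz coordinates of one shape

with open('data/ShapeNet55-34/ShapeNet-55/train.txt') as f:
    train_files = [line.strip() for line in f]
print(len(train_files), train_files[0])  # one .npy filename per line
```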

**ScanObjectNN Dataset**: Download the official data from here and unzip it into `data/ScanObjectNN/`. The directory structure should be:

```
│ScanObjectNN/
├──main_split/
│  ├── training_objectdataset_augmentedrot_scale75.h5
│  ├── test_objectdataset_augmentedrot_scale75.h5
│  ├── training_objectdataset.h5
│  ├── test_objectdataset.h5
├──main_split_nobg/
│  ├── training_objectdataset.h5
│  ├── test_objectdataset.h5
```
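
A quick check of one split is sketched below; the `data`/`label` keys and the 2048-point size follow the official ScanObjectNN h5 release, so adjust if your download differs.

```python
# Sketch: read one ScanObjectNN split with h5py.
import h5py

with h5py.File('data/ScanObjectNN/main_split/training_objectdataset_augmentedrot_scale75.h5', 'r') as f:
    points = f['data'][:]   # (num_objects, 2048, 3) point clouds
    labels = f['label'][:]  # (num_objects,) class indices (15 classes)

print(points.shape, labels.shape)
```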

**ScanNet Dataset**: Prepare the pretraining dataset following the instructions from DepthContrast, and place it into `data/ScanNet/scannet/`. To prepare the data faster, you can change the count from 10 to 100 at here and here. The directory structure should be:

```
│ScanNet/
├──scannet/
│  ├──scene0000_00/
│    ├── 0.npy
│    ├── 100.npy
│    ├── ...
│  ├──scene0000_01/
│    ├── 0.npy
│    ├── 100.npy
│    ├── ...
│  ├──...
```
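
To verify one preprocessed frame, the sketch below can be used; the per-frame `.npy` content depends on the DepthContrast preprocessing (typically an N x 3 xyz or N x 6 xyz+color array), so treat the expected shape as an assumption and check the pretraining dataloader.

```python
# Sketch: load one preprocessed ScanNet frame.
# Assumption: each frame .npy holds an (N, 3) or (N, 6) point array produced by
# the DepthContrast preprocessing.
import numpy as np

frame = np.load('data/ScanNet/scannet/scene0000_00/0.npy')
print(frame.shape, frame.dtype)
```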