This repository is the implementation of Push-the-Boundary: Boundary-aware Feature Propagation for Semantic Segmentation of 3D Point Clouds in PyTorch.
Two networks, PointNet++ and KP-Conv, are adopted as baselines in our work.
The code for both baselines has been tested on Ubuntu 20.04 with Python 3.8. Install the following dependencies:
- numpy
- scikit-learn 0.23.2
- pytorch 1.7.1
- cudatoolkit 10.1
For the KP-Conv backbone, you also need to compile the C++ extension modules in cpp_wrappers. Open a terminal in that folder and run:
sh compile_wrappers.sh
Download the data from the data link.
For the PointNet++ backbone, use the data extracted from pointnet_data_s3dis/stanford_indoor3d.zip. Unzip the archive and put the .npy files under the folder PointNet2_Backbone/data_s3dis/. The point clouds are pre-processed and contain the following fields:
- coordinate, i.e., x, y, z
- color, i.e., r, g, b
- label
- normal, i.e., nx, ny, nz
- boundary, i.e., 0 for interior and 1 for boundary
- direction, i.e., dx, dy, dz
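Assuming the fields are stored column-wise in the order listed above (14 columns per point; this ordering is an assumption, not confirmed by the repo, so verify it against an actual file), a room array can be split into its fields like this:

```python
import numpy as np

# Hypothetical helper: split a pre-processed S3DIS room array into its fields.
# The column layout (xyz, rgb, label, nxyz, boundary, dxyz) is an assumption
# based on the field list above -- check it against the actual .npy files.
def split_fields(room):
    assert room.shape[1] == 14, "expected 14 columns per point"
    xyz       = room[:, 0:3]                 # coordinates x, y, z
    rgb       = room[:, 3:6]                 # colors r, g, b
    label     = room[:, 6].astype(np.int64)  # semantic label
    normal    = room[:, 7:10]                # normals nx, ny, nz
    boundary  = room[:, 10].astype(np.int64) # 0 = interior, 1 = boundary
    direction = room[:, 11:14]               # direction dx, dy, dz
    return xyz, rgb, label, normal, boundary, direction

# Usage with a dummy array standing in for np.load("Area_1_office_1.npy"):
room = np.zeros((100, 14))
xyz, rgb, label, normal, boundary, direction = split_fields(room)
```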
For the KP-Conv backbone, use the data extracted from kpconv_data_s3dis/s3dis.zip. The scenes are stored in .ply format and contain the same fields. You can also find the subsampled point clouds, generated with the default voxel size of 5 cm. Unzip the data and put both the original and the subsampled point clouds under the folder KPConv_Backbone/data_s3dis/. Note that there is an additional field "dis_boundary", denoting the distance from each point to its closest boundary point; this field is not used in our final network.
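The subsampled clouds come from grid subsampling with a 5 cm voxel size. A rough numpy sketch of the idea, averaging all points that fall into the same voxel (the actual C++ implementation in cpp_wrappers also aggregates colors and labels, which is omitted here):

```python
import numpy as np

def grid_subsample(points, voxel_size=0.05):
    """Simplified grid subsampling: average all points in each voxel.

    Illustrative only -- the repo's compiled wrapper is faster and also
    handles colors and labels.
    """
    # Integer voxel coordinates for each point
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group
    _, inverse, counts = np.unique(voxels, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

pts = np.array([[0.01, 0.01, 0.01],
                [0.02, 0.02, 0.02],   # same 5 cm voxel as the first point
                [0.30, 0.30, 0.30]])  # a different voxel
sub = grid_subsample(pts)  # two output points
```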
Train the PointNet++ model using:
python train_semseg_boundary.py
Test the model using:
python test_semseg_boundary.py --log_dir your_resulted_log --test_area 5
where --log_dir specifies the path to your model directory.
Train the KP-Conv model using:
python train_S3DIS_boundary.py
Test the model using:
python test_models.py
In line 12 of test_models.py, you can specify your model directory.
Download the data from the data link and use the data extracted from data_sensaturban.zip. The scenes are stored in .ply format. Unzip the data and put both the original and the subsampled point clouds under the folder KPConv_Backbone/data_sensat/. The subsampled point clouds are pre-processed and contain the following fields:
- coordinate, i.e., x, y, z
- color, i.e., r, g, b
- label
- boundary, i.e., 0 for interior and 1 for boundary
- direction, i.e., dx, dy, dz
Note that normal information is included but not used in our final network. Label, boundary, and direction information is not available for the test set, i.e., Birmingham blocks 2 and 8, and Cambridge blocks 15, 16, 22, and 27.
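Boundary labels of this kind are commonly derived by checking whether any of a point's nearest neighbors carries a different semantic label. The exact procedure the pre-processing used is not spelled out here, so treat this as an illustrative sketch (brute-force neighbor search, small clouds only):

```python
import numpy as np

def label_boundaries(points, labels, k=2):
    """Mark a point as boundary (1) if any of its k nearest neighbors
    has a different semantic label; interior (0) otherwise.

    Illustrative O(N^2) sketch; not the repo's actual pre-processing.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude the point itself
    knn = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors
    return (labels[knn] != labels[:, None]).any(axis=1).astype(np.int64)

# Two runs of labels along a line: only the two points at the
# label interface should be flagged as boundary.
pts = np.array([[0.00, 0, 0], [0.09, 0, 0], [0.20, 0, 0],
                [0.26, 0, 0], [0.37, 0, 0], [0.46, 0, 0]])
lab = np.array([0, 0, 0, 1, 1, 1])
b = label_boundaries(pts, lab, k=2)  # -> [0, 0, 1, 1, 0, 0]
```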
Train the model using:
python train_SensatUrban_boundary.py
Test the model using:
python test_models.py
In line 12 of test_models.py, you can specify your model directory.
The segmentation outputs are stored as .ply files containing the predicted pointwise boundaries, directions, and semantic classes. They can be visualized with various software tools (e.g., Easy3D, CloudCompare, MeshLab).
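To evaluate the semantic predictions numerically rather than visually, per-class IoU and mIoU can be computed from a confusion matrix. This is a generic sketch, not one of the repo's scripts:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU from a confusion matrix; NaN for classes absent
    from both prediction and ground truth."""
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes,
                                                           num_classes)
    tp = np.diag(conf).astype(np.float64)
    union = conf.sum(0) + conf.sum(1) - tp
    with np.errstate(invalid="ignore", divide="ignore"):
        return tp / union

# Toy example with 3 classes: one point of class 2 is mislabeled as 1.
pred = np.array([0, 0, 1, 1, 2, 2])
gt   = np.array([0, 0, 1, 2, 2, 2])
iou  = per_class_iou(pred, gt, 3)   # [1.0, 0.5, 0.667]
miou = np.nanmean(iou)
```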
If you use (part of) the code / approach in a scientific work, please cite our paper:
@inproceedings{du2022pushboundary,
title={Push-the-Boundary: Boundary-aware Feature Propagation for Semantic Segmentation of 3D Point Clouds},
author={Du, Shenglan and Ibrahimli, Nail and Stoter, Jantien and Kooij, Julian and Nan, Liangliang},
booktitle={International Conference on 3D Vision (3DV)},
year={2022}
}