The procedure of real-world point cloud completion code. #4

Hi,
Thank you for your excellent work! Could you please explain the detailed code or procedure for real-world point cloud completion preprocessing? For example, how do you extract the object point cloud, denoise it, normalize it, apply transformations, etc.?
Best Regards.

Comments
Hi, thank you for your interest and for finding our work useful. We will release the preprocessing code shortly along with the dataset.
By the way, how do you prepare the canonical GT data? Do you use the pre-trained ConDor model to predict it, or just align it to a human-defined 'canonical axis'?
The ShapeNet dataset objects are pre-aligned to a common axis, which we use as our predefined canonical axis. We also use the dataset used by ConDor, which also has aligned point clouds. You can refer to https://github.com/brown-ivl/ConDor/blob/main/ConDor_pytorch/datasets/h5_dataset.py
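For reference, a minimal sketch of reading such a pre-aligned HDF5 split. It assumes the files store point clouds under a `"data"` key, as in the common ShapeNet `.h5` format; the filename is hypothetical, and the exact keys should be checked against `h5_dataset.py` above.

```python
import h5py
import numpy as np

def load_h5_pointclouds(path):
    """Load pre-aligned point clouds from a ShapeNet-style .h5 file."""
    with h5py.File(path, "r") as f:
        # Assumed key: "data" -> (num_shapes, num_points, 3), canonically aligned
        points = np.asarray(f["data"])
    return points

clouds = load_h5_pointclouds("train_plane.h5")  # hypothetical filename
print(clouds.shape)
```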
Thanks, and why are there two pre-trained weights for some classes, such as mug and bottle? What's the difference between them?
The two pre-trained weights correspond to training with different weights for the losses: L2 and DCD. Shapes like mugs and bottles may have an axis of symmetry. For instance, a mug without a handle has an axis of symmetry, while a mug with a handle doesn't. L2 helps supervise shape completion in cases where there is no axis of symmetry, as only a single valid pose exists for that object, whereas for object instances with an axis of symmetry, multiple possible ground-truth poses can exist, which is better supervised using the DCD loss, as it is only distribution-aware.
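To make the distinction concrete, here is a simplified sketch contrasting the two supervision signals: a paired L2 loss, which penalizes any deviation from the single GT pose, and the density-aware Chamfer distance (DCD, Wang et al. 2021), which only compares point distributions and so tolerates, e.g., rotations about a symmetry axis. This is not the repo's actual loss code; `alpha` follows the DCD paper's default.

```python
import torch

def l2_loss(pred, gt):
    # pred, gt: (B, N, 3); assumes point-to-point correspondence
    return ((pred - gt) ** 2).sum(-1).mean()

def dcd_loss(pred, gt, alpha=40.0):
    # pred: (B, N, 3), gt: (B, M, 3); no correspondence assumed
    d = torch.cdist(pred, gt)            # (B, N, M) pairwise distances
    d_pg, idx_pg = d.min(dim=2)          # nearest gt point for each pred point
    d_gp, idx_gp = d.min(dim=1)          # nearest pred point for each gt point
    B, N, M = d.shape
    # Density terms: how many points share the same nearest neighbor
    count_gt = torch.zeros(B, M, device=d.device).scatter_add_(
        1, idx_pg, torch.ones(B, N, device=d.device))
    count_pred = torch.zeros(B, N, device=d.device).scatter_add_(
        1, idx_gp, torch.ones(B, M, device=d.device))
    w_pg = 1.0 / count_gt.gather(1, idx_pg).clamp(min=1)
    w_gp = 1.0 / count_pred.gather(1, idx_gp).clamp(min=1)
    loss_pg = (1 - w_pg * torch.exp(-alpha * d_pg ** 2)).mean()
    loss_gp = (1 - w_gp * torch.exp(-alpha * d_gp ** 2)).mean()
    return 0.5 * (loss_pg + loss_gp)
```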
Hi, would you be able to release the pre-processing code? I'd like to test SCARP on some real-world scans captured with my own RealSense.
In the paper, we don't show point cloud completion on any real-world scans. All the point clouds are generated from the ShapeNet dataset, randomly rotated using SciPy, and sliced. Pre-processing real-world scans would require either a 2D segmentation model like SAM to segment the object, followed by 2D-3D association to get the segmented point cloud, or an unknown-instance segmentation model. The results on real-world scans that we showed in the video were obtained by performing a cutoff on known depth parameters (how far away the object and the background were in our point cloud).
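A minimal sketch of the depth-cutoff trick described above, assuming a RealSense-style depth frame and known camera intrinsics (`fx`, `fy`, `cx`, `cy` and the `near`/`far` bounds are placeholders for your own setup). The object is isolated by discarding everything outside a known depth range, then mean-centered.

```python
import numpy as np

def depth_to_object_pointcloud(depth, fx, fy, cx, cy, near=0.2, far=0.9):
    """depth: (H, W) array in meters; near/far bound the object's distance."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    mask = (depth > near) & (depth < far)   # cutoff on known depth range
    z = depth[mask]
    x = (u[mask] - cx) * z / fx             # back-project to camera frame
    y = (v[mask] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    return points - points.mean(axis=0)     # mean-center the segmented object
```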
Thanks @skymanaditya1 - I had already done everything you suggested. My question was more about whether any scale and/or orientation normalisation is being done (apart from the mean centering). When I run the model on a real point cloud, the estimated points seem to be at a totally different scale from the real data.
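One thing worth checking for the scale mismatch: ShapeNet shapes are commonly normalized to a unit sphere or unit bounding box, so a metric-scale real scan would land far outside the training distribution. The repo's exact normalization scheme is not confirmed in this thread, but a typical unit-sphere sketch looks like this:

```python
import numpy as np

def normalize_to_unit_sphere(points):
    """Center the cloud and scale it so the farthest point lies on the unit sphere."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale, scale  # keep `scale` to undo the normalization

# After completion, multiply the network output by `scale`
# (and re-add the centroid) to return to metric units.
```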