How can I back reproject RGBD data to point cloud after resizing the images? #8153
Comments
Hi @zc08 It sounds as though you want to go through the following steps:

1. Reduce the resolution of the depth image with a decimation filter.
2. Align the depth image to the color image.
3. Deproject the aligned depth pixels to a 3D point cloud.

Is this correct, please? If so, and you are able to use C++ (as the reference to rs-pointcloud would suggest), the scripting in the C++ discussion in the link below seems to cover the implementation of the three steps above (decimation, align, deprojection). Alternatively, the link below has Python scripting for generating a decimated and aligned point cloud without using deproject (it uses pc.calculate instead):
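As a rough illustration of the math behind the align and deproject steps, here is a minimal self-contained sketch. The `Intrinsics`/`Extrinsics` structs are hand-rolled stand-ins, not the librealsense `rs2_intrinsics`/`rs2_extrinsics` types, and distortion is ignored:

```cpp
#include <cmath>

// Hand-rolled stand-ins for the camera parameter types; NOT the librealsense
// rs2_intrinsics/rs2_extrinsics structs (which also carry distortion models
// that this sketch ignores).
struct Intrinsics { float fx, fy, ppx, ppy; };
struct Extrinsics { float rotation[9]; float translation[3]; }; // 3x3, row-major

// Step 3 (deproject): map a depth pixel (u, v) with a known depth to a 3D
// point in the depth camera's coordinate frame.
void deproject(const Intrinsics& in, float u, float v, float depth, float point[3]) {
    point[0] = depth * (u - in.ppx) / in.fx;
    point[1] = depth * (v - in.ppy) / in.fy;
    point[2] = depth;
}

// Step 2, first half (align): transform a point from the depth camera's frame
// into the color camera's frame using the depth-to-color extrinsics.
void transform_point(const Extrinsics& ex, const float from[3], float to[3]) {
    for (int i = 0; i < 3; ++i)
        to[i] = ex.rotation[i * 3 + 0] * from[0]
              + ex.rotation[i * 3 + 1] * from[1]
              + ex.rotation[i * 3 + 2] * from[2]
              + ex.translation[i];
}

// Step 2, second half: project the transformed point onto the color image to
// find which color pixel the depth sample corresponds to.
void project(const Intrinsics& in, const float p[3], float& u, float& v) {
    u = p[0] / p[2] * in.fx + in.ppx;
    v = p[1] / p[2] * in.fy + in.ppy;
}
```

Aligning a whole frame amounts to doing this per pixel; in practice the librealsense `rs2::align` processing block handles it for you, including distortion.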
Thank you for the reply!
Yes, that's basically what I want to do, with one difference: in step 1, I would like to pick the closest depth value (as the rs-depth-filter sample does) rather than the median value that the decimation filter produces. (But I can try the decimation filter too.) Another question: does the decimation filter update the intrinsics accordingly, so that the deprojection process remains correct? Thanks!
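The difference between the two downsampling choices can be sketched as follows. This is an illustration of the idea only, not the librealsense decimation filter itself (whose kernel and edge handling differ); zero is treated as invalid depth, as librealsense conventionally does:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Downsample a depth image by an integer factor, taking either the closest
// valid value (minimum non-zero) or the median of each factor-by-factor block.
// Illustrative sketch only; not the librealsense decimation filter.
std::vector<uint16_t> downsample(const std::vector<uint16_t>& depth, int w, int h,
                                 int factor, bool use_min) {
    int ow = w / factor, oh = h / factor;
    std::vector<uint16_t> out(ow * oh, 0);
    for (int oy = 0; oy < oh; ++oy)
        for (int ox = 0; ox < ow; ++ox) {
            std::vector<uint16_t> block;
            for (int dy = 0; dy < factor; ++dy)
                for (int dx = 0; dx < factor; ++dx) {
                    uint16_t d = depth[(oy * factor + dy) * w + ox * factor + dx];
                    if (d != 0) block.push_back(d); // skip invalid (zero) depth
                }
            if (block.empty()) continue; // whole block invalid -> leave as 0
            if (use_min) {
                out[oy * ow + ox] = *std::min_element(block.begin(), block.end());
            } else {
                std::sort(block.begin(), block.end());
                out[oy * ow + ox] = block[block.size() / 2];
            }
        }
    return out;
}
```

Taking the minimum is the conservative choice for obstacle detection (a thin obstacle in a block is never averaged away), at the cost of being more sensitive to near-range noise than the median.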
I noticed that the documentation mentions this.
Thanks!
Thanks very much for the quote about intrinsics, which confirms the answer to your decimation question. A setting that may be of interest to you if you are concerned with the confidence values of pixels is secondpeakdelta, which can be set by loading a JSON camera configuration file. The setting is described in Intel's paper about improving depth results on drones.
Hi @zc08 Do you require further assistance with this case, please? Thanks!
No, thank you!
Issue Description
I'm implementing an obstacle detection algorithm for Android devices.
For efficiency reasons, I intend to resize both the RGB image and the depth image to 1/4 of their original size, and then back-project the RGBD data to obtain a 3D point cloud.
The rs-pointcloud example demonstrates the back-projection and RGB-depth alignment APIs, but I don't know exactly how to do this after resizing.
In theory, I just need to apply the normal back-projection process, with the camera intrinsics (specifically, the focal length and principal point) rescaled accordingly. I dived into the code a bit and found that the function
`const float3* pointcloud::depth_to_points(rs2::points output, const rs2_intrinsics& depth_intrinsics, const rs2::depth_frame& depth_frame, float depth_scale)`
basically does what I want. But extracting all the relevant code and modifying it by hand seems cumbersome and error-prone.
Is there a simple way to do the reprojection after resizing the images? (E.g. by modifying the intrinsic parameters inside the depth frame?) Thanks in advance!