rs2_deproject_pixel_to_point(), does argument "depth" have to be the actual depth of target surface? #10037
Comments
Hi @surefyyq

Usually when rs2_deproject_pixel_to_point is being used to obtain 3D xyz coordinates, you first have to align depth to color. A successful example of a Python script for obtaining xyz coordinates with this method can be found at #6749 (comment). In that script, the depth value used in the rs2_deproject_pixel_to_point equation comes from the instruction depth = depth_frame.get_distance(x, y). The SDK instruction get_distance() can be used to obtain the z-distance of a coordinate in meters.

However, if you are creating a pointcloud by performing depth-to-color alignment and then obtaining the 3D real-world point cloud coordinates with rs2_deproject_pixel_to_point, the use of alignment may result in inaccuracies. Using points.get_vertices() instead to generate the point cloud, and then storing the vertices in a numpy array whose values can be printed out, should provide better accuracy. This subject is discussed in detail in the Python case at #4315.
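A minimal sketch of the workflow described above (align depth to color, read the z-distance with get_distance(), then deproject the pixel), assuming a default-configured camera; the stream settings and the example pixel px, py are placeholder values, not taken from the original scripts:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # align depth to the color stream
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()

    # Intrinsics of the color stream (depth has been aligned to it).
    intrinsics = color_frame.profile.as_video_stream_profile().intrinsics

    px, py = 320, 240                         # example pixel
    depth = depth_frame.get_distance(px, py)  # z-distance in metres
    point = rs.rs2_deproject_pixel_to_point(intrinsics, [px, py], depth)
    print("xyz (m):", point)
finally:
    pipeline.stop()
```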
Hi @MartyG-RealSense, thanks for your reply. The process of obtaining the 3D xyz coordinates is no problem, and I can get the correct 3D xyz coordinates of the pointed position on the object surface. From your explanation and the quoted case, I reckon there is no way to get the 3D coordinates of a point that lies between the object surface and the RealSense camera. I took a test about it by passing pixel_depth/2 as the depth argument, and I found that the values of x and y were halved as well, not only z.
If you wanted to halve the pixel depth value but not affect x and y, then I would try isolating pixel_depth/2 from the xy coordinates by adding a couple more brackets:

ponit_in_the_air = rs2_deproject_pixel_to_point(intrinsics, [pixel_x, pixel_y], (pixel_depth/2))

This should ensure that only the pixel_depth value is halved and not [pixel_x, pixel_y].
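For illustration, here is a minimal, camera-free sketch of this call with hand-built placeholder intrinsics (the numeric values are hypothetical and not taken from this thread):

```python
import pyrealsense2 as rs

# Hand-built pinhole intrinsics, purely for illustration.
intrinsics = rs.intrinsics()
intrinsics.width, intrinsics.height = 640, 480
intrinsics.ppx, intrinsics.ppy = 320.0, 240.0
intrinsics.fx, intrinsics.fy = 600.0, 600.0
intrinsics.model = rs.distortion.none
intrinsics.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]

pixel_x, pixel_y = 400, 300   # example pixel
pixel_depth = 1.0             # example depth in metres

point_full = rs.rs2_deproject_pixel_to_point(intrinsics, [pixel_x, pixel_y], pixel_depth)
point_half = rs.rs2_deproject_pixel_to_point(intrinsics, [pixel_x, pixel_y], (pixel_depth / 2))

print(point_full)   # [0.1333..., 0.1, 1.0]
print(point_half)   # [0.0666..., 0.05, 0.5] - every coordinate is halved
```

For the no-distortion pinhole model, deprojection computes x = depth * (u - ppx) / fx, y = depth * (v - ppy) / fy and z = depth, so scaling the depth argument scales all three output coordinates. This is consistent with the "all values are halved" observation reported further down in the thread: the deprojected point always lies on the viewing ray through the pixel.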
Hi @surefyyq, do you require further assistance with this case, please? Thanks!
@MartyG-RealSense Sorry for the delay. I tried adding more brackets as you advised, but all the values are still halved. This question comes from the following requirement:
I considered your requirement carefully. As you require two distance points and not one, you could generate a 3D depth pointcloud of the scene and retrieve the real-world Z-distance from the camera for the coordinates corresponding to the two points (the highest arc point and the one that is not at the arc surface).
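A hedged sketch of this approach, assuming a depth_frame obtained from an already-started pipeline as in the earlier snippet; the two pixel coordinates are placeholders for the highest arc point and the point off the arc surface:

```python
import numpy as np
import pyrealsense2 as rs

pc = rs.pointcloud()
points = pc.calculate(depth_frame)  # depth_frame from the running pipeline

# The vertices come back as a structured buffer; view them as an (N, 3) float32 array.
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

width = depth_frame.get_width()
x1, y1 = 300, 200   # placeholder pixel of the highest arc point
x2, y2 = 340, 260   # placeholder pixel of the point not on the arc surface

z1 = verts[y1 * width + x1][2]   # real-world Z-distance in metres
z2 = verts[y2 * width + x2][2]
print("Z-distances (m):", z1, z2)
```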
Thanks for your advice, @MartyG-RealSense.
You are very welcome, @surefyyq - thanks very much for the update!
The key step of obtaining the 3D coordinates of a pointed pixel in the RGB frame is using rs2_deproject_pixel_to_point(intrinsics, [pixel_x, pixel_y], pixel_depth), where pixel_depth comes from pixel_depth = depth_frame.get_distance(pixel_x, pixel_y), right?

So can I get the 3D coordinates at the x & y position of (pixel_x, pixel_y) but the z position of (pixel_depth/2) through rs2_deproject_pixel_to_point(intrinsics, [pixel_x, pixel_y], pixel_depth/2)? Does the argument "depth" have to be the actual depth of the target position surface?