Deprojecting pixels to 3D points #1413
Comments
Hello @adarvit. As for the code - it seems legit. I'm not sure I understand the question, but if you want to verify the data flow you may check the following:
If the input is a depth-aligned-to-color frame, then its intrinsics and the color frame's intrinsics must be identical. You haven't specified what the original depth value at [100,100] was before deprojecting the pixel, nor the 'depth_scale'. And while the results you've got are legit for a particular case (the 3D point is located very close, yet in front of the camera), it is not possible to deduce more from this. Try several distinct (and valid) depth coordinates - deproject them and see how they correspond to the scene. One quick assert is that de-projection must not modify the Z (depth) value.
I want to find the 3D point of a depth pixel which I have already aligned to the RGB frames. All the frames are already saved on my disk (in order to align them I used the code shown in the Python examples). I used depth_frame = frames.get_depth_frame() only for getting the depth intrinsics of the depth sensor, since it always returns the same intrinsics values no matter which frame the camera sees live; those values are constants.
How can I use the aligned frames on my disk to find those 3D points in order to do my calculations? If rs.rs2_deproject_pixel_to_point only works on live stream frames, I would like to know how to implement the method myself and apply it to my frames.
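Since the question is how to re-implement deprojection offline, here is a minimal sketch assuming the no-distortion (pinhole) case; the intrinsics values below are placeholders you would replace with the ones recorded once from your own stream profile:

```python
def deproject_pixel_to_point(intrin, pixel, depth_m):
    """Pinhole deprojection (no lens distortion): maps a 2D pixel plus a
    depth in meters to a 3D point in the camera coordinate frame."""
    x = (pixel[0] - intrin["ppx"]) / intrin["fx"] * depth_m
    y = (pixel[1] - intrin["ppy"]) / intrin["fy"] * depth_m
    return [x, y, depth_m]  # Z is the depth itself, unchanged

# Placeholder intrinsics: record your own once from
# frame.profile.as_video_stream_profile().intrinsics and reuse them offline.
intrin = {"fx": 615.0, "fy": 615.0, "ppx": 320.0, "ppy": 240.0}

# depth_raw would come from the saved 16-bit depth image at the pixel of interest
depth_raw = 1500                 # hypothetical raw units at the pixel
depth_scale = 0.001              # meters per raw unit, as reported by the device
point = deproject_pixel_to_point(intrin, [100, 100], depth_raw * depth_scale)
```

Note that deprojecting the principal point always yields X = Y = 0, and Z always equals the depth you pass in - a quick sanity check for any offline implementation.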
Use this function. Intrinsics change when the stream resolution changes.
@vladsterz |
This indeed seems to be the reason -
The missing part: you need to retrieve the raw depth data that corresponds to the [100,100] coordinate, then convert it to meters using the depth scale.
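As a sketch of that conversion (the raw value here is made up; read the scale from your own device, where 0.001 m per unit is a common default):

```python
# Raw depth values are unitless 16-bit integers; multiplying by the
# device's depth scale converts them to meters.
depth_scale = 0.001          # from depth_sensor.get_depth_scale()
depth_raw = 1234             # hypothetical raw value at pixel [100, 100]
depth_in_meters = depth_raw * depth_scale
```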
Hi @adarvit, you can use rs-measure.
I used it and it seems that rs-measure provides a more accurate result. I can't use it for my frames since rs-measure takes live stream frames. Edit: I changed the intrinsics to be those of the color sensor instead of the depth sensor, and now the distance seems to be a little more accurate. Is that possible?
Thank you all for the help, problem solved :)
@adarvit, what was your ultimate solution? Was there something better than using the color sensor's intrinsics? |
@adarvit Hi, I'm working on a similar project: measuring objects with a D435 by first taking pictures on-site, and then measuring back in the lab. #3549
I think the issue is that you took the x,y from the color frame but used the depth intrinsics to measure. I used the color intrinsics.
This is how I do it, and within 5 meters it's accurate to within 5 cm.
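Once the two endpoints of an object are deprojected to 3D, the length is just the Euclidean distance between them; a sketch with made-up points:

```python
import math

def distance_3d(p1, p2):
    """Euclidean distance between two 3D points, in the same units
    as the inputs (here: meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical deprojected endpoints of an object edge, in meters
left = [-0.10, 0.02, 1.50]
right = [0.20, 0.02, 1.50]
length = distance_3d(left, right)  # 0.30 m
```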
@soarwing52 Hi, may I ask what the x_ratio and y_ratio in your code snippet are? To anyone who could help: I am now dealing with a very similar problem where I try to find 3D points from 2D RGB pixels with depth aligned to the color stream:
Even with the color intrinsics in rs.rs2_deproject_pixel_to_point(), my result is far from correct. Any ideas would be appreciated. Thanks! Problem solved! The pixel coordinates should have the x value first and then the y value - not the [height, width] convention.
Because I adjusted the window size in my application, the x,y coordinates must be scaled by that ratio to get the correct distance data and avoid reading missing data.
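A sketch of what such a ratio correction might look like (window and stream sizes here are assumptions): a pixel clicked in a resized display window has to be mapped back to the stream's native resolution before indexing the depth image or deprojecting.

```python
stream_w, stream_h = 640, 480      # native stream resolution (assumed)
window_w, window_h = 1280, 960     # resized display window (assumed)

x_ratio = stream_w / window_w      # 0.5 in this example
y_ratio = stream_h / window_h      # 0.5 in this example

def window_to_stream(px, py):
    """Map a pixel clicked in the resized window back to stream coordinates."""
    return int(px * x_ratio), int(py * y_ratio)

u, v = window_to_stream(400, 300)  # stream-space pixel to deproject
```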
@soarwing52 Hi, thank you for your response. The problem is already solved: I had placed the pixel coordinates in the wrong order. The correct order is x-coordinate first, then y-coordinate.
I am trying to get 3D points of objects using aligned color and depth frames that I have, in order to measure those objects (length, width, etc.).
I am using rs.rs2_deproject_pixel_to_point to get a 3D point from the depth image, but I am not quite sure what the returned values mean and how to connect them to the real depth value that corresponds to the pixel.
I am using the following code to construct a 3D point from a pixel of the depth frame:
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
depth_pixel = [100, 100]
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_scale)
and the result is: [-5.814936230308376e-05, -3.9755548641551286e-05, 0.00012498664727900177]
What is the meaning of those values, and how can I use them on specific frames I saved on my disk? My depth frame is already aligned to the color frame (each depth frame pixel is mapped to the same pixel in the RGB frame).
My goal is to find the 3D points of two pixels in an image and calculate the real distance between them in order to measure the length of an object.
Thank you
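For what it's worth, the snippet above passes depth_scale itself as the third argument of rs2_deproject_pixel_to_point, which expects the pixel's depth in meters; since deprojection preserves Z, the returned Z simply equals depth_scale rather than a real distance, which is why the result is so tiny. A sketch of the difference, using the pinhole model with placeholder intrinsics:

```python
# Pinhole deprojection (no distortion), just to illustrate the effect:
# the Z of the returned point equals whatever depth you pass in.
def deproject(fx, fy, ppx, ppy, px, py, depth_m):
    return [(px - ppx) / fx * depth_m,
            (py - ppy) / fy * depth_m,
            depth_m]

fx = fy = 475.0          # placeholder intrinsics
ppx, ppy = 310.0, 245.0

wrong = deproject(fx, fy, ppx, ppy, 100, 100, 0.000125)         # scale alone
fixed = deproject(fx, fy, ppx, ppy, 100, 100, 1500 * 0.000125)  # raw * scale
```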