get_texture_coordinate #6339
I hope that the long Python discussion in the link below will provide useful guidance if you have not seen it already.
My current problem is that I obtained the coordinates of some points in the color image ahead of time (these coordinates were taken from the extracted 2D image, not from points.get_texture_coordinates()), and now I want to get the corresponding 3D coordinates. My idea is to find the locations of those coordinates in points.get_texture_coordinates() and then use the same indices into get_vertices(). Is this the right approach?
The usual approach for converting 2D coordinates to 3D points would be rs2_deproject_pixel_to_point.
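For reference, a minimal sketch of that call in Python, assuming a live pipeline and that the pixel is queried against the same frame whose intrinsics are passed in (the example pixel values are arbitrary):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Intrinsics of the frame we will sample depth from
intrin = depth_frame.profile.as_video_stream_profile().intrinsics

u, v = 320, 240                           # example pixel
depth_m = depth_frame.get_distance(u, v)  # depth in meters at (u, v)
point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_m)
print(point)                              # [X, Y, Z] in meters, camera coordinates
pipeline.stop()
```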
I want to know if my idea is correct, because it's easier for me to manipulate other data. Here's part of my program:
...
I am still learning RealSense programming, so I will have to refer your question to another RealSense team member. I apologise for the wait in the meantime. @dorodnic Could you take a look at the points.get_texture_coordinates() code of @nevynwong above, please?
@nevynwong , please elaborate on what you mean by
Specify precisely the Librealsense calls made and the results received.
I first extracted the color sequence images from the .bag file, then placed some markers on the human face, then read them in Python and mapped them to the textured point cloud. Here are the results of my markers in the 2D image and in the 3D reconstruction. Sorry, the picture cannot be sent; I am trying.
It seems that the Librealsense-related code is correct, though it could be further optimized. So when you arrive at the core algorithm segment, the texture coordinates can be applied directly onto the depth frame to find the corresponding vertices. For some RGB pixels the corresponding depth data will be zero (no data), and that is normal. So if you find too many "no data" references, then I would recommend expanding the search "window" to NxN RGB pixels, or using aggressive "hole filling" filter settings. The content of the face marks and the way they are applied is not part of the SDK; it is up to you to verify their correctness. Still, can you explain the meaning of the "rotation changes"?
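A sketch of the two fallbacks mentioned above, assuming (u, v) is a pixel already expressed in the depth frame's coordinates; window_depth is an illustrative helper, not an SDK function:

```python
import pyrealsense2 as rs

def window_depth(depth_frame, u, v, n=3):
    """Return the first non-zero depth found in an NxN window around (u, v)."""
    half = n // 2
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            x = min(max(u + dx, 0), depth_frame.get_width() - 1)
            y = min(max(v + dy, 0), depth_frame.get_height() - 1)
            d = depth_frame.get_distance(x, y)
            if d > 0:
                return d
    return 0.0  # still no data anywhere in the window

# Alternatively, fill holes before sampling. Mode 2 = "nearest from around".
hole_filling = rs.hole_filling_filter(2)
# filled = hole_filling.process(depth_frame)
```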
Because I need the aligned color and point cloud data, I can only do this. I tried to directly use the depth frame to find the corresponding point cloud, but the result was not ideal. Maybe I used the parameters of the deproject function incorrectly. The earlier part of my program is here.
Should the intrinsics parameter of rs2_deproject_pixel_to_point be the intrinsics of the depth frame after aligning depth to the color frame?
Check what the source of depth_frame is. My understanding is that it comes from
and if that assumption is correct, the
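To illustrate the point being discussed: after rs.align(rs.stream.color), the aligned depth frame carries the color stream's intrinsics, so those are the intrinsics to pass to rs2_deproject_pixel_to_point for a pixel picked in the color image. A minimal sketch, assuming a live pipeline:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)

frames = align.process(pipeline.wait_for_frames())
aligned_depth = frames.get_depth_frame()

# Intrinsics of the aligned depth frame (== color intrinsics after alignment)
intrin = aligned_depth.profile.as_video_stream_profile().intrinsics

u, v = 100, 150  # a marker pixel chosen in the color image
point = rs.rs2_deproject_pixel_to_point(
    intrin, [u, v], aligned_depth.get_distance(u, v))
pipeline.stop()
```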
I have understood the first question. Thank you for your reply.
| Camera Model | D415 |
| Operating System & Version | Win10 |
| Platform | PC |
| Language | Python |
I've asked before about color and point cloud alignment, and I know that pc.map_to(color) and pc.calculate(depth) can produce aligned point cloud and color data. However, I've found that for some points in the color image, the mapping to the point cloud seems to involve some rotation changes. What is the reason?
Part of my program is this:
```python
.....
pc.map_to(color_frame)              # texture the point cloud with the color frame
points = pc.calculate(depth_frame)  # one vertex per depth pixel
tex_coor = np.asanyarray(points.get_texture_coordinates())  # normalized (u, v) per vertex
pc_coor = np.asanyarray(points.get_vertices())              # (x, y, z) per vertex
....
```
Then I match the index position of some points in tex_coor to the same index position in pc_coor to get the corresponding point cloud coordinates, because I need not only the data from the point cloud but also the data of the corresponding points. I want to ask: is this the right thing to do? If not, what do you suggest?
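As a minimal sketch of the lookup described above (assuming tex_coor and pc_coor come from the code block earlier; find_vertex_for_pixel is an illustrative helper, not an SDK call), a marker pixel can be converted to a normalized UV and matched against the texture coordinates:

```python
import numpy as np

def find_vertex_for_pixel(px, py, tex_coor, pc_coor, width, height):
    """Find the 3D vertex whose texture coordinate is closest to color pixel (px, py)."""
    uv = tex_coor.view(np.float32).reshape(-1, 2)    # one normalized (u, v) per vertex
    xyz = pc_coor.view(np.float32).reshape(-1, 3)    # one (x, y, z) per vertex
    target = np.array([px / width, py / height])     # pixel -> normalized UV
    idx = np.argmin(np.sum((uv - target) ** 2, axis=1))
    return xyz[idx]  # may be (0, 0, 0) if the depth there was missing
```

Because pc.calculate() emits one vertex per depth pixel, the nearest-UV match identifies the depth pixel whose color projection lies closest to the marker, which is the correspondence the index-matching idea relies on.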