Pyrealsense2: Question about 3D reconstruction problem #1231
Comments
Hi @ljc19800331 ,
tl;dr: you should use:
pc = rs.pointcloud()
pc.map_to(color)
points = pc.calculate(depth)
vtx = np.asanyarray(points.get_vertices())
tex = np.asanyarray(points.get_texture_coordinates()) # <----------- Like this
In detail: after calling pc.calculate(depth), the returned points object holds one vertex and one normalized (u, v) texture coordinate per depth pixel.
Yes :)
Each stream profile is either a video, motion, or pose stream (at the moment). Video stream profiles provide camera intrinsics, and motion stream profiles provide motion device intrinsics. Here's how to get this data in Python, along with some additional data, and finally how to map a depth pixel to a color pixel:
import pyrealsense2 as rs
pipeline = rs.pipeline()
pipe_profile = pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
# Intrinsics & Extrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
# Depth scale - units of the values inside a depth frame, i.e how to convert the value to units of 1 meter
depth_sensor = pipe_profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
# Map depth to color
depth_pixel = [240, 320] # Random pixel
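# NOTE: rs2_deproject_pixel_to_point expects the depth at this pixel in meters
# (depth_scale * raw_depth_value, or depth_frame.get_distance(x, y)); passing
# only depth_scale, as on the next line, is a slip corrected later in this thread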
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_scale)
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
pipeline.stop()
Hello @zivsha , thanks for the reply. I can successfully get the textured point cloud. However, it seems to map only the depth information to the 3D point cloud vertices, while my goal is to get a color-mapped point cloud (sorry for the mistake before). The problem with my alignment is that I map the color image to the point cloud purely by index, from 1 to 307200 (480 * 640), which is wrong. For the first problem, I may have been confused about the textured point cloud before: my goal is actually to map the color image onto the point cloud, similar to the C++ pointcloud demo. Is there any way to do this in Python? My new code is shown below:
pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
color = frames.get_color_frame()
img_color = np.asanyarray(color.get_data())
img_depth = np.asanyarray(depth.get_data())
pc.map_to(color)
points = pc.calculate(depth)
vtx = np.asanyarray(points.get_vertices())
tex = np.asanyarray(points.get_texture_coordinates())
npy_vtx = np.zeros((len(vtx), 3), float)
for i in range(len(vtx)):
    npy_vtx[i][0] = float(vtx[i][0])
    npy_vtx[i][1] = float(vtx[i][1])
    npy_vtx[i][2] = float(vtx[i][2])
npy_tex = np.zeros((len(tex), 2), float)  # texture coordinates are (u, v) pairs
for i in range(len(tex)):
    npy_tex[i][0] = float(tex[i][0])
    npy_tex[i][1] = float(tex[i][1])
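A vectorized alternative to these loops, assuming the structured arrays returned by pyrealsense2 expose plain float32 fields (this is the idiom used in the SDK's own Python viewer example):
npy_vtx = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)  # (N, 3) xyz in meters
npy_tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)  # (N, 2) normalized (u, v)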
For the second problem, I got a result after using your code to map a depth pixel to a color pixel, but I am confused about the 4096 value in it. I think the color pixel coordinate should be something like [200, 400], within the 480 * 640 range (I set this config at the beginning). Thanks again for the answers.
Hi, regarding your second question: I'm not sure how you got this result, but please note that you should only transform points with a depth value > 0; maybe that's the issue.
Thanks for the reply @zivsha . I think the reason is that I don't quite understand how to use the texture coordinate information (u, v), whose range is [0, 1]. I don't know how to map (u, v) back to the color image; the problem is finding the mapping function that, given a color image, returns the pixel for each (u, v).
I know how to do it in C++ with the example shown previously, but I am not sure whether pyrealsense2 has built-in functions that solve this problem. My main problem is converting (u, v) in the range [0, 1] to a color pixel value. Thanks again for the patience and replies.
The mapping of texture coordinates to color pixels is just a scaling by the image size: x = u * width, y = v * height (rounded to the nearest integer and clamped to the image bounds). We do not provide a function that takes a texture (u, v) and returns an RGB (r, g, b, a) value, mainly because rendering libraries usually take the texture parameters and handle this on their own. But you can do the mapping yourself if you wish by converting texture coordinates to pixels using the above explanation.
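A minimal sketch of that conversion (the name texcoord_to_pixel is illustrative, not SDK API; img_color is the color image array used earlier in this thread):
import numpy as np

def texcoord_to_pixel(u, v, img_color):
    # Scale normalized texture coordinates to pixel indices and clamp to bounds
    h, w = img_color.shape[:2]
    x = min(max(int(u * w + 0.5), 0), w - 1)  # column index
    y = min(max(int(v * h + 0.5), 0), h - 1)  # row index
    return img_color[y, x]  # channel order depends on the configured color format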
Is there any way to render (visualize) the point cloud obtained from the Intel RealSense API in Python? There are examples for rendering in C++ but not in Python. I hope you can give me a hint here. Thanks.
Unfortunately I don't know of any, but a quick web search for "python point cloud" turns up a few ways to do that...
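For later readers, one common option (a suggestion, not something the SDK ships) is Open3D:
import open3d as o3d

# npy_vtx: (N, 3) float array of vertices, built as earlier in this thread
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(npy_vtx)
# optionally: pcd.colors = o3d.utility.Vector3dVector(vertex_colors / 255.0)
o3d.visualization.draw_geometries([pcd])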
@zivsha Thanks for your reply before. My code is as follows:
Does this comment mean that all the texture coordinates have to be scaled to concrete pixel indices in the color image (for example, u = 0.5 on a 640-pixel-wide image would map to column 320)?
Thank you very much.
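Per the scaling explanation above, a sketch over the whole cloud, reusing the npy_tex array and img_color image from earlier in this thread:
h, w = img_color.shape[:2]
u = np.clip((npy_tex[:, 0] * w + 0.5).astype(int), 0, w - 1)  # column indices
v = np.clip((npy_tex[:, 1] * h + 0.5).astype(int), 0, h - 1)  # row indices
vertex_colors = img_color[v, u]  # one color triplet per point-cloud vertex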
Is there a way to align color to depth?
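For later readers: yes, the SDK's rs.align does this in Python. A minimal sketch aligning the color stream to the depth viewport (pass rs.stream.color instead to go the other way):
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.depth)  # target viewport: depth
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth = aligned.get_depth_frame()
    color = aligned.get_color_frame()  # now pixel-aligned with the depth image
    img_depth = np.asanyarray(depth.get_data())
    img_color = np.asanyarray(color.get_data())
finally:
    pipeline.stop()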
For anyone following this with the D435 camera, you need to feed the deprojection the depth and not just the depth scale. |
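A sketch of the corrected call, reusing depth_frame and depth_intrin from the snippet earlier in this thread:
x, y = 240, 320  # pixel in the depth image
depth_in_meters = depth_frame.get_distance(x, y)  # raw value already scaled to meters
if depth_in_meters > 0:  # skip invalid (zero-depth) pixels
    depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, [x, y], depth_in_meters)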
Here is my version of the RealSense code for anyone who needs it:
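A minimal sketch in the spirit of what this thread describes (848x480 streams, deprojection fed the measured depth); an illustration, not the poster's original snippet:
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    x, y = 424, 240  # center pixel of the 848x480 image
    dist = depth_frame.get_distance(x, y)  # depth in meters at (x, y)
    point = rs.rs2_deproject_pixel_to_point(depth_intrin, [x, y], dist)
    print(point)  # [X, Y, Z] in meters, camera coordinates
finally:
    pipeline.stop()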
@svarnypetr Thanks for the effort. Do you also have a snippet for rgb2cloud mapping, with which we could obtain the 3D position of a 2D pixel? And why should the resolution be 848x480? I'm trying to get this working with the max resolution and the resulting pixels are just way off. Is there a bug, or am I missing something?
@eyildiz-ugoe Sorry, I don't have one. But for getting the 3D coordinate you use deprojection. What do you mean by way off? Higher resolution is possible; I use 848x480 for performance purposes due to my hardware. AFAIK the maximal resolution for the D435 is 1280x720.
@svarnypetr Way off as in the mapped coordinate lands nowhere near where it should. What I am trying to do is map a 2D pixel to a 3D point and vice versa, and that does not seem to be working with RealSense. I've already created another issue to address this: #4613
@eyildiz-ugoe In my experience it shouldn't be a problem with Realsense. |
Hi, is there a way to build a single point cloud from multiple color + depth frame pairs? Any help would be really appreciated... :)
@ravan786 Sorry, I do not have experience with such an approach or with making a point cloud from multiple frames. I imagine it will be necessary to identify the points in a common reference frame, so what I would do is tie the points from multiple color + depth frame pairs to one frame of reference (e.g. the world frame) and then present them as one point cloud in that frame, as in the sketch below.
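A generic sketch of that idea, assuming each color + depth pair comes with a known 4x4 camera-to-world pose obtained elsewhere (e.g. from tracking or registration); the names merge_clouds, clouds, and poses are illustrative, not SDK API:
import numpy as np

def merge_clouds(clouds, poses):
    # clouds: list of (N_i, 3) arrays in camera coordinates
    # poses: list of 4x4 camera-to-world transforms, one per cloud
    merged = []
    for pts, T in zip(clouds, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous (N, 4)
        merged.append((homo @ T.T)[:, :3])  # transform into the world frame
    return np.vstack(merged)  # one combined point cloud in the world frame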
@svarnypetr Any idea or code for tying points from multiple frame pairs?
@ravan786 No, sorry. |
Issue Description
Hello, I have recently been using pyrealsense2 for 3D reconstruction with the SR300, and I have the following questions:
1. Is there a better way to get a textured 3D point cloud in pyrealsense2?
My method is to first get the 3D point cloud coordinates and the corresponding color image, and then map the two together into textured 3D information (so far I do this step in MATLAB). However, the two are not correctly aligned (the problem may be in my code).
Code in Python:
Code in MATLAB:
The inputs to the MATLAB step are color_img and vtx.
But the image does not line up correctly with the point cloud (could the problem be the difference between the left and right image?).
The final result I want to get is similar to the result in this demo:
https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
2. Is there a direct way to get the extrinsic parameters of the camera?
I want to get the extrinsic parameters of the camera relative to the target object in the image. However, I have no idea what variables to pass to the function to get the corresponding parameters:
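For the camera-to-camera part, the call shown later in this thread is the direct way; a minimal sketch follows (extrinsics relative to a target object, by contrast, require pose estimation outside the SDK):
# Extrinsics between two stream profiles, e.g. depth to color
extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
print(extrin.rotation)     # 3x3 rotation as a column-major list of 9 floats
print(extrin.translation)  # translation vector in meters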
Sorry for the long questions, and I appreciate the help.