
Inaccurate pixel-pointcloud mapping #4613

Closed
eyildiz-ugoe opened this issue Aug 9, 2019 · 5 comments

Comments

eyildiz-ugoe commented Aug 9, 2019


Required Info

Camera Model: D435
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Ubuntu 18
Kernel Version (Linux Only): 4.15.0-55
Platform: PC
SDK Version: v2.2.3
Language: Python
Segment: Robot

Issue Description

I'm trying to map RGB values to points and vice versa using a D435. I found the following code that is supposedly able to do this; however, the result I'm getting is completely off.

import pyrealsense2 as rs
import numpy as np

config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline = rs.pipeline()
pipe_profile = pipeline.start(config)
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Intrinsics & Extrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)
color_to_depth_extrin = color_frame.profile.get_extrinsics_to(depth_frame.profile)
print("\n Depth intrinsics: " + str(depth_intrin))
print("\n Color intrinsics: " + str(color_intrin))
print("\n Depth to color extrinsics: " + str(depth_to_color_extrin))

# Depth scale - units of the values inside a depth frame, i.e how to convert the value to units of 1 meter
depth_sensor = pipe_profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("\n\t depth_scale: " + str(depth_scale))
depth_image = np.asanyarray(depth_frame.get_data())
depth_pixel = [200, 200] # Random pixel
depth_value = depth_image[200][200]*depth_scale
print("\n\t depth_pixel@" + str(depth_pixel) + " value: " + str(depth_value) + " meter")

# From pixel to 3D point
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_value)
print("\n\t 3D depth_point: " + str(depth_point))

# From 3D depth point to 3D color point
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)
print("\n\t 3D color_point: " + str(color_point))

# From color point to 2D color pixel
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
print("\n\t color_pixel: " + str(color_pixel))

which outputs:

Depth intrinsics: width: 1280, height: 720, ppx: 643.548, ppy: 367.861, fx: 652.776, fy: 652.776, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Color intrinsics: width: 1280, height: 720, ppx: 640.522, ppy: 360.351, fx: 926.419, fy: 927.058, model: Inverse Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Depth to color extrinsics: rotation: [0.999905, 0.0111846, -0.0081191, -0.0111741, 0.999937, 0.00133797, 0.00813355, -0.00124712, 0.999966]
translation: [0.0147091, -5.76127e-05, 0.000246798]

depth_scale: 0.0010000000475
depth_pixel@[200, 200] value: 0.0 meter
3D depth_point: [-0.0, -0.0, 0.0]
3D color_point: [0.01470907311886549, -5.761265492765233e-05, 0.00024679797934368253]
color_pixel: [55854.78125, 143.9385223388672]

So the resulting color_pixel: [55854.78125, 143.9385223388672] is obviously wrong, since the x coordinate cannot exceed the 1280-pixel image width.

What's wrong?

eyildiz-ugoe (Author) commented Aug 9, 2019

Funnily enough, the code seems to produce a more reasonable output at a much lower resolution:

config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 90)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)

Output:

Depth intrinsics: width: 848, height: 480, ppx: 426.351, ppy: 245.208, fx: 432.464, fy: 432.464, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Color intrinsics: width: 848, height: 480, ppx: 424.348, ppy: 240.234, fx: 617.613, fy: 618.038, model: Inverse Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Depth to color extrinsics: rotation: [0.999905, 0.0111846, -0.0081191, -0.0111741, 0.999937, 0.00133797, 0.00813355, -0.00124712, 0.999966]
translation: [0.0147091, -5.76127e-05, 0.000246798]
depth_scale: 0.0010000000475
depth_pixel@[200, 200] value: 0.5890000279759988 meter
3D depth_point: [-0.3082813024520874, -0.06157129630446434, 0.5890000462532043]
3D color_point: [-0.2880640923976898, -0.06580755859613419, 0.5916475057601929]
color_pixel: [123.6419677734375, 171.4912872314453]

But still, why doesn't it work at the higher resolution? Is there a firmware bug?

ev-mp (Collaborator) commented Aug 11, 2019

@eyildiz-ugoe, the following lines in the test should be revised:

depth_pixel = [200, 200] # Random pixel
depth_value = depth_image[200][200]*depth_scale
...
depth_scale: 0.0010000000475
depth_pixel@[200, 200] value: 0.0 meter
3D depth_point: [-0.0, -0.0, 0.0]

A zero depth value obtained from a randomly chosen pixel designates invalid depth (the camera's minimum range is about 0.1 m).
Hence, re-projecting an invalid value into the RGB sensor's point of view results in a texture coordinate that is way outside the RGB sensor's FOV, as expected.

Add a depth validity check to make sure you operate on proper depth data.
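
Indeed, with zero depth the deprojected point is the camera origin, so the transformed point is just the extrinsic translation, and projecting it through the color intrinsics gives x = fx * X / Z + ppx = 926.419 * (0.0147091 / 0.000246798) + 640.522, on the order of 5.6e4, matching the bogus x above. A minimal sketch of such a validity check, reusing the depth_frame, intrinsics, and extrinsics variables from the snippet above, might look like this:

# Sketch of a depth validity check before deprojection. Reuses depth_frame,
# depth_intrin, depth_to_color_extrin and color_intrin from the snippet above.
x, y = 200, 200
depth_value = depth_frame.get_distance(x, y)  # depth in meters, 0.0 = no data

if depth_value <= 0:
    # 0 is the nominal "no depth" marker; skip instead of projecting garbage.
    print("No valid depth at pixel ({}, {})".format(x, y))
else:
    depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, [x, y], depth_value)
    color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)
    color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
    print("color_pixel:", color_pixel)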

eyildiz-ugoe (Author) commented

The camera is about 30 cm above the plane, and it works, more or less, when I use a lower resolution. So I can't really attribute this to a distance problem.

ev-mp (Collaborator) commented Aug 15, 2019

@eyildiz-ugoe, the distance is not a factor unless the object is below the minimum range.
There are a variety of reasons why depth could not be calculated for a specific pixel, including surface reflections, ambient conditions, occlusions, etc. Inspecting the raw depth image may help to assess the situation. You may also use the SDK-provided presets, which offer more granular tailoring for specific scenarios.

Note that the coordinate [200, 200] in VGA and in HD resolutions represents a completely different spatial location, so the actual measurements there are not related.

You need to make sure that, when you try to re-project a depth pixel, there is actual data produced by the camera, as the "no data" (0) value is a nominal and not an exceptional case for depth cameras.
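
To illustrate the resolution point: a pixel coordinate only has meaning relative to a stream's intrinsics, so [200, 200] lands on different parts of the scene at 848x480 and 1280x720. A hedged sketch using the two sets of depth intrinsics printed earlier in this thread (remap_pixel is a hypothetical helper for illustration, not an SDK call):

# Hypothetical helper: carry a pixel between two resolutions by normalizing
# with the source intrinsics and re-applying the target intrinsics.
# The numbers are the depth intrinsics printed earlier in this thread.
def remap_pixel(px, py, src, dst):
    nx = (px - src["ppx"]) / src["fx"]  # normalized image coordinates
    ny = (py - src["ppy"]) / src["fy"]
    return (nx * dst["fx"] + dst["ppx"], ny * dst["fy"] + dst["ppy"])

hd  = {"ppx": 643.548, "ppy": 367.861, "fx": 652.776, "fy": 652.776}  # 1280x720
vga = {"ppx": 426.351, "ppy": 245.208, "fx": 432.464, "fy": 432.464}  # 848x480

# [200, 200] in the 848x480 stream corresponds to roughly [302, 300] in the
# 1280x720 stream, so measurements at the two coordinates are unrelated.
print(remap_pixel(200, 200, vga, hd))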

eyildiz-ugoe (Author) commented

I've solved this problem by recalibrating the camera; now everything works.
