
How to calibrate Depth map to Left IR image of D435 #11015

Closed

Elena-ssq opened this issue Oct 22, 2022 · 11 comments

@Elena-ssq

I am trying to get the stereo IR images and the corresponding depth map from a D435 with the Python RealSense toolkit.
By aligning the left IR image and the depth image, I got a point cloud.
However, viewing the cloud, it seems that either 1) the depth map is not calibrated to the left IR image, or 2) the depth map is poor.

Info:

  • I need clear stereo IR images, so the power of the infrared emitter is turned down to 0.
  • The point cloud is generated merely by aligning the left IR image and the depth image pixel by pixel.
  • The depth is masked to the range [1, 5000] millimeters and converted to centimeters in the point cloud.

So the questions are:

  1. Why is the point cloud poor?
  2. How do I get better results (a calibrated left IR image and depth image)?
  3. Could something else have caused this, such as a synchronization problem?

Thank you in advance for any possible help.

Below are the images:

Left IR image:
[image: ir_left_00000000]

Depth image:
[image: depth]

@MartyG-RealSense (Collaborator) commented Oct 22, 2022

Hi @Elena-ssq, it is not necessary to perform alignment between the depth map and the left infrared image, as they are already perfectly aligned by default. This is described in point 2 of the section of Intel's camera tuning guide linked below.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#use-the-left-color-camera

If you do need a combined depth and infrared image though then Python code for doing so can be found at #5093 (comment)
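
For reference, here is a minimal sketch of generating a point cloud textured with the left IR image, relying on that default depth-to-left-IR alignment (the stream settings and the rs.pointcloud usage are assumptions about the setup, not code from this issue):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)  # index 1 = left imager
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
ir_frame = frames.get_infrared_frame(1)

# No align step is needed: depth is computed from the left imager's viewpoint,
# so pixel (u, v) in the depth map corresponds to pixel (u, v) in the left IR image.
pc = rs.pointcloud()
pc.map_to(ir_frame)                    # texture the cloud with the left IR image
points = pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())  # (x, y, z) triples in meters

pipeline.stop()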

There is a relationship between depth image quality and the emitter. The emitter projects a semi-random pattern of dots onto surfaces in the scene and uses those dots as a 'texture source' to analyze for depth information. Disabling the emitter can therefore result in a noticeable reduction in detail on the depth image.

The visibility of the dot pattern is directly related to the Laser Power value, becoming less visible as Laser Power is reduced and more visible as it is increased. So instead of disabling the emitter, you could try reducing Laser Power to around '78' (half of its default of '156') to see whether this provides a relatively clear infrared image whilst also reducing loss of depth image detail.
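
As a hedged sketch (the option names are the standard pyrealsense2 ones; the rest is an assumed minimal setup), halving Laser Power from Python might look like this:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Keep the emitter enabled but halve the projector brightness.
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 1)
if depth_sensor.supports(rs.option.laser_power):
    depth_sensor.set_option(rs.option.laser_power, 78)  # default is 156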

Alternatively, RealSense 400 Series cameras can use the ambient light in a scene to aid depth analysis instead of the dot pattern. So if you need to disable the emitter, increasing the strength of the lighting in the room may result in an improved depth image.

The empty black section at the bottom of the depth image may be because that section of the desk is closer to the camera than the D435 camera's minimum default depth sensing distance of around 0.1 meters / 10 cm. Setting the Disparity Shift option to a value of '50' to reduce the camera's minimum distance by a small amount may allow more of that foreground area to be displayed. There is an example of Python code for setting Disparity Shift at #2015 (comment)
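
For completeness, Disparity Shift lives in the advanced-mode depth table rather than among the regular sensor options; a sketch, assuming the first connected device supports and has enabled advanced mode:

import pyrealsense2 as rs

dev = rs.context().query_devices()[0]      # first connected RealSense device
advnc_mode = rs.rs400_advanced_mode(dev)

depth_table = advnc_mode.get_depth_table()
depth_table.disparityShift = 50            # raising this lowers the minimum sensing distance
advnc_mode.set_depth_table(depth_table)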

In regard to the hand, hands are usually difficult for a camera to read: the thick black outline around the hand becomes larger as the hand moves closer to the camera and improves as it moves further away. Aside from the black outline, though, the depth information of the hand in your image looks good.

It is also worth bearing in mind that an image from a script you have created yourself will not have any post-processing filters applied by default, whereas a RealSense Viewer image does apply a range of image-improvement settings by default. So to achieve an image closer in appearance to the Viewer's, those settings can be programmed into your Python script. References for doing so can be found at #10572 (comment)
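
As a starting point, here is a sketch of a typical librealsense filter chain (the particular filters and their ordering are a common pattern, not necessarily the Viewer's exact defaults):

import pyrealsense2 as rs

decimation = rs.decimation_filter()
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()

def filter_depth(depth_frame):
    # Spatial and temporal filtering are usually applied in disparity space.
    frame = decimation.process(depth_frame)
    frame = depth_to_disparity.process(frame)
    frame = spatial.process(frame)
    frame = temporal.process(frame)
    frame = disparity_to_depth.process(frame)
    frame = hole_filling.process(frame)
    return frame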

@Elena-ssq (Author)

Hi @MartyG-RealSense.

Thank you very much for the detailed explanation. It indeed helped me get started.

So far, I have tried enabling the emitter again by setting the laser power to 150, and here is what I got as the depth image.
It looks much better than the previous one.

[image: depth2]

However, I prefer to use the clean infrared images, since the dot pattern hurts the performance of my subsequent algorithms.
As a result, I would like to try the post-processing methods provided and see whether I can get more promising depth results.

Moreover, I encountered another problem during this process. It would be great if you could offer some help.

Currently, I can get a disparity map from the stereo infrared images (resolution: 640 x 480). How can I convert it into a depth image?

I understand the equation $z = bf/d$ and I found the baseline on this page.
To confirm, are baseline = 50 mm and focal length = 1.93 mm the values that should be used in my case? If not, what are they?

Thank you again for your kind help.

@MartyG-RealSense (Collaborator) commented Oct 22, 2022

Page 64 of the data sheet document for the 400 Series cameras confirms that the baseline for the D435 model is 50 mm, whilst page 38 confirms that the focal length of the infrared sensors is 1.93 mm.

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet
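
One caveat: to apply $z = bf/d$ to a disparity map measured in pixels, the focal length must also be expressed in pixels, as reported by the stream intrinsics for the chosen resolution, rather than the 1.93 mm physical value. A minimal sketch of reading it, assuming a 640 x 480 depth stream:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# Focal length in pixels for the active 640x480 depth stream.
intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
fx_pixels = intrin.fx

pipeline.stop()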

There is a post-processing filter called Disparity2Depth Transform for transforming disparity to depth. There is also a filter called Depth2Disparity Transform for depth to disparity transformation.

At the bottom of Intel's Python tutorial for post-processing filters, in the Putting Everything Together section, they demonstrate defining depth to disparity and disparity to depth. With these two filters, one must be set to True and the other to False; they must not both be True or both be False. You must define the True / False status of both depth to disparity and disparity to depth even if you are only using one of them.

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb

In the tutorial, they state the code like this:

depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)

for x in range(10):
    frame = frames[x]
    frame = depth_to_disparity.process(frame)
    frame = disparity_to_depth.process(frame)

I believe that to use disparity to depth instead of depth to disparity, the True and False should be reversed, like this:

depth_to_disparity = rs.disparity_transform(False)
disparity_to_depth = rs.disparity_transform(True)

for x in range(10):
    frame = frames[x]
    frame = depth_to_disparity.process(frame)
    frame = disparity_to_depth.process(frame)
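
One caveat worth noting: these disparity_transform filters operate on librealsense frame objects, so a disparity map computed externally (for example by OpenCV stereo matching on the IR pair) cannot, as far as I know, be fed to them directly; for that case the per-pixel formula $z = bf/d$ applies instead. A hedged numpy sketch (the disparity array is made up for illustration; fx is the intrinsics value quoted later in this issue):

import numpy as np

# Hypothetical disparity map in pixels, e.g. from OpenCV stereo matching.
disp_px = np.full((480, 640), 20.0, dtype=np.float32)

b_mm, fx_px = 50.0, 382.995            # D435 baseline; fx from the depth intrinsics
valid = disp_px > 0                    # avoid dividing by zero disparities
depth_mm = np.zeros_like(disp_px)
depth_mm[valid] = b_mm * fx_px / disp_px[valid]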

@Elena-ssq (Author)

Thank you for the quick response.
I will try as indicated and report any valuable findings later.

@MartyG-RealSense (Collaborator)

Thanks very much @Elena-ssq for the update!

@Elena-ssq (Author) commented Oct 24, 2022

Hi @MartyG-RealSense.
I have tried the above-mentioned methods, and it seems the depth_to_disparity handle does not produce a promising result.
Below is an example.

The depth image looks like this (which seems correct):
[image: depth]

While the disparity image looks like this (which is weird):
[image: disp]

In case it helps, here is the code I used:

import numpy as np
import cv2
import pyrealsense2 as rs

depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline_profile = pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
disp_frame = depth_to_disparity.process(depth_frame)

# Divide by 65536 to go from CV_32FC1 to uint16 (see the note below).
disp_image = (np.asanyarray(disp_frame.get_data()) / 65536).astype(np.uint16)
depth_image = np.asanyarray(depth_frame.get_data())

cv2.imwrite(r"path\to\disp.png", disp_image)
cv2.imwrite(r"path\to\depth.png", depth_image)

In the code, I divided the disparity by 65536 because I found in this doc and this answer that the disparity is by default in CV_32FC1 format, while what I need is uint16 format.
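
If the transform's output really is 32-bit float, one way to preserve it is to read the buffer as float32 and only rescale for display, rather than dividing by 65536 and truncating; a hedged sketch (disp_frame as in the code above):

import numpy as np

# Assumption: disp_frame from rs.disparity_transform(True) carries 32-bit
# float disparities (CV_32FC1), so read it as float32 first.
disp = np.asanyarray(disp_frame.get_data(), dtype=np.float32)

# Stretch to the 16-bit range for visualization only; keep the float array
# for any numeric comparison against stereo-matching results.
disp_vis = (disp / max(disp.max(), 1e-6) * 65535).astype(np.uint16)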

Also, I masked out depth values over 5000 to view the depth image.

Apart from looking wrong, when I compare the disparity map with a result computed by stereo matching, the values inside also seem quite different.

Maybe I did something wrong? Thank you so much for any help.

Elena-ssq reopened this Oct 24, 2022
@Elena-ssq (Author) commented Oct 24, 2022

Also, I followed lines 107-114 of this file to convert the disparity map computed from stereo matching to depth; it also differs a lot from the depth map captured by the device.

Here I used fx = 382.995, which was read via get_intrinsics following this answer, and baseline = 50 as confirmed earlier in this issue.

So the transform from stereo-matching-computed disparity to depth that I used is:

depth = baseline * fx / disp

The disp here is converted to uint8 from the uint16 input.
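
As a quick sanity check on the units (the disparity value below is made up for illustration): with the baseline in millimeters and fx in pixels, the result comes out in millimeters.

b_mm, fx_px = 50.0, 382.995   # baseline and focal length from above
d_px = 20.0                   # hypothetical disparity, in pixels
z_mm = b_mm * fx_px / d_px    # ≈ 957.5 mm, i.e. roughly a meter from the camera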

Disparity computed by stereo matching:
[image: disp2]

And the corresponding depth:
[image: depth2]

@MartyG-RealSense (Collaborator)

The disparity image being a single color shade is what I would expect. I ran a test in the RealSense Viewer myself.

[images: RealSense Viewer depth and disparity test screenshots]

As you are writing an image to PNG with cv2, I wonder whether the color scheme of your image is reversed, because the OpenCV color space is BGR by default instead of RGB, as described at #9304

@Elena-ssq (Author)

Hmm, I don't think color is the problem here, since both depth and disparity should have only one channel, if I understand this correctly.

I tried again, this time with simpler code, and the disparity computed from depth looks reasonable now.
The difference between the disparity maps from stereo matching and from the device is:
[image: diff]

84.3% of disparities have an error of less than 3 pixels (uint8), and I can accept that result.
This observation also holds for the depth maps, where the average error is 262.3 mm.

So the only remaining problem is the wrong disparity map generated by the depth_to_disparity function. Maybe there is some mistake in my usage of it?

@MartyG-RealSense (Collaborator) commented Oct 24, 2022

#8572 (comment) is an example of a Python script with which a RealSense user generated a disparity map.

[images: example disparity map output]

@Elena-ssq (Author)

Cool. Thank you! @MartyG-RealSense
