How to calibrate Depth map to Left IR image of D435 #11015
Hi @Elena-ssq. It is not necessary to perform alignment between the depth map and left infrared, as they are already perfectly aligned by default. This is described in point 2 of the relevant section of Intel's camera tuning guide. If you do need a combined depth and infrared image though, then Python code for doing so can be found at #5093 (comment)

There is a relationship between depth image quality and the emitter. The emitter projects a semi-random pattern of dots onto surfaces in the scene and uses those dots as a 'texture source' to analyze for depth information. Disabling the emitter can therefore result in a noticeable reduction in the detail of the depth image. The visibility of the dot pattern is directly related to the Laser Power value, becoming less visible as Laser Power is reduced and more visible as it is increased. So instead of disabling the emitter, you could try reducing Laser Power to around '78' (half of its default of '156') to see whether this provides a relatively clear infrared image whilst also reducing the loss of depth image detail.

RealSense 400 Series cameras can alternatively use the ambient light in a scene to aid depth analysis instead of using the dot pattern. So if you need to disable the emitter, then increasing the strength of the lighting in the room may result in an improved depth image.

The empty black section at the bottom of the depth image may be because that section of the desk is closer to the camera than the D435 camera's minimum default depth sensing distance of around 0.1 meters / 10 cm. Setting the Disparity Shift option to a value of '50' to reduce the camera's minimum distance by a small amount may allow more of that foreground area to be displayed. There is an example of Python code for setting Disparity Shift at #2015 (comment)

In regard to the hand, hands are usually difficult for a camera to read: the thick black outline around the hand becomes larger as the hand is moved closer to the camera and improves as the hand is moved further away. Aside from the black outline, though, the depth information of the hand in your image looks good.

It is also worth bearing in mind that an image in a script that you have created yourself will not have any post-processing filters applied by default, whereas a RealSense Viewer image does apply a range of image-improvement settings by default. So to achieve an image that is closer in appearance to the Viewer, those settings can be programmed into your Python script. References for doing so can be found at #10572 (comment)
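A minimal pyrealsense2 sketch of the two adjustments suggested above (halving Laser Power and applying a Disparity Shift of 50). It assumes the camera is already in advanced mode and omits error handling:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
device = profile.get_device()

# Keep the emitter enabled but halve Laser Power (default 156 -> 78)
depth_sensor = device.first_depth_sensor()
depth_sensor.set_option(rs.option.emitter_enabled, 1)
depth_sensor.set_option(rs.option.laser_power, 78)

# Disparity Shift lives in the advanced-mode depth table; advanced mode
# must already be enabled on the device for this to work
advanced = rs.rs400_advanced_mode(device)
depth_table = advanced.get_depth_table()
depth_table.disparityShift = 50
advanced.set_depth_table(depth_table)
```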
Hi @MartyG-RealSense. Thank you very much for the detailed explanation. It indeed helps me get started. So far, I have tried enabling the emitter again with the laser power set to 150, and here is what I got as the depth image. However, I would prefer to use the clean infrared images, since the dot pattern hurts the performance of my downstream algorithms. Moreover, I encountered another problem during this process. It would be great if you could offer some help. Currently, I can get a disparity map from the stereo infrared images (resolution: 640 x 480). How can I convert it into a depth image? I understand the equation depth = baseline × focal length / disparity, but I am not sure which baseline and focal length values I should use. Thank you again for your kind help.
Page 64 of the data sheet document for the 400 Series cameras confirms that the baseline for the D435 model is 50 mm, whilst page 38 confirms that the focal length of the infrared sensors is 1.93 mm. https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet

There is a post-processing filter called Disparity2Depth Transform for transforming disparity to depth. There is also a filter called Depth2Disparity Transform for depth-to-disparity transformation. At the bottom of Intel's Python tutorial for post-processing filters, in the Putting Everything Together section, they demonstrate defining depth to disparity and disparity to depth. With these two filters, one must be set to true and the other to false; they must not both be true or both false. You must define the true/false status of both depth to disparity and disparity to depth even if you are only using one of them. https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb

In the tutorial, they state the code like this:
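```python
# From the "Putting Everything Together" section of the linked notebook
depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)
```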
I believe that to use disparity to depth instead of depth to disparity, the True and False should be reversed, like this:
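```python
# The same two lines with the boolean arguments swapped, as described above
depth_to_disparity = rs.disparity_transform(False)
disparity_to_depth = rs.disparity_transform(True)
```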
Thank you for the quick response.
Thanks very much @Elena-ssq for the update!
Hi @MartyG-RealSense. The depth image (which seems correct) and the disparity image (which looks weird) are attached. In case it helps, I used the following code:
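(A reconstructed sketch of the steps described here, assuming pyrealsense2 and OpenCV; the stream configuration and file names are illustrative rather than from the original script:)

```python
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Convert the depth frame to disparity with the post-processing filter
depth_to_disparity = rs.disparity_transform(True)
disparity_frame = depth_to_disparity.process(depth_frame)

depth = np.asanyarray(depth_frame.get_data()).copy()  # uint16 depth units
disp = np.asanyarray(disparity_frame.get_data())      # float32 disparity

disp = disp / 65536.0        # the scaling step described below
depth[depth > 5000] = 0      # mask out depth over 5000 for viewing

cv2.imwrite("depth.png", depth)
cv2.imwrite("disparity.png", disp.astype(np.uint16))
```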
In the code, I divided the disp by 65536 because of the default disparity format I found described in this doc and this answer. Also, I masked out depth over 5000 to view the depth image. Apart from displaying wrong, I compared the disparity map with a result computed by stereo matching, and the data inside also seems quite different. Maybe I did something wrong? Thank you so much for any help.
Also, I followed lines No.107-114 of this file to convert the disparity map computed from stereo matching to depth, and it also differs a lot from the depth map captured by the device. Here I used the baseline and focal length values mentioned above. So the transform from stereo-matching-computed disparity to depth that I used is:
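(Roughly, with the D435 baseline quoted above and an illustrative focal length in pixels; the exact constants in the original script were not shown:)

```python
import numpy as np

baseline_mm = 50.0  # D435 stereo baseline (datasheet, quoted above)
fx_px = 382.0       # focal length in pixels at 640x480 -- illustrative value;
                    # in practice read it from the stream intrinsics (fx)

def disparity_to_depth_mm(disp_px):
    """depth[mm] = fx[px] * baseline[mm] / disparity[px]"""
    disp = disp_px.astype(np.float64)
    return np.where(disp > 0, fx_px * baseline_mm / disp, 0.0)
```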
The disp here was converted to uint8 format from the uint16 input.
The disparity image being a single color shade is what I would expect; I ran a test in the RealSense Viewer myself. As you are writing the image to PNG with cv2, I wonder whether the color scheme of your image is reversed, because the OpenCV color space is BGR by default instead of RGB, as described at #9304
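For reference, the channel swap would look like this (the random array is just a stand-in for the saved image):

```python
import cv2
import numpy as np

bgr = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)                  # swap channel order
cv2.imwrite("disparity_rgb.png", rgb)
```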
#8572 (comment) is an example of a Python script with which a RealSense user generated a disparity map.
Cool. Thank you! @MartyG-RealSense
I am trying to get the stereo IR images and the corresponding depth map from a D435 with the Python RealSense toolkit.
By aligning the left IR image and the depth image, I got a point cloud.
However, viewing the cloud, it seems that either 1) the depth map is not calibrated to the left IR image, or 2) the depth map is poor.
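For reference, a minimal pyrealsense2 sketch of this kind of setup: both IR streams plus depth, with the point cloud textured from the left IR frame. The resolutions and frame rate are assumptions:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)  # left
config.enable_stream(rs.stream.infrared, 2, 640, 480, rs.format.y8, 30)  # right
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
left_ir = frames.get_infrared_frame(1)
right_ir = frames.get_infrared_frame(2)

# Depth is already registered to the left IR sensor, so mapping the point
# cloud texture to the left IR frame needs no extra alignment step
pc = rs.pointcloud()
pc.map_to(left_ir)
points = pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())  # structured xyz array
```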
Info:
So the questions are:
Thank you in advance for any possible help.
Below are the images:
Left IR image:
Depth image: