
D455 camera calibration and captures feature points #12689

Closed
junmoxiao11 opened this issue Feb 21, 2024 · 107 comments

@junmoxiao11


Required Info
Camera Model: D455
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Linux (Ubuntu 20.04)
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: PC
SDK Version: { legacy / 2.. }
Language: python
Segment: Robot

Issue Description

When I was shooting with the D455 camera, I noticed a lot of black depth noise on the image. Is there any way to calibrate the camera for noise reduction? I found the code to capture the feature points, but there is a lot of depth noise on the image, which severely affects the use of the algorithm. So I need to calibrate the camera and see if the results can be a little more accurate. Is there any way I can learn how to use the D455 camera more?

@junmoxiao11
Author

a103dbb3f103ecb388efad8b398fff3
There is a lot of black depth noise on the image.

@MartyG-RealSense
Collaborator

Hi @junmoxiao11 Does your depth image improve if you reset the camera to its factory-new default calibration in the RealSense Viewer using the instructions at #10182 (comment) please?

@junmoxiao11
Author

image
I did everything you said, but there are still some black noise dots on the image.
These black dots will affect my ability to capture the point at which the depth value of the object is changing, and that would make a big difference in my observations. Do you have any other methods to reduce noise? Or does the D455 camera inevitably produce depth noise in the image when shooting?

@MartyG-RealSense
Collaborator

If you expand open the Post Processing section of the Viewer's side-panel and enable the Hole Filling filter (which is turned off by default) then it should fill in the holes.

@junmoxiao11
Author

I am not sure I know what you mean. What do you mean by "fill in the holes"? Does that mean filling in the black depth noise?
Do you have any other way to calibrate? I want my images to be shot without any black depth noise.

@MartyG-RealSense
Collaborator

  1. Expand open the Stereo Module section of the Viewer side-panel by clicking on the arrow beside it.

  2. Look down the list of options until you find one called Post Processing. Click on the arrow beside it to show all the types of post-processing filter that are available.

  3. Find the filter called Hole Filling and click on the red icon beside it (which means Off) to turn it blue (On). The small black holes should then be automatically filled in.

image


You could try resetting the camera in the Viewer to its factory-new default calibration to see whether your depth image improves. Instructions for doing so can be found at #10182 (comment)
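Conceptually, what the Hole Filling filter does is replace zero-depth ("no data") pixels using depth values from neighbouring valid pixels. A minimal numpy illustration of the idea (this is not the SDK's actual algorithm, which offers several native fill modes; it is only meant to show the principle):

```python
import numpy as np

def fill_holes_from_left(depth):
    """Replace zero ('no data') pixels with the nearest valid value to
    their left. Illustrative only -- the SDK's rs.hole_filling_filter()
    implements similar fill strategies natively."""
    out = depth.copy()
    for row in out:
        last_valid = 0
        for i, v in enumerate(row):
            if v == 0 and last_valid != 0:
                row[i] = last_valid  # fill the hole with a neighbour's depth
            elif v != 0:
                last_valid = v
    return out

d = np.array([[500, 0, 0, 520]], dtype=np.uint16)
print(fill_holes_from_left(d))  # [[500 500 500 520]]
```

The SDK's filter works on the depth frame itself and is much faster, but the effect on the image is the same kind of neighbour-based fill.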

@junmoxiao11
Author

After I tried it the way you said, all the black noise on the image was gone! Thank you! But when I turned the realsense-viewer back on, the settings were gone again and I had to readjust them. Is there a way to save my settings for the camera, so that every time I open the realsense-viewer I don't have to keep adjusting it?
And I have one more question. Is the camera's accuracy affected by light sources and thermal radiation? I found that the depth value of the same point would always vary within an interval rather than being a fixed number.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 24, 2024

Some settings, including post-processing filters, are not preserved when the Viewer is closed and reset to their default status when the Viewer is re-opened. There is not a way to permanently set these options in the Viewer unfortunately.

RealSense 400 Series cameras can perform excellently in sunlight, except when directly facing the sun. When the camera faces the sun its infrared sensors can become saturated with light, negatively affecting the depth and infrared images. If auto-exposure is enabled then the camera should auto-correct when the camera is no longer directly facing the sun.

Using a RealSense camera equipped with a light-blocking filter such as the D455f can result in an improved depth image.

https://www.intelrealsense.com/depth-camera-d455f/

The filter, the CLAREX NIR-75N, can also be purchased separately and attached externally on the outside of a RealSense camera that is not equipped with the filter such as the D455.

@junmoxiao11
Author

So in your third step: "Find the filter called Hole Filling and click on the red icon beside it (which means Off) to turn it blue (On). The small black holes should then be automatically filled in." The setting in this step cannot be saved, right?

@MartyG-RealSense
Collaborator

No, the setting cannot be saved. An alternative method that you could try for reducing holes is to set the Laser Power option under 'Stereo Module > Controls' to its maximum value of '360'. The Laser Power value remains at its previous setting when the Viewer is opened and closed, so once set to 360 then it should still be 360 when the Viewer is next launched.
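If you later move from the Viewer to pyrealsense2, the same option can be set in code each time your script starts, which sidesteps the saving problem entirely. A sketch, assuming a connected D400-series camera:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Set the projector's laser power to its maximum (range 0-360 on D455)
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.laser_power, 360)
```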

@junmoxiao11
Author

image
I set the Laser Power option to 360 and there are still a lot of black spots on the image.
Can I write a piece of Python code so that the Hole Filling filter is automatically turned on when the realsense-viewer starts?
And I found that when I took a picture with the realsense-viewer, the depth value of the same point kept changing, within about a centimeter. Is this normal?

@junmoxiao11
Author

And I found that if I pointed the camera at a light source, like an electric lamp, the part of the image containing the lamp would appear black. Is the camera unable to read the light source when shooting?

@MartyG-RealSense
Collaborator

If the light source that the camera is pointed at is very strong then the camera may be unable to read depth information from the area where the light is concentrated, causing that area to appear as black (no depth) on the depth image.

RealSense camera models equipped with a light-blocking filter, such as D455f, will be better able to handle light. The filter can also be purchased separately as the CLAREX NIR-75N product and attached over the camera lenses on the outside of a non-filtered camera model such as D455.

@junmoxiao11
Author

Thank you for your answer. Now I want to shoot a video with the D455 camera and save it, then use code to capture the objects in this video that have changed position. Does this idea of mine work?

@MartyG-RealSense
Collaborator

If you want to analyze the data then you would likely have to record a bag file, which is like a video recording of camera data. When a script reads a bag file then it can use the data stored in the bag as though it is accessing a live camera.
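For reference, recording a bag file from a script only needs one extra line on the config. A minimal sketch, assuming a connected camera ("recording.bag" is a placeholder filename):

```python
import pyrealsense2 as rs

config = rs.config()
config.enable_record_to_file("recording.bag")  # placeholder filename

pipeline = rs.pipeline()
pipeline.start(config)
try:
    # Capture roughly 10 seconds of frames at 30 FPS into the bag
    for _ in range(300):
        pipeline.wait_for_frames()
finally:
    pipeline.stop()
```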

@junmoxiao11
Author

As you said, I will try to use code to capture the displacement of the moving object in the video.
I also saw a video on YouTube: https://youtu.be/b-1jF9m2NSQ
Do you know how this is done in the video? It might help me with my research.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 29, 2024

From the video description and the date of the video, it sounds as though it is using the Unreal Engine 5 VR Template and the RealSense Unreal Engine 5 plugin.

VR template
https://docs.unrealengine.com/5.0/en-US/vr-template-in-unreal-engine/

RealSense UE5 plugin
#12262

@junmoxiao11
Author

image
Is this picture taken by my D455 camera normal?
And I have one more question. Why, after adjusting the realsense-viewer's controls and post-processing, is there still a lot of black noise in the video I record? Is this because post-processing can't be saved to the video either?

@MartyG-RealSense
Collaborator

Yes, that is a normal and good quality depth image.

The black edge around your body is normal when scanning the human body with RealSense cameras.

In regard to the black area that is apparently behind your body which looks like a chair, if the chair has black colored sections then these will not be rendered on the depth image. This is because it is a general physics principle (not specific to RealSense) that dark grey or black absorbs light and so makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light that is absorbed and so the less depth detail that the camera can obtain.

image

You could try filling in the black areas of the depth image by applying a post-processing filter with hole-filling properties

You are correct, post-processing is not saved to a bag file. Instead, the bag file and its raw camera data should be loaded in and then post-processing filters applied to the bag file's data in real-time.

@junmoxiao11
Author

So how do I post-process the raw camera data in this bag file, to remove the effect of the black noise?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 1, 2024

A script that uses bag file data is almost the same as a script that uses a live camera except that it contains a rs.config.enable_device_from_file instruction to tell the script to use the bag as its data source. This principle is demonstrated in the SDK's Python example read_bag_example.py.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py#L42C5-L42C38

So you would use a Python post-processing script and add the enable_device_from_file line to it to post-process bag data.
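Putting those two pieces together, a minimal sketch of post-processing recorded bag data ("recording.bag" is a placeholder path; any of the SDK's filters can be applied in the same way):

```python
import numpy as np
import pyrealsense2 as rs

config = rs.config()
# Use the recorded bag instead of a live camera
rs.config.enable_device_from_file(config, "recording.bag")

pipeline = rs.pipeline()
pipeline.start(config)
hole_filling = rs.hole_filling_filter()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        if not depth_frame:
            continue
        # Apply post-processing to the recorded frame in real time
        filled = hole_filling.process(depth_frame)
        depth_image = np.asanyarray(filled.get_data())
finally:
    pipeline.stop()
```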

@junmoxiao11
Author

# First import the library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2
# Import argparse for command-line options
import argparse
# Import os.path for file path manipulation
import os.path

# Create object for parsing command-line options
parser = argparse.ArgumentParser(description="Read recorded bag file and display depth stream in jet colormap. Remember to change the stream fps and format to match the recorded.")
# Add argument which takes path to a bag file as an input
parser.add_argument("-i", "--input", type=str, default="20240301_101433.bag", help="Path to the bag file, default is '20240301_101433.bag'")
# Parse the command line arguments to an object
args = parser.parse_args()

try:
    # Create pipeline
    pipeline = rs.pipeline()

    # Create a config object
    config = rs.config()

    # Tell config that we will use a recorded device from file to be used by the pipeline through playback.
    rs.config.enable_device_from_file(config, args.input)

    # Configure the pipeline to stream the depth stream
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)

    # Start streaming from file
    pipeline.start(config)

    # Create opencv window to render image in
    cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    # Streaming loop
    while True:
        # Get frameset of depth
        frames = pipeline.wait_for_frames()

        # Get depth frame
        depth_frame = frames.get_depth_frame()

        # Colorize depth frame to jet colormap
        depth_color_frame = colorizer.colorize(depth_frame)

        # Convert depth_frame to numpy array to render image in opencv
        depth_color_image = np.asanyarray(depth_color_frame.get_data())

        # Render image in opencv window
        cv2.imshow("Depth Stream", depth_color_image)
        key = cv2.waitKey(1)
        # if pressed escape exit program
        if key == 27:
            cv2.destroyAllWindows()
            break

finally:
    pass

This is the code I changed based on the example you gave me, but I got the following error when I ran it in the Ubuntu 20.04 terminal. Do you know why that is?
"Traceback (most recent call last):
File "bagduqu.py", line 35, in
pipeline.start(config)
RuntimeError: Failed to resolve request. Request to enable_device_from_file("20240301_101433.bag") was invalid, Reason: Failed to create ros reader: Error opening file: 20240301_101433.bag"

@MartyG-RealSense
Collaborator

Is the bag file placed in the same folder as your Python script?

Is the bag file able to be played back if you drag and drop it into the center panel of the RealSense Viewer? If it does not play back then it could indicate that the bag file is incomplete or corrupted.

@junmoxiao11
Author

The bag file plays normally in the realsense-viewer. The bag file does not appear to be in the same folder as the Python script. I'll try again with your advice.

@junmoxiao11
Author

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py#L42C5-L42C38
I ran the code here, but it only seemed to open my bag file and play the video inside. Is there any way to capture the point in this video where the displacement changes?

@MartyG-RealSense
Collaborator

Can you confirm what you mean when you say 'displacement' please?

@junmoxiao11
Author

I mean I want to find the object in the video that has changed position. In short, I want to find the moving object and record its depth information.

@junmoxiao11
Author

2024-09-10.112327.mp4

This is a video I recorded with a wall in the background. Was the video shot well? Why do I feel like the video keeps shaking? That is, the depth at the same point is always changing and there is some noise. Is this all normal?

@MartyG-RealSense
Collaborator

It does appear as though you have quite a lot of fluctuation in depth values in the same area. Plain walls can be difficult for the camera to read because they lack texture detail.

If you are using the RealSense Viewer then increasing the Laser Power setting to its maximum value of '360' may help. This is because it will make the invisible dot-pattern projection cast onto the wall by the camera more visible to the camera and so increase its ability to extract depth information from the surface.

You could also go to the Post-Processing section of the Viewer's options side-panel and expand open the settings for the Temporal filter. Changing the Temporal filter's 'Filter Smooth Alpha' option from its default value of '0.4' to the lower '0.1' can significantly reduce depth value fluctuation.
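The same Temporal filter setting can be applied in pyrealsense2 if you process the data in code rather than in the Viewer; a sketch:

```python
import pyrealsense2 as rs

# Temporal filter with 'Filter Smooth Alpha' lowered from the default
# 0.4 to 0.1 to reduce frame-to-frame depth fluctuation
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.1)

# Inside the streaming loop you would then call:
#     depth_frame = temporal.process(depth_frame)
```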

@junmoxiao11
Author

Thanks, after modifying the settings according to your tips, the video picture has improved.

@junmoxiao11
Author

Untitled.video.Clipchamp.mp4

As you can see in this video, I can capture the red dot in the video and output its coordinates and depth. Now I want to transform this coordinate information from the image coordinate system to the world coordinate system, that is, to calculate the true motion of the red dot.

@MartyG-RealSense
Collaborator

The instruction rs2_deproject_pixel_to_point would be best in this situation for converting a 2D image pixel to a 3D world point. There is an example of a Python script for doing so at #9749

The script was for the L515 camera model, so you will need to change this line:

config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)

to this:

config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

@junmoxiao11
Author

b5e6f8a9bb84c2e996f0ec3f1aeb861
4a61470c2ea3cd7f4d5fa37b2955c9c
Please look carefully at the red lines and red circles in my two pictures. I set the resolution of both the depth image and the RGB image to 848x480. Why do the two images have different coordinates for the same point?

@MartyG-RealSense
Collaborator

In the '2D' mode of the RealSense Viewer, the depth and RGB images are not aligned. Their views will therefore not exactly match each other as the depth sensors and the RGB sensors are in different physical locations on the front of the camera (meaning the RGB image is horizontally offset from the depth image).

On the D455 camera model you can obtain an RGB image that is precisely aligned to the depth image. Instead of enabling the RGB stream, you can select the Infrared stream in the Stereo Module section and set its format to RGB8 instead of the default Y8 format. This is 'RGB from left infrared sensor' mode.

It is also worth bearing in mind that by default the Viewer has a Decimation filter enabled in the Post-Processing section of the options side-panel that 'downsamples' the selected resolution by half, so 848x480 is downsampled to 424x240 resolution. You can disable this filter to show the depth image in the full 848x480 resolution by left-clicking on the blue icon beside 'Decimation Filter' to turn the icon from blue to black, indicating that it is disabled.
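In pyrealsense2, the 'RGB from left infrared sensor' mode corresponds to requesting the infrared stream with an RGB8 format; a sketch (whether this format is offered can depend on the camera model and firmware):

```python
import pyrealsense2 as rs

config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
# Index 1 is the left imager; RGB8 here gives color that is already
# pixel-aligned with the depth image, since both come from that imager
config.enable_stream(rs.stream.infrared, 1, 848, 480, rs.format.rgb8, 30)

pipeline = rs.pipeline()
pipeline.start(config)
```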

@junmoxiao11
Author

Is there any way to align the image obtained by enabling the RGB stream with the depth image, instead of selecting the Infrared stream in the Stereo Module section and setting its format to RGB8?

@MartyG-RealSense
Collaborator

How does the image look if you try the SDK's Python example program align_depth2color.py to align the depth and RGB streams?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py
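The core of that example is the rs.align processing block, which maps the depth frame into the color sensor's viewpoint. A minimal sketch, assuming a connected camera:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align each frameset so depth pixels map onto the color image
align = rs.align(rs.stream.color)
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())
finally:
    pipeline.stop()
```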

@junmoxiao11
Author

Are there any books or websites that can help me further understand OpenCV and realsense camera?

@MartyG-RealSense
Collaborator

The two links below are the main official RealSense OpenCV resources.

https://dev.intelrealsense.com/docs/opencv-wrapper

https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md

There are no RealSense-specific OpenCV programming books available, unfortunately.

@junmoxiao11
Author

depth_intrin = rs.video_stream_profile(depth_frame.profile).get_intrinsics()
world_point = rs.rs2_deproject_pixel_to_point(depth_intrin, [point[0], point[1]], depth)
I added these two lines of code to my program to calculate world coordinates. I wonder how the depth camera converts pixel coordinates to world coordinates.

@MartyG-RealSense
Collaborator

The conversion of 2D pixel coordinates to 3D world coordinates is described as deprojection.

Deprojection takes a 2D pixel location on a stream's images, as well as a depth, specified in meters, and maps it to a 3D point location within the stream's associated 3D coordinate space. It is provided by the RealSense SDK function rs2_deproject_pixel_to_point()
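For intrinsics with no lens distortion, the computation reduces to the pinhole camera model: x = (u − ppx) / fx and y = (v − ppy) / fy, both scaled by the depth. A small Python sketch of that math (the SDK's C implementation additionally handles the camera's distortion models; the intrinsic values used below are made up for illustration):

```python
def deproject_pixel_to_point(pixel, depth, fx, fy, ppx, ppy):
    """Pinhole-camera deprojection: the core math behind
    rs2_deproject_pixel_to_point() for the zero-distortion case.
    (fx, fy) are focal lengths in pixels, (ppx, ppy) is the principal
    point, all taken from the stream's intrinsics; depth is in meters."""
    x = (pixel[0] - ppx) / fx  # normalized image-plane coordinate
    y = (pixel[1] - ppy) / fy
    return [depth * x, depth * y, depth]

# Hypothetical intrinsics for an 848x480 depth stream
point = deproject_pixel_to_point((724, 240), 1.0, 600.0, 600.0, 424.0, 240.0)
print(point)  # [0.5, 0.0, 1.0]
```

A pixel at the principal point always deprojects to [0, 0, depth], i.e. straight along the camera's optical axis.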

@junmoxiao11
Author

I would like to know the principle of the function rs2_deproject_pixel_to_point() and whether there is a corresponding formula or specification, so that I can understand how it computes and transforms the coordinates.

@MartyG-RealSense
Collaborator

Documentation for the rs2_deproject_pixel_to_point() function can be found here:

https://intelrealsense.github.io/librealsense/doxygen/rsutil_8h.html#a76529e2c7ee07143b043ecb6d82639f0

The source code in the RealSense SDK that performs the computation for the function is here:

static void rs2_deproject_pixel_to_point(float point[3], const struct rs2_intrinsics * intrin, const float pixel[2], float depth)

Here are some examples of its use in pyrealsense2 scripts.

https://snyk.io/advisor/python/pyrealsense2/functions/pyrealsense2.rs2_deproject_pixel_to_point

@MartyG-RealSense
Collaborator

Hi @junmoxiao11 Do you require further assistance with this case, please? Thanks!

@junmoxiao11
Author

Yes, I still have a lot of questions to ask. Please do not close this case.

@MartyG-RealSense
Collaborator

Okay, it's no problem at all to keep the case open. Thanks very much for letting me know.

@junmoxiao11
Author

Can I crop a video shot with realsense camera?

@MartyG-RealSense
Collaborator

The easiest way to reduce the height and width of a viewpoint is to use a lower resolution.

If you would prefer not to reduce the resolution and are going to be using OpenCV then you could try cropping the image with OpenCV code.

https://learnopencv.com/cropping-an-image-using-opencv/

@junmoxiao11
Author

It seems you misunderstood what I said about cropping.
I mean cropping the length of the video, so that the video starts where I want it to start. Can the RealSense Viewer do this?

@junmoxiao11
Author

The video length here refers to the length of time.

@MartyG-RealSense
Collaborator

Thanks very much for the clarification. The RealSense Viewer does not have a video cropping feature for the setting of a custom start point.

@junmoxiao11
Author

Is there any software that can do this, i.e. process the video shot by the RealSense camera?

@MartyG-RealSense
Collaborator

That kind of start-end cropping is usually only found in rosbag file editing tools for ROS, like the one at the link below.

https://github.com/alesof/rosbag2_editor

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 23, 2024

Hi @junmoxiao11 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
