Intel RealSense camera - noisy image #11912
Hi @Gopi-GK02 The large black hole at the center of your depth image will be because the intensity of the round lamp is too strong for the camera to obtain depth information from that part of the scene.

If the rectangular lamp in the top corner of the image is a fluorescent light, such as a ceiling strip light, then that type of lighting can create depth noise. This is because fluorescent lights contain heated gases that flicker at frequencies that are difficult for the human eye to see. If you do have a ceiling strip light then the flicker can be compensated for using a setting in the RGB section of the Viewer's options side-panel called Power Line Frequency. This setting has a choice of '50' and '60' values and should be matched as closely as possible with the operating frequency of the light. Typically this will be 50 in European regions and 60 in North American regions.

I note that your USB connection type is being detected as 2.1 by the RealSense Viewer, which has slower performance and more limited resolution / FPS options than a faster USB 3 connection. Can you confirm whether your camera is plugged into a USB 3 port on your PC, please? If it is in a USB 3 port, using a USB 2 cable or a USB 2 hub with the camera could also cause it to be detected as USB 2.1 speed.

If the RGB image is blurry then maximizing the Sharpness RGB setting to '100' can greatly reduce RGB blurring.
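If you later move from the Viewer to a script, a minimal Python sketch of the same Power Line Frequency change might look like the one below. The enum value 2 is an assumption for 60 Hz (1 is typically 50 Hz); check get_option_range() and the option description on your own device.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# The RGB (color) sensor exposes the Power Line Frequency option
color_sensor = profile.get_device().first_color_sensor()

if color_sensor.supports(rs.option.power_line_frequency):
    # Assumption: value 2 corresponds to 60 Hz on most D400 cameras
    color_sensor.set_option(rs.option.power_line_frequency, 2)

pipeline.stop()
```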
Hi @MartyG-RealSense, these are the results after I changed the Laser Power Frequency. The reflection of the light still caused flickering, so I changed to a place with minimal light reflection and these are the results. You were right, the depth image is better after changing the laser power frequency, but still, I don't think it will produce a better 3D image. Any suggestion on how to improve it?
There is a relationship between the Laser Power setting and depth image quality. As Laser Power is reduced from its default value of '156', more holes and gaps can appear in the image, whilst increasing Laser Power above 156 (its maximum value is 360) can increase depth image detail and help to fill in holes. Laser Power is lowered to '60' on your images.

It looks as though the incorrect setting has been changed. 'Laser Power' should not have been reduced to 60. The fault was mine, as I should have said Power Line Frequency in my comment above instead of Laser Power Frequency. I have edited the comment to correct it and offer my sincere apologies. Power Line Frequency in the RGB section of the side-panel should be set to '60' in its drop-down menu.
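As an aside, if you ever adjust Laser Power from Python rather than the Viewer, a minimal sketch might look like this (the depth sensor owns the option; 156 is simply the default mentioned above):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()

# Query the supported range before writing, then restore the default of 156
laser_range = depth_sensor.get_option_range(rs.option.laser_power)
print("Laser Power range:", laser_range.min, "-", laser_range.max)
depth_sensor.set_option(rs.option.laser_power, 156)

pipeline.stop()
```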
The large black area that corresponds to the foreground may be because that area is close to the camera. By default with the D435, areas up to around 10 cm from the camera lenses will not be rendered on the depth image because the camera has a minimum depth sensing distance, below which the depth image breaks up. Setting the Disparity Shift option to '50' instead of the default '0' may enable more of the foreground depth to be rendered, as increasing Disparity Shift reduces the minimum depth sensing distance of the camera (though it also reduces the maximum observable depth distance at the same time). This option is located in the Stereo Module > Advanced Controls > Depth Table section of the Viewer's side-panel.

Black-surfaced parts of objects may also not show up on the depth image. It is a general physics principle (not specific to RealSense) that dark grey or black absorbs light, which makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light is absorbed and so the less depth detail the camera can obtain. Casting a strong light source onto black objects can help to bring out depth detail from them.
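For reference, Disparity Shift can also be set programmatically through the SDK's advanced mode rather than the Viewer. The sketch below is an assumption-based example using pyrealsense2's rs400_advanced_mode and the depth table it exposes; the disparityShift field name comes from the SDK's STDepthTableControl structure:

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]  # assumes a single RealSense camera is connected

advnc_mode = rs.rs400_advanced_mode(dev)
if not advnc_mode.is_enabled():
    advnc_mode.toggle_advanced_mode(True)  # note: the device re-enumerates after this

depth_table = advnc_mode.get_depth_table()
depth_table.disparityShift = 50   # default is 0; raising it shortens the minimum depth distance
advnc_mode.set_depth_table(depth_table)
```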
The depth image above does not look correct. If it is a flat wall then you would expect it to be mostly the same color instead of a spectrum of colors ranging from blue (near) to red (far). The empty black area that the cable is in is expected though, because - as mentioned earlier - the camera has difficulty seeing black-colored surfaces and objects such as black cables.

The Infrared 2 (right-side infrared) image has some noise and distortion on it that the left infrared image does not have. They should be approximately the same image aside from one of the two infrared images being slightly horizontally offset from the other due to the different positions of the left and right sensors on the front of the camera. Next, please try resetting the camera to its factory-new calibration settings using the instructions at #10182 (comment) to see whether it improves the image.
The Infrared 2 image at least no longer has a corruption on it after the write-table reset. With such a dense dot pattern on the table surface for the camera to analyze for depth information, it is unusual that the depth image is broken like this. As you are on a Windows 11 computer, it is also unusual that the camera is being detected as USB 2.1, as I would expect a modern computer to have USB 3 ports. Can you confirm that the camera is plugged into a USB 3 port, please? If it is, then the USB 2.1 status is a mis-detection of the USB connection that would affect its performance, as USB 2.1 is slower than USB 3.
The ports are USB 3.0, but I am using a USB 2.0 cable.
The camera will be detected as USB 2 in a USB 3 port if the cable is USB 2. This is because a USB 3 cable has extra wires in it that a USB 2 cable does not have, and those extra wires enable USB 3 detection. A USB 3 cable should be used with the camera if possible to enable it to be detected as being on a USB 3 connection.
I don't have a USB 3.0 cable as of now; I will try it out as soon as I get one. I am using Python.
If it is the RGB aspect of the pointcloud that is blurry then you can improve that by maximizing RGB sharpness, setting it to a value of '100'. It can be set with Python code by defining a 'color_sensor' to access the RGB sensor instead of the depth sensor and then using the SDK instruction rs.option.sharpness to set the RGB sharpness. The instruction should be placed after the pipeline start line.
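A minimal sketch of that approach might look like the following (the stream resolution and FPS values are only illustrative):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Access the RGB sensor (not the depth sensor) and maximize sharpness
color_sensor = profile.get_device().first_color_sensor()
color_sensor.set_option(rs.option.sharpness, 100)

pipeline.stop()
```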
Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, the issues are not resolved with the USB 3.0 cable.
Do the black holes and gaps reduce if you change the Depth Units option from its default value of 0.001 to the lower value of 0.0001? The 'Depth Units' setting can be found under the Controls sub-section of the Stereo Module options. You can input 0.0001 manually by left-clicking on the pencil icon beside the option.

If changing the Depth Units does not improve the image, please next try using the instructions at the link below to reset the Viewer to its defaults. This is a different procedure from resetting the camera hardware.

https://support.intelrealsense.com/hc/en-us/community/posts/360052013914/comments/1500000416522
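If it helps, the equivalent Depth Units change can also be made from Python. This is only a hedged sketch; the depth scale reported afterwards should reflect the new value:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()

# Change Depth Units from the default 0.001 (1 mm per unit) to 0.0001 (0.1 mm per unit)
depth_sensor.set_option(rs.option.depth_units, 0.0001)
print("Depth scale is now:", depth_sensor.get_depth_scale())

pipeline.stop()
```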
The projected dot pattern on the infrared image is strong and bright, so it is not clear why the camera is having difficulty with depth-analyzing the surface of the object.

I note that the Asic Temperature reading is 43 degrees C. This temperature represents the internal operating temperature of the camera. Officially the maximum recommended operating temperature is 35 degrees C, though the camera may be able to go up to 42 degrees in real-world conditions before glitches occur above that level. Is there heat in the environment that the camera is in, such as hot weather, that could be causing this high temperature?

How many of the holes in the depth image are filled in if you increase the Laser Power option to its maximum of '360'? Using the Medium Density Visual Preset instead of the default Custom may also help to fill in some of the holes. The Post-Processing section of the options side-panel also has a Spatial filter, enabled by default, that has a 'holes filling mode' as a sub-option. Set its drop-down menu to 'Unlimited' if it is not set to that already in order to maximize the filter's hole-filling effect.
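For completeness, here is a hedged Python sketch of those three suggestions (maximum Laser Power, Medium Density preset, and the Spatial filter's unlimited hole-filling). The numeric preset value 5 for Medium Density and the hole-filling value 5 for 'Unlimited' are assumptions based on the D400 option enums; check them against your SDK version:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()

# Raise Laser Power to its maximum (360 on D435, per the reported option range)
laser_range = depth_sensor.get_option_range(rs.option.laser_power)
depth_sensor.set_option(rs.option.laser_power, laser_range.max)

# Assumed mapping: 5 = Medium Density in the D400 visual preset enum
depth_sensor.set_option(rs.option.visual_preset, 5)

# Spatial filter with its hole-filling mode set to 'Unlimited' (assumed value 5)
spatial = rs.spatial_filter()
spatial.set_option(rs.option.holes_fill, 5)

frames = pipeline.wait_for_frames()
filtered_depth = spatial.process(frames.get_depth_frame())

pipeline.stop()
```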
Setting the preset to medium density and setting the hole filling mode as unlimited does seem to provide some improvement compared to previous images. When exporting frames to a bag file, the number of frames in the bag will rarely be very close to the number of live frames during the recording. Working out the number of expected frames by multiplying FPS by the number of seconds is a commonly used technique but is typically not an accurate method.
The areas of the image surrounding the object that are large black gaps look as though they are dark grey or black colored in the real-world scene and also highly reflective (it looks like the floor). Both factors will make such surfaces difficult for the camera to read. If you compare to the image at #11912 (comment) where a different surface is in the image, there are far fewer large black gaps. So moving the object to a location with a lighter, less reflective floor surface, if possible, might enhance the image.

In regard to finding the "correct" frame number versus the number of frames in the bag, it is best not to think too much about it and just aim to get a number of bag frames that is as close as possible to the number of frames that were in the live recording session. It will likely not be possible to avoid loss of some frames from the bag.
Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, is there any function to get the camera matrix and distortion coefficients of the D435 camera?
You can access intrinsics and extrinsics in Python with the SDK instructions get_intrinsics and get_extrinsics_to. See #3986 for an example of get_intrinsics and #10180 (comment) for get_extrinsics_to. On the D435 camera model the five distortion coefficient values will all be zero. The reason for this is provided by a RealSense team member at #1430 (comment). There are newer RealSense models such as the D455 that have some exceptions to the all-zero rule by having non-zero coefficient values for the RGB color stream.
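A minimal sketch of reading both from a running pipeline might look like this (the stream settings are only illustrative):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

depth_profile = profile.get_stream(rs.stream.depth).as_video_stream_profile()
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()

# Intrinsics: focal lengths, principal point and distortion coefficients
intr = color_profile.get_intrinsics()
print("fx, fy:", intr.fx, intr.fy)
print("ppx, ppy:", intr.ppx, intr.ppy)
print("distortion coefficients:", intr.coeffs)  # all zeros on D435

# Extrinsics: rotation and translation from the depth stream to the color stream
extr = depth_profile.get_extrinsics_to(color_profile)
print("rotation:", extr.rotation)
print("translation:", extr.translation)

pipeline.stop()
```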
Hi @MartyG-RealSense, I'm trying to understand the depth frame and trying to find the x, y, z values of a point. The two monochrome infrared streams are used to form the depth stream, so each pixel in the depth stream represents the distance of that point from the camera base in the real world. Is my understanding correct?
Is this code correct? #1413 (comment) I'm trying to find the x, y, z coordinates of a particular point from the depth frame.
The left and right infrared frames that are used to construct the depth frames are raw frames inside the camera hardware and are not the Infrared and Infrared 2 streams. This is why depth can be streamed even when the infrared streams are inactive, as these IR streams are not involved in depth frame generation.

In regard to the code, #10037 (comment) may be a helpful reference for obtaining 3D xyz with rs2_deproject_pixel_to_point. Regarding #1413 (comment) - it discusses how the real-world Z-distance in meters can be obtained by multiplying the raw pixel depth value (uint16_t) by the depth unit scale. For example, if the raw pixel depth value for a particular coordinate is 6500 and the depth scale is 0.001 then 6500 x 0.001 = 6.5 meters distance for that particular coordinate.
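A short sketch that combines both ideas might look like this; the pixel coordinate is just an example, and get_distance() already applies the depth scale for you:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Intrinsics of the depth stream are needed for deprojection
depth_intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()

x_pixel, y_pixel = 320, 240  # example pixel coordinate

# Z-distance in meters (the raw uint16 value multiplied by the depth scale internally)
z_meters = depth_frame.get_distance(x_pixel, y_pixel)

# Deproject the 2D pixel plus its depth into a 3D [X, Y, Z] point in meters
point_xyz = rs.rs2_deproject_pixel_to_point(depth_intrin, [x_pixel, y_pixel], z_meters)
print(point_xyz)

pipeline.stop()
```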
Hi @Gopi-GK02 Do you require further assistance with this case, please? Thanks!
Hi @Gopi-GK02 The pc.calculate() instruction is usually preceded by a map_to() instruction to map depth and color together. That is the proper way to align depth and color when using pc.calculate().
So your depth_frame line likely does not require the apply_filter command.
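As a hedged illustration of that ordering (map_to before calculate), a minimal Python sketch is:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

pc.map_to(color_frame)               # tell the pointcloud block which stream to texture from
points = pc.calculate(depth_frame)   # generate the textured pointcloud from the depth frame

pipeline.stop()
```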
How do you save your pointcloud data to a ply file, please? Your script does not have any export_to_ply() code in it.
Below is the code
I can't see anything obviously wrong with your code. As the top of this discussion says that you are using Windows, you could try loading the ply file into Windows' 3D Viewer tool to see if the file has any content. You should be able to find the tool by typing 3d viewer into the text box at the bottom of your Windows screen. Then when 3D Viewer launches, drag and drop the ply file into the main panel of the tool to display its contents.
As you have the ability to use Python (your earlier pointcloud script at #11912 (comment) was C++), you could try an Open3D Python pointcloud script posted last week at #12090 to see whether it exports a ply correctly if you add export_to_ply to the end of it. #3579 has an example of an export_to_ply program for Python. However, there is a well-known issue with pyrealsense2 where it will not export color to a ply. The only example of color ply export in Python that has previously worked is at #6194 (comment).
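For reference, a minimal export_to_ply sketch in Python looks roughly like this (the file name is arbitrary and, as noted above, the color texture may not survive the export):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

pc.map_to(color_frame)
points = pc.calculate(depth_frame)

# Write the pointcloud to a ply file, using the color frame as the texture source
points.export_to_ply("pointcloud.ply", color_frame)

pipeline.stop()
```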
I have been using the Python version to generate the pointcloud along with color.
The above Python code works; it generates a pointcloud with color and normals.
Thanks so much for sharing your Python code above. So the pointcloud is successfully generated but it is not exporting successfully to a ply file? If the exported ply is empty then an alternative to using export_to_ply is to use save_to_ply instead, which offers more options for configuring the export, like the options in the RealSense Viewer's ply export interface.
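save_to_ply is a processing block, so a sketch of using it (modelled on the SDK's Python ply export example; the option names below are the ones exposed by pyrealsense2) would be:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
colorizer = rs.colorizer()  # gives the frames a texture the exporter can use

frames = pipeline.wait_for_frames()
colorized = colorizer.process(frames)

# Configure the export: text (non-binary) ply with normals included
ply = rs.save_to_ply("output.ply")
ply.set_option(rs.save_to_ply.option_ply_binary, False)
ply.set_option(rs.save_to_ply.option_ply_normals, True)

# The file is written when the frames are passed through the processing block
ply.process(colorized)

pipeline.stop()
```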
The pointcloud is generated and exported to a ply file in Python successfully.
You could try running librealsense's C++ rs-pointcloud example program to see if it works, as it also makes use of map_to and pc.calculate. https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud

If your librealsense installation has the example programs installed then you should be able to run a pre-made executable version of rs-pointcloud without having to build it yourself. Examples on Linux are located in the usr/local/bin folder, whilst on Windows they can be found at C: > Program Files (x86) > Intel RealSense SDK 2.0 > tools if you have installed the full RealSense SDK with the Intel.RealSense.SDK-WIN10 installer program.
I have built librealsense with the examples and the rs-pointcloud example works.
Is there any difference if you change the setup of your color_frames and depth_frames lines so that they are the same as those of rs-pointcloud?
In your C++ script at #11912 (comment) it looks as though you could use RGB8 as the color format instead of BGR8 as the script is not using OpenCV commands.
Have you also tried commenting out your clock code to make sure that it is not affecting the script negatively with its break command?
We are trying to improve the depth image by applying the post-processing filters, but we are not sure how to do it. I have attached the code below.
I hope that the script at #11246 will be a helpful Python reference for performing align after post-processing.
The script applies the post-processing filters after the alignment.
Yes, it does apply align after the post-processing filters.
Sorry @MartyG-RealSense,
The filter and alignment instructions are applied with .process instructions. Until that instruction is used, the filter or alignment is not activated. In the script from the top of #11246, the post-processing filters are applied first with .process instructions and then the alignment is applied with .process afterwards, further down the script.
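As a hedged sketch of that ordering (not the exact #11246 script), assuming each filter is given the whole frameset and passes non-depth frames through unchanged:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Post-processing filters, each a processing block with its own .process() call
decimation = rs.decimation_filter()
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()

# Alignment block, also activated via .process()
align = rs.align(rs.stream.color)

frames = pipeline.wait_for_frames()

# 1. The filters run first, in the order they appear in the script
frames = decimation.process(frames).as_frameset()
frames = spatial.process(frames).as_frameset()
frames = temporal.process(frames).as_frameset()

# 2. Alignment runs afterwards, on the already-filtered frameset
aligned = align.process(frames)
aligned_depth = aligned.get_depth_frame()
aligned_color = aligned.get_color_frame()

pipeline.stop()
```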
I still don't understand, @MartyG-RealSense. I don't understand where and how the .process method applies the filters first and then aligns the frames.
When a script is run, the computer starts reading the script at the first line and proceeds downwards through the script until it reaches the bottom. So if the .process instructions for the filters are listed first in the script and the .process instruction for align is listed after the filters, further down the script, then alignment will be activated after all of the filters have been activated. This is because the line that activates alignment comes after the filter lines. The computer has to go through each line of the script in the order that it is written. An exception to this rule is if there is an instruction in the script to jump to another section of the script, but this particular script does not have that.
I will go through the code again.
Hi @Gopi-GK02 Any polarizing filter should work so long as it is linear, except for the round ones used in 3D glasses. This means that thin-film polarizing filters can be purchased inexpensively by searching stores such as Amazon for the term 'linear polarization filter sheet'.
Okay.
The stereo module produces grainy/blurry output
I'm new to 3D development and this is my first time using an Intel RealSense camera.
I'm trying to get the point cloud of an object and visualize it in pybullet, but the stereo module of the camera produces blurry/noisy images.