Ghosting in a free space (most probably reflections). #1343

Closed
marcingajewski14 opened this issue Aug 20, 2020 · 20 comments

marcingajewski14 commented Aug 20, 2020

Info
Camera Models: D435i / D435
Firmware Version: 05.12.06.00
Platform: Nvidia Xavier
SDK Version: 2.36.0
ROS Version: Kinetic

Hi,
we are using the depth image from D435 and D435i cameras on our robot as one of the data sources for SLAM. From time to time we observe strange random points hanging in the free space between the robot and obstacles/floor (ghost points). After conducting some experiments, we realized that they are most probably reflections - in this experiment the source was metal window blinds, but reflections from the floor are also likely. Covering the blinds made the ghost points in free space disappear. What is more, when the robot moves, the reflections appear and disappear (the temporal filters we use make matters worse). This is a huge problem, because Google Cartographer treats these reflections as obstacles, which then show up on our costmaps. We want our robot to work in different indoor conditions, so solving this problem is imperative: there cannot be ghost obstacles for the robot. In our setup we are using only the depth image.

Visualisation in Rviz:
[screenshot]

These ghost reflections are quite big:
[screenshot]

Different ghost points in free space:
[screenshot]

When we used a second camera placed very close to the first one, the ghosting was not visible in its output, which strengthens the hypothesis that this is a reflection.

We have done quite extensive research on this topic and considered different solutions (mostly not related to RealSense parameters), but we would love to exhaust all possibilities of tuning/setting up the RealSense cameras first. All other ideas are complicated and not backed by known solutions. Do you have any ideas how to solve this problem?

Regards,
Marcin Gajewski

@MartyG-RealSense (Collaborator) commented Aug 20, 2020

Hi @marcingajewski14 Reflective surfaces (including shiny floor tiles and metal surfaces) can be harder for the camera to read depth detail from, especially if a light-source is projecting onto the surfaces and intensifying the reflections. There is a long discussion in the link below about reflective floor tiles and ways to reduce the impact of the problem.

https://support.intelrealsense.com/hc/en-us/community/posts/360043612734-Issue-with-tiled-floor

There is another phenomenon that could be at work regarding the blinds, though. Areas with a repetitive horizontal or vertical pattern, such as window blinds or rows of fence posts, can confuse the camera. The discussion in the link below covers this and also suggests reference sources that might help to reduce the effect.

IntelRealSense/librealsense#6713

@marcingajewski14 (Author)

@MartyG-RealSense thank you very much for your prompt reply. The suggestions seem promising. I had never thought of patterns as a possible source of the problem before. We will go through the resources, discuss/test possible solutions, and I will give you feedback next week.

@MartyG-RealSense (Collaborator)

Thanks very much @marcingajewski14 - I look forward to your report. Good luck!

@marcingajewski14 (Author)

@MartyG-RealSense we are still working on this issue, but the resources are really great! Thank you very much again! I will add a report after going through all of the tests.

Related to this issue, I have an additional question about solving the problem. One of our earlier ideas was to use two RealSense cameras with overlapping views. Such ghost points could be removed by comparing the overlapping parts of the clouds while merging them and discarding elements that do not match (like the ghost points). It would require custom processing during cloud merging; a rough sketch of what we have in mind is below. Do you know of any solutions or articles about such an approach? I haven't found any reliable ones so far. What do you think about this idea?
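One way such a cross-camera check could be prototyped is with a simple nearest-neighbour test: keep a point from one camera only if the other camera also returns a point nearby, which is exactly the property the ghost points lack. A minimal sketch assuming NumPy/SciPy, two clouds already registered into a common frame, and an illustrative 5 cm agreement radius (all names here are placeholders, not anything from this thread):

```python
import numpy as np
from scipy.spatial import cKDTree

def keep_mutually_observed(cloud_a, cloud_b, radius=0.05):
    """Keep only the points of cloud_a (Nx3, metres, same frame as cloud_b)
    that have at least one neighbour from cloud_b within `radius`.
    A ghost point seen by only one camera fails this test and is dropped."""
    tree_b = cKDTree(cloud_b)
    # distance from each point of cloud_a to its nearest neighbour in cloud_b
    dist, _ = tree_b.query(cloud_a, k=1)
    return cloud_a[dist <= radius]

# illustrative usage with random stand-ins for the two registered clouds
cloud_a = np.random.rand(10000, 3)
cloud_b = np.random.rand(10000, 3)
merged = np.vstack([keep_mutually_observed(cloud_a, cloud_b),
                    keep_mutually_observed(cloud_b, cloud_a)])
```

This only makes sense inside the overlapping part of the two fields of view and assumes the extrinsic calibration between the cameras is accurate.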

@MartyG-RealSense (Collaborator)

Yes, overlapping the FOVs of multiple cameras can improve the quality of depth data, because the same area is observed by more than one camera and so there is redundancy in the data. The more cameras in the arrangement the better, as there are fewer blind spots in the observed scene.

You can arrange the cameras vertically or horizontally, for a tall FOV (e.g. observing a full human body) or a wide one.

The CONIX Research Center at Carnegie Mellon developed a realtime point cloud stitching system that could process data from up to twenty 400 Series cameras and interface it with the Point Cloud Library (PCL).

https://github.com/conix-center/pointcloud_stitching

PCL also provides filters for point clouds that can perform a range of advanced point-cloud processing functions, such as removing outliers (data that does not fit the model). So the CONIX model may provide some ideas for creating something simpler for your own project.
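As a rough illustration of the outlier-removal idea (a sketch only; it uses Open3D's statistical outlier removal as a Python stand-in for PCL's StatisticalOutlierRemoval filter, and the file names are placeholders):

```python
import open3d as o3d

# load a merged cloud saved from the cameras (path is illustrative)
pcd = o3d.io.read_point_cloud("merged_cloud.ply")

# drop points whose mean distance to their 20 nearest neighbours is more than
# 2 standard deviations above the global average; isolated points hanging in
# free space are the typical casualties of this test
cleaned, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                       std_ratio=2.0)

o3d.io.write_point_cloud("merged_cloud_cleaned.ply", cleaned)
```

On a robot the same test can be run on voxel-downsampled clouds to keep the computational cost manageable.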

@marcingajewski14 (Author)

Thank you for your suggestions. We will probably have to develop some custom solution, maybe based on the point cloud stitching you referenced. However, running computationally demanding PCL filters on the robot could be difficult.

@MartyG-RealSense (Collaborator)

Okay, let's continue looking at it once your tests are complete.

@MartyG-RealSense (Collaborator)

Just setting a reminder to keep this case open longer.

@marcingajewski14 (Author)

@MartyG-RealSense we are currently working on the parameters from a JSON file. As our test stand we are using a wooden pallet to provide a repetitive pattern. We tried to compare the accuracy of our settings with the High Accuracy preset provided by Intel, and of course Intel's preset eliminates a lot of the ghost points; the pallet also keeps the shape of a pallet (with all rows clearly visible) instead of flattening into a plane. However, those tests are not reliable, because the ghost points are visible only from very specific positions, so we capture them mainly during movement; that there are fewer of them is rather our assumption. We need a more reliable, repeatable method of comparison, and we also wanted to tune the preset provided by Intel a little for our needs.

We are curious whether it is possible to, for example, record a raw bag of a scene with ghost points, then process it offline with different presets and compare the saved results. We don't know whether such offline processing with different presets is possible or how to do it. Do you have any suggestions for repeatable tests in our case?

@MartyG-RealSense (Collaborator)

In the Viewer at least, once a bag is loaded you can change the depth visualization settings (e.g. colorizer presets) and apply post-processing filters, but you cannot change the main Visual Preset configuration setting that is normally at the top of the Viewer's options side-panel.

Perhaps you could record a bag for each preset, 20 seconds long each, so that you have a set of bags to compare. They will loop round in Viewer playback, so it should be straightforward to see what effect post-processing filter changes have on the bag.
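A minimal pyrealsense2 sketch of that "one bag per preset" workflow, assuming advanced mode is already enabled on the camera; the preset JSON files and bag names below are placeholders:

```python
import time
import pyrealsense2 as rs

def record_bag_with_preset(preset_json_path, bag_path, seconds=20):
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
    config.enable_record_to_file(bag_path)   # everything streamed goes into the bag

    profile = pipeline.start(config)

    # the preset is applied on the device itself through advanced mode,
    # so it is baked into the depth data that ends up in the bag
    advanced = rs.rs400_advanced_mode(profile.get_device())
    with open(preset_json_path) as f:
        advanced.load_json(f.read())

    end = time.time() + seconds
    while time.time() < end:
        pipeline.wait_for_frames()            # keep frames flowing to the recorder

    pipeline.stop()

# e.g. one ~20 s bag for the High Accuracy preset and one for a custom tuning
record_bag_with_preset("HighAccuracyPreset.json", "high_accuracy.bag")
record_bag_with_preset("CustomPreset.json", "custom_tuning.bag")
```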

@marcingajewski14 (Author)

So, as I understand it, manipulating the JSON preset offline on a bag is impossible?

@MartyG-RealSense (Collaborator)

I will ask a RealSense team member about it and get back to you.

@MartyG-RealSense (Collaborator)

I have been informed that the effect of presets is calculated inside the ASIC of the camera's D4 chip, hence it cannot be modified without physical hardware. The bag file contains all the raw data from the camera, and so this defines most of the controls and the preset. As mentioned earlier, depth visualization and post-processing changes can still be applied to the bag data.
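For example, a pyrealsense2 sketch of replaying a recorded bag and applying the standard post-processing blocks offline (the bag name and the threshold values are illustrative, not from this thread):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_device_from_file("high_accuracy.bag", repeat_playback=False)
pipeline.start(config)

# the same post-processing blocks the Viewer exposes, applied offline to the bag
filters = [
    rs.decimation_filter(),          # downsample the depth image
    rs.spatial_filter(),             # edge-preserving spatial smoothing
    rs.temporal_filter(),            # per-pixel temporal averaging
    rs.threshold_filter(0.3, 4.0),   # clip returns outside 0.3-4.0 m
]

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        for f in filters:
            depth = f.process(depth)
        # ...convert `depth` to a point cloud here and count/inspect ghost points...
except RuntimeError:
    # wait_for_frames() raises once the non-repeating playback runs out of frames
    pass
finally:
    pipeline.stop()
```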

@marcingajewski14 (Author)

Thank you for your response. It makes the situation a little harder, but we will continue with further comparisons and tuning. Some ghost points are still present with Intel's preset, so we will try additional filters, like those described in the whitepapers you mentioned. We are still working on it.

@MartyG-RealSense (Collaborator)

Hi @marcingajewski14 Do you have an update for us regarding your tests please? Thanks!

@MartyG-RealSense (Collaborator)

Hi @marcingajewski14 Do you have an update for us regarding your tests please?

@marcingajewski14 (Author)

Yes, I am sorry for not responding.
We tested different JSON files and decided to use the High Accuracy preset provided by Intel. It reduced the number of ghost points, especially in bad lighting conditions; dark rooms were one of their sources, and the tests were conducted under different lighting.

The ghosting problem still exists - for example on heaters or the mesh of chairs - but it is reduced. If we observe too many ghost points we will use the post-processing filters provided by Intel, or we are thinking about testing the new RealSense lidar. I don't know what more we can do.

@MartyG-RealSense (Collaborator)

The L515 lidar camera would not be suitable for environments with variable lighting, as it works best indoors under controlled lighting conditions.

If you are having problems with glare, applying a physical linear polarization filter over the camera lenses can significantly reduce its negative effects. Section 4.4 of Intel's white paper on optical filters has a lot of detail about this.

https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400#section-4-the-use-of-optical-filters

@MartyG-RealSense (Collaborator)

Hi @marcingajewski14 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense (Collaborator)

Case closed due to no further comments received.
