Need help alleviating "ghosting" or "smearing" #7021
Hi @t-kole-yipp Given that the camera is 4 meters from the observed surface, the noise is not likely related to being too close to the observed object. If the noise is being generated by the floor, the position of the camera is fixed and you do not care about capturing the floor, you may be able to clean up the data by applying a Threshold filter. This allows you to define a minimum and maximum observable depth distance, and depth data outside of this range is excluded from the image. So if the floor is exactly 4 meters away and the camera is mounted in a fixed position, setting the Threshold filter to a maximum distance of 3.95 meters may exclude the floor whilst still capturing the box.

Alternatively, you could set the maximum Threshold filter distance to 10 meters (the camera's full depth sensing range) to test whether the noise is caused by the floor only being partially captured (as my experience is that in the Viewer, the maximum distance of the Threshold filter tends to be set to 4 meters by default if this filter is enabled).

In the Viewer, you can find the Threshold filter by expanding the Post Processing section of the options side-panel under the Stereo Module controls, and then expanding the Threshold Filter option in the post-processing filter list. A Threshold filter can also be programmed into your own application using C++, C# or Python code.

Bear in mind that the floor may be rendered poorly on the depth image if it is dark grey / black or has a reflective surface. In that situation, excluding the floor from the image with a sub-4m threshold setting may work best. A shadow on the floor from the box could also count as a dark grey texture that is difficult for the camera to read. If the floor surface is not reflective and is not completely black then projecting a strong light source onto the area to brighten the surfaces may help to make them more depth-readable and reduce shadows.
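The thresholding behaviour described above can be sketched in NumPy. This is an illustrative stand-in for what the SDK's Threshold filter does, not its actual implementation, and the 3.95 m cutoff simply echoes the suggestion above:

```python
import numpy as np

def threshold_depth(depth_m: np.ndarray, min_dist: float = 0.1,
                    max_dist: float = 3.95) -> np.ndarray:
    """Zero out depth readings outside [min_dist, max_dist] metres,
    mimicking the effect of the RealSense Threshold filter
    (0 is conventionally treated as 'no data' in depth images)."""
    out = depth_m.copy()
    out[(out < min_dist) | (out > max_dist)] = 0.0
    return out

# Toy scene: a 4.0 m floor with a box top roughly 1.0 m from the camera
depth = np.full((4, 4), 4.0)
depth[1:3, 1:3] = 1.0
filtered = threshold_depth(depth)  # floor readings become 0, box survives
```

In a pyrealsense2 application, the equivalent would be applying `rs.threshold_filter` with these min/max distances to each depth frame before generating the point cloud.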
There was a project where somebody built a makeshift conveyor with different shaped LEGO bricks passing along the conveyor under a camera mounted above. To deal with inconsistencies in lighting that made object detection more awkward, the project's designer "blasted" the conveyor with a powerful overhead light source to 'out-compete' other light sources in the area that may cause disruption by creating shadows. Details of the project and a making-of article for it are in the link below.

Edit: having reviewed the case again, the problem that you are experiencing seems to more closely resemble the kind of ghost noise where the points get "dragged out". In that conversation, a member of the RealSense team gives a list of advice on how to deal with the problem.
Thank you for the quick reply.
@t-kole-yipp Thanks very much - I look forward to your report.
Thanks very much for the test results. Yes, Medium Density is a balance between Fill factor and accuracy. Do the results on High Density show any improvement if you increase the value of the Laser Power setting please? On the RealSense Viewer program, this can be found in the Stereo Module section of the options side-panel, under the Controls sub-section. Increasing laser power can reduce "sparseness" in depth data, whilst reducing laser power can increase sparseness.
Increasing laser power seems to make the surfaces less "blobby", but doesn't help get rid of the ghosting.
Next, could you try turning off the Auto Exposure option in the Viewer and seeing if the image improves, please?
Hi MartyG, Enabling or Disabling Auto Exposure does not seem to do much.
If the depth mis-readings surrounding the box are the result of the box's shadow, there is an interesting long discussion at the link below about shadows and the "false positive" depth readings they may generate. There was not a clear solution reached at the end of the conversation though, just the general suggestion that "custom post processing steps" may correct the problem. The SR300 had a "depth confidence" function that the 400 Series stereo cameras do not have. It was once suggested that in the absence of that feature on the 400 Series, the Second Peak Threshold function of the 400 Series' "Advanced Mode" may be a substitute.
Hi @t-kole-yipp Do you still require assistance with this case, please? Thanks!
Hi Marty, thanks for checking in.
Okay, great - thanks so much for the update!
I tried to solve the ghosting problem at the source as much as possible, but I haven't had much luck. I've decided to focus my efforts more on alleviating the issue on my end, after the point clouds are rendered, by trying to get rid of the speckles in a less blunt way. If anyone has any further ideas, I'm open to suggestions!
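One less blunt approach to removing speckles after the cloud is rendered is radius outlier removal: drop any point with too few neighbours within a small radius, which removes isolated ghost speckles while leaving dense surfaces intact. Libraries such as Open3D ship this (`remove_radius_outlier`); the brute-force NumPy sketch below uses illustrative parameter values that would need tuning:

```python
import numpy as np

def radius_outlier_removal(points: np.ndarray, radius: float = 0.05,
                           min_neighbors: int = 3) -> np.ndarray:
    """Keep only points that have at least `min_neighbors` other points
    within `radius` metres (O(n^2); use a KD-tree for real clouds)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude self
    return points[neighbor_counts >= min_neighbors]

# Dense 25-point patch (1 cm spacing) plus one isolated "ghost" point
xs = np.linspace(0.0, 0.04, 5)
gx, gy = np.meshgrid(xs, xs)
patch = np.stack([gx.ravel(), gy.ravel(), np.zeros(25)], axis=1)
cloud = np.vstack([patch, [[1.0, 1.0, 1.0]]])
cleaned = radius_outlier_removal(cloud)  # isolated point is removed
```

This only helps with sparse speckles, though; densely smeared "curtains" of ghost points have many neighbours and survive a pure density test.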
@t-kole-yipp This feels familiar, as there is a different case today where some of the same concepts are being discussed: multiple cameras, point clouds and how to process the clouds to minimize ghost elements.
Hi @t-kole-yipp Do you have an update for us, please? Thanks!
Hi @MartyG-RealSense, none of the solutions posted here or in the related threads seem to have helped significantly. Thanks for the help so far.
Perhaps you could create your own custom preset file that copies and pastes the settings of the Medium Density preset but improves the fill rate (though not as much as the High Density preset).
Hi @t-kole-yipp Do you require further assistance with this case, please? Thanks!
Hi @t-kole-yipp Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
I have an issue which requires a little bit of drawing to explain.
I'm essentially trying to chain multiple RealSense D435s together to create one big view from above.
This is going well, for the most part. This process takes place in Unity. By converting the depth images to point clouds, I can align the devices to create a single map. However, there is one effect that I can't seem to get rid of, and I need help to understand what is causing it and which settings to tweak to remove it as well as possible.
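For reference, the depth-image-to-point-cloud conversion mentioned above follows the standard pinhole model (librealsense exposes this as `rs2_deproject_pixel_to_point` using the stream's intrinsics). A minimal NumPy sketch, with made-up placeholder intrinsics:

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Deproject a depth image (in metres) into an (H*W, 3) point cloud
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Toy 2x2 depth image at 1 m, placeholder intrinsics
pts = depth_to_points(np.full((2, 2), 1.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Note that this model is why lateral depth noise turns into long smears in 3D: a pixel whose depth is "dragged" between two surfaces deprojects to a point floating in the empty space between them.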
Consider the following scenario:
A large, 3 m tall box is standing on the floor. A D435 is positioned 4 meters above the floor, looking down.
In our situation, the box casts a big "shadow" on the floor: the region of floor that the D435 cannot see because the box occludes it. This is acceptable, as I'm not interested in the floor.
You would expect, in a perfect scenario, that the output would look like this:
On the left, the pixel depth from the RealSense; on the right, the reprojected point cloud.
However, the actual output looks a bit more like this:
The depth seems to bleed out across the image.
It's barely noticeable when just looking at the depth image, but when reprojected, these "ghost points" can smear out well over a meter at times. Quite significant, as shown here:
Additionally, there seems to be a "magic" threshold where it occurs. If the distance between surface A and B is less than about 1.5 meters, the smearing occurs. If it's more than that value, it pops away.
I'd be fine with having no data there, if that's the other option.
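Given that trade-off (no data being acceptable where the smearing occurs), one host-side option is to invalidate pixels that sit on a large depth discontinuity, since the "dragged out" points originate on the edge between the box top and the floor. A minimal NumPy sketch; the 0.3 m jump threshold is a placeholder to tune against the ~1.5 m gap described above:

```python
import numpy as np

def drop_depth_edges(depth_m: np.ndarray, max_jump: float = 0.3) -> np.ndarray:
    """Zero out any pixel whose depth differs from a horizontal or
    vertical neighbour by more than `max_jump` metres, removing the
    discontinuity pixels where smeared "ghost" points originate."""
    dh = np.abs(depth_m[:, 1:] - depth_m[:, :-1]) > max_jump
    dv = np.abs(depth_m[1:, :] - depth_m[:-1, :]) > max_jump
    edge = np.zeros(depth_m.shape, dtype=bool)
    edge[:, 1:] |= dh   # right pixel of each horizontal jump
    edge[:, :-1] |= dh  # left pixel of each horizontal jump
    edge[1:, :] |= dv   # lower pixel of each vertical jump
    edge[:-1, :] |= dv  # upper pixel of each vertical jump
    out = depth_m.copy()
    out[edge] = 0.0
    return out

# Floor at 4 m with a box top at 1 m: only the boundary ring is dropped
depth = np.full((5, 5), 4.0)
depth[1:4, 1:4] = 1.0
cleaned = drop_depth_edges(depth)
```

Because smeared edges are often more than one pixel wide, in practice you might dilate the edge mask by a pixel or two before zeroing.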
The different presets that the RealSense Viewer application provides, such as "High Accuracy", help a little bit.
However, they remove only a little of the "smearing" while also removing lots of good data.
Which settings should I tweak to get rid of this as much as possible?
Help is much appreciated!
Kind regards, T.