Feature request: HDR mode #3844

Closed
stwirth opened this issue Apr 25, 2019 · 42 comments
stwirth commented Apr 25, 2019


Required Info

Camera Model: D400 (D410 / D415 / D435i)
Firmware Version: any
Operating System & Version: not relevant
Kernel Version (Linux Only): not relevant
Platform: PC
SDK Version:
Language: C++
Segment: Robot

Issue Description

Related issues: #2875 (comment)

The D415 and D435i are advertised as working in different light conditions, including direct sunlight. While the auto-exposure does a good job indoors and outdoors, there are clearly times when the dynamic range of the camera reaches its limits. With direct sunlight falling through a window on the ground we have seen that either the sunny patch gets over-exposed or the shade gets under-exposed.

Example overexposed area (from left IR imager):
[image]

... and half a second later (after the auto-exposure adjusts):
[image]

In both conditions, the fill rate of the depth image drops.

As there clearly exist exposure settings that work for either the sunny or the shady spots, my suggestion is to offer an "HDR mode" which cycles through different exposure settings (also called "exposure bracketing" in photography) to assemble an artificial HDR image. We tried doing this by calling set_option(RS2_OPTION_EXPOSURE, exposure) with a different exposure after each frame, but it seems that this is not a very fast operation; the resulting frame rate dropped. Maybe it would be possible to implement this in firmware and offer it as an additional exposure option, parallel to auto-exposure?

Note that exposure ROI and exposure setpoint don't help here as there is no exposure value that makes both sun and shadow expose correctly at the same time.

stwirth changed the title from "Feature request HDR mode" to "Feature request: HDR mode" on Apr 25, 2019
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 26, 2019

Thanks so much for your request!

High Dynamic Range tends to be a feature found in more expensive CMOS sensors. An Intel article a few years ago on the rationale for parts selection said the following:

"if long range and high quality are paramount and the product is less sensitive to price, it will be possible to use higher performance CMOS sensors (higher dynamic range, more sensitivity and higher quality) as well as better and larger optics so that the input images are of high quality. In the other extreme where cost and size are critical, it is possible to use small baseline, cheap consumer-grade CMOS sensors, and plastic optics".

You could experiment with different Visual Preset configurations, such as 'High Density', which gives a higher fill factor.

https://github.com/IntelRealSense/librealsense/wiki/D400-Series-Visual-Presets

@stwirth
Author

stwirth commented Apr 29, 2019

Thanks @MartyG-RealSense for your reply.
We have experimented with visual presets and fixed exposure settings as well as changing the exposure setpoint and it became clear that the physical limit of the CMOS is reached in these high contrast situations.

I am aware that the cameras are optimized for low cost/weight/power consumption. While I would like to see a product with the same technology with a higher quality CMOS, I think it is possible to get this "artificial HDR mode" working with the D400 products as they are. What I don't know is at which rate the low-level exposure setting can be changed (i.e. how high can the frame rate be when the exposure is altered after every single frame).

@MartyG-RealSense
Collaborator

The link below has an interesting discussion of the math involved in determining how FPS is affected when the exposure value is changed.

#1957 (comment)

@dorodnic
Contributor

@stwirth - still need to check, but I think the camera can already do this (change exposure at a per-frame rate). You'd need frame metadata enabled and an init script until we can properly add it to librealsense.

@stwirth
Author

stwirth commented Apr 29, 2019

@dorodnic that sounds great, can you give me some more pointers on how to do it?

@dorodnic
Contributor

Sure, but please consider that it is not yet 100% officially supported.
First, let's check that you can get it working:

  1. You need a D435 or D435i (this trick will not work with rolling-shutter models)
  2. Make sure you disable Auto-Exposure in the Viewer and start depth and IR streaming
  3. Run rs-terminal -d 0 to connect to the first device
  4. Drop 43 00 ab cd 7b 00 00 00 2f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 19 00 53 75 62 50 72 65 73 65 74 4e 61 6d 65 00 00 00 00 00 00 00 00 02 00 05 00 01 01 00 01 00 2a 0b 00 00 05 00 01 01 00 01 00 8f 7d 00 00 into the terminal
  5. You should see the image flickering between high and low exposure

Once you make sure this is working, I can share more info on how to do this in code and how to modify exposure values

@stwirth
Author

stwirth commented Apr 29, 2019

@dorodnic 🚀 your magic works! I see the flickering at a high rate (the bright image is too bright though).

@dorodnic
Contributor

Great!

8f 7d is 0x7d8f = 32,143, the high exposure value
2a 0b is 0x0b2a = 2,858, the low exposure value

You should be able to replace these with your values.

@stwirth
Author

stwirth commented Apr 29, 2019

@dorodnic Great! I'll try experimenting with this. Would it be possible to have three values as well? The ultimate goal would be to have these exposure values be controlled by an algorithm that maximizes the fill-rate.

@dorodnic
Contributor

For an example of embedding terminal commands in your code, please take a look at #2922

@dorodnic
Contributor

Would it be possible to have three values as well?

I think so, but I only tried two. The hardware can basically sequence N "steps", with each changing Exposure / Gain / Emitter State and holding the new value for x frames. The protocol is somewhat complicated, so I can't just dump the spec, but I'll try to help as much as possible.

@stwirth
Author

stwirth commented Apr 29, 2019

@dorodnic awesome, thanks! Any hints on implementing an "HDR auto-exposure" that automatically determines the two or three exposure values? I was thinking of something like

librealsense/src/algo.cpp

Lines 247 to 274 in 9bdbfd9

histogram_score(H, total_weight, score);
// int EffectiveDynamicRange = (score.highlight_limit - score.shadow_limit);
///
float s1 = (score.main_mean - 128.0f) / 255.0f;
float s2 = 0;
s2 = (score.over_exposure_count - score.under_exposure_count) / (float)total_weight;
float s = -0.3f * (s1 + 5.0f * s2);
LOG_DEBUG(" AnalyzeImage Score: " << s);
if (s > 0)
{
    direction = +1;
    increase_exposure_target(s, target_exposure);
}
else
{
    LOG_DEBUG(" AnalyzeImage: DecreaseExposure");
    direction = -1;
    decrease_exposure_target(s, target_exposure);
}
if (fabs(1.0f - (exposure * gain) / target_exposure) < hysteresis)
{
    LOG_DEBUG(" AnalyzeImage: Don't Modify (Hysteresis): " << target_exposure << " " << exposure * gain);
    return false;
}
but cutting the histogram into two (or three) sections.

@dorodnic
Contributor

I'm not an expert on this. The team spent a lot of time developing a stable AE algorithm with just one value that would converge fast and not fall into oscillations.
You are certainly welcome to share your results.

@stwirth
Author

stwirth commented May 1, 2019

@dorodnic you wrote above that this trick does not work for rolling shutter. I actually need this for the D410 camera. Do you have a suggestion for that?

@dorodnic
Contributor

dorodnic commented May 2, 2019

Firmware can only do this with the global shutter cameras as of right now.
Maybe it will be extended to rolling shutter at some later point, but not in the near future I believe.
Hence, the only alternative is to simulate this in software and suffer a couple of frames of delay.

@stwirth
Author

stwirth commented May 2, 2019

@dorodnic with "simulate this in software" you mean calling set_option(RS2_OPTION_EXPOSURE, exposure) repeatedly?

@dorodnic
Contributor

dorodnic commented May 4, 2019

Yes

@g2-bernotas

@stwirth did you manage to simulate HDR using RealSense?

I am imaging objects with a black and white pattern (variable proportions) in slightly variable ambient light, so one exposure (set manually) is not an option, while the AE tends to focus either on the black or the white areas, so one area becomes overexposed while the other becomes underexposed.

@stwirth
Author

stwirth commented Jul 23, 2019

@g2-bernotas I stopped working on this.

@mhkabir

mhkabir commented Nov 10, 2019

@dorodnic Would you be able to share how to use the above API for controlling the emitter on a per-frame basis? Thanks!

@dorodnic
Contributor

Hi @mhkabir
For a frame-on-frame-off sequence we provide a dedicated option (all this per-frame toggling is currently only implemented for the D430 sub-family, since it's more complicated in the case of rolling-shutter).
If you wish to have finer control (x-frames-on-y-frames-off), this may be possible; let me know and I can look into it.

@mhkabir

mhkabir commented Dec 10, 2019

If you wish to have more finer control (x-frames-on-y-frames-off), this may be possible, let me know and I can look into it.

Indeed! It would be good to have that information.

Additionally, is there a deterministic delay within which the camera will accept the command?

@nbonwit

nbonwit commented Jan 1, 2020

@dorodnic Thanks so much for all your help with this. I'm new to Realsense but I need to create HDR images of the point clouds so that I am not missing so much information in the under-exposed or over-exposed regions of my scene. I have a D415 camera since I am doing small objects at close range.

I can use the set_option function to set the exposure and then change it for my next exposure, but how do I combine the images together HDR-style to create one single point cloud? Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 1, 2020

@nbonwit If you are using a single camera that you are moving around and need to do "point cloud stitching" to merge the clouds into a single cloud, then the information in the links below may be helpful.

#1721

#1941

@nbonwit

nbonwit commented Jan 1, 2020

@MartyG-RealSense Thanks for the suggestion. However, I am not moving the camera around; I need to do HDR merging. Perhaps the libraries in the links you shared would work for HDR, but it doesn't sound like it. For example, when I have an over-exposed part of an image at one HDR setting, how do I ignore that part of the image/data and just use the properly-exposed data from a different image? I'm not sure how to properly merge the images...

@MartyG-RealSense
Collaborator

I considered your question very carefully. I am not clear on what you are trying to do. If the camera is not being moved, are you rotating the object, taking a capture and then rotating the object a bit more and taking another snapshot until you have captured the full 360 degrees of the object, please?

@nbonwit

nbonwit commented Jan 1, 2020

Sorry for the confusion. I have a static scene. I have dropouts (pixels with no 3D data) at the edges of the objects and am trying to find improved lighting conditions where I can fill those dropouts with real data. So, I'd like to take 3 separate images of the same scene with each image at a different exposure level (no physical movement of camera or object), then merge the images together to use the best data from each of the images. Just like 2D standard HDR, but with 3D point clouds.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 1, 2020

If the object remains in exactly the same position and rotation in front of the camera and the camera never moves, I wonder if it might be worth first trying to capture a single image with an auto-exposure Region of Interest (ROI) set on the camera in the area of the image corresponding to where the object is.

You can test in the RealSense Viewer whether this method will make a difference to your results by going to the 'Set ROI' button once a stream has been activated. Turn off the 'Enable Auto Exposure' box beside it and then use the Set ROI button and drag to draw a region of interest on the image.

[image]

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 1, 2020

Another question: do the objects that you are capturing have reflective surfaces, such as those on metal or jewellery? Reflections can make it more difficult for the camera to read the detail on an object. A professional solution for such situations is a 3D scanning spray, such as the one in the link below.

https://www.laserdesign.com/3d-scan-spray/

Another approach is to coat a reflective surface in a fine spray-on powder such as foot powder or baby powder.

Edit: I am away for the day now but if you would like to leave a comment below, I will be happy to continue the discussion with you when I return in 7 hours from the time of writing this. Good luck!

@nbonwit

nbonwit commented Jan 1, 2020

I tried the ROI approach and it was slightly better, but still lots of dropouts at the edges. Yes, the objects are reflective. I am imaging Lego bricks for my test scene; as you know, they are shiny/reflective on the sides.

Unfortunately the spray won't work, as I'm not able to modify the scene physically (I need to leave the scene physically unaltered).

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 1, 2020

I saw an article recently about a person who created a machine for scanning Lego bricks passing under a camera. His solution to problems caused by light variation was to blast the bricks with a very strong light-source, as shown in the making-of article for his project in the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/360037737254-Using-a-neural-network-Pi-and-camera-to-identify-and-sort-LEGO-pieces

@nbonwit

nbonwit commented Jan 2, 2020

Wow! That's super cool! Thanks for the suggestion.

@Jconn

Jconn commented Apr 1, 2020

@dorodnic is the recommended way to cycle through several exposure values still through that sequence of hex numbers that you posted on April 29, 2019?

@stwirth
Author

stwirth commented Nov 3, 2020

I'm reading in the PDF that accompanies the latest FW release 5.12.8.200 (https://dev.intelrealsense.com/docs/firmware-releases) that there is a new HDR mode for depth:
[image]

Is this / will this be supported by the ROS wrapper? @MartyG-RealSense @dorodnic ?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 3, 2020

Hi @stwirth You can read more about the new HDR support in the link below.

#7657 (comment)

In regard to HDR support in the ROS wrapper, @doronhi the RealSense ROS wrapper developer is best placed to comment on that subject.

@stwirth
Author

stwirth commented Nov 3, 2020

Thanks for the link @MartyG-RealSense, I'll have a look. Great to hear that this feature was developed!

@tomatac

tomatac commented Apr 20, 2021

Would it be possible to have three values as well?

I think so, but I only tried two. The hardware can basically sequence N "steps", with each changing Exposure / Gain / Emitter State and holding the new value for x frames. The protocol is somewhat complicated, so I can't just dump the spec, but I'll try to help as much as possible.

@dorodnic Could you please elaborate on your comment above?
I would like to use HDR using 3 or 4 steps with different exposures.

Also, is there an example that can be used to optimize the HDR exposures for a given scene?
In my application I need to inspect flat products, but the application can include products with different reflectivity, and I am typically getting areas in the center of the image where there is no depth information.
So I am looking to add an option to quickly determine optimal HDR exposure settings when inspecting a given product.

@MartyG-RealSense
Collaborator

Hi @tomatac Intel's white-paper document about HDR provides a little more information about using two distinct exposures to handle reflections and glare.

https://dev.intelrealsense.com/docs/high-dynamic-range-with-stereoscopic-depth-cameras#section-4-1-when-hdr-depth-should-be-considered

Alternatively, you could deal with the reflectivity issue automatically without needing a programming solution if you apply a physical optical filter product called a linear polarization filter over the lenses on the outside of the camera. Doing so can significantly reduce the negative impact on the image of glare from reflections.

Information about this subject can be found in section 4.4 When to use polarizers and waveplates of the Intel white-paper about optical filters

https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400#section-4-the-use-of-optical-filters

@tomatac

tomatac commented Apr 22, 2021

Thank you @MartyG-RealSense!
I looked at the article you suggested.
Do you know by any chance what linear polarizers would work? Is there a film that is supposed to stick to the lens, or a glass type that needs to be mounted in front of the lens?
On the software side, I read about the HDR. I was wondering if I can use more than 2 frames.

@MartyG-RealSense
Collaborator

The Chief Technical Officer of the RealSense Group at Intel (agrunnet) has said that any thin-film polarizer will be sufficient.

#6159 (comment)

I recommend googling for linear thin film polarizer for leads about where to purchase thin-film polarizer filters.

In regard to HDR, it seems that 2 frames is a deliberate aspect of RealSense's implementation of HDR support.

https://dev.intelrealsense.com/docs/high-dynamic-range-with-stereoscopic-depth-cameras#section-3-1-hdr-with-intel-real-sense-viewer

Line 62 onward of the SDK file hdr-merge.cpp provides further notes about how it works.

http://docs.ros.org/en/kinetic/api/librealsense2/html/hdr-merge_8cpp_source.html

@tomatac

tomatac commented Apr 22, 2021

Hi @MartyG-RealSense, Thank you! I will try the filters.

Going back to the HDR features, it looks like there are 3 "Sequence IDs": UVC, 1 and 2,
[image]
each one with different settings for exposure and gain.
Do you know what UVC is?
It is not obvious from the source file and I could not find any information on that.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 23, 2021

UVC stands for USB Video Class. It is simply a term for a USB device that can stream video. Webcams are commonly classed as UVC devices.

https://en.m.wikipedia.org/wiki/USB_video_device_class

In regard to the filter settings, a note about sequence ID in the librealsense source says that HDR mode starts from '1' and '0' is not HDR. So it is probable that, as the filter ID list is ordered 'UVC, 1, 2', the UVC setting corresponds to HDR being in the Off state.

https://github.com/IntelRealSense/librealsense/blob/v2.39.0/include/librealsense2/h/rs_option.h#L103
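For reference, the sequence options discussed here are what drive the SDK-level HDR configuration. A rough sketch, modeled on the SDK's rs-hdr example: it needs a connected D400-series camera with HDR-capable firmware, so treat the exposure values and flow as illustrative rather than tested.

```cpp
#include <librealsense2/rs.hpp>

int main() {
    rs2::context ctx;
    rs2::device dev = ctx.query_devices().front();
    rs2::depth_sensor sensor = dev.first<rs2::depth_sensor>();

    // Describe a two-step sequence: ID 1 = long exposure, ID 2 = short.
    // (Exposure values here are illustrative.)
    sensor.set_option(RS2_OPTION_SEQUENCE_SIZE, 2);
    sensor.set_option(RS2_OPTION_SEQUENCE_ID, 1);
    sensor.set_option(RS2_OPTION_EXPOSURE, 8000.f);
    sensor.set_option(RS2_OPTION_SEQUENCE_ID, 2);
    sensor.set_option(RS2_OPTION_EXPOSURE, 18.f);

    // Turn the sequence on; frames then alternate between the two setups,
    // and the rs2::hdr_merge processing block can fuse each pair.
    sensor.set_option(RS2_OPTION_HDR_ENABLED, 1);
    return 0;
}
```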
