Feature request: HDR mode #3844
Thanks so much for your request! High Dynamic Range tends to be a feature found in more expensive CMOS sensors. An Intel article from a few years ago on the rationale for parts selection said the following: "if long range and high quality are paramount and the product is less sensitive [to] price, it will be possible to use higher performance CMOS sensors (higher dynamic range, more sensitivity and higher quality) as well as better and larger optics so that the input images are of high quality. In the other extreme where cost and size are critical, it is possible to use small baseline, cheap consumer-grade CMOS sensors, and plastic optics". You could experiment with different Visual Preset configurations, such as 'High Density', which gives a higher fill factor. https://github.com/IntelRealSense/librealsense/wiki/D400-Series-Visual-Presets
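For illustration, a minimal sketch of selecting the 'High Density' preset in code, assuming a single connected D400 device (names follow the public rs_option.h enums):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices().front();  // first connected camera
    auto depth = dev.first<rs2::depth_sensor>();    // stereo depth module

    // 'High Density' trades some depth accuracy for a higher fill factor
    if (depth.supports(RS2_OPTION_VISUAL_PRESET))
        depth.set_option(RS2_OPTION_VISUAL_PRESET,
                         RS2_RS400_VISUAL_PRESET_HIGH_DENSITY);
    return 0;
}
```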
Thanks @MartyG-RealSense for your reply. I am aware that the cameras are optimized for low cost/weight/power consumption. While I would like to see a product with the same technology and a higher-quality CMOS, I think it is possible to get this "artificial HDR mode" working with the D400 products as they are. What I don't know is at what rate the low-level exposure setting can be changed (i.e. how high the frame rate can be when the exposure is altered after every single frame).
The link below has an interesting discussion of the math involved in determining how FPS is affected when the exposure value is changed.
@stwirth - still need to check, but I think the camera can already do this (change exposure at per-frame rate). You'd need frame metadata enabled and an init script until we can properly add it to librealsense.
@dorodnic that sounds great, can you give me some more pointers on how to do it?
Sure, but please consider that it is not yet 100% officially supported.
Once you make sure this is working, I can share more info on how to do this in code and how to modify exposure values |
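A minimal sketch of that verification, assuming the firmware reports per-frame exposure through the standard frame-metadata API (infrared stream index 1 chosen for illustration):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_INFRARED, 1); // left IR imager
    pipe.start(cfg);

    for (int i = 0; i < 100; ++i)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::video_frame ir = frames.get_infrared_frame(1);
        if (ir && ir.supports_frame_metadata(RS2_FRAME_METADATA_ACTUAL_EXPOSURE))
        {
            // With the sequencer active, this value should alternate frame-to-frame
            std::cout << "frame " << i << " exposure: "
                      << ir.get_frame_metadata(RS2_FRAME_METADATA_ACTUAL_EXPOSURE)
                      << "\n";
        }
    }
    return 0;
}
```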
@dorodnic 🚀 your magic works! I see the flickering at a high rate (the bright image is too bright though).
Great!
You should be able to replace these with your values.
@dorodnic Great! I'll try experimenting with this. Would it be possible to have three values as well? The ultimate goal would be to have these exposure values controlled by an algorithm that maximizes the fill rate.
For an example of embedding terminal commands in your code, please take a look at #2922
I think so, but I only tried two. The hardware can basically sequence N "steps", each changing Exposure / Gain / Emitter State and holding the new value for x frames. The protocol is somewhat complicated, so I can't just dump the spec, but I'll try to help as much as possible.
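The byte layout itself isn't public, but as a sketch of the transport such commands use, raw HW-monitor commands can be sent through rs2::debug_protocol (the bytes below are placeholders, not a real sequencer command):

```cpp
#include <librealsense2/rs.hpp>
#include <cstdint>
#include <vector>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices().front();

    // Wraps the device to expose the raw HW-monitor command channel
    rs2::debug_protocol hw(dev);

    // Placeholder bytes only -- the real sequencer command layout is
    // firmware-specific and was not published in this thread
    std::vector<uint8_t> cmd = { 0x14, 0x00, 0xab, 0xcd };
    std::vector<uint8_t> response = hw.send_and_receive_raw_data(cmd);
    return 0;
}
```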
@dorodnic awesome, thanks! Any hints on implementing an "HDR auto-exposure" that automatically determines the two or three exposure values? I was thinking of something like Lines 247 to 274 in 9bdbfd9
I'm not an expert on this. The team spent a lot of time developing a stable AE algorithm with just one value that would converge fast and not fall into oscillations.
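One naive starting point, purely as a sketch (all thresholds and gains here are invented for illustration, not the team's algorithm): steer the short exposure off the fraction of saturated pixels and the long exposure off the fraction of black pixels, with multiplicative updates to dampen oscillation:

```cpp
#include <cstddef>
#include <cstdint>

// Two-exposure controller sketch: 'exp_short' chases the highlights,
// 'exp_long' chases the shadows. All constants are illustrative.
void update_bracket(const uint8_t* ir, size_t n,
                    double& exp_short, double& exp_long)
{
    size_t over = 0, under = 0;
    for (size_t i = 0; i < n; ++i)
    {
        if (ir[i] > 250) ++over;   // nearly saturated
        if (ir[i] < 5)   ++under;  // nearly black
    }
    const double target = 0.01;    // tolerate ~1% clipped pixels

    // Small multiplicative steps converge slowly but resist oscillation
    if (over  > target * n) exp_short *= 0.9;  else exp_short *= 1.02;
    if (under > target * n) exp_long  *= 1.1;  else exp_long  *= 0.98;
}
```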
@dorodnic you wrote above that this trick does not work for rolling shutter. I actually need this for the D410 camera. Do you have a suggestion for that?
Firmware can only do this with the global-shutter cameras as of right now.
@dorodnic with "simulate this in software" you mean calling set_option(RS2_OPTION_EXPOSURE, ...) after each frame?
Yes
@stwirth did you manage to simulate HDR using RealSense? I am imaging objects with black and white patterns (variable proportions) in slightly variable ambient light, so one exposure (set manually) is not an option, while the AE tends to focus either on the black or the white areas, so one area becomes overexposed while the other becomes underexposed.
@g2-bernotas I stopped working on this.
@dorodnic Would you be able to share how to use the above API for controlling the emitter on a per-frame basis? Thanks!
Hi @mhkabir
Indeed! Would be good to have the information. Additionally, is there a deterministic delay within which the camera will accept the command?
@dorodnic Thanks so much for all your help with this. I'm new to RealSense but I need to create HDR images of the point clouds so that I am not missing so much information in the under-exposed or over-exposed regions of my scene. I have a D415 camera since I am doing small objects at close range. I can use the set_option function to set the exposure and then change it for my next exposure, but how do I combine the images together HDR-style to create one single point cloud? Thanks!
@MartyG-RealSense Thanks for the suggestion. However I am not moving the camera around; I need to do HDR merging. Perhaps the libraries in the links you shared would work for HDR, but it doesn't sound like it. For example, when I have an over-exposed part of an image at one HDR setting, how do I ignore that part of the image/data and just use the properly-exposed data from a different image? I'm not sure how to properly merge the images...
I considered your question very carefully. I am not clear on what you are trying to do. If the camera is not being moved, are you rotating the object, taking a capture, and then rotating the object a bit more and taking another snapshot until you have captured the full 360 degrees of the object?
Sorry for the confusion. I have a static scene. I have dropouts (pixels with no 3D data) at the edges of the objects and am trying to find improved lighting conditions where I can fill those dropouts with real data. So, I'd like to take 3 separate images of the same scene, each at a different exposure level (no physical movement of camera or object), then merge the images together to use the best data from each of the images. Just like standard 2D HDR, but with 3D point clouds.
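Since RealSense depth frames encode dropouts as zero, one way to do that merge in user code (a sketch, not an SDK feature) is to keep, per pixel, the first valid reading across the bracketed captures:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Merge N depth frames of the same static scene: per pixel, keep the
// first non-zero (valid) reading. Zero means "no data" in RealSense
// Z16 depth frames; each pointer would come from depth_frame::get_data().
std::vector<uint16_t> merge_depth(const std::vector<const uint16_t*>& frames,
                                  size_t width, size_t height)
{
    std::vector<uint16_t> merged(width * height, 0);
    for (size_t px = 0; px < merged.size(); ++px)
        for (const uint16_t* f : frames)
            if (f[px] != 0) { merged[px] = f[px]; break; }
    return merged;
}
```

The merged buffer can then be projected to a point cloud in the usual way.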
If the object remains in exactly the same position and rotation in front of the camera and the camera never moves, it might be worth first trying to capture a single image with an auto-exposure Region of Interest (ROI) set on the camera in the area of the image corresponding to where the object is. You can test in the RealSense Viewer whether this method will make a difference to your results: once a stream has been activated, turn off the 'Enable Auto Exposure' box beside the 'Set ROI' button, then use the Set ROI button and drag to draw a region of interest on the image.
Another question: do the objects that you are capturing have reflective surfaces, such as those on metal or jewellery? Reflections can make it more difficult for the camera to read the detail on an object. A professional solution for such situations is a 3D scanning spray, such as the one in the link below. https://www.laserdesign.com/3d-scan-spray/ Another approach is to coat a reflective surface in a fine spray-on powder such as foot powder or baby powder. Edit: I am away for the day now, but if you would like to leave a comment below, I will be happy to continue the discussion when I return in 7 hours from the time of writing this. Good luck!
I tried the ROI approach and it was slightly better, but there are still lots of dropouts at the edges. Yes, the objects are reflective. I am imaging Lego bricks for my test scene; as you know, they are shiny/reflective on the sides. Unfortunately the spray won't work, as I'm not able to modify the scene physically (I need to leave the scene physically unaltered).
I saw an article recently about a person who created a machine for scanning Lego bricks passing under a camera. His solution to problems caused by light variation was to blast the bricks with a very strong light source, as shown in the making-of article for his project in the link below.
Wow! That's super cool! Thanks for the suggestion.
@dorodnic is the recommended way to cycle through several exposure values still through that sequence of hex numbers that you posted on April 29, 2019?
I'm reading in the PDF that accompanies the latest FW release 5.12.8.200 (https://dev.intelrealsense.com/docs/firmware-releases) that there is a new HDR mode for depth. Is this / will this be supported by the ROS wrapper, @MartyG-RealSense @dorodnic?
Thanks for the link @MartyG-RealSense, I'll have a look. Great to hear that this feature was developed!
@dorodnic Could you please elaborate on your comment above? Also, is there an example that can be used to optimize the HDR exposures for a given scene?
Hi @tomatac Intel's white-paper document about HDR provides a little more information about using two distinct exposures to handle reflections and glare. Alternatively, you could deal with the reflectivity issue automatically, without needing a programming solution, by applying a physical optical filter product called a linear polarization filter over the lenses on the outside of the camera. Doing so can significantly reduce the negative impact of glare from reflections on the image. Information about this subject can be found in section 4.4, 'When to use polarizers and waveplates', of the Intel white-paper about optical filters.
Thank you @MartyG-RealSense!
The Chief Technical Officer of the RealSense Group at Intel (agrunnet) has said that any thin-film polarizer will be sufficient. I recommend googling for 'linear thin film polarizer' for leads on where to purchase thin-film polarizer filters. In regard to HDR, it seems that 2 frames is a deliberate aspect of RealSense's implementation of HDR support. Line 62 onward of the SDK file hdr-merge.cpp provides further notes about how it works. http://docs.ros.org/en/kinetic/api/librealsense2/html/hdr-merge_8cpp_source.html
Hi @MartyG-RealSense, thank you! I will try the filters. Going back to the HDR features, it looks like there are 3 "Sequence IDs": UVC, 1 and 2.
UVC stands for USB Video Class. It is simply a term for a USB device that can stream video. Webcams are commonly classed as UVC devices. https://en.m.wikipedia.org/wiki/USB_video_device_class In regard to the filter settings, a note about sequence ID in the librealsense scripting says that HDR mode starts from '1' and '0' is not HDR. So it is probable that, as the filter ID list is ordered 'UVC, 1, 2', the UVC setting corresponds to HDR being in the Off state.
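For later readers: these sequence IDs map onto the SDK's native HDR options added alongside that firmware (librealsense 2.39+). A sketch of the documented flow, with exposure values chosen purely as examples:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices().front();
    auto depth = dev.first<rs2::depth_sensor>();

    // Two-step sequence: ID 1 = long exposure, ID 2 = short exposure
    depth.set_option(RS2_OPTION_SEQUENCE_SIZE, 2);
    depth.set_option(RS2_OPTION_SEQUENCE_ID, 1);
    depth.set_option(RS2_OPTION_EXPOSURE, 8000); // example value, usec
    depth.set_option(RS2_OPTION_SEQUENCE_ID, 2);
    depth.set_option(RS2_OPTION_EXPOSURE, 18);   // example value, usec
    depth.set_option(RS2_OPTION_HDR_ENABLED, 1);

    rs2::pipeline pipe(ctx);
    pipe.start();

    rs2::hdr_merge merge; // combines each sequence pair into one depth frame
    while (true)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::frame merged = merge.process(fs.get_depth_frame());
        // ... use 'merged' ...
    }
}
```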
Issue Description
Related issues: #2875 (comment)
The D415 and D435i are advertised as working in different light conditions, including direct sunlight. While the auto-exposure does a good job indoors and outdoors, there are clearly times when the dynamic range of the camera reaches its limits. With direct sunlight falling through a window on the ground we have seen that either the sunny patch gets over-exposed or the shade gets under-exposed.
Example overexposed area (from left IR imager):
... and half a second later (after the auto-exposure adjusts):
In both conditions, the fill rate of the depth image drops.
As there clearly exist exposure settings that will work for either sunny or shady spots, my suggestion is to offer an "HDR mode" which cycles through different exposure settings (also called "exposure bracketing" in photography) to assemble an artificial HDR image. We tried doing this via calling
set_option(RS2_OPTION_EXPOSURE, exposure)
with different exposures after each frame, but it seems that this is not a very fast operation; the resulting frame rate dropped. Maybe it would be possible to implement this in firmware and offer it as an additional exposure option, parallel to auto-exposure? Note that exposure ROI and exposure setpoint don't help here, as there is no exposure value that makes both sun and shadow expose correctly at the same time.
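Roughly, the loop we tried looks like the following sketch (exposure values are illustrative):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();
    auto depth = profile.get_device().first<rs2::depth_sensor>();
    depth.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f);

    const float exposures[] = { 500.f, 8000.f }; // short/long, usec, illustrative
    int idx = 0;
    while (true)
    {
        // Each call is a control round-trip to the camera, and the new
        // exposure typically takes effect a few frames later
        depth.set_option(RS2_OPTION_EXPOSURE, exposures[idx]);
        idx = (idx + 1) % 2;
        rs2::frameset frames = pipe.wait_for_frames();
        // ... tag and merge frames from the alternating exposures ...
    }
}
```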