How to convert depth image to 16-bit (or any desired level of quantization)? #12069
Comments
Hi @nogaini I am not familiar with the inner workings of the cv2.convertScaleAbs function's alpha parameter, so I do not know a formula for computing alpha for conversion to other quantization levels. I will try to provide some hopefully useful information, though.

The official OpenCV documentation for cv2.convertScaleAbs describes alpha as a scale factor: https://docs.opencv.org/3.4/d2/de8/group__core__array.html#ga3460e9c9f37b563ab9dd550c4d8c4e7d

My research indicates that when using cv2.convertScaleAbs, the alpha and beta parameters adjust the contrast and brightness of an image, with alpha affecting the contrast and beta affecting the brightness. The alpha value is '1' by default. A value of 0.03 is generally considered an appropriate alpha for converting a RealSense depth image to an OpenCV mat.

However, a RealSense depth image does not need to be converted to 16-bit, as it is already 16-bit by default (its raw pixel depth values are in the uint16_t format). Are you seeking to convert a 16-bit RealSense image to a 16-bit OpenCV mat instead of an 8-bit one, please? If you are, then #7081 may be a helpful L515 reference.

The pixel depth values of uint16_t can be between 0 and 65535. The real-world depth in meters can be calculated by multiplying the uint16_t value by the depth unit scale of a particular RealSense camera model. The L515's default depth scale is 0.000250 (whilst the 400 Series D-models mostly use 0.001 as their default scale). So, for example, if the uint16_t value was 8500, then the real-world distance in meters on L515 would be 8500 x 0.000250 = 2.125 meters.

When a 16-bit RealSense image is converted to an 8-bit OpenCV mat, the range 0-255 is used instead of 0-65535.
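The depth-scale arithmetic described above can be sketched as follows. This is a minimal illustration, not librealsense code: `raw_depth_to_meters` is a hypothetical helper name, and the scale constants are simply the defaults quoted in this thread; on real hardware, query the device's depth sensor for the actual scale.

```python
# Sketch of the depth-scale arithmetic described in the comment above.
# These constants are the defaults mentioned in this thread; a real
# application should read the scale from the device (e.g. via the
# depth sensor's depth-scale query) rather than hard-coding it.
L515_DEPTH_SCALE = 0.000250   # L515 default depth unit scale
D400_DEPTH_SCALE = 0.001      # typical 400 Series default

def raw_depth_to_meters(raw_value, depth_scale):
    """Convert a raw uint16 depth pixel value to a distance in meters."""
    return raw_value * depth_scale

# The example from the comment: a raw value of 8500 on an L515
distance_m = raw_depth_to_meters(8500, L515_DEPTH_SCALE)
print(distance_m)  # approximately 2.125 meters
```

The same raw value read on a D400-series camera (scale 0.001) would instead correspond to 8.5 meters, which is why the depth scale must match the camera model.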
Hello, can you help me? Please answer my question #12072
Hi @nogaini Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, Sorry about the late reply! Thank you for your assistance, this was really helpful. :) Closing this issue now as it's been resolved.
No problem at all, I'm pleased that I was able to help. Thanks very much for the update! |
Issue Description
In this file in the repo, I noticed on line 56 that the `cv2.convertScaleAbs()` method is used with `alpha=0.03` for converting the depth image into 8-bit. How can this be modified to quantize the depth image to any level, for instance, 16-bit?

I believe `alpha` is computed as `alpha = ((2**q) - 1) / (max_val - min_val)`, where `q` is the quantization level (e.g. 8, 16, etc.), and `max_val` and `min_val` are the maximum and minimum values, respectively, in the depth image. For `alpha` to be equal to 0.03 for conversion to 8-bit as in the example file linked above, `max_val` has to be 8500 (since `min_val` is 0). But I'm consistently getting values higher than 8500 in the depth image. So could you tell me why `alpha` is set to 0.03? And what exactly is the formula to compute `alpha` for conversion to other quantization levels?