Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

How to recover colorized depth images in python #9713

Closed
gachiemchiep opened this issue Sep 1, 2021 · 15 comments
Closed

How to recover colorized depth images in python #9713

gachiemchiep opened this issue Sep 1, 2021 · 15 comments

Comments

@gachiemchiep
Copy link

Required Info
Camera Model D400
Firmware Version 05.10.06.00
Operating System & Version Ubuntu 16.04
Kernel Version (Linux Only) 4.4.0-210-generic
Platform PC
SDK Version { legacy / 2.. }
Language Python
Segment others

Issue Description

Helo @RealSenseCustomerSupport
How can we recover colorized depth images in python?

We follows the document at depth-image-compression-by-colorization-for-intel-realsense-depth-cameras and this issue, but we couldn't calculate correctly.

# Few values of depth_image_colorizer 
# depth_image_colorizer[0][0:5]
array([[  0, 131, 255],
       [  0, 131, 255],
       [  0, 133, 255],
       [  0, 133, 255],
       [  0, 137, 255]], dtype=uint8)

# The corresponding depth values
# depth_image[0][0:5]
array([ 987,  987,  993,  993, 1000], dtype=uint16)

# Our calculated values
# depth_image_recover[0][0:5]
array([[0, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 0]], dtype=uint16)

The code we used to recover colorized depth images

def RGBtoD(r, g, b):
    if r + g + b < 255:
        return 0
    elif r >= g and r >= b:
        if g >= b:
            return g - b
        else:
            return (g - b) + 1529
    elif g >= r and g >= b:
        return b - r + 510
    elif b >= g and b >= r:
        return r - g + 1020
    
def ColorToD(frame, is_disparity=False):
    min_depth = 0.29
    max_depth = 10.0
    height = frame.shape[0]
    width = frame.shape[1]
    
    depth_frame = np.zeros(frame.shape, dtype=np.ushort)
    
    for i in range(height):
        for j in range(width):
            r, g, b = frame[i][j]
            hue_value = RGBtoD(r, g, b)
             
            z_value = min_depth + (div * hue_value) / 1529.0 + 0.5
            depth_frame[i][j] = z_value # into 
            
    return depth_frame

We used the C++ code as in the document but the result didn't make sense. For [ 0, 131, 255] input, the result should be 987 but the C++ code return 6

image

@MartyG-RealSense
Copy link
Collaborator

MartyG-RealSense commented Sep 1, 2021

Hi @gachiemchiep Use of this technique to recover a colorized depth image may not be ideal because of the difficulties involved in recovering the image outside of the RealSense SDK and of converting the recovery logic to a programming language other than the C++ scripting provided in the paper. As such, I am not aware of any references about converting the technique for Python other than the discussion at #7930 that you quoted above.

There is a detailed Python discussion about image compression alternatives to the recovery method at #8117

@gachiemchiep
Copy link
Author

Hello @MartyG-RealSense
It looks like we have no other choice but switching back to C++.
Since all of our depth frame are saved as .jpg image files, is there any C++ api that we can use to convert those saved image file back into depth ?

@MartyG-RealSense
Copy link
Collaborator

MartyG-RealSense commented Sep 2, 2021

The SDK's software_device() feature can be used to import a PNG image file and generate a synthetic depth frame from it using its create_synthetic_texture function, like in the C++ example program in the link below. I am not aware of an ability to generate a depth frame from a jpg file with software_device() though.

https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device

Would it be possible to adapt your project to save depth to file in PNG format instead?

Using software_device() on Python does not currently work well in its default state because it has problems with an instruction called dev.create_matcher(RS2_MATCHER_DLR_C)

https://support.intelrealsense.com/hc/en-us/community/posts/1500000934242-exploring-equivalent-API-in-python-for-rs2-software-device-create-matcher-RS2-MATCHER-DLR-C-defined-in-c-

A RealSense user in the above-linked discussion did though provide a 'patch' that they created to try to fix some of the issues.

https://support.intelrealsense.com/hc/en-us/community/posts/1500000934242/comments/4403012747923

@MartyG-RealSense
Copy link
Collaborator

Hi @gachiemchiep Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Copy link
Collaborator

Case closed due to no further comments received.

@Camilochiang
Copy link

Just for futures references: The reason why this implementation doesn't work is cos is treating the individual channels (RGB) as bit8, therefore their addition cannot go higher than 255. Adding int() to each channel in the presented code solve the problem.

@n1ckfg
Copy link

n1ckfg commented Jul 24, 2022

@MartyG-RealSense Have you got a reverse example of your Python code, to encode depth values to color? We're making a system with a non-realtime Python encoder, and a realtime GLSL decoder.

@MartyG-RealSense
Copy link
Collaborator

Hi @n1ckfg As you mention a 'reverse' example of encoding depth to color, are you aiming to convert a color image into RealSense depth data, please? Thanks!

@n1ckfg
Copy link

n1ckfg commented Jul 24, 2022

Ah, good point. I mean starting from a grayscale depth image, converting it into the RealSense color format, streaming it as an mp4, and decoding back to the original grayscale value on the other side.

@MartyG-RealSense
Copy link
Collaborator

The main problem will be that once the depth data is converted then it will be very difficult to convert it back to accurate depth values on the other side due to loss of most of the depth information during the conversion.

It is certainly possible to live-stream RealSense data as a video feed using tools such as the RealSense GStreamer plugin at the link below.

https://github.com/WKDSMRT/realsense-gstreamer

Once the original depth data has been converted though then you could likely not go back to those original values.

It may be worth considering using RealSense's networking system so that you can transfer the depth data from one computer with the camera attached to another and then access the data in real-time on the destination computer as though it was directly accessing the camera.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

There is also an older RealSense networking tool called EtherSense that is for Python specifically.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server

https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper

@AlexanderKhazatsky
Copy link

+1 For a formula for DtoRGB. Even without compression, the formula presented in the paper fails for very simple examples. There must be an issue somewhere.

@AlexanderKhazatsky
Copy link

If anyone has written python code for the encoding and decoding scheme it would be extremely appreciated for you to share it :)

@n1ckfg
Copy link

n1ckfg commented Dec 8, 2022

Yeah, you'll need to extract it from the other stuff but this is what I came up with:
https://github.com/n1ckfg/latk-video-001

@AlexanderKhazatsky
Copy link

Agh!! You are a life savor, thanks so much!! What has your experience been like with this depth compression scheme?

@AlexanderKhazatsky
Copy link

I'm considering this colorization scheme, this one, or some kind of lossless compression. However, I will be generating huge amounts of depth data and the last option is probably not possible.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Projects
None yet
Development

No branches or pull requests

5 participants