
Depth Information in python code with camera d435i doesn't match with Intel Realsense View #12676

Closed
nokey93 opened this issue Feb 16, 2024 · 14 comments

@nokey93 commented Feb 16, 2024


Required Info
Camera Model D435i
Firmware Version 5.15.1
Operating System & Version Win 10
Kernel Version (Linux Only) N/A
Platform PC
SDK Version 2.54.2.5684
Language python
Segment others

I am using a D435i camera for my research. I am trying to find the corners of a box with OpenCV, then use the corner pixel coordinates to look up the depth at those points.

[image: detected box with marked corners (corners_box)]

I could find the corners and their pixel coordinates (not exactly the corners, but acceptable). But when I try to find the depth with the "realsense_depth" script, it shows me something like this: Corner depths: [661, 613, 703, 878]

Obviously those depth values aren't right. They could differ, but not by that much. Especially since when I use the Intel RealSense Viewer to check the corners, it shows me different results. Can anyone help me with this problem?

Here is my code:

import numpy as np
import pyrealsense2 as rs
import cv2
from realsense_depth import *

# Initialize DepthCamera
dc = DepthCamera()

try:
    ret, depth_image, color_image = dc.get_frame()
    if not ret:
        print("Failed to get frames.")
    else:
        # Process the color image to find corners
        gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)
        edged = cv2.Canny(gray, 50, 100)
        edged = cv2.dilate(edged, None, iterations=1)
        edged = cv2.erode(edged, None, iterations=1)

        cnts, _ = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        corner_positions = []  # List to hold the corners' pixel positions

        for c in cnts:
            if cv2.contourArea(c) < 10000:  # Adjust threshold as needed
                continue

            box = cv2.minAreaRect(c)
            box = cv2.boxPoints(box)
            box = order_points(box)  # order_points: corner-sorting helper (from realsense_depth or similar)
            for (x, y) in box:
                corner_positions.append((int(x), int(y)))  # Save corner pixel positions
                cv2.circle(color_image, (int(x), int(y)), 5, (0, 0, 255), -1)  # Mark corners with red dots

        cv2.imshow("depth", depth_image)  # note: raw 16-bit depth renders mostly black; apply a colormap for viewing
        # Print the list of corner positions
        print("Corner positions (pixels):", corner_positions)

        # Use the corner pixel positions to find depth
        corner_depths = []
        for (x, y) in corner_positions:
            depth = depth_image[y, x]
            corner_depths.append(depth)

        # Print the depth of each corner
        print("Corner depths:", corner_depths)

        # Show the image with detected box
        cv2.imshow('Detect Box', color_image)
        cv2.waitKey(0)


finally:
    dc.release()
    cv2.destroyAllWindows()
@MartyG-RealSense (Collaborator)
Hi @nokey93 Are you using the 2D mode of the RealSense Viewer to check the corners of the depth image and get the distance in meters of the corner from the image like in the picture below, please?

[image: RealSense Viewer 2D view showing a corner's distance in meters]

With the color image that you are using, the most appropriate way to obtain a real-world depth value for a specific color pixel would likely be to use the rs2_project_color_pixel_to_depth_pixel instruction to convert a color pixel to a depth pixel. A Python example of using this instruction is at #5603 (comment)
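The projection call from the linked example can be wrapped in a small helper. This is a hedged sketch, not confirmed against this thread's code: the function names (`clamp_pixel`, `color_pixel_depth`) and the default depth range are my own, and the argument order for `rs2_project_color_pixel_to_depth_pixel` follows the Python example linked in #5603; verify it against your SDK version. The projected pixel can fall outside the depth frame, so it is clamped before lookup.

```python
def clamp_pixel(pixel, width, height):
    """Clamp a (possibly fractional, possibly out-of-bounds) projected
    depth pixel into the depth frame so it can be used for a lookup."""
    x = min(max(int(round(pixel[0])), 0), width - 1)
    y = min(max(int(round(pixel[1])), 0), height - 1)
    return x, y

def color_pixel_depth(depth_frame, depth_scale, depth_intrin, color_intrin,
                      depth_to_color, color_to_depth, color_pixel,
                      depth_min=0.1, depth_max=10.0):
    """Map a color pixel to the depth frame and return its distance in meters.
    Requires a connected camera; intrinsics/extrinsics come from the stream
    profiles, depth_scale from the depth sensor."""
    import pyrealsense2 as rs  # lazy import; needs librealsense installed
    depth_pixel = rs.rs2_project_color_pixel_to_depth_pixel(
        depth_frame.get_data(), depth_scale, depth_min, depth_max,
        depth_intrin, color_intrin, depth_to_color, color_to_depth,
        [float(color_pixel[0]), float(color_pixel[1])])
    x, y = clamp_pixel(depth_pixel, depth_intrin.width, depth_intrin.height)
    return depth_frame.get_distance(x, y)
```

With this, each `(x, y)` corner from the OpenCV step would be passed as `color_pixel`, replacing the raw `depth_image[y, x]` lookup.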

@nokey93 (Author) commented Feb 16, 2024

Hi @MartyG-RealSense!
No, I used the 2D RGB view to identify the corners, so I know which pixels the corners are at.
After that, I look those pixels up in the depth frame to get the depth information.
I did try rs2_project_color_pixel_to_depth_pixel once, but it did not work very well in my situation, so I hoped the depth camera could do better.

@MartyG-RealSense (Collaborator)
The depth and RGB sensors on the D435i camera model have different field of view sizes, with the RGB sensor having a smaller view than depth. This means that the corner pixels on the RGB image will not correspond to where the corners are on the depth image. Because the depth image has a larger field of view, its corners will be further out beyond the edges of the RGB image.

[image: comparison of the D435i depth and RGB fields of view]

If rs2_project_color_pixel_to_depth_pixel is not used to obtain a depth coordinate, then the other main approach is to perform depth-to-color alignment to map the depth and color images together and then obtain the 3D coordinate. This can be done with rs2_deproject_pixel_to_point.
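As a sketch of the align-then-deproject approach: in the distortion-free case, `rs2_deproject_pixel_to_point` is just the pinhole back-projection, reproduced below in plain Python so the math is visible. The `corner_point` helper name is my own invention, and it assumes depth has already been aligned to color with `rs.align(rs.stream.color)`.

```python
def deproject_pixel_to_point(intrin, pixel, depth_m):
    """Pinhole back-projection: the math rs2_deproject_pixel_to_point
    performs when the stream has no distortion. intrin holds the
    fx, fy, ppx, ppy values of the (aligned) depth stream's intrinsics."""
    x = (pixel[0] - intrin["ppx"]) / intrin["fx"]
    y = (pixel[1] - intrin["ppy"]) / intrin["fy"]
    return [depth_m * x, depth_m * y, depth_m]

def corner_point(aligned_depth_frame, pixel):
    """3D point in meters for a color-image pixel, assuming the depth frame
    was aligned to color beforehand. Requires a connected camera."""
    import pyrealsense2 as rs  # lazy import; needs librealsense installed
    intrin = aligned_depth_frame.profile.as_video_stream_profile().intrinsics
    depth_m = aligned_depth_frame.get_distance(int(pixel[0]), int(pixel[1]))
    return rs.rs2_deproject_pixel_to_point(
        intrin, [float(pixel[0]), float(pixel[1])], depth_m)
```

A pixel at the principal point deprojects to a point straight ahead on the optical axis, which is a quick sanity check on the intrinsics.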

@nokey93 (Author) commented Feb 17, 2024

> [quotes @MartyG-RealSense's field-of-view explanation and alignment suggestion above]

Thank you for your suggestion! I will try to apply it and respond with results next week! 👍

@nokey93 (Author) commented Feb 19, 2024

@MartyG-RealSense thanks for your help, it works now!

@MartyG-RealSense (Collaborator)
You are very welcome. I'm pleased to hear that you were successful. Thanks very much for the update!

@wesboyt commented Feb 19, 2024

Thanks guys, I'm running into exactly this issue.

@wesboyt commented Feb 19, 2024

@MartyG-RealSense can I align depth to color like in this file, or do I need to align color to depth instead?
I'm seeing problems similar to nokey's in the multicam example, which produces heights of 11mm for items ranging from 12mm to 30mm in height.
The reason I ask is that my YOLO AI is not trained on the artifacted output described here.
I'm using a D415.
Ah, I see this: #11991 (comment)
Perhaps I can just use a larger depth resolution and align won't produce artifacts?

@MartyG-RealSense (Collaborator)
Hi @wesboyt You can align color to depth by changing align_to = rs.stream.color to align_to = rs.stream.depth so that depth is the target of alignment instead of color. As the D415 model has approximately the same field of view size for depth and color, color-to-depth alignment may not make much difference though.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py#L60

Does your problem with artifacts when aligning resemble the case at #10768?

Accuracy increases as resolution increases, so 1280x720 would be more accurate than 640x480.
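The one-line change described above can be sketched as a small factory. This is a hypothetical helper (the names `align_target_stream` and `make_align` are mine, not from the SDK or the linked example); it just parameterizes whether `rs.align` maps color onto depth or depth onto color.

```python
def align_target_stream(target):
    """Validate the alignment target: 'color' means depth is mapped onto
    the color frame; 'depth' means color is mapped onto the depth frame."""
    if target not in ("color", "depth"):
        raise ValueError("target must be 'color' or 'depth'")
    return target

def make_align(target="depth"):
    """Build the rs.align processing block. Equivalent to changing
    align_to = rs.stream.color to align_to = rs.stream.depth in the
    align-depth2color.py example."""
    import pyrealsense2 as rs  # lazy import; needs librealsense installed
    return rs.align(getattr(rs.stream, align_target_stream(target)))
```

The returned object is then used as in the example: `aligned_frames = make_align("depth").process(frames)`.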

@wesboyt commented Feb 20, 2024

It worked, thanks!

@MartyG-RealSense (Collaborator)
I'm pleased to hear that you were successful. Thanks very much for the update!

@MartyG-RealSense (Collaborator)
Hi @nokey93 Do you require further assistance with this case, please? Thanks!

@nokey93 (Author) commented Feb 27, 2024

@MartyG-RealSense no, currently I don't have any more questions!
Thank you!

@MartyG-RealSense (Collaborator)
You are very welcome, @nokey93 - thanks very much for the update. As you do not require further assistance, I will close this case. Thanks again!
