
Real Time Point Cloud Visualization #8806

Closed · karlita101 opened this issue Apr 12, 2021 · 11 comments
karlita101 commented Apr 12, 2021


Required Info
Camera Model: D400
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Win (8.1/10)
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: PC / Raspberry Pi / NVIDIA Jetson / etc.
SDK Version: legacy / 2.x
Language: opencv/python
Segment: Robot / Smartphone / VR / AR / others

Issue Description

Hi there, I noticed that the pointcloud notebook https://github.com/dorodnic/binder_test/blob/master/pointcloud.ipynb by @dorodnic isn't available anymore (nothing is loading). Will this be available again soon?

Are there any other recommended point cloud visualization methods that are suitable for real-time visualization and do not require PLY exporting/loading?

None of the other notebook examples are loading either:

https://github.com/IntelRealSense/librealsense/tree/jupyter/notebooks


MartyG-RealSense commented Apr 12, 2021

Hi @karlita101 Thanks very much for the report. I tested all of the Jupyter examples and you are correct: none of them are rendering, though the source code of the pages is still accessible through the 'Display the source blob' option.

I have passed on your report to Intel. Thanks again!


Edit: I found that the same endless spinning loading wheel occurs on .ipynb pages in a non-RealSense GitHub repository as well, just like on the Python examples. So it may be an issue whose cause is external to the RealSense GitHub site rather than a problem with the Jupyter Python examples themselves.


In regard to alternative real-time point cloud examples for Python, the SDK example program opencv_pointcloud_viewer.py may meet your needs.

https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/opencv_pointcloud_viewer.py

Another approach is to make use of Open3D, like in the script in the link below:

isl-org/Open3D#473 (comment)

The pyntcloud point cloud library, which is compatible with 400 Series cameras, can also be recommended.

https://github.com/daavoo/pyntcloud

@MartyG-RealSense

Hi @karlita101 The Jupyter Notebook tutorials are accessible again now, including the pointcloud one that you needed.

https://github.com/dorodnic/binder_test/blob/master/pointcloud.ipynb


karlita101 commented Apr 17, 2021

Great, thank you so much @MartyG-RealSense!

I will try the other methods out.

I've read into the Open3D method and played around with it myself. The FPS seems to be bottlenecked by the video stream loop (my code runs at ~5 fps).

Any insight into methods that might get closer to the 30 fps that the streams are initialized at?

```python
import numpy as np
import pyrealsense2 as rs
import open3d
from datetime import datetime

pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream color and depth
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)

# Start streaming
profile = pipeline.start(config)

# Get the depth sensor's depth scale (see rs-align example for explanation)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

# Turn ON emitter
depth_sensor.set_option(rs.option.emitter_always_on, 1)

# Align depth to color
align_to = rs.stream.color
align = rs.align(align_to)

# Frame rate counter
frate_count = []

# Streaming loop
try:
    vis = open3d.visualization.Visualizer()
    vis.create_window("Test")
    pcd = open3d.geometry.PointCloud()

    while True:
        # Start the per-frame timer
        dt0 = datetime.now()
        vis.add_geometry(pcd)
        pcd.clear()

        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()

        # Align the depth frame to the color frame
        aligned_frames = align.process(frames)

        # Get depth intrinsics and convert to Open3D pinhole camera intrinsics
        intrinsics = aligned_frames.profile.as_video_stream_profile().intrinsics
        pinhole_camera_intrinsic = open3d.camera.PinholeCameraIntrinsic(
            intrinsics.width, intrinsics.height,
            intrinsics.fx, intrinsics.fy, intrinsics.ppx, intrinsics.ppy)

        aligned_depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()

        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        img_color = open3d.geometry.Image(color_image)
        img_depth = open3d.geometry.Image(depth_image)

        # Create RGBD image
        rgbd_image = open3d.geometry.RGBDImage.create_from_color_and_depth(
            img_color, img_depth, convert_rgb_to_intensity=False)

        # Create point cloud from the RGBD image using the camera intrinsics
        temp = open3d.geometry.PointCloud.create_from_rgbd_image(
            rgbd_image, pinhole_camera_intrinsic)

        # Flip it, otherwise the point cloud will be upside down
        temp.transform([[1, 0, 0, 0], [0, -1, 0, 0],
                        [0, 0, -1, 0], [0, 0, 0, 1]])

        pcd.points = temp.points
        pcd.colors = temp.colors

        vis.update_geometry(pcd)
        vis.poll_events()
        vis.update_renderer()

        # Calculate per-frame processing time and FPS
        process_time = datetime.now() - dt0
        print("FPS = {0}".format(1 / process_time.total_seconds()))
        frate_count = np.append(frate_count, 1 / process_time.total_seconds())

except KeyboardInterrupt:
    print("Press Ctrl-C to terminate while statement")
    mean_frp = np.mean(frate_count)
    std_frp = np.std(frate_count)
    print("Mean FPS PC estimation", mean_frp)
    print("Stdev FPS PC estimation", std_frp)

finally:
    pipeline.stop()
    vis.destroy_window()
```


MartyG-RealSense commented Apr 18, 2021

If you believe that the color stream is causing your program to lag, you could try putting depth and color on separate pipelines. An example of a multi-pipeline application in Python is provided below, in a script with two pipelines that puts IMU on one pipeline and depth / color on the other.

#5628 (comment)
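For illustration, here is a rough sketch of the two-pipeline idea applied to depth and color (this arrangement is an assumption for this use case, not the IMU/depth split used in the linked script):

```python
import pyrealsense2 as rs

# One pipeline per stream, so a slow consumer of one stream
# does not hold back the other.
depth_pipe = rs.pipeline()
depth_cfg = rs.config()
depth_cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
depth_pipe.start(depth_cfg)

color_pipe = rs.pipeline()
color_cfg = rs.config()
color_cfg.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
color_pipe.start(color_cfg)

try:
    while True:
        # Each pipeline is polled independently
        depth_frame = depth_pipe.wait_for_frames().get_depth_frame()
        color_frame = color_pipe.wait_for_frames().get_color_frame()
        if not depth_frame or not color_frame:
            continue
        # ... process depth and color independently here ...
finally:
    depth_pipe.stop()
    color_pipe.stop()
```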


karlita101 commented Apr 18, 2021

> If you believe that the color stream is causing your program to lag, you could try putting depth and color on separate pipelines. An example of a multi-pipeline application in Python is provided below, in a script with two pipelines that puts IMU on one pipeline and depth / color on the other.
>
> #5628 (comment)

Thank you Marty, I will check it out. I added a few checkpoints and indeed saw that the alignment process was taking ~50% of the processing time, with the other half spent updating the visualizer.

I have tried opencv_pointcloud_viewer.py and am getting ~20 fps.

For my application, I am looking to generate a point cloud only for a specific region of interest (ROI) defined by the bounding area of ArUco markers detected in the RGB image. So I implemented the rs.align() process as in my code above and in the rs-align example.

Here I notice that the alignment process does not take nearly as much time as in the Open3D script (~0.008 s). It's still not quite clear to me why the align process would be so much quicker in this implementation if it follows the same code lines.

1) Would you have any insight into why this may be, given the same code lines?

```python
align_to = rs.stream.color
align = rs.align(align_to)

while True:
    # Grab camera data
    if not state.paused:
        dt0 = datetime.now()

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()

        # depth_frame = frames.get_depth_frame()
        # color_frame = frames.get_color_frame()

        # KR: Align the depth frame to the color frame
        aligned_frames = align.process(frames)

        # Get aligned frames
        depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()
```

2) The SDK method initializes the entire point cloud object before the depth frame is aligned to the colour frame. Would this cause any issues?

3) Is it possible to change depth_frame to aligned_depth_frame, and ONLY calculate points for a specific ROI?

  • I already have a binary mask for the ROI, so I wanted to apply a cv2.bitwise operation to keep only the points where the mask is true, which is why I was trying to use Open3D, since there I can work directly with the depth image rather than the frame (see the sketch below).
  • It just isn't clear to me whether you can pass an image into pc.calculate, or only specify a pixel range of an entire frame, i.e. points = pc.calculate(depth_frame) ==> pc.calculate(aligned_depth_frame[specify range here])
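For illustration, a minimal sketch of that masking idea, reusing the variables from the Open3D script above (aligned_depth_frame, color_frame, pinhole_camera_intrinsic) and assuming a hypothetical 640x480 uint8 binary mask roi_mask produced by the ArUco detection:

```python
import numpy as np
import cv2
import open3d

# Assumed inputs from the streaming loop above: aligned depth/color frames
# and roi_mask (255 inside the ROI, 0 outside), all 640x480.
depth_image = np.asanyarray(aligned_depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())

# Zero the depth outside the ROI; zero-depth pixels produce no points
masked_depth = cv2.bitwise_and(depth_image, depth_image, mask=roi_mask)

rgbd = open3d.geometry.RGBDImage.create_from_color_and_depth(
    open3d.geometry.Image(color_image),
    open3d.geometry.Image(masked_depth),
    convert_rgb_to_intensity=False)

# Only pixels inside the ROI contribute points to the cloud
roi_pcd = open3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd, pinhole_camera_intrinsic)
```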

Thank you

@MartyG-RealSense

  1. In general, depth to color alignment is a processing-intensive activity. You can take advantage of CUDA acceleration of depth to color alignment if you have a computer / computing device with an Nvidia graphics GPU (which includes Nvidia Jetson boards) and build librealsense with CUDA support enabled.

  2. Other than the processing demands of alignment mentioned above that may slow down the program on low-power hardware, I cannot think of anything to warn about.

  3. This link might provide some guidance in regard to defining a specific ROI on an aligned image with Python:

#2769

In regard to the aligned_depth_frame aspect, though, you may find the Python script in the link below useful for generating a point cloud with pc.calculate and aligned_depth_frame (see the sketch after the link).

#2526
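For illustration, a minimal sketch (not the script from the linked issue) of feeding pc.calculate the aligned depth frame, assuming a D400 streaming depth and color at 640x480 as elsewhere in this thread:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)
pc = rs.pointcloud()

try:
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    aligned_depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()

    # Map the color texture and calculate the cloud from the aligned depth
    pc.map_to(color_frame)
    points = pc.calculate(aligned_depth_frame)

    # Vertices as an (N, 3) float32 array of XYZ coordinates in metres
    vtx = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    print(vtx.shape)
finally:
    pipeline.stop()
```

Since the cloud is calculated from the aligned depth frame, the vertices follow the same row-major pixel ordering as the aligned color image, so an ROI mask flattened with reshape(-1) can be used afterwards to select only the points of interest.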

If you are primarily concerned with obtaining the details of a single specific RGB pixel coordinate, instead of generating an entire aligned point cloud, then an alternative that should process faster is to use rs2_project_color_pixel_to_depth_pixel.

#6239 (comment)

@karlita101

Thanks Marty, will look into all the suggestions!

@sanjaiiv04

@MartyG-RealSense Hey, when I run opencv_pointcloud_viewer.py I notice some kind of projection of objects in the point cloud. This is the result of running the script:

[gif: recording of the point cloud output]
You can see a projection of my body behind me. Is this supposed to happen? How do you take care of it? I am new to point clouds and this is a little confusing to me.

@MartyG-RealSense

Hi @sanjaiiv04 No, that image looks incorrect. I have not seen an image like that before from opencv_pointcloud_viewer.py. Below is an example of a correct looking image.

[screenshot: example of a correctly rendered point cloud from opencv_pointcloud_viewer.py]

If you are new to point clouds and Python is not a compulsory requirement for you, you can generate one in the RealSense Viewer tool: click the '3D' option in the top corner of the Viewer window to enter 3D point cloud mode, then enable the depth stream first and the RGB stream second to produce an RGB-textured point cloud.

@sanjaiiv04

@MartyG-RealSense I think it's how the camera works. Plane surfaces like floors and ceilings are captured and rendered properly as point clouds, but other objects tend to have an offset shadow behind them. I guess this is because the RealSense uses LiDAR for depth perception, and as light cannot travel through objects, there is an offset shadow behind them. What you sent as a reference is a flat plane, and hence it renders perfectly.


MartyG-RealSense commented Nov 16, 2023

Thanks @sanjaiiv04 The L515 has some qualities in its depth analysis that react differently from the 400 Series cameras when observing certain types of surface. For example, if an L515 tries to depth sense a transparent plastic bottle of water then it sees through the bottle and renders the objects behind the bottle on the depth image.

Setting the L515 visual preset Short Range can help to deal with this kind of situation, as it reduces the values of the Laser Power and Receiver Gain settings. Below is an example of Python code for doing so.

```python
option = rs.option.visual_preset
preset = rs.l500_visual_preset.short_range
depth_sensor.set_option(option, preset)
```
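For context, a minimal sketch of how that snippet could slot into a script, assuming the depth sensor is fetched from the pipeline profile as in the code earlier in this thread (the int() cast is an assumption, since set_option expects a numeric value):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())

# Fetch the L515 depth sensor and apply the Short Range visual preset,
# which lowers the Laser Power and Receiver Gain settings
depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.visual_preset,
                        int(rs.l500_visual_preset.short_range))
```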
