Real Time Point Cloud Visualization #8806
Hi @karlita101 Thanks very much for the report. I tested all of the Jupyter examples and you are correct. None of them are rendering, though the source code of the pages is still accessible through the Display the source blob option highlighted below. I have passed on your report to Intel. Thanks again!

Edit: I found that the same endless spinning loading wheel was occurring on an ipynb page like the Python examples on a non-RealSense GitHub. So it may be an issue whose cause is external to the RealSense GitHub site rather than a problem with the Jupyter Python examples.

In regard to alternative real-time point cloud examples for Python, the SDK example program opencv_pointcloud_viewer.py may meet your needs. Another approach is to make use of Open3D, as in the script in the link below. The pyntcloud point cloud library, which is compatible with 400 Series cameras, can also be recommended. |
Hi @karlita101 The Jupyter Notebook tutorials are accessible again now, including the pointcloud one that you needed. https://github.com/dorodnic/binder_test/blob/master/pointcloud.ipynb |
Great, thank you so much @MartyG-RealSense! I will try the other methods out.

I've read into the Open3D method and played around with it myself. It seems like the FPS is a bottleneck in the video stream loop (my code runs at ~5 fps). Any insight into methods that might work better with the 30 fps stream initialization?

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
# different resolutions of color and depth streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

# Start streaming
profile = pipeline.start(config)

# Getting the depth sensor's depth scale (see rs-align example for explanation)
# Turn ON Emitter
# Align
# frame rate counter

# Streaming loop
try:
    while True:
        ...  # grab frames, update the Open3D visualizer (vis)
except KeyboardInterrupt:
    pass
finally:
    vis.destroy_window()
```
|
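A common cause of a ~5 fps Open3D loop is re-creating the window or geometry on every frame. A minimal sketch of a non-blocking update loop that reuses one `PointCloud` object is below. This assumes Open3D is installed; `get_xyz()` is a hypothetical helper standing in for your own frame-grab-and-deproject code, not an SDK function.

```python
# Sketch: reuse one PointCloud and one Visualizer across frames
# instead of rebuilding them, which is a frequent FPS bottleneck.
import numpy as np
import open3d as o3d

vis = o3d.visualization.Visualizer()
vis.create_window()
pcd = o3d.geometry.PointCloud()
added = False

try:
    while True:
        xyz = get_xyz()  # hypothetical: Nx3 numpy array from the current frame
        pcd.points = o3d.utility.Vector3dVector(xyz)
        if not added:
            vis.add_geometry(pcd)  # add once, then only update
            added = True
        vis.update_geometry(pcd)
        if not vis.poll_events():  # returns False when the window closes
            break
        vis.update_renderer()
finally:
    vis.destroy_window()
```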
If you believe that the color stream is causing your program to lag, you could try putting depth and color on separate pipelines. An example of a multi-pipeline application in Python is provided below, in a script with two pipelines that puts IMU on one stream and depth / color on the other. |
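The two-pipeline arrangement described above can be sketched as follows. This is an illustrative outline, not the linked script: it assumes a camera with an IMU (e.g. D435i), and the stream rates are typical values, not requirements.

```python
# Sketch: depth/color on one pipeline, IMU on a second pipeline,
# so a slow image-processing loop does not stall IMU capture.
import pyrealsense2 as rs

# Pipeline 1: depth + color
img_config = rs.config()
img_config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
img_config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
img_pipe = rs.pipeline()
img_pipe.start(img_config)

# Pipeline 2: IMU only
imu_config = rs.config()
imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
imu_pipe = rs.pipeline()
imu_pipe.start(imu_config)

try:
    while True:
        frames = img_pipe.wait_for_frames()      # blocks on depth/color
        imu_frames = imu_pipe.poll_for_frames()  # non-blocking IMU fetch
        # ... process both frame sets ...
finally:
    img_pipe.stop()
    imu_pipe.stop()
```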
Thank you Marty, I will check it out. I added a few checkpoints and indeed saw that the alignment process was taking ~50% of the processing time, with the other half spent updating the visualizer.

I have tried opencv_pointcloud_viewer.py and am getting ~20 fps.

For my application, I am looking to generate a point cloud only for a specific region of interest (ROI) defined by the bounding area of ArUco markers detected in the RGB image. So I implemented the rs.align() process as in my code above and in the rs-align example. Here I noticed that the alignment process does not take nearly as much time as in the Open3D script (~0.008 s). It's still not quite clear to me how the align process would be much quicker with this implementation if it follows the same code.

1) Would you have any insight into why this may be, given the same code lines?

```python
align_to = rs.stream.color
align = rs.align(align_to)

while True:
    ...
```

2) The SDK method initializes the entire point cloud object before the depth frame is aligned to the colour frame. Would this cause any issues?

Thank you |
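The ROI step described above (restricting the cloud to the bounding area of detected ArUco markers) can be done by masking the depth image before generating the point cloud, so that pc.calculate only produces points inside the box. The sketch below is illustrative: `crop_depth_to_roi` and the sample corner array are hypothetical names, and the corners would in practice come from `cv2.aruco.detectMarkers` on the aligned color image.

```python
# Sketch: zero out depth values outside the axis-aligned bounding box
# enclosing all detected marker corners (pixel coordinates as (x, y)).
import numpy as np

def crop_depth_to_roi(depth_image, marker_corners):
    """Return a copy of depth_image with everything outside the
    markers' bounding box set to 0 (i.e. no depth, so no points)."""
    corners = np.vstack([np.asarray(c).reshape(-1, 2) for c in marker_corners])
    x_min, y_min = np.floor(corners.min(axis=0)).astype(int)
    x_max, y_max = np.ceil(corners.max(axis=0)).astype(int)
    roi = np.zeros_like(depth_image)
    roi[y_min:y_max + 1, x_min:x_max + 1] = \
        depth_image[y_min:y_max + 1, x_min:x_max + 1]
    return roi

# Illustrative data: a flat 1000-unit depth image and one marker
depth = np.full((480, 640), 1000, dtype=np.uint16)
corners = [np.array([[100, 50], [200, 50], [200, 150], [100, 150]], dtype=float)]
masked = crop_depth_to_roi(depth, corners)
```

Masking before alignment/point-cloud generation also reduces the number of points the visualizer has to redraw, which may help the frame rate.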
In regard to the aligned_depth_frame aspect though, you may find useful the Python script in the link below for generating a point cloud with pc.calculate and aligned_depth_frame. If you are primarily concerned with obtaining the details of a single specific RGB pixel coordinate instead of generating an entire aligned cloud, then an alternative that should have faster processing is to use rs2_project_color_pixel_to_depth_pixel |
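For intuition about what a single-pixel lookup involves versus building a whole aligned cloud, the underlying pinhole back-projection can be written in plain numpy. This is a sketch of the standard model (zero lens distortion assumed; the intrinsic values below are illustrative, not from a real camera), not the SDK's implementation.

```python
# Sketch: back-project one pixel plus its depth into a 3D point in the
# camera frame using the pinhole model (no distortion terms).
import numpy as np

def deproject_pixel_to_point(pixel, depth_m, intrinsics):
    """pixel (u, v) + depth in metres -> [X, Y, Z] in the camera frame."""
    u, v = pixel
    x = (u - intrinsics["ppx"]) / intrinsics["fx"]
    y = (v - intrinsics["ppy"]) / intrinsics["fy"]
    return np.array([x * depth_m, y * depth_m, depth_m])

intr = {"fx": 600.0, "fy": 600.0, "ppx": 320.0, "ppy": 240.0}  # illustrative
point = deproject_pixel_to_point((320.0, 240.0), 1.0, intr)
# the principal point at 1 m depth maps to (0, 0, 1)
```

Doing this once per pixel of interest is far cheaper than aligning and deprojecting every pixel of the frame, which is why the single-pixel route tends to be faster.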
Thanks Marty, will look into all the suggestions! |
@MartyG-RealSense Hey, when I run opencv_pointcloud_viewer.py, I noticed that there is some kind of projection of objects in the point cloud. This is the result of running the script: |
Hi @sanjaiiv04 No, that image looks incorrect. I have not seen an image like that before from opencv_pointcloud_viewer.py. Below is an example of a correct-looking image. If you are new to point clouds and using Python is not a compulsory requirement for you, then you can generate one in the RealSense Viewer tool: click the '3D' option in the top corner of the Viewer window to enter the 3D point cloud mode, then enable the depth stream first and the RGB stream second to produce an RGB-textured point cloud. |
@MartyG-RealSense I think it's how the camera works. Plane surfaces like floors and ceilings are being captured and rendered properly as point clouds, but other objects tend to have an offset shadow behind them. I guess this is due to the fact that the RealSense has lidar for depth perception, and as light cannot travel through objects, there is an offset shadow behind them. What you have sent as a reference is a flat plane and hence it renders perfectly. |
Thanks @sanjaiiv04 The L515 has some qualities in its depth analysis that react differently from the 400 Series cameras when observing certain types of surface. For example, if an L515 tries to depth sense a transparent plastic bottle of water then it sees through the bottle and renders the objects behind the bottle on the depth image. Setting the L515 visual preset Short Range can help to deal with this kind of situation, as it reduces the values of the Laser Power and Receiver Gain settings. Below is an example of Python code for doing so.
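(The code snippet originally posted here did not survive; below is a sketch of how the Short Range preset is typically applied with pyrealsense2, not necessarily the exact snippet. It assumes an L515 is connected.)

```python
# Sketch: switch a connected L515 to the Short Range visual preset,
# which lowers Laser Power and Receiver Gain.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.visual_preset):
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.l500_visual_preset.short_range))
```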
|
Issue Description
Hi there, I noticed that https://github.com/dorodnic/binder_test/blob/master/pointcloud.ipynb by @dorodnic isn't available anymore (nothing is loading). Will this be available again soon?
Are there any other recommended point cloud visualizations that are ideal for real-time visualization and do not require PLY exporting/loading?
None of the other notebook examples are loading either:
https://github.com/IntelRealSense/librealsense/tree/jupyter/notebooks