hardware accelerated point cloud streaming (example and video) #5799
@bmegli, great write-up!
Introduction
I have recently shared an extension of the methodology that encodes depth data together with infrared texture.
The video
The details
This uses the same idea for depth encoding as before (explained in pull #5597, with an isolated example).
Additionally, infrared data is mapped directly to the HEVC Main10 P010LE chroma UV plane. Why this is even possible, the technical details and the consequences are beyond the scope of this post. Some highlights:
The source code is the same as before, just with different configuration options.
The future
I expected to get similar results (848x480@30 fps) at around 5 Mb/s (vs 8 Mb/s), with further plans of decreasing resolution and framerate to get below 2 Mb/s (suitable for longer-range wireless streaming). Unfortunately, the hacky way of encoding infrared wastes bandwidth. There are other possible approaches employing widespread hardware encoders that are worth exploring.
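To make the packing concrete, here is a minimal sketch of how depth and infrared samples could be laid out in a single P010LE frame before encoding (10 significant bits kept in the high bits of each 16-bit sample). This is not the mapping the repositories actually use; the depth clamp, the plain shifts and the 2x2 infrared downsampling are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// P010LE: 16-bit samples with 10 significant bits, MSB-aligned; 4:2:0 chroma.
struct P010Frame
{
    int width, height;
    std::vector<uint16_t> y;   // width * height luma samples
    std::vector<uint16_t> uv;  // width * height / 2 interleaved U,V samples
};

P010Frame pack(const uint16_t *depth, const uint8_t *ir, int width, int height)
{
    P010Frame f{width, height,
                std::vector<uint16_t>(width * height),
                std::vector<uint16_t>(width * height / 2)};

    // Depth -> luma: keep 10 bits of depth, shifted into bits 15..6.
    // (Illustrative only - the real code scales depth to its working range.)
    for (int i = 0; i < width * height; ++i)
    {
        const uint16_t d = std::min<uint16_t>(depth[i], 0x03FF);
        f.y[i] = static_cast<uint16_t>(d << 6);
    }

    // Infrared -> chroma: one 8-bit IR sample per 2x2 block, MSB-aligned,
    // written to both U and V here. Chroma is quarter resolution, which is
    // part of why the mapping above is called "hacky".
    for (int cy = 0; cy < height / 2; ++cy)
        for (int cx = 0; cx < width / 2; ++cx)
        {
            const uint16_t v = static_cast<uint16_t>(ir[(2 * cy) * width + 2 * cx]) << 8;
            f.uv[cy * width + 2 * cx]     = v; // U
            f.uv[cy * width + 2 * cx + 1] = v; // V
        }
    return f;
}
```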
A word of caution
For anybody planning to use a similar approach, it is important to understand the implications of using a video codec for depth encoding (it is lossy and introduces artifacts). I wouldn't use it for anything but human teleoperation, which is my personal use case and motivation.
The video
The details
The idea for encoding was explained in pull #5597.
Some of the functionality requires RealSense firmware FW 5.12.1.0 (see #5587; this was a development firmware the last time I checked).
The source code:
https://github.com/bmegli/realsense-network-hardware-video-encoder
https://github.com/bmegli/unity-network-hardware-video-decoder
The code currently works only on Unix-like operating systems (e.g. Linux) and with Intel hardware on the encoding side (HEVC Main10 encoding through VAAPI).
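For orientation, here is one way such an encoder can be set up through FFmpeg's VAAPI backend. The linked projects wrap encoding in their own libraries, so treat this only as a sketch of the idea; the DRM device path, resolution, framerate and bitrate are assumptions.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

// Returns an opened HEVC Main10 VAAPI encoder context, or nullptr on failure.
// Frames fed to it would be AV_PIX_FMT_VAAPI surfaces whose software format
// is P010LE (e.g. uploaded with av_hwframe_transfer_data).
AVCodecContext *make_hevc_main10_vaapi_encoder()
{
    const AVCodec *codec = avcodec_find_encoder_by_name("hevc_vaapi");
    if (!codec)
        return nullptr;

    AVBufferRef *device = nullptr; // assumed render node, may differ per system
    if (av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                               "/dev/dri/renderD128", nullptr, 0) < 0)
        return nullptr;

    AVBufferRef *frames = av_hwframe_ctx_alloc(device);
    AVHWFramesContext *fc = reinterpret_cast<AVHWFramesContext *>(frames->data);
    fc->format = AV_PIX_FMT_VAAPI;      // frames stay on the GPU
    fc->sw_format = AV_PIX_FMT_P010LE;  // 10-bit 4:2:0 upload format
    fc->width = 848;
    fc->height = 480;
    fc->initial_pool_size = 20;
    if (av_hwframe_ctx_init(frames) < 0)
        return nullptr;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = 848;
    ctx->height = 480;
    ctx->time_base = AVRational{1, 30};
    ctx->framerate = AVRational{30, 1};
    ctx->pix_fmt = AV_PIX_FMT_VAAPI;
    ctx->profile = FF_PROFILE_HEVC_MAIN_10;
    ctx->bit_rate = 8000000; // ~8 Mb/s, the figure mentioned in the post
    ctx->hw_frames_ctx = av_buffer_ref(frames);

    return avcodec_open2(ctx, codec, nullptr) == 0 ? ctx : nullptr;
}
```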
The isolated example of depth encoding is also linked in the realsense examples (direct link).
The pipeline
The current pipeline is:
grab depth data -> hardware encode -> send -> receive -> hardware decode -> unproject -> render
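As an illustration of the unproject step, here is a minimal sketch using librealsense's rsutil helper; it assumes the receiver has already reconstructed 16-bit depth values from the decoded frame and knows the camera intrinsics and depth unit scale (e.g. sent alongside the stream).

```cpp
#include <librealsense2/rs.h>
#include <librealsense2/rsutil.h>
#include <cstdint>
#include <vector>

// Converts a decoded depth image into XYZ triplets (meters).
// depth_units is an assumption; the RealSense default depth scale is 0.001 m.
std::vector<float> unproject(const uint16_t *depth, const rs2_intrinsics &intrin,
                             float depth_units = 0.001f)
{
    std::vector<float> points;
    points.reserve(3 * intrin.width * intrin.height);

    for (int y = 0; y < intrin.height; ++y)
        for (int x = 0; x < intrin.width; ++x)
        {
            const uint16_t d = depth[y * intrin.width + x];
            if (d == 0)
                continue; // no depth measured at this pixel

            const float pixel[2] = {float(x), float(y)};
            float point[3];
            rs2_deproject_pixel_to_point(point, &intrin, pixel, d * depth_units);
            points.insert(points.end(), {point[0], point[1], point[2]});
        }
    return points;
}
```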
The future
Let's pave the way for the future.
There are three more things that can be done:
In most cases, when hardware decoding HEVC with VAAPI, we end up with the data on the GPU side.
We can use the OpenCL/VAAPI sharing extension, namely cl_intel_va_api_media_sharing, to access the decoded surfaces from OpenCL without copying them back to system memory (a minimal sketch follows below).
Finally, it should be possible to use OpenCL/OpenGL sharing to map the unprojected data to an OpenGL vertex buffer, which in turn may be rendered with a shader.
Adding those three elements, we end up with the ultimate zero-copy, hardware accelerated point cloud pipeline: decoding, unprojection and rendering all happen on the GPU, with no copies through system memory.
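Here is a hedged sketch of the OpenCL/VAAPI sharing part: selecting an OpenCL device and creating a context bound to the decoder's VADisplay through cl_intel_va_api_media_sharing, so that decoded surfaces can later be wrapped with clCreateFromVA_APIMediaSurfaceINTEL instead of being copied to system memory. Error handling is omitted and the VADisplay is assumed to come from the existing VAAPI decoder.

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <CL/cl_va_api_media_sharing_intel.h>
#include <va/va.h>

// Creates an OpenCL context on the device that shares the given VADisplay.
cl_context create_va_sharing_context(cl_platform_id platform, VADisplay va_display)
{
    // The sharing entry points are extension functions, resolved at runtime.
    auto get_devices = (clGetDeviceIDsFromVA_APIMediaAdapterINTEL_fn)
        clGetExtensionFunctionAddressForPlatform(
            platform, "clGetDeviceIDsFromVA_APIMediaAdapterINTEL");
    if (!get_devices)
        return nullptr; // extension not available on this platform

    cl_device_id device = nullptr;
    cl_uint num_devices = 0;
    get_devices(platform, CL_VA_API_DISPLAY_INTEL, va_display,
                CL_PREFERRED_DEVICES_FOR_VA_API_INTEL, 1, &device, &num_devices);
    if (num_devices == 0)
        return nullptr;

    // Bind the context to the same VADisplay the decoder uses, so decoded
    // surfaces can be wrapped as cl_mem objects without a round trip to RAM.
    const cl_context_properties props[] = {
        CL_CONTEXT_VA_API_DISPLAY_INTEL,
        reinterpret_cast<cl_context_properties>(va_display),
        0};

    cl_int err = CL_SUCCESS;
    return clCreateContext(props, 1, &device, nullptr, nullptr, &err);
}
```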
This may seem like futile work for video codec depth encoding (lossy, with artifacts), but the time will come when 3D-HEVC reaches hardware encoders.