accelerated network point cloud decoder #5
Receiving and decoding HEVC Main10 depth data from new RNHVE with NHVD

This is just a matter of configuring NHVD for HEVC.

Turning depth map into point cloud using camera matrix

The CPU will have to bash through 400k point unprojections one by one; this looks like a typical task for a GPU.

Feeding point cloud to Unity Mesh data

Unity mesh was a potential limitation here. Edit: not anymore - Unity now supports 32 bit index format.

I see two ways to do this efficiently:
I really like the first approach:
The native 400k vertices data may be wrapped without ever touching the data (a hedged sketch of this wrapping follows after the second approach below).

The second approach:
Meshes:
Having it implemented as a circular buffer allows:
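Back to the first approach: a minimal C# sketch of what wrapping the native vertex data without copying could look like, assuming the native plugin exposes a pointer to tightly packed float3 vertices for the duration of the call. The class and method names are illustrative, not the actual UNHVD code.

```csharp
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(MeshFilter))]
public class PointCloudFeeder : MonoBehaviour
{
    private Mesh mesh;

    void Awake()
    {
        mesh = new Mesh();
        // 32 bit index format lifts the old vertex count limit mentioned above
        mesh.indexFormat = IndexFormat.UInt32;
        mesh.MarkDynamic();
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // 'data' is assumed to point to 'count' tightly packed float3 vertices
    // owned by the native decoder while this call runs (hypothetical contract).
    // Requires "Allow 'unsafe' code" in the player settings.
    public unsafe void FeedVertices(IntPtr data, int count)
    {
        // Wrap the native buffer as a NativeArray - no copy, no marshalling
        NativeArray<Vector3> vertices =
            NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<Vector3>(
                (void*)data, count, Allocator.None);

#if ENABLE_UNITY_COLLECTIONS_CHECKS
        // The collections safety system needs a handle when wrapping raw memory
        NativeArrayUnsafeUtility.SetAtomicSafetyHandle(
            ref vertices, AtomicSafetyHandle.GetTempUnsafePtrSliceHandle());
#endif

        // The only remaining copy: upload into the mesh vertex buffer
        mesh.SetVertices(vertices);
    }
}
```

The NativeArray wrap itself never touches the 400k vertices; the single copy left is the SetVertices upload into the mesh vertex buffer.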
Visualizing point cloud
Edit: Unity now supports meshes with points topology.

This again mimics the current implementation of NHVD: network -> hardware decoder -> frame update -> fill Unity object with native data.
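A hedged sketch of the visualization side using the points topology mentioned in the edit above; the names are illustrative, and the index buffer only needs rebuilding when the point count changes.

```csharp
using UnityEngine;

public static class PointTopologyHelper
{
    // Assigns a points-topology index buffer covering 'count' vertices.
    // With MeshTopology.Points every index is rendered as a single point,
    // so no triangles are needed to visualize the cloud.
    public static void SetPointIndices(Mesh mesh, int count)
    {
        var indices = new int[count];
        for (int i = 0; i < count; ++i)
            indices[i] = i;

        // submesh 0, points topology, no per-call bounds recalculation
        mesh.SetIndices(indices, MeshTopology.Points, 0, false);
        // Set generous static bounds once instead of recalculating every frame
        mesh.bounds = new Bounds(Vector3.zero, new Vector3(10f, 10f, 10f));
    }
}
```

After that, per-frame updates reduce to refreshing the vertices; depending on the render pipeline, a shader that handles point primitives (and point size) may also be needed.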
Turning depth map into point cloud using camera matrix

The math behind Realsense D400 projection and deprojection
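For reference, a minimal sketch of the pinhole deprojection the Realsense material describes, assuming an undistorted D400-style depth frame. The intrinsics follow the usual fx/fy/ppx/ppy convention; the 0.001 m depth unit is the typical D400 default, not something confirmed in this thread.

```csharp
using UnityEngine;

// Pinhole intrinsics in the usual fx/fy/ppx/ppy convention.
public struct DepthIntrinsics
{
    public float fx, fy;    // focal lengths in pixels
    public float ppx, ppy;  // principal point in pixels
    public float depthUnit; // meters per depth LSB, e.g. 0.001f (assumed D400 default)
}

public static class Deprojection
{
    // Unprojects a row-major width*height 16 bit depth map into 3D points:
    //   z = depth * depthUnit
    //   x = (u - ppx) / fx * z
    //   y = (v - ppy) / fy * z
    // Distortion is ignored (D400 depth frames are typically already rectified).
    public static void DepthToPoints(ushort[] depth, int width, int height,
                                     DepthIntrinsics k, Vector3[] points)
    {
        for (int v = 0; v < height; ++v)
        {
            for (int u = 0; u < width; ++u)
            {
                int i = v * width + u;
                float z = depth[i] * k.depthUnit; // 0 means "no data"
                points[i] = new Vector3(
                    (u - k.ppx) / k.fx * z,
                    (v - k.ppy) / k.fy * z,
                    z);
            }
        }
    }
}
```

This per-pixel loop is the work that a GPU, SIMD, or a native helper such as HDU would take over.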
- working proof of concept for NHPCD
- some hardcoded data for Realsense D435

Proof of concept:
- to be turned into final clean code

Related to: bmegli/unity-network-hardware-video-decoder#5
- working proof of concept
- needs tidying up

Related to #5
Feeding point cloud to Unity Mesh data

Implemented proof of concept in:

Used the first approach. Works in realtime.
Visualizing point cloud

Implemented in the same UNHVD commit as above. So the final proof-of-concept workflow is:
- API for frame/point cloud support
- library documentation (doxygen)

Related to: bmegli/unity-network-hardware-video-decoder#5
- third example with point cloud decoding
- use new NHVD interface
- updated readme

Related to: #5
Finished & merged. The unprojection step is now in software through HDU, on my laptop this uses less then 2% of CPU and ~5 ms. To decide whether this is worth OpenCLing. A simple SIMD on x/y/z should give 2-3 times speedup even without GPU. Encoding with 2 Mb/s bitrate gives reasonable results which proofs the concept of realtime hardware accelerated point cloud streaming. This generally finishes: The rest is maintenance/polishing. |
The last remaining step in HVS accelerated depth streaming.
This means:
Just to get an idea of the amount of data:
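A rough back-of-the-envelope estimate (only the ~400k points per frame figure comes from this thread; 16 bits per depth value and 30 FPS are assumptions):

400 000 points x 2 bytes x 30 FPS ≈ 24 MB/s ≈ 192 Mbit/s of raw depth data,
versus the ~2 Mb/s HEVC Main10 bitrate mentioned above - roughly two orders of magnitude less.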