
accelerated depth streaming #1

Closed
bmegli opened this issue Jan 3, 2020 · 5 comments
Labels
planning high level plans

Comments

@bmegli (Owner) commented Jan 3, 2020

Continuing work from the realsense-ir-to-vaapi-h264 issue on encoding the depth stream, where the plan was sketched out.

Extending HVE for HEVC support

Done in HVE ac3a4c1.
P010LE encoding example was also added.

HEVC Main10 depth encoding example

Done in realsense-depth-to-vaapi-hevc10.
This configures the Realsense to output a P016LE Y plane, which is fed directly to the hardware encoder as P010LE (the two formats are binary compatible). The range/precision trade-off can be controlled.

Extending HVD for HEVC support

This is already supported.

Extending NHVE for HEVC support

Extend NHVE with the new HVE encoder interface and add a synthetic, procedurally generated HEVC Main10 P010LE example.

Extending RNHVE to support depth streaming apart from color/infrared

Rather straightforward. The only problem is that RNHVE currently uses H.264; the options are a separate repository or a configurable codec.

Extending UNHVD for depth data or creating separate project that decodes and feeds point cloud data to Unity

A bit involved if performance, framerate, and low latency are to be kept.

Probably:

  • decode HEVC on the native side (as currently with H.264)
  • recast to point cloud on the native side
  • feed mesh data on the native side
bmegli added the planning (high level plans) label Jan 4, 2020
bmegli added a commit to bmegli/network-hardware-video-encoder that referenced this issue Jan 9, 2020
- synthetic example streaming moving greyscale
- encoded with HEVC Main10 P010LE pixel format

closes #2
related to bmegli/hardware-video-streaming#1
@bmegli (Owner, Author) commented Jan 9, 2020

Extending NHVE for HEVC support

bmegli added a commit to bmegli/realsense-network-hardware-video-encoder that referenced this issue Jan 9, 2020
- rename h264 streaming to realsense-nhve-h264

This makes room for another example with HEVC depth streaming

Related to bmegli/hardware-video-streaming#1
bmegli added a commit to bmegli/realsense-network-hardware-video-encoder that referenced this issue Jan 9, 2020
- add additional executable for color/infrared/depth HEVC streaming
- update readme accordingly

Related to bmegli/hardware-video-streaming#1
@bmegli (Owner, Author) commented Jan 9, 2020

Extending RNHVE to support depth streaming apart from color/infrared

  • migration to new NHVE interface added in NHVE ef70a2c
  • HEVC Main color/infrared and HEVC Main10 depth streaming added in NHVE e63508c

@bmegli (Owner, Author) commented Jan 12, 2020

Extending HVD, NHVD, UNHVD for HEVC support

Turns out:

@bmegli (Owner, Author) commented Feb 10, 2020

This is finished already.

Video example of working functionality:

Hardware Accelerated Point Cloud Streaming

@bmegli (Owner, Author) commented Feb 11, 2020

Further improvements were discussed in librealsense#5799.

A zero-copy pipeline for point cloud streaming/decoding/unprojection/rendering was sketched out:

There are three more things that can be done:

  1. OpenCL unprojection step (hardware accelerated unprojection)

In most cases when hardware decoding HEVC with VAAPI we end up with the data on the GPU side. We can use the OpenCL/VAAPI sharing extensions, namely cl_intel_va_api_media_sharing.

  2. Map decoded VAAPI data to OpenCL (zero copy unprojection)

Finally, it should be possible to use OpenCL/OpenGL sharing to map the unprojected data to an OpenGL vertex buffer, which in turn may be rendered with a shader.

  3. Map unprojected OpenCL data to OpenGL (zero copy rendering)

Adding those three elements, we end up with the ultimate zero-copy, hardware accelerated point cloud pipeline, including:

  • decoding
  • unprojection
  • rendering
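The three interop steps could be wired together roughly as below. This is a pseudocode sketch only: the entry points come from the cl_intel_va_api_media_sharing and OpenCL/OpenGL sharing extensions, but all context setup, the unprojection kernel itself, and error handling are omitted, and the variable names are illustrative.

```c
/* pseudocode sketch of the zero-copy path, per decoded frame */

/* 1. share the decoded VAAPI surface with OpenCL */
cl_mem y_plane = clCreateFromVA_APIMediaSurfaceINTEL(
	ctx, CL_MEM_READ_ONLY, &va_surface, 0 /* plane */, &err);

/* 2. share an OpenGL vertex buffer with OpenCL */
cl_mem vertices = clCreateFromGLBuffer(ctx, CL_MEM_WRITE_ONLY, gl_vbo, &err);

/* acquire both sides, unproject on the GPU, release, then render */
clEnqueueAcquireVA_APIMediaSurfacesINTEL(queue, 1, &y_plane, 0, NULL, NULL);
clEnqueueAcquireGLObjects(queue, 1, &vertices, 0, NULL, NULL);
clEnqueueNDRangeKernel(queue, unproject_kernel, 2, NULL, global, NULL,
                       0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &vertices, 0, NULL, NULL);
clEnqueueReleaseVA_APIMediaSurfacesINTEL(queue, 1, &y_plane, 0, NULL, NULL);

/* 3. render the shared VBO, e.g. glDrawArrays(GL_POINTS, ...) */
```

The point of the acquire/release pairs is that the depth data never leaves the GPU between decode and draw.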
