
accelerated textured depth streaming (independently encoded) #5

Closed
bmegli opened this issue Mar 10, 2020 · 8 comments
Labels
planning high level plans

Comments


bmegli commented Mar 10, 2020

Working textured depth streaming is already implemented (see #2, #4 and video), however the encoding is hacky and suboptimal.

Here the idea is to:

  • hardware encode depth and texture separately (HEVC + HEVC or HEVC + H.264)
  • wrap encoded data in a synchronized packet

The possible gains:

  • from the depth encoding time benchmark, separate encoding should add only a few ms of additional latency
  • from #4, some of the hardware encoding operations may be run concurrently, cutting even those few ms

@bmegli bmegli added the planning high level plans label Mar 10, 2020

bmegli commented Mar 10, 2020

Problem analysis.

HVE

HVE already supports multiple concurrent hardware encoders.

MLSP

Multiple application-level frames may be encoded in a single MLSP frame. It is always the responsibility of the application to interpret the data correctly.

Optimization is possible to avoid unnecessary copies of the data when preparing packets:

  • MLSP could handle a list of data fragments (pointers with sizes)
  • to avoid copying data when preparing a packet (before fragmenting to MTU)
  • a similar mechanism could be used on the receiving side
  • adding such mechanisms would make MLSP aware of carrying multi-frame data
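The fragment-list idea could look roughly like this sketch (illustrative names, not MLSP's actual API): the sender walks a list of (pointer, size) fragments and works out MTU-sized chunks directly, instead of first concatenating the subframes into an intermediate buffer.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fragment descriptor - a pointer with a size */
struct fragment { const uint8_t *data; size_t size; };

/* Split a fragment list into MTU-sized chunk lengths without
   concatenating the payload first; returns the chunk count
   or -1 if more than max_chunks would be needed. */
static int chunk_sizes(const struct fragment *frags, int n, size_t mtu,
                       size_t *sizes, int max_chunks)
{
    int chunks = 0;
    size_t filled = 0;

    for (int i = 0; i < n; ++i)
    {
        size_t left = frags[i].size;
        while (left > 0)
        {   /* take as much of this fragment as fits in the chunk */
            size_t take = (mtu - filled < left) ? mtu - filled : left;
            filled += take;
            left -= take;
            if (filled == mtu)
            {
                if (chunks == max_chunks) return -1;
                sizes[chunks++] = filled;
                filled = 0;
            }
        }
    }
    if (filled) /* final partial chunk */
    {
        if (chunks == max_chunks) return -1;
        sizes[chunks++] = filled;
    }
    return chunks;
}
```

A real implementation would copy (or scatter-gather send) the bytes of each chunk straight from the fragment pointers; only the chunk lengths are computed here to keep the sketch short.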

NHVE

Interface change:

  • accepts list of hardware encoder configurations
  • accepts list of video frames data to encode and send
  • the old interface is a special case with a single encoder/frame
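The extended interface might look something like this sketch (struct and function names are illustrative, not NHVE's real types, which live in its headers):

```c
#include <stddef.h>

/* Illustrative shape only - stands in for NHVE's frame type */
struct nhve_frame { const unsigned char *data; size_t size; };

/* Multi-encoder send: one frame per hardware encoder; the old
   single-encoder interface is just the n == 1 special case. */
static int nhve_send_all(const struct nhve_frame *frames, int n)
{
    for (int i = 0; i < n; ++i)
        if (!frames[i].data || frames[i].size == 0)
            return -1; /* incomplete input - nothing is sent */

    /* here: encode frames[i] with encoder i, then wrap all encoded
       subframes in one synchronized MLSP packet (omitted) */
    return 0;
}
```

Initialization would analogously take an array of hardware encoder configurations of the same length.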

RNHVE

Subprogram that streams depth + ir (or anything + anything in general).

HVD

HVD already supports multiple concurrent hardware decoders.

NHVD

Interface change:

  • accepts list of hardware decoder configurations
  • returns lists of decoded frames

Possibly special case implementation for:

  • depth + texture decoding -> textured unprojection
  • just 2 concurrent streams (a generic multi-stream C pointer interface from Unity would be a nightmare)

Implementation-wise it is easy. Conceptually this further breaks the generic character of NHVD (maintenance, reusability).
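For the two-stream special case, the consumer-facing check could be as simple as this sketch (hypothetical names, not NHVD's real API):

```c
#include <stddef.h>

/* Illustrative shape only - stands in for NHVD's frame type */
struct nhvd_frame { int width, height; const unsigned char *data; };

/* Textured unprojection needs both streams: proceed only when the
   depth and texture frames for this step are both present. */
static int frames_ready(const struct nhvd_frame *frames, int streams)
{
    for (int i = 0; i < streams; ++i)
        if (!frames[i].data)
            return 0; /* at least one stream has no frame yet */
    return 1;
}
```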

UNHVD

Possibly no change:

  • still wrapping native data for textured mesh
  • the work is carried in NHVD


bmegli commented Mar 11, 2020

Before proceeding.

Multi-frame approach

Here I mean, for example, sending depth and ir frames together:

  • there is the H.264 multiview profile, supported by the hardware I use, but here I need HEVC
  • there is the MV-HEVC extension, not supported on the hardware I use (is it supported anywhere yet?)

A similar functionality is provided by containers (e.g. mkv, avi, mp4) with multiple streams:

  • in a proof of concept
  • I don't want to introduce complexity with muxing/demuxing
  • when a few additional lines of code are sufficient for my needs
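Those "few additional lines" could amount to a size-prefixed layout like this sketch (an assumed format, not necessarily what the code actually uses): each subframe's length is written before the payloads so the receiver can split them again.

```c
#include <stdint.h>
#include <string.h>

/* Pack two encoded subframes (e.g. depth + ir) into one buffer:
   [u32 size0][u32 size1][payload0][payload1] */
static size_t pack_pair(uint8_t *out,
                        const uint8_t *a, uint32_t asize,
                        const uint8_t *b, uint32_t bsize)
{
    memcpy(out, &asize, 4);            /* subframe size headers */
    memcpy(out + 4, &bsize, 4);
    memcpy(out + 8, a, asize);         /* payloads back to back */
    memcpy(out + 8 + asize, b, bsize);
    return 8 + (size_t)asize + bsize;
}
```

The receiver reads the two headers, then hands each payload to its own hardware decoder.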

Multi-frame pipeline

It is possible to (for example):

  • receive depth frame
  • start decoding
  • receive ir frame (while decoding)
  • decode ir frame
  • combine data
  • which altogether would save some time

However with wireless lossy medium:

  • we expect to lose some data (we don't want TCP/IP)
  • it is possible that:
    • we would receive depth frame
    • we would lose ir frame
  • which would leave the hardware decoders in different states
  • which would lose the synchronization between depth/ir frames

From engineering perspective:

  • coupling the data transport layer with application-level logic is a bad idea
  • it would make independent work on MLSP impossible


bmegli commented Apr 18, 2020

Proof-of-concept finished across multi-frame branches in all repositories.

Subjective impressions: works much better, requires a lower bitrate, needs more GPU.

This needs some serious cleanup before merging.

bmegli added a commit to bmegli/network-hardware-video-encoder that referenced this issue Apr 26, 2020
- single library interface for:
  - initializing single/multiple hardware encoders
  - encoding and sending multiple logical subframes together
- example of multi streaming
- merge common code paths (error handling)
- update readme for multi-frame streaming

Closes #4 
Related to bmegli/hardware-video-streaming#5
bmegli added a commit to bmegli/network-hardware-video-decoder that referenced this issue Apr 26, 2020
- single library interface for:
  - initializing single/multiple hardware decoders
  - getting single/multiple frames
- example of multi-frame receiving and decoding
- cloud example for multi-frame approach
- merge common code paths (error handling)
- improve sanity checks for depth decoding & unprojection
- note about multi-frame streaming in readme

Closes #9, #11 
Related to bmegli/hardware-video-streaming#5
bmegli added a commit to bmegli/realsense-network-hardware-video-encoder that referenced this issue Apr 26, 2020
- migrate code to NHVE multi-frame API
- multi-frame textured depth streaming example (HEVC Main10 depth + HEVC ir)
- readme multi-frame textured depth info

Closes #9 
Related to bmegli/hardware-video-streaming#5
bmegli added a commit to bmegli/unity-network-hardware-video-decoder that referenced this issue Apr 26, 2020
- multi-frame textured point cloud streaming
- update readme for new textured depth streaming
- use depth units compatible with older Realsense cameras

Closes #15 
Related to bmegli/hardware-video-streaming#5

bmegli commented Apr 26, 2020

Finished and merged into master.

Needs some documentation.


bmegli commented Apr 27, 2020

Documentation updated.

It would be nice to document the new pipeline with a video, but that is the distant future (if at all).

First:

  • check if LattePanda Alpha can handle multi-frame encoding
  • implement mechanism for duplicate packets in MLSP
  • split NHVD to two libraries
  • fix some of the artifacts in UNHVD
  • do not unproject depth to point cloud under mutex (e.g. double buffering)

This is the minimum required for the next "release".


bmegli commented May 20, 2020

Refreshing - we are now after "split NHVD to two libraries"


bmegli commented May 24, 2020

Refreshing - we are now after "fix some of the artifacts in UNHVD"


bmegli commented May 31, 2020

This is finished now.

Some loosely related improvements are ongoing.

@bmegli bmegli closed this as completed May 31, 2020