accelerated textured depth streaming (independently encoded) #5
Problem analysis.

HVE: already supports multiple concurrent hardware encoders.

MLSP: multiple application-level frames may be encoded in a single MLSP frame. It is always the responsibility of the application to interpret the data correctly. Optimization is possible to avoid unnecessary copies of the data when preparing packets:

NHVE: interface change:

RNHVE: subprogram that streams depth + ir (or anything + anything in general).

HVD: already supports multiple concurrent hardware decoders.

NHVD: interface change:

Possibly special case implementation for:

Implementation-wise it is easy. Conceptually this further breaks the generic character of NHVD (maintenance, reusability).

UNHVD: possibly no change:
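The packing idea above (several logical subframes carried in one MLSP frame, with the application responsible for interpreting the data) can be sketched roughly as follows. This is an illustrative framing scheme only; the function names and the length-prefix layout are assumptions, not the actual MLSP wire format:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical framing: each logical subframe is prefixed with its
 * 32-bit little-endian size; subframes are laid out back to back
 * inside a single MLSP payload. */

/* Pack `count` subframes into `out`; returns total bytes written.
 * The caller must ensure `out` is large enough. */
size_t pack_subframes(uint8_t *out,
                      const uint8_t *const *data, const size_t *sizes,
                      int count)
{
	size_t off = 0;
	for (int i = 0; i < count; ++i)
	{
		uint32_t s = (uint32_t)sizes[i];
		out[off++] = (uint8_t)(s & 0xFF);
		out[off++] = (uint8_t)((s >> 8) & 0xFF);
		out[off++] = (uint8_t)((s >> 16) & 0xFF);
		out[off++] = (uint8_t)((s >> 24) & 0xFF);
		memcpy(out + off, data[i], sizes[i]);
		off += sizes[i];
	}
	return off;
}

/* Unpack up to `max` subframes (pointers into `in`, no copies);
 * returns the number of subframes found, or -1 on malformed input. */
int unpack_subframes(const uint8_t *in, size_t in_size,
                     const uint8_t **data, size_t *sizes, int max)
{
	size_t off = 0;
	int count = 0;
	while (off + 4 <= in_size && count < max)
	{
		uint32_t s = (uint32_t)in[off] | ((uint32_t)in[off + 1] << 8) |
		             ((uint32_t)in[off + 2] << 16) | ((uint32_t)in[off + 3] << 24);
		off += 4;
		if (off + s > in_size)
			return -1; /* size prefix points past the payload */
		data[count] = in + off;
		sizes[count] = s;
		off += s;
		++count;
	}
	return count;
}
```

Note that the receiver gets pointers into the original payload rather than copies, which matches the goal of avoiding unnecessary copies when preparing packets.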
Before proceeding.

Multi-frame approach: here I mean, for example, sending depth and ir frames together:
A similar functionality is provided by containers (e.g. mkv, avi, mp4) with multiple streams:
Multi-frame pipeline: it is possible to (for example):

However, with a wireless lossy medium:

From an engineering perspective:
Proof of concept finished across multi-frame branches in all repositories. Subjective impressions: works much better, requires lower bitrate, needs more GPU. This needs some serious cleanup before merging.
- single library interface for:
  - initializing single/multiple hardware encoders
  - encoding and sending multiple logical subframes together
- example of multi streaming
- merge common code paths (error handling)
- update readme for multi-frame streaming

Closes #4
Related to bmegli/hardware-video-streaming#5
- single library interface for:
  - initializing single/multiple hardware decoders
  - getting single/multiple frames
- example of multi frame receiving and decoding
- multi-frame example
- cloud example for multi-frame approach
- merge common code paths (error handling)
- improve sanity checks for depth decoding & unprojection
- note about multi-frame streaming in readme

Closes #9, #11
Related to bmegli/hardware-video-streaming#5
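The unprojection mentioned above (turning a decoded depth map into a point cloud) follows the standard pinhole camera model. A minimal sketch, with illustrative struct and function names that are assumptions rather than UNHVD's actual API:

```c
/* Illustrative pinhole camera intrinsics (focal lengths and
 * principal point, all in pixels). */
struct intrinsics
{
	float fx, fy;   /* focal lengths */
	float ppx, ppy; /* principal point */
};

/* Unproject pixel (u, v) with depth d (in meters) to a 3D point p.
 * Standard pinhole model: x = d*(u-ppx)/fx, y = d*(v-ppy)/fy, z = d. */
void unproject(const struct intrinsics *in, float u, float v, float d,
               float p[3])
{
	p[0] = d * (u - in->ppx) / in->fx;
	p[1] = d * (v - in->ppy) / in->fy;
	p[2] = d;
}
```

A sanity check of the kind the PR mentions would then be, e.g., rejecting depth values of zero (invalid measurement) before unprojecting.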
- migrate code to NHVE multi-frame API
- multi-frame textured depth streaming example (HEVC Main10 depth + HEVC ir)
- readme multi-frame textured depth info

Closes #9
Related to bmegli/hardware-video-streaming#5
- multi-frame textured point cloud streaming
- update readme for new textured depth streaming
- use depth units compatible with older Realsense cameras

Closes #15
Related to bmegli/hardware-video-streaming#5
Finished and merged into master. Needs some documentation.
Documentation updated. It would be nice to document the new pipeline with a video, but that is the distant future (if at all). First:
This is the minimum required for the next "release".
Refreshing - we are now after "split NHVD to two libraries".
Refreshing - we are now after "fix some of the artifacts in UNHVD".
This is finished now. There are:
Some loosely related improvements are ongoing.
Working textured depth streaming is already implemented (see #2, #4 and the video); however, the encoding is hacky and suboptimal.
Here the idea is to:
The possible gains are:
Judging from the depth encoding time benchmark, it will only add a few ms of additional latency.
From #4, some of the hardware encoding operations may be run concurrently, even cutting the few ms of latency.
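As a toy illustration of that concurrency point (not the actual HVE code), two independent encode jobs can be submitted on separate threads so the combined wall-clock latency approaches that of the slower job rather than their sum. The job structure and the sleep standing in for the hardware encode are assumptions for illustration:

```c
#include <pthread.h>
#include <unistd.h>

/* Stand-in for one hardware encode submission (illustrative only). */
struct encode_job
{
	int frame_id;
	int done;
};

static void *encode_thread(void *arg)
{
	struct encode_job *job = arg;
	usleep(2000); /* pretend the hardware encode takes ~2 ms */
	job->done = 1;
	return NULL;
}

/* Encode depth and ir concurrently: total wall time is roughly
 * max(t_depth, t_ir) instead of t_depth + t_ir. */
void encode_concurrently(struct encode_job *depth, struct encode_job *ir)
{
	pthread_t t1, t2;
	pthread_create(&t1, NULL, encode_thread, depth);
	pthread_create(&t2, NULL, encode_thread, ir);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}
```

In the real pipeline the per-encoder submissions would go to concurrent hardware encoder contexts (which HVE already supports), with the joins replaced by collecting the encoded packets.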