
Realtime point cloud calculation in a remote computer #6535

Closed

akawashiro opened this issue Jun 7, 2020 · 11 comments

Comments

@akawashiro

Required Info
Camera Model D415
Firmware Version
Operating System & Version ubuntu 18.04
Kernel Version (Linux Only) 5.3
Platform PC
SDK Version 2.35.0
Language C++
Segment AR

Issue Description

I want to calculate a point cloud on a remote computer. It is connected via a network to the computer to which the camera is attached. I want to make a realtime application, so I cannot use a record-and-playback method like Serializing / Pickling frames / frame sets for remote align #5296. Furthermore, I don't want to send the point cloud data itself, because of the limited bandwidth of the network.

Ideally, I want to construct a point cloud only from RGB-D data which we can get with rs2::video_frame::get_data() and rs2::depth_frame::get_data(). Is there any way?
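
For reference, the sender side I have in mind is roughly the following. This is only a sketch: send_bytes() and send_intrinsics() are placeholder transport functions (not part of the SDK), and the intrinsics and depth scale would only need to be sent once up front.

    // Sketch of the sender side: transmit the raw depth and color buffers plus
    // the metadata a receiver needs to rebuild the point cloud remotely.
    // send_bytes()/send_intrinsics() stand in for whatever transport is used.
    #include <librealsense2/rs.hpp>
    #include <cstddef>

    void send_bytes(const void*, std::size_t) { /* hypothetical transport */ }
    void send_intrinsics(const rs2_intrinsics&, float) { /* sent once, out of band */ }

    int main()
    {
        rs2::pipeline pipe;
        rs2::pipeline_profile profile = pipe.start();

        // Intrinsics and depth scale are fixed for a given stream profile,
        // so they only have to cross the network once.
        auto depth_profile = profile.get_stream(RS2_STREAM_DEPTH)
                                    .as<rs2::video_stream_profile>();
        float depth_scale = profile.get_device()
                                   .first<rs2::depth_sensor>()
                                   .get_depth_scale();
        send_intrinsics(depth_profile.get_intrinsics(), depth_scale);

        while (true)
        {
            rs2::frameset frames = pipe.wait_for_frames();
            rs2::depth_frame depth = frames.get_depth_frame();
            rs2::video_frame color = frames.get_color_frame();

            // The per-frame payload is just the two raw buffers;
            // the point cloud itself is never transmitted.
            send_bytes(depth.get_data(), std::size_t(depth.get_width()) *
                       depth.get_height() * depth.get_bytes_per_pixel());
            send_bytes(color.get_data(), std::size_t(color.get_width()) *
                       color.get_height() * color.get_bytes_per_pixel());
        }
    }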

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 7, 2020

Intel recently published a white-paper document about setting up open-source ethernet networking for RealSense cameras.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

Section 2.7 of the paper explains how to add a Network Device (a camera's IP address) in the RealSense Viewer with the Add Source button, so that the Viewer's various streaming modes (including point cloud) can be used over a network.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras#section-2-7-testing-the-camera
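
The same networking extension can also be used from code rather than through the Viewer. Roughly like this (a sketch only; the exact header and class names may vary between SDK releases, and the IP address is just an example):

    // Sketch based on the white paper's networking extension (realsense2-net):
    // add a remote camera by IP address to a local context, then stream as usual.
    #include <librealsense2/rs.hpp>
    #include <librealsense2-net/rs_net.hpp>

    int main()
    {
        rs2::net_device remote_cam("192.168.0.100"); // IP of the computer with the camera
        rs2::context ctx;
        remote_cam.add_to(ctx);

        rs2::pipeline pipe(ctx);
        pipe.start();
        rs2::frameset frames = pipe.wait_for_frames(); // frames now arrive over the network
    }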

I believe that in the paper's setup, the computer with the camera attached is the Remote computer and the computer that accesses the camera data via the network is the Host computer (which has the Viewer installed on it).

@akawashiro
Author

Thank you for your quick response.

However, in my setup the two computers do not belong to the same LAN. They are connected over the Internet, so the bandwidth is limited. I therefore need to shrink each frame as much as possible. In other words, I want to calculate the point cloud from minimal data, or send a compressed point cloud itself.

Is there any other way?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 7, 2020

There is an application called Depth2Web that can package depth data for the web. It is built on JavaScript.

https://github.com/js6450/depth2web
https://github.com/js6450/depth2web/releases/tag/0.0.1
https://www.youtube.com/watch?v=eV5NIPKC_pc

Another camera-to-web system has previously been published on Intel's main GitHub (not the RealSense GitHub):

#6047

The link below, meanwhile, has a non-RealSense white paper document by Intel that discusses techniques for streaming high quality data over broadband or 5G phone connection with a lower amount of data:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/v2-volumetric-vod-streaming-whitepaper.pdf

@akawashiro
Author

Thank you for the many pointers. I will read all of them carefully.

@bmegli
Contributor

bmegli commented Jun 21, 2020

Hi @akawashiro,

From what I understand, you are mostly concerned with the bandwidth limitation and not with web technologies, so UDP or TCP/IP streaming is OK.

I solved a similar problem in the following fashion:

  • if needed, alignment of depth to color (or color to depth)
  • hardware encoding of depth to HEVC Main10 (6 bits of precision lost); you may control the precision/range trade-off (e.g. ~2 mm over a 2 m range, ~5 mm over a 5 m range, ... you have 10 bits; see the quantization sketch after this list)
  • hardware encoding of the texture
  • streaming both over UDP
  • hardware decoding of depth/texture
  • unprojection of depth to 3D with optional color
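
To make the precision/range trade-off concrete, here is a rough quantization sketch (an illustration only, not my actual encoder code): the wider the range you spread over the 1024 levels a Main10 codec can carry, the coarser each step becomes.

    // Illustration of the 10-bit precision/range trade-off: map depth in
    // millimetres onto 1024 levels. max_range_mm = 2000 gives ~2 mm steps,
    // max_range_mm = 5000 gives ~5 mm steps.
    #include <algorithm>
    #include <cstdint>

    uint16_t depth_to_10bit(uint16_t depth_mm, uint16_t max_range_mm)
    {
        uint32_t clamped = std::min<uint32_t>(depth_mm, max_range_mm);
        return static_cast<uint16_t>(clamped * 1023u / max_range_mm); // 0..1023
    }

    uint16_t depth_from_10bit(uint16_t value_10bit, uint16_t max_range_mm)
    {
        return static_cast<uint16_t>(uint32_t(value_10bit) * max_range_mm / 1023u);
    }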

Ideally, I want to construct a point cloud only from RGB-D data which we can get with rs2::video_frame::get_data() and rs2::depth_frame::get_data(). Is there any way?

See explanation here or the code here (~ 30 lines of code).
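
If the links are hard to follow, the unprojection step itself is essentially the following (a minimal sketch using the SDK's rs2_deproject_pixel_to_point; it assumes the depth intrinsics and depth scale were transferred once, e.g. at connection time):

    // Minimal sketch of unprojection: turn a raw 16-bit depth buffer plus the
    // stream intrinsics into 3D points, without ever sending a point cloud.
    #include <librealsense2/rs.hpp>
    #include <librealsense2/rsutil.h>   // rs2_deproject_pixel_to_point
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<std::array<float, 3>> unproject(const uint16_t* depth,
                                                const rs2_intrinsics& intrin,
                                                float depth_scale /* metres per unit */)
    {
        std::vector<std::array<float, 3>> points;
        points.reserve(std::size_t(intrin.width) * intrin.height);

        for (int y = 0; y < intrin.height; ++y)
            for (int x = 0; x < intrin.width; ++x)
            {
                uint16_t d = depth[y * intrin.width + x];
                if (d == 0) continue;                     // 0 means "no data"
                float pixel[2] = { float(x), float(y) };
                float point[3];
                rs2_deproject_pixel_to_point(point, &intrin, pixel, d * depth_scale);
                points.push_back({ point[0], point[1], point[2] });
            }
        return points;
    }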

so the bandwidth is limited

I typically use it with 8 Mbit/s (depth) + 1 Mbit/s (color or infrared).

If you don't have dedicated infrastructure and use a consumer-type internet provider, you will mostly be limited by upload bandwidth (typically orders of magnitude lower than download bandwidth).

What I did is Linux-only and mostly works on Intel hardware, at least Kaby Lake on the encoding side.

The repositories:

  • sender repository is RNHVE
  • receiver repository is UNHVD
  • for building this solution I wrote a few libraries, so you may reuse the bricks

You may see it working in action here:
Hardware Accelerated Point Cloud Streaming


@MartyG-RealSense wrote:

The RealSense developers also introduced a compressed depth stream format called Z16H into the SDK, although I believe it is awaiting camera firmware driver support before it is accessible.
#5564

This scheme is great and lossless, but gives compression of the order of 50-75%. This is probably still too much data for what you need.

Another recent white paper discussed how to compress depth data by using colorization:
https://dev.intelrealsense.com/docs/depth-image-compression-by-colorization-for-intel-realsense-depth-cameras

The paper is very interesting; however, typical hardware encoders don't support 4:4:4 chroma subsampling (i.e. every luminance sample matched with a chrominance sample), which would be needed for the encoding. It is an interesting scheme for software encoding, or if you are lucky enough to have a hardware encoder that supports it. Compared to my approach (HEVC Main10) it gives 10.5 bits per pixel vs 10 bits per pixel. Both ways are quite picky about the hardware they need for hardware encoding.

For software encoding it is possible to combine GStreamer/FFmpeg with the methodology from the white paper (or, if you are lucky, also hardware encoding).
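
For illustration, a generic depth-to-hue colorization looks something like the sketch below. This is not necessarily the exact transform from the white paper, just the general idea of packing depth into the hue wheel so an 8-bit RGB video pipeline can carry it:

    // Generic depth-to-hue colorization sketch: normalize depth into [0,1],
    // map it onto the hue wheel at full saturation/value, and emit 8-bit RGB.
    // Note that the wheel wraps, so min and max depth both land near red.
    #include <algorithm>
    #include <cstdint>

    struct RGB { uint8_t r, g, b; };

    RGB depth_to_hue(uint16_t depth_mm, uint16_t min_mm, uint16_t max_mm)
    {
        if (depth_mm == 0) return { 0, 0, 0 };                 // no data -> black
        float t = (std::clamp(depth_mm, min_mm, max_mm) - min_mm)
                  / float(max_mm - min_mm);                    // 0..1
        float h = std::min(t * 6.0f, 5.9999f);                 // hue sector in [0,6)
        int   i = int(h);
        float f = h - i;
        uint8_t x = uint8_t(255 * f), y = uint8_t(255 * (1.0f - f));
        switch (i)
        {
            case 0:  return { 255, x, 0 };
            case 1:  return { y, 255, 0 };
            case 2:  return { 0, 255, x };
            case 3:  return { 0, y, 255 };
            case 4:  return { x, 0, 255 };
            default: return { 255, 0, y };
        }
    }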

The link below, meanwhile, has a non-RealSense white paper document by Intel that discusses techniques for streaming high quality data over broadband or 5G phone connection with a lower amount of data:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/v2-volumetric-vod-streaming-whitepaper.pdf

This paper is also quite interesting, but it looks like the authors are mostly concerned with decoding. Typically the encoding is the harder part (to do in realtime), so I am not certain whether the authors mean encoding offline and only serving it in real time to the AR framework.

The bandwidth requirements are also quite high (assumed 100 Mbps).

Some more comments here.

Kind regards

@akawashiro
Author

akawashiro commented Jun 23, 2020

@bmegli Thank you for your helpful comments. The explanation of unprojection is especially useful to me.

Now I am trying to compress the point cloud by cutting off 6 bits and using zlib. However, your work looks better than mine because yours doesn't have to send the point cloud itself.
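
For reference, what I am trying looks roughly like this (a sketch only, shown here for a 16-bit depth buffer): mask the 6 least-significant bits of each sample so values repeat more often, then run the buffer through zlib.

    // Sketch of "cut off 6 bits, then zlib": zero the 6 least-significant bits
    // of each 16-bit sample (coarser values repeat more, so DEFLATE finds
    // longer matches) and compress the buffer with zlib.
    #include <zlib.h>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<unsigned char> compress_depth(const uint16_t* samples, std::size_t count,
                                              int bits_to_drop = 6)
    {
        std::vector<uint16_t> quantized(samples, samples + count);
        const uint16_t mask = static_cast<uint16_t>(~((1u << bits_to_drop) - 1u));
        for (uint16_t& s : quantized) s &= mask;

        uLongf out_size = compressBound(static_cast<uLong>(count * sizeof(uint16_t)));
        std::vector<unsigned char> out(out_size);
        int rc = compress2(out.data(), &out_size,
                           reinterpret_cast<const Bytef*>(quantized.data()),
                           static_cast<uLong>(count * sizeof(uint16_t)),
                           Z_BEST_SPEED);     // favour speed for a realtime stream
        if (rc != Z_OK) return {};
        out.resize(out_size);
        return out;
    }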

@bmegli
Contributor

bmegli commented Jun 23, 2020

@akawashiro,

Ok, some notes:

  • both my method and Intel colorization, when used with video codec encoding, result in lossy encoding and may introduce artifacts (the reason here is using a video codec for depth encoding)
  • your methodology with zlib would be lossless (I have no idea about speed and compression ratio)
  • you probably don't have to cut the 6 bits for zlib (I may be wrong)
    • the only reason my encoding uses 10 bits is compatibility with the HEVC Main10 (10-bit) standard
    • the reason why Intel colorization gives 10.5 bits is the conversion of depth to the Hue color space
    • both methods do this for compatibility with image processing pipelines (and codecs)

@akawashiro
Author

Thanks!

I cut off 6 bits in order to make the compressed point cloud smaller. From my experiments, the more bits we cut, the smaller the compressed data becomes.

@MartyG-RealSense
Collaborator

Hi @akawashiro Do you still require assistance with this case, please? Thanks!

@akawashiro
Author

No, I have enough information. Thank you, @MartyG-RealSense and @bmegli.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 5, 2020

@akawashiro Thanks so much for the update!
