
Point clouds configuration


Basics

Point clouds are streamed as depth maps which are unprojected to 3D using a camera model. The camera model is defined by so-called intrinsics.

Intrinsics

For the purposes of this software you are only interested in the intrinsics fx, fy, ppx, ppy (focal lengths and principal points).

Intrinsics change with resolution.

Even two cameras of the same model (e.g. two D435s) differ in intrinsics due to manufacturing mechanical tolerances and calibration.
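
To make the role of these values concrete, here is a minimal sketch of the pinhole unprojection mentioned above (an illustration only, not the actual UNHVD code):

using UnityEngine;

public static class UnprojectionSketch
{
    // A depth sample at pixel (u, v) becomes a 3D point using the focal
    // lengths (fx, fy) and the principal point (ppx, ppy) of the depth stream.
    public static Vector3 Unproject(int u, int v, float depthMeters,
                                    float fx, float fy, float ppx, float ppy)
    {
        float x = (u - ppx) * depthMeters / fx;
        float y = (v - ppy) * depthMeters / fy;
        return new Vector3(x, y, depthMeters);
    }
}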

Alignment

For D400 and L515 cameras depth is generated from infrared streams. The IR sensors and the RGB sensor are in different physical locations and have different characteristics. The problem of matching RGB sensor data with depth data is called alignment. For textured point clouds UNHVD expects aligned data. Alignment may be performed in two directions:

  • depth to color
  • color to depth

Alignment and its direction have consequences: they determine which intrinsics and resolution apply.

RNHVE streams already aligned data, but it is up to you to:

  • decide which alignment direction to use
  • decide what resolutions to use for depth/RGB
  • configure the intrinsics accordingly

There is no alignment problem for infrared textured depth.

Depth units

16 bit depth is encoded in 10 bits. For this reason there is a trade-off between precision and maximum range. This trade-off is controlled with depth units. Which depth units to use is your decision.
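
As a rough sketch of the arithmetic, assuming only the 10 most significant bits of the 16-bit depth values survive encoding (an assumption about the encoding, not a documented guarantee):

public static class DepthUnitSketch
{
    // Assumption: the 10 most significant of the 16 depth bits are kept,
    // so the smallest encodable step is 2^6 = 64 raw units.
    public static void Describe(float depthUnitMeters)
    {
        float precision = 64 * depthUnitMeters;  // smallest representable depth step
        float maxRange = 1023 * precision;       // largest representable depth
        System.Console.WriteLine($"{depthUnitMeters} m/unit -> ~{precision} m steps, ~{maxRange} m range");
    }
}

For example, under this assumption a depth unit of 0.0001 m gives roughly 6.4 mm steps and about 6.5 m of range; larger depth units extend the range at the cost of coarser steps.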

Min and max margins

Min and max margins let you ignore points with depth below/above a certain threshold during unprojection.

Realsense cameras have a minimal working distance that depends on resolution and other factors.

Configuring the margins has the benefit of removing some depth encoding artifacts with no loss of information.
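
A minimal sketch of the idea (how min_margin and max_margin map to the actual cutoffs is handled inside UNHVD; the helper below only illustrates the filtering described above, with hypothetical lowerCutoff/upperCutoff values):

public static class DepthCutoffSketch
{
    // Points whose depth falls outside [lowerCutoff, upperCutoff] are
    // skipped during unprojection.
    public static bool KeepPoint(float depthMeters, float lowerCutoff, float upperCutoff)
    {
        return depthMeters >= lowerCutoff && depthMeters <= upperCutoff;
    }
}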

Configuration

For now the configuration is performed in the PointCloud scene's PointCloudRenderer source with DepthConfig and unhvd_hw_config.

public struct DepthConfig
{
    public float ppx;        // principal point x (pixels)
    public float ppy;        // principal point y (pixels)
    public float fx;         // focal length x (pixels)
    public float fy;         // focal length y (pixels)
    public float depth_unit; // meters per unit of depth value
    public float min_margin; // lower depth cutoff during unprojection
    public float max_margin; // upper depth margin during unprojection
}

All RNHVE depth pipelines output their intrinsics when starting. Just copy them.

Decide what depth units you need and use the same value on both the RNHVE and UNHVD sides.

Configure min_margin to at least your Realsense minimal working distance; max_margin may be set arbitrarily.

Resolution in RNHVE should match the one in UNHVD unhvd_hw_config.
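
A hypothetical example of how DepthConfig might be filled in, e.g. as a field in PointCloudRenderer (all numbers are purely illustrative; copy the intrinsics printed by your own pipeline):

// Illustrative values only - replace with the intrinsics printed by your
// RNHVE depth pipeline and the depth unit you chose on the sending side.
DepthConfig dc = new DepthConfig
{
    ppx = 319.8f, ppy = 236.5f,   // principal point, copied from RNHVE output
    fx  = 606.8f, fy  = 607.2f,   // focal lengths, copied from RNHVE output
    depth_unit = 0.0001f,         // same value as configured in RNHVE
    min_margin = 0.19f,           // at least the device minimal working distance
    max_margin = 0.01f            // see the Min and max margins section
};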

Tuning for best results

Some options are set only on the RNHVE (sending) side.

  • follow general Realsense best-practices (e.g. calibration)
  • tune Realsense resolution
  • update Realsense firmware (>= 5.12.1.0 unlocks more Depth Units options)
  • tune Realsense depth units in RNHVE and UNHVD
  • set depth min_margin in UNHVD to match your Realsense device MinZ
  • tune encoding bitrate in RNHVE CLI
  • tune encoding options in RNHVE code (e.g. increase B frames)
  • aligning depth to color seems to give a visually better result
  • use librealsense preset json configuration in RNHVE CLI