Hi, I'm a beginner in systems engineering and I'm not familiar with camera systems. Our team is designing a robot that uses the system below:
HOST: x64 - Intel i7 without GPU
OS: Ubuntu 20.04
Sensor: D435 and others
We are applying the following options to Librealsense to get an image (a rough sketch of the equivalent code is shown after this list):
RS2_FORMAT_BGR8/Z16
Decimation Filter
Holes Filling filter
align_to(RS2_STREAM_COLOR)
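In C++ terms, our pipeline looks roughly like the sketch below (the resolutions, frame rate and filter defaults here are placeholders, not our exact configuration):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Request the color and depth formats listed above
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

    rs2::pipeline pipe;
    pipe.start(cfg);

    // Post-processing blocks and depth-to-color alignment
    rs2::decimation_filter   decimation;
    rs2::hole_filling_filter hole_filling;
    rs2::align               align_to_color(RS2_STREAM_COLOR);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();

        // Align depth to the color stream
        rs2::frameset aligned = align_to_color.process(frames);

        // Apply the post-processing filters to the aligned depth frame
        rs2::frame depth = aligned.get_depth_frame();
        depth = decimation.process(depth);
        depth = hole_filling.process(depth);
    }
    return 0;
}
```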
Due to many algorithms running on the host PC, we have faced a shortage of computing resources. I would like to know the following:
RealSense's RGB camera only has BAYER/YUYV output. Does Librealsense perform RGB conversion using software?
Are the filter application and alignment calculations also performed in software by Librealsense, rather than in hardware inside the RealSense camera?
Is there any way to offload or accelerate this processing in hardware?
We are also considering an FPGA-based accelerator to process the YUYV images and Z16 data received directly from V4L2. If the filter application and alignment calculations are done in software, can I look at the code for this?
Hi @muonkmu Alignment and filters are processed on the computer's CPU, not on the camera hardware. RealSense 400 Series cameras contain Vision Processor D4 hardware to process data, which enables the cameras to be used with low-end computers / computing devices without a strong GPU, but alignment and post-processing filters will place a burden on the CPU.
The RealSense SDK also supports 'headless' text-based camera applications and tools that do not have a requirement for graphics support.
The raw RGB output of RealSense cameras is bayered YUY, but the visual stream output by the RealSense SDK can be in a range of supported formats such as RGB8, BGR8 and YUYV.
Conversion of raw frames is handled by the camera's Vision Processor D4 hardware.
The RealSense SDK has in-built support for GLSL Processing Blocks, which are 'vendor neutral' (they should work with any GPU brand) and accelerate processing by offloading work from the CPU onto the GPU. GLSL is best used with the C++ programming language. It may also not provide a noticeable improvement when used on low-end computers / computing devices.
#3654 discusses the advantages and disadvantages of using GLSL and how to apply it.
The RealSense SDK also has a C++ example program for GLSL.
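As an illustration only, the GLSL blocks are intended as drop-in replacements for their CPU counterparts. The sketch below follows the class names in the rs_processing_gl.hpp header (rs2::gl::init_processing, rs2::gl::align, rs2::gl::colorizer); treat it as an approximation and check the rs-gl example program for the exact setup, which initializes processing against an application window:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2-gl/rs_processing_gl.hpp>

int main()
{
    // Initialize the GLSL processing extension (the rs-gl example passes its
    // application window here; a GL context must be available)
    rs2::gl::init_processing(true);

    rs2::pipeline pipe;
    pipe.start();

    // GPU-accelerated equivalents of rs2::align and rs2::colorizer
    rs2::gl::align     align_to_color(RS2_STREAM_COLOR);
    rs2::gl::colorizer colorizer;

    while (true)
    {
        rs2::frameset frames  = pipe.wait_for_frames();
        rs2::frameset aligned = align_to_color.process(frames);
        rs2::frame    depth_color = colorizer.colorize(aligned.get_depth_frame());
    }

    rs2::gl::shutdown_processing();
    return 0;
}
```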
The SDK also has support for CUDA graphics acceleration for computers / computing devices with Nvidia GPUs, such as the Nvidia Jetson range of Arm architecture computing boards. CUDA provides automatic acceleration for three specific types of operation: YUY to RGB color conversion, alignment and pointclouds.
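CUDA acceleration is enabled at build time rather than through API calls; on an Nvidia-equipped machine the SDK would typically be built with the BUILD_WITH_CUDA CMake flag (not applicable to your GPU-less i7 host):

```
cmake .. -DBUILD_WITH_CUDA=true
```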
Standard Linux tools can receive RealSense camera data if the SDK is built with the V4L2 backend (its default backend), though the stream will not be as nicely colored as it is in a RealSense application.
The SDK's code for filters and alignment can be found in the src/proc folder of its source code.