Synchronizing multiple devices #2148

Closed

rishikant-sharma opened this issue Jul 25, 2018 · 16 comments

@rishikant-sharma

Required Info
Camera Model: D415
Firmware Version: 05.09.11.00
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.13.0

Issue Description

I am trying to synchronize the depth streams from multiple D415 cameras.

Currently I set up a pipeline for each device and configure two infrared streams, one depth stream, and one color stream.
I create a thread to run a function which receives the pipeline along with the serial number of the device.
The function saves the next 300 framesets to a file.

In each iteration I call wait_for_frames() and then write the frameset to the file, so the frames within it stay in sync.

The frames captured from the same device are more or less in sync (I'm satisfied with the performance), but they are not in sync with the frames from the other devices.

Is there a way for me to set up the syncer so that frames from all devices are in sync with each other?
I believe there should be a way to use the low-level API for my purpose, but I can't seem to figure it out, as I am new to the library.

Unfortunately, hardware sync is not an option for me at the moment.
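
For reference, a minimal sketch in pyrealsense2 of the setup described above: one pipeline and one capture thread per device, saving 300 framesets each. The stream profiles are illustrative, and save_frameset() is a hypothetical placeholder for the file-writing step.

```python
import threading
import pyrealsense2 as rs

def capture(serial):
    # One pipeline per device, selected by serial number
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)
    cfg.enable_stream(rs.stream.infrared, 2, 640, 480, rs.format.y8, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

    pipe = rs.pipeline()
    pipe.start(cfg)
    try:
        for _ in range(300):
            # Frames within one frameset are already synced by the device
            frames = pipe.wait_for_frames()
            save_frameset(serial, frames)  # hypothetical file writer
    finally:
        pipe.stop()

ctx = rs.context()
serials = [d.get_info(rs.camera_info.serial_number) for d in ctx.query_devices()]
threads = [threading.Thread(target=capture, args=(s,)) for s in serials]
for t in threads:
    t.start()
for t in threads:
    t.join()
```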

@MartyG-RealSense
Collaborator

If you cannot use hardware sync, the multiple camera white paper has advice about software sync.


It is also possible to align frames in software using either their time-stamp or frame-number. Both approaches are valid but you do need to be careful during the initial step of subtracting the offsets that the frames were not read across a frame boundary – for example one camera could have a frame belonging to a previous frame time. The benefit of using the frame counters is that they will not walk off over time.
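
As a rough sketch of the frame-counter approach from that excerpt (hypothetical helpers, assuming all cameras run at the same frame rate and the initial framesets are grabbed back-to-back so no camera ticks over a frame boundary mid-read):

```python
def record_offsets(pipes):
    # pipes maps serial -> rs.pipeline. Grab one frameset per camera and
    # store its frame number as that camera's offset. Per the caveat above,
    # grab these as close together as possible.
    return {serial: pipe.wait_for_frames().get_frame_number()
            for serial, pipe in pipes.items()}

def aligned(framesets, offsets, tolerance=0):
    # framesets maps serial -> frameset. Framesets match when their
    # offset-adjusted counters agree (within tolerance).
    adjusted = [fs.get_frame_number() - offsets[serial]
                for serial, fs in framesets.items()]
    return max(adjusted) - min(adjusted) <= tolerance
```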

@dorodnic
Contributor

Hi @rishikant-sharma
This is a complex problem. If cameras are hardware-synced, you might be able to rely on the frame number (we are still evaluating this). Timestamp will be unreliable because it would come from different clocks.
If the cameras are not hardware-synced, the main problem is OS pipe latency, which on Windows can vary significantly. From our experience, Linux can offer shorter and more consistent latencies.

@zuru

zuru commented Jul 27, 2018

If cameras are hardware-synced, you might be able to rely on the frame number (we are still evaluating this).

What we have noticed is that there is no frame-number correspondence guarantee. This is primarily because when you set the device as slave, it still acquires frames even if no master signal is received. However, even if that were not the case, one would have to first set up the slaves (which would not be receiving frames at all) and then fire up the master in order to establish frame-number correspondences.

Nonetheless, as reported in #2121, it seems that HW syncing sometimes causes the color stream to stop producing frames (could it be because of intra-frame syncing?), thereby limiting the applicability of HW sync as the number of cameras increases.

Thank you for your prompt responses. We will have more feedback on these points once more concrete information is available, as we have everything set up (cabling, housings, pins, multiple sensors connected).

@pavloblindnology

@MartyG-RealSense @dorodnic To software-synchronize the clocks of different cameras and the host, a function that returns the current camera time would help. Synchronization up to the latency of that function call would then be possible. This would eliminate time drift between the different clocks, and one could rely on timestamps instead of frame counters.
The related questions are raised in
IntelRealSense/realsense-ros#419
IntelRealSense/realsense-ros#431

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
hi @pavloblindnology,

We would like to know why HW sync is not an option for you, as it would be easier and more direct.

@pavloblindnology

@RealSense-Customer-Engineering
Well, why use HW sync if you can do without any wires, just by adding an additional API function?

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
hi @pavloblindnology,
No problem at all. Two other people in the thread mentioned they can't do HW sync, so we want to know if you have a different issue regarding HW sync.

@pavloblindnology

@RealSense-Customer-Engineering
I haven't tried HW sync. I started with the ROS wrapper and found that it gave very bad synchronization. I've modified it to compute the host timestamp with the following formula: RS2_FRAME_METADATA_BACKEND_TIMESTAMP - (RS2_FRAME_METADATA_FRAME_TIMESTAMP - RS2_FRAME_METADATA_SENSOR_TIMESTAMP), under the best assumption I can allow: that RS2_FRAME_METADATA_BACKEND_TIMESTAMP corresponds to RS2_FRAME_METADATA_FRAME_TIMESTAMP.
But it seems to me synchronization could be improved by introducing the mentioned function to get the current device time, e.g. getDeviceTime. Then we could compare it to a frame's device timestamp to compute its age, and get its host timestamp as
getHostTime - (getDeviceTime - RS2_FRAME_METADATA_SENSOR_TIMESTAMP).
This approach is used, for example, by Stereolabs ZED camera.
Does this approach seem good to Intel, and if yes, is it going to be implemented?
Thanks.
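
A sketch of that backend-timestamp formula through the pyrealsense2 metadata API. Note the hedge in the comments: the fields may be reported in different units (host milliseconds vs. device microseconds) depending on backend and firmware, so the raw arithmetic below should be verified and rescaled for your platform.

```python
import pyrealsense2 as rs

def host_timestamp(frame):
    md = rs.frame_metadata_value
    backend = frame.get_frame_metadata(md.backend_timestamp)    # host clock
    frame_ts = frame.get_frame_metadata(md.frame_timestamp)     # device clock
    sensor_ts = frame.get_frame_metadata(md.sensor_timestamp)   # device clock
    # Assumes backend_timestamp was captured at (approximately) the moment
    # frame_timestamp refers to, per the comment above; subtracting the
    # in-device latency maps the exposure time back onto the host clock.
    # Shown in raw units: rescale to a common unit first if your platform
    # reports mixed units for these fields.
    return backend - (frame_ts - sensor_ts)

# Not every stream/firmware exposes all three fields, so guard usage:
# if frame.supports_frame_metadata(rs.frame_metadata_value.sensor_timestamp): ...
```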

@inders

inders commented Dec 1, 2018

Hi @rishikant-sharma
This is a complex problem. If cameras are hardware-synced, you might be able to rely on the frame number (we are still evaluating this). Timestamp will be unreliable because it would come from different clocks.
If the cameras are not hardware-synced, the main problem is OS pipe latency, which on Windows can vary significantly. From our experience, Linux can offer shorter and more consistent latencies.

Can this work for capturing images from multiple cameras, each connected to a different Jetson TX2, where up to 33 ms of latency is OK for my use case?

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
Hi @inders,

Did you try the HW sync to see how it works based on the multiple camera white paper?

@ivyzhong93

Hi @rishikant-sharma
This is a complex problem. If cameras are hardware-synced, you might be able to rely on the frame number (we are still evaluating this). Timestamp will be unreliable because it would come from different clocks.
If the cameras are not hardware-synced, the main problem is OS pipe latency, which on Windows can vary significantly. From our experience, Linux can offer shorter and more consistent latencies.

Hi, I successfully HW-synced three D415 cameras, but because they do not start at exactly the same time, the frame numbers are not aligned. Is there any way we can align or sync the frame numbers during HW sync?

@jbohnslav

Hi there,

When the camera is set to "slave", why does it still acquire images? The behavior I expected is that a camera set to "slave" would wait to receive the hardware trigger before acquiring any images. This would guarantee that frame numbers, clocks, etc. would all be synchronized, presuming you have a good hardware trigger. Is there any way to accomplish this?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 15, 2019

A slave camera continues streaming even when it is not synched. It listens for a sync trigger and if it does not receive it within a certain time then it gives up listening and triggers itself independently as though it were an unsynced camera.

I am not aware of a means of having the stream inactive until a sync signal is received. Given that a stream starting from a dead stop will take several frames before the auto-exposure settles down, having the stream on 'hot standby' listening for a trigger whilst it streams is likely to produce more accurate results.
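
For reference, a minimal sketch of how the sync roles discussed in this thread are typically assigned through the SDK's inter_cam_sync_mode option (on D4xx: 1 = master, 2 = slave). This only assigns roles; the trigger itself travels over the external sync cable.

```python
import pyrealsense2 as rs

def set_sync_role(device, mode):
    # inter_cam_sync_mode lives on the depth (stereo) sensor
    sensor = device.first_depth_sensor()
    if sensor.supports(rs.option.inter_cam_sync_mode):
        sensor.set_option(rs.option.inter_cam_sync_mode, mode)

ctx = rs.context()
devices = list(ctx.query_devices())
set_sync_role(devices[0], 1)      # one master
for dev in devices[1:]:
    set_sync_role(dev, 2)         # the rest as slaves
```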

@jbohnslav

A slave camera continues streaming even when it is not synched. It listens for a sync trigger and if it does not receive it within a certain time then it gives up listening and triggers itself independently as though it were an unsynced camera.

I am not aware of a means of having the stream inactive until a sync signal is received. Given that a stream starting from a dead stop will take several frames before the auto-exposure settles down, having the stream on 'hot standby' listening for a trigger whilst it streams is likely to produce more accurate results.

Thanks for your reply! I'm not using auto-exposure, so that doesn't seem to be an issue. I'm using this in a scientific application where it's absolutely crucial to know when frames were acquired in order to sync with other hardware. I'll open a new issue about finding a way to keep the stream inactive until a signal is received.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 15, 2019

In Intel's January 2018 demonstration of multi-cam capture with D435 cameras at the Sundance Festival (about 6 months before the D435 officially supported sync in the firmware), they used a commercial flash to signal the start and end of capture by causing a spike in the data to make sure the camera footage was properly synched together.

https://realsense.intel.com/intel-realsense-volumetric-capture/

Edit: I knew I had seen a video of the demo area before. I managed to track it down.

https://www.youtube.com/watch?v=GHuDxdrUx4k

@dorodnic
Contributor

dorodnic commented Jun 3, 2019

I haven't tried HW sync. I started with the ROS wrapper and found that it gave very bad synchronization. I've modified it to compute the host timestamp with the following formula: RS2_FRAME_METADATA_BACKEND_TIMESTAMP - (RS2_FRAME_METADATA_FRAME_TIMESTAMP - RS2_FRAME_METADATA_SENSOR_TIMESTAMP), under the best assumption I can allow: that RS2_FRAME_METADATA_BACKEND_TIMESTAMP corresponds to RS2_FRAME_METADATA_FRAME_TIMESTAMP.
But it seems to me synchronization could be improved by introducing the mentioned function to get the current device time, e.g. getDeviceTime. Then we could compare it to a frame's device timestamp to compute its age, and get its host timestamp as
getHostTime - (getDeviceTime - RS2_FRAME_METADATA_SENSOR_TIMESTAMP).
This approach is used, for example, by Stereolabs ZED camera.
Does this approach seem good to Intel, and if yes, is it going to be implemented?
Thanks.

This is now implemented as part of SDK 2.22.
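
For anyone landing here later: 2.22 added a global timestamp domain via the RS2_OPTION_GLOBAL_TIME_ENABLED option, which maps device timestamps onto the host clock inside librealsense so frames from different cameras share one time base. A minimal sketch of opting in, assuming the option is exposed on the device's sensors:

```python
import pyrealsense2 as rs

pipe = rs.pipeline()
profile = pipe.start()

# Enable the global timer on every sensor that exposes the option;
# frames then report timestamps mapped onto the host clock.
for sensor in profile.get_device().query_sensors():
    if sensor.supports(rs.option.global_time_enabled):
        sensor.set_option(rs.option.global_time_enabled, 1)

frames = pipe.wait_for_frames()
print(frames.get_timestamp())  # milliseconds, host (global) time domain
pipe.stop()
```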
