Depth and color misalignment with external trigger and Genlock #10926
Hi @Raulval82 At #7488 (comment) a RealSense team member explains how frame rate and frame drops are calculated. In the RealSense Viewer you can also view the frame rate of a particular stream type in real-time by left-clicking on the 'i' icon at the top of the stream panel.

Which camera sync pin are you using, please? The trigger transmitting pin is Pin 5, the Sync Pin. Both depth and RGB are synced when using Inter Cam Sync Mode '3', the Full Slave mode. Otherwise, only depth is hardware synced to the master signal.

In regard to the delay, the section of Intel's External Synchronization (genlock) multiple camera white paper quoted below may be relevant, as it advises that the camera will be 'blind' to trigger signals for a certain period depending on the FPS settings:

"When streaming in Genlock mode, the camera can be configured during initialization to any of the normal input resolutions and frame rates. Specifically, the 'native' frame rate of the camera can be set to 6 fps, 15 fps, 30 fps, 60 fps, or 90 fps. This native frame rate has an impact on two significant aspects: First, it does not set the trigger rate, but it does set an upper limit on the frame rate that can be supported by an external trigger. When sending a trigger pulse train, the trigger signal cannot be repeated faster than HALF the native camera mode frame rate, as shown in the table below. In other words, once a trigger is received the camera will be blind to any other trigger signals for 2x the frame time of the 'native mode' of the camera. This means that the trigger frequency can be any value inside the allowable range in Table 1 (below)."
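As an editorial illustration (not code from the white paper), the blind-period rule quoted above reduces to a small calculation: the camera ignores new triggers for twice the native frame time, so the maximum usable trigger rate is half the native frame rate.

```python
def max_trigger_rate(native_fps: float) -> float:
    """Maximum external trigger frequency (Hz) for a given native frame rate.

    Per the white paper, triggers cannot repeat faster than HALF the
    native camera mode frame rate.
    """
    return native_fps / 2.0


def blind_period_ms(native_fps: float) -> float:
    """Time (ms) after a trigger during which further triggers are ignored.

    The camera is blind for 2x the frame time of the native mode.
    """
    frame_time_ms = 1000.0 / native_fps
    return 2.0 * frame_time_ms


# Example: a camera initialized at a 30 fps native mode can accept
# triggers at up to 15 Hz, and ignores further triggers for about
# 66.7 ms after each accepted one.
print(max_trigger_rate(30.0))
print(blind_period_ms(30.0))
```

A 2.66 Hz encoder trigger, as used later in this thread, is therefore well inside the allowable range for a 30 fps native mode.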
Hi MartyG, I am using pin 5 for the trigger and pin 9 for GND, and the viewer says it is running at 2.66 fps, so the external trigger seems to be working. Is there no way to have both RGB and depth blocked and waiting for the external trigger? If I use Full Slave mode, it runs at 30 fps when there are no pulses, which is not what we are looking for. Regarding the delay of the depth frames: the camera is configured at 30 fps 1280x720 and the pulses arrive at 2.6 fps, so with this configuration I could reach 15 fps maximum. The RGB frame arrives as expected, but the depth is one or two pulses behind; in the viewer they are not fully aligned.
In mode 4 and above (genlock) the slave camera will wait indefinitely for a trigger pulse and only capture when the pulse is received. In slave modes 2 and 3 the camera will listen for a trigger for a certain period of time on each frame and, if it does not detect one, will initiate capture independently (unsynced); capture cannot be blocked in these modes. Does the gap between RGB and depth reduce if burst mode is used by setting an Inter Cam Sync Mode higher than 4, such as 5? The sync paper states that the maximum sync mode value is 258 (values 4-258 are for genlock with trigger frequencies 1-255).
Yes, the delay is still present with Inter Cam Sync Mode at 5. It is less visible because there are more frames per pulse, but it is there. So there is no way to block both images until a trigger arrives?
You are correct, there is no method available for preventing capture from initiating with modes 2 and 3. Does the delay reduce further if you increase the burst value above 5 (to 10, 15, 20, etc.)?
Sorry, maybe I am not explaining myself well. I want a method to block both frames (color and depth) with the external trigger (Mode 4). It makes no sense (at least to me) to have live RGB video alongside a frozen depth frame; it is very difficult to tell whether they are properly aligned like this. The delay is still present with values over 5; I tried 10 and 15.
As a rule, if mode value 4 or above is being used then only depth will have its timestamp synced with the trigger and RGB will be unsynced. If your primary concern is to have depth and RGB frames aligned as closely as possible then it may be better to stop using a hardware sync trigger and instead use the SDK's wait_for_frames() instruction. At #1548 (comment) a RealSense team member advises that RGB and depth frames have a temporal offset bounded by a period of one frame, and wait_for_frames() finds the best mapping between them.
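To illustrate the kind of temporal matching described above, here is a simplified sketch (this is not the SDK's actual syncer): pair each depth frame with the color frame whose timestamp is closest, which keeps the offset bounded by roughly one frame period when both streams run at the same rate.

```python
def closest_match(depth_ts, color_ts):
    """Pair each depth timestamp with the closest color timestamp.

    depth_ts, color_ts: lists of frame timestamps in milliseconds.
    Returns a list of (depth_t, color_t, offset_ms) tuples.
    This sketches the idea behind wait_for_frames() finding the best
    mapping between streams; the real SDK syncer is more sophisticated.
    """
    pairs = []
    for d in depth_ts:
        c = min(color_ts, key=lambda t: abs(t - d))
        pairs.append((d, c, abs(c - d)))
    return pairs


# Depth at ~30 fps with color slightly offset: each pair's offset stays
# well under one frame period (33.3 ms at 30 fps).
for pair in closest_match([0.0, 33.3, 66.6], [5.0, 38.0, 71.0]):
    print(pair)
```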
I discovered something: if, in the viewer, you start the depth stream first (Mode 4) and then the color stream, both wait for the trigger. But if you start the color stream first, it runs at full frame rate and only depth waits for pulses. Working like this, the misalignment disappears. Is this a bug or a feature? I don't know how this will behave in the API.
I recall in at least one past case that starting RGB first and depth second has been a recommended means of resolving a problem with a pair of depth and RGB streams. At #2637 (comment) a RealSense team member advises that when syncing depth and color sensors on the same camera, the color sensor always wants to be the master. In regard to a start order, you could try adapting the Python script at #5628 that puts streams on two separate pipelines, and sets the start order of streams by putting one pipe start instruction before the other. The linked-to script puts depth + color on one pipeline and IMU on the second, though you may be able to edit it to have color on pipeline 1 and depth on pipeline 2. Then start pipeline 1 (color) first and pipeline 2 (depth) on the next line after that.
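A minimal sketch of that two-pipeline start order with pyrealsense2 might look like the following. The resolutions and the Inter Cam Sync Mode value of 4 are assumptions for illustration, and a connected camera is required, so everything hardware-dependent (including the SDK import) is kept inside main():

```python
def main():
    # Imported here so the sketch can be loaded without the SDK installed.
    import pyrealsense2 as rs

    color_pipe = rs.pipeline()
    color_cfg = rs.config()
    color_cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

    depth_pipe = rs.pipeline()
    depth_cfg = rs.config()
    depth_cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

    # Start color FIRST, then depth, mirroring the viewer behaviour
    # described above.
    color_pipe.start(color_cfg)
    depth_profile = depth_pipe.start(depth_cfg)

    # Put the depth sensor into genlock (Inter Cam Sync Mode 4).
    # Shown after start for simplicity; it can also be set beforehand
    # via the sensor API.
    depth_sensor = depth_profile.get_device().first_depth_sensor()
    depth_sensor.set_option(rs.option.inter_cam_sync_mode, 4)

    try:
        while True:
            color_frames = color_pipe.wait_for_frames()
            depth_frames = depth_pipe.wait_for_frames()
            # ...process color_frames.get_color_frame() and
            # depth_frames.get_depth_frame() here...
    finally:
        depth_pipe.stop()
        color_pipe.stop()


if __name__ == "__main__":
    main()
```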
Thank you, I will try it. I was using the Open3D RealSense integration, so I assume I will need to modify my code to use your library directly. Then, if this works, the only thing pending is the delay. It is not just that I am missing the first two pulses; the subsequent ones are not giving me the frames for that exact moment, and this is not how a trigger is supposed to work... Is it a known issue?
Another question: now that I have separate pipelines for each stream, do you know how to integrate the pyrealsense2 pipelines with the Open3D library? Or at least the resulting RGBD frames or point clouds, so I can keep using the Open3D functions as before. Thank you again.
If you are using Open3D with Python, there is a RealSense pyrealsense2 example script for Open3D at the link below. Although there is not a two-pipeline Open3D script to refer to, I would speculate that you would first have a separate wait_for_frames() call for each pipeline, as in the two-pipeline script that I linked to earlier; for example, with the two pipelines defined as 'depth_pipeline' and 'color_pipeline'. Then, when retrieving frames from the depth pipeline and the color pipeline, use the appropriate frame name and pipeline name to ensure that depth takes frames from the depth pipeline and color takes frames from the color pipeline. Apologies that I do not have more specific scripting to offer for a two-pipeline Open3D script.
Hi, this example saves the frames to disk. What I am trying to do is transform the pyrealsense2 frameset returned by the wait_for_frames function into an open3d.t.geometry.RGBDImage, which is the type that can be used in all the Open3D functions. Do you know how to do this without saving the pictures to disk? I mean, I am trying to use the create_from_rgbd_image function from Open3D so I can build a colored point cloud using the pyrealsense2 pipelines as the source of RGBD frames; the objective is to obtain valid colored point clouds so I can use all the odometry functions in Open3D. Thank you in advance.
Would the Python code at #6023 (comment) that uses RGBDImage.create_from_color_and_depth to convert a RealSense frameset to an Open3D pointcloud be helpful to your conversion goal, please? |
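As an editorial sketch of that conversion (not the exact code from #6023): pyrealsense2 frames can be turned into numpy arrays and then into an Open3D RGBD image and point cloud via the legacy geometry API. The function name frames_to_pointcloud is hypothetical, and the imports are kept inside the function so the sketch loads without Open3D installed:

```python
def frames_to_pointcloud(color_frame, depth_frame, intrinsics, depth_scale):
    """Convert a pyrealsense2 color/depth frame pair to an Open3D point cloud.

    color_frame / depth_frame: frames from wait_for_frames()
    intrinsics: rs.intrinsics of the depth stream, e.g. from
        profile.get_stream(rs.stream.depth)
               .as_video_stream_profile().get_intrinsics()
    depth_scale: meters per depth unit, from depth_sensor.get_depth_scale()
    """
    import numpy as np
    import open3d as o3d  # imported here so the sketch loads without Open3D

    color_np = np.asanyarray(color_frame.get_data())
    depth_np = np.asanyarray(depth_frame.get_data())

    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(color_np),
        o3d.geometry.Image(depth_np),
        depth_scale=1.0 / depth_scale,  # Open3D expects units-per-meter
        convert_rgb_to_intensity=False)

    pinhole = o3d.camera.PinholeCameraIntrinsic(
        intrinsics.width, intrinsics.height,
        intrinsics.fx, intrinsics.fy, intrinsics.ppx, intrinsics.ppy)
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole)
```

The resulting PointCloud (or the intermediate RGBDImage) can then be passed to Open3D's registration and odometry functions without ever writing images to disk.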
Thank you a lot, it worked! Now I can control the pipelines separately and use the frames in Open3D, so I will be able to analyze the alignment issues properly.
That's excellent news that it worked for you, @Raulval82 - thanks very much for the update :) |
Hi @Raulval82 Do you require further assistance with this case, please? Thanks! |
Hi @MartyG-RealSense, the delay is still there, and I can't figure out how to solve it.
I reviewed this case again from the beginning. If mode 4+ is being used then it would seem that RGB will be free to capture independently and unsynced without waiting for a trigger pulse, whilst depth has to wait indefinitely for a trigger. Once a trigger is received, the camera will be blind to any other trigger signals for 2x the frame time of the “native mode” of the camera. You have been using a 1ms trigger pulse as you could not have a 100us pulse right now. 1 millisecond = 1000 microseconds. The recommended frequency for an external trigger (one produced by a signal generator instead of a RealSense camera) is 100 microseconds or more. This raises the question of whether the 1ms / 1000us trigger is too wide and is leading to the delay, or whether the frame time of the "native mode" of the camera is blinding the camera to a couple of the trigger signals because the frame time is too high. |
Hi @Raulval82 Do you have an update about this case that you can provide, please? Thanks! |
Attached, you can find the actual synchronization and delay issue that I am mentioning: each time the wheel stops, I change its direction, but it keeps going in the same direction for 2 or 3 more frames. Notice that it is in mode 4 at 30 fps in depth and color. Sync.issue.example.mp4
So if the camera outputs additional frames when using a sync value higher than '4' and then waits for the next trigger after all of the extra 'burst' frames have been released, this suggests that if '4' is the sync value then burst = 0 and the next trigger should be listened for almost immediately after the previous trigger has completed. I say "suggests" because the External Synchronization (Genlock) system was an experiment whose development was not continued or documented further except for discussions during RealSense user support cases. The original sync modes 1 and 2 are considered "mature and validated" by Intel, whilst the modes 3+ described by the External Synchronization paper are considered "immature and unvalidated".

Looking at the above video, if you imagine the RGB image being removed then it seems there would be little valid depth data left if it were a depth image on its own. This could be a combination of the silver wheel hub being difficult to read due to reflectivity, and the black tire being unable to be depth-analyzed because it is black. Depth cameras in general (not just RealSense) have difficulty reading depth from black or dark grey objects because physics dictates that these colors absorb light. So black / dark grey objects may appear to be rendered on the image but are actually empty areas without depth detail that are shaped like the object. For example, scanning a black cable may result in a shape on the image that looks like a cable but is actually a cable-shaped area of empty detail with no texture.

A black, non-reflective object such as the tire may be able to produce greater depth data if it has a strong light source cast onto it. Coating objects in fine white powder (flour, baby powder or foot powder) or using reflection-dampening professional 3D scanning spray aerosols are also methods that have been used in the past for scanning reflective or black objects.
There was a past story of a vehicle body shop that was scanned by coating the bodywork in baby powder. |
Thank you @MartyG-RealSense. Then, for synchronizing several cameras with an external trigger, should I use Genlock mode 2? Or is this feature not fully available?
Any mode number that is 4 and above will be Genlock. Using Mode 2 with your current hardware sync system would be non-genlock. The easiest way to use mode 2 (slave) is to have another RealSense camera act as the master camera set to mode 1, instead of an externally generated trigger from a device such as an oscilloscope. This is because in the original hardware sync system (modes 1 and 2) the external trigger signal has to be matched to the slave camera with a very precisely tuned frequency, such as '30.03' for 30 FPS, or hardware sync will not be achieved.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration

Using a RealSense camera as a mode 1 master camera with a mode 2 slave camera greatly simplifies hardware sync, as you do not have the problem of configuring the trigger frequency. The difficulty with using mode 2 instead of 4 for the slave in your particular project, though, is that the slave camera will not wait indefinitely for a trigger; it will instead wait for a certain time period on each frame and then initiate depth capture unsynced on that frame.
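For reference, the sync mode values discussed in this thread can be summarized in code. The table below is compiled from the comments here and the (withdrawn) sync papers, not from official API documentation, and set_sync_mode is an illustrative helper around the real pyrealsense2 option:

```python
# Inter Cam Sync Mode values as discussed in this thread.
SYNC_MODES = {
    "default": 0,     # no hardware sync
    "master": 1,      # emits the trigger signal on Pin 5
    "slave": 2,       # listens briefly each frame, else captures unsynced
    "full_slave": 3,  # as slave, but RGB is hardware synced too
    "genlock": 4,     # waits indefinitely; 4-258 encode bursts of 1-255
}


def burst_count(mode: int) -> int:
    """Frames emitted per trigger in genlock modes (values 4-258)."""
    if not 4 <= mode <= 258:
        raise ValueError("genlock modes are 4-258")
    return mode - 3


def set_sync_mode(depth_sensor, rs, mode: int) -> None:
    """Apply an Inter Cam Sync Mode to a pyrealsense2 depth sensor.

    depth_sensor: e.g. device.first_depth_sensor()
    rs: the imported pyrealsense2 module (passed in so this sketch
        loads without the SDK installed)
    """
    depth_sensor.set_option(rs.option.inter_cam_sync_mode, mode)
```

So mode 4 releases one frame per trigger, mode 5 releases two, and so on up to 258 (255 frames per trigger).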
Hi @Raulval82 Do you require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Sorry, so the conclusion is that Genlock is experimental, it cannot be expected to work properly, and it is not going to be improved in the future, right? Will new models have this feature?
Genlock mode is usable but experimental and there is no further development on genlock planned. However, it should continue to work in future with 400 Series camera models that have a 'global' type fast shutter on the depth sensor (this currently includes D405, D435, D435i, D435f, D455 and the new D457). The D415 model does not have a global shutter. |
@Raulval82 what @MartyG-RealSense is not explaining is that the RGB and Depth cameras cannot run sync. This has been a hardware limitation of the entire product line, and according to Intel it cannot be fixed by a firmware update, and even if they were able to they do not have the internal resources. |
@Raulval82 see here: If you require accurate color/depth sync would look into the OakD-Pro. Apparently all cameras can be triggered off of the same sync signal and it is an opto-isolated trigger, which is much more reliable than the D400 series which is prone to electrostatic shock. |
@MartyG-RealSense the title of the post is "Depth and color misalignment with external trigger and Genlock" and the first sentence is "I am trying to use an encoder signal to acquire the frames at the exact same distance from each other, so I can properly reconstruct a large moving object." It is clear the objective here is to have both the depth and color images capture the exact same moment in time. Regardless of the method, camera or mode, there is no RealSense camera capable of having synchronized color and depth exposures. All of this other information obscures the fact that the hardware is not capable of this feature. |
To be clear; "RGB and depth frames have a temporal offset bounded by a period of one frame" is not exposure synchronization. Any camera pair is capable of temporal alignment. What is needed for virtually all 3D scanning use cases is synchronized RGB and Depth exposure, something that Intel has never had as a priority. |
Thank you @sam598, this is what I finally concluded. |
Referencing this issue that should be reopened, and the lack of understanding from Intel's side of what the need here is. |
It looks like the new D457 may have this feature. |
Thank you again @sam598, I will keep an eye on the new discussion. |
Hi again @sam598 In discussions I have had about RGB-depth sync on the same camera with a RealSense team member, they advised that all 400 Series camera models are technically capable of RGB-depth sync on the same camera. The D415, D455 and D457 models have depth and RGB sensors on the same PCB board, so RGB-depth sync on the same camera is not a problem. However, on the D435 and D435i models the RGB sensor is not on the same PCB as depth. How the camera's ASIC deals with this on these camera models is to automatically make RGB the master and depth the slave. Another RealSense team member provides advice about RGB-depth sync on the same camera at #2637 (comment) and states that sync should kick in once both RGB and depth have the same actual FPS. They add that disabling the auto-exposure priority option should help both depth and RGB to maintain the same FPS rate and not permit the FPS to vary. |
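A small sketch of disabling that auto-exposure priority option with pyrealsense2 (auto_exposure_priority is the real option identifier; the helper name and structure are illustrative):

```python
def lock_color_fps(profile, rs):
    """Disable auto-exposure priority so the RGB sensor holds its FPS.

    profile: the pipeline profile returned by pipeline.start()
    rs: the imported pyrealsense2 module (passed in so this sketch
        loads without the SDK installed)
    """
    for sensor in profile.get_device().query_sensors():
        if sensor.supports(rs.option.auto_exposure_priority):
            # 0 = keep a constant frame rate even if exposure is clipped;
            # 1 = allow FPS to drop to maintain exposure.
            sensor.set_option(rs.option.auto_exposure_priority, 0)
```

With the option at 0, the color stream should not drop below its configured FPS in low light, which keeps the actual depth and RGB frame rates equal as the advice above requires.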
Thanks @MartyG-RealSense. I do not want to belabor this, but I think it is important that we all agree on the same definition of sync. RealSense team members keep referring to frame rate when talking about sync, but it is not possible to verify sync from looking at frame rate alone. Syncing cameras means that both cameras capture their image at the exact same moment in time, which would mean that RGB and depth have no temporal offset. Is this the expected behavior according to the RealSense engineers?
I was informed that on cameras that have RGB and depth on the same PCB, either sensor can be the master. On D435 and D435i where the RGB and depth are not on the same PCB, RGB is always the master and depth the slave. My interpretation of the advice at #2637 (comment) is that if actual FPS of the depth and RGB sensors on the same camera are the same then it is assumed that depth-RGB sync is automatically taking place, rather than there being a mechanism to verify that it is occurring. Assuming that the advice from April 2018 at #1548 (comment) about a temporal offset is correct (there is not supporting documentation for the statement elsewhere), it could be further assumed that there may be a temporal offset between depth and RGB but it is taken care of when the RGB-depth sync kicks in. It may be best to draw a line under the subject though, as we know that sync will occur on D457 because of the data sheet statement that you quoted. |
This seems like the crux of the issue: I am not asking whether the camera has a way of verifying sync, but whether someone on the RealSense team has verified that RGB and depth on those older cameras do indeed take synchronous exposures, and that there is no temporal offset. Instead of assuming, can the engineers at Intel confirm this is indeed how their product works?
I will seek further clarification from my colleagues on the point of whether all 400 Series cameras can take synchronous exposures and also whether there is a temporal offset. Thanks very much again for your patience.
I received responses to your questions.

1. On whether the cameras take synchronous exposures: "We never claim syncing the exposure. Usually, the camera is running in auto exposure mode. We sync the camera readout time, not exposure time."

2. On whether there is a temporal offset between the depth and RGB sensors like the one described by the support team member in #1548 (comment): "In the thread mentioned, the support engineer was talking about software sync using timestamp, which has a temporal offset between the depth and RGB sensors."

Going back to the start of #1548 and reading the opening comment, it does appear that the case is related to finding the closest match between RGB and depth stream timestamp values by comparing frame timestamps, and not about syncing depth and RGB images.
This is a really important clarification, because what is being described by the engineers is temporal alignment, not synchronization. https://www.merriam-webster.com/dictionary/synchronous With this definition in mind I have to ask one last time: which (if any) RealSense cameras are capable of RGB and depth sync?
I referred the above message to my Intel RealSense colleagues, who confirmed in reply, "All cameras can do RGB and Depth sync within one camera". |
The only way the RGB and Depth would be in sync is if the frames are captured at the exact same time, with no temporal offset. You and other RealSense team members have repeatedly said this is not the case, meaning the cameras are not capable of sync. To claim otherwise is confusing and misleading customers about what the devices are actually capable of. Furthermore if you are targeting these products for safety critical applications like factories and autonomous vehicles, not being explicitly clear about whether or not these devices are capable of sync is irresponsible at best. |
The one exception possibly being the D457 if both RGB and Depth pins are triggered simultaneously. |
Hey :-) I'm a researcher at the University of Bremen and we leaned heavily on the white paper for our RealSense and Optitrack integration (both rely on IR and therefore need to be precisely offset via hardware sync). This worked really well and we were looking forward to recording a lot more data with slightly adjusted settings. Since Intel decided to pull the white paper I cannot continue with that work :/ Sorry for being slightly off topic, but this thread contains most of the information from the paper, so I felt this was the best fitting place to ask. Thanks a lot in advance!
Hi @Saniamos The paper is no longer available and there was not a PDF version of it, but if you do a Google Images search for the term librealsense "external synchronization" (including the quotation marks for relevant search results) then you can find a number of images and diagrams quoted from that paper on this support forum. More information about the paper's withdrawal can be found at the link below. |
Damn, but thank you! |
As the External Synchronization paper was formatted as a web-page instead of a PDF, I had the idea of using the Wayback Machine website that archives past versions of internet pages to find an archived copy of the paper and saved it as a PDF file for you. You can download it from the link below. External Synchronization of Intel® RealSense™ Depth cameras.pdf |
Issue Description
Hi,
I am trying to use an encoder signal to acquire the frames at the exact same distance from each other, so I can properly reconstruct a large moving object. The encoder signal has been adapted to 1.8V and roughly 2 Hz. As the trigger signal I am using a 1 ms pulse (I can't have a 100 us pulse right now), and to power the trigger signal I am using the 3.3V pin from the same camera and some resistors.
First, in the RealSense Viewer I can see the RGB image moving between depth frames; I mean, the depth frames apparently arrive on time, but between them I can see the RGB image moving over the frozen point cloud. Is this how it should work? Isn't it supposed to capture an RGB and a depth frame exactly when the trigger arrives?
Another strange behavior is that there seems to be a delay: the first triggers are not visible immediately. If the trigger signals keep arriving, then it starts to show frames, but not the current ones; it is like two or three triggers behind.
The last issue I am facing is that I receive two frames for each pulse when using the capture_frame function from the Open3D library in Python (genlock is configured with a value of 4), and the second frame is not properly aligned between RGB and depth. Is there a way of validating the frame rate in the viewer? I want to check whether the issue is on the camera side or in the Open3D library.
Thank you a lot in advance.
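As an editorial aside on validating frame rate: besides the viewer's 'i' overlay mentioned earlier in the thread, the delivered rate can also be estimated in code from frame timestamps (frame.get_timestamp() in pyrealsense2 returns milliseconds). A minimal pure-Python sketch:

```python
def measured_fps(timestamps_ms):
    """Estimate delivered FPS from frame timestamps in milliseconds."""
    if len(timestamps_ms) < 2:
        raise ValueError("need at least two timestamps")
    span_ms = timestamps_ms[-1] - timestamps_ms[0]
    return (len(timestamps_ms) - 1) * 1000.0 / span_ms


# Ten frames spaced 375 ms apart correspond to roughly 2.67 fps, close
# to the 2.66 fps trigger rate reported in the viewer in this thread.
ts = [i * 375.0 for i in range(10)]
print(round(measured_fps(ts), 2))
```

Comparing this measured value against the pulse generator's frequency shows whether frames are being delivered per trigger or duplicated somewhere downstream.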