
Erroneous trigger out behavior #8171

Closed
kshitijminhas opened this issue Jan 15, 2021 · 14 comments

@kshitijminhas

Required Info

- Camera Model: D435i
- Firmware Version: 05.12.02.100
- Operating System & Version: Linux (Ubuntu 18)
- Platform: Intel i7
- SDK Version: 2.32.1
- Language: C++
- Segment: AR/Robot

Issue Description

I am trying to sync an external camera to the trigger out from an RS depth camera. When I use the realsense-viewer, the triggers start properly (see the yellow triggers in the following image).
[image: trigger_from_realsense_viewer]

However, when I write my own code to fetch frames, I see two abnormal behaviors:

  1. Using the code below, I request 5 frames, but I get 9 trigger-out pulses. Is this expected behavior? If so, which pulse should correspond to the first RS depth frame? (see yellow triggers in the image below)
  2. Again using the code below, I see the trigger line go high for some time before the pulses start (see the yellow 'hump' in the trigger below). This behavior is erratic, and sometimes the 'hump' does not appear. Can I improve the code to avoid this? What does realsense-viewer do differently such that this 'hump' does not occur there?

[image: trigger_from_code]

// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API

#include <fstream>              // File IO
#include <iostream>             // Terminal IO
#include <sstream>              // Stringstreams

// 3rd party header for writing png files
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "/home/ubuntu/intel_example/stb/stb_image_write.h"

// Helper function for writing metadata to disk as a csv file
void metadata_to_csv(const rs2::frame& frm, const std::string& filename);

// Based on the rs-save-to-disk sample; the capture loop here requests 5
// framesets with the trigger output enabled (image saving is commented out).
int main(int argc, char * argv[]) try
{
    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;

    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;

    rs2::pipeline_profile selection = pipe.start();
    rs2::device selected_device = selection.get_device();
    auto depth_sensor = selected_device.first<rs2::depth_sensor>();

    if (depth_sensor.supports(RS2_OPTION_OUTPUT_TRIGGER_ENABLED))
    {
        std::cout << "here";
        depth_sensor.set_option(RS2_OPTION_OUTPUT_TRIGGER_ENABLED, 1.f); // Enable trigger
    }
    // Start streaming with default recommended configuration
    // pipe.start();

    // Capture 30 frames to give autoexposure, etc. a chance to settle
    // for (auto i = 0; i < 30; ++i) pipe.wait_for_frames();

    // Wait for the next set of frames from the camera. Now that autoexposure, etc.
    // has settled, we will write these to disk
    for (auto i = 0; i < 5; ++i) {
        for (auto&& frame : pipe.wait_for_frames())
        {
            // We can only save video frames as pngs, so we skip the rest
            if (auto vf = frame.as<rs2::video_frame>())
            {
                // auto stream = frame.get_profile().stream_type();
                // Use the colorizer to get an rgb image for the depth stream
                // if (vf.is<rs2::depth_frame>()) vf = color_map.process(frame);

                // Write images to disk
                // std::stringstream png_file;
                // png_file << "rs-save-to-disk-output-" << i << vf.get_profile().stream_name() << ".png";
                // stbi_write_png(png_file.str().c_str(), vf.get_width(), vf.get_height(),
                //                vf.get_bytes_per_pixel(), vf.get_data(), vf.get_stride_in_bytes());
                // std::cout << "Saved " << i <<" "<<png_file.str() << std::endl;

                // // Record per-frame metadata for UVC streams
                // std::stringstream csv_file;
                // csv_file << "rs-save-to-disk-output-" << vf.get_profile().stream_name()
                //          << "-metadata.csv";
                // metadata_to_csv(vf, csv_file.str());
            }
        }
    }

    return EXIT_SUCCESS;
}
catch(const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch(const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

void metadata_to_csv(const rs2::frame& frm, const std::string& filename)
{
    std::ofstream csv;

    csv.open(filename);

    //    std::cout << "Writing metadata to " << filename << endl;
    csv << "Stream," << rs2_stream_to_string(frm.get_profile().stream_type()) << "\nMetadata Attribute,Value\n";

    // Record all the available metadata attributes
    for (size_t i = 0; i < RS2_FRAME_METADATA_COUNT; i++)
    {
        if (frm.supports_frame_metadata((rs2_frame_metadata_value)i))
        {
            csv << rs2_frame_metadata_to_string((rs2_frame_metadata_value)i) << ","
                << frm.get_frame_metadata((rs2_frame_metadata_value)i) << "\n";
        }
    }

    csv.close();
}

Thank you

@kshitijminhas
Author

Also to note for point 1: it is not always 9 triggers for 5 frames; I have also seen 10 or 12 triggers for the same code.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 15, 2021

Hi @kshitijminhas Have you instead considered using the External Synchronization (Genlock) feature of the RealSense SDK to trigger your camera? Genlock provides an advanced level of control over trigger sequencing of cameras.

https://dev.intelrealsense.com/docs/external-synchronization-of-intel-realsense-depth-cameras

Section 4 of the paper should enable you to quickly reach a conclusion about whether Genlock would be a suitable replacement for your own triggering method.

https://dev.intelrealsense.com/docs/external-synchronization-of-intel-realsense-depth-cameras#section-4-genlock-usages

@kshitijminhas
Author

@MartyG-RealSense thank you for the link. It seems the Genlock feature is for triggering the RealSense camera from an external source. In our application, however, we are trying to trigger an external camera from the RealSense's trigger out.

@MartyG-RealSense
Collaborator

Could you please provide information about which external camera you are using, if it is not a RealSense camera? Thanks!

@kshitijminhas
Author

I am using a FLIR Chameleon camera: https://www.flir.com/products/chameleon3-usb3/

I believe the issue is independent of the external camera. The first trigger out from the RealSense may not correspond to the first captured frame, because there can be a delay between starting the camera (pipe.start()) and fetching the first frame (wait_for_frames()). This leads to a frame mismatch between the RealSense frames and those of any other camera.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 15, 2021

There is not a lot of available information about syncing between RealSense and non-RealSense cameras. There is, though, a past case involving a RealSense D415 and a Stereolabs ZED depth camera that discussed using camera metadata to sync the devices.

#2186

There is also a case that discusses hardware components that might be involved in syncing a RealSense D435i with a Flir Blackfly S. It was recommended that the RealSense be made the Master camera (the one that transmits the trigger pulse) and the Flir the Slave.

#6801

@kshitijminhas
Author

Hi @MartyG-RealSense,
Can you please clarify the behavior of get_frame_number() mentioned here?

Does this frame number start from 0, and does each frame align with each trigger out from the RealSense? That is, will the 1st trigger on the scope correspond to get_frame_number() = 0?

@MartyG-RealSense
Collaborator

A camera that is set as a Slave listens on each frame for a trigger pulse, such as one generated by a Master camera. If a trigger pulse is not detected within a certain time period, the slave stops listening and triggers an unsynced capture. I would therefore prefer not to claim with certainty that the trigger pulse will be detected by the slave and acted upon in the very first frame.

In regard to the frame handle, a more detailed explanation of its function is provided in the link below.

#5087 (comment)

@kshitijminhas
Author

Hi,
I am seeing 3 different types of trigger-out behavior from the RealSense with the same code.

The ideal case is the following, where the sensor starts its triggers from a low state.
[image: ideal]

Error case 1: here the trigger line goes high for about 250 ms, then the triggers are seen.
[image: error_case1]

Error case 2: Here the trigger line is high for > 500 ms before triggers are seen.
[image: error_case2]

Since we want to use this line to trigger an external (non-Intel) camera, this non-deterministic behavior leads to frame misalignment. Any advice on how to make this more deterministic?

@MartyG-RealSense
cc: @RealSenseCustomerSupport @RealSense-Customer-Engineering

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 28, 2021

If you are planning to trigger a non-RealSense Flir camera, as mentioned in this discussion, and genlock is not being used, then it is difficult to advise on this situation, as it is the kind of syncing scenario that genlock was designed to solve. Without genlock, control of trigger timing between a RealSense and a non-RealSense camera may be considerably more complex to achieve.

In the absence of genlock, we are probably back to the earlier discussed subject of using metadata to sync the frames of each camera.

#2186

@kshitijminhas
Author

Well, it seems GenLock is only for systems where an external device is the Master.
"This feature can be an enabler for applications that require exact hardware time synchronization to an external RGB master camera or to where any other external sync signal is desired to control the exact frame rate and capture time electronically."

However, in my system the RealSense device is the master. It seems metadata is my best bet.

@agrunnet
Contributor

@kshitijminhas genlock can be used as well. It is described in the white paper how you can set one camera to master and the others to slave with genlock. The limitations on frame rate and exposure time are detailed in the paper.

@MartyG-RealSense
Collaborator

Hi @kshitijminhas Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
