How to achieve constant FPS and manage frames with RealSense cameras? #11871

Closed
filbert14 opened this issue Jun 5, 2023 · 16 comments

@filbert14

Required Info
Camera Model: L515
Operating System & Version: Ubuntu 20.04
Platform: PC
SDK Version: 2
Language: C++

Issue Description

Hi, I am trying to create a small GUI application to record with the L515. I have 2 questions:

1. I attempted to let the camera grab the frames on a separate thread following this example: https://github.com/IntelRealSense/librealsense/blob/master/examples/post-processing/rs-post-processing.cpp. However, I found that I did not get exactly 30 FPS. Is there a way to make sure that frames are constantly grabbed at 30 FPS and moved into an internal cv::Mat object?

2. How does frame management work? If I move the frames into a cv::Mat object in order to display them in the main thread, how do I avoid read/write conflicts without adding too much overhead?

Thank you for the help :)

@MartyG-RealSense
Collaborator

Hi @sfilbertf You can enforce a constant FPS in two ways: either have auto-exposure enabled and disable the RGB option Auto-Exposure Priority, or use manual exposure and set an exposure value within a particular range. #1957 (comment) has more information about this.

Some RealSense OpenCV tutorial scripts are provided at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md

Documentation about frame management is available here:

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0

New frames enter a queue that can hold 16 frames of each stream type by default. As new frames arrive, old frames are dropped out of the queue, like a constantly moving conveyor belt. You can increase the size of the queue to hold more frames, though a greater amount of the computer's memory will be consumed by holding onto frames for longer.
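
For illustration (not code from this thread), a minimal sketch of routing frames into a larger rs2::frame_queue; the capacity of 32 and the stream settings are example values:

    #include <librealsense2/rs.hpp>

    int main()
    {
        rs2::pipeline pipe;
        rs2::config cfg;
        cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
        cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

        // Hold up to 32 framesets instead of the default queue size.
        rs2::frame_queue queue(32);

        // rs2::frame_queue is callable, so it can serve as the frame callback.
        pipe.start(cfg, queue);

        for (int i = 0; i < 300; ++i)
        {
            // Blocks until the next frame arrives in the queue.
            rs2::frameset fs = queue.wait_for_frame().as<rs2::frameset>();
            if (!fs) continue;
            // ... hand the frameset to a processing/recording thread here.
        }
        return 0;
    }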

Aside from having a good memory capacity on your computer, the biggest benefit to read-write performance is likely to be a storage drive with fast access speeds.

@filbert14
Author

Hi @MartyG-RealSense thank you for the speedy response! I will try this again on Wednesday or so; please leave this issue open until then :)

@MartyG-RealSense
Collaborator

That's no problem at all to keep the issue open until then. I look forward to your next report. Good luck!

@filbert14
Author

filbert14 commented Jun 7, 2023

Hi @MartyG-RealSense,
I have tried the first option with

        rs2::sensor color_sensor = pipeline_.get_active_profile().get_device().query_sensors()[1];
        color_sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, true);
        color_sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_PRIORITY, false);

And the second option with

        rs2::sensor color_sensor = pipeline_.get_active_profile().get_device().query_sensors()[1];
        color_sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, false);
        color_sensor.set_option(RS2_OPTION_EXPOSURE, 333);

And I'm still getting around 22 FPS. Did I apply these settings to the wrong sensor? Just to clarify: I enable streams for all 3 sensors, wait for pipeline_.wait_for_frames() (in a separate grabbing thread) to return, convert the frames to cv::Mat, and then use cv::imwrite to save each frame as an image.

Is there a better way to capture all 3 streams at 30 FPS?

@MartyG-RealSense
Collaborator

If a manual exposure value is set in a certain range, it can cause FPS to drop. A RealSense team member describes this at #1957 (comment), in a case where it is advised that setting an exposure value of 400 for the color stream (rather than your 333) could cause FPS to drop to 25.

The default RGB exposure value is 156, which should provide 30 FPS if the stream has been configured for 30 FPS.

If it is possible to use auto-exposure instead of manual exposure, then with auto-exposure enabled and the RGB option Auto-Exposure Priority disabled, the SDK will try to enforce a constant frame rate for both depth and color.
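
For illustration, a variant of the earlier snippet that avoids relying on a fixed sensor index by checking option support on each sensor (a sketch only; pipeline_ follows the naming used in the snippets above):

    // Enable auto-exposure wherever it is supported, and disable
    // Auto-Exposure Priority on the RGB sensor so the SDK tries to
    // hold the configured frame rate even in darker scenes.
    rs2::device dev = pipeline_.get_active_profile().get_device();
    auto sensors = dev.query_sensors();
    for (rs2::sensor& sensor : sensors)
    {
        if (sensor.supports(RS2_OPTION_ENABLE_AUTO_EXPOSURE))
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1.f);

        if (sensor.supports(RS2_OPTION_AUTO_EXPOSURE_PRIORITY))
            sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_PRIORITY, 0.f);
    }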

@filbert14
Author

Thanks for the speedy response!

After reading the linked post, I assume that the RGB sensor is the only thing causing problems here. So, just to be sure, my next steps would be to enable the streams as normal:

        config_.enable_device(serial_number);
        config_.enable_stream(RS2_STREAM_COLOR, color_frame_width_, color_frame_height_, RS2_FORMAT_RGB8, 30);
        config_.enable_stream(RS2_STREAM_DEPTH, depth_frame_width_, depth_frame_height_, RS2_FORMAT_Z16, 30);
        config_.enable_stream(RS2_STREAM_INFRARED, infrared_frame_width_, infrared_frame_height_, RS2_FORMAT_Y8, 30);
        pipeline_.start(config_);

disable auto-exposure, and then set the proper exposure value for the color sensor, correct?

        rs2::sensor color_sensor = pipeline_.get_active_profile().get_device().query_sensors()[1];
        color_sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, false);
        color_sensor.set_option(RS2_OPTION_EXPOSURE, 156);

@MartyG-RealSense
Collaborator

Yes, the above code is correct.

@filbert14
Author

Hey, the code above still gives ~24 FPS instead of 30. I'm thinking of trying the sensor API and testing each sensor one by one; could you think of any other alternatives?

This problem is caused by the RGB sensor, right? The depth and IR sensors should manage a constant 30 FPS? (Sorry for the superfluous question; I don't consistently have the chance to use the camera, so this is for my next attempt.)

@MartyG-RealSense
Collaborator

The SDK has a C++ diagnostic tool called rs-data-collect that can profile the performance of camera streams.

https://github.com/IntelRealSense/librealsense/tree/master/tools/data-collect

The L515's RGB sensor has a slow rolling shutter, so you could try setting RGB to 60 FPS instead of 30 to compensate for lag caused by the shutter speed.
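
As a sketch of that suggestion (resolutions and member names as in the earlier snippets, and assuming the chosen color resolution supports 60 FPS):

        config_.enable_device(serial_number);
        // Request 60 FPS from the RGB stream while leaving depth and IR at 30 FPS.
        config_.enable_stream(RS2_STREAM_COLOR, color_frame_width_, color_frame_height_, RS2_FORMAT_RGB8, 60);
        config_.enable_stream(RS2_STREAM_DEPTH, depth_frame_width_, depth_frame_height_, RS2_FORMAT_Z16, 30);
        config_.enable_stream(RS2_STREAM_INFRARED, infrared_frame_width_, infrared_frame_height_, RS2_FORMAT_Y8, 30);
        pipeline_.start(config_);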

@filbert14
Author

filbert14 commented Jun 13, 2023

Hey @MartyG-RealSense! I used rs-data-collect to get the performance of the camera streams (I did this several times) -> log.csv

I ran rs-data-collect with ./rs-data-collect -c ./data_collect.cfg -f ./log.csv -t 5 -m 150

So: for 5 seconds, or until 150 frames of each stream have arrived and been accounted for.

As you can see, the color stream manages 30 FPS well (149 frames in 5 seconds), while the depth and infrared streams always seem to lag behind (116 frames in 5 seconds, ~23 FPS).

Sadly, changing the RGB stream to 60 FPS also does nothing. Do you maybe have other ideas? Thank you!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 13, 2023

Reviewing this issue again from the beginning, the key factors seem to be that (a) more than one thread is being used, and (b) you are using the C++ example rs-post-processing.cpp to apply post-processing filters.

There was a C++ case at #6865 that involved multi-threading and applying filters where problems were occurring. That user was losing FPS due to lag caused by runtime exceptions, as described at #6865 (comment), and they posted their solution at #6865 (comment)

@filbert14
Author

I will try this out tomorrow, but I don't think it will help much:

  1. I used rs-data-collect, which is supposed to be completely agnostic of any post-processing filters, correct?
    Even when only grabbing frames, the depth and infrared sensors are unable to collect 150 frames in 5 seconds, i.e. reach 30 FPS.

  2. I used rs-post-processing.cpp only to get an idea of how one would use a separate thread to grab frames. I am not applying any post-processing filters or anything like that (except perhaps the colorizer); I just grab each frame into a separate cv::Mat and use cv::imwrite to write these images to a PNG file. In fact, this function is called in an endless loop inside the separate thread:

    void RealSenseCamera::AcquireFrames() {
        rs2::frameset data = pipeline_.wait_for_frames();

        rs2::frame color_frame = data.get_color_frame();
        rs2::frame depth_frame = data.get_depth_frame();
        rs2::frame infrared_frame = data.get_infrared_frame();

        frame_mutex_.lock();

        color_frame_ = cv::Mat {cv::Size {color_frame_width_, color_frame_height_}, CV_8UC3, (void*) color_frame.get_data(), cv::Mat::AUTO_STEP};
        depth_frame_ = cv::Mat {cv::Size {depth_frame_width_, depth_frame_height_}, CV_16UC1, (void*) depth_frame.get_data(), cv::Mat::AUTO_STEP};
        depth_frame_viz_ = cv::Mat {cv::Size {depth_frame_width_, depth_frame_height_}, CV_8UC3, (void*) depth_frame.apply_filter(colorizer_).get_data(), cv::Mat::AUTO_STEP};
        infrared_frame_ = cv::Mat {cv::Size {infrared_frame_width_, infrared_frame_height_}, CV_8UC1, (void*) infrared_frame.get_data(), cv::Mat::AUTO_STEP};

        recording_mutex_.lock();
        if(is_recording_) {
            std::string frame_postfix = "_" + std::to_string(frame_sequence_) + ".png";
            std::string csv_postfix = "_" + std::to_string(frame_sequence_) + ".csv";

            if(save_color_) {
                cv::imwrite(output_dir_ + "/frame_color" + frame_postfix, color_frame_);
                MetaDataToCSV(color_frame, (output_dir_ + "/frame_color" + csv_postfix).c_str());
            }

            if(save_depth_) {
                cv::imwrite(output_dir_ + "/frame_depth" + frame_postfix, depth_frame_);
                MetaDataToCSV(depth_frame, (output_dir_ + "/frame_depth" + csv_postfix).c_str());
            }

            if(save_infrared_) {
                cv::imwrite(output_dir_ + "/frame_ir" + frame_postfix, infrared_frame_);
                MetaDataToCSV(infrared_frame, (output_dir_ + "/frame_ir" + csv_postfix).c_str());
            }

            ++frame_sequence_;
        }
        recording_mutex_.unlock();

        frame_mutex_.unlock();
    }

So it shouldn't be relevant in this case, right?

@MartyG-RealSense
Collaborator

I see. If you are not applying filters in your actual project and were only using rs-post-processing.cpp as a reference, then #6865 is likely not relevant to your particular situation.

There was an L515 case at #10733 where a RealSense user was not achieving 30 FPS and provided their script. They were using similar cv::Mat instructions to yours:

srcImage = cv::Mat(cv::Size(IMAGE_WIDTH, IMAGE_HEIGHT), CV_8UC3, (void*)colorFrame.get_data(), cv::Mat::AUTO_STEP);
depthImage = cv::Mat(cv::Size(DEPTH_WIDTH, DEPTH_HEIGHT), CV_16UC1, (void*)alignedDepthFrame.get_data(), cv::Mat::AUTO_STEP);	
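
As an aside (not part of the linked case): a cv::Mat constructed this way only wraps the SDK's frame buffer without copying it, so if the Mat needs to outlive the rs2::frame, a deep copy such as clone() keeps the pixel data valid. A minimal sketch, using the pipeline_ member from the snippets above:

    rs2::frameset data = pipeline_.wait_for_frames();
    rs2::video_frame color_frame = data.get_color_frame();

    // This Mat only points at librealsense's internal buffer ...
    cv::Mat wrapped(cv::Size(color_frame.get_width(), color_frame.get_height()),
                    CV_8UC3, (void*)color_frame.get_data(), cv::Mat::AUTO_STEP);

    // ... so clone() is needed if the image is kept after the frame is released.
    cv::Mat owned = wrapped.clone();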

@MartyG-RealSense
Collaborator

Hi @sfilbertf Do you require further assistance with this case, please? Thanks!

@filbert14
Author

filbert14 commented Jun 18, 2023

Hey, we can close this for now. I will try again sometime soon and ask around again if nothing works!

@MartyG-RealSense
Copy link
Collaborator

Thanks very much for the update. Good luck!
