about rs-pointcloud crashed #9584

Closed · zma69650 opened this issue Aug 7, 2021 · 8 comments


zma69650 commented Aug 7, 2021



Required Info
Camera Model: D455
Firmware Version: 05.12.13.50
Operating System & Version: Win10
Platform: PC
SDK Version: 2.45.0
Language: C++

Issue Description

I am using the rs-pointcloud example provided by RealSense, but when I added my own code, the program crashed. I only added the following lines to read the data:

```cpp
rs2::config cfg;
// Use a configuration object to request depth and color from the pipeline
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
// Start streaming with the above configuration
pipe.start(cfg);
```

I also use rs-post-processing. When I color the point cloud following the rs-pointcloud method, the program does not display anything. Is this a problem with my approach, or is this example unable to perform coloring?
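
For reference, the texture-mapping flow in the stock rs-pointcloud example is roughly the following (a minimal sketch; `app_state.tex` stands in for the example's texture helper, and a connected camera is assumed):

```cpp
rs2::pointcloud pc;   // Point-cloud calculator
rs2::points points;   // Holds the last calculated point cloud
rs2::pipeline pipe;
pipe.start();

auto frames = pipe.wait_for_frames();
auto color = frames.get_color_frame();

pc.map_to(color);                      // Map the color frame onto the point cloud
auto depth = frames.get_depth_frame();
points = pc.calculate(depth);          // Generate the point cloud and texture coordinates

app_state.tex.upload(color);           // Upload the color frame as the OpenGL texture
```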

MartyG-RealSense (Collaborator) commented Aug 7, 2021

Hi @zma69650 If you are adapting RealSense SDK example programs then that may not be the best way to learn about building a project of your own. This is because the SDK examples use a build process that is different from the way a program that you have built yourself from the beginning would be compiled.

Basically, if one of the example programs is modified then you can go to the librealsense SDK's build folder and input a make command. This rebuilds only what has been changed instead of rebuilding the entire SDK.

#2946 (comment)

A RealSense user who commented on the above case later posted their own guide to the example rebuilding process.

#6797


When you feel ready to start creating your own RealSense project in Visual Studio, a RealSense community member has created a guide to setting up a Visual Studio project with the three RealSense props files provided in the SDK folder.

https://github.com/EduardoWang/visual-studio-real-sense-record-and-playback-project


MartyG-RealSense (Collaborator) commented

In regard to your code, I cannot see anything obviously wrong with your custom stream configuration. You have set up the cfg definition, correctly formatted the cfg.enable_stream lines (which will generate both a depth and a color stream) and then passed 'cfg' to the pipe start instruction so that the program will use your custom configuration.

In regard to your modification of rs-post-processing, would it be possible to paste your modified script in a comment below so that I can check the code, please?

zma69650 (Author) commented Aug 8, 2021

Thank you for your reply @MartyG-RealSense. In order to color the point cloud, I enabled a color stream. I also defined a queue for storing color frames, and the processing thread adds frames to it as it reads data.
```cpp
int main(int argc, char * argv[]) try
{

// Create a simple OpenGL window for rendering:
window app(1280, 720, "RealSense Post Processing Example");
ImGui_ImplGlfw_Init(app, false);

// Construct objects to manage view state
glfw_state original_view_orientation{};
glfw_state filtered_view_orientation{};

// Declare pointcloud objects, for calculating pointclouds and texture mappings
rs2::pointcloud original_pc;
rs2::pointcloud filtered_pc;

// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
rs2::config cfg;
// Use a configuration object to request depth and color from the pipeline
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 0, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 640, 0, RS2_FORMAT_BGR8, 30);
// Start streaming with the above configuration
pipe.start(cfg);

// Declare filters
rs2::decimation_filter dec_filter;  // Decimation - reduces depth frame density
rs2::threshold_filter thr_filter;   // Threshold  - removes values outside recommended range
rs2::spatial_filter spat_filter;    // Spatial    - edge-preserving spatial smoothing
rs2::temporal_filter temp_filter;   // Temporal   - reduces temporal noise

// Declare disparity transform from depth to disparity and vice versa
const std::string disparity_filter_name = "Disparity";
rs2::disparity_transform depth_to_disparity(true);
rs2::disparity_transform disparity_to_depth(false);

// Initialize a vector that holds filters and their options
std::vector<filter_options> filters;

// The following order of emplacement will dictate the orders in which filters are applied
filters.emplace_back("Decimate", dec_filter);
filters.emplace_back("Threshold", thr_filter);
filters.emplace_back(disparity_filter_name, depth_to_disparity);
filters.emplace_back("Spatial", spat_filter);
filters.emplace_back("Temporal", temp_filter);

// Declaring two concurrent queues that will be used to enqueue and dequeue frames from different threads
rs2::frame_queue original_data;
rs2::frame_queue filtered_data;
rs2::frame_queue color_data;
// Declare depth colorizer for pretty visualization of depth data
rs2::colorizer color_map;

// Atomic boolean to allow thread safe way to stop the thread
std::atomic_bool stopped(false);

// Create a thread for getting frames from the device and process them
// to prevent UI thread from blocking due to long computations.
std::thread processing_thread([&]() {
    while (!stopped) // While application is running
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth_frame = data.get_depth_frame(); // Take the depth frame from the frameset
        rs2::frame color_frame = data.get_color_frame(); // Take the color frame for texture mapping
        if (!depth_frame) // Should not happen but if the pipeline is configured differently
            return;       //  it might not provide depth and we don't want to crash

        rs2::frame filtered = depth_frame; // Does not copy the frame, only adds a reference

        /* Apply filters.
        The implemented flow of the filters pipeline is in the following order:
        1. apply decimation filter
        2. apply threshold filter
        3. transform the scene into disparity domain
        4. apply spatial filter
        5. apply temporal filter
        6. revert the results back to depth domain (if the disparity filter was applied).
        Each post-processing block is optional and can be applied independently.
        */
        bool revert_disparity = false;
        for (auto&& filter : filters)
        {
            if (filter.is_enabled)
            {
                filtered = filter.filter.process(filtered);
                if (filter.filter_name == disparity_filter_name)
                {
                    revert_disparity = true;
                }
            }
        }
        if (revert_disparity)
        {
            filtered = disparity_to_depth.process(filtered);
        }

        // Push filtered & original data to their respective queues
        // Note, pushing to two different queues might cause the application to display
        //  original and filtered pointclouds from different depth frames
        //  To make sure they are synchronized you need to push them together or add some
        //  synchronization mechanisms
        filtered_data.enqueue(filtered);
        original_data.enqueue(depth_frame);
        color_data.enqueue(color_frame);
    }
});


// Declare objects that will hold the calculated pointclouds and colored frames
// We save the last set of data to minimize flickering of the view
rs2::frame colored_depth;
rs2::frame colored_filtered;
rs2::points original_points;
rs2::points filtered_points;


// Save the time of last frame's arrival
auto last_time = std::chrono::high_resolution_clock::now();
// Maximum angle for the rotation of the pointcloud
const double max_angle = 15.0;
// We'll use rotation_velocity to rotate the pointcloud for a better view of the filters effects
float rotation_velocity = 0.3f;

while (app)
{



    float w = static_cast<float>(app.width());
    float h = static_cast<float>(app.height());

    // Render the GUI
    render_ui(w, h, filters);

    // Try to get new data from the queues and update the view with new texture
    update_data(original_data, color_data, colored_depth, original_points, original_pc, original_view_orientation, color_map);
    update_data(filtered_data, color_data, colored_filtered, filtered_points, filtered_pc, filtered_view_orientation, color_map);

    draw_text(10, 50, "Original");
    draw_text(static_cast<int>(w / 2), 50, "Filtered");

    // Draw the pointclouds of the original and the filtered frames (if they are available already)
    if (/*colored_depth &&*/ original_points)
    {
        glViewport(0, int(h) / 2, int(w) / 2, int(h) / 2);
        draw_pointcloud(int(w) / 2.f, int(h) / 2.f, original_view_orientation, original_points);
    }
    if (/*colored_filtered &&*/ filtered_points)
    {
        glViewport(int(w) / 2, int(h) / 2, int(w) / 2, int(h) / 2);
        draw_pointcloud(int(w) / 2.f, int(h) / 2.f, filtered_view_orientation, filtered_points);
    }
    // Update time of current frame's arrival
    auto curr = std::chrono::high_resolution_clock::now();

    // Time interval which must pass between movement of the pointcloud
    const std::chrono::milliseconds rotation_delta(40);

    // In order to calibrate the velocity of the rotation to the actual displaying speed, rotate
    //  pointcloud only when enough time has passed between frames
    if (curr - last_time > rotation_delta)
    {
        if (fabs(filtered_view_orientation.yaw) > max_angle)
        {
            rotation_velocity = -rotation_velocity;
        }
        original_view_orientation.yaw += rotation_velocity;
        filtered_view_orientation.yaw += rotation_velocity;
        last_time = curr;
    }
}

// Signal the processing thread to stop, and join
// (Not the safest way to join a thread, please wrap your threads in some RAII manner)
stopped = true;
processing_thread.join();



return EXIT_SUCCESS;

}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
```
In addition, I added a color image queue to the update_data function, following the rs-pointcloud approach. Unfortunately, errors occur when the program runs.

```cpp
void update_data(rs2::frame_queue& data, rs2::frame_queue& color, rs2::frame& colorized_depth, rs2::points& points, rs2::pointcloud& pc, glfw_state& view, rs2::colorizer& color_map)
{
    rs2::frame f, c;
    if (data.poll_for_frame(&f) && color.poll_for_frame(&c)) // Try to take the depth and color frames from the queues
    {
        points = pc.calculate(f); // Generate pointcloud from the depth data
        /*colorized_depth = color_map.process(f);*/ // Colorize the depth frame with a color map
        pc.map_to(c);       // Map the color frame to the point cloud
        view.tex.upload(c); // and upload the texture to the view (without this the view will be B&W)
    }
}
```
When I tested the program, I noticed that I declared the color frame as rs2::frame, while rs-pointcloud uses rs2::video_frame. But I don't know if this is the cause.

MartyG-RealSense (Collaborator) commented

I note that in your script you are requesting streams of resolution 640, 0. This is an invalid resolution request because that resolution is not supported by the RealSense SDK. They should be 640, 480.
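
In other words, the corrected lines would be:

```cpp
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
```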

zma69650 (Author) commented Aug 8, 2021

Thank you for your reply @MartyG-RealSense. I am learning the RealSense API from https://dev.intelrealsense.com/docs/api-how-to. The "Do Processing on a Background-Thread" example has the line `rs2::depth_frame frame;`, but when I wrote it like this an error occurred: there is no default constructor.
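
(A common workaround is to declare the generic rs2::frame type, which is default-constructible, and cast once a frame actually arrives; a minimal sketch, assuming an rs2::frame_queue named `queue` as in that example:)

```cpp
rs2::frame f; // rs2::frame has a default constructor; rs2::depth_frame does not
if (queue.poll_for_frame(&f)) // 'queue' is a hypothetical rs2::frame_queue
{
    rs2::depth_frame depth = f.as<rs2::depth_frame>(); // empty if f is not a depth frame
    if (depth)
    {
        // depth_frame-specific calls are now safe, e.g. distance at the center pixel
        float d = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
    }
}
```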

MartyG-RealSense (Collaborator) commented Aug 8, 2021

Did you use the background thread example as a standalone test script, or incorporate it into your own script, please? If you adapted it into your own script then the { and } brackets have the potential to introduce errors: if too few or too many brackets are added, or they are inserted in the wrong place, the code can break in other parts of the script.

The similar C++ example in the SDK's Frame Buffering Management documentation may be easier to use.

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0#frame_queue
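
A minimal sketch of that frame_queue pattern (assuming a connected camera; a capacity of 1 means older frames are dropped in favor of the latest):

```cpp
#include <librealsense2/rs.hpp>
#include <atomic>
#include <thread>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::frame_queue queue(1); // capacity 1 keeps only the most recent frame
    std::atomic_bool stopped(false);

    // Producer thread: push depth frames from the pipeline into the queue
    std::thread producer([&]() {
        while (!stopped)
        {
            rs2::frameset frames = pipe.wait_for_frames();
            queue.enqueue(frames.get_depth_frame());
        }
    });

    // Consumer: take frames off the queue on the main thread
    for (int i = 0; i < 300; ++i)
    {
        rs2::depth_frame depth = queue.wait_for_frame().as<rs2::depth_frame>();
        // ... process the depth frame here ...
    }

    stopped = true;
    producer.join();
    return 0;
}
```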

zma69650 (Author) commented Aug 8, 2021

Thank you for your reply @MartyG-RealSense. I used the background thread example as a standalone test script. Can I convert rs2::frame to rs2::depth_frame?

MartyG-RealSense (Collaborator) commented

The link below discusses converting rs2::frame into rs2::depth_frame.

#6464
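
In short, the conversion uses the frame's is<>() and as<>() helpers; a minimal sketch, with `f` standing for any rs2::frame you already hold:

```cpp
if (f.is<rs2::depth_frame>()) // check the runtime type before casting
{
    rs2::depth_frame depth = f.as<rs2::depth_frame>();
    float dist = depth.get_distance(320, 240); // distance in meters at pixel (320, 240)
}
```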
