
How to read bag file data from the specified time? #12455

Closed
mengxinglvhappy opened this issue Nov 30, 2023 · 4 comments

Comments


mengxinglvhappy commented Nov 30, 2023



Required Info

Camera Model: D400
Firmware Version: (not provided)
Operating System & Version: Win11
Platform: PC
SDK Version: (not provided)
Language: (not provided)
Segment: (not provided)

Issue Description

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp"          // Include short list of convenience functions for rendering

#include <algorithm>            // std::min, std::max
#define _USE_MATH_DEFINES
#include <math.h>
#include <queue>
#include <unordered_set>
#include <map>
#include <thread>
#include <atomic>
#include <mutex>


#include <iostream>
#include <iomanip>
#include <map>
#include <set>
#include <cstring>
#include <cmath>
#include <limits>
#include <thread>
using namespace std;
using namespace rs2;

// Helper functions
void register_glfw_callbacks(window& app, glfw_state& app_state);

int main(int argc, char* argv[]) try
{
	// Create a simple OpenGL window for rendering:
	window app(1280, 720, "RealSense Pointcloud Example");
	// Construct an object to manage view state
	glfw_state app_state;
	// register callbacks to allow manipulation of the pointcloud
	register_glfw_callbacks(app, app_state);

	// Declare pointcloud object, for calculating pointclouds and texture mappings
	rs2::pointcloud pc;
	// We want the points object to be persistent so we can display the last cloud when a frame drops
	rs2::points points;
	rs2::align align_to(RS2_STREAM_COLOR);

	// Declare RealSense pipeline, encapsulating the actual device and sensors
	rs2::pipeline pipe;
	// Start streaming from a recorded bag file instead of a live camera
	rs2::config cfg;

	cfg.enable_device_from_file("D:/11.27/20231127_155528.bag");

	auto profile = pipe.start(cfg);
	int i = 0;
	while (app) // Application still alive?
	{
		// Wait for the next set of frames from the camera
		auto frames = pipe.wait_for_frames();

		auto depth = frames.get_depth_frame();
		auto color = frames.get_color_frame();

		rs2_intrinsics intr_depth = depth.get_profile().as<rs2::video_stream_profile>().get_intrinsics(); // Calibration data
		rs2_intrinsics intr_color = color.get_profile().as<rs2::video_stream_profile>().get_intrinsics(); // Calibration data
		rs2_extrinsics color_extrin_to_depth = color.get_profile().as<rs2::video_stream_profile>().get_extrinsics_to(depth.get_profile().as<rs2::video_stream_profile>());
		rs2_extrinsics depth_extrin_to_color = depth.get_profile().as<rs2::video_stream_profile>().get_extrinsics_to(color.get_profile().as<rs2::video_stream_profile>());


		// pixel in color image
		//float u = 0.50390625; // 645
		//float v = 0.3972222222222; // 286

		int w = color.get_width();
		int h = color.get_height();

		int c = 582;
		int r = 368;

		float pixel[2];
		pixel[0] = c;
		pixel[1] = r;

		float point_in_color_coordinates[3]; // 3D point in color sensor coordinates
		float point_in_depth_coordinates[3]; // 3D point in depth sensor coordinates

		// method#1
		frames = frames.apply_filter(align_to);
		auto color_aligned = frames.get_color_frame();

		// For cameras that don't have RGB sensor, we'll map the pointcloud to infrared instead of color
		if (!color_aligned)
			color_aligned = frames.get_infrared_frame();

		// Tell pointcloud object to map to this color frame
		pc.map_to(color_aligned);

		auto depth_aligned = frames.get_depth_frame();

		// Generate the pointcloud and texture mappings
		points = pc.calculate(depth_aligned);

		rs2_intrinsics intr_depth_aligned = depth_aligned.get_profile().as<rs2::video_stream_profile>().get_intrinsics(); // Calibration data
		rs2_intrinsics intr_color_aligned = color_aligned.get_profile().as<rs2::video_stream_profile>().get_intrinsics();

		// pixel in color image
		w = color_aligned.get_width();
		h = color_aligned.get_height();

		int index = c + w * r;

		// method #3
		// find the corresponding depth point in the vertex array
		const rs2::vertex* tv = points.get_vertices();
		float x = tv[index].x;
		float y = tv[index].y;
		float z = tv[index].z;

		point_in_color_coordinates[0] = x;
		point_in_color_coordinates[1] = y;
		point_in_color_coordinates[2] = z;

		// transform 3d point from color coordinates into depth coordinates
		rs2_transform_point_to_point(point_in_depth_coordinates, &color_extrin_to_depth, point_in_color_coordinates);

		std::cout << " uv(" << c << "," << r << ") 3d point in depth sensor coordinates dp[" << point_in_depth_coordinates[0] << "," << point_in_depth_coordinates[1] << "," << point_in_depth_coordinates[2] << "]" << std::endl;
		// Upload the color frame to OpenGL
		app_state.tex.upload(color);
	
		// Draw the pointcloud
		draw_pointcloud(app.width(), app.height(), app_state, points);
		i++;
	}
	return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
	std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
	return EXIT_FAILURE;
}
catch (const std::exception& e)
{
	std::cerr << e.what() << std::endl;
	return EXIT_FAILURE;
}

Hello, I used the code above to read the 3D coordinates of a pixel (538, 368) in the RGB image from a bag file. The bag file has a duration of ten minutes, but I am specifically interested in retrieving the coordinates of this pixel in the 3D point cloud at the 8 minutes 40 seconds timestamp. How can I modify the code to read the coordinates of pixel (538, 368) in the 3D point cloud at 8 minutes and 40 seconds?

@MartyG-RealSense
Collaborator

Hi @mengxinglvhappy My understanding from past cases of seeking a specific frame is that it is not practical to jump straight to a specific frame number of a bag file. Instead, the recommendation is to set set_real_time to False, start at the first frame and skip sequentially through the frames until you reach the one that you are seeking to access.

#1579 (comment) and #2630 (comment) should be helpful references.
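A minimal sketch of that sequential-skip approach, reusing the bag path from the code above (the path and target time are placeholders for your own values). rs2::playback provides set_real_time() and get_position(), so playback can be switched to non-real-time mode and frames read until the playback position reaches 8:40:

```cpp
#include <librealsense2/rs.hpp> // RealSense Cross Platform API
#include <chrono>
#include <iostream>

int main() try
{
	rs2::pipeline pipe;
	rs2::config cfg;
	// Placeholder path -- substitute your own recording
	cfg.enable_device_from_file("D:/11.27/20231127_155528.bag");
	auto profile = pipe.start(cfg);

	// Obtain the playback device to control how the recording is replayed
	rs2::playback playback = profile.get_device().as<rs2::playback>();
	playback.set_real_time(false); // deliver frames as fast as they can be read

	// Target position: 8 minutes 40 seconds into the file
	auto target = std::chrono::duration_cast<std::chrono::nanoseconds>(
		std::chrono::minutes(8) + std::chrono::seconds(40));

	// Skip sequentially until the playback position reaches the target
	rs2::frameset frames;
	while (playback.get_position() < (unsigned long long)target.count())
	{
		frames = pipe.wait_for_frames();
	}

	// 'frames' now holds the first frameset at (or just past) 8:40;
	// the alignment / pointcloud code from the example above can run on it
	std::cout << "position (ns): " << playback.get_position() << std::endl;
	return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
	std::cerr << "RealSense error: " << e.what() << std::endl;
	return EXIT_FAILURE;
}
```

The SDK also exposes rs2::playback::seek(std::chrono::nanoseconds), which may allow jumping directly to the 8:40 mark, though per the past cases linked above the sequential skip has been the more reliable route.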

@mengxinglvhappy
Author


Is there any way to process the bag file, cut out the first eight minutes, and read the data directly from the eighth minute onward?

@MartyG-RealSense
Collaborator

If you only wanted to keep certain frames and discard the rest then you could use the save_single_frameset instruction to save each frame as an individual 1-frame bag file.

https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1save__single__frameset.html#ab905a1e107b22906d1790d3e0b88b083
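A sketch of how rs2::save_single_frameset might be combined with the skip approach above to keep only the material from 8:40 onward. The path, output prefix, and cutoff are placeholders; repeat playback is disabled so the loop ends when the file does, and note that saving every frameset of a ten-minute bag produces many small files:

```cpp
#include <librealsense2/rs.hpp>
#include <chrono>

int main() try
{
	rs2::pipeline pipe;
	rs2::config cfg;
	// Placeholder path; 'false' disables repeat so playback stops at end-of-file
	cfg.enable_device_from_file("D:/11.27/20231127_155528.bag", false);
	auto profile = pipe.start(cfg);

	rs2::playback playback = profile.get_device().as<rs2::playback>();
	playback.set_real_time(false);

	// Discard everything before 8 minutes 40 seconds
	auto cutoff = std::chrono::duration_cast<std::chrono::nanoseconds>(
		std::chrono::minutes(8) + std::chrono::seconds(40)).count();

	// Each processed frameset is written out as its own 1-frame bag file,
	// named with this prefix plus a frame number
	rs2::save_single_frameset saver("frameset_");

	while (true)
	{
		auto frames = pipe.wait_for_frames();
		if (playback.get_position() < (unsigned long long)cutoff)
			continue; // still inside the first 8:40 -- skip
		saver.process(frames); // save this frameset as a small bag
	}
}
catch (const std::exception&)
{
	// With repeat disabled, the end of playback typically surfaces as a
	// timeout exception from wait_for_frames and lands here
	return EXIT_SUCCESS;
}
```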

@MartyG-RealSense
Collaborator

Hi @mengxinglvhappy Do you require further assistance with this case, please? Thanks!
