
How to get motion extrinsics in python #10180

Closed
johnjohnson1984 opened this issue Jan 23, 2022 · 3 comments

Comments

@johnjohnson1984

Hello, I have an L515 camera and I wish to run this simple script with pyrealsense2. I want to print the translation vector and rotation matrix of the camera as I move it around.

first = get_position_and_orientation()
while True:
    current = get_position_and_orientation()
    print_extrinsics(first, current)

Could you show me how to do that? Thank you.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 23, 2022

Hi @johnjohnson1984 If using Python to retrieve the extrinsics information is not compulsory, then you could access it through the RealSense SDK's rs-enumerate-devices tool by using the command rs-enumerate-devices -c to launch the tool in calibration mode.

(Screenshot: extrinsics output of rs-enumerate-devices -c)

If Python is a requirement, then you could retrieve the same information as shown by rs-enumerate-devices by using the SDK instruction get_extrinsics_to().

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.stream_profile.html#pyrealsense2.stream_profile.get_extrinsics_to

My understanding is that to accomplish this, you set up 'frame' definitions for the stream types that you are going to retrieve information about, and then obtain the intrinsics of those streams from the stream profile. Finally, you use the frame and intrinsic information in a get_extrinsics_to instruction to obtain the translation and rotation between one sensor and another (gyro and depth, for example).

#1231 (comment) has a Python example of such code.

import pyrealsense2 as rs
pipeline = rs.pipeline()
pipe_profile = pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Intrinsics & Extrinsics
depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color_extrin = depth_frame.profile.get_extrinsics_to(color_frame.profile)

# Depth scale - units of the values inside a depth frame, i.e how to convert the value to units of 1 meter
depth_sensor = pipe_profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

# Map depth to color
depth_pixel = [240, 320]   # Arbitrary pixel (x, y)
depth_value = depth_frame.get_distance(depth_pixel[0], depth_pixel[1])  # Depth at that pixel, in meters
depth_point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth_value)
color_point = rs.rs2_transform_point_to_point(depth_to_color_extrin, depth_point)
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, color_point)
pipeline.stop()
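For reference, rs2_transform_point_to_point is just a rigid-body transform. A minimal pure-Python equivalent (a sketch, assuming the column-major 3x3 rotation layout that rs2_extrinsics uses; the numeric values below are made up for illustration) would be:

```python
def transform_point(rotation, translation, point):
    """Apply an rs2_extrinsics-style transform to a 3D point.

    rotation:    9 floats, a column-major 3x3 matrix
                 (the layout of rs2_extrinsics.rotation)
    translation: 3 floats, in meters
    point:       3 floats, the point in the source sensor's frame
    """
    r, t, p = rotation, translation, point
    return [
        r[0] * p[0] + r[3] * p[1] + r[6] * p[2] + t[0],
        r[1] * p[0] + r[4] * p[1] + r[7] * p[2] + t[1],
        r[2] * p[0] + r[5] * p[1] + r[8] * p[2] + t[2],
    ]

# Identity rotation plus a 1 cm offset along x, like a typical
# depth-to-color baseline:
identity = [1, 0, 0, 0, 1, 0, 0, 0, 1]
print(transform_point(identity, [0.01, 0.0, 0.0], [0.5, 0.0, 1.0]))
# point shifted 1 cm along x
```

This makes it visible that the extrinsics are a fixed sensor-to-sensor transform rather than anything that tracks camera motion.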

If your goal is to have a live-updating readout that changes as the camera is moved and rotated, then the above method is unlikely to achieve that: the extrinsics describe the positional relationship between one sensor and another, and the values will not change no matter how the camera is moved or rotated.

It sounds as though you actually want to print the position and rotation of the camera, known as its pose. It is straightforward to obtain the rotation and acceleration (but not the position in 3D space) of an IMU-equipped RealSense camera (D435i, D455 or L515) in Python, using code such as the script in #8492.
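As a rough illustration of the idea (a sketch, not taken from the #8492 script): with the camera held static, the accelerometer reading is dominated by gravity, so pitch and roll (though not yaw) can be estimated from a single sample. The axis convention here is an assumption and varies between camera models:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (radians) from one accelerometer
    sample (m/s^2), treating the measured acceleration as pure
    gravity. Only valid when the camera is not accelerating."""
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Gravity along -y (axis convention assumed for illustration):
print(pitch_roll_from_accel(0.0, -9.81, 0.0))  # → pitch 0, roll -pi/2
```

In a real script these values would come from the rs.stream.accel motion frames, and would typically be fused with the gyro readings for a stable orientation estimate.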

The T265 Tracking Camera is the only RealSense model with built-in support for pose stream data that reports the camera's own rotation and position. If you have the budget for both an L515 and a T265 then you could mount both cameras together on a bracket and obtain the current position and rotation from the T265's pose stream. Since the two cameras would be positioned close together and rotate together with the bracket, the detected position and rotation of the T265 should approximate the L515's rotation and position too.
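To make the bracket idea concrete (the offsets below are hypothetical, not measured): if the bracket fixes a known transform from the T265 to the L515, the L515's world pose is just the T265's pose composed with that fixed offset, which is easy to express with 4x4 homogeneous transforms:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation
    matrix and a 3-element translation (meters)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical values: the T265's pose in the world frame (from its
# pose stream), and a fixed 5 cm bracket offset from T265 to L515.
T_world_t265 = make_T(np.eye(3), [1.0, 0.0, 2.0])
T_t265_l515 = make_T(np.eye(3), [0.05, 0.0, 0.0])

# Chain the transforms: world <- T265 <- L515
T_world_l515 = T_world_t265 @ T_t265_l515
print(T_world_l515[:3, 3])  # L515 position in the world frame
```

Each new T265 pose sample would replace T_world_t265, while T_t265_l515 stays constant as long as the bracket is rigid.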


If you would prefer to obtain camera position from only the L515, there is a guide at the link below for obtaining the relative position of the camera in relation to a map by using ROS. The guide is designed for the D435i model but should be adaptable for L515.

https://shinkansan.github.io/2019-UGRP-DPoom/SLAM

@MartyG-RealSense
Collaborator

Hi @johnjohnson1984 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
