
How to find position (x,y,z) and orientation (Euler angles) of the measurement frame for D435i ? #7884

Closed
CarMineo opened this issue Nov 30, 2020 · 11 comments


@CarMineo

Required Info
Camera Model D435i
Firmware Version 05.12.03.00
Operating System & Version Win 10
Platform PC
Language MATLAB, LabView, C#, C++ or C
Segment Robot

Issue Description

Hello all,
I apologise in advance if my problem is already covered in another issue. I have not been able to find a straightforward solution so far.

I need to manipulate a D435i camera through a robot to reconstruct the geometry of an object, by bringing the device to different locations around the object. I have been able to develop the software required to acquire a point cloud from the D435i at every pose. However, I need an accurate measure of the pose of the device reference system (origin coordinates and Euler angles) in the robot's absolute reference system, in order to translate and rotate each acquired point cloud. In other words, I need to find the fixed offset and rotation between the robot flange and the device reference system. I assume this should be done using a predefined object (e.g. a chessboard pattern in a fixed position) and a calibration routine to find the camera extrinsic parameters. Can anyone point me towards a well-documented algorithm applicable to the D435i?

Thanks,
Carmelo

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 30, 2020

Hi @CarMineo It sounds as though you already know how to position and rotate the individual clouds to align them, using an affine transform. Instead, you want to be able to calculate the transform between the camera device and the flange. Is this correct please?

If you do need details of how to perform an affine transform, an example of one in librealsense is the function rs2_transform_point_to_point

#5583
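For illustration, here is a minimal numpy sketch of the rigid transform that rs2_transform_point_to_point applies (a rotation plus a translation from an extrinsics structure); the rotation and translation values below are made up for the example:

```python
import numpy as np

# Hypothetical extrinsics: a 90-degree rotation about z and a 10 cm shift.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])

def transform_points(points, R, t):
    """Apply the rigid transform p' = R @ p + t to an (N, 3) point cloud."""
    return points @ R.T + t

cloud = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
moved = transform_points(cloud, R, t)
print(moved)  # the point (1, 0, 0) maps to (0.1, 1, 0)
```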

In regard to the flange: this reminds me of a tool that a RealSense team member wrote to work out the extrinsic calibration between a 400 Series camera and a T265 Tracking Camera on a custom mount. I wonder if it may be applicable to this situation if the flange represented the position on the mount where the T265 would have been.

#4355

The use of gimballing with RealSense calibration for the 400 Series cameras was also recently discussed.

https://support.intelrealsense.com/hc/en-us/community/posts/360052504573-Gimbal-for-Realsense-cameras

@CarMineo
Author

Hello MartyG,

thanks for your reply.
Yes, I would like to use an automatic procedure to calculate the transform between the reference system of the camera, centred at the depth measurement origin, and the reference system of my robot flange. This transform is also known as the "tool parameters" of the robot. I could estimate the tool parameters using the CAD geometry of the D435i and the CAD model of the support I designed to mount the camera to the robot. However, that would not be accurate, because the support and the real dimensions of the camera are affected by manufacturing and assembly tolerances. What I am after is a procedure that enables me to calibrate the position of the camera measurement origin and the orientation of the camera, when it is mounted onto the robot flange by means of the support I am using.
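To make the role of the tool parameters concrete, here is a numpy sketch of how the fixed flange-to-camera transform chains with the robot's reported flange pose to place camera measurements in the robot base frame. All matrix values are hypothetical placeholders:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical fixed tool transform (flange -> camera), the calibration target.
T_flange_cam = make_T(np.eye(3), [0.0, 0.05, 0.10])

# The robot controller reports the flange pose in the base frame at each shot.
T_base_flange = make_T(np.eye(3), [0.5, 0.0, 0.3])

# Chaining the two gives the camera pose in the base frame:
T_base_cam = T_base_flange @ T_flange_cam

# A point measured 20 cm in front of the camera, in homogeneous coordinates,
# lands in the robot base frame via the chained transform.
p_cam = np.array([0.0, 0.0, 0.2, 1.0])
p_base = T_base_cam @ p_cam
print(p_base)
```

Every point of a cloud acquired at a given pose gets mapped through the same chained transform, so an error in T_flange_cam biases every reconstructed cloud, which is why a calibrated rather than CAD-derived value matters.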

I am going to see the links you shared.
Is there anything else that springs to your mind?

Many thanks,
Carmelo

@MartyG-RealSense
Collaborator

I carefully considered the information that you kindly provided. This is not one of my specialist areas of knowledge, but I wonder if what you are describing is hand-eye calibration, which can be used to get the transformation between the end-effector of a robot arm and the camera.

If that is the case, the links below have information about hand-eye calibration in regard to RealSense:

#3569 (comment)

https://support.intelrealsense.com/hc/en-us/community/posts/360051325334/comments/360013640454
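Hand-eye calibration solves the classic equation A X = X B, where X is the unknown flange-to-camera transform, A is a relative motion of the flange (from robot feedback), and B is the corresponding relative motion observed by the camera against a static target. The numpy sketch below synthesises two poses with made-up values and checks that the relation holds; it is a consistency check, not a solver:

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# The unknown we want to calibrate: camera pose in the flange frame.
X = make_T(rot_z(0.1), [0.02, 0.03, 0.05])

# A static calibration target and two robot flange poses (all hypothetical).
T_base_target = make_T(rot_z(0.7), [0.6, -0.1, 0.0])
T_bf1 = make_T(rot_z(0.3), [0.4, 0.1, 0.3])
T_bf2 = make_T(rot_z(-0.5), [0.5, 0.2, 0.35])

def target_in_cam(T_base_flange):
    # camera -> target = inv(base -> flange @ flange -> cam) @ base -> target
    return np.linalg.inv(T_base_flange @ X) @ T_base_target

# Relative flange motion A and relative camera motion B between the two poses:
A = np.linalg.inv(T_bf1) @ T_bf2
B = target_in_cam(T_bf1) @ np.linalg.inv(target_in_cam(T_bf2))

print(np.allclose(A @ X, X @ B))  # True: the hand-eye equation holds
```

With several such (A, B) pairs gathered from real robot/camera poses, a hand-eye solver (e.g. OpenCV's calibrateHandEye) recovers X.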

@MartyG-RealSense
Collaborator

Hi @CarMineo Do you require further assistance with this case, please? Thanks!

@CarMineo
Author

CarMineo commented Dec 9, 2020

Hello Marty,
thank you for the additional information. I am working through implementing the hand-eye calibration procedure, as you suggested. It should meet my requirements.
Best regards,
Carmelo

@MartyG-RealSense
Collaborator

Okay, thanks very much for the update @CarMineo

@MartyG-RealSense
Collaborator

Hi @CarMineo Do you require further assistance with this case, please? Thanks!

@CarMineo
Author

Hello @MartyG-RealSense,

thanks for checking how this is going. I have found a fairly straightforward procedure to calibrate the centre of the RGB sensor of the D435i. It is based on the MATLAB Camera Calibrator app (https://it.mathworks.com/help/vision/ref/cameracalibrator-app.html). I manipulate the D435i with a robotic arm to take 10 pictures of a chessboard pattern (https://github.com/opencv/opencv/blob/master/doc/pattern.png) from different positions. I record the colour frame and the robot positional feedback (the position of the robot flange on which the D435i is mounted) at every pose. The MATLAB Camera Calibrator allows me to compute the camera intrinsics, extrinsics, and lens distortion parameters, so it also provides the pose from which each colour frame was acquired. Then I can compute the transform between the RGB sensor pose and the robot flange pose, which gives the robot tool parameters I need.
The only thing that is still unclear to me is whether the origin of the depth measured by the D435i coincides with the centre of the RGB sensor. If that is not the case, what is the link (the transform or the offset) between the centre of the RGB sensor and the depth origin?

Best regards,
Carmelo

@MartyG-RealSense
Collaborator

Hi @CarMineo Great news that you were successful. Thanks so much for sharing the details of your solution for the benefit of RealSense community members. :)

The origin point of the 400 Series camera is always the center of the left IR sensor. The link below explains the coordinate system relative to this origin point.

#7279 (comment)
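Since the depth origin is the left IR imager rather than the RGB sensor, a tool transform calibrated against the RGB sensor needs the RGB-to-depth extrinsic folded in. On a live camera that extrinsic can be queried with pyrealsense2 (stream_profile.get_extrinsics_to between the color and depth profiles); the numpy sketch below uses a hypothetical near-identity rotation and a rough baseline shift purely for illustration:

```python
import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Tool transform found with the MATLAB calibrator: flange -> RGB sensor
# (hypothetical values).
T_flange_rgb = make_T(np.eye(3), [0.0, 0.04, 0.09])

# Hypothetical extrinsic RGB sensor -> depth origin (left IR imager).
# On real hardware, query this from the device instead of hard-coding it.
T_rgb_depth = make_T(np.eye(3), [-0.015, 0.0, 0.0])

# Tool transform expressed relative to the depth measurement origin:
T_flange_depth = T_flange_rgb @ T_rgb_depth
print(T_flange_depth[:3, 3])
```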

@MartyG-RealSense
Collaborator

Hi @CarMineo Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
