
Understanding calibration tools in RealSense Viewer for correcting distortion #11596

Closed
WRWA opened this issue Mar 22, 2023 · 19 comments

WRWA commented Mar 22, 2023

Required Info
Camera Model D415
Firmware Version 05.13.00.50
Operating System & Version Linux (Ubuntu 20)
Kernel Version (Linux Only) 5.15.0
Platform PC
SDK Version 2.53.1
Language python
Segment Robot

Issue Description

I am working on camera-robot calibration for bin picking, and I want to correct the distortion of the camera's color frame. I know that there are several calibration tools in the RealSense Viewer and the Dynamic Calibration tool, but I'm not sure what each one does. I read the documentation, but I still don't understand why there are both an on-chip calibration and a focal length calibration. Can you tell me what results the on-chip calibration and the focal length calibration give?

Also, I tried the on-chip calibration, but the distortion is not corrected enough, so I want to do the calibration on my own. Is there a way to perform distortion correction without using the provided calibration tools? I know there are functions in the OpenCV library for this, but I'm not sure how to apply them since it's a stereo camera.

@MartyG-RealSense
Collaborator

Hi @WRWA On-Chip calibration improves depth image quality, whilst focal length calibration provides a solution for cases of focal length imbalance. The tare type of calibration improves depth measurement accuracy. Of these three types of calibration, typically On-Chip will be the type that is most commonly used.

A key difference between Dynamic Calibration and On-Chip is that Dynamic Calibration can calibrate both depth and RGB, whilst On-Chip only calibrates depth. So using On-Chip to try to improve the RGB image would not have an effect.

The set of Self Calibration tools (On-Chip, Tare, Focal Length) do not require the installation of a separate software package as they are built into the camera firmware driver. On-Chip can also provide a quick and easy snapshot of calibration health by returning a health check score.

In regard to removing the distortion model in OpenCV, a RealSense user who tried it reported that it made minimal difference to the image.

If you are seeking to correct the color image then the Dynamic Calibration tool would be the appropriate one to use. This tool also provides a robust calibration of the sensors.

@WRWA
Author

WRWA commented Mar 23, 2023

Thank you for your quick response. I have an additional question: what format are the intrinsic parameter values that I can find in the RealSense Viewer? I guess they use a different format than the matrix below. Can you tell me exactly what each value is?

[[f_x  skew_c·f_x  c_x],
 [0    f_y         c_y],
 [0    0           1  ]]

@MartyG-RealSense
Collaborator

The user guide for the Dynamic Calibration tool provides the following information.


Intrinsic

Focal length - specified as [fx; fy] in pixels for left, right, and RGB cameras

Principal point - specified as [px; py] in pixels for left, right, and RGB cameras

Distortion - specified as Brown's distortion model [k1; k2; p1; p2; k3] for left, right, and RGB cameras

Extrinsic

RotationLeftRight - rotation from right camera coordinate system to left camera coordinate system,
specified as a 3x3 rotation matrix

TranslationLeftRight - translation from right camera coordinate system to left camera coordinate
system, specified as a 3x1 vector in millimeters

RotationLeftRGB - rotation from RGB camera coordinate system to left camera coordinate system,
specified as a 3x3 rotation matrix

TranslationLeftRGB - translation from RGB camera coordinate system to left camera coordinate
system, specified as a 3x1 vector in millimeters


For a D415, the five distortion coefficients will all be set to zero, for reasons given at #1430
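As a minimal sketch (my own illustration, not SDK code) of how Brown's [k1; k2; p1; p2; k3] model listed above maps a normalized image coordinate, with k1, k2, k3 as the radial terms and p1, p2 as the tangential terms; with all five coefficients zeroed, as reported for a D415, the mapping is the identity:

```python
def brown_distort(x, y, coeffs):
    """Map a normalized coordinate (x, y) through Brown's distortion model.

    coeffs is [k1, k2, p1, p2, k3], matching the order used by the
    Dynamic Calibration tool's user guide.
    """
    k1, k2, p1, p2, k3 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# All-zero coefficients (the D415 case) leave the point unchanged
print(brown_distort(0.3, -0.2, [0.0, 0.0, 0.0, 0.0, 0.0]))
```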

@MartyG-RealSense
Collaborator

Hi @WRWA Do you require further assistance with this case, please? Thanks!

@WRWA
Author

WRWA commented Mar 29, 2023

Yes, I just want to know which value each element of the 3x3 matrix that I can see in the RealSense Viewer corresponds to.

Intrinsic

Focal length - specified as [fx; fy] in pixels for left, right, and RGB cameras
Principal point - specified as [px; py] in pixels for left, right, and RGB cameras
Distortion - specified as Brown's distortion model [k1; k2; p1; p2; k3] for left, right, and RGB cameras

@MartyG-RealSense
Collaborator

You can see the values of Focal Length and Principal Point in the Viewer's camera calibration pop-up window by scrolling down using the slider at the side of the pop-up, as they are hidden from view when the window first opens. Focal Length and Principal Point in the pop-up will be equivalent to the same-named settings in the Dynamic Calibration tool.

The pop-up does not list the five distortion coefficients but as mentioned above, they will all have a value of zero on a D415.

More information about this subject can be found in the RealSense SDK's Projection documentation at the link below.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20

The documentation advises that the principal point is the center-point of the camera's projection. The center of projection is not necessarily the center of the image. The focal length is a multiple of pixel width and height, described by fx and fy. The fx and fy fields are allowed to be different (though they are commonly close).
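Putting that together, the Focal Length and Principal Point fields map onto the standard 3x3 pinhole intrinsic matrix as sketched below; the numbers are placeholders in the style of values seen in this thread, not from a real device:

```python
import numpy as np

# Placeholder intrinsics (fx, fy in pixels; ppx, ppy = principal point)
fx, fy = 903.1, 903.3
ppx, ppy = 636.7, 362.1

# Standard pinhole intrinsic matrix built from those fields
K = np.array([[fx, 0.0, ppx],
              [0.0, fy, ppy],
              [0.0, 0.0, 1.0]])

# Projecting a 3D point (X, Y, Z) in the camera frame to pixel coordinates
X, Y, Z = 0.1, -0.05, 0.5  # metres
u = fx * X / Z + ppx
v = fy * Y / Z + ppy
print(u, v)
```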

@WRWA
Author

WRWA commented Mar 30, 2023

Thank you for your help, I found the Focal Length and Principal Point in the Viewer. I still have some questions; I guess my question wasn't clear enough. I wanted to know which elements of the 3x3 matrices of the left intrinsics and right intrinsics correspond to which values. As far as I know, the intrinsic matrix is [[f_x skew_c·f_x c_x], [0 f_y c_y], [0 0 1]], but the intrinsics matrix I can find in the Viewer looks different in format from the one I just mentioned. For example, intrinsic matrix[1][0] should be 0, but the left intrinsics[1][0] shows 0.507.

As I mentioned before, I'm trying to correct distortion of the color frame. I used the opencv library to get the undistorted image and compared it to the original color frame and didn't see much difference visually, but I need to see what the difference is in the camera-robot calibration.

import cv2

# mtx and dist are the camera matrix and distortion coefficients from an
# earlier cv2.calibrateCamera() run on the chessboard images
img = cv2.imread('./data/chess/left12.jpg')
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

So I used the above code to get the new intrinsic matrix and the distortion coefficients, and I also got the undistorted image with dst = cv2.undistort(img, mtx, dist, None, newcameramtx). I was wondering if I could get the same result by modifying the camera calibration data directly, without using the OpenCV function.

Camera matrix:
[[903.11292711   0.         636.74337157]
 [  0.         903.29942395 362.09427948]
 [  0.           0.           1.        ]]
Distortion coefficients:
[[ 0.16739003 -0.5474027   0.00102131 -0.00426849  0.55452073]]
New camera matrix:
[[916.43481445   0.         630.85916934]
 [  0.         911.37298584 362.27576711]
 [  0.           0.           1.        ]]

This is the value I got from the camera calibration, is there any way to modify and apply this directly within the RealSense viewer?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 30, 2023

If you have a set of calibration values that you wish to use in the Viewer's Calibration Data window then you can edit them and write them to the camera hardware with the following procedure. This information only relates to depth though and not color.

  1. Enable the depth stream.

  2. Open the Calibration Data window and left-click on the box beside 'I know what I'm doing' to turn it blue (enabled). This enables the 'Write Table' button for writing a calibration to the camera.

  3. Edit individual values by left-clicking on them and typing in the new value.

  4. Click 'Write Table' to write the edited calibration to the camera hardware's calibration table where the calibration information is held in a small storage space.


An alternative approach in Python is to define custom calibration values that are stored in the computer's memory instead of being written to the camera hardware. #4061 provides an example of Python scripting for doing so.


A third possibility may be to export a calibration containing depth and RGB information from the 'CustomRW' program that is packaged with the RealSense Dynamic Calibration tool (which calibrates both depth and RGB), edit the XML file's values, and then import it back through CustomRW to write the custom calibration to the camera hardware.

The Dynamic Calibration tool can be installed on Linux using instructions on page 14 onwards of the tool's user guide.

https://www.intel.com/content/www/us/en/support/articles/000026723/emerging-technologies/intel-realsense-technology.html

@WRWA
Author

WRWA commented Mar 31, 2023

I'll look into it. Thank you for your help!

@WRWA WRWA closed this as completed Mar 31, 2023
@Tranthanhbao198


Hi @MartyG-RealSense,

Can you help me, please? I have a D435 camera and a robot arm. I want to fix the camera in a certain place, not on the robot arm, and then calibrate the camera and the robot arm. I'm not sure how to do it. Can you please help me? Thank you!
Also, I think calibration means finding the camera's position in the robot arm's coordinate system, right? Does this mean finding a 4x4 matrix that defines the camera's pose in the robot arm's coordinate system?

@MartyG-RealSense
Collaborator

Hi @Tranthanhbao198 The calibration tools mentioned above are not related to robot arms. They are used to improve the quality or the accuracy of the camera's images. If you want to calibrate the camera away from the robot arm then it is fine to use these tools.

When the camera is mounted on a robot arm, a different type of calibration called hand-eye calibration is typically used. The link below provides some examples of such tools.

https://support.intelrealsense.com/hc/en-us/community/posts/360051325334/comments/360013640454

@Tranthanhbao198


Hi @MartyG-RealSense,
Thank you for your reply. But I don't use the camera on a robot arm, so which tool should I use to calibrate the camera away from the arm?
Thank you so much.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 5, 2024

If you will need to calibrate both the depth and RGB then you should use the Dynamic Calibration tool.

If you only need to calibrate depth then either Dynamic Calibration or On-Chip will be fine. You may find that On-Chip is easier as it can be done from within the RealSense Viewer tool, so you do not need to install a separate calibration software package because the On-Chip calibration is built into the camera's firmware driver.

@Tranthanhbao198


But I still don't understand. I think that when the camera detects an object, it captures the object's pose matrix in the camera's coordinate system. Calibrating the camera with the robot means determining the camera's pose matrix in the robot's coordinate system. Multiplying the two matrices then gives the object's pose matrix in the robot's coordinate system. So I'm wondering whether the RealSense calibration tools you mentioned earlier have any impact on these two calibration steps? Please give me some advice. Thank you.
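The chain being described here can be written as a product of 4x4 homogeneous transforms. A minimal numpy sketch with made-up matrices (in practice T_base_cam would come from a camera-robot calibration such as hand-eye, and T_cam_obj from the camera's object detection; the RealSense tools above affect neither matrix directly, they only improve the images that feed the detection):

```python
import numpy as np

# Hypothetical camera pose in the robot base frame (from camera-robot calibration)
T_base_cam = np.array([[1.0, 0.0,  0.0, 0.5],
                       [0.0, 0.0, -1.0, 0.2],
                       [0.0, 1.0,  0.0, 0.4],
                       [0.0, 0.0,  0.0, 1.0]])

# Hypothetical object pose detected by the camera, in the camera frame
T_cam_obj = np.array([[1.0, 0.0, 0.0,  0.0],
                      [0.0, 1.0, 0.0, -0.1],
                      [0.0, 0.0, 1.0,  0.6],
                      [0.0, 0.0, 0.0,  1.0]])

# Chaining the two gives the object pose in the robot base frame
T_base_obj = T_base_cam @ T_cam_obj
print(T_base_obj[:3, 3])  # object position in base coordinates
```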

@MartyG-RealSense
Collaborator

Calibration with the Dynamic Calibration or On-Chip tools does not involve robot arms or detecting an object. You print off a target image onto paper and stick it on a wall, or display a digital target image on the screen of an Android mobile device, and point the camera at the target image.

@jasmeet0915

Hi @MartyG-RealSense,

After going through multiple resources and forums, I had some doubts regarding calibrating the d435 realsense camera.
Earlier in this thread you mentioned that self-calibration calibrates depth while the dynamic-calibrator calibrates both depth and RGB. From what I understood after going through the self-calibration docs and dynamic calibrator user guide:

  • Self-calibration offers on-chip calibration and tare calibration, which can be used to calibrate the depth data, resulting in less noisy and more accurate depth measurements
  • On the other hand, the dynamic calibrator calibrates the camera extrinsic parameters and calibrates Depth to RGB for UV Mapping.

So my questions are (do correct me if anything I mentioned is incorrect):

  • Since I could not find any details on how the dynamic-calibrator calibrates the depth, does it follow the same procedure as on-chip calibration? If not, how is it different? Is calibrating the extrinsics its way of calibrating the depth?
  • The dynamic-calibrator user guide mentions that it only calibrates the camera extrinsics and not the camera intrinsics. So is there any other tool that calibrates the camera intrinsics? Also, is calibration of the camera extrinsics and intrinsics supported in self-calibration? If not, are there any plans on the roadmap to add support? It would help a lot to have one process for calibration and would allow automation of the complete process.
  • The dynamic-calibrator user guide mentions that it calibrates Depth-to-RGB for UV mapping. Is this what you meant by saying that it calibrates both depth and RGB? Is this only helpful for applications like visual mapping and VSLAM, where mapping the RGB data onto depth is required, and not for other vision applications like marker detection, object detection, segmentation, etc.?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 4, 2024

Hi @jasmeet0915 Thanks very much for your questions.

Self-calibration offers on-chip calibration and tare calibration that can be used to calibrate the depth data which results in less noisy and has accurate depth measurements

On the other hand, the dynamic calibrator calibrates the camera extrinsic parameters and calibrates Depth to RGB for UV Mapping.

Your understanding of the above points is correct.


On-Chip and Dynamic Calibration have different approaches to calibration and use different target images, but the outcome of the calibration process is similar. As you highlighted, a key difference of the Dynamic Calibration tool is that it can calibrate RGB as well as depth, whilst On-Chip only calibrates depth.

Key differences with the On-Chip system are that (1) it provides a health check score that helps you to decide whether to write a calibration to the camera or re-run the calibration process to try to achieve a better score; and (2) On-Chip is built into the camera's firmware driver and so has the advantage of not needing to be installed as a separate program.

The On-Chip calibration tool used to offer the choice of calibrating intrinsics or extrinsics up until version 2.49.0 of the RealSense SDK. From 2.50.0 onwards the interface of the On-Chip calibration system was streamlined to make it easier to understand and that choice of intrinsics or extrinsics was removed. So you would need to use SDK or Viewer 2.49.0 and have the appropriate firmware installed in the camera to make use of that feature.

There is a special OEM version of the Dynamic Calibration tool that calibrates both intrinsics and extrinsics but it is only available as part of Intel's $1500 USD Calibration Target Board product which is aimed at engineering departments and manufacturing facilities. For most RealSense users, only calibrating the extrinsics is fine for achieving a good calibration as it is the extrinsics that have the most impact on calibration.

https://store.intelrealsense.com/buy-intel-realsense-d400-cameras-calibration-target.html


When talking about RGB calibration, I am referring to the 'Targeted Rectification Phase' on pages 42-47 of the Dynamic Calibration tool's user guide. The guide does not specify what the precise benefits of RGB calibration are.

https://www.intel.com/content/www/us/en/support/articles/000026723/emerging-technologies/intel-realsense-technology.html

@jasmeet0915

Hi @MartyG-RealSense Thanks for your elaborate answer.
I am using the RealSense D435 camera for AprilTag marker detection and pose calculation. While experimenting I found that the calculated pose of the marker (computed with OpenCV's solvePnP, which uses the camera intrinsic parameters) changes as I rotate the camera, essentially viewing the marker from different areas of the image. My guess is that this happens because the intrinsic parameters are off. I am using the ones that are published directly to camera_info by the realsense-ros wrapper.
I also found this thread, which shows that the RGB image is not calibrated and the distortion coefficients are set to 0. Since that thread is quite old (2018), I just wanted to confirm: is that still the case, and is the D435 camera's RGB not intrinsically calibrated?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 13, 2024

On D435 the coefficients are all zero, yes. They are zeroed artificially, for reasons given in the discussion that you linked to at #1430 (comment).

The camera's depth and RGB would have been calibrated in the factory though. However, sensors can become miscalibrated if the camera receives a physical shock such as a hard knock, drop on the floor or severe vibration. A very high temperature surge could also miscalibrate. It can be corrected by performing a recalibration or by resetting the camera to its factory-new default calibration.

RealSense calibration tools (On-Chip calibration in the RealSense Viewer or the Dynamic Calibration software package) will calibrate extrinsics, as these have the most impact on the depth image. So resetting to factory-new calibration using the instructions at #10182 (comment) may be the best option if you want to reset the intrinsics.
