Extrinsic/Intrinsic difference between raw calibration and reported by librealsense #10323
Comments
Hi @shphilippe. Typically when this subject has been discussed in past cases, it has been recommended that the synchronization with an external RGB camera is performed in C++ using the RealSense SDK's software_device() interface. Examples of this are at #4202, #8389 and #10067.
We do have a custom synchronization using the external sync signal and the kernel timestamp of each frame. The synchronization is not an issue (we verified it using a fast-moving target). What I want to know is whether the ASIC changes the world center between the raw left IR calibration and the left IR depth point (i.e. is there any homography between the two, and if so, which one?).
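The custom synchronization described above — pairing frames from the two cameras by their kernel timestamps — can be sketched roughly as follows. The helper name, the millisecond units and the skew tolerance are illustrative assumptions, not details from the actual setup:

```python
import bisect

def match_frames(depth_ts, rgb_ts, max_skew_ms=5.0):
    """Pair each depth timestamp with its nearest RGB timestamp.

    depth_ts, rgb_ts: sorted lists of kernel timestamps (milliseconds).
    Returns (depth_ts, rgb_ts) pairs whose skew is within max_skew_ms.
    Hypothetical helper for illustration only.
    """
    pairs = []
    for t in depth_ts:
        i = bisect.bisect_left(rgb_ts, t)
        # the nearest neighbour is either rgb_ts[i-1] or rgb_ts[i]
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(rgb_ts):
                if best is None or abs(rgb_ts[j] - t) < abs(rgb_ts[best] - t):
                    best = j
        if best is not None and abs(rgb_ts[best] - t) <= max_skew_ms:
            pairs.append((t, rgb_ts[best]))
    return pairs
```

With an external hardware sync signal the residual skew should be small, so a tight tolerance is enough to reject dropped frames.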
The left infrared sensor is always pixel-perfect aligned and calibrated with the depth map, as described in point 2 of the section of Intel's camera tuning guide linked to below.
Is it true even for the unrectified raw IR? This document describes the rectified stream.
It should be, as left infrared and depth are originating from the same physical imaging sensor component. |
Well, I need to know for sure, because right now the right->left extrinsic does change:
while the following script:
outputs something totally different (even the translation changes):
I know that this can be due to the right imager being projected into the left image plane. But in the RealSense Viewer, in the calibration data window, there is a "World to Left Rotation" which is not the identity matrix. And this makes me think that there is some kind of homography between the unrectified left IR and the rectified left IR.
There is a case that references homography and rectified / unrectified at #1526 if you have not seen it already. Within that discussion, advice given by a RealSense team member at #1526 (comment) states: "Rectification is done by the ASIC and introduces a homography transformation from Y16 to Y8 view. It does not affect the distortion model of the frame, just reprojecting them into a shared virtual plane". Other than that case though, I am not familiar with the subject of homography and so do not have anything that I can add regarding that particular question, unfortunately.
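The homography mentioned in the quoted comment can be written down in its general textbook form. Assuming (for illustration only, not confirmed as the ASIC's actual pipeline) pinhole matrices K_raw and K_rect and a rectifying rotation R_rect, the planar mapping between the undistorted raw view and the rectified view is x_rect ~ K_rect · R_rect · K_raw⁻¹ · x_raw:

```python
import numpy as np

def rectification_homography(K_raw, K_rect, R_rect):
    """Planar homography mapping undistorted raw-view (Y16) pixels to
    rectified-view (Y8) pixels. Standard stereo-rectification form;
    an illustrative sketch, not the ASIC's published algorithm."""
    return K_rect @ R_rect @ np.linalg.inv(K_raw)

def apply_h(H, u, v):
    """Apply a 3x3 homography to a pixel coordinate (u, v)."""
    x = H @ np.array([u, v, 1.0])
    return x[0] / x[2], x[1] / x[2]
```

Note that with an identity rotation and identical intrinsics the homography collapses to the identity, which is why a non-identity "World to Left Rotation" implies the raw and rectified views are genuinely different.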
Thanks for the link to the GitHub comment. It's what I was afraid of. I thus need more help from Intel in order to know how I can adjust my extrinsics computed on the Y16 stream to compensate for this, and be able to compute one extrinsic that I can use on the depth stream. Who should I contact at Intel?
I am the point of contact for your case. I will continue to do my best to assist your queries. I conducted further research in regard to extrinsic adjustment, and I wonder whether an SDK instruction called rs2_override_extrinsics() that is used for depth to RGB calibration may meet your needs. The SDK's rs_sensor.h file provides an example of its use. I will be happy to continue this support discussion when I return in 7 hours from the time of writing this.
I'm not sure I understand how to use this function; I'm not looking to override any calibration inside the SDK. Then I use Intel.Realsense.CustomRW to write the D410 left intrinsics, D410 right intrinsics and D410 extrinsics into the D410. After testing this calibration, it is in our case better than the one I can get from the Intel OEM calibration target. When I use the D410 in the product, I get a depth stream from it at 640x480. This depth stream has the exact same position ("pixel perfect") as the left IR at Y8 (rectified IR). I want to align the external RGB camera with the depth stream. I thus need the following transformation: depth stream -> external RGB sensor, but I only have D410 left (from Y16@1920x1080) -> RGB sensor. Thus, Intel needs to tell me (or provide a binary which computes it) how to transform the extrinsic "D410 left (from Y16@1920x1080) -> external RGB sensor" into "D410 left (from Y8@1920x1080) -> external RGB sensor".
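Under the assumption (hypothetical, and not confirmed by Intel in this thread) that rectification is a pure rotation about the left imager's optical center — the "World to Left Rotation" reported by the Viewer — the Y16-based extrinsic could be adjusted by composing rigid transforms. A sketch with assumed conventions; the direction of the rectifying rotation would need to be verified against the actual firmware:

```python
import numpy as np

def adjust_extrinsic_to_rectified(R_raw_to_rgb, t_raw_to_rgb, R_world_to_left):
    """Compose 'raw left IR -> RGB' with 'rectified left -> raw left'.

    Assumed model: a rectified-frame point maps to the raw frame as
    p_raw = R_world_to_left.T @ p_rect (pure rotation about the optical
    center). Then p_rgb = R_raw_to_rgb @ p_raw + t_raw_to_rgb, giving:
      R_rect_to_rgb = R_raw_to_rgb @ R_world_to_left.T
      t_rect_to_rgb = t_raw_to_rgb  (unchanged for a center rotation)
    """
    return R_raw_to_rgb @ R_world_to_left.T, t_raw_to_rgb.copy()
```

One consequence of this model is that the translation part of the extrinsic is untouched; only the rotation needs correcting. If the measured translation also changes between Y16 and Y8, the pure-rotation assumption does not hold and Intel's exact pipeline is needed.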
Finding the extrinsics between multiple cameras is typically not a straightforward problem to solve. A couple of ways that it has been approached with RealSense are to point the cameras at fiducial marker images to perform alignment and calibration between them, or to arrange the cameras around a chessboard / checkerboard image to calculate the extrinsics between the cameras:

- Fiducial markers
- Chessboard - demonstrated by the SDK 'box_dimensioner_multicam' Python example: https://dev.intelrealsense.com/docs/box-measurement-and-multi-camera-calibration

When the cameras have been positioned close to each other and facing in the same direction (such as when mounted on the same bracket), another approach that has been taken to finding the extrinsics between them is demonstrated by a Python script for the SDK that was designed to calculate the extrinsics between D435 and T265 RealSense cameras: acb0bbd#diff-d17bb35407d5baa40b3af64e8fe9fbef1c6cfb571502c275143d782914a0bd6f

Another approach may be to use Y8 instead of Y16 and perform an undistort operation to undo the distortion model. OpenCV has undistort capabilities if you wish to investigate that possibility, as the SDK does not have an undistortion feature: https://docs.opencv.org/4.5.2/dc/dbb/tutorial_py_calibration.html A RealSense user at #3880 tried an OpenCV undistort of a RealSense image but found that it made little difference.

Ultimately, if your custom calibration system is already producing good results then it may not be necessary to implement further measures, unless your project requires you to precisely document every detail of how it works.
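For reference, the kind of point undistortion OpenCV performs (e.g. cv2.undistortPoints with the Brown-Conrady model) can be sketched without OpenCV as a small fixed-point iteration. The iteration count and the (k1, k2, p1, p2, k3) parameter ordering here are illustrative:

```python
import numpy as np

def undistort_point(u, v, K, dist, iters=5):
    """Iteratively invert the Brown-Conrady distortion model,
    similar in spirit to cv2.undistortPoints. A sketch, not SDK code."""
    k1, k2, p1, p2, k3 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd = (u - cx) / fx          # distorted normalized coordinates
    yd = (v - cy) / fy
    x, y = xd, yd               # initial guess: distorted == undistorted
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # tangential terms
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return cx + fx * x, cy + fy * y  # back to pixel coordinates
```

With all coefficients zero the function is the identity, which is a quick sanity check that the iteration is wired correctly.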
I'm not asking how to find the extrinsics; I'm using a ChArUco board plus custom OpenCV-based code. I must use our own calibration for both the RealSense sensor and our RGB camera. In order to speed up factory calibration (our own, not Intel's), I don't want to perform one calibration in Y16 for the RealSense and an additional one on Y8 for the external camera extrinsics, so I'm calibrating everything on Y16 with the same set of images. We are ready to sign any NDA if needed, or to use binary code. It looks like this discussion is going nowhere: I'm asking for a specific piece of information from Intel, and I keep getting generic answers. How can I get in contact with an engineer who will understand my question?
If you email me your full contact details at the email address below then I will pass them to Intel so that they can be directed onward to the appropriate person. I am not involved in the NDA registration process and so will not be able to provide updates about the progress of your application or a time estimate for processing of your application. Please include in the email your full name, email address, telephone number and country. Also include a company name, company postal address and website address if you represent a company. |
I have received your email containing your contact details and passed the details to Intel to forward to the persons responsible for NDA registration. Thanks very much for your patience! |
Thanks, closing. |
Issue Description
I'm calibrating the RealSense sensor using a custom calibration mechanism based on the raw IR stream (Y16 at full resolution, 15 fps). The calibration results are good. Our setup also includes an external high-resolution RGB camera. During this calibration, I'm also calibrating the intrinsics and the extrinsics (left IR to RGB) of this camera.
From what I understood, the ASIC reprojects both left and right images onto a virtual plane, and thus the intrinsics reported by the API are different from a simple "resolution conversion" of the raw calibration. But I need to know whether the extrinsics from my RGB camera to the left imager should change too. If yes, how can I correct the left IR to RGB extrinsics for this change, so I can capture only one dataset in order to calibrate both the RealSense and my RGB camera?
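The naive "resolution conversion" mentioned above — scaling pinhole intrinsics uniformly by the resize factor on each axis — looks like this. The reported SDK values will generally differ from it because of rectification (and possible cropping between stream modes), which is exactly the discrepancy being asked about:

```python
def scale_intrinsics(fx, fy, cx, cy, w_src, h_src, w_dst, h_dst):
    """Naive resolution conversion of pinhole intrinsics: scale focal
    lengths and principal point by the per-axis resize factor.
    Baseline for comparison only; does not model rectification or crop."""
    sx, sy = w_dst / w_src, h_dst / h_src
    return fx * sx, fy * sy, cx * sx, cy * sy
```

Comparing the output of this baseline against the intrinsics the SDK reports for the same stream is one way to see how much of the difference is mere resizing and how much comes from rectification.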