How to convert all cameras to the same world coordinates #10939

Closed
lun73 opened this issue Sep 26, 2022 · 18 comments

@lun73

lun73 commented Sep 26, 2022


Required Info
Camera Model D415
Operating System & Version Win10
Platform PC
Language C++

Issue Description

Hello everyone.

  1. I would like to ask if anyone knows how to write world-coordinate calibration for Intel RealSense cameras in C++.
    After that, I would like to capture point clouds from multiple cameras and merge them.
    I have two D415 cameras at the moment, and I expect to use three later.
    I have checked the related programs, but I still don't understand them well.
    At present, I have captured the intrinsic and extrinsic parameters, but I don't know how to proceed from there, so I would like to ask those with relevant experience for help.

  2. I would also like to ask whether it is necessary to correct the intrinsics for this project.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 26, 2022

Hi @lun73 You can activate calibration of individual cameras from C++ scripting and then write the calibration to the camera hardware using code described in Intel's self-calibration white paper guide at the link below.

https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#appendix-d-on-chip-calibration-c-api

As you are interested in the calibration of world coordinates though, it sounds as though you wish to perform a different kind of calibration - calibrating the positions of multiple cameras relative to each other. Is that correct, please?

It is recommended to calibrate the positions of the cameras relative to each other when combining data from multiple cameras. An example of this principle in Python rather than C++ is Intel's box_dimensioner_multicam RealSense example program (which you referenced in another case at #10872). It uses a checkerboard image on the floor that the cameras are pointed at to automatically calibrate the cameras together when the program is launched.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam


The RealSense SDK has a C++ wrapper for multicam pointcloud 'stitching' here:

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/pointcloud/pointcloud-stitching

Instructions for it are here:

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pointcloud/pointcloud-stitching/doc/pointcloud-stitching-demo.md


If your project is able to use ROS and the RealSense ROS compatibility wrapper then Intel have a guide at the link below for stitching together pointclouds from up to 3 RealSense cameras (2 cameras on 1 computer, or 3 cameras with 2 computers) into a single combined pointcloud.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/


If it is possible for the camera to be moved around in your project then you could look at the rs-kinfu C++ / OpenCV project. It can use a single camera that is moved around the scene to progressively build up a pointcloud image by fusing frames together. Once you are satisfied with the amount of detail in the pointcloud, you can export it to a .ply pointcloud data file.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu


If it is not compulsory for you to develop your own application in C++ then the RealSense-compatible commercial software tool RecFusion Pro (which supports D415 and other models) can calibrate together multiple cameras and generate a combined scan.

https://www.recfusion.net/products/

It can align pointclouds with its in-built calibration procedure, as described in Section 6 - Multi-Sensor Calibration of the RecFusion user guide here:

https://www.recfusion.net/user-guide/

RecFusion has versions for Windows 10 and 64-bit Ubuntu (18.04 and 20.04).

@lun73
Author

lun73 commented Sep 26, 2022

Yes, I want to calibrate all the cameras into the same coordinate system.

I have seen people use rs2_project_point_to_pixel, rs2_deproject_pixel_to_point, and rs2_transform_point_to_point.

OpenCV findChessboardCorners()

I only write programs in C++, and the cameras do not move during calibration.

@MartyG-RealSense
Collaborator

The link below is for a RealSense multiple camera pointcloud stitching system developed by the CONIX Research Center at Carnegie Mellon that is scalable up to 20 cameras. It is written in C++ code, though it is complex in its setup.

https://github.com/conix-center/pointcloud_stitching

@lun73
Author

lun73 commented Sep 27, 2022

How do I find the rotation and translation from one camera to another?
Can you provide me with actual C++ examples and code?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 27, 2022

I researched your question carefully. Unfortunately there are few C++ references on the subject compared to the larger amount available for Python.

#8333 may be a helpful reference for technical information about calibrating multiple cameras together, though it is written in terms of Python rather than C++. #2664 is also worth looking at.

Accessing specialized external pointcloud library platforms such as Open3D or PCL from their compatibility wrappers in the RealSense SDK and using pointcloud stitching techniques like ICP registration may be a better way to achieve multi-camera pointcloud stitching from within a RealSense application.

Another possibility that you could explore is calibrating cameras together using fiducial image tags such as ArUco or AprilTag, like the Intel demonstration in the YouTube video below that uses AprilTag boards to calibrate 9 RealSense camera positions relative to each other.

https://www.youtube.com/watch?v=UzIfn667abE

@MartyG-RealSense
Collaborator

Hi @lun73 Do you require further assistance with this case, please? Thanks!

@lun73
Author

lun73 commented Oct 3, 2022

Sorry.
I'm still trying to figure it out.
I don't yet have the ability to solve this problem.

@MartyG-RealSense
Collaborator

A further recent discussion about multiple camera pointcloud stitching is at #10795

@lun73
Author

lun73 commented Oct 3, 2022

I've seen it before, thanks.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 3, 2022

At this point I believe that we have covered in this discussion all the possible C++ methods of calibrating cameras together and generating a combined pointcloud, unfortunately.

@lun73
Author

lun73 commented Oct 4, 2022

It's okay, you did your best, and I did mine.
It's my problem; the project I want to do is too hard, and I don't know much about these things.

@MartyG-RealSense
Collaborator

Thanks very much for your understanding.

The RealSense C++ pointcloud stitching wrapper mentioned earlier in this discussion may be the closest example to what you are aiming to achieve, though it involves using MATLAB for calibration of the cameras.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/pointcloud/pointcloud-stitching

@lun73
Author

lun73 commented Oct 4, 2022

I didn't know multi-camera calibration could be so difficult in C++.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 4, 2022

Pointcloud stitching is a task that is best suited to interfacing RealSense cameras with dedicated point cloud libraries such as Open3D and PCL. Attempting to do so only with the RealSense SDK's rs2_transform_point_to_point instruction is difficult, unfortunately.

PCL - which the RealSense SDK can interface with via a compatibility wrapper - has the Normal Distributions Transform method of point cloud stitching.

https://pointclouds.org/documentation/tutorials/normal_distributions_transform.html


In Intel's white-paper guide about use of multiple RealSense cameras at the link below, they demonstrated using a tool called LabVIEW to combine pointclouds together into a single cloud.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#c-aligning-point-clouds

The RealSense SDK has a C++-based compatibility wrapper for LabVIEW.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview

Whilst multiple cameras can be set up in LabVIEW, there are no instructions available for replicating the pointcloud-combining LabVIEW demo program illustrated in the white paper.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview#understanding-the-programming

@MartyG-RealSense
Collaborator

Hi @lun73 Do you require further assistance with this case, please? Thanks!

@lun73
Author

lun73 commented Oct 11, 2022

I will rely on myself to figure out how to do it, thank you.

@MartyG-RealSense
Collaborator

Okay, thanks very much @lun73 for the update!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
