How to convert all cameras to the same world coordinates #10939
Comments
Hi @lun73 You can activate calibration of individual cameras from C++ scripting and then write the calibration to the camera hardware using code described in Intel's self-calibration white paper guide at the link below.

As you are interested in the calibration of world coordinates, though, it sounds as though you wish to perform a different kind of calibration: calibrating the positions of multiple cameras relative to each other. Is that correct, please? It is recommended to calibrate the positions of the cameras relative to each other when combining data from multiple cameras. An example of this principle in Python rather than C++ is Intel's box_dimensioner_multicam RealSense example program (which you reference in another case at #10872). It uses a checkerboard image on the floor that the cameras are pointed at to automatically calibrate the cameras together when the program is launched.

The RealSense SDK has a C++ wrapper for multicam pointcloud 'stitching' here: Instructions for it are here:

If your project is able to use ROS and the RealSense ROS compatibility wrapper, then Intel have a guide at the link below for stitching together pointclouds from up to 3 RealSense cameras (2 cameras on 1 computer, or 3 cameras with 2 computers) into a single combined pointcloud.
https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

If it is possible for the camera to be moved around in your project, then you could look at the rs-kinfu C++ / OpenCV project. It can use a single camera that is moved around the scene to progressively build up a pointcloud image by fusing frames together. Once you are satisfied with the amount of detail in the pointcloud, you can export it to a .ply pointcloud data file.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu

If it is not compulsory for you to develop your own application in C++, then the RealSense-compatible commercial software tool RecFusion Pro (which supports the D415 and other models) can calibrate multiple cameras together and generate a combined scan.
https://www.recfusion.net/products/

It can align pointclouds with its built-in calibration procedure, as described in Section 6 - Multi-Sensor Calibration of the RecFusion user guide here:
https://www.recfusion.net/user-guide/

RecFusion has versions for Windows 10 and 64-bit Ubuntu (18.04 and 20.04).
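Once each camera's pose relative to a shared world origin (such as the checkerboard used by box_dimensioner_multicam) is known, merging the clouds is a per-camera rigid transform followed by concatenation. Below is a minimal standalone C++ sketch of that step, assuming row-major 3x3 rotations; the `Extrinsics`, `cam_to_world`, and `merge_clouds` names are illustrative only, not RealSense SDK API:

```cpp
#include <array>
#include <utility>
#include <vector>

// Hypothetical per-camera calibration result: rotation R (row-major 3x3)
// and translation t, mapping camera coordinates into the world frame.
struct Extrinsics {
    std::array<float, 9> rotation;
    std::array<float, 3> translation;
};

using Point = std::array<float, 3>;

// p_world = R * p_cam + t
Point cam_to_world(const Extrinsics& e, const Point& p) {
    const auto& r = e.rotation;
    const auto& t = e.translation;
    return {r[0]*p[0] + r[1]*p[1] + r[2]*p[2] + t[0],
            r[3]*p[0] + r[4]*p[1] + r[5]*p[2] + t[1],
            r[6]*p[0] + r[7]*p[1] + r[8]*p[2] + t[2]};
}

// Transform every camera's cloud into the world frame and concatenate.
std::vector<Point> merge_clouds(
        const std::vector<std::pair<Extrinsics, std::vector<Point>>>& cams) {
    std::vector<Point> world;
    for (const auto& [ext, cloud] : cams)
        for (const auto& p : cloud)
            world.push_back(cam_to_world(ext, p));
    return world;
}
```

A library such as PCL or Open3D can then refine the alignment (for example with ICP) after this coarse calibration-based merge.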
Yes, I want to calibrate all the cameras to the same coordinate system. I have seen people use OpenCV. I only write programs in C++, and the cameras do not move during calibration.
The link below is for a RealSense multiple camera pointcloud stitching system developed by the CONIX Research Center at Carnegie Mellon that is scalable up to 20 cameras. It is written in C++, though its setup is complex.
https://github.com/conix-center/pointcloud_stitching
How do I find the rotation and translation that maps one camera's coordinates into another camera's frame?
I researched your question carefully. Unfortunately there are few C++ references on the subject compared to the larger amount available for Python. #8333 may be a helpful reference for technical information about calibrating multiple cameras together, though it is written in terms of Python rather than C++. #2664 is also worth looking at.

Accessing specialized external pointcloud library platforms such as Open3D or PCL from their compatibility wrappers in the RealSense SDK and using pointcloud stitching techniques like ICP registration may be a better way to achieve multi-camera pointcloud stitching from within a RealSense application.

Another possibility that you could explore is calibrating cameras together using fiducial image tags such as ArUco or AprilTag, like the Intel demonstration in the YouTube video below that uses AprilTag boards to calibrate 9 RealSense camera positions relative to each other.
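On the rotation-and-translation question itself: if calibration gives each camera's pose in the world frame (p_world = R p_cam + t), the transform taking points from camera B's frame into camera A's frame follows by inverting A's pose and composing it with B's: p_a = R_aᵀ R_b p_b + R_aᵀ (t_b − t_a). A standalone C++ sketch of that composition, with illustrative names (`Pose`, `b_to_a`) that are not SDK API:

```cpp
#include <array>

// A rigid transform p_out = R * p_in + t, with R stored row-major.
struct Pose {
    std::array<float, 9> R;
    std::array<float, 3> t;
};

// Multiply two 3x3 row-major matrices: c = a * b.
std::array<float, 9> matmul(const std::array<float, 9>& a,
                            const std::array<float, 9>& b) {
    std::array<float, 9> c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[3*i + j] += a[3*i + k] * b[3*k + j];
    return c;
}

std::array<float, 9> transpose(const std::array<float, 9>& m) {
    return {m[0], m[3], m[6], m[1], m[4], m[7], m[2], m[5], m[8]};
}

// Given camera-to-world poses of cameras A and B, return the transform
// mapping points in B's frame into A's frame:
//   p_a = R_a^T * R_b * p_b + R_a^T * (t_b - t_a)
Pose b_to_a(const Pose& a, const Pose& b) {
    const auto Rat = transpose(a.R);  // inverse of a rotation is its transpose
    Pose out;
    out.R = matmul(Rat, b.R);
    const std::array<float, 3> d{b.t[0] - a.t[0],
                                 b.t[1] - a.t[1],
                                 b.t[2] - a.t[2]};
    for (int i = 0; i < 3; ++i)
        out.t[i] = Rat[3*i]*d[0] + Rat[3*i + 1]*d[1] + Rat[3*i + 2]*d[2];
    return out;
}
```

In practice a linear-algebra library such as Eigen would handle the 4x4 homogeneous form of this, but the arithmetic is the same.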
Hi @lun73 Do you require further assistance with this case, please? Thanks!
Sorry. |
A further recent discussion about multiple camera pointcloud stitching is at #10795 |
I've seen it before, thanks. |
At this point I believe that we have covered in this discussion all the possible C++ methods of calibrating cameras together and generating a combined pointcloud, unfortunately. |
It's okay, you did your best, I did my best. |
Thanks very much for your understanding. The RealSense C++ pointcloud stitching wrapper mentioned earlier in this discussion may be the closest example to what you are aiming to achieve, though it involves using MATLAB for calibration of the cameras. |
I didn't know multi-camera calibration could be so difficult in C++.
Pointcloud stitching is a task that is best suited to interfacing RealSense cameras with dedicated point cloud libraries such as Open3D and PCL. Attempting to do so with only the RealSense SDK's rs2_transform_point_to_point instruction is difficult, unfortunately.

PCL, which the RealSense SDK can interface with via a compatibility wrapper, has the Normal Distributions Transform method of point cloud stitching.
https://pointclouds.org/documentation/tutorials/normal_distributions_transform.html

In Intel's white-paper guide about the use of multiple RealSense cameras at the link below, they demonstrated using a tool called LabVIEW to combine pointclouds into a single cloud.
https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#c-aligning-point-clouds

The RealSense SDK has a C++-based compatibility wrapper for LabVIEW.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview

Whilst multiple cameras can be set up in LabVIEW, there are no instructions available for replicating the pointcloud-combining LabVIEW demo program illustrated in the white paper.
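For reference, the arithmetic that rs2_transform_point_to_point performs can be replicated without the SDK. One detail worth noting is that rs2_extrinsics stores its 3x3 rotation matrix column-major, so the indexing differs from the row-major convention most code uses. The sketch below mirrors that layout; the `Extr` and `transform_point` names are illustrative stand-ins, not the SDK's types:

```cpp
#include <array>

// Stand-in for rs2_extrinsics: COLUMN-major 3x3 rotation plus a
// translation in meters, applying p_to = R * p_from + t.
struct Extr {
    std::array<float, 9> rotation;     // column-major, like rs2_extrinsics
    std::array<float, 3> translation;  // meters
};

// Same index pattern as librealsense's rs2_transform_point_to_point:
// column k of R occupies elements r[3k], r[3k+1], r[3k+2].
std::array<float, 3> transform_point(const Extr& e,
                                     const std::array<float, 3>& f) {
    const auto& r = e.rotation;
    const auto& t = e.translation;
    return {r[0]*f[0] + r[3]*f[1] + r[6]*f[2] + t[0],
            r[1]*f[0] + r[4]*f[1] + r[7]*f[2] + t[1],
            r[2]*f[0] + r[5]*f[1] + r[8]*f[2] + t[2]};
}
```

This transforms one point at a time, which is why stitching whole clouds this way alone is laborious compared with letting PCL or Open3D register and merge the clouds for you.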
Hi @lun73 Do you require further assistance with this case, please? Thanks! |
I will figure out how on my own, thank you.
Okay, thanks very much @lun73 for the update! |
Case closed due to no further comments received. |
Issue Description
Hello everyone.
I would like to ask if anyone knows how to write the calibration of Intel RealSense world coordinates in C++.
After that, I would like to capture the point clouds from multiple cameras and merge them.
I have two D415 at the moment, and I expect to use three of them later.
I have checked the related programs, but I still don't know much about them.
At present, I have captured the internal and external parameters, but I don't know how to start with the rest, so I would like to ask those who have relevant experience to help.
I would also like to ask whether it is necessary to correct the intrinsics for this project.