Hi,
I am having some difficulty converting a depth image to a point cloud. Right now I can convert the disparity image to a depth image, but how do I convert the depth image to a point cloud? In the current dataset I can find the camera specs, such as the baseline and focal length, but I think I also need the pixel size to project the depth image into 3D space. Can anyone help?
Thanks!
Hi, you can convert depth images to point clouds using the camera intrinsics fx, fy, cx, cy. However, the distances in my computed point cloud do not match the real world, so I would like to ask: how is the disparity image scaled to a true (pixel) disparity in your conversion?
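For reference, here is a minimal sketch of the standard pinhole back-projection, assuming made-up intrinsics (fx, fy, cx, cy, baseline are placeholder values, not the ones from this dataset). Note that you do not need the physical pixel size separately: when fx, fy, cx, cy are expressed in pixels, the pixel size is already absorbed into them. If the dataset stores disparity with a scale factor (e.g. multiplied by 256), divide it out before applying the formula below.

```python
import numpy as np

# Placeholder calibration -- substitute your own camera's values.
fx, fy = 700.0, 700.0      # focal lengths in pixels
cx, cy = 320.0, 240.0      # principal point in pixels
baseline = 0.1             # stereo baseline in metres

def disparity_to_depth(disparity, fx, baseline):
    """Convert a disparity map (in pixels) to a depth map (in metres): Z = fx * B / d."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = fx * baseline / disparity[valid]
    return depth

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an (N, 3) point cloud in the camera frame.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no valid depth

# Example: disparity map -> depth map -> point cloud
disparity = np.random.uniform(1.0, 64.0, size=(480, 640))
depth = disparity_to_depth(disparity, fx, baseline)
cloud = depth_to_point_cloud(depth, fx, fy, cx, cy)
print(cloud.shape)   # (N, 3)
```

If the resulting point cloud is off by a constant factor, the usual culprits are a scaled disparity encoding or a baseline given in a different unit (e.g. millimetres instead of metres).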