I am planning to use the model to measure the distance between a fixed camera and an object.
Is there a way to improve the accuracy of the depth estimation by using a marker in the photo whose distance to the camera is known as a calibration reference?
Thanks.
You can use it as a post-processing step.
You can compute a scale and shift (e.g., via least squares) from the error between the predicted depth values and the known marker distance over the pixels belonging to the marker; a sketch follows below.
This assumes that the relative depths in the predicted map are correct and that you know which pixels belong to the marker, i.e., that you segment the marker in the image.
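A minimal sketch of that fit, assuming a NumPy workflow and a single segmented marker. The names (`pred_depth`, `marker_mask`, `marker_depth`) are placeholders for illustration, not part of the model's API. Note that a single marker at one distance constrains only one degree of freedom; markers at two or more known distances (or fixing the shift to zero) make the scale-and-shift fit better posed.

```python
import numpy as np

def fit_scale_shift(pred_depth, marker_mask, marker_depth):
    """Fit metric_depth ~= scale * pred_depth + shift by least squares
    over the marker pixels, then apply it to the whole depth map."""
    d = pred_depth[marker_mask].reshape(-1)        # predicted depths at the marker pixels
    target = np.full_like(d, marker_depth)         # known metric distance to the marker
    A = np.stack([d, np.ones_like(d)], axis=1)     # design matrix [depth, 1]
    (scale, shift), *_ = np.linalg.lstsq(A, target, rcond=None)
    return scale * pred_depth + shift, scale, shift

# Example usage with dummy data (marker region is hypothetical):
pred = np.random.rand(480, 640).astype(np.float32)
mask = np.zeros((480, 640), dtype=bool)
mask[200:220, 300:320] = True
metric_depth, s, t = fit_scale_shift(pred, mask, marker_depth=2.5)
```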
Thank you for the feedback. Additionally, with a fixed camera looking vertically downwards in a video, when the height of the object being measured increases, the depth value of the background ground also increases. Is there a solution to this? The camera intrinsics have been manually set to fixed values.