regarding inverse_warping #147
Comments
Hello, thank you for your interest in this repo. The inverse_warp function actually does implement a 6DoF-pose-to-extrinsics conversion: SfmLearner-Pytorch/inverse_warp.py Line 136 in 4e6b7e8
So you would have to change the code a little to use matrices instead of pose vectors, and then this line will have to go: SfmLearner-Pytorch/inverse_warp.py Line 177 in 4e6b7e8
One quick warning about using matrices instead of pose vectors: the output of the network. Training is much more stable when the network outputs pose vectors rather than extrinsics matrices in which every coefficient is a learnable parameter. So the best strategy for you is to embed the euler2mat function directly in your network and call it at each forward pass. That way, your network will directly output matrices instead of pose vectors, allowing you to do the inverse warp with matrices, while remaining as stable as if it were using pose vectors. Hope it helped, Clément
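To make that concrete, here is a minimal sketch (not the repo's exact code; `MatrixPoseNet`, `base_net` and `euler_to_matrix` are hypothetical names) of a wrapper that converts the 6-element pose vector into a 4x4 extrinsics matrix inside `forward()`, so the learnable output stays a pose vector while the caller receives a matrix:

```python
# Sketch only: the learnable output stays a [B, 6] pose vector, and the
# Euler-angle conversion happens inside forward(), so the caller gets [B, 4, 4].
import torch
import torch.nn as nn

def euler_to_matrix(angles):
    """Convert [B, 3] Euler angles (rx, ry, rz) to [B, 3, 3] rotation matrices
    (one possible rotation-order convention: Rz @ Ry @ Rx)."""
    x, y, z = angles[:, 0], angles[:, 1], angles[:, 2]
    cx, sx = torch.cos(x), torch.sin(x)
    cy, sy = torch.cos(y), torch.sin(y)
    cz, sz = torch.cos(z), torch.sin(z)
    zeros, ones = torch.zeros_like(x), torch.ones_like(x)
    rot_x = torch.stack([ones, zeros, zeros,
                         zeros, cx, -sx,
                         zeros, sx, cx], dim=1).view(-1, 3, 3)
    rot_y = torch.stack([cy, zeros, sy,
                         zeros, ones, zeros,
                         -sy, zeros, cy], dim=1).view(-1, 3, 3)
    rot_z = torch.stack([cz, -sz, zeros,
                         sz, cz, zeros,
                         zeros, zeros, ones], dim=1).view(-1, 3, 3)
    return rot_z @ rot_y @ rot_x

class MatrixPoseNet(nn.Module):
    """Wraps a pose network assumed to output a [B, 6] vector (tx, ty, tz, rx, ry, rz)."""
    def __init__(self, base_net):
        super().__init__()
        self.base_net = base_net

    def forward(self, tgt_img, ref_img):
        pose_vec = self.base_net(tgt_img, ref_img)               # [B, 6]
        rot = euler_to_matrix(pose_vec[:, 3:])                    # [B, 3, 3]
        trans = pose_vec[:, :3].unsqueeze(-1)                     # [B, 3, 1]
        bottom = torch.tensor([0., 0., 0., 1.], device=pose_vec.device)
        bottom = bottom.view(1, 1, 4).expand(pose_vec.size(0), 1, 4)
        # Assemble [R | t] on top of the homogeneous bottom row.
        return torch.cat([torch.cat([rot, trans], dim=2), bottom], dim=1)  # [B, 4, 4]
```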
Thanks for your reply!
The code already uses an intrinsics matrix separately from the pose, so the pose is totally independent of the intrinsics.
So which extrinsics does the pose output correspond to?
In this setup, the extrinsics of the target image is always the identity matrix (or the null pose vector). There is no way for the network to know the pose of both the target and the reference image; it can only estimate the difference between the two. In other words, it can estimate the extrinsics of the reference image in a coordinate system centered on the target image. If you want the pose of both target and reference images, you need an anchor somewhere. If the anchor is e.g. the first image of the whole sequence, and you need the pose of the Nth image relative to that first frame, you will need to accumulate the pose differences and thus compute the composition of several extrinsics, because they are not expressed in the same coordinate system.
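For what it's worth, a minimal sketch of that accumulation; `accumulate_poses` is a hypothetical helper, and it assumes each entry of `rel_mats` is the 4x4 pose of frame i+1 expressed in frame i's coordinate system (with the opposite convention the composition order flips):

```python
import torch

def accumulate_poses(rel_mats):
    """rel_mats[i]: [4, 4] transform taking frame-(i+1) coordinates to frame-i coordinates."""
    global_pose = torch.eye(4)
    trajectory = [global_pose.clone()]
    for rel in rel_mats:
        # Chain the new relative pose onto everything accumulated so far.
        global_pose = global_pose @ rel
        trajectory.append(global_pose.clone())
    return trajectory  # trajectory[n] = pose of frame n relative to frame 0
```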
Sorry for the unclear question. I am currently trying to adapt your inverse_warp to my own dataset. I technically could just use the network for pose estimation, but I already have the extrinsics matrices.
Oh, now I understand, sorry! If you have extrinsics for both views, then yes, the inverse warp does use the difference of extrinsics implicitly, which results in the formula you proposed in your first post. Now the tricky part is to make sure the order is right: is it extrinsics_src @ extrinsics_tgt.inverse() or the other way around? Hope it helped!
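One way to settle the ordering is to check it on a known 3D point. A minimal sketch, assuming the world-to-camera convention p_cam = E @ p_world (with camera-to-world extrinsics the order flips); `relative_pose` is a hypothetical helper:

```python
import torch

def relative_pose(extrinsics_src, extrinsics_tgt):
    """Transform taking target-camera coordinates to source-camera coordinates,
    assuming world-to-camera extrinsics."""
    return extrinsics_src @ torch.linalg.inv(extrinsics_tgt)

# Sanity check with a known world point: the two paths should agree.
E_tgt = torch.eye(4)
E_src = torch.eye(4)
E_src[:3, 3] = torch.tensor([0.5, 0.0, 0.0])           # source camera shifted in x
X_world = torch.tensor([1.0, 2.0, 5.0, 1.0])

p_tgt = E_tgt @ X_world                                  # point in target camera frame
p_src_direct = E_src @ X_world                           # point in source camera frame
p_src_via_rel = relative_pose(E_src, E_tgt) @ p_tgt
assert torch.allclose(p_src_direct, p_src_via_rel)
```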
Hard to tell without the depth (are you using ground truth from the sensor?), but I'd agree with you that the first one looks better, especially for the chair at the bottom. Note that the duplication of pixels in the first one is normal for areas that are occluded in the ref image but not in the tgt image. It's impossible to reconstruct them since they are not visible, so the algorithm takes the color of the foreground. It is visible for the table at the bottom as well, for example.
Hi Clement,
Thanks for translating the tf repo to a pytorch one!
I do have a small question about the 6 DOF pose used in inverse_warp.
My own dataset only contains intrinsics and extrinsics matrices. I wonder if it is possible to translate this 6 DOF pose to a series of matrix multiplications.
I already have a rough idea that the pose can be expressed in the form of:
extrinsics_src @ extrinsics_tgt.inverse()
where extrinsics_src is the extrinsic matrix of the source image and extrinsics_tgt is the extrinsic matrix of the target image. So the whole warping process can be written (roughly) as:
grid_sample(source_image, intrinsics @ extrinsics_src @ extrinsics_tgt.inverse() @ intrinsics_inverse() @ target_depth)
assuming all images are taken by the same camera and matrices are in homogeneous coordinates.
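Spelled out step by step, that warp could look roughly like the sketch below. This is an assumption-laden illustration rather than the repo's implementation: it assumes world-to-camera extrinsics, a 3x3 pinhole intrinsics matrix K shared by both views, and the name `warp_source_to_target` is hypothetical.

```python
import torch
import torch.nn.functional as F

def warp_source_to_target(source_image, target_depth, K, extrinsics_src, extrinsics_tgt):
    """source_image: [B, 3, H, W], target_depth: [B, H, W], K: [B, 3, 3],
    extrinsics_*: [B, 4, 4] world-to-camera transforms."""
    b, _, h, w = source_image.shape
    device = source_image.device

    # Target pixel grid in homogeneous coordinates: [B, 3, H*W]
    ys, xs = torch.meshgrid(torch.arange(h, device=device, dtype=torch.float32),
                            torch.arange(w, device=device, dtype=torch.float32),
                            indexing='ij')
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(b, 3, -1)

    # Backproject to target camera coordinates: D * K^-1 * pix
    cam_tgt = torch.linalg.inv(K) @ pix * target_depth.view(b, 1, -1)

    # Relative transform: target camera frame -> source camera frame
    rel = extrinsics_src @ torch.linalg.inv(extrinsics_tgt)                     # [B, 4, 4]
    cam_tgt_h = torch.cat([cam_tgt, torch.ones(b, 1, h * w, device=device)], dim=1)
    cam_src = (rel @ cam_tgt_h)[:, :3]                                          # [B, 3, H*W]

    # Project into the source image and normalize to [-1, 1] for grid_sample
    proj = K @ cam_src
    px = proj[:, 0] / proj[:, 2].clamp(min=1e-6)
    py = proj[:, 1] / proj[:, 2].clamp(min=1e-6)
    grid = torch.stack([2 * px / (w - 1) - 1, 2 * py / (h - 1) - 1], dim=2)
    grid = grid.view(b, h, w, 2)

    return F.grid_sample(source_image, grid, padding_mode='zeros', align_corners=True)
```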
Really appreciate your input here!