Regarding PoseNet Training #85
@ClementPinard Could you please tell me what the procedure is to train on the odometry set? Should I train the pose network exclusively (on the odom split) while keeping the depth network (pre-trained on KITTI) in 'eval' mode? I don't understand the benefit of training the pose network on the odom dataset. :/
The pose network is trained on the Eigen depth split. It is a known issue; I need to run a dedicated training to test pose training with the odom split, but I didn't find the time to do it yet. In the odom split, scenes from the test set can be seen in the Eigen depth split train set. As such, if we train the pose network along with the depth network on the Eigen split train set, we end up training on some of the odom split test sequences, which pollutes validation, since it no longer prevents overfitting. To have a genuine validation on the odom split, you need to do the whole training (depth + pose) from scratch with the train set from the odom split.
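For reference, a minimal PyTorch sketch of the 'eval mode' option mentioned in the question, i.e. freezing a pre-trained depth network and updating only the pose network. The model, checkpoint, loader and loss names below (`DispNetS`, `PoseExpNet`, `dispnet_checkpoint.pth.tar`, `photometric_reconstruction_loss`) are assumptions modeled on this repository and may differ from the actual code.

```python
# Sketch: train only the pose network while a pre-trained depth network stays frozen.
# All names below are assumptions; adapt them to the actual repository code.
import torch
from models import DispNetS, PoseExpNet  # assumed module layout

disp_net = DispNetS()
disp_net.load_state_dict(torch.load('dispnet_checkpoint.pth.tar')['state_dict'])  # assumed checkpoint
disp_net.eval()                       # freeze batch-norm/dropout behaviour
for p in disp_net.parameters():
    p.requires_grad = False           # no gradients flow into the depth network

pose_net = PoseExpNet(nb_ref_imgs=2, output_exp=True)
optimizer = torch.optim.Adam(pose_net.parameters(), lr=2e-4)

for tgt_img, ref_imgs, intrinsics, intrinsics_inv in train_loader:  # assumed loader output
    with torch.no_grad():
        disp = disp_net(tgt_img)      # depth is a fixed input to the photometric loss
    depth = 1 / disp
    explainability_mask, pose = pose_net(tgt_img, ref_imgs)
    loss = photometric_reconstruction_loss(                          # assumed loss function
        tgt_img, ref_imgs, intrinsics, depth, explainability_mask, pose)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```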
Thanks for your reply @ClementPinard! However, if we do train on the odom split from scratch, then my confusion is: why can't we use the depth from this training? Why do we have to train on the Eigen split for depth? And better yet, can't we combine both datasets (train.txt files) into one dataset?
One more clarification please: when you said "pose network is trained on Eigen depth split", did you mean that you trained (depth + pose) in one go and posted the best results? Thank you @ClementPinard
The odom split is actually a subset of the raw dataset (all the scenes with moving objects such as cars or pedestrians are removed). So we cannot combine both datasets, as it would make the results from DispNet worse. As discussed in #67, data preparation is not available for the moment for the odom split, but I'll work on it (don't expect it right away though). And yes, I trained depth + pose at the same time, which is not what should be done, because PoseNet is probably overfitted to some test scenes in the odom split. I'll provide a new pretrained PoseNet at the same time as the odom split data preparation.
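To make the overlap problem concrete, here is a small, hypothetical Python sketch that compares two scene lists and reports which odom test sequences also appear in the Eigen train list. The file paths and the one-scene-per-line format are assumptions, not the repository's actual data-preparation output.

```python
# Hypothetical check: which scenes are shared between the Eigen train list
# and the odom test list? Paths and file format are assumptions.
def load_scenes(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

eigen_train = load_scenes('kitti_eigen/train.txt')      # assumed path
odom_test = load_scenes('kitti_odom/test_scenes.txt')   # assumed path

overlap = eigen_train & odom_test
print(f'{len(overlap)} odom test scenes appear in the Eigen train set:')
for scene in sorted(overlap):
    print('  ', scene)
```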
I think you meant PoseNet here. Inclusion of moving objects will make PoseNet yield worse results.
No, I meant DispNet, because even if the loss in moving scenes is partially wrong, DispNet will still benefit from them, since most of the scene is not moving. So it's still more data for DispNet, which is better. The best option would be to have a validation set made only of non-moving scenes, in order to train and evaluate both PoseNet and DispNet with the same training, but that would require changing the Eigen split, which will be difficult considering how well established it is when papers compare themselves to existing SOTA.
Hi @ClementPinard!
Did you train the pose network separately on the odom split, or were depth and pose trained together on the Eigen split of the raw KITTI dataset?
Thank you!