How much is this work based on LiDAR data? #5
Hi there,
Thank you for your interest in our project. Like many neural simulators for driving scenes, our approach relies heavily on depth information, a critical component in accurately modeling outdoor environments (learning depth from posed RGB images is a highly non-trivial challenge in itself). We're also working on an updated version that incorporates depth predictions from a mono-depth prediction model, which may align more closely with your requirements: the mono priors will be integrated as a pixel source for supervision. We look forward to sharing the updated codebase with you once these enhancements are complete.
Best,
Jiawei
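In practice, using a mono-depth prediction as a pixel source for supervision can be as simple as adding a per-pixel depth term next to the photometric loss. Below is a minimal sketch in PyTorch, assuming a rendered depth map, a mono-depth prediction for the same image, and a validity mask; all function and argument names here are hypothetical illustrations, not this repository's actual API:

```python
import torch

def monodepth_supervision_loss(rendered_depth: torch.Tensor,
                               mono_depth: torch.Tensor,
                               valid_mask: torch.Tensor,
                               weight: float = 0.1) -> torch.Tensor:
    """L1 loss between rendered depth and a mono-depth prediction.

    rendered_depth: (H, W) depth rendered by the model
    mono_depth:     (H, W) prediction from a mono-depth network
    valid_mask:     (H, W) bool, False where depth supervision is
                    unreliable (e.g. sky pixels)
    """
    diff = torch.abs(rendered_depth - mono_depth)
    return weight * diff[valid_mask].mean()
```

The resulting term is then added to the total training loss in the same place a LiDAR depth loss would go.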
Jiawei, thanks for your detailed answer! It's good to hear that you're working on improvements to the pixel-source supervision, and I would like to test them once they're finished, so please let us know. Do I understand correctly that there is currently no "quick way" to configure the existing codebase to get the most out of the pixel source? Best regards
Thank you very much for your amazing work!
After integrating mono-depth supervision into the model, I can confirm that this alternative to LiDAR depth supervision works pretty well indeed.
@jzuern great news!
Sounds great! I'm also trying to integrate mono-depth supervision into the model, but the results are not good... Would it be possible to have a look at how you did it? Thanks!
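One frequent reason naive mono-depth supervision gives poor results is that mono-depth networks predict depth only up to an unknown per-image scale and shift (some models predict disparity rather than depth). A common remedy is a closed-form least-squares alignment of the prediction to the rendered depth before computing the loss. A minimal sketch under those assumptions, with hypothetical names:

```python
import torch

def align_scale_shift(mono_depth: torch.Tensor,
                      rendered_depth: torch.Tensor,
                      valid_mask: torch.Tensor) -> torch.Tensor:
    """Fit a per-image scale s and shift t so that s * mono_depth + t
    best matches rendered_depth on valid pixels, in the least-squares
    sense, then return the aligned prediction."""
    d = mono_depth[valid_mask].reshape(-1)
    r = rendered_depth[valid_mask].reshape(-1)
    # Solve A @ [s, t]^T ~= r with A = [d, 1] (closed-form lstsq).
    A = torch.stack([d, torch.ones_like(d)], dim=-1)       # (N, 2)
    sol = torch.linalg.lstsq(A, r.unsqueeze(-1)).solution  # (2, 1)
    s, t = sol[0, 0], sol[1, 0]
    return s * mono_depth + t
```

In practice the fit is usually computed against `rendered_depth.detach()`, so gradients reach the model only through the depth loss itself and not through the alignment.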
Hello!
I would like to thank you for this very interesting work; I'm reading the paper and code, and they contain very interesting ideas. I saw that it is possible to avoid loading LiDAR data during training on the Waymo dataset. I was able to do so (in Flow mode), but the results were poor, both for detecting scene dynamics and for rendering.
My question is simple: does this method depend heavily on a multi-sensor configuration, or can it work with visual data only? Can LiDAR data be replaced with other ground-truth depth information, for example from RGB-D cameras?
Could you please guide me on how to get the best out of this work using visual-only data?
Thanks