This repository has been archived by the owner on Dec 12, 2024. It is now read-only.
Why do the monocular datasets use virtual images while the Nvidia datasets don't? What is the difference between these datasets? Also, we found that virtual images are crucial for the kid-running case, which is not mentioned in the paper. It would be a great help if you could answer these questions.
Hi, we describe virtual source views in Section 4 of the paper. Basically, virtual views provide stronger geometric support for moving objects, which prevents the model from getting stuck in a bad local minimum. We believe one reason is that the camera-object motion relationships in real monocular videos offer more ambiguous cues for moving objects during volumetric feature aggregation than the camera-object motions present in the Nvidia dataset.
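For intuition, one simple way to construct a virtual source view is to interpolate between two real camera-to-world poses: lerp the translation and blend the rotations, projecting the blend back onto SO(3). This is only a minimal NumPy sketch under that assumption (function names like `virtual_pose` are hypothetical) and may differ from how the paper's code actually builds its virtual views.

```python
import numpy as np

def nearest_rotation(M):
    """Project a 3x3 matrix onto SO(3) via SVD (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # fix an improper (reflected) solution
        U[:, -1] *= -1
        R = U @ Vt
    return R

def virtual_pose(pose_a, pose_b, t=0.5):
    """Build a virtual camera-to-world pose between two real 4x4 poses.

    Translation is linearly interpolated; the blended rotation is
    re-projected onto SO(3). This is an illustrative stand-in for the
    virtual source views discussed above, not the paper's exact method.
    """
    out = np.eye(4)
    out[:3, 3] = (1 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
    out[:3, :3] = nearest_rotation(
        (1 - t) * pose_a[:3, :3] + t * pose_b[:3, :3]
    )
    return out

def rot_y(deg):
    """Rotation matrix about the y-axis (helper for the example)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Example: a virtual view halfway between two real views 20 degrees apart.
P0 = np.eye(4)
P1 = np.eye(4)
P1[:3, :3] = rot_y(20.0)
P1[:3, 3] = [1.0, 0.0, 0.0]
V = virtual_pose(P0, P1, t=0.5)
```

For two rotations about the same axis, the SVD projection recovers the exact halfway rotation, so `V` here looks 10 degrees about y from the midpoint of the two camera centers.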