Some questions about the paper #15
Comments
Thanks for your interest. I'll try my best to clarify a bit for you.
Thank you for your answer. By the way, when the support images are given as input, are their poses known?
Regarding the retrieved support images, do we know the camera poses from which these support images were taken? I'm sorry, the content of your paper is quite advanced for me, so I've asked a lot of questions.
Yes. The poses of the support images are known.
Dear author,
After reading your paper, I have some questions, and I would greatly appreciate it if you could provide some answers when you have the time.
1. What is the relationship between the "query 3D points" and the "query image"? Are the "query 3D points" generated with the method from "Point-NeRF: Point-based Neural Radiance Fields"? And do the "3D points" also include neural 3D points generated from the support images?
2.How is "Query Visibility" calculated? Is it achieved by projecting "query points" onto each support frame and then using the method from "Neural Rays for Occlusion-aware Image-based Rendering"? Or is it calculated directly on the "query image" using the method from "Neural Rays for Occlusion-aware Image-based Rendering"?
3. How should we understand the statement that "M(X) is a scene-agnostic 3D representation, as it is computed based on support images"? The paper says the "query points" are projected onto each support frame to generate multi-view features, from which M(X) is then derived. Besides the support images, doesn't this process also involve the "query points" and the "Query Visibility"?
4. How is the final PnP pose estimation performed? After fine-level matching, are the resulting 2D points matched against the support points or against the "query 3D points"? (A small PnP sketch is at the end of this message.)
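To make questions 2 and 3 concrete, here is a minimal sketch of how I currently picture the multi-view feature step: project each query 3D point into every support view, sample a feature at the projection, and aggregate the per-view features weighted by a visibility score. The helpers `sample_feature` and `predict_visibility` are hypothetical placeholders standing in for the paper's feature sampling and visibility modules, not actual functions from the paper's code.

```python
import numpy as np

def project(points_w, K, R, t):
    """Project (N, 3) world points into a support view with intrinsics K
    and extrinsics [R | t]; returns (N, 2) pixel coords and (N,) depths."""
    cam = points_w @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                   # camera -> homogeneous image coordinates
    depth = uvw[:, 2:3]
    return uvw[:, :2] / depth, depth.squeeze(-1)

def aggregate_multiview(points_w, support_views, sample_feature, predict_visibility):
    """Build a per-point representation M(X) from the support images only:
    a visibility-weighted mean of the features sampled at each projection."""
    feats, weights = [], []
    for view in support_views:        # each view holds K, R, t and its feature map
        uv, depth = project(points_w, view["K"], view["R"], view["t"])
        f = sample_feature(view, uv)             # (N, C) features at the projections
        v = predict_visibility(view, uv, depth)  # (N, 1) visibility weight in [0, 1]
        feats.append(f)
        weights.append(v)
    F = np.stack(feats)               # (V, N, C)
    W = np.stack(weights)             # (V, N, 1)
    return (W * F).sum(axis=0) / (W.sum(axis=0) + 1e-8)  # (N, C) per-point M(X)
```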
I apologize for the many questions, but your article is quite advanced for me, so I'm looking forward to your answers!
Thank you very much!
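For question 4, here is a minimal sketch of the final step as I imagine it: once fine-level matching yields 2D keypoints in the query image paired with 3D points, a standard PnP + RANSAC solver recovers the query camera pose. I use OpenCV's generic solver only as an illustration; it is not necessarily the exact solver used in the paper.

```python
import cv2
import numpy as np

def estimate_pose(pts3d, pts2d, K):
    """pts3d: (N, 3) matched 3D points, pts2d: (N, 2) matched query pixels,
    K: (3, 3) query intrinsics. Returns (R, t) mapping world -> camera."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64),
        pts2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        reprojectionError=8.0,
        iterationsCount=1000,
    )
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent 2D-3D matches")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```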