Dear Authors,

Thanks for sharing your great dataset and model. I have a few questions.
In the paper, you mention that "multiple objects can be used to answer certain questions," and I noticed that in the annotation files, the "object_names" field may contain multiple objects related to a question. However, in Figure 4 there is only one object label score. I am confused by this: why is only one object label predicted, and which of the labels in "object_names" is it?
Regarding the object localization task, it seems that you predict scores for all candidate boxes, so you can find all of the objects in "object_names", right? And for the evaluation of object localization, all of the objects in "object_names" are involved, right?
For the "how many" questions, sometimes, I find that the number of objects in "object_names" is not consistent with the predicted number in answers. Is this an annotation error?
Did you train VoteNet first and then fix its parameters before training the ScanQA model, or did you train VoteNet and the ScanQA model jointly?
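To make the two regimes I am asking about concrete, here is a minimal PyTorch sketch; the modules are placeholders, not your actual VoteNet / ScanQA implementations:

```python
import torch
import torch.nn as nn

# Placeholder modules just to illustrate the two training regimes.
votenet = nn.Linear(256, 256)      # stands in for the pretrained 3D detector
scanqa_head = nn.Linear(256, 128)  # stands in for the QA head

# Regime 1: pretrain VoteNet, freeze its parameters, train only the QA head.
for p in votenet.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(scanqa_head.parameters(), lr=1e-3)

# Regime 2: train the detector and the QA head jointly, end to end.
optimizer = torch.optim.Adam(
    list(votenet.parameters()) + list(scanqa_head.parameters()), lr=1e-3
)
```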
Looking forward to your reply.
Best,
Jian Ding