encode centroid #2

Open
Leminhbinh0209 opened this issue Apr 29, 2022 · 2 comments

Comments

@Leminhbinh0209

Hi,
I don't see the encoding of the centroid described in Eq. (4), but it is in your code. Why is that, or did I miss it somewhere else?

```python
new_center = self.encode(centroid.unsqueeze(0).repeat(xs.shape[0], 1, 1).permute(0,2,1)).permute(0,2,1)
```
@AmingWu
Owner

AmingWu commented Apr 29, 2022

Thank you. In our paper, we explain the encoding below Eq. (4). It is a fully-connected layer that maps P to the dictionary space.
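For reference, the fully-connected encoding described above can be sketched as follows. This is a minimal illustration, not the repository's actual module: the names `Encoder`, `feat_dim`, and `dict_dim` are assumptions, and the dimensions are arbitrary. The repository's quoted line permutes channels before and after `self.encode`, which suggests a channel-wise layer; here a plain `nn.Linear` over the last axis is used instead for clarity.

```python
import torch
import torch.nn as nn

# Hypothetical encoder: a fully-connected layer psi that maps features
# into the dictionary space, as the reply above describes.
class Encoder(nn.Module):
    def __init__(self, feat_dim=256, dict_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, dict_dim)

    def forward(self, x):  # x: (batch, n, feat_dim)
        return self.fc(x)  # -> (batch, n, dict_dim)

batch, n, feat_dim = 4, 49, 256
encode = Encoder(feat_dim, dict_dim=128)

P = torch.randn(batch, n, feat_dim)   # region features P
centroid = torch.randn(n, feat_dim)   # shared centroid / codebook entries

encoded_P = encode(P)

# Mirroring the line quoted in the issue: the same encoder is applied to
# the centroid after tiling it across the batch dimension.
new_center = encode(centroid.unsqueeze(0).repeat(batch, 1, 1))
print(new_center.shape)  # torch.Size([4, 49, 128])
```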

@Leminhbinh0209
Author

Thank you for the reply. In Eq. (4), I think the \psi function is applied only to P, not to the codebook C, whereas in the code the encoder is applied to both the codebook C and P (in the file roi_box_feature_extractors.py).

I have another question about the code in the file generalize_rcnn.py. I see that you reshape a feature to batch_size x 256 x n (line `new_features = features[i].reshape(features[i].shape[0],256,-1)`), then you apply SVD to the first instance only (?) (line `u1_0, s1_0, v1_0 = torch.svd(new_features[0].double(), some=True)`). Why don't you apply SVD to the whole mini-batch?
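To make the question concrete, here is a small sketch contrasting the two choices: SVD of the first instance only, as in the quoted line, versus a batched SVD over the whole mini-batch. The shapes (batch of 2, 256 x 49 feature maps) are illustrative assumptions, not taken from the repository.

```python
import torch

batch = 2
new_features = torch.randn(batch, 256, 49)  # batch_size x 256 x n

# As in the quoted line: SVD of the first instance only.
u1_0, s1_0, v1_0 = torch.svd(new_features[0].double(), some=True)

# Batched alternative: one SVD per instance in the mini-batch.
U, S, Vh = torch.linalg.svd(new_features.double(), full_matrices=False)

print(u1_0.shape, s1_0.shape)  # torch.Size([256, 49]) torch.Size([49])
print(U.shape, S.shape)        # torch.Size([2, 256, 49]) torch.Size([2, 49])
```

With `torch.linalg.svd` the decomposition is broadcast over the leading batch dimension, so every instance gets its own singular values rather than just the first one.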
