Is there something wrong in the reconstruction loss code? #2
AlexHex7 changed the title from "Does it something wrong in reconstruct loss code?" to "Does it has something wrong in reconstruct loss code?" on Nov 6, 2017.
Oh, I just found what you mean. It's duplicating the assignments over the batch dimension. I'll fix it; thanks for pointing this out!

To verify, here is what the masked capsule activity looks like after my fix: only one capsule is nonzero for each batch sample. With a batch size of 2, the first sample has capsule 0 active and the second sample has capsule 2 active.
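A minimal NumPy sketch of that verification, under assumed toy sizes (2 samples, 4 capsules of dimension 3, not the repo's actual tensor shapes): build a one-hot mask from each sample's own longest capsule, so exactly one capsule survives per sample.

```python
import numpy as np

# Toy capsule activity: 2 samples, 4 capsules of dimension 3 (hypothetical sizes).
v = np.zeros((2, 4, 3))
v[0, 0] = [0.9, 0.1, 0.2]  # sample 0: capsule 0 is longest
v[0, 3] = [0.2, 0.0, 0.1]
v[1, 1] = [0.1, 0.1, 0.0]
v[1, 2] = [0.8, 0.3, 0.0]  # sample 1: capsule 2 is longest

# Index of the longest capsule per sample, then a (batch, n_caps) one-hot mask.
max_idx = np.linalg.norm(v, axis=2).argmax(axis=1)  # -> [0, 2]
one_hot = np.eye(v.shape[1])[max_idx]
masked = v * one_hot[:, :, None]  # zero out every capsule except the longest

# Exactly one capsule is nonzero per sample: capsule 0, then capsule 2.
active = np.abs(masked).sum(axis=2) > 0
print(active)
```

The one-hot multiply is just one way to express the mask; an indexed assignment paired row-by-row (as in the commit below) is equivalent.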
timomernick added a commit that referenced this issue on Nov 6, 2017.
The version of Python I use is 3.5.

In my opinion, v_max_index with shape (batch_size, 1) means that for each sample in the batch there is one maximum-length vector among the 16 vectors. So for sample 0 there should be one active vector among the 16, and the other 15 vectors should all be 0.

But

masked[:,v_max_index] = input[:,v_max_index]

means that for each sample, every index in v_max_index gets assigned, which includes duplicate assignments across the batch.
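A NumPy sketch of the duplicate assignment, under assumed toy sizes (2 samples, 4 capsules of dimension 3, not the repo's actual shapes): indexing only the capsule axis applies the whole batch of max indices to every sample, whereas pairing each row with its own index leaves one active capsule per sample.

```python
import numpy as np

# Toy input: 2 samples, 4 capsules of dimension 3 (hypothetical sizes).
inp = np.zeros((2, 4, 3))
inp[0, 0] = [1.0, 1.0, 1.0]  # sample 0: capsule 0 is longest
inp[0, 2] = [0.1, 0.0, 0.0]
inp[1, 0] = [0.3, 0.0, 0.0]
inp[1, 2] = [2.0, 0.0, 0.0]  # sample 1: capsule 2 is longest

v_max_index = np.linalg.norm(inp, axis=2).argmax(axis=1)  # -> [0, 2]

# Buggy: every sample is assigned at ALL of the batch's max indices,
# so both samples end up with capsules 0 AND 2 unmasked.
buggy = np.zeros_like(inp)
buggy[:, v_max_index] = inp[:, v_max_index]

# Fixed: pair each row with its own max index.
fixed = np.zeros_like(inp)
rows = np.arange(inp.shape[0])
fixed[rows, v_max_index] = inp[rows, v_max_index]

nonzero_buggy = (np.abs(buggy).sum(axis=2) > 0).sum(axis=1)  # 2 per sample
nonzero_fixed = (np.abs(fixed).sum(axis=2) > 0).sum(axis=1)  # 1 per sample
```

PyTorch's advanced indexing behaves the same way here, so the same row-paired indexing (e.g. with torch.arange over the batch) would fix the original line.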
And the paper says (Figure 2) that for each sample, 15 of the 16 vectors are masked.