Hi,
I am interested in learning codewords (not using EMA) that are L2-normalized and orthonormal to each other. To do so, I created the vector quantizer using roughly the following configuration:
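(A minimal sketch of what I mean, assuming the `VectorQuantize` module from lucidrains' vector-quantize-pytorch; the argument names and values here are illustrative, not my exact setup.)

```python
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,
    use_cosine_sim = True,        # L2-normalize inputs and codewords
    orthogonal_reg_weight = 10.,  # push codewords toward orthonormality
    learnable_codebook = True,    # learn the codebook by gradient descent
    ema_update = False,           # i.e. not using EMA
    commitment_weight = 1.,
)

# x: (batch, seq, dim) -> quantized output, code indices, auxiliary loss
# quantized, indices, loss = vq(x)
```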
However, I noticed in the implementation at line 1071 that there is only a single loss term, which pushes the input embeddings toward their corresponding quantized (codeword) embeddings. It does not include a second term that would push the codewords toward the inputs (the codebook loss). Am I missing something here?
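For reference, the standard VQ-VAE objective (van den Oord et al., 2017) has both directions; a minimal sketch, where `z_e` (encoder output), `z_q` (selected codeword), and `beta` are placeholders:

```python
import torch.nn.functional as F

# codebook loss: pulls the selected codewords toward the (frozen) encoder outputs
codebook_loss = F.mse_loss(z_q, z_e.detach())
# commitment loss: pulls the encoder outputs toward the (frozen) codewords
commitment_loss = F.mse_loss(z_e, z_q.detach())

loss = codebook_loss + beta * commitment_loss
```

(My understanding is that when the codebook is updated by EMA, or gradients are allowed to flow into a learnable codebook through the single term, the explicit codebook loss is often dropped; I am not sure whether that is the intent at line 1071.)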
Also, if I create a vector quantizer that learns the codebook using EMA, with roughly the following configuration:
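(Again an illustrative sketch under the same assumption about the library: EMA codebook updates via `decay`, with the orthogonal regularizer still enabled.)

```python
vq_ema = VectorQuantize(
    dim = 256,
    codebook_size = 512,
    decay = 0.99,                 # EMA decay for the codebook updates
    use_cosine_sim = True,
    orthogonal_reg_weight = 10.,
)
```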
Will it still learn codewords in a way that ensures their orthonormality?