I'm trying to use the LatentQuantize model in an autoencoder context. My inputs are flat 1-D tensors (length 32), and my encoder passes a tensor of shape (batch_size, 64) to the quantizer. For now, my levels are [8, 6, 4] and my latent_dim is 64.
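Concretely, the setup is along these lines (a simplified sketch rather than my exact code: the encoder/decoder below are just placeholder modules, and the LatentQuantize arguments follow the library's README-style usage):

```python
import torch
import torch.nn as nn
from vector_quantize_pytorch import LatentQuantize

# placeholder encoder/decoder: 32-dim input -> 64-dim latent -> 32-dim reconstruction
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
decoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))

quantizer = LatentQuantize(
    levels = [8, 6, 4],  # quantization levels per codebook dimension
    dim = 64,            # latent_dim coming out of the encoder
)

x = torch.randn(16, 32)                           # (batch_size, 32) flat inputs
z = encoder(x)                                    # (batch_size, 64)
quantized, indices, quantize_loss = quantizer(z)  # quantized keeps shape (batch_size, 64)
recon = decoder(quantized)                        # (batch_size, 32)
```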
The loss starts at zero, then exponentially increases:

Any thoughts as to why this might happen?
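For context, the training step looks roughly like this, continuing the sketch above (again simplified: the Adam optimizer, the 1e-3 learning rate, and the plain MSE reconstruction term are illustrative assumptions, not my exact settings):

```python
import torch
import torch.nn.functional as F

# illustrative optimizer and learning rate, reusing encoder/decoder/quantizer from the sketch above
params = list(encoder.parameters()) + list(decoder.parameters()) + list(quantizer.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x):
    z = encoder(x)
    quantized, indices, quantize_loss = quantizer(z)  # auxiliary loss returned by LatentQuantize
    recon = decoder(quantized)

    recon_loss = F.mse_loss(recon, x)
    loss = recon_loss + quantize_loss                 # total loss that gets optimized

    opt.zero_grad()
    loss.backward()
    opt.step()

    # logging the two terms separately makes it easier to see which one is growing
    return recon_loss.item(), quantize_loss.item(), loss.item()
```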
Well, actually, I've just discovered that it seems to be an LR thing...

Zooming in on the first 10k steps:

But recon seems to converge pretty steadily. So maybe just a false alarm. I still have to wrap my head around how LatentQuantize works (and how to get what I want), mind you!