When I test with the trained model, the dimensions of the input data are different from the dimensions the model was trained with.
Why is this happening?
Error: RuntimeError: Error(s) in loading state_dict for GTS:
size mismatch for embedder.embedder.weight: copying a param with shape torch.Size([3349, 128]) from checkpoint, the shape in current model is torch.Size([3322, 128]).
Hi, I had the same error. In my case it was because I tried to load and test a model previously trained on a different dataset (actually a different trainset). This creates the mismatch in the embedding vocab sizes (as in your error message), since the embeddings are created at the initialization of the model. See this line.