Describe the issue
With prithvi_vit, when the input spatial dimensions are not divisible by the patch size, part of the input is ignored.
To Reproduce (optional, but appreciated)
Steps to reproduce the behavior:
Create a prithvi_vit model
Pass it an input whose spatial size is not divisible by the patch size
No error is thrown (see the sketch after these steps)
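Something along these lines reproduces it. This is a rough sketch: the registry import, the backbone name, and the build/forward arguments are assumptions on my side, not verified terratorch API, so adjust them to whatever you actually use.

```python
# Rough reproduction sketch -- "prithvi_vit_100" and the build arguments are
# illustrative assumptions, not a verified terratorch call.
import torch
from terratorch.registry import BACKBONE_REGISTRY

model = BACKBONE_REGISTRY.build("prithvi_vit_100", pretrained=False)

# 230 is not divisible by the default patch size of 16 (14 * 16 = 224).
x = torch.randn(1, 6, 230, 230)
out = model(x)  # no error; the 6 leftover rows/columns are silently dropped
```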
Expected behavior (optional)
Either we should pad the input to a size divisible by the patch size, or raise an error.
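Either option is a small change right before the patch embedding. A minimal sketch of the two alternatives (both helpers are hypothetical, not existing terratorch functions):

```python
import torch
import torch.nn.functional as F

def pad_to_patch_multiple(x: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Option 1: zero-pad the spatial dims up to the next multiple of patch_size."""
    h, w = x.shape[-2:]
    pad_h = (patch_size - h % patch_size) % patch_size
    pad_w = (patch_size - w % patch_size) % patch_size
    # F.pad pads the last dims first: (left, right, top, bottom)
    return F.pad(x, (0, pad_w, 0, pad_h))

def check_divisible(x: torch.Tensor, patch_size: int) -> None:
    """Option 2: fail loudly instead of silently dropping pixels."""
    h, w = x.shape[-2:]
    if h % patch_size or w % patch_size:
        raise ValueError(
            f"Input spatial size ({h}, {w}) is not divisible by patch size {patch_size}"
        )
```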
That's strange. When running the test tests/test_backbones.py::test_vit_models_non_divisible_input (from the branch associated with this issue) I got:
```
> raise EinopsError(message + "\n {}".format(e))
E einops.EinopsError: Error while processing rearrange-reduction pattern "b c (t tub) (h p) (w q) -> b (t h w) (tub p q c)".
E Input tensor shape: torch.Size([1, 6, 4, 220, 230]). Additional info: {'tub': 1, 'p': 16, 'q': 16}.
E Shape mismatch, can't divide axis of length 220 in chunks of 16
```
Isn't that the expected behaviour?
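For what it's worth, both observations can be true at once: a convolutional patch embedding (the usual ViT projection) silently drops the remainder, while an einops rearrange with fixed chunk sizes, like the one in the traceback above, refuses to run. A small standalone illustration, independent of terratorch's actual code path:

```python
import torch
from einops import rearrange

x = torch.randn(1, 6, 220, 230)  # 220 and 230 are not multiples of 16

# Conv-based patch embedding: no error, the leftover 12 rows / 6 columns are ignored.
proj = torch.nn.Conv2d(6, 768, kernel_size=16, stride=16)
print(proj(x).shape)  # torch.Size([1, 768, 13, 14]) -> only 208 x 224 pixels are used

# einops with fixed patch sizes: raises an EinopsError like the one quoted above.
try:
    rearrange(x, "b c (h p) (w q) -> b (h w) (p q c)", p=16, q=16)
except Exception as e:
    print(type(e).__name__, e)
```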