PyTorch 2.3 is introducing unsigned integer dtypes like `uint16`, `uint32` and `uint64` in pytorch/pytorch#116594. Quoting Ed:

I tried `uint16` on some of the transforms and the following would work:

but stuff like flip or colorjitter won't work. In general, it's safe to assume that `uint16` doesn't really work in eager mode.
What to do about `F.to_tensor()` and `F.pil_to_tensor()`?

Up until 2.3, passing a uint16 PIL image (mode `"I;16"`) to those would produce:

- `to_tensor()`: an `int16` tensor as output. This is completely wrong and a bug: the range of `int16` is smaller than that of `uint16`, so the resulting tensor is incorrect and has tons of negative values (coming from overflow).
- `pil_to_tensor()`: an error - this is OK.
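The overflow is easy to see with numpy alone. This is just a sketch of the failure mode, not torchvision's actual code path: reinterpreting pixel values above 32767 (the `int16` maximum) as signed wraps them around to negatives.

```python
import numpy as np

# uint16 pixel values; two of them are above int16's maximum of 32767
pixels = np.array([0, 1000, 40000, 65535], dtype=np.uint16)

# Reinterpreting the same bytes as int16 wraps the large values to
# negatives, which is what an int16 output for an "I;16" image amounts to
as_int16 = pixels.view(np.int16)
print(as_int16.tolist())  # [0, 1000, -25536, -1]
```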
Now with 2.3 (or, more precisely, with the nightlies/RC):

- `to_tensor()`: still outputs an `int16` tensor, which is still incorrect.
- `pil_to_tensor()`: outputs a `uint16` tensor, which is correct - but that tensor won't work with a lot of the transforms.
Proposed fix
- Keep `pil_to_tensor()` as-is; just write a few additional tests w.r.t. `uint16` support.
- Make `to_tensor()` return a `uint16` tensor instead of `int16`. This is a bug fix. Users may get loud errors down the line when they use that `uint16` tensor with other transforms (because `uint16` is generally not well supported), but a loud error is much better than the silent error users are currently getting.