As far as I know, PyTorch's `one_hot` returns `torch.int64` while Ignite's `to_onehot` returns `torch.uint8`, and PyTorch adds the `num_classes` dimension at the end while Ignite puts it at `dim=1`:

```python
import torch
from ignite.utils import to_onehot, manual_seed
manual_seed(666)
x = torch.randint(10, (2, 5))
print(to_onehot(x, 10), to_onehot(x, 10).shape)
print(torch.nn.functional.one_hot(x, 10), torch.nn.functional.one_hot(x, 10).shape)
x = torch.randint(10, (5,))
print(to_onehot(x, 10), to_onehot(x, 10).shape)
print(torch.nn.functional.one_hot(x, 10), torch.nn.functional.one_hot(x, 10).shape)
```

Output:

```
tensor([[[1, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 1, 0, 0, 1],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 1, 0, 0]],

        [[0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0],
         [1, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 1, 0, 0, 1],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]], dtype=torch.uint8) torch.Size([2, 10, 5])
tensor([[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
         [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
         [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]],

        [[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
         [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]]]) torch.Size([2, 5, 10])
tensor([[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]], dtype=torch.uint8) torch.Size([5, 10])
tensor([[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]]) torch.Size([5, 10])
```
Answered by vfdev-5 on Jan 13, 2021:
I think that's it. `uint8` vs `long` is about memory impact. We put `num_classes` to `dim=1` such that targets with shapes like `(B, H, W)` can be easily transformed to one-hot targets `(B, C, H, W)`.
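For instance, a small sketch of that `(B, H, W)` case (with made-up sizes):

```python
import torch
from ignite.utils import to_onehot

# Segmentation-style integer targets: batch of 4 label maps of size 32x32, 3 classes
y = torch.randint(3, (4, 32, 32))

y_ohe = to_onehot(y, num_classes=3)
print(y_ohe.shape)  # torch.Size([4, 3, 32, 32]) -- class dimension inserted at dim=1
print(y_ohe.dtype)  # torch.uint8
```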