
cannot use EMD_loss #4

Open
Holmes-Alan opened this issue Aug 22, 2020 · 1 comment

Comments

@Holmes-Alan

I tried to use `get_emd_loss` for training, but there seems to be a problem with backpropagation:

```
loss.backward()
  File "/home/miniconda3/envs/torch_3d/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/miniconda3/envs/torch_3d/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```

@HaolinLiu97
Owner

Can you specify what your training dataset is? I think you should try converting all your input tensors to float type by simply calling `.float()`. The information you provided is limited, so I cannot currently reproduce this.
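A minimal sketch of what the error usually means and how the suggested fix applies: autograd can only backpropagate through floating-point tensors, so an integer-typed input anywhere in the graph triggers this `RuntimeError`. The loss below is a stand-in placeholder, not the repository's `get_emd_loss` implementation; the tensor shapes are hypothetical.

```python
import torch

# Hypothetical point clouds loaded with an integer dtype (e.g. from raw
# voxel indices); torch.randint yields torch.int64 tensors.
pred = torch.randint(0, 10, (1, 1024, 3))
gt = torch.randint(0, 10, (1, 1024, 3))

# The fix: convert inputs to float before computing any differentiable loss.
pred = pred.float().requires_grad_(True)
gt = gt.float()

# Placeholder differentiable loss (NOT the repo's EMD loss) just to show
# that backward() succeeds once the tensors are floating point.
loss = (pred - gt).pow(2).mean()
loss.backward()
```

Without the `.float()` conversion, `requires_grad_(True)` on an integer tensor (or an integer tensor reaching `backward()`) raises an error similar to the traceback above.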
