
Nan occurs in backward loss_otherwise #21

Open
ChristophReich1996 opened this issue Jan 26, 2021 · 1 comment

@ChristophReich1996

Hi, I'm encountering a weird NaN error in general.py during training after multiple epochs.
Any idea why this error occurs or how to fix it?

[Screenshot: error message reported by torch.autograd.detect_anomaly().]
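For context, a minimal, self-contained sketch of how the anomaly detection that produces a trace like the one above is typically enabled (toy model and data, not the actual training code):

```python
import torch

# Toy model and target so the snippet runs on its own.
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

# Wrapping the forward/backward pass in detect_anomaly() makes autograd
# report the first backward operation that produces a NaN.
with torch.autograd.detect_anomaly():
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
```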

Cheers and many thanks in advance
Christoph

@jonbarron
Owner

Hard to say without more info, but my guess is that the most likely cause is either (1) the input residual to the loss being extremely large (in which case clipping it should work) or NaN itself, or (2) alpha or scale becoming extremely large or small, in which case you probably want to manually constrain the range of values they take using the module interface.
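A minimal sketch of both suggestions, assuming the AdaptiveLossFunction module accepts alpha_lo / alpha_hi / scale_lo / scale_init keyword arguments (check adaptive.py in your installed version for the exact signature; the clip threshold here is a placeholder):

```python
import numpy as np
import torch
from robust_loss_pytorch import adaptive

# Constrain the ranges that alpha and scale can take. The keyword names
# below are assumptions based on adaptive.py; verify them against the
# version of the library you have installed.
lossfun = adaptive.AdaptiveLossFunction(
    num_dims=1,
    float_dtype=np.float32,
    device='cpu',
    alpha_lo=0.001,   # keep alpha away from degenerate values
    alpha_hi=1.999,
    scale_lo=1e-3,    # keep scale from collapsing toward zero
    scale_init=1.0)

def robust_loss(pred, target, clip=1e3):
    # 1) Clip the residual so extremely large (or inf) values cannot
    #    propagate NaNs through the backward pass.
    residual = torch.clamp(pred - target, -clip, clip)
    # 2) Evaluate the loss with the range-constrained module
    #    (lossfun expects shape [batch, num_dims]).
    return torch.mean(lossfun.lossfun(residual.reshape(-1, 1)))
```

If the residual can already be NaN before it reaches the loss, clipping alone won't help; it would need to be sanitized separately (e.g. with torch.nan_to_num) or the upstream source of the NaN fixed.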
