Incompatibility with up-to-date pytorch and numpy #4
Labels: bug (Something isn't working)
Comments
dreamer2368 opened this issue:

This issue corresponds to the legacy code. The work-in-progress python package has already applied the fix for this issue.

The legacy code currently solves SINDy using outdated torch/numpy routines in two places in BurgersEqn1D/utils.py:

1. find_sindy_coef uses torch.pinverse
2. solve_sindy uses np.linalg.lstsq

The other examples have not been examined, but they are expected to use the same routines.

For torch>=2.3.0 and numpy>=1.26.4, the internal behavior of torch.pinverse and numpy.linalg.lstsq has changed, which makes them unstable for our use in SINDy loss training. Earlier versions of torch or numpy may have the same issue.
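For reference, here is a schematic of the two call patterns described above. This is a simplified sketch inferred only from the function names mentioned in the issue; the actual bodies of find_sindy_coef and solve_sindy in BurgersEqn1D/utils.py differ, and the argument names Phi and dXdt are placeholders.

```python
import numpy as np
import torch

# Simplified sketch of the legacy call patterns (not the repository code).
# Phi stands for the SINDy library matrix, dXdt for the latent time derivatives.

def find_sindy_coef_sketch(Phi: torch.Tensor, dXdt: torch.Tensor) -> torch.Tensor:
    # Pattern 1: explicit pseudo-inverse inside the training loss.
    return torch.pinverse(Phi) @ dXdt

def solve_sindy_sketch(Phi: np.ndarray, dXdt: np.ndarray) -> np.ndarray:
    # Pattern 2: NumPy least-squares solve.
    coef, *_ = np.linalg.lstsq(Phi, dXdt, rcond=None)
    return coef
```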
Currently, the solution is:

1. Use torch.linalg.lstsq instead of torch.pinverse. This is also recommended by the official PyTorch documentation (https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html#torch.linalg.pinv).
2. It is not clear whether numpy.linalg.lstsq is unstable. However, it often returns different results than torch.linalg.lstsq for the same system. The official NumPy documentation (https://numpy.org/doc/stable/reference/generated/numpy.linalg.lstsq.html) says that if there are multiple solutions, the one with minimal norm is chosen, which could be a reason for this inconsistency. For consistency, it is recommended to use the same torch.linalg.lstsq here as well.
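A small, self-contained way to see why the two least-squares backends can disagree (a hypothetical reproduction sketch with made-up data, not code from the repository): solve the same rank-deficient system with numpy.linalg.lstsq and torch.linalg.lstsq and compare. When the system admits more than one solution, NumPy documents that it returns the minimum-norm one, while PyTorch's default CPU driver ("gelsy") is not guaranteed to match it.

```python
import numpy as np
import torch

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 6))
Phi[:, 5] = Phi[:, 4]          # duplicate column -> rank-deficient system, many solutions
dXdt = rng.standard_normal((20, 3))

# NumPy: SVD-based solver, documented to return the minimum-norm solution.
coef_np = np.linalg.lstsq(Phi, dXdt, rcond=None)[0]

# PyTorch: default CPU driver is QR-based ("gelsy"), so its result may differ.
coef_torch = torch.linalg.lstsq(torch.from_numpy(Phi), torch.from_numpy(dXdt)).solution.numpy()

print("max |numpy - torch| :", np.abs(coef_np - coef_torch).max())
print("numpy solution norm :", np.linalg.norm(coef_np))
print("torch solution norm :", np.linalg.norm(coef_torch))
```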
Comment:

The simplest fix might be to use torch.linalg.lstsq everywhere to avoid inconsistencies, and then "numpify" the output whenever it's needed.
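A minimal sketch of what this suggestion could look like (the function names and signatures are assumptions based on the routines mentioned in the issue, not the actual BurgersEqn1D/utils.py code): use torch.linalg.lstsq for both solves and convert the result to NumPy only where an array is needed.

```python
import numpy as np
import torch

def find_sindy_coef(Phi: torch.Tensor, dXdt: torch.Tensor) -> torch.Tensor:
    # Replaces torch.pinverse(Phi) @ dXdt; the PyTorch docs recommend lstsq
    # over forming the pseudo-inverse explicitly.
    return torch.linalg.lstsq(Phi, dXdt).solution

def solve_sindy(Phi: np.ndarray, dXdt: np.ndarray) -> np.ndarray:
    # Same torch solver instead of np.linalg.lstsq, then "numpify" the output.
    coef = torch.linalg.lstsq(torch.from_numpy(Phi), torch.from_numpy(dXdt)).solution
    return coef.detach().cpu().numpy()
```

Using a single solver in both places avoids the numpy-vs-torch discrepancy described in point 2 of the issue.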
Comment:

As of PR #8, the legacy code that has this version inconsistency has been moved to …