The "Statement of need" (lines 19-22) mentioned that existing deep learning frameworks can speed up computation, but cannot perform analytical derivatives needed for the trial function method implemented by nnde.
Can this point be clarified or expanded?
For example: PyTorch implements methods for automatic differentiation (autograd) and numerical methods to compute gradients. What is an example where these are insufficient?
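To make the question concrete, here is a minimal sketch (not taken from nnde or the paper) of how PyTorch's autograd can compute exact derivatives of a network output with respect to its *input*, which is what a trial-function method for differential equations requires. The network shape, collocation points, and sample ODE are all hypothetical placeholders:

```python
import torch

# Hypothetical small network N(x): 1 input, 1 output.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.Sigmoid(),
    torch.nn.Linear(10, 1),
)

# Collocation points on [0, 1]; require gradients w.r.t. the input x.
x = torch.linspace(0.0, 1.0, 11).reshape(-1, 1)
x.requires_grad_(True)

# Trial solution y_t(x) = x * N(x), which satisfies y_t(0) = 0 by construction.
y = x * net(x)

# dy/dx computed analytically by autograd, not by finite differences.
# create_graph=True keeps the graph so we can backpropagate through dy/dx.
dy_dx, = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                             create_graph=True)

# Residual of a sample ODE dy/dx = -y (hypothetical example equation).
residual = dy_dx + y
loss = (residual ** 2).mean()
loss.backward()  # gradients w.r.t. network weights, usable for training
```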
Could you please explain what extra functionality nnde provides compared with this version of Autograd (not PyTorch's autograd) and JAX, considering the JOSS guidelines?
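For context, a minimal sketch (again hypothetical, not nnde's API) showing that JAX's `grad` composes to give exact higher-order derivatives with respect to the input; the tiny network, its parameters, and the boundary condition are placeholders:

```python
import jax
import jax.numpy as jnp

def net(params, x):
    """Tiny one-hidden-layer network, scalar input and output (hypothetical)."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * x + b1)
    return jnp.sum(w2 * h) + b2

def trial(params, x):
    # Trial solution with the boundary condition y(0) = 1 built in.
    return 1.0 + x * net(params, x)

# First and second analytical derivatives of the trial solution w.r.t. x.
dtrial = jax.grad(trial, argnums=1)
d2trial = jax.grad(dtrial, argnums=1)

# Random placeholder parameters for the sketch.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (10,)), jnp.zeros(10),
          jax.random.normal(k2, (10,)), 0.0)

print(trial(params, 0.5), dtrial(params, 0.5), d2trial(params, 0.5))
```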
We updated the paper to address this issue by explaining that the code predates the widespread adoption of TensorFlow 2. We began converting the software to TensorFlow 2 several months ago, and early results are promising, so we wanted to document this particular stage of the project before further development.
The "Statement of need" (lines 19-22) mentioned that existing deep learning frameworks can speed up computation, but cannot perform analytical derivatives needed for the trial function method implemented by
nnde
.Can this point be clarified or expanded?
For example: PyTorch implements methods for automatic differentiation (autograd) and numerical methods to compute gradients. What is an example where these are insufficient?
The text was updated successfully, but these errors were encountered: