The copy call does not trigger an __init__ call, so the new Loss object ends up with _grad still set to the function that was constructed when __init__ ran for the original, unscaled Loss object; the updated scale is therefore never reflected in the gradient.
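A minimal sketch of that failure mode, using illustrative names rather than scico's actual implementation (the toy class below only mimics the copy-then-rescale pattern described above):

```python
import copy

import jax
import jax.numpy as jnp


class ToyLoss:
    """Toy stand-in for the pattern described above (not scico's actual Loss)."""

    def __init__(self, scale=0.5):
        self.scale = scale
        # _grad is built once here, wrapping the bound __call__ of *this* object.
        self._grad = jax.grad(self.__call__)

    def __call__(self, x):
        return self.scale * jnp.sum(x**2)

    def __mul__(self, other):
        # copy.copy does not invoke __init__, so the copy's _grad is still the
        # closure over the original, unscaled object's __call__.
        new_loss = copy.copy(self)
        new_loss.scale = self.scale * other
        return new_loss

    def grad(self, x):
        return self._grad(x)


x = jnp.ones(4)
f = ToyLoss()
g = f * 3.0
print(g(x))       # 6.0 -- __call__ sees the new scale (3.0 * 0.5 * 4)
print(g.grad(x))  # [1. 1. 1. 1.] -- gradient still uses the original scale 0.5
```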
PR #470 has a simple fix, but this issue raises a few broader design questions:
* Is there any value in initializing a _grad attribute of Functional objects rather than simply defining their grad method as directly computing the gradient from __call__? (A sketch of that alternative follows below.)
* Would the Loss implementation not be at least slightly simpler if it were derived from ScaledFunctional rather than Functional?
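On the first question, a rough sketch of the alternative, again with illustrative names rather than scico's API: deriving the gradient from __call__ at call time leaves no cached state for a copy to invalidate.

```python
import jax
import jax.numpy as jnp


class ToyFunctional:
    """Toy functional whose gradient is derived from __call__ on demand."""

    def __call__(self, x):
        return jnp.sum(x**2)

    def grad(self, x):
        # No _grad attribute initialized in __init__: differentiate whatever
        # __call__ currently computes, so copies and attribute updates are
        # automatically reflected in the gradient.
        return jax.grad(self.__call__)(x)


print(ToyFunctional().grad(jnp.arange(3.0)))  # [0. 2. 4.]
```

The obvious trade-off is that the gradient function is re-derived on every call; if that cost matters, it could presumably be recovered with a lazily rebuilt cache or a jit-compiled wrapper rather than an attribute fixed at construction time.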
* Update change log
* Resolve #468 and add corresponding test
* Shorten comment
* Resolve some oversights in prox definitions
* Minor edit
* Avoid chaining of ScaledFunctional and some code re-organization
* Address review comment
There is a bug in Loss.grad handling of the scale attribute, but only when it's set via scalar multiplication. The same bug is not present in Functional.grad.
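A toy sketch of why scaling through a wrapper class sidesteps the stale-gradient problem (illustrative names, not scico's implementation; that Functional's scalar multiplication goes through ScaledFunctional is inferred from the questions above):

```python
import jax
import jax.numpy as jnp


class ToyFunctional:
    """Toy base functional; _grad is built in __init__ as in the pattern above."""

    def __init__(self):
        self._grad = jax.grad(self.__call__)

    def __call__(self, x):
        return jnp.sum(x**2)

    def __mul__(self, other):
        # Scaling returns a *new* object whose __init__ runs, so its _grad
        # wraps the already-scaled __call__ -- no stale state.
        return ToyScaledFunctional(self, other)

    def grad(self, x):
        return self._grad(x)


class ToyScaledFunctional(ToyFunctional):
    """Wrapper applying a scale; __init__ rebuilds _grad for the scaled function."""

    def __init__(self, functional, scale):
        self.functional = functional
        self.scale = scale
        super().__init__()

    def __call__(self, x):
        return self.scale * self.functional(x)


x = jnp.arange(3.0)
f = ToyFunctional() * 4.0
print(f(x))       # 20.0 = 4 * (0 + 1 + 4)
print(f.grad(x))  # [ 0.  8. 16.] -- gradient reflects the scale, unlike ToyLoss above
```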