Disable calling Tensor.requires_grad_() inside a functorch transform #849
Merged
Conversation
Fixes #847

We do not allow users to call requires_grad_() inside a functorch transform. By calling requires_grad_(), the user is effectively saying "hey, I want another layer of autograd", but that doesn't actually work: setting up a layer of autograd requires some bookkeeping (e.g. pushing autograd onto the DynamicLayerStack). Instead, when a user calls requires_grad_() (and, similarly, retain_grad()), we raise a clear error message.

This has the intended consequence of causing torch.autograd.functional.{jvp, vjp, jacobian} to error out when called inside a functorch transform. Users should use the functorch equivalents instead.

Test Plan:
- added tests
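For illustration, a minimal sketch of the behavior described above, assuming the error surfaces as a RuntimeError (the PR only promises a clear error message, so the exact type and wording are assumptions); functorch.grad and functorch.jvp are the standard functorch transforms:

```python
import torch
from functorch import grad, jvp

def f(x):
    # With this change, calling requires_grad_() on a tensor while inside a
    # functorch transform raises an error instead of silently failing to add
    # another layer of autograd.
    y = x.clone().requires_grad_()
    return (y ** 2).sum()

x = torch.randn(3)
try:
    grad(f)(x)
except RuntimeError as err:  # error type assumed, see lead-in above
    print("raised inside grad():", err)

# Per the description above, torch.autograd.functional.{jvp, vjp, jacobian}
# now error out inside a transform as well; use the functorch equivalent:
def g(x):
    return x ** 3

primal, tangent = torch.randn(3), torch.ones(3)
out, tangent_out = jvp(g, (primal,), (tangent,))
```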
zou3519 added a commit that referenced this pull request on Jun 13, 2022: Disable calling Tensor.requires_grad_() inside a functorch transform (#849)
zou3519 added a commit that referenced this pull request on Jun 15, 2022: Disable calling Tensor.requires_grad_() inside a functorch transform (#849)
zou3519 added a commit that referenced this pull request on Jun 15, 2022:
* Fix MSE forward, use decomposition for MSE backward (#860)
  * use decomposition for mse backward
  * only reshape if there was no reduction
  * add tests, fix shape of mse loss forward
  * remove mse xfail
  * simplify backwards rule
  * [release] fixup previous commit
* Remove test/test_functorch_lagging_op_db.py (#845)
  These tests are expected to fail, but we didn't communicate that very well, and:
  1. we have gotten multiple questions about them
  2. we need to special-case it in our CI
  3. we don't even use the test anymore!
  So we are deleting it. Related: #835
* Disable calling Tensor.requires_grad_() inside a functorch transform (#849)
  Fixes #847; see the PR description above for the full message.
* "set_inplace_requires_grad_allowed" should be a context manager (#870)
  Test Plan:
  - run existing tests; code reading
* Fix index.Tensor, index_put batching rules (#862)
  Fixes #859. Start reading at `NOTE: [advanced indexing (index.Tensor) batch rule]` in the code for details. This PR rewrites the index.Tensor and index_put batching rules. The TL;DR is:
  - advanced indexing behaves differently depending on whether the advanced indices are adjacent: https://numpy.org/doc/stable/user/basics.indexing.html#combining-advanced-and-basic-indexing
  - our batching rules have to take this into account, because index.Tensor and index_put handle these cases internally (see the short illustration after this list)
  Test Plan:
  - added new test cases for getitem and aten.ops.index_put via OpInfo testing
  Future:
  - primtorch should have a sane decomposition that we can use
  - we haven't fixed the index_put_ batching rule yet; TODO later
  - upstream our test cases (see next section) into pytorch/pytorch

Co-authored-by: Samantha Andow <[email protected]>
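For context on the #862 note above, a small illustration (plain PyTorch advanced indexing, not functorch-specific code) of the adjacency rule the batching rules have to account for:

```python
import torch

x = torch.randn(2, 3, 4, 5)
idx = torch.tensor([0, 1])

# Adjacent advanced indices: the broadcast index dims stay where the
# indexed dims were.
print(x[:, idx, idx].shape)   # torch.Size([2, 2, 5])

# Advanced indices separated by a slice: the broadcast index dims are
# moved to the front of the result.
print(x[idx, :, idx].shape)   # torch.Size([2, 3, 5])
```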
zou3519 added a commit that referenced this pull request on Jun 15, 2022: Disable calling Tensor.requires_grad_() inside a functorch transform (#849)
zou3519 added a commit to zou3519/pytorch that referenced this pull request on Jul 20, 2022: Disable calling Tensor.requires_grad_() inside a functorch transform (pytorch/functorch#849)
bigfootjon pushed a commit to pytorch/pytorch that referenced this pull request on Jul 21, 2022: Disable calling Tensor.requires_grad_() inside a functorch transform (pytorch/functorch#849)