
Disable calling Tensor.requires_grad_() inside a functorch transform #849

Merged
zou3519 merged 1 commit into main on Jun 3, 2022

Conversation

zou3519 (Contributor) commented Jun 1, 2022

Fixes #847

We no longer allow users to call requires_grad_() inside a functorch transform. Calling requires_grad_() is effectively a request for another layer of autograd, but that cannot work here: setting up a layer of autograd requires extra bookkeeping (e.g. pushing autograd onto the DynamicLayerStack).

Instead, when a user calls requires_grad_() (and, similarly, retain_grad()) inside a transform, we raise an informative error message.

An intended consequence is that torch.autograd.functional.{jvp, vjp, jacobian} now error out when called inside a functorch transform; users should use the functorch equivalents instead (see the sketch below).

Test Plan:

  • added tests
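A minimal sketch of the new behavior, assuming functorch's public vmap/jacrev APIs; the exact error type (RuntimeError below) and message are assumptions, not quotes from the implementation:

```python
import torch
import functorch

def per_sample_jacobian_naive(x):
    # torch.autograd.functional.jacobian calls requires_grad_() internally,
    # so after this PR it errors out when called inside a functorch transform.
    return torch.autograd.functional.jacobian(torch.sin, x)

x = torch.randn(3, 4)

try:
    functorch.vmap(per_sample_jacobian_naive)(x)
except RuntimeError as err:
    # Previously this could silently compute incorrect results (see #847);
    # now it raises and points users at the functorch equivalent.
    print(err)

# Supported pattern: use functorch's own jacobian transform under vmap.
jac = functorch.vmap(functorch.jacrev(torch.sin))(x)  # shape (3, 4, 4)
```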

zou3519 added this to the 0.2.0 milestone Jun 1, 2022
zou3519 merged commit 130582c into main Jun 3, 2022
zou3519 mentioned this pull request Jun 3, 2022
zou3519 mentioned this pull request Jun 13, 2022
zou3519 added a commit that referenced this pull request Jun 13, 2022
…849)

zou3519 added a commit that referenced this pull request Jun 15, 2022
…849)

zou3519 added a commit that referenced this pull request Jun 15, 2022
* Fix MSE forward, use decomposition for MSE backward (#860)

* use decomposition for mse backward

* only reshape if there was no reduction

* add tests, fix shape of mse loss forward

* remove mse xfail

* simplify backwards rule

* [release] fixup previous commit

* Remove test/test_functorch_lagging_op_db.py (#845)

These tests are expected to fail, but we didn't communicate that very
well and:
1. we have gotten multiple questions about them
2. we need to special case it in our CI
3. we don't even use the test anymore!

So we are deleting it.

Related: #835

* Disable calling Tensor.requires_grad_() inside a functorch transform (#849)

* "set_inplace_requires_grad_allowed" should be a context manager (#870)

Test Plan:
- run existing tests; code reading

* Fix index.Tensor, index_put batching rules (#862)

Fixes #859

Start reading at `NOTE: [advanced indexing (index.Tensor) batch rule]`
in the code for details. This PR rewrites the index.Tensor and index_put
batching rules.

The TL;DR is:
- advanced indexing behaves differently depending on whether the "advanced indices are adjacent": https://numpy.org/doc/stable/user/basics.indexing.html#combining-advanced-and-basic-indexing
- we have to take this into account in our batching rules, because index.Tensor and index_put handle these cases internally (illustrated below).

Test Plan
- I added new test cases for getitem and aten.ops.index_put via OpInfo
testing.

Future
- primtorch should have a sane decomposition that we can use
- We haven't fixed the index_put_ batching rule yet. TODO later...
- Upstream our test cases (see next section) into pytorch/pytorch

Co-authored-by: Samantha Andow <[email protected]>
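To make the adjacency distinction above concrete, here is a small illustration of standard PyTorch/NumPy advanced-indexing semantics (an editorial sketch, not code from this PR): when the advanced indices are separated by a slice, the broadcasted index dimensions move to the front of the result; when they are adjacent, they stay in place.

```python
import torch

x = torch.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
idx = torch.tensor([0, 1])

# Advanced indices adjacent (dims 1 and 2): the index dimensions stay in place.
adjacent = x[:, idx, idx, :]      # shape (2, 2, 5)

# Advanced indices separated by a slice (dims 0 and 2): the index dimensions
# move to the front of the result.
separated = x[idx, :, idx, :]     # shape (2, 3, 5)

print(adjacent.shape, separated.shape)
```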
zou3519 added a commit that referenced this pull request Jun 15, 2022
…849)

zou3519 added a commit to zou3519/pytorch that referenced this pull request Jul 20, 2022
…h transform (pytorch/functorch#849)

bigfootjon pushed a commit to pytorch/pytorch that referenced this pull request Jul 21, 2022
…h transform (pytorch/functorch#849)

Development

Successfully merging this pull request may close these issues.

Incorrect behavior on vmap(torch.autograd.functional.jacobian)