Added grid_sample backward batch rule #284
Conversation
Description:
- Added grid_sample backward batch rule: CPU and CUDA
- Updated tests

Notes: I had to expand on dim 0 in most of the cases and could not use tricks like in the forward pass, where the batch dim is merged with either the channel or H_out dim, because that produces wrong grid grads in these cases.
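The "expand on dim 0" approach mentioned above can be illustrated with a minimal NumPy sketch (this is not the actual C++ batch rule; the function names here are hypothetical): the vmapped dim is merged with the existing batch dim into physical dim 0 before calling the per-sample op, and split back out afterwards.

```python
import numpy as np

def flatten_batch_dims(x):
    """Merge a vmapped dim (B) with the regular batch dim (N) into dim 0."""
    B, N = x.shape[:2]
    return x.reshape(B * N, *x.shape[2:]), (B, N)

def unflatten_batch_dims(x, bn):
    """Split dim 0 back into the vmapped dim and the batch dim."""
    B, N = bn
    return x.reshape(B, N, *x.shape[1:])

# input of shape (B, N, C, H, W): vmapped dim B sits on top of batch dim N
inp = np.arange(2 * 3 * 1 * 4 * 4, dtype=np.float32).reshape(2, 3, 1, 4, 4)
flat, bn = flatten_batch_dims(inp)   # the kernel would run on (B*N, C, H, W)
out = unflatten_batch_dims(flat, bn)
assert out.shape == inp.shape and np.array_equal(out, inp)
```

Merging into dim 0 keeps the grid gradient correct because each vmapped sample gets its own batch entry, unlike merging the vmapped dim with the channel or H_out dims.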
VMAP_SUPPORT("grid_sampler_3d_backward", GRID_SAMPLE_BW_BATCH_RULE(grid_sampler_3d_backward));
VMAP_SUPPORT("cudnn_grid_sampler_backward", CUDNN_GRID_SAMPLE_BW_BATCH_RULE(cudnn_grid_sampler_backward));
Do both these get exercised in the tests?
Yes: cudnn_grid_sampler_backward is tested by test/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_grid_sample_cuda_float32, and grid_sampler_3d_backward is covered by one of the cases in test/test_ops.py::TestOperatorsCPU::test_vmapvjp_has_batch_rule_nn_functional_grid_sample_cpu_float32.
This looks good to me. I had a suggestion for how to reduce the number of cases, and we should rebase and regenerate OutOfPlacePlumbing.cpp (I added a change to codegen/codegen_outofplacebatching.py).
thanks!
Hmm, tests are failing with a build issue, but I'm not sure they're related.
A test is failing because we're getting an unexpected success. I'm also not sure whether this is related, but flagging it in case it is.
There is a CUDA test failing at build time due to PyTorch 1.7.1 being installed by conda (somehow): https://app.circleci.com/pipelines/github/pytorch/functorch/798/workflows/e842f064-bc17-4925-b330-88f0aa0c94a1/jobs/4776?invite=true#step-103-333
Just reran the tests (sorry that it reran all of them... I meant to only rerun that one). It looks like it just hit a weird race condition, or something funky happened with the checkout, since the root errors were a bunch of "file not found"s. I think the CUDA test failures should be similar across the 3 versions after this.
I synced the PR to the
* Added grid_sample backward batch rule

  Description:
  - Added grid_sample backward batch rule: CPU and CUDA
  - Updated tests

  Notes: I had to expand on dim 0 in most of the cases and could not use tricks like in the forward pass, where the batch dim is merged with either the channel or H_out dim, due to wrong grid grads in these cases.

* Code updates according to the review
* Updated OutOfPlacePlumbing.cpp to the latest pytorch
Description:
- `std::array` in signature

Notes: I had to expand on dim 0 in most of the cases and could not use tricks like in the forward pass, where the batch dim is merged with either the channel or H_out dim, due to wrong grid grads in these cases.
Related to #240