
Implement the lowering for HardSigmoid and HardSigmoidBackward #1940

Merged
dlibenzi merged 2 commits into master from hardsigmod on Apr 21, 2020

Conversation

JackCaoG
Collaborator

test_hardsigmoid_grad_xla still won't run because it is marked @onlyOnCPUAndCUDA in the PyTorch code. That test seems to be super slow or hanging at:

  File "test_nn.py", line 10275, in test_hardsigmoid_grad
    self.assertTrue(gradcheck(F.hardsigmoid, (inputs,)))
  File "/usr/local/google/home/jackcao/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 291, in gradcheck
    numerical = get_numerical_jacobian(fn, tupled_inputs, eps=eps)
  File "/usr/local/google/home/jackcao/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 130, in get_numerical_jacobian
    orig = x_tensor[x_idx].item()
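For reference, a PyTorch-level sketch of the op and gradient this PR lowers, following the documented definition hardsigmoid(x) = clamp(x / 6 + 1 / 2, 0, 1); this is an illustration only, not the PR's actual C++ lowering in torch_xla/csrc/elementwise.cpp:

    import torch

    def hardsigmoid(x):
        # PyTorch's definition: clamp(x / 6 + 1 / 2, 0, 1)
        return torch.clamp(x / 6.0 + 0.5, min=0.0, max=1.0)

    def hardsigmoid_backward(grad_output, x):
        # The derivative is 1/6 inside the linear region (-3, 3), 0 elsewhere.
        in_linear_region = (x > -3.0) & (x < 3.0)
        return grad_output * torch.where(
            in_linear_region,
            torch.full_like(x, 1.0 / 6.0),
            torch.zeros_like(x),
        )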

@JackCaoG JackCaoG requested review from ailzhang and dlibenzi April 20, 2020 19:54
@JackCaoG
Collaborator Author

That test wasn't hanging; it is just super slow (15+ minutes already). I will look into the metrics once it is finished.

@JackCaoG
Collaborator Author

In https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py#L139, reshape is called inside a couple of layers of nested for loops, and every reshape causes a recompile, so the test takes forever. I will leave test_hardsigmoid_grad_xla disabled. get_analytical_jacobian and get_numerical_jacobian cause a total of 2048 recompiles, and these two functions are called multiple times.
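A sketch of how such recompiles can be observed with torch_xla's metrics report (assuming the torch_xla.debug.metrics module; exact counter names vary between releases):

    import torch
    import torch_xla.core.xla_model as xm
    import torch_xla.debug.metrics as met

    device = xm.xla_device()
    x = torch.randn(4, 4, device=device)

    for i in range(8):
        # Each .item() forces a device round trip, and each new index/shape
        # pattern inside the loop can trigger a fresh XLA compilation.
        _ = x.reshape(-1)[i].item()

    # The report includes compile counters/timers (e.g. CompileTime), so a
    # count that grows with the loop confirms the per-iteration recompiles.
    print(met.metrics_report())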

@dlibenzi
Collaborator

> In https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py#L139, reshape is called inside a couple of layers of nested for loops, and every reshape causes a recompile, so the test takes forever. I will leave test_hardsigmoid_grad_xla disabled. get_analytical_jacobian and get_numerical_jacobian cause a total of 2048 recompiles, and these two functions are called multiple times.

Gradient tests should all be disabled until we fix the gradient-check code.
@ailzhang why are we suddenly getting those?

@JackCaoG
Collaborator Author

> Gradient tests should all be disabled until we fix the gradient-check code.
> @ailzhang why are we suddenly getting those?

The test was added and disabled last week because hardsigmoid_backward was not implemented for XLA. I implemented the lowering, tried to re-enable the test, and ran into this slowness problem. The test is disabled, so we should be good. I didn't know we should always skip gradient tests.

Review threads (outdated, resolved): test/cpp/test_aten_xla_tensor.cpp (×3), torch_xla/csrc/elementwise.cpp
@ailzhang
Contributor

@dlibenzi So this test was added to TestNN, and historically TestNN has some gradcheck tests. :(

@ailzhang
Contributor

@JackCaoG If gradcheck is the root cause, feel free to add it to the skipped tests in pytorch/xla. And maybe submit a PR to PyTorch later to remove onlyOnCPUAndCUDA, so that we control when to turn it on.
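A minimal sketch of what the skip could look like (hypothetical test class and reason string; pytorch/xla's actual skip-list mechanism may differ):

    import unittest
    import torch
    import torch.nn.functional as F
    from torch.autograd import gradcheck

    class TestNNXla(unittest.TestCase):
        @unittest.skip("gradcheck recompiles per element on XLA and is too slow; see #1940")
        def test_hardsigmoid_grad_xla(self):
            # gradcheck expects double-precision inputs.
            inputs = torch.randn(4, 4, dtype=torch.double, requires_grad=True)
            self.assertTrue(gradcheck(F.hardsigmoid, (inputs,)))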

@JackCaoG
Collaborator Author

> @JackCaoG If gradcheck is the root cause, feel free to add it to the skipped tests in pytorch/xla. And maybe submit a PR to PyTorch later to remove onlyOnCPUAndCUDA, so that we control when to turn it on.

Sounds good! It is better we keep it in our file; otherwise we will forget about it very soon.

@JackCaoG JackCaoG requested a review from dlibenzi April 20, 2020 23:45
@dlibenzi dlibenzi left a comment (Collaborator)

Thanks @JackCaoG, looks nice!

@dlibenzi dlibenzi merged commit 23a27b6 into master Apr 21, 2020
@dlibenzi dlibenzi deleted the hardsigmod branch April 21, 2020 01:36
@lucasc896 lucasc896 mentioned this pull request Apr 12, 2022