Implement the lowering for HardSigmoid and HardSigmoidBackward #1940
Conversation
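For context, the operator semantics this PR lowers (a minimal reference sketch of PyTorch's definition of hardsigmoid, not the PR's actual XLA lowering code):

```python
import torch
import torch.nn.functional as F

def hardsigmoid_ref(x):
    # hardsigmoid(x) = clamp(x / 6 + 1/2, 0, 1)
    return torch.clamp(x / 6.0 + 0.5, min=0.0, max=1.0)

def hardsigmoid_backward_ref(grad_output, x):
    # The derivative is 1/6 on the linear region (-3, 3) and 0 elsewhere.
    mask = ((x > -3.0) & (x < 3.0)).to(grad_output.dtype)
    return grad_output * mask / 6.0

x = torch.randn(16)
assert torch.allclose(hardsigmoid_ref(x), F.hardsigmoid(x))
```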
That test wasn't hanging, it is just super slow (15+ minutes already). I will look into the metrics once it is finished.
In https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py#L139, reshape is called inside a couple of layers of nested for loops, and every reshape will cause a recompile. The test will take forever, so I will leave it disabled.
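To illustrate the recompile cost, here is a minimal sketch assuming a torch_xla install; the loop and shapes are made up for illustration and are not gradcheck's actual code:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(4, 6, device=device)

# Every iteration reshapes to a different output shape, so each step builds
# a graph XLA has not seen before and pays a fresh compilation.
for rows in (1, 2, 3, 4):
    y = (x.reshape(rows, -1) + 1).sum()
    xm.mark_step()  # cut the graph here; forces compilation per iteration

print(met.metrics_report())  # the CompileTime counter grows with the loop
```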
Gradient tests should all be disabled until we fix the gradient-check code.
The test was added and disabled last week because hardsigmoid_backward was not implemented for XLA. I implemented the lowering, tried to re-enable the test, and ran into this slowness problem. The test is disabled, so we should be good. I didn't know we should always skip gradient tests.
@dlibenzi So this test was added to TestNN, and historically TestNN has some gradcheck tests. :(
@JackCaoG If gradcheck is the root cause, feel free to add it to the skipped tests in pytorch/xla. And maybe submit a PR in PyTorch later to remove it.
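For reference, a hedged sketch of what that skip entry might look like; the `DISABLED_TORCH_TESTS` name and file location are assumptions about pytorch/xla's test harness, not taken from this PR:

```python
# Hypothetical excerpt from pytorch/xla's test harness
# (e.g. test/pytorch_test_base.py); name and structure are assumptions.
DISABLED_TORCH_TESTS = {
    # gradcheck calls reshape inside nested loops and recompiles on every
    # new shape, which makes this test prohibitively slow on XLA.
    'test_hardsigmoid_grad_xla',
}
```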
Sounds good! It is better to keep it in our file; otherwise we will forget about it very soon.
Thanks @JackCaoG, looks nice! `test_hardsigmoid_grad_xla` still won't get run because it was marked as `@onlyOnCPUAndCUDA` in PyTorch code. That test seems to be super slow/hanging in
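For context, a hedged sketch of how that device restriction works in PyTorch's device-generic tests; the test class and body here are made up for illustration:

```python
import torch
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, onlyOnCPUAndCUDA)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestHardSigmoid(TestCase):
    @onlyOnCPUAndCUDA  # instantiation skips every other device type
    def test_hardsigmoid_grad(self, device):
        x = torch.randn(8, device=device, requires_grad=True)
        torch.nn.functional.hardsigmoid(x).sum().backward()
        self.assertEqual(x.grad.shape, x.shape)

# Generates test_hardsigmoid_grad_cpu / _cuda, but no _xla variant.
instantiate_device_type_tests(TestHardSigmoid, globals())

if __name__ == '__main__':
    run_tests()
```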