Add autograd test for T.Spectrogram/T.MelSpectrogram #1340
Conversation
@skipIfNoCuda
class AutogradCUDATest(AutogradTestCase, PytorchTestCase):
I think it would be nice to have a device generic class similar to what we have in PyTorch for autograd tests https://github.com/pytorch/pytorch/blob/a3a2150409472fe6fa66f3abe9e795303786252c/test/test_autograd.py#L7931-L7937
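For context, something along these lines is what a device-generic setup could look like. This is a minimal sketch with hypothetical class names, not the PyTorch infrastructure linked above: the test body is written once and instantiated per device by subclassing.

```python
import unittest

import torch
from torch.autograd import gradcheck


class AutogradTestImpl:
    # Written once; concrete device classes below set `device`.
    dtype = torch.float64
    device = "cpu"

    def test_simple_transform(self):
        # Placeholder transform; the real tests would exercise T.Spectrogram etc.
        x = torch.randn(2, 8, dtype=self.dtype, device=self.device, requires_grad=True)
        self.assertTrue(gradcheck(torch.sin, (x,)))


class AutogradCPUTest(AutogradTestImpl, unittest.TestCase):
    device = "cpu"


@unittest.skipUnless(torch.cuda.is_available(), "CUDA is not available")
class AutogradCUDATest(AutogradTestImpl, unittest.TestCase):
    device = "cuda"


if __name__ == "__main__":
    unittest.main()
```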
Sure, but let's defer on changing the test infrastructure. This is the common pattern used in torchaudio, so if we are going to change, I would like to change them in one go.
Force-pushed from 7454333 to b3fbe41
@anjali411 I have also added gradgradcheck.
for i in inputs:
    i.requires_grad = True
    inputs_.append(i.to(dtype=self.dtype, device=self.device))
assert gradcheck(transform, inputs_, eps=eps, atol=atol, rtol=rtol)
self.assertTrue(gradcheck(transform, inputs_))
class AutogradTestCase(TestBaseMixin):
    def assert_grad(self, transform, *inputs, eps=1e-06, atol=1e-05, rtol=0.001):
You don't need to define these values (eps, atol, rtol) here.
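As a sketch of that suggestion (the mixin stand-in and toy test below are placeholders, not the exact torchaudio code), the helper can simply rely on the built-in defaults of gradcheck/gradgradcheck:

```python
import unittest

import torch
from torch.autograd import gradcheck, gradgradcheck


class TestBaseMixin:
    # Stand-in for torchaudio's TestBaseMixin: only provides dtype/device here.
    dtype = torch.float64
    device = "cpu"


class AutogradTestCase(TestBaseMixin, unittest.TestCase):
    def assert_grad(self, transform, *inputs):
        # No eps/atol/rtol parameters: gradcheck and gradgradcheck already
        # come with sensible finite-difference defaults.
        inputs_ = []
        for i in inputs:
            i.requires_grad = True
            inputs_.append(i.to(dtype=self.dtype, device=self.device))
        self.assertTrue(gradcheck(transform, inputs_))
        self.assertTrue(gradgradcheck(transform, inputs_))

    def test_sin(self):
        # Toy check that the helper itself runs end to end.
        self.assert_grad(torch.sin, torch.randn(5, dtype=torch.float64))
```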
    i.requires_grad = True
    inputs_.append(i.to(dtype=self.dtype, device=self.device))
assert gradcheck(transform, inputs_, eps=eps, atol=atol, rtol=rtol)
assert gradgradcheck(transform, inputs_, eps=eps, atol=atol, rtol=rtol)
self.assertTrue(gradgradcheck(transform, inputs_))
for i in inputs:
    i.requires_grad = True
    inputs_.append(i.to(dtype=self.dtype, device=self.device))
assert gradcheck(transform, inputs_, eps=eps, atol=atol, rtol=rtol)
nit: use self.assertTrue()
Is there a reason that assertTrue is preferred? I always think that the message returned by assertTrue, AssertionError: False is not true, is nonsense, so I always use assert.
I was mostly suggesting to use the self. variant of the assert instead of the plain one. The reason is that it makes the test framework aware of the failure in a better way than catching the error raised by a bare assert, and it allows the framework to give better messages when running the whole test suite.
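Purely as an illustration of the difference being discussed (toy function, not the real transform test):

```python
import unittest

import torch
from torch.autograd import gradcheck


class Example(unittest.TestCase):
    def test_with_plain_assert(self):
        x = torch.randn(3, dtype=torch.float64, requires_grad=True)
        # A bare assert raises a plain AssertionError that unittest only
        # sees as an error traceback.
        assert gradcheck(torch.sin, (x,))

    def test_with_asserttrue(self):
        x = torch.randn(3, dtype=torch.float64, requires_grad=True)
        # self.assertTrue goes through the TestCase machinery, so unittest
        # can record and report the failure like any other assertion.
        self.assertTrue(gradcheck(torch.sin, (x,)))


if __name__ == "__main__":
    unittest.main()
```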
What does this mean? How much should we be worried?
That means that your backward function is not deterministic.
Thanks for the info. This is an interesting finding. Let me dig into that.
Force-pushed from b67b2c3 to ba61c9b
Force-pushed from ba61c9b to 0cb1e3d
@albanD Following the finding in pytorch/pytorch#54093, I added nondet_tol.
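A sketch of what threading that tolerance through looks like; the helper below is illustrative rather than the exact torchaudio code, but gradcheck and gradgradcheck do accept a nondet_tol keyword:

```python
import torch
from torch.autograd import gradcheck, gradgradcheck


def assert_grad(transform, inputs, *, nondet_tol=0.0):
    # nondet_tol tolerates small differences between repeated backward passes,
    # which is the nondeterminism (e.g. replication_pad1d_backward_cuda)
    # discussed in pytorch/pytorch#54093.
    inputs_ = []
    for i in inputs:
        i.requires_grad = True
        inputs_.append(i.to(dtype=torch.float64))
    assert gradcheck(transform, inputs_, nondet_tol=nondet_tol)
    assert gradgradcheck(transform, inputs_, nondet_tol=nondet_tol)


if __name__ == "__main__":
    assert_grad(torch.sin, [torch.randn(2, 5)], nondet_tol=1e-10)
```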
Interesting investigation!
        self.assert_grad(transform, [waveform], nondet_tol=1e-10)

    def test_melspectrogram(self):
        # replication_pad1d_backward_cuda is not deteministic and
nit - is nondeterministic
LGTM!
Thanks!
Co-authored-by: Brian Johnson <[email protected]>
Add test for checking autograd. Part of #1337