Use assertExpected on the segmentation tests #3287
Conversation
test/test_models.py
Outdated
    model.eval().to(device=dev)
-   input_shape = (1, 3, 300, 300)
+   input_shape = (1, 3, 64, 64)
Significantly decreased the size of the input image and the number of classes to keep the expected file under 50 KB.
test/test_models.py
Outdated
    self.check_jit_scriptable(model, (x,), unwrapper=script_model_unwrapper.get(name, None))

    if dev == torch.device("cuda"):
        with torch.cuda.amp.autocast():
            out = model(x)
            self.assertEqual(tuple(out["out"].shape), (1, 50, 300, 300))
            self.assertExpected(out["out"].cpu(), prec=0.1, strip_suffix=f"_{dev}")
Use the same precision value as the classification tests.
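For context, here is a minimal sketch of what an assertExpected-style check does: the output tensor is compared element-wise against a serialized baseline file with an absolute tolerance given by prec, and the baseline is written on the first run. This is not torchvision's actual implementation; the function name, file layout, and regeneration logic below are illustrative assumptions.

import os
import torch

def assert_expected(tensor, name, prec=0.1, expected_dir="expect"):
    # Hypothetical helper: compare `tensor` against a stored baseline file.
    path = os.path.join(expected_dir, f"{name}_expect.pkl")
    if not os.path.exists(path):
        # First run (or intentional regeneration): store the current output as the baseline.
        os.makedirs(expected_dir, exist_ok=True)
        torch.save(tensor, path)
        return
    expected = torch.load(path)
    # Fail if any element deviates from the stored baseline by more than `prec`.
    torch.testing.assert_close(tensor, expected, rtol=0.0, atol=prec)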
Force-pushed from 62071aa to c745b55
Looks great, thanks!
Summary:
* Modify segmentation tests to compare against expected values.
* Exclude flaky autocast tests.
Reviewed By: datumbox
Differential Revision: D26156369
fbshipit-source-id: be596799abe5ff68cd5f4bad6d64aad8451d1487
Co-authored-by: Francisco Massa <[email protected]>
The current segmentation tests do not check against expected values; they only check the output sizes. As a result, breaking changes to the segmentation models are likely to go undetected. In this PR we modify the tests to compare the outputs against expected values, similarly to the other model tests.
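Roughly, the updated test flow looks like the sketch below: build a segmentation model with a reduced number of classes, run it on a small 64x64 input, check the output shape, and compare the output against a stored expected tensor. The model name, num_classes value, and constructor keywords here are illustrative assumptions; the real test lives in test/test_models.py and uses the suite's own assertExpected helper with prec=0.1.

import torch
import torchvision

def _test_segmentation_model_sketch(name="fcn_resnet50", dev=torch.device("cpu")):
    # Fewer classes keeps the serialized expected file small (< 50 KB).
    model = torchvision.models.segmentation.__dict__[name](
        num_classes=10, pretrained=False, pretrained_backbone=False
    )
    model.eval().to(device=dev)
    # Smaller input than the original 300x300, for the same reason.
    input_shape = (1, 3, 64, 64)
    x = torch.rand(input_shape).to(device=dev)
    out = model(x)["out"]
    assert tuple(out.shape) == (1, 10, 64, 64)
    # In the real suite this is self.assertExpected(out.cpu(), prec=0.1, ...),
    # which compares the tensor against a stored expected file with tolerance 0.1.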