Enable AMP for BetterTransformer #952
Comments
@younesbelkada any idea?
This solution sounds good to me! Do you mind opening a PR to add that fix? Otherwise, happy to do it.
I will get on to it in a moment. Will tag you for a review, if you don't mind.
It turns out fast path calculation is indeed not supported with mixed precision in torch. By setting
Thanks a lot for digging into that!
Hi, autocast is now supported with #1225, to the extent PyTorch supports it (dispatching to another compute path if autocast is enabled).
Feature request

Allow `BetterTransformer` models to run inference with AMP (automatic mixed precision).

Motivation
Models transformed with `BetterTransformer` raise an error when used with AMP: `bettertransformers.models.base`
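A minimal sketch of the failing setup looks roughly like this (the model name and autocast settings here are illustrative, not taken verbatim from the report):

```python
import torch
from transformers import AutoModel, AutoTokenizer
from optimum.bettertransformer import BetterTransformer

# Load a regular transformers model and convert it with BetterTransformer.
model = AutoModel.from_pretrained("xlm-roberta-base").eval().to("cuda")
model = BetterTransformer.transform(model)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello world", return_tensors="pt").to("cuda")

# Running the converted model under AMP is what triggers the error above.
with torch.autocast(device_type="cuda", dtype=torch.float16), torch.no_grad():
    outputs = model(**inputs)
```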
Why is that? I tried setting `torch.is_autocast_enabled` to `lambda: False` and everything works just fine, at least for `XLMRobertaModel`:
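Roughly, the hack looks like this (a sketch; the original snippet was not preserved, so the model and inputs below are illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer
from optimum.bettertransformer import BetterTransformer

# Quick hack: make the Python-level autocast check always report "disabled"
# so the BetterTransformer guard does not trip. For experimentation only.
torch.is_autocast_enabled = lambda: False

model = AutoModel.from_pretrained("xlm-roberta-base").eval().to("cuda")
model = BetterTransformer.transform(model)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello world", return_tensors="pt").to("cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16), torch.no_grad():
    outputs = model(**inputs)  # runs without the AMP error for XLMRobertaModel
```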
Your contribution

My guess would be that originally it was disabled since `NestedTensor` had no fp16 backends. Since that is no longer the case (at least in PyTorch 2.0.0), I can replace this AMP enable check with a torch version check.
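A rough sketch of what I have in mind (the real check lives in optimum's BetterTransformer base layer; the function name and version threshold below are illustrative, not the actual code):

```python
import torch
from packaging import version

def _reject_autocast() -> bool:
    # Hypothetical replacement for the current blanket autocast check:
    # only reject autocast on torch versions where NestedTensor lacked
    # fp16 backends (the threshold shown here is illustrative).
    torch_is_old = version.parse(torch.__version__) < version.parse("2.0.0")
    return torch_is_old and torch.is_autocast_enabled()

if _reject_autocast():
    raise ValueError(
        "Autocast is not supported with BetterTransformer on torch < 2.0.0. "
        "Please upgrade torch or disable autocast."
    )
```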