Set dtype policy for uint8 #19327
Conversation
Codecov Report

Additional details and impacted files:

@@           Coverage Diff           @@
##           master   #19327   +/-  ##
=======================================
  Coverage   75.74%   75.74%
=======================================
  Files         366      366
  Lines       40194    40197     +3
  Branches     7814     7815     +1
=======================================
+ Hits        30445    30448     +3
  Misses       8062     8062
  Partials     1687     1687

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Thanks for the fix!
Commits in this branch:

* …nse` Add qlora-like technique to `quantized_call` in `Dense`
* Update `save_own_variables` and `load_own_variables`
* Update `benchmark.py`
* Update version string.
* Set dtype policy for uint8 (keras-team#19327)
  * Set Quantization policy for uint8 to float
  * Add uint8 to dtype_policies
* Use Value dim shape for Attention compute_output_shape (keras-team#19284)
  * Use Value dim shape for Attention compute_output_shape
  * Fix attention layer compute output shape
  * fix format
  * check compute_output_shape with output
* Update `quantized_call` in `EinsumDense` to support training with quantized weights
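The "Use Value dim shape for Attention compute_output_shape" commit refers to a general property of attention: the output's last dimension comes from the value tensor, not the query. A minimal sketch of that shape rule (an illustrative helper, not the actual Keras implementation):

```python
def attention_output_shape(query_shape, value_shape):
    """Illustrative shape rule for dot-product attention.

    (batch, T_q, dim_q) attended over (batch, T_v, dim_v)
    yields (batch, T_q, dim_v): the sequence length follows the
    query, the feature dimension follows the value.
    """
    return (*query_shape[:-1], value_shape[-1])


# Query features (16) differ from value features (32); the output
# takes its last dimension from the value tensor.
print(attention_output_shape((2, 8, 16), (2, 8, 32)))  # (2, 8, 32)
```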
Fixes an issue for KerasCV, where we use `uint8` as the dtype. Setting it to `FloatDTypePolicy` gets past the quantization dtype policy error. Tested locally with KerasCV to verify the fix.
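The idea behind the fix can be sketched as follows. This is a self-contained mock, not the actual Keras code: the class names mirror Keras dtype-policy classes, but the resolution logic here is an assumption for illustration. Before the fix, requesting a non-float dtype such as `uint8` tripped the quantization-policy path and raised an error; afterwards it resolves to a float policy.

```python
# Illustrative mock of dtype-policy resolution (not Keras internals).
QUANTIZED_DTYPES = {"int8"}  # assumption: dtypes with a real quantized policy
FLOAT_DTYPES = {"float16", "bfloat16", "float32", "float64"}


class FloatDTypePolicy:
    def __init__(self, name):
        self.name = name
        # Non-float storage dtypes such as uint8 still compute in float32.
        self.compute_dtype = name if name in FLOAT_DTYPES else "float32"


class QuantizedDTypePolicy:
    def __init__(self, name):
        self.name = name


def get_policy(dtype):
    """Resolve a dtype string to a policy object."""
    if dtype in QUANTIZED_DTYPES:
        return QuantizedDTypePolicy(dtype)
    # The fix: fall back to a float policy for dtypes like "uint8"
    # instead of raising a quantization dtype policy error.
    return FloatDTypePolicy(dtype)


policy = get_policy("uint8")
print(type(policy).__name__, policy.compute_dtype)  # FloatDTypePolicy float32
```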