Hi!
Thanks for the lib and the tutorial; it's very informative.
Regarding finetuning: would it be worth quantizing the model to fp16 or even int8 *before* training begins, rather than after? My thinking is that finetuning with quantization already in place might give better accuracy than quantizing only after the model has been finetuned.
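For concreteness, here's a rough sketch of the two options I mean, assuming a plain PyTorch model (the toy model, data, and the `fbgemm` backend are just placeholders, not anything from this lib):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.ao.quantization as tq

# Placeholder model and data standing in for the real pretrained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))

# Option 1: half precision. Finetune under autocast so the forward pass
# runs in reduced precision (bfloat16 shown here for CPU portability).
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = F.cross_entropy(model(x), y)
loss.backward()
opt.step()

# Option 2: int8 via quantization-aware training (QAT). Insert fake-quant
# observers *before* finetuning so training adapts to quantization noise,
# then convert to a real int8 model afterwards.
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(10):  # stand-in for the real finetuning loop
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
model.eval()
int8_model = tq.convert(model)  # weights/activations now truly int8
```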
Thanks