Hello!
In issue #30 you answered: "Empirically, we have been doing CLIP inference in fp16 without much problem, and that's how the model was trained for anyway".
In that case I have two questions:
Is there any chance to change the dtype of the CLIP weights from float32 to float16 and use the model on GPU? You said "without much problem", and I've already dived into the code, but I have not succeeded in changing the dtype correctly.
Do I understand correctly that you trained CLIP in float32?
For most of the weights it's safe to convert the parameter dtypes to float16, but some operations like LayerNorm need to be done in fp32 for stable training. We convert some of the weights depending on the layer's type; see convert_weights().
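In case it helps, here is a minimal sketch of the idea for a generic PyTorch model (not the exact convert_weights() from this repo, just an illustration): cast the parameters of conv, linear, and attention layers to half precision and skip LayerNorm so it stays in fp32.

```python
import torch.nn as nn

def convert_weights_to_fp16(model: nn.Module):
    """Cast conv/linear/attention parameters to fp16, leaving everything
    else (notably LayerNorm) in fp32 for numerical stability."""

    def _convert(module):
        if isinstance(module, (nn.Conv1d, nn.Conv2d, nn.Linear)):
            module.weight.data = module.weight.data.half()
            if module.bias is not None:
                module.bias.data = module.bias.data.half()
        elif isinstance(module, nn.MultiheadAttention):
            for name in ["in_proj_weight", "q_proj_weight", "k_proj_weight",
                         "v_proj_weight", "in_proj_bias", "bias_k", "bias_v"]:
                tensor = getattr(module, name, None)
                if tensor is not None:
                    tensor.data = tensor.data.half()
        # nn.LayerNorm modules are intentionally not matched here, so their
        # weights and biases stay in float32.

    model.apply(_convert)
```

As far as I can tell, clip.load() already applies convert_weights() when building the model on a CUDA device, so in most cases no manual conversion should be needed.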
We used mixed-precision training for CLIP, as mentioned in Section 2.5 of the paper.
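For context, mixed-precision training in PyTorch typically looks like the sketch below. This is illustrative only; the paper does not describe this exact implementation, and the tiny model, data, and loss here are toy stand-ins just to keep the snippet self-contained.

```python
import torch
import torch.nn as nn

# Toy stand-ins; CLIP's real model, data, and loss are of course different.
model = nn.Linear(512, 512).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

for _ in range(10):
    x = torch.randn(8, 512, device="cuda")
    target = torch.randn(8, 512, device="cuda")
    optimizer.zero_grad()
    # autocast runs eligible ops in fp16 while keeping precision-sensitive ops in fp32
    with torch.cuda.amp.autocast():
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then takes the optimizer step
    scaler.update()
```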