keep text encoders in fp32 in flux #9677
Comments
I used SD 1.5 to train a DreamBooth model and have the same issue: `mat1 and mat2 must have the same dtype, but got Float and Half`. |
Hi, you can't use mixed precision at inference, that's not how it works. But you can run the text encoders in full precision, get the embeddings, and then cast them to half precision, though I still don't get what you're trying to accomplish here, since the result will be the same. The rule here is that the embeddings must have the same dtype as the model they are fed to (see the sketch below). |
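For reference, a minimal sketch of that approach with diffusers' `FluxPipeline`: the text encoders are loaded in fp32, the embeddings are computed in fp32 and then cast to fp16 before denoising. The model id, prompt, and generation settings here are placeholders, not taken from this thread.

```python
import torch
from diffusers import FluxPipeline
from transformers import CLIPTextModel, T5EncoderModel

model_id = "black-forest-labs/FLUX.1-dev"  # placeholder model id

# Load the text encoders in full precision ...
text_encoder = CLIPTextModel.from_pretrained(
    model_id, subfolder="text_encoder", torch_dtype=torch.float32
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    model_id, subfolder="text_encoder_2", torch_dtype=torch.float32
)

# ... while the transformer and VAE stay in fp16.
pipe = FluxPipeline.from_pretrained(
    model_id,
    text_encoder=text_encoder,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

prompt = "a photo of a cat"  # placeholder prompt
with torch.no_grad():
    # The embeddings come out in fp32 because the encoders are fp32.
    prompt_embeds, pooled_prompt_embeds, _ = pipe.encode_prompt(
        prompt=prompt, prompt_2=prompt, max_sequence_length=512
    )

# Cast the embeddings so they match the fp16 transformer.
prompt_embeds = prompt_embeds.to(torch.float16)
pooled_prompt_embeds = pooled_prompt_embeds.to(torch.float16)

image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_fp32_text_encoders.png")
```

Because the embeddings are cast before denoising, the fp16 transformer never sees fp32 inputs, which is exactly the `mat1 and mat2 must have the same dtype` mismatch reported above.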
In my case, adding the dtype to the UNet worked:

```python
import torch
from diffusers import UNet2DConditionModel, StableDiffusionPAGPipeline

# before: unet = UNet2DConditionModel.from_pretrained('./ft/checkpoint-7000/unet')
# after: load the fine-tuned UNet in the same dtype as the rest of the pipeline
unet = UNet2DConditionModel.from_pretrained(
    './ft/checkpoint-7000/unet',
    torch_dtype=torch.bfloat16,
)
pipeline = StableDiffusionPAGPipeline.from_pretrained(
    './ft',
    unet=unet,
    torch_dtype=torch.bfloat16,
    safety_checker=None,
    pag_applied_layers='mid',
)
``` |
Yeah, with that you're using the same dtype for the UNet and the rest of the pipeline, so there's no mismatch. |
Hi @saeedkhanehgir, did you manage to solve it? |
Hi,
I want to test the outputs when the text encoders are in fp32 and the pipeline is in fp16 in FLUX, according to this issue. I wrote the code below but get an error.
code :
error :
Thanks