From inference.py I can see that the T5Encoder is loaded onto the GPU in float16:
t5_encoder = T5TextEmbedder().to(pipe.device, dtype=torch.float16)
And during the inference step, the output embeddings from the T5Encoder are converted to the same format as the SD pipeline:
prompt_embeds = t5_encoder(prompt, max_length=128).to(pipe.device, pipe.dtype)
So, to save VRAM, I tried letting the T5 model stay on the CPU by changing the model-loading line to:
t5_encoder = T5TextEmbedder()
It ran fine, but the result was totally different; the prompt wasn't being followed well. So it turns out that running the model in FP32 and then converting the embeddings to FP16 is not the same as running the model directly in FP16.
Also, when I loaded the pipeline in BF16 but kept the text encoder in FP16, the result was different as well.
So in order to use this ella-sd1.5-tsc-t5xl model properly, both the SD model and the T5Encoder must be in FP16, am I understanding this correctly?
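For reference, here is roughly what I tried, using the pipe and prompt objects from inference.py; the import path for T5TextEmbedder is a guess, adjust it to the repo layout:

```python
import torch
# Assumed import path for T5TextEmbedder; adjust to wherever inference.py defines it.
from model import T5TextEmbedder

# What inference.py does: run the T5 encoder on the GPU in fp16, then cast the
# embeddings to the pipeline's device and dtype.
t5_encoder = T5TextEmbedder().to(pipe.device, dtype=torch.float16)
prompt_embeds = t5_encoder(prompt, max_length=128).to(pipe.device, pipe.dtype)

# My VRAM-saving attempt: keep T5 on the CPU, which leaves its weights in fp32.
# t5_encoder = T5TextEmbedder()                              # stays on CPU, fp32
# prompt_embeds = t5_encoder(prompt, max_length=128)         # fp32 forward pass
# prompt_embeds = prompt_embeds.to(pipe.device, pipe.dtype)  # cast only the final output
# The cast only rounds the finished embeddings; every intermediate activation was
# computed in fp32, so the result is not the same as a true fp16 forward pass.
```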
Yes. We conducted the vast majority of our experiments on V100s, which do not support BF16, so we had to use the FP16 T5 for training. I tested this and found that the difference between the outputs of the FP16 T5 and the BF16 T5 cannot be ignored; it results in obvious differences in the generated images. A reasonable strategy might be to put T5 on the GPU first and move it back to the CPU after the embedding has been generated.
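A rough sketch of that idea, assuming the t5_encoder and pipe objects from inference.py (with t5_encoder already cast to fp16):

```python
import torch

def encode_prompt_offloaded(prompt: str) -> torch.Tensor:
    # Move the fp16 T5 onto the GPU only for the forward pass...
    t5_encoder.to(pipe.device)
    with torch.no_grad():
        prompt_embeds = t5_encoder(prompt, max_length=128).to(pipe.device, pipe.dtype)
    # ...then park it back on the CPU so the VRAM is free for the UNet/VAE.
    t5_encoder.to("cpu")
    torch.cuda.empty_cache()
    return prompt_embeds
```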
I see, thanks a lot.
A reasonable strategy might be to put T5 on the GPU first and move it back to the CPU after the embedding has been generated.
Yes, that is what I'm doing to cope when generating at high resolutions. Another strategy would be to run the encoder on another GPU (dual-GPU setup).
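A minimal sketch of the dual-GPU variant, assuming pipe sits on cuda:0 and a second device cuda:1 is available:

```python
import torch

# Keep the SD pipeline on the first GPU and the fp16 T5 encoder on the second one.
t5_encoder = T5TextEmbedder().to("cuda:1", dtype=torch.float16)

# The embeddings are copied over to the pipeline's device/dtype afterwards,
# exactly as in inference.py, so the UNet never shares VRAM with T5.
prompt_embeds = t5_encoder(prompt, max_length=128).to(pipe.device, pipe.dtype)
```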