replicating quantisation from training during inference #1012
Unanswered · michaelcyshield asked this question in Q&A
Hello, I've noticed that the validation images generated during training are consistently higher quality than the ones I generate during inference. Is there a way to replicate exactly what happens during validation image generation in my own inference setup?

Here is my config:
simple_tuner_config.txt

And this is my inference script:
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    args.model_id,
    torch_dtype=torch.float16,
)
# pipeline.vae.enable_slicing()
# pipeline.vae.enable_tiling()
```
Sorry in advance for the messy code, but I'm really stuck here: the output of this inference script is noticeably worse than the validation images, and I can't work out why.
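Since training-time validation typically renders with the training precision and quantisation still applied, one likely cause is that this script loads everything in fp16 with no quantisation. Below is a minimal sketch of how the validation setup might be mirrored at inference, assuming the config trained in bf16 with int8 quanto quantisation of the denoiser — both assumptions, since the attached config is not reproduced here; check simple_tuner_config.txt for the actual values. For a UNet-based model, `pipeline.unet` would take the place of `pipeline.transformer`, and the sampler settings are placeholders to be copied from the validation config.

```python
import torch
from diffusers import DiffusionPipeline
from optimum.quanto import quantize, freeze, qint8

# Assumptions: bf16 training precision and int8 quanto quantisation of the
# denoiser; adjust both to whatever simple_tuner_config.txt actually says.
pipeline = DiffusionPipeline.from_pretrained(
    "your-model-id",             # placeholder for args.model_id
    torch_dtype=torch.bfloat16,  # match the training precision, not fp16
)

# Quantise only the denoiser, the same way training did;
# the text encoders and VAE stay in bf16.
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

pipeline.to("cuda")
image = pipeline(
    "same prompt as one of your validation images",
    num_inference_steps=28,  # assumption: copy the validation settings
    guidance_scale=3.5,      # assumption: copy the validation settings
).images[0]
image.save("out.png")
```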
Replies (2 comments, 9 replies):

- "don't quantise the text encoders" (3 replies) — a quick way to check which components end up quantised or down-cast is sketched after this list.
- "What's the value of …" (reply truncated in the source; 6 replies)
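As an illustrative follow-up to the first reply (this sketch is not from the thread itself): a small diagnostic that prints the parameter dtype of every pipeline component, so the inference setup can be compared against what the training config specifies. Even this plain dtype printout is enough to spot an fp16-vs-bf16 mismatch between inference and validation. The model id is a placeholder.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "your-model-id",            # placeholder
    torch_dtype=torch.float16,
)

# Print the parameter dtype of every component that has parameters;
# compare this against the precision the training config used.
for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        params = list(component.parameters())
        if params:
            print(f"{name}: {params[0].dtype}")
```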