
Flux - GGUF and unet safetensors

yownas edited this page Nov 22, 2024 · 7 revisions

RuinedFooocus supports quantized GGUF Flux models, such as those found at city96/FLUX.1-dev-gguf and city96/FLUX.1-schnell-gguf, as well as Flux models from CivitAI that contain only the Unet part.

Since these models are missing the clip, t5 and vae components, you may need to download them separately. This should happen automatically, but you can do it manually if you want to use other models:

¹ t5-v1_1-xxl-encoder-Q3_K_S.gguf is the smallest and is used by default. You can change any of these by editing settings\settings.json.

Example:

  "gguf_clip1": "flux_clip_l.safetensors",
  "gguf_clip2": "t5-v1_1-xxl-encoder-Q6_K.gguf",
  "gguf_vae": "ae.safetensors"

(Make sure the commas at the end of the lines are correct; JSON does not allow a trailing comma after the last entry.)
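If you prefer not to hand-edit the JSON, a small script can update it safely. A minimal sketch in Python, using the keys and filenames from the example above (the settings/ directory is created if it does not exist yet):

```python
import json
from pathlib import Path

# Settings path used by RuinedFooocus (see above)
settings_path = Path("settings/settings.json")
settings_path.parent.mkdir(exist_ok=True)

# Load existing settings if present, otherwise start from an empty dict
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

# Keys from the example above; filenames must match files in your models folder
settings["gguf_clip1"] = "flux_clip_l.safetensors"
settings["gguf_clip2"] = "t5-v1_1-xxl-encoder-Q6_K.gguf"
settings["gguf_vae"] = "ae.safetensors"

# json.dumps emits the commas for you, so there is nothing to misplace
settings_path.write_text(json.dumps(settings, indent=2))
```

This also sidesteps the trailing-comma pitfall mentioned above, since the JSON is serialized by the library.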

RuinedFooocus can automatically download some files. The list of known files is:

For gguf_clip1:

  • clip_l.safetensors

For gguf_clip2:

  • t5-v1_1-xxl-encoder-Q3_K_L.gguf
  • t5-v1_1-xxl-encoder-Q3_K_M.gguf
  • t5-v1_1-xxl-encoder-Q3_K_S.gguf
  • t5-v1_1-xxl-encoder-Q4_K_M.gguf
  • t5-v1_1-xxl-encoder-Q4_K_S.gguf
  • t5-v1_1-xxl-encoder-Q5_K_M.gguf
  • t5-v1_1-xxl-encoder-Q5_K_S.gguf
  • t5-v1_1-xxl-encoder-Q6_K.gguf
  • t5-v1_1-xxl-encoder-Q8_0.gguf
  • t5-v1_1-xxl-encoder-f16.gguf
  • t5-v1_1-xxl-encoder-f32.gguf

For gguf_vae:

  • ae.safetensors
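To see at a glance which of the known files you already have, you can scan your model folders. A minimal sketch, assuming a models/ directory layout (the directory name is hypothetical; adjust it to your install):

```python
from pathlib import Path

# Known auto-download files from the lists above, keyed by settings entry
KNOWN_FILES = {
    "gguf_clip1": ["clip_l.safetensors"],
    "gguf_clip2": [
        f"t5-v1_1-xxl-encoder-{q}.gguf"
        for q in ("Q3_K_L", "Q3_K_M", "Q3_K_S", "Q4_K_M", "Q4_K_S",
                  "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0", "f16", "f32")
    ],
    "gguf_vae": ["ae.safetensors"],
}

def missing_files(model_dir="models"):
    """Return the known filenames not found anywhere under model_dir."""
    root = Path(model_dir)
    return [name for names in KNOWN_FILES.values() for name in names
            if not any(root.rglob(name))]
```

Anything reported as missing can either be fetched by RuinedFooocus automatically or downloaded by hand as described above.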

You should now be able to use GGUF models and Flux safetensors that are missing the clip, t5 and vae components.

Some models that should work:

There are also models that contain everything and will work out-of-the-box:

Note that Flux models need different Performance settings than SDXL. You can set these by selecting Custom... as the performance preset. The two example configurations below work "ok" and can be a good starting point.

[Screenshots from 2024-08-14: two example Custom performance settings]
