I posted the very same issue a month ago, and still no feedback...
Short answer is probably no. Invoke loads FLUX through a sort of Diffusers format and somehow doesn't support the native fp8 safetensors checkpoints. The system forces you to load the separate CLIP and T5-XXL encoders, which fills up RAM. I've waited long enough... So far, Invoke can only run the quantized NF4 FLUX model on a 4090 with 24 GB of VRAM, or you just roll back to SDXL.
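For reference, a minimal sketch of the NF4-quantized loading path that does fit in 24 GB, assuming the diffusers `BitsAndBytesConfig` quantization API and the `black-forest-labs/FLUX.1-dev` repo; Invoke's internal loader may work differently:

```python
# Minimal sketch: load the FLUX transformer quantized to NF4 so it fits in ~24 GB VRAM.
# Assumes a recent diffusers with bitsandbytes installed; this is NOT Invoke's loading code.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # assumed model repo
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps the CLIP/T5-XXL encoders off the GPU until needed

image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```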
I noticed that Invoke uses much, much more resources than ComfyUI when it comes to FLUX. I can't even run InvokeAI because it uses over 24 GB of VRAM, and that's with the fp8 FLUX models.
I have a 4090 and 96 GB of RAM, and it's just too slow for me. I'm not sure if it's because the models aren't actually loaded as fp8, but even the Freepik model loads slowly in Invoke.
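One way to check whether a checkpoint really stores fp8 weights is to inspect the tensor dtypes directly. A minimal sketch, assuming a local safetensors file at a hypothetical path `flux1-dev-fp8.safetensors`:

```python
# Minimal sketch: count which dtypes a FLUX safetensors checkpoint actually contains.
from collections import Counter
from safetensors import safe_open

path = "flux1-dev-fp8.safetensors"  # placeholder; point this at your own checkpoint

dtype_counts = Counter()
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        dtype_counts[str(f.get_tensor(key).dtype)] += 1

# A real fp8 checkpoint should be dominated by torch.float8_e4m3fn (or e5m2);
# mostly bfloat16/float16 means the weights are not actually stored in fp8.
for dtype, count in dtype_counts.most_common():
    print(f"{dtype}: {count} tensors")
```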
Is there an existing issue for this?
Contact Details
No response
What should this feature add?
A toggle button to force loading a FLUX model as fp8 (a rough sketch of what this would mean is below).
Alternatives
ComfyUI diffusion model loader.
Additional Content
No response
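For illustration, here is a minimal sketch of what such a toggle could do internally: keep the transformer weights stored in fp8 and upcast them only at compute time, similar in spirit to the `weight_dtype` option on ComfyUI's diffusion model loader. This assumes plain PyTorch float8 storage and is not InvokeAI's actual model-manager API:

```python
# Minimal sketch of an "fp8 toggle": store weights in float8_e4m3fn, upcast at compute time.
# Requires PyTorch >= 2.1. Illustration only, not InvokeAI code.
import torch
import torch.nn as nn


def cast_weights_to_fp8(model: nn.Module, storage_dtype=torch.float8_e4m3fn) -> nn.Module:
    """Re-store all floating-point parameters in fp8, roughly halving weight VRAM."""
    for param in model.parameters():
        if param.dtype in (torch.float32, torch.bfloat16, torch.float16):
            param.data = param.data.to(storage_dtype)
    return model


class Fp8Linear(nn.Linear):
    """Linear layer that keeps its weight in fp8 storage and upcasts per call."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight.to(x.dtype)  # upcast fp8 -> activation dtype for the matmul
        bias = self.bias.to(x.dtype) if self.bias is not None else None
        return nn.functional.linear(x, weight, bias)


# Usage example on a single layer (a full model would swap in Fp8Linear everywhere):
layer = Fp8Linear(4096, 4096, dtype=torch.bfloat16)
cast_weights_to_fp8(layer)
out = layer(torch.randn(1, 4096, dtype=torch.bfloat16))
print(layer.weight.dtype, out.dtype)  # torch.float8_e4m3fn, torch.bfloat16
```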