the first release

@bghira bghira released this 30 Jul 03:06
· 194 commits to master since this release

5d44e3b Prompts: Add Clipdrop styles
c77db5e Compel: Enable truncation feature
d34686b PipelineRunner should use completed output for non-SDXL pipelines.
e5c81b7 PipelineRunner should use pil output for non-SDXL pipelines.
b4e4716 PipelineRunner should set up denoising_start for base pipes
f2cb3be mixture-of-experts partial diffusion support for base text2img class
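The mixture-of-experts partial diffusion commits above correspond to the base/refiner handoff that diffusers exposes via `denoising_end` (base) and `denoising_start` (refiner): the base pipeline denoises the first fraction of the schedule and the refiner resumes at the same fraction. A minimal sketch of the step split — the helper name and the 0.8 default handoff are illustrative, not taken from this codebase:

```python
def split_denoising(num_steps: int, handoff: float = 0.8):
    """Split a diffusion run between a base model and a refiner.

    The base pipeline denoises from the start of the schedule down to
    `handoff` (denoising_end in diffusers terms); the refiner resumes
    at the same fraction (denoising_start) and finishes the run.
    """
    if not 0.0 < handoff < 1.0:
        raise ValueError("handoff must be a fraction in (0, 1)")
    base_steps = round(num_steps * handoff)
    refiner_steps = num_steps - base_steps
    return base_steps, refiner_steps
```

With 50 steps and the default handoff, the base runs 40 steps and the refiner the remaining 10.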
0fe3913 SDXL: Use VAE on GPU
b324b19 SDXL: Use the 0.9 VAE, as the 1.0 VAE produces noticeably worse results. Thanks Joe.
e27c929 SDXL: Use 1.0 refiner by default
cd0b090 Added SDXL Base as an option
381b386 SDXL Refiner: loop the index because of a bug in Diffusers
470a379 SDXL Refiner: allow specifying alternative
94505ff Compel: more easily disable
96f14b4 DiffusionPipelineManager: Log SDXL Base args
2b4b602 DiffusionPipelineManager: Use fp16 variant, optionally
97e4bab DiffusionPipelineManager: Logging for SDXL Refiner args
d517aa4 DiffusionPipelineManager: use variant fp16
13c7239 SSL: verify
627460c Updates: pytorch, etc
464c656 Optimizations for PyTorch
cf7b02b DeepFloyd: Stage1 resolution should be calculated via k
741e52a Torch compile: restrict which TEs we compile
7cff1cb DiscordProgressBar: reduce logging
d4d250a Pipelines: DeepFloyd runs with batch size 1
4f33715 ControlNet Tile: use emilianJR/epiCRealism instead of urpm
de92077 img2img: use controlnet on output, if configured
a4b7f42 DeepFloyd: better handling of text_encoder
5de03e5 AppConfig: retrieve live values when requested
5d295ac DeepFloyd: stage1 can use 8bit text encoder
1f82a24 DF-IF: Make SDXL refiner optional for DF-IF
474ee03 DF-IF: Optimise pipe removal
9bb4af8 DF-IF: Remove pipes after output
0c09361 DF-IF: Use SDXL refiner after the stage3
2b2e940 DF-IF: Optimise batch size for DF stage 1
a9a1f03 DF-IF: Optimise batch size for DF stage 2
bcd8472 DF-IF: Optimise default CFGs
911d43e DF-IF: Use x4 upscaler by default
68b3dc6 Discord progress bar: multi-stage progress reporting for multi-stage pipelines
f3dc97e Disable Compel for DF-IF
087a0b8 Disable compile for unet-less models
69edc95 PipelineManager: Do not unnecessarily delete pipelines
3c23bc5 kandinsky 2.2 runner
a45fe3a Refactoring SDXL pipeline runner into a more compact form
07ff58f Offload: ability to disable offload/sequential offload, via config.json
8e97e41 Updates for LLama.cpp
979bb59 image metadata: add automatic1111/stable-diffusion-webui compatibility
97298d7 Refiner: allow for img2img
897e11c SDXL: Refiner should use a bit-flipped seed for every refined image
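The "bit-flipped seed" commit above describes deriving the refiner's seed from the base seed by inverting its bits, so each refined image gets a deterministic but different noise seed. A small sketch of that derivation — the helper name is illustrative; the 64-bit width is an assumption matching `torch.Generator`'s seed range:

```python
def flip_seed(seed: int, bits: int = 64) -> int:
    """Derive a refiner seed by flipping every bit of the base seed.

    XOR against an all-ones mask keeps the result inside the same
    unsigned integer range, and applying the flip twice returns the
    original seed, so the mapping is deterministic and reversible.
    """
    mask = (1 << bits) - 1
    return (seed & mask) ^ mask
```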
3633900 Compel: 2.0
4a7e149 Compel: (WIP) SDXL support adaptation
09e9803 offload: do not always sequentially offload
1d04d0e offload: enable for 4090-A6000
36d7d64 img2img: hard-code some aspects like cfg and step count
61f8a35 Upscaler: RealESRGAN
c0d6974 Pipeline manager: keep track of which models to delete better
fadd3cb Compile: allow disabling of unet compile via settings
7a42acf Diffusers: use main branch implementation
67a6865 PNG Info: Add metadata to the images
369d274 Use Safetensors
e70b1d0 VRAM: Not bf16
a3a8ba3 VRAM: Fallback should move to CUDA
f115230 VRAM: Generator to CPU
35c5ec5 Torch: kill off the logging
c094174 VRAM: Enable sequential offload for severe cases
bdb5b38 VRAM: Clear autocast cache
4318300 Logging: remove unnecessary line
8fa1b17 Progress bar: Store the maximum power used rather than the random sample.
03aae82 VRAM optimization: keep track of where the model is, so that our embeds end up in the same place.
6e7d339 Compel: allow disabling via config flag, and add to example config
640f78c Progress bar: Store the maximum power used rather than the random sample.
a5a19c8 GPU VRAM optimization: do not load Compel when not needed
94daee4 use torch.compile
8310b3b Allow use of base for img2img
a3d30ef Remove enforcement for ST
6fb42e5 Refactor the print_prompt to display more useful information
b3806b3 SDXL: The refiner should be used by default for text2img outputs.
cccc5fa SDXL: Allow fullgraph mode
4191c9c Compel: detect and configure for dual encoder pipeline
a210f2e Use auth token for private models
6543f8a Add Real-ESRGAN
48a45fd Remove TensorRT; it provided no benefit here
52b71ef Pipeline manager: clear garbage collector after using Compel embeds. Related to an open issue in that library.
d132de7 Remove dreambooth code from this repository, it is now in the SimpleTuner repo
dde1da4 A helper script for tiling the output images.
ab28364 PyTorch2 compile mode engaged, reduce batch size programmatically by the hardware size we detect
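The commit above reduces batch size programmatically based on detected hardware. A policy like that is typically a simple VRAM-to-batch-size mapping; the thresholds and function name below are illustrative guesses, not the values used in this repository:

```python
def pick_batch_size(vram_gb: float) -> int:
    """Choose a generation batch size from detected GPU memory.

    Larger cards can generate several images per call; smaller cards
    fall back to single-image batches to avoid out-of-memory errors.
    """
    if vram_gb >= 24:
        return 4
    if vram_gb >= 16:
        return 2
    return 1
```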
a4ee7fc Pipeline Manager: Use a ControlNet-specific prompt manager so that we do not reuse text embeddings from SD2.1 in the 1.5 pipeline.
8c5ae60 CTU: Use the Compel-provided embeds for prompting
24786de CTU: Use on each image output
1e32ecb Use 1024 base for controlnet
3381a2a Produce 4 upscaled images
db44aeb More debugging/stablevicuna stuff
7979e98 Allow character voices within prompts to change
9d012ff Use generate_long from segments
cd01da6 confused travolta mode?
6246663 Refactoring and adding Bark TTS support
2b9f2a1 Bark TTS lazy load
2017e75 Bark TTS
6935bd8 Updated project files, some WIP kit for Vicuna
2760881 Do not use small models
35184ae TTS: Bark
cf52422 Move LLMs into llm folder
4beaa0f DeepFloyd support library (WIP)
a389b56 StableML support
4122432 Add threaded image uploading
bc6926e Add scheduler decision support
830e800 Randomize seed for -1
e83eea4 LLaMA support: basic predictions
837c8d6 Remove any references to SAG pipelines
a947108 Ability to sequentially randomize or fully randomize the seed
7b99934 Do not set scheduler unconditionally
6caeabe Timeout the retrieval
b948115 Do not use the Compel manager when we do not have a prompt
84b5e57 Do not use bfloat since it is not yet widely supported
d336258 Use Compel to parse longer than 77 token prompts via a prompt manager object
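The commit above works around CLIP's 77-token context window (a start token, up to 75 prompt tokens, and an end token). The general approach libraries like Compel take is to encode a long prompt in 75-token windows and concatenate the resulting embeddings. A sketch of that chunking step — this is an illustration of the idea, not Compel's actual implementation:

```python
def chunk_prompt_tokens(token_ids: list[int], window: int = 75):
    """Split token ids into CLIP-sized windows.

    Each window is encoded separately (with its own start/end tokens
    added by the encoder) and the per-window embeddings are then
    concatenated, letting prompts exceed the 77-position limit.
    """
    return [token_ids[i:i + window]
            for i in range(0, len(token_ids), window)]
```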
636eb97 memory logging/logic changes
3e42cd0 use friendly_name as worker label
29af31e Enable the upscaler again
af9f99d Fix the safety checker again
d952168 Optimize the delete and update of progress bar
23a7bec Make power gen use more accurate
4d337eb Add more information to the picture output
2d86679 Add the GPU power consumption to the HardwareInfo class
a6dcf7b Set the guidance scale
8a59db2 Make 32bits optional, default to 16
350a99d Upgrade the max resolution
43f901a Add SSL connection support
3d2321b Milestone: Report which worker did the thing.
d23f178 Milestone: More stuff works.
6446965 Milestone: Working image generation, sent to the channel as a public message
08473a0 Milestone: the client can receive jobs, and process them. It currently complains that pytorch is not installed, but who doesn't have their problems?
f2c9ef7 Milestone: Working queue manager implementation with best server fitment
5f3eca3 Milestone: Client can send hardware profile to the main hub
dc63215 WIP: Adding keys works, refreshing them does not.
656da3f Add initial files
b4f9ab3 Add work dir to gitignore
e82ad78 Add gitignore