
Releases: bghira/discord-tron-client

v0.2.0

27 Jan 16:58

the first release

30 Jul 03:06

5d44e3b Prompts: Add Clipdrop styles
c77db5e Compel: Enable truncation feature
d34686b PipelineRunner should use completed output for non-SDXL pipelines.
e5c81b7 PipelineRunner should use pil output for non-SDXL pipelines.
b4e4716 PipelineRunner should set up denoising_start for base pipes
f2cb3be mixture-of-experts partial diffusion support for base text2img class
0fe3913 SDXL: Use VAE on GPU
b324b19 SDXL: Use 0.9 VAE, because the 1.0 one sucks ass. Thanks Joe.
e27c929 SDXL: Use 1.0 refiner by default
cd0b090 Added SDXL Base as an option
381b386 SDXL Refiner: loop the index because of a bug in Diffusers
470a379 SDXL Refiner: allow specifying alternative
94505ff Compel: more easily disable
96f14b4 DiffusionPipelineManager: Log SDXL Base args
2b4b602 DiffusionPipelineManager: Use fp16 variant, optionally
97e4bab DiffusionPipelineManager: Logging for SDXL Refiner args
d517aa4 DiffusionPipelineManager: use variant fp16
13c7239 SSL: verify
627460c Updates: pytorch, etc
464c656 Optimizations for PyTorch
cf7b02b DeepFloyd: Stage1 resolution should be calculated via k
741e52a Torch compile: restrict which TEs we compile
7cff1cb DiscordProgressBar: reduce logging
d4d250a Pipelines: DeepFloyd runs with batch size 1
4f33715 ControlNet Tile: use emilianJR/epiCRealism instead of urpm
de92077 img2img: use controlnet on output, if configured
a4b7f42 DeepFloyd: better handling of text_encoder?
5de03e5 AppConfig: retrieve live values when requested
5d295ac DeepFloyd: stage1 can use 8bit text encoder
1f82a24 DF-IF: Make SDXL refiner optional for DF-IF
474ee03 DF-IF: Optimise pipe removal
9bb4af8 DF-IF: Remove pipes after output
0c09361 DF-IF: Use SDXL refiner after the stage3
2b2e940 DF-IF: Optimise batch size for DF stage 1
a9a1f03 DF-IF: Optimise batch size for DF stage 2
bcd8472 DF-IF: Optimise default CFGs
911d43e DF-IF: Use x4 upscaler by default
68b3dc6 Discord progress bar: multi-stage progress reporting for multi-stage pipelines
f3dc97e Disable Compel for DF-IF
087a0b8 Disable compile for unet-less models
69edc95 PipelineManager: Do not unnecessarily delete poops
3c23bc5 kandinsky 2.2 runner
a45fe3a Refactoring SDXL pipeline runner into a more compact form
07ff58f Offload: ability to disable offload/sequential offload, via config.json
8e97e41 Updates for LLama.cpp
979bb59 image metadata: add automatic1111/stable-diffusion-webui compatibility
97298d7 Refiner: allow for img2img
897e11c SDXL: Refiner should use a bit-flipped seed for every refined image
3633900 Compel: 2.0
4a7e149 Compel: (WIP) SDXL support adaptation
09e9803 offload: do not always sequentially offload, what the hell
1d04d0e offload: enable for 4090-A6000
36d7d64 img2img: hard-code some aspects like cfg and step count
61f8a35 Upscaler: RealESRGAN
c0d6974 Pipeline manager: keep track of which models to delete better
fadd3cb Compile: allow disabling of unet compile via settings
7a42acf Diffusers: use main branch implementation
67a6865 PNG Info: Add metadata to the images
369d274 Use Safetensors
e70b1d0 VRAM: Not bf16
a3a8ba3 VRAM: Fallback should move to CUDA
f115230 VRAM: Generator to CPU
35c5ec5 Torch: kill off the logging
c094174 VRAM: Enable sequential offload for severe cases
bdb5b38 VRAM: Clear autocast cache
4318300 Logging: remove unnecessary line
8fa1b17 Progress bar: Store the maximum power used rather than the random sample.
03aae82 VRAM optimization: keep track of where the model is, so that our embeds end up in the same place.
6e7d339 Compel: allow disabling via config flag, and add to example config
640f78c Progress bar: Store the maximum power used rather than the random sample.
a5a19c8 GPU VRAM optimization: do not load Compel when not needed
94daee4 use torch.compile
8310b3b Allow use of base for img2img
a3d30ef Remove enforcement for ST
6fb42e5 Refactor the print_prompt to display more useful information
b3806b3 SDXL: The refiner should be used by default for text2img outputs.
cccc5fa SDXL: Allow fullgraph mode
4191c9c Compel: detect and configure for dual encoder pipeline
a210f2e Use auth token for private models
6543f8a Add Real-ESRGAN
48a45fd Remove TensorRT; useless stuff
52b71ef Pipeline manager: clear garbage collector after using Compel embeds. Related to an open issue in that library.
d132de7 Remove dreambooth code from this repository, it is now in the SimpleTuner repo
dde1da4 A helper script for tiling the output images.
ab28364 PyTorch2 compile mode engaged, reduce batch size programmatically by the hardware size we detect
a4ee7fc Pipeline Manager: Use a ControlNet-specific prompt manager so that we do not reuse text embeddings from SD2.1 in the 1.5 pipeline.
8c5ae60 CTU: Use the Compel-provided embeds for prompting
24786de CTU: Use on each image output
1e32ecb Use 1024 base for controlnet
3381a2a Produce 4 upscaled images
db44aeb More debugging/stablevicuna stuff
7979e98 Allow character voices within prompts to change
9d012ff Use generate_long from segments
https://github.com/bghira/discord-tron-clie...
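Several commits above (b4e4716, f2cb3be, 2b4b602, e27c929, 97298d7) concern running the SDXL base and refiner as a mixture-of-experts pair: fp16 variant weights, the base stopping early, and the refiner picking up via denoising_start. The client's own PipelineRunner is not reproduced here; the following is a minimal sketch of that pattern using the public Diffusers API, with the model IDs, step count, and handoff fraction chosen for illustration rather than taken from the client's configuration.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base pipeline with fp16 variant weights (cf. "Use fp16 variant, optionally").
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Refiner pipeline, sharing the base VAE and second text encoder to save VRAM.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
handoff = 0.8  # illustrative split point, not the client's actual setting

# Base handles the first 80% of denoising and returns latents instead of PIL images.
latents = base(
    prompt=prompt,
    num_inference_steps=40,
    denoising_end=handoff,
    output_type="latent",
).images

# Refiner resumes at the same point (denoising_start) and decodes to a PIL image.
image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=handoff,
    image=latents,
).images[0]
image.save("astronaut.png")
```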
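The VRAM and compile commits (94daee4, fadd3cb, 07ff58f, c094174, cccc5fa) toggle model offload, sequential offload, and UNet compilation via configuration. Below is a minimal sketch of those knobs on a stock Diffusers pipeline; the config keys are hypothetical and do not reflect the client's actual config.json schema.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hypothetical flags standing in for the client's config.json settings.
config = {
    "enable_offload": True,       # "Offload: ability to disable offload/sequential offload"
    "sequential_offload": False,  # "VRAM: Enable sequential offload for severe cases"
    "compile_unet": True,         # "Compile: allow disabling of unet compile via settings"
}

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

if config["enable_offload"]:
    if config["sequential_offload"]:
        # Streams individual submodules to the GPU on demand: slowest, lowest VRAM.
        pipe.enable_sequential_cpu_offload()
    else:
        # Moves whole components (UNet, VAE, text encoders) on demand.
        pipe.enable_model_cpu_offload()
else:
    pipe.to("cuda")

if config["compile_unet"]:
    # PyTorch 2 compilation of the UNet only; fullgraph per "SDXL: Allow fullgraph mode".
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a watercolor fox in a forest", num_inference_steps=30).images[0]
image.save("fox.png")
```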
