This repository provides a Windows-focused Gradio GUI for Kohya's Stable Diffusion trainers. The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.
- Training guide - common: data preparation, options, etc.
- Dataset config
- DreamBooth training guide
- Step-by-step fine-tuning guide
- Training LoRA
- Training Textual Inversion
- Image generation
- note.com Model conversion
- Required Dependencies
- Installation
- Upgrading
- Launching the GUI
- Dreambooth
- Finetune
- Train Network
- LoRA
- Troubleshooting
- Change History
How to Create a LoRA Part 1: Dataset Preparation:
How to Create a LoRA Part 2: Training the Model:
Newer Tutorial: Generate Studio Quality Realistic Photos By Kohya LoRA Stable Diffusion Training:
- Install Python 3.10
- make sure to tick the box to add Python to the 'PATH' environment variable
- Install Git
- Install Visual Studio 2015, 2017, 2019, and 2022 redistributable
These dependencies are taken care of via `setup.sh` in the installation section. No additional steps should be needed unless the scripts inform you otherwise.
Follow the instructions found in this discussion: bmaltais#379
Docker is supported on Windows and Linux distributions. However, this method currently only supports Nvidia GPUs. Run the following commands in your OS shell after installing git and docker:
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
docker compose build
docker compose run --service-ports kohya-ss-gui
This will take a while (up to 20 minutes) on the first run.
The following limitations apply:
- All training data must be added to the `dataset` subdirectory; the docker container cannot access any other files.
- The file picker does not work.
  - Cannot select folders; the folder path must be set manually, e.g. /dataset/my_lora/img
  - Cannot select a config file; it must be loaded via its path instead, e.g. /dataset/my_config.json
- Dialogs do not work.
  - Make sure your file names are unique, as this comes up when asking whether an existing file should be overridden.
- No auto-update support. You must run the update scripts outside docker manually and then rebuild the image with `docker compose build`.
If you run on Linux, there is an alternative docker container port with fewer limitations. You can find the project here.
In the terminal, run:
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
# May need to chmod +x ./setup.sh if you're on a machine with stricter security.
# There are additional options if needed for a runpod environment.
# Call 'setup.sh -h' or 'setup.sh --help' for more information.
./setup.sh
The setup.sh help is included here:
Kohya_SS Installation Script for POSIX operating systems.
The following options are useful in a runpod environment,
but will not affect a local machine install.
Usage:
setup.sh -b dev -d /workspace/kohya_ss -g https://mycustom.repo.tld/custom_fork.git
setup.sh --branch=dev --dir=/workspace/kohya_ss --git-repo=https://mycustom.repo.tld/custom_fork.git
Options:
-b BRANCH, --branch=BRANCH Select which branch of kohya to check out on new installs.
-d DIR, --dir=DIR The full path you want kohya_ss installed to.
-g REPO, --git-repo=REPO You can optionally provide a git repo to check out for runpod installation. Useful for custom forks.
-h, --help Show this screen.
-i, --interactive Interactively configure accelerate instead of using default config file.
-n, --no-update Do not update kohya_ss repo. No git pull or clone operations.
-p, --public Expose public URL in runpod mode. Won't have an effect in other modes.
-r, --runpod Forces a runpod installation. Useful if detection fails for any reason.
-s, --skip-space-check Skip the 10Gb minimum storage space check.
-u, --no-gui Skips launching the GUI.
-v, --verbose Increase verbosity levels up to 3.
The default install location for Linux is where the script is located if a previous installation is detected in that location. Otherwise, it will fall back to `/opt/kohya_ss`. If `/opt` is not writeable, the fallback is `$HOME/kohya_ss`. Lastly, if all else fails it will simply install to the current folder you are in (PWD).
On macOS and other non-Linux machines, it will first try to detect an install where the script is run from and then run setup there if one is detected. If a previous install isn't found at that location, it will default to installing to `$HOME/kohya_ss`, followed by the current directory if there's no access to `$HOME`. You can override this behavior by specifying an install directory with the -d option.
If you are using the interactive mode, the default values for the accelerate config screen are "This machine", "None", and "No" for the remaining questions. These are the same answers as for the Windows install.
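If you ever need to redo the accelerate configuration outside of the setup script, it can also be re-run manually from the activated venv (a sketch assuming the default venv location created by setup):
source venv/bin/activate
accelerate config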
- Install Python 3.10
- make sure to tick the box to add Python to the 'PATH' environment variable
- Install Git
- Install Visual Studio 2015, 2017, 2019, and 2022 redistributable
In the terminal, run:
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
.\setup.bat
If this is a first install, answer No when asked "Do you want to uninstall previous versions of torch and associated files before installing?".
Then configure accelerate with the same answers as in the macOS instructions when prompted.
This step is optional but can improve the learning speed for NVIDIA 30X0/40X0 owners. It allows for a larger training batch size and faster training speed. Due to the file size, I can't host the DLLs needed for cuDNN 8.6 on GitHub. I strongly advise you to download them for a speed boost in sample generation (almost 50% on a 4090 GPU); you can download them here.
To install, simply unzip the directory and place the `cudnn_windows` folder in the root of this repo.
Run the following commands to install:
.\venv\Scripts\activate
python .\tools\cudann_1.8_install.py
Once the commands have completed successfully you should be ready to use the new version.
macOS support is not tested and has been mostly taken from https://gist.github.com/jstayco/9f5733f05b9dc29de95c4056a023d645
The following commands will work from the root directory of the project if you'd prefer not to run scripts. These commands will work on any OS (on Linux and macOS, use `source venv/bin/activate` instead of `.\venv\Scripts\activate`).
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
When a new release comes out, you can upgrade your repo with the following commands in the root directory:
upgrade.bat
You can cd into the root directory and simply run
# Refresh and update everything
./setup.sh
# This will refresh everything, but NOT clone or pull the git repo.
./setup.sh --no-git-update
Once the commands have completed successfully you should be ready to use the new version.
The following command line arguments can be passed to the scripts on any OS to configure the underlying service.
--listen: the IP address to listen on for connections to Gradio.
--username: a username for authentication.
--password: a password for authentication.
--server_port: the port to run the server listener on.
--inbrowser: opens the Gradio UI in a web browser.
--share: shares the Gradio UI.
The two scripts to launch the GUI on Windows are gui.ps1 and gui.bat in the root directory. You can use whichever script you prefer.
To launch the Gradio UI, run the script in a terminal with the desired command line arguments, for example:
gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share
or
gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share
Run the launcher script with the desired command line arguments similar to Windows.
gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share
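The authentication options can be combined with the above if you want to protect the interface; for example (address and credentials are placeholders):
gui.sh --listen 0.0.0.0 --server_port 7860 --username myuser --password mypassword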
To run the GUI directly, bypassing the wrapper scripts, use the following commands from the root project directory:
.\venv\Scripts\activate
python .\kohya_gui.py
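The same command line arguments listed above can also be passed directly to the script, for example (values are placeholders):
python .\kohya_gui.py --listen 127.0.0.1 --server_port 7860 --inbrowser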
You can find Dreambooth-specific information here: Dreambooth README
You can find fine-tuning-specific information here: Finetune README
You can find train network-specific information here: Train network README
Training a LoRA currently uses the `train_network.py` code. You can create a LoRA network by using the all-in-one `gui.cmd` or by running the dedicated LoRA training GUI with:
.\venv\Scripts\activate
python lora_gui.py
Once you have created the LoRA network, you can generate images via auto1111 by installing this extension.
The LoRA supported by `train_network.py` has been named to avoid confusion. The documentation has been updated. The following are the names of the LoRA types in this repository.
- LoRA-LierLa: (LoRA for Linear Layers). LoRA for Linear layers and Conv2d layers with a 1x1 kernel.
- LoRA-C3Lier: (LoRA for Convolutional layers with a 3x3 Kernel and Linear layers). In addition to LoRA-LierLa, LoRA for Conv2d layers with a 3x3 kernel.
LoRA-LierLa is the default LoRA type for `train_network.py` (without the `conv_dim` network arg). LoRA-LierLa can be used with our extension for AUTOMATIC1111's Web UI, or with the built-in LoRA feature of the Web UI.
To use LoRA-C3Lier with the Web UI, please use our extension.
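As noted above, LoRA-C3Lier is selected by supplying the `conv_dim` network arg. A rough sketch of a command line follows (paths and dimensions are placeholders, and several of your usual training options are omitted):
.\venv\Scripts\activate
# Placeholder example: adjust paths and dims, and add the rest of your usual training options.
accelerate launch train_network.py --pretrained_model_name_or_path="path\to\model.safetensors" --train_data_dir="path\to\img" --output_dir="path\to\output" --network_module=networks.lora --network_dim=16 --network_alpha=8 --network_args "conv_dim=4" "conv_alpha=1"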
A prompt file might look like this, for example:
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28
# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
Lines beginning with `#` are comments. You can specify options for the generated image with options like `--n` after the prompt. The following can be used.
- `--n` Negative prompt up to the next option.
- `--w` Specifies the width of the generated image.
- `--h` Specifies the height of the generated image.
- `--d` Specifies the seed of the generated image.
- `--l` Specifies the CFG scale of the generated image.
- `--s` Specifies the number of steps in the generation.
Prompt weighting such as `( )` and `[ ]` works.
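During training, a prompt file like the one above is typically passed to the sample-generation options of the training scripts; as a sketch (the file path and values are placeholders, and exact option availability depends on your version):
# Add to your training command to generate samples from prompt.txt every 200 steps.
--sample_prompts="path\to\prompt.txt" --sample_every_n_steps=200 --sample_sampler=euler_a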
- X error relating to `page file`: Increase the page file size limit in Windows.
- Re-install Python 3.10 on your system.
This is usually related to an installation issue. Make sure you do not have any python modules installed locally that could conflict with the ones installed in the venv:
- Open a new powershell terminal and make sure no venv is active.
- Run the following commands:
pip freeze > uninstall.txt
pip uninstall -r uninstall.txt
This will store a backup file with your current locally installed pip packages and then uninstall them. Then, redo the installation instructions within the kohya_ss venv.
- 2023/05/07 (v21.5.10)
- Fix issue bmaltais#734
- The documentation has been moved to the `docs` folder. If you have links, please change them.
- DAdaptAdaGrad, DAdaptAdan, and DAdaptSGD are now supported by DAdaptation. PR #455 Thanks to sdbds!
  - DAdaptation needs to be installed. Also, depending on the optimizer, DAdaptation may need to be updated. Please update with `pip install --upgrade dadaptation`.
- Added support for pre-calculation of LoRA weights in image generation scripts. Specify `--network_pre_calc`.
  - The prompt option `--am` is available. Also, it is disabled when Regional LoRA is used.
- Added Adaptive noise scale to each training script. Specify a number with `--adaptive_noise_scale` to enable it.
  - Experimental option. It may be removed or changed in the future.
  - This is an original implementation that automatically adjusts the value of the noise offset according to the absolute value of the mean of each channel of the latents. It is expected that appropriate noise offsets will be set for bright and dark images, respectively.
  - Specify it together with `--noise_offset`.
  - The actual value of the noise offset is calculated as `noise_offset + abs(mean(latents, dim=(2,3))) * adaptive_noise_scale`. Since the latent is close to a normal distribution, it may be a good idea to specify a value of about 1/10 to the same as the noise offset.
  - Negative values can also be specified, in which case the noise offset will be clipped to 0 or more.
- Other minor fixes.
- 2023/05/06 (v21.5.9)
- Implement headless mode to enable easier support under headless services like vast.ai. To make use of it, start the GUI with the `--headless` argument, e.g. `.\gui.ps1 --headless`, `.\gui.bat --headless`, or `./gui.sh --headless`
- Added the option for the user to put the wandb api key in a textbox under the advanced configuration dropdown, and a checkbox to toggle the use of wandb logging. @x-CK-x
- Docker build image @Trojaner
  - Updated README to use `docker compose run` instead of `docker compose up` to fix broken tqdm
    - Related: Doesn't work with docker-compose tqdm/tqdm#771
  - Fixed build for latest release
  - Replace pillow with pillow-simd
  - Removed --no-cache again as pip cache is not enabled anyway
- While overwriting .txt files with a prefix and postfix including different encodings, you might encounter this decoder error. This small fix gets rid of it. @ertugrul-dmr
- Docker: Add --no-cache-dir to reduce image size @chiragjn
- Reverting bitsandbytes version to 0.35.0 due to issues with 0.38.1 on some systems
- 2023/05/05 (v21.5.8)
- Add `Cache latents to disk` option to the GUI.
- When saving v2 models in Diffusers format in training scripts and conversion scripts, it was found that the U-Net configuration is different from those of Hugging Face's stabilityai models (this repository uses `"use_linear_projection": false`, stabilityai uses `true`). Please note that the weight shapes are different, so please be careful when using the weight files directly. We apologize for the inconvenience.
  - Since the U-Net model is created based on the configuration, it should not cause any problems in training or inference.
  - Added the `--unet_use_linear_projection` option to the `convert_diffusers20_original_sd.py` script. If you specify this option, you can save a Diffusers format model with the same configuration as stabilityai's model from an SD format model (a single `*.safetensors` or `*.ckpt` file). Unfortunately, it is not possible to convert a Diffusers format model to the same format.
- Lion8bit optimizer is supported. PR #447 Thanks to sdbds!
  - Currently it is optional because you need to update the `bitsandbytes` version. See "Optional: Use Lion8bit" in the installation instructions to use it.
- Multi-GPU training with DDP is supported in each training script. PR #448 Thanks to Isotr0py!
- Multi resolution noise (pyramid noise) is supported in each training script. PR #471 Thanks to pamparamm!
- See PR and this page Multi-Resolution Noise for Diffusion Model Training for details.
- Add --no-cache-dir to reduce image size @chiragjn
- 2023/05/01 (v21.5.7)
- `tag_images_by_wd14_tagger.py` can now get arguments from outside. PR #453 Thanks to mio2333!
- Added `--save_every_n_steps` option to each training script. The model is saved every specified number of steps.
  - The `--save_last_n_steps` option can be used to save only the specified number of models (old models will be deleted).
  - If you specify the `--save_state` option, the state will also be saved at the same time. You can specify the number of steps to keep the state with the `--save_last_n_steps_state` option (the same value as `--save_last_n_steps` is used if omitted).
  - You can use the epoch-based model saving and state saving options together.
  - Not tested in multi-GPU environment. Please report any bugs.
- The `--cache_latents_to_disk` option automatically enables the `--cache_latents` option when specified. #438
- Fixed a bug in `gen_img_diffusers.py` where the latents upscaler would fail with a batch size of 2 or more.
- Fix triton error
- Fix issue with merge lora path with spaces
- Added support for logging to wandb. Please refer to PR #428. Thank you p1atdev!
- wandb installation is required. Please install it with pip install wandb. Login to wandb with wandb login command, or set --wandb_api_key option for automatic login.
- Please let me know if you find any bugs as the test is not complete.
- You can automatically login to wandb by setting the --wandb_api_key option. Please be careful with the handling of API Key. PR #435 Thank you Linaqruf!
- Improved the behavior of --debug_dataset on non-Windows environments. PR #429 Thank you tsukimiya!
- Fixed --face_crop_aug option not working in Fine tuning method.
- Prepared code to use any upscaler in gen_img_diffusers.py.
- Fixed to log to TensorBoard when --logging_dir is specified and --log_with is not specified.
- Add new docker image solution. Thanks to @Trojaner
- 2023/04/22 (v21.5.5)
- Update LoRA merge GUI to support SD checkpoint merge and up to 4 LoRA merging
- Fixed `lora_interrogator.py` not working. Please refer to PR #392 for details. Thank you A2va and heyalexchoi!
- Fixed the handling of tags containing `_` in `tag_images_by_wd14_tagger.py`.
- Add new Extract DyLoRA gui to the Utilities tab.
- Add new Merge LyCORIS models into checkpoint gui to the Utilities tab.
- Add new info on startup to help debug things
- 2023/04/17 (v21.5.4)
- Fixed a bug that caused an error when loading DyLoRA with the `--network_weight` option in `train_network.py`.
- Added the `--recursive` option to each script in the `finetune` folder to process folders recursively. Please refer to PR #400 for details. Thanks to Linaqruf!
- Upgrade Gradio to latest release
- Fix issue when Adafactor is used as optimizer and LR Warmup is not 0: bmaltais#617
- Added support for DyLoRA in `train_network.py`. Please refer to here for details (currently only in Japanese).
- Added support for caching latents to disk in each training script. Please specify both `--cache_latents` and `--cache_latents_to_disk` options.
  - The files are saved in the same folder as the images with the extension `.npz`. If you specify the `--flip_aug` option, the files with `_flip.npz` will also be saved.
  - Multi-GPU training has not been tested.
  - This feature is not tested with all combinations of datasets and training scripts, so there may be bugs.
- Added workaround for an error that occurs when training with `fp16` or `bf16` in `fine_tune.py`.
- Implemented DyLoRA GUI support. There will now be a new `DyLoRA Unit` slider when the LoRA type is selected as `kohya DyLoRA` to specify the desired Unit value for DyLoRA training.
kohya DyLoRA` to specify the desired Unit value for DyLoRA training. - Update gui.bat and gui.ps1 based on: bmaltais#188
- Update `setup.bat` to install torch 2.0.0 instead of 1.2.1. If you want to upgrade from 1.2.1 to 2.0.0, run setup.bat again, select 1 to uninstall the previous torch modules, then select 2 for torch 2.0.0.