Create train_dreambooth_inpaint.py #1091
Conversation
train_dreambooth.py adapted to work with the inpaint model, generating random masks during training
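For context, a minimal sketch of what "random masks during training" could look like; the function below is illustrative only and not taken from the PR, and rectangle-based masking is just one possible strategy.

```python
import torch

def random_rectangle_mask(height, width, generator=None):
    """Illustrative only: draw one random rectangle as the inpainting mask.

    Returns a (1, 1, height, width) tensor with 1s inside the masked region
    and 0s elsewhere, matching the usual SD inpainting mask convention.
    """
    mask = torch.zeros((1, 1, height, width))
    # Pick a random box covering roughly 25%-100% of each dimension (assumed range).
    box_h = torch.randint(height // 4, height, (1,), generator=generator).item()
    box_w = torch.randint(width // 4, width, (1,), generator=generator).item()
    top = torch.randint(0, height - box_h + 1, (1,), generator=generator).item()
    left = torch.randint(0, width - box_w + 1, (1,), generator=generator).item()
    mask[:, :, top:top + box_h, left:left + box_w] = 1.0
    return mask
```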
The documentation is not available anymore as the PR was closed or merged.
refactored train_dreambooth_inpaint with black
Interesting! This would be a cool addition if it works well :-)
Gentle ping here @patil-suraj
Fix prior preservation
Hey guys, I'm thinking of adding an option to create the mask with CLIPSeg instead of just using random masks. What do you think?
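For reference, a minimal sketch of how a CLIPSeg-based mask could be generated, assuming the transformers checkpoint "CIDAS/clipseg-rd64-refined"; the thresholding and resizing choices are assumptions and not part of this PR.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Assumed checkpoint; any CLIPSeg model with the same interface should work.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

def clipseg_mask(image: Image.Image, prompt: str, threshold: float = 0.5) -> Image.Image:
    """Segment `prompt` in `image` and return a binary PIL mask (white = region to inpaint)."""
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # low-resolution segmentation logits
    probs = torch.sigmoid(logits).squeeze()
    mask = (probs > threshold).float().mul(255).byte().cpu().numpy()
    # Resize the low-resolution mask back to the original image size.
    return Image.fromarray(mask).resize(image.size)
```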
Reping @patil-suraj
Can you please adapt this to this colab https://github.com/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb ? This colab trains based only on the names of the images, without class images and that complicated stuff.
Hey @loboere, Note that this colab is not part of the diffusers repo - could you please leave an issue on https://github.com/TheLastBen/fast-stable-diffusion/ ?
@TheLastBen FYI
@patil-suraj ping again
@thedarkzeno @patil-suraj seems too busy to review the PR at the moment. Let's just go for it :-) Could you however please add a section to https://github.com/huggingface/diffusers/tree/main/examples/dreambooth explaining how to use your script?
Hey @patrickvonplaten, sure. I think I'll just have to make a few adjustments to support Stable Diffusion v2.
Awesome, let's merge it :-) @williamberman @patil-suraj it would be great if you could give it a spin :-)
Was just looking, and this doesn't seem to be available at the following: Why not/where did it go? Edit: Digging into the commit history, I see the following that seem to have touched it: Specifically, it seems that #1553 was the one that moved it, and it now lives at:
Can you please create a colab to test this and have it work on a T4 GPU?
* Create train_dreambooth_inpaint.py: train_dreambooth.py adapted to work with the inpaint model, generating random masks during training
* Update train_dreambooth_inpaint.py: refactored train_dreambooth_inpaint with black
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py: Fix prior preservation
* add instructions to readme, fix SD2 compatibility
I tried to train in a colab with the same cat toy photos, but the results are a disaster; it seems to corrupt the original inpainting model, and I don't know what's wrong. I launch training with !accelerate launch train_dreambooth_inpaint.py and after training I load the model.
Also with normal objects.
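For anyone reproducing this, a minimal sketch of how the fine-tuned checkpoint could be loaded for a quick test after training; the paths, prompt, and generation settings below are assumptions, not the exact setup used in this thread.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# "path/to/output_dir" stands for whatever --output_dir was passed to the training script.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "path/to/output_dir", torch_dtype=torch.float16
).to("cuda")

# Hypothetical test inputs: a scene image and a white-on-black mask of the region to fill.
init_image = Image.open("scene.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="toy cat",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
result.save("inpainted.png")
```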
Hello @loboere, I tried with --use_8bit_adam and got bad results as well, but with different params my results were better.
Training with those params, this was my result. Maybe something with 8bit_adam is not working as intended.
Hey @thedarkzeno I tried using your script (and using the related requirements), but ran into this error related to
Did you use a specific release of
@thedarkzeno I was following the same settings you mentioned, but my loss is not decreasing. Any help would be appreciated.
Hello @kunalgoyal9, sometimes the loss doesn't decrease but you can still get good results. Did you check the outputs from your model?
@thedarkzeno Thanks for your reply... the output is also not good. I used four toy_cat images and tested using the prompt "a toy cat sitting on a bench".
Can you try using just "toy cat" as the prompt?
Hello @Aldo-Aditiya, if there is a file named "config.json" under the cloned "stable-diffusion-inpainting" repository, the following error will occur.
I removed "config.json" and the error disappeared. Hope this helps.
Hey folks! We're trying to encourage using the forum for open-ended discussion :) It might be good to make a thread there for future DreamBooth inpainting discussion: https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63
Hi, I have tried following your script and it works. However, I would like to know: if I have many pairs of images and their captions, how can I train on that dataset correctly? You set 'instance_prompt' to just one string value, "toy_cat", but I want to train with many prompts, for example "bed", "chair", "sofa", "wardrobe", ... one for each image.
@JANGSOONMYUN you have to modify the code to support your data; I suggest you take a look at the text-to-image script.
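As a rough illustration of that suggestion, a minimal sketch of a dataset that pairs each image with its own caption, loosely modeled on the metadata.jsonl layout used by the text-to-image example; the file layout, field names, and return keys here are assumptions and would still need to be wired into the training loop.

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class CaptionedInpaintDataset(Dataset):
    """Illustrative dataset: one caption per image instead of a single instance_prompt.

    Expects a folder of images plus a metadata.jsonl file with lines such as
    {"file_name": "chair_01.png", "text": "chair"} (file layout and keys are assumptions).
    """

    def __init__(self, data_root, tokenizer, size=512):
        self.data_root = Path(data_root)
        self.tokenizer = tokenizer
        self.entries = [
            json.loads(line)
            for line in (self.data_root / "metadata.jsonl").read_text().splitlines()
            if line.strip()
        ]
        self.image_transforms = transforms.Compose(
            [
                transforms.Resize((size, size)),
                transforms.ToTensor(),
                transforms.Normalize([0.5], [0.5]),
            ]
        )

    def __len__(self):
        return len(self.entries)

    def __getitem__(self, idx):
        entry = self.entries[idx]
        image = Image.open(self.data_root / entry["file_name"]).convert("RGB")
        input_ids = self.tokenizer(
            entry["text"],
            truncation=True,
            padding="max_length",
            max_length=self.tokenizer.model_max_length,
            return_tensors="pt",
        ).input_ids[0]
        return {"instance_images": self.image_transforms(image), "instance_prompt_ids": input_ids}
```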
Ok, thank you!