Separating legacy inpaint pipeline #920
Comments
Hey @juno-hwang, we still support the old "inpaint" pipeline, it has just moved to "Legacy" now: diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py (line 46 in 8be4850)
Happy to not deprecate it if you want :-)
Hello @patrickvonplaten, I suppose you're referring to using the StableDiffusionInpaintPipelineLegacy class with the old weights (CompVis/stable-diffusion-v1-4), because if you try the new inpaint pipeline (0.6.0 release) with the new weights (runwayml/stable-diffusion-inpainting), you get the following error:
Meaning that the architecture has changed... So, in the end, as @juno-hwang suggested, it would be great to add a strength parameter to the new inpaint pipeline with the new weights (runwayml/stable-diffusion-inpainting).
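For context, a strength parameter in img2img-style pipelines works by truncating the denoising schedule: the higher the strength, the more of the schedule is run and the more the output departs from the input image. Here is a minimal, hypothetical sketch of that mapping (the function name and schedule layout are illustrative, not the actual diffusers API):

```python
# Hypothetical sketch of how a `strength` parameter could map onto the
# scheduler timesteps, mirroring what the img2img pipeline does.
# Names are illustrative, not the actual diffusers API.
def get_inpaint_timesteps(num_inference_steps: int, strength: float):
    """Return the truncated timestep schedule for a given strength.

    strength=1.0 -> start from (almost) pure noise, run the full schedule;
    strength=0.5 -> only run the last 50% of the denoising steps.
    """
    # Clamp strength into [0, 1], as img2img validates its input.
    strength = min(max(strength, 0.0), 1.0)
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    # Full schedule from high noise to low noise (evenly spaced for the sketch).
    timesteps = list(range(num_inference_steps - 1, -1, -1))
    return timesteps[t_start:]
```

The init latents would then be noised to the first timestep of the truncated schedule before denoising starts, exactly as in img2img.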
Hey @MartiGrau, currently we don't have enough time to do the PR ourselves, but if you think it makes sense to add a strength parameter, I'm more than happy to review a PR :-)
Hey @patrickvonplaten 👋 Are there currently any plans to also support DreamBooth training (https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) with the new inpainting pipeline (https://huggingface.co/runwayml/stable-diffusion-inpainting)? If not, I would be happy to try to implement this if it makes sense.
I think this could definitely make sense. @patil-suraj, what do you think? :-)
Hey @ulmewennberg, that would be awesome! Feel free to work on the PR and let us know if you have any questions, happy to help :) @MartiGrau, the
Hey @patil-suraj, as you said
I see, thanks for reporting, I will take a look at this. Also, as I said above, we don't use However, to keep consistency, what I could suggest is:
Awesome, @patil-suraj and @patrickvonplaten! Some questions so far on the implementation: "For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself)"
@ulmewennberg I'm also working on it myself; from the inpainting pipeline, it seems to only add noise to the image latents.
I made a pull request here
Hey @ulmewennberg!
We add noise to the target encoded image, which has 4 channels, and the loss is simply computed against that noise. Hope this makes it clear.
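The two points above (5 extra input channels, but a 4-channel loss target) can be sketched roughly as follows. This is a hedged illustration, not the actual training script; numpy arrays stand in for the torch tensors the real pipeline uses, and `build_inpaint_unet_input` is a hypothetical helper name:

```python
import numpy as np

# Hedged sketch of how the runwayml inpainting UNet input could be
# assembled during training: the UNet takes 4 + 4 + 1 = 9 input channels
# (noised target latents, masked-image latents, mask), while the
# prediction target (the noise) still has only 4 channels, so the
# DreamBooth loss itself is unchanged.
def build_inpaint_unet_input(noisy_latents, masked_image_latents, mask):
    # noisy_latents:        (B, 4, H, W) noised VAE latents of the target image
    # masked_image_latents: (B, 4, H, W) VAE latents of the masked image
    # mask:                 (B, 1, H, W) mask downsampled to latent resolution
    return np.concatenate([noisy_latents, masked_image_latents, mask], axis=1)

# Tiny shape demo (real pipelines run torch tensors at 64x64 latents).
B, H, W = 2, 8, 8
noise = np.random.randn(B, 4, H, W)          # 4-channel loss target
unet_input = build_inpaint_unet_input(
    np.random.randn(B, 4, H, W),             # stand-in for scheduler.add_noise output
    np.random.randn(B, 4, H, W),
    np.ones((B, 1, H, W)),
)
```

The UNet sees 9 channels in, but its output (and the MSE target) keeps the usual 4-channel latent shape.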
Thanks for the PR @thedarkzeno! Will take a look.
Hey @thedarkzeno and @ulmewennberg, actually I'm not sure I understand this. What's the goal of doing DreamBooth with inpainting? How would it work?
@patil-suraj the goal is to be able to, for example, perform image inpainting using custom concepts.
I see, so you mean inpainting the custom concepts into the masked part?
@patil-suraj yep, this is correct.
I'm also receiving the exact same error using the latest 0.7.2 diffusers library, StableDiffusionInpaintPipeline, and the latest runwayml/stable-diffusion-inpainting model. Is there anything I have to change on my end to fix this?
The
I think I've tracked down the cause of the error in my case. I'm using the StableDiffusionLongPromptWeightingPipeline (
@thedarkzeno I'm currently experimenting with your code a bit, did you manage to get good results with it? Any chance you can provide the flags you used?
@thedarkzeno what did you use for pretrained_model_name_or_path ("runwayml/stable-diffusion-inpainting"?).
Hi everyone, I haven't tested it extensively, but I got some interesting results, like this one:
Very cool!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi @patrickvonplaten, I see that the legacy pipeline is being deprecated. I work regularly with older models, and I find it very useful. Would it be possible to NOT deprecate this pipeline? If you absolutely have to, what are my options to keep using it?
The new fine-tuned inpaint pipeline in the 0.6.0 release is very good, but its UNet architecture differs from the original pipeline's.
I think the inpaint pipeline in the 0.5.1 release has some advantages, such as sharing the same model with txt2img on the GPU or inheriting a DreamBooth UNet trained on a txt2img model.
I actually often use DreamBooth models for inpainting, and that's a pretty big advantage.
So, how about keeping a legacy inpaint pipeline so that people can still use a fine-tuned UNet to inpaint?
Also, I suggest adding a strength parameter to the new inpaint pipeline.
It can be implemented by fixing the mask input and diffusing only the image channels, just like the img2img pipeline.
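The masked-diffusion idea described above can be sketched as a per-step composite: keep the unmasked region pinned to a (noised) copy of the original image latents and let the denoiser change only the masked region. This is a minimal illustration under that assumption; `composite_latents` is a hypothetical helper and numpy stands in for torch tensors:

```python
import numpy as np

# Minimal sketch of the legacy-style inpaint trick: after each denoising
# step, re-composite the latents so only the masked region is actually
# diffused, while the unmasked region is reset to (a noised copy of)
# the original image latents, exactly the img2img mechanism plus a mask.
def composite_latents(denoised_latents, init_latents_proper, mask):
    # mask == 1 where we want to inpaint, 0 where the image must be kept.
    return mask * denoised_latents + (1.0 - mask) * init_latents_proper

init = np.zeros((1, 4, 8, 8))      # stand-in for noised original latents
denoised = np.ones((1, 4, 8, 8))   # stand-in for the scheduler step output
mask = np.zeros((1, 1, 8, 8))
mask[..., 2:6, 2:6] = 1.0          # inpaint only a central square
out = composite_latents(denoised, init, mask)
```

In a real loop this composite would run once per scheduler step, with `init_latents_proper` re-noised to the current timestep each iteration.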