[Feature Request] Control model offsets and some others #73
or at least we should put some control_any3_X.pth files for anime models. I cannot release these officially because of many considerations, but this can be done by third-party projects.
the formulation is: control_any3 = control_sd15 + (any3 - sd15), applied to each weight tensor
or we can directly store SD15.control_model.weights - SD15.model.diffusion_model.weights, and any time the user loads a model, it adds the base weights from the user's model
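A minimal sketch of that retargeting step (all names here are hypothetical, and the three state dicts are assumed to share key names; in practice prefixes such as `control_model.` vs. `model.diffusion_model.` have to be aligned first):

```python
import torch
from typing import Dict

StateDict = Dict[str, torch.Tensor]

def transfer_control(control_sd15: StateDict, sd15_unet: StateDict,
                     user_unet: StateDict) -> StateDict:
    """Apply control_user = control_sd15 + (user_unet - sd15_unet) per tensor.
    Layers that exist only in the ControlNet (hint encoder, zero convs) have
    no UNet counterpart and are kept unchanged."""
    out = {}
    for key, w in control_sd15.items():
        if key in sd15_unet and key in user_unet:
            offset = user_unet[key].float() - sd15_unet[key].float()
            out[key] = (w.float() + offset).to(w.dtype)
        else:
            out[key] = w
    return out
```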
Updated lllyasviel/ControlNet#12
This should be considered early, since this plugin seems to be super popular in both the English and Asian communities. The earlier this is fixed, the fewer new files we will need to ask people to download.
Thanks for pointing this out. Will implement a fix immediately.
Fixed in b9efb60; it should work but still needs some tests.
Great. Now working. But how is the current offset computed? I do not even have sd15 in my webui. |
Non-transfer results look even better? Perhaps because any3 is bad at drawing houses.
By the way, what is the best practice for developing a webui extension? Open a private GitHub repo? Write code directly in the webui folder?
Hmm, it looks like transferring control brings some instability. Temporarily disabled; it can be re-enabled in the settings.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Hi, thank you for the great work!
I implemented control transfer with this approach. The difference is calculated in advance and stored in a file. The implementation is here: https://github.com/kohya-ss/sd-webui-controlnet-lora/tree/support-lora
I can make a pull request, and I also think you can easily copy the code from there. Please feel free to modify it. (Please forget the name of the repo. I intended to support ControlNet with LoRA...)
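Since control_sd15 + (user - sd15) equals (control_sd15 - sd15) + user, the difference can be computed once and shipped instead of per-model control weights. In outline (file names hypothetical; the actual implementation is in the branch linked above):

```python
from safetensors.torch import load_file, save_file

# One-off, offline: store diff = control_sd15 - sd15_unet per matching tensor.
control = load_file("control_sd15_canny.safetensors")
sd15_unet = load_file("sd15_unet.safetensors")  # keys assumed aligned with `control`
diff = {k: (v - sd15_unet[k]) if k in sd15_unet else v for k, v in control.items()}
save_file(diff, "diff_control_sd15_canny.safetensors")

# At generation time: rebuild control weights against whichever model is loaded.
user_unet = load_file("anything_v3_unet.safetensors")
control_user = {k: (v + user_unet[k]) if k in user_unet else v for k, v in diff.items()}
```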
I am working on these. But I have also found that in some cases non-transferred models have even better performance. Very weird. I'm trying to understand it. We still know too little about neural networks.
Feel free to make a PR.
I've made the PR #80 :)
Merged. This method seems to be more correct and effective.
Thank you for merging! As lllyasviel mentioned above, non-transferred models sometimes seem to have better or almost the same performance. In my tests, openpose seems to generate almost the same image, but canny and scribble are slightly better with transfer (a slightly crisper image with canny, an improved background with scribble). I'm using ACertainty for testing. Please let me know if you need image files or more samples.
Perhaps those anime models are trained too heavily in the anime domain and forget many general object-context concepts, and ControlNet without transferring accidentally brings those general concepts back.
I've uploaded pre-made difference files. I have checked each file by generating an image with the extension, but I would be happy to have them checked just to be sure.
Should the code be tweaked to accept difference models along with (or instead of) full control checkpoints? It would save some disk space for some people, and be lighter to move around and download. I'm just not sure how to distinguish diff models from complete checkpoints programmatically.
Difference models should load without any issues. Feel free to open a new issue if they don't work.
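For reference, one cheap way to make that distinction programmatically; this is purely illustrative, and the marker key is a hypothetical convention, not necessarily what the extension checks:

```python
from safetensors.torch import load_file

def is_difference_model(path: str) -> bool:
    """Treat a checkpoint as a difference model if it carries a marker entry
    written when the diff file was built; otherwise assume it is a full
    control checkpoint. Comparing key sets or weight norms would also work."""
    return "difference" in load_file(path)
```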
IMO, doing a weights merge will not fix the bad generation quality, and it will possibly perform even worse. This is like saying...
Hello @Mikubill, @lllyasviel or @kohya-ss. Could you please clarify one thing for me? In this project, are you replacing the ControlNet base model with the user model (e.g. AnythingV3), or are you doing some other operation? I don't understand how Transfer Control happens "on the fly" without the need to generate a new merged model. Could you please explain how it works?
@plcpinho Check my reply here. "On the fly" never means it doesn't merge; it merges on the fly whenever the model is updated, without saving the merged model locally.
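A sketch of what "on the fly" means here (not the extension's actual code): the merged weights live only in memory, and switching checkpoints simply triggers a fresh merge.

```python
import torch
from typing import Dict

_cache: Dict[str, Dict[str, torch.Tensor]] = {}  # model hash -> merged weights

def control_weights_for(model_hash: str, diff_sd: Dict[str, torch.Tensor],
                        user_unet_sd: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Merge the stored difference with the currently loaded model's UNet.
    Nothing is written to disk; an unseen model hash recomputes the merge."""
    if model_hash not in _cache:
        _cache[model_hash] = {k: (v + user_unet_sd[k]) if k in user_unet_sd else v
                              for k, v in diff_sd.items()}
    return _cache[model_hash]
```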
@haofanwang
@CCRcmcpe I think I understand what you're saying, but it's been shown in other cases that doing this type of merge, as described in #73 (comment), drastically improves results. Take a look at this: https://old.reddit.com/r/StableDiffusion/comments/zyi24j/how_to_turn_any_model_into_an_inpainting_model/
Hi, great plugin. I played with it a bit and the memory optimization is really good. Some considerations should be given higher priority:
The requirements for some modes, like segmentation, are broken.
We really need a button to offset the weights inside the ControlNet: that would immediately solve the distorted faces in pose mode and the distorted edges in canny mode. It can be done by offsetting the weight copy inside the ControlNet using the user's model and SD15. Right now all controls are targeted at SD15; they should be retargeted to user models.