[Bug]: "PNG Info" does not load settings for LatentModifier Integrated #562

Closed
Dwedit opened this issue Mar 15, 2024 · 5 comments

Dwedit (Contributor) commented Mar 15, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

If you use LatentModifier and generate an image, the LatentModifier settings are saved in the generated image's metadata. An example of such settings found in a generated image:

latent_modifier_enabled: True, latent_modifier_sharpness_multiplier: -5, latent_modifier_sharpness_method: anisotropic, latent_modifier_tonemap_multiplier: 0, latent_modifier_tonemap_method: reinhard, latent_modifier_tonemap_percentile: 100, latent_modifier_contrast_multiplier: -15, latent_modifier_combat_method: subtract, latent_modifier_combat_cfg_drift: 0, latent_modifier_rescale_cfg_phi: 0, latent_modifier_extra_noise_type: gaussian, latent_modifier_extra_noise_method: add, latent_modifier_extra_noise_multiplier: 0, latent_modifier_extra_noise_lowpass: 100, latent_modifier_divisive_norm_size: 127, latent_modifier_divisive_norm_multiplier: 0, latent_modifier_spectral_mod_mode: hard_clamp, latent_modifier_spectral_mod_percentile: 5, latent_modifier_spectral_mod_multiplier: 0, latent_modifier_affect_uncond: None, latent_modifier_dyn_cfg_augmentation: None

The settings actually changed from their defaults are the "Enabled" checkbox, -5 for the Sharpness Multiplier, and -15 for the Contrast Multiplier.

However, when you use the "Send to txt2img" button, the latent_modifier settings are not applied on the txt2img page: the Enabled checkbox is not checked, the Sharpness Multiplier is not set to -5, and the Contrast Multiplier is not set to -15.
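For context, an extension's settings are usually round-tripped in two halves: written into the infotext via `p.extra_generation_params`, and mapped back to UI components via the script's `infotext_fields` list so that "Send to txt2img" can repopulate the controls. The symptom above looks like only the first half is wired up. Below is a minimal, hypothetical sketch of the second half; the class, component names, and value ranges are assumptions for illustration, not the actual Latent Modifier Integrated code (only the `infotext_fields` mechanism itself is the standard webui API).

```python
# Hypothetical sketch: making a script's settings paste back from PNG Info.
# Component names and infotext keys are illustrative; infotext_fields is the
# standard mechanism (see modules/scripts.py in the webui).
import gradio as gr
from modules import scripts


class LatentModifierExample(scripts.Script):
    def title(self):
        return "Latent Modifier (example)"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        with gr.Accordion("Latent Modifier (example)", open=False):
            enabled = gr.Checkbox(label="Enabled", value=False)
            sharpness = gr.Slider(label="Sharpness Multiplier", minimum=-30, maximum=30, value=0)
            contrast = gr.Slider(label="Contrast Multiplier", minimum=-30, maximum=30, value=0)

        # This mapping is what "Send to txt2img" uses: each infotext key on the
        # right fills the gradio component on the left when an image is pasted.
        self.infotext_fields = [
            (enabled, "latent_modifier_enabled"),
            (sharpness, "latent_modifier_sharpness_multiplier"),
            (contrast, "latent_modifier_contrast_multiplier"),
        ]

        return [enabled, sharpness, contrast]
```

With a mapping like this in place, the keys shown in the infotext above would populate the corresponding controls when the image is loaded in the PNG Info tab and sent to txt2img.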

Steps to reproduce the problem

  1. Enable Latent Modifier Integrated, set -5 for Sharpness multiplier, and -15 for Contrast multiplier.
  2. Generate an image. Example: "A cat holding a beer" negative: "fingers, hands"
  3. Restart the UI
  4. Go to the PNG Info tab, load your generated image
  5. Click "Send to txt2img"

What should have happened?

All settings related to Latent Modifier should have been loaded: Latent Modifier enabled, -5 for Sharpness Multiplier, and -15 for Contrast Multiplier.

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome

Sysinfo

sysinfo-2024-03-15-18-26.json

Console logs

The console log isn't really relevant here, because the issue happens entirely in the web UI and nothing related to it is printed to the console, but here it is anyway...

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --listen --cuda-stream --pin-shared-memory --enable-insecure-extension-access
Total VRAM 6144 MB, total RAM 14188 MB
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
2024-03-15 14:23:21.346381: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-15 14:23:22.411901: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Using pytorch cross attention
ControlNet preprocessor location: E:\StableDiffusionWebUIForge\webui\models\ControlNetPreprocessor
Loading weights [cc6cb27103] from E:\StableDiffusionWebUIForge\webui\models\Stable-diffusion\Stable Diffusion v1-5-pruned-emaonly.ckpt
2024-03-15 14:23:29,093 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE

To create a public link, set `share=True` in `launch()`.
Startup time: 27.0s (prepare environment: 5.6s, import torch: 6.9s, import gradio: 1.2s, setup paths: 4.3s, initialize shared: 0.1s, other imports: 0.7s, load scripts: 2.8s, create ui: 0.8s, gradio launch: 4.3s).
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['betas', 'alphas_cumprod', 'alphas_cumprod_prev', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'log_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 'posterior_variance', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'model_ema.decay', 'model_ema.num_updates'])
loaded straight to GPU
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3015.03662109375
[Memory Management] Model Memory (MB) =  0.00762939453125
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1991.0289916992188
Moving model(s) has taken 0.02 seconds
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3015.02783203125
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1536.8202095031738
Moving model(s) has taken 0.08 seconds
Model loaded in 7.7s (load weights from disk: 4.0s, forge load real models: 1.7s, calculate empty prompt: 2.0s).

Additional information

Latent Modifier could also use a "Reset Settings" button that changes all settings back to the default values.
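One way such a button could be wired up in Gradio, as a rough sketch (the helper, component names, and default values are assumptions for illustration, not the extension's actual code):

```python
# Hypothetical sketch of a "Reset Settings" button for the extension's accordion.
# Clicking it pushes the default values back into the components.
import gradio as gr


def add_reset_button(enabled, sharpness, contrast):
    reset = gr.Button(value="Reset Settings")
    # Return one value per output component; Gradio assigns them in order.
    reset.click(
        fn=lambda: (False, 0.0, 0.0),
        inputs=[],
        outputs=[enabled, sharpness, contrast],
    )
    return reset
```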

Dwedit (Contributor, Author) commented Mar 29, 2024

I also notice that the "Emphasis Mode" setting is not correctly saved to and loaded from the generated images.

catboxanon (Collaborator) commented Apr 1, 2024

> I also notice that the "Emphasis Mode" setting is not correctly saved to and loaded from the generated images.

That's related to upstream functionality, and I believe it's already fixed there. Unfortunately, Forge is currently about two months behind upstream, so the fix isn't merged here yet.

AUTOMATIC1111/stable-diffusion-webui#15141
AUTOMATIC1111/stable-diffusion-webui#15142

Dwedit (Contributor, Author) commented Apr 1, 2024

Great, thanks! Now all I need is to figure out how to get the LatentModifier settings into params.txt...
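For reference, the standard way a script records its settings in the generation parameters (and therefore in the image infotext) is by writing to `p.extra_generation_params` from one of its processing hooks; whether those values also end up in params.txt depends on when the webui writes that file relative to the script hooks. A minimal, hypothetical sketch, with illustrative key names:

```python
# Hypothetical sketch: recording a script's settings in the infotext.
# p.extra_generation_params is the standard webui mechanism; the key names
# and argument list below are illustrative only.
def process(self, p, enabled, sharpness, contrast):
    if not enabled:
        return

    p.extra_generation_params["latent_modifier_enabled"] = enabled
    p.extra_generation_params["latent_modifier_sharpness_multiplier"] = sharpness
    p.extra_generation_params["latent_modifier_contrast_multiplier"] = contrast
```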

catboxanon (Collaborator) commented Apr 1, 2024

That's likely also fixed by the latter PR I linked.

Dwedit (Contributor, Author) commented Apr 1, 2024

I do see "Emphasis: No Norm" in the params.txt file, but I don't see any of the latent_modifier settings in there, even though they do show up in the generated images.

Dwedit closed this as completed Aug 2, 2024