
[Bug]: input image advance section issue #2723

Closed
4 of 5 tasks
37OMKAR opened this issue Apr 7, 2024 · 4 comments
Labels
bug (Something isn't working), duplicate (This issue or pull request already exists), wontfix / cantfix (This will not be worked on)

Comments


37OMKAR commented Apr 7, 2024

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Image generation is not working when the advanced Input Image section is used.

Steps to reproduce the problem

Enable Input Image and upload an image.
Click Generate; the following error appears:
Error
Connection errored out.

Note: the CMD window is still connected. If we remove the image and disable Input Image, images generate normally.

What should have happened?

This is the log; the weird "1006" is printed every time we click Generate. Please help.

What browsers do you use to access Fooocus?

Google Chrome

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

F:\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir C:\Users\Omkar\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 12288 MB, total RAM 57262 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [F:\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 3.21 seconds
Started worker with PID 26316
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
1006
1006

Additional information

I have 12 GB VRAM,
64 GB RAM,
and an i7,
so the system is fine; I can run Stable Diffusion quickly.

@37OMKAR 37OMKAR added bug Something isn't working triage This needs an (initial) review labels Apr 7, 2024
@mashb1t
Collaborator

mashb1t commented Apr 7, 2024

As you have read in the troubleshooting guide, this can be solved by following https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#error-1006 and setting up swap correctly.

@mashb1t mashb1t closed this as not planned Won't fix, can't repro, duplicate, stale Apr 7, 2024
@mashb1t mashb1t added duplicate This issue or pull request already exists wontfix / cantfix This will not be worked on and removed triage This needs an (initial) review labels Apr 7, 2024
@37OMKAR
Author

37OMKAR commented Apr 7, 2024

[image]

I did as the troubleshooting guide says, but it still does the same thing.

F:\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir C:\Users\Omkar\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 12288 MB, total RAM 57262 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [F:\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [F:\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 3.11 seconds
Started worker with PID 8020
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
1006
1006

@37OMKAR
Author

37OMKAR commented Apr 7, 2024

[image]
I also have 2.83 TB of free space, so it is well above 40 GB.

@mashb1t
Collaborator

mashb1t commented Apr 7, 2024

[image]

> I did as the troubleshooting guide says, but it still does the same thing.

Please ensure not to manually disable the page file and follow the 8 steps in the troubleshooting guide.



2 participants