Update readme.md with macOS installation instructions #129
Conversation
macOS does not have the conda command.
How many it/s can you get on the M1 Pro?
If you follow the Apple technical document in the procedure, it will install Miniconda3 from the Anaconda repo.
Wow, 8 s/it. That is quite a wait.
Yes, it is, but it's similar to what I get with ComfyUI or Automatic1111 using SDXL; SD1.5 is faster, though. I don't think you can do better with M1 + SDXL (?). I don't know what optimizations you included in Fooocus, but the image quality is vastly superior to ComfyUI or Automatic1111. Thanks for giving us the chance to play with this project! 😄
Note that once Miniconda3 is installed and activated in the shell, the Linux instructions work perfectly on macOS.
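The Miniconda-then-Linux-instructions route described above can be sketched roughly as follows. Treat this as an assumption-laden sketch, not a verified procedure: the installer filename, the environment name fooocus, and the environment.yaml / requirements_versions.txt filenames are taken from my understanding of the Anaconda repo and the repository's Linux instructions; verify each before running.

```shell
# Sketch only: installer filename and env name "fooocus" are assumptions;
# check repo.anaconda.com and the Fooocus README for the current names.
install_fooocus_mac() {
  curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
  sh Miniconda3-latest-MacOSX-arm64.sh      # then restart the shell
  conda env create -f environment.yaml      # run from the Fooocus checkout
  conda activate fooocus
  pip install -r requirements_versions.txt
}
# Run install_fooocus_mac manually, step by step if anything fails.
```

Defining it as a function keeps this copy-pasteable without executing anything until you choose to call it.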
After installing it locally with your steps, I got the same issue as described in #286.
You just need to restart your computer.
It looks like I'm running into a problem with the environment; how can I get past this? I didn't see instructions for this part in your readme.
I had to launch with
Works like a charm! Is it normal that the program uses so much RAM? About 20 GB is used.
Please refer to the macOS installation guide in the
Last login: Sat Oct 14 11:02:20 on ttys000
conda activate fooocus
python entry_with_update.py
To create a public link, set
No matter how many times I redownload the model, it's useless.
MetadataIncompleteBuffer means corrupted files.
122 s/it, MacBook Pro M2... so slow.
[Fooocus Model Management] Moving model(s) has taken 63.73 seconds. It moves the model once before each generation; too slow.
@omioki23 I also have the same issue. Seems like a Mac thing.
Setup was a breeze, but as others have mentioned, generation is extremely slow. Unfortunate.
I did everything and got the URL. When I gave a prompt and tapped Generate, it completed, but I can't see any of the images?
When trying to generate an image on my MacBook M1 Air, it gave the following error:
RuntimeError: MPS backend out of memory (MPS allocated: 8.83 GB, other allocations: 231.34 MB, max allowed: 9.07 GB). Tried to allocate 25.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Clearly it is implying I do not have enough memory, though has anyone figured out how to rectify this? Thanks.
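The error message itself names the workaround. A minimal sketch of applying it, with the caveat the message gives: setting the ratio to 0.0 removes the allocator's cap entirely, which can destabilize the whole system if the model genuinely exceeds unified memory.

```shell
# Disable the MPS allocator's upper memory limit, per the error message above.
# May cause system failure if the workload truly exceeds available memory.
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
# Then launch Fooocus as usual from the same shell:
# python entry_with_update.py
```

The variable only affects processes started from the shell where it was exported.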
@Shuffls I tried it for a similar issue and it fixed my problem.
RuntimeError: MPS backend out of memory (MPS allocated: 6.34 GB, other allocations: 430.54 MB, max allowed: 6.77 GB). Tried to allocate 10.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Thanks bro!
To optimize the execution of your command and potentially speed up the process, we can focus on the parameters that most affect performance. However, given that your command already contains many parameters for tuning memory and compute usage, the main directions for improvement are more efficient GPU usage and reducing the amount of data processed in each iteration. Here's a modified version of your command with possible optimizations:
Explanation of changes:
- Removed unsupported parameters: parameters that caused an error because they are not in the script's list of supported parameters (--num-workers, --batch-size, --optimizer, --learning-rate, --precision-backend, --gradient-accumulation-steps) have been removed.
- Explicit FP16 usage: flags for running different parts of the model in FP16 (--unet-in-fp16, --vae-in-fp16, --clip-in-fp16) have been added. This reflects that the model includes components like the U-Net, VAE (variational autoencoder), and CLIP. FP16 can speed up computation and reduce memory consumption, although it may slightly affect the accuracy of the results.
- Asynchronous CUDA memory allocation: the --async-cuda-allocation parameter makes the script allocate memory asynchronously, which can speed up data loading and the start of computation.
Additional tip: use profiling tools to analyze CPU and GPU usage and identify bottlenecks.
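Putting the FP16 flags named above into a launch line might look like the sketch below. I have not verified that these flags exist in every Fooocus version; confirm them against the output of `python entry_with_update.py --help` before relying on this. The --async-cuda-allocation flag is omitted since CUDA does not apply on Apple Silicon.

```shell
launch_fooocus_fp16() {
  # Flags taken from the comment above; treat them as assumptions
  # until confirmed by the script's --help output.
  python entry_with_update.py \
    --unet-in-fp16 \
    --vae-in-fp16 \
    --clip-in-fp16
}
# Call launch_fooocus_fp16 from the activated fooocus environment.
```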
I tried to use this and got the message: /anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
I get 161.37 s/it. Can someone help me figure out why, and how I can make my Mac faster? It's a 2022 model, so it has the M1 chip. Why is it this slow?
Is it just quitting when trying to generate an image for anyone else? (M2 Mac Air) |
Did you guys cd Fooocus and conda activate Fooocus before python entry_with_update.py? |
You're probably not using the optimization parameters mentioned right above your post. |
Thank you, got it down to around 13-14 s/it on a 2020 M1 MacBook Air 16GB. It starts at 10.5, though, and slows down after a couple of steps. Fooocus still runs a bit slower than A1111 (7-8 s/it), but IMO still usable. I think it could be faster if it used both CPU and GPU cores. For now, it sits at about 96% GPU with frequent dips to 80%, and only 10-17% CPU. Any way to change that? I want my whole machine to generate.
Great work @Deniffler, you clearly spent more time and effort than I have; I was just glad to get it running fully off the GPU. Very glad that I helped set you on the right path, as you've now got us all running as fast as possible. I'm much more productive now, many thanks.
Posting here, as described should be done. Can convert to an issue later if necessary. The
On my fork I found the rows for the prompt box was set to 1024, and
It would also seem that image prompting is not working at all for me. I check "Image Prompt", place in two images (a landscape and an animal), and click Generate. Fooocus then generates a random portrait image of a man/woman. However, if I put an image into the Describe tab and click Describe, it will indeed create a prompt from the image. So the tab/image handling seems to be working, at least? Anyone else having a similar problem?
I get that when I try to install the requirements folder: I got two errors like that, and I don't know how to solve them, because then it doesn't run at all.
Follow this, it works: https://youtu.be/IebiL16lFyo?si=GSaczBlUuzjnP9TM
I've tested some of the commands above.
Results: 🐱 🐆 🌟
My Configuration:
ATTENTION
I installed it successfully. Do I need to use the terminal every time to run it, or is there a way to create an execution file?
Step 1: Create a
Step 2 (Option 1): Follow this answer to convert it into a
Step 2 (Option 2): Select that file as one of the "Login Items" in your settings. Note that this way the server will always run in the background.
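A minimal sketch of such a launcher. The clone location ~/Fooocus, the Miniconda path, and the env name fooocus are all guesses; adjust them to your setup. On macOS, double-clicking a .command file opens it in Terminal:

```shell
# Write a double-clickable launcher; path and env name below are assumptions.
cat > run_fooocus.command <<'EOF'
#!/bin/zsh
# conda is not on PATH in a fresh non-interactive shell, so source it first.
source ~/miniconda3/etc/profile.d/conda.sh
cd ~/Fooocus
conda activate fooocus
python entry_with_update.py
EOF
chmod +x run_fooocus.command
```

Sourcing conda.sh is the step most launcher scripts miss; without it, conda activate fails in the new shell.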
Getting the below error. Can someone please help @lllyasviel @jorge-campo @huameiwei-vc
Set vram state to: SHARED
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Your checkpoint is broken or in a format unknown to Fooocus. Redownload it or try another checkpoint.
The operator is not yet supported by Apple, that's all. You can tinker with it as much as you want.
With all the optimization flags in place, I'm still at 70 s/it on an M2 MacBook Pro, 1GB of RAM.
After following all the steps to install, I got an error at import packaging.version. How should I fix it? Thanks in advance!
There is a package missing in your Fooocus Python environment. You can resolve this by installing the packaging module, but you have to be inside your environment (
If you installed Fooocus with helper programs such as Pinokio or Stability Matrix:
If you don't know what that means: every common AI installation via Pinokio or other helpers uses its own Python environment for each application (Stable Diffusion, SD-Forge, Fooocus, etc.) on your computer, to ensure that only packages compatible with the required base Python version are installed. You can find that whole Python installation in the application's folder (mostly under /Fooocus/venv).
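Concretely, the fix might look like the sketch below. The env name fooocus is an assumption from elsewhere in this thread; a Pinokio/Stability Matrix install would activate its venv instead of using conda.

```shell
fix_missing_packaging() {
  conda activate fooocus                  # env name assumed; use your own
  python -m pip install packaging         # install the missing module
  python -c "import packaging.version"    # the import that previously failed
}
# Call fix_missing_packaging from a shell where conda is initialized.
```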
Apple regularly improves MPS support in PyTorch. Ensure you have the latest version of PyTorch for Metal support (a so-called nightly version), after activating the environment (see below):
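A sketch of the upgrade step. The install command is my recollection of the macOS nightly line from pytorch.org's "Get Started" selector; verify it there before running, and again the env name is assumed.

```shell
update_pytorch_nightly() {
  conda activate fooocus   # env name assumed
  # Nightly wheel index as listed on pytorch.org at the time of writing:
  pip3 install --pre torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/nightly/cpu
}
# Afterwards, `python -c "import torch; print(torch.backends.mps.is_available())"`
# should print True on Apple Silicon.
```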
Adjustment in supported_models.py:
if torch.std(out, unbiased=False) > 0.09:  # not sure how well this will actually work. I guess we will find out.
    mean_out = torch.mean(out)
Here you find some changed files for Fooocus. On a MacBook Pro M3 Pro with 36 GB shared RAM I get 5.2 s/it with SDXL 1.5 at 1024x1024 with LoRAs. What I've done there:

# Calculation for ro_pos
mean_pos = torch.mean(cond, dim=(1, 2, 3), keepdim=True)
# Calculation for ro_cfg
mean_cfg = torch.mean(x_cfg, dim=(1, 2, 3), keepdim=True)

Replacing the std_mean call (the operator unsupported on MPS):
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
m = torch.mean(g, dim=(1, 2, 3), keepdim=True)

ro_pos = torch.std(cond, dim=(1, 2, 3), keepdim=True)
# Calculation for ro_pos
mean_pos = torch.mean(cond, dim=(1, 2, 3), keepdim=True)
# Calculation for ro_cfg
mean_cfg = torch.mean(x_cfg, dim=(1, 2, 3), keepdim=True)

Adjustment in supported_models.py (in /Fooocus/ldm_patched/modules):
if torch.std(out, unbiased=False) > 0.09:  # not sure how well this will actually work. I guess we will find out.
    mean_out = torch.mean(out)
Adds specific Mac M1/M2 installation instructions for Fooocus.
The same instructions for Linux work on Mac. The only prerequisite is the PyTorch installation, as described in the procedure.
On my M1 Mac, the installation and first run completed without errors, and I could start generating images without any additional configuration.