macOS bug - black images [current workaround included in comment - Ctrl+F 'unlimited_area_hack' to find it] #48
Can you send me a picture of your workflow, as well as the list of custom_nodes you have loaded? Thanks!
Random hail mary - try disabling ComfyUI Manager, and git clone the AnimateDiff-Evolved repo from scratch.
Thank you. Here is the whole process of the two animations in the terminal: Last login: Sun Sep 24 04:21:04 on ttys000 Import times for custom nodes: Starting server To see the GUI go to: http://127.0.0.1:8188
I think Akak Pixel (three days ago) has the same thing with black frames, and from his list he looks to be on a Mac.
Here again is all the terminal output from trying to produce 3 sequences. The 1st and 3rd generations (16 frames) didn't work. Last login: Sun Sep 24 04:21:04 on ttys000 Import times for custom nodes: Starting server To see the GUI go to: http://127.0.0.1:8188 [AnimateDiffEvo] - INFO - Loading motion module mm_sd_v14.ckpt got prompt got prompt
Hey, yep, looks like you both have the same issue. His issue is from before my big refactor on Friday, so it looks like this has always been present. With black images outputting, it makes me think that either some tensors are becoming NaNs, or there is an issue with VAE decoding. Not sure if this helps since you might have used this guide yourself, but I would make sure the comfy venv is using the latest pytorch nightly. Image is a screenshot from the ComfyUI readme: https://github.com/comfyanonymous/ComfyUI#apple-mac-silicon It might also be good to test how many latents you can batch before the black frame issue starts happening. And when you find that limit, go 1 batch size below, and increase the resolution to see if the issue is resolution related as well as batch related.
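A tiny detector can make that batching test less tedious. This is only a generic sketch (the threshold and tensor shapes here are assumptions, and the demo runs on stand-in data); in practice you would apply it to the VAE-decoded frames for each batch size you try and note where it flips to True.

```python
import torch

def looks_black(frames: torch.Tensor, eps: float = 1e-3) -> bool:
    """True if a decoded frame batch is effectively all-black or contains NaNs."""
    return bool(torch.isnan(frames).any() or frames.abs().max() < eps)

# Demo on stand-in tensors; swap in real decoded frames per batch size (4, 8, 12, 16...).
good = torch.rand(16, 3, 512, 512)    # normal frames
bad = torch.zeros(16, 3, 512, 512)    # the failure mode discussed in this thread
print(looks_black(good), looks_black(bad))  # False True
```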
I have not tried the latest nightly PyTorch yet; I'll do that today. Anything to make things work.
OK, I installed the nightly PyTorch (--force-fp16 breaks ComfyUI, though). So I tried again without fp16 and the results are the same: at 512, black frames. At 300 I even get 48 frames, but the res is so low that it looks quite unusable, unfortunately... All this is strange because I use a lot of other configs with no problem and very fast iterations (1.20 per frame). Probably a Mac thing, but so frustrating.
Kosinkadink, sorry to bother you again. Do you think it's a macOS problem? I have an M1 Ultra, super boosted. Do other Mac users have the same problem? I can't do more testing myself, but I can under your guidance. It's a pity Mac users can't use it, and the Mac community is the most creative. I am using Deforum-like nodes, but I am a bit fed up with the inconsistency between frames. I think I did everything that was possible: clean installs, new nightly PyTorch, no other custom nodes activated, etc. It's definitely a resolution problem. But why?
I have an M1 Mac too and I'm facing a similar issue. I just get black frames no matter the resolution or samples.
I wonder if all Macs face the same issue?
I'm going to talk to some folks to see if people with Macs have been able to make it work, and get back to you guys.
Thank you!
I guess I'll have to find a PC with Windows... I am sure you are super busy, but is there any hope we could make it work on a Mac M1?
Same here. Anything more than 4 frames produces black output. Most recent PyTorch nightly and M1 Ultra with 64GB.
I do not own a Mac, but it sounds like this is some issue with the pytorch code on Macs. I do not really see any other reason why it would crap out like this specifically on M1 Macs at a certain threshold of pixels in a batch. Or maybe it's a VAE issue that is related in some way. We're gonna need to play a bit of a game of telephone to get to the core of the issue. Pytorch on Mac can have some real wacky bugs, like sometimes the *= operator for tensors just straight up does nothing. But that's going too deep; here is the game plan:
The goal is to see if there are any differences in the limit given the various setups of your Macs. If your hardware has some differences but still the limits are exactly the same, then there must be some Mac pytorch bug that we can hopefully report and find a workaround for while they officially fix it. Once I see a few of your results to confirm, I can make a separate mac-testing branch (or a few different mac-testing branches) with subtly different code and print statements that still does the same thing to see where exactly things go wrong on macs. Your help would be greatly appreciated!
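For what it's worth, the in-place-operator oddity mentioned above is easy to probe for. Here is a minimal, generic sanity check that anyone on an M1/M2 can run; it is not the exact upstream bug, just the same class of operation.

```python
import torch

if torch.backends.mps.is_available():
    x = torch.ones(4, device="mps")
    reference = (x.clone() * 2.0).cpu()   # expected result, computed out-of-place
    x *= 2.0                              # in-place multiply on the MPS device
    print("in-place *= on MPS works:", torch.allclose(x.cpu(), reference))
else:
    print("MPS backend not available on this machine")
```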
Hi. I will do the tests as suggested. In the meantime: ComfyUI also totally crashes sometimes, with the following error in the terminal:
Specs: Apple M1 Max 64GB
I tried installing this on an M1 Ultra 128 GB with the same OS as @simonjaq in a fresh environment, nightly pytorch. I can successfully generate single images and I can run the basic txt2img workflow at 256x256 and get a 16 frame GIF, but at 512x512, it finishes the progress bar then I get the same error. I notice the python process jumps to about 56 GB of RAM used at 512x512. It's about 8 GB for the 256x256 image. Batch size 16. The full output from session launch to crash is like this:
Tried setting that as well. Perhaps this is a different issue than the original? Here's that crash:
Not sure if it's relevant or related, but there was also a CUDA / nvcc error I saw when I first installed AnimateDiff-Evolved. Pasting terminal output here:
@nathanshipley I think your issue may be different from the ones others had. AnimateDiff-Evolved has no external dependencies aside from ComfyUI, so I'm not sure why ComfyUI Manager is trying to install things.
I managed to generate 512x512 at 12 frames. Memory consumption is insane, almost maxing out my 64GB M1 Ultra, going up to 60GB for Comfy, but it works.
Look in the console; it will let you know what went wrong when it tried to initialize the nodes.
Appreciate your response - these were the errors I was getting from the console - could they be related to the issue?
Hello, I have an M2 Pro Mac and I only get black image animations, although ComfyUI generates still images just fine (the terminal only shows got prompt). Please advise. Thanks
@rSaita Have you attempted using the current workaround of setting unlimited_area_hack to True in the code, mentioned in this thread above? #48 (comment) Looks like the bug affects M1 and M2 Macs, since they both would use the same build of Mac pytorch. Let me know if that workaround works for you!
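If you're not sure where that flag lives, a quick search of the custom node's source will find it. A minimal sketch, assuming ComfyUI's usual custom_nodes layout and run from the ComfyUI root (the exact file and line differ between versions of the repo):

```python
import pathlib

repo = pathlib.Path("custom_nodes/ComfyUI-AnimateDiff-Evolved")
for py in sorted(repo.rglob("*.py")):
    for i, line in enumerate(py.read_text().splitlines(), start=1):
        if "unlimited_area_hack" in line:
            print(f"{py}:{i}: {line.strip()}")
```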
@Kosinkadink I just did the test of setting unlimited_area_hack to True, and it generated a correct animation using a resolution of 256x256, but after that, when I tried to generate a 512x512 clip, it gave me these terminal and onscreen messages that I've attached.
Looks like you may not have enough VRAM/RAM to run AnimateDiff while the pytorch bug prevents the use of Comfy optimizations (comfy VRAM optimizations are not allowed to run when unlimited_area_hack is true). M1/M2 Macs use the same chips for VRAM/RAM usage, and the RAM requirements are likely not leaving enough free space for the VRAM requirements. With no VRAM optimizations, 512x512 at batch size 16 with fp16 takes ~8GB VRAM, so your 18GB Mac would need to use less than 10GB RAM/VRAM for everything else running on your Mac, Comfy included.
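As a back-of-envelope check of that budget (the figures are the ones quoted above, not measurements from your machine):

```python
total_unified_gb = 18   # unified RAM/VRAM on the Mac in question
sampling_gb = 8         # ~8GB for 512x512, batch size 16, fp16, no optimizations
leftover_gb = total_unified_gb - sampling_gb
print(f"macOS + Comfy + everything else must fit in ~{leftover_gb} GB")
```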
@Kosinkadink OK, I see, thank you for the explanation. So, will you be contacting the PyTorch developers in order for them to fix that bug? Or is there another solution?
@Kosinkadink forgot to thank you for the above
@mackay for how many frames and at what resolution?
Oh, and what is a Mac GPU? Do you mean an M1/M2 or an older Mac? If so, what GPU was it? Only very old Macs can use NVIDIA? Frankly, I don't understand.
Just wanted to comment to say the temporary fix did work for me on my MacBook (M1 Max - 32GB) and stopped the black frame issue. On my new M2 Ultra (128GB), however, the black frames issue didn't happen, or at least only infrequently. However, both machines can generate high quality stills without issue in ComfyUI, and the problem appears once I connect AnimateDiff. Restarting ComfyUI doesn't solve the issue; the only way seems to be to restart the computer. *Also, and this may have been a coincidence (testing takes a while, particularly with a possible memory issue), I seemed to be getting less blocky generations out of mm_sd_v14 compared to mm_sd_v15 and mm_sd_v15_v2 as the model applied to the AnimateDiff node. Massive thanks to the developer (developers?) working on this. It's awesome. Though I do wonder if I should pause learning much more and jump back to Automatic1111 in the meantime, as I don't think an Nvidia PC is arriving here any time soon :-D
@aianimation55 It sounds like the motion modules are not getting ejected properly, which should never happen. Can you give me a list of the folders in your custom_nodes folder in ComfyUI? Also, please send a screenshot of the workflow you are trying to run.
Your context_length needs to be around 16, and you need to have at least around 16 latents passed in. The motion/images produced by the motion module are also dependent on the number of frames they are trying to process at a time. The sweet spot for AnimateDiff (not HotshotXL) is 16 frames at a time; 5 frames will make everything deepfried. Also, for AnimateDiff, it's recommended to use the sqrt_linear beta_schedule. linear will produce washed out results for AnimateDiff motion modules, which could be artsy, but just a heads up. I think that's the issue you have, and the code is working fine.
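Condensed, that recommendation looks like this. The parameter names below follow the AnimateDiff-Evolved node labels as an assumption; check your installed version, since names can differ between releases.

```python
# Hedged summary of the advice above, not an official preset.
recommended = {
    "context_length": 16,            # AnimateDiff's sweet spot per the comment above
    "latent_count": 16,              # pass in at least ~16 latents
    "beta_schedule": "sqrt_linear",  # 'linear' washes out AnimateDiff output
}
print(recommended)
```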
Thanks @Kosinkadink. I'll jump back on this over the weekend. Cheers.
@Kosinkadink Thanks again, that worked well. It crashes if I try to go above 640x360, but it's successfully generating a high quality series of frames without issue at that size. I need to explore some upscaling approaches that keep the Mac happy, or rely on Topaz AI. Cheers.
For anyone experiencing that crash, try to use
I don't think this is a problem with the M1/M2. When I use AnimateDiff with the WebUI, it works and there are no errors at all. I don't know whether the logic of AnimateDiff differs between the WebUI and ComfyUI or not, but I can confirm that an M1 is capable of running AnimateDiff with good results.
M2 Ultra 128GB device // Had the same issue with black images. After unlimited_area_hack=True it works (most of the time).
M2 Pro 16GB, Sonoma. Had this issue with the black images, even at 256x256. I confirm that unlimited_area_hack=True works. However, it takes ~30 minutes to generate a 512x512 16-frame animation from the basic txt2img workflow. Anyway, @Kosinkadink thank you for this beautiful tool and this workaround!
Btw, I tried to use the lcm-lora from this tutorial and the 512x512 16-frame generation improved from 30 mins to 12 mins.
Hey all, just wanted to add that the fix described above (unlimited_area_hack=True) works for me with SD1.5. However, with SDXL I continue to get black images (in the output and UI). I have tried 512x512, 1024x1024, and various SDXL models (base, turbo, custom checkpoints), always with 16 frames. I'm on an M1 Max 64GB, using torch nightly, etc. SDXL works fine at batch size 16 if I bypass AnimateDiff.
@Kosinkadink please help me too. I have been reading the threads and have followed the solutions, but I'm still getting black images. I'm also using macOS. Here is my terminal: I have tried unlimited_area_hack=True as well; same black image. I'm using a Mac M1, 8GB memory, Sonoma 14.2.
With the latest AnimateDiff-Evolved update as of an hour-ish ago, v1, v2, and v3 AnimateDiff models should now work on Mac M1/M2/M3, based on some tests done with one person who owns an Apple Silicon Mac. v2 and v3 models don't require the hack at all, but v1 models will automatically trigger the unlimited area hack that should prevent black images. The underlying issue that causes the black images is somewhere inside pytorch and extremely hard to reproduce outside of ComfyUI - I could not come up with a way to reproduce it easily, so I could not report it properly to the pytorch team.
Now that I have installed Python 3.10 in a new venv, everything looks normal and ComfyUI loads the nodes correctly. The default preset (the simplest one) works... until a batch of 8 frames max. That is already great! But as soon as the batch number is above 10, I can see the KSampler starting to calculate the first frame, but it immediately goes black and the generated frames are all black.
If I start again with 8 frames, everything goes back to normal.
If I put it back to 10 frames or more, the frames come out black.
I spent all day trying to make it work: clean-installed ComfyUI and new dependencies, made a venv with Python 3.10, uninstalled and reinstalled AnimateDiff-Evolved many times. Nothing works. What can I try now?
Nothing special in the console, no error, just a warning that I am missing FFmpeg. I guess that has nothing to do with it, as it only concerns the last node.
I tried all the different presets provided. Another bug: on another, more complex preset (with two KSamplers), as soon as it gets to the second sampler, ComfyUI breaks and I have to relaunch the terminal.
Here are the terminal messages with only one custom node, ComfyUI-AnimateDiff-Evolved:
Last login: Sat Sep 23 19:19:16 on ttys000
michaelroulier@Mac-Studio-de-Michael ~ % cd Comfyui
michaelroulier@Mac-Studio-de-Michael Comfyui % source venv/bin/activate
(venv) michaelroulier@Mac-Studio-de-Michael Comfyui % ./venv/bin/python main.py
Total VRAM 131072 MB, total RAM 131072 MB
xformers version: 0.0.20
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Import times for custom nodes:
0.0 seconds: /Users/michaelroulier/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
Starting server
To see the GUI go to: http://127.0.0.1:8188
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
got prompt
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
[AnimateDiffEvo] - INFO - Loading motion module mm-Stabilized_high.pth
loading new
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (12) less or equal to context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module mm-Stabilized_high.pth version v1.
loading new
100%|██████████████████████████████████████████████████████████| 20/20 [01:31<00:00, 4.59s/it]
[AnimateDiffEvo] - INFO - Ejecting motion module mm-Stabilized_high.pth version v1.
[AnimateDiffEvo] - INFO - Cleaning motion module from unet.
/Users/michaelroulier/ComfyUI/comfy/model_base.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
/Users/michaelroulier/ComfyUI/comfy/model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
Prompt executed in 98.54 seconds