kijai/ComfyUI-CogVideoXWrapper

WORK IN PROGRESS

Update5

This week there have been some bigger updates that will most likely affect some old workflows; the sampler node especially will probably need to be refreshed (re-created) if it errors out!

New features:

  • Initial context windowing with FreeNoise noise shuffling, mainly for the vid2vid and pose2vid pipelines to allow longer generations; haven't figured it out for img2vid yet
  • GGUF models and tiled encoding for I2V and pose pipelines (thanks to MinusZoneAI)
  • sageattention support (Linux only) for a speed boost; I saw a ~20-30% increase with it. It stacks with fp8 fast mode and doesn't need compiling
  • Support for CogVideoX-Fun 1.1 and its pose models, with additional control strength and application step settings. This model's input does NOT have to be just DWPose skeletons; just about anything can work
  • LoRA support
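The context windowing and FreeNoise combination above can be sketched in plain Python. This is a minimal illustration of the two ideas, not the node's actual implementation: `context_windows` and `freenoise_shuffle` are hypothetical helper names, and the real scheduling in the wrapper may differ.

```python
import random

def context_windows(num_frames, window_size, overlap):
    """Sketch: split a long frame range into overlapping context windows
    so a fixed-length sampler can cover the whole clip."""
    stride = window_size - overlap
    windows, start = [], 0
    while start + window_size < num_frames:
        windows.append(list(range(start, start + window_size)))
        start += stride
    # Final window is right-aligned so the last frames are always covered.
    windows.append(list(range(num_frames - window_size, num_frames)))
    return windows

def freenoise_shuffle(noise_frames, window_size, seed=0):
    """FreeNoise-style noise reuse (sketch): frames beyond the first window
    reuse earlier noise frames in shuffled order, so every window samples
    from correlated noise instead of fresh independent noise."""
    rng = random.Random(seed)
    out = list(noise_frames)
    for start in range(window_size, len(out), window_size):
        block = out[start - window_size:start].copy()
        rng.shuffle(block)
        end = min(start + window_size, len(out))
        out[start:end] = block[:end - start]
    return out
```

Each window is then denoised separately and the overlapping frames are blended, which is what lets a model trained on short clips produce longer vid2vid/pose2vid results.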
CogVideoX_Fun_Pose_00133.mp4
cogvideox_pose_test.mp4
cogvideox_pose_depth_walk_test.mp4

Update4

Initial support for the official I2V version of CogVideoX: https://huggingface.co/THUDM/CogVideoX-5b-I2V

Also requires diffusers 0.30.3
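Since an older diffusers tends to fail with confusing import errors rather than a clear message, a quick version guard can save debugging time. A minimal sketch: `meets_minimum` is a hypothetical helper that compares plain dotted release versions (it ignores pre-release/dev tags; `packaging.version` handles full PEP 440 if you need that).

```python
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(installed, required="0.30.3"):
    """Compare dotted release versions numerically (sketch: ignores
    pre-release/dev suffixes)."""
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) >= parse(required)

def check_diffusers(required="0.30.3"):
    """Return True if the installed diffusers is new enough."""
    try:
        return meets_minimum(version("diffusers"), required)
    except PackageNotFoundError:
        return False
```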

chrome_jvZuPWOzUV.mp4

Update3

Added initial support for CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun

Note that while this one can do image2vid, it is NOT the official I2V model yet, though that should also be released very soon.

chrome_klXjpmvAd4.mp4

Update2

Added experimental support for onediff, which reduced sampling time by ~40% for me, reaching 4.23 s/it on a 4090 with 49 frames. This requires Linux, torch 2.4.0, and installing onediff and nexfort:

pip install --pre onediff onediffx

pip install nexfort

The first run will take around 5 minutes for the compilation.
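For a sense of scale, the reported numbers can be inverted: if compilation cuts per-iteration time by ~40%, the compiled 4.23 s/it implies an uncompiled baseline of roughly 7 s/it on the same setup. A trivial sketch of that arithmetic (the function name is illustrative):

```python
def implied_baseline(compiled_s_per_it, reduction):
    """If compilation cuts sampling time by `reduction`, the compiled
    per-iteration time is (1 - reduction) of the original; invert that."""
    return compiled_s_per_it / (1.0 - reduction)

# 4.23 s/it after a ~40% reduction -> roughly 7.05 s/it before compiling.
baseline = implied_baseline(4.23, 0.40)
```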

Update

5b model is now also supported for basic text2vid: https://huggingface.co/THUDM/CogVideoX-5b

It is also autodownloaded to ComfyUI/models/CogVideo/CogVideoX-5b; the text encoder is not needed as we use the ComfyUI T5.

chrome_sxMlstknXt.mp4

Requires diffusers 0.30.1 (this is specified in requirements.txt)

Uses the same T5 model as SD3 and Flux, and fp8 works fine too. Memory requirements depend mostly on the video length. VAE decoding seems to be the only big step that takes a lot of VRAM when everything is offloaded, peaking momentarily at around 13-14GB. Sampling itself takes only maybe 5-6GB.
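The sampling-vs-decoding gap makes sense from the latent dimensions. A sketch, assuming the commonly cited CogVideoX VAE factors of 4x temporal and 8x spatial compression with 16 latent channels (an assumption here; check the model config): sampling runs in a latent tensor of only a few megabytes, and the VRAM spike comes from expanding it back to full-resolution frames inside the VAE decoder.

```python
def latent_shape(frames, height, width, channels=16, t_down=4, s_down=8):
    """Latent tensor dims under assumed CogVideoX VAE compression:
    the first frame is kept, the rest are compressed 4x temporally;
    spatial dims shrink 8x."""
    t_lat = (frames - 1) // t_down + 1
    return (channels, t_lat, height // s_down, width // s_down)

# A 49-frame 720x480 generation samples in a (16, 13, 60, 90) latent:
# ~2.2 MB in fp16, versus dozens of full-resolution frames at decode time.
shape = latent_shape(49, 480, 720)
```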

Hacked in img2img to attempt a vid2vid workflow; it works interestingly with some inputs. Highly experimental.
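The img2img hack follows the usual pattern: instead of starting from pure noise, the input video's latents are noised to an intermediate timestep and only the remaining denoising steps are run, so a "strength" of 0.5 keeps roughly half the input's structure. A sketch of the common diffusers-style step scheduling (the wrapper's internals may differ):

```python
def img2img_start_step(num_inference_steps, strength):
    """With strength s in [0, 1], skip the first (1 - s) fraction of steps;
    the input latent is noised to the timestep where sampling resumes."""
    init_steps = min(int(num_inference_steps * strength), num_inference_steps)
    return max(num_inference_steps - init_steps, 0)
```

So with 50 steps, strength 1.0 starts at step 0 (pure text2vid from noise) and strength 0.2 starts at step 40, changing the input only slightly.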

chrome_hrEYWEaEpK.mp4
chrome_BPxEX1OxXP.mp4

Also added temporal tiling as a means of generating endless videos:

https://github.com/kijai/ComfyUI-CogVideoXWrapper

AnimateDiff_00003.54.mp4
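Temporal tiling of this kind is typically stitched with a crossfade: each tile gets per-frame weights that ramp up over the leading overlap and down over the trailing one, so adjacent tiles' weights sum to 1 where they meet and each output frame is the weighted average of the tiles covering it. A sketch (the weighting scheme here is an assumption, not necessarily what the node does):

```python
def blend_weights(tile_len, overlap):
    """Per-frame crossfade weights for one temporal tile: linear ramp up
    over the first `overlap` frames, flat 1.0 in the middle, ramp down
    over the last `overlap` frames. With stride = tile_len - overlap,
    the trailing ramp of one tile and the leading ramp of the next are
    complementary."""
    w = [1.0] * tile_len
    for i in range(overlap):
        ramp = (i + 1) / (overlap + 1)
        w[i] = ramp
        w[tile_len - 1 - i] = ramp
    return w
```

Each final frame is then sum(weight * tile_frame) / sum(weight) over the tiles that contain it, which hides the seams between tiles and lets the video run indefinitely.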

Original repo: https://github.com/THUDM/CogVideo