"real-time"? #6
If you go to the LTXV fal playground, which uses an H100, you'll see it runs at around 9 it/s, which is definitely real time (assuming 20-30 steps per generation). The memory usage grows with the number of frames you generate. Until we optimize VRAM usage further, try a lower number (such as 129). On my 4090 I'm getting around 1.3 it/s, and the community has managed to optimize it even further.
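For example, combining that advice with the flags used elsewhere in this thread (the checkpoint path and prompt below are placeholders, not from the original posts):

python inference.py --ckpt_dir /path/to/LTX-Video --prompt "roses in the rain" --height 512 --width 768 --num_frames 129 --seed 42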
Can confirm I'm getting the same thing as SoftologyPro, even setting --num_frames to 129 or reducing the resolution to 256x256. Windows 11. This gets me 11.14 s/it; a 2 s video took 7 minutes to generate.
Maybe add that to the readme, i.e. needs an H100 80 GB VRAM GPU to run at any decent speed. I tried my command line above on a 3090 and it has been running for 3 hours so far, with 24 hours remaining.
Can you give me the command line you use to get that performance on your 4090?
The inference was run on a 48GB GPU.
python inference.py --ckpt_dir '/home/user/Desktop/git/LTX-Video' --prompt "Dog resting on grass." --input_image_path /home/user/Downloads/Dog_Breeds.jpg --height 480 --width 720 --num_frames 72 --seed 42
It seems that the GPU memory requirements are quite high.
If you have a normal consumer GPU, use this ComfyUI workflow.
What's the difference between the Comfy workflow and inference.py? In Comfy the 4090 took 1 minute, while inference.py took 2 hours (probably OOM), but the inference.py version was better quality.
Thanks! It works very well on a 4090.
Uhm, there are a number of optimizations within ComfyUI, see here: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py. But they are hard to extract into a separate script. You can also call
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
like it happens in the …
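For reference, here's a minimal sketch of how those two calls are typically wired up. It assumes the diffusers port of LTX-Video rather than the repo's inference.py; the pipeline class, checkpoint id, and whether the custom video VAE exposes these helpers are assumptions on my part, so names may differ in the original script.

```python
# Hedged sketch, not the repo's inference.py: assumes the diffusers port of
# LTX-Video (LTXPipeline) and that its video VAE exposes the usual
# slicing/tiling helpers mentioned above.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()   # move each module to the GPU only while it runs
pipe.vae.enable_slicing()         # decode the latent batch one slice at a time
pipe.vae.enable_tiling()          # decode frames in spatial tiles to cap peak VRAM

video = pipe(
    prompt="roses in the rain",
    width=768,
    height=512,
    num_frames=129,               # fewer frames -> lower memory, per the advice above
    num_inference_steps=40,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```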
Running locally on Windows with a 24GB 4090.
python inference.py --ckpt_dir "D:\Tests\LTX-Video\LTX-Video\models" --prompt "roses in the rain" --height 512 --width 768 --num_frames 257 --seed 12345
Stats show
| 2/40 [04:14<1:19:10, 125.02s/it]
I do have the GPU build of torch installed and Task Manager shows the GPU at 100%, with 23.4/24.0 GB VRAM used.
What hardware did you use to get it to generate "faster than it takes to watch them"?
Any tips for speeding up local generation on a 4090?