
It works fine on 8 GB VRAM, might work on 6 GB too #34

Open
nitinmukesh opened this issue Nov 27, 2024 · 2 comments


nitinmukesh commented Nov 27, 2024

Check this out
https://youtu.be/nur4_b4yzM0

@darkstorm2150

Just to verify: this isn't related to VRAM, right?

(LTXVideo) → C:\Users\thesi\Desktop\LTXVideo [main ≡ +10 ~1 -0 !]› python .\inference.py
WARNING:__main__:Running generation with arguments: {'ckpt_dir': 'Lightricks/LTX-Video', 'num_inference_steps': 20, 'guidance_scale': 3.5, 'height': 512, 'width': 768, 'num_frames': 86, 'frame_rate': 25, 'prompt': "A french woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie.", 'negative_prompt': 'low quality, worst quality, deformed, distorted, disfigured, motion smear, motion artifacts, fused fingers, bad anatomy, weird hand, ugly', 'seed': 55234234, 'output_path': 'outputs', 'num_images_per_prompt': 1, 'input_image_path': '', 'input_video_path': '', 'bfloat16': True, 'int8': False, 'disable_load_needed_only': False}
WARNING:__main__:Padded dimensions: 512x768x89
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 9.94it/s]
Traceback (most recent call last):
  File "C:\Users\thesi\Desktop\LTXVideo\inference.py", line 369, in <module>
    main()
  File "C:\Users\thesi\Desktop\LTXVideo\inference.py", line 273, in main
    images = pipeline(
  File "C:\Users\thesi\.conda\envs\LTXVideo\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\thesi\Desktop\LTXVideo\ltx_video\pipelines\pipeline_ltx_video.py", line 928, in __call__
    latents = self.prepare_latents(
  File "C:\Users\thesi\Desktop\LTXVideo\ltx_video\pipelines\pipeline_ltx_video.py", line 685, in prepare_latents
    latents = randn_tensor(
  File "C:\Users\thesi\.conda\envs\LTXVideo\lib\site-packages\diffusers\utils\torch_utils.py", line 67, in randn_tensor
    raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.")
ValueError: Cannot generate a cpu tensor from a generator of type cuda.

@nitinmukesh (Author)

Try again.
This happens sometimes when the code offloads the text_encoder to the CPU to save VRAM: the pipeline ends up sampling latents on one device with a generator created on the other.
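For reference, the ValueError comes from a device mismatch: diffusers refuses to fill a CPU tensor from a CUDA generator. A minimal sketch of the working pattern, keeping the generator and target tensor on the same device (the latent shape here is illustrative, not the pipeline's actual one; only the seed is taken from the log above):

```python
import torch

# Pick one device and use it for BOTH the generator and the target tensor.
# Mixing a cuda generator with a cpu target is what triggers:
#   ValueError: Cannot generate a cpu tensor from a generator of type cuda.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Seed from the run above; the 5-D shape is a made-up stand-in for latents.
gen = torch.Generator(device=device).manual_seed(55234234)
latents = torch.randn(1, 128, 8, 16, 24, generator=gen, device=device)

print(latents.shape, latents.device)
```

If part of the model is offloaded to the CPU mid-run, the generator has to be recreated (or moved) to match wherever the next tensor is sampled, which is why simply retrying after the offload path settles can succeed.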
