Replies: 5 comments 10 replies
-
I could run some experiments based on what the community wants. I've tried single-video training (it's also implemented), and it seems to work really well.
-
Well, I'm kinda dumb, so I can't actually run the model and get an output using the code, but I do get outputs while it trains. I've done only some basic training (I took like 10 GIFs of an anime girl and absolutely baked a model). EDIT: okay, so the video seems broken, but it does work. So it works, I just can't actually run the model outside of training yet.
-
It should work after training. Once training is done, a checkpoint should be saved in './outputs'. There should also be intermediate checkpoints based on this line.
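If it helps, here's a small sketch of how you could pick the most recent checkpoint out of './outputs' for inference. It assumes the checkpoint folders carry a trailing step number like `checkpoint-500`; the exact naming is an assumption on my part and may differ from what the training script actually writes.

```python
import re
from pathlib import Path

def latest_checkpoint(output_dir="./outputs"):
    """Return the checkpoint directory with the highest step number.

    Assumes folders named like 'checkpoint-500' (hypothetical naming;
    check what your training run actually saves).
    """
    best, best_step = None, -1
    for path in Path(output_dir).glob("checkpoint-*"):
        match = re.search(r"(\d+)$", path.name)
        if match and int(match.group(1)) > best_step:
            best_step = int(match.group(1))
            best = path
    return best
```

You'd then point your inference script at the returned directory instead of the base model.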
-
Adding the Animov-0.1 model here because it's a huge success. Original Diffusers weights: WebUI weights:
-
Can we fine-tune it with multiple videos (video-1.mp4, video-2.mp4, and so on) and their captions in matching text files (video-1.txt, video-2.txt, and so on)?
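For what it's worth, the naming scheme you describe is easy to wire up yourself. Here's a minimal sketch that pairs each video with its same-named caption file; the function name and return format are my own, not part of the repo:

```python
from pathlib import Path

def pair_videos_with_captions(folder):
    """Pair each *.mp4 with a same-named *.txt caption file.

    e.g. video-1.mp4 pairs with video-1.txt. Videos without a
    caption file are skipped. Illustrative helper, not repo code.
    """
    pairs = []
    for video in sorted(Path(folder).glob("*.mp4")):
        caption_file = video.with_suffix(".txt")
        if caption_file.exists():
            caption = caption_file.read_text().strip()
            pairs.append((str(video), caption))
    return pairs
```

A dataset class could then iterate over these (path, caption) pairs instead of a single hard-coded video.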
-
Hi!
Wondering about any training experiments that are going on 🙂