Thank you, I love it: demonstration of a Hunyuan "video cloning" LoRA on a 4090 #36
Comments
Hi, do you just use one video and then cut it into multiple clips for training?
That's probably the best example I have seen yet.
Wow, that's impressive! Did you share the dataset or LoRA anywhere yet? I'm interested to see what the setup looked like for something like this.
@diStyApps Would you be willing to share your workflow? I'm especially interested to see how you mixed multiple LoRAs so successfully. Every time I try to do that with Hunyuan, the result is bad.
As a pure guess:

- Split the video into smaller clips with at most 33 frames each.
- Caption all the clips, taking into account the key details (the dress, the donut, etc.).
- Train on all the clips.
- Using vid2vid, use the original video as the driver so it captures the original motion, then make your prompt changes.
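A minimal sketch of the split-and-caption prep guessed at above, assuming OpenCV is available. The 33-frame cap comes from the comment; the paths, caption text, and one-text-file-per-clip layout are placeholders, not the poster's actual setup.

```python
# Hypothetical dataset prep: split a source video into clips of at most
# 33 frames, each with a sidecar caption file for LoRA training.
import cv2
from pathlib import Path

SRC = "source.mp4"     # original video (placeholder path)
OUT = Path("dataset")  # training folder (placeholder layout)
MAX_FRAMES = 33        # per-clip cap mentioned in the comment

OUT.mkdir(exist_ok=True)
cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS) or 24
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

clip_idx, frame_idx, writer = 0, 0, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % MAX_FRAMES == 0:
        # Start a new clip every MAX_FRAMES frames.
        if writer:
            writer.release()
        writer = cv2.VideoWriter(str(OUT / f"clip_{clip_idx:03d}.mp4"),
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        # Sidecar caption: describe the key details (dress, donut, ...)
        # per clip; the text here is a placeholder to edit by hand.
        (OUT / f"clip_{clip_idx:03d}.txt").write_text(
            "a woman in a red dress holding a donut")
        clip_idx += 1
    writer.write(frame)
    frame_idx += 1

if writer:
    writer.release()
cap.release()
```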
You need to add the block-selection bit to the LoRA: use only the double blocks and deselect all the single blocks.
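For what it's worth, one way to apply that tip offline is to filter the LoRA file down to its double-block weights. This sketch assumes the tensor keys follow HunyuanVideo's "double_blocks"/"single_blocks" naming, which is worth verifying against your own file; the filenames are placeholders.

```python
# Keep only double-block LoRA weights by dropping every tensor whose
# key references the single blocks.
from safetensors.torch import load_file, save_file

lora = load_file("hunyuan_clone_lora.safetensors")  # placeholder filename
kept = {k: v for k, v in lora.items() if "single_blocks" not in k}

print(f"kept {len(kept)} of {len(lora)} tensors")
save_file(kept, "hunyuan_clone_lora_double_only.safetensors")
```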
@fuzzyfazzy Thank you! That really seems to work! I don't know why, but nevertheless it does.
Your Joan Holloway post popped into my feed this morning - really good!
Sorry for the late reply, I forgot I had posted it; I am working on a tutorial for this. There is no video driver - it is all text-to-video after training. Hunyuan_clone_3.mp4
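For readers wanting to try the text-to-video step themselves, here is a hedged sketch using the diffusers HunyuanVideo pipeline. The model ID, LoRA filename, prompt, and generation settings are all assumptions, not details from this thread.

```python
# Text-to-video inference with a trained Hunyuan LoRA via diffusers.
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",  # assumed model ID
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("hunyuan_clone_lora_double_only.safetensors")
pipe.enable_model_cpu_offload()  # helps fit a 24 GB card like the 4090

video = pipe(
    prompt="a woman in a red dress holding a donut",  # placeholder prompt
    num_frames=33,  # matches the per-clip length suggested for training
    height=544,
    width=960,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "clone_t2v.mp4", fps=24)
```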
The stability looks great; looking forward to your tutorial.
Demonstration of a Hunyuan "video cloning" LoRA, with some edits made to the cloned video, on a 4090.
https://github.com/user-attachments/assets/e70d9622-baae-4159-8846-8e7e5dbe5a68