
Thank you, I love it, demonstration of Hunyuan "video cloning" Lora on 4090. #36

Open
diStyApps opened this issue Dec 24, 2024 · 10 comments


@diStyApps

Demonstration of a Hunyuan "video cloning" LoRA, plus some edits to the cloned video, on a 4090.
https://github.com/user-attachments/assets/e70d9622-baae-4159-8846-8e7e5dbe5a68

@comfyonline

Hi, do you just use one video and then cut it into multiple clips for training?

@fuzzyfazzy

That's probably the best example I have seen yet.

@cchance27

cchance27 commented Dec 27, 2024

WOW, that's impressive! Did you share the dataset or LoRA anywhere yet? I'm interested to see what the setup looked like for something like this.

@thebollo

@diStyApps Would you be willing to share your workflow? I'm especially interested to see how you so successfully mixed multiple LoRAs. Every time I try to do that with Hunyuan the result is bad.

@fuzzyfazzy

> Hi, do you just use one video and then cut it into multiple clips for training?

As a pure guess:

- Split the video into smaller clips of at most 33 frames (see the sketch after this list).
- Caption all the clips, taking into account key details like the dress, the donut, etc.
- Train on all the clips.
- Using vid2vid, use the original video as the driver so it captures the original motion, and make your prompt changes.
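A minimal sketch of that first splitting step, assuming OpenCV; the 33-frame cap comes from the guess above and the file paths are hypothetical:

```python
import cv2

MAX_FRAMES = 33  # max clip length suggested above

def split_video(src_path: str, out_prefix: str) -> None:
    """Split src_path into consecutive clips of at most MAX_FRAMES frames."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")

    clip_idx, frame_idx, writer = 0, 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % MAX_FRAMES == 0:  # start a new clip every MAX_FRAMES frames
            if writer is not None:
                writer.release()
            writer = cv2.VideoWriter(f"{out_prefix}_{clip_idx:03d}.mp4",
                                     fourcc, fps, size)
            clip_idx += 1
        writer.write(frame)
        frame_idx += 1

    if writer is not None:
        writer.release()
    cap.release()

# Hypothetical paths; the output directory must already exist.
split_video("source.mp4", "clips/clip")
```

For the captioning step, trainers commonly expect a sibling text file per clip (e.g. clips/clip_000.txt); that is where details like the dress and the donut would go.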

@fuzzyfazzy

> @diStyApps Would you be willing to share your workflow? I'm especially interested to see how you so successfully mixed multiple LoRAs. Every time I try to do that with Hunyuan the result is bad.

You need to add the block-selection bit to the LoRA, use only the double blocks, and deselect all the single blocks.
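Outside the UI, the same idea can be approximated by filtering the LoRA file itself. A minimal sketch, assuming a HunyuanVideo-style LoRA whose tensor names follow the model's "double_blocks"/"single_blocks" naming (file names hypothetical):

```python
from safetensors.torch import load_file, save_file

# Load the trained LoRA (path hypothetical).
lora = load_file("hunyuan_clone_lora.safetensors")

# "Deselect all single blocks": drop every tensor that targets the
# single-stream blocks, keeping the double-stream weights.
# The key patterns follow HunyuanVideo's transformer naming, but your
# trainer may prefix keys differently, so inspect lora.keys() first.
filtered = {k: v for k, v in lora.items() if "single_blocks" not in k}

save_file(filtered, "hunyuan_clone_lora_double_only.safetensors")
print(f"kept {len(filtered)}/{len(lora)} tensors")
```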

@thebollo

@fuzzyfazzy Thank you! That really seems to work! I don't know why, but nevertheless it does.

@fuzzyfazzy

> @fuzzyfazzy Thank you! That really seems to work! I don't know why, but nevertheless it does.

Your Joan Holloway post popped into my feed this morning - really good!

@diStyApps (Author)

> Hi, do you just use one video and then cut it into multiple clips for training?

> As a pure guess:
>
> - Split the video into smaller clips of at most 33 frames.
> - Caption all the clips, taking into account key details like the dress, the donut, etc.
> - Train on all the clips.
> - Using vid2vid, use the original video as the driver so it captures the original motion, and make your prompt changes.

Sorry for the late reply, I forgot I had posted this. I am working on a tutorial for it.

There is no video driver; it is all text-to-video after training.

Hunyuan_clone_3.mp4
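For reference, text-to-video inference with a trained LoRA might look roughly like this in diffusers; the checkpoint id, LoRA path, and prompt are assumptions, since the comment does not say which tooling was used:

```python
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("hunyuan_clone_lora.safetensors")  # hypothetical LoRA path
pipe.enable_model_cpu_offload()  # a 24 GB 4090 typically also needs quantization

video = pipe(
    prompt="a woman in a dress eating a donut",  # placeholder prompt
    num_frames=33,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "clone_t2v.mp4", fps=15)
```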

@RichGua

RichGua commented Jan 1, 2025

> Sorry for the late reply, I forgot I had posted this. I am working on a tutorial for it.
>
> There is no video driver; it is all text-to-video after training.

The stability looks great; I look forward to your tutorial.
