XTTS v1.1 GPT Trainer #3086

Merged · 24 commits into dev from xtts_trainer · Oct 25, 2023

Conversation

@Edresson (Contributor) commented Oct 19, 2023

XTTS GPT Trainer for XTTS v1.1.

ToDos:

  • Update Docs
  • Update recipe with the released checkpoints
  • Rebase the PR to update the code with the changes made in the XTTS v1.1 release.

@CLAassistant commented Oct 19, 2023

CLA assistant check
All committers have signed the CLA.

@Edresson requested review from WeberJulian and erogol and removed the request for WeberJulian on Oct 23, 2023 13:44
@erogol (Member) commented Oct 24, 2023

Looks good to me. Waiting for @WeberJulian

A comment from Yuzuuu69 was marked as off-topic and deleted by @coqui-ai on Oct 25, 2023
@WeberJulian (Contributor) commented:
I'm not 100% sure that we should use those values (BATCH_SIZE = 3 and GRAD_ACUMM_STEPS = 84) for fine-tuning. In some extreme cases (small datasets), a single optimizer step will cover more than an epoch. Why not use the values we use internally, 8 and 1?
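
For context, a rough sketch of the arithmetic behind this concern (the dataset size below is a hypothetical example, not a number from the PR):

```python
# Effective batch size = per-step batch size * gradient accumulation steps.
BATCH_SIZE = 3
GRAD_ACUMM_STEPS = 84
effective_batch = BATCH_SIZE * GRAD_ACUMM_STEPS  # 252 samples per optimizer step

# With a hypothetical small fine-tuning set of 200 clips, a single optimizer
# step already consumes more samples than one full pass over the data:
dataset_size = 200  # hypothetical
print(effective_batch > dataset_size)  # True -> one step spans more than an epoch

# The internally used values give a much smaller effective batch:
print(8 * 1)  # 8
```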

@WeberJulian (Contributor) commented:
Otherwise, it looks good to me too. Great work. In future PRs, we can focus on reducing the VRAM footprint of fine-tuning to something more reasonable that fits on Colab with 16 GB of VRAM.

@erogol merged commit 16ba377 into dev on Oct 25, 2023
53 checks passed
@erogol deleted the xtts_trainer branch on Oct 25, 2023 11:28
@markrmiller commented:
That's interesting that you use 8/1 internally. The comments say you need an effective batch size of around 230 for good quality. Regardless, it seems difficult to use a small dataset: in my limited experience the model overfits pretty quickly, even with a much-reduced learning rate and a decent-sized dataset. I'm still trying to get anywhere near a good voice match before it overfits.
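
A minimal, generic early-stopping sketch (plain PyTorch with a toy model and toy data, not the XTTS recipe from this PR) illustrating one way to catch the point where eval loss stops improving during fine-tuning:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 1)  # toy stand-in for the real GPT model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # much-reduced LR (example value)
loss_fn = nn.MSELoss()

x_train, y_train = torch.randn(64, 16), torch.randn(64, 1)  # toy data
x_eval, y_eval = torch.randn(32, 16), torch.randn(32, 1)

best_eval, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        eval_loss = loss_fn(model(x_eval), y_eval).item()

    if eval_loss < best_eval - 1e-4:
        best_eval, bad_epochs = eval_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # eval loss stopped improving -> likely overfitting
```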
