🚀 Feature

Documentation or example showing how to fine-tune a pretrained model on unlabeled data.
Motivation
It's useful to fine-tune your pretrained model on unlabeled data, so that even if you have precious few labels in the target domain, you have still adapted to that domain using the unlabeled data.
Pitch
We have these huge foundation models, but for niche domains without large datasets it's valuable to be able to fine-tune on unlabeled in-domain text.
Examples:
- Want to work on a particular style of text.
- Want to fine-tune on a spoken language that the model was not exposed to during pretraining.
- etc.
Alternatives
Hack around it, maybe use Hugging Face directly. IDK?
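For reference, one shape the Hugging Face route could take: continue the checkpoint's self-supervised objective (masked language modeling) on unlabeled in-domain text, then fine-tune the adapted checkpoint on whatever few labels exist. Below is a minimal sketch using the `transformers` Trainer; the model name, `domain_corpus.txt`, and all hyperparameters are placeholders, not a recommendation.

```python
# Minimal sketch of domain-adaptive pretraining with Hugging Face Transformers:
# continue masked language modeling (MLM) on unlabeled in-domain text.
# "bert-base-uncased", "domain_corpus.txt", and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-uncased"  # any MLM-pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled in-domain text, one example per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# The collator masks a fraction of tokens at random and uses them as labels,
# so no human annotation is needed.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-model", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("adapted-model")
```

The checkpoint saved in `adapted-model` can then be loaded into a task head (e.g. `AutoModelForSequenceClassification`) for the labeled fine-tuning stage in the target domain.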
Hi @turian, do you mean "can you demonstrate how to fine-tune a pre-trained model on new data"? I think unlabeled data means the data used for pre-training (i.e. without labels, performing the MLM task or similar).

Hi @Borda, I think I would like to write an example/documentation (if it doesn't exist) for fine-tuning the model using Lightning Transformers.
Thanks for the heads up. Hi @turian, can you give me a concrete instance where you would like to see an example, or the domain you are talking about? The usage of the library on different tasks is already shown in the README.
Hi @Borda, I am doing well. I was not able to make much progress, since I am not sure what exactly needs to be solved. As far as fine-tuning is concerned, there are already examples in the README and docs of Lightning Transformers.