Add LoRA configuration support for fine-tuning module #169
Comments
@gkumbhat please confirm that for description items 2, 3, 4 you want https://github.com/caikit/caikit-nlp/blob/main/caikit_nlp/modules/text_generation/text_generation_local.py#L172 updated only (I didn't see a …
@gkumbhat re: point 2. is …
From Slack conversation: add a flag to config.yaml indicating whether the model is a LoRA model or not.
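As a rough sketch of what that flag could look like (the `has_lora` key is a hypothetical name, not an existing caikit-nlp config field), the module could persist it in the saved config.yaml and branch on it at load time:

```python
# Hypothetical sketch: record whether the saved artifact is a LoRA adapter
# in the module's config.yaml, and check that flag when loading.
# The "has_lora" key is an illustrative name, not an existing caikit-nlp field.
import yaml


def write_model_config(path: str, has_lora: bool) -> None:
    """Write the saved model's config, recording whether it is a LoRA adapter."""
    with open(path, "w") as f:
        yaml.safe_dump({"has_lora": has_lora}, f)


def is_lora_model(path: str) -> bool:
    """Return True if the saved model's config marks it as a LoRA adapter."""
    with open(path) as f:
        return bool(yaml.safe_load(f).get("has_lora", False))
```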
Also per Slack conversation, FYI if anyone is doing something related later and happens to read this (good for you): merged models have lower latency than base + LoRA. Source: https://huggingface.co/docs/peft/conceptual_guides/lora
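For reference, merging a LoRA adapter into its base model with Hugging Face PEFT might look like the following sketch (the model ID and adapter paths are placeholders); `merge_and_unload()` folds the adapter deltas into the base weights, so inference no longer pays the adapter overhead:

```python
# Sketch: merge a LoRA adapter into its base model with Hugging Face PEFT.
# Model ID and adapter paths are placeholders.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged-model")
```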
@gkumbhat could I get more detail on which README.md you want updated? I'm having a hard time following the link.
Description
After the #163 refactoring, which moved the prompt-tuning-related configuration out of PT into a separate module, we want to add support for prompt tuning (specifically LoRA) in the fine-tuning module, i.e. text_generation. Changes:

- Add LoRA configuration options to the `.train` function.
- Apply the LoRA configuration in the `.train` function itself; that way, the `model` that we configure in the `__init__` function will look like any other transformers model (see the sketch after this list).
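A minimal sketch of the intended flow, assuming Hugging Face PEFT is used to wrap the model inside `.train` (the model ID and hyperparameter values are illustrative, not the module's actual defaults):

```python
# Sketch: wrap the base model with a LoRA config inside .train, so that
# __init__ only ever sees a regular transformers model.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # illustrative; depends on the base model
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
# ... proceed with the usual fine-tuning loop on `model` ...
```

Because the wrapping happens during training, loading for inference stays unchanged until the adapter is saved, at which point the config.yaml flag discussed above would tell the loader which path to take.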
Acceptance Criteria