Export Fine-Tuned LLM after Trainer is Complete #2101
If there is a tutorial for the part of this project that exhibits the metadata we want to capture in the Model Registry, I would be very happy to complement that example by indexing that metadata in MR! 🚀👍
@andreyvelich I may have misunderstood the initial context of this API because I was under the impression that you could serve the model once fine-tuned. Can you elaborate on this?
I think, right now the only way is to use
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/remove-lifecycle stale |
Per #2101 (comment), I would be very happy to contribute a demo/blueprint for the documentation; I just need a "seed" to get started on the training operator :) Thanks!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/remove-lifecycle stale |
We discussed in kubeflow/website#3718 (comment) that our LLM Trainer doesn't export the fine-tuned model,
so users can't reuse that model for inference or other purposes.
We should discuss how users can retrieve the fine-tuned artifact after the LLM Trainer is complete.
/cc @kubeflow/wg-training-leads @deepanker13
Would be nice to see integration with Kubeflow Model Registry as well. cc @kubeflow/wg-data-leads
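Until the Trainer exports the artifact itself, one workaround users have is to copy the output directory that the training job writes onto a volume backed by a shared PVC, so the model survives pod termination. A minimal sketch of that copy step is below; the `export_artifact` helper and the paths are hypothetical, and stand-in files simulate what a HuggingFace Trainer would actually write:

```python
import os
import shutil
import tempfile

def export_artifact(output_dir: str, pvc_mount: str) -> str:
    """Copy the fine-tuned model directory onto a shared volume mount."""
    dest = os.path.join(pvc_mount, os.path.basename(output_dir.rstrip("/")))
    # dirs_exist_ok lets a retried job overwrite a partial earlier export.
    shutil.copytree(output_dir, dest, dirs_exist_ok=True)
    return dest

# Simulate the Trainer's output dir and a PVC mount with temp directories.
with tempfile.TemporaryDirectory() as work:
    output_dir = os.path.join(work, "fine-tuned-model")
    pvc_mount = os.path.join(work, "mnt-pvc")  # in-cluster this would be a PVC mount path
    os.makedirs(output_dir)
    os.makedirs(pvc_mount)
    # Stand-ins for files a Trainer would write (config, weights).
    for name in ("config.json", "model.safetensors"):
        open(os.path.join(output_dir, name), "w").close()
    exported = export_artifact(output_dir, pvc_mount)
    print(sorted(os.listdir(exported)))
```

The same copied directory could then be registered in the Model Registry or mounted by an inference server, which is why a first-class export hook in the Trainer would be preferable to this manual step.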