Load models in finetune mode based on command line parameters. #7329
@wochinge @joejuzl Do you folks know how the …
@dakshvar22 I haven't started looking at this area of the code yet (working on #7330 first), so it's hard for me to say. @wochinge Any opinions/thoughts?
@joejuzl @wochinge Making a proposal to see if we can reach a consensus quickly on the above question: …
What do you folks think?
@dakshvar22 So the solution we have come up with (today!) is as follows. For NLU: …
@joejuzl Perfect! Small clarification:
The context dictionary is what will be passed as part of the …
Yes exactly, via …
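To make the exchange above concrete, here is a minimal sketch of how a fine-tune flag could travel through a loading context dictionary into an NLU component's `load` method. All names here (`Component`, `should_finetune`, the `context` kwargs) are illustrative assumptions for this sketch, not necessarily the actual Rasa interfaces:

```python
from typing import Any, Dict, Text


class Component:
    """Minimal stand-in for an NLU component (illustrative, not Rasa's API)."""

    def __init__(self, should_finetune: bool = False) -> None:
        # When True, training continues from the loaded weights instead of
        # starting from a fresh initialization.
        self.should_finetune = should_finetune

    @classmethod
    def load(cls, meta: Dict[Text, Any], model_dir: Text, **context: Any) -> "Component":
        # The context dictionary carries loader-level settings, such as the
        # fine-tune flag, down to every component's `load`.
        return cls(should_finetune=context.get("should_finetune", False))


# Usage: the loader sets the flag once and every component picks it up.
component = Component.load({}, "models/nlu", should_finetune=True)
assert component.should_finetune
```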
For Core: …
@dakshvar22 Our current approach would be to provide …
@wochinge Do you mean for Core specifically or NLU components also?
…e_nlu #7329 load models in finetune mode nlu

* Load core model in fine-tuning mode
* Core finetune loading test
* Test and PR comments
* Fallback to default epochs
* Test policy and ensemble fine-tuning exception cases
* Remove epoch_override from Policy.load
* use kwargs
* fix
* fix train tests
* More test fixes
* Apply suggestions from code review (Co-authored-by: Daksh Varshneya <[email protected]>)
* remove unneeded sklearn epochs
* Apply suggestions from code review (Co-authored-by: Tobias Wochinger <[email protected]>)
* PR comments for warning strings
* Add typing
* add back invalid model tests
* small comments

Co-authored-by: Daksh Varshneya <[email protected]>
Co-authored-by: Tobias Wochinger <[email protected]>
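Based on the commit messages above ("use kwargs", "Fallback to default epochs", "Remove epoch_override from Policy.load"), here is a hedged sketch of what kwargs-based policy loading with an epoch fallback could look like. `Policy`, `DEFAULT_EPOCHS`, and the parameter names are assumptions for illustration, not the actual Rasa implementation:

```python
from typing import Any, Dict

DEFAULT_EPOCHS = 100  # assumed default, standing in for the persisted config


class Policy:
    """Minimal stand-in for a Core policy (illustrative only)."""

    def __init__(self, config: Dict[str, Any]) -> None:
        self.config = config

    @classmethod
    def load(cls, path: str, should_finetune: bool = False, **kwargs: Any) -> "Policy":
        # Stand-in for reading the persisted training configuration from `path`.
        config: Dict[str, Any] = {"epochs": DEFAULT_EPOCHS}
        if should_finetune:
            # Take a new epoch count from kwargs if the caller supplied one,
            # otherwise fall back to the epochs the model was trained with.
            config["epochs"] = kwargs.get("epochs", config["epochs"])
        return cls(config)


# Usage: fine-tune for 20 epochs instead of the persisted default.
policy = Policy.load("models/core", should_finetune=True, epochs=20)
assert policy.config["epochs"] == 20
```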
Load models for incremental training.
Depends on #7328.
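For the command-line side referenced in the issue title, here is a minimal argparse sketch of how a `--finetune` parameter could be threaded into training. The flag semantics (an optional model path, with a bare flag meaning "latest model") loosely mirror how `rasa train --finetune` is documented to behave, but the code below is a generic illustration, not Rasa's implementation:

```python
import argparse
from typing import Optional


def train(should_finetune: bool, model_path: Optional[str]) -> None:
    # Stand-in for the trainer: in fine-tune mode it would load the previous
    # model's weights instead of initializing from scratch.
    if should_finetune:
        print(f"Loading {model_path or 'the latest model'} in fine-tune mode")
    else:
        print("Training from scratch")


def main() -> None:
    parser = argparse.ArgumentParser(prog="train")
    parser.add_argument(
        "--finetune",
        nargs="?",
        const="latest",  # bare `--finetune` means "use the latest model"
        default=None,
        help="Continue training from a previously trained model.",
    )
    args = parser.parse_args()
    train(should_finetune=args.finetune is not None, model_path=args.finetune)


if __name__ == "__main__":
    main()
```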