Using multiple trainers vs. single trainer if max_epochs needs to change #3037

Comments
Hi! Thanks for your contribution, great first issue!

@MarioIshac I would choose the first option, because the Trainer saves some state (e.g. in the loggers) that you probably don't want to reuse when training the next model.

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!

I wish there was a bit more clarification on this in the docs, as I had to google for it.

I'm sorry that you had to google it. Where did you find your answer? Here on GitHub?

Yeah, your answer above.
❓ Questions and Help
Before asking:
I've looked at "Issue with running multiple models in PyTorch Lightning", which talks about using multiple trainers for multiple models.
What is your question?
I want to clarify when I should be using multiple trainers vs. one trainer if I'm trying to train a bunch of models (all of `BaseModel`, a custom subclass of `pytorch_lightning.LightningModule`) which differ only in their hyperparameters (their hyperparameters are provided as `hparams` in the instantiation of `BaseModel`). The catch is that the max epochs is a hyperparameter in my case.

Code
Given that `train_model` is being called for each `model` I want to train, do I go the first route (constructing a new trainer per model) or the second route (reusing a single trainer across models)?
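To make the two routes concrete, here is a minimal sketch of how I read them, written against the Lightning API of that era. The helper names, the `max_epochs` entry inside `hparams`, and the idea that `max_epochs` can be reassigned on an existing trainer are assumptions for illustration, not the exact code from the question:

```python
import pytorch_lightning as pl


def train_model_route_1(model):
    """Route 1: construct a fresh Trainer for every model."""
    trainer = pl.Trainer(
        max_epochs=model.hparams.max_epochs,  # per-model hyperparameter (name assumed)
        # gpus=..., logger=..., early_stop_callback=...  <- the fixed settings from the question
    )
    trainer.fit(model)


# Route 2: build one Trainer up front and only change max_epochs per model.
shared_trainer = pl.Trainer()  # gpus, logger, early_stop_callback, ... configured once here


def train_model_route_2(model):
    # Assumes max_epochs is a plain, mutable attribute on the Trainer
    # (roughly true for the Lightning versions around this issue; newer
    # versions keep it on the fit loop instead).
    shared_trainer.max_epochs = model.hparams.max_epochs
    shared_trainer.fit(model)
```

Per the comment above, the first route is the safer choice, since the trainer carries state (for example in its loggers) across `fit` calls.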
Everything else about the trainer remains unchanged after initialization (such as `gpus`, `logger`, `early_stop_callback`, etc.; this is all part of the `...`). Only `max_epochs` needs to change.

What's your environment?