1014 learning rate finder #1454
Conversation
Signed-off-by: Richard Brown <[email protected]>
@wyli @Nic-Ma @ericspod this part of the PR saves the state of the network and optimiser to disk or memory, such that they can be restored at the end. Does this functionality live somewhere else in MONAI?
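For context, the pattern in question looks roughly like the sketch below: a hypothetical StateCacher helper that snapshots state_dicts before the LR sweep and restores them afterwards. Names and details are illustrative, not necessarily the PR's actual code.

```python
import copy
import os
import tempfile

import torch


class StateCacher:
    """Snapshot state_dicts in memory or on disk so they can be restored later."""

    def __init__(self, in_memory: bool = True, cache_dir: str = None):
        self.in_memory = in_memory
        self.cache_dir = cache_dir or tempfile.gettempdir()
        self.cached = {}

    def store(self, key, state_dict):
        if self.in_memory:
            # deepcopy so later training steps don't mutate the snapshot
            self.cached[key] = copy.deepcopy(state_dict)
        else:
            path = os.path.join(self.cache_dir, f"state_{key}_{id(self)}.pt")
            torch.save(state_dict, path)
            self.cached[key] = path

    def retrieve(self, key):
        if self.in_memory:
            return self.cached[key]
        return torch.load(self.cached[key])
```

Before the sweep the finder would `store` the model and optimiser state_dicts, then `retrieve` and reload them once the sweep finishes, so the search leaves the training state untouched.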
We had that kind of functionality as part of an Ignite handler for saving checkpoints, but that's not quite how you're doing things here.
Thanks @rijobro, this is very useful!
Perhaps the way the training losses are computed/accumulated (self._train_batch) should be decoupled from the LR finder. Intuitively, the LR finder should work fine as long as the user provides a black box that takes an lr/step as input and returns a total_loss (e.g. self._train_batch and self._validate). Do you want to refactor this PR to decouple those? Otherwise we could file a ticket and handle it in another iteration.
Please also see some minor suggestions inline; they are mostly optional.
@Can-Zhao it would be great to have your comments as well!
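As a rough sketch of that black-box interface (hypothetical names and signatures, not the PR's actual API):

```python
from typing import Callable, List, Optional


def lr_search(
    set_lr: Callable[[float], None],
    train_step: Callable[[], float],
    lrs: List[float],
    validate: Optional[Callable[[], float]] = None,
) -> List[float]:
    """Run one training step per candidate LR and record the resulting loss."""
    losses = []
    for lr in lrs:
        set_lr(lr)            # e.g. update optimizer.param_groups
        loss = train_step()   # e.g. a wrapper around self._train_batch
        if validate is not None:
            loss = validate() # e.g. a wrapper around self._validate
        losses.append(loss)
    return losses
```

With this shape, the finder never needs to know how the loss is computed or accumulated; any training loop that can be wrapped in a zero-argument callable would work.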
@wyli thanks, I'll get to it!
Signed-off-by: Richard Brown <[email protected]>
@wyli this is ready if you want to review again, thanks!
Thanks, it looks good, except for some minor warnings about classes inheriting from (object): https://deepsource.io/gh/Project-MONAI/MONAI/run/a53a8313-ab56-4aea-8a8b-e6fd4941f978/python/PYL-R0205. This new feature needs another iteration to decouple the actual training/validation logic (_train_batch and _validate) from the LearningRateFinder class.
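Until that refactor happens, customising the training logic presumably means subclassing and overriding the internal method. A rough sketch; the exact signature of _train_batch is an assumption here, so it is forwarded unchanged:

```python
from monai.optimizers import LearningRateFinder


class CustomLRFinder(LearningRateFinder):
    """Sketch: override the internal training step with custom logic."""

    def _train_batch(self, *args, **kwargs) -> float:
        # Custom batch-training logic would go here; it must return the loss
        # for the current step, as the base implementation does.
        return super()._train_batch(*args, **kwargs)
```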
Signed-off-by: Richard Brown <[email protected]>
I am having trouble using this class for GANs. Is there any way to do something similar to fastai's "GANLearner"?
Fixes #1014.
Description
Implements calculation of an optimal learning rate, based on https://github.com/davidtvs/pytorch-lr-finder.
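A usage sketch of the new class is shown below. The method names follow the upstream pytorch-lr-finder API that the implementation is based on; exact MONAI signatures may differ, and `train_loader` is assumed to be an existing DataLoader.

```python
import torch
from monai.networks.nets import DenseNet121
from monai.optimizers import LearningRateFinder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DenseNet121(spatial_dims=2, in_channels=1, out_channels=2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
criterion = torch.nn.CrossEntropyLoss()

lr_finder = LearningRateFinder(model, optimizer, criterion, device=device)
lr_finder.range_test(train_loader, end_lr=10, num_iter=100)
lr, _ = lr_finder.get_steepest_gradient()  # suggested LR at the steepest loss drop
lr_finder.plot()   # inspect the loss-vs-LR curve
lr_finder.reset()  # restore the cached model/optimizer state
```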
Status
Ready
Types of changes
- Tests passed locally by running ./runtests.sh --codeformat --coverage.
- Quick tests passed locally by running ./runtests.sh --quick.
- Documentation built with the make html command in the docs/ folder.