Learning rate decay #4955
We have a callback for setting the learning rate, although I haven't really used it myself. Would it help to simplify things even further?
My mistake, I was not aware of this callback. Example code would be nice too.
And with the new naming (I think it's more obvious), something like this:
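A minimal sketch of what that could look like, assuming the `LearningRateScheduler` callback available in recent XGBoost versions; the names `learning_rate_start`, `learning_rate_min` and `lr_decay` follow the proposal in this issue, and `dtrain` stands in for a real training DMatrix:

```python
import xgboost as xgb

# Proposed, more explicit parameter names (assumed naming from this thread).
learning_rate_start = 0.5
learning_rate_min = 0.01
lr_decay = 0.95

def decayed_lr(boosting_round):
    # Exponential decay with a lower bound: start high, shrink by lr_decay
    # each round, and never go below learning_rate_min.
    return max(learning_rate_min, learning_rate_start * lr_decay ** boosting_round)

booster = xgb.train(
    {"objective": "reg:squarederror"},
    dtrain,  # hypothetical DMatrix holding the training data
    num_boost_round=200,
    callbacks=[xgb.callback.LearningRateScheduler(decayed_lr)],
)
```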
@guyko81 Thanks for the suggestion. I agree with your naming scheme, and will keep this open so we can have better documentation for callbacks.
To new contributors: if you're reading this and are interested in writing the documentation for learning rate decay, please comment here. Feel free to ping me with questions; I am available to help.
I have been thinking about robust model building with respect to important outliers, i.e. outliers that matter, where we don't want the model to pull predictions closer to the average value. That led me to the idea that implementing learning rate decay would give the model more flexibility: at the beginning it could catch the trivial rules (essentially a trivial decision tree) thanks to the high learning rate, and later in the training it could fine-tune the result.
I have made my own GBTree with this additional feature and found that the model also learned much faster. It is not a big code change, but it gives a great improvement in performance. I have not tested the robustness (e.g. overfitting) of the model on many datasets, but the one I used showed promising results, with no overfitting.
I used learning_rate_start=0.5, learning_rate_min=0.01 and lr_decay=0.95.
First iteration:
lr = learning_rate_start
At each iteration afterwards the following rule applies:
lr = max(learning_rate_min, lr*lr_decay)
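For illustration, a small standalone Python sketch of this rule with the values above; the printed schedule is just the recurrence applied directly:

```python
learning_rate_start = 0.5
learning_rate_min = 0.01
lr_decay = 0.95

# Decay rule from above: start at learning_rate_start, multiply by lr_decay
# at each iteration, and floor the result at learning_rate_min.
lr = learning_rate_start
for i in range(5):
    print(f"iteration {i}: lr = {lr:.5f}")
    lr = max(learning_rate_min, lr * lr_decay)

# iteration 0: lr = 0.50000
# iteration 1: lr = 0.47500
# iteration 2: lr = 0.45125
# iteration 3: lr = 0.42869
# iteration 4: lr = 0.40725
```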