Add python wrapper for Adadelta optimizer #9213
Conversation
python/paddle/fluid/optimizer.py (Outdated)
@@ -580,6 +582,58 @@ def _append_optimize_op(self, block, param_and_grad):
        return decayed_adagrad_op


class AdadeltaOptimizer(Optimizer):
    """Simple Adadelta optimizer with average squared grad state and
    average squared update state.
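For reviewers unfamiliar with the method, the two states named in the docstring implement Zeiler's Adadelta update rule. A minimal NumPy sketch of one step, assuming the standard formulation; the function and variable names here are illustrative, not the operator's actual attribute names:

```python
import numpy as np

def adadelta_step(param, grad, avg_sq_grad, avg_sq_update,
                  learning_rate=1.0, rho=0.95, epsilon=1.0e-6):
    # Decaying average of squared gradients: E[g^2] = rho*E[g^2] + (1-rho)*g^2
    avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * grad ** 2
    # Update is the gradient scaled by the ratio of the two running RMS values.
    update = -np.sqrt(avg_sq_update + epsilon) / np.sqrt(avg_sq_grad + epsilon) * grad
    # Decaying average of squared updates: E[dx^2] = rho*E[dx^2] + (1-rho)*dx^2
    avg_sq_update = rho * avg_sq_update + (1.0 - rho) * update ** 2
    param = param + learning_rate * update
    return param, avg_sq_grad, avg_sq_update
```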
python/paddle/fluid/optimizer.py (Outdated)
    def __init__(self, learning_rate, epsilon=1.0e-6, rho=0.95, **kwargs):
        assert learning_rate is not None
        assert epsilon is not None
        assert rho is not None
Better to use raise rather than assert.
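A sketch of how the suggestion could be applied; the exact exception type and messages are illustrative:

```python
def __init__(self, learning_rate, epsilon=1.0e-6, rho=0.95, **kwargs):
    # Explicit raises, unlike asserts, are not stripped under `python -O`.
    if learning_rate is None:
        raise ValueError("learning_rate is not set.")
    if epsilon is None:
        raise ValueError("epsilon is not set.")
    if rho is None:
        raise ValueError("rho is not set.")
```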
LGTM.
fix #9212