Commit 0ebca04

[keras/optimizers/legacy/optimizer_v2.py] Backtick keywords in docstring; [keras/optimizers/legacy_learning_rate_decay.py] Remove "Linear default"
SamuelMarks committed Apr 20, 2023
1 parent 1a57052 commit 0ebca04
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions keras/optimizers/legacy/optimizer_v2.py
@@ -692,12 +692,12 @@ def apply_gradients(
 
         Args:
           grads_and_vars: List of (gradient, variable) pairs.
-          name: Optional name for the returned operation. When None, uses the
+          name: Optional name for the returned operation. When `None`, uses the
             name passed to the `Optimizer` constructor. Defaults to `None`.
           experimental_aggregate_gradients: Whether to sum gradients from
             different replicas in the presence of `tf.distribute.Strategy`. If
             False, it's user responsibility to aggregate the gradients. Default
-            to True.
+            to `True`.
 
         Returns:
           An `Operation` that applies the specified gradients. The `iterations`
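
For context (not part of the commit), a minimal sketch of how `apply_gradients` and the arguments documented above are used, assuming the TF 2.x legacy optimizer API:

    import tensorflow as tf

    # Illustrative sketch only: apply explicitly computed gradients
    # with a legacy optimizer.
    opt = tf.keras.optimizers.legacy.SGD(learning_rate=0.1)
    var = tf.Variable(2.0)

    with tf.GradientTape() as tape:
        loss = var ** 2
    grads = tape.gradient(loss, [var])

    # `name` defaults to `None` (the name passed to the `Optimizer`
    # constructor is used). `experimental_aggregate_gradients` defaults
    # to `True`, summing gradients across replicas under a
    # `tf.distribute.Strategy`; pass False to aggregate them yourself.
    opt.apply_gradients(zip(grads, [var]),
                        experimental_aggregate_gradients=True)
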
2 changes: 1 addition & 1 deletion keras/optimizers/legacy_learning_rate_decay.py
@@ -264,7 +264,7 @@ def polynomial_decay(
       end_learning_rate: A scalar `float32` or `float64` `Tensor` or a Python
         number. The minimal end learning rate.
       power: A scalar `float32` or `float64` `Tensor` or a Python number. The
-        power of the polynomial. Linear is default. Defaults to `1.0`.
+        power of the polynomial. Defaults to `1.0`.
       cycle: A boolean, whether it should cycle beyond decay_steps. Defaults to
         `False`.
       name: String. Optional name of the operation. Defaults to
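
The removed sentence was redundant: the documented default `power=1.0` already makes the decay linear, per the schedule's formula decayed = (learning_rate - end_learning_rate) * (1 - global_step / decay_steps) ** power + end_learning_rate, with global_step clamped to decay_steps. A minimal sketch (not part of the commit) using the non-legacy equivalent, `tf.keras.optimizers.schedules.PolynomialDecay`:

    import tensorflow as tf

    # Illustrative sketch only. With power=1.0 (the default) the decay
    # is linear:
    #   decayed = (initial - end) * (1 - step / decay_steps) ** power + end
    schedule = tf.keras.optimizers.schedules.PolynomialDecay(
        initial_learning_rate=0.1,
        decay_steps=1000,
        end_learning_rate=0.01,
        power=1.0,    # defaults to 1.0 (linear)
        cycle=False,  # defaults to False: clamp at decay_steps, no restart
    )
    print(float(schedule(500)))  # (0.1 - 0.01) * 0.5 + 0.01 = 0.055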
