[keras/utils/audio_dataset.py,keras/utils/conv_utils.py,keras/utils/data_utils.py,keras/utils/dataset_utils.py,keras/utils/feature_space.py,keras/utils/generic_utils.py,keras/utils/image_dataset.py,keras/utils/image_utils.py,keras/utils/layer_utils.py,keras/utils/losses_utils.py,keras/utils/metrics_utils.py,keras/utils/text_dataset.py] Standardise docstring usage of "Default to"
SamuelMarks committed Apr 13, 2023
1 parent 0f8e81f commit 6893bd5
Showing 12 changed files with 50 additions and 47 deletions.
2 changes: 1 addition & 1 deletion keras/utils/audio_dataset.py
@@ -103,7 +103,7 @@ def audio_dataset_from_directory(
subset: Subset of the data to return. One of "training", "validation" or
"both". Only used if `validation_split` is set.
follow_links: Whether to visits subdirectories pointed to by symlinks.
-   Defaults to False.
+   Defaults to `False`.
Returns:
A `tf.data.Dataset` object.
4 changes: 2 additions & 2 deletions keras/utils/conv_utils.py
@@ -63,8 +63,8 @@ def normalize_tuple(value, n, name, allow_zero=False):
n: The size of the tuple to be returned.
name: The name of the argument being validated, e.g. "strides" or
"kernel_size". This is only used to format error messages.
- allow_zero: Default to False. A ValueError will raised if zero is received
-   and this param is False.
+ allow_zero: A ValueError will be raised if zero is received
+   and this param is False. Defaults to `False`.
Returns:
A tuple of n integers.
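The `allow_zero` contract documented in this hunk can be illustrated with a plain-Python sketch — a simplified stand-in written for this page, not the actual Keras implementation:

```python
def normalize_tuple(value, n, name, allow_zero=False):
    """Turn an int or iterable of ints into a tuple of `n` ints.

    Simplified sketch of the documented contract: a ValueError is
    raised when a zero is received and `allow_zero` is False.
    """
    if isinstance(value, int):
        value_tuple = (value,) * n
    else:
        value_tuple = tuple(value)
        if len(value_tuple) != n:
            raise ValueError(
                f"The `{name}` argument must be a tuple of {n} integers."
            )
    # With allow_zero, values may be >= 0; otherwise they must be >= 1.
    minimum = 0 if allow_zero else 1
    if any(v < minimum for v in value_tuple):
        raise ValueError(
            f"The `{name}` argument must contain values >= {minimum}."
        )
    return value_tuple
```

For example, `normalize_tuple(3, 2, "strides")` yields `(3, 3)`, while `normalize_tuple(0, 2, "strides")` raises unless `allow_zero=True` is passed.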
14 changes: 8 additions & 6 deletions keras/utils/data_utils.py
@@ -247,7 +247,7 @@ def get_file(
The default `'auto'` corresponds to `['tar', 'zip']`.
None or an empty list will return no matches found.
cache_dir: Location to store cached files, when None it
-   defaults to the default directory `~/.keras/`.
+   defaults to `~/.keras/`.
Returns:
Path to the downloaded file.
@@ -1063,14 +1063,16 @@ def pad_sequences(
maxlen: Optional Int, maximum length of all sequences. If not provided,
sequences will be padded to the length of the longest individual
sequence.
- dtype: (Optional, defaults to `"int32"`). Type of the output sequences.
+ dtype: (Optional). Type of the output sequences.
To pad sequences with variable length strings, you can use `object`.
- padding: String, "pre" or "post" (optional, defaults to `"pre"`):
-   pad either before or after each sequence.
- truncating: String, "pre" or "post" (optional, defaults to `"pre"`):
+   Defaults to `"int32"`.
+ padding: String, "pre" or "post" (optional):
+   pad either before or after each sequence. Defaults to `"pre"`.
+ truncating: String, "pre" or "post" (optional):
remove values from sequences larger than
`maxlen`, either at the beginning or at the end of the sequences.
- value: Float or String, padding value. (Optional, defaults to 0.)
+   Defaults to `"pre"`.
+ value: Float or String, padding value. (Optional). Defaults to `0.`.
Returns:
Numpy array with shape `(len(sequences), maxlen)`
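The defaults being standardised here (`padding="pre"`, `truncating="pre"`, `value=0`) can be seen in a minimal list-based sketch of the padding logic — the real `keras.utils.pad_sequences` operates on and returns NumPy arrays, so this is only an illustration:

```python
def pad_sequences_sketch(sequences, maxlen=None, padding="pre",
                         truncating="pre", value=0):
    """Pad/truncate each sequence to `maxlen` (list-based sketch)."""
    if maxlen is None:
        # Default: pad to the length of the longest individual sequence.
        maxlen = max(len(s) for s in sequences)
    out = []
    for s in sequences:
        s = list(s)
        if len(s) > maxlen:
            # "pre" truncation drops leading values, "post" drops trailing.
            s = s[-maxlen:] if truncating == "pre" else s[:maxlen]
        pad = [value] * (maxlen - len(s))
        # "pre" padding inserts before the sequence, "post" after it.
        out.append(pad + s if padding == "pre" else s + pad)
    return out
```

With the defaults, `[[1, 2], [3, 4, 5]]` becomes `[[0, 1, 2], [3, 4, 5]]`.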
12 changes: 6 additions & 6 deletions keras/utils/dataset_utils.py
@@ -41,11 +41,11 @@ def split_dataset(
left_size: If float (in the range `[0, 1]`), it signifies
the fraction of the data to pack in the left dataset. If integer, it
signifies the number of samples to pack in the left dataset. If
-   `None`, it defaults to the complement to `right_size`.
+   `None`, it uses the complement to `right_size`. Defaults to `None`.
right_size: If float (in the range `[0, 1]`), it signifies
the fraction of the data to pack in the right dataset. If integer, it
signifies the number of samples to pack in the right dataset. If
-   `None`, it defaults to the complement to `left_size`.
+   `None`, it uses the complement to `left_size`. Defaults to `None`.
shuffle: Boolean, whether to shuffle the data before splitting it.
seed: A random seed for shuffling.
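The complement behaviour described above (each size falling back to the complement of the other when `None`) can be sketched as follows — a simplified illustration written for this page, not the actual `split_dataset` implementation:

```python
def resolve_split_sizes(total, left_size=None, right_size=None):
    """Resolve left/right sample counts for a dataset of `total` samples.

    Floats in [0, 1] are fractions; ints are sample counts; `None`
    defaults to the complement of the other argument (sketch only).
    """
    def to_count(size):
        if size is None:
            return None
        return round(size * total) if isinstance(size, float) else size

    left, right = to_count(left_size), to_count(right_size)
    if left is None and right is None:
        raise ValueError("At least one of left_size/right_size must be set.")
    if left is None:
        left = total - right   # complement of right_size
    elif right is None:
        right = total - left   # complement of left_size
    return left, right
```

For a 10-sample dataset, `left_size=0.8` resolves to an 8/2 split without `right_size` ever being specified.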
@@ -130,10 +130,10 @@ def _convert_dataset_to_list(
dataset_type_spec : the type of the dataset
data_size_warning_flag (bool, optional): If set to True, a warning will
be issued if the dataset takes longer than 10 seconds to iterate.
-   Defaults to True.
+   Defaults to `True`.
ensure_shape_similarity (bool, optional): If set to True, the shape of
the first sample will be used to validate the shape of rest of the
-   samples. Defaults to True.
+   samples. Defaults to `True`.
Returns:
List: A list of tuples/NumPy arrays.
@@ -254,10 +254,10 @@ def _get_next_sample(
dataset_iterator : An `iterator` object.
ensure_shape_similarity (bool, optional): If set to True, the shape of
the first sample will be used to validate the shape of rest of the
-   samples. Defaults to True.
+   samples. Defaults to `True`.
data_size_warning_flag (bool, optional): If set to True, a warning will
be issued if the dataset takes longer than 10 seconds to iterate.
Defaults to True.
Defaults to `True`.
start_time (float): the start time of the dataset iteration. this is
used only if `data_size_warning_flag` is set to true.
6 changes: 3 additions & 3 deletions keras/utils/feature_space.py
@@ -105,12 +105,12 @@ class FeatureSpace(base_layer.Layer):
"crossed" by hashing their combined value into
a fixed-length vector.
crossing_dim: Default vector size for hashing crossed features.
-   Defaults to 32.
+   Defaults to `32`.
hashing_dim: Default vector size for hashing features of type
-   `"integer_hashed"` and `"string_hashed"`. Defaults to 32.
+   `"integer_hashed"` and `"string_hashed"`. Defaults to `32`.
num_discretization_bins: Default number of bins to be used for
discretizing features of type `"float_discretized"`.
-   Defaults to 32.
+   Defaults to `32`.
**Available feature types:**
2 changes: 1 addition & 1 deletion keras/utils/generic_utils.py
@@ -187,7 +187,7 @@ def update(self, current, values=None, finalize=None):
as-is. Else, an average of the metric over time will be
displayed.
finalize: Whether this is the last update for the progress bar. If
-   `None`, defaults to `current >= self.target`.
+   `None`, uses `current >= self.target`. Defaults to `None`.
"""
if finalize is None:
if self.target is None:
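The `finalize` fallback documented in the `Progbar.update` hunk can be sketched as a standalone helper — `resolve_finalize` is a hypothetical name written for illustration, and the `target is None` branch is one plausible reading of the truncated code shown above, not a quote of Keras:

```python
def resolve_finalize(finalize, current, target):
    """Sketch: when `finalize` is None it falls back to
    `current >= target` (and to False when there is no target)."""
    if finalize is None:
        if target is None:
            return False
        return current >= target
    return finalize
```

So the bar is considered finished automatically once `current` reaches the target, unless the caller forces `finalize` explicitly.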
6 changes: 3 additions & 3 deletions keras/utils/image_dataset.py
@@ -118,10 +118,10 @@ def image_dataset_from_directory(
When `subset="both"`, the utility returns a tuple of two datasets
(the training and validation datasets respectively).
interpolation: String, the interpolation method used when resizing images.
-   Defaults to `bilinear`. Supports `bilinear`, `nearest`, `bicubic`,
-   `area`, `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.
+   Supports `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`,
+   `lanczos5`, `gaussian`, `mitchellcubic`. Defaults to `bilinear`.
follow_links: Whether to visit subdirectories pointed to by symlinks.
-   Defaults to False.
+   Defaults to `False`.
crop_to_aspect_ratio: If True, resize the images without aspect
ratio distortion. When the original aspect ratio differs from the target
aspect ratio, the output image will be cropped so as to return the
26 changes: 13 additions & 13 deletions keras/utils/image_utils.py
@@ -120,9 +120,9 @@ def smart_resize(x, size, interpolation="bilinear"):
format `(height, width, channels)` or `(batch_size, height, width,
channels)`.
size: Tuple of `(height, width)` integer. Target size.
- interpolation: String, interpolation to use for resizing. Defaults to
-   `'bilinear'`. Supports `bilinear`, `nearest`, `bicubic`, `area`,
-   `lanczos3`, `lanczos5`, `gaussian`, `mitchellcubic`.
+ interpolation: String, interpolation to use for resizing. Supports
+   `bilinear`, `nearest`, `bicubic`, `area`, `lanczos3`, `lanczos5`,
+   `gaussian`, `mitchellcubic`. Defaults to `'bilinear'`.
Returns:
Array with shape `(size[0], size[1], channels)`. If the input image was a
@@ -216,14 +216,14 @@ def array_to_img(x, data_format=None, scale=True, dtype=None):
Args:
x: Input data, in any form that can be converted to a Numpy array.
data_format: Image data format, can be either `"channels_first"` or
-   `"channels_last"`. Defaults to `None`, in which case the global
+   `"channels_last"`. None means the global
setting `tf.keras.backend.image_data_format()` is used (unless you
-   changed it, it defaults to `"channels_last"`).
+   changed it, it uses `"channels_last"`). Defaults to `None`.
scale: Whether to rescale the image such that minimum and maximum values
are 0 and 255 respectively. Defaults to `True`.
- dtype: Dtype to use. Default to `None`, in which case the global setting
-   `tf.keras.backend.floatx()` is used (unless you changed it, it
-   defaults to `"float32"`)
+ dtype: Dtype to use. None makes the global setting
+   `tf.keras.backend.floatx()` to be used (unless you changed it, it
+   uses `"float32"`). Defaults to `None`.
Returns:
A PIL Image instance.
@@ -298,12 +298,12 @@ def img_to_array(img, data_format=None, dtype=None):
Args:
img: Input PIL Image instance.
data_format: Image data format, can be either `"channels_first"` or
-   `"channels_last"`. Defaults to `None`, in which case the global
+   `"channels_last"`. None means the global
setting `tf.keras.backend.image_data_format()` is used (unless you
-   changed it, it defaults to `"channels_last"`).
- dtype: Dtype to use. Default to `None`, in which case the global setting
-   `tf.keras.backend.floatx()` is used (unless you changed it, it
-   defaults to `"float32"`).
+   changed it, it uses `"channels_last"`). Defaults to `None`.
+ dtype: Dtype to use. None makes the global setting
+   `tf.keras.backend.floatx()` to be used (unless you changed it, it
+   uses `"float32"`). Defaults to `None`.
Returns:
A 3D Numpy array.
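The `None`-handling pattern documented for `data_format` and `dtype` in these hunks (fall back to a global Keras setting, which itself has a default) can be sketched in plain Python. The module-level constants here are stand-ins for `tf.keras.backend.image_data_format()` and `tf.keras.backend.floatx()`, assumed at their documented defaults:

```python
# Stand-ins for the global Keras backend settings (assumed defaults).
GLOBAL_IMAGE_DATA_FORMAT = "channels_last"
GLOBAL_FLOATX = "float32"

def resolve_data_format(data_format=None):
    """None falls back to the global image data format setting."""
    if data_format is None:
        return GLOBAL_IMAGE_DATA_FORMAT
    if data_format not in ("channels_first", "channels_last"):
        raise ValueError(f"Unknown data_format: {data_format!r}")
    return data_format

def resolve_dtype(dtype=None):
    """None falls back to the global floatx setting."""
    return GLOBAL_FLOATX if dtype is None else dtype
```

This is why the docstrings say "Defaults to `None`" while still documenting `"channels_last"` and `"float32"` as the effective values: the default is deferred to the global setting rather than hard-coded per function.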
11 changes: 6 additions & 5 deletions keras/utils/layer_utils.py
@@ -335,11 +335,12 @@ def print_summary(
It will be called on each line of the summary.
You can set it to a custom function
in order to capture the string summary.
-   It defaults to `print` (prints to stdout).
+   When `None`, uses `print` (prints to stdout).
+   Defaults to `None`.
expand_nested: Whether to expand the nested models.
-   If not provided, defaults to `False`.
+   Defaults to `False`.
show_trainable: Whether to show if a layer is trainable.
-   If not provided, defaults to `False`.
+   Defaults to `False`.
layer_range: List or tuple containing two strings,
the starting layer name and ending layer name (both inclusive),
indicating the range of layers to be printed in the summary. The
@@ -1042,9 +1043,9 @@ def warmstart_embedding_matrix(
embedding matrix.
new_embeddings_initializer: Initializer for embedding vectors for
previously unseen terms to be added to the new embedding matrix (see
-   `keras.initializers`). Defaults to "uniform". new_embedding matrix
+   `keras.initializers`). new_embedding matrix
needs to be specified with "constant" initializer.
-   matrix. Default value is None.
+   matrix. None means "uniform". Default value is None.
Returns:
tf.tensor of remapped embedding layer matrix
10 changes: 5 additions & 5 deletions keras/utils/losses_utils.py
@@ -32,11 +32,11 @@ class ReductionV2:
Contains the following values:
* `AUTO`: Indicates that the reduction option will be determined by the
-   usage context. For almost all cases this defaults to
-   `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of
-   built-in training loops such as `tf.keras` `compile` and `fit`, we expect
-   reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will
-   raise an error.
+   usage context. For almost all cases this uses `SUM_OVER_BATCH_SIZE`.
+   When used with `tf.distribute.Strategy`, outside of built-in training
+   loops such as `tf.keras` `compile` and `fit`, we expect reduction
+   value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an
+   error.
* `NONE`: No **additional** reduction is applied to the output of the
wrapped loss function. When non-scalar losses are returned to Keras
functions like `fit`/`evaluate`, the unreduced vector loss is passed to
2 changes: 1 addition & 1 deletion keras/utils/metrics_utils.py
@@ -979,7 +979,7 @@ def sparse_top_k_categorical_matches(y_true, y_pred, k=5):
y_true: tensor of true targets.
y_pred: tensor of predicted targets.
k: (Optional) Number of top elements to look at for computing accuracy.
-   Defaults to 5.
+   Defaults to `5`.
Returns:
Match tensor: 1.0 for label-prediction match, 0.0 for mismatch.
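The top-k match being documented (with its default `k=5`) amounts to checking whether the true label index is among the `k` highest-scoring predictions. A pure-Python sketch of that idea — the real function operates on tensors and handles ties and batching differently:

```python
def sparse_top_k_matches_sketch(y_true, y_pred, k=5):
    """Return 1.0 per sample if the true label is in the top-k predictions."""
    matches = []
    for label, scores in zip(y_true, y_pred):
        # Indices of the k largest scores, highest first.
        top_k = sorted(range(len(scores)),
                       key=lambda i: scores[i], reverse=True)[:k]
        matches.append(1.0 if label in top_k else 0.0)
    return matches
```

With `k=1` this reduces to ordinary categorical accuracy against sparse integer labels.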
2 changes: 1 addition & 1 deletion keras/utils/text_dataset.py
@@ -104,7 +104,7 @@ def text_dataset_from_directory(
When `subset="both"`, the utility returns a tuple of two datasets
(the training and validation datasets respectively).
follow_links: Whether to visits subdirectories pointed to by symlinks.
-   Defaults to False.
+   Defaults to `False`.
Returns:
A `tf.data.Dataset` object.