update lr scheduler doc for doing per step or epoch update (#913)
* update lr scheduler doc for doing per step or epoch update

* work

* trigger build

Co-authored-by: Olatunji Ruwase <[email protected]>
cli99 and tjruwase authored Apr 14, 2021
1 parent 8b8ed2a commit c83e49f
Showing 4 changed files with 63 additions and 47 deletions.
2 changes: 1 addition & 1 deletion deepspeed/profiling/flops_profiler/README.md
@@ -9,7 +9,7 @@

## Overview

The DeepSpeed flops profiler profiles the forward pass of a PyTorch model and prints the model graph with the measured profile attached to each module.
This profiles the forward pass of a PyTorch model and prints the model graph with the measured profile attached to each module.
It shows the parameters, latency, and number of floating point operations of the modules within the model to identify potential bottlenecks.
It also outputs the names of the top `k` modules in terms of aggregated time, flops, and number of parameters at depth `l` with `k` and `l` specified by the user.
The DeepSpeed flops profiler can be used with the DeepSpeed runtime or as a standalone package.
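
When used with the DeepSpeed runtime, the profiler is enabled through the DeepSpeed config. The sketch below is illustrative only; the key names follow the flops profiler documentation and may differ between versions:

```python
# Illustrative flops profiler section for the DeepSpeed config (key names taken
# from the profiler docs; values and names may differ between versions).
flops_profiler_config = {
    "flops_profiler": {
        "enabled": True,
        "profile_step": 1,     # which training step to profile
        "module_depth": -1,    # -1 profiles modules at all depths
        "top_modules": 3,      # report the top-k modules
        "detailed": True       # print the per-module profile
    }
}
```
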
34 changes: 18 additions & 16 deletions docs/_pages/config-json.md
@@ -82,14 +82,16 @@ The Adam optimizer also supports the following two params keys/values in additio

The 1-bit Adam optimizer supports the following three params keys/values in addition to the standard Adam (learn more in our [tutorial](/tutorials/onebit-adam/)):

| "params" key | Description | Default |
| ------------- | --------------------------------------------------------------------------- | ------- |
| freeze\_step | Number of warm up steps before 1-bit compression gets applied to the communication | 100000 |
| cuda\_aware | To indicate that the underlying MPI library supports CUDA-Aware communication | false |
| comm\_backend\_name | To indicate which backend implementation to use | "nccl" |
| "params" key | Description | Default |
| ------------------- | ---------------------------------------------------------------------------------- | ------- |
| freeze\_step | Number of warm up steps before 1-bit compression gets applied to the communication | 100000 |
| cuda\_aware | To indicate that the underlying MPI library supports CUDA-Aware communication | false |
| comm\_backend\_name | To indicate which backend implementation to use | "nccl" |

### Scheduler Parameters

DeepSpeed calls the `step()` method of the scheduler at every training step when `model_engine.step()` is executed.
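
For illustration, a minimal sketch of a config containing a scheduler block (WarmupLR shown here; the values are placeholders, not recommendations):

```python
import json

# Hypothetical minimal DeepSpeed config with a scheduler section.
ds_config = {
    "train_batch_size": 16,
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": 0.001,
            "warmup_num_steps": 1000
        }
    }
}

# Write it out as the ds_config.json passed to DeepSpeed.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```
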

***scheduler***: [dictionary]

| Fields | Value | Example |
@@ -269,8 +271,8 @@ Enabling and configuring ZeRO memory optimizations

***stage***: [integer]

| Description | Default |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Description | Default |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Chooses different stages of the ZeRO Optimizer. Stages 0, 1, 2, and 3 refer to disabled, optimizer state partitioning, optimizer+gradient state partitioning, and optimizer+gradient+parameter partitioning, respectively. | `0` |

***allgather_partitions***: [boolean]
@@ -323,26 +325,26 @@ Enabling and configuring ZeRO memory optimizations

***cpu_offload_use_pin_memory***: [boolean]

| Description | Default |
| ----------------------------------------------------------------------------------------- | ------- |
| Use pinned CPU memory when offloading. Can improve performance. Valid only with stage 3. | `False` |
| Description | Default |
| ---------------------------------------------------------------------------------------- | ------- |
| Use pinned CPU memory when offloading. Can improve performance. Valid only with stage 3. | `False` |

***stage3_max_live_parameters***: [integer]

| Description | Default |
| ------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Description | Default |
| ----------------------------------------------------------------------------------------------------------------------------------- | ------- |
| The maximum number of parameters resident per GPU before releasing. Smaller values use less memory, but perform more communication. | `1e9` |

***stage3_max_reuse_distance***: [integer]

| Description | Default |
| ---------------------------------------------------------------------------------------------------------------- | ------- |
| Description | Default |
| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Do not release a parameter if it will be reused within this threshold of parameters. Smaller values use less memory, but perform more communication. | `1e9` |

***stage3_prefetch_bucket_size***: [integer]

| Description | Default |
| ------------------------------------------------------------------------------------------------------------------------------- | ------- |
| Description | Default |
| -------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| The size of the fixed buffer for prefetching parameters. Smaller values use less memory, but can increase stalls due to communication. | `5e8` |


69 changes: 41 additions & 28 deletions docs/_tutorials/getting-started.md
@@ -1,7 +1,7 @@
---
title: "Getting Started"
title: 'Getting Started'
permalink: /getting-started/
excerpt: "First steps with DeepSpeed"
excerpt: 'First steps with DeepSpeed'
date: 2020-05-15
---

@@ -13,12 +13,14 @@ date: 2020-05-15
* If you're not on Azure, we recommend using our docker image via `docker pull deepspeed/deepspeed:latest` which contains a pre-installed version of DeepSpeed and all the necessary dependencies.

## Writing DeepSpeed Models

DeepSpeed model training is accomplished using the DeepSpeed engine. The engine
can wrap an arbitrary model of type `torch.nn.Module` and has a minimal set of APIs
for training and checkpointing the model. Please see the tutorials for detailed
examples.

To initialize the DeepSpeed engine:

```python
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
model=model,
@@ -27,10 +29,10 @@ model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,

`deepspeed.initialize` ensures that all of the necessary setup required for
distributed data parallel or mixed precision training is done
appropriately under the hood. In addition to wrapping the model, DeepSpeed can
appropriately under the hood. In addition to wrapping the model, DeepSpeed can
construct and manage the training optimizer, data loader, and the learning rate
scheduler based on the parameters passed to `deepspeed.initialize` and the
DeepSpeed [configuration file](#deepspeed-configuration).
DeepSpeed [configuration file](#deepspeed-configuration). Note that DeepSpeed automatically executes the learning rate schedule at every training step.

If you already have a distributed environment set up, you'd need to replace:

@@ -48,7 +50,6 @@ The default is to use the NCCL backend, which DeepSpeed has been thoroughly test

But if you don't need the distributed environment set up until after `deepspeed.initialize()`, you don't have to use this function, as DeepSpeed will automatically initialize the distributed environment during its `initialize`. Regardless, you will need to remove `torch.distributed.init_process_group` if you already had it in place.


### Training

Once the DeepSpeed engine has been initialized, it can be used to train the
@@ -67,32 +68,31 @@ for step, batch in enumerate(data_loader):
model_engine.step()
```


Under the hood, DeepSpeed automatically performs the necessary operations
required for distributed data parallel training, in mixed precision, with a
pre-defined learning rate schedule:
pre-defined learning rate scheduler:

* **Gradient Averaging**: in distributed data parallel training, `backward`
- **Gradient Averaging**: in distributed data parallel training, `backward`
ensures that gradients are averaged across data parallel processes after
  training on a `train_batch_size`.

* **Loss Scaling**: in FP16/mixed precision training, the DeepSpeed
- **Loss Scaling**: in FP16/mixed precision training, the DeepSpeed
engine automatically handles scaling the loss to avoid precision loss in the
gradients.

* **Learning Rate Schedule**: if using DeepSpeed's learning rate
schedule, then DeepSpeed automatically handles any updates to the learning
rate when `step` is executed.


- **Learning Rate Scheduler**: when using a DeepSpeed learning rate scheduler (specified in the `ds_config.json` file), DeepSpeed calls the `step()` method of the scheduler at every training step (when `model_engine.step()` is executed). When not using a DeepSpeed learning rate scheduler (see the sketch after this list):
  - if the schedule is supposed to execute at every training step, then the user can pass the scheduler to `deepspeed.initialize` when initializing the DeepSpeed engine and let DeepSpeed manage its updates and save/restore.
  - if the schedule is supposed to execute at any other interval (e.g., training epochs), then the user should NOT pass the scheduler to DeepSpeed during initialization and must manage it explicitly.
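
A minimal sketch of the two cases with a client-side PyTorch scheduler; `model`, `cmd_args`, `data_loader`, and `num_epochs` are assumed to come from the surrounding examples:

```python
import torch
import deepspeed

# Sketch only: `model`, `cmd_args`, `data_loader`, and `num_epochs` are assumed
# to be defined elsewhere, as in the training example above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Per-step schedule (e.g. OneCycleLR): pass it to deepspeed.initialize and
# DeepSpeed will call its step() as part of model_engine.step():
#
#   step_scheduler = torch.optim.lr_scheduler.OneCycleLR(
#       optimizer, max_lr=1e-3, total_steps=10000)
#   model_engine, optimizer, _, _ = deepspeed.initialize(
#       args=cmd_args, model=model, optimizer=optimizer, lr_scheduler=step_scheduler)

# Epoch-level schedule (e.g. StepLR): do NOT pass it to deepspeed.initialize;
# create it yourself and step it explicitly once per epoch.
epoch_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                      model=model,
                                                      optimizer=optimizer)

for epoch in range(num_epochs):
    for step, batch in enumerate(data_loader):
        loss = model_engine(batch)
        model_engine.backward(loss)
        model_engine.step()        # DeepSpeed does not step epoch_scheduler here
    epoch_scheduler.step()         # stepped explicitly, once per epoch
```
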

### Model Checkpointing

Saving and loading the training state is handled via the `save_checkpoint` and
`load_checkpoint` APIs in DeepSpeed, which take two arguments to uniquely
identify a checkpoint:
* `ckpt_dir`: the directory where checkpoints will be saved.
* `ckpt_id`: an identifier that uniquely identifies a checkpoint in the directory.
In the following code snippet, we use the loss value as the checkpoint identifier.

- `ckpt_dir`: the directory where checkpoints will be saved.
- `ckpt_id`: an identifier that uniquely identifies a checkpoint in the directory.
In the following code snippet, we use the loss value as the checkpoint identifier.

```python
#load checkpoint
@@ -133,6 +133,7 @@ each process needs to save its master weights and scheduler+optimizer states. Th
waiting to synchronize with other processes if it's called just for the process with rank 0.
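
A minimal sketch of the collective call, assuming `model_engine`, `ckpt_dir`, and `loss` from the training example above:

```python
# Every process must reach this call; wrapping it in a rank-0 guard
# (e.g. `if torch.distributed.get_rank() == 0:`) makes the other ranks hang
# at the synchronization point.
ckpt_id = loss.item()                        # illustrative identifier, as in the snippet above
model_engine.save_checkpoint(ckpt_dir, ckpt_id)
```
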

## DeepSpeed Configuration

DeepSpeed features can be enabled, disabled, or configured using a config JSON
file that should be specified as `args.deepspeed_config`. A sample config file
is shown below. For a full set of features see [ API
@@ -156,6 +157,7 @@ doc](/docs/config-json/).
```

# Launching DeepSpeed Training

DeepSpeed installs the entry point `deepspeed` to launch distributed training.
We illustrate an example usage of DeepSpeed with the following assumptions:

@@ -164,28 +166,30 @@ We illustrate an example usage of DeepSpeed with the following assumptions:
3. `client args` are the `argparse` command line arguments
4. `ds_config.json` is the configuration file for DeepSpeed


## Resource Configuration (multi-node)

DeepSpeed configures multi-node compute resources with hostfiles that are compatible with
[OpenMPI](https://www.open-mpi.org/) and [Horovod](https://github.com/horovod/horovod).
A hostfile is a list of *hostnames* (or SSH aliases), which are machines accessible via passwordless
SSH, and *slot counts*, which specify the number of GPUs available on the system. For
A hostfile is a list of _hostnames_ (or SSH aliases), which are machines accessible via passwordless
SSH, and _slot counts_, which specify the number of GPUs available on the system. For
example,

```
worker-1 slots=4
worker-2 slots=4
```
specifies that two machines named *worker-1* and *worker-2* each have four GPUs to use

specifies that two machines named _worker-1_ and _worker-2_ each have four GPUs to use
for training.

Hostfiles are specified with the `--hostfile` command line option. If no hostfile is
specified, DeepSpeed searches for `/job/hostfile`. If no hostfile is specified or found,
DeepSpeed queries the number of GPUs on the local machine to discover the number of local
slots available.


The following command launches a PyTorch training job across all available nodes and GPUs
specified in `myhostfile`:

```bash
deepspeed --hostfile=myhostfile <client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
@@ -195,20 +199,25 @@ Alternatively, DeepSpeed allows you to restrict distributed training of your mod
subset of the available nodes and GPUs. This feature is enabled through two command line
arguments: `--num_nodes` and `--num_gpus`. For example, distributed training can be
restricted to use only two nodes with the following command:

```bash
deepspeed --num_nodes=2 \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```

You can instead include or exclude specific resources using the `--include` and
`--exclude` flags. For example, to use all available resources **except** GPU 0 on node
*worker-2* and GPUs 0 and 1 on *worker-3*:
_worker-2_ and GPUs 0 and 1 on _worker-3_:

```bash
deepspeed --exclude="worker-2:0@worker-3:0,1" \
<client_entry.py> <client args> \
--deepspeed --deepspeed_config ds_config.json
```
Similarly, you can use **only** GPUs 0 and 1 on *worker-2*:

Similarly, you can use **only** GPUs 0 and 1 on _worker-2_:

```bash
deepspeed --include="worker-2:0,1" \
<client_entry.py> <client args> \
@@ -228,24 +237,26 @@ executing from and also in your home directory (`~/`).
As a concrete example, some clusters require special NCCL variables to be set
prior to training. The user can simply add these variables to a
`.deepspeed_env` file in their home directory that looks like this:

```
NCCL_IB_DISABLE=1
NCCL_SOCKET_IFNAME=eth0
```

DeepSpeed will then make sure that these environment variables are set when
launching each process on every node across the training job.


### MPI and AzureML Compatibility

As described above, DeepSpeed provides its own parallel launcher to help launch
multi-node/multi-gpu training jobs. If you prefer to launch your training job
using MPI (e.g., mpirun), we provide support for this. It should be noted that
DeepSpeed will still use the torch distributed NCCL backend and *not* the MPI
DeepSpeed will still use the torch distributed NCCL backend and _not_ the MPI
backend.

To launch your training job with mpirun + DeepSpeed or with AzureML (which uses
mpirun as a launcher backend) you simply need to install the
[mpi4py](https://pypi.org/project/mpi4py/) python package. DeepSpeed will use
[mpi4py](https://pypi.org/project/mpi4py/) python package. DeepSpeed will use
this to discover the MPI environment and pass the necessary state (e.g., world
size, rank) to the torch distributed backend.

@@ -259,8 +270,9 @@ deepspeed.init_distributed()
```

## Resource Configuration (single-node)

In the case that we are only running on a single node (with one or more GPUs)
DeepSpeed *does not* require a hostfile as described above. If a hostfile is
DeepSpeed _does not_ require a hostfile as described above. If a hostfile is
not detected or passed in then DeepSpeed will query the number of GPUs on the
local machine to discover the number of slots available. The `--include` and
`--exclude` arguments work as normal, but the user should specify 'localhost'
@@ -269,6 +281,7 @@ as the hostname.
Also note that `CUDA_VISIBLE_DEVICES` can't be used with DeepSpeed to control
which devices should be used. For example, to use only gpu1 of the current
node, do:

```bash
deepspeed --include localhost:1 ...
```
5 changes: 3 additions & 2 deletions docs/code-docs/source/schedulers.rst
@@ -1,8 +1,9 @@
Learning Rate Schedulers
========================

DeepSpeed offers implementations of ``LRRangeTest``, ``OneCycle``, ``WarmupLR``, ``WarmupDecayLR`` learning rate schedulers.

DeepSpeed offers implementations of ``LRRangeTest``, ``OneCycle``, ``WarmupLR``, ``WarmupDecayLR`` learning rate schedulers. When using a DeepSpeed learning rate scheduler (specified in the ``ds_config.json`` file), DeepSpeed calls the ``step()`` method of the scheduler at every training step (when ``model_engine.step()`` is executed). When not using a DeepSpeed learning rate scheduler:

* if the schedule is supposed to execute at every training step, then the user can pass the scheduler to ``deepspeed.initialize`` when initializing the DeepSpeed engine and let DeepSpeed manage its updates and save/restore.
* if the schedule is supposed to execute at any other interval (e.g., training epochs), then the user should NOT pass the scheduler to DeepSpeed during initialization and must manage it explicitly.

LRRangeTest
---------------------------
