Rename accelerator="ddp" to strategy="ddp" (#228)
nils-werner authored Dec 16, 2021
1 parent 7a33c0b commit 8b62eef
Showing 2 changed files with 4 additions and 4 deletions.
README.md: 6 changes (3 additions, 3 deletions)
@@ -174,10 +174,10 @@ python run.py trainer.gpus=1
python run.py +trainer.tpu_cores=8

# train with DDP (Distributed Data Parallel) (4 GPUs)
-python run.py trainer.gpus=4 +trainer.accelerator=ddp
+python run.py trainer.gpus=4 +trainer.strategy=ddp

# train with DDP (Distributed Data Parallel) (8 GPUs, 2 nodes)
-python run.py trainer.gpus=4 +trainer.num_nodes=2 +trainer.accelerator=ddp
+python run.py trainer.gpus=4 +trainer.num_nodes=2 +trainer.strategy=ddp
```

</details>
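For context on the rename: starting with PyTorch Lightning 1.5, the `Trainer` takes the distributed training mode through `strategy`, while `accelerator` refers to the hardware backend, and passing `ddp` through `accelerator` is deprecated. Below is a minimal sketch of the `Trainer` call the overrides above roughly map to, assuming pytorch-lightning >= 1.5; the model/datamodule wiring is left out and is up to the template.

```python
# Rough equivalent of `python run.py trainer.gpus=4 +trainer.num_nodes=2 +trainer.strategy=ddp`,
# assuming pytorch-lightning >= 1.5, where `strategy` replaced passing training
# types (like "ddp") through `accelerator`.
import pytorch_lightning as pl

trainer = pl.Trainer(
    gpus=4,          # GPUs per node
    num_nodes=2,     # optional: multi-node DDP, as in the second command above
    strategy="ddp",  # was `accelerator="ddp"` before Lightning 1.5
)
# trainer.fit(model, datamodule=datamodule)  # model/datamodule come from the template's config
```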
@@ -915,7 +915,7 @@ The most common one is DDP, which spawns separate process for each GPU and averages gradients between them.
You can run DDP on mnist example with 4 GPUs like this:

```bash
-python run.py trainer.gpus=4 +trainer.accelerator="ddp"
+python run.py trainer.gpus=4 +trainer.strategy=ddp
```

⚠️ When using DDP you have to be careful how you write your models - learn more [here](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html).
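To illustrate what the README paragraph above describes, the sketch below shows, outside of Lightning, the pattern that `strategy=ddp` automates: one worker process per GPU, with `DistributedDataParallel` averaging gradients across ranks during `backward()`. This is a conceptual example using plain `torch.distributed`, not code from this repository; `ToyModel`, the master address/port, and the world size of 4 are illustrative, and it assumes 4 visible CUDA devices.

```python
# Conceptual sketch only: one process per GPU, gradients averaged by DDP.
# Lightning's `strategy=ddp` sets all of this up for you.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyModel(torch.nn.Module):  # illustrative stand-in for a LightningModule
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def forward(self, x):
        return self.layer(x)

def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(ToyModel().cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(16, 8, device=f"cuda:{rank}")  # each rank sees its own data shard
    loss = model(x).pow(2).mean()
    loss.backward()  # DDP all-reduces (averages) gradients across ranks here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 4  # matches `trainer.gpus=4` above
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```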
configs/trainer/ddp.yaml: 2 changes (1 addition, 1 deletion)
@@ -2,4 +2,4 @@ defaults:
- default.yaml

gpus: 4
-accelerator: ddp
+strategy: ddp
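With the key renamed, the config group itself is selected the same way as before (e.g. `python run.py trainer=ddp`). A rough sketch of how the composed result could be checked with Hydra's compose API follows; the `configs` path and the root `config` name are assumptions about the template's layout, not taken from this diff.

```python
# Hedged sketch: inspect the composed trainer config after the rename.
# `config_path` and `config_name` are assumptions about the template layout.
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(config_path="configs"):
    cfg = compose(config_name="config", overrides=["trainer=ddp"])

print(OmegaConf.to_yaml(cfg.trainer))  # expect `strategy: ddp`, not `accelerator: ddp`
```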
