Update logo location

fsschneider committed Aug 17, 2022
1 parent 67a16fe commit 5be5d73
Showing 2 changed files with 8 additions and 2 deletions.
Binary file added .assets/mlc_logo.png
README.md (10 changes: 8 additions & 2 deletions)
@@ -2,7 +2,7 @@

<br />
<p align="center">
<a href="#"><img width="600" img src="https://nextcloud.tuebingen.mpg.de/index.php/s/oKEeMfksqdyc6Wf/preview" alt="MLCommons Logo"/></a>
<a href="#"><img width="600" img src=".assets/mlc_logo.png" alt="MLCommons Logo"/></a>
</p>

<p align="center">
@@ -43,19 +43,23 @@
3. We use pip to install the `algorithmic_efficiency` package.

*TL;DR to install the Jax version for GPU, run:*

```bash
pip3 install -e '.[pytorch_cpu]'
pip3 install -e '.[jax_gpu]' -f 'https://storage.googleapis.com/jax-releases/jax_cuda_releases.html'
pip3 install -e '.[full]'
```

*TL;DR to install the PyTorch version for GPU, run:*

```bash
pip3 install -e '.[jax_cpu]'
pip3 install -e '.[pytorch_gpu]' -f 'https://download.pytorch.org/whl/torch_stable.html'
pip3 install -e '.[full]'
```
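
As a quick sanity check that the install succeeded, you can try importing the package (a minimal sketch; it assumes the pip package exposes an `algorithmic_efficiency` module, which this diff does not confirm):

```bash
# Hypothetical smoke test: import the package and print where it was installed from.
python3 -c "import algorithmic_efficiency; print(algorithmic_efficiency.__file__)"
```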

#### Additional Details

```bash
pip3 install -e .
```
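
The `-e` flag installs the package in editable (development) mode, so local changes to the source take effect without reinstalling. Extras can also be combined in a single call; a sketch under the assumption that the CPU extras shown above are safe to install together (not confirmed by this diff):

```bash
# Hypothetical combined install of the two CPU extras.
pip3 install -e '.[jax_cpu,pytorch_cpu]'
```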
@@ -168,9 +172,11 @@ python3 submission_runner.py \
When using multiple GPUs on a single node, it is recommended to use PyTorch's
[distributed data parallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
To do so, simply replace `python3` with

```bash
torchrun --standalone --nnodes=1 --nproc_per_node=N_GPUS
```

where `N_GPUS` is the number of available GPUs on the node.
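
For example, on a node with 8 GPUs, a `python3 submission_runner.py ...` invocation becomes the following (the `submission_runner.py` arguments are elided here, as in the diff above):

```bash
# Hypothetical 8-GPU invocation; pass the usual submission_runner.py flags in place of "...".
torchrun --standalone --nnodes=1 --nproc_per_node=8 submission_runner.py ...
```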

## Rules
