Update docs for max-autotune usage (#1405)
Co-authored-by: Jack-Khuu <[email protected]>
yanbing-j and Jack-Khuu authored Dec 9, 2024
1 parent 326c1fe commit aba0679
Showing 2 changed files with 5 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/ADVANCED-USERS.md
@@ -251,6 +251,8 @@ To improve performance, you can compile the model with `--compile`
trading off the time to first token processed with time per token. To
improve performance further, you may also compile the prefill with
`--compile-prefill`. This will increase further compilation times though.
For CPU, you can use `--max-autotune` to further improve performance
with `--compile` and `--compile-prefill`. See the [`max-autotune` on CPU tutorial](https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html).

Parallel prefill is not yet supported by exported models, and may be
supported in a future release.
3 changes: 3 additions & 0 deletions docs/model_customization.md
@@ -34,6 +34,9 @@ prefill with `--compile_prefill`.

To learn more about compilation, check out: https://pytorch.org/get-started/pytorch-2.0/

For CPU, you can use `--max-autotune` to further improve performance with `--compile` and `--compile_prefill`.

See the [`max-autotune` on CPU tutorial](https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html).

## Model Precision

