Commit 4afdea5

Merge branch 'main' into patch-20

Jack-Khuu authored Dec 9, 2024
2 parents fa269bb + dacabcd

Showing 7 changed files with 75 additions and 7 deletions.
19 changes: 18 additions & 1 deletion .ci/scripts/run-docs
@@ -95,7 +95,7 @@ fi
if [ "$1" == "multimodal" ]; then

# We expect this test might fail as-is, because
-# it's the first on-pr test depending on githib secrets for access with HF token access
+# it's the first on-pr test depending on github secrets for access with HF token access

echo "::group::Create script to run multimodal"
python3 torchchat/utils/scripts/updown.py --file docs/multimodal.md > ./run-multimodal.sh
@@ -111,3 +111,20 @@ if [ "$1" == "multimodal" ]; then
bash -x ./run-multimodal.sh
echo "::endgroup::"
fi
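
As a hedged aside (not part of this diff): per the comment above, running the multimodal doc test locally needs a Hugging Face access token. A hypothetical invocation, assuming the token is supplied via `HF_TOKEN` (the variable huggingface_hub reads):

```bash
# Placeholder token; the multimodal doc pulls gated Hugging Face models.
export HF_TOKEN="hf_..."  # hypothetical value
TORCHCHAT_DEVICE=cpu .ci/scripts/run-docs multimodal
```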

if [ "$1" == "native" ]; then

echo "::group::Create script to run native-execution"
python3 torchchat/utils/scripts/updown.py --file docs/native-execution.md > ./run-native.sh
# For good measure: if something went wrong in the updown processor
# and it did not error out, fail with an explicit exit 1
echo "exit 1" >> ./run-native.sh
echo "::endgroup::"

echo "::group::Run native-execution"
echo "*******************************************"
cat ./run-native.sh
echo "*******************************************"
bash -x ./run-native.sh
echo "::endgroup::"
fi
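
For reference, this new branch can be exercised outside CI the same way the workflow changes below run it; `TORCHCHAT_DEVICE=cpu` selects the CPU path:

```bash
# Run the doc-driven native-execution test the way the new CI jobs do.
# Omit TORCHCHAT_DEVICE to use the default device (as test-native-any does).
TORCHCHAT_DEVICE=cpu .ci/scripts/run-docs native
```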
1 change: 1 addition & 0 deletions .github/workflows/run-readme-pr-mps.yml
Original file line number Diff line number Diff line change
@@ -10,6 +10,7 @@ jobs:
    uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
    with:
      runner: macos-m1-14
      timeout-minutes: 50
      script: |
        conda create -y -n test-readme-mps-macos python=3.10.11 llvm-openmp
        conda activate test-readme-mps-macos
43 changes: 43 additions & 0 deletions .github/workflows/run-readme-pr.yml
@@ -287,3 +287,46 @@ jobs:
        echo "::endgroup::"
        TORCHCHAT_DEVICE=cpu .ci/scripts/run-docs multimodal

  test-native-any:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.g5.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "12.1"
      timeout: 60
      script: |
        echo "::group::Print machine info"
        uname -a
        echo "::endgroup::"

        echo "::group::Install newer objcopy that supports --set-section-alignment"
        yum install -y devtoolset-10-binutils
        export PATH=/opt/rh/devtoolset-10/root/usr/bin/:$PATH
        echo "::endgroup::"

        .ci/scripts/run-docs native

        echo "::group::Completion"
        echo "tests complete"
        echo "*******************************************"
        echo "::endgroup::"

  test-native-cpu:
    uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
    with:
      runner: linux.g5.4xlarge.nvidia.gpu
      gpu-arch-type: cuda
      gpu-arch-version: "12.1"
      timeout: 60
      script: |
        echo "::group::Print machine info"
        uname -a
        echo "::endgroup::"

        echo "::group::Install newer objcopy that supports --set-section-alignment"
        yum install -y devtoolset-10-binutils
        export PATH=/opt/rh/devtoolset-10/root/usr/bin/:$PATH
        echo "::endgroup::"

        TORCHCHAT_DEVICE=cpu .ci/scripts/run-docs native
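
(A hedged aside, not part of this diff: a quick way to confirm that the devtoolset-10 objcopy is the one being picked up, and that it understands the flag the build needs.)

```bash
# Verify which objcopy is on PATH and that it supports the flag.
which objcopy
objcopy --help | grep -- --set-section-alignment
```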
8 changes: 5 additions & 3 deletions README.md
@@ -231,6 +231,8 @@ python3 torchchat.py server llama3.1
```
[skip default]: end

+[shell default]: python3 torchchat.py server llama3.1 & server_pid=$!

In another terminal, query the server using `curl`. Depending on the model configuration, the server may take a few minutes to respond.

> [!NOTE]
@@ -244,8 +246,6 @@ Setting `stream` to "true" in the request emits a response in chunks. If `stream`

**Example Input + Output**

-[skip default]: begin

```
curl http://127.0.0.1:5000/v1/chat/completions \
-H "Content-Type: application/json" \
@@ -265,12 +265,14 @@ curl http://127.0.0.1:5000/v1/chat/completions \
]
}'
```
+[skip default]: begin
```
{"response":" I'm a software developer with a passion for building innovative and user-friendly applications. I have experience in developing web and mobile applications using various technologies such as Java, Python, and JavaScript. I'm always looking for new challenges and opportunities to learn and grow as a developer.\n\nIn my free time, I enjoy reading books on computer science and programming, as well as experimenting with new technologies and techniques. I'm also interested in machine learning and artificial intelligence, and I'm always looking for ways to apply these concepts to real-world problems.\n\nI'm excited to be a part of the developer community and to have the opportunity to share my knowledge and experience with others. I'm always happy to help with any questions or problems you may have, and I'm looking forward to learning from you as well.\n\nThank you for visiting my profile! I hope you find my information helpful and interesting. If you have any questions or would like to discuss any topics, please feel free to reach out to me. I"}
```

[skip default]: end
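
As a hedged illustration (not part of this diff) of the streaming mode described above: the same request with `stream` set to "true", following the doc's wording. The model name and message below are placeholders.

```bash
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "stream": "true",
    "messages": [
      {"role": "user", "content": "Hello, what is your name?"}
    ]
  }'
```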

+[shell default]: kill ${server_pid}

</details>

@@ -664,6 +666,6 @@ awesome libraries and tools you've built around local LLM inference.

torchchat is released under the [BSD 3 license](LICENSE). (Additional
code in this distribution is covered by the MIT and Apache Open Source
-licenses.) However you may have other legal obligations that govern
+licenses.) However, you may have other legal obligations that govern
your use of content, such as the terms of service for third-party
models.
2 changes: 2 additions & 0 deletions docs/ADVANCED-USERS.md
@@ -249,6 +249,8 @@ To improve performance, you can compile the model with `--compile`
trading off time to first token for time per token. To improve
performance further, you may also compile the prefill with
`--compile-prefill`, though this increases compilation time. On CPU,
`--max-autotune` can yield additional gains when combined with
`--compile` and `--compile-prefill`; see the
[`max-autotune` on CPU tutorial](https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html).
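
For instance, a hypothetical invocation combining these flags (the model name and prompt are placeholders; flag spellings follow this doc):

```bash
python3 torchchat.py generate llama3.1 \
  --prompt "Hello, my name is" \
  --compile --compile-prefill \
  --max-autotune --device cpu
```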

Parallel prefill is not yet supported by exported models, and may be
supported in a future release.
3 changes: 3 additions & 0 deletions docs/model_customization.md
@@ -34,6 +34,9 @@ prefill with `--compile_prefill`.

To learn more about compilation, check out: https://pytorch.org/get-started/pytorch-2.0/

On CPU, `--max-autotune` can further improve performance when combined with `--compile` and `--compile_prefill`; see the [`max-autotune` on CPU tutorial](https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html).

## Model Precision

6 changes: 3 additions & 3 deletions install/install_requirements.sh
@@ -62,13 +62,13 @@ echo "Using pip executable: $PIP_EXECUTABLE"
# NOTE: If a newly-fetched version of the executorch repo changes the value of
# PYTORCH_NIGHTLY_VERSION, you should re-run this script to install the necessary
# package versions.
-PYTORCH_NIGHTLY_VERSION=dev20241010
+PYTORCH_NIGHTLY_VERSION=dev20241013

# Nightly version for torchvision
-VISION_NIGHTLY_VERSION=dev20241010
+VISION_NIGHTLY_VERSION=dev20241013

# Nightly version for torchtune
-TUNE_NIGHTLY_VERSION=dev20241010
+TUNE_NIGHTLY_VERSION=dev20241013
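
(A hedged sketch, not from this repo: pins like these are typically combined with a base version and installed from the PyTorch nightly index. The base version and CUDA index below are illustrative placeholders.)

```bash
PYTORCH_NIGHTLY_VERSION=dev20241013
# Install a matching nightly wheel; 2.6.0 and cu121 are placeholders.
pip3 install --pre "torch==2.6.0.${PYTORCH_NIGHTLY_VERSION}" \
  --index-url https://download.pytorch.org/whl/nightly/cu121
```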

# Uninstall triton, as nightly will depend on pytorch-triton, which is one and the same
(
