diff --git a/CHANGELOG.md b/CHANGELOG.md
index fa3f2c4..3738ad4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,11 +16,9 @@ limitations under the License.
 
 # Changelog
 
-## Unreleased
+## 0.5.11 (2024-09-18)
 
-[//]: <> (put here on external component update with short summary what change or link to changelog)
-
-- Version of [Triton Inference Server](https://github.com/triton-inference-server/) embedded in wheel: [2.48.0](https://github.com/triton-inference-server/server/releases/tag/v2.48.0)
+- Version of [Triton Inference Server](https://github.com/triton-inference-server/) embedded in wheel: [2.49.0](https://github.com/triton-inference-server/server/releases/tag/v2.49.0)
 
 ## 0.5.10 (2024-08-02)
 
diff --git a/Makefile b/Makefile
index 68ff11b..53ce48f 100644
--- a/Makefile
+++ b/Makefile
@@ -36,8 +36,8 @@ export PRINT_HELP_PYSCRIPT
 BROWSER := python -c "$$BROWSER_PYSCRIPT"
 
 PIP_INSTALL := pip install --extra-index-url https://pypi.ngc.nvidia.com
-TEST_CONTAINER_VERSION ?= 24.07
-TRITONSERVER_IMAGE_VERSION ?= 24.07
+TEST_CONTAINER_VERSION ?= 24.08
+TRITONSERVER_IMAGE_VERSION ?= 24.08
 TRITONSERVER_IMAGE_NAME = nvcr.io/nvidia/tritonserver:$(TRITONSERVER_IMAGE_VERSION)-pyt-python-py3
 TRITONSERVER_OUTPUT_DIR = ${PWD}/pytriton/tritonserver
 TRITONSERVER_BASENAME = pytriton
diff --git a/examples/dali_resnet101_pytorch/README.md b/examples/dali_resnet101_pytorch/README.md
index 9b42500..d0f12dd 100644
--- a/examples/dali_resnet101_pytorch/README.md
+++ b/examples/dali_resnet101_pytorch/README.md
@@ -89,7 +89,7 @@ To run this example, please follow these steps:
 2. Run the NVIDIA PyTorch container:
 
 ```shell
-$ docker run -it --gpus all --shm-size 8gb -v $(pwd):/dali -w /dali --net host nvcr.io/nvidia/pytorch:24.07-py3 bash
+$ docker run -it --gpus all --shm-size 8gb -v $(pwd):/dali -w /dali --net host nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 3. Install PyTriton following the [installation instruction](../../README.md#installation)
diff --git a/examples/huggingface_bart_pytorch/README.md b/examples/huggingface_bart_pytorch/README.md
index ee83e31..fb2d5f7 100644
--- a/examples/huggingface_bart_pytorch/README.md
+++ b/examples/huggingface_bart_pytorch/README.md
@@ -40,7 +40,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
@@ -97,7 +97,7 @@ export DOCKER_IMAGE_NAME_WITH_TAG=localhost:5000/bart-pytorch-example:latest
 ```shell
 # Export the base image used for build
-export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ./examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh
 ```
 
 **Note**: By default the container is built using `pytriton` package from `GitHub`. To build container with wheel built
diff --git a/examples/huggingface_bart_pytorch/kubernetes/Dockerfile b/examples/huggingface_bart_pytorch/kubernetes/Dockerfile
index 8fd3fdc..4776efb 100644
--- a/examples/huggingface_bart_pytorch/kubernetes/Dockerfile
+++ b/examples/huggingface_bart_pytorch/kubernetes/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ARG BUILD_FROM
 
 FROM ${FROM_IMAGE_NAME} as base
diff --git a/examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh b/examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh
index c80131c..f7e46ee 100755
--- a/examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh
+++ b/examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh
@@ -22,7 +22,7 @@ fi
 
 if [ -z ${FROM_IMAGE_NAME} ]; then
   echo "Provide Docker image that would be used as base image"
   echo "Example:"
-  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3"
+  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3"
   exit 1
 fi
diff --git a/examples/huggingface_dialogpt_streaming_pytorch/README.md b/examples/huggingface_dialogpt_streaming_pytorch/README.md
index 6a568f3..50e959f 100644
--- a/examples/huggingface_dialogpt_streaming_pytorch/README.md
+++ b/examples/huggingface_dialogpt_streaming_pytorch/README.md
@@ -40,7 +40,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
@@ -97,7 +97,7 @@ export DOCKER_IMAGE_NAME_WITH_TAG=localhost:5000/bart-pytorch-example:latest
 ```shell
 # Export the base image used for build
-export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ./examples/huggingface_bart_pytorch/kubernetes/build_and_push.sh
 ```
 
 **Note**: By default the container is built using `pytriton` package from `GitHub`. To build container with wheel built
diff --git a/examples/huggingface_opt_multinode_jax/Dockerfile b/examples/huggingface_opt_multinode_jax/Dockerfile
index 3da300d..71b3972 100644
--- a/examples/huggingface_opt_multinode_jax/Dockerfile
+++ b/examples/huggingface_opt_multinode_jax/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.07-tf2-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.08-tf2-py3
 FROM ${FROM_IMAGE_NAME}
 
 ENV XLA_PYTHON_CLIENT_PREALLOCATE=false
diff --git a/examples/huggingface_opt_multinode_jax/README.md b/examples/huggingface_opt_multinode_jax/README.md
index 634e8d1..48b6e59 100644
--- a/examples/huggingface_opt_multinode_jax/README.md
+++ b/examples/huggingface_opt_multinode_jax/README.md
@@ -90,7 +90,7 @@ The easiest way of running this example is inside a [nvcr.io](https://catalog.ng
 container. Example `Dockerfile` that can be used to run the server:
 
 ```Dockerfile
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.07-tf2-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.08-tf2-py3
 FROM ${FROM_IMAGE_NAME}
 
 ENV XLA_PYTHON_CLIENT_PREALLOCATE=false
@@ -181,7 +181,7 @@ export DOCKER_IMAGE_NAME_WITH_TAG=localhost:5000/jax-example:latest
 ```shell
 # Export the base image used for build. We use TensorFlow image for JAX
-export FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.07-tf2-py3
+export FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.08-tf2-py3
 ./examples/huggingface_opt_multinode_jax/kubernetes/build_and_push.sh
 ```
 
 **Note**: By default the container is built using `pytriton` package from pypi.org. To build container with wheel built
diff --git a/examples/huggingface_opt_multinode_jax/kubernetes/Dockerfile b/examples/huggingface_opt_multinode_jax/kubernetes/Dockerfile
index b2cbf61..ab646e4 100644
--- a/examples/huggingface_opt_multinode_jax/kubernetes/Dockerfile
+++ b/examples/huggingface_opt_multinode_jax/kubernetes/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.07-tf2-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:24.08-tf2-py3
 ARG BUILD_FROM=pypi
 
 FROM ${FROM_IMAGE_NAME} as base
diff --git a/examples/huggingface_resnet_pytorch/README.md b/examples/huggingface_resnet_pytorch/README.md
index 4e32e22..b81aa36 100644
--- a/examples/huggingface_resnet_pytorch/README.md
+++ b/examples/huggingface_resnet_pytorch/README.md
@@ -41,7 +41,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
@@ -98,7 +98,7 @@ export DOCKER_IMAGE_NAME_WITH_TAG=localhost:5000/resnet-pytorch-example:latest
 ```shell
 # Export the base image used for build
-export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ./examples/huggingface_resnet_pytorch/kubernetes/build_and_push.sh
 ```
diff --git a/examples/huggingface_resnet_pytorch/kubernetes/Dockerfile b/examples/huggingface_resnet_pytorch/kubernetes/Dockerfile
index 88006a2..38482bc 100644
--- a/examples/huggingface_resnet_pytorch/kubernetes/Dockerfile
+++ b/examples/huggingface_resnet_pytorch/kubernetes/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ARG BUILD_FROM
 
 FROM ${FROM_IMAGE_NAME} as base
diff --git a/examples/huggingface_resnet_pytorch/kubernetes/build_and_push.sh b/examples/huggingface_resnet_pytorch/kubernetes/build_and_push.sh
index 4a864d7..8d8cfe0 100755
--- a/examples/huggingface_resnet_pytorch/kubernetes/build_and_push.sh
+++ b/examples/huggingface_resnet_pytorch/kubernetes/build_and_push.sh
@@ -22,7 +22,7 @@ fi
 
 if [ -z ${FROM_IMAGE_NAME} ]; then
   echo "Provide Docker image that would be used as base image"
   echo "Example:"
-  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3"
+  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3"
   exit 1
 fi
diff --git a/examples/huggingface_stable_diffusion/README.md b/examples/huggingface_stable_diffusion/README.md
index e3c723b..7d107d1 100644
--- a/examples/huggingface_stable_diffusion/README.md
+++ b/examples/huggingface_stable_diffusion/README.md
@@ -41,7 +41,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
@@ -99,7 +99,7 @@ export DOCKER_IMAGE_NAME_WITH_TAG=localhost:5000/stable-diffusion-example:latest
 ```shell
 # Export the base image used for build
-export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ./examples/huggingface_stable_diffusion/kubernetes/build_and_push.sh
 ```
diff --git a/examples/huggingface_stable_diffusion/kubernetes/Dockerfile b/examples/huggingface_stable_diffusion/kubernetes/Dockerfile
index f659958..5041cc1 100644
--- a/examples/huggingface_stable_diffusion/kubernetes/Dockerfile
+++ b/examples/huggingface_stable_diffusion/kubernetes/Dockerfile
@@ -11,7 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3
+ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3
 ARG BUILD_FROM
 
 FROM ${FROM_IMAGE_NAME} as base
diff --git a/examples/huggingface_stable_diffusion/kubernetes/build_and_push.sh b/examples/huggingface_stable_diffusion/kubernetes/build_and_push.sh
index 4e2b5c2..18046b5 100755
--- a/examples/huggingface_stable_diffusion/kubernetes/build_and_push.sh
+++ b/examples/huggingface_stable_diffusion/kubernetes/build_and_push.sh
@@ -22,7 +22,7 @@ fi
 
 if [ -z ${FROM_IMAGE_NAME} ]; then
   echo "Provide Docker image that would be used as base image"
   echo "Example:"
-  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.07-py3"
+  echo "  export FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:24.08-py3"
   exit 1
 fi
diff --git a/examples/linear_random_pytorch/README.md b/examples/linear_random_pytorch/README.md
index ddd78ea..41a7d91 100644
--- a/examples/linear_random_pytorch/README.md
+++ b/examples/linear_random_pytorch/README.md
@@ -35,7 +35,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
diff --git a/examples/mlp_random_tensorflow2/README.md b/examples/mlp_random_tensorflow2/README.md
index a363132..94d7862 100644
--- a/examples/mlp_random_tensorflow2/README.md
+++ b/examples/mlp_random_tensorflow2/README.md
@@ -35,7 +35,7 @@ pip install tensorflow
 Or you can use NVIDIA TensorFlow container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/tensorflow:24.07-tf2-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/tensorflow:24.08-tf2-py3 bash
 ```
 
 If you select to use container we recommend to install
diff --git a/examples/multi_instance_resnet50_pytorch/README.md b/examples/multi_instance_resnet50_pytorch/README.md
index 3b35d57..bfef794 100644
--- a/examples/multi_instance_resnet50_pytorch/README.md
+++ b/examples/multi_instance_resnet50_pytorch/README.md
@@ -37,7 +37,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
diff --git a/examples/perf_analyzer/README.md b/examples/perf_analyzer/README.md
index 8b5c61f..be0331f 100644
--- a/examples/perf_analyzer/README.md
+++ b/examples/perf_analyzer/README.md
@@ -38,7 +38,7 @@ pip install torch
 Or you can use NVIDIA PyTorch container:
 
 ```shell
-docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.07-py3 bash
+docker run -it --gpus 1 --shm-size 8gb -v {repository_path}:{repository_path} -w {repository_path} nvcr.io/nvidia/pytorch:24.08-py3 bash
 ```
 
 If you select to use container we recommend to install
diff --git a/pyproject.toml b/pyproject.toml
index 2211c1e..8f03eee 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -41,7 +41,7 @@ dependencies = [
     "protobuf >= 3.7",
     "pyzmq >= 23.0",
     "sh >= 1.14",
-    "tritonclient[grpc,http] ~= 2.48",
+    "tritonclient[grpc,http] ~= 2.49",
     "grpcio >= 1.64.1",  # fix grpc client compatibility
     "typing_inspect >= 0.6.0",
     "wrapt >= 1.11",