Release 2.11.0 corresponding to NGC container 21.06
Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
What's New In 2.11.0
- The Forest Inference Library (FIL) backend has been added to Triton. The FIL backend allows forest models trained by several popular machine learning frameworks (including XGBoost, LightGBM, Scikit-Learn, and cuML) to be deployed in Triton.
- The Windows version of Triton now includes the OpenVINO backend.
- The Performance Analyzer (perf_analyzer) now supports testing against the Triton C API.
- The Python backend now allows the use of conda to create a unique execution environment for your Python model. See https://github.com/triton-inference-server/python_backend#using-custom-python-execution-environments. A sketch of this workflow appears after this list.
- Python models that crash or exit unexpectedly are now automatically restarted by Triton.
- Model repositories in S3 storage can now be accessed using the HTTPS protocol. See https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md#s3 for more information.
- Triton now collects GPU metrics for MIG partitions.
- Passive model instances can now be specified in the model configuration. A passive model instance is loaded and initialized by Triton, but no inference requests are sent to it. Passive instances are typically used by a custom backend that uses its own mechanisms to distribute work to them. See the ModelInstanceGroup section of model_config.proto for the setting; a configuration sketch appears after this list.
- NVDLA support has been added to the TensorRT backend.
- The ONNX Runtime version has been updated to 1.8.0.
- The Windows build documentation has been simplified and improved.
- Improved the detailed and summary reports in Model Analyzer.
- Added an offline mode to Model Analyzer.
- The DALI backend now accepts GPU inputs.
- The DALI backend added support for dynamic batching and ragged inputs.
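As a rough sketch of the custom Python execution environment workflow mentioned above (the environment name, package list, and paths are illustrative assumptions; the authoritative steps are in the python_backend link):

# Create and pack a conda environment holding the model's Python dependencies.
conda create -y -n my-model-env python=3.8 numpy
conda activate my-model-env
pip install conda-pack
conda-pack -o my-model-env.tar.gz

# Then point the model at the packed environment through the
# EXECUTION_ENV_PATH parameter in its config.pbtxt:
#   parameters: {
#     key: "EXECUTION_ENV_PATH",
#     value: { string_value: "/path/to/my-model-env.tar.gz" }
#   }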
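For the passive instance setting, a minimal model configuration sketch might look like the following (the count and kind are illustrative; the passive field is defined on ModelInstanceGroup in model_config.proto):

instance_group [
  {
    count: 2
    kind: KIND_GPU
    passive: true
  }
]

With this configuration Triton loads and initializes both instances but never schedules inference requests on them; the backend is responsible for dispatching work to the passive instances itself.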
Known Issues
- There are backwards incompatible changes in the example Python client shared-memory support library when that library is used for tensors of type BYTES. The utils.serialize_byte_tensor() and utils.deserialize_byte_tensor() functions now return np.object_ numpy arrays where they previously returned np.bytes_ numpy arrays. Code depending on np.bytes_ must be updated. This change was necessary because the np.bytes_ type removes all trailing zeros from each array element, so binary sequences ending in zero(s) could not be represented with the old behavior; a short numpy illustration appears after this list. Correct usage of the Python client shared-memory support library is shown in https://github.com/triton-inference-server/server/blob/r21.03/src/clients/python/examples/simple_http_shm_string_client.py.
- The 21.06 release of Triton was built against the wrong commit of the FIL backend code, causing an incompatible version of RAPIDS to be used instead of the intended RAPIDS 21.06 stable release. Although the Triton server itself and the other integrated backends work, the FIL backend does not work in the 21.06 Triton container. This issue is fixed in the new 21.06.1 container released on NGC.
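The trailing-zero behavior that motivated the np.object_ change can be reproduced with numpy alone (a standalone illustration, independent of the client library):

import numpy as np

# numpy's fixed-width bytes type silently drops trailing zero bytes ...
a = np.array([b"abc\x00\x00"], dtype=np.bytes_)
print(a[0])   # b'abc' -- the trailing zeros are lost

# ... while np.object_ preserves each element exactly, which is why the
# shared-memory utilities now return np.object_ arrays for BYTES tensors.
b = np.array([b"abc\x00\x00"], dtype=np.object_)
print(b[0])   # b'abc\x00\x00'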
Client Libraries and Examples
Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.11.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04 based NGC container. The SDK container includes the client libraries and examples, the Performance Analyzer, and the Model Analyzer. Some components are also available in the tritonclient pip package; see Getting the Client Libraries for more information on each of these options.
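For the pip route, a typical install of the client package with all optional features (the "all" extra is assumed here to pull in both the HTTP and GRPC clients) looks like:

pip3 install tritonclient[all]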
For Windows, the client libraries and some examples are available in the attached tritonserver2.11.0-sdk-win.zip file.
Windows Support
An alpha release of Triton for Windows is provided in the attached file: tritonserver2.11.0-win.zip. Because this is an alpha release, functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically, in this release:
- TensorRT models are supported. The TensorRT version is 7.2.2.
- ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.8.0. The CPU, CUDA, and TensorRT execution providers are supported; the OpenVINO execution provider is not supported.
- OpenVINO models are supported. The OpenVINO version is 2021.2.
- Only the GRPC endpoint is supported; HTTP/REST is not supported.
- The Prometheus metrics endpoint is not supported.
- System and CUDA shared memory are not supported.
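Assuming the zip file is extracted and the server executable is named tritonserver.exe (an assumption about the archive layout), a minimal launch might look like the following; clients must then connect over GRPC (port 8001 by default), since HTTP/REST is not available in this release:

tritonserver.exe --model-repository=C:\path\to\model_repo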
The following components are required for this release and must be installed on the Windows system:
- NVIDIA Driver release 455 or later.
- CUDA 11.1.1
- cuDNN 8.0.5
- TensorRT 7.2.2
Jetson JetPack Support
A release of Triton for JetPack 4.5 (https://developer.nvidia.com/embedded/jetpack) is provided in the attached file: tritonserver2.11.0-jetpack4.5.tgz. This release supports TensorFlow 2.4.0, TensorFlow 1.15.5, TensorRT 7.1, and ONNX Runtime 1.8.0, as well as ensembles. For the ONNX Runtime backend, the TensorRT execution provider is supported but the OpenVINO execution provider is not. System shared memory is supported on Jetson. GPU metrics, GCS storage, S3 storage, and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples.
Installation and Usage
The following dependencies must be installed before running Triton.
apt-get update && \
apt-get install -y --no-install-recommends \
software-properties-common \
autoconf \
automake \
build-essential \
cmake \
git \
libb64-dev \
libre2-dev \
libssl-dev \
libtool \
libboost-dev \
libcurl4-openssl-dev \
libopenblas-dev \
rapidjson-dev \
patchelf \
zlib1g-dev
To run the clients, the following dependencies must be installed.
apt-get install -y --no-install-recommends \
curl \
libopencv-dev=3.2.0+dfsg-4ubuntu0.1 \
libopencv-core-dev=3.2.0+dfsg-4ubuntu0.1 \
pkg-config \
python3 \
python3-pip \
python3-dev
pip3 install --upgrade wheel setuptools cython && \
pip3 install --upgrade grpcio-tools numpy future attrdict
The Python wheel for the Python client library is included in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.11.0-py3-none-linux_aarch64.whl[all]
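Once the wheel is installed and a Triton server is running, a quick connectivity check might look like the following (the URL and port are illustrative and assume the HTTP endpoint is enabled):

import tritonclient.http as httpclient

# Connect to the server's HTTP endpoint and confirm it is reachable.
client = httpclient.InferenceServerClient(url="localhost:8000")
print(client.is_server_live())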
On Jetson, the backend directory must be explicitly set with the --backend-directory flag. Triton also defaults to TensorFlow 1.x; a version string is required to select TensorFlow 2.x, as in the following command.
tritonserver --model-repository=/path/to/model_repo --backend-directory=/path/to/tritonserver/backends \
--backend-config=tensorflow,version=2