Commit

Update README.md
mc-nv committed Nov 22, 2024
1 parent 2561287 commit 7c9c7d0
Showing 1 changed file with 9 additions and 9 deletions.
README.md: 18 changes (9 additions & 9 deletions)
@@ -57,7 +57,7 @@ Major features include:
- Provides [Backend API](https://github.com/triton-inference-server/backend) that
allows adding custom backends and pre/post processing operations
- Supports writing custom backends in python, a.k.a.
-  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/main/docs/python_based_backends.md#python-based-backends)
+  [Python-based backends.](https://github.com/triton-inference-server/backend/blob/r24.11/docs/python_based_backends.md#python-based-backends)
- Model pipelines using
[Ensembling](docs/user_guide/architecture.md#ensemble-models) or [Business
Logic Scripting
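
For readers following the links in this hunk: a Python-based backend is a single `model.py` exposing a `TritonPythonModel` class, and Business Logic Scripting (BLS) lets that class invoke other deployed models mid-request. A minimal sketch, assuming a deployed model named `preprocessor` with tensors `INPUT0`/`OUTPUT0` (all names hypothetical):

```python
# model.py -- minimal Python-based backend using BLS (hypothetical names).
import triton_python_backend_utils as pb_utils  # provided by the Triton runtime


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            # BLS: forward the tensor to another deployed model
            # ("preprocessor" is an assumed model name whose input
            # is also called INPUT0).
            bls_request = pb_utils.InferenceRequest(
                model_name="preprocessor",
                requested_output_names=["OUTPUT0"],
                inputs=[in0],
            )
            bls_response = bls_request.exec()
            if bls_response.has_error():
                raise pb_utils.TritonModelException(bls_response.error().message())

            out = pb_utils.get_output_tensor_by_name(bls_response, "OUTPUT0")
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```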
@@ -170,10 +170,10 @@ configuration](docs/user_guide/model_configuration.md) for the model.
  [Python](https://github.com/triton-inference-server/python_backend), and more
- Not all the above backends are supported on every platform supported by Triton.
Look at the
-  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/main/docs/backend_platform_support_matrix.md)
+  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/r24.11/docs/backend_platform_support_matrix.md)
to learn which backends are supported on your target platform.
- Learn how to [optimize performance](docs/user_guide/optimization.md) using the
-  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/main/README.md)
+  [Performance Analyzer](https://github.com/triton-inference-server/perf_analyzer/blob/r24.11/README.md)
and
  [Model Analyzer](https://github.com/triton-inference-server/model_analyzer)
- Learn how to [manage loading and unloading models](docs/user_guide/model_management.md) in
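
The model-management link above pairs with the client API: when the server is started with `--model-control-mode=explicit`, models can be loaded and unloaded at runtime. A sketch using the `tritonclient` package (server address and model name are assumptions):

```python
# Sketch: runtime model load/unload against a server running in explicit
# model-control mode. URL and model name are assumptions.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

client.load_model("densenet_onnx")             # pull the model into memory
print(client.is_model_ready("densenet_onnx"))  # True once loading completes
client.unload_model("densenet_onnx")           # release it again
```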
@@ -187,14 +187,14 @@ A Triton *client* application sends inference and other requests to Triton. The
[Python and C++ client libraries](https://github.com/triton-inference-server/client)
provide APIs to simplify this communication.

-- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/examples),
-  [Python](https://github.com/triton-inference-server/client/blob/main/src/python/examples),
-  and [Java](https://github.com/triton-inference-server/client/blob/main/src/java/src/main/java/triton/client/examples)
+- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/r24.11/src/c%2B%2B/examples),
+  [Python](https://github.com/triton-inference-server/client/blob/r24.11/src/python/examples),
+  and [Java](https://github.com/triton-inference-server/client/blob/r24.11/src/java/src/main/java/triton/client/examples)
- Configure [HTTP](https://github.com/triton-inference-server/client#http-options)
  and [gRPC](https://github.com/triton-inference-server/client#grpc-options)
client options
- Send input data (e.g. a jpeg image) directly to Triton in the [body of an HTTP
-  request without any additional metadata](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md#raw-binary-request)
+  request without any additional metadata](https://github.com/triton-inference-server/server/blob/r24.11/docs/protocol/extension_binary_data.md#raw-binary-request)
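
The client libraries in this hunk reduce a full round trip to a few calls. A minimal sketch with the Python HTTP client (model and tensor names are hypothetical):

```python
# Sketch: one inference round trip over HTTP (hypothetical model/tensor names).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor, attach the data, and name the wanted output.
infer_input = httpclient.InferInput("INPUT0", [1, 16], "FP32")
infer_input.set_data_from_numpy(np.zeros((1, 16), dtype=np.float32))
requested = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer("simple", inputs=[infer_input], outputs=[requested])
print(result.as_numpy("OUTPUT0"))
```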

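The raw-binary bullet in this hunk means the JSON inference header can be skipped entirely: per the binary-data extension, sending `Inference-Header-Content-Length: 0` tells Triton to treat the whole HTTP body as the single input tensor, for models whose input metadata it can deduce. A sketch using the `requests` library (URL, model name, and file are assumptions):

```python
# Sketch: send a JPEG as the raw request body, no JSON inference header.
# Assumes a single-input model whose tensor metadata Triton can deduce.
import requests

with open("mug.jpg", "rb") as f:  # hypothetical image file
    body = f.read()

resp = requests.post(
    "http://localhost:8000/v2/models/densenet_onnx/infer",  # assumed model name
    headers={"Inference-Header-Content-Length": "0"},
    data=body,
)
print(resp.status_code)
print(resp.content[:200])  # JSON response, possibly followed by binary tensor data
```
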
### Extend Triton

@@ -203,7 +203,7 @@ designed for modularity and flexibility

- [Customize Triton Inference Server container](docs/customization_guide/compose.md) for your use case
- [Create custom backends](https://github.com/triton-inference-server/backend)
-  in either [C/C++](https://github.com/triton-inference-server/backend/blob/main/README.md#triton-backend-api)
+  in either [C/C++](https://github.com/triton-inference-server/backend/blob/r24.11/README.md#triton-backend-api)
  or [Python](https://github.com/triton-inference-server/python_backend)
- Create [decoupled backends and models](docs/user_guide/decoupled_models.md) that can send
multiple responses for a request or not send any responses for a request
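
Decoupled models, referenced just above, do not return responses from `execute`; instead each request carries a response sender that may emit zero, one, or many responses. A minimal Python-backend sketch (tensor names hypothetical; the model config must set the transaction policy to decoupled):

```python
# Sketch: decoupled execution in a Python backend. Requires
# model_transaction_policy { decoupled: true } in the model's config.pbtxt.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            # Any number of responses may be sent for one request.
            for _ in range(2):
                out = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))

            # Close this request's response stream.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
        return None
```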
@@ -212,7 +212,7 @@ designed for modularity and flexibility
decryption, or conversion
- Deploy Triton on [Jetson and JetPack](docs/user_guide/jetson.md)
- [Use Triton on AWS
-  Inferentia](https://github.com/triton-inference-server/python_backend/tree/main/inferentia)
+  Inferentia](https://github.com/triton-inference-server/python_backend/tree/r24.11/inferentia)

### Additional Documentation
