feat: Update documentation with new library name Torch-TensorRT
Signed-off-by: Dheeraj Peri <[email protected]>

peri044 committed Oct 21, 2021
1 parent 483ef59 commit e5f96d9
Showing 32 changed files with 716 additions and 709 deletions.
10 changes: 5 additions & 5 deletions docsrc/Makefile
@@ -21,14 +21,14 @@ check_clean:
clean: check_clean
rm -rf $(BUILDDIR)/*
ifndef VERSION
-rm -rf /tmp/trtorch_docs
-mkdir -p /tmp/trtorch_docs
-mv $(DESTDIR)/v* /tmp/trtorch_docs
+rm -rf /tmp/torchtrt_docs
+mkdir -p /tmp/torchtrt_docs
+mv $(DESTDIR)/v* /tmp/torchtrt_docs
endif
rm -r $(DESTDIR)/*
ifndef VERSION
-mv /tmp/trtorch_docs/v* $(DESTDIR)
-rm -rf /tmp/trtorch_docs
+mv /tmp/torchtrt_docs/v* $(DESTDIR)
+rm -rf /tmp/torchtrt_docs
endif
rm -rf $(SOURCEDIR)/_cpp_api
rm -rf $(SOURCEDIR)/_notebooks
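The clean recipe above stashes the versioned doc trees (v*) in /tmp/torchtrt_docs, wipes the destination, and restores them, so published version snapshots survive a rebuild. A rough Python sketch of that stash-and-restore, assuming $(DESTDIR) points at a local docs directory (the path here is illustrative):

    import glob, os, shutil

    DESTDIR = "docs"                    # assumption: wherever $(DESTDIR) points
    TMP = "/tmp/torchtrt_docs"

    shutil.rmtree(TMP, ignore_errors=True)
    os.makedirs(TMP)
    for d in glob.glob(os.path.join(DESTDIR, "v*")):    # stash versioned docs
        shutil.move(d, TMP)
    for p in glob.glob(os.path.join(DESTDIR, "*")):     # wipe everything else
        shutil.rmtree(p) if os.path.isdir(p) else os.remove(p)
    for d in glob.glob(os.path.join(TMP, "v*")):        # restore versioned docs
        shutil.move(d, DESTDIR)
    shutil.rmtree(TMP)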
18 changes: 9 additions & 9 deletions docsrc/RELEASE_CHECKLIST.md
@@ -1,14 +1,14 @@
# Release Process

-Here is the process we use for creating new releases of TRTorch
+Here is the process we use for creating new releases of Torch-TensorRT

## Criteria for Release

-While TRTorch is in alpha, patch versions are bumped sequentially on breaking changes in the compiler.
+While Torch-TensorRT is in alpha, patch versions are bumped sequentially on breaking changes in the compiler.

-In beta TRTorch will get a minor version bump on breaking changes, or upgrade to the next version of PyTorch, patch version will be incremented based on significant bug fixes, or siginficant new functionality in the compiler.
+In beta Torch-TensorRT will get a minor version bump on breaking changes, or upgrade to the next version of PyTorch, patch version will be incremented based on significant bug fixes, or siginficant new functionality in the compiler.

-Once TRTorch hits version 1.0.0, major versions are bumped on breaking API changes, breaking changes or significant new functionality in the compiler
+Once Torch-TensorRT hits version 1.0.0, major versions are bumped on breaking API changes, breaking changes or significant new functionality in the compiler
will result in a minor version bump and sigificant bug fixes will result in a patch version change.

## Steps to Packaging a Release
@@ -20,7 +20,7 @@ will result in a minor version bump and sigificant bug fixes will result in a pa
- Required, Python API and Optional Tests should pass on both x86_64 and aarch64
- All checked in applications (cpp and python) should compile and work
3. Generate new index of converters and evalutators
-- `bazel run //tools/supportedops -- <PATH TO TRTORCH>/docsrc/indices/supported_ops.rst`
+- `bazel run //tools/supportedops -- <PATH TO Torch-TensorRT>/docsrc/indices/supported_ops.rst`
4. Version bump PR
- There should be a PR which will be the PR that bumps the actual version of the library, this PR should contain the following
- Bump version in `py/setup.py`
@@ -49,7 +49,7 @@ will result in a minor version bump and sigificant bug fixes will result in a pa
- `[3, 224, 224]`
- `[3, 1920, 1080]` (P2)
- Batch Sizes: 1, 4, 8, 16, 32
-- Frameworks: PyTorch, TRTorch, ONNX + TRT
+- Frameworks: PyTorch, Torch-TensorRT, ONNX + TRT
- If any models do not convert to ONNX / TRT, that is fine. Mark them as failling / no result
- Devices:
- A100 (P0)
@@ -61,11 +61,11 @@ will result in a minor version bump and sigificant bug fixes will result in a pa

6. Once PR is merged tag commit and start creating release on GitHub
- Paste in Milestone information and Changelog information into release notes
-- Generate libtrtorch.tar.gz for the following platforms:
+- Generate libtorchtrt.tar.gz for the following platforms:
- x86_64 cxx11-abi
- x86_64 pre-cxx11-abi
- TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
- Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
- TODO: Build a manylinux container for aarch64
-- `docker run -it -v$(pwd)/..:/workspace/TRTorch build_trtorch_wheel /bin/bash /workspace/TRTorch/py/build_whl.sh` generates all wheels
-- To build container `docker build -t build_trtorch_wheel .`
+- `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
+- To build container `docker build -t build_torch_tensorrt_wheel .`
40 changes: 20 additions & 20 deletions docsrc/conf.py
@@ -10,15 +10,15 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
-import os
+import os
import sys

sys.path.append(os.path.join(os.path.dirname(__name__), '../py'))

import sphinx_material
# -- Project information -----------------------------------------------------

-project = 'TRTorch'
+project = 'Torch-TensorRT'
copyright = '2021, NVIDIA Corporation'
author = 'NVIDIA Corporation'

@@ -63,15 +63,15 @@
html_static_path = ['_static']

# Setup the breathe extension
-breathe_projects = {"TRTorch": "./_tmp/xml"}
-breathe_default_project = "TRTorch"
+breathe_projects = {"Torch-TensorRT": "./_tmp/xml"}
+breathe_default_project = "Torch-TensorRT"

# Setup the exhale extension
exhale_args = {
# These arguments are required
"containmentFolder": "./_cpp_api",
"rootFileName": "trtorch_cpp.rst",
"rootFileTitle": "TRTorch C++ API",
"rootFileName": "torch_tensort_cpp.rst",
"rootFileTitle": "Torch-TensorRT C++ API",
"doxygenStripFromPath": "..",
# Suggested optional arguments
"createTreeView": True,
@@ -92,10 +92,10 @@
# Material theme options (see theme.conf for more information)
html_theme_options = {
# Set the name of the project to appear in the navigation.
-'nav_title': 'TRTorch',
+'nav_title': 'Torch-TensorRT',
# Specify a base_url used to generate sitemap.xml. If not
# specified, then no sitemap will be built.
-'base_url': 'https://nvidia.github.io/TRTorch/',
+'base_url': 'https://nvidia.github.io/Torch-TensorRT/',

# Set the color and the accent color
'theme_color': '84bd00',
@@ -107,8 +107,8 @@
"logo_icon": "&#xe86f",

# Set the repo location to get a badge with stats
-'repo_url': 'https://github.com/nvidia/TRTorch/',
-'repo_name': 'TRTorch',
+'repo_url': 'https://github.com/nvidia/Torch-TensorRT/',
+'repo_name': 'Torch-TensorRT',

# Visible levels of the global TOC; -1 means unlimited
'globaltoc_depth': 1,
@@ -118,21 +118,21 @@
'globaltoc_includehidden': True,
'master_doc': True,
"version_info": {
"master": "https://nvidia.github.io/TRTorch/",
"v0.4.1": "https://nvidia.github.io/TRTorch/v0.4.1/",
"v0.4.0": "https://nvidia.github.io/TRTorch/v0.4.0/",
"v0.3.0": "https://nvidia.github.io/TRTorch/v0.3.0/",
"v0.2.0": "https://nvidia.github.io/TRTorch/v0.2.0/",
"v0.1.0": "https://nvidia.github.io/TRTorch/v0.1.0/",
"v0.0.3": "https://nvidia.github.io/TRTorch/v0.0.3/",
"v0.0.2": "https://nvidia.github.io/TRTorch/v0.0.2/",
"v0.0.1": "https://nvidia.github.io/TRTorch/v0.0.1/",
"master": "https://nvidia.github.io/Torch-TensorRT/",
"v0.4.1": "https://nvidia.github.io/Torch-TensorRT/v0.4.1/",
"v0.4.0": "https://nvidia.github.io/Torch-TensorRT/v0.4.0/",
"v0.3.0": "https://nvidia.github.io/Torch-TensorRT/v0.3.0/",
"v0.2.0": "https://nvidia.github.io/Torch-TensorRT/v0.2.0/",
"v0.1.0": "https://nvidia.github.io/Torch-TensorRT/v0.1.0/",
"v0.0.3": "https://nvidia.github.io/Torch-TensorRT/v0.0.3/",
"v0.0.2": "https://nvidia.github.io/Torch-TensorRT/v0.0.2/",
"v0.0.1": "https://nvidia.github.io/Torch-TensorRT/v0.0.1/",
}
}

# Tell sphinx what the primary language being documented is.
primary_domain = 'cpp'
-cpp_id_attributes = ["TRTORCH_API"]
+cpp_id_attributes = ["TORCHTRT_API"]

# Tell sphinx what the pygments highlight language should be.
highlight_language = 'cpp'
4 changes: 2 additions & 2 deletions docsrc/contributors/conversion.rst
@@ -32,7 +32,7 @@ inputs and assemble an array of resources to pass to the converter. Inputs can b
static value has been evaluated

* The input is from a node that has not been converted
-* TRTorch will error out here
+* Torch-TensorRT will error out here

Node Evaluation
-----------------
@@ -49,4 +49,4 @@ Node converters map JIT nodes to layers or subgraphs of layers. They then associ
and the TRT graph together in the conversion context. This allows the conversion stage to assemble the inputs
for the next node. There are some cases where a node produces an output that is not a Tensor but a static result
from a calculation done on inputs which need to be converted first. In this case the converter may associate the outputs in
-the ``evaluated_value_map`` instead of the ``value_tensor_map``. For more information take a look at: :ref:`writing_converters`
\ No newline at end of file
+the ``evaluated_value_map`` instead of the ``value_tensor_map``. For more information take a look at: :ref:`writing_converters`
20 changes: 10 additions & 10 deletions docsrc/contributors/lowering.rst
@@ -33,7 +33,7 @@ Dead code elimination will check if a node has side effects and not delete it if
Eliminate Exeception Or Pass Pattern
***************************************

-`trtorch/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/exception_elimination.cpp>`_
+`Torch-TensorRT/core/lowering/passes/exception_elimination.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/exception_elimination.cpp>`_

A common pattern in scripted modules are dimension gaurds which will throw execptions if
the input dimension is not what was expected.
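For illustration, a guard of the kind this pass eliminates might look like the following sketch (hypothetical module, not from the patch):

    import torch

    class Guarded(torch.nn.Module):
        def forward(self, x):
            # Scripting turns this guard into a prim::If whose true branch
            # raises; the pass strips that exception branch before conversion.
            if x.dim() != 4:
                raise RuntimeError("expected a 4D input")
            return x * 2

    scripted = torch.jit.script(Guarded())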
@@ -68,7 +68,7 @@ Freeze attributes and inline constants and modules. Propogates constants in the
Fuse AddMM Branches
***************************************

-`trtorch/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_addmm_branches.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_addmm_branches.cpp>`_

A common pattern in scripted modules is tensors of different dimensions use different constructions for implementing linear layers. We fuse these
different varients into a single one that will get caught by the Unpack AddMM pass.
@@ -101,7 +101,7 @@ This pass fuse the addmm or matmul + add generated by JIT back to linear
Fuse Flatten Linear
***************************************

-`trtorch/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_
+`Torch-TensorRT/core/lowering/passes/fuse_flatten_linear.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/fuse_flatten_linear.cpp>`_

TensorRT implicity flattens input layers into fully connected layers when they are higher than 1D. So when there is a
``aten::flatten`` -> ``aten::linear`` pattern we remove the ``aten::flatten``.
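A sketch of the pattern being matched (hypothetical module, not from the patch):

    import torch

    class FlattenLinear(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(3 * 8 * 8, 10)

        def forward(self, x):
            # aten::flatten feeding aten::linear; the flatten is removed
            # because TensorRT flattens fully connected inputs implicitly.
            return self.fc(torch.flatten(x, 1))

    out = FlattenLinear()(torch.randn(2, 3, 8, 8))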
@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed, this is used
Module Fallback
*****************

-`trtorch/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/module_fallback.cpp>`
+`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`

Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.
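From user code this is typically driven through the compile spec; a hedged sketch, assuming the post-rename Python API accepts a torch_executed_modules-style option (package name and module names below are placeholders/assumptions):

    import torch
    import torch_tensorrt  # assumed package name after the rename

    scripted_model = torch.jit.script(MyModel().eval())  # MyModel: placeholder
    trt_mod = torch_tensorrt.compile(
        scripted_model,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        # keep every instance of the named module running in PyTorch:
        torch_executed_modules=["torchvision.models.resnet.BasicBlock"],
    )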
@@ -162,30 +162,30 @@ Right now, it does:
Remove Contiguous
***************************************

-`trtorch/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_contiguous.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_contiguous.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_contiguous.cpp>`_

Removes contiguous operators since we are doing TensorRT memory is already contiguous.


Remove Dropout
***************************************

-`trtorch/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_dropout.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_dropout.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_dropout.cpp>`_

Removes dropout operators since we are doing inference.
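A quick check of why the op is safe to drop at inference time (illustrative):

    import torch

    drop = torch.nn.Dropout(p=0.5)
    drop.eval()                      # inference mode: dropout is the identity
    x = torch.randn(4, 8)
    assert torch.equal(drop(x), x)   # so the node can simply be removed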

Remove To
***************************************

-`trtorch/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/remove_to.cpp>`_
+`Torch-TensorRT/core/lowering/passes/remove_to.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/remove_to.cpp>`_

Removes ``aten::to`` operators that do casting, since TensorRT mangages it itself. It is important that this is one of the last passes run so that
other passes have a change to move required cast operators out of the main namespace.

Unpack AddMM
***************************************

-`trtorch/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/unpack_addmm.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_addmm.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_addmm.cpp>`_

Unpacks ``aten::addmm`` into ``aten::matmul`` and ``aten::add_`` (with an additional ``trt::const``
op to freeze the bias in the TensorRT graph). This lets us reuse the ``aten::matmul`` and ``aten::add_``
@@ -194,7 +194,7 @@ converters instead of needing a dedicated converter.
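The addmm rewrite is numerically an identity, as a quick sketch shows:

    import torch

    bias = torch.randn(10)
    mat1 = torch.randn(4, 8)
    mat2 = torch.randn(8, 10)

    fused = torch.addmm(bias, mat1, mat2)        # aten::addmm
    unpacked = torch.matmul(mat1, mat2) + bias   # aten::matmul + aten::add
    assert torch.allclose(fused, unpacked)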
Unpack LogSoftmax
***************************************

-`trtorch/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/trtorch/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_
+`Torch-TensorRT/core/lowering/passes/unpack_log_softmax.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/unpack_log_softmax.cpp>`_

Unpacks ``aten::logsoftmax`` into ``aten::softmax`` and ``aten::log``. This lets us reuse the
``aten::softmax`` and ``aten::log`` converters instead of needing a dedicated converter.
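Again an identity rewrite, though the unpacked form is marginally less numerically stable:

    import torch
    import torch.nn.functional as F

    x = torch.randn(4, 10)
    fused = F.log_softmax(x, dim=1)              # aten::log_softmax
    unpacked = torch.log(F.softmax(x, dim=1))    # aten::softmax + aten::log
    assert torch.allclose(fused, unpacked, atol=1e-6)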
@@ -204,4 +204,4 @@ Unroll Loops

`torch/csrc/jit/passes/loop_unrolling.h <https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/loop_unrolling.h>`_

-Unrolls the operations of compatable loops (e.g. sufficently short) so that you only have to go through the loop once.
\ No newline at end of file
+Unrolls the operations of compatable loops (e.g. sufficently short) so that you only have to go through the loop once.
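For example, a scripted loop with a short, statically known trip count is a candidate (illustrative):

    import torch

    @torch.jit.script
    def accumulate(x: torch.Tensor) -> torch.Tensor:
        for _ in range(4):    # short, compile-time-known trip count
            x = x + 1
        return x

    # After unrolling, the prim::Loop node can be replaced by four
    # consecutive aten::add nodes.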
4 changes: 2 additions & 2 deletions docsrc/contributors/phases.rst
@@ -15,7 +15,7 @@ Lowering
^^^^^^^^^^^
:ref:`lowering`

-The lowering is made up of a set of passes (some from PyTorch and some specific to TRTorch)
+The lowering is made up of a set of passes (some from PyTorch and some specific to Torch-TensorRT)
run over the graph IR to map the large PyTorch opset to a reduced opset that is easier to convert to
TensorRT.

@@ -43,4 +43,4 @@ Compilation and Runtime
The final compilation phase constructs a TorchScript program to run the converted TensorRT engine. It
takes a serialized engine and instantiates it within a engine manager, then the compiler will
build out a JIT graph that references this engine and wraps it in a module to return to the user.
-When the user executes the module, the JIT program run in the JIT runtime extended by TRTorch with the data providied from the user.
+When the user executes the module, the JIT program run in the JIT runtime extended by Torch-TensorRT with the data providied from the user.
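All three phases run under a single compile call; a minimal end-to-end sketch, assuming the post-rename Python API and a CUDA-capable machine (TinyNet is a placeholder):

    import torch
    import torch_tensorrt  # assumed package name after the rename

    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 8, 3)

        def forward(self, x):
            return torch.relu(self.conv(x))

    model = torch.jit.script(TinyNet().eval().cuda())
    trt_model = torch_tensorrt.compile(   # lowering -> conversion -> runtime wrap
        model,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        enabled_precisions={torch.float},
    )
    print(trt_model(torch.randn(1, 3, 224, 224).cuda()))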
8 changes: 4 additions & 4 deletions docsrc/contributors/runtime.rst
@@ -21,7 +21,7 @@ torch::jit::Value type).
TensorRT Engine Executor Op
----------------------------

-When the TRTorch is loaded, it registers an operator in the PyTorch JIT operator library called
+When the Torch-TensorRT is loaded, it registers an operator in the PyTorch JIT operator library called
``trt::execute_engine(Tensor[] inputs, __torch__.torch.classes.tensorrt.Engine engine) -> Tensor[]`` which takes an
instantiated engine and list of inputs. Compiled graphs store this engine in an attribute so that it is portable and serializable.
When the op is called, an instnantiated engine and input tensors are popped off the runtime stack. These inputs are passed into a generic engine execution function which
@@ -72,8 +72,8 @@ execution.
ABI Versioning and Serialization Format
=========================================

-TRTorch programs are standard TorchScript with TensorRT engines as objects embedded in the graph. Therefore there is a serialization format
-for the TensorRT engines. The format for TRTorch serialized programs are versioned with an "ABI" version which tells the runtime about runtime compatibility.
+Torch-TensorRT programs are standard TorchScript with TensorRT engines as objects embedded in the graph. Therefore there is a serialization format
+for the TensorRT engines. The format for Torch-TensorRT serialized programs are versioned with an "ABI" version which tells the runtime about runtime compatibility.

> Current ABI version is 3

@@ -82,4 +82,4 @@ The format is a vector of serialized strings. They encode the following informat
* ABI Version for the program
* Name of the TRT engine
* Device information: Includes the target device the engine was built on, SM capability and other device information. This information is used at deserialization time to select the correct device to run the engine
-* Serialized TensorRT engine
\ No newline at end of file
+* Serialized TensorRT engine
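One way to see the executor op is to print a compiled module's graph (sketch; assumes trt_model from a torch_tensorrt.compile call such as the one sketched in phases.rst):

    # trt_model: output of a prior torch_tensorrt.compile(...) call
    print(trt_model.graph)
    # Expect a node resembling:
    #   %out : Tensor[] = trt::execute_engine(%inputs, %engine)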
6 changes: 3 additions & 3 deletions docsrc/contributors/system_overview.rst
@@ -3,7 +3,7 @@
System Overview
================

-TRTorch is primarily a C++ Library with a Python API planned. We use Bazel as our build system and target Linux x86_64 and
+Torch-TensorRT is primarily a C++ Library with a Python API planned. We use Bazel as our build system and target Linux x86_64 and
Linux aarch64 (only natively) right now. The compiler we use is GCC 7.5.0 and the library is untested with compilers before that
version so there may be compilation errors if you try to use an older compiler.

@@ -13,7 +13,7 @@ The repository is structured into:
* cpp: C++ API
* tests: tests of the C++ API, the core and converters
* py: Python API
-* notebooks: Example applications built with TRTorch
+* notebooks: Example applications built with Torch-TensorRT
* docs: Documentation
* docsrc: Documentation Source
* third_party: BUILD files for dependency libraries
@@ -26,4 +26,4 @@ The core has a couple major parts: The top level compiler interface which coordi
converting and generating a new module and returning it back to the user. The there are the three main phases of the
compiler, the lowering phase, the conversion phase, and the execution phase.

-.. include:: phases.rst
\ No newline at end of file
+.. include:: phases.rst
3 changes: 1 addition & 2 deletions docsrc/contributors/useful_links.rst
@@ -1,6 +1,6 @@
.. _useful_links:

-Useful Links for TRTorch Development
+Useful Links for Torch-TensorRT Development
=====================================

TensorRT Available Layers and Expected Dimensions
@@ -32,4 +32,3 @@ PyTorch IR Documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/OVERVIEW.md
-