Commit
docs: [Automated] Regenerating documentation for 254eab2
Signed-off-by: TRTorch Github Bot <[email protected]>
TRTorch Github Bot committed Jul 28, 2021
1 parent 254eab2 commit 24de61b
Showing 7 changed files with 157 additions and 149 deletions.
2 changes: 1 addition & 1 deletion docs/_notebooks/Resnet50-example.html
@@ -693,7 +693,7 @@
</div>
</div>
<p>
-<img alt="ff74b56771a64c978bf8eab7b77f6d64" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="3f3df38af6534d96b3e00b4a5ac32254" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
</p>
<h1 id="notebooks-resnet50-example--page-root">
TRTorch Getting Started - ResNet 50
2 changes: 1 addition & 1 deletion docs/_notebooks/lenet-getting-started.html
@@ -787,7 +787,7 @@
</div>
</div>
<p>
-<img alt="415b837ad51b4452b05ef87991e84465" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="9fafb7c6b26042f09019abca625e72a9" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
</p>
<h1 id="notebooks-lenet-getting-started--page-root">
TRTorch Getting Started - LeNet
2 changes: 1 addition & 1 deletion docs/_notebooks/ssd-object-detection-demo.html
@@ -807,7 +807,7 @@
</div>
</div>
<p>
-<img alt="474905315e234b31a168b62fdd2739fa" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="656583182bac46c79c1fd7d509553e59" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
</p>
<h1 id="notebooks-ssd-object-detection-demo--page-root">
Object Detection with TRTorch (SSD)
146 changes: 75 additions & 71 deletions docs/_sources/tutorials/trtorchc.rst.txt
@@ -19,79 +19,83 @@ to standard TorchScript. Load with ``torch.jit.load()`` and run like you would r
trtorchc [input_file_path] [output_file_path]
[input_specs...] {OPTIONS}
-TRTorch is a compiler for TorchScript, it will compile and optimize
-TorchScript programs to run on NVIDIA GPUs using TensorRT
+TRTorch is a compiler for TorchScript, it will compile and optimize
+TorchScript programs to run on NVIDIA GPUs using TensorRT

-OPTIONS:
+OPTIONS:

-   -h, --help                        Display this help menu
-   Verbiosity of the compiler
-   -v, --verbose                     Dumps debugging information about the
-                                     compilation process onto the console
-   -w, --warnings                    Disables warnings generated during
-                                     compilation onto the console (warnings
-                                     are on by default)
-   --i, --info                       Dumps info messages generated during
-                                     compilation onto the console
-   --build-debuggable-engine         Creates a debuggable engine
-   --use-strict-types                Restrict operating type to only use set
-                                     operation precision
-   --allow-gpu-fallback              (Only used when targeting DLA
-                                     (device-type)) Lets engine run layers on
-                                     GPU if they are not supported on DLA
-   --disable-tf32                    Prevent Float32 layers from using the
-                                     TF32 data format
-   -p[precision...],
-   --enabled-precison=[precision...] (Repeatable) Enabling an operating
-                                     precision for kernels to use when
-                                     building the engine (Int8 requires a
-                                     calibration-cache argument) [ float |
-                                     float32 | f32 | half | float16 | f16 |
-                                     int8 | i8 ] (default: float)
-   -d[type], --device-type=[type]    The type of device the engine should be
-                                     built for [ gpu | dla ] (default: gpu)
-   --gpu-id=[gpu_id]                 GPU id if running on multi-GPU platform
-                                     (defaults to 0)
-   --dla-core=[dla_core]             DLACore id if running on available DLA
-                                     (defaults to 0)
-   --engine-capability=[capability]  The type of device the engine should be
-                                     built for [ default | safe_gpu |
-                                     safe_dla ]
-   --calibration-cache-file=[file_path]
-                                     Path to calibration cache file to use
-                                     for post training quantization
-   --num-min-timing-iter=[num_iters] Number of minimization timing iterations
-                                     used to select kernels
-   --num-avg-timing-iters=[num_iters]
-                                     Number of averaging timing iterations
-                                     used to select kernels
-   --workspace-size=[workspace_size] Maximum size of workspace given to
-                                     TensorRT
-   --max-batch-size=[max_batch_size] Maximum batch size (must be >= 1 to be
-                                     set, 0 means not set)
-   -t[threshold],
-   --threshold=[threshold]           Maximum acceptable numerical deviation
-                                     from standard torchscript output
-                                     (default 2e-5)
-   --save-engine                     Instead of compiling a full a
-                                     TorchScript program, save the created
-                                     engine to the path specified as the
-                                     output path
-   input_file_path                   Path to input TorchScript file
-   output_file_path                  Path for compiled TorchScript (or
-                                     TensorRT engine) file
-   input_specs...                    Specs for inputs to engine, can either
-                                     be a single size or a range defined by
-                                     Min, Optimal, Max sizes, e.g.
-                                     "(N,..,C,H,W)"
-                                     "[(MIN_N,..,MIN_C,MIN_H,MIN_W);(OPT_N,..,OPT_C,OPT_H,OPT_W);(MAX_N,..,MAX_C,MAX_H,MAX_W)]".
-                                     Data Type and format can be specified by
-                                     adding an "@" followed by dtype and "%"
-                                     followed by format to the end of the
-                                     shape spec. e.g. "(3, 3, 32,
-                                     32)@f16%NHWC"
-   "--" can be used to terminate flag options and force all following
-   arguments to be treated as positional options
+   -h, --help                        Display this help menu
+   Verbiosity of the compiler
+   -v, --verbose                     Dumps debugging information about the
+                                     compilation process onto the console
+   -w, --warnings                    Disables warnings generated during
+                                     compilation onto the console (warnings
+                                     are on by default)
+   --i, --info                       Dumps info messages generated during
+                                     compilation onto the console
+   --build-debuggable-engine         Creates a debuggable engine
+   --use-strict-types                Restrict operating type to only use set
+                                     operation precision
+   --allow-gpu-fallback              (Only used when targeting DLA
+                                     (device-type)) Lets engine run layers on
+                                     GPU if they are not supported on DLA
+   --disable-tf32                    Prevent Float32 layers from using the
+                                     TF32 data format
+   -p[precision...],
+   --enabled-precison=[precision...] (Repeatable) Enabling an operating
+                                     precision for kernels to use when
+                                     building the engine (Int8 requires a
+                                     calibration-cache argument) [ float |
+                                     float32 | f32 | half | float16 | f16 |
+                                     int8 | i8 ] (default: float)
+   -d[type], --device-type=[type]    The type of device the engine should be
+                                     built for [ gpu | dla ] (default: gpu)
+   --gpu-id=[gpu_id]                 GPU id if running on multi-GPU platform
+                                     (defaults to 0)
+   --dla-core=[dla_core]             DLACore id if running on available DLA
+                                     (defaults to 0)
+   --engine-capability=[capability]  The type of device the engine should be
+                                     built for [ default | safe_gpu |
+                                     safe_dla ]
+   --calibration-cache-file=[file_path]
+                                     Path to calibration cache file to use
+                                     for post training quantization
+   --embed-engine                    Whether to treat input file as a
+                                     serialized TensorRT engine and embed it
+                                     into a TorchScript module (device spec
+                                     must be provided)
+   --num-min-timing-iter=[num_iters] Number of minimization timing iterations
+                                     used to select kernels
+   --num-avg-timing-iters=[num_iters]
+                                     Number of averaging timing iterations
+                                     used to select kernels
+   --workspace-size=[workspace_size] Maximum size of workspace given to
+                                     TensorRT
+   --max-batch-size=[max_batch_size] Maximum batch size (must be >= 1 to be
+                                     set, 0 means not set)
+   -t[threshold],
+   --threshold=[threshold]           Maximum acceptable numerical deviation
+                                     from standard torchscript output
+                                     (default 2e-5)
+   --save-engine                     Instead of compiling a full a
+                                     TorchScript program, save the created
+                                     engine to the path specified as the
+                                     output path
+   input_file_path                   Path to input TorchScript file
+   output_file_path                  Path for compiled TorchScript (or
+                                     TensorRT engine) file
+   input_specs...                    Specs for inputs to engine, can either
+                                     be a single size or a range defined by
+                                     Min, Optimal, Max sizes, e.g.
+                                     "(N,..,C,H,W)"
+                                     "[(MIN_N,..,MIN_C,MIN_H,MIN_W);(OPT_N,..,OPT_C,OPT_H,OPT_W);(MAX_N,..,MAX_C,MAX_H,MAX_W)]".
+                                     Data Type and format can be specified by
+                                     adding an "@" followed by dtype and "%"
+                                     followed by format to the end of the
+                                     shape spec. e.g. "(3, 3, 32,
+                                     32)@f16%NHWC"
+   "--" can be used to terminate flag options and force all following
+   arguments to be treated as positional options
e.g.
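The input-spec grammar documented in the help text above (a single shape such as "(3, 3, 32, 32)", or a "[(MIN);(OPT);(MAX)]" dynamic range, optionally followed by "@dtype" and "%format") can be sketched as a small parser. This is an illustrative reimplementation of the documented syntax only — the function name `parse_input_spec` is hypothetical and this is not TRTorch's actual argument-parsing code.

```python
def parse_input_spec(spec: str) -> dict:
    """Parse a trtorchc-style input spec string into a dict.

    Accepts "(N,..,C,H,W)" or "[(MIN...);(OPT...);(MAX...)]",
    optionally suffixed with "@dtype" and "%format",
    e.g. "(3, 3, 32, 32)@f16%NHWC".
    """
    dtype = None
    fmt = None
    # Peel off the optional "%format" suffix, then the optional "@dtype".
    if "%" in spec:
        spec, fmt = spec.rsplit("%", 1)
    if "@" in spec:
        spec, dtype = spec.rsplit("@", 1)
    spec = spec.strip()

    def shape(s: str) -> tuple:
        # "(1, 3, 300, 300)" -> (1, 3, 300, 300)
        return tuple(int(d) for d in s.strip(" ()").split(","))

    if spec.startswith("["):
        # Dynamic range: "[(MIN...);(OPT...);(MAX...)]"
        parts = spec.strip("[]").split(";")
        if len(parts) != 3:
            raise ValueError("range spec needs min;opt;max shapes")
        result = {"min": shape(parts[0]),
                  "opt": shape(parts[1]),
                  "max": shape(parts[2])}
    else:
        # Static spec: a single shape
        result = {"shape": shape(spec)}
    result["dtype"] = dtype
    result["format"] = fmt
    return result
```

For example, the static spec from the help text parses to a fixed shape with `dtype="f16"` and `format="NHWC"`, while a range spec yields separate min/opt/max shapes.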
2 changes: 1 addition & 1 deletion docs/py_api/trtorch.html
@@ -1121,7 +1121,7 @@ <h2 id="functions">
<span class="sig-paren">
)
</span>
-→ &lt;torch._C.ScriptClass object at 0x7ff7f901d2f0&gt;
+→ &lt;torch._C.ScriptClass object at 0x7fe8b17b4630&gt;
<a class="headerlink" href="#trtorch.TensorRTCompileSpec" title="Permalink to this definition">
</a>
2 changes: 1 addition & 1 deletion docs/searchindex.js

Large diffs are not rendered by default.
