Nano : Update pytorch inference QuickStart with InferenceOptimizer #5899

Merged: 3 commits, Sep 26, 2022
docs/readthedocs/source/doc/Nano/QuickStart/pytorch_inference.md (93 changes: 54 additions & 39 deletions)
# BigDL-Nano PyTorch Inference Overview

BigDL-Nano provides several APIs that help users easily apply optimizations to inference pipelines to improve latency and throughput. Currently, performance acceleration is achieved by integrating extra runtimes as inference backend engines or by using quantization methods on full-precision trained models to reduce computation during inference. InferenceOptimizer (`bigdl.nano.pytorch.InferenceOptimizer`) provides the APIs for all optimizations that you need for inference.

For runtime acceleration, BigDL-Nano enables three kinds of runtime for users in `InferenceOptimizer.trace()`: ONNXRuntime, OpenVINO and JIT (TorchScript).

```eval_rst
.. warning::
   ``bigdl.nano.pytorch.Trainer.trace`` will be deprecated in a future release.

   Please use ``bigdl.nano.pytorch.InferenceOptimizer.trace`` instead.
```

For quantization, BigDL-Nano currently provides only post-training quantization in `InferenceOptimizer.quantize()` for users to run inference with models of 8-bit or 16-bit precision. Quantization-aware training is not available for now. Model conversion to 16-bit formats such as BF16 is now supported.

```eval_rst
.. warning::
   ``bigdl.nano.pytorch.Trainer.quantize`` will be deprecated in a future release.

   Please use ``bigdl.nano.pytorch.InferenceOptimizer.quantize`` instead.
```
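
In practice the migration is a drop-in rename of the entry point. The sketch below illustrates the before and after for a tracing call; ONNXRuntime is used only as an example, and the same swap applies to the other accelerators and to `quantize`:

```python
import torch
from torchvision.models.mobilenetv3 import mobilenet_v3_small
from bigdl.nano.pytorch import InferenceOptimizer

model = mobilenet_v3_small(num_classes=10)
x = torch.rand((2, 3, 224, 224))

# before (to be deprecated):
#   from bigdl.nano.pytorch import Trainer
#   ort_model = Trainer.trace(model, accelerator='onnxruntime', input_sample=x)

# after: same arguments, new entry point
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)
```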

Before you go ahead with these APIs, you have to make sure BigDL-Nano is correctly installed for PyTorch. If not, please follow [this](../Overview/nano.md) to set up your environment.
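For reference, a typical installation command looks like the one below; treat it as a sketch and check the linked guide for the extras and versions that match your environment:
```shell
pip install bigdl-nano[pytorch]
```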

## Runtime Acceleration
All available runtime accelerations are integrated in `InferenceOptimizer.trace(accelerator='onnxruntime'/'openvino'/'jit')` with different accelerator values. Let's take MobileNetV3 as an example model; here is a short script that you might have before applying any of BigDL-Nano's optimizations:
```python
from torchvision.models.mobilenetv3 import mobilenet_v3_small
import torch
from torch.utils.data.dataset import TensorDataset
from torch.utils.data.dataloader import DataLoader
from bigdl.nano.pytorch import InferenceOptimizer, Trainer

# step 1: create your model
model = mobilenet_v3_small(num_classes=10)

# step 2: prepare your data; the shapes below are only illustrative
x = torch.rand((10, 3, 256, 256))
y = torch.ones((10,), dtype=torch.long)
dataloader = DataLoader(TensorDataset(x, y), batch_size=2)

# step 3: (optional) train your model, e.g. with the BigDL-Nano Trainer
trainer = Trainer(max_epochs=1)
```

When you're ready, you can simply append the following part to enable your ONNXRuntime acceleration:
```python
# step 4: trace your model as an ONNXRuntime model
# if you have run `trainer.fit` before trace, then argument `input_sample` is not required.
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)

# step 5: use returned model for transparent acceleration
# The usage is almost the same with any PyTorch module
y_hat = ort_model(x)
```

The OpenVINO usage is quite similar to ONNXRuntime; the following usage is for OpenVINO:
```python
# step 4: trace your model as an OpenVINO model
# if you have run `trainer.fit` before trace, then argument `input_sample` is not required.
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)

# step 5: use returned model for transparent acceleration
# The usage is almost the same with any PyTorch module
y_hat = ov_model(x)
trainer.predict(ov_model, dataloader)
```
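
Besides ONNXRuntime and OpenVINO, `InferenceOptimizer.trace()` also accepts `accelerator='jit'` to obtain a TorchScript-accelerated module. Here is a minimal sketch of that path, following the same pattern as above and reusing `model` and `x` from the running example:
```python
# trace your model as a TorchScript (JIT) module
# as before, `input_sample` is not required if `trainer.fit` has been run
jit_model = InferenceOptimizer.trace(model, accelerator='jit', input_sample=x)

# use the returned model like any PyTorch module
y_hat = jit_model(x)
```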

## Quantization
Quantization is widely used to compress models to a lower precision, which not only reduces the model size but also accelerates inference. BigDL-Nano provides the `InferenceOptimizer.quantize()` API for users to quickly obtain a quantized model with accuracy control by specifying a few arguments. Intel Neural Compressor (INC) and the Post-training Optimization Tool (POT) from the OpenVINO toolkit are enabled as options. Runtime acceleration is also included directly in the quantization pipeline when using `accelerator='onnxruntime'/'openvino'`, so you don't have to run `InferenceOptimizer.trace` before quantization.

To use INC as your quantization engine, set `accelerator` to `None` or `'onnxruntime'`. Setting `accelerator='openvino'` means quantization is done with OpenVINO POT instead.

By default, `InferenceOptimizer.quantize()` doesn't search the tuning space and returns the fully-quantized model without considering the accuracy drop. If you need to search the quantization tuning space for a model with accuracy control, you'll have to specify a few arguments to define the tuning space. More instructions are in [Quantization with Accuracy Control](#quantization-with-accuracy-control).
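
As mentioned earlier, 16-bit (BF16) conversion is exposed through the same API. Here is a minimal sketch, assuming a BigDL-Nano release where `precision='bf16'` is accepted by `InferenceOptimizer.quantize()`, and reusing `model` and `x` from the running example:
```python
# convert the model to BF16 mixed precision; no calibration dataloader is needed
bf16_model = InferenceOptimizer.quantize(model, precision='bf16')

# run prediction as usual
y_hat = bf16_model(x)
```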

### Quantization using Intel Neural Compressor
By default, Intel Neural Compressor is not installed with BigDL-Nano. So if you decide to use it as your quantization backend, you'll need to install it first:
```shell
pip install neural-compressor==1.11.0
```
**Quantization without extra accelerator**

Without an extra accelerator, `InferenceOptimizer.quantize()` returns a PyTorch module with the desired precision and accuracy. Following the example in [Runtime Acceleration](#runtime-acceleration), you can add quantization as below:
```python
q_model = InferenceOptimizer.quantize(model, calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = q_model(x)

```

This is the most basic usage: the model is quantized with defaults (INT8 precision), without searching the tuning space for accuracy control.

**Quantization with ONNXRuntime accelerator**

With the ONNXRuntime accelerator, `InferenceOptimizer.quantize()` will return a model with compressed precision that runs inference in the ONNXRuntime engine. When using ONNXRuntime as the backend, you also need to install onnxruntime-extensions as a dependency of INC, as well as the dependencies required in [ONNXRuntime Acceleration](#onnxruntime-acceleration):
```shell
pip install onnx onnxruntime onnxruntime-extensions
```
Still taking the example in [Runtime Acceleration](pytorch_inference.md#runtime-acceleration), you can add quantization as below:
```python
ort_q_model = InferenceOptimizer.quantize(model, accelerator='onnxruntime', calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = ort_q_model(x)

trainer.predict(ort_q_model, dataloader)
```
Using `accelerator='onnxruntime'` is actually equivalent to first converting the model from PyTorch to ONNX and then quantizing the converted ONNX model:
```python
ort_model = InferenceOptimizer.trace(model, accelerator='onnxruntime', input_sample=x)
ort_q_model = InferenceOptimizer.quantize(ort_model, accelerator='onnxruntime', calib_dataloader=dataloader)

# run inference with transparent acceleration
y_hat = ort_q_model(x)
```

### Quantization using Post-training Optimization Tools
To use POT as the quantization engine, the OpenVINO development tools need to be installed first:
```shell
pip install openvino-dev
```
Take the example in [Runtime Acceleration](#runtime-acceleration), and add quantization:
```python
ov_q_model = InferenceOptimizer.quantize(model, accelerator='openvino', calib_dataloader=dataloader)
# run simple prediction with transparent acceleration
y_hat = ov_q_model(x)

trainer.predict(ov_q_model, dataloader)
```
As with the ONNXRuntime accelerator, this is equivalent to first converting the model from PyTorch to OpenVINO and then quantizing the converted OpenVINO model:
```python
ov_model = InferenceOptimizer.trace(model, accelerator='openvino', input_sample=x)
ov_q_model = InferenceOptimizer.quantize(ov_model, accelerator='openvino', calib_dataloader=dataloader)

# run inference with transparent acceleration
y_hat = ov_q_model(x)
```

### Quantization with Accuracy Control
There are a few arguments required only by INC, and you should not specify or modify any of them if you use `accelerator='openvino'` (i.e. the POT backend).
Here is an example of using INC with accuracy control. It will search for a model within a 1% accuracy drop, using at most 10 trials.
```python
from torchmetrics.classification import Accuracy
InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator=None,
                            calib_dataloader=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            method='fx',
                            tuning_strategy='bayesian',
                            timeout=0,
                            max_trials=10)
```
**Accuracy Control with POT**
Similar to INC, we can run quantization like:
```python
from torchmetrics.classification import Accuracy
InferenceOptimizer.quantize(model,
                            precision='int8',
                            accelerator='openvino',
                            calib_dataloader=dataloader,
                            metric=Accuracy(),
                            accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
                            approach='static',
                            max_trials=10)
```

**Contributor:** Other than these changes, we may add a warning note to tell our previous users (if any) that the Trainer API will be deprecated in the future.

**Contributor (Author):** Done.