ONNX Runtime is a cross-platform inferencing and training accelerator compatible with many popular ML/DNN frameworks. Check its GitHub repository for more information.

ONNX stands for Open Neural Network Exchange, which acts as an Intermediate Representation (IR) for ML/DNN models from many frameworks. Check its GitHub repository for more information.
MMCV includes custom operators for ONNX Runtime for two purposes:

- To verify the correctness of exported ONNX models in ONNX Runtime (see the verification sketch right after this list).
- To ease the deployment of ONNX models with custom operators from `mmcv.ops` in ONNX Runtime.
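For the first point, correctness can be checked by running the same input through the PyTorch model and through the exported ONNX model, then comparing the outputs. Below is a minimal sketch assuming a plain model without custom operators; the model choice, the file name `sample.onnx`, and the tolerances are illustrative, not prescribed by MMCV.

```python
import numpy as np
import onnxruntime as ort
import torch
import torchvision

# export a plain PyTorch model to ONNX ...
model = torchvision.models.resnet18(pretrained=False).eval()
x = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, x, 'sample.onnx',
                  input_names=['input'], opset_version=11)

# ... then compare PyTorch and ONNX Runtime outputs on the same input
with torch.no_grad():
    torch_out = model(x).numpy()
sess = ort.InferenceSession('sample.onnx')
ort_out = sess.run(None, {'input': x.numpy()})[0]
# tolerances here are illustrative, not an MMCV-mandated threshold
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-5)
```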
| Operator     | CPU | GPU | MMCV Releases |
| :----------- | :-: | :-: | :-----------: |
| SoftNMS      |  Y  |  N  |     1.2.3     |
| RoIAlign     |  Y  |  N  |     1.2.5     |
| NMS          |  Y  |  N  |     1.2.7     |
| grid_sampler |  Y  |  N  |     1.3.1     |
| CornerPool   |  Y  |  N  |     1.3.4     |
| cummax       |  Y  |  N  |    master     |
| cummin       |  Y  |  N  |    master     |
Please note that, for now, only the CPU version of onnxruntime>=1.5.1 on the Linux platform has been tested.
- Clone the repository

  ```bash
  git clone https://github.com/open-mmlab/mmcv.git
  ```
- Download `onnxruntime-linux-x64-1.5.1.tgz` from ONNX Runtime releases, extract it, expose `ONNXRUNTIME_DIR` and finally add the lib path to `LD_LIBRARY_PATH` as below:
  ```bash
  wget https://github.com/microsoft/onnxruntime/releases/download/v1.5.1/onnxruntime-linux-x64-1.5.1.tgz

  tar -zxvf onnxruntime-linux-x64-1.5.1.tgz
  cd onnxruntime-linux-x64-1.5.1
  export ONNXRUNTIME_DIR=$(pwd)
  export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
  ```
- Build MMCV with ONNX Runtime custom ops

  ```bash
  cd mmcv  # to MMCV root directory
  MMCV_WITH_OPS=1 MMCV_WITH_ORT=1 pip install -e .
  ```
- Install ONNX Runtime with pip

  ```bash
  pip install onnxruntime==1.5.1
  ```
Inference Demo
```python
import os

import numpy as np
import onnxruntime as ort

from mmcv.ops import get_onnxruntime_op_path

# path to the compiled MMCV custom-op library built above
ort_custom_op_path = get_onnxruntime_op_path()
assert os.path.exists(ort_custom_op_path)

session_options = ort.SessionOptions()
# make MMCV's custom operators visible to this session
session_options.register_custom_ops_library(ort_custom_op_path)

# exported ONNX model with custom operators
onnx_file = 'sample.onnx'
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)

sess = ort.InferenceSession(onnx_file, session_options)
onnx_results = sess.run(None, {'input': input_data})
```
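Note that the demo hard-codes the input name `input`; if the exported graph uses a different name, it can be queried from the session via the standard ONNX Runtime API:

```python
# look up the model's actual input name instead of hard-coding it
input_name = sess.get_inputs()[0].name
onnx_results = sess.run(None, {input_name: input_data})
```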
Before adding a new custom operator for ONNX Runtime in MMCV, make sure that:

- The custom operator is not included in the list of operators supported by ONNX Runtime.
- The custom operator can be exported to ONNX (a minimal exportability sketch follows this list).
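For the second point, one common way to make a custom PyTorch op exportable is to give its `autograd.Function` a `symbolic` static method that emits an ONNX node in a custom domain. Below is a minimal sketch with a toy op; the name `mmcv::MyRelu` is hypothetical and is not the real `soft_nms` symbolic:

```python
import torch


class MyRelu(torch.autograd.Function):
    """Toy op used only to illustrate exportability."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def symbolic(g, x):
        # emit a node in a custom domain; at inference time ONNX Runtime
        # resolves it through the registered custom-op library
        return g.op('mmcv::MyRelu', x)


class MyModel(torch.nn.Module):
    def forward(self, x):
        return MyRelu.apply(x)


torch.onnx.export(MyModel(), torch.randn(1, 4), 'my_relu.onnx',
                  opset_version=11)
```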
Take the custom operator `soft_nms` as an example.

- Add the header `soft_nms.h` to the ONNX Runtime include directory `mmcv/ops/csrc/onnxruntime/`.

- Add the source `soft_nms.cpp` to the ONNX Runtime source directory `mmcv/ops/csrc/onnxruntime/cpu/`.

- Register the `soft_nms` operator in `onnxruntime_register.cpp`:

  ```c++
  #include "soft_nms.h"

  SoftNmsOp c_SoftNmsOp;

  if (auto status = ortApi->CustomOpDomain_Add(domain, &c_SoftNmsOp)) {
    return status;
  }
  ```
- Add a unit test in `tests/test_ops/test_onnx.py`. Check here for examples (a rough test sketch also follows below).
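As a rough sketch of what such a test might look like; the wrapper module, input names, and tolerance are illustrative, so check the existing tests for the actual patterns:

```python
import numpy as np
import onnxruntime as ort
import torch

from mmcv.ops import get_onnxruntime_op_path, soft_nms


class SoftNmsWrapper(torch.nn.Module):
    """Thin wrapper so the op can be traced by torch.onnx.export."""

    def forward(self, boxes, scores):
        return soft_nms(boxes, scores, iou_threshold=0.3)


def test_soft_nms_onnx(tmp_path):
    boxes, scores = torch.rand(10, 4), torch.rand(10)
    pytorch_dets, _ = soft_nms(boxes, scores, iou_threshold=0.3)

    onnx_file = str(tmp_path / 'soft_nms.onnx')
    torch.onnx.export(SoftNmsWrapper(), (boxes, scores), onnx_file,
                      input_names=['boxes', 'scores'], opset_version=11)

    # run the exported model with the MMCV custom-op library loaded
    options = ort.SessionOptions()
    options.register_custom_ops_library(get_onnxruntime_op_path())
    sess = ort.InferenceSession(onnx_file, options)
    ort_dets = sess.run(None, {'boxes': boxes.numpy(),
                               'scores': scores.numpy()})[0]
    np.testing.assert_allclose(pytorch_dets.numpy(), ort_dets, atol=1e-5)
```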
Finally, we welcome PRs adding custom operators for ONNX Runtime to MMCV. 🤓
- "RuntimeError: tuple appears in op that does not forward tuples, unsupported kind:
prim::PythonOp
."- Note generally
cummax
orcummin
is exportable to ONNX as long as the torch version >= 1.5.0, sincetorch.cummax
is only supported with torch >= 1.5.0. But whencummax
orcummin
serves as an intermediate component whose outputs is used as inputs for another modules, it's expected that torch version must be >= 1.7.0. Otherwise the above error might arise, when running exported ONNX model with onnxruntime. - Solution: update the torch version to 1.7.0 or higher.
- Note generally
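To illustrate the note above, here is a sketch of exporting a model that uses `cummax` as an intermediate step. It assumes `register_extra_symbolics` from `mmcv.onnx` provides the `cummax` symbolic (per the table above, supported on master):

```python
import torch

from mmcv.onnx import register_extra_symbolics


class CumMaxModel(torch.nn.Module):
    def forward(self, x):
        # with torch < 1.7.0, feeding these outputs into further ops
        # can trigger the prim::PythonOp error described above
        values, indices = torch.cummax(x, dim=1)
        return values + 1, indices


register_extra_symbolics(opset=11)
torch.onnx.export(CumMaxModel(), torch.randn(2, 5), 'cummax.onnx',
                  opset_version=11)
```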