This module provides a guide and the implementation of a few custom operations in the Intel OpenVINO runtime using its Extensibility Mechanism.
There are several use cases where OpenVINO custom operations are applicable:
- There is an ONNX model which contains an operation not supported by OpenVINO.
- You have a PyTorch model, which could be converted to ONNX, with an operation not supported by OpenVINO.
- You want to replace a subgraph in an ONNX model with one custom operation which would be supported by OpenVINO.
More specifically, here we implement custom OpenVINO operations that add support for the following native PyTorch operation:
- torch.fft
And other custom operations introduced by third-party frameworks:
- calculate_grid and sparse_conv from Open3D
- complex_mul from DIRECT
You can find more information about how to create and use OpenVINO Extensions to map custom operations from a framework model representation to the OpenVINO representation in the OpenVINO Extensibility documentation.
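To give a sense of what such an extension looks like, below is a minimal sketch of a custom operation in the OpenVINO 2.0 C++ API. The operation name `MyCustomOp` and its identity-like behavior are illustrative only and are not part of this module; see the actual operation implementations in the directory mentioned below.

```cpp
#include <cstring>

#include <openvino/core/extension.hpp>
#include <openvino/core/op_extension.hpp>
#include <openvino/op/op.hpp>

// Minimal illustrative custom operation (hypothetical; behaves as identity).
class MyCustomOp : public ov::op::Op {
public:
    OPENVINO_OP("MyCustomOp", "extension");

    MyCustomOp() = default;
    explicit MyCustomOp(const ov::Output<ov::Node>& arg) : Op({arg}) {
        constructor_validate_and_infer_types();
    }

    // The output has the same element type and shape as the input.
    void validate_and_infer_types() override {
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override {
        return std::make_shared<MyCustomOp>(new_args.at(0));
    }

    bool visit_attributes(ov::AttributeVisitor&) override {
        return true;  // no attributes to serialize
    }

    // Reference implementation: simply copy the input tensor to the output.
    bool evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const override {
        outputs[0].set_shape(inputs[0].get_shape());
        std::memcpy(outputs[0].data(), inputs[0].data(), inputs[0].get_byte_size());
        return true;
    }
    bool has_evaluate() const override { return true; }
};

// Export the operation so that core.add_extension() can discover it
// when the shared library is loaded.
OPENVINO_CREATE_EXTENSIONS(std::vector<ov::Extension::Ptr>({
    std::make_shared<ov::OpExtension<MyCustomOp>>()
}));
```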
The C++ code implementing the custom operations is in the user_ie_extensions directory. You will have to build an "extension library" from this code so that it can be loaded at runtime. The steps below describe the build process:
- Install OpenVINO Runtime for C++.
- Build the library:

  ```bash
  cd user_ie_extensions
  mkdir build && cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release && cmake --build . --parallel 4
  ```
  If you need to build only some operations, specify them with the `-DCUSTOM_OPERATIONS` option:

  ```bash
  cmake .. -DCMAKE_BUILD_TYPE=Release -DCUSTOM_OPERATIONS="complex_mul;fft"
  ```
- Please note that an OpenCV installation is required to build the extension for the fft operation. Other extensions can still be built without OpenCV.
You can also build the extension library while building OpenVINO.
You can use the custom OpenVINO operations implementation by loading it into the OpenVINO `Core` object at runtime. Then, load the model from an ONNX file with the `read_model()` API. Here is how to do that in Python:
```python
from openvino.runtime import Core

# Create Core and register user extension
core = Core()
core.add_extension('/path/to/libuser_ov_extensions.so')

# Load model from .onnx file directly
model = core.read_model('model.onnx')
compiled_model = core.compile_model(model, 'CPU')
```
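The same can be done from a C++ application with the corresponding Core APIs; a minimal sketch, where the library and model paths are placeholders:

```cpp
#include <openvino/openvino.hpp>

int main() {
    // Create Core and register the user extension library
    ov::Core core;
    core.add_extension("/path/to/libuser_ov_extensions.so");

    // Load the model from the .onnx file directly and compile it for CPU
    auto model = core.read_model("model.onnx");
    auto compiled_model = core.compile_model(model, "CPU");
    return 0;
}
```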
You can also produce an OpenVINO IR model with Model Optimizer; use the extra `--extension` flag to specify a path to the custom extensions:

```bash
mo --input_model model.onnx --extension /path/to/libuser_ov_extensions.so
```

Note that the extension library must still be registered with `add_extension()` when the converted IR model is read at inference time.