[Python API] add new api (openvinotoolkit#8149)
* Bind exec core ov (#50)

* Output const node python tests (#52)

* add python bindings tests for Output<const ov::Node>

* add proper tests

* add new line

* rename ie_version to version

* Pszmel/bind infer request (#51)

* remove set_batch, get_blob and set_blob

* update InferRequest class

* change InferenceEngine::InferRequest to ov::runtime::InferRequest

* update set_callback body

* update bindings to reflect ov::runtime::InferRequest

* bind set_input_tensor and get_input_tensor

* style fix

* clean ie_infer_queue.cpp

* Bind exec core ov (#50)

* bind core, exec_net classes

* rm unused function

* add new line

* rename ie_infer_request -> infer_request

* update imports

* update __init__.py

* update ie_api.py

* Replace old containers with the new one

* create impl for create_infer_request

* comment out infer_queue to avoid errors with old infer_request

* update infer_request bind to reflect new infer_request api

* comment out input_info from ie_network to avoid errors with old containers

* Register new containers and comment out InferQueue

* update infer request tests

* style fix

* remove unused imports

* remove unused imports and 2 methods

* add tests to cover all new methods from infer_request

* style fix

* add test

* remove registration of InferResults

* update name of exception_ptr parameter

* update the loops that iterate through inputs and outputs

* clean setCustomCallbacks

* style fix

* add Tensor import

* style fix

* update infer and normalize_inputs

* style fix

* rename startTime and endTime

* Create test for mixed keys as infer arguments

* update infer function

* update return type of infer

Co-authored-by: Bartek Szmelczynski <[email protected]>

* fix get_version

* fix opaque issue

* some cosmetic changes

* fix codestyle in tests

* make tests green

* Extend python InferRequest

* Extend python Function

* Change return value of infer call

* Fix missing precisions conversions in CPU plugin

* Rework of runtime for new tests

* Fixed onnx reading in python tests

* Edit compatibility tests

* Edit tests

* Add FLOAT_LIKE xfails

* [Python API] bind ProfilingInfo (#55)

* bind ProfilingInfo

* Add tests

* Fix code style

* Add property

* fix codestyle

* Infer new request method (#56)

* fix conflicts, add infer_new_request function

* remove redundant functions, fix style

* revert the unwanted changes

* revert removal of the Blob

* revert removal of isTblob

* add add_extension from path

* codestyle

* fix win build

* add inputs-outputs to function

* Hot-fix CPU plugin with precision

* fix start_async

* add performance hint to time infer (openvinotoolkit#8480)

* Updated common migration pipeline (openvinotoolkit#8176)

* Updated common migration pipeline

* Fixed merge issue

* Added new model and extended example

* Fixed typo

* Added v10-v11 comparison

* Avoid redundant graph nodes scans (openvinotoolkit#8415)

* Refactor work with env variables (openvinotoolkit#8208)

* del MO_ROOT

* del MO_ROOT from common_utils.py

* add MO_PATH to common_utils.py

* change mo_path

* [IE Sample Scripts] Use cmake to build samples (openvinotoolkit#8442)

* Use cmake to build samples

* Add the option to set custom build output folder

* Remove opset8 from compatibility ngraph python API (openvinotoolkit#8452)

* [GPU] OneDNN gpu submodule update to version 2.5 (openvinotoolkit#8449)

* [GPU] OneDNN gpu submodule update to version 2.5

* [GPU] Updated onednn submodule and added layout optimizer fix

* Install rules for static libraries case (openvinotoolkit#8384)

* Proper cmake install for static libraries case

* Added an ability to skip template plugin

* Added install rules for VPU / GPU

* Install more libraries

* Fixed absolute TBB include paths

* Disable GNA

* Fixed issue with linker

* Some fixes

* Fixed linkage issues in tests

* Disabled some tests

* Updated CI pipelines

* Fixed Windows linkage

* Fixed custom_opset test for static case

* Fixed CVS-70313

* Continue on error

* Fixed clang-format

* Try to fix Windows linker

* Fixed compilation

* Disable samples

* Fixed samples build with THREADING=SEQ

* Fixed link error on Windows

* Fixed ieFuncTests

* Added static Azure CI

* Revert "Fixed link error on Windows"

This reverts commit 78cca36.

* Merge static and dynamic linux pipelines

* Fixed Azure

* fix codestyle

Co-authored-by: Bartek Szmelczynski <[email protected]>
Co-authored-by: Piotr Szmelczynski <[email protected]>
Co-authored-by: jiwaszki <[email protected]>
Co-authored-by: Alexey Lebedev <[email protected]>
Co-authored-by: Victor Kuznetsov <[email protected]>
Co-authored-by: Ilya Churaev <[email protected]>
Co-authored-by: Tomasz Jankowski <[email protected]>
Co-authored-by: Dmitry Pigasin <[email protected]>
Co-authored-by: Artur Kulikowski <[email protected]>
Co-authored-by: Ilya Znamenskiy <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
12 people authored and openvino-dev-samples committed Nov 24, 2021
1 parent f5add79 commit 47db594
Showing 48 changed files with 1,797 additions and 1,180 deletions.
@@ -305,7 +305,7 @@ InferenceEngine::Blob::Ptr MKLDNNPlugin::MKLDNNInferRequest::GetBlob(const std::
                                                             desc.getShape().getRank()))
                           : MemoryDescUtils::convertToTensorDesc(desc);
         const auto &tensorDesc = data->getTensorDesc();
-        if (expectedTensorDesc.getPrecision() != tensorDesc.getPrecision()) {
+        if (expectedTensorDesc.getPrecision() != normalizeToSupportedPrecision(tensorDesc.getPrecision())) {
             IE_THROW(ParameterMismatch)
                 << "Network input and output use the same name: " << name << " but expect blobs with different precision: "
                 << tensorDesc.getPrecision() << " for input and " << expectedTensorDesc.getPrecision()
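This relaxation matters for the FP64 support added below: a user blob whose precision merely normalizes to the expected one (FP64 is lowered to FP32, see the cpu_utils.hpp hunk further down) no longer trips the shared input/output name check. A minimal Python-side sketch, assuming an already-loaded exec_net; the input name "data" and the shape are hypothetical:

    import numpy as np
    from openvino.pyopenvino import ExecutableNetwork  # bound elsewhere in this PR

    def infer_fp64(exec_net: ExecutableNetwork) -> list:
        # float64 data used to raise ParameterMismatch; after this change the
        # CPU plugin normalizes FP64 to FP32 before comparing precisions.
        inputs = {"data": np.ones((1, 3, 224, 224), dtype=np.float64)}
        return exec_net.infer_new_request(inputs)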
5 changes: 3 additions & 2 deletions inference-engine/src/mkldnn_plugin/mkldnn_plugin.cpp
@@ -136,7 +136,6 @@ static void TransformationUpToCPUSpecificOpSet(std::shared_ptr<ngraph::Function>
         manager.register_pass<ngraph::pass::DisableConvertConstantFoldingOnConstPath>(
             std::vector<ngraph::element::Type>{ ngraph::element::i8, ngraph::element::u8, ngraph::element::i4, ngraph::element::u4 });
     }
-
     auto get_convert_precisions = []() {
         precisions_array array = {
             {ngraph::element::i64, ngraph::element::i32},
@@ -443,8 +442,10 @@ Engine::LoadExeNetworkImpl(const InferenceEngine::CNNNetwork &network, const std
     InferenceEngine::InputsDataMap _networkInputs = network.getInputsInfo();
     for (const auto &ii : _networkInputs) {
         auto input_precision = ii.second->getPrecision();
-        if (input_precision != InferenceEngine::Precision::FP32 &&
+        if (input_precision != InferenceEngine::Precision::FP64 &&
+            input_precision != InferenceEngine::Precision::FP32 &&
             input_precision != InferenceEngine::Precision::I32 &&
+            input_precision != InferenceEngine::Precision::U32 &&
             input_precision != InferenceEngine::Precision::U16 &&
             input_precision != InferenceEngine::Precision::I16 &&
             input_precision != InferenceEngine::Precision::I8 &&
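The chained != comparisons read as a whitelist: LoadExeNetworkImpl rejects any input precision outside the supported set, and this change admits FP64 and U32. Expressed as a sketch (the condition is truncated by the diff, so the real set is larger):

    def cpu_input_precision_ok(precision: str) -> bool:
        # Only the precisions visible in this hunk; more members follow in
        # the truncated tail of the original condition.
        return precision in {"FP64", "FP32", "I32", "U32", "U16", "I16", "I8"}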
@@ -103,7 +103,13 @@ void cpu_convert(const void *srcPtr, void *dstPtr, Precision srcPrc, Precision d
         MKLDNN_CVT(BF16, I64), MKLDNN_CVT(BF16, FP32), MKLDNN_CVT(BF16, BOOL),
         MKLDNN_CVT(BOOL, U8), MKLDNN_CVT(BOOL, I8), MKLDNN_CVT(BOOL, U16),
         MKLDNN_CVT(BOOL, I16), MKLDNN_CVT(BOOL, I32), MKLDNN_CVT(BOOL, U64),
-        MKLDNN_CVT(BOOL, I64), MKLDNN_CVT(BOOL, FP32), MKLDNN_CVT(BOOL, BF16));
+        MKLDNN_CVT(BOOL, I64), MKLDNN_CVT(BOOL, FP32), MKLDNN_CVT(BOOL, BF16),
+        MKLDNN_CVT(FP64, U8), MKLDNN_CVT(FP64, I8), MKLDNN_CVT(FP64, U16),
+        MKLDNN_CVT(FP64, I16), MKLDNN_CVT(FP64, I32), MKLDNN_CVT(FP64, U64),
+        MKLDNN_CVT(FP64, I64), MKLDNN_CVT(FP64, FP32), MKLDNN_CVT(FP64, BF16), MKLDNN_CVT(FP64, BOOL),
+        MKLDNN_CVT(U32, U8), MKLDNN_CVT(U32, I8), MKLDNN_CVT(U32, U16),
+        MKLDNN_CVT(U32, I16), MKLDNN_CVT(U32, I32), MKLDNN_CVT(U32, U64),
+        MKLDNN_CVT(U32, I64), MKLDNN_CVT(U32, FP32), MKLDNN_CVT(U32, BF16), MKLDNN_CVT(U32, BOOL));

     if (!ctx.converted)
         IE_THROW() << "cpu_convert can't convert from: " << srcPrc << " precision to: " << dstPrc;
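Each MKLDNN_CVT(SRC, DST) entry registers one typed copy kernel with cpu_convert's dispatch table; the two new rows give FP64 and U32 sources a path to every other supported precision. In numpy terms, the new FP64 row behaves roughly like:

    import numpy as np

    src = np.arange(4, dtype=np.float64)
    for dst in (np.uint8, np.int8, np.uint16, np.int16, np.int32,
                np.uint64, np.int64, np.float32, np.bool_):
        converted = src.astype(dst)  # analogue of MKLDNN_CVT(FP64, <dst>)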
5 changes: 5 additions & 0 deletions inference-engine/src/mkldnn_plugin/utils/cpu_utils.hpp
@@ -72,12 +72,17 @@ inline InferenceEngine::Precision normalizeToSupportedPrecision(InferenceEngine:
         case InferenceEngine::Precision::FP32: {
             break;
         }
+        case InferenceEngine::Precision::FP64: {
+            precision = InferenceEngine::Precision::FP32;
+            break;
+        }
         case InferenceEngine::Precision::BOOL: {
             precision = InferenceEngine::Precision::U8;
             break;
         }
         case InferenceEngine::Precision::U16:
         case InferenceEngine::Precision::I16:
+        case InferenceEngine::Precision::U32:
         case InferenceEngine::Precision::I64:
         case InferenceEngine::Precision::U64: {
             precision = InferenceEngine::Precision::I32;
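Collected from the switch above, the normalization rules now look like this (a sketch of only the cases visible in the hunk; the truncated function handles more):

    NORMALIZE_PRECISION = {
        "FP32": "FP32",   # kept as-is
        "FP64": "FP32",   # new in this commit
        "BOOL": "U8",
        "U16": "I32",
        "I16": "I32",
        "U32": "I32",     # new in this commit
        "I64": "I32",
        "U64": "I32",
    }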
16 changes: 8 additions & 8 deletions runtime/bindings/python/src/openvino/__init__.py
@@ -15,9 +15,10 @@

 from openvino.ie_api import BlobWrapper
 from openvino.ie_api import infer
-from openvino.ie_api import async_infer
-from openvino.ie_api import get_result
+from openvino.ie_api import start_async
 from openvino.ie_api import blob_from_file
+from openvino.ie_api import tensor_from_file
+from openvino.ie_api import infer_new_request

 from openvino.impl import Dimension
 from openvino.impl import Function
@@ -35,8 +36,7 @@
 from openvino.pyopenvino import DataPtr
 from openvino.pyopenvino import TensorDesc
 from openvino.pyopenvino import get_version
-from openvino.pyopenvino import StatusCode
-from openvino.pyopenvino import InferQueue
+#from openvino.pyopenvino import InferQueue
 from openvino.pyopenvino import InferRequest  # TODO: move to ie_api?
 from openvino.pyopenvino import Blob
 from openvino.pyopenvino import PreProcessInfo
@@ -45,6 +45,7 @@
 from openvino.pyopenvino import ColorFormat
 from openvino.pyopenvino import PreProcessChannel
 from openvino.pyopenvino import Tensor
+from openvino.pyopenvino import ProfilingInfo

 from openvino import opset1
 from openvino import opset2
@@ -78,10 +79,9 @@
 # this class will be removed
 Blob = BlobWrapper
 # Patching ExecutableNetwork
-ExecutableNetwork.infer = infer
+ExecutableNetwork.infer_new_request = infer_new_request
 # Patching InferRequest
 InferRequest.infer = infer
-InferRequest.async_infer = async_infer
-InferRequest.get_result = get_result
+InferRequest.start_async = start_async
 # Patching InferQueue
-InferQueue.async_infer = async_infer
+#InferQueue.async_infer = async_infer
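With these assignments the free functions from ie_api.py become methods on the bound classes. A minimal sketch of the resulting surface, assuming exec_net (ExecutableNetwork) and request (InferRequest) objects already exist, since their construction is outside this diff; the input name and shape are hypothetical:

    import numpy as np

    def run(exec_net, request) -> list:
        inputs = {"data": np.zeros((1, 3, 32, 32), dtype=np.float32)}
        outputs = exec_net.infer_new_request(inputs)  # one-shot infer on a fresh request
        request.start_async(inputs, userdata=None)    # async path replacing async_infer
        return outputs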
33 changes: 23 additions & 10 deletions runtime/bindings/python/src/openvino/ie_api.py
@@ -2,6 +2,8 @@
 # SPDX-License-Identifier: Apache-2.0

 import numpy as np
+import copy
+from typing import List

 from openvino.pyopenvino import TBlobFloat32
 from openvino.pyopenvino import TBlobFloat64
@@ -15,6 +17,8 @@
 from openvino.pyopenvino import TBlobUint8
 from openvino.pyopenvino import TensorDesc
 from openvino.pyopenvino import InferRequest
+from openvino.pyopenvino import ExecutableNetwork
+from openvino.pyopenvino import Tensor


 precision_map = {"FP32": np.float32,
@@ -35,22 +39,26 @@

 def normalize_inputs(py_dict: dict) -> dict:
     """Normalize a dictionary of inputs to contiguous numpy arrays."""
-    return {k: (np.ascontiguousarray(v) if isinstance(v, np.ndarray) else v)
+    return {k: (Tensor(v) if isinstance(v, np.ndarray) else v)
             for k, v in py_dict.items()}

 # flake8: noqa: D102
-def infer(request: InferRequest, inputs: dict = None) -> dict:
-    results = request._infer(inputs=normalize_inputs(inputs if inputs is not None else {}))
-    return {name: (blob.buffer.copy()) for name, blob in results.items()}
+def infer(request: InferRequest, inputs: dict = {}) -> np.ndarray:
+    res = request._infer(inputs=normalize_inputs(inputs))
+    # Required to return list since np.ndarray forces all of tensors data to match in
+    # dimensions. This results in errors when running ops like variadic split.
+    return [copy.deepcopy(tensor.data) for tensor in res]

 # flake8: noqa: D102
-def get_result(request: InferRequest, name: str) -> np.ndarray:
-    return request.get_blob(name).buffer.copy()
+def infer_new_request(exec_net: ExecutableNetwork, inputs: dict = None) -> List[np.ndarray]:
+    res = exec_net._infer_new_request(inputs=normalize_inputs(inputs if inputs is not None else {}))
+    # Required to return list since np.ndarray forces all of tensors data to match in
+    # dimensions. This results in errors when running ops like variadic split.
+    return [copy.deepcopy(tensor.data) for tensor in res]

 # flake8: noqa: D102
-def async_infer(request: InferRequest, inputs: dict = None, userdata=None) -> None:  # type: ignore
-    request._async_infer(inputs=normalize_inputs(inputs if inputs is not None else {}),
-                         userdata=userdata)
+def start_async(request: InferRequest, inputs: dict = {}, userdata: dict = None) -> None:  # type: ignore
+    request._start_async(inputs=normalize_inputs(inputs), userdata=userdata)

 # flake8: noqa: C901
 # Dispatch Blob types on Python side.
@@ -112,3 +120,8 @@ def blob_from_file(path_to_bin_file: str) -> BlobWrapper:
     array = np.fromfile(path_to_bin_file, dtype=np.uint8)
     tensor_desc = TensorDesc("U8", array.shape, "C")
     return BlobWrapper(tensor_desc, array)
+
+# flake8: noqa: D102
+def tensor_from_file(path: str) -> Tensor:
+    """The data will be read with dtype of uint8"""
+    return Tensor(np.fromfile(path, dtype=np.uint8))
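Note that infer and infer_new_request now return a plain list of per-output arrays (deep-copied out of the runtime tensors) rather than a name-to-blob dict, and tensor_from_file is the Tensor counterpart of blob_from_file. A short sketch, with a hypothetical file path:

    import numpy as np
    from openvino import tensor_from_file

    t = tensor_from_file("input.bin")  # raw bytes read back as uint8
    assert t.data.dtype == np.uint8
    # results = request.infer({"data": t})  # -> [np.ndarray, ...], one per output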
1 change: 1 addition & 0 deletions runtime/bindings/python/src/openvino/impl/__init__.py
@@ -49,4 +49,5 @@
 from openvino.pyopenvino import Coordinate
 from openvino.pyopenvino import Output
 from openvino.pyopenvino import Layout
+from openvino.pyopenvino import ConstOutput
 from openvino.pyopenvino import util
42 changes: 42 additions & 0 deletions runtime/bindings/python/src/pyopenvino/core/common.cpp
@@ -211,6 +211,48 @@ bool is_TBlob(const py::handle& blob) {
     }
 }

+const ov::runtime::Tensor& cast_to_tensor(const py::handle& tensor) {
+    return tensor.cast<const ov::runtime::Tensor&>();
+}
+
+const Containers::TensorNameMap cast_to_tensor_name_map(const py::dict& inputs) {
+    Containers::TensorNameMap result_map;
+    for (auto&& input : inputs) {
+        std::string name;
+        if (py::isinstance<py::str>(input.first)) {
+            name = input.first.cast<std::string>();
+        } else {
+            throw py::type_error("incompatible function arguments!");
+        }
+        if (py::isinstance<ov::runtime::Tensor>(input.second)) {
+            auto tensor = Common::cast_to_tensor(input.second);
+            result_map[name] = tensor;
+        } else {
+            throw ov::Exception("Unable to cast tensor " + name + "!");
+        }
+    }
+    return result_map;
+}
+
+const Containers::TensorIndexMap cast_to_tensor_index_map(const py::dict& inputs) {
+    Containers::TensorIndexMap result_map;
+    for (auto&& input : inputs) {
+        int idx;
+        if (py::isinstance<py::int_>(input.first)) {
+            idx = input.first.cast<int>();
+        } else {
+            throw py::type_error("incompatible function arguments!");
+        }
+        if (py::isinstance<ov::runtime::Tensor>(input.second)) {
+            auto tensor = Common::cast_to_tensor(input.second);
+            result_map[idx] = tensor;
+        } else {
+            throw ov::Exception("Unable to cast tensor " + std::to_string(idx) + "!");
+        }
+    }
+    return result_map;
+}
+
 const std::shared_ptr<InferenceEngine::Blob> cast_to_blob(const py::handle& blob) {
     if (py::isinstance<InferenceEngine::TBlob<float>>(blob)) {
         return blob.cast<const std::shared_ptr<InferenceEngine::TBlob<float>>&>();
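These two helpers are what let the Python-side infer calls accept either name-keyed or index-keyed dictionaries (the commit message above mentions a test for mixed keys as infer arguments). The contract they enforce, sketched from the Python side:

    import numpy as np
    from openvino.pyopenvino import Tensor

    t = Tensor(np.zeros((1, 3), dtype=np.float32))
    by_name = {"data": t}   # str keys go through cast_to_tensor_name_map
    by_index = {0: t}       # int keys go through cast_to_tensor_index_map
    # Any other key type raises TypeError ("incompatible function arguments!");
    # a non-Tensor value is rejected with an ov::Exception-based error.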
8 changes: 8 additions & 0 deletions runtime/bindings/python/src/pyopenvino/core/common.hpp
@@ -14,6 +14,8 @@
 #include <string>
 #include "Python.h"
 #include "ie_common.h"
+#include "openvino/runtime/tensor.hpp"
+#include "pyopenvino/core/containers.hpp"

 namespace py = pybind11;

@@ -48,6 +50,12 @@ namespace Common

 const std::shared_ptr<InferenceEngine::Blob> cast_to_blob(const py::handle& blob);

+const Containers::TensorNameMap cast_to_tensor_name_map(const py::dict& inputs);
+
+const Containers::TensorIndexMap cast_to_tensor_index_map(const py::dict& inputs);
+
+const ov::runtime::Tensor& cast_to_tensor(const py::handle& tensor);
+
 void blob_from_numpy(const py::handle& _arr, InferenceEngine::Blob::Ptr &blob);

 void set_request_blobs(InferenceEngine::InferRequest& request, const py::dict& dictonary);
40 changes: 6 additions & 34 deletions runtime/bindings/python/src/pyopenvino/core/containers.cpp
@@ -1,51 +1,23 @@
-
 // Copyright (C) 2021 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //

 #include "pyopenvino/core/containers.hpp"

 #include <pybind11/stl.h>
 #include <pybind11/stl_bind.h>

-PYBIND11_MAKE_OPAQUE(Containers::PyInputsDataMap);
-PYBIND11_MAKE_OPAQUE(Containers::PyConstInputsDataMap);
-PYBIND11_MAKE_OPAQUE(Containers::PyOutputsDataMap);
-PYBIND11_MAKE_OPAQUE(Containers::PyResults);
+PYBIND11_MAKE_OPAQUE(Containers::TensorIndexMap);
+PYBIND11_MAKE_OPAQUE(Containers::TensorNameMap);

 namespace py = pybind11;

 namespace Containers {

-void regclass_PyInputsDataMap(py::module m) {
-    auto py_inputs_data_map = py::bind_map<PyInputsDataMap>(m, "PyInputsDataMap");
-
-    py_inputs_data_map.def("keys", [](PyInputsDataMap& self) {
-        return py::make_key_iterator(self.begin(), self.end());
-    });
-}
-
-void regclass_PyConstInputsDataMap(py::module m) {
-    auto py_const_inputs_data_map = py::bind_map<PyConstInputsDataMap>(m, "PyConstInputsDataMap");
-
-    py_const_inputs_data_map.def("keys", [](PyConstInputsDataMap& self) {
-        return py::make_key_iterator(self.begin(), self.end());
-    });
+void regclass_TensorIndexMap(py::module m) {
+    py::bind_map<TensorIndexMap>(m, "TensorIndexMap");
 }

-void regclass_PyOutputsDataMap(py::module m) {
-    auto py_outputs_data_map = py::bind_map<PyOutputsDataMap>(m, "PyOutputsDataMap");
-
-    py_outputs_data_map.def("keys", [](PyOutputsDataMap& self) {
-        return py::make_key_iterator(self.begin(), self.end());
-    });
-}
-
-void regclass_PyResults(py::module m) {
-    auto py_results = py::bind_map<PyResults>(m, "PyResults");
-
-    py_results.def("keys", [](PyResults& self) {
-        return py::make_key_iterator(self.begin(), self.end());
-    });
+void regclass_TensorNameMap(py::module m) {
+    py::bind_map<TensorNameMap>(m, "TensorNameMap");
 }
 } // namespace Containers
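PYBIND11_MAKE_OPAQUE stops pybind11 from copy-converting these std::map instantiations into Python dicts at every call boundary, and py::bind_map then exposes them as mutable, dict-like classes. Assuming the regclass_* registrars run during module init (per the commit message's "Register new containers"), the Python-visible effect is roughly:

    import numpy as np
    from openvino.pyopenvino import Tensor, TensorNameMap  # class name per bind_map above

    m = TensorNameMap()                 # dict-like view over the C++ std::map
    m["data"] = Tensor(np.zeros((2, 2), dtype=np.float32))
    print(len(m), list(m))              # bind_map provides __len__, iteration, indexing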
31 changes: 12 additions & 19 deletions runtime/bindings/python/src/pyopenvino/core/containers.hpp
@@ -4,28 +4,21 @@

 #pragma once

-#include <pybind11/pybind11.h>
 #include <string>
-#include <ie_input_info.hpp>
-#include "ie_data.h"
-#include "ie_blob.h"

-namespace py = pybind11;
+#include <map>
+#include <vector>

-namespace Containers {
-    using PyInputsDataMap = std::map<std::string, std::shared_ptr<InferenceEngine::InputInfo>>;
+#include <pybind11/pybind11.h>

-    using PyConstInputsDataMap =
-        std::map<std::string, std::shared_ptr<const InferenceEngine::InputInfo>>;
+#include <openvino/runtime/tensor.hpp>

-    using PyOutputsDataMap =
-        std::map<std::string, std::shared_ptr<const InferenceEngine::Data>>;
+namespace py = pybind11;

-    using PyResults =
-        std::map<std::string, std::shared_ptr<const InferenceEngine::Blob>>;
+namespace Containers {
+    using TensorIndexMap = std::map<size_t, ov::runtime::Tensor>;
+    using TensorNameMap = std::map<std::string, ov::runtime::Tensor>;
+    using InferResults = std::vector<ov::runtime::Tensor>;

-    void regclass_PyInputsDataMap(py::module m);
-    void regclass_PyConstInputsDataMap(py::module m);
-    void regclass_PyOutputsDataMap(py::module m);
-    void regclass_PyResults(py::module m);
-}
+    void regclass_TensorIndexMap(py::module m);
+    void regclass_TensorNameMap(py::module m);
+}