Commit
Merge remote-tracking branch 'upstream/master' into find_python3
ilya-lavrenov committed Sep 20, 2023
2 parents e2893c8 + 604aed1 commit e896845
Showing 52 changed files with 611 additions and 307 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/linux_onnxruntime.yml
@@ -51,7 +51,7 @@ jobs:
INSTALL_DIR: ${{ github.workspace }}/install/openvino
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'true'
@@ -96,7 +96,7 @@ jobs:
#

- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores

- name: CMake configure
33 changes: 5 additions & 28 deletions docs/dev/build_windows.md
@@ -25,29 +25,17 @@ Supported configurations:
```sh
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino
git submodule update --init --recursive
```
(Extra for WoA) To build on Windows on ARM with ARM plugin:
```sh
git clone https://github.com/openvinotoolkit/openvino_contrib.git
cd openvino_contrib
git submodule update --init --recursive
git submodule update --init
```

2. Create build directory:
```sh
mkdir build && cd build
```
3. In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution.
3. In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution:

On Windows x86 64-bits:
```sh
cmake -G "Visual Studio 16 2019" -DCMAKE_BUILD_TYPE=Release <openvino>
```

On Windows on ARM for ARM64 architecture:
```sh
cmake -G "Visual Studio 16 2019" -DOPENVINO_EXTRA_MODULES=<openvino_contrib>/modules/arm_plugin -DCMAKE_BUILD_TYPE=Release <openvino>
cmake -G "Visual Studio 17 2022" <openvino>
```

> **HINT**: **Generating PDB Files and Debugging Your Build** <br>
@@ -62,16 +50,8 @@ Supported configurations:

### Additional Build Options

- Internal JIT GEMM implementation is used by default.

- Threading Building Blocks (TBB) is used by default. To build Inference Engine with OpenMP threading, set the `-DTHREADING=OMP` option.

- Required versions of the TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but have already installed TBB or OpenCV in your environment, you may need to unset the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise they won't be downloaded, and the build may fail if incompatible versions were installed.
- If the CMake-based build script can not find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](./cmake_options_for_custom_compilation.md#Building-with-custom-OpenCV) section for details.
- To build the OpenVINO Runtime Python API:
1. First, install all additional packages (e.g., cython and opencv) listed in the file:
1. First, install all additional packages (e.g., cython) listed in the file:
```sh
pip install -r <openvino>\src\bindings\python\src\compatibility\openvino\requirements-dev.txt
```
@@ -95,15 +75,12 @@ Supported configurations:
pip install build/wheel/openvino-2023.0.0-9612-cp11-cp11-win_arm64.whl
```

- OpenVINO runtime compilation options:
`-DENABLE_OV_ONNX_FRONTEND=ON` enables the building of the ONNX importer.
### Building OpenVINO with Ninja* Build System

```sh
call "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvars64.bat"
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
ninja .
```

## See also
3 changes: 3 additions & 0 deletions docs/dev/cmake_options_for_custom_compilation.md
@@ -114,6 +114,9 @@ This document provides description and default values for CMake options that can
* `OFF` is default, because it increases binary size.
* `SELECTIVE_BUILD` enables [[Conditional compilation|ConditionalCompilation]] feature.
* `OFF` is default.
* `ENABLE_MLAS_FOR_CPU` enables the MLAS library for the CPU plugin.
* `ON` is default for x86_64 and AARCH64 platforms.
* Affects only the OpenVINO CPU plugin.

## Building with OpenCV

2 changes: 1 addition & 1 deletion docs/dev/static_libaries.md
@@ -135,7 +135,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=<openvino source dir>/cmake/toolchains/mt.runtime.w

* The enabled and tested capabilities of OpenVINO Runtime in a static build:
* OpenVINO common runtime - works with `ov::Model`, performs model loading on a particular device
* CPU and GNA inference plugins (**GPU and MYRIAD are not enabled**)
* CPU and GNA inference plugins (**GPU is not enabled**)
* MULTI, HETERO, AUTO, and BATCH inference modes
* IR, ONNX, PDPD, and TF frontends to read `ov::Model`
* Static build support for building static libraries only for OpenVINO Runtime libraries. All other third-party prebuilt dependencies remain in the same format:
9 changes: 5 additions & 4 deletions src/bindings/c/src/CMakeLists.txt
@@ -8,11 +8,12 @@ set(TARGET_NAME openvino_c)
ov_deprecated_no_errors()
add_definitions(-DIN_OV_COMPONENT)

file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
file(GLOB_RECURSE HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/*.h)
file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.h ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
file(GLOB_RECURSE LEGACY_HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/c_api/*.h)
file(GLOB_RECURSE HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/openvino/*.h)

# create library
add_library(${TARGET_NAME} ${HEADERS} ${SOURCES})
add_library(${TARGET_NAME} ${LEGACY_HEADERS} ${HEADERS} ${SOURCES})
add_library(openvino::runtime::c ALIAS ${TARGET_NAME})

target_link_libraries(${TARGET_NAME} PRIVATE openvino openvino::util)
@@ -24,7 +25,7 @@ if(NOT BUILD_SHARED_LIBS)
target_compile_definitions(${TARGET_NAME} PUBLIC OPENVINO_STATIC_LIBRARY)
endif()

ov_add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME})
ov_add_clang_format_target(${TARGET_NAME}_clang FOR_SOURCES ${HEADERS} ${SOURCES})

set_target_properties(${TARGET_NAME} PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})

4 changes: 2 additions & 2 deletions src/bindings/c/src/common.h
@@ -17,7 +17,7 @@
#define CATCH_IE_EXCEPTION(StatusCode, ExceptionType) \
catch (const InferenceEngine::ExceptionType&) { \
return ov_status_e::StatusCode; \
} \
}

#define CATCH_OV_EXCEPTION(StatusCode, ExceptionType) \
catch (const ov::ExceptionType&) { \
@@ -42,7 +42,7 @@
CATCH_IE_EXCEPTION(INFER_CANCELLED, InferCancelled) \
catch (...) { \
return ov_status_e::UNKNOW_EXCEPTION; \
} \
}

#define GET_PROPERTY_FROM_ARGS_LIST \
std::string property_key = va_arg(args_ptr, char*); \
@@ -40,6 +40,10 @@ def get_input_names(self) -> list:
return []
inp_ops = filter(lambda op: op.type == "Placeholder", self.m_graph.get_operations())
inp_names = []
if hasattr(self.m_graph, 'inputs') and self.m_graph.inputs:
for inp in self.m_graph.inputs:
inp_names.append(inp.op.name)
return inp_names
for inp in inp_ops:
assert isinstance(inp, tf.Operation), "Unknown node type. Expected tf.Operation, got {}".format(type(inp))
assert hasattr(inp, "node_def") and isinstance(inp.node_def, tf.compat.v1.NodeDef), \
@@ -58,11 +62,13 @@ def get_output_names(self) -> list:
# Note: used only for the library functions
if not self.m_inner_graph:
return []
# tf.Graph has ordered outputs which are stored in 'outputs' field,
# but using this field results in mismatch of outputs in inner graph and outputs in outer graph
# during the injection of subgraph.
# For this reason only nodes without outputs are considered graph outputs here
# as this approach does not lead to conflicts.

if hasattr(self.m_graph, 'outputs') and self.m_graph.outputs:
outputs = []
for out in self.m_graph.outputs:
outputs.append(out.name)
return outputs
# If graph has no 'outputs' field, find nodes without outputs and consider them graph outputs.
# The order of outputs is important and wrong order may lead to conversion error.
non_outputs = set()
for op in self.m_graph.get_operations():
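The fallback above — treating operations whose results are never consumed as the graph outputs, in graph order — can be sketched without TensorFlow. `find_graph_outputs` and the tuple-based graph representation are hypothetical stand-ins for `tf.Graph` operations, not OpenVINO code:

```python
# Minimal sketch of the fallback in get_output_names: an operation whose
# result is never used as another operation's input counts as a graph
# output, and the original operation order is preserved.
def find_graph_outputs(operations):
    """operations: list of (name, input_names) pairs in graph order."""
    consumed = set()
    for _name, input_names in operations:
        consumed.update(input_names)
    return [name for name, _ in operations if name not in consumed]

ops = [("x", []), ("y", []), ("add", ["x", "y"]), ("mul", ["add", "y"])]
print(find_graph_outputs(ops))  # ['mul']
```

Keeping the list comprehension over `operations` (rather than iterating a set) is what preserves output order, which the diff's comment notes is important for conversion.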
@@ -54,6 +54,7 @@ def __init__(self, operation: tf.Operation, share_weights: bool, inner_graph: bo
self.m_operation = operation
self.m_inner_graph = inner_graph
self.m_data_type = None
self.m_parsed_content = None

# Copies value from inner buffer of TF_Operation to NodeDef class.
self.m_node_def = self.m_operation.node_def
@@ -87,11 +88,11 @@ def __init__(self, operation: tf.Operation, share_weights: bool, inner_graph: bo
if self.m_operation.type == "Placeholder":
self.m_data_type = tf.dtypes.DType(self.m_node_def.attr["dtype"].type).name

if self.m_data_type == "resource" and not self.m_inner_graph:
if not self.m_inner_graph:
variable_value = TFGraphNodeDecoder.get_variable(self.m_operation)
if variable_value is not None:
# does not copy data
self.m_parsed_content = variable_value.value().__array__()
self.m_parsed_content = variable_value.__array__()

if isinstance(self.m_parsed_content, bytes):
self.m_data_type = "string"
@@ -103,7 +104,7 @@ def get_op_name(self) -> str:
def get_op_type(self) -> str:
if self.m_operation.type == "Placeholder":
type_attr = tf.dtypes.DType(self.m_node_def.attr["dtype"].type)
if type_attr.name == "resource" and not self.m_inner_graph:
if not self.m_inner_graph and self.m_parsed_content is not None:
if TFGraphNodeDecoder.get_variable(self.m_operation) is not None:
return "Const"
raise Exception("Could not get variable for resource Placeholder {0}".format(self.m_operation.name))
@@ -116,10 +117,11 @@ def get_variable(operation):
return None
for var_tensor, op_tensor in tf_graph.captures:
if operation.outputs[0].name == op_tensor.name:
resource_name = var_tensor._name
if var_tensor.dtype.name != 'resource':
return var_tensor
for variable_value in operation.graph.variables:
if variable_value.name == resource_name:
return variable_value
if id(variable_value.handle) == id(var_tensor):
return variable_value.value()
return None
return None

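The new lookup matches the captured tensor to its variable by handle object identity (`id(variable_value.handle) == id(var_tensor)`) instead of by resource name. A toy sketch of that idea, with `SimpleNamespace` objects standing in for TensorFlow variables and plain `object()` instances standing in for resource handles:

```python
from types import SimpleNamespace

# Sketch of the handle-identity match in get_variable: the captured
# tensor's handle is compared by id() against each variable's handle,
# so two distinct handles never alias even if their names collide.
def lookup_variable(captured_handle, variables):
    for var in variables:
        if id(var.handle) == id(captured_handle):
            return var.value
    return None

h1, h2 = object(), object()
variables = [SimpleNamespace(handle=h1, value=1.0),
             SimpleNamespace(handle=h2, value=2.0)]
print(lookup_variable(h2, variables))  # 2.0
```

Identity comparison sidesteps the fragility of the old name-based lookup, where a stale or duplicated `_name` could match the wrong variable.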
@@ -339,7 +339,7 @@ def create_tf_graph_iterator(input_model, placeholder_shapes, placeholder_data_t
if hasattr(input_model, 'outputs') and hasattr(input_model, 'structured_outputs') and \
isinstance(input_model.structured_outputs, dict):
external_names = sorted(list(input_model.structured_outputs.keys()))
internal_names = sorted([tensor.name for tensor in input_model.outputs])
internal_names = [tensor.name for tensor in input_model.outputs]
if len(external_names) == len(internal_names):
for external_name, internal_name in zip(external_names, internal_names):
output_names_map = output_names_map or {}
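The mapping above pairs the sorted external (structured) output names with the internal tensor names, which after this change keep the model's own order rather than being sorted too. A standalone sketch with made-up tensor names:

```python
# Sketch of the output-name mapping in create_tf_graph_iterator:
# external structured-output names are sorted, internal tensor names
# keep the model's output order, and the two lists are zipped only
# when their lengths match.
external_names = sorted(["prob", "logits"])       # -> ['logits', 'prob']
internal_names = ["Identity:0", "Identity_1:0"]   # model order, not sorted
output_names_map = None
if len(external_names) == len(internal_names):
    for external_name, internal_name in zip(external_names, internal_names):
        output_names_map = output_names_map or {}
        output_names_map[external_name] = internal_name
print(output_names_map)  # {'logits': 'Identity:0', 'prob': 'Identity_1:0'}
```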