Comparing changes

This is a direct comparison between two commits made in this repository or its related repositories.

base repository: openvinotoolkit/openvino
base: 0d69ed3ad846058e275bb0655cb3796d8d722f11
head repository: openvinotoolkit/openvino
compare: cd4f5c0a9bb24e2a922ac275538874c71c118286
Showing 781 changed files with 6,191 additions and 8,864 deletions.
1 change: 0 additions & 1 deletion CMakeLists.txt
@@ -80,7 +80,6 @@ function(build_ngraph)
else()
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE FALSE)
endif()
ngraph_set(NGRAPH_JSON_ENABLE FALSE)
ngraph_set(NGRAPH_INTERPRETER_ENABLE TRUE)

if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
58 changes: 58 additions & 0 deletions CONTRIBUTING_DOCS.md
@@ -0,0 +1,58 @@
# Contribute to Documentation

If you want to contribute to the project documentation and make it better, your help is very welcome.
This guide brings together the guidelines that help you figure out how to offer feedback and contribute to the documentation.

## Contribute in Multiple Ways

There are multiple ways to help improve our documentation:
* [Log an issue](https://jira.devtools.intel.com/projects/CVS/issues): Enter an issue for the OpenVINO™ documentation component for minor issues such as typos.
* Make a suggestion: Send your documentation suggestion to the mailing list.
* Contribute via GitHub: Submit pull requests in the [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/docs) documentation repository.

## Contribute via GitHub

Use the following steps to contribute to the OpenVINO™ Toolkit documentation.

### Use Documentation Guidelines
The documentation for our project is written using Markdown. Use our [guidelines](./docs/documentation_guidelines.md) and best practices to write consistent, readable documentation:

* **[Authoring Guidelines](./docs/documentation_guidelines.md#authoring-guidelines)**
* **[Structure Guidelines](./docs/documentation_guidelines.md#structure-guidelines)**
* **[Formatting Guidelines](./docs/documentation_guidelines.md#formatting-guidelines)**
* **[Graphics Guidelines](./docs/documentation_guidelines.md#graphics-guidelines)**

### Add New Document to the Documentation
> **NOTE**: Please check whether the information can be added to an existing document instead of creating a new one.
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Create a new markdown file in an appropriate folder.
> **REQUIRED**: The document title must contain a document label in the form `{#openvino_docs_<name>}`. For example: `Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ {#openvino_docs_MO_DG_IR_and_opsets}`.
4. Add your file to the documentation structure. Open the documentation structure file [docs/doxygen/ie_docs.xml](./docs/doxygen/ie_docs.xml) and add your file path to the appropriate section.
5. Commit changes to your branch.
6. Create a pull request.
7. Once the pull request is created, automatic checks are started. All checks must pass to continue.
8. Discuss, review, and update your contributions.
9. Get merged once the maintainer approves.

### Edit Existing Document
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Edit the documentation markdown file and commit changes to the branch.
4. Create a pull request.
5. Once the pull request is created, automatic checks are started. All checks must pass to continue.
6. Discuss, review, and update your contributions.
7. Get merged once the maintainer approves.

### Delete Document from the Documentation
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Remove the documentation file.
4. Remove your file from the documentation structure. Open the documentation structure file [docs/doxygen/ie_docs.xml](./docs/doxygen/ie_docs.xml) and remove all occurrences of your file path.
5. Remove all references to that file from other documents or replace them with links to alternative topics (if any).
6. Commit changes to your branch.
7. Create a pull request.
8. Once the pull request is created, automatic checks are started. All checks must pass to continue.
9. Discuss, review, and update your contributions.
10. Get merged once the maintainer approves.
24 changes: 13 additions & 11 deletions README.md
@@ -2,23 +2,23 @@
[![Stable release](https://img.shields.io/badge/version-2020.4-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2020.4.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)

This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.

This open source version includes two components: namely [Model Optimizer] and
[Inference Engine], as well as CPU, GPU and heterogeneous plugins to accelerate
deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as Caffe\*, TensorFlow\*,
MXNet\* and ONNX\*.

## Repository components:
* [Inference Engine]
* [Model Optimizer]

## License
Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.

## Documentation
@@ -30,13 +30,15 @@ and release your contribution under these terms.
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)

## How to Contribute
See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
See [CONTRIBUTING](./CONTRIBUTING.md) for details on contributing to the code.
See [CONTRIBUTING_DOCS](./CONTRIBUTING_DOCS.md) for details on contributing to the documentation.
Thank you!

## Support
Please report questions, issues and suggestions using:

* The `openvino` [tag on StackOverflow]\*
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)

---
32 changes: 17 additions & 15 deletions azure-pipelines.yml
@@ -12,6 +12,12 @@ jobs:
BIN_DIR: ../bin/intel64/$(BUILD_TYPE)
steps:
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
whoami
uname -a
which python3
@@ -35,11 +41,6 @@ jobs:
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
displayName: 'Install Ninja'
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
mkdir dldt-build
cd dldt-build
@@ -136,6 +137,11 @@ jobs:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
whoami
uname -a
@@ -152,11 +158,6 @@ jobs:
displayName: 'Install dependencies'
- script: brew install ninja
displayName: 'Install Ninja'
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
mkdir dldt-build
cd dldt-build
@@ -244,6 +245,12 @@ jobs:
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
steps:
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
where python
python --version
@@ -257,11 +264,6 @@ jobs:
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
powershell -command "Expand-Archive -Force ninja-win.zip"
displayName: Install Ninja
- script: |
git clean -xdf
git reset --hard HEAD
git submodule update --init --recursive --jobs $(WORKERS_NUMBER)
displayName: 'Clone submodules'
- script: |
rd /Q /S $(BUILD_DIR)
mkdir $(BUILD_DIR)\bin
4 changes: 0 additions & 4 deletions build-instruction.md
@@ -146,7 +146,6 @@ You can use the following additional build options:
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build for Raspbian Stretch* OS
@@ -325,7 +324,6 @@ You can use the following additional build options:
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Windows* Systems
@@ -428,7 +426,6 @@ cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
### Building Inference Engine with Ninja* Build System
@@ -520,7 +517,6 @@ You can use the following additional build options:

- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.

## Build on Android* Systems
4 changes: 3 additions & 1 deletion cmake/features.cmake
@@ -28,6 +28,8 @@ ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
# FIXME: ARM cross-compiler generates several "false positive" warnings regarding __builtin_memcpy buffer overflow
ie_dependent_option (TREAT_WARNING_AS_ERROR "Treat build warnings as errors" ON "X86 OR X86_64" OFF)

ie_option (ENABLE_INTEGRITYCHECK "build DLLs with /INTEGRITYCHECK flag" OFF)

ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)

ie_option (ENABLE_THREAD_SANITIZER "enable checking data races via ThreadSanitizer" OFF)
@@ -42,4 +44,4 @@ ie_dependent_option (ENABLE_AVX2 "Enable AVX2 optimizations" ON "X86_64 OR X86"

ie_dependent_option (ENABLE_AVX512F "Enable AVX512 optimizations" ON "X86_64 OR X86" OFF)

ie_dependent_option (ENABLE_PROFILING_ITT "ITT tracing of IE and plugins internals" OFF "NOT CMAKE_CROSSCOMPILING" OFF)
ie_dependent_option (ENABLE_PROFILING_ITT "ITT tracing of IE and plugins internals" ON "NOT CMAKE_CROSSCOMPILING" OFF)
21 changes: 13 additions & 8 deletions cmake/sdl.cmake
@@ -14,9 +14,7 @@ if (CMAKE_BUILD_TYPE STREQUAL "Release")
endif()

if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
else()
@@ -32,14 +30,21 @@ if (CMAKE_BUILD_TYPE STREQUAL "Release")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /guard:cf")
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl /guard:cf")
endif()

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${IE_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
endif()
7 changes: 7 additions & 0 deletions docs/IE_DG/API_Changes.md
@@ -21,6 +21,13 @@ Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting Jun

## 2021.1

### Deprecated API

**Utility functions to convert Unicode paths**

* InferenceEngine::stringToFileName - use OS-specific native conversion functions
* InferenceEngine::fileNameToString - use OS-specific native conversion functions
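
As a rough illustration only (not part of the API change itself), the sketch below shows one possible OS-specific replacement. The helper name `utf8_to_wide` and the assumption that the application keeps paths as UTF-8 narrow strings are hypothetical, not Inference Engine APIs.

```cpp
// A minimal sketch of an OS-specific replacement for the deprecated helpers.
// Assumption: paths are kept as UTF-8 narrow strings on the application side;
// `utf8_to_wide` is a hypothetical helper, not an Inference Engine API.
#include <string>

#ifdef _WIN32
#include <windows.h>

std::wstring utf8_to_wide(const std::string& path) {
    if (path.empty()) return {};
    // First call computes the required buffer size, second call converts.
    const int len = MultiByteToWideChar(CP_UTF8, 0, path.c_str(),
                                        static_cast<int>(path.size()), nullptr, 0);
    std::wstring wide(static_cast<size_t>(len), L'\0');
    MultiByteToWideChar(CP_UTF8, 0, path.c_str(),
                        static_cast<int>(path.size()), &wide[0], len);
    return wide;
}
#else
// On Linux and macOS, file paths are narrow strings already, so no conversion is needed.
#endif
```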

### Removed API

**Plugin API:**
14 changes: 0 additions & 14 deletions docs/IE_DG/Graph_debug_capabilities.md
@@ -8,20 +8,6 @@ CNNNetwork. Both representations provide an API to get detailed information abou
To receive additional messages about applied graph modifications, rebuild the nGraph library with
the `-DNGRAPH_DEBUG_ENABLE=ON` option.

To enable serialization and deserialization of the nGraph function to a JSON file, rebuild the
nGraph library with the `-DNGRAPH_JSON_ENABLE=ON` option. To serialize or deserialize the nGraph
function, call the nGraph function as follows:

```cpp
#include <ngraph/serializer.hpp>

std::shared_ptr<ngraph::Function> nGraph;
...
ngraph::serialize("test_json.json", nGraph); // For graph serialization
std::ifstream file("test_json.json"); // Open a JSON file
nGraph = ngraph::deserialize(file); // For graph deserialization
```
To visualize the nGraph function to the xDot format or to an image file, use the
`ngraph::pass::VisualizeTree` graph transformation pass:
```cpp
20 changes: 13 additions & 7 deletions docs/IE_DG/supported_plugins/HETERO.md
@@ -9,12 +9,13 @@ Purposes to execute networks in heterogeneous mode
* To utilize all available hardware more efficiently during one inference

The execution through the heterogeneous plugin can be divided into two independent steps:
* Setting of affinity to layers (binding them to devices in <code>InferenceEngine::ICNNNetwork</code>)
* Setting of affinity to layers
* Loading a network to the Heterogeneous plugin, splitting the network into parts, and executing them through the plugin

These steps are decoupled. The affinity can be set automatically using the fallback policy or manually.

The automatic fallback policy is greedy: it assigns every layer that can be executed on a certain device to that device, following the device priorities.
The automatic policy does not take into account plugin peculiarities such as the inability to infer some layers without other special layers placed before or after them. It is the plugin's responsibility to handle such cases. If a device plugin does not support the subgraph topology constructed by the Hetero plugin, affinities should be set manually.

Some topologies are not friendly to heterogeneous execution on some devices, or cannot be executed in such mode at all.
Examples of such networks are networks that have activation layers not supported on the primary device.
@@ -25,7 +26,12 @@ In this case you can define heaviest part manually and set affinity thus way to
## Annotation of Layers per Device and Default Fallback Policy
The default fallback policy decides which layer goes to which device automatically, according to the support declared in the dedicated plugins (FPGA, GPU, CPU, MYRIAD).

Another way to annotate a network is setting affinity manually using <code>CNNLayer::affinity</code> field. This field accepts string values of devices like "CPU" or "FPGA".
Another way to annotate a network is to set affinity manually using <code>ngraph::Node::get_rt_info</code> with key `"affinity"`:

```cpp
for (auto && op : function->get_ops())
    op->get_rt_info()["affinity"] = std::make_shared<ngraph::VariantWrapper<std::string>>("CPU");
```
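
For context, the following is a minimal sketch of loading a network on the heterogeneous device with automatic fallback; the device priority string `"HETERO:GPU,CPU"` and the model path `model.xml` are illustrative assumptions.

```cpp
#include <inference_engine.hpp>

InferenceEngine::Core core;
// "model.xml" is a placeholder path to an IR model
auto network = core.ReadNetwork("model.xml");
// Layers go to GPU when supported there, otherwise fall back to CPU
auto execNetwork = core.LoadNetwork(network, "HETERO:GPU,CPU");
```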
The fallback policy does not work if even one layer has an initialized affinity. The recommended sequence is to run the automatic affinity assignment first and then adjust affinities manually.
```cpp
@@ -43,8 +49,10 @@ InferenceEngine::QueryNetworkResult res = core.QueryNetwork(network, device, { }
res.supportedLayersMap["layerName"] = "CPU";
// set affinities to network
for (auto && layer : res.supportedLayersMap) {
network.getLayerByName(layer->first)->affinity = layer->second;
for (auto&& node : function->get_ops()) {
auto& affinity = res.supportedLayersMap[node->get_friendly_name()];
// Store affinity mapping using node runtime information
node->get_rt_info()["affinity"] = std::make_shared<ngraph::VariantWrapper<std::string>>(affinity);
}
// load network with affinities set before
@@ -70,9 +78,7 @@ Precision for inference in heterogeneous plugin is defined by

Examples:
* If you want to execute on GPU with CPU fallback with FP16 on GPU, you need to use only FP16 IR.
Weights are converted from FP16 to FP32 automatically for execution on CPU by the heterogeneous plugin.
* If you want to execute on FPGA with CPU fallback, you can use any precision for IR. The execution on FPGA is defined by bitstream, the execution on CPU happens in FP32.

Samples can be used with the following command:
