[Docs][PyOV] update python snippets (#19367)
* [Docs][PyOV] update python snippets

* first snippet

* Fix samples debug

* Fix linter

* part1

* Fix speech sample

* update model state snippet

* add serialize

* add temp dir

* CPU snippets update (#134)

* snippets CPU 1/6

* snippets CPU 2/6

* snippets CPU 3/6

* snippets CPU 4/6

* snippets CPU 5/6

* snippets CPU 6/6

* make module TODO: REMEMBER ABOUT EXPORTING PYTHONPATH ON CIs ETC

* Add static model creation in snippets for CPU

* export_comp_model done

* leftovers

* apply comments

* apply comments -- properties

* small fixes

* remove debug info

* return IENetwork instead of Function

* apply comments

* revert precision change in common snippets

* update opset

* [PyOV] Edit docs for the rest of plugins (#136)

* modify main.py

* GNA snippets

* GPU snippets

* AUTO snippets

* MULTI snippets

* HETERO snippets

* Added properties

* update gna

* more samples

* Update docs/OV_Runtime_UG/model_state_intro.md

* Update docs/OV_Runtime_UG/model_state_intro.md

* attempt1 fix ci

* new approach to test

* temporary remove some files from run

* revert cmake changes

* fix ci

* fix snippet

* fix py_exclusive snippet

* fix preprocessing snippet

* clean-up main

* remove numpy installation in gha

* check for GPU

* add logger

* exclude main

* main update

* temp

* Temp2

* Temp2

* temp

* Revert temp

* add property execution devices

* hide output from samples

---------

Co-authored-by: p-wysocki <[email protected]>
Co-authored-by: Jan Iwaszkiewicz <[email protected]>
Co-authored-by: Karol Blaszczak <[email protected]>
4 people authored Sep 13, 2023
1 parent 4f92676 commit 2bf8d91
Showing 68 changed files with 1,269 additions and 938 deletions.
8 changes: 8 additions & 0 deletions .github/workflows/linux.yml
@@ -599,6 +599,14 @@ jobs:
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_utils/test_utils.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_backend.py
- name: Python API snippets
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:${{ github.workspace }}/openvino/docs/:$PYTHONPATH
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
python3 ${{ github.workspace }}/openvino/docs/snippets/main.py
- name: Model Optimizer UT
run: |
1 change: 0 additions & 1 deletion cmake/developer_package/shellcheck/shellcheck.cmake
@@ -39,7 +39,6 @@ function(ie_shellcheck_process)
continue()
endif()

get_filename_component(dir_name "${script}" DIRECTORY)
string(REPLACE "${IE_SHELLCHECK_DIRECTORY}" "${CMAKE_BINARY_DIR}/shellcheck" output_file ${script})
set(output_file "${output_file}.txt")
get_filename_component(script_name "${script}" NAME)
2 changes: 1 addition & 1 deletion docs/OV_Runtime_UG/migration_ov_2_0/graph_construction.md
@@ -20,7 +20,7 @@ nGraph API
.. tab-item:: Python
:sync: py

.. doxygensnippet:: docs/snippets/ngraph.py
.. doxygensnippet:: docs/snippets/ngraph_snippet.py
:language: Python
:fragment: ngraph:graph

36 changes: 30 additions & 6 deletions docs/OV_Runtime_UG/model_state_intro.md
@@ -163,9 +163,21 @@ Example of Creating Model OpenVINO API

In the following example, ``SinkVector`` is used to create the ``ov::Model``. For a model with states, besides inputs and outputs, the ``Assign`` nodes should also point to the ``Model`` so that they are not deleted during graph transformations. You can do this with the constructor, as shown in the example, or with the special ``add_sinks(const SinkVector& sinks)`` method. A sink can also be removed from ``ov::Model`` with the ``delete_sink()`` method, which deletes the node from the graph.

.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [model_create]
.. tab-set::

.. tab-item:: C++
:sync: cpp

.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [model_create]

.. tab-item:: Python
:sync: py

.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.py
:language: python
:fragment: ov:model_create
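As a rough illustration of what the Python variant expresses, the sketch below builds a tiny stateful accumulator with the 2023.x Python API (an assumption for illustration, not the verbatim content of ``ov_model_with_state_infer.py``); note how the ``Assign`` node is passed to ``ov.Model`` as a sink:

import numpy as np
import openvino as ov
import openvino.runtime.opset12 as ops

# trivial accumulator: state += input; the running sum is both output and state
data = ops.parameter([1, 1], np.float32, name="data")
init = ops.constant(np.zeros((1, 1), dtype=np.float32))
read = ops.read_value(init, "accumulator")   # ReadValue restores the saved state
summed = ops.add(data, read)
assign = ops.assign(summed, "accumulator")   # Assign saves the new state
# pass the Assign node as a sink so graph transformations do not drop it
model = ov.Model([summed], [assign], [data], "stateful_sum")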

.. _openvino-state-api:

@@ -189,10 +201,22 @@ Based on the IR from the previous section, the example below demonstrates infere

This example uses one infer request and one thread. Several threads can be used if there are several independent sequences, with each sequence processed in its own infer request. Running one sequence through several infer requests is not recommended: within a single infer request, the state is saved automatically between inferences, but if the first step is done in one infer request and the second in another, the state has to be set in the new infer request manually (using the ``ov::IVariableState::set_state`` method).

.. tab-set::

.. tab-item:: C++
:sync: cpp

.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.cpp
:language: cpp
:fragment: [part1]

.. tab-item:: Python
:sync: py

.. doxygensnippet:: docs/snippets/ov_model_with_state_infer.py
:language: python
:fragment: ov:part1
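A minimal sketch of the inference flow described above, reusing the tiny accumulator model (again an illustration of the API, not the snippet's exact code); ``query_state``, ``reset``, and the ``state`` tensor come from the ``InferRequest``/``VariableState`` API:

import numpy as np
import openvino as ov
import openvino.runtime.opset12 as ops

# rebuild the tiny accumulator model from the previous sketch
data = ops.parameter([1, 1], np.float32, name="data")
read = ops.read_value(ops.constant(np.zeros((1, 1), dtype=np.float32)), "accumulator")
summed = ops.add(data, read)
model = ov.Model([summed], [ops.assign(summed, "accumulator")], [data], "stateful_sum")

core = ov.Core()
request = core.compile_model(model, "CPU").create_infer_request()

# one sequence -> one infer request; the state persists between infer() calls
for value in [1.0, 2.0, 3.0]:
    request.infer({0: np.array([[value]], dtype=np.float32)})

# inspect and reset the state before starting a new, independent sequence
for state in request.query_state():
    print(state.name, state.state.data)  # accumulator holds the running sum, [[6.]]
    state.reset()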


For more elaborate examples demonstrating how to work with models with states,
2 changes: 1 addition & 1 deletion docs/notebooks/109-latency-tricks-with-output.rst
@@ -206,7 +206,7 @@ benchmarking process.

.. code:: ipython3
import openvino.runtime as ov
import openvino as ov
# initialize OpenVINO
core = ov.Core()
2 changes: 1 addition & 1 deletion docs/notebooks/109-throughput-tricks-with-output.rst
@@ -193,7 +193,7 @@ benchmarking process.

.. code:: ipython3
import openvino.runtime as ov
import openvino as ov
# initialize OpenVINO
core = ov.Core()
2 changes: 1 addition & 1 deletion docs/notebooks/115-async-api-with-output.rst
@@ -54,7 +54,7 @@ Imports `⇑ <#top>`__
import time
import numpy as np
from openvino.runtime import Core, AsyncInferQueue
import openvino.runtime as ov
import openvino as ov
from IPython import display
import matplotlib.pyplot as plt
2 changes: 1 addition & 1 deletion docs/notebooks/203-meter-reader-with-output.rst
@@ -54,7 +54,7 @@ Import `⇑ <#top>`__
import cv2
import tarfile
import matplotlib.pyplot as plt
import openvino.runtime as ov
import openvino as ov
sys.path.append("../utils")
from notebook_utils import download_file, segmentation_map_to_image
6 changes: 3 additions & 3 deletions docs/optimization_guide/nncf/ptq/code/ptq_aa_openvino.py
@@ -21,7 +21,7 @@ def transform_fn(data_item):
import openvino
from sklearn.metrics import accuracy_score

def validate(model: openvino.runtime.CompiledModel,
def validate(model: openvino.CompiledModel,
validation_loader: torch.utils.data.DataLoader) -> float:
predictions = []
references = []
@@ -39,7 +39,7 @@ def validate(model: openvino.runtime.CompiledModel,
#! [validation]

#! [quantization]
model = ... # openvino.runtime.Model object
model = ... # openvino.Model object

quantized_model = nncf.quantize_with_accuracy_control(model,
calibration_dataset=calibration_dataset,
@@ -50,7 +50,7 @@ def validate(model: openvino.runtime.CompiledModel,
#! [quantization]

#! [inference]
import openvino.runtime as ov
import openvino as ov

# compile the model to transform quantized operations to int8
model_int8 = ov.compile_model(quantized_model)
2 changes: 1 addition & 1 deletion docs/optimization_guide/nncf/ptq/code/ptq_onnx.py
@@ -23,7 +23,7 @@ def transform_fn(data_item):
#! [quantization]

#! [inference]
import openvino.runtime as ov
import openvino as ov
from openvino.tools.mo import convert_model

# convert ONNX model to OpenVINO model
2 changes: 1 addition & 1 deletion docs/optimization_guide/nncf/ptq/code/ptq_openvino.py
@@ -15,7 +15,7 @@ def transform_fn(data_item):
#! [dataset]

#! [quantization]
import openvino.runtime as ov
import openvino as ov
model = ov.Core().read_model("model_path")

quantized_model = nncf.quantize(model, calibration_dataset)
2 changes: 1 addition & 1 deletion docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py
@@ -22,7 +22,7 @@ def transform_fn(data_item):
#! [quantization]

#! [inference]
import openvino.runtime as ov
import openvino as ov
from openvino.tools.mo import convert_model

# convert TensorFlow model to OpenVINO model
2 changes: 1 addition & 1 deletion docs/optimization_guide/nncf/ptq/code/ptq_torch.py
@@ -22,7 +22,7 @@ def transform_fn(data_item):
#! [quantization]

#! [inference]
import openvino.runtime as ov
import openvino as ov
from openvino.tools.mo import convert_model

input_fp32 = ... # FP32 model input
16 changes: 8 additions & 8 deletions docs/snippets/ShapeInference.py
@@ -1,23 +1,23 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from openvino.runtime import Core, Layout, set_batch
ov = Core()
model = ov.read_model("path/to/model")
import openvino as ov
from utils import get_model, get_image

model = get_model()

#! [picture_snippet]
model.reshape([8, 3, 448, 448])
#! [picture_snippet]

#! [set_batch]
model.get_parameters()[0].set_layout(Layout("N..."))
set_batch(model, 5)
model.get_parameters()[0].set_layout(ov.Layout("N..."))
ov.set_batch(model, 5)
#! [set_batch]

#! [simple_spatials_change]
from cv2 import imread
image = imread("path/to/image")
model.reshape({1, 3, image.shape[0], image.shape[1]})
image = get_image()
model.reshape([1, 3, image.shape[0], image.shape[1]])
#! [simple_spatials_change]

#! [obj_to_shape]
7 changes: 7 additions & 0 deletions docs/snippets/__init__.py
@@ -0,0 +1,7 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from .utils import get_image
from .utils import get_model
from .utils import get_ngraph_model
from .utils import get_path_to_model
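The ``utils`` module these names come from is not shown in this commit view. A plausible minimal ``get_model`` (a hypothetical illustration, not the repository's actual helper) would build a small model in memory so that snippets need no model file:

import numpy as np
import openvino as ov
import openvino.runtime.opset12 as ops

def get_model(input_shape=None):
    # a tiny one-op model is enough for compile/reshape snippets
    if input_shape is None:
        input_shape = [1, 3, 32, 32]
    param = ops.parameter(input_shape, np.float32, name="input")
    relu = ops.relu(param)
    return ov.Model([relu], [param], "model")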
20 changes: 11 additions & 9 deletions docs/snippets/cpu/Bfloat16Inference.py
@@ -2,22 +2,24 @@
# SPDX-License-Identifier: Apache-2.0


from openvino.runtime import Core
import openvino as ov

from snippets import get_model

model = get_model()

#! [part0]
core = Core()
cpu_optimization_capabilities = core.get_property("CPU", "OPTIMIZATION_CAPABILITIES")
core = ov.Core()
cpu_optimization_capabilities = core.get_property("CPU", ov.properties.device.capabilities())
#! [part0]

# TODO: enable part1 when property api will be supported in python
#! [part1]
core = Core()
model = core.read_model("model.xml")
core = ov.Core()
compiled_model = core.compile_model(model, "CPU")
inference_precision = core.get_property("CPU", "INFERENCE_PRECISION_HINT")
inference_precision = core.get_property("CPU", ov.properties.hint.inference_precision())
#! [part1]

#! [part2]
core = Core()
core.set_property("CPU", {"INFERENCE_PRECISION_HINT": "f32"})
core = ov.Core()
core.set_property("CPU", {ov.properties.hint.inference_precision(): ov.Type.f32})
#! [part2]
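Combining the two properties, a guarded setup could look like the following sketch (``BF16`` is assumed to be the capability string reported by CPUs with native bfloat16 support):

import openvino as ov

core = ov.Core()
# only request bf16 math when the device actually reports the capability
if "BF16" in core.get_property("CPU", ov.properties.device.capabilities()):
    core.set_property("CPU", {ov.properties.hint.inference_precision(): ov.Type.bf16})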
2 changes: 2 additions & 0 deletions docs/snippets/cpu/__init__.py
@@ -0,0 +1,2 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
30 changes: 18 additions & 12 deletions docs/snippets/cpu/compile_model.py
@@ -1,17 +1,23 @@
# Copyright (C) 2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from snippets import get_model

#! [compile_model_default]
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "CPU")
#! [compile_model_default]
def main():
model = get_model()

#! [compile_model_multi]
core = Core()
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "MULTI:CPU,GPU.0")
#! [compile_model_multi]
#! [compile_model_default]
import openvino as ov

core = ov.Core()
compiled_model = core.compile_model(model, "CPU")
#! [compile_model_default]

if "GPU" not in core.available_devices:
return 0

#! [compile_model_multi]
core = ov.Core()
compiled_model = core.compile_model(model, "MULTI:CPU,GPU.0")
#! [compile_model_multi]
10 changes: 6 additions & 4 deletions docs/snippets/cpu/dynamic_shape.py
@@ -1,11 +1,13 @@
# Copyright (C) 2022 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from snippets import get_model

from openvino.runtime import Core
model = get_model()

#! [static_shape]
core = Core()
model = core.read_model("model.xml")
import openvino as ov

core = ov.Core()
model.reshape([10, 20, 30, 40])
#! [static_shape]
41 changes: 30 additions & 11 deletions docs/snippets/cpu/multi_threading.py
@@ -2,28 +2,47 @@
# SPDX-License-Identifier: Apache-2.0
#

import openvino.runtime as ov
from openvino.runtime import Core, Type, OVAny, properties
from openvino import Core, properties
from snippets import get_model

model = get_model()

device_name = "CPU"
core = Core()
core.set_property("CPU", properties.intel_cpu.sparse_weights_decompression_rate(0.8))

device_name = 'CPU'
xml_path = 'model.xml'
core = ov.Core()
core.set_property("CPU", ov.properties.intel_cpu.sparse_weights_decompression_rate(0.8))
model = core.read_model(model=xml_path)
# ! [ov:intel_cpu:multi_threading:part0]
# Use one logical processor for inference
compiled_model_1 = core.compile_model(model=model, device_name=device_name, config={properties.inference_num_threads(1)})
compiled_model_1 = core.compile_model(
model=model,
device_name=device_name,
config={properties.inference_num_threads(): 1},
)

# Use logical processors of Efficient-cores for inference on hybrid platform
compiled_model_2 = core.compile_model(model=model, device_name=device_name, config={properties.hint.scheduling_core_type(properties.hint.SchedulingCoreType.ECORE_ONLY)})
compiled_model_2 = core.compile_model(
model=model,
device_name=device_name,
config={
properties.hint.scheduling_core_type(): properties.hint.SchedulingCoreType.ECORE_ONLY,
},
)

# Use one logical processor per CPU core for inference when hyper threading is on
compiled_model_3 = core.compile_model(model=model, device_name=device_name, config={properties.hint.enable_hyper_threading(False)})
compiled_model_3 = core.compile_model(
model=model,
device_name=device_name,
config={properties.hint.enable_hyper_threading(): False},
)
# ! [ov:intel_cpu:multi_threading:part0]

# ! [ov:intel_cpu:multi_threading:part1]
# Disable CPU thread pinning for inference when the system supports it
compiled_model_4 = core.compile_model(model=model, device_name=device_name, config={properties.hint.enable_cpu_pinning(False)})
compiled_model_4 = core.compile_model(
model=model,
device_name=device_name,
config={properties.hint.enable_cpu_pinning(): False},
)
# ! [ov:intel_cpu:multi_threading:part1]
assert compiled_model_1
assert compiled_model_2
16 changes: 11 additions & 5 deletions docs/snippets/cpu/ov_execution_mode.py
@@ -1,12 +1,18 @@
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from openvino.runtime import Core

#! [ov:execution_mode:part0]
core = Core()
import openvino as ov

core = ov.Core()
# in case of Accuracy
core.set_property("CPU", {"EXECUTION_MODE_HINT": "ACCURACY"})
core.set_property(
"CPU",
{ov.properties.hint.execution_mode(): ov.properties.hint.ExecutionMode.ACCURACY},
)
# in case of Performance
core.set_property("CPU", {"EXECUTION_MODE_HINT": "PERFORMANCE"})
core.set_property(
"CPU",
{ov.properties.hint.execution_mode(): ov.properties.hint.ExecutionMode.PERFORMANCE},
)
#! [ov:execution_mode:part0]