
Commit

Merge pull request #8 from cavusmustafa/torchfx_dynamic_shapes_additional_support

Torchfx dynamic shapes additional support
cavusmustafa authored Jul 8, 2024
2 parents 03c0ee0 + b28ac17 commit 99d1fcd
Showing 636 changed files with 6,476 additions and 7,213 deletions.
14 changes: 12 additions & 2 deletions .github/workflows/job_pytorch_models_tests.yml
@@ -135,7 +135,7 @@ jobs:
if: always()
run: |
export PYTHONPATH=${MODEL_HUB_TESTS_INSTALL_DIR}:$PYTHONPATH
python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_model_tests.html --self-contained-html -v -k "not (TestTimmConvertModel or TestTorchHubConvertModel)"
python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_model_tests.html --self-contained-html -v -k "not (TestTimmConvertModel or TestTorchHubConvertModel or test_pa_precommit)"
env:
TYPE: ${{ inputs.event == 'schedule' && 'nightly' || 'precommit'}}
TEST_DEVICE: CPU
@@ -146,13 +146,23 @@ jobs:
if: always()
run: |
export PYTHONPATH=${MODEL_HUB_TESTS_INSTALL_DIR}:$PYTHONPATH
python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch/test_pa_transformation.py -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_pagedattention_tests.html --self-contained-html -v --tb=short
python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch/test_pa_transformation.py -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_pagedattention_tests.html --self-contained-html -v --tb=short -n 4
env:
TYPE: ${{ inputs.event == 'schedule' && 'nightly' || 'precommit'}}
TEST_DEVICE: CPU
USE_SYSTEM_CACHE: False
OP_REPORT_FILE: ${{ env.INSTALL_TEST_DIR }}/TEST-torch_unsupported_ops.log

- name: StatefulToStateless Test
if: always()
run: |
export PYTHONPATH=${MODEL_HUB_TESTS_INSTALL_DIR}:$PYTHONPATH
python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch/test_stateful_to_stateless_transformation.py -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_stateful_to_stateless_tests.html --self-contained-html -v --tb=short
env:
TYPE: ${{ inputs.event == 'schedule' && 'nightly' || 'precommit'}}
TEST_DEVICE: CPU
USE_SYSTEM_CACHE: False

- name: Reformat unsupported ops file
if: '!cancelled()'
run: |
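The `TYPE` env entries in the workflow above use GitHub Actions' `&&`/`||` short-circuit idiom as a conditional expression. The same pattern, sketched in Python purely for illustration (`select_test_type` is a hypothetical helper, not part of the workflow):

```python
def select_test_type(event: str) -> str:
    # Mirrors ${{ inputs.event == 'schedule' && 'nightly' || 'precommit' }}.
    # `and` yields the right operand when the left is truthy;
    # `or` supplies the fallback when the left is falsy.
    # Note: this idiom misbehaves if the middle value is itself falsy.
    return event == "schedule" and "nightly" or "precommit"

print(select_test_type("schedule"))      # nightly
print(select_test_type("pull_request"))  # precommit
```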
6 changes: 2 additions & 4 deletions .github/workflows/job_tokenizers.yml
@@ -120,16 +120,14 @@ jobs:
if: runner.os != 'Windows'
run: |
# use OpenVINO wheel package only to build the extension
export OpenVINO_DIR=$(python3 -c "from openvino.utils import get_cmake_path; print(get_cmake_path(), end='')")
python -m pip wheel -v --no-deps --wheel-dir ${EXTENSION_BUILD_DIR} ${OPENVINO_TOKENIZERS_REPO}
python -m pip wheel -v --no-deps --wheel-dir ${EXTENSION_BUILD_DIR} --find-links ${INSTALL_DIR}/tools ${OPENVINO_TOKENIZERS_REPO}
env:
CMAKE_BUILD_PARALLEL_LEVEL: '4'

- name: Build tokenizers wheel (Windows)
if: runner.os == 'Windows'
run: |
$env:OpenVINO_DIR=$(python3 -c "from openvino.utils import get_cmake_path; print(get_cmake_path(), end='')")
python3 -m pip wheel -v --no-deps --wheel-dir ${env:EXTENSION_BUILD_DIR} ${env:OPENVINO_TOKENIZERS_REPO}
python3 -m pip wheel -v --no-deps --wheel-dir ${env:EXTENSION_BUILD_DIR} --find-links ${env:INSTALL_DIR}/tools ${env:OPENVINO_TOKENIZERS_REPO}
env:
CMAKE_BUILD_PARALLEL_LEVEL: '4'

4 changes: 2 additions & 2 deletions cmake/dependencies.cmake
@@ -104,10 +104,10 @@ function(ov_download_tbb)
elseif(LINUX AND X86_64 AND OPENVINO_GNU_LIBC AND OV_LIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.5-lin-trim.tgz"
ARCHIVE_LIN "oneapi-tbb-2021.2.4-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "9bea2c838df3085d292989d643523dc1cedce9b46d5a03eec90104151b49a180"
SHA256 "6523661559a340e88131472ea9a595582c306af083e55293b7357d11b8015546"
USE_NEW_LOCATION TRUE)
elseif(YOCTO_AARCH64)
RESOLVE_DEPENDENCY(TBB
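The `SHA256` field above pins the expected digest of the dependency archive; a mismatch fails the download step. A minimal sketch for computing such a digest locally (the helper name is an assumption, not part of the OpenVINO build system):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file in chunks so large archives never load fully into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# e.g. print(sha256_of("oneapi-tbb-2021.2.4-lin.tgz"))
```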
@@ -1,4 +1,4 @@
.. {#openvino_docs_OV_Glossary}
:orphan:

Glossary
========
34 changes: 17 additions & 17 deletions docs/articles_en/about-openvino/additional-resources/telemetry.rst
@@ -1,4 +1,4 @@
.. {#openvino_docs_telemetry_information}
:orphan:

OpenVINO™ Telemetry
=====================
@@ -10,9 +10,9 @@ OpenVINO™ Telemetry

To facilitate debugging and further development, OpenVINO™ collects anonymous telemetry data. Anonymous telemetry data is collected by default,
but you can stop data collection anytime by running the command ``opt_in_out --opt_out``.
It does not extend to any other Intel software, hardware, website usage, or other products.

Google Analytics is used for telemetry purposes. Refer to
`Google Analytics support <https://support.google.com/analytics/answer/6004245#zippy=%2Cour-privacy-policy%2Cgoogle-analytics-cookies-and-identifiers%2Cdata-collected-by-google-analytics%2Cwhat-is-the-data-used-for%2Cdata-access>`__ to understand how the data is collected and processed.

Enable or disable Telemetry reporting
@@ -21,7 +21,7 @@ Enable or disable Telemetry reporting
Changing consent decision
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

You can change your data collection decision with the following command lines:

``opt_in_out --opt_in`` - enable telemetry

@@ -35,26 +35,26 @@ Telemetry Data Collection Details

.. tab-item:: Telemetry Data Collected
:sync: telemetry-data-collected

* Failure reports
* Error reports
* Usage data

.. tab-item:: Tools Collecting Data
:sync: tools-collecting-data

* Model conversion API
* Model Downloader
* Accuracy Checker
* Post-Training Optimization Toolkit
* Neural Network Compression Framework
* Model Converter
* Model Quantizer

.. tab-item:: Telemetry Data Retention
:sync: telemetry-data-retention

Telemetry data is retained in Google Analytics for a maximum of 14 months.
Any raw data that has reached the 14-month threshold is deleted from Google Analytics on a monthly basis.


@@ -1,30 +1,25 @@
.. {#openvino_docs_Legal_Information}
:orphan:

Legal and Responsible AI Information
Terms of Use
=====================================


.. meta::
:description: Learn about legal information and policies related to the use
of Intel® Distribution of OpenVINO™ toolkit.
:description: Learn about legal information and policies related to the information
published in OpenVINO™ documentation.


Performance varies by use, configuration and other factors. Learn more at
`www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.

Performance results are based on testing as of dates shown in configurations and may not
reflect all publicly available updates. See backup for configuration details. No product or
component can be absolutely secure.
Intel Global Human Rights Principles
###########################################################

Your costs and results may vary.
Intel is committed to respecting human rights and avoiding causing or contributing to adverse
impacts on human rights. See
`Intel's Global Human Rights Principles <https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf>`__.
Intel's products and software are intended only to be used in applications that do not cause or
contribute to adverse impacts on human rights.

Intel technologies may require enabled hardware, software or service activation.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel
Corporation or its subsidiaries. Other names and brands may be claimed as the property of
others.

OpenVINO™ Logo
###########################################################
@@ -33,25 +28,36 @@ To build equity around the project, the OpenVINO logo was created for both Intel
usage. The logo may only be used to represent the OpenVINO toolkit and offerings built using
the OpenVINO toolkit.

Logo Usage Guidelines
###########################################################

The OpenVINO logo must be used in connection with truthful, non-misleading references to the
OpenVINO toolkit, and for no other purpose. Modification of the logo or use of any separate
element(s) of the logo alone is not allowed.

Intel Global Human Rights Principles
###########################################################

Intel is committed to respecting human rights and avoiding causing or contributing to adverse
impacts on human rights. See `Intel's Global Human Rights Principles <https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf>`__.
Intel's products and software are intended only to be used in applications that do not cause or
contribute to adverse impacts on human rights.


Model Card Statement
###########################################################

We recommend that users, wherever you are sourcing the model from, should check for a model card,
We recommend that, wherever you are sourcing the model from, you should check for a model card,
consult the model card for each model you access and use, and create one if you are developing
or updating a model. A model card is a short document that provides key information to assess
performance and validation and ensure appropriate use.


Performance claims
###########################################################

Performance varies by use, configuration and other factors. Learn more at
`www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.

Performance results are based on testing as of dates shown in configurations and may not
reflect all publicly available updates.

Your costs and results may vary.


No product or component can be absolutely secure.

Intel technologies may require enabled hardware, software or service activation.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
4 changes: 2 additions & 2 deletions docs/articles_en/about-openvino/performance-benchmarks.rst
@@ -18,7 +18,7 @@ Performance Benchmarks

This page presents benchmark results for
`Intel® Distribution of OpenVINO™ toolkit <https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html>`__
and :doc:`OpenVINO Model Server <../ovms_what_is_openvino_model_server>`, for a representative
selection of public neural networks and Intel® devices. The results may help you decide which
hardware to use in your applications or plan AI workload for the hardware you have already
implemented in your solutions. Click the buttons below to see the chosen benchmark data.
@@ -236,4 +236,4 @@ for non-Intel products.

Results may vary. For more information, see
:doc:`F.A.Q. <./performance-benchmarks/performance-benchmarks-faq>`
See :doc:`Legal Information <./additional-resources/legal-information>`.
See :doc:`Legal Information <./additional-resources/terms-of-use>`.
@@ -184,4 +184,4 @@ insights in the application-level performance on the timeline view.
Results may vary. For more information, see
:doc:`F.A.Q. <./performance-benchmarks-faq>` and
:doc:`Platforms, Configurations, Methodology <../performance-benchmarks>`.
See :doc:`Legal Information <../additional-resources/legal-information>`.
See :doc:`Legal Information <../additional-resources/terms-of-use>`.
@@ -293,4 +293,4 @@ accuracy for the model.
Results may vary. For more information, see
:doc:`F.A.Q. <./performance-benchmarks-faq>` and
:doc:`Platforms, Configurations, Methodology <../performance-benchmarks>`.
See :doc:`Legal Information <../additional-resources/legal-information>`.
See :doc:`Legal Information <../additional-resources/terms-of-use>`.
@@ -174,6 +174,6 @@ Performance Information F.A.Q.

.. container:: benchmark-banner

Results may vary. For more information, see
:doc:`Platforms, Configurations, Methodology <../performance-benchmarks>`.
See :doc:`Legal Information <../additional-resources/legal-information>`.
Results may vary. For more information, see:
:doc:`Platforms, Configurations, Methodology <../performance-benchmarks>`,
:doc:`Legal Information <../additional-resources/terms-of-use>`.
12 changes: 12 additions & 0 deletions docs/articles_en/assets/snippets/compile_model_npu.cpp
@@ -0,0 +1,12 @@
#include <openvino/runtime/core.hpp>

int main() {
{
//! [compile_model_default_npu]
ov::Core core;
auto model = core.read_model("model.xml");
auto compiled_model = core.compile_model(model, "NPU");
//! [compile_model_default_npu]
}
return 0;
}
18 changes: 18 additions & 0 deletions docs/articles_en/assets/snippets/compile_model_npu.py
@@ -0,0 +1,18 @@
# Copyright (C) 2018-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import openvino as ov
from snippets import get_model


def main():
model = get_model()

core = ov.Core()
if "NPU" not in core.available_devices:
return 0

#! [compile_model_default_npu]
core = ov.Core()
compiled_model = core.compile_model(model, "NPU")
#! [compile_model_default_npu]
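The Python snippet above guards on `core.available_devices` before targeting NPU. That check generalizes to a device fallback chain; a minimal sketch with a hypothetical `pick_device` helper (not an OpenVINO API — only `available_devices` and `compile_model` come from the snippet):

```python
def pick_device(available, preferred=("NPU", "GPU", "CPU")):
    # Return the first preferred device actually present on this machine;
    # CPU is the usual safe default in OpenVINO installs.
    for device in preferred:
        if device in available:
            return device
    return "CPU"

# Hypothetical usage alongside the snippet above:
# compiled_model = core.compile_model(model, pick_device(core.available_devices))
```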
@@ -36,7 +36,7 @@ different conditions:
| :doc:`Heterogeneous Execution (HETERO) <inference-devices-and-modes/hetero-execution>`
| :doc:`Automatic Batching Execution (Auto-batching) <inference-devices-and-modes/automatic-batching>`

To learn how to change the device configuration, read the :doc:`Query device properties article <inference-devices-and-modes/query-device-properties>`.

Enumerating Available Devices
#######################################
@@ -83,3 +83,10 @@ Accordingly, the code that loops over all available devices of the "GPU" type on
:language: cpp
:fragment: [part3]

Additional Resources
####################

* `OpenVINO™ Runtime API Tutorial <./../../notebooks/openvino-api-with-output.html>`__
* `AUTO Device Tutorial <./../../notebooks/auto-device-with-output.html>`__
* `GPU Device Tutorial <./../../notebooks/gpu-device-with-output.html>`__
* `NPU Device Tutorial <./../../notebooks/hello-npu-with-output.html>`__
Expand Up @@ -30,6 +30,25 @@ of the model into a proprietary format. The compiler included in the user mode d
platform specific optimizations in order to efficiently schedule the execution of network layers and
memory transactions on various NPU hardware submodules.

To use NPU for inference, pass the device name to the ``ov::Core::compile_model()`` method:

.. tab-set::

.. tab-item:: Python
:sync: py

.. doxygensnippet:: docs/articles_en/assets/snippets/compile_model_npu.py
:language: py
:fragment: [compile_model_default_npu]

.. tab-item:: C++
:sync: cpp

.. doxygensnippet:: docs/articles_en/assets/snippets/compile_model_npu.cpp
:language: cpp
:fragment: [compile_model_default_npu]


Model Caching
#############################

Expand Up @@ -226,9 +226,12 @@ Compile the model for a specific device using ``ov::Core::compile_model()``:
The ``ov::Model`` object represents any models inside the OpenVINO™ Runtime.
For more details, please read the article about :doc:`OpenVINO™ Model representation <integrate-openvino-with-your-application/model-representation>`.

OpenVINO includes experimental support for NPU; learn more in the
:doc:`NPU Device section <./inference-devices-and-modes/npu-device>`.

The code above creates a compiled model associated with a single hardware device from the model object.
It is possible to create as many compiled models as needed and use them simultaneously (up to the limitation of the hardware).
To learn how to change the device configuration, read the :doc:`Query device properties <inference-devices-and-modes/query-device-properties>` article.
To learn more about supported devices and inference modes, read the :doc:`Inference Devices and Modes <./inference-devices-and-modes>` article.

Step 3. Create an Inference Request
###################################
@@ -432,6 +435,7 @@ To build your project using CMake with the default build tools currently availab
Additional Resources
####################

* `OpenVINO™ Runtime API Tutorial <./../../notebooks/openvino-api-with-output.html>`__
* See the :doc:`OpenVINO Samples <../../learn-openvino/openvino-samples>` page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
* Models in the OpenVINO IR format on `Hugging Face <https://huggingface.co/models>`__.
* :doc:`OpenVINO™ Runtime Preprocessing <optimize-inference/optimize-preprocessing>`
