From c99375e10d5d745001c45e7ab50410b2647b60ba Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Tue, 12 Sep 2023 13:47:55 +0200 Subject: [PATCH 01/14] [DOCS] OVC/convert_model Documentation port (#19555) (#19776) port: #19555 --- docs/Documentation/model_introduction.md | 225 ++++++- .../Deep_Learning_Model_Optimizer_DevGuide.md | 9 +- docs/MO_DG/prepare_model/FP16_Compression.md | 10 +- .../convert_model/supported_model_formats.md | 62 +- .../Deep_Learning_Model_Optimizer_DevGuide.md | 98 +++ .../convert_model/Convert_Model_From_ONNX.md | 59 ++ .../Convert_Model_From_Paddle.md | 201 ++++++ .../Convert_Model_From_PyTorch.md | 155 +++++ .../Convert_Model_From_TensorFlow.md | 331 +++++++++ .../Convert_Model_From_TensorFlow_Lite.md | 42 ++ .../convert_model/Converting_Model.md | 141 ++++ .../convert_model/MO_OVC_transition.md | 634 ++++++++++++++++++ .../convert_model/supported_model_formats.md | 33 + docs/get_started/get_started_demos.md | 166 ++--- docs/glossary.md | 129 ++-- .../installing-openvino-from-archive-linux.md | 203 +++--- docs/install_guides/pypi-openvino-dev.md | 32 +- docs/model_zoo.md | 4 +- 18 files changed, 2200 insertions(+), 334 deletions(-) create mode 100644 docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md create mode 100644 docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md index d0fd4535ce59c2..26038599b83362 100644 --- a/docs/Documentation/model_introduction.md +++ b/docs/Documentation/model_introduction.md @@ -3,64 +3,233 @@ @sphinxdirective .. meta:: - :description: Preparing models for OpenVINO Runtime. Learn about the methods + :description: Preparing models for OpenVINO Runtime. Learn about the methods used to read, convert and compile models from different frameworks. - .. toctree:: :maxdepth: 1 :hidden: Supported_Model_Formats - openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide + openvino_docs_OV_Converter_UG_Conversion_Options + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model + openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, or `Torchvision models `__. -Import a model using ``read_model()`` -################################################# +OpenVINO™ :doc:`supports several model formats ` and can convert them into its own representation, `openvino.Model `__ (`ov.Model `__), providing a conversion API. Converted models can be used for inference with one or multiple OpenVINO Hardware plugins. 
There are two ways to use the conversion API: using a Python program or calling the ``ovc`` command line tool.
+
+.. note::
+
+   Prior to the OpenVINO 2023.1 release, the model conversion API was exposed as the ``openvino.tools.mo.convert_model`` function and the ``mo`` command line tool.
+   The 2023.1 release introduced a new, simplified API: the ``openvino.convert_model`` function and the ``ovc`` command line tool, which replace ``openvino.tools.mo.convert_model``
+   and ``mo`` respectively; the latter are now considered legacy. All new users are recommended to use the new methods instead of the old ones. Note that the new API and the old API do not
+   provide the same set of features, which means the new tools are not always backward compatible with the old ones. For details, see the :doc:`Model Conversion API Transition Guide `.
+
+Convert a Model in Python: ``convert_model``
+############################################
+
+You can use the model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to an object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved to a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
+
+.. tab-set::
+
+    .. tab-item:: Torchvision
+
+       .. code-block:: py
+          :force:
+
+          import openvino as ov
+          import torch
+          from torchvision.models import resnet50
+
+          model = resnet50(pretrained=True)
+
+          # prepare input_data
+          input_data = torch.rand(1, 3, 224, 224)
+
+          ov_model = ov.convert_model(model, example_input=input_data)
+
+          ###### Option 1: Save to OpenVINO IR:
+
+          # save model to OpenVINO IR for later use
+          ov.save_model(ov_model, 'model.xml')
+
+          ###### Option 2: Compile and infer with OpenVINO:
+
+          # compile model
+          compiled_model = ov.compile_model(ov_model)
+
+          # run the inference
+          result = compiled_model(input_data)
+
+    .. tab-item:: Hugging Face Transformers
+
+       .. code-block:: py
+
+          from transformers import BertTokenizer, BertModel
+
+          tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+          model = BertModel.from_pretrained("bert-base-uncased")
+          text = "Replace me by any text you'd like."
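+          # tokenizing the text produces a dict of tensors, which is also passed below as example_input for tracing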
+ encoded_input = tokenizer(text, return_tensors='pt') + + import openvino as ov + ov_model = ov.convert_model(model, example_input={**encoded_input}) + + ###### Option 1: Save to OpenVINO IR: + + # save model to OpenVINO IR for later use + ov.save_model(ov_model, 'model.xml') + + ###### Option 2: Compile and infer with OpenVINO: + + # compile model + compiled_model = ov.compile_model(ov_model) + + # prepare input_data using HF tokenizer or your own tokenizer + # encoded_input is reused here for simplicity + + # run inference + result = compiled_model({**encoded_input}) + + .. tab-item:: Keras Applications + + .. code-block:: py + + import tensorflow as tf + import openvino as ov + + tf_model = tf.keras.applications.ResNet50(weights="imagenet") + ov_model = ov.convert_model(tf_model) + + ###### Option 1: Save to OpenVINO IR: + + # save model to OpenVINO IR for later use + ov.save_model(ov_model, 'model.xml') + + ###### Option 2: Compile and infer with OpenVINO: + + # compile model + compiled_model = ov.compile_model(ov_model) + + # prepare input_data + import numpy as np + input_data = np.random.rand(1, 224, 224, 3) - ``convert_model()`` also allows you to perform input/output cut, add pre-processing or add custom Python conversion extensions. + # run inference + result = compiled_model(input_data) -Convert a model with Python using ``mo.convert_model()`` -########################################################### + .. tab-item:: TensorFlow Hub -Model conversion API, specifically, the ``mo.convert_model()`` method converts a model from original framework to ``ov.Model``. ``mo.convert_model()`` returns ``ov.Model`` object in memory so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (python script or Jupiter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application. + .. code-block:: py -In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model `, :doc:`set input shapes or layout `, :doc:`add preprocessing `, etc. + import tensorflow as tf + import tensorflow_hub as hub + import openvino as ov -The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model: + model = tf.keras.Sequential([ + hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5") + ]) -* ``.xml`` - Describes the network topology. -* ``.bin`` - Contains the weights and biases binary data. + # Check model page for information about input shape: https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5 + model.build([None, 224, 224, 3]) -.. image:: _static/images/model_conversion_diagram.svg + model.save('mobilenet_v1_100_224') # use a temporary directory + ov_model = ov.convert_model('mobilenet_v1_100_224') + + ###### Option 1: Save to OpenVINO IR: + + ov.save_model(ov_model, 'model.xml') + + ###### Option 2: Compile and infer with OpenVINO: + + compiled_model = ov.compile_model(ov_model) + + # prepare input_data + import numpy as np + input_data = np.random.rand(1, 224, 224, 3) + + # run inference + result = compiled_model(input_data) + + .. tab-item:: ONNX Model Hub + + .. 
code-block:: py + + import onnx + + model = onnx.hub.load("resnet50") + onnx.save(model, 'resnet50.onnx') # use a temporary file for model + + import openvino as ov + ov_model = ov.convert_model('resnet50.onnx') + + ###### Option 1: Save to OpenVINO IR: + + # save model to OpenVINO IR for later use + ov.save_model(ov_model, 'model.xml') + + ###### Option 2: Compile and infer with OpenVINO: + + # compile model + compiled_model = ov.compile_model(ov_model) + + # prepare input_data + import numpy as np + input_data = np.random.rand(1, 3, 224, 224) + + # run inference + result = compiled_model(input_data) + +In Option 1, where the ``openvino.save_model`` function is used, an OpenVINO model is serialized in the file system as two files with ``.xml`` and ``.bin`` extensions. This pair of files is called OpenVINO Intermediate Representation format (OpenVINO IR, or just IR) and useful for efficient model deployment. OpenVINO IR can be loaded into another application for inference using the ``openvino.Core.read_model`` function. For more details, refer to the :doc:`OpenVINO™ Runtime documentation `. + +Option 2, where ``openvino.compile_model`` is used, provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your existing Python inference application. In this case, the converted model is not saved to IR. Instead, the model is compiled and used for inference within the same application. + +Option 1 separates model conversion and model inference into two different applications. This approach is useful for deployment scenarios requiring fewer extra dependencies and faster model loading in the end inference application. + +For example, converting a PyTorch model to OpenVINO usually demands the ``torch`` Python module and Python. This process can take extra time and memory. But, after the converted model is saved as IR with ``openvino.save_model``, it can be loaded in a separate application without requiring the ``torch`` dependency and the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example, in C++, and Python installation is not necessary for it to run. + +Before saving the model to OpenVINO IR, consider applying :doc:`Post-training Optimization ` to enable more efficient inference and smaller model size. + +The figure below illustrates the typical workflow for deploying a trained deep-learning model. + +.. image:: ./_static/images/model_conversion_diagram.svg :alt: model conversion diagram +Convert a Model in CLI: ``ovc`` +############################### + +Another option for model conversion is to use ``ovc`` command-line tool, which stands for OpenVINO Model Converter. The tool combines both ``openvino.convert_model`` and ``openvino.save_model`` functionalities. It is convenient to use when the original model is ready for inference and is in one of the supported file formats: ONNX, TensorFlow, TensorFlow Lite, or PaddlePaddle. As a result, ``ovc`` produces an OpenVINO IR, consisting of ``.xml`` and ``.bin`` files, which needs to be read with the ``ov.read_model()`` method. You can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime ` -Convert a model using ``mo`` command-line tool -################################################# +.. note:: + PyTorch models cannot be converted with ``ovc``, use ``openvino.convert_model`` instead. -Another option to convert a model is to use ``mo`` command-line tool. 
``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure, as the ``mo.convert_model()`` method. +The results of both ``ovc`` and ``openvino.convert_model``/``openvino.save_model`` conversion methods are the same. You can choose either of them based on your convenience. Note that there should not be any differences in the results of model conversion if the same set of parameters is used and the model is saved into OpenVINO IR. -``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime `. +Cases when Model Preparation is not Required +############################################ -The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used. +If a model is represented as a single file from ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite (check :doc:`TensorFlow Frontend Capabilities and Limitations `), it does not require a separate conversion and IR-saving step, that is ``openvino.convert_model`` and ``openvino.save_model``, or ``ovc``. -This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results: +OpenVINO provides C++ and Python APIs for reading such models by just calling the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods. These methods perform conversion of the model from the original representation. While this conversion may take extra time compared to using prepared OpenVINO IR, it is convenient when you need to read a model in the original format in C++, since ``openvino.convert_model`` is only available in Python. However, for efficient model deployment with the OpenVINO Runtime, it is still recommended to prepare OpenVINO IR and then use it in your inference application. -* :doc:`See the supported formats and how to use them in your project `. -* :doc:`Convert different model formats to the ov.Model format `. +Additional Resources +#################### -@endsphinxdirective +The following articles describe in details how to obtain and prepare your model depending on the source model type: + +* :doc:`Convert different model formats to the ov.Model format `. +* :doc:`Review all available conversion parameters `. 
+ +To achieve the best model inference performance and more compact OpenVINO IR representation follow: +* :doc:`Post-training optimization ` +* :doc:`Model inference in OpenVINO Runtime ` + +If you are using legacy conversion API (``mo`` or ``openvino.tools.mo.convert_model``), please refer to the following materials: + +* :doc:`Transition from legacy mo and ov.tools.mo.convert_model ` +* :doc:`Legacy Model Conversion API ` + +@endsphinxdirective diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md index fb82472b9b45ad..96d83591a4dedf 100644 --- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -1,4 +1,4 @@ -# Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide} +# Legacy Conversion API {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide} @sphinxdirective @@ -14,12 +14,15 @@ openvino_docs_MO_DG_FP16_Compression openvino_docs_MO_DG_Python_API openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ + Supported_Model_Formats_MO_DG .. meta:: - :description: Model conversion (MO) furthers the transition between training and - deployment environments, it adjusts deep learning models for + :description: Model conversion (MO) furthers the transition between training and + deployment environments, it adjusts deep learning models for optimal execution on target devices. +.. note:: + This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and OpenVINO Model Converter ``ovc`` CLI tool. Refer to `Model preparation ` for more details. If you are still using `openvino.tools.mo.convert_model` or `mo` CLI tool, you can still refer to this documentation. However, consider checking the `transition guide ` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you. To convert a model to OpenVINO model format (``ov.Model``), you can use the following command: diff --git a/docs/MO_DG/prepare_model/FP16_Compression.md b/docs/MO_DG/prepare_model/FP16_Compression.md index f560a6a035063d..05b2676c055dc5 100644 --- a/docs/MO_DG/prepare_model/FP16_Compression.md +++ b/docs/MO_DG/prepare_model/FP16_Compression.md @@ -3,7 +3,7 @@ @sphinxdirective By default, when IR is saved all relevant floating-point weights are compressed to ``FP16`` data type during model conversion. -It results in creating a "compressed ``FP16`` model", which occupies about half of +It results in creating a "compressed ``FP16`` model", which occupies about half of the original space in the file system. The compression may introduce a minor drop in accuracy, but it is negligible for most models. In case if accuracy drop is significant user can disable compression explicitly. @@ -29,20 +29,20 @@ To disable compression, use the ``compress_to_fp16=False`` option: mo --input_model INPUT_MODEL --compress_to_fp16=False -For details on how plugins handle compressed ``FP16`` models, see +For details on how plugins handle compressed ``FP16`` models, see :doc:`Working with devices `. .. note:: - ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization. - Refer to the :doc:`Post-training optimization ` guide for more + ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization. 
+ Refer to the :doc:`Post-training optimization ` guide for more information about that. .. note:: Some large models (larger than a few GB) when compressed to ``FP16`` may consume an overly large amount of RAM on the loading - phase of the inference. If that is the case for your model, try to convert it without compression: + phase of the inference. If that is the case for your model, try to convert it without compression: ``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)`` diff --git a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md index 802c5b4ee51a27..c62f22cdc8a616 100644 --- a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md +++ b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md @@ -1,4 +1,4 @@ -# Supported Model Formats {#Supported_Model_Formats} +# Supported Model Formats {#Supported_Model_Formats_MO_DG} @sphinxdirective @@ -17,7 +17,7 @@ :description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™. -**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR ` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive. +**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR ` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive. **PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly, which means you do not need to save them as OpenVINO IR before including them in your application. @@ -62,9 +62,9 @@ Here are code examples of how to use these methods with different model formats: ov_model = convert_model(model) compiled_model = core.compile_model(ov_model, "AUTO") - For more details on conversion, refer to the - :doc:`guide ` - and an example `tutorial `__ + For more details on conversion, refer to the + :doc:`guide ` + and an example `tutorial `__ on this topic. .. tab-item:: TensorFlow @@ -104,10 +104,10 @@ Here are code examples of how to use these methods with different model formats: ov_model = convert_model("saved_model.pb") compiled_model = core.compile_model(ov_model, "AUTO") - For more details on conversion, refer to the - :doc:`guide ` - and an example `tutorial `__ - on this topic. + For more details on conversion, refer to the + :doc:`guide ` + and an example `tutorial `__ + on this topic. * The ``read_model()`` and ``compile_model()`` methods: @@ -125,8 +125,8 @@ Here are code examples of how to use these methods with different model formats: ov_model = read_model("saved_model.pb") compiled_model = core.compile_model(ov_model, "AUTO") - For a guide on how to run inference, see how to - :doc:`Integrate OpenVINO™ with Your Application `. + For a guide on how to run inference, see how to + :doc:`Integrate OpenVINO™ with Your Application `. For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations `. .. 
tab-item:: C++ @@ -146,7 +146,7 @@ Here are code examples of how to use these methods with different model formats: ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO"); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: C @@ -167,7 +167,7 @@ Here are code examples of how to use these methods with different model formats: ov_compiled_model_t* compiled_model = NULL; ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: CLI @@ -206,9 +206,9 @@ Here are code examples of how to use these methods with different model formats: ov_model = convert_model(".tflite") compiled_model = core.compile_model(ov_model, "AUTO") - For more details on conversion, refer to the - :doc:`guide ` - and an example `tutorial `__ + For more details on conversion, refer to the + :doc:`guide ` + and an example `tutorial `__ on this topic. @@ -239,7 +239,7 @@ Here are code examples of how to use these methods with different model formats: compiled_model = core.compile_model(".tflite", "AUTO") - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. @@ -258,7 +258,7 @@ Here are code examples of how to use these methods with different model formats: ov::CompiledModel compiled_model = core.compile_model(".tflite", "AUTO"); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: C @@ -277,7 +277,7 @@ Here are code examples of how to use these methods with different model formats: ov_compiled_model_t* compiled_model = NULL; ov_core_compile_model_from_file(core, ".tflite", "AUTO", 0, &compiled_model); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: CLI @@ -297,7 +297,7 @@ Here are code examples of how to use these methods with different model formats: mo --input_model .tflite - For details on the conversion, refer to the + For details on the conversion, refer to the :doc:`article `. .. tab-item:: ONNX @@ -324,9 +324,9 @@ Here are code examples of how to use these methods with different model formats: ov_model = convert_model(".onnx") compiled_model = core.compile_model(ov_model, "AUTO") - For more details on conversion, refer to the - :doc:`guide ` - and an example `tutorial `__ + For more details on conversion, refer to the + :doc:`guide ` + and an example `tutorial `__ on this topic. @@ -445,9 +445,9 @@ Here are code examples of how to use these methods with different model formats: ov_model = convert_model(".pdmodel") compiled_model = core.compile_model(ov_model, "AUTO") - For more details on conversion, refer to the - :doc:`guide ` - and an example `tutorial `__ + For more details on conversion, refer to the + :doc:`guide ` + and an example `tutorial `__ on this topic. 
* The ``read_model()`` method: @@ -477,7 +477,7 @@ Here are code examples of how to use these methods with different model formats: compiled_model = core.compile_model(".pdmodel", "AUTO") - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: C++ @@ -495,7 +495,7 @@ Here are code examples of how to use these methods with different model formats: ov::CompiledModel compiled_model = core.compile_model(".pdmodel", "AUTO"); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: C @@ -514,7 +514,7 @@ Here are code examples of how to use these methods with different model formats: ov_compiled_model_t* compiled_model = NULL; ov_core_compile_model_from_file(core, ".pdmodel", "AUTO", 0, &compiled_model); - For a guide on how to run inference, see how to + For a guide on how to run inference, see how to :doc:`Integrate OpenVINO™ with Your Application `. .. tab-item:: CLI @@ -538,8 +538,8 @@ Here are code examples of how to use these methods with different model formats: :doc:`article `. -**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference. -As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**, +**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference. +As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**, converting them to ONNX for use with OpenVINO should be considered the default path. diff --git a/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md new file mode 100644 index 00000000000000..d0362bd904d6d3 --- /dev/null +++ b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -0,0 +1,98 @@ +# Conversion Parameters {#openvino_docs_OV_Converter_UG_Conversion_Options} + +@sphinxdirective + +.. _deep learning model optimizer: + +.. meta:: + :description: Model Conversion API provides several parameters to adjust model conversion. + +This document describes all available parameters for ``openvino.convert_model``, ``ovc``, and ``openvino.save_model`` without focusing on a particular framework model format. Use this information for your reference as a common description of the conversion API capabilities in general. Part of the options can be not relevant to some specific frameworks. Use :doc:`Supported Model Formats ` page for more dedicated framework-dependent tutorials. + +In most cases when it is required to convert a model the following simple syntax can be used: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model('path_to_your_model') + # or, when model is a Python model object + ov_model = ov.convert_model(model) + + # Optionally adjust model by embedding pre-post processing here... + + ov.save_model(ov_model, 'model.xml') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc path_to_your_model + +Providing just a path to the model or model object as ``openvino.convert_model`` argument is frequently enough to make a successful conversion. 
However, depending on the model topology and original deep learning framework, additional parameters may be required, which are described below.
+
+- ``example_input`` parameter available in Python ``openvino.convert_model`` only is intended to trace the model to obtain its graph representation. This parameter is crucial for converting PyTorch models and may sometimes be required for TensorFlow models. For more details, refer to the :doc:`PyTorch Model Conversion ` or :doc:`TensorFlow Model Conversion `.
+
+- ``input`` parameter to set or override shapes for model inputs. It configures dynamic and static dimensions in model inputs depending on your inference requirements. For more information on this parameter, refer to the :doc:`Setting Input Shapes ` guide.
+
+- ``output`` parameter to select one or multiple outputs from the original model. This is useful when the model has outputs that are not required for inference in a deployment scenario. By specifying only the necessary outputs, you can create a more compact model that infers faster.
+
+- ``compress_to_fp16`` parameter, provided by the ``ovc`` CLI tool and the ``openvino.save_model`` Python function, gives control over the compression of model weights to the FP16 format when saving an OpenVINO model to IR. This option is enabled by default, which means all produced IRs are saved using the FP16 data type for weights. This saves up to 2x storage space for the model file and, in most cases, does not sacrifice model accuracy. In case it does affect accuracy, the compression can be disabled by setting this flag to ``False``:
+
+.. tab-set::
+
+    .. tab-item:: Python
+       :sync: py
+
+       .. code-block:: py
+          :force:
+
+          import openvino as ov
+
+          ov_model = ov.convert_model(original_model)
+          ov.save_model(ov_model, 'model.xml', compress_to_fp16=False)
+
+    .. tab-item:: CLI
+       :sync: cli
+
+       .. code-block:: sh
+
+          ovc path_to_your_model --compress_to_fp16=False
+
+For details on how plugins handle compressed ``FP16`` models, see
+:doc:`Working with devices `.
+
+.. note::
+
+   ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
+   Refer to the :doc:`Post-training optimization ` guide for more
+   information about that.
+
+- ``extension`` parameter which makes it possible to convert models containing operations that are not supported by OpenVINO out-of-the-box. It requires implementing an OpenVINO extension first; refer to the :doc:`Frontend Extensions ` guide.
+
+- ``share_weights`` parameter with the default value ``True`` allows reusing memory with original weights. For models loaded in Python and then passed to ``openvino.convert_model``, it means that the OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (the Python object cannot be deallocated and the original model file cannot be deleted) for the whole lifetime of the OpenVINO model. If this is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``.
+
+.. note:: ``ovc`` doesn't have the ``share_weights`` option and always uses sharing to reduce conversion time and consume less memory during the conversion.
+
+- ``output_model`` parameter in ``ovc`` and ``openvino.save_model`` specifies the name for the output ``.xml`` file with the resulting OpenVINO IR.
The accompanying ``.bin`` file name will be generated automatically by replacing ``.xml`` extension with ``.bin`` extension. The value of ``output_model`` must end with ``.xml`` extension. For ``ovc`` command line tool, ``output_model`` can also contain a name of a directory. In this case, the resulting OpenVINO IR files will be put into that directory with a base name of ``.xml`` and ``.bin`` files matching the original model base name passed to ``ovc`` as a parameter. For example, when calling ``ovc your_model.onnx --output_model directory_name``, files ``directory_name/your_model.xml`` and ``directory_name/your_model.bin`` will be created. If ``output_model`` is not used, then the current directory is used as a destination directory. + +.. note:: ``openvino.save_model`` doesn't support a directory for ``output_model`` parameter value because ``openvino.save_model`` gets OpenVINO model object represented in a memory and there is no original model file name available for output file name generation. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``. + +- ``verbose`` parameter activates extra diagnostics printed to the standard output. Use for debugging purposes in case there is an issue with the conversion and to collect information for better bug reporting to OpenVINO team. + +.. note:: Weights sharing doesn't equally work for all the supported model formats. The value of this flag is considered as a hint for the conversion API, and actual sharing is used only if it is implemented and possible for a particular model representation. + +You can always run ``ovc -h`` or ``ovc --help`` to recall all the supported parameters for ``ovc``. + +Use ``ovc --version`` to check the version of OpenVINO package installed. + +@endsphinxdirective + + diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md new file mode 100644 index 00000000000000..37bfd58f87b01c --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md @@ -0,0 +1,59 @@ +# Converting an ONNX Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX} + +@sphinxdirective + +.. meta:: + :description: Learn how to convert a model from the + ONNX format to the OpenVINO Model. + +Introduction to ONNX +#################### + +`ONNX `__ is a representation format for deep learning models that enables AI developers to easily transfer models between different frameworks. + +.. note:: An ONNX model file can be loaded by ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods by OpenVINO runtime API without the need to prepare an OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if the model load latency is important for the inference application. + +Converting an ONNX Model +######################## + +This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format. + +For model conversion, you need an ONNX model either directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format. + +To convert an ONNX model, run model conversion with the path to the input model ``.onnx`` file: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov.convert_model('your_model_file.onnx') + + .. 
tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc your_model_file.onnx + +External Data Files +################### + +ONNX models may consist of multiple files when the model size exceeds 2GB allowed by Protobuf. According to this `ONNX article `__, instead of a single file, the model is represented as one file with ``.onnx`` extension and multiple separate files with external data. These data files are located in the same directory as the main ``.onnx`` file or in another directory. + +OpenVINO model conversion API supports ONNX models with external data representation. In this case, you only need to pass the main file with ``.onnx`` extension as ``ovc`` or ``openvino.convert_model`` parameter. The other files will be found and loaded automatically during the model conversion. The resulting OpenVINO model, represented as an IR in the filesystem, will have the usual structure with a single ``.xml`` file and a single ``.bin`` file, where all the original model weights are copied and packed together. + +Supported ONNX Layers +##################### + +For the list of supported standard layers, refer to the :doc:`Supported Operations ` page. + +Additional Resources +#################### + +Check out more examples of model conversion in :doc:`interactive Python tutorials `. + +@endsphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md new file mode 100644 index 00000000000000..ad2aa8798738ff --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md @@ -0,0 +1,201 @@ +# Converting a PaddlePaddle Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle} + +@sphinxdirective + +.. meta:: + :description: Learn how to convert a model from the + PaddlePaddle format to the OpenVINO Model. + +This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using OpenVINO model conversion API. The instructions are different depending on the PaddlePaddle model format. + +.. note:: PaddlePaddle model serialized in a file can be loaded by ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application. + +Converting PaddlePaddle Model Files +################################### + +PaddlePaddle inference model includes ``.pdmodel`` (storing model structure) and ``.pdiparams`` (storing model weight). For details on how to export a PaddlePaddle inference model, refer to the `Exporting PaddlePaddle Inference Model `__ Chinese guide. + +To convert a PaddlePaddle model, use the ``ovc`` or ``openvino.convert_model`` and specify the path to the input ``.pdmodel`` model file: + + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov.convert_model('your_model_file.pdmodel') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc your_model_file.pdmodel + +**For example**, this command converts a yolo v3 PaddlePaddle model to OpenVINO IR model: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov.convert_model('yolov3.pdmodel') + + .. tab-item:: CLI + :sync: cli + + .. 
code-block:: sh + + ovc yolov3.pdmodel + +Converting PaddlePaddle Python Model +#################################### + +Model conversion API supports passing PaddlePaddle models directly in Python without saving them to files in the user code. + +Following PaddlePaddle model object types are supported: + +* ``paddle.hapi.model.Model`` +* ``paddle.fluid.dygraph.layers.Layer`` +* ``paddle.fluid.executor.Executor`` + +Some PaddlePaddle models may require setting ``example_input`` or ``output`` for conversion as shown in the examples below: + +* Example of converting ``paddle.hapi.model.Model`` format model: + + .. code-block:: py + :force: + + import paddle + import openvino as ov + + # create a paddle.hapi.model.Model format model + resnet50 = paddle.vision.models.resnet50() + x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x') + y = paddle.static.InputSpec([1,1000], 'float32', 'y') + + model = paddle.Model(resnet50, x, y) + + # convert to OpenVINO IR format + ov_model = ov.convert_model(model) + + ov.save_model(ov_model, "resnet50.xml") + +* Example of converting ``paddle.fluid.dygraph.layers.Layer`` format model: + + ``example_input`` is required while ``output`` is optional, which accept the following formats: + + ``list`` with tensor (``paddle.Tensor``) or InputSpec (``paddle.static.input.InputSpec``) + + .. code-block:: py + :force: + + import paddle + import openvino as ov + + # create a paddle.fluid.dygraph.layers.Layer format model + model = paddle.vision.models.resnet50() + x = paddle.rand([1,3,224,224]) + + # convert to OpenVINO IR format + ov_model = ov.convert_model(model, example_input=[x]) + +* Example of converting ``paddle.fluid.executor.Executor`` format model: + + ``example_input`` and ``output`` are required, which accept the following formats: + + ``list`` or ``tuple`` with variable(``paddle.static.data``) + + .. code-block:: py + :force: + + import paddle + import openvino as ov + + paddle.enable_static() + + # create a paddle.fluid.executor.Executor format model + x = paddle.static.data(name="x", shape=[1,3,224]) + y = paddle.static.data(name="y", shape=[1,3,224]) + relu = paddle.nn.ReLU() + sigmoid = paddle.nn.Sigmoid() + y = sigmoid(relu(x)) + + exe = paddle.static.Executor(paddle.CPUPlace()) + exe.run(paddle.static.default_startup_program()) + + # convert to OpenVINO IR format + ov_model = ov.convert_model(exe, example_input=[x], output=[y]) + +Supported PaddlePaddle Layers +############################# + +For the list of supported standard layers, refer to the :doc:`Supported Operations ` page. + +Officially Supported PaddlePaddle Models +######################################## + +The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1): + +.. list-table:: + :widths: 20 25 55 + :header-rows: 1 + + * - Model Name + - Model Type + - Description + * - ppocr-det + - optical character recognition + - Models are exported from `PaddleOCR `_. Refer to `READ.md `_. + * - ppocr-rec + - optical character recognition + - Models are exported from `PaddleOCR `_. Refer to `READ.md `_. + * - ResNet-50 + - classification + - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_. + * - MobileNet v2 + - classification + - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_. + * - MobileNet v3 + - classification + - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_. + * - BiSeNet v2 + - semantic segmentation + - Models are exported from `PaddleSeg `_. 
Refer to `model_export.md `_. + * - DeepLab v3 plus + - semantic segmentation + - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_. + * - Fast-SCNN + - semantic segmentation + - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_. + * - OCRNET + - semantic segmentation + - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_. + * - Yolo v3 + - detection + - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_. + * - ppyolo + - detection + - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_. + * - MobileNetv3-SSD + - detection + - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_. + * - U-Net + - semantic segmentation + - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_. + * - BERT + - language representation + - Models are exported from `PaddleNLP `_. Refer to `README.md `_. + +Additional Resources +#################### + +Check out more examples of model conversion in :doc:`interactive Python tutorials `. + +@endsphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md new file mode 100644 index 00000000000000..cc6126cffd6043 --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md @@ -0,0 +1,155 @@ +# Converting a PyTorch Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch} + +@sphinxdirective + +.. meta:: + :description: Learn how to convert a model from the + PyTorch format to the OpenVINO Model. + +This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO Model using the ``openvino.convert_model`` function. + +.. note:: + + In the examples below the ``openvino.save_model`` function is not used because there are no PyTorch-specific details regarding the usage of this function. In all examples, the converted OpenVINO model can be saved to IR by calling ``ov.save_model(ov_model, 'model.xml')`` as usual. + +Here is the simplest example of PyTorch model conversion using a model from ``torchvision``: + +.. code-block:: py + :force: + + import torchvision + import torch + import openvino as ov + + model = torchvision.models.resnet50(pretrained=True) + ov_model = ov.convert_model(model) + +``openvino.convert_model`` function supports the following PyTorch model object types: + +* ``torch.nn.Module`` derived classes +* ``torch.jit.ScriptModule`` +* ``torch.jit.ScriptFunction`` + +When passing a ``torch.nn.Module`` derived class object as an input model, converting PyTorch models often requires the ``example_input`` parameter to be specified in the ``openvino.convert_model`` function call. Internally it triggers the model tracing during the model conversion process, using the capabilities of the ``torch.jit.trace`` function. + +The use of ``example_input`` can lead to a better quality of the resulting OpenVINO model in terms of correctness and performance compared to converting the same original model without specifying ``example_input``. While the necessity of ``example_input`` depends on the implementation details of a specific PyTorch model, it is recommended to always set the ``example_input`` parameter when it is available. + +The value for the ``example_input`` parameter can be easily derived from knowing the input tensor's element type and shape. 
While it may not be suitable for all cases, random numbers can frequently serve this purpose effectively:
+
+.. code-block:: py
+   :force:
+
+   import torchvision
+   import torch
+   import openvino as ov
+
+   model = torchvision.models.resnet50(pretrained=True)
+   ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
+
+In practice, the code to evaluate or test the PyTorch model is usually provided with the model itself and can be used to generate a proper ``example_input`` value. A modified example of using the ``resnet50`` model from ``torchvision`` is presented below. It demonstrates how to switch inference in an existing PyTorch application to OpenVINO and how to obtain a value for ``example_input``:
+
+.. code-block:: py
+   :force:
+
+   from torchvision.io import read_image
+   from torchvision.models import resnet50, ResNet50_Weights
+   import requests, PIL, io, torch
+
+   # Get a picture of a cat from the web:
+   img = PIL.Image.open(io.BytesIO(requests.get("https://placekitten.com/200/300").content))
+
+   # Torchvision model and input data preparation from https://pytorch.org/vision/stable/models.html
+
+   weights = ResNet50_Weights.DEFAULT
+   model = resnet50(weights=weights)
+   model.eval()
+   preprocess = weights.transforms()
+   batch = preprocess(img).unsqueeze(0)
+
+   # PyTorch model inference and post-processing
+
+   prediction = model(batch).squeeze(0).softmax(0)
+   class_id = prediction.argmax().item()
+   score = prediction[class_id].item()
+   category_name = weights.meta["categories"][class_id]
+   print(f"{category_name}: {100 * score:.1f}% (with PyTorch)")
+
+   # OpenVINO model preparation and inference with the same post-processing
+
+   import openvino as ov
+   compiled_model = ov.compile_model(ov.convert_model(model, example_input=batch))
+
+   prediction = torch.tensor(compiled_model(batch)[0]).squeeze(0).softmax(0)
+   class_id = prediction.argmax().item()
+   score = prediction[class_id].item()
+   category_name = weights.meta["categories"][class_id]
+   print(f"{category_name}: {100 * score:.1f}% (with OpenVINO)")
+
+Check out more examples in :doc:`interactive Python tutorials `.
+
+Supported Input Parameter Types
+###############################
+
+If the model has a single input, the following input types are supported in ``example_input``:
+
+* ``openvino.runtime.Tensor``
+* ``torch.Tensor``
+* ``tuple`` or any nested combination of tuples
+
+If a model has multiple inputs, the input values are combined in a ``list``, a ``tuple``, or a ``dict``:
+
+* values in a ``list`` or ``tuple`` should be passed in the same order as the original model specifies,
+* ``dict`` keys correspond to the original model argument names.
+
+Enclosing in ``list``, ``tuple`` or ``dict`` can be used for a single input as well as for multiple inputs.
+
+If a model has a single input parameter and the type of this input is a ``tuple``, it should always be passed enclosed in an extra ``list``, ``tuple`` or ``dict``, as in the case of multiple inputs. This is required to eliminate ambiguity between ``model((a, b))`` and ``model(a, b)`` in this case.
+
+Non-tensor Data Types
+#####################
+
+When a non-tensor data type, such as a ``tuple`` or ``dict``, appears in a model input or output, it is flattened. The flattening means that each element within the ``tuple`` will be represented as a separate input or output. The same is true for ``dict`` values, where the keys of the ``dict`` are used to form a model input/output name.
The original non-tensor input or output is replaced by one or multiple new inputs or outputs resulting from this flattening process. This flattening procedure is applied recursively in the case of nested ``tuples`` and ``dicts`` until it reaches the assumption that the most nested data type is a tensor. + +For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ...``e`` are tensors, it means that the original model has two inputs. The first is a tensor ``a``, and the second is a tuple ``(b, c, (d, e))``, containing two tensors ``b`` and ``c`` and a nested tuple ``(d, e)``. Then the resulting OpenVINO model will have signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model. + +Flattening of a ``dict`` is supported for outputs only. If your model has an input of type ``dict``, you will need to decompose the ``dict`` to one or multiple tensor inputs by modifying the original model signature or making a wrapper model on top of the original model. This approach hides the dictionary from the model signature and allows it to be processed inside the model successfully. + +.. note:: + + An important consequence of flattening is that only ``tuple`` and ``dict`` with a fixed number of elements and key values are supported. The structure of such inputs should be fully described in the ``example_input`` parameter of ``convert_model``. The flattening on outputs should be reproduced with the given ``example_input`` and cannot be changed once the conversion is done. + +Check out more examples of model conversion with non-tensor data types in the following tutorials: + +* `Video Subtitle Generation using Whisper and OpenVINO™ `__ +* `Visual Question Answering and Image Captioning using BLIP and OpenVINO `__ + + +Exporting a PyTorch Model to ONNX Format +######################################## + +An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with ``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to OpenVINO Model with ``openvino.convert_model``. It can be considered as a backup solution if a model cannot be converted directly from PyTorch to OpenVINO as described in the above chapters. Converting through ONNX can be more expensive in terms of code, conversion time, and allocated memory. + +1. Refer to the `Exporting PyTorch models to ONNX format `__ guide to learn how to export models from PyTorch to ONNX. +2. Follow :doc:`Convert the ONNX model ` chapter to produce OpenVINO model. + +Here is an illustration of using these two steps together: + +.. code-block:: py + :force: + + import torchvision + import torch + import openvino as ov + + model = torchvision.models.resnet50(pretrained=True) + # 1. Export to ONNX + torch.onnx.export(model, (torch.rand(1, 3, 224, 224), ), 'model.onnx') + # 2. Convert to OpenVINO + ov_model = ov.convert_model('model.onnx') + +.. note:: + + As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9 which is used by default. + It is recommended to export models to opset 11 or higher when export to default opset 9 is not working. In that case, use ``opset_version`` option of the ``torch.onnx.export``. For more information about ONNX opset, refer to the `Operator Schemas `__ page. 
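+
+As an illustration, a minimal sketch of the two-step export with an explicitly selected opset is shown below; the ``resnet50`` model, the input shape, and opset 11 are only assumptions made for this example:
+
+.. code-block:: py
+   :force:
+
+   import torchvision
+   import torch
+   import openvino as ov
+
+   model = torchvision.models.resnet50(pretrained=True)
+   # export with an explicitly chosen ONNX opset instead of the default one (opset 11 assumed here)
+   torch.onnx.export(model, (torch.rand(1, 3, 224, 224), ), 'model.onnx', opset_version=11)
+   # convert the exported ONNX file to an OpenVINO model
+   ov_model = ov.convert_model('model.onnx')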
+ +@endsphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md new file mode 100644 index 00000000000000..d2c8f1418c0815 --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md @@ -0,0 +1,331 @@ +# Converting a TensorFlow Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow} + +@sphinxdirective + +.. meta:: + :description: Learn how to convert a model from a + TensorFlow format to the OpenVINO Model. + +This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X. + +.. note:: TensorFlow models can be loaded by `openvino.Core.read_model` or `openvino.Core.compile_model` methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application. + +.. note:: Examples below that convert TensorFlow models from a file, do not require any version of TensorFlow to be installed on the system, except in cases when the `tensorflow` module is imported explicitly. + +Converting TensorFlow 2 Models +############################## + +TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5). +Below are the instructions on how to convert each of them. + +SavedModel Format ++++++++++++++++++ + +A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets`` inside. +To convert a model, run conversion with the directory as the model argument: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + ov_model = ov.convert_model('path_to_saved_model_dir') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc path_to_saved_model_dir + +Keras H5 Format ++++++++++++++++ + +If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the +SavedModel format. Here is an example of how to do it: + +.. code-block:: py + :force: + + import tensorflow as tf + model = tf.keras.models.load_model('model.h5') + tf.saved_model.save(model,'model') + +Converting a Keras H5 model with a custom layer to the SavedModel format requires special considerations. +For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.py`` is converted as follows: + +.. code-block:: py + :force: + + import tensorflow as tf + from custom_layer import CustomLayer + model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) + tf.saved_model.save(model,'model') + +Then follow the above instructions for the SavedModel format. + +.. note:: + + Avoid using any workarounds or hacks to resave TensorFlow 2 models into TensorFlow 1 formats. + +Converting TensorFlow 1 Models +############################### + +Converting Frozen Model Format ++++++++++++++++++++++++++++++++ + +To convert a TensorFlow model, run model conversion with the path to the input model ``*.pb*`` file: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov_model = ov.convert_model('your_model_file.pb') + + .. 
tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc your_model_file.pb + + +Converting Non-Frozen Model Formats ++++++++++++++++++++++++++++++++++++ + +There are three ways to store non-frozen TensorFlow models. + +1. **SavedModel format**. In this case, a model consists of a special directory with a ``.pb`` file +and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README `__ file in the TensorFlow repository. +To convert such TensorFlow model, run the conversion similarly to other model formats and pass a path to the directory as a model argument: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov_model = ov.convert_model('path_to_saved_model_dir') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc path_to_saved_model_dir + +2. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``. +If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section. +To convert the model with the inference graph in ``.pb`` format, provide paths to both files as an argument for ``ovc`` or ``openvino.convert_model``: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov_model = ov.convert_model(['path_to_inference_graph.pb', 'path_to_checkpoint_file.ckpt']) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc path_to_inference_graph.pb path_to_checkpoint_file.ckpt + +To convert the model with the inference graph in the ``.pbtxt`` format, specify the path to ``.pbtxt`` file instead of the ``.pb`` file. The conversion API automatically detects the format of the provided file, there is no need to specify the model file format explicitly when calling ``ovc`` or ``openvino.convert_model`` in all examples in this document. + +3. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: ``model_name.meta``, ``model_name.index``, +``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional). +To convert such a TensorFlow model, run the conversion providing a path to `.meta` file as an argument: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov_model = ov.convert_model('path_to_meta_graph.meta') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc path_to_meta_graph.meta + + +Freezing Custom Models in Python +++++++++++++++++++++++++++++++++ + +When a model is defined in Python code, you must create an inference graph file. Graphs are usually built in a form +that allows model training. That means all trainable parameters are represented as variables in the graph. +To be able to use such a graph with the model conversion API, it should be frozen first before passing to the ``openvino.convert_model`` function: + +.. code-block:: py + :force: + + import tensorflow as tf + from tensorflow.python.framework import graph_io + frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"]) + + import openvino as ov + ov_model = ov.convert_model(frozen) + +Where: + +* ``sess`` is the instance of the TensorFlow Session object where the network topology is defined. 
+* ``["name_of_the_output_node"]`` is the list of output node names in the graph; ``frozen`` graph will include only those nodes from the original ``sess.graph_def`` that are directly or indirectly used to compute given output nodes. The ``'name_of_the_output_node'`` is an example of a possible output node name. You should derive the names based on your own graph. + +Converting TensorFlow Models from Memory Using Python API +############################################################ + +Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory. + +* ``tf.keras.Model`` + + .. code-block:: py + :force: + + import openvino as ov + model = tf.keras.applications.ResNet50(weights="imagenet") + ov_model = ov.convert_model(model) + +* ``tf.keras.layers.Layer``. Requires saving model to TensorFlow ``saved_model`` file format and then loading to ``openvino.convert_model``. Saving to the file and then restoring is required due to a known bug in ``openvino.convert_model`` that ignores model signature. + + .. code-block:: py + :force: + + import tensorflow_hub as hub + import openvino as ov + + model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5") + model.build([None, 224, 224, 3]) + model.save('mobilenet_v1_100_224') # use a temporary directory + + ov_model = ov.convert_model('mobilenet_v1_100_224') + +* ``tf.Module``. Requires setting shapes in ``input`` parameter. + + .. code-block:: py + :force: + + import tensorflow as tf + import openvino as ov + + class MyModule(tf.Module): + def __init__(self, name=None): + super().__init__(name=name) + self.constant1 = tf.constant(5.0, name="var1") + self.constant2 = tf.constant(1.0, name="var2") + def __call__(self, x): + return self.constant1 * x + self.constant2 + + model = MyModule(name="simple_module") + ov_model = ov.convert_model(model, input=[-1]) + +.. note:: There is a known bug in ``openvino.convert_model`` on using ``tf.Variable`` nodes in the model graph. The results of the conversion of such models are unpredictable. It is recommended to save a model with ``tf.Variable`` into TensorFlow Saved Model format and load it with `openvino.convert_model`. + +* ``tf.compat.v1.Graph`` + + .. code-block:: py + :force: + + with tf.compat.v1.Session() as sess: + inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') + inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') + output = tf.nn.relu(inp1 + inp2, name='Relu') + tf.compat.v1.global_variables_initializer() + model = sess.graph + + import openvino as ov + ov_model = ov.convert_model(model) + +* ``tf.compat.v1.GraphDef`` + + .. code-block:: py + :force: + + with tf.compat.v1.Session() as sess: + inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') + inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') + output = tf.nn.relu(inp1 + inp2, name='Relu') + tf.compat.v1.global_variables_initializer() + model = sess.graph_def + + import openvino as ov + ov_model = ov.convert_model(model) + +* ``tf.function`` + + .. code-block:: py + :force: + + @tf.function( + input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32), + tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)]) + def func(x, y): + return tf.nn.sigmoid(tf.nn.relu(x + y)) + + import openvino as ov + ov_model = ov.convert_model(func) + +* ``tf.compat.v1.session`` + + .. 
code-block:: py + :force: + + with tf.compat.v1.Session() as sess: + inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1') + inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2') + output = tf.nn.relu(inp1 + inp2, name='Relu') + tf.compat.v1.global_variables_initializer() + + import openvino as ov + ov_model = ov.convert_model(sess) + +* ``tf.train.checkpoint`` + + .. code-block:: py + :force: + + model = tf.keras.Model(...) + checkpoint = tf.train.Checkpoint(model) + save_path = checkpoint.save(save_directory) + # ... + checkpoint.restore(save_path) + + import openvino as ov + ov_model = ov.convert_model(checkpoint) + +Supported TensorFlow and TensorFlow 2 Keras Layers +################################################## + +For the list of supported standard layers, refer to the :doc:`Supported Operations ` page. + +Summary +####### + +In this document, you learned: + +* Basic information about how the model conversion API works with TensorFlow models. +* Which TensorFlow models are supported. +* How to freeze a TensorFlow model. + +@endsphinxdirective + + diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md new file mode 100644 index 00000000000000..e25795c95a4b1f --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md @@ -0,0 +1,42 @@ +# Converting a TensorFlow Lite Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite} + +@sphinxdirective + +.. meta:: + :description: Learn how to convert a model from a + TensorFlow Lite format to the OpenVINO Model. + + +To convert an ONNX model, run model conversion with the path to the ``.tflite`` model file: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + + import openvino as ov + ov.convert_model('your_model_file.tflite') + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc your_model_file.tflite + +.. note:: TensorFlow Lite model file can be loaded by ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application. + +Supported TensorFlow Lite Layers +################################### + +For the list of supported standard layers, refer to the :doc:`Supported Operations ` page. + +Supported TensorFlow Lite Models +################################### + +More than eighty percent of public TensorFlow Lite models are supported from open sources `TensorFlow Hub `__ and `MediaPipe `__. +Unsupported models usually have custom TensorFlow Lite operations. + +@endsphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md b/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md new file mode 100644 index 00000000000000..24fa33c17f4a94 --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md @@ -0,0 +1,141 @@ +# Setting Input Shapes {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model} + +With model conversion API you can increase your model's efficiency by providing an additional shape definition using the ``input`` parameter. + +@sphinxdirective + +.. 
meta:: + :description: Learn how to increase the efficiency of a model by providing an additional shape definition with the ``input`` parameter of ``openvino.convert_model`` and ``ovc``. + +.. _when_to_specify_input_shapes: + +Specifying Shapes in the ``input`` Parameter +##################################################### + +``openvino.convert_model`` supports conversion of models with dynamic input shapes that contain undefined dimensions. +However, if the shape of data is not going to change from one inference request to another, +it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs. +Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption. +To set up static shapes, model conversion API provides the ``input`` parameter. +For more information on changing input shapes in runtime, refer to the :doc:`Changing input shapes ` guide. +To learn more about dynamic shapes in runtime, refer to the :doc:`Dynamic Shapes ` guide. + +The OpenVINO Runtime API may present certain limitations in inferring models with undefined dimensions on some hardware. See the :doc:`Features support matrix ` for reference. +In this case, the ``input`` parameter and the :doc:`reshape method ` can help to resolve undefined dimensions. + +For example, run model conversion for the TensorFlow MobileNet model with the single input +and specify the input shape of ``[2,300,300,3]``: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + ov_model = ov.convert_model("MobileNet.pb", input=[2, 300, 300, 3]) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc MobileNet.pb --input [2,300,300,3] + +If a model has multiple inputs, the input shape should be specified in ``input`` parameter as a list. In ``ovc``, this is a command separate list, and in ``openvino.convert_model`` this is a Python list or tuple with number of elements matching the number of inputs in the model. Use input names from the original model to define the mapping between inputs and shapes specified. +The following example demonstrates the conversion of the ONNX OCR model with a pair of inputs ``data`` and ``seq_len`` +and specifies shapes ``[3,150,200,1]`` and ``[3]`` for them respectively: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + ov_model = ov.convert_model("ocr.onnx", input=[("data", [3,150,200,1]), ("seq_len", [3])]) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc ocr.onnx --input data[3,150,200,1],seq_len[3] + +If the order of inputs is defined in the input model and the order is known for the user, names could be omitted. In this case, it is important to specify shapes in the same order of input model inputs: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + ov_model = ov.convert_model("ocr.onnx", input=([3,150,200,1], [3])) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc ocr.onnx --input [3,150,200,1],[3] + +Whether the model has a specified order of inputs depends on the original framework. Usually, it is convenient to set shapes without specifying the names of the parameters in the case of PyTorch model conversion because a PyTorch model is considered as a callable that usually accepts positional parameters. 
On the other hand, names of inputs are convenient when converting models from model files, because naming of inputs is a good practice for many frameworks that serialize models to files. + +The ``input`` parameter allows overriding original input shapes if it is supported by the model topology. +Shapes with dynamic dimensions in the original model can be replaced with static shapes for the converted model, and vice versa. +The dynamic dimension can be marked in model conversion API parameter as ``-1`` or ``?`` when using ``ovc``. +For example, launch model conversion for the ONNX OCR model and specify dynamic batch dimension for inputs: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + ov_model = ov.convert_model("ocr.onnx", input=[("data", [-1, 150, 200, 1]), ("seq_len", [-1])]) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc ocr.onnx --input "data[?,150,200,1],seq_len[?]" + +To optimize memory consumption for models with undefined dimensions in run-time, model conversion API provides the capability to define boundaries of dimensions. +The boundaries of undefined dimension can be specified with ellipsis in the command line or with ``openvino.Dimension`` class in Python. +For example, launch model conversion for the ONNX OCR model and specify a boundary for the batch dimension 1..3, which means that the input tensor will have batch dimension minimum 1 and maximum 3 in inference: + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. code-block:: py + :force: + + import openvino as ov + batch_dim = ov.Dimension(1, 3) + ov_model = ov.convert_model("ocr.onnx", input=[("data", [batch_dim, 150, 200, 1]), ("seq_len", [batch_dim])]) + + .. tab-item:: CLI + :sync: cli + + .. code-block:: sh + + ovc ocr.onnx --input data[1..3,150,200,1],seq_len[1..3] + +In practice, not every model is designed in a way that allows change of input shapes. An attempt to change the shape for such models may lead to an exception during model conversion, later in model inference, or even to wrong results of inference without explicit exception raised. A knowledge about model topology is required to set shapes appropriately. +For more information about shape follow the :doc:`inference troubleshooting ` +and :ref:`ways to relax shape inference flow ` guides. + +@endsphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md new file mode 100644 index 00000000000000..e550d515b753ad --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md @@ -0,0 +1,634 @@ +# Transition from Legacy Conversion API {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition} + +@sphinxdirective + +.. meta:: + :description: Transition guide from MO / mo.convert_model() to OVC / ov.convert_model(). + +.. toctree:: + :maxdepth: 1 + :hidden: + + openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide + +In 2023.1 OpenVINO release a new OVC (OpenVINO Model Converter) tool has been introduced with the corresponding Python API: ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent +a lightweight alternative of ``mo`` and ``openvino.tools.mo.convert_model`` which are considered legacy API now. In this article, all the differences between ``mo`` and ``ovc`` are summarized and the transition guide from the legacy API to the new API is provided. 
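+
+For a quick impression of the difference, below is a minimal sketch of converting and saving the same model with both APIs. The file names ``model.onnx`` and ``model.xml`` are placeholders used only for illustration:
+
+.. code-block:: py
+   :force:
+
+   # Legacy API (now considered deprecated)
+   from openvino.tools import mo
+   from openvino.runtime import serialize
+
+   ov_model = mo.convert_model("model.onnx")
+   serialize(ov_model, "model.xml")
+
+   # New API
+   import openvino as ov
+
+   ov_model = ov.convert_model("model.onnx")
+   ov.save_model(ov_model, "model.xml")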
+ +Parameters Comparison +##################### + +The comparison of parameters between ov.convert_model() / OVC and mo.convert_model() / MO. + +.. list-table:: + :widths: 20 25 55 + :header-rows: 1 + + * - mo.convert_model() / MO + - ov.convert_model() / OVC + - Differences description + * - input_model + - input_model + - Along with model object or path to input model ov.convert_model() accepts list of model parts, for example, the path to TensorFlow weights plus the path to TensorFlow checkpoint. OVC tool accepts an unnamed input model. + * - output_dir + - output_model + - output_model in OVC tool sets both output model name and output directory. + * - model_name + - output_model + - output_model in OVC tool sets both output model name and output directory. + * - input + - input + - ov.convert_model() accepts tuples for setting multiple parameters. OVC tool 'input' does not have type setting and freezing functionality. ov.convert_model() does not allow input cut. + * - output + - output + - ov.convert_model() does not allow output cut. + * - input_shape + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by ``input`` parameter. + * - example_input + - example_input + - No differences. + * - batch + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by model reshape functionality. See details below. + * - mean_values + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - scale_values + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - scale + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - reverse_input_channels + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - source_layout + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - target_layout + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - layout + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - compress_to_fp16 + - compress_to_fp16 + - OVC provides 'compress_to_fp16' for command line tool only, as compression is performed during saving a model to IR (Intermediate Representation). + * - extensions + - extension + - No differences. + * - transform + - N/A + - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below. + * - transformations_config + - N/A + - Not available in ov.convert_model() / OVC. + * - static_shape + - N/A + - Not available in ov.convert_model() / OVC. + * - freeze_placeholder_with_value + - N/A + - Not available in ov.convert_model() / OVC. + * - use_legacy_frontend + - N/A + - Not available in ov.convert_model() / OVC. + * - use_legacy_frontend + - N/A + - Not available in ov.convert_model() / OVC. + * - silent + - verbose + - OVC / ov.convert_model provides 'verbose' parameter instead of 'silent' for printing of detailed conversion information if 'verbose' is set to True. + * - log_level + - N/A + - Not available in ov.convert_model() / OVC. 
+ * - version + - version + - N/A + * - progress + - N/A + - Not available in ov.convert_model() / OVC. + * - stream_output + - N/A + - Not available in ov.convert_model() / OVC. + * - share_weights + - share_weights + - No differences. + * - framework + - N/A + - Not available in ov.convert_model() / OVC. + * - help / -h + - help / -h + - OVC provides help parameter only in command line tool. + * - example_output + - output + - OVC / ov.convert_model 'output' parameter includes capabilities of MO 'example_output' parameter. + * - input_model_is_text + - N/A + - Not available in ov.convert_model() / OVC. + * - input_checkpoint + - input_model + - All supported model formats can be passed to 'input_model'. + * - input_meta_graph + - input_model + - All supported model formats can be passed to 'input_model'. + * - saved_model_dir + - input_model + - All supported model formats can be passed to 'input_model'. + * - saved_model_tags + - N/A + - Not available in ov.convert_model() / OVC. + * - tensorflow_custom_operations_config_update + - N/A + - Not available in ov.convert_model() / OVC. + * - tensorflow_object_detection_api_pipeline_config + - N/A + - Not available in ov.convert_model() / OVC. + * - tensorboard_logdir + - N/A + - Not available in ov.convert_model() / OVC. + * - tensorflow_custom_layer_libraries + - N/A + - Not available in ov.convert_model() / OVC. + * - input_symbol + - N/A + - Not available in ov.convert_model() / OVC. + * - nd_prefix_name + - N/A + - Not available in ov.convert_model() / OVC. + * - pretrained_model_name + - N/A + - Not available in ov.convert_model() / OVC. + * - save_params_from_nd + - N/A + - Not available in ov.convert_model() / OVC. + * - legacy_mxnet_model + - N/A + - Not available in ov.convert_model() / OVC. + * - enable_ssd_gluoncv + - N/A + - Not available in ov.convert_model() / OVC. + * - input_proto + - N/A + - Not available in ov.convert_model() / OVC. + * - caffe_parser_path + - N/A + - Not available in ov.convert_model() / OVC. + * - k + - N/A + - Not available in ov.convert_model() / OVC. + * - disable_omitting_optional + - N/A + - Not available in ov.convert_model() / OVC. + * - enable_flattening_nested_params + - N/A + - Not available in ov.convert_model() / OVC. + * - counts + - N/A + - Not available in ov.convert_model() / OVC. + * - remove_output_softmax + - N/A + - Not available in ov.convert_model() / OVC. + * - remove_memory + - N/A + - Not available in ov.convert_model() / OVC. + +Transition from Legacy API to New API +############################################################################ + +mo.convert_model() provides a wide range of preprocessing parameters. Most of these parameters have analogs in OVC or can be replaced with functionality from ``ov.PrePostProcessor`` class. +Here is the guide to transition from legacy model preprocessing to new API preprocessing. + + +``input_shape`` +################ + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, input_shape=[[1, 3, 100, 100],[1]]) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model, input=[[1, 3, 100, 100],[1]]) + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --input_shape [1,3,100,100],[1] --output_dir OUTPUT_DIR + + - .. 
code-block:: sh + :force: + + ovc MODEL_NAME --input [1,3,100,100],[1] --output_model OUTPUT_MODEL + +``batch`` +########## + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, batch=2) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + input_shape = ov_model.inputs[0].partial_shape + input_shape[0] = 2 # batch size + ov_model.reshape(input_shape) + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --batch 2 --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``mean_values`` +################ + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, mean_values=[0.5, 0.5, 0.5]) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) + prep.input(input_name).preprocess().mean([0.5, 0.5, 0.5]) + ov_model = prep.build() + + There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `. + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --mean_values [0.5,0.5,0.5] --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``scale_values`` +################# + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, scale_values=[255., 255., 255.]) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) + prep.input(input_name).preprocess().scale([255., 255., 255.]) + ov_model = prep.build() + + There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `. + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --scale_values [255,255,255] --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``reverse_input_channels`` +########################### + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, reverse_input_channels=True) + + - .. 
code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) + prep.input(input_name).preprocess().reverse_channels() + ov_model = prep.build() + + There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `. + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --reverse_input_channels --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``source_layout`` +################## + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + import openvino as ov + from openvino.tools import mo + + ov_model = mo.convert_model(model, source_layout={input_name: ov.Layout("NHWC")}) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).model().set_layout(ov.Layout("NHWC")) + ov_model = prep.build() + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --source_layout input_name(NHWC) --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``target_layout`` +################## + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + import openvino as ov + from openvino.tools import mo + + ov_model = mo.convert_model(model, target_layout={input_name: ov.Layout("NHWC")}) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) + ov_model = prep.build() + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --target_layout input_name(NHWC) --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``layout`` +########### + +.. tab-set:: + + .. tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, layout={input_name: mo.LayoutMap("NCHW", "NHWC")}) + + - .. code-block:: py + :force: + + import openvino as ov + + ov_model = ov.convert_model(model) + + prep = ov.preprocess.PrePostProcessor(ov_model) + prep.input(input_name).model().set_layout(ov.Layout("NCHW")) + prep.input(input_name).tensor().set_layout(ov.Layout("NHWC")) + ov_model = prep.build() + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --layout "input_name(NCHW->NHWC)" --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +``transform`` +############## + +.. tab-set:: + + .. 
tab-item:: Python + :sync: py + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: py + :force: + + from openvino.tools import mo + + ov_model = mo.convert_model(model, transform=[('LowLatency2', {'use_const_initializer': False}), 'Pruning', ('MakeStateful', {'param_res_names': {'input_name': 'output_name'}})]) + + - .. code-block:: py + :force: + + import openvino as ov + from openvino._offline_transformations import apply_low_latency_transformation, apply_pruning_transformation, apply_make_stateful_transformation + + ov_model = ov.convert_model(model) + apply_low_latency_transformation(model, use_const_initializer=False) + apply_pruning_transformation(model) + apply_make_stateful_transformation(model, param_res_names={'input_name': 'output_name'}) + + .. tab-item:: CLI + :sync: cli + + .. list-table:: + :header-rows: 1 + + * - Legacy API + - New API + * - .. code-block:: sh + :force: + + mo --input_model MODEL_NAME --transform LowLatency2[use_const_initializer=False],Pruning,MakeStateful[param_res_names={'input_name':'output_name'}] --output_dir OUTPUT_DIR + + - Not available in OVC tool. Please check Python API. + +Supported Frameworks in MO vs OVC +################################# + +ov.convert_model() and OVC tool support conversion from PyTorch, TF, TF Lite, ONNX, PaddlePaddle. +The following frameworks are supported only in MO and mo.convert_model(): Caffe, MxNet, Kaldi. + +@endsphinxdirective + + diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md new file mode 100644 index 00000000000000..75262747e3a9fc --- /dev/null +++ b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md @@ -0,0 +1,33 @@ +# Supported Model Formats {#Supported_Model_Formats} + +@sphinxdirective + +.. toctree:: + :maxdepth: 1 + :hidden: + + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite + openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle + + +**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features. The result of running ``ovc`` CLI tool or ``openvino.save_model`` is OpenVINO IR. All other supported formats can be converted to the IR, refer to the following articles for details on conversion: + +* :doc:`How to convert Pytorch ` +* :doc:`How to convert ONNX ` +* :doc:`How to convert TensorFlow ` +* :doc:`How to convert TensorFlow Lite ` +* :doc:`How to convert PaddlePaddle ` + +To choose the best workflow for your application, read :doc:`Introduction to Model Preparation` + +Refer to the list of all supported conversion options in :doc:`Conversion Parameters ` + +Additional Resources +#################### + +* :doc:`Transition guide from the legacy to new conversion API ` + +@endsphinxdirective diff --git a/docs/get_started/get_started_demos.md b/docs/get_started/get_started_demos.md index 61e6e60c600c7b..0ec61b0f0c3e1f 100644 --- a/docs/get_started/get_started_demos.md +++ b/docs/get_started/get_started_demos.md @@ -3,11 +3,11 @@ @sphinxdirective .. 
meta:: - :description: Learn the details on the workflow of Intel® Distribution of OpenVINO™ + :description: Learn the details on the workflow of Intel® Distribution of OpenVINO™ toolkit, and how to run inference, using provided code samples. -The guide presents a basic workflow for building and running C++ code samples in OpenVINO. Note that these steps will not work with the Python samples. +The guide presents a basic workflow for building and running C++ code samples in OpenVINO. Note that these steps will not work with the Python samples. To get started, you must first install OpenVINO Runtime, install OpenVINO Development tools, and build the sample applications. See the :ref:`Prerequisites ` section for instructions. @@ -40,8 +40,8 @@ Make sure that you also `install OpenCV `. This guide uses the ``googlenet-v1`` model from the Caffe framework, therefore, when you get to Step 4 of the installation, run the following command to install OpenVINO with the Caffe requirements: @@ -76,11 +76,11 @@ You can use one of the following options to find a model suitable for OpenVINO: - Download public or Intel pre-trained models from :doc:`Open Model Zoo ` using :doc:`Model Downloader tool ` - Download from GitHub, Caffe Zoo, TensorFlow Zoo, etc. - Train your own model with machine learning tools - + This guide uses OpenVINO Model Downloader to get pre-trained models. You can use one of the following commands to find a model with this method: * List the models available in the downloader. - + .. code-block:: sh omz_info_dumper --print_all @@ -115,21 +115,21 @@ This guide used the following model to run the Image Classification Sample: :sync: windows .. code-block:: bat - + omz_downloader --name googlenet-v1 --output_dir %USERPROFILE%\Documents\models .. tab-item:: Linux :sync: linux .. code-block:: sh - + omz_downloader --name googlenet-v1 --output_dir ~/models - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + omz_downloader --name googlenet-v1 --output_dir ~/models @@ -139,54 +139,54 @@ This guide used the following model to run the Image Classification Sample: .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + ################|| Downloading models ||################ - + ========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt ... 100%, 9 KB, ? KB/s, 0 seconds passed - + ========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel ... 100%, 4834 KB, 571 KB/s, 8 seconds passed - + ################|| Post-processing ||################ - + ========== Replacing text in C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + ###############|| Downloading models ||############### - + ========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.prototxt - + ========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.caffemodel ... 100%, 4834 KB, 3157 KB/s, 1 seconds passed - + ###############|| Post processing ||############### - + ========= Replacing text in /home/username/models/public/googlenet-v1/googlenet-v1.prototxt ========= - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + ###############|| Downloading models ||############### - + ========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt ... 100%, 9 KB, 44058 KB/s, 0 seconds passed - + ========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.caffemodel ... 
100%, 4834 KB, 4877 KB/s, 0 seconds passed - + ###############|| Post processing ||############### - + ========= Replacing text in /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt ========= - + .. _convert-models-to-intermediate-representation: Step 2: Convert the Model with ``mo`` @@ -210,26 +210,26 @@ Create an ```` directory to contain the model's Intermediate Representat .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + mkdir %USERPROFILE%\Documents\ir .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + mkdir ~/ir - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + mkdir ~/ir -To save disk space for your IR file, you can apply :doc:`weights compression to FP16 `. To generate an IR with FP16 weights, run model conversion with the ``--compress_to_fp16`` option. +To save disk space for your IR files, OpenVINO stores weights in FP16 format by default. Generic model conversion script: @@ -246,23 +246,23 @@ The command with most placeholders filled in and FP16 precision: .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + mo --input_model %USERPROFILE%\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel --compress_to_fp16 --output_dir %USERPROFILE%\Documents\ir .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --compress_to_fp16 --output_dir ~/ir - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --compress_to_fp16 --output_dir ~/ir .. _download-media: @@ -290,75 +290,75 @@ To run the **Image Classification** code sample with an input image using the IR .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + \setupvars.bat .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + source /setupvars.sh - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + source /setupvars.sh - + 2. Go to the code samples release directory created when you built the samples earlier: .. tab-set:: .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + cd %USERPROFILE%\Documents\Intel\OpenVINO\openvino_samples_build\intel64\Release .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + cd ~/openvino_cpp_samples_build/intel64/Release - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + cd ~/openvino_cpp_samples_build/intel64/Release - + 3. Run the code sample executable, specifying the input media file, the IR for your model, and a target device for performing inference: .. tab-set:: .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + classification_sample_async.exe -i -m -d .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + classification_sample_async -i -m -d - + .. tab-item:: macOS :sync: macos - + .. code-block:: sh - + classification_sample_async -i -m -d - + Examples ++++++++ @@ -371,23 +371,23 @@ The following command shows how to run the Image Classification Code Sample usin .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + .\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d CPU .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU - + .. tab-item:: macOS :sync: macos - + .. 
code-block:: sh - + ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU When the sample application is complete, you are given the label and confidence for the top 10 categories. The input image and sample output of the inference results is shown below: @@ -418,24 +418,24 @@ The following example shows how to run the same sample using GPU as the target d Running Inference on GPU ------------------------ -.. note:: - +.. note:: + Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps `, as described earlier on this page. Running on GPU is not compatible with macOS. .. tab-set:: .. tab-item:: Windows :sync: windows - + .. code-block:: bat - + .\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d GPU .. tab-item:: Linux :sync: linux - + .. code-block:: sh - + ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d GPU diff --git a/docs/glossary.md b/docs/glossary.md index 48d65d57d49b24..5380db526c10f7 100644 --- a/docs/glossary.md +++ b/docs/glossary.md @@ -3,7 +3,7 @@ @sphinxdirective .. meta:: - :description: Check the list of acronyms, abbreviations and terms used in + :description: Check the list of acronyms, abbreviations and terms used in Intel® Distribution of OpenVINO™ toolkit. @@ -11,54 +11,55 @@ Acronyms and Abbreviations ################################################# ================== =========================================================================== - Abbreviation Description + Abbreviation Description ================== =========================================================================== - API Application Programming Interface - AVX Advanced Vector Extensions - clDNN Compute Library for Deep Neural Networks - CLI Command Line Interface - CNN Convolutional Neural Network - CPU Central Processing Unit - CV Computer Vision - DL Deep Learning - DLL Dynamic Link Library - DNN Deep Neural Networks - ELU Exponential Linear rectification Unit - FCN Fully Convolutional Network - FP Floating Point - GCC GNU Compiler Collection - GPU Graphics Processing Unit - HD High Definition - IR Intermediate Representation - JIT Just In Time - JTAG Joint Test Action Group - LPR License-Plate Recognition - LRN Local Response Normalization - mAP Mean Average Precision - Intel® OneDNN Intel® OneAPI Deep Neural Network Library - `mo` Command-line tool for model conversion, CLI for ``tools.mo.convert_model`` - MVN Mean Variance Normalization - NCDHW Number of images, Channels, Depth, Height, Width - NCHW Number of images, Channels, Height, Width - NHWC Number of images, Height, Width, Channels - NMS Non-Maximum Suppression - NN Neural Network - NST Neural Style Transfer - OD Object Detection - OS Operating System - PCI Peripheral Component Interconnect - PReLU Parametric Rectified Linear Unit - PSROI Position Sensitive Region Of Interest - RCNN, R-CNN Region-based Convolutional Neural Network - ReLU Rectified Linear Unit - ROI Region Of Interest - SDK Software Development Kit - SSD Single Shot multibox Detector - SSE Streaming SIMD Extensions - USB Universal Serial Bus - VGG Visual Geometry Group - VOC Visual Object Classes - WINAPI Windows Application Programming Interface + API Application Programming Interface + AVX Advanced Vector Extensions + clDNN Compute Library for Deep Neural Networks + CLI Command Line Interface + CNN Convolutional Neural Network + CPU Central Processing Unit + CV Computer Vision + DL 
Deep Learning + DLL Dynamic Link Library + DNN Deep Neural Networks + ELU Exponential Linear rectification Unit + FCN Fully Convolutional Network + FP Floating Point + GCC GNU Compiler Collection + GPU Graphics Processing Unit + HD High Definition + IR Intermediate Representation + JIT Just In Time + JTAG Joint Test Action Group + LPR License-Plate Recognition + LRN Local Response Normalization + mAP Mean Average Precision + Intel® OneDNN Intel® OneAPI Deep Neural Network Library + `mo` Command-line tool for model conversion, CLI for ``tools.mo.convert_model`` (legacy) + MVN Mean Variance Normalization + NCDHW Number of images, Channels, Depth, Height, Width + NCHW Number of images, Channels, Height, Width + NHWC Number of images, Height, Width, Channels + NMS Non-Maximum Suppression + NN Neural Network + NST Neural Style Transfer + OD Object Detection + OS Operating System + `ovc` OpenVINO Model Converter, command line tool for model conversion + PCI Peripheral Component Interconnect + PReLU Parametric Rectified Linear Unit + PSROI Position Sensitive Region Of Interest + RCNN, R-CNN Region-based Convolutional Neural Network + ReLU Rectified Linear Unit + ROI Region Of Interest + SDK Software Development Kit + SSD Single Shot multibox Detector + SSE Streaming SIMD Extensions + USB Universal Serial Bus + VGG Visual Geometry Group + VOC Visual Object Classes + WINAPI Windows Application Programming Interface ================== =========================================================================== @@ -68,46 +69,46 @@ Terms Glossary of terms used in OpenVINO™ -| *Batch* +| *Batch* | Number of images to analyze during one call of infer. Maximum batch size is a property of the model set before its compilation. In NHWC, NCHW, and NCDHW image data layout representations, the 'N' refers to the number of images in the batch. -| *Device Affinity* +| *Device Affinity* | A preferred hardware device to run inference (CPU, GPU, GNA, etc.). -| *Extensibility mechanism, Custom layers* +| *Extensibility mechanism, Custom layers* | The mechanism that provides you with capabilities to extend the OpenVINO™ Runtime and model conversion API so that they can work with models containing operations that are not yet supported. | *layer / operation* -| In OpenVINO, both terms are treated synonymously. To avoid confusion, "layer" is being pushed out and "operation" is the currently accepted term. +| In OpenVINO, both terms are treated synonymously. To avoid confusion, "layer" is being pushed out and "operation" is the currently accepted term. -| *Model conversion API* -| A component of OpenVINO Development Tools. The API is used to import, convert, and optimize models trained in popular frameworks to a format usable by other OpenVINO components. In ``openvino.tools.mo`` namespace, model conversion API is represented by a Python ``mo.convert_model()`` method and ``mo`` command-line tool. +| *Model conversion API* +| The Conversion API is used to import and convert models trained in popular frameworks to a format usable by other OpenVINO components. Model conversion API is represented by a Python ``openvino.convert_model()`` method and ``ovc`` command-line tool. -| *OpenVINO™ Core* -| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc. +| *OpenVINO™ Core* +| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc. 
-| *OpenVINO™ API* +| *OpenVINO™ API* | The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite file formats, set input and output formats and execute the model on various devices. -| *OpenVINO™ Runtime* +| *OpenVINO™ Runtime* | A C++ library with a set of classes that you can use in your application to infer input tensors and get the results. -| *ov::Model* +| *ov::Model* | A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite formats. Consists of model structure, weights and biases. -| *ov::CompiledModel* +| *ov::CompiledModel* | An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously. -| *ov::InferRequest* +| *ov::InferRequest* | A class that represents the end point of inference on the model compiled by the device and represented by a compiled model. Inputs are set here, outputs should be requested from this interface as well. -| *ov::ProfilingInfo* +| *ov::ProfilingInfo* | Represents basic inference profiling information per operation. -| *ov::Layout* +| *ov::Layout* | Image data layout refers to the representation of images batch. Layout shows a sequence of 4D or 5D tensor data in memory. A typical NCHW format represents pixel in horizontal direction, rows by vertical dimension, planes by channel and images into batch. See also [Layout API Overview](./OV_Runtime_UG/layout_overview.md). -| *ov::element::Type* +| *ov::element::Type* | Represents data element type. For example, f32 is 32-bit floating point, f16 is 16-bit floating point. | *plugin / Inference Device / Inference Mode* diff --git a/docs/install_guides/installing-openvino-from-archive-linux.md b/docs/install_guides/installing-openvino-from-archive-linux.md index 3eca6f4acf21fe..d7cfdf8d224787 100644 --- a/docs/install_guides/installing-openvino-from-archive-linux.md +++ b/docs/install_guides/installing-openvino-from-archive-linux.md @@ -3,34 +3,33 @@ @sphinxdirective .. meta:: - :description: Learn how to install OpenVINO™ Runtime on the Linux operating + :description: Learn how to install OpenVINO™ Runtime on the Linux operating system, using an archive file. .. note:: - + Note that the Archive distribution: - + * offers both C/C++ and Python APIs - * additionally includes code samples + * additionally includes code samples * is dedicated to users of all major OSs: Windows, Linux, macOS * may offer different hardware support under different operating systems (see the drop-down below for more details). - - .. dropdown:: Inference Options - =================== ===== ===== ===== ===== - Operating System CPU GPU GNA NPU - =================== ===== ===== ===== ===== - Debian9 armhf V n/a n/a n/a - Debian9 arm64 V n/a n/a n/a - CentOS7 x86_64 V V n/a n/a - Ubuntu18 x86_64 V V V n/a - Ubuntu20 x86_64 V V V V - Ubuntu22 x86_64 V V V V - RHEL8 x86_64 V V V n/a - =================== ===== ===== ===== ===== + .. dropdown:: Inference Options + =================== ===== ===== ===== ===== + Operating System CPU GPU GNA NPU + =================== ===== ===== ===== ===== + Debian9 armhf V n/a n/a n/a + Debian9 arm64 V n/a n/a n/a + CentOS7 x86_64 V V n/a n/a + Ubuntu18 x86_64 V V V n/a + Ubuntu20 x86_64 V V V V + Ubuntu22 x86_64 V V V V + RHEL8 x86_64 V V V n/a + =================== ===== ===== ===== ===== .. 
tab-set:: @@ -40,58 +39,58 @@ | Full requirement listing is available in: | `System Requirements Page `__ - + .. tab-item:: Processor Notes :sync: processor-notes - + | To see if your processor includes the integrated graphics technology and supports iGPU inference, refer to: | `Product Specifications `__ - + .. tab-item:: Software :sync: software - + * `CMake 3.13 or higher, 64-bit `__ * `Python 3.7 - 3.11, 64-bit `__ * GCC: - + .. tab-set:: .. tab-item:: Ubuntu 20.04 :sync: ubuntu-20 - + * GCC 9.3.0 .. tab-item:: Ubuntu 18.04 :sync: ubuntu-18 - + * GCC 7.5.0 - + .. tab-item:: RHEL 8 :sync: rhel-8 - + * GCC 8.4.1 - + .. tab-item:: CentOS 7 :sync: centos-7 - + * GCC 8.3.1 Use the following instructions to install it: - + Install GCC 8.3.1 via devtoolset-8 - + .. code-block:: sh - + sudo yum update -y && sudo yum install -y centos-release-scl epel-release sudo yum install -y devtoolset-8 - + Enable devtoolset-8 and check current gcc version - + .. code-block:: sh - + source /opt/rh/devtoolset-8/enable gcc -v - - + + @@ -107,19 +106,19 @@ Step 1: Download and Install the OpenVINO Core Components 2. Create the ``/opt/intel`` folder for OpenVINO by using the following command. If the folder already exists, skip this step. .. code-block:: sh - + sudo mkdir /opt/intel - + .. note:: - + The ``/opt/intel`` path is the recommended folder path for administrators or root users. If you prefer to install OpenVINO in regular userspace, the recommended path is ``/home//intel``. You may use a different path if desired. 3. Browse to the current user's ``Downloads`` folder: - + .. code-block:: sh - + cd /Downloads - + 4. Download the `OpenVINO Runtime archive file for your system `_, extract the files, rename the extracted folder and move it to the desired path: .. tab-set:: @@ -131,70 +130,70 @@ Step 1: Download and Install the OpenVINO Core Components .. tab-item:: Ubuntu 22.04 :sync: ubuntu-22 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu22_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_ubuntu22_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: Ubuntu 20.04 :sync: ubuntu-20 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu20_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_ubuntu20_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: Ubuntu 18.04 :sync: ubuntu-18 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu18_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_ubuntu18_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: RHEL 8 :sync: rhel-8 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_rhel8_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_rhel8_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: CentOS 7 :sync: centos-7 - + .. 
code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_centos7_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_centos7_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: ARM 64-bit :sync: arm-64 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_arm64.tgz -O openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_arm64 /opt/intel/openvino_2023.0.1 - + .. tab-item:: ARM 32-bit :sync: arm-32 - + .. code-block:: sh - + curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_armhf.tgz -O openvino_2023.0.1.tgz tar -xf openvino_2023.0.1.tgz sudo mv l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_armhf /opt/intel/openvino_2023.0.1 - - + + 5. Install required system dependencies on Linux. To do this, OpenVINO provides a script in the extracted installation directory. Run the following command: - + .. code-block:: sh cd /opt/intel/openvino_2023.0.1 @@ -214,33 +213,33 @@ Step 1: Download and Install the OpenVINO Core Components python3 -m pip install -r ./python/requirements.txt 7. For simplicity, it is useful to create a symbolic link as below: - + .. code-block:: sh - + cd /opt/intel sudo ln -s openvino_2023.0.1 openvino_2023 - + .. note:: - If you have already installed a previous release of OpenVINO 2023, a symbolic link to the ``openvino_2023`` folder may already exist. + If you have already installed a previous release of OpenVINO 2023, a symbolic link to the ``openvino_2023`` folder may already exist. Unlink the previous link with ``sudo unlink openvino_2023``, and then re-run the command above. -Congratulations, you have finished the installation! For some use cases you may still -need to install additional components. Check the description below, as well as the +Congratulations, you have finished the installation! For some use cases you may still +need to install additional components. Check the description below, as well as the :doc:`list of additional configurations ` to see if your case needs any of them. -The ``/opt/intel/openvino_2023`` folder now contains the core components for OpenVINO. -If you used a different path in Step 2, for example, ``/home//intel/``, -OpenVINO is now in ``/home//intel/openvino_2023``. The path to the ``openvino_2023`` +The ``/opt/intel/openvino_2023`` folder now contains the core components for OpenVINO. +If you used a different path in Step 2, for example, ``/home//intel/``, +OpenVINO is now in ``/home//intel/openvino_2023``. The path to the ``openvino_2023`` directory is also referred as ```` throughout the OpenVINO documentation. Step 2: Configure the Environment ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -You must update several environment variables before you can compile and run OpenVINO applications. -Open a terminal window and run the ``setupvars.sh`` script as shown below to temporarily set your environment variables. +You must update several environment variables before you can compile and run OpenVINO applications. +Open a terminal window and run the ``setupvars.sh`` script as shown below to temporarily set your environment variables. 
If your is not ``/opt/intel/openvino_2023``, use the correct one instead. .. code-block:: sh @@ -250,12 +249,12 @@ If your is not ``/opt/intel/openvino_2023``, use the correct one i If you have more than one OpenVINO version installed on your system, you can easily switch versions by sourcing the `setupvars.sh` of your choice. -.. note:: - - The above command must be re-run every time you start a new terminal session. - To set up Linux to automatically run the command every time a new terminal is opened, - open ``~/.bashrc`` in your favorite editor and add ``source /opt/intel/openvino_2023/setupvars.sh`` after the last line. - Next time when you open a terminal, you will see ``[setupvars.sh] OpenVINO™ environment initialized``. +.. note:: + + The above command must be re-run every time you start a new terminal session. + To set up Linux to automatically run the command every time a new terminal is opened, + open ``~/.bashrc`` in your favorite editor and add ``source /opt/intel/openvino_2023/setupvars.sh`` after the last line. + Next time when you open a terminal, you will see ``[setupvars.sh] OpenVINO™ environment initialized``. Changing ``.bashrc`` is not recommended when you have multiple OpenVINO versions on your machine and want to switch among them. The environment variables are set. @@ -266,57 +265,57 @@ The environment variables are set. What's Next? ############################################################ -Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! +Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials. .. tab-set:: .. tab-item:: Get started with Python :sync: get-started-py - + Try the `Python Quick Start Example `_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser. - + .. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif :width: 400 - + Visit the :doc:`Tutorials ` page for more Jupyter Notebooks to get you started with OpenVINO, such as: - + * `OpenVINO Python API Tutorial `__ * `Basic image classification program with Hello Image Classification `__ * `Convert a PyTorch model and use it for image background removal `__ - - + + .. tab-item:: Get started with C++ :sync: get-started-cpp - - Try the :doc:`C++ Quick Start Example ` for step-by-step instructions + + Try the :doc:`C++ Quick Start Example ` for step-by-step instructions on building and running a basic image classification C++ application. - + .. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg :width: 400 - + Visit the :doc:`Samples ` page for other C++ example applications to get you started with OpenVINO, such as: - + * `Basic object detection with the Hello Reshape SSD C++ sample `__ * `Automatic speech recognition C++ sample `__ - - - + + + Uninstalling the Intel® Distribution of OpenVINO™ Toolkit ########################################################### If you have installed OpenVINO Runtime from archive files, you can uninstall it by deleting the archive files and the extracted folders. -Uninstallation removes all Intel® Distribution of OpenVINO™ Toolkit component files but does not affect user files in the installation directory. 
+Uninstallation removes all Intel® Distribution of OpenVINO™ Toolkit component files but does not affect user files in the installation directory. If you have created the symbolic link, remove the link first: - + .. code-block:: sh sudo rm /opt/intel/openvino_2023 - + To delete the files: - + .. code-block:: sh rm -r && rm @@ -330,7 +329,7 @@ Additional Resources ########################################################### * :doc:`Troubleshooting Guide for OpenVINO Installation & Configuration ` -* Converting models for use with OpenVINO™: :doc:`Convert a Model ` +* Converting models for use with OpenVINO™: :doc:`Convert a Model ` * Writing your own OpenVINO™ applications: :doc:`OpenVINO™ Runtime User Guide ` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview ` * Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models ` diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 424895951ad46e..a9c960686a9d1b 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -1,4 +1,4 @@ -# OpenVINO™ Development Tools +# OpenVINO™ Development Tools Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying AI inference. It can be used to develop applications and solutions based on deep learning tasks, such as: emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, etc. It provides high-performance and rich deployment options, from edge to cloud. @@ -28,11 +28,11 @@ pip install openvino-dev ### Installation in a New Environment If you do not have an environment with the source deep learning framework for the input model or you encounter any compatibility issues between OpenVINO and your version of deep learning framework, -you may install OpenVINO Development Tools with validated versions of frameworks into a new environment. +you may install OpenVINO Development Tools with validated versions of frameworks into a new environment. #### Step 1. Set Up Python Virtual Environment -Use a virtual environment to avoid dependency conflicts. +Use a virtual environment to avoid dependency conflicts. To create a virtual environment, use the following commands: @@ -72,7 +72,7 @@ Use the following command: ```sh pip install openvino-dev[extras] ``` - where `extras` is the source deep learning framework for the input model and is one or more of the following values separated with "," : + where `extras` is the source deep learning framework for the input model and is one or more of the following values separated with "," : | Extras Value | DL Framework | | :-------------------------------| :------------------------------------------------------------------------------- | @@ -110,34 +110,34 @@ For example, to install and configure the components for working with TensorFlow ## What's in the Package? -> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications. +> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications. 
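As a point of reference, a minimal inference integration with the runtime installed this way can look like the sketch below. This is only an illustration: the IR file name, device name, and input shape are placeholders that depend on your own model.

```python
import numpy as np
import openvino as ov

# Load an OpenVINO IR model you already have (placeholder file name)
core = ov.Core()
model = core.read_model("model.xml")

# Compile the model for a device; "CPU" is used here as an example
compiled_model = core.compile_model(model, "CPU")

# Placeholder input; the shape must match the model's input
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled_model(input_data)
```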
**In addition, the openvino-dev package installs the following components by default:** -| Component | Console Script | Description | +| Component | Console Script | Description | |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Model conversion API](https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components.
Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | -| [Benchmark Tool](https://docs.openvino.ai/2023.1/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | -| [Accuracy Checker](https://docs.openvino.ai/2023.1/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvino.ai/2023.1/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. | -| [Post-Training Optimization Tool](https://docs.openvino.ai/2023.1/pot_introduction.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/2023.1/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API.
**Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. | +| [Legacy Model conversion API](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components.
Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | +| [Benchmark Tool](https://docs.openvino.ai/nightly/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | +| [Accuracy Checker](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. Its main advantages are flexible configuration and a broad set of supported datasets, preprocessing and postprocessing steps, and metrics.
**Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. | +| [Post-Training Optimization Tool](https://docs.openvino.ai/nightly/pot_introduction.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | +| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/nightly/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for accessing the collection of high-quality pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up development and production deployment without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with the model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using the model conversion API.
**Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. | ## Troubleshooting -For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.1/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. +For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.1/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. ### Errors with Installing via PIP for Users in China Users in China might encounter errors while downloading sources via PIP during OpenVINO™ installation. To resolve the issues, try the following solution: - -* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example: + +* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example: ``` sh pip install openvino-dev -i https://mirrors.aliyun.com/pypi/simple/ ``` Use the ``--trusted-host`` parameter if the URL above is ``http`` instead of ``https``. You can also run the following command to install openvino-dev with specific frameworks. For example: - + ``` pip install openvino-dev[tensorflow2] -i https://mirrors.aliyun.com/pypi/simple/ ``` @@ -151,7 +151,7 @@ pip install openvino-dev[tensorflow2,mxnet,caffe] zsh: no matches found: openvino-dev[tensorflow2,mxnet,caffe] ``` -By default zsh interprets square brackets as an expression for pattern matching. To resolve this issue, you need to escape the command with quotes: +By default zsh interprets square brackets as an expression for pattern matching. To resolve this issue, you need to escape the command with quotes: ```sh pip install 'openvino-dev[tensorflow2,mxnet,caffe]' diff --git a/docs/model_zoo.md b/docs/model_zoo.md index a7f95024b08b01..560b67304a771f 100644 --- a/docs/model_zoo.md +++ b/docs/model_zoo.md @@ -7,7 +7,7 @@ .. toctree:: :maxdepth: 1 :hidden: - + omz_models_group_intel omz_models_group_public @@ -29,7 +29,7 @@ Open Model Zoo for OpenVINO™ toolkit delivers a wide variety of free, pre-trained deep learning models and demo applications that provide full application templates to help you implement deep learning in Python, C++, or OpenCV Graph API (G-API). Models and demos are available in the `Open Model Zoo GitHub repo `__ and licensed under Apache License Version 2.0. -Browse through over 200 neural network models, both :doc:`public ` and from :doc:`Intel `, and pick the right one for your solution. Types include object detection, classification, image segmentation, handwriting recognition, text to speech, pose estimation, and others. The Intel models have already been converted to work with OpenVINO™ toolkit, while public models can easily be converted using the :doc:`Model Optimizer ` utility. +Browse through over 200 neural network models, both :doc:`public ` and from :doc:`Intel `, and pick the right one for your solution. Types include object detection, classification, image segmentation, handwriting recognition, text to speech, pose estimation, and others. The Intel models have already been converted to work with OpenVINO™ toolkit, while public models can easily be converted using the :doc:`OpenVINO Model Conversion API ` utility. 
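As a rough sketch of what that conversion involves, the following Python snippet converts a downloaded public model and saves it as OpenVINO IR. The file names are placeholders, and the model is assumed to already be in a format supported by the conversion API, such as ONNX:

.. code-block:: py

   import openvino as ov

   # "public_model.onnx" stands for any downloaded public model file
   ov_model = ov.convert_model("public_model.onnx")

   # Save the converted model as OpenVINO IR for later use
   ov.save_model(ov_model, "public_model.xml")
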
Get started with simple :doc:`step-by-step procedures ` to learn how to build and run demo applications or discover the :doc:`full set of demos ` and adapt them for implementing specific deep learning scenarios in your applications. From ea9fba4d49f616f8510b99cd56982a21dd52f008 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 13 Sep 2023 11:20:27 +0200 Subject: [PATCH 02/14] [DOCS] legacy adjustments pass 1 (#19792) --- docs/Documentation/model_introduction.md | 23 ++++++++++++------- .../Documentation/openvino_legacy_features.md | 16 +++++++------ docs/Extensibility_UG/Intro.md | 5 ---- .../Customize_Model_Optimizer.md | 2 +- .../convert_model/MO_OVC_transition.md | 1 + docs/home.rst | 2 +- 6 files changed, 27 insertions(+), 22 deletions(-) diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md index 26038599b83362..c4de8fb4820ebc 100644 --- a/docs/Documentation/model_introduction.md +++ b/docs/Documentation/model_introduction.md @@ -13,7 +13,6 @@ Supported_Model_Formats openvino_docs_OV_Converter_UG_Conversion_Options openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model - openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, or `Torchvision models `__. @@ -21,16 +20,18 @@ Every deep learning workflow begins with obtaining a model. You can choose to pr OpenVINO™ :doc:`supports several model formats ` and can convert them into its own representation, `openvino.Model `__ (`ov.Model `__), providing a conversion API. Converted models can be used for inference with one or multiple OpenVINO Hardware plugins. There are two ways to use the conversion API: using a Python program or calling the ``ovc`` command line tool. .. note:: + + Prior to OpenVINO 2023.1, model conversion API was exposed as the ``openvino.tools.mo.convert_model`` + function and the ``mo`` command line tool. Now, a new and simplified API is used: the + ``openvino.convert_model`` function and the ``ovc`` command line tool. - Prior OpenVINO 2023.1 release, model conversion API was exposed as ``openvino.tools.mo.convert_model`` function and ``mo`` command line tool. - Starting from 2023.1 release, a new simplified API was introduced: ``openvino.convert_model`` function and ``ovc`` command line tool as a replacement for ``openvino.tools.mo.convert_model`` - and ``mo`` correspondingly, which are considered to be legacy now. All new users are recommended to use these new methods instead of the old methods. Please note that the new API and old API do not - provide the same level of features, that means the new tools are not always backward compatible with the old ones. Please consult with :doc:`Model Conversion API Transition Guide `. + All new projects are recommended to use the new tools, keeping in mind that they are not fully + backwards compatible. For more details, consult the :doc:`Model Conversion API Transition Guide `. Convert a Model in Python: ``convert_model`` -############################################ +############################################## -You can use Model conversion API in Python with the ``openvino.convert_model`` function. 
This function converts a model from its original framework representation, for example Pytorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using``openvino.save_model`` for future use. Below, there are examples on how to use the ``openvino.convert_model`` with models from popular public repositories: +You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example Pytorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using``openvino.save_model`` for future use. Below, there are examples of how to use the ``openvino.convert_model`` with models from popular public repositories: .. tab-set:: @@ -188,7 +189,7 @@ Option 2, where ``openvino.compile_model`` is used, provides a convenient way to Option 1 separates model conversion and model inference into two different applications. This approach is useful for deployment scenarios requiring fewer extra dependencies and faster model loading in the end inference application. -For example, converting a PyTorch model to OpenVINO usually demands the ``torch`` Python module and Python. This process can take extra time and memory. But, after the converted model is saved as IR with ``openvino.save_model``, it can be loaded in a separate application without requiring the ``torch`` dependency and the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example, in C++, and Python installation is not necessary for it to run. +For example, converting a PyTorch model to OpenVINO usually demands the ``torch`` Python module and Python. This process can take extra time and memory. But, after the converted model is saved as OpenVINO IR with ``openvino.save_model``, it can be loaded in a separate application without requiring the ``torch`` dependency and the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example, in C++, and Python installation is not necessary for it to run. Before saving the model to OpenVINO IR, consider applying :doc:`Post-training Optimization ` to enable more efficient inference and smaller model size. @@ -232,4 +233,10 @@ If you are using legacy conversion API (``mo`` or ``openvino.tools.mo.convert_mo * :doc:`Transition from legacy mo and ov.tools.mo.convert_model ` * :doc:`Legacy Model Conversion API ` + + + +.. api/ie_python_api/_autosummary/openvino.Model.html is a broken link for some reason - need to investigate python api article generation + + @endsphinxdirective diff --git a/docs/Documentation/openvino_legacy_features.md b/docs/Documentation/openvino_legacy_features.md index c034dd31d29eb0..dad31043b8b851 100644 --- a/docs/Documentation/openvino_legacy_features.md +++ b/docs/Documentation/openvino_legacy_features.md @@ -7,6 +7,7 @@ :hidden: OpenVINO Development Tools package + Model Optimizer / Conversion API OpenVINO API 2.0 transition Open Model ZOO Apache MXNet, Caffe, and Kaldi @@ -36,16 +37,17 @@ offering. 
| :doc:`See how to install Development Tools ` -| **Model Optimizer** +| **Model Optimizer / Conversion API** | *New solution:* Direct model support and OpenVINO Converter (OVC) -| *Old solution:* Model Optimizer discontinuation planned for OpenVINO 2025.0 +| *Old solution:* Legacy Conversion API discontinuation planned for OpenVINO 2025.0 | -| Model Optimizer's role was largely reduced when all major model frameworks became - supported directly. For the sole purpose of converting model files explicitly, - it has been replaced with a more light-weight and efficient solution, the - OpenVINO Converter (launched with OpenVINO 2023.1). +| The role of Model Optimizer and later the Conversion API was largely reduced + when all major model frameworks became supported directly. For converting model + files explicitly, it has been replaced with a more light-weight and efficient + solution, the OpenVINO Converter (launched with OpenVINO 2023.1). -.. :doc:`See how to use OVC ` +| :doc:`See how to use OVC ` +| :doc:`See how to transition from the legacy solution ` | **Open Model ZOO** diff --git a/docs/Extensibility_UG/Intro.md b/docs/Extensibility_UG/Intro.md index 319a415403e0ef..401e2f155e45b2 100644 --- a/docs/Extensibility_UG/Intro.md +++ b/docs/Extensibility_UG/Intro.md @@ -22,11 +22,6 @@ openvino_docs_transformations OpenVINO Plugin Developer Guide -.. toctree:: - :maxdepth: 1 - :hidden: - - openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md index b96c23beed1271..3d792f6c394fb7 100644 --- a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md +++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md @@ -1,4 +1,4 @@ -# [LEGACY] Model Optimizer Extensibility {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer} +# Legacy Model Optimizer Extensibility {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer} @sphinxdirective diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md index e550d515b753ad..9de12249a341f8 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md @@ -10,6 +10,7 @@ :hidden: openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide + openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer In 2023.1 OpenVINO release a new OVC (OpenVINO Model Converter) tool has been introduced with the corresponding Python API: ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent a lightweight alternative of ``mo`` and ``openvino.tools.mo.convert_model`` which are considered legacy API now. In this article, all the differences between ``mo`` and ``ovc`` are summarized and the transition guide from the legacy API to the new API is provided. 
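To illustrate the scale of the change on the Python side, the transition is, in the simplest case, a matter of swapping one call for another. The snippet below is only a sketch; the ONNX file name is a placeholder:

.. code-block:: py

   # Legacy API:
   # from openvino.tools.mo import convert_model
   # ov_model = convert_model("model.onnx")

   # New API, introduced in OpenVINO 2023.1:
   import openvino as ov

   ov_model = ov.convert_model("model.onnx")
   ov.save_model(ov_model, "model.xml")

On the command line, a basic ``mo --input_model model.onnx`` invocation roughly corresponds to ``ovc model.onnx``, keeping in mind that the two tools do not expose the same set of options.
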
diff --git a/docs/home.rst b/docs/home.rst index 92079e97058f68..d8f359e65aaa5a 100644 --- a/docs/home.rst +++ b/docs/home.rst @@ -14,7 +14,7 @@ OpenVINO 2023.0 .. container:: :name: ov-homepage-banner - OpenVINO 2023.0 + OpenVINO 2023.1 .. raw:: html From 7f5f63db2373b447806976e3fb490177e8ad6935 Mon Sep 17 00:00:00 2001 From: bstankix Date: Wed, 13 Sep 2023 11:39:54 +0200 Subject: [PATCH 03/14] [DOCS] Add units to benchmark graphs (#19800) --- .../benchmarks_files/OV-benchmark-data.csv | 1022 ++++++++--------- .../benchmarks_files/OVMS-benchmark-data.csv | 242 ++-- docs/_static/js/graphs.js | 45 +- 3 files changed, 630 insertions(+), 679 deletions(-) diff --git a/docs/_static/benchmarks_files/OV-benchmark-data.csv b/docs/_static/benchmarks_files/OV-benchmark-data.csv index b164020129390e..8fbc20e6e64f3b 100644 --- a/docs/_static/benchmarks_files/OV-benchmark-data.csv +++ b/docs/_static/benchmarks_files/OV-benchmark-data.csv @@ -1,541 +1,481 @@ -Network model,Release,IE-Type,Platform name,Throughput-INT8,ThroughputFP16,ThroughputFP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency -begin_rec,,,,,,,,,,,,,, -bert-base-cased,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,71.70,,25.90,0.103,1.593,697,45,1,697,45,21.119 -bert-base-cased,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,2.72,,1.33,0.080,0.286,34,9.5,1,34,9.5,392.843 -bert-base-cased,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,11.27,,4.28,0.105,0.752,107,15,1,107,15,88.854 -bert-base-cased,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,22.23,,14.89,0.190,0.342,117,65,1,117,65,46.754 -bert-base-cased,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,33.65,,19.51,0.157,0.961,214,35,1,214,35,35.398 -bert-base-cased,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,103.31,,44.62,0.314,0.826,329,125,1,329,125,18.719 -bert-base-cased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,34.72,,22.57,0.176,0.534,197,65,1,197,65,30.392 -bert-base-cased,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,49.78,,17.87,0.117,1.778,426,28,1,426,28,24.03 -bert-base-cased,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,38.20,,14.91,0.078,1.364,490,28,1,490,28,26.874 -bert-base-cased,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,28.32,,16.03,0.093,0.809,303,35,1,303,35,41.835 -bert-base-cased,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,35.62,,18.79,0.073,1.018,488,35,1,488,35,38.627 -bert-base-cased,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,51.56,,19.50,0.095,1.473,544,35,1,544,35,26.626 -bert-base-cased,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,150.45,,67.78,0.251,1.204,599,125,1,599,125,15.82 -bert-base-cased,OV-2023.0,atom,Intel® Processor N-200,1.62,,0.85,0.008,0.270,193,6,1,193,6,668.333 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,59.78,,35.79,0.101,0.478,594,125,1,594,125,19.817 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,21.83,,14.60,0.088,0.307,249,71,1,249,71,47.596 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,215.56,,80.01,0.069,1.057,3144,204,2,1572,102,14.761 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,557.71,,222.26,0.033,1.360,16954,410,2,8477,205,8.731 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,206.20,,76.54,0.103,0.825,2004,250,2,1002,125,15.389 -bert-base-cased,OV-2023.0,core,Intel® Core™ i7-11800H,92.59,,33.22,0.213,2.057,435,45,1,435,45,14.206 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,877.01,,196.60,0.047,1.624,18718,540,2,9359,270,7.873 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® 
Platinum 8490H CPU-only,3107.56,,448.17,0.091,4.439,34000,700,2,17000,350,5.274 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,429.94,,171.12,0.189,1.433,2274,300,2,1137,150,8.897 -bert-base-cased,OV-2023.0,accel,Intel® Flex-170,648.29,582.97,,0.337,4.322,1925,150,1,1925,150,24.289 -bert-base-cased,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,44.64,32.01,21.94,0.417,2.976,107,15,1,107,15,89.466 -bert-base-cased,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,69.65,55.42,36.87,0.142,2.487,490,28,1,490,28,53.822 -bert-base-cased,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,88.24,60.90,46.38,0.127,1.961,697,45,1,697,45,44.864 -bert-base-cased,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,3.31,2.36,1.49,0.017,0.551,193,6,1,193,6,1207.72 -bert-base-cased,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,82.48,59.66,41.79,0.194,2.946,426,28,1,426,28,48.15 -bert-base-cased,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,53.07,,24.80,0.496,3.538,107,15,1,107,15, -bert-base-cased,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,71.87,,27.99,0.147,2.567,490,28,1,490,28, -bert-base-cased,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,127.08,,55.39,0.182,2.824,697,45,1,697,45, -bert-base-cased,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,4.21,,2.08,0.022,0.701,193,6,1,193,6, -bert-base-cased,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,83.41,,39.13,0.196,2.979,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,6.79,,2.68,0.010,0.151,697,45,1,697,45,181.851 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,0.26,,0.13,0.008,0.028,34,9.5,1,34,9.5,4055.305 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,1.10,,0.38,0.010,0.074,107,15,1,107,15,883.753 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,2.11,,1.26,0.018,0.032,117,65,1,117,65,492.094 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,2.98,,1.75,0.014,0.085,214,35,1,214,35,351.295 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,9.17,,3.72,0.028,0.073,329,125,1,329,125,149.646 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.26,,1.91,0.017,0.050,197,65,1,197,65,308.598 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,4.60,,1.57,0.011,0.164,426,28,1,426,28,230.242 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,3.32,,1.28,0.007,0.118,490,28,1,490,28,271.873 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,2.71,,1.55,0.009,0.077,303,35,1,303,35,407.64 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,3.14,,1.76,0.006,0.090,488,35,1,488,35,335.174 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,5.13,,1.89,0.009,0.146,544,35,1,544,35,214.42 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,14.55,,6.05,0.024,0.116,599,125,1,599,125,115.166 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,atom,Intel® Processor N-200,0.16,,0.08,0.001,0.026,193,6,1,193,6,6605.009 
-bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,4.40,,2.71,0.007,0.035,594,125,1,594,125,180.697 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,2.12,,1.30,0.008,0.030,249,71,1,249,71,481.442 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,20.31,,7.14,0.006,0.100,3144,204,2,1572,102,104.829 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,46.48,,19.65,0.003,0.113,16954,410,2,8477,205,50.293 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,19.27,,6.83,0.010,0.077,2004,250,2,1002,125,109.227 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core,Intel® Core™ i7-11800H,8.50,,2.90,0.020,0.189,435,45,1,435,45,125.712 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,74.00,,26.69,0.004,0.137,18718,540,2,9359,270,39.786 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,125.14,,44.52,0.004,0.179,34000,700,2,17000,350,23.966 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,37.60,,14.83,0.017,0.125,2274,300,2,1137,150,60.232 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,accel,Intel® Flex-170,120.47,80.23,,0.063,0.803,1925,150,1,1925,150,132.308 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,4.72,3.49,2.13,0.044,0.314,107,15,1,107,15,815.785 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,5.06,6.29,2.96,0.010,0.181,490,28,1,490,28,788.772 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,11.39,8.22,5.15,0.016,0.253,697,45,1,697,45,350.737 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,0.35,0.24,0.14,0.002,0.058,193,6,1,193,6,11593.101 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,9.16,7.22,4.24,0.021,0.327,426,28,1,426,28,445.659 -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,4.87,,2.19,0.046,0.325,107,15,1,107,15, -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,3.38,,2.09,0.007,0.121,490,28,1,490,28, -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,11.78,,4.86,0.017,0.262,697,45,1,697,45, -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,0.44,,0.18,0.002,0.073,193,6,1,193,6, -bert-large-uncased-whole-word-masking-squad-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,7.98,,3.42,0.019,0.285,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -deeplabv3,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,74.12,,31.22,0.106,1.647,697,45,1,697,45,18.344 -deeplabv3,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,2.91,,1.47,0.086,0.306,34,9.5,1,34,9.5,359.456 -deeplabv3,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,12.54,,4.67,0.117,0.836,107,15,1,107,15,80.847 -deeplabv3,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,23.64,,14.69,0.202,0.364,117,65,1,117,65,42.294 -deeplabv3,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,36.23,,16.70,0.169,1.035,214,35,1,214,35,32.374 
-deeplabv3,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,95.00,,41.09,0.289,0.760,329,125,1,329,125,15.959 -deeplabv3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,38.31,,21.59,0.194,0.589,197,65,1,197,65,26.51 -deeplabv3,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,54.05,,16.27,0.127,1.930,426,28,1,426,28,20.653 -deeplabv3,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,34.93,,10.57,0.071,1.247,490,28,1,490,28,27.623 -deeplabv3,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,32.93,,18.64,0.109,0.941,303,35,1,303,35,35.127 -deeplabv3,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,38.14,,18.17,0.078,1.090,488,35,1,488,35,27.35 -deeplabv3,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,59.94,,23.05,0.110,1.713,544,35,1,544,35,20.761 -deeplabv3,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,146.06,,56.67,0.244,1.168,599,125,1,599,125,11.988 -deeplabv3,OV-2023.0,atom,Intel® Processor N-200,1.78,,1.03,0.009,0.297,193,6,1,193,6,572.256 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,55.64,,19.26,0.094,0.445,594,125,1,594,125,17.404 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,23.34,,15.54,0.094,0.329,249,71,1,249,71,42.626 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,201.62,,77.92,0.064,0.988,3144,204,2,1572,102,10.754 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,448.78,,157.91,0.026,1.095,16954,410,2,8477,205,4.853 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,196.52,,75.53,0.098,0.786,2004,250,2,1002,125,11.109 -deeplabv3,OV-2023.0,core,Intel® Core™ i7-11800H,86.85,,27.00,0.200,1.930,435,45,1,435,45,12.137 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,649.53,,229.92,0.035,1.203,18718,540,2,9359,270,3.847 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,1030.08,,382.32,0.030,1.472,34000,700,2,17000,350,3.199 -deeplabv3,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,410.93,,141.37,0.181,1.370,2274,300,2,1137,150,5.369 -deeplabv3,OV-2023.0,accel,Intel® Flex-170,746.27,500.10,,0.388,4.975,1925,150,1,1925,150,21.133 -deeplabv3,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,62.44,28.53,16.72,0.584,4.163,107,15,1,107,15,63.94 -deeplabv3,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,76.73,36.90,13.98,0.157,2.740,490,28,1,490,28,49.979 -deeplabv3,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,121.55,55.55,36.78,0.174,2.701,697,45,1,697,45,32.538 -deeplabv3,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,3.85,2.01,1.36,0.020,0.641,193,6,1,193,6,1038.839 -deeplabv3,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,109.22,50.67,28.82,0.256,3.901,426,28,1,426,28,36.378 -deeplabv3,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,62.77,,16.96,0.587,4.184,107,15,1,107,15, -deeplabv3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,51.34,,8.91,0.105,1.834,490,28,1,490,28, -deeplabv3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,139.61,,40.39,0.200,3.103,697,45,1,697,45, -deeplabv3,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,4.81,,1.97,0.025,0.802,193,6,1,193,6, -deeplabv3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,92.48,,24.81,0.217,3.303,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -efficientdet-d0,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,97.01,,57.69,0.139,2.156,697,45,1,697,45,14.749 -efficientdet-d0,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,3.76,,2.64,0.111,0.396,34,9.5,1,34,9.5,290.49 -efficientdet-d0,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,17.79,,11.10,0.166,1.186,107,15,1,107,15,61.812 
-efficientdet-d0,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,33.53,,23.96,0.287,0.516,117,65,1,117,65,31.273 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,50.07,,28.48,0.234,1.430,214,35,1,214,35,24.356 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,128.34,,84.12,0.390,1.027,329,125,1,329,125,13.006 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,53.85,,35.65,0.273,0.828,197,65,1,197,65,20.31 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,69.87,,39.11,0.164,2.495,426,28,1,426,28,17.367 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,47.89,,20.09,0.098,1.710,490,28,1,490,28,23.549 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,47.53,,33.64,0.157,1.358,303,35,1,303,35,26.193 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,54.68,,32.54,0.112,1.562,488,35,1,488,35,21.202 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,71.50,,45.10,0.131,2.043,544,35,1,544,35,17.779 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,192.62,,103.94,0.322,1.541,599,125,1,599,125,10.401 -efficientdet-d0,OV-2023.0,atom,Intel® Processor N-200,2.05,,1.65,0.011,0.342,193,6,1,193,6,496.613 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,79.60,,35.06,0.134,0.637,594,125,1,594,125,13.426 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,32.46,,25.97,0.130,0.457,249,71,1,249,71,32.053 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,236.75,,163.08,0.075,1.161,3144,204,2,1572,102,12.998 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,458.66,,294.01,0.027,1.119,16954,410,2,8477,205,8.597 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,228.21,,157.38,0.114,0.913,2004,250,2,1002,125,13.254 -efficientdet-d0,OV-2023.0,core,Intel® Core™ i7-11800H,109.41,,61.21,0.252,2.431,435,45,1,435,45,10.459 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,709.57,,452.65,0.038,1.314,18718,540,2,9359,270,9.743 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,922.74,,791.57,0.027,1.318,34000,700,2,17000,350,6.149 -efficientdet-d0,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,416.02,,270.47,0.183,1.387,2274,300,2,1137,150,8.367 -efficientdet-d0,OV-2023.0,accel,Intel® Flex-170,684.48,640.17,,0.356,4.563,1925,150,1,1925,150,22.703 -efficientdet-d0,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,66.61,60.60,33.76,0.623,4.441,107,15,1,107,15,59.789 -efficientdet-d0,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,61.69,49.32,24.14,0.126,2.203,490,28,1,490,28,63.923 -efficientdet-d0,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,123.77,105.73,60.06,0.178,2.751,697,45,1,697,45,31.947 -efficientdet-d0,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,5.64,5.03,3.02,0.029,0.940,193,6,1,193,6,707.252 -efficientdet-d0,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,108.15,90.40,51.00,0.254,3.862,426,28,1,426,28,36.641 -efficientdet-d0,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,56.00,,33.11,0.523,3.733,107,15,1,107,15, -efficientdet-d0,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,50.72,,18.56,0.104,1.811,490,28,1,490,28, -efficientdet-d0,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,144.59,,69.25,0.207,3.213,697,45,1,697,45, -efficientdet-d0,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,6.03,,3.81,0.031,1.005,193,6,1,193,6, -efficientdet-d0,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 
GPU-only,97.10,,49.20,0.228,3.468,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,10.07,,3.25,0.014,0.224,697,45,1,697,45,173.007 -faster_rcnn_resnet50_coco,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,0.31,,0.13,0.009,0.032,34,9.5,1,34,9.5,3336.101 -faster_rcnn_resnet50_coco,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,1.60,,0.42,0.015,0.106,107,15,1,107,15,640.413 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,2.89,,1.51,0.025,0.045,117,65,1,117,65,358.209 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,4.22,,2.26,0.020,0.120,214,35,1,214,35,307.054 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,13.04,,4.32,0.040,0.104,329,125,1,329,125,163.824 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,4.59,,2.39,0.023,0.071,197,65,1,197,65,247.538 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,6.74,,1.78,0.016,0.241,426,28,1,426,28,165.661 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,5.16,,1.43,0.011,0.184,490,28,1,490,28,199.974 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,3.60,,1.89,0.012,0.103,303,35,1,303,35,344.134 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,4.42,,2.27,0.009,0.126,488,35,1,488,35,252.351 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,7.29,,2.34,0.013,0.208,544,35,1,544,35,188.868 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,20.79,,7.23,0.035,0.166,599,125,1,599,125,111.834 -faster_rcnn_resnet50_coco,OV-2023.0,atom,Intel® Processor N-200,0.20,,0.09,0.001,0.034,193,6,1,193,6,5079.528 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,7.85,,4.03,0.013,0.063,594,125,1,594,125,138.525 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,2.78,,1.49,0.011,0.039,249,71,1,249,71,363.574 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,29.15,,8.22,0.009,0.143,3144,204,2,1572,102,77.58 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,85.87,,22.37,0.005,0.209,16954,410,2,8477,205,30.369 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,27.70,,7.83,0.014,0.111,2004,250,2,1002,125,79.521 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-11800H,12.17,,3.28,0.028,0.270,435,45,1,435,45,93.88 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,112.09,,32.42,0.006,0.208,18718,540,2,9359,270,90.8 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,417.49,,50.54,0.012,0.596,34000,700,2,17000,350,10.218 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,65.13,,16.92,0.029,0.217,2274,300,2,1137,150,42.57 -faster_rcnn_resnet50_coco,OV-2023.0,accel,Intel® Flex-170,222.32,144.56,,0.115,1.482,1925,150,1,1925,150,71.57 -faster_rcnn_resnet50_coco,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,8.22,4.09,1.84,0.077,0.548,107,15,1,107,15,475.771 -faster_rcnn_resnet50_coco,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,13.31,6.13,2.38,0.027,0.475,490,28,1,490,28,298.019 -faster_rcnn_resnet50_coco,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,17.32,9.61,4.50,0.025,0.385,697,45,1,697,45,230.593 -faster_rcnn_resnet50_coco,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,0.48,0.28,0.14,0.002,0.080,193,6,1,193,6,8322.584 
-faster_rcnn_resnet50_coco,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,15.37,8.20,3.72,0.036,0.549,426,28,1,426,28,266.287 -faster_rcnn_resnet50_coco,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,8.58,,1.99,0.080,0.572,107,15,1,107,15, -faster_rcnn_resnet50_coco,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,11.83,,1.81,0.024,0.423,490,28,1,490,28, -faster_rcnn_resnet50_coco,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,20.10,,5.54,0.029,0.447,697,45,1,697,45, -faster_rcnn_resnet50_coco,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,0.60,,0.19,0.003,0.099,193,6,1,193,6, -faster_rcnn_resnet50_coco,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,13.39,,3.31,0.031,0.478,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -googlenet-v4,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,111.78,,31.98,0.160,2.484,697,45,1,697,45,16.401 -googlenet-v4,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,3.19,,1.34,0.094,0.335,34,9.5,1,34,9.5,330.309 -googlenet-v4,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,15.15,,4.13,0.142,1.010,107,15,1,107,15,65.484 -googlenet-v4,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,29.71,,15.29,0.254,0.457,117,65,1,117,65,34.592 -googlenet-v4,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,42.94,,22.86,0.201,1.227,214,35,1,214,35,25.957 -googlenet-v4,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,147.72,,42.20,0.449,1.182,329,125,1,329,125,15.377 -googlenet-v4,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,47.19,,23.81,0.240,0.726,197,65,1,197,65,21.507 -googlenet-v4,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,68.56,,17.47,0.161,2.449,426,28,1,426,28,16.906 -googlenet-v4,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,55.82,,14.00,0.114,1.994,490,28,1,490,28,19.042 -googlenet-v4,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,35.79,,18.77,0.118,1.023,303,35,1,303,35,31.127 -googlenet-v4,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,45.87,,22.92,0.094,1.311,488,35,1,488,35,24.9 -googlenet-v4,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,80.22,,22.64,0.147,2.292,544,35,1,544,35,19.511 -googlenet-v4,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,219.92,,69.93,0.367,1.759,599,125,1,599,125,11.893 -googlenet-v4,OV-2023.0,atom,Intel® Processor N-200,2.02,,0.92,0.010,0.337,193,6,1,193,6,536.385 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,82.75,,41.21,0.139,0.662,594,125,1,594,125,13.243 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,28.17,,14.89,0.113,0.397,249,71,1,249,71,36.365 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,302.16,,77.16,0.096,1.481,3144,204,2,1572,102,10.628 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,950.44,,228.44,0.056,2.318,16954,410,2,8477,205,5.714 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,287.12,,73.94,0.143,1.148,2004,250,2,1002,125,11.353 -googlenet-v4,OV-2023.0,core,Intel® Core™ i7-11800H,128.48,,32.51,0.295,2.855,435,45,1,435,45,10.465 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,1475.64,,347.33,0.079,2.733,18718,540,2,9359,270,3.308 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,5145.62,,522.17,0.151,7.351,34000,700,2,17000,350,3.46 -googlenet-v4,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,730.64,,169.52,0.321,2.435,2274,300,2,1137,150,6.386 -googlenet-v4,OV-2023.0,accel,Intel® Flex-170,834.65,515.70,,0.434,5.564,1925,150,1,1925,150,18.9 -googlenet-v4,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,66.82,31.46,18.62,0.625,4.455,107,15,1,107,15,59.758 
-googlenet-v4,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,106.60,55.16,33.10,0.218,3.807,490,28,1,490,28,36.221 -googlenet-v4,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,126.94,64.60,41.28,0.182,2.821,697,45,1,697,45,31.129 -googlenet-v4,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,4.27,1.99,1.14,0.022,0.712,193,6,1,193,6,934.853 -googlenet-v4,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,120.74,59.65,36.70,0.283,4.312,426,28,1,426,28,32.7 -googlenet-v4,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,86.46,,20.87,0.808,5.764,107,15,1,107,15, -googlenet-v4,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,134.97,,32.38,0.275,4.820,490,28,1,490,28, -googlenet-v4,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,203.09,,55.43,0.291,4.513,697,45,1,697,45, -googlenet-v4,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,6.00,,1.59,0.031,1.001,193,6,1,193,6, -googlenet-v4,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,125.28,,33.33,0.294,4.474,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -GPT-2,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,3.76,,2.02,0.005,0.083,697,45,1,697,45,296.881 -GPT-2,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,0.20,,0.10,0.006,0.021,34,9.5,1,34,9.5,5419.817 -GPT-2,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,0.88,,0.30,0.008,0.058,107,15,1,107,15,1086.958 -GPT-2,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,1.44,,1.03,0.012,0.022,117,65,1,117,65,717.508 -GPT-2,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,1.71,,1.21,0.008,0.049,214,35,1,214,35,597.084 -GPT-2,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,5.38,,2.94,0.016,0.043,329,125,1,329,125,227.166 -GPT-2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.09,,1.50,0.011,0.032,197,65,1,197,65,488.848 -GPT-2,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,3.33,,1.25,0.008,0.119,426,28,1,426,28,340.886 -GPT-2,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,1.73,,0.85,0.004,0.062,490,28,1,490,28,567.627 -GPT-2,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,1.93,,1.26,0.006,0.055,303,35,1,303,35,572.361 -GPT-2,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,1.93,,1.28,0.004,0.055,488,35,1,488,35,511.396 -GPT-2,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,2.86,,1.48,0.005,0.082,544,35,1,544,35,325.104 -GPT-2,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,5.88,,4.25,0.010,0.047,599,125,1,599,125,179.104 -GPT-2,OV-2023.0,atom,Intel® Processor N-200,0.12,,0.07,0.001,0.021,193,6,1,193,6,8434.695 -GPT-2,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,2.14,,1.45,0.004,0.017,594,125,1,594,125,401.241 -GPT-2,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,1.57,,1.07,0.006,0.022,249,71,1,249,71,673.96 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,13.74,,5.55,0.004,0.067,3144,204,2,1572,102,154.435 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,22.60,,14.00,0.001,0.055,16954,410,2,8477,205,77.3 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,13.28,,5.30,0.007,0.053,2004,250,2,1002,125,160.915 -GPT-2,OV-2023.0,core,Intel® Core™ i7-11800H,5.05,,2.19,0.012,0.112,435,45,1,435,45,208.952 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,38.52,,22.01,0.002,0.071,18718,540,2,9359,270,57.554 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,76.59,,29.21,0.002,0.109,34000,700,2,17000,350,30.93 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,23.32,,11.26,0.010,0.078,2274,300,2,1137,150,132.245 -GPT-2,OV-2023.0,accel,Intel® Flex-170,39.06,49.09,,0.020,0.260,1925,150,1,1925,150,408.749 
-GPT-2,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,3.02,2.27,1.51,0.028,0.201,107,15,1,107,15,1319.205 -GPT-2,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,2.00,2.74,1.57,0.004,0.071,490,28,1,490,28,1989.235 -GPT-2,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,5.68,4.85,3.22,0.008,0.126,697,45,1,697,45,703.429 -GPT-2,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,0.23,0.16,0.11,0.001,0.038,193,6,1,193,6,17533.595 -GPT-2,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,4.16,4.13,2.69,0.010,0.148,426,28,1,426,28,960.778 -GPT-2,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,3.00,,1.49,0.028,0.200,107,15,1,107,15, -GPT-2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,1.68,,1.09,0.003,0.060,490,28,1,490,28, -GPT-2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,5.09,,3.28,0.007,0.113,697,45,1,697,45, -GPT-2,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,0.28,,0.13,0.001,0.046,193,6,1,193,6, -GPT-2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,4.27,,2.26,0.010,0.153,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,789.36,,299.79,1.133,17.541,697,45,1,697,45,2.072 -mobilenet-ssd,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,27.01,,12.73,0.794,2.843,34,9.5,1,34,9.5,39.577 -mobilenet-ssd,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,116.16,,40.17,1.086,7.744,107,15,1,107,15,8.472 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,221.80,,140.58,1.896,3.412,117,65,1,117,65,4.715 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,344.62,,201.93,1.610,9.846,214,35,1,214,35,3.456 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,1020.39,,400.82,3.101,8.163,329,125,1,329,125,1.993 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,357.97,,218.52,1.817,5.507,197,65,1,197,65,2.96 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,548.92,,158.26,1.289,19.604,426,28,1,426,28,2.106 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,441.91,,112.33,0.902,15.783,490,28,1,490,28,2.364 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,285.67,,177.22,0.943,8.162,303,35,1,303,35,4.236 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,361.27,,208.16,0.740,10.322,488,35,1,488,35,3.267 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,583.02,,215.65,1.072,16.658,544,35,1,544,35,2.48 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,1604.36,,665.09,2.678,12.835,599,125,1,599,125,1.778 -mobilenet-ssd,OV-2023.0,atom,Intel® Processor N-200,15.32,,8.77,0.079,2.554,193,6,1,193,6,67.817 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,652.21,,307.44,1.098,5.218,594,125,1,594,125,1.656 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,211.82,,140.47,0.851,2.983,249,71,1,249,71,4.88 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,2268.48,,693.51,0.722,11.120,3144,204,2,1572,102,1.499 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,6587.85,,1822.35,0.389,16.068,16954,410,2,8477,205,1.043 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,2163.47,,666.62,1.080,8.654,2004,250,2,1002,125,1.55 -mobilenet-ssd,OV-2023.0,core,Intel® Core™ i7-11800H,1031.34,,300.85,2.371,22.919,435,45,1,435,45,1.295 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,11311.71,,2529.80,0.604,20.948,18718,540,2,9359,270,0.697 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H 
CPU-only,24846.06,,3999.29,0.731,35.494,34000,700,2,17000,350,0.839 -mobilenet-ssd,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,5398.00,,1364.13,2.374,17.993,2274,300,2,1137,150,0.809 -mobilenet-ssd,OV-2023.0,accel,Intel® Flex-170,4127.47,3257.04,,2.144,27.516,1925,150,1,1925,150,3.644 -mobilenet-ssd,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,453.40,238.03,149.06,4.237,30.226,107,15,1,107,15,8.634 -mobilenet-ssd,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,619.19,332.33,199.95,1.264,22.114,490,28,1,490,28,6.244 -mobilenet-ssd,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,816.41,453.27,310.53,1.171,18.142,697,45,1,697,45,4.657 -mobilenet-ssd,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,32.42,16.84,10.65,0.168,5.403,193,6,1,193,6,121.698 -mobilenet-ssd,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,702.15,407.81,276.23,1.648,25.077,426,28,1,426,28,5.535 -mobilenet-ssd,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,326.89,,144.12,3.055,21.793,107,15,1,107,15, -mobilenet-ssd,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,569.19,,143.17,1.162,20.328,490,28,1,490,28, -mobilenet-ssd,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,1042.75,,416.99,1.496,23.172,697,45,1,697,45, -mobilenet-ssd,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,37.18,,15.37,0.193,6.197,193,6,1,193,6, -mobilenet-ssd,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,748.60,,253.99,1.757,26.736,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -mobilenet-v2,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,1726.00,,932.73,2.476,38.356,697,45,1,697,45,1.085 -mobilenet-v2,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,74.74,,45.44,2.198,7.867,34,9.5,1,34,9.5,14.582 -mobilenet-v2,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,280.00,,133.14,2.617,18.667,107,15,1,107,15,3.497 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,534.65,,456.87,4.570,8.225,117,65,1,117,65,1.974 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,882.73,,483.11,4.125,25.221,214,35,1,214,35,1.556 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,2619.92,,1305.55,7.963,20.959,329,125,1,329,125,0.958 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,876.27,,684.15,4.448,13.481,197,65,1,197,65,1.325 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,1378.21,,520.06,3.235,49.222,426,28,1,426,28,0.931 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,1065.40,,315.10,2.174,38.050,490,28,1,490,28,1.015 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,718.66,,516.57,2.372,20.533,303,35,1,303,35,1.854 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,920.75,,638.15,1.887,26.307,488,35,1,488,35,1.514 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,1283.37,,687.61,2.359,36.668,544,35,1,544,35,1.219 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,3959.96,,2062.89,6.611,31.680,599,125,1,599,125,0.768 -mobilenet-v2,OV-2023.0,atom,Intel® Processor N-200,40.77,,29.63,0.211,6.795,193,6,1,193,6,27.138 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,1670.25,,813.61,2.812,13.362,594,125,1,594,125,0.795 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,519.94,,450.61,2.088,7.323,249,71,1,249,71,2.036 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,5539.89,,1920.53,1.762,27.156,3144,204,2,1572,102,1.349 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,15210.14,,4417.85,0.897,37.098,16954,410,2,8477,205,0.851 
-mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,5369.69,,1859.79,2.679,21.479,2004,250,2,1002,125,1.364 -mobilenet-v2,OV-2023.0,core,Intel® Core™ i7-11800H,2575.56,,938.74,5.921,57.235,435,45,1,435,45,0.636 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,23172.78,,6910.90,1.238,42.913,18718,540,2,9359,270,0.522 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,35561.67,,10976.35,1.046,50.802,34000,700,2,17000,350,0.618 -mobilenet-v2,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,12687.83,,3600.85,5.580,42.293,2274,300,2,1137,150,0.51 -mobilenet-v2,OV-2023.0,accel,Intel® Flex-170,5398.23,5255.59,,2.804,35.988,1925,150,1,1925,150,2.256 -mobilenet-v2,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,671.48,511.12,338.92,6.276,44.765,107,15,1,107,15,5.565 -mobilenet-v2,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,909.69,583.68,350.46,1.857,32.489,490,28,1,490,28,4.207 -mobilenet-v2,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,1205.84,894.32,586.79,1.730,26.796,697,45,1,697,45,3.127 -mobilenet-v2,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,60.97,41.27,26.90,0.316,10.162,193,6,1,193,6,64.669 -mobilenet-v2,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,1000.97,740.36,523.20,2.350,35.749,426,28,1,426,28,3.855 -mobilenet-v2,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,1468.05,,394.16,13.720,97.870,107,15,1,107,15, -mobilenet-v2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,1168.44,,216.48,2.385,41.730,490,28,1,490,28, -mobilenet-v2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,3183.48,,1119.78,4.567,70.744,697,45,1,697,45, -mobilenet-v2,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,117.36,,49.87,0.608,19.560,193,6,1,193,6, -mobilenet-v2,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,2238.11,,597.27,5.254,79.933,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -resnet-50,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,372.55,,103.20,0.535,8.279,697,45,1,697,45,4.778 -resnet-50,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,10.92,,4.54,0.321,1.150,34,9.5,1,34,9.5,94.519 -resnet-50,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,50.13,,14.45,0.469,3.342,107,15,1,107,15,19.719 -resnet-50,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,97.59,,51.99,0.834,1.501,117,65,1,117,65,10.637 -resnet-50,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,144.61,,75.37,0.676,4.132,214,35,1,214,35,8.098 -resnet-50,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,500.94,,143.02,1.523,4.008,329,125,1,329,125,4.456 -resnet-50,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,155.16,,81.16,0.788,2.387,197,65,1,197,65,6.958 -resnet-50,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,224.35,,60.58,0.527,8.012,426,28,1,426,28,5.088 -resnet-50,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,182.57,,47.31,0.373,6.520,490,28,1,490,28,5.96 -resnet-50,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,121.71,,62.98,0.402,3.477,303,35,1,303,35,9.845 -resnet-50,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,155.09,,77.88,0.318,4.431,488,35,1,488,35,7.418 -resnet-50,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,267.58,,75.49,0.492,7.645,544,35,1,544,35,5.351 -resnet-50,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,744.56,,235.07,1.243,5.956,599,125,1,599,125,3.366 -resnet-50,OV-2023.0,atom,Intel® Processor N-200,6.73,,3.18,0.035,1.121,193,6,1,193,6,160.351 -resnet-50,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,276.11,,141.28,0.465,2.209,594,125,1,594,125,3.804 -resnet-50,OV-2023.0,xeon,Intel® Xeon® 
E-2124G CPU-only,93.19,,50.97,0.374,1.312,249,71,1,249,71,11.014 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,968.34,,269.28,0.308,4.747,3144,204,2,1572,102,2.912 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,2928.36,,755.50,0.173,7.142,16954,410,2,8477,205,1.501 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,928.12,,257.71,0.463,3.712,2004,250,2,1002,125,3.015 -resnet-50,OV-2023.0,core,Intel® Core™ i7-11800H,418.87,,112.99,0.963,9.308,435,45,1,435,45,2.909 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,5006.83,,1168.91,0.267,9.272,18718,540,2,9359,270,1.016 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,18906.26,,1671.79,0.556,27.009,34000,700,2,17000,350,1.003 -resnet-50,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,2292.86,,570.35,1.008,7.643,2274,300,2,1137,150,1.463 -resnet-50,OV-2023.0,accel,Intel® Flex-170,3088.38,1947.78,,1.604,20.589,1925,150,1,1925,150,4.919 -resnet-50,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,212.07,117.19,66.74,1.982,14.138,107,15,1,107,15,18.752 -resnet-50,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,296.02,166.15,90.65,0.604,10.572,490,28,1,490,28,13.174 -resnet-50,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,380.98,229.42,143.43,0.547,8.466,697,45,1,697,45,10.195 -resnet-50,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,15.19,7.96,4.35,0.079,2.532,193,6,1,193,6,262.333 -resnet-50,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,351.93,210.88,120.40,0.826,12.569,426,28,1,426,28,11.145 -resnet-50,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,298.19,,75.45,2.787,19.880,107,15,1,107,15, -resnet-50,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,440.34,,87.84,0.899,15.726,490,28,1,490,28, -resnet-50,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,714.02,,187.03,1.024,15.867,697,45,1,697,45, -resnet-50,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,21.28,,6.36,0.110,3.546,193,6,1,193,6, -resnet-50,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,469.96,,122.38,1.103,16.784,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,6.00,,1.85,0.009,0.133,697,45,1,697,45,241.005 -ssd-resnet34-1200,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,0.18,,0.08,0.005,0.019,34,9.5,1,34,9.5,5553.228 -ssd-resnet34-1200,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,0.90,,0.23,0.008,0.060,107,15,1,107,15,1118.931 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,1.69,,0.98,0.014,0.026,117,65,1,117,65,593.303 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,2.42,,1.40,0.011,0.069,214,35,1,214,35,459.773 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,8.10,,2.46,0.025,0.065,329,125,1,329,125,207.938 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.63,,1.49,0.013,0.040,197,65,1,197,65,393.411 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,3.82,,0.99,0.009,0.136,426,28,1,426,28,280.093 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,3.15,,0.82,0.006,0.113,490,28,1,490,28,315.974 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,2.02,,1.13,0.007,0.058,303,35,1,303,35,562.292 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,2.49,,1.38,0.005,0.071,488,35,1,488,35,411.048 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,4.07,,1.34,0.007,0.116,544,35,1,544,35,297.001 -ssd-resnet34-1200,OV-2023.0,core,Intel® 
Core™ i9-13900K CPU-only,12.38,,4.12,0.021,0.099,599,125,1,599,125,157.781 -ssd-resnet34-1200,OV-2023.0,atom,Intel® Processor N-200,0.11,,0.05,0.001,0.019,193,6,1,193,6,8993.66 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,4.57,,2.53,0.008,0.037,594,125,1,594,125,202.869 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,1.60,,0.92,0.006,0.023,249,71,1,249,71,622.799 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,17.65,,4.59,0.006,0.087,3144,204,2,1572,102,116.003 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,57.90,,14.86,0.003,0.141,16954,410,2,8477,205,36.506 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,16.78,,4.37,0.008,0.067,2004,250,2,1002,125,121.986 -ssd-resnet34-1200,OV-2023.0,core,Intel® Core™ i7-11800H,7.08,,1.82,0.016,0.157,435,45,1,435,45,155.088 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,78.77,,20.79,0.004,0.146,18718,540,2,9359,270,106.673 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,440.92,,31.41,0.013,0.630,34000,700,2,17000,350,8.749 -ssd-resnet34-1200,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,42.63,,10.56,0.019,0.142,2274,300,2,1137,150,59.63 -ssd-resnet34-1200,OV-2023.0,accel,Intel® Flex-170,150.99,92.06,,0.078,1.007,1925,150,1,1925,150,105.864 -ssd-resnet34-1200,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,5.04,2.62,1.41,0.047,0.336,107,15,1,107,15,774.647 -ssd-resnet34-1200,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,8.74,4.78,2.25,0.018,0.312,490,28,1,490,28,444.2 -ssd-resnet34-1200,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,10.88,6.26,3.42,0.016,0.242,697,45,1,697,45,367.297 -ssd-resnet34-1200,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,0.29,0.17,,0.002,0.049,193,6,1,193,6,13690.681 -ssd-resnet34-1200,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,9.70,5.42,2.84,0.023,0.347,426,28,1,426,28,422.084 -ssd-resnet34-1200,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,0.90,,0.23,0.008,0.060,107,15,1,107,15, -ssd-resnet34-1200,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,3.15,,0.82,0.006,0.112,490,28,1,490,28, -ssd-resnet34-1200,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,5.93,,1.87,0.009,0.132,697,45,1,697,45, -ssd-resnet34-1200,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,0.11,,0.05,0.001,0.018,193,6,1,193,6, -ssd-resnet34-1200,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,3.91,,1.00,0.009,0.140,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,8.65,,3.05,0.012,0.192,697,45,1,697,45,156.161 -unet-camvid-onnx-0001,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,0.26,,0.04,0.008,0.027,34,9.5,1,34,9.5,3981.236 -unet-camvid-onnx-0001,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,1.49,,0.38,0.014,0.100,107,15,1,107,15,672.584 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,2.46,,1.55,0.021,0.038,117,65,1,117,65,413.844 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,3.60,,2.25,0.017,0.103,214,35,1,214,35,322.582 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,11.67,,4.03,0.035,0.093,329,125,1,329,125,132.595 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.85,,2.39,0.020,0.059,197,65,1,197,65,263.384 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,6.42,,1.61,0.015,0.229,426,28,1,426,28,169.941 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ 
i7-1185GRE CPU-only,5.20,,1.28,0.011,0.186,490,28,1,490,28,188.957 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,3.01,,1.86,0.010,0.086,303,35,1,303,35,384.226 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,3.62,,2.25,0.007,0.103,488,35,1,488,35,286.784 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,6.15,,2.25,0.011,0.176,544,35,1,544,35,187.178 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,18.36,,6.78,0.031,0.147,599,125,1,599,125,99.814 -unet-camvid-onnx-0001,OV-2023.0,atom,Intel® Processor N-200,0.17,,0.09,0.001,0.029,193,6,1,193,6,5875.117 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,6.61,,4.00,0.011,0.053,594,125,1,594,125,153.203 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,2.35,,1.49,0.009,0.033,249,71,1,249,71,436.422 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,29.07,,7.34,0.009,0.142,3144,204,2,1572,102,71.813 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,95.23,,21.78,0.006,0.232,16954,410,2,8477,205,23.521 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,27.71,,7.01,0.014,0.111,2004,250,2,1002,125,74.843 -unet-camvid-onnx-0001,OV-2023.0,core,Intel® Core™ i7-11800H,11.74,,2.94,0.027,0.261,435,45,1,435,45,94.316 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,129.48,,31.77,0.007,0.240,18718,540,2,9359,270,73.505 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,505.87,,48.40,0.015,0.723,34000,700,2,17000,350,9.311 -unet-camvid-onnx-0001,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,69.94,,16.08,0.031,0.233,2274,300,2,1137,150,41.748 -unet-camvid-onnx-0001,OV-2023.0,accel,Intel® Flex-170,254.48,144.90,,0.132,1.697,1925,150,1,1925,150,62.698 -unet-camvid-onnx-0001,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,8.41,4.37,2.38,0.079,0.560,107,15,1,107,15,475.669 -unet-camvid-onnx-0001,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,15.72,8.02,4.27,0.032,0.562,490,28,1,490,28,253.853 -unet-camvid-onnx-0001,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,19.13,10.06,5.58,0.027,0.425,697,45,1,697,45,208.318 -unet-camvid-onnx-0001,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,0.47,0.27,0.14,0.002,0.078,193,6,1,193,6,8545.949 -unet-camvid-onnx-0001,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,17.31,8.85,4.83,0.041,0.618,426,28,1,426,28,227.453 -unet-camvid-onnx-0001,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,8.90,,2.45,0.083,0.593,107,15,1,107,15, -unet-camvid-onnx-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,14.81,,3.73,0.030,0.529,490,28,1,490,28, -unet-camvid-onnx-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,19.83,,6.12,0.028,0.441,697,45,1,697,45, -unet-camvid-onnx-0001,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,0.57,,0.20,0.003,0.096,193,6,1,193,6, -unet-camvid-onnx-0001,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,14.14,,3.75,0.033,0.505,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -yolo_v3,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,39.77,,11.97,0.057,0.884,697,45,1,697,45,34.872 -yolo_v3,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,1.13,,0.52,0.033,0.119,34,9.5,1,34,9.5,903.841 -yolo_v3,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,5.45,,1.55,0.051,0.363,107,15,1,107,15,184.58 -yolo_v3,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,10.63,,5.89,0.091,0.164,117,65,1,117,65,95.036 
-yolo_v3,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,15.37,,8.39,0.072,0.439,214,35,1,214,35,74.286 -yolo_v3,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,51.84,,15.89,0.158,0.415,329,125,1,329,125,30.003 -yolo_v3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,16.37,,8.99,0.083,0.252,197,65,1,197,65,63.341 -yolo_v3,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,23.65,,6.47,0.056,0.845,426,28,1,426,28,46.28 -yolo_v3,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,19.40,,5.08,0.040,0.693,490,28,1,490,28,51.709 -yolo_v3,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,12.66,,6.99,0.042,0.362,303,35,1,303,35,86.495 -yolo_v3,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,16.06,,8.39,0.033,0.459,488,35,1,488,35,68.145 -yolo_v3,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,27.93,,8.41,0.051,0.798,544,35,1,544,35,42.848 -yolo_v3,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,79.98,,26.47,0.134,0.640,599,125,1,599,125,23.62 -yolo_v3,OV-2023.0,atom,Intel® Processor N-200,0.71,,0.35,0.004,0.118,193,6,1,193,6,1470.788 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,29.22,,14.28,0.049,0.234,594,125,1,594,125,32.442 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,10.06,,5.72,0.040,0.142,249,71,1,249,71,100.122 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,105.44,,29.93,0.034,0.517,3144,204,2,1572,102,22.133 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,318.96,,88.54,0.019,0.778,16954,410,2,8477,205,10.668 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,100.48,,28.51,0.050,0.402,2004,250,2,1002,125,23.175 -yolo_v3,OV-2023.0,core,Intel® Core™ i7-11800H,43.61,,11.91,0.100,0.969,435,45,1,435,45,26.339 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,485.45,,108.76,0.026,0.899,18718,540,2,9359,270,6.704 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,2042.57,,194.96,0.060,2.918,34000,700,2,17000,350,3.296 -yolo_v3,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,240.71,,62.50,0.106,0.802,2274,300,2,1137,150,13.156 -yolo_v3,OV-2023.0,accel,Intel® Flex-170,715.46,313.59,,0.372,4.770,1925,150,1,1925,150,21.953 -yolo_v3,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,32.01,15.01,8.05,0.299,2.134,107,15,1,107,15,123.951 -yolo_v3,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,59.08,26.50,13.68,0.121,2.110,490,28,1,490,28,67.531 -yolo_v3,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,69.33,34.30,18.80,0.099,1.541,697,45,1,697,45,57.179 -yolo_v3,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,1.82,0.94,0.51,0.009,0.304,193,6,1,193,6,2193.513 -yolo_v3,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,62.39,29.62,16.26,0.146,2.228,426,28,1,426,28,64.935 -yolo_v3,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,33.33,,8.57,0.311,2.222,107,15,1,107,15, -yolo_v3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,54.94,,12.76,0.112,1.962,490,28,1,490,28, -yolo_v3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,79.34,,21.62,0.114,1.763,697,45,1,697,45, -yolo_v3,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,2.18,,0.72,0.011,0.363,193,6,1,193,6, -yolo_v3,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,50.76,,13.72,0.119,1.813,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,433.83,,136.36,0.622,9.641,697,45,1,697,45,3.411 -yolo_v3_tiny,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,12.33,,6.20,0.363,1.297,34,9.5,1,34,9.5,82.687 -yolo_v3_tiny,OV-2023.0,atom,Intel® Celeron™ 6305E 
CPU-only,54.23,,18.05,0.507,3.615,107,15,1,107,15,18.195 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,112.45,,65.07,0.961,1.730,117,65,1,117,65,8.957 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,167.83,,92.82,0.784,4.795,214,35,1,214,35,6.71 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,600.04,,199.22,1.824,4.800,329,125,1,329,125,3.14 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,176.01,,99.54,0.893,2.708,197,65,1,197,65,5.439 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,241.89,,75.37,0.568,8.639,426,28,1,426,28,4.543 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,203.84,,57.67,0.416,7.280,490,28,1,490,28,5.129 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,137.74,,78.00,0.455,3.936,303,35,1,303,35,8.162 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,180.44,,92.86,0.370,5.155,488,35,1,488,35,6.283 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,303.14,,97.03,0.557,8.661,544,35,1,544,35,4.164 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,882.39,,296.77,1.473,7.059,599,125,1,599,125,2.558 -yolo_v3_tiny,OV-2023.0,atom,Intel® Processor N-200,7.82,,4.15,0.041,1.304,193,6,1,193,6,136.62 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,349.17,,156.05,0.588,2.793,594,125,1,594,125,2.995 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,106.71,,64.03,0.429,1.503,249,71,1,249,71,9.391 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,1052.16,,339.30,0.335,5.158,3144,204,2,1572,102,2.514 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,2932.78,,914.37,0.173,7.153,16954,410,2,8477,205,1.216 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,1010.00,,323.69,0.504,4.040,2004,250,2,1002,125,2.623 -yolo_v3_tiny,OV-2023.0,core,Intel® Core™ i7-11800H,448.62,,139.59,1.031,9.969,435,45,1,435,45,2.61 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,4584.21,,1374.00,0.245,8.489,18718,540,2,9359,270,0.876 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,13092.13,,2107.19,0.385,18.703,34000,700,2,17000,350,1.143 -yolo_v3_tiny,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,2217.64,,704.05,0.975,7.392,2274,300,2,1137,150,1.319 -yolo_v3_tiny,OV-2023.0,accel,Intel® Flex-170,3196.82,2123.72,,1.661,21.312,1925,150,1,1925,150,4.692 -yolo_v3_tiny,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,290.98,151.90,86.09,2.719,19.398,107,15,1,107,15,13.63 -yolo_v3_tiny,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,505.41,257.15,136.41,1.031,18.050,490,28,1,490,28,7.676 -yolo_v3_tiny,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,616.60,331.25,206.56,0.885,13.702,697,45,1,697,45,6.288 -yolo_v3_tiny,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,17.57,9.95,5.67,0.091,2.928,193,6,1,193,6,226.048 -yolo_v3_tiny,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,547.67,291.75,171.00,1.286,19.560,426,28,1,426,28,7.089 -yolo_v3_tiny,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,321.12,,92.81,3.001,21.408,107,15,1,107,15, -yolo_v3_tiny,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,324.93,,98.38,0.663,11.605,490,28,1,490,28, -yolo_v3_tiny,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,755.15,,230.16,1.083,16.781,697,45,1,697,45, -yolo_v3_tiny,OV-2023.0,core-CPU+iGPU,Intel® Processor N200 GPU-only,24.02,,7.84,0.124,4.003,193,6,1,193,6, -yolo_v3_tiny,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,487.88,,144.54,1.145,17.424,426,28,1,426,28, 
-end_rec,,,,,,,,,,,,,, -begin_rec,,,,,,,,,,,,,, -yolo_v8n,OV-2023.0,core,Intel® Core™ i9-12900HK CPU-only,184.70,,72.61,0.265,4.104,697,45,1,697,45,7.542 -yolo_v8n,OV-2023.0,atom,Intel® Atom™ x5-E3940 CPU-only,5.69,,3.01,0.167,0.598,34,9.5,1,34,9.5,185.626 -yolo_v8n,OV-2023.0,atom,Intel® Celeron™ 6305E CPU-only,24.68,,9.60,0.231,1.645,107,15,1,107,15,40.111 -yolo_v8n,OV-2023.0,core,Intel® Core™ i3-8100 CPU-only,53.97,,33.14,0.461,0.830,117,65,1,117,65,18.918 -yolo_v8n,OV-2023.0,core,Intel® Core™ i5-10500TE CPU-only,81.11,,47.46,0.379,2.317,214,35,1,214,35,16.114 -yolo_v8n,OV-2023.0,core,Intel® Core™ i5-13600K CPU-only,244.02,,96.78,0.742,1.952,329,125,1,329,125,6.96 -yolo_v8n,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,84.64,,51.15,0.430,1.302,197,65,1,197,65,11.917 -yolo_v8n,OV-2023.0,core,Intel® Core™ i7-1185G7 CPU-only,106.30,,39.77,0.250,3.796,426,28,1,426,28,10.673 -yolo_v8n,OV-2023.0,core,Intel® Core™ i7-1185GRE CPU-only,76.65,,29.15,0.156,2.738,490,28,1,490,28,12.187 -yolo_v8n,OV-2023.0,core,Intel® Core™ i7-8700T CPU-only,69.89,,42.39,0.231,1.997,303,35,1,303,35,16.596 -yolo_v8n,OV-2023.0,core,Intel® Core™ i9-10900TE CPU-only,83.42,,49.70,0.171,2.383,488,35,1,488,35,13.027 -yolo_v8n,OV-2023.0,core,Intel® Core™ i9-12900TE CPU-only,132.88,,51.98,0.244,3.796,544,35,1,544,35,9.23 -yolo_v8n,OV-2023.0,core,Intel® Core™ i9-13900K CPU-only,379.01,,156.74,0.633,3.032,599,125,1,599,125,5.512 -yolo_v8n,OV-2023.0,atom,Intel® Processor N-200,3.25,,1.94,0.017,0.541,193,6,1,193,6,315.715 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® W1290P CPU-only,125.84,,74.97,0.212,1.007,594,125,1,594,125,6.46 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® E-2124G CPU-only,52.07,,32.87,0.209,0.733,249,71,1,249,71,19.471 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Gold 5218T CPU-only,449.63,,175.44,0.143,2.204,3144,204,2,1572,102,5.959 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Platinum 8270 CPU-only,996.63,,459.95,0.059,2.431,16954,410,2,8477,205,3.622 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Silver 4216R CPU-only,432.72,,168.42,0.216,1.731,2004,250,2,1002,125,6.229 -yolo_v8n,OV-2023.0,core,Intel® Core™ i7-11800H,194.10,,74.94,0.446,4.313,435,45,1,435,45,5.995 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Platinum 8380 CPU-only,1686.49,,552.54,0.090,3.123,18718,540,2,9359,270,2.37 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Platinum 8490H CPU-only,2714.73,,1006.34,0.080,3.878,34000,700,2,17000,350,4.403 -yolo_v8n,OV-2023.0,xeon,Intel® Xeon® Silver 4316 CPU-only,846.47,,341.19,0.372,2.822,2274,300,2,1137,150,3.309 -yolo_v8n,OV-2023.0,accel,Intel® Flex-170,1205.32,1210.94,,0.626,8.035,1925,150,1,1925,150,12.951 -yolo_v8n,OV-2023.0,core-iGPU,Intel® Celeron™ 6305E GPU-only,126.20,82.53,48.62,1.179,8.413,107,15,1,107,15,31.597 -yolo_v8n,OV-2023.0,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,154.20,106.82,59.95,0.315,5.507,490,28,1,490,28,24.83 -yolo_v8n,OV-2023.0,core-iGPU,Intel® Core™ i9-12900HK GPU-only,218.28,155.96,102.82,0.313,4.851,697,45,1,697,45,17.971 -yolo_v8n,OV-2023.0,core-iGPU,Intel® Processor N200 GPU-only,8.78,5.75,3.36,0.045,1.463,193,6,1,193,6,454.508 -yolo_v8n,OV-2023.0,core-iGPU,Intel® Core™ i7-1185G7 GPU-only,201.27,141.88,88.88,0.472,7.188,426,28,1,426,28,19.643 -yolo_v8n,OV-2023.0,core-CPU+iGPU,Intel® Celeron™ 6305E GPU-only,111.94,,48.30,1.046,7.462,107,15,1,107,15, -yolo_v8n,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185GRE iGPU-only,116.95,,45.54,0.239,4.177,490,28,1,490,28, -yolo_v8n,OV-2023.0,core-CPU+iGPU,Intel® Core™ i9-12900HK GPU-only,283.25,,118.51,0.406,6.294,697,45,1,697,45, -yolo_v8n,OV-2023.0,core-CPU+iGPU,Intel® Processor 
N200 GPU-only,10.11,,4.49,0.052,1.685,193,6,1,193,6, -yolo_v8n,OV-2023.0,core-CPU+iGPU,Intel® Core™ i7-1185G7 GPU-only,175.79,,75.16,0.413,6.278,426,28,1,426,28, -end_rec,,,,,,,,,,,,,, \ No newline at end of file +Network model,Release,IE-Type,Platform name,Throughput-INT8,ThroughputFP16,ThroughputFP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency,UOM_T,UOM_V,UOM_E,UOM_L +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,11.35,,4.27,0.106093669,0.756801503,107,15,1,107,15,88.11,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,21.38,,15.11,0.182727309,0.328909156,117,65,1,117,65,48.47,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,32.23,,20.26,0.15061144,0.495859202,214,65,1,214,65,36.18,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,113.18,,45.08,0.344024364,0.905472125,329,125,1,329,125,17.43,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,34.34,,24.04,0.178867029,0.528345686,192,65,1,192,65,30.88,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,37.98,,13.59,0.077518083,1.356566444,490,28,1,490,28,29.22,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,27.60,,17.56,0.091086845,0.788551831,303,35,1,303,35,42.83,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,34.12,,20.30,0.069915909,0.974827538,488,35,1,488,35,37.09,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,53.08,,19.48,0.097575059,1.516595204,544,35,1,544,35,23.07,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,163.20,,66.23,0.272459575,1.305626285,599,125,1,599,125,13.68,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,atom,Intel® Processor N-200,1.65,,0.83,0.008529061,0.274351466,193,6,1,193,6,641.46,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,51.01,,29.43,0.08587761,0.408090401,594,125,1,594,125,28.93,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,20.86,,14.80,0.068171185,0.293808206,306,71,1,306,71,49.41,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,217.83,,80.66,0.069284052,1.037281242,3144,210,2,1572,105,13.64,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,572.10,,224.73,0.033744047,1.395357512,16954,410,2,8477,205,7.81,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,872.62,,338.47,0.04661922,1.61596031,18718,540,2,9359,270,43.32,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,3255.60,,505.88,0.095752851,4.650852741,34000,700,2,17000,350,4.08,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,204.84,,76.40,0.101307066,1.024214437,2022,200,2,1011,100,14.24,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,426.78,,167.54,0.187678462,1.422602743,2274,300,2,1137,150,8.09,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,accel,Intel® Flex-170,842.00,683.21,,,,,,1,,,18.63,FPS,FPS/$,FPS/TDP,msec. 
+bert-base-cased,OV-2023.1,accel,Intel® Flex-140,174.28,123.71,,,,,,1,,,91.68,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,45.83,31.96,,0.428279604,3.055061174,107,15,1,107,15,87.12,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,73.09,55.44,,0.149162436,2.610342624,490,28,1,490,28,54.56,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,3.37,2.36,,0.017448724,0.561267278,193,6,1,193,6,1185.78,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,84.42,60.51,,0.198161663,3.014888156,426,28,1,426,28,46.90,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,46.67,,23.26,0.436174206,3.111376,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,73.20,,31.70,0.149385536,2.614246884,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.43,,2.03,0.022934748,0.7377344,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.18,,0.38,0.011017319,0.078590206,107,15,1,107,15,863.34,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,2.09,,1.33,0.017880344,0.032184618,117,65,1,117,65,492.10,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.97,,1.87,0.013882493,0.045705439,214,65,1,214,65,347.78,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,9.93,,3.74,0.030176091,0.079423471,329,125,1,329,125,155.72,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.36,,2.13,0.017518449,0.051746803,192,65,1,192,65,302.24,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,3.79,,1.22,0.007726669,0.135216715,490,28,1,490,28,266.14,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.71,,1.61,0.008936965,0.077368581,303,35,1,303,35,412.44,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.35,,1.83,0.006861724,0.095672042,488,35,1,488,35,327.59,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,5.06,,1.75,0.009296462,0.144493584,544,35,1,544,35,210.96,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,15.19,,5.91,0.025358635,0.12151858,599,125,1,599,125,113.61,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,atom,Intel® Processor N-200,0.16,,0.08,0.000828224,0.0266412,193,6,1,193,6,6367.32,FPS,FPS/$,FPS/TDP,msec. 
+bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,4.66,,2.88,0.007842661,0.037268324,594,125,1,594,125,224.55,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,2.11,,1.34,0.006898006,0.029729433,306,71,1,306,71,486.76,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,21.27,,6.90,0.006764356,0.101272067,3144,210,2,1572,105,103.38,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,50.91,,17.71,0.003002727,0.124166415,16954,410,2,8477,205,63.45,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,67.03,,27.69,0.003581195,0.124134843,18718,540,2,9359,270,253.77,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,251.10,,47.69,0.007385431,0.358720946,34000,700,2,17000,350,36.69,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,20.81,,6.64,0.010291168,0.104043704,2022,200,2,1011,100,106.83,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,38.76,,14.75,0.017046987,0.129216158,2274,300,2,1137,150,237.83,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,accel,Intel® Flex-170,144.12,101.13,,,,,,1,,,110.90,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,accel,Intel® Flex-140,29.97,20.57,,,,,,1,,,534.35,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,4.66,3.35,,0.043581125,0.31087869,107,15,1,107,15,820.82,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,5.11,5.77,,0.010428379,0.182496635,490,28,1,490,28,745.40,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.34,0.23,,0.00174067,0.055991537,193,6,1,193,6,11899.92,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,9.13,6.65,,0.021425568,0.325974707,426,28,1,426,28,449.68,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,5.20,,2.33,0.048610495,0.346754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,3.87,,2.23,0.007899318,0.138238071,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.44,,0.17,0.002288653,0.073618327,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,11.86,,4.61,0.11084032,0.79066095,107,15,1,107,15,85.88,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,22.85,,14.48,0.195256904,0.351462427,117,65,1,117,65,43.93,FPS,FPS/$,FPS/TDP,msec. 
+deeplabv3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,34.47,,16.42,0.161068212,0.530286114,214,65,1,214,65,33.07,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,94.13,,42.32,0.286105004,0.753028371,329,125,1,329,125,16.23,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,37.18,,21.16,0.193650962,0.572015148,192,65,1,192,65,27.36,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,30.78,,9.65,0.062807791,1.099136335,490,28,1,490,28,31.13,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,32.02,,18.34,0.105688204,0.914957883,303,35,1,303,35,37.47,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,40.36,,18.54,0.082712269,1.153245355,488,35,1,488,35,27.15,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,57.69,,22.51,0.106053446,1.648373567,544,35,1,544,35,21.75,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,148.87,,57.81,0.248526818,1.190940511,599,125,1,599,125,12.44,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,atom,Intel® Processor N-200,1.72,,1.01,0.008897382,0.286199128,193,6,1,193,6,595.26,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,51.51,,19.39,0.086713894,0.412064422,594,125,1,594,125,21.10,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,22.69,,15.40,0.074145796,0.319557937,306,71,1,306,71,43.78,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,190.40,,77.08,0.060561308,0.906689304,3144,210,2,1572,105,11.76,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,416.77,,155.13,0.024582207,1.016504228,16954,410,2,8477,205,5.66,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,584.92,,227.30,0.031248866,1.083178298,18718,540,2,9359,270,3.70,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,999.22,,380.82,0.029388758,1.427453957,34000,700,2,17000,350,3.55,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,184.31,,74.61,0.091151132,0.921537946,2022,200,2,1011,100,12.07,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,370.93,,139.96,0.163117657,1.236431838,2274,300,2,1137,150,6.87,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,accel,Intel® Flex-170,803.58,560.76,,,,,,1,,,19.59,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,accel,Intel® Flex-140,148.01,97.06,,,,,,1,,,108.12,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,60.17,27.60,,0.562349486,4.011426332,107,15,1,107,15,66.33,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,76.40,36.69,,0.155928067,2.728741167,490,28,1,490,28,51.90,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,3.65,1.92,,0.018917206,0.608503456,193,6,1,193,6,1094.30,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,105.14,48.76,,0.24680943,3.755029182,426,28,1,426,28,37.64,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,61.90,,17.22,0.578511308,4.126714,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. 
+deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,49.44,,9.02,0.100894422,1.765652381,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.81,,2.03,0.024920889,0.801621937,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,272.36,,133.20,2.545454771,18.15757736,107,15,1,107,15,3.60,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,542.97,,451.26,4.640733479,8.353320262,117,65,1,117,65,1.99,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,899.54,,499.68,4.203451742,13.8390565,214,65,1,214,65,1.58,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,2804.17,,1285.76,8.523326054,22.43339417,329,125,1,329,125,0.88,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,868.27,,679.32,4.522249945,13.35803061,192,65,1,192,65,1.35,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,990.18,,318.31,2.020773404,35.36353457,490,28,1,490,28,1.18,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,741.91,,511.96,2.448553162,21.19747452,303,35,1,303,35,1.84,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,960.08,,614.86,1.967381666,27.43092151,488,35,1,488,35,1.49,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,1296.04,,651.56,2.382432184,37.02980308,544,35,1,544,35,1.31,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,4078.62,,2016.89,6.809056345,32.628998,599,125,1,599,125,0.73,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,atom,Intel® Processor N-200,39.61,,29.85,0.205210747,6.600945698,193,6,1,193,6,26.82,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,1458.83,,554.06,2.455950239,11.67067553,594,125,1,594,125,1.29,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,527.91,,453.23,1.725211318,7.435417792,306,71,1,306,71,2.04,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,5479.84,,1921.88,1.742952604,26.09449042,3144,210,2,1572,105,1.43,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,14421.48,,4410.27,0.850624065,35.17434244,16954,410,2,8477,205,0.92,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,22622.04,,6912.71,1.208571487,41.89266868,18718,540,2,9359,270,0.56,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,38771.76,,10993.76,1.140345798,55.3882245,34000,700,2,17000,350,0.66,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,5229.87,,1856.59,2.586481754,26.14933053,2022,200,2,1011,100,1.44,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,12359.26,,3615.96,5.435030176,41.19752874,2274,300,2,1137,150,0.55,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,accel,Intel® Flex-170,7195.50,6410.60,,,,,,1,,,1.97,FPS,FPS/$,FPS/TDP,msec. 
+mobilenet-v2,OV-2023.1,accel,Intel® Flex-140,1219.84,1149.89,,,,,,1,,,13.07,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,692.73,509.86,,6.474119544,46.18205274,107,15,1,107,15,5.58,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,903.44,556.69,,1.84375582,32.26572686,490,28,1,490,28,4.30,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,57.93,40.13,,0.300129792,9.65417499,193,6,1,193,6,68.05,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,1008.77,740.13,,2.36801077,36.02759243,426,28,1,426,28,3.82,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,514.94,,313.82,4.812519626,34.32930667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,1114.75,,268.28,2.275002757,39.81254825,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,73.89,,44.62,0.38286464,12.31547925,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,49.96,,14.45,0.466891776,3.330494671,107,15,1,107,15,19.80,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,97.49,,51.23,0.833227829,1.499810092,117,65,1,117,65,10.67,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,145.23,,74.40,0.678644387,2.234306135,214,65,1,214,65,8.18,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,515.94,,140.29,1.568210731,4.127530643,329,125,1,329,125,3.87,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,158.18,,82.33,0.823856129,2.433544259,192,65,1,192,65,7.04,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,173.03,,44.88,0.353117963,6.179564359,490,28,1,490,28,6.59,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,122.85,,61.90,0.405448145,3.510022512,303,35,1,303,35,9.95,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,158.96,,76.32,0.325734087,4.541663835,488,35,1,488,35,7.54,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,270.04,,72.19,0.496399722,7.715469968,544,35,1,544,35,4.89,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,749.50,,228.07,1.251250835,5.995994004,599,125,1,599,125,2.93,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,atom,Intel® Processor N-200,6.55,,3.16,0.03395277,1.092147419,193,6,1,193,6,159.41,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,242.15,,98.01,0.407662044,1.937210033,594,125,1,594,125,5.41,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,93.12,,50.42,0.30430895,1.311528714,306,71,1,306,71,11.07,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,967.51,,269.32,0.307731475,4.607179803,3144,210,2,1572,105,2.90,FPS,FPS/$,FPS/TDP,msec. 
+resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,2904.14,,747.72,0.171295011,7.083257598,16954,410,2,8477,205,1.53,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,4995.38,,1161.61,0.266875909,9.250709751,18718,540,2,9359,270,1.02,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,20106.68,,1683.03,0.591372933,28.72382815,34000,700,2,17000,350,1.01,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,930.86,,255.73,0.46036761,4.654316532,2022,200,2,1011,100,3.01,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,2277.89,,566.74,1.001712531,7.592980986,2274,300,2,1137,150,1.47,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,accel,Intel® Flex-170,3587.03,2207.96,,,,,,1,,,4.17,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,accel,Intel® Flex-140,681.39,441.41,,,,,,1,,,23.43,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,211.61,117.01,,1.977639185,14.10715952,107,15,1,107,15,18.79,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,289.48,170.46,,0.590782847,10.33869983,490,28,1,490,28,13.64,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,14.62,7.80,,0.075769949,2.437266688,193,6,1,193,6,272.49,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,352.09,211.39,,0.826510493,12.57476679,426,28,1,426,28,11.09,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,201.96,,71.44,1.887469159,13.46394667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,307.08,,88.49,0.626686058,10.96700601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,18.58,,6.49,0.09624999,3.096041335,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,107.63,,36.80,1.005906996,7.175469906,107,15,1,107,15,9.12,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,212.15,,122.46,1.813284395,3.263911911,117,65,1,117,65,4.93,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,327.95,,171.39,1.532485774,5.045414702,214,65,1,214,65,3.61,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,999.78,,361.81,3.038835794,7.99821581,329,125,1,329,125,1.90,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,343.23,,200.57,1.787633018,5.280392915,192,65,1,192,65,3.11,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,387.68,,101.48,0.791191606,13.8458531,490,28,1,490,28,2.80,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,275.38,,157.24,0.908858835,7.868120772,303,35,1,303,35,4.33,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,367.82,,194.43,0.753734827,10.50921702,488,35,1,488,35,3.35,FPS,FPS/$,FPS/TDP,msec. 
+ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,543.47,,186.05,0.999034589,15.5278519,544,35,1,544,35,2.61,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,1525.02,,586.71,2.545949853,12.20019169,599,125,1,599,125,1.62,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,atom,Intel® Processor N-200,14.49,,7.97,0.075060228,2.414437321,193,6,1,193,6,71.99,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,577.55,,223.76,0.972304716,4.620392012,594,125,1,594,125,2.38,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,203.21,,126.30,0.664084594,2.862111065,306,71,1,306,71,5.09,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,2048.96,,639.54,0.651703831,9.756937353,3144,210,2,1572,105,1.57,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,5725.24,,1655.76,0.337692546,13.96399861,16954,410,2,8477,205,1.11,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,10274.06,,2354.69,0.548886883,19.0260457,18718,540,2,9359,270,0.67,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,22569.91,,3519.34,0.663820955,32.24273208,34000,700,2,17000,350,0.82,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,1946.94,,612.87,0.962878848,9.73470515,2022,200,2,1011,100,1.63,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,4808.85,,1247.67,2.114709828,16.02950049,2274,300,2,1137,150,0.81,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,accel,Intel® Flex-170,4012.21,3280.14,,,,,,1,,,3.65,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,accel,Intel® Flex-140,837.59,673.84,,,,,,1,,,19.02,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,408.97,220.07,,3.822128486,27.26451654,107,15,1,107,15,9.63,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,522.64,285.29,,1.066608011,18.6656402,490,28,1,490,28,7.49,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,28.92,15.36,,0.149828275,4.819476185,193,6,1,193,6,136.60,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,637.62,384.26,,1.496766725,22.77223659,426,28,1,426,28,6.11,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,321.29,,138.22,3.002740187,21.41954667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,531.28,,141.48,1.084245141,18.97428996,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,35.69,,14.88,0.184945841,5.949091211,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,0.89,,0.23,0.00834996,0.059563049,107,15,1,107,15,1119.27,FPS,FPS/$,FPS/TDP,msec. 
+ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,1.68,,0.97,0.014338412,0.025809141,117,65,1,117,65,596.89,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.42,,1.40,0.011301245,0.037207176,214,65,1,214,65,459.48,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,8.23,,2.40,0.025006186,0.065816281,329,125,1,329,125,163.47,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,2.78,,1.56,0.014501179,0.042834252,192,65,1,192,65,362.12,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,2.96,,0.76,0.006043555,0.105762215,490,28,1,490,28,336.81,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.02,,1.13,0.006664408,0.057694728,303,35,1,303,35,563.91,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,2.68,,1.49,0.00548551,0.076483685,488,35,1,488,35,405.46,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,4.40,,1.32,0.008086729,0.125690868,544,35,1,544,35,235.75,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,12.52,,4.02,0.020905788,0.100180536,599,125,1,599,125,125.64,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,atom,Intel® Processor N-200,0.11,,0.05,0.000581374,0.01870087,193,6,1,193,6,8951.48,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,4.33,,2.45,0.007285664,0.034621476,594,125,1,594,125,239.93,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,1.60,,0.92,0.005224348,0.022516206,306,71,1,306,71,625.20,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,17.63,,4.57,0.005609079,0.083975927,3144,210,2,1572,105,115.76,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,57.85,,14.82,0.003412426,0.141107992,16954,410,2,8477,205,36.60,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,79.09,,20.81,0.004225558,0.146470375,18718,540,2,9359,270,106.87,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,445.98,,31.40,0.013117158,0.637119123,34000,700,2,17000,350,8.59,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,16.77,,4.34,0.008295831,0.083870853,2022,200,2,1011,100,121.62,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,42.59,,10.54,0.018728616,0.141962906,2274,300,2,1137,150,59.65,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,accel,Intel® Flex-170,167.49,103.03,,,,,,1,,,95.09,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,accel,Intel® Flex-140,29.99,17.56,,,,,,1,,,529.50,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,5.05,2.63,,0.047178932,0.336543045,107,15,1,107,15,773.01,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,8.98,4.71,,0.018327459,0.320730535,490,28,1,490,28,445.01,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.29,0.16,,0.001497632,0.048173819,193,6,1,193,6,13818.30,FPS,FPS/$,FPS/TDP,msec. 
+ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,9.70,5.44,,0.022760985,0.34629213,426,28,1,426,28,422.10,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,0.90,,0.23,0.00837685,0.059754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,2.91,,0.75,0.005937663,0.103909111,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.11,,0.05,0.000582108,0.01872448,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,5.46,,1.54,0.051044248,0.364115633,107,15,1,107,15,183.90,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,10.65,,5.82,0.09106238,0.163912284,117,65,1,117,65,94.90,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,15.38,,8.31,0.071851432,0.236557023,214,65,1,214,65,74.19,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,51.71,,15.62,0.157162207,0.41365093,329,125,1,329,125,29.85,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,17.56,,9.38,0.091438134,0.270094181,192,65,1,192,65,57.84,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,18.06,,4.86,0.03686323,0.645106519,490,28,1,490,28,57.19,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,12.66,,6.85,0.041784766,0.361736687,303,35,1,303,35,86.80,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,16.86,,8.71,0.034540452,0.48159259,488,35,1,488,35,65.99,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,27.01,,8.10,0.049658249,0.771831063,544,35,1,544,35,42.00,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,78.04,,25.56,0.1302838,0.62431997,599,125,1,599,125,23.30,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,atom,Intel® Processor N-200,0.70,,0.35,0.003642344,0.117162061,193,6,1,193,6,1468.27,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,27.34,,14.09,0.046032451,0.218746209,594,125,1,594,125,40.58,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,10.07,,5.66,0.032912167,0.141846806,306,71,1,306,71,100.10,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,106.30,,29.82,0.033811441,0.506205571,3144,210,2,1572,105,21.86,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,313.22,,88.20,0.018474663,0.76394984,16954,410,2,8477,205,10.58,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,493.02,,109.11,0.026339551,0.913006894,18718,540,2,9359,270,18.51,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,2136.92,,194.88,0.06285072,3.052749275,34000,700,2,17000,350,3.29,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,101.27,,28.40,0.050083923,0.50634846,2022,200,2,1011,100,22.87,FPS,FPS/$,FPS/TDP,msec. 
+yolo-v3,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,242.00,,62.34,0.106421602,0.806675746,2274,300,2,1137,150,13.70,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,accel,Intel® Flex-170,789.11,338.45,,,,,,1,,,19.85,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,accel,Intel® Flex-140,159.67,87.28,,,,,,1,,,99.99,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,31.90,15.02,,0.298118652,2.126579715,107,15,1,107,15,123.93,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,57.77,25.96,,0.117907546,2.063382053,490,28,1,490,28,68.96,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,1.76,0.94,,0.009116198,0.293237699,193,6,1,193,6,2271.06,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,63.86,29.69,,0.149904983,2.280697234,426,28,1,426,28,63.56,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,34.09,,9.18,0.318603364,2.272704,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,55.18,,12.97,0.112605748,1.970600588,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,2.22,,0.74,0.011488536,0.369547916,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,54.42,,18.04,0.508553978,3.627685044,107,15,1,107,15,18.25,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,112.01,,63.79,0.957390784,1.723303411,117,65,1,117,65,9.02,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,167.15,,91.46,0.781091755,2.571594392,214,65,1,214,65,6.72,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,599.98,,196.78,1.823638134,4.799815567,329,125,1,329,125,3.05,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,185.17,,102.47,0.964441757,2.848812574,192,65,1,192,65,5.42,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,186.99,,55.22,0.381609578,6.678167613,490,28,1,490,28,5.74,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,137.72,,76.53,0.454530194,3.934932821,303,35,1,303,35,8.20,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,186.09,,96.75,0.381340728,5.316979292,488,35,1,488,35,6.26,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,290.20,,92.52,0.533462621,8.291533312,544,35,1,544,35,4.20,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,858.94,,286.41,1.433957861,6.871526071,599,125,1,599,125,2.44,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,atom,Intel® Processor N-200,7.65,,4.05,0.039622221,1.27451476,193,6,1,193,6,136.49,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,298.60,,148.94,0.502696157,2.388812138,594,125,1,594,125,3.99,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,106.58,,62.62,0.348291454,1.501087112,306,71,1,306,71,9.44,FPS,FPS/$,FPS/TDP,msec. 
+yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,1051.30,,339.08,0.334382549,5.006184448,3144,210,2,1572,105,2.51,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,2824.66,,923.96,0.166607362,6.889417597,16954,410,2,8477,205,1.22,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,4664.34,,1378.51,0.249189958,8.637662274,18718,540,2,9359,270,0.87,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,13094.97,,2140.00,0.385146034,18.70709309,34000,700,2,17000,350,1.09,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,1008.44,,322.64,0.49873439,5.042204682,2022,200,2,1011,100,2.62,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,2217.58,,702.90,0.975190642,7.391945065,2274,300,2,1137,150,1.33,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,accel,Intel® Flex-170,3731.30,2395.93,,,,,,1,,,4.06,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,accel,Intel® Flex-140,595.41,589.87,,,,,,1,,,26.79,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,289.69,151.78,,2.707403451,19.31281129,107,15,1,107,15,13.67,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,482.61,255.46,,0.984928525,17.23624918,490,28,1,490,28,8.06,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,18.65,9.81,,0.096624282,3.108081056,193,6,1,193,6,212.84,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,555.59,293.27,,1.304201059,19.84248754,426,28,1,426,28,6.92,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,266.57,,93.47,2.491299065,17.77126667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,383.36,,116.03,0.782374111,13.69154694,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,23.03,,8.30,0.119308014,3.837741122,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.49,,0.38,0.013950494,0.099513526,107,15,1,107,15,672.25,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,2.43,,1.57,0.020775748,0.037396346,117,65,1,117,65,425.88,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,3.62,,2.29,0.016924989,0.055722272,214,65,1,214,65,322.49,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,11.46,,3.96,0.03483307,0.091680639,329,125,1,329,125,121.88,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.95,,2.54,0.020576722,0.06078047,192,65,1,192,65,262.38,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,4.93,,1.23,0.010063508,0.176111389,490,28,1,490,28,209.95,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,3.02,,1.85,0.009976084,0.086364383,303,35,1,303,35,386.94,FPS,FPS/$,FPS/TDP,msec. 
+unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.86,,2.43,0.007900388,0.110153975,488,35,1,488,35,282.08,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,6.32,,2.19,0.01162052,0.18061608,544,35,1,544,35,169.49,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,18.02,,6.59,0.030079429,0.144140625,599,125,1,599,125,91.92,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,atom,Intel® Processor N-200,0.17,,0.09,0.000895225,0.0287964,193,6,1,193,6,5824.42,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,6.10,,3.95,0.010266964,0.048788613,594,125,1,594,125,180.82,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,2.33,,1.48,0.007623167,0.032854777,306,71,1,306,71,431.48,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,29.17,,7.32,0.009277061,0.138890851,3144,210,2,1572,105,70.99,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,95.18,,21.75,0.005614225,0.232155061,16954,410,2,8477,205,23.58,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,129.66,,31.77,0.006926924,0.240107724,18718,540,2,9359,270,73.18,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,597.42,,48.08,0.017571322,0.853464211,34000,700,2,17000,350,9.00,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,27.77,,6.96,0.01373266,0.138837194,2022,200,2,1011,100,74.54,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,68.21,,15.93,0.029995749,0.227367774,2274,300,2,1137,150,43.86,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,accel,Intel® Flex-170,277.97,158.53,,,,,,1,,,57.27,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,accel,Intel® Flex-140,46.10,28.49,,,,,,1,,,346.80,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,8.40,4.35,,0.078545387,0.56029043,107,15,1,107,15,475.75,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,15.51,7.81,,0.03164744,0.553830203,490,28,1,490,28,257.89,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.46,0.25,,0.002385152,0.076722378,193,6,1,193,6,8685.75,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,17.28,8.86,,0.04056698,0.617197621,426,28,1,426,28,227.89,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,8.96,,2.57,0.083779393,0.597626333,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,15.42,,3.90,0.031467977,0.550689601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.55,,0.19,0.002833692,0.09115042,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,24.40,,9.61,0.22800938,1.626466914,107,15,1,107,15,40.45,FPS,FPS/$,FPS/TDP,msec. 
+yolo_v8n,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,53.51,,33.07,0.457363269,0.823253884,117,65,1,117,65,19.20,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,81.91,,47.09,0.382748211,1.260124878,214,65,1,214,65,13.68,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,248.13,,95.51,0.754209583,1.985079623,329,125,1,329,125,6.70,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,86.77,,53.00,0.451947196,1.334982486,192,65,1,192,65,11.88,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,76.53,,27.31,0.156180065,2.733151145,490,28,1,490,28,13.42,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,71.40,,42.60,0.235643867,2.040002619,303,35,1,303,35,16.51,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,93.44,,53.42,0.19148001,2.669778431,488,35,1,488,35,12.45,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,129.22,,50.19,0.237534522,3.691965149,544,35,1,544,35,9.42,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,374.34,,153.46,0.624943307,2.994728327,599,125,1,599,125,5.32,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,atom,Intel® Processor N-200,3.26,,1.95,0.016869276,0.542628378,193,6,1,193,6,316.56,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,136.37,,72.87,0.229579691,1.090962692,594,125,1,594,125,9.15,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,52.29,,32.98,0.170869765,0.73642462,306,71,1,306,71,19.47,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,452.92,,175.37,0.144058565,2.156762523,3144,210,2,1572,105,5.85,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,978.34,,454.72,0.057705352,2.386186661,16954,410,2,8477,205,3.51,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,1708.13,,573.45,0.091255994,3.163203145,18718,540,2,9359,270,2.38,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,2882.62,,945.60,0.084783045,4.118033592,34000,700,2,17000,350,3.82,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,431.62,,166.10,0.213460485,2.158085503,2022,200,2,1011,100,6.19,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,855.04,,342.65,0.376007503,2.850136872,2274,300,2,1137,150,3.25,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,accel,Intel® Flex-170,1445.14,1480.07,,,,,,1,,,10.71,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,accel,Intel® Flex-140,201.93,259.00,,,,,,1,,,79.17,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,126.14,82.58,,1.178855159,8.409166799,107,15,1,107,15,31.55,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,170.37,110.99,,0.347683792,6.084466364,490,28,1,490,28,23.22,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,8.61,5.66,,0.044636,1.435791346,193,6,1,193,6,463.22,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,210.16,143.23,,0.493335138,7.505741748,426,28,1,426,28,18.84,FPS,FPS/$,FPS/TDP,msec. 
+yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,116.50,,51.35,1.088790654,7.766706667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,114.23,,46.77,0.233123263,4.079657102,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,10.54,,4.68,0.05458634,1.755860615,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,32.34,,49.17,0.053990134,0.25872072,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,atom,Intel® Processor N-200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,28.29,,43.45,0.001511565,0.052395333,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,47.21,,52.77,0.00138849,0.067440929,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,40.07,,42.88,0.017620783,0.133565533,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,accel,Intel® Flex-170,81.64,,82.81,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. 
+bloomz-560m,OV-2023.1,accel,Intel® Flex-140,70.36,,65.11,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,242.95,,457.83,0.405588331,1.94357928,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,atom,Intel® Processor N-200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,110.25,,200.49,0.005889907,0.204161611,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec. 
+GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,119.09,,131.63,0.003502792,0.170135629,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,277.75,,278.22,0.122140598,0.925825733,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,accel,Intel® Flex-170,143.21,,143.00,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,accel,Intel® Flex-140,,,,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,285.08,,511.77,0.475930451,2.28065872,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,atom,Intel® Processor N-200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. 
+llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,95.08,,182.22,0.005079793,0.176080667,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,119.29,,133.20,0.003508666,0.170420929,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,338.67,,345.26,0.148931821,1.1289032,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,accel,Intel® Flex-170,138.06,,137.36,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,accel,Intel® Flex-140,,,,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec. +llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, +begin_rec,,,,,,,,,,,,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. 
+stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,43.21,,43.13,0.072129599,0.34564504,599,125,1,599,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,atom,Intel® Processor N-200,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,18.95,,19.43,0.001012236,0.035087093,18718,540,2,9359,270,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,5.88,,6.47,0.000172889,0.008397471,34000,700,2,17000,350,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,21.92,,22.42,0.00963796,0.073055733,2274,300,2,1137,150,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,accel,Intel® Flex-170,4.29,,4.31,,,,,1,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,accel,Intel® Flex-140,18.68,,18.46,,,,,1,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. 
+stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec. +end_rec,,,,,,,,,,,,,,,,,, \ No newline at end of file diff --git a/docs/_static/benchmarks_files/OVMS-benchmark-data.csv b/docs/_static/benchmarks_files/OVMS-benchmark-data.csv index 35f22df767c500..3c6d0d768aa8e0 100644 --- a/docs/_static/benchmarks_files/OVMS-benchmark-data.csv +++ b/docs/_static/benchmarks_files/OVMS-benchmark-data.csv @@ -1,121 +1,121 @@ -Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32 -begin_rec,,,,,,, -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,559.87,591.49,182.12,188.52 -bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,504.08,520.95,157.58,162.80 -bert-base-cased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,137.25,145.76,38.54,40.85 -bert-base-cased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,139.53,154.46,40.65,44.50 -bert-base-cased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,28.38,30.17,17.53,18.44 -bert-base-cased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.43,27.18,15.59,15.76 -end_rec,,,,,,, -begin_rec,,,,,,, -bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,32.58,37.81,15.39,16.79 -bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,28.06,33.33,13.70,14.54 -bert-large-uncased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,9.04,3.16,3.27 -bert-large-uncased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,8.15,8.23,3.23,3.28 -bert-large-uncased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.56,2.58,1.52,1.55 -bert-large-uncased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.12,2.29,1.29,1.30 -end_rec,,,,,,, -begin_rec,,,,,,, -DeeplabV3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,343.44,369.62,114.69,120.87 -DeeplabV3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,279.60,305.89,90.33,96.68 -DeeplabV3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,89.82,108.12,25.46,25.74 -DeeplabV3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,77.58,92.22,21.85,23.09 -DeeplabV3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,16.77,29.67,16.77,17.29 -DeeplabV3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.99,28.39,15.71,16.00 -end_rec,,,,,,, -begin_rec,,,,,,, -Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,380.32,419.32,236.50,243.76 -Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,324.60,353.57,207.87,227.27 -Efficientdet-D0,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,121.90,137.75,53.07,57.23 -Efficientdet-D0,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,111.73,127.45,44.97,46.59 -Efficientdet-D0,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,40.04,41.11,25.72,28.38 -Efficientdet-D0,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,41.21,42.72,25.10,25.95 -end_rec,,,,,,, -begin_rec,,,,,,, -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,65.01,70.39,17.65,18.89 -faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Gold 
6238M CPU-only,60.49,62.73,15.71,16.46 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,13.89,14.16,3.83,4.04 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,14.91,15.29,4.06,4.28 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.58,3.62,1.88,1.89 -faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,3.22,3.23,1.70,1.71 -end_rec,,,,,,, -begin_rec,,,,,,, -Inception-V4,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,741.48,791.73,185.33,190.14 -Inception-V4,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,647.30,693.21,161.35,165.76 -Inception-V4,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,158.69,162.07,37.16,39.11 -Inception-V4,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,174.96,183.19,40.53,42.20 -Inception-V4,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,36.16,37.61,18.49,19.05 -Inception-V4,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,32.06,33.23,16.39,16.76 -end_rec,,,,,,, -begin_rec,,,,,,, -Mobilenet-SSD ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,4118.70,5664.02,1312.52,1488.89 -Mobilenet-SSD ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,3740.30,4877.12,1156.02,1255.77 -Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,918.95,1250.22,291.99,335.81 -Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,945.19,1429.69,273.07,329.38 -Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,236.18,280.08,152.84,173.51 -Mobilenet-SSD ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,235.71,263.74,138.06,150.19 -end_rec,,,,,,, -begin_rec,,,,,,, -Mobilenet-V2 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,7580.68,13077.02,3108.32,3891.42 -Mobilenet-V2 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,7310.77,11049.48,2661.25,3172.62 -Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,1979.69,3041.21,709.27,904.80 -Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,1911.48,3538.02,619.12,804.96 -Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,520.37,680.97,411.23,536.68 -Mobilenet-V2 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,549.38,665.54,400.97,503.12 -end_rec,,,,,,, -begin_rec,,,,,,, -GPT-2,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,,,9.06,11.37 -GPT-2,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,,,7.68,8.78 -GPT-2,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,,,2.07,2.44 -GPT-2,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,,,1.85,2.24 -GPT-2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,,,1.11,1.21 -GPT-2,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,,,1.02,1.06 -end_rec,,,,,,, -begin_rec,,,,,,, -Resnet-50,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,2328.67,2558.36,624.81,635.89 -Resnet-50,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,2062.84,2261.19,558.42,570.31 -Resnet-50,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,500.96,527.47,122.28,133.20 -Resnet-50,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,549.24,594.07,126.20,144.62 -Resnet-50,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,114.70,123.79,60.22,66.07 -Resnet-50,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,106.60,113.22,55.48,57.32 -end_rec,,,,,,, -begin_rec,,,,,,, -SSD-Resnet34-1200 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,43.61,47.57,11.84,12.22 -SSD-Resnet34-1200 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,38.68,40.91,10.19,10.61 -SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,8.83,2.18,2.27 -SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,9.84,10.10,2.34,2.58 -SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i5-8500 
CPU-only,2.08,2.12,1.21,1.23 -SSD-Resnet34-1200 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,1.83,1.84,1.03,1.07 -end_rec,,,,,,, -begin_rec,,,,,,, -Unet-Camvid--0001 ,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,71.34,77.83,17.45,18.10 -Unet-Camvid--0001 ,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,61.58,67.16,15.23,15.47 -Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,14.39,14.76,3.52,3.71 -Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,15.73,16.27,4.03,4.10 -Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.99,3.05,1.90,1.95 -Unet-Camvid--0001 ,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.79,2.80,1.73,1.74 -end_rec,,,,,,, -begin_rec,,,,,,, -Yolo_V3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,240.57,263.21,69.57,72.63 -Yolo_V3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,208.97,227.96,59.95,61.34 -Yolo_V3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,51.50,53.96,14.09,14.52 -Yolo_V3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,55.30,60.81,14.27,15.84 -Yolo_V3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,12.91,13.40,7.01,7.31 -Yolo_V3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,11.42,11.73,6.29,6.39 -end_rec,,,,,,, -begin_rec,,,,,,, -Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,1785.36,2420.73,690.57,754.41 -Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,1614.61,2097.37,584.59,632.46 -Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,441.22,574.61,158.91,166.02 -Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,418.34,639.79,154.96,174.78 -Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,121.45,141.31,72.11,77.17 -Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,114.38,127.16,67.41,71.53 -end_rec,,,,,,, -begin_rec,,,,,,, -Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,588.99,779.76,308.36,385.15 -Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,574.93,655.56,273.21,322.97 -Yolo_V8n,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,161.00,229.64,71.65,83.89 -Yolo_V8n,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,148.22,233.11,68.23,84.40 -Yolo_V8n,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,55.71,66.98,36.15,41.24 -Yolo_V8n,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,53.23,62.99,34.77,37.34 -end_rec,,,,,,, \ No newline at end of file +Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32,UOM_T +begin_rec,,,,,,,, +bert-base-cased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,559.87,591.49,182.12,188.52,FPS +bert-base-cased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,504.08,520.95,157.58,162.80,FPS +bert-base-cased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,137.25,145.76,38.54,40.85,FPS +bert-base-cased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,139.53,154.46,40.65,44.50,FPS +bert-base-cased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,28.38,30.17,17.53,18.44,FPS +bert-base-cased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.43,27.18,15.59,15.76,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,32.58,37.81,15.39,16.79,FPS +bert-large-uncased,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,28.06,33.33,13.70,14.54,FPS +bert-large-uncased,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,9.04,3.16,3.27,FPS +bert-large-uncased,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,8.15,8.23,3.23,3.28,FPS +bert-large-uncased,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.56,2.58,1.52,1.55,FPS 
+bert-large-uncased,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.12,2.29,1.29,1.30,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +DeeplabV3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,343.44,369.62,114.69,120.87,FPS +DeeplabV3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,279.60,305.89,90.33,96.68,FPS +DeeplabV3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,89.82,108.12,25.46,25.74,FPS +DeeplabV3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,77.58,92.22,21.85,23.09,FPS +DeeplabV3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,16.77,29.67,16.77,17.29,FPS +DeeplabV3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,26.99,28.39,15.71,16.00,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,380.32,419.32,236.50,243.76,FPS +Efficientdet-D0,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,324.60,353.57,207.87,227.27,FPS +Efficientdet-D0,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,121.90,137.75,53.07,57.23,FPS +Efficientdet-D0,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,111.73,127.45,44.97,46.59,FPS +Efficientdet-D0,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,40.04,41.11,25.72,28.38,FPS +Efficientdet-D0,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,41.21,42.72,25.10,25.95,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,65.01,70.39,17.65,18.89,FPS +faster_rcnn_resnet50_coco,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,60.49,62.73,15.71,16.46,FPS +faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,13.89,14.16,3.83,4.04,FPS +faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,14.91,15.29,4.06,4.28,FPS +faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,3.58,3.62,1.88,1.89,FPS +faster_rcnn_resnet50_coco,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,3.22,3.23,1.70,1.71,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Inception-V4,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,741.48,791.73,185.33,190.14,FPS +Inception-V4,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,647.30,693.21,161.35,165.76,FPS +Inception-V4,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,158.69,162.07,37.16,39.11,FPS +Inception-V4,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,174.96,183.19,40.53,42.20,FPS +Inception-V4,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,36.16,37.61,18.49,19.05,FPS +Inception-V4,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,32.06,33.23,16.39,16.76,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +"Mobilenet-SSD ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,4118.70,5664.02,1312.52,1488.89,FPS +"Mobilenet-SSD ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,3740.30,4877.12,1156.02,1255.77,FPS +"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,918.95,1250.22,291.99,335.81,FPS +"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,945.19,1429.69,273.07,329.38,FPS +"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,236.18,280.08,152.84,173.51,FPS +"Mobilenet-SSD ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,235.71,263.74,138.06,150.19,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +"Mobilenet-V2 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,7580.68,13077.02,3108.32,3891.42,FPS +"Mobilenet-V2 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,7310.77,11049.48,2661.25,3172.62,FPS +"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,1979.69,3041.21,709.27,904.80,FPS +"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,1911.48,3538.02,619.12,804.96,FPS +"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i5-8500 
CPU-only,520.37,680.97,411.23,536.68,FPS +"Mobilenet-V2 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,549.38,665.54,400.97,503.12,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +GPT-2,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,,,9.06,11.37,FPS +GPT-2,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,,,7.68,8.78,FPS +GPT-2,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,,,2.07,2.44,FPS +GPT-2,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,,,1.85,2.24,FPS +GPT-2,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,,,1.11,1.21,FPS +GPT-2,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,,,1.02,1.06,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Resnet-50,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,2328.67,2558.36,624.81,635.89,FPS +Resnet-50,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,2062.84,2261.19,558.42,570.31,FPS +Resnet-50,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,500.96,527.47,122.28,133.20,FPS +Resnet-50,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,549.24,594.07,126.20,144.62,FPS +Resnet-50,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,114.70,123.79,60.22,66.07,FPS +Resnet-50,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,106.60,113.22,55.48,57.32,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +"SSD-Resnet34-1200 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,43.61,47.57,11.84,12.22,FPS +"SSD-Resnet34-1200 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,38.68,40.91,10.19,10.61,FPS +"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,8.75,8.83,2.18,2.27,FPS +"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,9.84,10.10,2.34,2.58,FPS +"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.08,2.12,1.21,1.23,FPS +"SSD-Resnet34-1200 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,1.83,1.84,1.03,1.07,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +"Unet-Camvid--0001 ",OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,71.34,77.83,17.45,18.10,FPS +"Unet-Camvid--0001 ",OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,61.58,67.16,15.23,15.47,FPS +"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,14.39,14.76,3.52,3.71,FPS +"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,15.73,16.27,4.03,4.10,FPS +"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,2.99,3.05,1.90,1.95,FPS +"Unet-Camvid--0001 ",OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,2.79,2.80,1.73,1.74,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Yolo_V3,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,240.57,263.21,69.57,72.63,FPS +Yolo_V3,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,208.97,227.96,59.95,61.34,FPS +Yolo_V3,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,51.50,53.96,14.09,14.52,FPS +Yolo_V3,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,55.30,60.81,14.27,15.84,FPS +Yolo_V3,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,12.91,13.40,7.01,7.31,FPS +Yolo_V3,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,11.42,11.73,6.29,6.39,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® 8260M CPU-only,1785.36,2420.73,690.57,754.41,FPS +Yolo_V3_Tiny,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,1614.61,2097.37,584.59,632.46,FPS +Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,441.22,574.61,158.91,166.02,FPS +Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,418.34,639.79,154.96,174.78,FPS +Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,121.45,141.31,72.11,77.17,FPS +Yolo_V3_Tiny,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,114.38,127.16,67.41,71.53,FPS +end_rec,,,,,,,, +begin_rec,,,,,,,, +Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® 
8260M CPU-only,588.99,779.76,308.36,385.15,FPS +Yolo_V8n,OV-2023.0,xeon,Intel® Xeon® Gold 6238M CPU-only,574.93,655.56,273.21,322.97,FPS +Yolo_V8n,OV-2023.0,core,Intel® Core™ i9-11900K CPU-only,161.00,229.64,71.65,83.89,FPS +Yolo_V8n,OV-2023.0,core,Intel® Core™ i7-11700K CPU-only,148.22,233.11,68.23,84.40,FPS +Yolo_V8n,OV-2023.0,core,Intel® Core™ i5-8500 CPU-only,55.71,66.98,36.15,41.24,FPS +Yolo_V8n,OV-2023.0,core,Intel® Core™ i3-10100 CPU-only,53.23,62.99,34.77,37.34,FPS +end_rec,,,,,,,, \ No newline at end of file diff --git a/docs/_static/js/graphs.js b/docs/_static/js/graphs.js index 4ec945bcb81b69..a5625268139ac0 100644 --- a/docs/_static/js/graphs.js +++ b/docs/_static/js/graphs.js @@ -35,10 +35,8 @@ const OVdefaultSelections = { const OVMSdefaultSelections = { platforms: {name: 'platform', data: [ - 'Intel® Core™ i3-10100 CPU-only', - 'Intel® Core™ i5-8500 CPU-only', - 'Intel® Core™ i7-8700T CPU-only', - 'Intel® Core™ i9-10920X CPU-only', + 'Intel® Xeon® 8260M CPU-only', + 'Intel® Xeon® Gold 6238M CPU-only' ] }, models: {name: 'networkmodel', @@ -135,6 +133,11 @@ class ExcelData { this.pricePerSocket = csvdataline[12]; this.tdpPerSocket = csvdataline[13]; this.latency = csvdataline[14]; + + this.throughputUnit = csvdataline[15] + this.valueUnit = csvdataline[16] + this.efficiencyUnit = csvdataline[17] + this.latencyUnit = csvdataline[18] } } @@ -146,6 +149,8 @@ class OVMSExcelData extends ExcelData { this.throughputInt8 = csvdataline[4]; this.throughputOVMSFP32 = csvdataline[7]; this.throughputFP32 = csvdataline[6]; + + this.throughputUnit = csvdataline[8] } } @@ -176,6 +181,11 @@ class GraphData { this.pricePerSocket = excelData.pricePerSocket; this.tdpPerSocket = excelData.tdpPerSocket; this.latency = excelData.latency; + + this.throughputUnit = excelData.throughputUnit; + this.valueUnit = excelData.valueUnit; + this.efficiencyUnit = excelData.efficiencyUnit; + this.latencyUnit = excelData.latencyUnit; } } @@ -295,53 +305,53 @@ class Graph { } // this returns an object that is used to ender the chart - static getGraphConfig(kpi, precisions) { + static getGraphConfig(kpi, units, precisions) { switch (kpi) { case 'throughput': return { chartTitle: 'Throughput', chartSubtitle: '(higher is better)', iconClass: 'throughput-icon', - datasets: precisions.map((precision) => this.getPrecisionConfig(precision)), + datasets: precisions.map((precision) => this.getPrecisionConfig(precision, units.throughputUnit)), }; case 'latency': return { chartTitle: 'Latency', chartSubtitle: '(lower is better)', iconClass: 'latency-icon', - datasets: [{ data: null, color: '#8F5DA2', label: 'Milliseconds' }], + datasets: [{ data: null, color: '#8F5DA2', label: `${units.latencyUnit}` }], }; case 'value': return { chartTitle: 'Value', chartSubtitle: '(higher is better)', iconClass: 'value-icon', - datasets: [{ data: null, color: '#8BAE46', label: 'FPS/$ (INT8)' }], + datasets: [{ data: null, color: '#8BAE46', label: `${units.valueUnit} (INT8)` }], }; case 'efficiency': return { chartTitle: 'Efficiency', chartSubtitle: '(higher is better)', iconClass: 'efficiency-icon', - datasets: [{ data: null, color: '#E96115', label: 'FPS/TDP (INT8)' }], + datasets: [{ data: null, color: '#E96115', label: `${units.efficiencyUnit} (INT8)` }], }; default: return {}; } } - static getPrecisionConfig(precision) { + static getPrecisionConfig(precision, unit) { switch (precision) { case 'ovmsint8': - return { data: null, color: '#FF8F51', label: 'FPS (OV Ref. 
INT8)' }; + return { data: null, color: '#FF8F51', label: `${unit} (OV Ref. INT8)` }; case 'ovmsfp32': - return { data: null, color: '#B24501', label: 'FPS (OV Ref. FP32)' }; + return { data: null, color: '#B24501', label: `${unit} (OV Ref. FP32)` }; case 'int8': - return { data: null, color: '#00C7FD', label: 'FPS (INT8)' }; + return { data: null, color: '#00C7FD', label: `${unit} (INT8)` }; case 'fp16': - return { data: null, color: '#009fca', label: 'FPS (FP16)' }; + return { data: null, color: '#009fca', label: `${unit} (FP16)` }; case 'fp32': - return { data: null, color: '#007797', label: 'FPS (FP32)' }; + return { data: null, color: '#007797', label: `${unit} (FP32)` }; default: return {}; } @@ -876,15 +886,16 @@ $(document).ready(function () { var graphConfigs = kpis.map((str) => { var kpi = str.toLowerCase(); + var groupUnit = model[0] if (kpi === 'throughput') { var throughputData = Graph.getDatabyKPI(model, kpi); - var config = Graph.getGraphConfig(kpi, precisions); + var config = Graph.getGraphConfig(kpi, groupUnit, precisions); precisions.forEach((prec, index) => { config.datasets[index].data = throughputData.map(tData => tData[prec]); }); return config; } - var config = Graph.getGraphConfig(kpi); + var config = Graph.getGraphConfig(kpi, groupUnit); config.datasets[0].data = Graph.getDatabyKPI(model, kpi); return config; }); From cd9c31cb073cea26b7706ee93e2b095af7e6d4dc Mon Sep 17 00:00:00 2001 From: Tatiana Savina Date: Wed, 13 Sep 2023 11:54:24 +0200 Subject: [PATCH 04/14] [DOCS] release adjustments pass 2 --- docs/Documentation/model_introduction.md | 1 + docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md | 2 +- .../Deep_Learning_Model_Optimizer_DevGuide.md | 8 ++++---- .../convert_model/Convert_Model_From_Paddle.md | 8 ++++---- .../convert_model/Convert_Model_From_PyTorch.md | 2 +- .../convert_model/Convert_Model_From_TensorFlow.md | 6 +++--- docs/get_started.md | 2 +- docs/install_guides/installing-openvino-overview.md | 2 +- docs/install_guides/installing-openvino-vcpkg.md | 8 ++++---- 9 files changed, 20 insertions(+), 19 deletions(-) diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md index c4de8fb4820ebc..af5c06c446899d 100644 --- a/docs/Documentation/model_introduction.md +++ b/docs/Documentation/model_introduction.md @@ -33,6 +33,7 @@ Convert a Model in Python: ``convert_model`` You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example Pytorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using``openvino.save_model`` for future use. Below, there are examples of how to use the ``openvino.convert_model`` with models from popular public repositories: + .. tab-set:: .. tab-item:: Torchvision diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md index 96d83591a4dedf..dce063b625fd24 100644 --- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -22,7 +22,7 @@ optimal execution on target devices. .. note:: - This part of the documentation describes a legacy approach to model conversion. 
Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and OpenVINO Model Converter ``ovc`` CLI tool. Refer to `Model preparation ` for more details. If you are still using `openvino.tools.mo.convert_model` or `mo` CLI tool, you can still refer to this documentation. However, consider checking the `transition guide ` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you. + This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation ` for more details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, you can continue to refer to this documentation. However, consider checking the :doc:`transition guide ` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you. To convert a model to OpenVINO model format (``ov.Model``), you can use the following command: diff --git a/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md index d0362bd904d6d3..94b01c4299179d 100644 --- a/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -77,17 +77,17 @@ For details on how plugins handle compressed ``FP16`` models, see - ``extension`` parameter which makes possible conversion of the models consisting of operations that are not supported by OpenVINO out-of-the-box. It requires implementing of an OpenVINO extension first, please refer to :doc:`Frontend Extensions ` guide. -- ``share_weigths`` parameter with default value `True` allows reusing memory with original weights. For models loaded in Python and then passed to ``openvino.convert_model``, that means that OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (Python object cannot be deallocated and original model file cannot be deleted) for the whole lifetime of OpenVINO model. If it is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``. +- ``share_weights`` parameter with default value ``True`` allows reusing memory with the original weights. For models loaded in Python and then passed to ``openvino.convert_model``, that means that the OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (Python object cannot be deallocated and original model file cannot be deleted) for the whole lifetime of the OpenVINO model. If this is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``. -.. note:: ``ovc`` doesn't have ``share_weights`` option and always uses sharing to reduce conversion time and consume less amount of memory during the conversion. +.. 
note:: ``ovc`` does not have a ``share_weights`` option and always uses sharing to reduce conversion time and consume less memory during the conversion. - ``output_model`` parameter in ``ovc`` and ``openvino.save_model`` specifies name for output ``.xml`` file with the resulting OpenVINO IR. The accompanying ``.bin`` file name will be generated automatically by replacing ``.xml`` extension with ``.bin`` extension. The value of ``output_model`` must end with ``.xml`` extension. For ``ovc`` command line tool, ``output_model`` can also contain a name of a directory. In this case, the resulting OpenVINO IR files will be put into that directory with a base name of ``.xml`` and ``.bin`` files matching the original model base name passed to ``ovc`` as a parameter. For example, when calling ``ovc your_model.onnx --output_model directory_name``, files ``directory_name/your_model.xml`` and ``directory_name/your_model.bin`` will be created. If ``output_model`` is not used, then the current directory is used as a destination directory. -.. note:: ``openvino.save_model`` doesn't support a directory for ``output_model`` parameter value because ``openvino.save_model`` gets OpenVINO model object represented in a memory and there is no original model file name available for output file name generation. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``. +.. note:: ``openvino.save_model`` does not support a directory for the ``output_model`` parameter value because ``openvino.save_model`` gets an OpenVINO model object represented in memory and there is no original model file name available for output file name generation. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``. - ``verbose`` parameter activates extra diagnostics printed to the standard output. Use for debugging purposes in case there is an issue with the conversion and to collect information for better bug reporting to OpenVINO team. -.. note:: Weights sharing doesn't equally work for all the supported model formats. The value of this flag is considered as a hint for the conversion API, and actual sharing is used only if it is implemented and possible for a particular model representation. +.. note:: Weight sharing does not work equally for all the supported model formats. The value of this flag is considered a hint for the conversion API, and actual sharing is used only if it is implemented and possible for a particular model representation. You can always run ``ovc -h`` or ``ovc --help`` to recall all the supported parameters for ``ovc``. diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md index ad2aa8798738ff..dd3f821229bf99 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md @@ -35,7 +35,7 @@ To convert a PaddlePaddle model, use the ``ovc`` or ``openvino.convert_model`` a ovc your_model_file.pdmodel -**For example**, this command converts a yolo v3 PaddlePaddle model to OpenVINO IR model: +**For example**, this command converts a YOLOv3 PaddlePaddle model to an OpenVINO IR model: .. tab-set:: .. tab-item:: Python :sync: py .. 
code-block:: py - import openvino as ov - ov.convert_model('yolov3.pdmodel') + import openvino as ov + ov.convert_model('yolov3.pdmodel') .. tab-item:: CLI :sync: cli .. code-block:: sh - ovc yolov3.pdmodel + ovc yolov3.pdmodel Converting PaddlePaddle Python Model #################################### diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md index cc6126cffd6043..b0aed35fc6aa31 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md @@ -110,7 +110,7 @@ Non-tensor Data Types When a non-tensor data type, such as a ``tuple`` or ``dict``, appears in a model input or output, it is flattened. The flattening means that each element within the ``tuple`` will be represented as a separate input or output. The same is true for ``dict`` values, where the keys of the ``dict`` are used to form a model input/output name. The original non-tensor input or output is replaced by one or multiple new inputs or outputs resulting from this flattening process. This flattening procedure is applied recursively in the case of nested ``tuples`` and ``dicts`` until it reaches the assumption that the most nested data type is a tensor. -For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ...``e`` are tensors, it means that the original model has two inputs. The first is a tensor ``a``, and the second is a tuple ``(b, c, (d, e))``, containing two tensors ``b`` and ``c`` and a nested tuple ``(d, e)``. Then the resulting OpenVINO model will have signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model. +For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ... ``e`` are tensors, it means that the original model has two inputs. The first is a tensor ``a``, and the second is a tuple ``(b, c, (d, e))``, containing two tensors ``b`` and ``c`` and a nested tuple ``(d, e)``. Then the resulting OpenVINO model will have signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model. Flattening of a ``dict`` is supported for outputs only. If your model has an input of type ``dict``, you will need to decompose the ``dict`` to one or multiple tensor inputs by modifying the original model signature or making a wrapper model on top of the original model. This approach hides the dictionary from the model signature and allows it to be processed inside the model successfully. diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md index d2c8f1418c0815..5a7a3ab3a7c706 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md @@ -8,9 +8,9 @@ This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X. -.. 
note:: TensorFlow models can be loaded by `openvino.Core.read_model` or `openvino.Core.compile_model` methods by OpenVINO runtime API without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application. +.. note:: TensorFlow models can be loaded by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application. -.. note:: Examples below that convert TensorFlow models from a file, do not require any version of TensorFlow to be installed on the system, except in cases when the `tensorflow` module is imported explicitly. +.. note:: The examples below that convert TensorFlow models from a file do not require any version of TensorFlow to be installed on the system, except in cases when the ``tensorflow`` module is imported explicitly. Converting TensorFlow 2 Models ############################## @@ -238,7 +238,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro model = MyModule(name="simple_module") ov_model = ov.convert_model(model, input=[-1]) -.. note:: There is a known bug in ``openvino.convert_model`` on using ``tf.Variable`` nodes in the model graph. The results of the conversion of such models are unpredictable. It is recommended to save a model with ``tf.Variable`` into TensorFlow Saved Model format and load it with `openvino.convert_model`. +.. note:: There is a known bug in ``openvino.convert_model`` when using ``tf.Variable`` nodes in the model graph. The results of the conversion of such models are unpredictable. It is recommended to save a model with ``tf.Variable`` into TensorFlow Saved Model format and load it with ``openvino.convert_model``. * ``tf.compat.v1.Graph`` diff --git a/docs/get_started.md b/docs/get_started.md index 88d37186e1f4b8..d414afd80d8b04 100644 --- a/docs/get_started.md +++ b/docs/get_started.md @@ -12,7 +12,7 @@ :hidden: Install OpenVINO - Additional Hardware setup + Additional Hardware Setup Troubleshooting diff --git a/docs/install_guides/installing-openvino-overview.md b/docs/install_guides/installing-openvino-overview.md index 7d727cf0d5956c..13201da5f8f082 100644 --- a/docs/install_guides/installing-openvino-overview.md +++ b/docs/install_guides/installing-openvino-overview.md @@ -58,7 +58,7 @@ | **Build OpenVINO from source** | OpenVINO Toolkit source files are available on GitHub as open source. If you want to build your own version of OpenVINO for your platform, - follow the `OpenVINO Build Instructions `__ . + follow the `OpenVINO Build Instructions `__. diff --git a/docs/install_guides/installing-openvino-vcpkg.md b/docs/install_guides/installing-openvino-vcpkg.md index 5d38eeeb9b211f..39694079dfb2d3 100644 --- a/docs/install_guides/installing-openvino-vcpkg.md +++ b/docs/install_guides/installing-openvino-vcpkg.md @@ -60,7 +60,7 @@ which means the compiler stage will require additional time to complete the proc After installation, you can use OpenVINO in your product by running: - .. code-block:: sh +.. code-block:: sh - find_package(OpenVINO) + find_package(OpenVINO) - .. code-block:: sh +.. code-block:: sh - cmake -B [build directory] -S . 
-DCMAKE_TOOLCHAIN_FILE=[path to vcpkg]/scripts/buildsystems/vcpkg.cmake + cmake -B [build directory] -S . -DCMAKE_TOOLCHAIN_FILE=[path to vcpkg]/scripts/buildsystems/vcpkg.cmake Congratulations! You've just Installed OpenVINO! For some use cases you may still need to install additional components. Check the From 94640fe5834191ddea9f1f1b994976d69434ed7a Mon Sep 17 00:00:00 2001 From: bstankix Date: Wed, 13 Sep 2023 15:30:15 +0200 Subject: [PATCH 05/14] [DOCS] Fix version number (#19816) --- docs/conf.py | 2 +- docs/home.rst | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/conf.py b/docs/conf.py index f079e5601c95d0..1cbeecd43a2201 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -28,7 +28,7 @@ author = 'Intel®' language = 'en' -version_name = 'nightly' +version_name = '2023.1' # -- General configuration --------------------------------------------------- diff --git a/docs/home.rst b/docs/home.rst index d8f359e65aaa5a..4a2b22c946f96a 100644 --- a/docs/home.rst +++ b/docs/home.rst @@ -1,5 +1,5 @@ ============================ -OpenVINO 2023.0 +OpenVINO 2023.1 ============================ .. meta:: From a805c1e028f70d6d2a678350ba5be97f26c67592 Mon Sep 17 00:00:00 2001 From: Alexander Kozlov Date: Thu, 14 Sep 2023 15:13:03 +0400 Subject: [PATCH 06/14] Introduce weight compression doc (#19680) * Draft of weight compression docs * Fixed typos * Fixed typos * Fixed typos * Fixed build * Update docs/optimization_guide/nncf/weight_compression.md Co-authored-by: Tatiana Savina * Update docs/optimization_guide/nncf/weight_compression.md Co-authored-by: Tatiana Savina * Update docs/optimization_guide/nncf/weight_compression.md Co-authored-by: Tatiana Savina * Update docs/optimization_guide/nncf/weight_compression.md Co-authored-by: Tatiana Savina * Update weight_compression.md --------- Co-authored-by: Tatiana Savina --- .../model_optimization_guide.md | 6 ++- .../nncf/code/weight_compression_openvino.py | 6 +++ .../nncf/weight_compression.md | 37 +++++++++++++++++++ 3 files changed, 48 insertions(+), 1 deletion(-) create mode 100644 docs/optimization_guide/nncf/code/weight_compression_openvino.py create mode 100644 docs/optimization_guide/nncf/weight_compression.md diff --git a/docs/optimization_guide/model_optimization_guide.md b/docs/optimization_guide/model_optimization_guide.md index 718fd5310aaea3..db2e0601e5c054 100644 --- a/docs/optimization_guide/model_optimization_guide.md +++ b/docs/optimization_guide/model_optimization_guide.md @@ -8,6 +8,7 @@ ptq_introduction tmo_introduction + weight_compression Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers two optimization paths implemented in `Neural Network Compression Framework (NNCF) `__: @@ -16,9 +17,11 @@ Model optimization is an optional offline step of improving the final model perf - :doc:`Training-time Optimization `, a suite of advanced methods for training-time model optimization within the DL framework, such as PyTorch and TensorFlow 2.x. It supports methods like Quantization-aware Training, Structured and Unstructured Pruning, etc. +- :doc:`Weight Compression `, an easy-to-use method for Large Language Models footprint reduction and inference acceleration. + .. note:: OpenVINO also supports optimized models (for example, quantized) from source frameworks such as PyTorch, TensorFlow, and ONNX (in Q/DQ; Quantize/DeQuantize format). 
No special steps are required in this case and optimized models can be converted to the OpenVINO Intermediate Representation format (IR) right away. -Post-training Quantization is the fastest way to optimize a model and should be applied first, but it is limited in terms of achievable accuracy-performance trade-off. The recommended approach to obtain OpenVINO quantized model is to convert a model from original framework to ``ov.Model`` and ensure that the model works correctly in OpenVINO, for example, by calculating the model metrics. Then, ``ov.Model`` can be used as input for the ``nncf.quantize()`` method to get the quantized model (see the diagram below). +Post-training Quantization is the fastest way to optimize an arbitrary DL model and should be applied first, but it is limited in terms of achievable accuracy-performance trade-off. The recommended approach to obtain an OpenVINO quantized model is to convert a model from the original framework to ``ov.Model`` and ensure that the model works correctly in OpenVINO, for example, by calculating the model metrics. Then, ``ov.Model`` can be used as input for the ``nncf.quantize()`` method to get the quantized model (see the diagram below). In case of unsatisfactory accuracy or performance after Post-training Quantization, Training-time Optimization can be used as an option. @@ -33,6 +36,7 @@ Additional Resources #################### - :doc:`Post-training Quantization ` - :doc:`Training-time Optimization ` +- :doc:`Weight Compression ` - :doc:`Deployment optimization ` - `HuggingFace Optimum Intel `__ diff --git a/docs/optimization_guide/nncf/code/weight_compression_openvino.py b/docs/optimization_guide/nncf/code/weight_compression_openvino.py new file mode 100644 index 00000000000000..c9ab67efd5aa32 --- /dev/null +++ b/docs/optimization_guide/nncf/code/weight_compression_openvino.py @@ -0,0 +1,6 @@ +#! [compression_8bit] +from nncf import compress_weights + +... +model = compress_weights(model) # model is openvino.Model object +#! [compression_8bit] \ No newline at end of file diff --git a/docs/optimization_guide/nncf/weight_compression.md b/docs/optimization_guide/nncf/weight_compression.md new file mode 100644 index 00000000000000..efec4839d47a1f --- /dev/null +++ b/docs/optimization_guide/nncf/weight_compression.md @@ -0,0 +1,37 @@ +# Weight Compression {#weight_compression} + +@sphinxdirective + +Enhancing Model Efficiency with Weight Compression +################################################################## + +Weight compression aims to reduce the memory footprint of a model. It can also lead to significant performance improvement for large memory-bound models, such as Large Language Models (LLMs). LLMs and other models that require extensive memory to store the weights during inference can benefit from weight compression in the following ways: + +- enabling the inference of exceptionally large models that cannot be accommodated in the memory of the device; +- improving the inference performance of the models by reducing memory access latency when computing the operations with weights, for example, Linear layers. + +Currently, NNCF provides 8-bit weight quantization as a compression method primarily designed to optimize LLMs. The main difference between weight compression and full model quantization (post-training quantization) is that activations remain floating-point in the case of weight compression, which leads to better accuracy. 
Weight compression for LLMs provides a solid inference performance improvement which is on par with the performance of the full model quantization. In addition, weight compression is data-free and does not require a calibration dataset, making it easy to use. + +Compress Model Weights +###################### + +The code snippet below shows how to compress the weights of the model represented in OpenVINO IR using NNCF: + +.. tab-set:: + + .. tab-item:: OpenVINO + :sync: openvino + + .. doxygensnippet:: docs/optimization_guide/nncf/code/weight_compression_openvino.py + :language: python + :fragment: [compression_8bit] + +Now, the model is ready for compilation and inference. It can be also saved into a compressed format, resulting in a smaller binary file. + +Additional Resources +#################### + +- :doc:`Post-training Quantization ` +- :doc:`Training-time Optimization ` + +@endsphinxdirective From 959b4438a1df32a8ad11aa57b4b9e798f694daf3 Mon Sep 17 00:00:00 2001 From: Przemyslaw Wysocki Date: Thu, 14 Sep 2023 13:47:19 +0200 Subject: [PATCH 07/14] Remove upper bound (#19803) --- src/bindings/python/constraints.txt | 2 +- src/bindings/python/wheel/requirements-dev.txt | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bindings/python/constraints.txt b/src/bindings/python/constraints.txt index 2e00fd43c0e86b..badfd9bccde310 100644 --- a/src/bindings/python/constraints.txt +++ b/src/bindings/python/constraints.txt @@ -10,7 +10,7 @@ pytest-timeout==2.1.0 # Python bindings py>=1.9.0 pygments>=2.8.1 -setuptools>=53.0.0,<=68.1.2 +setuptools>=53.0.0 wheel>=0.38.1 # Frontends diff --git a/src/bindings/python/wheel/requirements-dev.txt b/src/bindings/python/wheel/requirements-dev.txt index e011f059785e1e..695e850d649a60 100644 --- a/src/bindings/python/wheel/requirements-dev.txt +++ b/src/bindings/python/wheel/requirements-dev.txt @@ -1,3 +1,3 @@ -setuptools>=65.6.1,<=68.1.2 +setuptools>=65.6.1 wheel>=0.38.1 patchelf<=0.17.2.1; sys_platform == 'linux' and platform_machine == 'x86_64' From 3cafb2e1fa84a58b6312eb3b2962092f5ceea03d Mon Sep 17 00:00:00 2001 From: Tatiana Savina Date: Thu, 14 Sep 2023 14:29:32 +0200 Subject: [PATCH 08/14] [DOCS] release adjustments pass 3 - conversion --- docs/Documentation/model_introduction.md | 4 ++-- docs/Extensibility_UG/Intro.md | 2 +- docs/MO_DG/prepare_model/MO_Python_API.md | 2 +- .../convert_model/Convert_Model_From_ONNX.md | 8 -------- .../convert_model/Convert_Model_From_TensorFlow.md | 4 ++-- .../convert_model/pytorch_specific/Convert_Bert_ner.md | 2 +- .../pytorch_specific/Convert_Cascade_RCNN_res101.md | 2 +- .../convert_model/pytorch_specific/Convert_F3Net.md | 2 +- .../convert_model/pytorch_specific/Convert_QuartzNet.md | 2 +- .../convert_model/pytorch_specific/Convert_RCAN.md | 2 +- .../convert_model/pytorch_specific/Convert_RNNT.md | 2 +- .../convert_model/pytorch_specific/Convert_YOLACT.md | 2 +- .../convert_model/Convert_Model_From_PyTorch.md | 9 +++++---- .../convert_model/supported_model_formats.md | 2 +- 14 files changed, 19 insertions(+), 26 deletions(-) diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md index af5c06c446899d..ad38d118a01922 100644 --- a/docs/Documentation/model_introduction.md +++ b/docs/Documentation/model_introduction.md @@ -17,7 +17,7 @@ Every deep learning workflow begins with obtaining a model. 
You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, or `Torchvision models `__. -OpenVINO™ :doc:`supports several model formats ` and can convert them into its own representation, `openvino.Model `__ (`ov.Model `__), providing a conversion API. Converted models can be used for inference with one or multiple OpenVINO Hardware plugins. There are two ways to use the conversion API: using a Python program or calling the ``ovc`` command line tool. +OpenVINO™ :doc:`supports several model formats ` and can convert them into its own representation, `openvino.Model `__ (`ov.Model `__), providing a conversion API. Converted models can be used for inference with one or multiple OpenVINO Hardware plugins. There are two ways to use the conversion API: using a Python script or calling the ``ovc`` command line tool. .. note:: @@ -31,7 +31,7 @@ Convert a Model in Python: ``convert_model`` -You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example Pytorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using``openvino.save_model`` for future use. Below, there are examples of how to use the ``openvino.convert_model`` with models from popular public repositories: +You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below, there are examples of how to use ``openvino.convert_model`` with models from popular public repositories: .. tab-set:: diff --git a/docs/Extensibility_UG/Intro.md b/docs/Extensibility_UG/Intro.md index 401e2f155e45b2..9b3fa581f4027a 100644 --- a/docs/Extensibility_UG/Intro.md +++ b/docs/Extensibility_UG/Intro.md @@ -57,7 +57,7 @@ Mapping from Framework Operation Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following: -1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API ` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import. +1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API ` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. 
Python API is also available for runtime model import. 2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions ` should be used. This approach is available for model conversion in Model Optimizer only. diff --git a/docs/MO_DG/prepare_model/MO_Python_API.md b/docs/MO_DG/prepare_model/MO_Python_API.md index 2687434957e1a9..7208052bdc2d8a 100644 --- a/docs/MO_DG/prepare_model/MO_Python_API.md +++ b/docs/MO_DG/prepare_model/MO_Python_API.md @@ -4,7 +4,7 @@ Model conversion API is represented by ``convert_model()`` method in openvino.tools.mo namespace. ``convert_model()`` is compatible with types from openvino.runtime, like PartialShape, Layout, Type, etc. -``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a Pytorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. +``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. .. note:: diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md index b160667fee62d2..4cadf432ec42c6 100644 --- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md +++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md @@ -6,19 +6,11 @@ :description: Learn how to convert a model from the ONNX format to the OpenVINO Intermediate Representation. - -Introduction to ONNX -#################### - -`ONNX `__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others. - .. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example ` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. Converting an ONNX Model ######################## -This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format. - The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format. .. 
tab-set:: diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md index e565638c444913..1d34263e65e72a 100644 --- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md +++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md @@ -7,10 +7,10 @@ TensorFlow format to the OpenVINO Intermediate Representation. -This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X. - .. note:: TensorFlow models are supported via :doc:`FrontEnd API `. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example ` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions. +The conversion instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X. + Converting TensorFlow 1 Models ############################### diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md index a48268609933bb..deddca8e3583f4 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a BERT-NER model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. The goal of this article is to present a step-by-step guide on how to convert PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX. diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md index 174cf5cebc38c7..8aa5267c6c4ee3 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a Cascade RCNN R-101 - model from Pytorch to the OpenVINO Intermediate Representation. + model from PyTorch to the OpenVINO Intermediate Representation. The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX. diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md index 04ed98ee60fc6a..5bc5174a9e8e09 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a F3Net model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. 
`F3Net `__ : Fusion, Feedback and Focus for Salient Object Detection diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md index 08d12669443388..79e36dc4b62fad 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a QuartzNet model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. `NeMo project `__ provides the QuartzNet model. diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RCAN.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RCAN.md index 154b8d0f037b02..18dc10cd157f97 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RCAN.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RCAN.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a RCAN model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. `RCAN `__ : Image Super-Resolution Using Very Deep Residual Channel Attention Networks diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md index 08d14dfb1fc7d5..85a782abdce18c 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a RNN-T model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. This guide covers conversion of RNN-T model from `MLCommons `__ repository. Follow diff --git a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md index 75957a756f189c..e7fc1d8a41f927 100644 --- a/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md +++ b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md @@ -4,7 +4,7 @@ .. meta:: :description: Learn how to convert a YOLACT model - from Pytorch to the OpenVINO Intermediate Representation. + from PyTorch to the OpenVINO Intermediate Representation. You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation. diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md index b0aed35fc6aa31..83005b7e978e8c 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md @@ -6,11 +6,8 @@ :description: Learn how to convert a model from the PyTorch format to the OpenVINO Model. -This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO Model using the ``openvino.convert_model`` function. -.. note:: - - In the examples below the ``openvino.save_model`` function is not used because there are no PyTorch-specific details regarding the usage of this function. 
In all examples, the converted OpenVINO model can be saved to IR by calling ``ov.save_model(ov_model, 'model.xml')`` as usual. +To convert a PyTorch model, use the ``openvino.convert_model`` function. Here is the simplest example of PyTorch model conversion using a model from ``torchvision``: @@ -87,6 +84,10 @@ In practice, the code to evaluate or test the PyTorch model is usually provided Check out more examples in :doc:`interactive Python tutorials `. +.. note:: + + In the examples above the ``openvino.save_model`` function is not used because there are no PyTorch-specific details regarding the usage of this function. In all examples, the converted OpenVINO model can be saved to IR by calling ``ov.save_model(ov_model, 'model.xml')`` as usual. + Supported Input Parameter Types ############################### diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md index 75262747e3a9fc..903199e1547165 100644 --- a/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md +++ b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md @@ -15,7 +15,7 @@ **OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features. The result of running ``ovc`` CLI tool or ``openvino.save_model`` is OpenVINO IR. All other supported formats can be converted to the IR, refer to the following articles for details on conversion: -* :doc:`How to convert Pytorch ` +* :doc:`How to convert PyTorch ` * :doc:`How to convert ONNX ` * :doc:`How to convert TensorFlow ` * :doc:`How to convert TensorFlow Lite ` From 5eaeb08c63749f4a76a5ddf71a67002f4225ff63 Mon Sep 17 00:00:00 2001 From: Maciej Smyk Date: Thu, 14 Sep 2023 14:33:19 +0200 Subject: [PATCH 09/14] [DOCS] Notebooks update for 23.1 (#19844) * notebooks-update * notebooks-update * fix * Update 121-convert-to-openvino-with-output.rst * Update 121-convert-to-openvino-with-output.rst * fix * table of content fix * fix * fix * fix * fix * Update tutorials.md * fix * fix * Update 227-whisper-subtitles-generation-with-output.rst --- .../notebooks/001-hello-world-with-output.rst | 42 +- ...g => 001-hello-world-with-output_11_0.png} | 0 .../002-openvino-api-with-output.rst | 186 +- .../003-hello-segmentation-with-output.rst | 58 +- ...3-hello-segmentation-with-output_11_1.png} | 0 ...3-hello-segmentation-with-output_13_1.png} | 0 ...3-hello-segmentation-with-output_17_0.png} | 0 .../004-hello-detection-with-output.rst | 58 +- ... 004-hello-detection-with-output_11_0.png} | 0 ... 
004-hello-detection-with-output_16_0.png} | 0 ...classification-to-openvino-with-output.rst | 102 +- ...fication-to-openvino-with-output_19_0.png} | 0 ...2-pytorch-onnx-to-openvino-with-output.rst | 88 +- ...rch-onnx-to-openvino-with-output_22_0.png} | 0 ...rch-onnx-to-openvino-with-output_27_0.png} | 0 ...rch-onnx-to-openvino-with-output_29_0.png} | 0 .../102-pytorch-to-openvino-with-output.rst | 166 +- ...to-openvino-classification-with-output.rst | 108 +- .../notebooks/104-model-tools-with-output.rst | 129 +- ...105-language-quantize-bert-with-output.rst | 472 ++--- .../notebooks/106-auto-device-with-output.rst | 202 +- .../106-auto-device-with-output_13_0.jpg | 3 + .../106-auto-device-with-output_13_0.png | 3 + .../106-auto-device-with-output_26_0.png | 4 +- .../106-auto-device-with-output_27_0.png | 3 + ...tion-quantization-data2vec-with-output.rst | 76 +- ...tion-quantization-wav2vec2-with-output.rst | 925 ++++++++ docs/notebooks/108-gpu-device-with-output.rst | 200 +- .../109-latency-tricks-with-output.rst | 127 +- .../109-latency-tricks-with-output_19_0.jpg | 4 +- .../109-latency-tricks-with-output_23_0.jpg | 4 +- .../109-latency-tricks-with-output_25_0.jpg | 4 +- .../109-latency-tricks-with-output_27_0.jpg | 4 +- .../109-latency-tricks-with-output_30_0.png | 4 +- .../109-throughput-tricks-with-output.rst | 244 +-- ...109-throughput-tricks-with-output_17_0.jpg | 4 +- ...109-throughput-tricks-with-output_20_0.jpg | 4 +- ...109-throughput-tricks-with-output_22_0.jpg | 3 - ...109-throughput-tricks-with-output_24_0.jpg | 3 + ...109-throughput-tricks-with-output_26_0.jpg | 3 - ...109-throughput-tricks-with-output_28_0.jpg | 4 +- ...109-throughput-tricks-with-output_30_0.jpg | 4 +- ...109-throughput-tricks-with-output_33_0.png | 4 +- ...110-ct-scan-live-inference-with-output.rst | 148 +- ...segmentation-quantize-nncf-with-output.rst | 293 ++- ...ntation-quantize-nncf-with-output_37_1.png | 4 +- ...ov5-quantization-migration-with-output.rst | 455 +--- ...training-quantization-nncf-with-output.rst | 197 +- ...lassification-quantization-with-output.rst | 278 +-- ...ication-quantization-with-output_30_2.png} | 0 docs/notebooks/115-async-api-with-output.rst | 77 +- .../115-async-api-with-output_21_0.png | 4 +- .../116-sparsity-optimization-with-output.rst | 214 +- .../117-model-server-with-output.rst | 123 +- ...118-optimize-preprocessing-with-output.rst | 278 +-- ...timize-preprocessing-with-output_14_1.png} | 0 .../119-tflite-to-openvino-with-output.rst | 141 +- ...ject-detection-to-openvino-with-output.rst | 266 ++- ...detection-to-openvino-with-output_38_0.png | 4 +- .../121-convert-to-openvino-with-output.rst | 313 +-- ...tion-quantization-wav2vec2-with-output.rst | 1883 ++++++++++++++++- ...tion-with-accuracy-control-with-output.rst | 721 ++++++- .../201-vision-monodepth-with-output.rst | 80 +- ...sion-superresolution-image-with-output.rst | 112 +- ...sion-superresolution-video-with-output.rst | 50 +- .../203-meter-reader-with-output.rst | 63 +- ... => 203-meter-reader-with-output_16_1.png} | 0 ... => 203-meter-reader-with-output_18_1.png} | 0 ... => 203-meter-reader-with-output_20_1.png} | 0 ... => 203-meter-reader-with-output_22_1.png} | 0 ... 
=> 203-meter-reader-with-output_24_1.png} | 0 ...nter-semantic-segmentation-with-output.rst | 211 +- ...semantic-segmentation-with-output_34_0.jpg | 4 +- ...semantic-segmentation-with-output_34_0.png | 4 +- ...-vision-background-removal-with-output.rst | 132 +- ...on-background-removal-with-output_22_0.png | 4 +- ...on-background-removal-with-output_24_0.png | 4 +- ...206-vision-paddlegan-anime-with-output.rst | 98 +- ...ision-paddlegan-anime-with-output_37_0.png | 4 +- ...-paddlegan-superresolution-with-output.rst | 101 +- ...egan-superresolution-with-output_25_1.png} | 0 ...egan-superresolution-with-output_29_1.png} | 0 ...egan-superresolution-with-output_31_0.png} | 0 ...ical-character-recognition-with-output.rst | 103 +- ...haracter-recognition-with-output_16_0.png} | 0 ...haracter-recognition-with-output_26_0.png} | 0 ...haracter-recognition-with-output_28_0.jpg} | 0 ...haracter-recognition-with-output_28_0.png} | 0 ...aracter-recognition-with-output_28_10.jpg} | 0 ...aracter-recognition-with-output_28_10.png} | 0 ...haracter-recognition-with-output_28_2.jpg} | 0 ...haracter-recognition-with-output_28_2.png} | 0 ...haracter-recognition-with-output_28_4.jpg} | 0 ...haracter-recognition-with-output_28_4.png} | 0 ...haracter-recognition-with-output_28_6.jpg} | 0 ...haracter-recognition-with-output_28_6.png} | 0 ...haracter-recognition-with-output_28_8.jpg} | 0 ...haracter-recognition-with-output_28_8.png} | 0 .../209-handwritten-ocr-with-output.rst | 69 +- ... 209-handwritten-ocr-with-output_22_0.png} | 0 ... 209-handwritten-ocr-with-output_31_1.png} | 0 ...slowfast-video-recognition-with-output.rst | 52 +- .../211-speech-to-text-with-output.rst | 96 +- ...annote-speaker-diarization-with-output.rst | 48 +- .../213-question-answering-with-output.rst | 71 +- .../214-grammar-correction-with-output.rst | 90 +- .../215-image-inpainting-with-output.rst | 59 +- .../216-attention-center-with-output.rst | 16 +- .../217-vision-deblur-with-output.rst | 107 +- .../217-vision-deblur-with-output_27_0.png | 4 +- .../217-vision-deblur-with-output_29_0.png | 4 +- ...-detection-and-recognition-with-output.rst | 47 +- ...219-knowledge-graphs-conve-with-output.rst | 155 +- ...ss-lingual-books-alignment-with-output.rst | 24 +- .../221-machine-translation-with-output.rst | 63 +- ...-vision-image-colorization-with-output.rst | 77 +- .../223-text-prediction-with-output.rst | 136 +- ...-segmentation-point-clouds-with-output.rst | 41 +- ...le-diffusion-text-to-image-with-output.rst | 700 ++++-- ...ffusion-text-to-image-with-output_33_1.png | 4 +- ...ffusion-text-to-image-with-output_37_1.png | 4 +- ...ffusion-text-to-image-with-output_39_1.png | 4 +- .../226-yolov7-optimization-with-output.rst | 36 +- ...isper-subtitles-generation-with-output.rst | 844 +++----- docs/notebooks/notebooks_tags.json | 76 +- .../notebooks_with_colab_buttons.txt | 3 + docs/tutorials.md | 4 + 127 files changed, 7181 insertions(+), 5267 deletions(-) rename docs/notebooks/001-hello-world-with-output_files/{001-hello-world-with-output_10_0.png => 001-hello-world-with-output_11_0.png} (100%) rename docs/notebooks/003-hello-segmentation-with-output_files/{003-hello-segmentation-with-output_10_1.png => 003-hello-segmentation-with-output_11_1.png} (100%) rename docs/notebooks/003-hello-segmentation-with-output_files/{003-hello-segmentation-with-output_12_1.png => 003-hello-segmentation-with-output_13_1.png} (100%) rename docs/notebooks/003-hello-segmentation-with-output_files/{003-hello-segmentation-with-output_16_0.png => 
003-hello-segmentation-with-output_17_0.png} (100%) rename docs/notebooks/004-hello-detection-with-output_files/{004-hello-detection-with-output_10_0.png => 004-hello-detection-with-output_11_0.png} (100%) rename docs/notebooks/004-hello-detection-with-output_files/{004-hello-detection-with-output_15_0.png => 004-hello-detection-with-output_16_0.png} (100%) rename docs/notebooks/101-tensorflow-classification-to-openvino-with-output_files/{101-tensorflow-classification-to-openvino-with-output_18_0.png => 101-tensorflow-classification-to-openvino-with-output_19_0.png} (100%) rename docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/{102-pytorch-onnx-to-openvino-with-output_21_0.png => 102-pytorch-onnx-to-openvino-with-output_22_0.png} (100%) rename docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/{102-pytorch-onnx-to-openvino-with-output_26_0.png => 102-pytorch-onnx-to-openvino-with-output_27_0.png} (100%) rename docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/{102-pytorch-onnx-to-openvino-with-output_28_0.png => 102-pytorch-onnx-to-openvino-with-output_29_0.png} (100%) create mode 100644 docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.jpg create mode 100644 docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.png create mode 100644 docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_27_0.png create mode 100644 docs/notebooks/107-speech-recognition-quantization-wav2vec2-with-output.rst delete mode 100644 docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_22_0.jpg create mode 100644 docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_24_0.jpg delete mode 100644 docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_26_0.jpg rename docs/notebooks/113-image-classification-quantization-with-output_files/{113-image-classification-quantization-with-output_29_2.png => 113-image-classification-quantization-with-output_30_2.png} (100%) rename docs/notebooks/118-optimize-preprocessing-with-output_files/{118-optimize-preprocessing-with-output_13_1.png => 118-optimize-preprocessing-with-output_14_1.png} (100%) rename docs/notebooks/203-meter-reader-with-output_files/{203-meter-reader-with-output_15_1.png => 203-meter-reader-with-output_16_1.png} (100%) rename docs/notebooks/203-meter-reader-with-output_files/{203-meter-reader-with-output_17_1.png => 203-meter-reader-with-output_18_1.png} (100%) rename docs/notebooks/203-meter-reader-with-output_files/{203-meter-reader-with-output_19_1.png => 203-meter-reader-with-output_20_1.png} (100%) rename docs/notebooks/203-meter-reader-with-output_files/{203-meter-reader-with-output_21_1.png => 203-meter-reader-with-output_22_1.png} (100%) rename docs/notebooks/203-meter-reader-with-output_files/{203-meter-reader-with-output_23_1.png => 203-meter-reader-with-output_24_1.png} (100%) rename docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/{207-vision-paddlegan-superresolution-with-output_27_1.png => 207-vision-paddlegan-superresolution-with-output_25_1.png} (100%) rename docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/{207-vision-paddlegan-superresolution-with-output_31_1.png => 207-vision-paddlegan-superresolution-with-output_29_1.png} (100%) rename docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/{207-vision-paddlegan-superresolution-with-output_33_0.png => 
207-vision-paddlegan-superresolution-with-output_31_0.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_15_0.png => 208-optical-character-recognition-with-output_16_0.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_25_0.png => 208-optical-character-recognition-with-output_26_0.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_0.jpg => 208-optical-character-recognition-with-output_28_0.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_0.png => 208-optical-character-recognition-with-output_28_0.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_10.jpg => 208-optical-character-recognition-with-output_28_10.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_10.png => 208-optical-character-recognition-with-output_28_10.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_2.jpg => 208-optical-character-recognition-with-output_28_2.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_2.png => 208-optical-character-recognition-with-output_28_2.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_4.jpg => 208-optical-character-recognition-with-output_28_4.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_4.png => 208-optical-character-recognition-with-output_28_4.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_6.jpg => 208-optical-character-recognition-with-output_28_6.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_6.png => 208-optical-character-recognition-with-output_28_6.png} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_8.jpg => 208-optical-character-recognition-with-output_28_8.jpg} (100%) rename docs/notebooks/208-optical-character-recognition-with-output_files/{208-optical-character-recognition-with-output_27_8.png => 208-optical-character-recognition-with-output_28_8.png} (100%) rename docs/notebooks/209-handwritten-ocr-with-output_files/{209-handwritten-ocr-with-output_21_0.png => 209-handwritten-ocr-with-output_22_0.png} (100%) rename docs/notebooks/209-handwritten-ocr-with-output_files/{209-handwritten-ocr-with-output_30_1.png => 209-handwritten-ocr-with-output_31_1.png} (100%) diff --git a/docs/notebooks/001-hello-world-with-output.rst b/docs/notebooks/001-hello-world-with-output.rst index 797cc193e53221..b938d435b16cc4 100644 --- a/docs/notebooks/001-hello-world-with-output.rst +++ b/docs/notebooks/001-hello-world-with-output.rst @@ -1,13 +1,11 @@ Hello Image Classification ========================== - - This basic introduction to OpenVINO™ shows how to do inference with an image classification 
model. A pre-trained `MobileNetV3 -model `__ +model `__ from `Open Model Zoo `__ is used in this tutorial. For more information about how OpenVINO IR models are @@ -15,11 +13,7 @@ created, refer to the `TensorFlow to OpenVINO <101-tensorflow-classification-to-openvino-with-output.html>`__ tutorial. - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Download the Model and data samples <#download-the-model-and-data-samples>`__ @@ -28,7 +22,19 @@ tutorial. - `Load an Image <#load-an-image>`__ - `Do Inference <#do-inference>`__ -Imports `⇑ <#top>`__ +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" + + +.. parsed-literal:: + + ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. + openvino-dev 2023.0.0 requires openvino==2023.0.0, but you have openvino 2023.1.0.dev20230811 which is incompatible. + + +Imports ############################################ .. code:: ipython3 @@ -39,12 +45,12 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np - from openvino.runtime import Core + import openvino as ov sys.path.append("../utils") from notebook_utils import download_file -Download the Model and data samples `⇑ <#top>`__ +Download the Model and data samples ######################################################################## .. code:: ipython3 @@ -78,7 +84,7 @@ Download the Model and data samples `⇑ <#top>`__ artifacts/v3-small_224_1.0_float.bin: 0%| | 0.00/4.84M [00:00`__ +Select inference device ############################################################ Select device from dropdown list for running inference using OpenVINO: @@ -87,7 +93,7 @@ Select device from dropdown list for running inference using OpenVINO: import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -106,18 +112,18 @@ Select device from dropdown list for running inference using OpenVINO: -Load the Model `⇑ <#top>`__ +Load the Model ################################################### .. code:: ipython3 - core = Core() + core = ov.Core() model = core.read_model(model=model_xml_path) compiled_model = core.compile_model(model=model, device_name=device.value) output_layer = compiled_model.output(0) -Load an Image `⇑ <#top>`__ +Load an Image ################################################## .. code:: ipython3 @@ -134,10 +140,10 @@ Load an Image `⇑ <#top>`__ -.. image:: 001-hello-world-with-output_files/001-hello-world-with-output_10_0.png +.. image:: 001-hello-world-with-output_files/001-hello-world-with-output_11_0.png -Do Inference `⇑ <#top>`__ +Do Inference ################################################# .. 
code:: ipython3 diff --git a/docs/notebooks/001-hello-world-with-output_files/001-hello-world-with-output_10_0.png b/docs/notebooks/001-hello-world-with-output_files/001-hello-world-with-output_11_0.png similarity index 100% rename from docs/notebooks/001-hello-world-with-output_files/001-hello-world-with-output_10_0.png rename to docs/notebooks/001-hello-world-with-output_files/001-hello-world-with-output_11_0.png diff --git a/docs/notebooks/002-openvino-api-with-output.rst b/docs/notebooks/002-openvino-api-with-output.rst index d66cee6545282f..d7ae0c6fe91bc9 100644 --- a/docs/notebooks/002-openvino-api-with-output.rst +++ b/docs/notebooks/002-openvino-api-with-output.rst @@ -4,27 +4,27 @@ OpenVINO™ Runtime API Tutorial This notebook explains the basics of the OpenVINO Runtime API. It covers: -- `Loading OpenVINO Runtime and Showing Info <#loading-openvino-runtime-and-showing-info>`__ -- `Loading a Model <#loading-a-model>`__ +- `Loading OpenVINO Runtime and Showing Info <#loading-openvino-runtime-and-showing-info>`__ +- `Loading a Model <#loading-a-model>`__ - - `OpenVINO IR Model <#openvino-ir-model>`__ - - `ONNX Model <#onnx-model>`__ - - `PaddlePaddle Model <#paddlepaddle-model>`__ - - `TensorFlow Model <#tensorflow-model>`__ - - `TensorFlow Lite Model <#tensorflow-lite-model>`__ + - `OpenVINO IR Model <#openvino-ir-model>`__ + - `ONNX Model <#onnx-model>`__ + - `PaddlePaddle Model <#paddlepaddle-model>`__ + - `TensorFlow Model <#tensorflow-model>`__ + - `TensorFlow Lite Model <#tensorflow-lite-model>`__ -- `Getting Information about a Model <#getting-information-about-a-model>`__ +- `Getting Information about a Model <#getting-information-about-a-model>`__ - - `Model Inputs <#model-inputs>`__ - - `Model Outputs <#model-outputs>`__ + - `Model Inputs <#model-inputs>`__ + - `Model Outputs <#model-outputs>`__ -- `Doing Inference on a Model <#doing-inference-on-a-model>`__ -- `Reshaping and Resizing <#reshaping-and-resizing>`__ +- `Doing Inference on a Model <#doing-inference-on-a-model>`__ +- `Reshaping and Resizing <#reshaping-and-resizing>`__ - - `Change Image Size <#change-image-size>`__ - - `Change Batch Size <#change-batch-size>`__ + - `Change Image Size <#change-image-size>`__ + - `Change Batch Size <#change-batch-size>`__ -- `Caching a Model <#caching-a-model>`__ +- `Caching a Model <#caching-a-model>`__ The notebook is divided into sections with headers. The next cell contains global requirements installation and imports. Each section is @@ -37,7 +37,7 @@ same. .. code:: ipython3 # Required imports. Please execute this cell first. - !pip install -q "openvino>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install requests tqdm # Fetch `notebook_utils` module @@ -52,24 +52,24 @@ same. .. 
parsed-literal:: - Requirement already satisfied: requests in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (2.31.0) - Requirement already satisfied: tqdm in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (4.66.1) - Requirement already satisfied: charset-normalizer<4,>=2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (3.2.0) - Requirement already satisfied: idna<4,>=2.5 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (3.4) - Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (1.26.16) - Requirement already satisfied: certifi>=2017.4.17 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (2023.7.22) + Requirement already satisfied: requests in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (2.31.0) + Requirement already satisfied: tqdm in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (4.66.1) + Requirement already satisfied: charset-normalizer<4,>=2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (3.2.0) + Requirement already satisfied: idna<4,>=2.5 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (3.4) + Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (1.26.16) + Requirement already satisfied: certifi>=2017.4.17 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests) (2023.7.22) Loading OpenVINO Runtime and Showing Info ------------------------------------------ +############################################################################################################################# Initialize OpenVINO Runtime with Core() .. code:: ipython3 - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() OpenVINO Runtime can load a network on a device. A device in this context means a CPU, an Intel GPU, a Neural Compute Stick 2, etc. The @@ -97,19 +97,18 @@ be faster. Loading a Model ---------------- +############################################################################################################################# After initializing OpenVINO Runtime, first read the model file with ``read_model()``, then compile it to the specified device with the ``compile_model()`` method. -`OpenVINO™ supports several model -formats `__ +`OpenVINO™ supports several model formats `__ and enables developers to convert them to its own OpenVINO IR format using a tool dedicated to this task. 
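The minimal sketch below summarizes that read-then-compile flow end to end, assuming the classification model used in the next section; the file path, device name, and input shape are placeholders borrowed from the examples that follow.

.. code:: ipython3

    import numpy as np
    import openvino as ov

    core = ov.Core()

    # Read a model file. The same call also accepts ONNX, PaddlePaddle,
    # TensorFlow and TensorFlow Lite files, as shown in the sections below.
    model = core.read_model(model="model/classification.xml")

    # Compile the model for a device and run it on random data matching
    # the example model's [1, 3, 224, 224] input shape.
    compiled_model = core.compile_model(model=model, device_name="CPU")
    output_layer = compiled_model.output(0)
    random_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled_model([random_input])[output_layer]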
OpenVINO IR Model -~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ An OpenVINO IR (Intermediate Representation) model consists of an ``.xml`` file, containing information about network topology, and a @@ -122,8 +121,7 @@ is the case, specifying the weights file is optional. If the weights file has a different filename, it can be specified using the ``weights`` parameter in ``read_model()``. -The OpenVINO `model conversion -API `__ +The OpenVINO `Model Conversion API `__ tool is used to convert models to OpenVINO IR format. Model conversion API reads the original model and creates an OpenVINO IR model (``.xml`` and ``.bin`` files) so inference can be performed without delays due to @@ -163,22 +161,22 @@ notebooks. .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.bin') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.bin') .. code:: ipython3 - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() classification_model_xml = "model/classification.xml" model = core.read_model(model=classification_model_xml) compiled_model = core.compile_model(model=model, device_name="CPU") ONNX Model -~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ `ONNX `__ is an open format built to represent machine learning models. ONNX defines a common set of operators - the building @@ -210,35 +208,33 @@ points to the filename of an ONNX model. .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/segmentation.onnx') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/segmentation.onnx') .. code:: ipython3 - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() onnx_model_path = "model/segmentation.onnx" model_onnx = core.read_model(model=onnx_model_path) compiled_model_onnx = core.compile_model(model=model_onnx, device_name="CPU") -The ONNX model can be exported to OpenVINO IR with ``serialize()``: +The ONNX model can be exported to OpenVINO IR with ``save_model()``: .. code:: ipython3 - from openvino.runtime import serialize - - serialize(model_onnx, xml_path="model/exported_onnx_model.xml") + ov.save_model(model_onnx, output_model="model/exported_onnx_model.xml") PaddlePaddle Model -~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ `PaddlePaddle `__ models saved for inference can also be passed to OpenVINO Runtime without any conversion step. Pass the filename with extension to -``read_model`` and exported an OpenVINO IR with ``serialize`` +``read_model`` and exported an OpenVINO IR with ``save_model`` .. code:: ipython3 @@ -266,15 +262,15 @@ without any conversion step. Pass the filename with extension to .. 
parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/inference.pdiparams') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/inference.pdiparams') .. code:: ipython3 - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() paddle_model_path = 'model/inference.pdmodel' model_paddle = core.read_model(model=paddle_model_path) @@ -282,12 +278,10 @@ without any conversion step. Pass the filename with extension to .. code:: ipython3 - from openvino.runtime import serialize - - serialize(model_paddle, xml_path="model/exported_paddle_model.xml") + ov.save_model(model_paddle, output_model="model/exported_paddle_model.xml") TensorFlow Model -~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ TensorFlow models saved in frozen graph format can also be passed to ``read_model`` starting in OpenVINO 2022.3. @@ -299,8 +293,7 @@ TensorFlow models saved in frozen graph format can also be passed to support will be provided in the upcoming 2023 releases. Currently support is limited to only frozen graph inference format. Other TensorFlow model formats must be converted to OpenVINO IR using - `model conversion API `__. - + `model conversion API `__. .. code:: ipython3 @@ -320,15 +313,15 @@ TensorFlow models saved in frozen graph format can also be passed to .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.pb') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.pb') .. code:: ipython3 - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() tf_model_path = "model/classification.pb" model_tf = core.read_model(model=tf_model_path) @@ -336,17 +329,15 @@ TensorFlow models saved in frozen graph format can also be passed to .. code:: ipython3 - from openvino.runtime import serialize - - serialize(model_tf, xml_path="model/exported_tf_model.xml") + ov.save_model(model_tf, output_model="model/exported_tf_model.xml") TensorFlow Lite Model -~~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ `TFLite `__ models saved for inference can also be passed to OpenVINO Runtime. Pass the filename with extension ``.tflite`` to ``read_model`` and exported an OpenVINO IR with -``serialize``. +``save_model``. This tutorial uses the image classification model `inception_v4_quant `__. @@ -367,32 +358,25 @@ It is pre-trained model optimized to work with TensorFlow Lite. model/classification.tflite: 0%| | 0.00/40.9M [00:00`__. Reshaping and Resizing ----------------------- +############################################################################################################################# Change Image Size -~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Instead of reshaping the image to fit the model, it is also possible to reshape the model to fit the image. Be aware that not all models support @@ -801,15 +785,15 @@ input shape. .. 
parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/segmentation.bin') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/segmentation.bin') .. code:: ipython3 - from openvino.runtime import Core, PartialShape + import openvino as ov - core = Core() + core = ov.Core() segmentation_model_xml = "model/segmentation.xml" segmentation_model = core.read_model(model=segmentation_model_xml) segmentation_input_layer = segmentation_model.input(0) @@ -819,7 +803,7 @@ input shape. print(f"input shape: {segmentation_input_layer.shape}") print(f"output shape: {segmentation_output_layer.shape}") - new_shape = PartialShape([1, 3, 544, 544]) + new_shape = ov.PartialShape([1, 3, 544, 544]) segmentation_model.reshape({segmentation_input_layer.any_name: new_shape}) segmentation_compiled_model = core.compile_model(model=segmentation_model, device_name="CPU") # help(segmentation_compiled_model) @@ -853,7 +837,7 @@ setting the input dimensions to 544x544 also modifies the output dimensions. After reshaping, compile the network once again. Change Batch Size -~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Use the ``.reshape()`` method to set the batch size, by increasing the first element of ``new_shape``. For example, to set a batch size of two, @@ -861,13 +845,13 @@ set ``new_shape = (2,3,544,544)`` in the cell above. .. code:: ipython3 - from openvino.runtime import Core, PartialShape + import openvino as ov segmentation_model_xml = "model/segmentation.xml" segmentation_model = core.read_model(model=segmentation_model_xml) segmentation_input_layer = segmentation_model.input(0) segmentation_output_layer = segmentation_model.output(0) - new_shape = PartialShape([2, 3, 544, 544]) + new_shape = ov.PartialShape([2, 3, 544, 544]) segmentation_model.reshape({segmentation_input_layer.any_name: new_shape}) segmentation_compiled_model = core.compile_model(model=segmentation_model, device_name="CPU") @@ -888,14 +872,14 @@ input image through the network to see the result: .. code:: ipython3 import numpy as np - from openvino.runtime import Core, PartialShape + import openvino as ov - core = Core() + core = ov.Core() segmentation_model_xml = "model/segmentation.xml" segmentation_model = core.read_model(model=segmentation_model_xml) segmentation_input_layer = segmentation_model.input(0) segmentation_output_layer = segmentation_model.output(0) - new_shape = PartialShape([2, 3, 544, 544]) + new_shape = ov.PartialShape([2, 3, 544, 544]) segmentation_model.reshape({segmentation_input_layer.any_name: new_shape}) segmentation_compiled_model = core.compile_model(model=segmentation_model, device_name="CPU") input_data = np.random.rand(2, 3, 544, 544) @@ -913,7 +897,7 @@ input image through the network to see the result: Caching a Model ---------------- +############################################################################################################################# For some devices, like GPU, loading a model can take some time. Model Caching solves this issue by caching the model in a cache directory. If @@ -954,7 +938,7 @@ the cache. .. 
parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.bin') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/002-openvino-api/model/classification.bin') @@ -963,9 +947,9 @@ the cache. import time from pathlib import Path - from openvino.runtime import Core + import openvino as ov - core = Core() + core = ov.Core() device_name = "GPU" diff --git a/docs/notebooks/003-hello-segmentation-with-output.rst b/docs/notebooks/003-hello-segmentation-with-output.rst index 19c71e8924d361..71de970a264862 100644 --- a/docs/notebooks/003-hello-segmentation-with-output.rst +++ b/docs/notebooks/003-hello-segmentation-with-output.rst @@ -1,22 +1,15 @@ Hello Image Segmentation ======================== - - A very basic introduction to using segmentation models with OpenVINO™. In this tutorial, a pre-trained -`road-segmentation-adas-0001 `__ -model from the `Open Model -Zoo `__ is used. +`road-segmentation-adas-0001 `__ +model from the `Open Model Zoo `__ is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark. - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Download model weights <#download-model-weights>`__ @@ -27,7 +20,12 @@ recognizes four classes: background, road, curb and mark. - `Prepare Data for Visualization <#prepare-data-for-visualization>`__ - `Visualize data <#visualize-data>`__ -Imports `⇑ <#top>`__ +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" + +Imports ######################################### .. code:: ipython3 @@ -35,16 +33,15 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov import sys - from openvino.runtime import Core sys.path.append("../utils") from notebook_utils import segmentation_map_to_image, download_file -Download model weights `⇑ <#top>`__ +Download model weights ############################################################################################################################# - .. code:: ipython3 from pathlib import Path @@ -79,19 +76,18 @@ Download model weights `⇑ <#top>`__ model/road-segmentation-adas-0001.bin: 0%| | 0.00/720k [00:00`__ +Select inference device ############################################################################################################################# - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - ie = Core() + core = ov.Core() device = widgets.Dropdown( - options=ie.available_devices + ["AUTO"], + options=core.available_devices + ["AUTO"], value='AUTO', description='Device:', disabled=False, @@ -108,13 +104,12 @@ Select device from dropdown list for running inference using OpenVINO: -Load the Model `⇑ <#top>`__ +Load the Model ############################################################################################################################# - .. 
code:: ipython3 - core = Core() + core = ov.Core() model = core.read_model(model=model_xml_path) compiled_model = core.compile_model(model=model, device_name=device.value) @@ -122,7 +117,7 @@ Load the Model `⇑ <#top>`__ input_layer_ir = compiled_model.input(0) output_layer_ir = compiled_model.output(0) -Load an Image `⇑ <#top>`__ +Load an Image ############################################################################################################################# A sample image from the `Mapillary Vistas `__ dataset is @@ -153,18 +148,17 @@ provided. .. parsed-literal:: - + -.. image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_10_1.png +.. image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_11_1.png -Do Inference `⇑ <#top>`__ +Do Inference ############################################################################################################################# - .. code:: ipython3 # Run the inference. @@ -179,18 +173,17 @@ Do Inference `⇑ <#top>`__ .. parsed-literal:: - + -.. image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_12_1.png +.. image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_13_1.png -Prepare Data for Visualization `⇑ <#top>`__ +Prepare Data for Visualization ############################################################################################################################# - .. code:: ipython3 # Define colormap, each color represents a class. @@ -206,10 +199,9 @@ Prepare Data for Visualization `⇑ <#top>`__ # Create an image with mask. image_with_mask = cv2.addWeighted(resized_mask, alpha, rgb_image, 1 - alpha, 0) -Visualize data `⇑ <#top>`__ +Visualize data ############################################################################################################################# - .. code:: ipython3 # Define titles with images. @@ -229,5 +221,5 @@ Visualize data `⇑ <#top>`__ -.. image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_16_0.png +.. 
image:: 003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_17_0.png diff --git a/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_10_1.png b/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_11_1.png similarity index 100% rename from docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_10_1.png rename to docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_11_1.png diff --git a/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_12_1.png b/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_13_1.png similarity index 100% rename from docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_12_1.png rename to docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_13_1.png diff --git a/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_16_0.png b/docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_17_0.png similarity index 100% rename from docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_16_0.png rename to docs/notebooks/003-hello-segmentation-with-output_files/003-hello-segmentation-with-output_17_0.png diff --git a/docs/notebooks/004-hello-detection-with-output.rst b/docs/notebooks/004-hello-detection-with-output.rst index b5b1a183e6072c..13a38e8e20d373 100644 --- a/docs/notebooks/004-hello-detection-with-output.rst +++ b/docs/notebooks/004-hello-detection-with-output.rst @@ -1,15 +1,12 @@ Hello Object Detection ====================== - - A very basic introduction to using object detection models with OpenVINO™. The -`horizontal-text-detection-0001 `__ -model from `Open Model -Zoo `__ is used. It +`horizontal-text-detection-0001 `__ +model from `Open Model Zoo `__ is used. It detects horizontal text in images and returns a blob of data in the shape of ``[100, 5]``. Each detected text box is stored in the ``[x_min, y_min, x_max, y_max, conf]`` format, where the @@ -18,21 +15,22 @@ corner, ``(x_max, y_max)`` are the coordinates of the bottom right bounding box corner and ``conf`` is the confidence for the predicted class. +**Table of contents:** +- `Imports <#imports>`__ +- `Download model weights <#download-model-weights>`__ +- `Select inference device <#select-inference-device>`__ +- `Load the Model <#load-the-model>`__ +- `Load an Image <#load-an-image>`__ +- `Do Inference <#do-inference>`__ +- `Visualize Results <#visualize-results>`__ -.. _top: - -**Table of contents**: +.. code:: ipython3 -- `Imports <#imports>`__ -- `Download model weights <#download-model-weights>`__ -- `Select inference device <#select-inference-device>`__ -- `Load the Model <#load-the-model>`__ -- `Load an Image <#load-an-image>`__ -- `Do Inference <#do-inference>`__ -- `Visualize Results <#visualize-results>`__ + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" -Imports `⇑ <#top>`__ +Imports ######################################## .. 
code:: ipython3 @@ -40,14 +38,14 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np - from openvino.runtime import Core + import openvino as ov from pathlib import Path import sys sys.path.append("../utils") from notebook_utils import download_file -Download model weights `⇑ <#top>`__ +Download model weights ####################################################### .. code:: ipython3 @@ -83,7 +81,7 @@ Download model weights `⇑ <#top>`__ model/horizontal-text-detection-0001.bin: 0%| | 0.00/7.39M [00:00`__ +Select inference device ########################################################### Select device from dropdown list for running inference using OpenVINO: @@ -92,9 +90,9 @@ Select device from dropdown list for running inference using OpenVINO: import ipywidgets as widgets - ie = Core() + core = ov.Core() device = widgets.Dropdown( - options=ie.available_devices + ["AUTO"], + options=core.available_devices + ["AUTO"], value='AUTO', description='Device:', disabled=False, @@ -111,20 +109,20 @@ Select device from dropdown list for running inference using OpenVINO: -Load the Model `⇑ <#top>`__ +Load the Model ############################################### .. code:: ipython3 - ie = Core() + core = ov.Core() - model = ie.read_model(model=model_xml_path) - compiled_model = ie.compile_model(model=model, device_name="CPU") + model = core.read_model(model=model_xml_path) + compiled_model = core.compile_model(model=model, device_name="CPU") input_layer_ir = compiled_model.input(0) output_layer_ir = compiled_model.output("boxes") -Load an Image `⇑ <#top>`__ +Load an Image ############################################## .. code:: ipython3 @@ -145,10 +143,10 @@ Load an Image `⇑ <#top>`__ -.. image:: 004-hello-detection-with-output_files/004-hello-detection-with-output_10_0.png +.. image:: 004-hello-detection-with-output_files/004-hello-detection-with-output_11_0.png -Do Inference `⇑ <#top>`__ +Do Inference ############################################## .. code:: ipython3 @@ -159,7 +157,7 @@ Do Inference `⇑ <#top>`__ # Remove zero only boxes. boxes = boxes[~np.all(boxes == 0, axis=1)] -Visualize Results `⇑ <#top>`__ +Visualize Results ################################################## .. code:: ipython3 @@ -218,5 +216,5 @@ Visualize Results `⇑ <#top>`__ -.. image:: 004-hello-detection-with-output_files/004-hello-detection-with-output_15_0.png +.. 
image:: 004-hello-detection-with-output_files/004-hello-detection-with-output_16_0.png diff --git a/docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_10_0.png b/docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_11_0.png similarity index 100% rename from docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_10_0.png rename to docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_11_0.png diff --git a/docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_15_0.png b/docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_16_0.png similarity index 100% rename from docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_15_0.png rename to docs/notebooks/004-hello-detection-with-output_files/004-hello-detection-with-output_16_0.png diff --git a/docs/notebooks/101-tensorflow-classification-to-openvino-with-output.rst b/docs/notebooks/101-tensorflow-classification-to-openvino-with-output.rst index 50b81fc51eade5..020bd5ff97d15e 100644 --- a/docs/notebooks/101-tensorflow-classification-to-openvino-with-output.rst +++ b/docs/notebooks/101-tensorflow-classification-to-openvino-with-output.rst @@ -1,23 +1,14 @@ Convert a TensorFlow Model to OpenVINO™ ======================================= +This short tutorial shows how to convert a TensorFlow +`MobileNetV3 `__ +image classification model to OpenVINO `Intermediate Representation `__ +(OpenVINO IR) format, using `Model Conversion API `__. +After creating the OpenVINO IR, load the model in `OpenVINO Runtime `__ +and do inference with a sample image. - -| This short tutorial shows how to convert a TensorFlow - `MobileNetV3 `__ - image classification model to OpenVINO `Intermediate - Representation `__ - (OpenVINO IR) format, using `model conversion - API `__. - After creating the OpenVINO IR, load the model in `OpenVINO - Runtime `__ - and do inference with a sample image. - - - -| .. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -38,9 +29,13 @@ Convert a TensorFlow Model to OpenVINO™ - `Timing <#timing>`__ -Imports `⇑ <#top>`__ -############################################################################################################################### +.. code:: ipython3 + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" + +Imports +############################################################################################################################### .. code:: ipython3 @@ -50,23 +45,21 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov import tensorflow as tf - from openvino.runtime import Core, serialize - from openvino.tools import mo .. parsed-literal:: - 2023-08-15 22:26:34.199621: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:26:34.233464: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 22:28:30.021569: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. 
You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:28:30.056559: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:26:34.746193: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 22:28:30.570158: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - .. code:: ipython3 # The paths of the source and converted models. @@ -77,12 +70,10 @@ Settings `⇑ <#top>`__ ir_path = Path("model/v3-small_224_1.0_float.xml") -Download model `⇑ <#top>`__ +Download model ############################################################################################################################### - -Load model using `tf.keras.applications -api `__ +Load model using `tf.keras.applications api `__ and save it to the disk. .. code:: ipython3 @@ -98,7 +89,7 @@ and save it to the disk. .. parsed-literal:: - 2023-08-15 22:26:35.659386: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. + 2023-09-08 22:28:31.436088: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... @@ -109,9 +100,9 @@ and save it to the disk. .. 
parsed-literal:: - 2023-08-15 22:26:39.846021: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [?,1,1,1024] + 2023-09-08 22:28:35.666551: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [?,1,1,1024] [[{{node inputs}}]] - 2023-08-15 22:26:42.992490: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [?,1,1,1024] + 2023-09-08 22:28:38.807497: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [?,1,1,1024] [[{{node inputs}}]] WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 54). These functions will not be directly callable after loading. @@ -126,21 +117,19 @@ and save it to the disk. INFO:tensorflow:Assets written to: model/v3-small_224_1.0_float/assets -Convert a Model to OpenVINO IR Format `⇑ <#top>`__ +Convert a Model to OpenVINO IR Format ############################################################################################################################### - -Convert a TensorFlow Model to OpenVINO IR Format `⇑ <#top>`__ +Convert a TensorFlow Model to OpenVINO IR Format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Use the model conversion Python API to convert the TensorFlow model to -OpenVINO IR. The ``mo.convert_model`` function accept path to saved +OpenVINO IR. The ``ov.convert_model`` function accept path to saved model directory and returns OpenVINO Model class instance which represents this model. Obtained model is ready to use and to be loaded -on a device using ``compile_model`` or can be saved on a disk using the -``serialize`` function. See the -`tutorial `__ +on a device using ``ov.compile_model`` or can be saved on a disk using +the ``ov.save_model`` function. See the +`tutorial `__ for more information about using model conversion API with TensorFlow models. @@ -149,8 +138,8 @@ models. # Run model conversion API if the IR model file does not exist if not ir_path.exists(): print("Exporting TensorFlow model to IR... This may take a few minutes.") - ov_model = mo.convert_model(saved_model_dir=model_path, input_shape=[[1, 224, 224, 3]], compress_to_fp16=True) - serialize(ov_model, ir_path) + ov_model = ov.convert_model(model_path, input=[[1, 224, 224, 3]]) + ov.save_model(ov_model, ir_path) else: print(f"IR model {ir_path} already exists.") @@ -160,23 +149,20 @@ models. Exporting TensorFlow model to IR... This may take a few minutes. 
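Saving the IR is optional: the ``ov.Model`` returned by ``ov.convert_model`` can also be compiled and used directly in the same session. The sketch below assumes the ``model_path`` SavedModel directory created earlier in this notebook and lets OpenVINO pick the inference device automatically.

.. code:: ipython3

    import openvino as ov

    # Convert the TensorFlow SavedModel and compile it in memory,
    # skipping the intermediate IR files on disk.
    ov_model = ov.convert_model(model_path, input=[[1, 224, 224, 3]])
    compiled_model = ov.compile_model(ov_model)

The ``ovc`` command-line tool provides an equivalent conversion from the shell; its options are described in the model conversion guide linked above.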
-Test Inference on the Converted Model `⇑ <#top>`__ +Test Inference on the Converted Model ############################################################################################################################### - -Load the Model `⇑ <#top>`__ +Load the Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - core = Core() + core = ov.Core() model = core.read_model(ir_path) -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -205,20 +191,18 @@ Select device from dropdown list for running inference using OpenVINO: compiled_model = core.compile_model(model=model, device_name=device.value) -Get Model Information `⇑ <#top>`__ +Get Model Information +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 input_key = compiled_model.input(0) output_key = compiled_model.output(0) network_input_shape = input_key.shape -Load an Image `⇑ <#top>`__ +Load an Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Load an image, resize it, and convert it to the input shape of the network. @@ -237,13 +221,12 @@ network. -.. image:: 101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_18_0.png +.. image:: 101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_19_0.png -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 result = compiled_model(input_image)[output_key] @@ -266,14 +249,13 @@ Do Inference `⇑ <#top>`__ -Timing `⇑ <#top>`__ +Timing ############################################################################################################################### - Measure the time it takes to do inference on thousand images. This gives an indication of performance. For more accurate benchmarking, use the `Benchmark -Tool `__ +Tool `__ in OpenVINO. Note that many optimizations are possible to improve the performance. @@ -297,5 +279,5 @@ performance. .. 
parsed-literal:: - IR model in OpenVINO Runtime/CPU: 0.0010 seconds per image, FPS: 988.20 + IR model in OpenVINO Runtime/CPU: 0.0010 seconds per image, FPS: 989.01 diff --git a/docs/notebooks/101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_18_0.png b/docs/notebooks/101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_19_0.png similarity index 100% rename from docs/notebooks/101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_18_0.png rename to docs/notebooks/101-tensorflow-classification-to-openvino-with-output_files/101-tensorflow-classification-to-openvino-with-output_19_0.png diff --git a/docs/notebooks/102-pytorch-onnx-to-openvino-with-output.rst b/docs/notebooks/102-pytorch-onnx-to-openvino-with-output.rst index ac8ce1e7bdf452..1a1d6acf91179c 100644 --- a/docs/notebooks/102-pytorch-onnx-to-openvino-with-output.rst +++ b/docs/notebooks/102-pytorch-onnx-to-openvino-with-output.rst @@ -1,8 +1,6 @@ Convert a PyTorch Model to ONNX and OpenVINO™ IR ================================================ - - This tutorial demonstrates step-by-step instructions on how to do inference on a PyTorch semantic segmentation model, using OpenVINO Runtime. @@ -35,11 +33,7 @@ plant, sheep, sofa, train, tv monitor** More information about the model is available in the `torchvision documentation `__ - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -62,14 +56,19 @@ documentation `__ - `2. OpenVINO IR Model in OpenVINO Runtime <#openvino-ir-model-in-openvino-runtime>`__ - `Select the inference device <#select-the-inference-device>`__ -- `PyTorch Comparison <#pytorch-comparison>`__ -- `Performance Comparison <#performance-comparison>`__ +- `PyTorch Comparison <#pytorch-comparison>`__ +- `Performance Comparison <#performance-comparison>`__ - `References <#references>`__ -Preparation `⇑ <#top>`__ +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" + +Preparation ######################################################################## -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -81,14 +80,14 @@ Imports `⇑ <#top>`__ import cv2 import numpy as np + import openvino as ov import torch from torchvision.models.segmentation import lraspp_mobilenet_v3_large, LRASPP_MobileNet_V3_Large_Weights - from openvino.runtime import Core sys.path.append("../utils") from notebook_utils import segmentation_map_to_image, viz_result_image, SegmentationMap, Label, download_file -Settings `⇑ <#top>`__ +Settings ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Set a name for the model, then define width and height of the image that @@ -110,7 +109,7 @@ transforms function, the model is pre-trained on images with a height of onnx_path.parent.mkdir() ir_path = onnx_path.with_suffix(".xml") -Load Model `⇑ <#top>`__ +Load Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Generally, PyTorch models represent an instance of ``torch.nn.Module`` @@ -161,10 +160,10 @@ have not downloaded the model before. 
Loaded PyTorch LRASPP MobileNetV3 model -ONNX Model Conversion `⇑ <#top>`__ +ONNX Model Conversion ################################################################################ -Convert PyTorch model to ONNX `⇑ <#top>`__ +Convert PyTorch model to ONNX ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ OpenVINO supports PyTorch models that are exported in ONNX format. We @@ -204,23 +203,20 @@ line of the output will read: ONNX model exported to model/lraspp_mobilenet_v3_large.onnx. -Convert ONNX Model to OpenVINO IR Format `⇑ <#top>`__ +Convert ONNX Model to OpenVINO IR Format ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ To convert the ONNX model to OpenVINO IR with ``FP16`` precision, use model conversion API. The models are saved inside the current directory. For more information on how to convert models, see this -`page `__. +`page `__. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize - if not ir_path.exists(): print("Exporting ONNX model to IR... This may take a few minutes.") - ov_model = mo.convert_model(onnx_path, compress_to_fp16=True) - serialize(ov_model, ir_path) + ov_model = ov.convert_model(onnx_path) + ov.save_model(ov_model, ir_path) else: print(f"IR model {ir_path} already exists.") @@ -230,13 +226,13 @@ For more information on how to convert models, see this Exporting ONNX model to IR... This may take a few minutes. -Show Results `⇑ <#top>`__ +Show Results ###################################################################### Confirm that the segmentation results look as expected by comparing model predictions on the ONNX, OpenVINO IR and PyTorch models. -Load and Preprocess an Input Image `⇑ <#top>`__ +Load and Preprocess an Input Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Images need to be normalized before propagating through the network. @@ -268,27 +264,27 @@ Images need to be normalized before propagating through the network. input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) normalized_input_image = np.expand_dims(np.transpose(normalized_image, (2, 0, 1)), 0) -Load the OpenVINO IR Network and Run Inference on the ONNX model `⇑ <#top>`__ +Load the OpenVINO IR Network and Run Inference on the ONNX model ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ OpenVINO Runtime can load ONNX models directly. First, load the ONNX model, do inference and show the results. Then, load the model that was converted to OpenVINO Intermediate Representation (OpenVINO IR) with -Model Optimizer and do inference on that model, and show the results on -an image. +OpenVINO Converter and do inference on that model, and show the results +on an image. -1. ONNX Model in OpenVINO Runtime `⇑ <#top>`__ +1. ONNX Model in OpenVINO Runtime ------------------------------------------------------------------------------------------ .. code:: ipython3 # Instantiate OpenVINO Core - core = Core() + core = ov.Core() # Read model to OpenVINO Runtime model_onnx = core.read_model(model=onnx_path) -Select an inference device `⇑ <#top>`__ +Select an inference device ................................................................................... Select a device from dropdown list for running inference using OpenVINO: @@ -366,14 +362,14 @@ be applied to each label for more convenient visualization. -.. 
image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_21_0.png +.. image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_22_0.png -2. OpenVINO IR Model in OpenVINO Runtime `⇑ <#top>`__ +2. OpenVINO IR Model in OpenVINO Runtime ------------------------------------------------------------------------------------------------------ -Select the inference device `⇑ <#top>`__ +Select the inference device ..................................................................................... Select a device from dropdown list for running inference using OpenVINO: @@ -394,7 +390,7 @@ Select a device from dropdown list for running inference using OpenVINO: .. code:: ipython3 # Load the network in OpenVINO Runtime. - core = Core() + core = ov.Core() model_ir = core.read_model(model=ir_path) compiled_model_ir = core.compile_model(model=model_ir, device_name=device.value) @@ -416,11 +412,11 @@ Select a device from dropdown list for running inference using OpenVINO: -.. image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_26_0.png +.. image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_27_0.png -PyTorch Comparison `⇑ <#top>`__ +PyTorch Comparison ############################################################################ Do inference on the PyTorch model to verify that the output visually @@ -442,17 +438,17 @@ looks the same as the output on the ONNX/OpenVINO IR models. -.. image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_28_0.png +.. image:: 102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_29_0.png -Performance Comparison `⇑ <#top>`__ +Performance Comparison ################################################################################ Measure the time it takes to do inference on twenty images. This gives an indication of performance. For more accurate benchmarking, use the `Benchmark -Tool `__. +Tool `__. Keep in mind that many optimizations are possible to improve the performance. @@ -519,9 +515,9 @@ performance. .. parsed-literal:: - PyTorch model on CPU: 0.037 seconds per image, FPS: 27.19 - ONNX model in OpenVINO Runtime/CPU: 0.031 seconds per image, FPS: 32.33 - OpenVINO IR model in OpenVINO Runtime/CPU: 0.032 seconds per image, FPS: 31.72 + PyTorch model on CPU: 0.035 seconds per image, FPS: 28.95 + ONNX model in OpenVINO Runtime/CPU: 0.031 seconds per image, FPS: 32.42 + OpenVINO IR model in OpenVINO Runtime/CPU: 0.031 seconds per image, FPS: 32.13 **Show Device Information** @@ -539,7 +535,7 @@ performance. 
CPU: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz -References `⇑ <#top>`__ +References ###################################################################### - `Torchvision `__ @@ -549,6 +545,6 @@ References `⇑ <#top>`__ - `OpenVINO ONNX support `__ - `Model Conversion API - documentation `__ + documentation `__ - `Converting Pytorch - model `__ + model `__ diff --git a/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_21_0.png b/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_22_0.png similarity index 100% rename from docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_21_0.png rename to docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_22_0.png diff --git a/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_26_0.png b/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_27_0.png similarity index 100% rename from docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_26_0.png rename to docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_27_0.png diff --git a/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_28_0.png b/docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_29_0.png similarity index 100% rename from docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_28_0.png rename to docs/notebooks/102-pytorch-onnx-to-openvino-with-output_files/102-pytorch-onnx-to-openvino-with-output_29_0.png diff --git a/docs/notebooks/102-pytorch-to-openvino-with-output.rst b/docs/notebooks/102-pytorch-to-openvino-with-output.rst index d2b0b57549bff5..6b962778c1404c 100644 --- a/docs/notebooks/102-pytorch-to-openvino-with-output.rst +++ b/docs/notebooks/102-pytorch-to-openvino-with-output.rst @@ -1,8 +1,6 @@ Convert a PyTorch Model to OpenVINO™ IR ======================================= - - This tutorial demonstrates step-by-step instructions on how to do inference on a PyTorch classification model using OpenVINO Runtime. Starting from OpenVINO 2023.0 release, OpenVINO supports direct PyTorch @@ -29,13 +27,9 @@ network design spaces that parametrize populations of networks. The overall process is analogous to the classic manual design of networks but elevated to the design space level. The RegNet design space provides simple and fast networks that work well across a wide range of flop -regimes. - - - -.. _top: +regimes. -**Table of contents**: +**Table of contents:** - `Prerequisites <#prerequisites>`__ - `Load PyTorch Model <#load-pytorch-model>`__ @@ -67,15 +61,14 @@ regimes. - `Convert PyTorch Traced Model to OpenVINO Intermediate Representation <#convert-pytorch-traced-model-to-openvino-intermediate-representation>`__ - `Benchmark OpenVINO Model Inference Converted From Traced Model <#benchmark-openvino-model-inference-converted-from-traced-model>`__ -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - Install notebook dependencies .. 
code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" scipy Download input data and label map @@ -103,10 +96,9 @@ Download input data and label map imagenet_classes = labels_file.open("r").read().splitlines() -Load PyTorch Model `⇑ <#top>`__ +Load PyTorch Model ############################################################################################################################### - Generally, PyTorch models represent an instance of the ``torch.nn.Module`` class, initialized by a state dictionary with model weights. Typical steps for getting a pre-trained model: @@ -135,10 +127,9 @@ enum ``RegNet_Y_800MF_Weights.DEFAULT``. # switch model to inference mode model.eval(); -Prepare Input Data `⇑ <#top>`__ +Prepare Input Data +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The code below demonstrates how to preprocess input data using a model-specific transforms module from ``torchvision``. After transformation, we should concatenate images into batched tensor, in our @@ -158,10 +149,9 @@ the first dimension. # Add batch dimension to image tensor input_tensor = img_transformed.unsqueeze(0) -Run PyTorch Model Inference `⇑ <#top>`__ +Run PyTorch Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The model returns a vector of probabilities in raw logits format, softmax can be applied to get normalized values in the [0, 1] range. For a demonstration that the output of the original model and OpenVINO @@ -215,10 +205,9 @@ can be reused later. 5: hamper - 2.35% -Benchmark PyTorch Model Inference `⇑ <#top>`__ +Benchmark PyTorch Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 %%timeit @@ -229,17 +218,17 @@ Benchmark PyTorch Model Inference `⇑ <#top>`__ .. parsed-literal:: - 13.2 ms ± 27.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) + 13.5 ms ± 5.61 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) -Convert PyTorch Model to OpenVINO Intermediate Representation. `⇑ <#top>`__ +Convert PyTorch Model to OpenVINO Intermediate Representation ############################################################################################################################### Starting from the 2023.0 release OpenVINO supports direct PyTorch models -conversion to OpenVINO Intermediate Representation (IR) format. Model -Optimizer Python API should be used for these purposes. More details +conversion to OpenVINO Intermediate Representation (IR) format. OpenVINO +model conversion API should be used for these purposes. More details regarding PyTorch model conversion can be found in OpenVINO -`documentation `__ +`documentation `__ .. note:: @@ -251,12 +240,11 @@ regarding PyTorch model conversion can be found in OpenVINO `tutorial <102-pytorch-to-openvino-with-output.html>`__ which explains how to convert PyTorch model to ONNX, then to OpenVINO - The ``convert_model`` function accepts the PyTorch model object and -returns the ``openvino.runtime.Model`` instance ready to load on a -device using ``core.compile_model`` or save on disk for next usage using -``openvino.runtime.serialize``. 
Optionally, we can provide additional -parameters, such as: +returns the ``openvino.Model`` instance ready to load on a device using +``core.compile_model`` or save on disk for next usage using +``ov.save_model``. Optionally, we can provide additional parameters, +such as: - ``compress_to_fp16`` - flag to perform model weights compression into FP16 data format. It may reduce the required space for model storage @@ -268,43 +256,59 @@ parameters, such as: and any other advanced options supported by model conversion Python API. More details can be found on this -`page `__ +`page `__ .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import Core, serialize + import openvino as ov # Create OpenVINO Core object instance - core = Core() + core = ov.Core() # Convert model to openvino.runtime.Model object - ov_model = mo.convert_model(model) + ov_model = ov.convert_model(model) # Save openvino.runtime.Model object on disk - serialize(ov_model, MODEL_DIR / f"{MODEL_NAME}_dynamic.xml") + ov.save_model(ov_model, MODEL_DIR / f"{MODEL_NAME}_dynamic.xml") ov_model +.. parsed-literal:: + + 2023-09-08 22:29:26.465675: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:29:26.497093: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 22:29:27.072823: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + + +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + + .. parsed-literal:: + ] outputs[ - + ]> -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -342,18 +346,17 @@ Select device from dropdown list for running inference using OpenVINO: + ] outputs[ - + ]> -Run OpenVINO Model Inference `⇑ <#top>`__ +Run OpenVINO Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Run model inference @@ -382,10 +385,9 @@ Run OpenVINO Model Inference `⇑ <#top>`__ 5: hamper - 2.35% -Benchmark OpenVINO Model Inference `⇑ <#top>`__ +Benchmark OpenVINO Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 %%timeit @@ -395,13 +397,12 @@ Benchmark OpenVINO Model Inference `⇑ <#top>`__ .. parsed-literal:: - 3.03 ms ± 45.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) + 3.16 ms ± 13.5 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) -Convert PyTorch Model with Static Input Shape `⇑ <#top>`__ +Convert PyTorch Model with Static Input Shape ############################################################################################################################### - The default conversion path preserves dynamic input shapes, in order if you want to convert the model with static shapes, you can explicitly specify it during conversion using the ``input_shape`` parameter or @@ -412,9 +413,9 @@ reshaping example please check the following .. code:: ipython3 # Convert model to openvino.runtime.Model object - ov_model = mo.convert_model(model, input_shape=[[1,3,224,224]]) + ov_model = ov.convert_model(model, input=[[1,3,224,224]]) # Save openvino.runtime.Model object on disk - serialize(ov_model, MODEL_DIR / f"{MODEL_NAME}_static.xml") + ov.save_model(ov_model, MODEL_DIR / f"{MODEL_NAME}_static.xml") ov_model @@ -422,20 +423,19 @@ reshaping example please check the following .. parsed-literal:: - + ] outputs[ - + ]> -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -464,10 +464,10 @@ Select device from dropdown list for running inference using OpenVINO: + ] outputs[ - + ]> @@ -476,10 +476,9 @@ Now, we can see that input of our converted model is tensor of shape [1, 3, 224, 224] instead of [?, 3, ?, ?] reported by previously converted model. -Run OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__ +Run OpenVINO Model Inference with Static Input Shape +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Run model inference @@ -508,7 +507,7 @@ Run OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__ 5: hamper - 2.35% -Benchmark OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__ +Benchmark OpenVINO Model Inference with Static Input Shape +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -520,10 +519,10 @@ Benchmark OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__ .. parsed-literal:: - 2.77 ms ± 12.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) + 2.81 ms ± 20.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) -Convert TorchScript Model to OpenVINO Intermediate Representation. `⇑ <#top>`__ +Convert TorchScript Model to OpenVINO Intermediate Representation ############################################################################################################################### TorchScript is a way to create serializable and optimizable models from @@ -544,10 +543,9 @@ There are 2 possible ways to convert the PyTorch model to TorchScript: Let’s consider both approaches and their conversion into OpenVINO IR. -Scripted Model `⇑ <#top>`__ +Scripted Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - ``torch.jit.script`` inspects model source code and compiles it to ``ScriptModule``. 
After compilation model can be used for inference or saved on disk using the ``torch.jit.save`` function and after that @@ -598,10 +596,9 @@ Reference `__ +Benchmark Scripted Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 %%timeit @@ -611,17 +608,19 @@ Benchmark Scripted Model Inference `⇑ <#top>`__ .. parsed-literal:: - 12.6 ms ± 17.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) + 12.6 ms ± 8.03 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) -Convert PyTorch Scripted Model to OpenVINO Intermediate -Representation `⇑ <#top>`__ The conversion step for the scripted model to -OpenVINO IR is similar to the original PyTorch model. +Convert PyTorch Scripted Model to OpenVINO Intermediate Representation +############################################################################################################################### + +The conversion step for the scripted model to OpenVINO IR is similar to +the original PyTorch model. .. code:: ipython3 # Convert model to openvino.runtime.Model object - ov_model = mo.convert_model(scripted_model) + ov_model = ov.convert_model(scripted_model) # Load OpenVINO model on device compiled_model = core.compile_model(ov_model, device.value) @@ -652,10 +651,9 @@ OpenVINO IR is similar to the original PyTorch model. 5: hamper - 2.35% -Benchmark OpenVINO Model Inference Converted From Scripted Model `⇑ <#top>`__ +Benchmark OpenVINO Model Inference Converted From Scripted Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 %%timeit @@ -665,13 +663,12 @@ Benchmark OpenVINO Model Inference Converted From Scripted Model `⇑ <#top>`__ .. parsed-literal:: - 3.07 ms ± 5.58 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) + 3.14 ms ± 8.99 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) -Traced Model `⇑ <#top>`__ +Traced Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Using ``torch.jit.trace``, you can turn an existing module or Python function into a TorchScript ``ScriptFunction`` or ``ScriptModule``. You must provide example inputs, and model will be executed, recording the @@ -726,10 +723,9 @@ original PyTorch model code definitions. 5: hamper - 2.35% -Benchmark Traced Model Inference `⇑ <#top>`__ +Benchmark Traced Model Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 %%timeit @@ -739,17 +735,19 @@ Benchmark Traced Model Inference `⇑ <#top>`__ .. parsed-literal:: - 12.7 ms ± 61.1 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) + 12.6 ms ± 60.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Convert PyTorch Traced Model to OpenVINO Intermediate Representation -`⇑ <#top>`__ The conversion step for a traced model to OpenVINO IR is -similar to the original PyTorch model. +############################################################################################################################### + +The conversion step for a traced model to OpenVINO IR is similar to the +original PyTorch model. .. 
code:: ipython3 # Convert model to openvino.runtime.Model object - ov_model = mo.convert_model(traced_model) + ov_model = ov.convert_model(traced_model) # Load OpenVINO model on device compiled_model = core.compile_model(ov_model, device.value) @@ -780,7 +778,7 @@ similar to the original PyTorch model. 5: hamper - 2.35% -Benchmark OpenVINO Model Inference Converted From Traced Model `⇑ <#top>`__ +Benchmark OpenVINO Model Inference Converted From Traced Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -792,5 +790,5 @@ Benchmark OpenVINO Model Inference Converted From Traced Model `⇑ <#top>`__ .. parsed-literal:: - 3.05 ms ± 6.85 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) + 2.82 ms ± 16.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) diff --git a/docs/notebooks/103-paddle-to-openvino-classification-with-output.rst b/docs/notebooks/103-paddle-to-openvino-classification-with-output.rst index 5c8a9fefd2b88d..c0315dfcc595df 100644 --- a/docs/notebooks/103-paddle-to-openvino-classification-with-output.rst +++ b/docs/notebooks/103-paddle-to-openvino-classification-with-output.rst @@ -1,8 +1,6 @@ Convert a PaddlePaddle Model to OpenVINO™ IR ============================================ - - This notebook shows how to convert a MobileNetV3 model from `PaddleHub `__, pre-trained on the `ImageNet `__ dataset, to OpenVINO IR. @@ -16,11 +14,7 @@ IR model. Source of the `model `__. - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -35,14 +29,12 @@ Source of the - `Select inference device <#select-inference-device>`__ - `References <#references>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### - -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import sys @@ -56,6 +48,8 @@ Imports `⇑ <#top>`__ !pip install -q paddleclas --no-deps !pip install -q "prettytable" "ujson" "visualdl>=2.2.0" "faiss-cpu>=1.7.1" + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" .. parsed-literal:: @@ -75,23 +69,23 @@ Imports `⇑ <#top>`__ import matplotlib.pyplot as plt import numpy as np + import openvino as ov from paddleclas import PaddleClas from PIL import Image - from openvino.runtime import Core + sys.path.append("../utils") from notebook_utils import download_file .. parsed-literal:: - 2023-08-15 22:28:07 INFO: Loading faiss with AVX2 support. - 2023-08-15 22:28:07 INFO: Successfully loaded faiss with AVX2 support. + 2023-09-08 22:30:09 INFO: Loading faiss with AVX2 support. + 2023-09-08 22:30:09 INFO: Successfully loaded faiss with AVX2 support. -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Set ``IMAGE_FILENAME`` to the filename of an image to use. Set ``MODEL_NAME`` to the PaddlePaddle model to download from PaddleHub. ``MODEL_NAME`` will also be the base name for the IR model. The notebook @@ -134,10 +128,9 @@ PaddleHub. This may take a while. Model Extracted to "./model". 
-Show Inference on PaddlePaddle Model `⇑ <#top>`__ +Show Inference on PaddlePaddle Model ############################################################################################################################### - In the next cell, we load the model, load and display an image, do inference on that image, and then show the top three prediction results. @@ -155,7 +148,7 @@ inference on that image, and then show the top three prediction results. .. parsed-literal:: - [2023/08/15 22:28:34] ppcls WARNING: The current running environment does not support the use of GPU. CPU has been used instead. + [2023/09/08 22:30:35] ppcls WARNING: The current running environment does not support the use of GPU. CPU has been used instead. Labrador retriever, 0.75138 German short-haired pointer, 0.02373 Great Dane, 0.01848 @@ -221,7 +214,7 @@ clipping values. .. parsed-literal:: - 2023-08-15 22:28:34 WARNING: Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). + 2023-09-08 22:30:35 WARNING: Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). .. parsed-literal:: @@ -233,7 +226,7 @@ clipping values. .. parsed-literal:: - + @@ -257,44 +250,39 @@ OpenVINO model. partition = line.split("\n")[0].partition(" ") class_id_map[int(partition[0])] = str(partition[-1]) -Convert the Model to OpenVINO IR Format `⇑ <#top>`__ +Convert the Model to OpenVINO IR Format ############################################################################################################################### - -Call the OpenVINO Model Optimizer Python API to convert the PaddlePaddle -model to OpenVINO IR, with FP32 precision. ``mo.convert_model`` function +Call the OpenVINO Model Conversion API to convert the PaddlePaddle model +to OpenVINO IR, with FP32 precision. ``ov.convert_model`` function accept path to PaddlePaddle model and returns OpenVINO Model class instance which represents this model. Obtained model is ready to use and -loading on device using ``compile_model`` or can be saved on disk using -``serialize`` function. See the `Model Optimizer Developer -Guide `__ -for more information about Model Optimizer. +loading on device using ``ov.compile_model`` or can be saved on disk +using ``ov.save_model`` function. See the `Model Conversion +Guide `__ +for more information about the Model Conversion API. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize - model_xml = Path(MODEL_NAME).with_suffix('.xml') if not model_xml.exists(): - ov_model = mo.convert_model("model/MobileNetV3_large_x1_0_infer/inference.pdmodel") - serialize(ov_model, str(model_xml)) + ov_model = ov.convert_model("model/MobileNetV3_large_x1_0_infer/inference.pdmodel") + ov.save_model(ov_model, str(model_xml)) else: print(f"{model_xml} already exists.") -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. 
code:: ipython3 import ipywidgets as widgets - ie = Core() + core = ov.Core() device = widgets.Dropdown( - options=ie.available_devices + ["AUTO"], + options=core.available_devices + ["AUTO"], value='AUTO', description='Device:', disabled=False, @@ -311,10 +299,9 @@ Select device from dropdown list for running inference using OpenVINO: -Show Inference on OpenVINO Model `⇑ <#top>`__ +Show Inference on OpenVINO Model ############################################################################################################################### - Load the IR model, get model information, load the image, do inference, convert the inference to a meaningful result, and show the output. See the `OpenVINO Runtime API @@ -324,7 +311,7 @@ information. .. code:: ipython3 # Load OpenVINO Runtime and OpenVINO IR model - core = Core() + core = ov.Core() model = core.read_model(model_xml) compiled_model = core.compile_model(model=model, device_name="CPU") @@ -351,24 +338,23 @@ information. .. parsed-literal:: - Labrador retriever, 0.75138 - German short-haired pointer, 0.02373 - Great Dane, 0.01848 + Labrador retriever, 0.74909 + German short-haired pointer, 0.02368 + Great Dane, 0.01873 .. image:: 103-paddle-to-openvino-classification-with-output_files/103-paddle-to-openvino-classification-with-output_23_1.png -Timing and Comparison `⇑ <#top>`__ +Timing and Comparison ############################################################################################################################### - Measure the time it takes to do inference on fifty images and compare the result. The timing information gives an indication of performance. For a fair comparison, we include the time it takes to process the image. For more accurate benchmarking, use the `OpenVINO benchmark -tool `__. +tool `__. Note that many optimizations are possible to improve the performance. .. code:: ipython3 @@ -380,7 +366,7 @@ Note that many optimizations are possible to improve the performance. .. code:: ipython3 # Show device information - core = Core() + core = ov.Core() devices = core.available_devices for device_name in devices: @@ -415,7 +401,7 @@ Note that many optimizations are possible to improve the performance. .. parsed-literal:: - PaddlePaddle model on CPU: 0.0071 seconds per image, FPS: 141.47 + PaddlePaddle model on CPU: 0.0070 seconds per image, FPS: 143.05 PaddlePaddle result: Labrador retriever, 0.75138 @@ -429,10 +415,9 @@ Note that many optimizations are possible to improve the performance. .. image:: 103-paddle-to-openvino-classification-with-output_files/103-paddle-to-openvino-classification-with-output_27_1.png -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -451,7 +436,7 @@ Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 # Show inference speed on OpenVINO IR model - compiled_model = ie.compile_model(model=model, device_name=device.value) + compiled_model = core.compile_model(model=model, device_name=device.value) output_layer = compiled_model.output(0) @@ -478,26 +463,23 @@ Select device from dropdown list for running inference using OpenVINO: .. 
parsed-literal:: - OpenVINO IR model in OpenVINO Runtime (AUTO): 0.0030 seconds per image, FPS: 337.97 + OpenVINO IR model in OpenVINO Runtime (AUTO): 0.0030 seconds per image, FPS: 337.80 OpenVINO result: - Labrador retriever, 0.75138 - German short-haired pointer, 0.02373 - Great Dane, 0.01848 - Rottweiler, 0.01435 - flat-coated retriever, 0.01144 + Labrador retriever, 0.74909 + German short-haired pointer, 0.02368 + Great Dane, 0.01873 + Rottweiler, 0.01448 + flat-coated retriever, 0.01153 .. image:: 103-paddle-to-openvino-classification-with-output_files/103-paddle-to-openvino-classification-with-output_30_1.png -References `⇑ <#top>`__ +References ############################################################################################################################### - - `PaddleClas `__ - `OpenVINO PaddlePaddle - support `__ -- `OpenVINO Model Optimizer - Documentation `__ + support `__ diff --git a/docs/notebooks/104-model-tools-with-output.rst b/docs/notebooks/104-model-tools-with-output.rst index 47cd5fd7e26980..c098431d60caff 100644 --- a/docs/notebooks/104-model-tools-with-output.rst +++ b/docs/notebooks/104-model-tools-with-output.rst @@ -1,16 +1,12 @@ Working with Open Model Zoo Models ================================== - - This tutorial shows how to download a model from `Open Model Zoo `__, convert it to OpenVINO™ IR format, show information about the model, and benchmark -the model. - -.. _top: +the model. -**Table of contents**: +**Table of contents:** - `OpenVINO and Open Model Zoo Tools <#openvino-and-open-model-zoo-tools>`__ - `Preparation <#preparation>`__ @@ -26,10 +22,9 @@ the model. - `Benchmark with Different Settings <#benchmark-with-different-settings>`__ -OpenVINO and Open Model Zoo Tools `⇑ <#top>`__ +OpenVINO and Open Model Zoo Tools ############################################################################################################################### - OpenVINO and Open Model Zoo tools are listed in the table below. +------------+--------------+-----------------------------------------+ @@ -48,13 +43,16 @@ OpenVINO and Open Model Zoo tools are listed in the table below. | Tool | app`` | computing inference time. | +------------+--------------+-----------------------------------------+ -Preparation `⇑ <#top>`__ -############################################################################################################################### +.. code:: ipython3 + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" -Model Name `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +Preparation +############################################################################################################################### +Model Name ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Set ``model_name`` to the name of the Open Model Zoo model to use in this notebook. Refer to the list of @@ -69,26 +67,24 @@ pre-trained models for a full list of models that can be used. Set # model_name = "resnet-50-pytorch" model_name = "mobilenet-v2-pytorch" -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 import json import sys from pathlib import Path + import openvino as ov from IPython.display import Markdown, display - from openvino.runtime import Core sys.path.append("../utils") from notebook_utils import DeviceNotFoundAlert, NotebookAlert -Settings and Configuration `⇑ <#top>`__ +Settings and Configuration +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Set the file and directory paths. By default, this notebook downloads models from Open Model Zoo to the ``open_model_zoo_models`` directory in your ``$HOME`` directory. On Windows, the $HOME directory is usually @@ -112,8 +108,8 @@ The following settings can be changed: precision = "FP16" # Check if an iGPU is available on this system to use with Benchmark App. - ie = Core() - gpu_available = "GPU" in ie.available_devices + core = ov.Core() + gpu_available = "GPU" in core.available_devices print( f"base_model_dir: {base_model_dir}, omz_cache_dir: {omz_cache_dir}, gpu_availble: {gpu_available}" @@ -125,10 +121,9 @@ The following settings can be changed: base_model_dir: model, omz_cache_dir: cache, gpu_availble: False -Download a Model from Open Model Zoo `⇑ <#top>`__ +Download a Model from Open Model Zoo ############################################################################################################################### - Specify, display and run the Model Downloader command to download the model. @@ -166,10 +161,9 @@ Downloading mobilenet-v2-pytorch… -Convert a Model to OpenVINO IR format `⇑ <#top>`__ +Convert a Model to OpenVINO IR format ############################################################################################################################### - Specify, display and run the Model Converter command to convert the model to OpenVINO IR format. Model conversion may take a while. The output of the Model Converter command will be displayed. When the @@ -204,27 +198,26 @@ Converting mobilenet-v2-pytorch… .. parsed-literal:: ========== Converting mobilenet-v2-pytorch to ONNX - Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-name=mobilenet_v2 --weights=model/public/mobilenet-v2-pytorch/mobilenet_v2-b0353104.pth --import-module=torchvision.models --input-shape=1,3,224,224 --output-file=model/public/mobilenet-v2-pytorch/mobilenet-v2.onnx --input-names=data --output-names=prob + Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-name=mobilenet_v2 --weights=model/public/mobilenet-v2-pytorch/mobilenet_v2-b0353104.pth --import-module=torchvision.models --input-shape=1,3,224,224 --output-file=model/public/mobilenet-v2-pytorch/mobilenet-v2.onnx --input-names=data --output-names=prob ONNX check passed successfully. 
========== Converting mobilenet-v2-pytorch to IR (FP16) - Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=/tmp/tmp3q4nxrwu --model_name=mobilenet-v2-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.624,57.12,57.375]' --reverse_input_channels --output=prob --input_model=model/public/mobilenet-v2-pytorch/mobilenet-v2.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True + Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=/tmp/tmp9rrzi7ey --model_name=mobilenet-v2-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.624,57.12,57.375]' --reverse_input_channels --output=prob --input_model=model/public/mobilenet-v2-pytorch/mobilenet-v2.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression by removing argument --compress_to_fp16 or set it to false --compress_to_fp16=False. - Find more information about compression to FP16 at https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_FP16_Compression.html + Find more information about compression to FP16 at https://docs.openvino.ai/latest/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /tmp/tmp3q4nxrwu/mobilenet-v2-pytorch.xml - [ SUCCESS ] BIN file: /tmp/tmp3q4nxrwu/mobilenet-v2-pytorch.bin + [ SUCCESS ] XML file: /tmp/tmp9rrzi7ey/mobilenet-v2-pytorch.xml + [ SUCCESS ] BIN file: /tmp/tmp9rrzi7ey/mobilenet-v2-pytorch.bin -Get Model Information `⇑ <#top>`__ +Get Model Information ############################################################################################################################### - The Info Dumper prints the following information for Open Model Zoo models: @@ -268,8 +261,8 @@ information in a dictionary. 'description': 'MobileNet V2 is image classification model pre-trained on ImageNet dataset. 
This is a PyTorch* implementation of MobileNetV2 architecture as described in the paper "Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation" .\nThe model input is a blob that consists of a single image of "1, 3, 224, 224" in "RGB" order.\nThe model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database.', 'framework': 'pytorch', 'license_url': 'https://raw.githubusercontent.com/pytorch/vision/master/LICENSE', - 'accuracy_config': '/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/mobilenet-v2-pytorch/accuracy-check.yml', - 'model_config': '/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/mobilenet-v2-pytorch/model.yml', + 'accuracy_config': '/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/mobilenet-v2-pytorch/accuracy-check.yml', + 'model_config': '/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/mobilenet-v2-pytorch/model.yml', 'precisions': ['FP16', 'FP32'], 'quantization_output_precisions': ['FP16-INT8', 'FP32-INT8'], 'subdirectory': 'public/mobilenet-v2-pytorch', @@ -301,10 +294,9 @@ file. model/public/mobilenet-v2-pytorch/FP16/mobilenet-v2-pytorch.xml exists: True -Run Benchmark Tool `⇑ <#top>`__ +Run Benchmark Tool ############################################################################################################################### - By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and throughput values (frames per second). @@ -339,18 +331,18 @@ seconds… [ INFO ] Parsing input parameters [Step 2/11] Loading OpenVINO Runtime [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] [Step 3/11] Setting device configuration [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. [Step 4/11] Reading model files [ INFO ] Loading model files - [ INFO ] Read model took 29.61 ms + [ INFO ] Read model took 23.78 ms [ INFO ] Original model I/O parameters: [ INFO ] Model inputs: [ INFO ] data (node: data) : f32 / [N,C,H,W] / [1,3,224,224] @@ -364,7 +356,7 @@ seconds… [ INFO ] Model outputs: [ INFO ] prob (node: prob) : f32 / [...] 
/ [1,1000] [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 154.76 ms + [ INFO ] Compile model took 127.94 ms [Step 8/11] Querying optimal runtime parameters [ INFO ] Model: [ INFO ] NETWORK_NAME: torch_jit @@ -381,28 +373,29 @@ seconds… [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE [ INFO ] ENABLE_HYPER_THREADING: True [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 [Step 9/11] Creating infer requests and preparing input tensors [ WARNING ] No input files were given for input 'data'!. This input will be filled with random values! [ INFO ] Fill input 'data' with random values [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 7.60 ms + [ INFO ] First inference took 6.41 ms [Step 11/11] Dumping statistics report [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 20076 iterations - [ INFO ] Duration: 15004.20 ms + [ INFO ] Count: 20136 iterations + [ INFO ] Duration: 15005.77 ms [ INFO ] Latency: - [ INFO ] Median: 4.34 ms - [ INFO ] Average: 4.35 ms - [ INFO ] Min: 2.53 ms - [ INFO ] Max: 11.71 ms - [ INFO ] Throughput: 1338.03 FPS + [ INFO ] Median: 4.33 ms + [ INFO ] Average: 4.33 ms + [ INFO ] Min: 2.33 ms + [ INFO ] Max: 12.04 ms + [ INFO ] Throughput: 1341.88 FPS -Benchmark with Different Settings `⇑ <#top>`__ +Benchmark with Different Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The ``benchmark_app`` tool displays logging information that is not always necessary. A more compact result is achieved when the output is parsed with ``json``. @@ -426,7 +419,7 @@ In the next cell, define the ``benchmark_model()`` function that calls the cell below that, you display available devices on the system. .. note:: - + In this notebook, ``benchmark_app`` runs for 15 seconds to give a quick indication of performance. For more accurate performance, it is recommended to run inference for at least one @@ -436,13 +429,12 @@ the cell below that, you display available devices on the system. command prompt where you have activated the ``openvino_env`` environment. - .. code:: ipython3 def benchmark_model(model_xml, device="CPU", seconds=60, api="async", batch=1): - ie = Core() + core = ov.Core() model_path = Path(model_xml) - if ("GPU" in device) and ("GPU" not in ie.available_devices): + if ("GPU" in device) and ("GPU" not in core.available_devices): DeviceNotFoundAlert("GPU") else: benchmark_command = f"benchmark_app -m {model_path} -d {device} -t {seconds} -api {api} -b {batch}" @@ -457,11 +449,11 @@ the cell below that, you display available devices on the system. .. code:: ipython3 - ie = Core() + core = ov.Core() # Show devices available for OpenVINO Runtime - for device in ie.available_devices: - device_name = ie.get_property(device, "FULL_DEVICE_NAME") + for device in core.available_devices: + device_name = core.get_property(device, "FULL_DEVICE_NAME") print(f"{device}: {device_name}") @@ -488,12 +480,7 @@ Benchmark command: .. 
parsed-literal:: command ended - Traceback (most recent call last): - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 327, in main - benchmark.set_allow_auto_batching(False) - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 63, in set_allow_auto_batching - self.core.set_property({'ALLOW_AUTO_BATCHING': flag}) - RuntimeError: Check 'false' failed at src/inference/src/core.cpp:238: + .. code:: ipython3 @@ -514,21 +501,14 @@ Benchmark command: .. parsed-literal:: command ended - Check 'false' failed at src/plugins/auto/src/plugin_config.cpp:55: - property: ALLOW_AUTO_BATCHING: not supported - Traceback (most recent call last): - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 327, in main - benchmark.set_allow_auto_batching(False) - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 63, in set_allow_auto_batching - self.core.set_property({'ALLOW_AUTO_BATCHING': flag}) - RuntimeError: Check 'false' failed at src/inference/src/core.cpp:238: - Check 'false' failed at src/plugins/auto/src/plugin_config.cpp:55: - property: ALLOW_AUTO_BATCHING: not supported + .. code:: ipython3 - benchmark_model(model_path, device="GPU", seconds=15, api="async") + benchmark_model(model_path, device="GPU", seconds=15, api="async") + + .. raw:: html @@ -537,7 +517,8 @@ Benchmark command: .. code:: ipython3 - benchmark_model(model_path, device="MULTI:CPU,GPU", seconds=15, api="async") + benchmark_model(model_path, device="MULTI:CPU,GPU", seconds=15, api="async") + .. raw:: html diff --git a/docs/notebooks/105-language-quantize-bert-with-output.rst b/docs/notebooks/105-language-quantize-bert-with-output.rst index 61c8d152e6bcef..a81b93db98bb4f 100644 --- a/docs/notebooks/105-language-quantize-bert-with-output.rst +++ b/docs/notebooks/105-language-quantize-bert-with-output.rst @@ -1,8 +1,6 @@ Quantize NLP models with Post-Training Quantization ​in NNCF ============================================================ - - This tutorial demonstrates how to apply ``INT8`` quantization to the Natural Language Processing model known as `BERT `__, using @@ -16,19 +14,14 @@ Research Paraphrase Corpus will be used. The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps: -- Download and prepare the BERT model and MRPC dataset. -- Define data loading and accuracy validation functionality. -- Prepare the model for quantization. -- Run optimization pipeline. -- Load and test quantized model. -- Compare the performance of the original, converted and quantized - models. - - +- Download and prepare the BERT model and MRPC dataset. +- Define data loading and accuracy validation functionality. +- Prepare the model for quantization. +- Run optimization pipeline. +- Load and test quantized model. +- Compare the performance of the original, converted and quantized models. -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -40,16 +33,17 @@ and datasets. 
It consists of the following steps: - `Select inference device <#select-inference-device>`__ - `Compare F1-score of FP32 and INT8 models <#compare-f1-score-of-fp32-and-int8-models>`__ -- `Compare Performance of the Original, Converted and Quantized Models <#compare-performance-of-the-original,-converted-and-quantized-models>`__ +- `Compare Performance of the Original, Converted and Quantized Models <#compare-performance-of-the-original-converted-and-quantized-models>`__ .. code:: ipython3 - !pip install -q "nncf>=2.5.0" datasets evaluate + !pip install -q "nncf>=2.5.0" + !pip install -q transformers datasets evaluate + !pip install -q "openvino==2023.1.0.dev20230811" -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import os @@ -60,16 +54,14 @@ Imports `⇑ <#top>`__ from typing import Iterable from typing import Any + import datasets + import evaluate import numpy as np - import torch - from openvino import runtime as ov - from openvino.runtime import serialize, Model, PartialShape import nncf from nncf.parameters import ModelType + import openvino as ov + import torch from transformers import BertForSequenceClassification, BertTokenizer - from openvino.tools.mo import convert_model - import datasets - import evaluate sys.path.append("../utils") from notebook_utils import download_file @@ -77,10 +69,10 @@ Imports `⇑ <#top>`__ .. parsed-literal:: - 2023-08-15 22:29:19.942802: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:29:19.975605: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 22:31:58.502786: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:31:58.537414: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:29:20.517786: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 22:31:59.115585: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT .. parsed-literal:: @@ -88,10 +80,9 @@ Imports `⇑ <#top>`__ INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - .. code:: ipython3 # Set the data and model directories, source URL and the filename of the model. 
@@ -104,10 +95,9 @@ Settings `⇑ <#top>`__ os.makedirs(DATA_DIR, exist_ok=True) os.makedirs(MODEL_DIR, exist_ok=True) -Prepare the Model `⇑ <#top>`__ +Prepare the Model ############################################################################################################################### - Perform the following: - Download and unpack pre-trained BERT model for MRPC by PyTorch. @@ -141,7 +131,7 @@ PyTorch model formats are supported: .. code:: ipython3 MAX_SEQ_LENGTH = 128 - input_shape = PartialShape([1, -1]) + input_shape = ov.PartialShape([1, -1]) ir_model_xml = Path(MODEL_DIR) / "bert_mrpc.xml" core = ov.Core() @@ -158,24 +148,32 @@ PyTorch model formats are supported: # Convert the PyTorch model to OpenVINO IR FP32. if not ir_model_xml.exists(): - model = convert_model(torch_model, example_input=inputs, input=input_info) - serialize(model, str(ir_model_xml)) + model = ov.convert_model(torch_model, example_input=inputs, input=input_info) + ov.save_model(model, str(ir_model_xml)) else: model = core.read_model(ir_model_xml) .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/jit/annotations.py:309: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either. + WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11. + + +.. parsed-literal:: + + [ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s. + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/jit/annotations.py:309: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either. warnings.warn("TorchScript will treat type annotations of Tensor " -Prepare the Dataset `⇑ <#top>`__ +Prepare the Dataset ############################################################################################################################### -We download the `General Language Understanding Evaluation (GLUE) `__ dataset -for the MRPC task from HuggingFace datasets. Then, we tokenize the data -with a pre-trained BERT tokenizer from HuggingFace. +We download the `General Language Understanding Evaluation +(GLUE) `__ dataset for the MRPC task from +HuggingFace datasets. Then, we tokenize the data with a pre-trained BERT +tokenizer from HuggingFace. .. code:: ipython3 @@ -194,10 +192,9 @@ with a pre-trained BERT tokenizer from HuggingFace. data_source = create_data_source() -Optimize model using NNCF Post-training Quantization API `⇑ <#top>`__ +Optimize model using NNCF Post-training Quantization API ############################################################################################################################### - `NNCF `__ provides a suite of advanced algorithms for Neural Networks inference optimization in OpenVINO with minimal accuracy drop. We will use 8-bit quantization in @@ -207,8 +204,7 @@ The optimization process contains the following steps: 1. Create a Dataset for quantization 2. Run ``nncf.quantize`` for getting an optimized model -3. 
Serialize OpenVINO IR model using ``openvino.runtime.serialize`` - function +3. Serialize OpenVINO IR model using ``openvino.save_model`` function .. code:: ipython3 @@ -235,188 +231,187 @@ The optimization process contains the following steps: INFO:nncf:202 ignored nodes was found by types in the NNCFGraph INFO:nncf:24 ignored nodes was found by name in the NNCFGraph - INFO:nncf:Not adding activation input quantizer for operation: 22 aten::rsub_16 - INFO:nncf:Not adding activation input quantizer for operation: 25 aten::rsub_17 - INFO:nncf:Not adding activation input quantizer for operation: 30 aten::mul_18 - INFO:nncf:Not adding activation input quantizer for operation: 11 aten::add_40 - INFO:nncf:Not adding activation input quantizer for operation: 14 aten::add__46 - INFO:nncf:Not adding activation input quantizer for operation: 17 aten::layer_norm_48 - 20 aten::layer_norm_49 - 23 aten::layer_norm_50 - - INFO:nncf:Not adding activation input quantizer for operation: 36 aten::add_108 - INFO:nncf:Not adding activation input quantizer for operation: 55 aten::softmax_109 - INFO:nncf:Not adding activation input quantizer for operation: 74 aten::matmul_110 - INFO:nncf:Not adding activation input quantizer for operation: 26 aten::add_126 - INFO:nncf:Not adding activation input quantizer for operation: 31 aten::layer_norm_128 - 47 aten::layer_norm_129 - 66 aten::layer_norm_130 - - INFO:nncf:Not adding activation input quantizer for operation: 85 aten::add_140 - INFO:nncf:Not adding activation input quantizer for operation: 103 aten::layer_norm_142 - 133 aten::layer_norm_143 - 171 aten::layer_norm_144 - - INFO:nncf:Not adding activation input quantizer for operation: 38 aten::add_202 - INFO:nncf:Not adding activation input quantizer for operation: 57 aten::softmax_203 - INFO:nncf:Not adding activation input quantizer for operation: 76 aten::matmul_204 - INFO:nncf:Not adding activation input quantizer for operation: 209 aten::add_220 - INFO:nncf:Not adding activation input quantizer for operation: 236 aten::layer_norm_222 - 250 aten::layer_norm_223 - 267 aten::layer_norm_224 - - INFO:nncf:Not adding activation input quantizer for operation: 287 aten::add_234 - INFO:nncf:Not adding activation input quantizer for operation: 316 aten::layer_norm_236 - 342 aten::layer_norm_237 - 364 aten::layer_norm_238 - - INFO:nncf:Not adding activation input quantizer for operation: 39 aten::add_296 - INFO:nncf:Not adding activation input quantizer for operation: 58 aten::softmax_297 - INFO:nncf:Not adding activation input quantizer for operation: 77 aten::matmul_298 - INFO:nncf:Not adding activation input quantizer for operation: 221 aten::add_314 - INFO:nncf:Not adding activation input quantizer for operation: 242 aten::layer_norm_316 - 259 aten::layer_norm_317 - 279 aten::layer_norm_318 - - INFO:nncf:Not adding activation input quantizer for operation: 300 aten::add_328 - INFO:nncf:Not adding activation input quantizer for operation: 326 aten::layer_norm_330 - 348 aten::layer_norm_331 - 370 aten::layer_norm_332 - - INFO:nncf:Not adding activation input quantizer for operation: 40 aten::add_390 - INFO:nncf:Not adding activation input quantizer for operation: 59 aten::softmax_391 - INFO:nncf:Not adding activation input quantizer for operation: 78 aten::matmul_392 - INFO:nncf:Not adding activation input quantizer for operation: 223 aten::add_408 - INFO:nncf:Not adding activation input quantizer for operation: 243 aten::layer_norm_410 - 260 aten::layer_norm_411 - 280 aten::layer_norm_412 - - INFO:nncf:Not adding 
activation input quantizer for operation: 302 aten::add_422 - INFO:nncf:Not adding activation input quantizer for operation: 328 aten::layer_norm_424 - 350 aten::layer_norm_425 - 372 aten::layer_norm_426 - - INFO:nncf:Not adding activation input quantizer for operation: 41 aten::add_484 - INFO:nncf:Not adding activation input quantizer for operation: 60 aten::softmax_485 - INFO:nncf:Not adding activation input quantizer for operation: 79 aten::matmul_486 - INFO:nncf:Not adding activation input quantizer for operation: 225 aten::add_502 - INFO:nncf:Not adding activation input quantizer for operation: 244 aten::layer_norm_504 - 261 aten::layer_norm_505 - 281 aten::layer_norm_506 - - INFO:nncf:Not adding activation input quantizer for operation: 304 aten::add_516 - INFO:nncf:Not adding activation input quantizer for operation: 330 aten::layer_norm_518 - 352 aten::layer_norm_519 - 374 aten::layer_norm_520 - - INFO:nncf:Not adding activation input quantizer for operation: 42 aten::add_578 - INFO:nncf:Not adding activation input quantizer for operation: 61 aten::softmax_579 - INFO:nncf:Not adding activation input quantizer for operation: 80 aten::matmul_580 - INFO:nncf:Not adding activation input quantizer for operation: 227 aten::add_596 - INFO:nncf:Not adding activation input quantizer for operation: 245 aten::layer_norm_598 - 262 aten::layer_norm_599 - 282 aten::layer_norm_600 - - INFO:nncf:Not adding activation input quantizer for operation: 306 aten::add_610 - INFO:nncf:Not adding activation input quantizer for operation: 332 aten::layer_norm_612 - 354 aten::layer_norm_613 - 376 aten::layer_norm_614 - - INFO:nncf:Not adding activation input quantizer for operation: 43 aten::add_672 - INFO:nncf:Not adding activation input quantizer for operation: 62 aten::softmax_673 - INFO:nncf:Not adding activation input quantizer for operation: 81 aten::matmul_674 - INFO:nncf:Not adding activation input quantizer for operation: 229 aten::add_690 - INFO:nncf:Not adding activation input quantizer for operation: 246 aten::layer_norm_692 - 263 aten::layer_norm_693 - 283 aten::layer_norm_694 - - INFO:nncf:Not adding activation input quantizer for operation: 308 aten::add_704 - INFO:nncf:Not adding activation input quantizer for operation: 334 aten::layer_norm_706 - 356 aten::layer_norm_707 - 378 aten::layer_norm_708 - - INFO:nncf:Not adding activation input quantizer for operation: 44 aten::add_766 - INFO:nncf:Not adding activation input quantizer for operation: 63 aten::softmax_767 - INFO:nncf:Not adding activation input quantizer for operation: 82 aten::matmul_768 - INFO:nncf:Not adding activation input quantizer for operation: 231 aten::add_784 - INFO:nncf:Not adding activation input quantizer for operation: 247 aten::layer_norm_786 - 264 aten::layer_norm_787 - 284 aten::layer_norm_788 - - INFO:nncf:Not adding activation input quantizer for operation: 310 aten::add_798 - INFO:nncf:Not adding activation input quantizer for operation: 336 aten::layer_norm_800 - 358 aten::layer_norm_801 - 380 aten::layer_norm_802 - - INFO:nncf:Not adding activation input quantizer for operation: 45 aten::add_860 - INFO:nncf:Not adding activation input quantizer for operation: 64 aten::softmax_861 - INFO:nncf:Not adding activation input quantizer for operation: 83 aten::matmul_862 - INFO:nncf:Not adding activation input quantizer for operation: 233 aten::add_878 - INFO:nncf:Not adding activation input quantizer for operation: 248 aten::layer_norm_880 - 265 aten::layer_norm_881 - 285 aten::layer_norm_882 - - INFO:nncf:Not adding 
activation input quantizer for operation: 312 aten::add_892 - INFO:nncf:Not adding activation input quantizer for operation: 338 aten::layer_norm_894 - 360 aten::layer_norm_895 - 382 aten::layer_norm_896 - - INFO:nncf:Not adding activation input quantizer for operation: 46 aten::add_954 - INFO:nncf:Not adding activation input quantizer for operation: 65 aten::softmax_955 - INFO:nncf:Not adding activation input quantizer for operation: 84 aten::matmul_956 - INFO:nncf:Not adding activation input quantizer for operation: 235 aten::add_972 - INFO:nncf:Not adding activation input quantizer for operation: 249 aten::layer_norm_974 - 266 aten::layer_norm_975 - 286 aten::layer_norm_976 - - INFO:nncf:Not adding activation input quantizer for operation: 314 aten::add_986 - INFO:nncf:Not adding activation input quantizer for operation: 340 aten::layer_norm_988 - 362 aten::layer_norm_989 - 384 aten::layer_norm_990 - - INFO:nncf:Not adding activation input quantizer for operation: 35 aten::add_1048 - INFO:nncf:Not adding activation input quantizer for operation: 54 aten::softmax_1049 - INFO:nncf:Not adding activation input quantizer for operation: 73 aten::matmul_1050 - INFO:nncf:Not adding activation input quantizer for operation: 215 aten::add_1066 - INFO:nncf:Not adding activation input quantizer for operation: 240 aten::layer_norm_1068 - 257 aten::layer_norm_1069 - 277 aten::layer_norm_1070 - - INFO:nncf:Not adding activation input quantizer for operation: 296 aten::add_1080 - INFO:nncf:Not adding activation input quantizer for operation: 322 aten::layer_norm_1082 - 344 aten::layer_norm_1083 - 366 aten::layer_norm_1084 - - INFO:nncf:Not adding activation input quantizer for operation: 37 aten::add_1142 - INFO:nncf:Not adding activation input quantizer for operation: 56 aten::softmax_1143 - INFO:nncf:Not adding activation input quantizer for operation: 75 aten::matmul_1144 - INFO:nncf:Not adding activation input quantizer for operation: 218 aten::add_1160 - INFO:nncf:Not adding activation input quantizer for operation: 241 aten::layer_norm_1162 - 258 aten::layer_norm_1163 - 278 aten::layer_norm_1164 - - INFO:nncf:Not adding activation input quantizer for operation: 298 aten::add_1174 - INFO:nncf:Not adding activation input quantizer for operation: 324 aten::layer_norm_1176 - 346 aten::layer_norm_1177 - 368 aten::layer_norm_1178 + INFO:nncf:Not adding activation input quantizer for operation: 19 __module.bert/aten::rsub/Multiply + INFO:nncf:Not adding activation input quantizer for operation: 22 __module.bert/aten::rsub/Subtract + INFO:nncf:Not adding activation input quantizer for operation: 25 __module.bert/aten::mul/Multiply + INFO:nncf:Not adding activation input quantizer for operation: 11 __module.bert.embeddings/aten::add/Add_15 + INFO:nncf:Not adding activation input quantizer for operation: 14 __module.bert.embeddings/aten::add_/Add + INFO:nncf:Not adding activation input quantizer for operation: 17 __module.bert.embeddings.LayerNorm/aten::layer_norm/MVN + 20 __module.bert.embeddings.LayerNorm/aten::layer_norm/Multiply + 23 __module.bert.embeddings.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 30 __module.bert.encoder.layer.0.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 46 __module.bert.encoder.layer.0.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 65 __module.bert.encoder.layer.0.attention.self/aten::matmul/MatMul_54 + INFO:nncf:Not 
adding activation input quantizer for operation: 26 __module.bert.encoder.layer.0.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 42 __module.bert.encoder.layer.0.attention.output.LayerNorm/aten::layer_norm/MVN + 58 __module.bert.encoder.layer.0.attention.output.LayerNorm/aten::layer_norm/Multiply + 77 __module.bert.encoder.layer.0.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 97 __module.bert.encoder.layer.0.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 127 __module.bert.encoder.layer.0.output.LayerNorm/aten::layer_norm/MVN + 154 __module.bert.encoder.layer.0.output.LayerNorm/aten::layer_norm/Multiply + 180 __module.bert.encoder.layer.0.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 31 __module.bert.encoder.layer.1.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 47 __module.bert.encoder.layer.1.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 66 __module.bert.encoder.layer.1.attention.self/aten::matmul/MatMul_107 + INFO:nncf:Not adding activation input quantizer for operation: 181 __module.bert.encoder.layer.1.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 196 __module.bert.encoder.layer.1.attention.output.LayerNorm/aten::layer_norm/MVN + 210 __module.bert.encoder.layer.1.attention.output.LayerNorm/aten::layer_norm/Multiply + 227 __module.bert.encoder.layer.1.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 245 __module.bert.encoder.layer.1.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 271 __module.bert.encoder.layer.1.output.LayerNorm/aten::layer_norm/MVN + 294 __module.bert.encoder.layer.1.output.LayerNorm/aten::layer_norm/Multiply + 316 __module.bert.encoder.layer.1.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 34 __module.bert.encoder.layer.2.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 50 __module.bert.encoder.layer.2.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 69 __module.bert.encoder.layer.2.attention.self/aten::matmul/MatMul_160 + INFO:nncf:Not adding activation input quantizer for operation: 184 __module.bert.encoder.layer.2.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 199 __module.bert.encoder.layer.2.attention.output.LayerNorm/aten::layer_norm/MVN + 213 __module.bert.encoder.layer.2.attention.output.LayerNorm/aten::layer_norm/Multiply + 230 __module.bert.encoder.layer.2.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 251 __module.bert.encoder.layer.2.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 277 __module.bert.encoder.layer.2.output.LayerNorm/aten::layer_norm/MVN + 300 __module.bert.encoder.layer.2.output.LayerNorm/aten::layer_norm/Multiply + 322 __module.bert.encoder.layer.2.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 35 __module.bert.encoder.layer.3.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for 
operation: 51 __module.bert.encoder.layer.3.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 70 __module.bert.encoder.layer.3.attention.self/aten::matmul/MatMul_213 + INFO:nncf:Not adding activation input quantizer for operation: 185 __module.bert.encoder.layer.3.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 200 __module.bert.encoder.layer.3.attention.output.LayerNorm/aten::layer_norm/MVN + 214 __module.bert.encoder.layer.3.attention.output.LayerNorm/aten::layer_norm/Multiply + 231 __module.bert.encoder.layer.3.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 253 __module.bert.encoder.layer.3.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 279 __module.bert.encoder.layer.3.output.LayerNorm/aten::layer_norm/MVN + 302 __module.bert.encoder.layer.3.output.LayerNorm/aten::layer_norm/Multiply + 324 __module.bert.encoder.layer.3.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 36 __module.bert.encoder.layer.4.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 52 __module.bert.encoder.layer.4.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 71 __module.bert.encoder.layer.4.attention.self/aten::matmul/MatMul_266 + INFO:nncf:Not adding activation input quantizer for operation: 186 __module.bert.encoder.layer.4.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 201 __module.bert.encoder.layer.4.attention.output.LayerNorm/aten::layer_norm/MVN + 215 __module.bert.encoder.layer.4.attention.output.LayerNorm/aten::layer_norm/Multiply + 232 __module.bert.encoder.layer.4.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 255 __module.bert.encoder.layer.4.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 281 __module.bert.encoder.layer.4.output.LayerNorm/aten::layer_norm/MVN + 304 __module.bert.encoder.layer.4.output.LayerNorm/aten::layer_norm/Multiply + 326 __module.bert.encoder.layer.4.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 37 __module.bert.encoder.layer.5.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 53 __module.bert.encoder.layer.5.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 72 __module.bert.encoder.layer.5.attention.self/aten::matmul/MatMul_319 + INFO:nncf:Not adding activation input quantizer for operation: 187 __module.bert.encoder.layer.5.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 202 __module.bert.encoder.layer.5.attention.output.LayerNorm/aten::layer_norm/MVN + 216 __module.bert.encoder.layer.5.attention.output.LayerNorm/aten::layer_norm/Multiply + 233 __module.bert.encoder.layer.5.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 257 __module.bert.encoder.layer.5.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 283 __module.bert.encoder.layer.5.output.LayerNorm/aten::layer_norm/MVN + 306 __module.bert.encoder.layer.5.output.LayerNorm/aten::layer_norm/Multiply + 328 
__module.bert.encoder.layer.5.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 38 __module.bert.encoder.layer.6.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 54 __module.bert.encoder.layer.6.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 73 __module.bert.encoder.layer.6.attention.self/aten::matmul/MatMul_372 + INFO:nncf:Not adding activation input quantizer for operation: 188 __module.bert.encoder.layer.6.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 203 __module.bert.encoder.layer.6.attention.output.LayerNorm/aten::layer_norm/MVN + 217 __module.bert.encoder.layer.6.attention.output.LayerNorm/aten::layer_norm/Multiply + 234 __module.bert.encoder.layer.6.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 259 __module.bert.encoder.layer.6.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 285 __module.bert.encoder.layer.6.output.LayerNorm/aten::layer_norm/MVN + 308 __module.bert.encoder.layer.6.output.LayerNorm/aten::layer_norm/Multiply + 330 __module.bert.encoder.layer.6.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 39 __module.bert.encoder.layer.7.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 55 __module.bert.encoder.layer.7.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 74 __module.bert.encoder.layer.7.attention.self/aten::matmul/MatMul_425 + INFO:nncf:Not adding activation input quantizer for operation: 189 __module.bert.encoder.layer.7.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 204 __module.bert.encoder.layer.7.attention.output.LayerNorm/aten::layer_norm/MVN + 218 __module.bert.encoder.layer.7.attention.output.LayerNorm/aten::layer_norm/Multiply + 235 __module.bert.encoder.layer.7.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 261 __module.bert.encoder.layer.7.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 287 __module.bert.encoder.layer.7.output.LayerNorm/aten::layer_norm/MVN + 310 __module.bert.encoder.layer.7.output.LayerNorm/aten::layer_norm/Multiply + 332 __module.bert.encoder.layer.7.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 40 __module.bert.encoder.layer.8.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 56 __module.bert.encoder.layer.8.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 75 __module.bert.encoder.layer.8.attention.self/aten::matmul/MatMul_478 + INFO:nncf:Not adding activation input quantizer for operation: 190 __module.bert.encoder.layer.8.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 205 __module.bert.encoder.layer.8.attention.output.LayerNorm/aten::layer_norm/MVN + 219 __module.bert.encoder.layer.8.attention.output.LayerNorm/aten::layer_norm/Multiply + 236 __module.bert.encoder.layer.8.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 263 
__module.bert.encoder.layer.8.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 289 __module.bert.encoder.layer.8.output.LayerNorm/aten::layer_norm/MVN + 312 __module.bert.encoder.layer.8.output.LayerNorm/aten::layer_norm/Multiply + 334 __module.bert.encoder.layer.8.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 41 __module.bert.encoder.layer.9.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 57 __module.bert.encoder.layer.9.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 76 __module.bert.encoder.layer.9.attention.self/aten::matmul/MatMul_531 + INFO:nncf:Not adding activation input quantizer for operation: 191 __module.bert.encoder.layer.9.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 206 __module.bert.encoder.layer.9.attention.output.LayerNorm/aten::layer_norm/MVN + 220 __module.bert.encoder.layer.9.attention.output.LayerNorm/aten::layer_norm/Multiply + 237 __module.bert.encoder.layer.9.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 265 __module.bert.encoder.layer.9.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 291 __module.bert.encoder.layer.9.output.LayerNorm/aten::layer_norm/MVN + 314 __module.bert.encoder.layer.9.output.LayerNorm/aten::layer_norm/Multiply + 336 __module.bert.encoder.layer.9.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 32 __module.bert.encoder.layer.10.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 48 __module.bert.encoder.layer.10.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 67 __module.bert.encoder.layer.10.attention.self/aten::matmul/MatMul_584 + INFO:nncf:Not adding activation input quantizer for operation: 182 __module.bert.encoder.layer.10.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 197 __module.bert.encoder.layer.10.attention.output.LayerNorm/aten::layer_norm/MVN + 211 __module.bert.encoder.layer.10.attention.output.LayerNorm/aten::layer_norm/Multiply + 228 __module.bert.encoder.layer.10.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 247 __module.bert.encoder.layer.10.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 273 __module.bert.encoder.layer.10.output.LayerNorm/aten::layer_norm/MVN + 296 __module.bert.encoder.layer.10.output.LayerNorm/aten::layer_norm/Multiply + 318 __module.bert.encoder.layer.10.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 33 __module.bert.encoder.layer.11.attention.self/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 49 __module.bert.encoder.layer.11.attention.self/aten::softmax/Softmax + INFO:nncf:Not adding activation input quantizer for operation: 68 __module.bert.encoder.layer.11.attention.self/aten::matmul/MatMul_637 + INFO:nncf:Not adding activation input quantizer for operation: 183 __module.bert.encoder.layer.11.attention.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 198 
__module.bert.encoder.layer.11.attention.output.LayerNorm/aten::layer_norm/MVN + 212 __module.bert.encoder.layer.11.attention.output.LayerNorm/aten::layer_norm/Multiply + 229 __module.bert.encoder.layer.11.attention.output.LayerNorm/aten::layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 249 __module.bert.encoder.layer.11.output/aten::add/Add + INFO:nncf:Not adding activation input quantizer for operation: 275 __module.bert.encoder.layer.11.output.LayerNorm/aten::layer_norm/MVN + 298 __module.bert.encoder.layer.11.output.LayerNorm/aten::layer_norm/Multiply + 320 __module.bert.encoder.layer.11.output.LayerNorm/aten::layer_norm/Add .. parsed-literal:: - Statistics collection: 100%|██████████| 300/300 [00:24<00:00, 12.04it/s] - Biases correction: 100%|██████████| 74/74 [00:25<00:00, 2.95it/s] + Statistics collection: 100%|██████████| 300/300 [00:25<00:00, 11.87it/s] + Biases correction: 100%|██████████| 74/74 [00:25<00:00, 2.92it/s] .. code:: ipython3 compressed_model_xml = Path(MODEL_DIR) / "quantized_bert_mrpc.xml" - ov.serialize(quantized_model, compressed_model_xml) + ov.save_model(quantized_model, compressed_model_xml) -Load and Test OpenVINO Model `⇑ <#top>`__ +Load and Test OpenVINO Model ############################################################################################################################### - To load and test converted model, perform the following: - Load the model and compile it for selected device. @@ -424,10 +419,9 @@ To load and test converted model, perform the following: - Run the inference. - Get the answer from the model output. -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -484,13 +478,12 @@ changing ``sample_idx`` to another value (from 0 to 407). The same meaning: yes -Compare F1-score of FP32 and INT8 models `⇑ <#top>`__ +Compare F1-score of FP32 and INT8 models ############################################################################################################################### - .. code:: ipython3 - def validate(model: Model, dataset: Iterable[Any]) -> float: + def validate(model: ov.Model, dataset: Iterable[Any]) -> float: """ Evaluate the model on GLUE dataset. Returns F1 score metric. @@ -526,10 +519,10 @@ Compare F1-score of FP32 and INT8 models `⇑ <#top>`__ Checking the accuracy of the original model: F1 score: 0.9019 Checking the accuracy of the quantized model: - F1 score: 0.8995 + F1 score: 0.8983 -Compare Performance of the Original, Converted and Quantized Models. `⇑ <#top>`__ +Compare Performance of the Original, Converted and Quantized Models ############################################################################################################################### Compare the original PyTorch model with OpenVINO converted and quantized @@ -587,14 +580,14 @@ Frames Per Second (FPS) for images. .. 
parsed-literal:: - PyTorch model on CPU: 0.070 seconds per sentence, SPS: 14.22 - IR FP32 model in OpenVINO Runtime/AUTO: 0.021 seconds per sentence, SPS: 48.42 - OpenVINO IR INT8 model in OpenVINO Runtime/AUTO: 0.010 seconds per sentence, SPS: 98.01 + PyTorch model on CPU: 0.073 seconds per sentence, SPS: 13.77 + IR FP32 model in OpenVINO Runtime/AUTO: 0.021 seconds per sentence, SPS: 46.77 + OpenVINO IR INT8 model in OpenVINO Runtime/AUTO: 0.010 seconds per sentence, SPS: 98.85 Finally, measure the inference performance of OpenVINO ``FP32`` and -``INT8`` models. For this purpose, use -`Benchmark Tool `__ +``INT8`` models. For this purpose, use `Benchmark +Tool `__ in OpenVINO. .. note:: @@ -608,11 +601,10 @@ in OpenVINO. Run ``benchmark_app --help`` to see an overview of all command-line options. - .. code:: ipython3 # Inference FP32 model (OpenVINO IR) - ! benchmark_app -m $ir_model_xml -shape [1,128],[1,128],[1,128] -d device.value -api sync + !benchmark_app -m $ir_model_xml -shape [1,128],[1,128],[1,128] -d device.value -api sync .. parsed-literal:: @@ -622,19 +614,23 @@ in OpenVINO. [Step 2/11] Loading OpenVINO Runtime [ WARNING ] Default duration 120 seconds is used for unknown device device.value [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: - [ ERROR ] Check 'false' failed at src/inference/src/core.cpp:84: + [ ERROR ] Exception from src/inference/src/core.cpp:84: + Exception from src/inference/src/dev/core_impl.cpp:565: Device with "device" name is not registered in the OpenVINO Runtime + Traceback (most recent call last): - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 103, in main + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 102, in main benchmark.print_version_info() - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 48, in print_version_info + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 48, in print_version_info for device, version in self.core.get_versions(self.device).items(): - RuntimeError: Check 'false' failed at src/inference/src/core.cpp:84: + RuntimeError: Exception from src/inference/src/core.cpp:84: + Exception from src/inference/src/dev/core_impl.cpp:565: Device with "device" name is not registered in the OpenVINO Runtime + .. code:: ipython3 @@ -650,17 +646,21 @@ in OpenVINO. [Step 2/11] Loading OpenVINO Runtime [ WARNING ] Default duration 120 seconds is used for unknown device device.value [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 
2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: - [ ERROR ] Check 'false' failed at src/inference/src/core.cpp:84: + [ ERROR ] Exception from src/inference/src/core.cpp:84: + Exception from src/inference/src/dev/core_impl.cpp:565: Device with "device" name is not registered in the OpenVINO Runtime + Traceback (most recent call last): - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 103, in main + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 102, in main benchmark.print_version_info() - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 48, in print_version_info + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 48, in print_version_info for device, version in self.core.get_versions(self.device).items(): - RuntimeError: Check 'false' failed at src/inference/src/core.cpp:84: + RuntimeError: Exception from src/inference/src/core.cpp:84: + Exception from src/inference/src/dev/core_impl.cpp:565: Device with "device" name is not registered in the OpenVINO Runtime + diff --git a/docs/notebooks/106-auto-device-with-output.rst b/docs/notebooks/106-auto-device-with-output.rst index b1e37e02f7a376..bcb30c004d8771 100644 --- a/docs/notebooks/106-auto-device-with-output.rst +++ b/docs/notebooks/106-auto-device-with-output.rst @@ -2,19 +2,19 @@ Automatic Device Selection with OpenVINO™ ========================================= The `Auto -device `__ +device `__ (or AUTO in short) selects the most suitable device for inference by considering the model precision, power efficiency and processing capability of the available `compute -devices `__. +devices `__. The model precision (such as ``FP32``, ``FP16``, ``INT8``, etc.) is the first consideration to filter out the devices that cannot run the network efficiently. Next, if dedicated accelerators are available, these devices are preferred (for example, integrated and discrete -`GPU `__). -`CPU `__ +`GPU `__). +`CPU `__ is used as the default “fallback device”. Keep in mind that AUTO makes this selection only once, during the loading of a model. @@ -30,11 +30,7 @@ first inference. auto - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Import modules and create Core <#import-modules-and-create-core>`__ - `Convert the model to OpenVINO IR format <#convert-the-model-to-openvino-ir-format>`__ @@ -56,20 +52,26 @@ first inference. - `Inference with LATENCY hint <#inference-with-latency-hint>`__ - `Difference in FPS and latency <#difference-in-fps-and-latency>`__ -Import modules and create Core `⇑ <#top>`__ +Import modules and create Core ############################################################################################################################### +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" .. 
code:: ipython3 import time import sys + + import openvino as ov + from IPython.display import Markdown, display - from openvino.runtime import Core, CompiledModel, AsyncInferQueue, InferRequest - ie = Core() + core = ov.Core() - if "GPU" not in ie.available_devices: + if "GPU" not in core.available_devices: display(Markdown('
Warning: A GPU device is not available. This notebook requires a GPU device to have meaningful results.
')) @@ -80,34 +82,32 @@ Import modules and create Core `⇑ <#top>`__ device to have meaningful results. -Convert the model to OpenVINO IR format `⇑ <#top>`__ +Convert the model to OpenVINO IR format ############################################################################################################################### - This tutorial uses `resnet50 `__ model from `torchvision `__ library. ResNet 50 is image classification model pre-trained on ImageNet -dataset described in paper `“Deep Residual Learning for Image Recognition” `__. From OpenVINO +dataset described in paper `“Deep Residual Learning for Image +Recognition” `__. From OpenVINO 2023.0, we can directly convert a model from the PyTorch format to the OpenVINO IR format using model conversion API. To convert model, we -should provide model object instance into ``mo.convert_model`` function, +should provide model object instance into ``ov.convert_model`` function, optionally, we can specify input shape for conversion (by default models -from PyTorch converted with dynamic input shapes). ``mo.convert_model`` +from PyTorch converted with dynamic input shapes). ``ov.convert_model`` returns openvino.runtime.Model object ready to be loaded on a device -with ``openvino.runtime.Core().compile_model`` or serialized for next -usage with ``openvino.runtime.serialize``. +with ``ov.compile_model`` or serialized for next usage with +``ov.save_model``. For more information about model conversion API, see this -`page `__. +`page `__. .. code:: ipython3 import torchvision from pathlib import Path - from openvino.tools import mo - from openvino.runtime import serialize base_model_dir = Path("./model") base_model_dir.mkdir(exist_ok=True) @@ -115,12 +115,30 @@ For more information about model conversion API, see this if not model_path.exists(): pt_model = torchvision.models.resnet50(weights="DEFAULT") - ov_model = mo.convert_model(pt_model, input_shape=[[1,3,224,224]], compress_to_fp16=True) - serialize(ov_model, str(model_path)) + ov_model = ov.convert_model(pt_model, input=[[1,3,224,224]]) + ov.save_model(ov_model, str(model_path)) print("IR model saved to {}".format(model_path)) else: print("Read IR model from {}".format(model_path)) - ov_model = ie.read_model(model_path) + ov_model = core.read_model(model_path) + + +.. parsed-literal:: + + 2023-09-08 22:36:23.476933: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:36:23.509668: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 22:36:24.096790: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + + +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' .. 
parsed-literal:: @@ -128,30 +146,37 @@ For more information about model conversion API, see this IR model saved to model/resnet50.xml -(1) Simplify selection logic `⇑ <#top>`__ +(1) Simplify selection logic ############################################################################################################################### - -Default behavior of Core::compile_model API without device_name `⇑ <#top>`__ +Default behavior of Core::compile_model API without device_name +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -By default, ``compile_model`` API will select **AUTO** as ``device_name`` if no -device is specified. +By default, ``compile_model`` API will select **AUTO** as +``device_name`` if no device is specified. .. code:: ipython3 # Set LOG_LEVEL to LOG_INFO. - ie.set_property("AUTO", {"LOG_LEVEL":"LOG_INFO"}) + core.set_property("AUTO", {"LOG_LEVEL":"LOG_INFO"}) # Load the model onto the target device. - compiled_model = ie.compile_model(ov_model) + compiled_model = core.compile_model(ov_model) - if isinstance(compiled_model, CompiledModel): + if isinstance(compiled_model, ov.CompiledModel): print("Successfully compiled model without a device_name.") .. parsed-literal:: + [22:36:26.6713]I[plugin.cpp:537][AUTO] device:CPU, config:PERFORMANCE_HINT=LATENCY + [22:36:26.6714]I[plugin.cpp:537][AUTO] device:CPU, config:PERFORMANCE_HINT_NUM_REQUESTS=0 + [22:36:26.6714]I[plugin.cpp:537][AUTO] device:CPU, config:PERF_COUNT=NO + [22:36:26.6714]I[plugin.cpp:542][AUTO] device:CPU, priority:0 + [22:36:26.6716]I[schedule.cpp:17][AUTO] scheduler starting + [22:36:26.6717]I[auto_schedule.cpp:131][AUTO] select device:CPU + [22:36:26.8157]I[auto_schedule.cpp:109][AUTO] device:CPU compiling model finished + [22:36:26.8158]I[plugin.cpp:572][AUTO] underlying hardware does not support hardware context Successfully compiled model without a device_name. @@ -165,22 +190,23 @@ device is specified. .. parsed-literal:: Deleted compiled_model + [22:36:26.8279]I[schedule.cpp:303][AUTO] scheduler ending -Explicitly pass AUTO as device_name to Core::compile_model API `⇑ <#top>`__ +Explicitly pass AUTO as device_name to Core::compile_model API +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -It is optional, but passing AUTO explicitly as -``device_name`` may improve readability of your code. +It is optional, but passing AUTO explicitly as ``device_name`` may +improve readability of your code. .. code:: ipython3 # Set LOG_LEVEL to LOG_NONE. - ie.set_property("AUTO", {"LOG_LEVEL":"LOG_NONE"}) + core.set_property("AUTO", {"LOG_LEVEL":"LOG_NONE"}) - compiled_model = ie.compile_model(model=ov_model, device_name="AUTO") + compiled_model = core.compile_model(model=ov_model, device_name="AUTO") - if isinstance(compiled_model, CompiledModel): + if isinstance(compiled_model, ov.CompiledModel): print("Successfully compiled model using AUTO.") @@ -201,13 +227,13 @@ It is optional, but passing AUTO explicitly as Deleted compiled_model -(2) Improve the first inference latency `⇑ <#top>`__ +(2) Improve the first inference latency ############################################################################################################################### -One of the benefits of using AUTO device selection is reducing FIL (first inference -latency). FIL is the model compilation time combined with the first -inference execution time. 
Using the CPU device explicitly will produce -the shortest first inference latency, as the OpenVINO graph +One of the benefits of using AUTO device selection is reducing FIL +(first inference latency). FIL is the model compilation time combined +with the first inference execution time. Using the CPU device explicitly +will produce the shortest first inference latency, as the OpenVINO graph representation loads quickly on CPU, using just-in-time (JIT) compilation. The challenge is with GPU devices since OpenCL graph complication to GPU-optimized kernels takes a few seconds to complete. @@ -215,12 +241,11 @@ This initialization time may be intolerable for some applications. To avoid this delay, the AUTO uses CPU transparently as the first inference device until GPU is ready. -Load an Image `⇑ <#top>`__ +Load an Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Torchvision library provides model specific -input transformation function, we will reuse it for preparing input -data. +torchvision library provides model specific input transformation +function, we will reuse it for preparing input data. .. code:: ipython3 @@ -236,22 +261,21 @@ data. -.. image:: 106-auto-device-with-output_files/106-auto-device-with-output_12_0.png +.. image:: 106-auto-device-with-output_files/106-auto-device-with-output_13_0.png -Load the model to GPU device and perform inference `⇑ <#top>`__ +Load the model to GPU device and perform inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - if "GPU" not in ie.available_devices: - print(f"A GPU device is not available. Available devices are: {ie.available_devices}") + if "GPU" not in core.available_devices: + print(f"A GPU device is not available. Available devices are: {core.available_devices}") else : # Start time. gpu_load_start_time = time.perf_counter() - compiled_model = ie.compile_model(model=ov_model, device_name="GPU") # load to GPU + compiled_model = core.compile_model(model=ov_model, device_name="GPU") # load to GPU # Execute the first inference. results = compiled_model(input_tensor)[0] @@ -268,7 +292,7 @@ Load the model to GPU device and perform inference `⇑ <#top>`__ A GPU device is not available. Available devices are: ['CPU'] -Load the model using AUTO device and do inference `⇑ <#top>`__ +Load the model using AUTO device and do inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ When GPU is the best available device, the first few inferences will be @@ -278,7 +302,7 @@ executed on CPU until GPU is ready. # Start time. auto_load_start_time = time.perf_counter() - compiled_model = ie.compile_model(model=ov_model) # The device_name is AUTO by default. + compiled_model = core.compile_model(model=ov_model) # The device_name is AUTO by default. # Execute the first inference. results = compiled_model(input_tensor)[0] @@ -292,7 +316,7 @@ executed on CPU until GPU is ready. .. parsed-literal:: - Time to load model using AUTO device and get first inference: 0.18 seconds. + Time to load model using AUTO device and get first inference: 0.14 seconds. .. code:: ipython3 @@ -300,7 +324,7 @@ executed on CPU until GPU is ready. # Deleted model will wait for compiling on the selected device to complete. 
del compiled_model -(3) Achieve different performance for different targets `⇑ <#top>`__ +(3) Achieve different performance for different targets ############################################################################################################################### It is an advantage to define **performance hints** when using Automatic @@ -312,14 +336,15 @@ hints do not require any device-specific settings and they are completely portable between devices – meaning AUTO can configure the performance hint on whichever device is being used. -For more information, refer to the `Performance Hints `__ -section of `Automatic Device Selection `__ +For more information, refer to the `Performance +Hints `__ +section of `Automatic Device +Selection `__ article. -Class and callback definition `⇑ <#top>`__ +Class and callback definition +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 class PerformanceMetrics: @@ -345,7 +370,7 @@ Class and callback definition `⇑ <#top>`__ self.latency_list = [] self.interval = interval - def update(self, infer_request: InferRequest) -> bool: + def update(self, infer_request: ov.InferRequest) -> bool: """ Update the metrics if current ongoing @interval seconds duration is expired. Record the latency only if it is not expired. :param: infer_request: InferRequest returned from inference callback, which includes the result of inference request. @@ -387,7 +412,7 @@ Class and callback definition `⇑ <#top>`__ self.remaining_update_num = num self.feed_inference = True - def update(self, infer_request: InferRequest): + def update(self, infer_request: ov.InferRequest): """ Update the context. Set @feed_inference to False if the number of remaining performance metric updates (@remaining_update_num) reaches 0 :param: infer_request: InferRequest returned from inference callback, which includes the result of inference request. @@ -402,7 +427,7 @@ Class and callback definition `⇑ <#top>`__ self.feed_inference = False - def completion_callback(infer_request: InferRequest, context) -> None: + def completion_callback(infer_request: ov.InferRequest, context) -> None: """ callback for the inference request, pass the @infer_request to @context for updating :param: infer_request: InferRequest returned for the callback, which includes the result of inference request. @@ -416,10 +441,9 @@ Class and callback definition `⇑ <#top>`__ metrics_update_interval = 10 metrics_update_num = 6 -Inference with THROUGHPUT hint `⇑ <#top>`__ +Inference with THROUGHPUT hint +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Loop for inference and update the FPS/Latency every @metrics_update_interval seconds. @@ -430,9 +454,9 @@ Loop for inference and update the FPS/Latency every print("Compiling Model for AUTO device with THROUGHPUT hint") sys.stdout.flush() - compiled_model = ie.compile_model(model=ov_model, config={"PERFORMANCE_HINT":"THROUGHPUT"}) + compiled_model = core.compile_model(model=ov_model, config={"PERFORMANCE_HINT":"THROUGHPUT"}) - infer_queue = AsyncInferQueue(compiled_model, 0) # Setting to 0 will query optimal number by default. + infer_queue = ov.AsyncInferQueue(compiled_model, 0) # Setting to 0 will query optimal number by default. 
infer_queue.set_callback(completion_callback) print(f"Start inference, {metrics_update_num: .0f} groups of FPS/latency will be measured over {metrics_update_interval: .0f}s intervals") @@ -456,19 +480,18 @@ Loop for inference and update the FPS/Latency every Compiling Model for AUTO device with THROUGHPUT hint Start inference, 6 groups of FPS/latency will be measured over 10s intervals - throughput: 189.24fps, latency: 30.04ms, time interval: 10.00s - throughput: 192.12fps, latency: 30.48ms, time interval: 10.01s - throughput: 191.27fps, latency: 30.64ms, time interval: 10.00s - throughput: 190.87fps, latency: 30.69ms, time interval: 10.01s - throughput: 189.50fps, latency: 30.89ms, time interval: 10.02s - throughput: 190.30fps, latency: 30.79ms, time interval: 10.01s + throughput: 181.92fps, latency: 31.32ms, time interval: 10.02s + throughput: 181.58fps, latency: 32.24ms, time interval: 10.00s + throughput: 182.07fps, latency: 32.16ms, time interval: 10.00s + throughput: 181.02fps, latency: 32.35ms, time interval: 10.00s + throughput: 180.73fps, latency: 32.40ms, time interval: 10.01s + throughput: 180.81fps, latency: 32.37ms, time interval: 10.00s Done -Inference with LATENCY hint `⇑ <#top>`__ +Inference with LATENCY hint +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Loop for inference and update the FPS/Latency for each @metrics_update_interval seconds @@ -479,10 +502,10 @@ Loop for inference and update the FPS/Latency for each print("Compiling Model for AUTO Device with LATENCY hint") sys.stdout.flush() - compiled_model = ie.compile_model(model=ov_model, config={"PERFORMANCE_HINT":"LATENCY"}) + compiled_model = core.compile_model(model=ov_model, config={"PERFORMANCE_HINT":"LATENCY"}) # Setting to 0 will query optimal number by default. - infer_queue = AsyncInferQueue(compiled_model, 0) + infer_queue = ov.AsyncInferQueue(compiled_model, 0) infer_queue.set_callback(completion_callback) print(f"Start inference, {metrics_update_num: .0f} groups fps/latency will be out with {metrics_update_interval: .0f}s interval") @@ -506,19 +529,18 @@ Loop for inference and update the FPS/Latency for each Compiling Model for AUTO Device with LATENCY hint Start inference, 6 groups fps/latency will be out with 10s interval - throughput: 138.76fps, latency: 6.68ms, time interval: 10.00s - throughput: 141.79fps, latency: 6.70ms, time interval: 10.00s - throughput: 142.39fps, latency: 6.68ms, time interval: 10.00s - throughput: 142.30fps, latency: 6.68ms, time interval: 10.00s - throughput: 142.30fps, latency: 6.68ms, time interval: 10.01s - throughput: 142.53fps, latency: 6.67ms, time interval: 10.00s + throughput: 139.38fps, latency: 6.69ms, time interval: 10.00s + throughput: 141.83fps, latency: 6.68ms, time interval: 10.00s + throughput: 141.97fps, latency: 6.67ms, time interval: 10.00s + throughput: 141.95fps, latency: 6.67ms, time interval: 10.00s + throughput: 141.90fps, latency: 6.67ms, time interval: 10.01s + throughput: 141.96fps, latency: 6.67ms, time interval: 10.00s Done -Difference in FPS and latency `⇑ <#top>`__ +Difference in FPS and latency +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import matplotlib.pyplot as plt @@ -550,7 +572,7 @@ Difference in FPS and latency `⇑ <#top>`__ -.. image:: 106-auto-device-with-output_files/106-auto-device-with-output_25_0.png +.. 
image:: 106-auto-device-with-output_files/106-auto-device-with-output_26_0.png .. code:: ipython3 @@ -584,5 +606,5 @@ Difference in FPS and latency `⇑ <#top>`__ -.. image:: 106-auto-device-with-output_files/106-auto-device-with-output_26_0.png +.. image:: 106-auto-device-with-output_files/106-auto-device-with-output_27_0.png diff --git a/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.jpg b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.jpg new file mode 100644 index 00000000000000..abe8cb45ca2fd7 --- /dev/null +++ b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a58493845ebd3e98186df7a1ea042b20545bb3c2a5b4a326163da6c9eb5e7d9 +size 121563 diff --git a/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.png b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.png new file mode 100644 index 00000000000000..30f6673bf72850 --- /dev/null +++ b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_13_0.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84112f4f04be0a3445174f4dcf7e300fc12dbd50fd8a5e2d98b4082402dda6de +size 869661 diff --git a/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_26_0.png b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_26_0.png index 415a474e73b05f..37c8b0a925566c 100644 --- a/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_26_0.png +++ b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_26_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:48a1f70dac1b326af8f205247fc377c9fb72286526c2d19507ada4b520a779ed -size 39987 +oid sha256:5af33aef5b450754224f0fade5aa6a3dba2ce509739a1c551335135e7490291e +size 26881 diff --git a/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_27_0.png b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_27_0.png new file mode 100644 index 00000000000000..2904815be00f1e --- /dev/null +++ b/docs/notebooks/106-auto-device-with-output_files/106-auto-device-with-output_27_0.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a74427d2101d8f93e2888e623dcbbfc9167a32a35ca33b891ec0e3f980414d0f +size 40038 diff --git a/docs/notebooks/107-speech-recognition-quantization-data2vec-with-output.rst b/docs/notebooks/107-speech-recognition-quantization-data2vec-with-output.rst index 313abe78024581..c5da0daf8369d1 100644 --- a/docs/notebooks/107-speech-recognition-quantization-data2vec-with-output.rst +++ b/docs/notebooks/107-speech-recognition-quantization-data2vec-with-output.rst @@ -19,11 +19,7 @@ steps: - Compare performance of the original and quantized models. - Compare Accuracy of the Original and Quantized Models. - - -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Download and prepare model <#download-and-prepare-model>`__ @@ -38,10 +34,9 @@ steps: - `Compare Performance of the Original and Quantized Models <#compare-performance-of-the-original-and-quantized-models>`__ - `Compare Accuracy of the Original and Quantized Models <#compare-accuracy-of-the-original-and-quantized-models>`__ -Download and prepare model `⇑ <#top>`__ +Download and prepare model ############################################################################################################################### - data2vec is a framework for self-supervised representation learning for images, speech, and text as described in `data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language (Baevski et @@ -58,10 +53,9 @@ In our case, we will use ``data2vec-audio-base-960h`` model, which was finetuned on 960 hours of audio from LibriSpeech Automatic Speech Recognition corpus and distributed as part of HuggingFace transformers. -Obtain Pytorch model representation `⇑ <#top>`__ +Obtain Pytorch model representation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - For instantiating PyTorch model class, we should use ``Data2VecAudioForCTC.from_pretrained`` method with providing model ID for downloading from HuggingFace hub. Model weights and configuration @@ -74,7 +68,8 @@ model specific pre- and post-processing steps. .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" "nncf>=2.5.0" + !pip install -q "openvino==2023.1.0.dev20230811" "nncf>=2.5.0" + !pip install -q datasets "torchmetrics>=0.11.0" !pip install -q soundfile librosa transformers onnx .. code:: ipython3 @@ -84,10 +79,9 @@ model specific pre- and post-processing steps. processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h") -Convert model to OpenVINO Intermediate Representation `⇑ <#top>`__ +Convert model to OpenVINO Intermediate Representation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 from pathlib import Path @@ -97,11 +91,10 @@ Convert model to OpenVINO Intermediate Representation `⇑ <#top>`__ .. 
code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize, Core + import openvino as ov import torch - core = Core() + core = ov.Core() BATCH_SIZE = 1 MAX_SEQ_LENGTH = 30480 @@ -142,8 +135,8 @@ Convert model to OpenVINO Intermediate Representation `⇑ <#top>`__ if not ir_model_path.exists(): if not onnx_model_path.exists(): export_model_to_onnx(model, onnx_model_path) - ov_model = mo.convert_model(onnx_model_path, compress_to_fp16=True) - serialize(ov_model, str(ir_model_path)) + ov_model = ov.convert_model(onnx_model_path) + ov.save_model(ov_model, str(ir_model_path)) print("IR model saved to {}".format(ir_model_path)) else: print("Read IR model from {}".format(ir_model_path)) @@ -156,20 +149,15 @@ Convert model to OpenVINO Intermediate Representation `⇑ <#top>`__ Read IR model from model/data2vec-audo-base.xml -Prepare inference data `⇑ <#top>`__ +Prepare inference data +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - For demonstration purposes, we will use short dummy version of LibriSpeech dataset - ``patrickvonplaten/librispeech_asr_dummy`` to speed up model evaluation. Model accuracy can be different from reported in the paper. For reproducing original accuracy, use ``librispeech_asr`` dataset. -.. code:: ipython3 - - !pip install -q datasets "torchmetrics>=0.11.0" - .. code:: ipython3 from datasets import load_dataset @@ -190,17 +178,9 @@ dataset. test_sample = ds[0]["audio"] - -.. parsed-literal:: - - Found cached dataset librispeech_asr_dummy (/home/adrian/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc) - Loading cached processed dataset at /home/adrian/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc/cache-4e0f4916cd205b24.arrow - - -Check model inference result `⇑ <#top>`__ +Check model inference result ############################################################################################################################### - The code below is used for running model inference on a single sample from the dataset. It contains the following steps: @@ -235,7 +215,7 @@ For reference, see the same function provided for OpenVINO model. .. code:: ipython3 - core = Core() + core = ov.Core() pt_transcription = torch_infer(model, dataset[0]) compiled_model = core.compile_model(ov_model) @@ -271,10 +251,9 @@ For reference, see the same function provided for OpenVINO model. -Validate model accuracy on dataset `⇑ <#top>`__ +Validate model accuracy on dataset ############################################################################################################################### - For model accuracy evaluation, `Word Error Rate `__ metric can be used. Word Error Rate or WER is the ratio of errors in a transcript to @@ -282,7 +261,7 @@ the total words spoken. A lower WER in speech-to-text means better accuracy in recognizing speech. For WER calculation, we will use -`torchmetrics `__ +```torchmetrics`` `__ library. .. code:: ipython3 @@ -332,10 +311,9 @@ library. 
[OpenVino] Word Error Rate: 0.0383 -Quantization `⇑ <#top>`__ +Quantization ############################################################################################################################### - `NNCF `__ provides a suite of advanced algorithms for Neural Networks inference optimization in OpenVINO with minimal accuracy drop. @@ -344,11 +322,11 @@ Create a quantized model from the pre-trained ``FP16`` model and the calibration dataset. The optimization process contains the following steps: +:: -1. Create a Dataset for quantization. -2. Run ``nncf.quantize`` for getting an optimized model. The ``nncf.quantize`` function provides an interface for model quantization. It requires an instance of the OpenVINO Model and quantization dataset. Optionally, some additional parameters for the configuration quantization process (number of samples for quantization, preset, ignored scope, etc.) can be provided. For more accurate results, we should keep the operation in the postprocessing subgraph in floating point precision, using the ``ignored_scope`` parameter. ``advanced_parameters`` can be used to specify advanced quantization parameters for fine-tuning the quantization algorithm. In this tutorial we pass range estimator parameters for activations. For more information see -`Tune quantization parameters `__. -3. Serialize OpenVINO IR model using ``openvino.runtime.serialize`` function. + 1. Create a Dataset for quantization. + 2. Run `nncf.quantize` for getting an optimized model. The `nncf.quantize` function provides an interface for model quantization. It requires an instance of the OpenVINO Model and quantization dataset. Optionally, some additional parameters for the configuration quantization process (number of samples for quantization, preset, ignored scope, etc.) can be provided. For more accurate results, we should keep the operation in the postprocessing subgraph in floating point precision, using the `ignored_scope` parameter. `advanced_parameters` can be used to specify advanced quantization parameters for fine-tuning the quantization algorithm. In this tutorial we pass range estimator parameters for activations. For more information see [Tune quantization parameters](https://docs.openvino.ai/2023.0/basic_quantization_flow.html#tune-quantization-parameters). + 3. Serialize OpenVINO IR model using `ov.save_model` function. .. code:: ipython3 @@ -615,12 +593,11 @@ saved using ``serialize`` function. MODEL_NAME = 'quantized_data2vec_base' quantized_model_path = Path(f"{MODEL_NAME}_openvino_model/{MODEL_NAME}_quantized.xml") - serialize(quantized_model, str(quantized_model_path)) + ov.save_model(quantized_model, str(quantized_model_path)) -Check INT8 model inference result `⇑ <#top>`__ +Check INT8 model inference result ############################################################################################################################### - ``INT8`` model is the same in usage like the original one. We need to read it, using the ``core.read_model`` method and load on the device, using ``core.compile_model``. After that, we can reuse the same @@ -657,16 +634,16 @@ using ``core.compile_model``. 
After that, we can reuse the same -Compare Performance of the Original and Quantized Models `⇑ <#top>`__ +Compare Performance of the Original and Quantized Models ############################################################################################################################### `Benchmark -Tool `__ +Tool `__ is used to measure the inference performance of the ``FP16`` and ``INT8`` models. .. note:: - + For more accurate performance, it is recommended to run ``benchmark_app`` in a terminal/command prompt after closing other applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark @@ -822,10 +799,9 @@ is used to measure the inference performance of the ``FP16`` and [ INFO ] Throughput: 38.24 FPS -Compare Accuracy of the Original and Quantized Models `⇑ <#top>`__ +Compare Accuracy of the Original and Quantized Models ############################################################################################################################### - Finally, calculate WER metric for the ``INT8`` model representation and compare it with the ``FP16`` result. diff --git a/docs/notebooks/107-speech-recognition-quantization-wav2vec2-with-output.rst b/docs/notebooks/107-speech-recognition-quantization-wav2vec2-with-output.rst new file mode 100644 index 00000000000000..d911dac20bf770 --- /dev/null +++ b/docs/notebooks/107-speech-recognition-quantization-wav2vec2-with-output.rst @@ -0,0 +1,925 @@ +Quantize Speech Recognition Models using NNCF PTQ API +===================================================== + +This tutorial demonstrates how to apply ``INT8`` quantization to the +speech recognition model, known as +`Wav2Vec2 `__, +using the NNCF (Neural Network Compression Framework) 8-bit quantization +in post-training mode (without the fine-tuning pipeline). This notebook +uses a fine-tuned +`Wav2Vec2-Base-960h `__ +`PyTorch `__ model trained on the `LibriSpeech ASR +corpus `__. The tutorial is designed to be +extendable to custom models and datasets. It consists of the following +steps: + +- Download and prepare the Wav2Vec2 model and LibriSpeech dataset. +- Define data loading and accuracy validation functionality. +- Model quantization. +- Compare Accuracy of original PyTorch model, OpenVINO FP16 and INT8 + models. +- Compare performance of the original and quantized models. + +**Table of contents:** + +- `Imports <#imports>`__ +- `Settings <#settings>`__ +- `Prepare the Model <#prepare-the-model>`__ +- `Prepare LibriSpeech Dataset <#prepare-librispeech-dataset>`__ +- `Define DataLoader <#define-dataloader>`__ +- `Run Quantization <#run-quantization>`__ +- `Model Usage Example with Inference Pipeline <#model-usage-example-with-inference-pipeline>`__ +- `Validate model accuracy on dataset <#validate-model-accuracy-on-dataset>`__ +- `Compare Performance of the Original and Quantized Models <#compare-performance-of-the-original-and-quantized-models>`__ + +.. code:: ipython3 + + !pip install -q "openvino==2023.1.0.dev20230811" "nncf>=2.5.0" + !pip install -q soundfile librosa transformers onnx + +Imports +############################################################################################################################### + +.. code:: ipython3 + + import os + import sys + import re + import numpy as np + import openvino as ov + import tarfile + import torch + from itertools import groupby + import soundfile as sf + import IPython.display as ipd + + from transformers import Wav2Vec2ForCTC + + sys.path.append("../utils") + from notebook_utils import download_file + + +.. 
parsed-literal:: + + 2023-09-08 22:38:42.752981: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:38:42.787924: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 22:38:43.332490: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +Settings +############################################################################################################################### + +.. code:: ipython3 + + from pathlib import Path + + # Set the data and model directories, model source URL and model filename. + MODEL_DIR = Path("model") + DATA_DIR = Path("../data/datasets/librispeech") + MODEL_DIR.mkdir(exist_ok=True) + DATA_DIR.mkdir(exist_ok=True) + +Prepare the Model +############################################################################################################################### + +Perform the following: - Download and unpack a pre-trained Wav2Vec2 +model. - Convert the model to ONNX. - Run model conversion API to +convert the model from the ONNX representation to the OpenVINO +Intermediate Representation (OpenVINO IR). + +.. code:: ipython3 + + download_file("https://huggingface.co/facebook/wav2vec2-base-960h/resolve/main/pytorch_model.bin", directory=Path(MODEL_DIR) / 'pytorch', show_progress=True) + download_file("https://huggingface.co/facebook/wav2vec2-base-960h/resolve/main/config.json", directory=Path(MODEL_DIR) / 'pytorch', show_progress=False) + + + +.. parsed-literal:: + + model/pytorch/pytorch_model.bin: 0%| | 0.00/360M [00:00= self.samples_limit: + # Limit exceeded + return + +Run Quantization +############################################################################################################################### + +`NNCF `__ provides a suite of +advanced algorithms for Neural Networks inference optimization in +OpenVINO with minimal accuracy drop. + +Create a quantized model from the pre-trained ``FP16`` model and the +calibration dataset. The optimization process contains the following +steps: 1. Create a Dataset for quantization. 2. Run ``nncf.quantize`` +for getting an optimized model. The ``nncf.quantize`` function provides +an interface for model quantization. It requires an instance of the +OpenVINO Model and quantization dataset. Optionally, some additional +parameters for the configuration quantization process (number of samples +for quantization, preset, ignored scope, etc.) can be provided. For more +accurate results, we should keep the operation in the postprocessing +subgraph in floating point precision, using the ``ignored_scope`` +parameter. ``advanced_parameters`` can be used to specify advanced +quantization parameters for fine-tuning the quantization algorithm. In +this tutorial we pass range estimator parameters for activations. For +more information see `Tune quantization +parameters `__. +3. Serialize OpenVINO IR model using ``openvino.runtime.serialize`` +function. + +.. 
code:: ipython3 + + import nncf + from nncf.quantization.advanced_parameters import AdvancedQuantizationParameters, RangeEstimatorParameters + from nncf.quantization.range_estimator import StatisticsCollectorParameters, StatisticsType, AggregatorType + from nncf.parameters import ModelType + + + def transform_fn(data_item): + """ + Extract the model's input from the data item. + The data item here is the data item that is returned from the data source per iteration. + This function should be passed when the data item cannot be used as model's input. + """ + _, inputs = data_item + + return inputs["inputs"] + + + dataset_config = {"data_source": os.path.join(DATA_DIR, "LibriSpeech/dev-clean")} + data_loader = LibriSpeechDataLoader(dataset_config, samples_limit=300) + calibration_dataset = nncf.Dataset(data_loader, transform_fn) + + + quantized_model = nncf.quantize( + ov_model, + calibration_dataset, + model_type=ModelType.TRANSFORMER, # specify additional transformer patterns in the model + ignored_scope=nncf.IgnoredScope( + names=[ + '/wav2vec2/feature_extractor/conv_layers.1/conv/Conv', + '/wav2vec2/feature_extractor/conv_layers.2/conv/Conv', + '/wav2vec2/encoder/layers.7/feed_forward/output_dense/MatMul' + ], + ), + advanced_parameters=AdvancedQuantizationParameters( + activations_range_estimator_params=RangeEstimatorParameters( + min=StatisticsCollectorParameters( + statistics_type=StatisticsType.MIN, + aggregator_type=AggregatorType.MIN + ), + max=StatisticsCollectorParameters( + statistics_type=StatisticsType.QUANTILE, + aggregator_type=AggregatorType.MEAN, + quantile_outlier_prob=0.0001 + ), + ) + ) + ) + + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + INFO:nncf:3 ignored nodes was found by name in the NNCFGraph + INFO:nncf:193 ignored nodes was found by types in the NNCFGraph + INFO:nncf:24 ignored nodes was found by name in the NNCFGraph + INFO:nncf:Not adding activation input quantizer for operation: 5 MVN_224 + INFO:nncf:Not adding activation input quantizer for operation: 7 /wav2vec2/feature_extractor/conv_layers.0/layer_norm/Mul + 8 /wav2vec2/feature_extractor/conv_layers.0/layer_norm/Add + + INFO:nncf:Not adding activation input quantizer for operation: 10 /wav2vec2/feature_extractor/conv_layers.1/conv/Conv + INFO:nncf:Not adding activation input quantizer for operation: 12 /wav2vec2/feature_extractor/conv_layers.2/conv/Conv + INFO:nncf:Not adding activation input quantizer for operation: 23 /wav2vec2/feature_projection/layer_norm/Div + 24 /wav2vec2/feature_projection/layer_norm/Mul + 25 /wav2vec2/feature_projection/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 28 /wav2vec2/encoder/Add + INFO:nncf:Not adding activation input quantizer for operation: 30 /wav2vec2/encoder/layer_norm/Div + 32 /wav2vec2/encoder/layer_norm/Mul + 34 /wav2vec2/encoder/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 36 /wav2vec2/encoder/layers.0/Add + INFO:nncf:Not adding activation input quantizer for operation: 42 /wav2vec2/encoder/layers.0/layer_norm/Div + 49 /wav2vec2/encoder/layers.0/layer_norm/Mul + 58 /wav2vec2/encoder/layers.0/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 66 /wav2vec2/encoder/layers.0/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 74 /wav2vec2/encoder/layers.0/final_layer_norm/Div + 79 /wav2vec2/encoder/layers.0/final_layer_norm/Mul + 82 
/wav2vec2/encoder/layers.0/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 84 /wav2vec2/encoder/layers.1/Add + INFO:nncf:Not adding activation input quantizer for operation: 90 /wav2vec2/encoder/layers.1/layer_norm/Div + 96 /wav2vec2/encoder/layers.1/layer_norm/Mul + 105 /wav2vec2/encoder/layers.1/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 113 /wav2vec2/encoder/layers.1/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 121 /wav2vec2/encoder/layers.1/final_layer_norm/Div + 126 /wav2vec2/encoder/layers.1/final_layer_norm/Mul + 129 /wav2vec2/encoder/layers.1/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 131 /wav2vec2/encoder/layers.2/Add + INFO:nncf:Not adding activation input quantizer for operation: 137 /wav2vec2/encoder/layers.2/layer_norm/Div + 143 /wav2vec2/encoder/layers.2/layer_norm/Mul + 152 /wav2vec2/encoder/layers.2/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 160 /wav2vec2/encoder/layers.2/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 168 /wav2vec2/encoder/layers.2/final_layer_norm/Div + 173 /wav2vec2/encoder/layers.2/final_layer_norm/Mul + 176 /wav2vec2/encoder/layers.2/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 178 /wav2vec2/encoder/layers.3/Add + INFO:nncf:Not adding activation input quantizer for operation: 184 /wav2vec2/encoder/layers.3/layer_norm/Div + 190 /wav2vec2/encoder/layers.3/layer_norm/Mul + 199 /wav2vec2/encoder/layers.3/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 207 /wav2vec2/encoder/layers.3/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 215 /wav2vec2/encoder/layers.3/final_layer_norm/Div + 220 /wav2vec2/encoder/layers.3/final_layer_norm/Mul + 223 /wav2vec2/encoder/layers.3/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 225 /wav2vec2/encoder/layers.4/Add + INFO:nncf:Not adding activation input quantizer for operation: 231 /wav2vec2/encoder/layers.4/layer_norm/Div + 237 /wav2vec2/encoder/layers.4/layer_norm/Mul + 246 /wav2vec2/encoder/layers.4/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 254 /wav2vec2/encoder/layers.4/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 262 /wav2vec2/encoder/layers.4/final_layer_norm/Div + 267 /wav2vec2/encoder/layers.4/final_layer_norm/Mul + 270 /wav2vec2/encoder/layers.4/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 272 /wav2vec2/encoder/layers.5/Add + INFO:nncf:Not adding activation input quantizer for operation: 278 /wav2vec2/encoder/layers.5/layer_norm/Div + 284 /wav2vec2/encoder/layers.5/layer_norm/Mul + 293 /wav2vec2/encoder/layers.5/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 301 /wav2vec2/encoder/layers.5/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 309 /wav2vec2/encoder/layers.5/final_layer_norm/Div + 314 /wav2vec2/encoder/layers.5/final_layer_norm/Mul + 317 /wav2vec2/encoder/layers.5/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 319 /wav2vec2/encoder/layers.6/Add + INFO:nncf:Not adding activation input quantizer for operation: 325 /wav2vec2/encoder/layers.6/layer_norm/Div + 331 /wav2vec2/encoder/layers.6/layer_norm/Mul + 340 
/wav2vec2/encoder/layers.6/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 348 /wav2vec2/encoder/layers.6/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 356 /wav2vec2/encoder/layers.6/final_layer_norm/Div + 361 /wav2vec2/encoder/layers.6/final_layer_norm/Mul + 364 /wav2vec2/encoder/layers.6/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 366 /wav2vec2/encoder/layers.7/Add + INFO:nncf:Not adding activation input quantizer for operation: 372 /wav2vec2/encoder/layers.7/layer_norm/Div + 378 /wav2vec2/encoder/layers.7/layer_norm/Mul + 387 /wav2vec2/encoder/layers.7/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 412 /wav2vec2/encoder/layers.7/feed_forward/output_dense/MatMul + 418 /wav2vec2/encoder/layers.7/feed_forward/output_dense/Add + + INFO:nncf:Not adding activation input quantizer for operation: 395 /wav2vec2/encoder/layers.7/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 403 /wav2vec2/encoder/layers.7/final_layer_norm/Div + 408 /wav2vec2/encoder/layers.7/final_layer_norm/Mul + 411 /wav2vec2/encoder/layers.7/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 413 /wav2vec2/encoder/layers.8/Add + INFO:nncf:Not adding activation input quantizer for operation: 419 /wav2vec2/encoder/layers.8/layer_norm/Div + 425 /wav2vec2/encoder/layers.8/layer_norm/Mul + 434 /wav2vec2/encoder/layers.8/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 442 /wav2vec2/encoder/layers.8/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 450 /wav2vec2/encoder/layers.8/final_layer_norm/Div + 455 /wav2vec2/encoder/layers.8/final_layer_norm/Mul + 458 /wav2vec2/encoder/layers.8/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 460 /wav2vec2/encoder/layers.9/Add + INFO:nncf:Not adding activation input quantizer for operation: 466 /wav2vec2/encoder/layers.9/layer_norm/Div + 472 /wav2vec2/encoder/layers.9/layer_norm/Mul + 481 /wav2vec2/encoder/layers.9/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 489 /wav2vec2/encoder/layers.9/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 497 /wav2vec2/encoder/layers.9/final_layer_norm/Div + 502 /wav2vec2/encoder/layers.9/final_layer_norm/Mul + 505 /wav2vec2/encoder/layers.9/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 507 /wav2vec2/encoder/layers.10/Add + INFO:nncf:Not adding activation input quantizer for operation: 513 /wav2vec2/encoder/layers.10/layer_norm/Div + 519 /wav2vec2/encoder/layers.10/layer_norm/Mul + 528 /wav2vec2/encoder/layers.10/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 536 /wav2vec2/encoder/layers.10/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 544 /wav2vec2/encoder/layers.10/final_layer_norm/Div + 549 /wav2vec2/encoder/layers.10/final_layer_norm/Mul + 552 /wav2vec2/encoder/layers.10/final_layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 554 /wav2vec2/encoder/layers.11/Add + INFO:nncf:Not adding activation input quantizer for operation: 560 /wav2vec2/encoder/layers.11/layer_norm/Div + 566 /wav2vec2/encoder/layers.11/layer_norm/Mul + 575 /wav2vec2/encoder/layers.11/layer_norm/Add_1 + + INFO:nncf:Not adding activation input quantizer for operation: 583 
/wav2vec2/encoder/layers.11/Add_1 + INFO:nncf:Not adding activation input quantizer for operation: 591 /wav2vec2/encoder/layers.11/final_layer_norm/Div + 596 /wav2vec2/encoder/layers.11/final_layer_norm/Mul + 599 /wav2vec2/encoder/layers.11/final_layer_norm/Add_1 + + + +.. parsed-literal:: + + Statistics collection: 100%|██████████| 300/300 [02:51<00:00, 1.75it/s] + Biases correction: 100%|██████████| 74/74 [00:25<00:00, 2.96it/s] + + +.. code:: ipython3 + + MODEL_NAME = 'quantized_wav2vec2_base' + quantized_model_path = Path(f"{MODEL_NAME}_openvino_model/{MODEL_NAME}_quantized.xml") + ov.save_model(quantized_model, str(quantized_model_path)) + +Model Usage Example with Inference Pipeline +############################################################################################################################### + +Both initial (``FP16``) and quantized (``INT8``) models are exactly the +same in use. + +Start with taking one example from the dataset to show inference steps +for it. + +Next, load the quantized model to the inference pipeline. + +.. code:: ipython3 + + audio = LibriSpeechDataLoader.read_flac(f'{DATA_DIR}/LibriSpeech/test-clean/121/127105/121-127105-0017.flac') + + ipd.Audio(audio, rate=16000) + + + + +.. raw:: html + + + + + + + +.. code:: ipython3 + + core = ov.Core() + + compiled_model = core.compile_model(model=quantized_model, device_name='CPU') + + input_data = np.expand_dims(audio, axis=0) + output_layer = compiled_model.outputs[0] + +Next, make a prediction. + +.. code:: ipython3 + + predictions = compiled_model([input_data])[output_layer] + +Validate model accuracy on dataset +############################################################################################################################### + +The code below is used for running model inference on a single sample +from the dataset. It contains the following steps: + +- Define ``MetricWER`` class to calculate Word Error Rate. +- Define dataloader for test dataset. +- Define functions to get inference for PyTorch and OpenVINO models. +- Define functions to compute Word Error Rate. + +.. code:: ipython3 + + class MetricWER: + alphabet = [ + "", "", "", "", "|", + "e", "t", "a", "o", "n", "i", "h", "s", "r", "d", "l", "u", + "m", "w", "c", "f", "g", "y", "p", "b", "v", "k", "'", "x", "j", "q", "z"] + words_delimiter = '|' + pad_token = '' + + # Required methods + def __init__(self): + self._name = "WER" + self._sum_score = 0 + self._sum_words = 0 + self._cur_score = 0 + self._decoding_vocab = dict(enumerate(self.alphabet)) + + @property + def value(self): + """Returns accuracy metric value for the last model output.""" + return {self._name: self._cur_score} + + @property + def avg_value(self): + """Returns accuracy metric value for all model outputs.""" + return {self._name: self._sum_score / self._sum_words if self._sum_words != 0 else 0} + + def update(self, output, target): + """ + Updates prediction matches. + + :param output: model output + :param target: annotations + """ + decoded = [decode_logits(i) for i in output] + target = [i.lower() for i in target] + assert len(output) == len(target), "sizes of output and target mismatch!" + for i in range(len(output)): + self._get_metric_per_sample(decoded[i], target[i]) + + def reset(self): + """ + Resets collected matches + """ + self._sum_score = 0 + self._sum_words = 0 + + def get_attributes(self): + """ + Returns a dictionary of metric attributes {metric_name: {attribute_name: value}}. 
+ Required attributes: 'direction': 'higher-better' or 'higher-worse' + 'type': metric type + """ + return {self._name: {"direction": "higher-worse", "type": "WER"}} + + # Methods specific to the current implementation + def _get_metric_per_sample(self, annotation, prediction): + cur_score = self._editdistance_eval(annotation.split(), prediction.split()) + cur_words = len(annotation.split()) + + self._sum_score += cur_score + self._sum_words += cur_words + self._cur_score = cur_score / cur_words + + result = cur_score / cur_words if cur_words != 0 else 0 + return result + + def _editdistance_eval(self, source, target): + n, m = len(source), len(target) + + distance = np.zeros((n + 1, m + 1), dtype=int) + distance[:, 0] = np.arange(0, n + 1) + distance[0, :] = np.arange(0, m + 1) + + for i in range(1, n + 1): + for j in range(1, m + 1): + cost = 0 if source[i - 1] == target[j - 1] else 1 + + distance[i][j] = min(distance[i - 1][j] + 1, + distance[i][j - 1] + 1, + distance[i - 1][j - 1] + cost) + return distance[n][m] + +Now, you just need to decode predicted probabilities to text, using +tokenizer ``decode_logits``. + +Alternatively, use a built-in ``Wav2Vec2Processor`` tokenizer from the +``transformers`` package. + +.. code:: ipython3 + + def decode_logits(logits): + decoding_vocab = dict(enumerate(MetricWER.alphabet)) + token_ids = np.squeeze(np.argmax(logits, -1)) + tokens = [decoding_vocab[idx] for idx in token_ids] + tokens = [token_group[0] for token_group in groupby(tokens)] + tokens = [t for t in tokens if t != MetricWER.pad_token] + res_string = ''.join([t if t != MetricWER.words_delimiter else ' ' for t in tokens]).strip() + res_string = ' '.join(res_string.split(' ')) + res_string = res_string.lower() + return res_string + + + predicted_text = decode_logits(predictions) + predicted_text + + + + +.. parsed-literal:: + + 'it was almost the tone of hope everybody will stay' + + + +.. code:: ipython3 + + from tqdm.notebook import tqdm + + import numpy as np + + + dataset_config = {"data_source": os.path.join(DATA_DIR, "LibriSpeech/test-clean")} + test_data_loader = LibriSpeechDataLoader(dataset_config, samples_limit=300) + + + # inference function for pytorch + def torch_infer(model, sample): + output = model(torch.Tensor(sample[1]['inputs'])).logits + output = output.detach().cpu().numpy() + + return output + + + # inference function for openvino + def ov_infer(model, sample): + output = model.output(0) + output = model(np.array(sample[1]['inputs']))[output] + + return output + + + def compute_wer(dataset, model, infer_fn): + wer = MetricWER() + for sample in tqdm(dataset): + # run infer function on sample + output = infer_fn(model, sample) + # update metric on sample result + wer.update(output, [sample[0][1]]) + + return wer.avg_value + +Now, compute WER for the original PyTorch model, OpenVINO IR model and +quantized model. + +.. code:: ipython3 + + compiled_fp32_ov_model = core.compile_model(ov_model) + + pt_result = compute_wer(test_data_loader, torch_model, torch_infer) + ov_fp32_result = compute_wer(test_data_loader, compiled_fp32_ov_model, ov_infer) + quantized_result = compute_wer(test_data_loader, compiled_model, ov_infer) + + print(f'[PyTorch] Word Error Rate: {pt_result["WER"]:.4f}') + print(f'[OpenVino] Word Error Rate: {ov_fp32_result["WER"]:.4f}') + print(f'[Quantized OpenVino] Word Error Rate: {quantized_result["WER"]:.4f}') + + + +.. parsed-literal:: + + 0%| | 0/300 [00:00`__ +to measure the inference performance of the ``FP16`` and ``INT8`` +models. + +.. 
note:: + + For more accurate performance, it is recommended to run + ``benchmark_app`` in a terminal/command prompt after closing other + applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark + async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to + benchmark on GPU. Run ``benchmark_app --help`` to see an overview of + all command-line options. + +.. code:: ipython3 + + # Inference FP16 model (OpenVINO IR) + ! benchmark_app -m $ir_model_path -shape [1,30480] -d CPU -api async + + +.. parsed-literal:: + + [Step 1/11] Parsing and validating input arguments + [ INFO ] Parsing input parameters + [Step 2/11] Loading OpenVINO Runtime + [ INFO ] OpenVINO: + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 + [ INFO ] + [ INFO ] Device info: + [ INFO ] CPU + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 + [ INFO ] + [ INFO ] + [Step 3/11] Setting device configuration + [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. + [Step 4/11] Reading model files + [ INFO ] Loading model files + [ INFO ] Read model took 61.48 ms + [ INFO ] Original model I/O parameters: + [ INFO ] Model inputs: + [ INFO ] inputs (node: inputs) : f32 / [...] / [?,?] + [ INFO ] Model outputs: + [ INFO ] logits (node: logits) : f32 / [...] / [?,?,32] + [Step 5/11] Resizing model to match image sizes and given batch + [ INFO ] Model batch size: 1 + [ INFO ] Reshaping model: 'inputs': [1,30480] + [ INFO ] Reshape model took 28.87 ms + [Step 6/11] Configuring input of the model + [ INFO ] Model inputs: + [ INFO ] inputs (node: inputs) : f32 / [...] / [1,30480] + [ INFO ] Model outputs: + [ INFO ] logits (node: logits) : f32 / [...] / [1,95,32] + [Step 7/11] Loading the model to the device + [ INFO ] Compile model took 644.15 ms + [Step 8/11] Querying optimal runtime parameters + [ INFO ] Model: + [ INFO ] NETWORK_NAME: torch_jit + [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 + [ INFO ] NUM_STREAMS: 6 + [ INFO ] AFFINITY: Affinity.CORE + [ INFO ] INFERENCE_NUM_THREADS: 24 + [ INFO ] PERF_COUNT: False + [ INFO ] INFERENCE_PRECISION_HINT: + [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT + [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE + [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 + [ INFO ] ENABLE_CPU_PINNING: True + [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE + [ INFO ] ENABLE_HYPER_THREADING: True + [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 + [Step 9/11] Creating infer requests and preparing input tensors + [ WARNING ] No input files were given for input 'inputs'!. This input will be filled with random values! + [ INFO ] Fill input 'inputs' with random values + [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 60000 ms duration) + [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). + [ INFO ] First inference took 69.35 ms + [Step 11/11] Dumping statistics report + [ INFO ] Execution Devices:['CPU'] + [ INFO ] Count: 2748 iterations + [ INFO ] Duration: 60151.82 ms + [ INFO ] Latency: + [ INFO ] Median: 131.23 ms + [ INFO ] Average: 131.13 ms + [ INFO ] Min: 67.66 ms + [ INFO ] Max: 145.43 ms + [ INFO ] Throughput: 45.68 FPS + + +.. code:: ipython3 + + # Inference INT8 model (OpenVINO IR) + ! 
benchmark_app -m $quantized_model_path -shape [1,30480] -d CPU -api async + + +.. parsed-literal:: + + [Step 1/11] Parsing and validating input arguments + [ INFO ] Parsing input parameters + [Step 2/11] Loading OpenVINO Runtime + [ INFO ] OpenVINO: + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 + [ INFO ] + [ INFO ] Device info: + [ INFO ] CPU + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 + [ INFO ] + [ INFO ] + [Step 3/11] Setting device configuration + [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. + [Step 4/11] Reading model files + [ INFO ] Loading model files + [ INFO ] Read model took 81.97 ms + [ INFO ] Original model I/O parameters: + [ INFO ] Model inputs: + [ INFO ] inputs (node: inputs) : f32 / [...] / [?,?] + [ INFO ] Model outputs: + [ INFO ] logits (node: logits) : f32 / [...] / [?,?,32] + [Step 5/11] Resizing model to match image sizes and given batch + [ INFO ] Model batch size: 1 + [ INFO ] Reshaping model: 'inputs': [1,30480] + [ INFO ] Reshape model took 35.47 ms + [Step 6/11] Configuring input of the model + [ INFO ] Model inputs: + [ INFO ] inputs (node: inputs) : f32 / [...] / [1,30480] + [ INFO ] Model outputs: + [ INFO ] logits (node: logits) : f32 / [...] / [1,95,32] + [Step 7/11] Loading the model to the device + [ INFO ] Compile model took 920.18 ms + [Step 8/11] Querying optimal runtime parameters + [ INFO ] Model: + [ INFO ] NETWORK_NAME: torch_jit + [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 + [ INFO ] NUM_STREAMS: 6 + [ INFO ] AFFINITY: Affinity.CORE + [ INFO ] INFERENCE_NUM_THREADS: 24 + [ INFO ] PERF_COUNT: False + [ INFO ] INFERENCE_PRECISION_HINT: + [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT + [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE + [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 + [ INFO ] ENABLE_CPU_PINNING: True + [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE + [ INFO ] ENABLE_HYPER_THREADING: True + [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 + [Step 9/11] Creating infer requests and preparing input tensors + [ WARNING ] No input files were given for input 'inputs'!. This input will be filled with random values! + [ INFO ] Fill input 'inputs' with random values + [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 60000 ms duration) + [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). + [ INFO ] First inference took 52.31 ms + [Step 11/11] Dumping statistics report + [ INFO ] Execution Devices:['CPU'] + [ INFO ] Count: 4500 iterations + [ INFO ] Duration: 60105.34 ms + [ INFO ] Latency: + [ INFO ] Median: 79.88 ms + [ INFO ] Average: 79.99 ms + [ INFO ] Min: 47.16 ms + [ INFO ] Max: 106.32 ms + [ INFO ] Throughput: 74.87 FPS + diff --git a/docs/notebooks/108-gpu-device-with-output.rst b/docs/notebooks/108-gpu-device-with-output.rst index fc236c92b8331e..b376e60d8c9b70 100644 --- a/docs/notebooks/108-gpu-device-with-output.rst +++ b/docs/notebooks/108-gpu-device-with-output.rst @@ -1,11 +1,7 @@ Working with GPUs in OpenVINO™ ============================== - - -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Introduction <#introduction>`__ @@ -30,9 +26,11 @@ Working with GPUs in OpenVINO™ - `Using Multiple GPUs with Multi-Device and Cumulative Throughput <#using-multiple-gpus-with-multi-device-and-cumulative-throughput>`__ - `Performance Comparison with benchmark_app <#performance-comparison-with-benchmark_app>`__ -- `CPU vs GPU with Latency Hint <#cpu-vs-gpu-with-latency-hint>`__ -- `CPU vs GPU with Throughput Hint <#cpu-vs-gpu-with-throughput-hint>`__ -- `Single GPU vs Multiple GPUs <#single-gpu-vs-multiple-gpus>`__ + + - `CPU vs GPU with Latency Hint <#cpu-vs-gpu-with-latency-hint>`__ + - `CPU vs GPU with Throughput Hint <#cpu-vs-gpu-with-throughput-hint>`__ + - `Single GPU vs Multiple GPUs <#single-gpu-vs-multiple-gpus>`__ + - `Basic Application Using GPUs <#basic-application-using-gpus>`__ - `Import Necessary Packages <#import-necessary-packages>`__ @@ -46,6 +44,7 @@ Working with GPUs in OpenVINO™ - `Perform Inference <#perform-inference>`__ - `Process Results <#process-results>`__ + - `Conclusion <#conclusion>`__ This tutorial provides a high-level overview of working with Intel GPUs @@ -59,10 +58,9 @@ run to compare GPU performance in different configurations. It also provides the code for a basic end-to-end application that compiles a model on GPU and uses it to run inference. -Introduction `⇑ <#top>`__ +Introduction ############################################################################################################################### - Originally, graphic processing units (GPUs) began as specialized chips, developed to accelerate the rendering of computer graphics. In contrast to CPUs, which have few but powerful cores, GPUs have many more @@ -72,25 +70,20 @@ learning, where GPUs can easily accelerate inference of neural networks by splitting operations across multiple cores. OpenVINO supports inference on Intel integrated GPUs (which are included -with most `Intel® Core™ desktop and mobile -processors `__) +with most `Intel® Core™ desktop and mobile processors `__) or on Intel discrete GPU products like the `Intel® Arc™ A-Series -Graphics -cards `__ +Graphics cards `__ and `Intel® Data Center GPU Flex Series `__. To get started, first `install -OpenVINO `__ -on a system equipped with one or more Intel GPUs. Follow the `GPU -configuration -instructions `__ +OpenVINO `__ +on a system equipped with one or more Intel GPUs. Follow the `GPU configuration instructions `__ to configure OpenVINO to work with your GPU. Then, read on to learn how to accelerate inference with GPUs in OpenVINO! -Install required packages `⇑ <#top>`__ +Install required packages +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 !pip install -q "openvino-dev>=2023.0.0" @@ -112,17 +105,15 @@ Install required packages `⇑ <#top>`__ -Checking GPUs with Query Device `⇑ <#top>`__ +Checking GPUs with Query Device ############################################################################################################################### - In this section, we will see how to list the available GPUs and check their properties. Some of the key properties will also be defined. 
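For instance, a minimal sketch of these two steps (device names and property
values are system-specific, so the output will differ on your machine) might
look like this:

.. code:: ipython3

    import openvino as ov

    core = ov.Core()

    # List every device OpenVINO can use for inference, e.g. ['CPU', 'GPU.0', 'GPU.1']
    print(core.available_devices)

    # Query a human-readable name for each detected device
    for device in core.available_devices:
        print(device, ":", core.get_property(device, "FULL_DEVICE_NAME"))
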
-List GPUs with core.available_devices `⇑ <#top>`__ +List GPUs with core.available_devices +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - OpenVINO Runtime provides the ``available_devices`` method for checking which devices are available for inference. The following code will output a list of compatible OpenVINO devices, in which Intel GPUs should @@ -149,20 +140,18 @@ GPU always takes the id ``0`` if the system has one. For instance, if the system has a CPU, an integrated and discrete GPU, we should expect to see a list like this: ``['CPU', 'GPU.0', 'GPU.1']``. To simplify its use, the “GPU.0” can also be addressed with just “GPU”. For more -details, see the `Device Naming -Convention `__ +details, see the `Device Naming Convention `__ section. If the GPUs are installed correctly on the system and still do not appear in the list, follow the steps described -`here `__ +`here `__ to configure your GPU drivers to work with OpenVINO. Once we have the GPUs working with OpenVINO, we can proceed with the next sections. -Check Properties with core.get_property `⇑ <#top>`__ +Check Properties with core.get_property +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To get information about the GPUs, we can use device properties. In OpenVINO, devices have properties that describe their characteristics and configuration. Each property has a name and associated value that @@ -243,10 +232,9 @@ for that property. DEVICE_ID : 0 -Brief Descriptions of Key Properties `⇑ <#top>`__ +Brief Descriptions of Key Properties +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Each device has several properties as seen in the last command. Some of the key properties are: @@ -268,34 +256,28 @@ the key properties are: - ``CACHE_DIR`` - The directory where the model cache data is stored to speed up compilation time. -To learn more about devices and properties, see the `Query Device -Properties `__ +To learn more about devices and properties, see the `Query Device Properties `__ page. -Compiling a Model on GPU `⇑ <#top>`__ +Compiling a Model on GPU ############################################################################################################################### - Now, we know how to list the GPUs in the system and check their properties. We can easily use one for compiling and running models with -OpenVINO `GPU -plugin `__. +OpenVINO `GPU plugin `__. -Download and Convert a Model `⇑ <#top>`__ +Download and Convert a Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - This tutorial uses the ``ssdlite_mobilenet_v2`` model. The ``ssdlite_mobilenet_v2`` model is used for object detection. The model -was trained on `Common Objects in Context -(COCO) `__ dataset version with 91 +was trained on `Common Objects in Context (COCO) `__ dataset version with 91 categories of object. For details, see the `paper `__. -Download and unpack the Model `⇑ <#top>`__ +Download and unpack the Model ------------------------------------------------------------------------------------------------------------------------------- - Use the ``download_file`` function from the ``notebook_utils`` to download an archive with the model. It automatically creates a directory structure and downloads the selected model. 
This step is skipped if the @@ -350,14 +332,13 @@ package is already downloaded. -Convert the Model to OpenVINO IR format `⇑ <#top>`__ +Convert the Model to OpenVINO IR format ------------------------------------------------------------------------------------------------------------------------------- - To convert the model to OpenVINO IR with ``FP16`` precision, use model conversion API. The models are saved to the ``model/ir_model/`` directory. For more details about model conversion, see this -`page `__. +`page `__. .. code:: ipython3 @@ -399,10 +380,9 @@ directory. For more details about model conversion, see this IR model saved to model/ir_model/ssdlite_mobilenet_v2_fp16.xml -Compile with Default Configuration `⇑ <#top>`__ +Compile with Default Configuration +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - When the model is ready, first we need to read it, using the ``read_model`` method. Then, we can use the ``compile_model`` method and specify the name of the device we want to compile the model on, in this @@ -417,15 +397,12 @@ use by using “GPU.0”, “GPU.1”, etc. Any of the device names returned by the ``available_devices`` method are valid device specifiers. You may also use “AUTO”, which will automatically select the best device for inference (which is often the GPU). To learn more about AUTO plugin, -visit the `Automatic Device -Selection `__ -page as well as the `AUTO device -tutorial `__. +visit the `Automatic Device Selection `__ +page as well as the `AUTO device tutorial `__. -Reduce Compile Time through Model Caching `⇑ <#top>`__ +Reduce Compile Time through Model Caching +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Depending on the model used, device-specific optimizations and network compilations can cause the compile step to be time-consuming, especially with larger models, which may lead to bad user experience in the @@ -488,14 +465,12 @@ compile times with caching enabled and disabled as follows: The actual time improvements will depend on the environment as well as the model being used but it is definitely something to consider when -optimizing an application. To read more about this, see the `Model -Caching `__ +optimizing an application. To read more about this, see the `Model Caching `__ docs. -Throughput and Latency Performance Hints `⇑ <#top>`__ +Throughput and Latency Performance Hints +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To simplify device and pipeline configuration, OpenVINO provides high-level performance hints that automatically set the batch size and number of parallel threads to use for inference. The “LATENCY” @@ -523,18 +498,17 @@ available memory. compiled_model = core.compile_model(model, device, {"PERFORMANCE_HINT": "THROUGHPUT"}) -Using Multiple GPUs with Multi-Device and Cumulative Throughput `⇑ <#top>`__ +Using Multiple GPUs with Multi-Device and Cumulative Throughput +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The latency and throughput hints mentioned above are great and can make a difference when used adequately but they usually use just one device, -either due to the `AUTO -plugin `__ +either due to the `AUTO plugin `__ or by manual specification of the device name as above. 
When we have multiple devices, such as an integrated and discrete GPU, we may use both at the same time to improve the utilization of the resources. In order to do this, OpenVINO provides a virtual device called -`MULTI `__, +`MULTI `__, which is just a combination of the existent devices that knows how to split inference work between them, leveraging the capabilities of each device. @@ -556,22 +530,18 @@ manually specify devices to use. Below is an example showing how to use ``compiled_model = core.compile_model(model=model, device_name="AUTO", config={"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"})`` .. important:: - - The “THROUGHPUT”, “MULTI”, and + + **The “THROUGHPUT”, “MULTI”, and “CUMULATIVE_THROUGHPUT” modes are only applicable to asynchronous inferencing pipelines. The example at the end of this article shows how to set up an asynchronous pipeline that takes advantage of - parallelism to increase throughput. To learn more, see - `Asynchronous - Inferencing `__ - in OpenVINO as well as the `Asynchronous Inference - notebook `__. - + parallelism to increase throughput.** To learn more, see + `Asynchronous Inferencing `__ + in OpenVINO as well as the `Asynchronous Inference notebook `__. -Performance Comparison with benchmark_app `⇑ <#top>`__ +Performance Comparison with benchmark_app ############################################################################################################################### - Given all the different options available when compiling a model, it may be difficult to know which settings work best for a certain application. Thankfully, OpenVINO provides ``benchmark_app`` - a performance @@ -589,7 +559,7 @@ Note that benchmark_app only requires the model path to run but both the device and hint arguments will be useful to us. For more advanced usages, the tool itself has other options that can be checked by running ``benchmark_app -h`` or reading the -`docs `__. +`docs `__. The following example shows how to benchmark a simple model, using a GPU with a latency focus: @@ -669,10 +639,9 @@ performance may depend on the hardware used. Generally, we should expect GPU to be better than CPU, whereas multiple GPUs should be better than a single GPU as long as there is enough work for each of them. -CPU vs GPU with Latency Hint `⇑ <#top>`__ +CPU vs GPU with Latency Hint ------------------------------------------------------------------------------------------------------------------------------- - .. code:: ipython3 !benchmark_app -m {model_path} -d CPU -hint latency @@ -807,10 +776,9 @@ CPU vs GPU with Latency Hint `⇑ <#top>`__ [ INFO ] Throughput: 189.21 FPS -CPU vs GPU with Throughput Hint `⇑ <#top>`__ +CPU vs GPU with Throughput Hint ------------------------------------------------------------------------------------------------------------------------------- - .. code:: ipython3 !benchmark_app -m {model_path} -d CPU -hint throughput @@ -945,10 +913,9 @@ CPU vs GPU with Throughput Hint `⇑ <#top>`__ [ INFO ] Throughput: 326.34 FPS -Single GPU vs Multiple GPUs `⇑ <#top>`__ +Single GPU vs Multiple GPUs ------------------------------------------------------------------------------------------------------------------------------- - .. 
code:: ipython3 !benchmark_app -m {model_path} -d GPU.1 -hint throughput @@ -1072,10 +1039,9 @@ Single GPU vs Multiple GPUs `⇑ <#top>`__ RuntimeError: Config for device with 1 ID is not registered in GPU plugin -Basic Application Using GPUs `⇑ <#top>`__ +Basic Application Using GPUs ############################################################################################################################### - We will now show an end-to-end object detection example using GPUs in OpenVINO. The application compiles a model on GPU with the “THROUGHPUT” hint, then loads a video and preprocesses every frame to convert them to @@ -1085,10 +1051,9 @@ found in each frame. The detections are then drawn on their corresponding frame and saved as a video, which is displayed at the end of the application. -Import Necessary Packages `⇑ <#top>`__ +Import Necessary Packages +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import time @@ -1112,10 +1077,9 @@ Import Necessary Packages `⇑ <#top>`__ -Compile the Model `⇑ <#top>`__ +Compile the Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Read model and compile it on GPU in THROUGHPUT mode @@ -1137,10 +1101,9 @@ Compile the Model `⇑ <#top>`__ Model input shape: 1 300 300 3 -Load and Preprocess Video Frames `⇑ <#top>`__ +Load and Preprocess Video Frames +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Load video @@ -1183,18 +1146,17 @@ Load and Preprocess Video Frames `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html - +.. -Define Model Output Classes `⇑ <#top>`__ +Define Model Output Classes +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Define the model's labelmap (this model uses COCO classes) @@ -1213,14 +1175,12 @@ Define Model Output Classes `⇑ <#top>`__ "teddy bear", "hair drier", "toothbrush", "hair brush" ] -Set up Asynchronous Pipeline `⇑ <#top>`__ +Set up Asynchronous Pipeline +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -Callback Definition `⇑ <#top>`__ +Callback Definition ------------------------------------------------------------------------------------------------------------------------------- - .. code:: ipython3 # Define a callback function that runs every time the asynchronous pipeline completes inference on a frame @@ -1235,20 +1195,18 @@ Callback Definition `⇑ <#top>`__ total_time = stop_time - start_time frame_fps[frame_id] = frame_number / total_time -Create Async Pipeline `⇑ <#top>`__ +Create Async Pipeline ------------------------------------------------------------------------------------------------------------------------------- - .. code:: ipython3 # Create asynchronous inference queue with optimal number of infer requests infer_queue = AsyncInferQueue(compiled_model) infer_queue.set_callback(completion_callback) -Perform Inference `⇑ <#top>`__ +Perform Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 # Perform inference on every frame in the framebuffer @@ -1276,10 +1234,9 @@ Perform Inference `⇑ <#top>`__ Time per frame: 0.004744s (210.774 FPS) -Process Results `⇑ <#top>`__ +Process Results +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Set minimum detection threshold @@ -1340,19 +1297,18 @@ Process Results `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html - +.. -Conclusion `⇑ <#top>`__ +Conclusion ############################################################################################################################### - This tutorial demonstrates how easy it is to use one or more GPUs in OpenVINO, check their properties, and even tailor the model performance through the different performance hints. It also provides a walk-through @@ -1362,19 +1318,11 @@ detected bounding boxes. To read more about any of these topics, feel free to visit their corresponding documentation: -- `GPU - Plugin `__ -- `AUTO - Plugin `__ -- `Model - Caching `__ -- `MULTI Device - Mode `__ -- `Query Device - Properties `__ -- `Configurations for GPUs with - OpenVINO `__ -- `Benchmark Python - Tool `__ -- `Asynchronous - Inferencing `__ +- `GPU Plugin `__ +- `AUTO Plugin `__ +- `Model Caching `__ +- `MULTI Device Mode `__ +- `Query Device Properties `__ +- `Configurations for GPUs with OpenVINO `__ +- `Benchmark Python Tool `__ +- `Asynchronous Inferencing `__ diff --git a/docs/notebooks/109-latency-tricks-with-output.rst b/docs/notebooks/109-latency-tricks-with-output.rst index af706167522e3b..c8913666ba01a8 100644 --- a/docs/notebooks/109-latency-tricks-with-output.rst +++ b/docs/notebooks/109-latency-tricks-with-output.rst @@ -20,7 +20,7 @@ It should give even better performance, but we recommend testing it anyway. .. note:: - + We especially recommend trying ``OpenVINO IR model + CPU + shared memory in latency mode`` or ``OpenVINO IR model + CPU + shared memory + more inference threads``. @@ -29,31 +29,28 @@ The quantization and pre-post-processing API are not included here as they change the precision (quantization) or processing graph (prepostprocessor). You can find examples of how to apply them to optimize performance on OpenVINO IR files in -`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__ and -`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__. +`111-detection-quantization <../111-detection-quantization>`__ and +`118-optimize-preprocessing <../118-optimize-preprocessing>`__. |image0| .. note:: Many of the steps presented below will give you better - performance. However, some of them may not change anything if they - are strongly dependent on either the hardware or the model. Please - run this notebook on your computer with your model to learn which of - them makes sense in your case. + performance. However, some of them may **not change anything** or + even **worsen the performance** if they are strongly dependent on + either the hardware or the model. Please run this notebook on your + computer with your model to learn which of them makes sense in your + case. - All the following tricks were run with OpenVINO 2022.3. Future + All the following tricks were run with OpenVINO 2023.0. Future versions of OpenVINO may include various optimizations that may result in different performance. A similar notebook focused on the throughput mode is available -`here <109-throughput-tricks-with-output.html>`__. 
- - +`here <109-throughput-tricks.ipynb>`__. -.. _top: - -**Table of contents**: +**Table of contents:** - `Data <#data>`__ - `Model <#model>`__ @@ -74,13 +71,13 @@ A similar notebook focused on the throughput mode is available - `Conclusions <#conclusions>`__ Prerequisites -------------- +############################################################################################################################### .. |image0| image:: https://user-images.githubusercontent.com/4547501/229120774-01f4f972-424d-4280-8395-220dd432985a.png .. code:: ipython3 - !pip install -q seaborn ultralytics + !pip install -q "openvino==2023.1.0.dev20230811" seaborn ultralytics .. code:: ipython3 @@ -93,10 +90,9 @@ Prerequisites sys.path.append("../utils") import notebook_utils as utils -Data `⇑ <#top>`__ +Data ############################################################################################################################### - We will use the same image of the dog sitting on a bicycle for all experiments below. The image is resized and preprocessed to fulfill the requirements of this particular object detection model. @@ -130,14 +126,13 @@ requirements of this particular object detection model. .. parsed-literal:: - + -Model `⇑ <#top>`__ +Model ############################################################################################################################### - We decided to go with `YOLOv5n `__, one of the state-of-the-art object detection models, easily available through the @@ -192,10 +187,9 @@ PyTorch Hub and small enough to see the difference in performance. Adding AutoShape... -Hardware `⇑ <#top>`__ +Hardware ############################################################################################################################### - The code below lists the available hardware we will use in the benchmarking process. @@ -222,10 +216,9 @@ benchmarking process. CPU: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz -Helper functions `⇑ <#top>`__ +Helper functions ############################################################################################################################### - We’re defining a benchmark model function to use for all optimized models below. It runs inference 1000 times, averages the latency time, and prints two measures: seconds per image and frames per second (FPS). @@ -358,18 +351,16 @@ the image. utils.show_array(output_img) -Optimizations `⇑ <#top>`__ +Optimizations ############################################################################################################################### - Below, we present the performance tricks for faster inference in the latency mode. We release resources after every benchmarking to be sure the same amount of resource is available for every experiment. -PyTorch model `⇑ <#top>`__ +PyTorch model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - First, we’re benchmarking the original PyTorch model without any optimizations applied. We will treat it as our baseline. @@ -389,14 +380,13 @@ optimizations applied. We will treat it as our baseline. .. parsed-literal:: - PyTorch model on CPU. First inference time: 0.0227 seconds - PyTorch model on CPU: 0.0191 seconds per image (52.34 FPS) + PyTorch model on CPU. 
First inference time: 0.0293 seconds + PyTorch model on CPU: 0.0204 seconds per image (48.96 FPS) -ONNX model `⇑ <#top>`__ +ONNX model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The first optimization is exporting the PyTorch model to ONNX and running it in OpenVINO. It’s possible, thanks to the ONNX frontend. It means we don’t necessarily have to convert the model to Intermediate @@ -439,14 +429,13 @@ Representation (IR) to leverage the OpenVINO Runtime. .. parsed-literal:: - ONNX model on CPU. First inference time: 0.0182 seconds - ONNX model on CPU: 0.0135 seconds per image (74.31 FPS) + ONNX model on CPU. First inference time: 0.0194 seconds + ONNX model on CPU: 0.0135 seconds per image (74.28 FPS) -OpenVINO IR model `⇑ <#top>`__ +OpenVINO IR model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Let’s convert the ONNX model to OpenVINO Intermediate Representation (IR) FP16 and run it. Reducing the precision is one of the well-known methods for faster inference provided the hardware that supports lower @@ -457,11 +446,9 @@ accuracy drop. That’s why we skip that step in this notebook. .. code:: ipython3 - from openvino.tools import mo - - ov_model = mo.convert_model(onnx_path, compress_to_fp16=True) + ov_model = ov.convert_model(onnx_path) # save the model on disk - ov.serialize(ov_model, xml_path=str(onnx_path.with_suffix(".xml"))) + ov.save_model(ov_model, str(onnx_path.with_suffix(".xml"))) ov_cpu_model = core.compile_model(ov_model, device_name="CPU") @@ -478,17 +465,16 @@ accuracy drop. That’s why we skip that step in this notebook. .. parsed-literal:: - OpenVINO model on CPU. First inference time: 0.0157 seconds - OpenVINO model on CPU: 0.0134 seconds per image (74.40 FPS) + OpenVINO model on CPU. First inference time: 0.0160 seconds + OpenVINO model on CPU: 0.0134 seconds per image (74.71 FPS) -OpenVINO IR model on GPU `⇑ <#top>`__ +OpenVINO IR model on GPU +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Usually, a GPU device is faster than a CPU, so let’s run the above model -on the GPU. Please note you need to have an Intel GPU and `install -drivers `__ +on the GPU. Please note you need to have an Intel GPU and +`install drivers `__ to be able to run this step. In addition, offloading to the GPU helps reduce CPU load and memory consumption, allowing it to be left for routine processes. If you cannot observe a faster inference on GPU, it @@ -507,16 +493,15 @@ execution. del ov_gpu_model # release resources -OpenVINO IR model + more inference threads `⇑ <#top>`__ +OpenVINO IR model + more inference threads +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - There is a possibility to add a config for any device (CPU in this case). We will increase the number of threads to an equal number of our -cores. It should help us a lot. There are `more -options `__ +cores. There are `more options `__ to be changed, so it’s worth playing with them to see what works best in -our case. +our case. In some cases, this optimization may worsen the performance. +If it is the case, don’t use it. .. code:: ipython3 @@ -537,16 +522,15 @@ our case. .. parsed-literal:: - OpenVINO model + more threads on CPU. 
First inference time: 0.0151 seconds - OpenVINO model + more threads on CPU: 0.0134 seconds per image (74.36 FPS) + OpenVINO model + more threads on CPU. First inference time: 0.0159 seconds + OpenVINO model + more threads on CPU: 0.0134 seconds per image (74.68 FPS) -OpenVINO IR model in latency mode `⇑ <#top>`__ +OpenVINO IR model in latency mode +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - OpenVINO offers a virtual device called -`AUTO `__, +`AUTO `__, which can select the best device for us based on a performance hint. There are three different hints: ``LATENCY``, ``THROUGHPUT``, and ``CUMULATIVE_THROUGHPUT``. As this notebook is focused on the latency @@ -568,14 +552,13 @@ devices as well. .. parsed-literal:: - OpenVINO model on AUTO. First inference time: 0.0159 seconds - OpenVINO model on AUTO: 0.0136 seconds per image (73.57 FPS) + OpenVINO model on AUTO. First inference time: 0.0160 seconds + OpenVINO model on AUTO: 0.0136 seconds per image (73.59 FPS) -OpenVINO IR model in latency mode + shared memory `⇑ <#top>`__ +OpenVINO IR model in latency mode + shared memory +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - OpenVINO is a C++ toolkit with Python wrappers (API). The default behavior in the Python API is copying the input to the additional buffer and then running processing in C++, which prevents many @@ -603,25 +586,23 @@ performance! .. parsed-literal:: - OpenVINO model + shared memory on AUTO. First inference time: 0.0139 seconds - OpenVINO model + shared memory on AUTO: 0.0054 seconds per image (184.61 FPS) + OpenVINO model + shared memory on AUTO. First inference time: 0.0144 seconds + OpenVINO model + shared memory on AUTO: 0.0054 seconds per image (185.64 FPS) -Other tricks `⇑ <#top>`__ +Other tricks +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - There are other tricks for performance improvement, such as quantization and pre-post-processing or dedicated to throughput mode. To get even more from your model, please visit -`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__, -`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__, and -`109-throughput-tricks <109-latency-tricks-with-output.html>`__. +`111-detection-quantization <../111-detection-quantization>`__, +`118-optimize-preprocessing <../118-optimize-preprocessing>`__, and +`109-throughput-tricks <109-throughput-tricks.ipynb>`__. -Performance comparison `⇑ <#top>`__ +Performance comparison ############################################################################################################################### - The following graphical comparison is valid for the selected model and hardware simultaneously. If you cannot see any improvement between some steps, just skip them. @@ -656,15 +637,13 @@ steps, just skip them. .. image:: 109-latency-tricks-with-output_files/109-latency-tricks-with-output_30_0.png -Conclusions `⇑ <#top>`__ +Conclusions ############################################################################################################################### - We already showed the steps needed to improve the performance of an object detection model. Even if you experience much better performance after running this notebook, please note this may not be valid for every hardware or every model. 
For the most accurate results, please use -``benchmark_app`` `command-line -tool `__. +``benchmark_app`` `command-line tool `__. Note that ``benchmark_app`` cannot measure the impact of some tricks above, e.g., shared memory. diff --git a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_19_0.jpg b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_19_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_19_0.jpg +++ b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_19_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_23_0.jpg b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_23_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_23_0.jpg +++ b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_23_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_25_0.jpg b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_25_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_25_0.jpg +++ b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_25_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_27_0.jpg b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_27_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_27_0.jpg +++ b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_27_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_30_0.png b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_30_0.png index 741cd3a1679082..dd22aa1c0cdaa2 100644 --- a/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_30_0.png +++ b/docs/notebooks/109-latency-tricks-with-output_files/109-latency-tricks-with-output_30_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9d03578132292067edf38c21858cb3b2ed61780d3ff47f1962330823f4abee26 -size 56954 +oid 
sha256:df3da13ecdc00b84a05159a67c90e4124efafb0f6d93913eaa2dde020e854e8d +size 56962 diff --git a/docs/notebooks/109-throughput-tricks-with-output.rst b/docs/notebooks/109-throughput-tricks-with-output.rst index 523bb307ed2cf3..6782c568cdf29d 100644 --- a/docs/notebooks/109-throughput-tricks-with-output.rst +++ b/docs/notebooks/109-throughput-tricks-with-output.rst @@ -1,8 +1,6 @@ Performance tricks in OpenVINO for throughput mode ================================================== - - The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a throughput mode. High throughput is especially desired in applications when the results are @@ -26,31 +24,28 @@ The quantization and pre-post-processing API are not included here as they change the precision (quantization) or processing graph (prepostprocessor). You can find examples of how to apply them to optimize performance on OpenVINO IR files in -`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__ and -`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__. +`111-detection-quantization <../111-detection-quantization>`__ and +`118-optimize-preprocessing <../118-optimize-preprocessing>`__. |image0| .. note:: Many of the steps presented below will give you better - performance. However, some of them may not change anything if they - are strongly dependent on either the hardware or the model. Please - run this notebook on your computer with your model to learn which of - them makes sense in your case. + performance. However, some of them may **not change anything** or + even **worsen the performance** if they are strongly dependent on + either the hardware or the model. Please run this notebook on your + computer with your model to learn which of them makes sense in your + case. - All the following tricks were run with OpenVINO 2022.3. Future + All the following tricks were run with OpenVINO 2023.0. Future versions of OpenVINO may include various optimizations that may result in different performance. A similar notebook focused on the latency mode is available -`here <109-latency-tricks-with-output.html>`__. - +`here <109-latency-tricks.ipynb>`__. - -.. _top: - -**Table of contents**: +**Table of contents:** - `Data <#data>`__ - `Model <#model>`__ @@ -61,24 +56,24 @@ A similar notebook focused on the latency mode is available - `PyTorch model <#pytorch-model>`__ - `OpenVINO IR model <#openvino-ir-model>`__ - `OpenVINO IR model + bigger batch <#openvino-ir-model-+-bigger-batch>`__ + - `Asynchronous processing <#asynchronous-processing>`__ - `OpenVINO IR model in throughput mode <#openvino-ir-model-in-throughput-mode>`__ - `OpenVINO IR model in throughput mode on GPU <#openvino-ir-model-in-throughput-mode-on-gpu>`__ - `OpenVINO IR model in throughput mode on AUTO <#openvino-ir-model-in-throughput-mode-on-auto>`__ - `OpenVINO IR model in cumulative throughput mode on AUTO <#openvino-ir-model-in-cumulative-throughput-mode-on-auto>`__ - - `OpenVINO IR model in cumulative throughput mode on AUTO + asynchronous processing <#openvino-ir-model-in-cumulative-throughput-mode-on-auto-+-asynchronous-processing>`__ - `Other tricks <#other-tricks>`__ - `Performance comparison <#performance-comparison>`__ - `Conclusions <#conclusions>`__ Prerequisites -------------- +############################################################################################################################### -.. 
|image0| image:: https://github.com/openvinotoolkit/openvino_notebooks/assets/4547501/e1a6e230-7c80-491a-8732-02515c556f1b +.. |image0| image:: https://github.com/openvinotoolkit/openvino_notebooks/assets/4547501/ac17148c-bee9-43aa-87fc-ead61ac75f1d .. code:: ipython3 - !pip install -q seaborn ultralytics + !pip install -q "openvino==2023.1.0.dev20230811" seaborn ultralytics .. code:: ipython3 @@ -90,10 +85,9 @@ Prerequisites sys.path.append("../utils") import notebook_utils as utils -Data `⇑ <#top>`__ +Data ############################################################################################################################### - We will use the same image of the dog sitting on a bicycle copied 1000 times to simulate the video with 1000 frames (about 33s). The image is resized and preprocessed to fulfill the requirements of this particular @@ -133,14 +127,13 @@ object detection model. .. parsed-literal:: - + -Model `⇑ <#top>`__ +Model ############################################################################################################################### - We decided to go with `YOLOv5n `__, one of the state-of-the-art object detection models, easily available through the @@ -179,10 +172,9 @@ PyTorch Hub and small enough to see the difference in performance. requirements: /opt/home/k8sworker/.cache/torch/hub/requirements.txt not found, check failed. -Hardware `⇑ <#top>`__ +Hardware ############################################################################################################################### - The code below lists the available hardware we will use in the benchmarking process. @@ -209,10 +201,9 @@ benchmarking process. CPU: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz -Helper functions `⇑ <#top>`__ +Helper functions ############################################################################################################################### - We’re defining a benchmark model function to use for all optimizations below. It runs inference for 1000 frames and prints average frames per second (FPS). @@ -350,18 +341,16 @@ the image. utils.show_array(output_img) -Optimizations `⇑ <#top>`__ +Optimizations ############################################################################################################################### - Below, we present the performance tricks for faster inference in the throughput mode. We release resources after every benchmarking to be sure the same amount of resource is available for every experiment. -PyTorch model `⇑ <#top>`__ +PyTorch model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - First, we’re benchmarking the original PyTorch model without any optimizations applied. We will treat it as our baseline. @@ -381,14 +370,13 @@ optimizations applied. We will treat it as our baseline. .. parsed-literal:: - PyTorch model on CPU. First inference time: 0.0266 seconds - PyTorch model on CPU: 0.0200 seconds per image (49.99 FPS) + PyTorch model on CPU. First inference time: 0.0192 seconds + PyTorch model on CPU: 0.0189 seconds per image (52.95 FPS) -OpenVINO IR model `⇑ <#top>`__ +OpenVINO IR model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The first optimization is exporting the PyTorch model to OpenVINO Intermediate Representation (IR) FP16 and running it. 
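As a side note on precision: with the 2023.x conversion API used in this notebook, FP16 compression is no longer a conversion-time flag; it is applied when the model is serialized. A minimal sketch, assuming an illustrative ``model.onnx`` file (not one of the files produced in this notebook):

.. code:: ipython3

    # Illustrative sketch (assumed file names): openvino.convert_model has no
    # compress_to_fp16 argument; weights are compressed to FP16 when the model
    # is saved, where compress_to_fp16 defaults to True.
    import openvino as ov

    ov_model = ov.convert_model("model.onnx")
    ov.save_model(ov_model, "model_fp16.xml")  # same as compress_to_fp16=True
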
Reducing the precision is one of the well-known methods for faster inference provided @@ -400,8 +388,6 @@ step in this notebook. .. code:: ipython3 - from openvino.tools import mo - onnx_path = base_model_dir / Path(f"{model_name}_{IMAGE_WIDTH}_{IMAGE_HEIGHT}").with_suffix(".onnx") # export PyTorch model to ONNX if it doesn't already exist @@ -410,7 +396,7 @@ step in this notebook. torch.onnx.export(pytorch_model, dummy_input, onnx_path) # convert ONNX model to IR, use FP16 - ov_model = mo.convert_model(onnx_path, compress_to_fp16=True) + ov_model = ov.convert_model(onnx_path) .. code:: ipython3 @@ -429,14 +415,13 @@ step in this notebook. .. parsed-literal:: - OpenVINO model on CPU. First inference time: 0.0195 seconds - OpenVINO model on CPU: 0.0073 seconds per image (136.92 FPS) + OpenVINO model on CPU. First inference time: 0.0124 seconds + OpenVINO model on CPU: 0.0073 seconds per image (136.31 FPS) -OpenVINO IR model + bigger batch `⇑ <#top>`__ +OpenVINO IR model + bigger batch +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Batch processing often gives higher throughput as more inputs are processed at once. To use bigger batches (than 1), we must convert the model again, specifying a new input shape, and reshape input frames. In @@ -456,7 +441,7 @@ hardware and model. torch.onnx.export(pytorch_model, dummy_input, onnx_batch_path) # export the model with the bigger batch size - ov_batch_model = mo.convert_model(onnx_batch_path, compress_to_fp16=True) + ov_batch_model = ov.convert_model(onnx_batch_path) .. parsed-literal:: @@ -486,52 +471,87 @@ hardware and model. .. parsed-literal:: - OpenVINO model + bigger batch on CPU. First inference time: 0.0590 seconds - OpenVINO model + bigger batch on CPU: 0.0069 seconds per image (143.96 FPS) + OpenVINO model + bigger batch on CPU. First inference time: 0.0428 seconds + OpenVINO model + bigger batch on CPU: 0.0076 seconds per image (131.76 FPS) -OpenVINO IR model in throughput mode `⇑ <#top>`__ +Asynchronous processing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +Asynchronous mode means that OpenVINO immediately returns from an +inference call and doesn’t wait for the result. It requires more +concurrent code to be written, but should offer better processing time +utilization e.g. we can run some pre- or post-processing code while +waiting for the result. Although we could use async processing directly +(start_async() function), it’s recommended to use AsyncInferQueue, which +is an easier approach to achieve the same outcome. This class +automatically spawns the pool of InferRequest objects (also called +“jobs”) and provides synchronization mechanisms to control the flow of +the pipeline. + +.. note:: + + Asynchronous processing cannot guarantee outputs to be in + the same order as inputs, so be careful in case of applications when + the order of frames matters, e.g., videos. + +.. 
code:: ipython3 + + def benchmark_async_mode(ov_model, benchmark_name, device_name): + def callback(infer_request, info): + result = infer_request.get_output_tensor(0).data[0] + show_result(result) + pass + + infer_queue = ov.AsyncInferQueue(ov_model) + infer_queue.set_callback(callback) # set callback to post-process (show) results + + infer_queue.start_async(video_frames[0]) + infer_queue.wait_all() + + # don't show output for the remaining frames + infer_queue.set_callback(lambda x, y: {}) + fps = benchmark_model(model=infer_queue.start_async, frames=video_frames, async_queue=infer_queue, benchmark_name=benchmark_name, device_name=device_name) + + del infer_queue # release resources + return fps + +OpenVINO IR model in throughput mode ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ OpenVINO allows specifying a performance hint changing the internal configuration of the device. There are three different hints: ``LATENCY``, ``THROUGHPUT``, and ``CUMULATIVE_THROUGHPUT``. As this notebook is focused on the throughput mode, we will use the latter two. The hints can be used with other devices as well. Throughput mode -implicitly triggers using the `Automatic -Batching `__ +implicitly triggers using the `Automatic Batching `__ feature, which sets the batch size to the optimal level. .. code:: ipython3 ov_cpu_through_model = core.compile_model(ov_model, device_name="CPU", config={"PERFORMANCE_HINT": "THROUGHPUT"}) - result = ov_cpu_through_model(video_frames[0])[ov_cpu_through_model.output(0)][0] - show_result(result) - ov_cpu_through_fps = benchmark_model(model=ov_cpu_through_model, frames=video_frames, benchmark_name="OpenVINO model", device_name="CPU (THROUGHPUT)") + ov_cpu_through_fps = benchmark_async_mode(ov_cpu_through_model, benchmark_name="OpenVINO model", device_name="CPU (THROUGHPUT)") del ov_cpu_through_model # release resources -.. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_22_0.jpg +.. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_24_0.jpg .. parsed-literal:: - OpenVINO model on CPU (THROUGHPUT). First inference time: 0.0226 seconds - OpenVINO model on CPU (THROUGHPUT): 0.0117 seconds per image (85.50 FPS) + OpenVINO model on CPU (THROUGHPUT). First inference time: 0.0237 seconds + OpenVINO model on CPU (THROUGHPUT): 0.0040 seconds per image (249.96 FPS) -OpenVINO IR model in throughput mode on GPU `⇑ <#top>`__ +OpenVINO IR model in throughput mode on GPU +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Usually, a GPU device provides more frames per second than a CPU, so let’s run the above model on the GPU. Please note you need to have an -Intel GPU and `install -drivers `__ +Intel GPU and `install drivers `__ to be able to run this step. In addition, offloading to the GPU helps reduce CPU load and memory consumption, allowing it to be left for routine processes. If you cannot observe a higher throughput on GPU, it @@ -545,18 +565,15 @@ execution. 
# compile for GPU ov_gpu_model = core.compile_model(ov_model, device_name="GPU", config={"PERFORMANCE_HINT": "THROUGHPUT"}) - result = ov_gpu_model(video_frames[0])[ov_gpu_model.output(0)][0] - show_result(result) - ov_gpu_fps = benchmark_model(model=ov_gpu_model, frames=video_frames, benchmark_name="OpenVINO model", device_name="GPU (THROUGHPUT)") + ov_gpu_fps = benchmark_async_mode(ov_gpu_model, benchmark_name="OpenVINO model", device_name="GPU (THROUGHPUT)") del ov_gpu_model # release resources -OpenVINO IR model in throughput mode on AUTO `⇑ <#top>`__ +OpenVINO IR model in throughput mode on AUTO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - OpenVINO offers a virtual device called -`AUTO `__, +`AUTO `__, which can select the best device for us based on the aforementioned performance hint. @@ -564,27 +581,24 @@ performance hint. ov_auto_model = core.compile_model(ov_model, device_name="AUTO", config={"PERFORMANCE_HINT": "THROUGHPUT"}) - result = ov_auto_model(video_frames[0])[ov_auto_model.output(0)][0] - show_result(result) - ov_auto_fps = benchmark_model(model=ov_auto_model, frames=video_frames, benchmark_name="OpenVINO model", device_name="AUTO (THROUGHPUT)") + ov_auto_fps = benchmark_async_mode(ov_auto_model, benchmark_name="OpenVINO model", device_name="AUTO (THROUGHPUT)") del ov_auto_model # release resources -.. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_26_0.jpg +.. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg .. parsed-literal:: - OpenVINO model on AUTO (THROUGHPUT). First inference time: 0.0257 seconds - OpenVINO model on AUTO (THROUGHPUT): 0.0215 seconds per image (46.61 FPS) + OpenVINO model on AUTO (THROUGHPUT). First inference time: 0.0237 seconds + OpenVINO model on AUTO (THROUGHPUT): 0.0040 seconds per image (250.15 FPS) -OpenVINO IR model in cumulative throughput mode on AUTO `⇑ <#top>`__ +OpenVINO IR model in cumulative throughput mode on AUTO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The AUTO device in throughput mode will select the best, but one physical device to bring the highest throughput. However, if we have more Intel devices like CPU, iGPUs, and dGPUs in one machine, we may @@ -595,62 +609,7 @@ activate all devices. ov_auto_cumulative_model = core.compile_model(ov_model, device_name="AUTO", config={"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"}) - result = ov_auto_cumulative_model(video_frames[0])[ov_auto_cumulative_model.output(0)][0] - show_result(result) - ov_auto_cumulative_fps = benchmark_model(model=ov_auto_cumulative_model, frames=video_frames, benchmark_name="OpenVINO model", device_name="AUTO (CUMULATIVE THROUGHPUT)") - - - -.. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg - - -.. parsed-literal:: - - OpenVINO model on AUTO (CUMULATIVE THROUGHPUT). First inference time: 0.0268 seconds - OpenVINO model on AUTO (CUMULATIVE THROUGHPUT): 0.0216 seconds per image (46.25 FPS) - - -OpenVINO IR model in cumulative throughput mode on AUTO + asynchronous processing `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -Asynchronous mode means that OpenVINO immediately returns from an -inference call and doesn’t wait for the result. 
It requires more -concurrent code to be written, but should offer better processing time -utilization e.g. we can run some pre- or post-processing code while -waiting for the result. Although we could use async processing directly -(start_async() function), it’s recommended to use AsyncInferQueue, which -is an easier approach to achieve the same outcome. This class -automatically spawns the pool of InferRequest objects (also called -“jobs”) and provides synchronization mechanisms to control the flow of -the pipeline. - -.. note:: - - Asynchronous processing cannot guarantee outputs to be in - the same order as inputs, so be careful in case of applications when - the order of frames matters, e.g., videos. - -.. code:: ipython3 - - from openvino.runtime import AsyncInferQueue - - - def callback(infer_request, info): - result = infer_request.get_output_tensor(0).data[0] - show_result(result) - pass - - infer_queue = AsyncInferQueue(ov_auto_cumulative_model) - infer_queue.set_callback(callback) # set callback to post-process (show) results - - infer_queue.start_async(video_frames[0]) - infer_queue.wait_all() - - # don't show output for the remaining frames - infer_queue.set_callback(lambda x, y: {}) - ov_async_model = benchmark_model(model=infer_queue.start_async, frames=video_frames, async_queue=infer_queue, benchmark_name="OpenVINO model in asynchronous processing", device_name="AUTO (CUMULATIVE THROUGHPUT)") - - del infer_queue # release resources + ov_auto_cumulative_fps = benchmark_async_mode(ov_auto_cumulative_model, benchmark_name="OpenVINO model", device_name="AUTO (CUMULATIVE THROUGHPUT)") @@ -659,27 +618,25 @@ the pipeline. .. parsed-literal:: - OpenVINO model in asynchronous processing on AUTO (CUMULATIVE THROUGHPUT). First inference time: 0.0239 seconds - OpenVINO model in asynchronous processing on AUTO (CUMULATIVE THROUGHPUT): 0.0041 seconds per image (245.46 FPS) + OpenVINO model on AUTO (CUMULATIVE THROUGHPUT). First inference time: 0.0254 seconds + OpenVINO model on AUTO (CUMULATIVE THROUGHPUT): 0.0040 seconds per image (249.15 FPS) -Other tricks `⇑ <#top>`__ +Other tricks +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - There are other tricks for performance improvement, such as advanced options, quantization and pre-post-processing or dedicated to latency mode. To get even more from your model, please visit `advanced throughput -options `__, -`109-latency-tricks <109-latency-tricks-with-output.html>`__, -`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__, and -`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__. +options `__, +`109-latency-tricks <109-latency-tricks.ipynb>`__, +`111-detection-quantization <../111-detection-quantization>`__, and +`118-optimize-preprocessing <../118-optimize-preprocessing>`__. -Performance comparison `⇑ <#top>`__ +Performance comparison ############################################################################################################################### - The following graphical comparison is valid for the selected model and hardware simultaneously. If you cannot see any improvement between some steps, just skip them. @@ -693,9 +650,9 @@ steps, just skip them. 
from matplotlib import pyplot as plt labels = ["PyTorch model", "OpenVINO IR model", "OpenVINO IR model + bigger batch", "OpenVINO IR model in throughput mode", "OpenVINO IR model in throughput mode on GPU", - "OpenVINO IR model in throughput mode on AUTO", "OpenVINO IR model in cumulative throughput mode on AUTO", "OpenVINO IR model in cumulative throughput mode on AUTO + asynchronous processing"] + "OpenVINO IR model in throughput mode on AUTO", "OpenVINO IR model in cumulative throughput mode on AUTO"] - fps = [pytorch_fps, ov_cpu_fps, ov_cpu_batch_fps, ov_cpu_through_fps, ov_gpu_fps, ov_auto_fps, ov_auto_cumulative_fps, ov_async_model] + fps = [pytorch_fps, ov_cpu_fps, ov_cpu_batch_fps, ov_cpu_through_fps, ov_gpu_fps, ov_auto_fps, ov_auto_cumulative_fps] bar_colors = colors[::10] / 255.0 @@ -713,15 +670,14 @@ steps, just skip them. .. image:: 109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_33_0.png -Conclusions `⇑ <#top>`__ +Conclusions ############################################################################################################################### - We already showed the steps needed to improve the throughput of an object detection model. Even if you experience much better performance after running this notebook, please note this may not be valid for every hardware or every model. For the most accurate results, please use ``benchmark_app`` `command-line -tool `__. +tool `__. Note that ``benchmark_app`` cannot measure the impact of some tricks above. diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_17_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_17_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_17_0.jpg +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_17_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_20_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_20_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_20_0.jpg +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_20_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_22_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_22_0.jpg deleted file mode 100644 index 3241ad3c3f5bba..00000000000000 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_22_0.jpg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 diff --git 
a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_24_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_24_0.jpg new file mode 100644 index 00000000000000..ab32fedd0f3d00 --- /dev/null +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_24_0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_26_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_26_0.jpg deleted file mode 100644 index 3241ad3c3f5bba..00000000000000 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_26_0.jpg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_28_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_30_0.jpg b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_30_0.jpg index 3241ad3c3f5bba..ab32fedd0f3d00 100644 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_30_0.jpg +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_30_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:fcd7da923bb0f72430eaf7b4770175320f1f3219aaca2d460c54fa9ef07e51c2 -size 162756 +oid sha256:84e4f91c248768c2ea746240e307041396099f0d52fdb89b0179fa72e353894a +size 162715 diff --git a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_33_0.png b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_33_0.png index ef164610208f1e..d6c90e28f4149d 100644 --- a/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_33_0.png +++ b/docs/notebooks/109-throughput-tricks-with-output_files/109-throughput-tricks-with-output_33_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9547ead08b2efa00ac1380ab53f7cdb22f416ea13ecf0d9eb556703e9fcdde0d -size 77855 +oid sha256:d1fc053f1a52fbefbbfbfaaa6e9d0d5c11ddfdc0481929bfd0dd074338a67509 +size 62467 diff --git a/docs/notebooks/110-ct-scan-live-inference-with-output.rst b/docs/notebooks/110-ct-scan-live-inference-with-output.rst index 9ae34d9db77f08..713d691ac05e65 100644 --- a/docs/notebooks/110-ct-scan-live-inference-with-output.rst +++ b/docs/notebooks/110-ct-scan-live-inference-with-output.rst @@ -2,7 +2,7 @@ Live Inference and Benchmark CT-scan Data with OpenVINO™ 
======================================================== Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 4 ------------------------------------------------------------------ +############################################################################################################################### This tutorial is a part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to @@ -16,23 +16,20 @@ live inference with async API and MULTI plugin in OpenVINO. This notebook needs a quantized OpenVINO IR model and images from the `KiTS-19 `__ dataset, converted to -2D images. (To learn how the model is quantized, see the -`Convert and Quantize a UNet Model and Show Live Inference <110-ct-segmentation-quantize-nncf-with-output.html>`__ tutorial.) +2D images. (To learn how the model is quantized, see the `Convert and +Quantize a UNet Model and Show Live +Inference <110-ct-segmentation-quantize-nncf.ipynb>`__ tutorial.) This notebook provides a pre-trained model, trained for 20 epochs with the full KiTS-19 frames dataset, which has an F1 score on the validation -set of 0.9. The training code is available in the -`PyTorch MONAI Training <110-ct-segmentation-quantize-with-output.html>`__ +set of 0.9. The training code is available in the `PyTorch MONAI +Training <110-ct-segmentation-quantize-with-output.html>`__ notebook. For demonstration purposes, this tutorial will download one converted CT -scan to use for inference. - - - -.. _top: +scan to use for inference. -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -48,12 +45,11 @@ scan to use for inference. .. code:: ipython3 - !pip install -q "monai>=0.9.1,<1.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" "monai>=0.9.1,<1.0.0" -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import os @@ -63,16 +59,24 @@ Imports `⇑ <#top>`__ import numpy as np from monai.transforms import LoadImage - from openvino.runtime import Core + import openvino as ov from custom_segmentation import SegmentationModel sys.path.append("../utils") from notebook_utils import download_file -Settings `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 2023-09-08 22:52:19.504111: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:52:19.539771: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 
+ 2023-09-08 22:52:20.182360: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +Settings +############################################################################################################################### To use the pre-trained models, set ``IR_PATH`` to ``"pretrained_model/unet44.xml"`` and ``COMPRESSED_MODEL_PATH`` to @@ -109,11 +113,11 @@ trained or optimized yourself, adjust the model paths. pretrained_model/quantized_unet_kits19.bin: 0%| | 0.00/1.90M [00:00`__ +Benchmark Model Performance ############################################################################################################################### -To measure the inference performance of the IR model, use -`Benchmark Tool `__ +To measure the inference performance of the IR model, use `Benchmark +Tool `__ - an inference performance measurement tool in OpenVINO. Benchmark tool is a command-line application that can be run in the notebook with ``! benchmark_app`` or ``%sx benchmark_app`` commands. @@ -129,10 +133,9 @@ is a command-line application that can be run in the notebook with Run ``benchmark_app --help`` to see an overview of all command-line options. - .. code:: ipython3 - core = Core() + core = ov.Core() # By default, benchmark on MULTI:CPU,GPU if a GPU is available, otherwise on CPU. device_list = ["MULTI:CPU,GPU" if "GPU" in core.available_devices else "AUTO"] @@ -168,18 +171,18 @@ is a command-line application that can be run in the notebook with [ INFO ] Parsing input parameters [Step 2/11] Loading OpenVINO Runtime [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: [ INFO ] AUTO - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] [Step 3/11] Setting device configuration [ WARNING ] Performance hint was not explicitly specified in command line. Device(AUTO) performance hint will be set to PerformanceMode.LATENCY. [Step 4/11] Reading model files [ INFO ] Loading model files - [ INFO ] Read model took 13.69 ms + [ INFO ] Read model took 13.99 ms [ INFO ] Original model I/O parameters: [ INFO ] Model inputs: [ INFO ] input.1 (node: input.1) : f32 / [...] / [1,1,512,512] @@ -193,52 +196,54 @@ is a command-line application that can be run in the notebook with [ INFO ] Model outputs: [ INFO ] 153 (node: 153) : f32 / [...] 
/ [1,1,512,512] [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 181.66 ms + [ INFO ] Compile model took 237.32 ms [Step 8/11] Querying optimal runtime parameters [ INFO ] Model: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.LATENCY [ INFO ] NETWORK_NAME: pretrained_unet_kits19 + [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] PERFORMANCE_HINT: PerformanceMode.LATENCY [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1 - [ INFO ] MODEL_PRIORITY: Priority.MEDIUM [ INFO ] MULTI_DEVICE_PRIORITIES: CPU [ INFO ] CPU: - [ INFO ] CPU_BIND_THREAD: YES - [ INFO ] CPU_THREADS_NUM: 0 - [ INFO ] CPU_THROUGHPUT_STREAMS: 1 - [ INFO ] DEVICE_ID: - [ INFO ] DUMP_EXEC_GRAPH_AS_DOT: - [ INFO ] DYN_BATCH_ENABLED: NO - [ INFO ] DYN_BATCH_LIMIT: 0 - [ INFO ] ENFORCE_BF16: NO - [ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO + [ INFO ] AFFINITY: Affinity.CORE + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 + [ INFO ] ENABLE_CPU_PINNING: True + [ INFO ] ENABLE_HYPER_THREADING: False + [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE + [ INFO ] INFERENCE_NUM_THREADS: 12 + [ INFO ] INFERENCE_PRECISION_HINT: [ INFO ] NETWORK_NAME: pretrained_unet_kits19 + [ INFO ] NUM_STREAMS: 1 [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1 - [ INFO ] PERFORMANCE_HINT: LATENCY + [ INFO ] PERFORMANCE_HINT: PerformanceMode.LATENCY [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] PERF_COUNT: NO - [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] PERF_COUNT: False + [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE + [ INFO ] MODEL_PRIORITY: Priority.MEDIUM + [ INFO ] LOADED_FROM_CACHE: False [Step 9/11] Creating infer requests and preparing input tensors [ WARNING ] No input files were given for input 'input.1'!. This input will be filled with random values! [ INFO ] Fill input 'input.1' with random values [Step 10/11] Measuring performance (Start inference synchronously, limits: 15000 ms duration) [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 26.05 ms + [ INFO ] First inference took 27.50 ms [Step 11/11] Dumping statistics report [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 1424 iterations - [ INFO ] Duration: 15004.96 ms + [ INFO ] Count: 1349 iterations + [ INFO ] Duration: 15006.01 ms [ INFO ] Latency: - [ INFO ] Median: 10.29 ms - [ INFO ] Average: 10.35 ms - [ INFO ] Min: 10.14 ms - [ INFO ] Max: 14.72 ms - [ INFO ] Throughput: 97.14 FPS + [ INFO ] Median: 10.89 ms + [ INFO ] Average: 10.94 ms + [ INFO ] Min: 10.60 ms + [ INFO ] Max: 15.01 ms + [ INFO ] Throughput: 89.90 FPS -Download and Prepare Data `⇑ <#top>`__ +Download and Prepare Data ############################################################################################################################### - Download one validation video for live inference. This tutorial reuses the ``KitsDataset`` class that was also used in the @@ -283,10 +288,9 @@ downloaded and extracted in the next cell. Downloaded and extracted data for case_00117 -Show Live Inference `⇑ <#top>`__ +Show Live Inference ############################################################################################################################### - To show live inference on the model in the notebook, use the asynchronous processing feature of OpenVINO Runtime. @@ -295,9 +299,11 @@ If you use a GPU device, with ``device="GPU"`` or card, model loading will be slow the first time you run this code. 
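Model caching is controlled by the ``CACHE_DIR`` property; a minimal sketch of enabling it explicitly is shown below (the ``cache`` folder name and ``model.xml`` path are arbitrary examples, not files from this notebook):

.. code:: ipython3

    # Illustrative sketch (not part of this notebook): point the runtime at a
    # cache directory so compiled blobs can be reused on subsequent runs, which
    # shortens repeated compile_model() calls, especially on GPU.
    import openvino as ov

    core = ov.Core()
    core.set_property({"CACHE_DIR": "cache"})  # "cache" is an example path
    # compiled_model = core.compile_model("model.xml", "GPU")  # placeholder model path
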
The model will be cached, so after the first time model loading will be faster. For more information on OpenVINO Runtime, including Model -Caching, refer to the `OpenVINO API tutorial <002-openvino-api-with-output.html>`__. +Caching, refer to the `OpenVINO API +tutorial <002-openvino-api-with-output.html>`__. -We will use `AsyncInferQueue `__ +We will use +```AsyncInferQueue`` `__ to perform asynchronous inference. It can be instantiated with compiled model and a number of jobs - parallel execution threads. If you don’t pass a number of jobs or pass ``0``, then OpenVINO will pick the optimal @@ -313,13 +319,12 @@ inference queue, there are two jobs to do: Everything else will be handled by the ``AsyncInferQueue`` instance. -Load Model and List of Image Files `⇑ <#top>`__ +Load Model and List of Image Files +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Load the segmentation model to OpenVINO Runtime with -``SegmentationModel``, based on the Model API from -`Open Model Zoo `__. This model +``SegmentationModel``, based on the Model API from `Open Model +Zoo `__. This model implementation includes pre and post processing for the model. For ``SegmentationModel`` this includes the code to create an overlay of the segmentation mask on the original image/frame. Uncomment the next cell @@ -327,7 +332,7 @@ to see the implementation. .. code:: ipython3 - core = Core() + core = ov.Core() segmentation_model = SegmentationModel( ie=core, model_path=Path(MODEL_PATH), sigmoid=True, rotate_and_flip=True ) @@ -341,10 +346,9 @@ to see the implementation. case_00117, 69 images -Prepare images `⇑ <#top>`__ +Prepare images +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Use the ``reader = LoadImage()`` function to read the images in the same way as in the `training <110-ct-segmentation-quantize-with-output.html>`__ @@ -363,10 +367,9 @@ tutorial. framebuf.append(image) next_frame_id += 1 -Specify device `⇑ <#top>`__ +Specify device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 device @@ -380,10 +383,9 @@ Specify device `⇑ <#top>`__ -Setting callback function `⇑ <#top>`__ +Setting callback function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - When ``callback`` is set, any job that ends the inference, calls the Python function. The ``callback`` function must have two arguments: one is the request that calls the ``callback``, which provides the @@ -399,11 +401,9 @@ The ``callback`` function will show the results of inference. from IPython import display from typing import Dict, Any - from openvino.runtime import InferRequest - # Define a callback function that runs every time the asynchronous pipeline completes inference on a frame - def completion_callback(infer_request: InferRequest, user_data: Dict[str, Any],) -> None: + def completion_callback(infer_request: ov.InferRequest, user_data: Dict[str, Any],) -> None: preprocess_meta = user_data['preprocess_meta'] raw_outputs = {out.any_name: copy.deepcopy(res.data) for out, res in zip(infer_request.model_outputs, infer_request.output_tensors)} @@ -417,19 +417,17 @@ The ``callback`` function will show the results of inference. 
display.clear_output(wait=True) display.display(i) -Create asynchronous inference queue and perform it `⇑ <#top>`__ +Create asynchronous inference queue and perform it +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import time - from openvino.runtime import AsyncInferQueue load_start_time = time.perf_counter() compiled_model = core.compile_model(segmentation_model.net, device.value) # Create asynchronous inference queue with optimal number of infer requests - infer_queue = AsyncInferQueue(compiled_model) + infer_queue = ov.AsyncInferQueue(compiled_model) infer_queue.set_callback(completion_callback) load_end_time = time.perf_counter() @@ -463,7 +461,7 @@ Create asynchronous inference queue and perform it `⇑ <#top>`__ .. parsed-literal:: - Loaded model to Dropdown(description='Device:', index=1, options=('CPU', 'AUTO'), value='AUTO') in 0.18 seconds. - Total time to infer all frames: 3.401s - Time per frame: 0.050022s (19.991 FPS) + Loaded model to Dropdown(description='Device:', index=1, options=('CPU', 'AUTO'), value='AUTO') in 0.22 seconds. + Total time to infer all frames: 3.520s + Time per frame: 0.051761s (19.320 FPS) diff --git a/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output.rst b/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output.rst index 898fa83b906f45..da35d4ac5db319 100644 --- a/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output.rst +++ b/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output.rst @@ -2,7 +2,7 @@ Quantize a Segmentation Model and Show Live Inference ===================================================== Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 3 ------------------------------------------------------------------ +############################################################################################################################### This tutorial is a part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to @@ -13,27 +13,21 @@ scratch; the data is from This third tutorial in the series shows how to: -- Convert an Original model to OpenVINO IR with `model conversion - API `__ +- Convert an Original model to OpenVINO IR with `model conversion API `__ - Quantize a PyTorch model with NNCF -- Evaluate the F1 score metric of the original model and the quantized - model +- Evaluate the F1 score metric of the original model and the quantized model - Benchmark performance of the FP32 model and the INT8 quantized model - Show live inference with OpenVINO’s async API All notebooks in this series: -- `Data Preparation for 2D Segmentation of 3D Medical - Data `__ -- `Train a 2D-UNet Medical Imaging Model with PyTorch - Lightning `__ -- Convert and Quantize a Segmentation Model and Show Live Inference - (this notebook) -- `Live Inference and Benchmark CT-scan - data <110-ct-scan-live-inference.ipynb>`__ +- `Data Preparation for 2D Segmentation of 3D Medical Data `__ +- `Train a 2D-UNet Medical Imaging Model with PyTorch Lightning `__ +- Convert and Quantize a Segmentation Model and Show Live Inference (**this notebook**) +- `Live Inference and Benchmark CT-scan data <110-ct-scan-live-inference-with-output.html>`__ Instructions ------------- +############################################################################################################################### This notebook needs a trained UNet model. 
We provide a pre-trained model, trained for 20 epochs with the full @@ -51,13 +45,9 @@ On Linux, install ``gcc``. Running this notebook with the full dataset will take a long time. For demonstration purposes, this tutorial will download one converted CT scan and use that scan for quantization and inference. For production -purposes, use a representative dataset for quantizing the model. - - +purposes, use a representative dataset for quantizing the model. -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -85,12 +75,11 @@ purposes, use a representative dataset for quantizing the model. .. code:: ipython3 - !pip install -q "monai>=0.9.1,<1.0.0" "torchmetrics>=0.11.0" + !pip install -q "openvino==2023.1.0.dev20230811" "monai>=0.9.1,<1.0.0" "torchmetrics>=0.11.0" -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 # On Windows, try to find the directory that contains x64 cl.exe and add it to the PATH to enable PyTorch @@ -147,6 +136,7 @@ Imports `⇑ <#top>`__ import warnings import zipfile from pathlib import Path + from typing import Union warnings.filterwarnings("ignore", category=UserWarning) @@ -156,14 +146,11 @@ Imports `⇑ <#top>`__ import numpy as np import torch import nncf + import openvino as ov from monai.transforms import LoadImage from nncf.common.logging.logger import set_log_level - from openvino.runtime import Core from torchmetrics import F1Score as F1 - from openvino.tools import mo - from openvino.runtime import serialize - set_log_level(logging.ERROR) # Disables all NNCF info and warning messages from custom_segmentation import SegmentationModel @@ -175,10 +162,10 @@ Imports `⇑ <#top>`__ .. parsed-literal:: - 2023-08-15 22:41:33.627938: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:41:33.662730: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 22:52:53.736369: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:52:53.771077: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:41:34.189615: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 22:52:54.411775: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT .. parsed-literal:: @@ -186,10 +173,9 @@ Imports `⇑ <#top>`__ INFO:nncf:NNCF initialized successfully. 
Supported frameworks detected: torch, tensorflow, onnx, openvino -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - By default, this notebook will download one CT scan from the KITS19 dataset that will be used for quantization. To use the full dataset, set ``BASEDIR`` to the path of the dataset, as prepared according to the @@ -203,16 +189,16 @@ dataset that will be used for quantization. To use the full dataset, set MODEL_DIR = Path("model") MODEL_DIR.mkdir(exist_ok=True) -Load PyTorch Model `⇑ <#top>`__ +Load PyTorch Model ############################################################################################################################### - Download the pre-trained model weights, load the PyTorch model and the ``state_dict`` that was saved after training. The model used in this -notebook is a `BasicUNet `__ +notebook is a +`BasicUNet `__ model from `MONAI `__. We provide a pre-trained -checkpoint. To see how this model performs, check out the -`training notebook `__. +checkpoint. To see how this model performs, check out the `training +notebook `__. .. code:: ipython3 @@ -249,10 +235,9 @@ checkpoint. To see how this model performs, check out the -Download CT-scan Data `⇑ <#top>`__ +Download CT-scan Data ############################################################################################################################### - .. code:: ipython3 # The CT scan case number. For example: 2 for data from the case_00002 directory @@ -276,20 +261,19 @@ Download CT-scan Data `⇑ <#top>`__ Data for case_00117 exists -Configuration `⇑ <#top>`__ +Configuration ############################################################################################################################### - -Dataset `⇑ <#top>`__ +Dataset +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The ``KitsDataset`` class in the next cell expects images and masks in the *``basedir``* directory, in a folder per patient. It is a simplified -version of the Dataset class in the `training notebook `__. +version of the Dataset class in the `training +notebook `__. Images are loaded with MONAI’s -`LoadImage `__, +```LoadImage`` `__, to align with the image loading method in the training notebook. This method rotates and flips the images. We define a ``rotate_and_flip`` method to display the images in the expected orientation: @@ -378,7 +362,7 @@ kidney pixels to verify that the annotations look correct: .. image:: 110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_15_1.png -Metric `⇑ <#top>`__ +Metric +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Define a metric to determine the performance of the model. @@ -391,11 +375,7 @@ library. .. code:: ipython3 - from typing import Union - from openvino.runtime.ie_api import CompiledModel - - - def compute_f1(model: Union[torch.nn.Module, CompiledModel], dataset: KitsDataset): + def compute_f1(model: Union[torch.nn.Module, ov.CompiledModel], dataset: KitsDataset): """ Compute binary F1 score of `model` on `dataset` F1 score metric is provided by the torchmetrics library @@ -406,7 +386,7 @@ library. 
with torch.no_grad(): for image, target in dataset: input_image = torch.as_tensor(image).unsqueeze(0) - if isinstance(model, CompiledModel): + if isinstance(model, ov.CompiledModel): output_layer = model.output(0) output = model(input_image)[output_layer] output = torch.from_numpy(output) @@ -417,10 +397,9 @@ library. metric.update(label.flatten(), prediction.flatten()) return metric.compute() -Quantization `⇑ <#top>`__ +Quantization ############################################################################################################################### - Before quantizing the model, we compute the F1 score on the ``FP32`` model, for comparison: @@ -443,13 +422,20 @@ this notebook. fp32_ir_path = MODEL_DIR / Path('unet_kits19_fp32.xml') - fp32_ir_model = mo.convert_model(model, input_shape=(1, 1, 512, 512)) - serialize(fp32_ir_model, str(fp32_ir_path)) + fp32_ir_model = ov.convert_model(model, example_input=torch.ones(1, 1, 512, 512, dtype=torch.float32)) + ov.save_model(fp32_ir_model, str(fp32_ir_path)) .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/monai/networks/nets/basic_unet.py:179: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11. + + +.. parsed-literal:: + + [ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s. + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/monai/networks/nets/basic_unet.py:179: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if x_e.shape[-i - 1] != x_0.shape[-i - 1]: @@ -457,20 +443,19 @@ this notebook. advanced algorithms for Neural Networks inference optimization in OpenVINO with minimal accuracy drop. -.. note:: - - NNCF Post-training Quantization is available in OpenVINO + **Note**: NNCF Post-training Quantization is available in OpenVINO 2023.0 release. - Create a quantized model from the pre-trained ``FP32`` model and the calibration dataset. The optimization process contains the following steps: -1. Create a Dataset for quantization. -2. Run ``nncf.quantize`` for getting an optimized model. -3. Export the quantized model to ONNX and then convert to OpenVINO IR model. -4. Serialize the INT8 model using ``openvino.runtime.serialize`` function for benchmarking. +:: + + 1. Create a Dataset for quantization. + 2. Run `nncf.quantize` for getting an optimized model. + 3. Export the quantized model to ONNX and then convert to OpenVINO IR model. + 4. Serialize the INT8 model using `ov.save_model` function for benchmarking. .. code:: ipython3 @@ -493,12 +478,6 @@ steps: ignored_scope=nncf.IgnoredScope(patterns=[".*LeakyReLU.*"]) ) - -.. 
parsed-literal:: - - No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' - - Export the quantized model to ONNX and then convert it to OpenVINO IR model and save it. @@ -508,36 +487,35 @@ model and save it. int8_onnx_path = MODEL_DIR / "unet_kits19_int8.onnx" int8_ir_path = Path(int8_onnx_path).with_suffix(".xml") torch.onnx.export(quantized_model, dummy_input, int8_onnx_path) - int8_ir_model = mo.convert_model(input_model=int8_onnx_path) - serialize(int8_ir_model, str(int8_ir_path)) + int8_ir_model = ov.convert_model(int8_onnx_path) + ov.save_model(int8_ir_model, str(int8_ir_path)) .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:338: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:338: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! return self._level_low.item() - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:346: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:346: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! return self._level_high.item() - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/monai/networks/nets/basic_unet.py:179: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/monai/networks/nets/basic_unet.py:179: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if x_e.shape[-i - 1] != x_0.shape[-i - 1]: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/quantize_functions.py:140: FutureWarning: 'torch.onnx._patch_torch._graph_op' is deprecated in version 1.13 and will be removed in version 1.14. Please note 'g.op()' is to be removed from torch.Graph. Please open a GitHub issue if you need this functionality.. + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/quantize_functions.py:140: FutureWarning: 'torch.onnx._patch_torch._graph_op' is deprecated in version 1.13 and will be removed in version 1.14. Please note 'g.op()' is to be removed from torch.Graph. Please open a GitHub issue if you need this functionality.. output = g.op( This notebook demonstrates post-training quantization with NNCF. NNCF also supports quantization-aware training, and other algorithms -than quantization. See the `NNCF documentation `__ in the NNCF +than quantization. See the `NNCF +documentation `__ in the NNCF repository for more information. -Compare FP32 and INT8 Model `⇑ <#top>`__ +Compare FP32 and INT8 Model ############################################################################################################################### - -Compare File Size `⇑ <#top>`__ +Compare File Size +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 fp32_ir_model_size = fp32_ir_path.with_suffix(".bin").stat().st_size / 1024 @@ -549,16 +527,16 @@ Compare File Size `⇑ <#top>`__ .. parsed-literal:: - FP32 IR model size: 7728.27 KB - INT8 model size: 1953.49 KB + FP32 IR model size: 3864.14 KB + INT8 model size: 1940.41 KB -Compare Metrics for the original model and the quantized model to be sure that there no degradation. `⇑ <#top>`__ +Compare Metrics for the original model and the quantized model to be sure that there no degradation. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 - core = Core() + core = ov.Core() int8_compiled_model = core.compile_model(int8_ir_model) int8_f1 = compute_f1(int8_compiled_model, dataset) @@ -573,11 +551,12 @@ Compare Metrics for the original model and the quantized model to be sure that t INT8 F1: 0.999 -Compare Performance of the FP32 IR Model and Quantized Models `⇑ <#top>`__ +Compare Performance of the FP32 IR Model and Quantized Models +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ To measure the inference performance of the ``FP32`` and ``INT8`` -models, we use `Benchmark Tool `__ +models, we use `Benchmark +Tool `__ - OpenVINO’s inference performance measurement tool. Benchmark tool is a command line application, part of OpenVINO development tools, that can be run in the notebook with ``! benchmark_app`` or @@ -592,7 +571,6 @@ be run in the notebook with ``! benchmark_app`` or CPU for one minute. Change ``CPU`` to ``GPU`` to benchmark on GPU. Run ``benchmark_app --help`` to see all command line options. - .. code:: ipython3 # ! benchmark_app --help @@ -613,32 +591,32 @@ be run in the notebook with ``! benchmark_app`` or [ INFO ] Parsing input parameters [Step 2/11] Loading OpenVINO Runtime [ INFO ] OpenVINO: - [ INFO ] Build ................................. 
2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] [Step 3/11] Setting device configuration [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.LATENCY. [Step 4/11] Reading model files [ INFO ] Loading model files - [ INFO ] Read model took 21.45 ms + [ INFO ] Read model took 26.77 ms [ INFO ] Original model I/O parameters: [ INFO ] Model inputs: - [ INFO ] 1 , x (node: Parameter_2) : f32 / [...] / [1,1,512,512] + [ INFO ] x (node: x) : f32 / [...] / [?,?,?,?] [ INFO ] Model outputs: - [ INFO ] 169 (node: aten::_convolution_861) : f32 / [...] / [1,1,512,512] + [ INFO ] ***NO_NAME*** (node: __module.final_conv/aten::_convolution/Add_425) : f32 / [...] / [?,1,16..,16..] [Step 5/11] Resizing model to match image sizes and given batch [ INFO ] Model batch size: 1 [Step 6/11] Configuring input of the model [ INFO ] Model inputs: - [ INFO ] 1 , x (node: Parameter_2) : f32 / [N,C,H,W] / [1,1,512,512] + [ INFO ] x (node: x) : f32 / [...] / [?,?,?,?] [ INFO ] Model outputs: - [ INFO ] 169 (node: aten::_convolution_861) : f32 / [...] / [1,1,512,512] + [ INFO ] ***NO_NAME*** (node: __module.final_conv/aten::_convolution/Add_425) : f32 / [...] / [?,1,16..,16..] [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 87.17 ms + [ INFO ] Compile model took 79.78 ms [Step 8/11] Querying optimal runtime parameters [ INFO ] Model: [ INFO ] NETWORK_NAME: Model0 @@ -653,24 +631,18 @@ be run in the notebook with ``! benchmark_app`` or [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 [ INFO ] ENABLE_CPU_PINNING: True [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True + [ INFO ] ENABLE_HYPER_THREADING: False [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input '1'!. This input will be filled with random values! - [ INFO ] Fill input '1' with random values - [Step 10/11] Measuring performance (Start inference synchronously, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 55.25 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 425 iterations - [ INFO ] Duration: 15001.92 ms - [ INFO ] Latency: - [ INFO ] Median: 35.01 ms - [ INFO ] Average: 35.08 ms - [ INFO ] Min: 34.54 ms - [ INFO ] Max: 37.24 ms - [ INFO ] Throughput: 28.56 FPS + [ ERROR ] Input x is dynamic. Provide data shapes! + Traceback (most recent call last): + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 485, in main + data_queue = get_input_data(paths_to_input, app_inputs_info) + File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/utils/inputs_filling.py", line 123, in get_input_data + raise Exception(f"Input {info.name} is dynamic. 
Provide data shapes!") + Exception: Input x is dynamic. Provide data shapes! .. code:: ipython3 @@ -685,18 +657,18 @@ be run in the notebook with ``! benchmark_app`` or [ INFO ] Parsing input parameters [Step 2/11] Loading OpenVINO Runtime [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] Device info: [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 + [ INFO ] Build ................................. 2023.1.0-12050-e33de350633 [ INFO ] [ INFO ] [Step 3/11] Setting device configuration [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.LATENCY. [Step 4/11] Reading model files [ INFO ] Loading model files - [ INFO ] Read model took 33.80 ms + [ INFO ] Read model took 13.90 ms [ INFO ] Original model I/O parameters: [ INFO ] Model inputs: [ INFO ] x.1 (node: x.1) : f32 / [...] / [1,1,512,512] @@ -710,7 +682,7 @@ be run in the notebook with ``! benchmark_app`` or [ INFO ] Model outputs: [ INFO ] 578 (node: 578) : f32 / [...] / [1,1,512,512] [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 144.48 ms + [ INFO ] Compile model took 178.40 ms [Step 8/11] Querying optimal runtime parameters [ INFO ] Model: [ INFO ] NETWORK_NAME: torch_jit @@ -725,30 +697,31 @@ be run in the notebook with ``! benchmark_app`` or [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 [ INFO ] ENABLE_CPU_PINNING: True [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True + [ INFO ] ENABLE_HYPER_THREADING: False [ INFO ] EXECUTION_DEVICES: ['CPU'] + [ INFO ] CPU_DENORMALS_OPTIMIZATION: False + [ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0 [Step 9/11] Creating infer requests and preparing input tensors [ WARNING ] No input files were given for input 'x.1'!. This input will be filled with random values! [ INFO ] Fill input 'x.1' with random values [Step 10/11] Measuring performance (Start inference synchronously, limits: 15000 ms duration) [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 30.85 ms + [ INFO ] First inference took 33.31 ms [Step 11/11] Dumping statistics report [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 973 iterations - [ INFO ] Duration: 15001.74 ms + [ INFO ] Count: 961 iterations + [ INFO ] Duration: 15003.95 ms [ INFO ] Latency: - [ INFO ] Median: 15.17 ms - [ INFO ] Average: 15.21 ms - [ INFO ] Min: 14.84 ms - [ INFO ] Max: 17.66 ms - [ INFO ] Throughput: 65.90 FPS + [ INFO ] Median: 15.33 ms + [ INFO ] Average: 15.40 ms + [ INFO ] Min: 15.03 ms + [ INFO ] Max: 18.25 ms + [ INFO ] Throughput: 64.05 FPS -Visually Compare Inference Results `⇑ <#top>`__ +Visually Compare Inference Results +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Visualize the results of the model on four slices of the validation set. Compare the results of the ``FP32`` IR model with the results of the quantized ``INT8`` model and the reference segmentation annotation. @@ -771,7 +744,6 @@ seed is displayed to enable reproducing specific runs of this cell. resizing. In the Kits19 dataset all but one of the cases has the ``(512, 512)`` input shape. - .. 
code:: ipython3 # The sigmoid function is used to transform the result of the network @@ -784,7 +756,7 @@ seed is displayed to enable reproducing specific runs of this cell. colormap = "gray" # Load FP32 and INT8 models - core = Core() + core = ov.Core() fp_model = core.read_model(fp32_ir_path) int8_model = core.read_model(int8_ir_path) compiled_model_fp = core.compile_model(fp_model, device_name="CPU") @@ -829,23 +801,24 @@ seed is displayed to enable reproducing specific runs of this cell. .. parsed-literal:: - Visualizing results with seed 1692132195 + Visualizing results with seed 1694206463 .. image:: 110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_37_1.png -Show Live Inference `⇑ <#top>`__ +Show Live Inference ############################################################################################################################### - To show live inference on the model in the notebook, we will use the asynchronous processing feature of OpenVINO. -We use the ``show_live_inference`` function from `Notebook Utils `__ to show live inference. This -function uses `Open Model Zoo `__ -Async Pipeline and Model API to perform asynchronous inference. After +We use the ``show_live_inference`` function from `Notebook +Utils `__ to show live inference. This +function uses `Open Model +Zoo `__\ ’s Async +Pipeline and Model API to perform asynchronous inference. After inference on the specified CT scan has completed, the total time and throughput (fps), including preprocessing and displaying, will be printed. @@ -855,12 +828,12 @@ printed. If you experience flickering on Firefox, consider using Chrome or Edge to run this notebook. -Load Model and List of Image Files `⇑ <#top>`__ +Load Model and List of Image Files +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - We load the segmentation model to OpenVINO Runtime with -``SegmentationModel``, based on the `Open Model Zoo `__ Model API. +``SegmentationModel``, based on the `Open Model +Zoo `__ Model API. This model implementation includes pre and post processing for the model. For ``SegmentationModel``, this includes the code to create an overlay of the segmentation mask on the original image/frame. @@ -882,10 +855,9 @@ overlay of the segmentation mask on the original image/frame. case_00117, 69 images -Show Inference `⇑ <#top>`__ +Show Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - In the next cell, we run the ``show_live_inference`` function, which loads the ``segmentation_model`` to the specified ``device`` (using caching for faster model loading on GPU devices), loads the images, @@ -908,24 +880,27 @@ performs inference, and displays the results on the frames loaded in .. parsed-literal:: - Loaded model to CPU in 0.12 seconds. - Total time for 68 frames: 3.35 seconds, fps:20.60 + Loaded model to CPU in 0.19 seconds. 
+ Total time for 68 frames: 3.46 seconds, fps:19.95 -References `⇑ <#top>`__ -############################################################################################################################### - - -**OpenVINO** - -- `NNCF Repository `__ -- `Neural Network Compression Framework for fast model inference `__ -- `OpenVINO API Tutorial <002-openvino-api-with-output.html>`__ -- `OpenVINO PyPI (pip install openvino-dev) `__ - -**Kits19 Data** +References ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -- `Kits19 Challenge Homepage `__ -- `Kits19 GitHub Repository `__ -- `The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT Semantic Segmentations, and Surgical Outcomes `__ -- `The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge `__ +**OpenVINO** - `NNCF +Repository `__ - `Neural +Network Compression Framework for fast model +inference `__ - `OpenVINO API +Tutorial <002-openvino-api-with-output.html>`__ - `OpenVINO +PyPI (pip install +openvino-dev) `__ + +**Kits19 Data** - `Kits19 Challenge +Homepage `__ - `Kits19 GitHub +Repository `__ - `The KiTS19 +Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT +Semantic Segmentations, and Surgical +Outcomes `__ - `The state of the art +in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: +Results of the KiTS19 +challenge `__ diff --git a/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_37_1.png b/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_37_1.png index 2ed816a703933b..d2cfe72b3c1ca8 100644 --- a/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_37_1.png +++ b/docs/notebooks/110-ct-segmentation-quantize-nncf-with-output_files/110-ct-segmentation-quantize-nncf-with-output_37_1.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:aee7d177b413f4c198de3e7c40f1624d59abbdabe61e8ec20798b4649ac46f43 -size 383352 +oid sha256:73fedab3a1670adba1dd0624fc1ef3e604734eb937542bd55ef0b1ea9dc17f4e +size 379036 diff --git a/docs/notebooks/111-yolov5-quantization-migration-with-output.rst b/docs/notebooks/111-yolov5-quantization-migration-with-output.rst index b297f3425dac14..e76d5404ea30ae 100644 --- a/docs/notebooks/111-yolov5-quantization-migration-with-output.rst +++ b/docs/notebooks/111-yolov5-quantization-migration-with-output.rst @@ -2,9 +2,12 @@ Migrate quantization from POT API to NNCF API ============================================= This tutorial demonstrates how to migrate quantization pipeline written -using the OpenVINO `Post-Training Optimization Tool (POT) `__ to -`NNCF Post-Training Quantization API `__. -This tutorial is based on `Ultralytics YOLOv5 `__ model and additionally +using the OpenVINO `Post-Training Optimization Tool +(POT) `__ to +`NNCF Post-Training Quantization +API `__. +This tutorial is based on `Ultralytics +YOLOv5 `__ model and additionally it compares model accuracy between the FP32 precision and quantized INT8 precision models and runs a demo of model inference based on sample code from `Ultralytics YOLOv5 `__ with @@ -20,11 +23,7 @@ The tutorial consists from the following parts: 6. Run model inference demo 7. Compare performance FP32 and INT8 models - - -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -52,17 +51,15 @@ The tutorial consists from the following parts: - `Benchmark <#benchmark>`__ - `References <#references>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### - -Download the YOLOv5 model `⇑ <#top>`__ +Download the YOLOv5 model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" "nncf>=2.5.0" + !pip install -q "openvino-dev==2023.1.0.dev20230811" "nncf>=2.5.0" !pip install -q psutil "seaborn>=0.11.0" matplotlib numpy onnx .. code:: ipython3 @@ -88,13 +85,17 @@ Download the YOLOv5 model `⇑ <#top>`__ .. parsed-literal:: Download Ultralytics Yolov5 project source: - ``git clone https://github.com/ultralytics/yolov5.git -b v7.0`` -Conversion of the YOLOv5 model to OpenVINO `⇑ <#top>`__ + +``git clone https://github.com/ultralytics/yolov5.git -b v7.0`` + + +Conversion of the YOLOv5 model to OpenVINO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -There are three variables provided for easy run through all the notebook cells. +There are three variables provided for easy run through all the notebook +cells. - ``IMAGE_SIZE`` - the image size for model input. - ``MODEL_NAME`` - the model you want to use. It can be either yolov5s, @@ -141,18 +142,18 @@ following content: YOLOv5 🚀 v7.0-0-g915bbf2 Python-3.8.10 torch-1.13.1+cpu CPU Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt to yolov5m/yolov5m.pt... - 100%|██████████████████████████████████████| 40.8M/40.8M [00:10<00:00, 4.02MB/s] + 100%|██████████████████████████████████████| 40.8M/40.8M [00:10<00:00, 4.11MB/s] Fusing layers... YOLOv5m summary: 290 layers, 21172173 parameters, 0 gradients PyTorch: starting from yolov5m/yolov5m.pt with output shape (1, 25200, 85) (40.8 MB) - ONNX: starting export with onnx 1.14.0... + ONNX: starting export with onnx 1.14.1... ONNX: export success ✅ 1.3s, saved as yolov5m/yolov5m.onnx (81.2 MB) - Export complete (13.7s) - Results saved to /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/yolov5m + Export complete (13.3s) + Results saved to /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/yolov5m Detect: python detect.py --weights yolov5m/yolov5m.onnx Validate: python val.py --weights yolov5m/yolov5m.onnx PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5m/yolov5m.onnx') @@ -160,22 +161,24 @@ following content: Convert the ONNX model to OpenVINO Intermediate Representation (IR) -model generated by `model conversion API `__. -We will use the ``openvino.tools.mo.convert_model`` function of model -conversion Python API to convert ONNX model to OpenVINO Model, then it -can be serialized using ``openvino.runtime.serialize``. As the result, -directory with the ``{MODEL_DIR}`` name will be created with the -following content: \* ``{MODEL_NAME}_fp32.xml``, -``{MODEL_NAME}_fp32.bin`` - OpenVINO Intermediate Representation (IR) -model format with FP32 precision generated by Model Optimizer. 
\* -``{MODEL_NAME}_fp16.xml``, ``{MODEL_NAME}_fp16.bin`` - OpenVINO -Intermediate Representation (IR) model format with FP32 precision -generated by Model Optimizer. +model generated by `OpenVINO model conversion +API `__. +We will use the ``ov.convert_model`` function of model conversion Python +API to convert ONNX model to OpenVINO Model, then it can be serialized +using ``ov.save_model``. As the result, directory with the +``{MODEL_DIR}`` name will be created with the following content: \* +``{MODEL_NAME}_fp32.xml``, ``{MODEL_NAME}_fp32.bin`` - OpenVINO +Intermediate Representation (IR) model generated by `OpenVINO Model +Converter `__, +saved with FP32 precision. \* ``{MODEL_NAME}_fp16.xml``, +``{MODEL_NAME}_fp16.bin`` - OpenVINO Intermediate Representation (IR) +model generated by `OpenVINO Model +Converter `__, +saved with FP16 precision. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize + import openvino as ov onnx_path = f"{MODEL_PATH}/{MODEL_NAME}.onnx" @@ -183,15 +186,15 @@ generated by Model Optimizer. fp32_path = f"{MODEL_PATH}/FP32_openvino_model/{MODEL_NAME}_fp32.xml" print(f"Export ONNX to OpenVINO FP32 IR to: {fp32_path}") - model = mo.convert_model(onnx_path) - serialize(model, fp32_path) + model = ov.convert_model(onnx_path) + ov.save_model(model, fp32_path, compress_to_fp16=False) # fp16 IR model fp16_path = f"{MODEL_PATH}/FP16_openvino_model/{MODEL_NAME}_fp16.xml" print(f"Export ONNX to OpenVINO FP16 IR to: {fp16_path}") - model = mo.convert_model(onnx_path, compress_to_fp16=True) - serialize(model, fp16_path) + model = ov.convert_model(onnx_path) + ov.save_model(model, fp16_path, compress_to_fp16=True) .. parsed-literal:: @@ -200,10 +203,9 @@ generated by Model Optimizer. Export ONNX to OpenVINO FP16 IR to: yolov5/yolov5m/FP16_openvino_model/yolov5m_fp16.xml -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 sys.path.append("./yolov5") @@ -211,10 +213,9 @@ Imports `⇑ <#top>`__ from yolov5.utils.dataloaders import create_dataloader from yolov5.utils.general import check_dataset -Prepare dataset for quantization `⇑ <#top>`__ +Prepare dataset for quantization ############################################################################################################################### - Before starting quantization, we should prepare dataset, which will be used for quantization. Ultralytics YOLOv5 provides data loader for iteration over dataset during training and validation. Let’s create it @@ -261,14 +262,13 @@ first. .. parsed-literal:: Unzipping datasets/coco128.zip... - Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 - New cache created: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache + Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017... 
126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 + New cache created: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache -Create YOLOv5 DataLoader class for POT `⇑ <#top>`__ +Create YOLOv5 DataLoader class for POT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Create a class for loading the YOLOv5 dataset and annotation which inherits from POT API class DataLoader. ``openvino.tools.pot.DataLoader`` interface allows acquiring data from a @@ -335,18 +335,18 @@ index. Any implementation should override the following methods: pot_data_loader = YOLOv5POTDataLoader(data_source) -.. parsed-literal:: - - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/offline_transformations/__init__.py:10: FutureWarning: The module is private and following namespace `offline_transformations` will be removed in the future. - warnings.warn( - - .. parsed-literal:: [ DEBUG ] Creating converter from 7 to 5 [ DEBUG ] Creating converter from 5 to 7 [ DEBUG ] Creating converter from 7 to 5 [ DEBUG ] Creating converter from 5 to 7 + [ WARNING ] /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/accuracy_checker/preprocessor/launcher_preprocessing/ie_preprocessor.py:21: FutureWarning: OpenVINO Inference Engine Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html + from openvino.inference_engine import ResizeAlgorithm, PreProcessInfo, ColorFormat, MeanVariant # pylint: disable=import-outside-toplevel,package-absolute-imports + + [ WARNING ] /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/accuracy_checker/launcher/dlsdk_launcher.py:60: FutureWarning: OpenVINO nGraph Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html + import ngraph as ng + .. parsed-literal:: @@ -355,10 +355,9 @@ index. Any implementation should override the following methods: Nevergrad package could not be imported. If you are planning to use any hyperparameter optimization algo, consider installing it using pip. This implies advanced usage of the tool. Note that nevergrad is compatible only with Python 3.7+ -Create NNCF Dataset `⇑ <#top>`__ +Create NNCF Dataset +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - For preparing quantization dataset for NNCF, we should wrap framework-specific data source into ``nncf.Dataset`` instance. Additionally, to transform data into model expected format we can define @@ -390,21 +389,27 @@ format). nncf_calibration_dataset = nncf.Dataset(data_source, transform_fn) +.. parsed-literal:: + + 2023-09-08 22:55:10.459989: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:55:10.495368: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 22:55:11.043305: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + .. parsed-literal:: INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino -Configure quantization pipeline `⇑ <#top>`__ +Configure quantization pipeline ############################################################################################################################### - Next, we should define quantization algorithm parameters. -Prepare config and pipeline for POT `⇑ <#top>`__ +Prepare config and pipeline for POT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - in POT, all quantization parameters should be defined using configuration dictionary. Config consists of 3 sections: ``algorithms`` for description quantization algorithm parameters, ``engine`` for @@ -451,10 +456,9 @@ pipeline using ``create_pipeline`` function. # Step 5: Create a pipeline of compression algorithms. pipeline = create_pipeline(algorithms_config, engine) -Prepare configuration parameters for NNCF `⇑ <#top>`__ +Prepare configuration parameters for NNCF +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Post-training quantization pipeline in NNCF represented by ``nncf.quantize`` function for Default Quantization Algorithm and ``nncf.quantize_with_accuracy_control`` for Accuracy Aware Quantization. @@ -462,24 +466,22 @@ Quantization parameters ``preset``, ``model_type``, ``subset_size``, ``fast_bias_correction``, ``ignored_scope`` are arguments of function. More details about supported parameters and formats can be found in NNCF Post-Training Quantization -`documentation `__. +`documentation `__. NNCF also expect providing model object in inference framework format, -in our case ``openvino.runtime.Model`` instance created using -``core.read_model`` or ``openvino.tools.mo.convert_model``. +in our case ``ov.Model`` instance created using ``core.read_model`` or +``ov.convert_model``. .. code:: ipython3 subset_size = 300 preset = nncf.QuantizationPreset.MIXED -Perform model optimization `⇑ <#top>`__ +Perform model optimization ############################################################################################################################### - -Run quantization using POT `⇑ <#top>`__ +Run quantization using POT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To start model quantization using POT API, we should call ``pipeline.run(pot_model)`` method. As the result, we got quantized model representation from POT, which can be saved on disk using @@ -498,39 +500,35 @@ size of final .bin file. 
save_model(compressed_model, optimized_save_dir, model_config["model_name"] + "_int8") pot_int8_path = f"{optimized_save_dir}/{MODEL_NAME}_int8.xml" -Run quantization using NNCF `⇑ <#top>`__ +Run quantization using NNCF +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To run NNCF quantization, we should call ``nncf.quantize`` function. As the result, the function returns quantized model in the same format like input model, so it means that quantized model ready to be compiled on device for inference and can be saved on disk using -``openvino.runtime.serialize``. +``openvino.save_model``. .. code:: ipython3 - from openvino.runtime import Core - - core = Core() + core = ov.Core() ov_model = core.read_model(fp32_path) quantized_model = nncf.quantize( ov_model, nncf_calibration_dataset, preset=preset, subset_size=subset_size ) nncf_int8_path = f"{MODEL_PATH}/NNCF_INT8_openvino_model/{MODEL_NAME}_int8.xml" - serialize(quantized_model, nncf_int8_path) + ov.save_model(quantized_model, nncf_int8_path, compress_to_fp16=False) .. parsed-literal:: - Statistics collection: 43%|████▎ | 128/300 [00:30<00:40, 4.20it/s] - Biases correction: 100%|██████████| 82/82 [00:10<00:00, 7.71it/s] + Statistics collection: 43%|████▎ | 128/300 [00:31<00:42, 4.03it/s] + Biases correction: 100%|██████████| 82/82 [00:10<00:00, 7.60it/s] -Compare accuracy FP32 and INT8 models `⇑ <#top>`__ +Compare accuracy FP32 and INT8 models ############################################################################################################################### - For getting accuracy results, we will use ``yolov5.val.run`` function which already supports OpenVINO backend. For making int8 model is compatible with Ultralytics provided validation pipeline, we also should @@ -592,10 +590,10 @@ same directory, where model located. .. parsed-literal:: Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models - val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 + val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 128/128 00:05 all 128 929 0.726 0.687 0.769 0.554 - Speed: 0.2ms pre-process, 35.3ms inference, 3.0ms NMS per image at shape (1, 3, 640, 640) + Speed: 0.2ms pre-process, 35.3ms inference, 3.1ms NMS per image at shape (1, 3, 640, 640) Results saved to yolov5/runs/val/exp @@ -639,10 +637,10 @@ same directory, where model located. .. parsed-literal:: Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models - val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 + val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 
126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 128/128 00:03 all 128 929 0.761 0.677 0.773 0.548 - Speed: 0.2ms pre-process, 17.1ms inference, 3.3ms NMS per image at shape (1, 3, 640, 640) + Speed: 0.2ms pre-process, 16.9ms inference, 3.2ms NMS per image at shape (1, 3, 640, 640) Results saved to yolov5/runs/val/exp2 @@ -686,10 +684,10 @@ same directory, where model located. .. parsed-literal:: Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models - val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 + val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/datasets/coco128/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 00:00 Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 128/128 00:03 all 128 929 0.742 0.684 0.766 0.546 - Speed: 0.2ms pre-process, 17.0ms inference, 3.2ms NMS per image at shape (1, 3, 640, 640) + Speed: 0.2ms pre-process, 17.1ms inference, 3.3ms NMS per image at shape (1, 3, 640, 640) Results saved to yolov5/runs/val/exp3 @@ -752,12 +750,11 @@ model. .. image:: 111-yolov5-quantization-migration-with-output_files/111-yolov5-quantization-migration-with-output_34_0.png -Inference Demo Performance Comparison `⇑ <#top>`__ +Inference Demo Performance Comparison ############################################################################################################################### - -This part shows how to use the Ultralytics model detection code -`detect.py `__ +This part shows how to use the Ultralytics model detection code +```detect.py`` `__ to run synchronous inference, using the OpenVINO Python API on two images. @@ -786,9 +783,9 @@ images. 'YOLOv5 🚀 v7.0-0-g915bbf2 Python-3.8.10 torch-1.13.1+cpu CPU', '', 'Loading yolov5m/FP32_openvino_model for OpenVINO inference...', - 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 57.0ms', - 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 2 ties, 40.6ms', - 'Speed: 1.4ms pre-process, 48.8ms inference, 1.2ms NMS per image at shape (1, 3, 640, 640)', + 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 56.6ms', + 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 2 ties, 45.9ms', + 'Speed: 1.5ms pre-process, 51.2ms inference, 1.3ms NMS per image at shape (1, 3, 640, 640)', 'Results saved to \x1b[1mruns/detect/exp\x1b[0m'] @@ -813,9 +810,9 @@ images. 
'YOLOv5 🚀 v7.0-0-g915bbf2 Python-3.8.10 torch-1.13.1+cpu CPU', '', 'Loading yolov5m/POT_INT8_openvino_model for OpenVINO inference...', - 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 36.6ms', - 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 1 tie, 33.4ms', - 'Speed: 1.5ms pre-process, 35.0ms inference, 1.4ms NMS per image at shape (1, 3, 640, 640)', + 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 35.4ms', + 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 1 tie, 33.8ms', + 'Speed: 1.6ms pre-process, 34.6ms inference, 1.4ms NMS per image at shape (1, 3, 640, 640)', 'Results saved to \x1b[1mruns/detect/exp2\x1b[0m'] @@ -840,9 +837,9 @@ images. 'YOLOv5 🚀 v7.0-0-g915bbf2 Python-3.8.10 torch-1.13.1+cpu CPU', '', 'Loading yolov5m/NNCF_INT8_openvino_model for OpenVINO inference...', - 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 35.9ms', - 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 2 ties, 31.5ms', - 'Speed: 1.6ms pre-process, 33.7ms inference, 1.4ms NMS per image at shape (1, 3, 640, 640)', + 'image 1/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 37.1ms', + 'image 2/2 /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/111-yolov5-quantization-migration/yolov5/data/images/zidane.jpg: 640x640 3 persons, 2 ties, 30.5ms', + 'Speed: 1.6ms pre-process, 33.8ms inference, 1.4ms NMS per image at shape (1, 3, 640, 640)', 'Results saved to \x1b[1mruns/detect/exp3\x1b[0m'] @@ -873,10 +870,9 @@ images. .. image:: 111-yolov5-quantization-migration-with-output_files/111-yolov5-quantization-migration-with-output_40_0.png -Benchmark `⇑ <#top>`__ +Benchmark ############################################################################################################################### - .. code:: ipython3 gpu_available = "GPU" in core.available_devices @@ -892,68 +888,7 @@ Benchmark `⇑ <#top>`__ .. parsed-literal:: Inference FP32 model (OpenVINO IR) on CPU - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. 
Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 31.30 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] images (node: images) : f32 / [...] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] / [1,25200,85] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] images (node: images) : u8 / [N,C,H,W] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] / [1,25200,85] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 360.18 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] NUM_STREAMS: 6 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 24 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'images'!. This input will be filled with random values! - [ INFO ] Fill input 'images' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 102.48 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 456 iterations - [ INFO ] Duration: 15352.58 ms - [ INFO ] Latency: - [ INFO ] Median: 202.33 ms - [ INFO ] Average: 201.47 ms - [ INFO ] Min: 138.12 ms - [ INFO ] Max: 216.53 ms - [ INFO ] Throughput: 29.70 FPS + /bin/bash: benchmark_app: command not found .. code:: ipython3 @@ -969,68 +904,7 @@ Benchmark `⇑ <#top>`__ .. parsed-literal:: Inference FP16 model (OpenVINO IR) on CPU - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 44.00 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] images (node: images) : f32 / [...] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] 
/ [1,25200,85] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] images (node: images) : u8 / [N,C,H,W] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] / [1,25200,85] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 387.13 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] NUM_STREAMS: 6 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 24 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'images'!. This input will be filled with random values! - [ INFO ] Fill input 'images' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 101.31 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 456 iterations - [ INFO ] Duration: 15246.15 ms - [ INFO ] Latency: - [ INFO ] Median: 200.30 ms - [ INFO ] Average: 199.86 ms - [ INFO ] Min: 96.50 ms - [ INFO ] Max: 219.99 ms - [ INFO ] Throughput: 29.91 FPS + /bin/bash: benchmark_app: command not found .. code:: ipython3 @@ -1046,68 +920,7 @@ Benchmark `⇑ <#top>`__ .. parsed-literal:: Inference POT INT8 model (OpenVINO IR) on CPU - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 45.57 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] images (node: images) : f32 / [...] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] / [1,25200,85] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] images (node: images) : u8 / [N,C,H,W] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] 
/ [1,25200,85] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 702.15 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] NUM_STREAMS: 6 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 24 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'images'!. This input will be filled with random values! - [ INFO ] Fill input 'images' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 48.39 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 1410 iterations - [ INFO ] Duration: 15055.06 ms - [ INFO ] Latency: - [ INFO ] Median: 64.05 ms - [ INFO ] Average: 63.87 ms - [ INFO ] Min: 46.43 ms - [ INFO ] Max: 83.05 ms - [ INFO ] Throughput: 93.66 FPS + /bin/bash: benchmark_app: command not found .. code:: ipython3 @@ -1123,78 +936,16 @@ Benchmark `⇑ <#top>`__ .. parsed-literal:: Inference NNCF INT8 model (OpenVINO IR) on CPU - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 52.31 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] images (node: images) : f32 / [...] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] / [1,25200,85] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] images (node: images) : u8 / [N,C,H,W] / [1,3,640,640] - [ INFO ] Model outputs: - [ INFO ] output0 (node: output0) : f32 / [...] 
/ [1,25200,85] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 710.35 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] NUM_STREAMS: 6 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 24 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'images'!. This input will be filled with random values! - [ INFO ] Fill input 'images' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 51.01 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 1416 iterations - [ INFO ] Duration: 15113.02 ms - [ INFO ] Latency: - [ INFO ] Median: 63.94 ms - [ INFO ] Average: 63.87 ms - [ INFO ] Min: 45.06 ms - [ INFO ] Max: 87.95 ms - [ INFO ] Throughput: 93.69 FPS - - -References `⇑ <#top>`__ -############################################################################################################################### + /bin/bash: benchmark_app: command not found +References +############################################################################################################################### + - `Ultralytics YOLOv5 `__ - `OpenVINO Post-training Optimization - Tool `__ + Tool `__ - `NNCF Post-training quantization `__ - `Model Conversion - API `__ + API `__ diff --git a/docs/notebooks/112-pytorch-post-training-quantization-nncf-with-output.rst b/docs/notebooks/112-pytorch-post-training-quantization-nncf-with-output.rst index 4b054d59389fe6..6beabff6bd0cc1 100644 --- a/docs/notebooks/112-pytorch-post-training-quantization-nncf-with-output.rst +++ b/docs/notebooks/112-pytorch-post-training-quantization-nncf-with-output.rst @@ -24,11 +24,7 @@ quantization, not demanding the fine-tuning of the model. the default binary search path of the OS you are running the notebook. - - -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparations <#preparations>`__ @@ -47,9 +43,13 @@ quantization, not demanding the fine-tuning of the model. - `III. Convert the models to OpenVINO Intermediate Representation (OpenVINO IR) <#iii-convert-the-models-to-openvino-intermediate-representation-openvino-ir>`__ - `IV. Compare performance of INT8 model and FP32 model in OpenVINO <#iv-compare-performance-of-int8-model-and-fp32-model-in-openvino>`__ -Preparations `⇑ <#top>`__ +Preparations ############################################################################################################################### +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" .. 
code:: ipython3 @@ -88,10 +88,9 @@ Preparations `⇑ <#top>`__ os.environ["LIB"] = os.pathsep.join(b.library_dirs) print(f"Added {vs_dir} to PATH") -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import os @@ -101,8 +100,7 @@ Imports `⇑ <#top>`__ from typing import List, Tuple import nncf - from openvino.runtime import Core, serialize - from openvino.tools import mo + import openvino as ov import torch from torchvision.datasets import ImageFolder @@ -115,10 +113,10 @@ Imports `⇑ <#top>`__ .. parsed-literal:: - 2023-08-15 22:47:54.862445: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:47:54.896717: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 22:58:07.638790: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 22:58:07.672794: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:47:55.440534: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 22:58:08.221837: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT .. parsed-literal:: @@ -126,10 +124,9 @@ Imports `⇑ <#top>`__ INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 torch_device = torch.device("cuda" if torch.cuda.is_available() else "cpu") @@ -170,14 +167,13 @@ Settings `⇑ <#top>`__ .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/112-pytorch-post-training-quantization-nncf/model/resnet50_fp32.pth') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/112-pytorch-post-training-quantization-nncf/model/resnet50_fp32.pth') -Download and Prepare Tiny ImageNet dataset `⇑ <#top>`__ +Download and Prepare Tiny ImageNet dataset +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - 100k images of shape 3x64x64, - 200 different classes: snake, spider, cat, truck, grasshopper, gull, etc. 
@@ -235,10 +231,11 @@ Download and Prepare Tiny ImageNet dataset `⇑ <#top>`__ Successfully downloaded and extracted dataset to: output -Helpers classes and functions `⇑ <#top>`__ +Helpers classes and functions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -The code below will help to count accuracy and visualize validation process. +The code below will help to count accuracy and visualize validation +process. .. code:: ipython3 @@ -300,10 +297,9 @@ The code below will help to count accuracy and visualize validation process. return res -Validation function `⇑ <#top>`__ +Validation function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 from typing import Union @@ -354,11 +350,11 @@ Validation function `⇑ <#top>`__ ) return top1.avg -Create and load original uncompressed model `⇑ <#top>`__ +Create and load original uncompressed model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -ResNet-50 from the `torchivision repository `__ is pre-trained on +ResNet-50 from the ```torchivision`` +repository `__ is pre-trained on ImageNet with more prediction classes than Tiny ImageNet, so the model is adjusted by swapping the last FC layer to one with fewer output values. @@ -382,10 +378,9 @@ values. model = create_model(MODEL_DIR / fp32_checkpoint_filename) -Create train and validation DataLoaders `⇑ <#top>`__ +Create train and validation DataLoaders +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 def create_dataloaders(batch_size: int = 128): @@ -433,17 +428,16 @@ Create train and validation DataLoaders `⇑ <#top>`__ train_loader, val_loader = create_dataloaders() -Model quantization and benchmarking `⇑ <#top>`__ +Model quantization and benchmarking ############################################################################################################################### -With the validation pipeline, model files, and data-loading procedures for model calibration -now prepared, it’s time to proceed with the actual post-training -quantization using NNCF. +With the validation pipeline, model files, and data-loading procedures +for model calibration now prepared, it’s time to proceed with the actual +post-training quantization using NNCF. -I. Evaluate the loaded model `⇑ <#top>`__ +I. Evaluate the loaded model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 acc1 = validate(val_loader, model) @@ -452,29 +446,29 @@ I. Evaluate the loaded model `⇑ <#top>`__ .. 
parsed-literal:: - Test: [ 0/79] Time 0.240 (0.240) Acc@1 81.25 (81.25) Acc@5 92.19 (92.19) - Test: [10/79] Time 0.234 (0.227) Acc@1 56.25 (66.97) Acc@5 86.72 (87.50) - Test: [20/79] Time 0.220 (0.225) Acc@1 67.97 (64.29) Acc@5 85.16 (87.35) - Test: [30/79] Time 0.219 (0.223) Acc@1 53.12 (62.37) Acc@5 77.34 (85.33) - Test: [40/79] Time 0.225 (0.222) Acc@1 67.19 (60.86) Acc@5 90.62 (84.51) - Test: [50/79] Time 0.220 (0.222) Acc@1 60.16 (60.80) Acc@5 88.28 (84.42) - Test: [60/79] Time 0.219 (0.222) Acc@1 66.41 (60.46) Acc@5 86.72 (83.79) - Test: [70/79] Time 0.219 (0.222) Acc@1 52.34 (60.21) Acc@5 80.47 (83.33) - * Acc@1 60.740 Acc@5 83.960 Total time: 17.387 + Test: [ 0/79] Time 0.289 (0.289) Acc@1 81.25 (81.25) Acc@5 92.19 (92.19) + Test: [10/79] Time 0.231 (0.240) Acc@1 56.25 (66.97) Acc@5 86.72 (87.50) + Test: [20/79] Time 0.234 (0.239) Acc@1 67.97 (64.29) Acc@5 85.16 (87.35) + Test: [30/79] Time 0.233 (0.239) Acc@1 53.12 (62.37) Acc@5 77.34 (85.33) + Test: [40/79] Time 0.242 (0.239) Acc@1 67.19 (60.86) Acc@5 90.62 (84.51) + Test: [50/79] Time 0.233 (0.242) Acc@1 60.16 (60.80) Acc@5 88.28 (84.42) + Test: [60/79] Time 0.241 (0.242) Acc@1 66.41 (60.46) Acc@5 86.72 (83.79) + Test: [70/79] Time 0.234 (0.241) Acc@1 52.34 (60.21) Acc@5 80.47 (83.33) + * Acc@1 60.740 Acc@5 83.960 Total time: 18.830 Test accuracy of FP32 model: 60.740 -II. Create and initialize quantization `⇑ <#top>`__ +II. Create and initialize quantization +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -NNCF enables post-training quantization by adding the quantization layers into the -model graph and then using a subset of the training dataset to -initialize the parameters of these additional quantization layers. The -framework is designed so that modifications to your original training -code are minor. Quantization is the simplest scenario and requires a few -modifications. For more information about NNCF Post Training -Quantization (PTQ) API, refer to the `Basic Quantization Flow -Guide `__. +NNCF enables post-training quantization by adding the quantization +layers into the model graph and then using a subset of the training +dataset to initialize the parameters of these additional quantization +layers. The framework is designed so that modifications to your original +training code are minor. Quantization is the simplest scenario and +requires a few modifications. For more information about NNCF Post +Training Quantization (PTQ) API, refer to the `Basic Quantization Flow +Guide `__. 1. Create a transformation function that accepts a sample from the dataset and returns data suitable for model inference. This enables @@ -529,16 +523,16 @@ Guide `__ +III. Convert the models to OpenVINO Intermediate Representation (OpenVINO IR) +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ To convert the Pytorch models to OpenVINO IR, use model conversion @@ -555,7 +549,7 @@ Python API . The models will be saved to the ‘OUTPUT’ directory for later benchmarking. For more information about model conversion, refer to this -`page `__. +`page `__. Before converting models, export them to ONNX. Executing the following command may take a while. @@ -565,31 +559,31 @@ command may take a while. 
dummy_input = torch.randn(128, 3, *IMAGE_SIZE) torch.onnx.export(model, dummy_input, fp32_onnx_path) - model_ir = mo.convert_model(input_model=fp32_onnx_path, input_shape=[-1, 3, *IMAGE_SIZE]) + model_ir = ov.convert_model(fp32_onnx_path, input=[-1, 3, *IMAGE_SIZE]) - serialize(model_ir, str(fp32_ir_path)) + ov.save_model(model_ir, str(fp32_ir_path)) .. code:: ipython3 torch.onnx.export(quantized_model, dummy_input, int8_onnx_path) - quantized_model_ir = mo.convert_model(input_model=int8_onnx_path, input_shape=[-1, 3, *IMAGE_SIZE]) + quantized_model_ir = ov.convert_model(int8_onnx_path, input=[-1, 3, *IMAGE_SIZE]) - serialize(quantized_model_ir, str(int8_ir_path)) + ov.save_model(quantized_model_ir, str(int8_ir_path)) .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:338: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:338: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! return self._level_low.item() - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:346: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:346: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! return self._level_high.item() - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/quantize_functions.py:140: FutureWarning: 'torch.onnx._patch_torch._graph_op' is deprecated in version 1.13 and will be removed in version 1.14. Please note 'g.op()' is to be removed from torch.Graph. Please open a GitHub issue if you need this functionality.. + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/nncf/torch/quantization/quantize_functions.py:140: FutureWarning: 'torch.onnx._patch_torch._graph_op' is deprecated in version 1.13 and will be removed in version 1.14. Please note 'g.op()' is to be removed from torch.Graph. Please open a GitHub issue if you need this functionality.. 
output = g.op( - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_patch_torch.py:81: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_patch_torch.py:81: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_node_shape_type_inference( - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of org.openvinotoolkit::FakeQuantize type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( @@ -599,7 +593,7 @@ Select inference device for OpenVINO import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -622,7 +616,7 @@ Evaluate the FP32 and INT8 models. .. code:: ipython3 - core = Core() + core = ov.Core() fp32_compiled_model = core.compile_model(model_ir, device.value) acc1 = validate(val_loader, fp32_compiled_model) print(f"Accuracy of FP32 IR model: {acc1:.3f}") @@ -630,15 +624,15 @@ Evaluate the FP32 and INT8 models. .. 
parsed-literal:: - Test: [ 0/79] Time 0.200 (0.200) Acc@1 81.25 (81.25) Acc@5 92.19 (92.19) - Test: [10/79] Time 0.138 (0.144) Acc@1 56.25 (66.97) Acc@5 86.72 (87.50) - Test: [20/79] Time 0.137 (0.141) Acc@1 67.97 (64.29) Acc@5 85.16 (87.35) - Test: [30/79] Time 0.136 (0.140) Acc@1 53.12 (62.37) Acc@5 77.34 (85.33) - Test: [40/79] Time 0.139 (0.140) Acc@1 67.19 (60.86) Acc@5 90.62 (84.51) - Test: [50/79] Time 0.135 (0.139) Acc@1 60.16 (60.80) Acc@5 88.28 (84.42) - Test: [60/79] Time 0.139 (0.139) Acc@1 66.41 (60.46) Acc@5 86.72 (83.79) - Test: [70/79] Time 0.138 (0.139) Acc@1 52.34 (60.21) Acc@5 80.47 (83.33) - * Acc@1 60.740 Acc@5 83.960 Total time: 10.865 + Test: [ 0/79] Time 0.199 (0.199) Acc@1 81.25 (81.25) Acc@5 92.19 (92.19) + Test: [10/79] Time 0.142 (0.146) Acc@1 56.25 (66.97) Acc@5 86.72 (87.50) + Test: [20/79] Time 0.139 (0.143) Acc@1 67.97 (64.29) Acc@5 85.16 (87.35) + Test: [30/79] Time 0.141 (0.142) Acc@1 53.12 (62.37) Acc@5 77.34 (85.33) + Test: [40/79] Time 0.140 (0.142) Acc@1 67.19 (60.86) Acc@5 90.62 (84.51) + Test: [50/79] Time 0.142 (0.142) Acc@1 60.16 (60.80) Acc@5 88.28 (84.42) + Test: [60/79] Time 0.145 (0.142) Acc@1 66.41 (60.46) Acc@5 86.72 (83.79) + Test: [70/79] Time 0.140 (0.142) Acc@1 52.34 (60.21) Acc@5 80.47 (83.33) + * Acc@1 60.740 Acc@5 83.960 Total time: 11.098 Accuracy of FP32 IR model: 60.740 @@ -651,24 +645,24 @@ Evaluate the FP32 and INT8 models. .. parsed-literal:: - Test: [ 0/79] Time 0.189 (0.189) Acc@1 81.25 (81.25) Acc@5 91.41 (91.41) - Test: [10/79] Time 0.079 (0.091) Acc@1 59.38 (66.90) Acc@5 85.94 (87.43) - Test: [20/79] Time 0.078 (0.087) Acc@1 67.19 (64.25) Acc@5 85.16 (87.28) - Test: [30/79] Time 0.080 (0.085) Acc@1 51.56 (62.40) Acc@5 75.78 (85.21) - Test: [40/79] Time 0.077 (0.083) Acc@1 67.97 (60.94) Acc@5 89.84 (84.51) - Test: [50/79] Time 0.078 (0.082) Acc@1 62.50 (61.06) Acc@5 87.50 (84.45) - Test: [60/79] Time 0.081 (0.082) Acc@1 66.41 (60.71) Acc@5 85.94 (83.84) - Test: [70/79] Time 0.078 (0.082) Acc@1 52.34 (60.40) Acc@5 79.69 (83.42) - * Acc@1 60.930 Acc@5 84.020 Total time: 6.371 - Accuracy of INT8 IR model: 60.930 + Test: [ 0/79] Time 0.191 (0.191) Acc@1 82.03 (82.03) Acc@5 91.41 (91.41) + Test: [10/79] Time 0.081 (0.092) Acc@1 60.16 (67.76) Acc@5 86.72 (87.29) + Test: [20/79] Time 0.079 (0.086) Acc@1 67.97 (64.96) Acc@5 85.16 (87.35) + Test: [30/79] Time 0.079 (0.084) Acc@1 53.12 (63.00) Acc@5 76.56 (85.26) + Test: [40/79] Time 0.079 (0.083) Acc@1 67.97 (61.34) Acc@5 89.84 (84.43) + Test: [50/79] Time 0.080 (0.082) Acc@1 60.94 (61.21) Acc@5 88.28 (84.38) + Test: [60/79] Time 0.080 (0.082) Acc@1 65.62 (60.75) Acc@5 85.94 (83.68) + Test: [70/79] Time 0.080 (0.082) Acc@1 53.12 (60.44) Acc@5 79.69 (83.25) + * Acc@1 61.050 Acc@5 83.880 Total time: 6.376 + Accuracy of INT8 IR model: 61.050 -IV. Compare performance of INT8 model and FP32 model in OpenVINO `⇑ <#top>`__ +IV. Compare performance of INT8 model and FP32 model in OpenVINO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Finally, measure the inference performance of the ``FP32`` and ``INT8`` models, using `Benchmark -Tool `__ +Tool `__ - an inference performance measurement tool in OpenVINO. By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and @@ -684,7 +678,6 @@ throughput (frames per second) values. to benchmark on GPU. Run ``benchmark_app --help`` to see an overview of all command-line options. 
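
A minimal sketch of such an invocation from a notebook cell is shown below. It is only an illustration, not one of this notebook's original cells: it assumes the ``fp32_ir_path``, ``int8_ir_path`` and ``IMAGE_SIZE`` variables defined earlier, the device selected in the dropdown above, and an example 15-second run time:

.. code:: ipython3

    # Sketch: asynchronous (throughput-oriented) runs of the FP32 and INT8 IRs.
    # The IRs were saved with a dynamic batch dimension, so a static -shape
    # built from IMAGE_SIZE is passed explicitly.
    ! benchmark_app -m {fp32_ir_path} -d {device.value} -api async -t 15 -shape "[1,3,{IMAGE_SIZE[0]},{IMAGE_SIZE[1]}]"
    ! benchmark_app -m {int8_ir_path} -d {device.value} -api async -t 15 -shape "[1,3,{IMAGE_SIZE[0]},{IMAGE_SIZE[1]}]"
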
- .. code:: ipython3 device @@ -726,20 +719,20 @@ throughput (frames per second) values. .. parsed-literal:: Benchmark FP32 model (OpenVINO IR) - [ INFO ] Throughput: 37.93 FPS + Benchmark INT8 model (OpenVINO IR) - [ INFO ] Throughput: 155.44 FPS + Benchmark FP32 model (OpenVINO IR) synchronously - [ INFO ] Throughput: 38.81 FPS + Benchmark INT8 model (OpenVINO IR) synchronously - [ INFO ] Throughput: 139.97 FPS + Show device Information for reference: .. code:: ipython3 - core = Core() + core = ov.Core() devices = core.available_devices for device_name in devices: diff --git a/docs/notebooks/113-image-classification-quantization-with-output.rst b/docs/notebooks/113-image-classification-quantization-with-output.rst index 95f2f7695c1078..f000f7569389e8 100644 --- a/docs/notebooks/113-image-classification-quantization-with-output.rst +++ b/docs/notebooks/113-image-classification-quantization-with-output.rst @@ -1,8 +1,6 @@ Quantization of Image Classification Models =========================================== - - This tutorial demonstrates how to apply ``INT8`` quantization to Image Classification model using `NNCF `__. It uses the @@ -14,16 +12,14 @@ to apply quantization on PyTorch model, please check this This tutorial consists of the following steps: -- Prepare the model for quantization. -- Define a data loading functionality. -- Perform quantization. -- Compare accuracy of the original and quantized models. -- Compare performance of the original and quantized models. -- Compare results on one picture. - -.. _top: +- Prepare the model for quantization. +- Define a data loading functionality. +- Perform quantization. +- Compare accuracy of the original and quantized models. +- Compare performance of the original and quantized models. +- Compare results on one picture. -**Table of contents**: +**Table of contents:** - `Prepare the Model <#prepare-the-model>`__ - `Prepare Dataset <#prepare-dataset>`__ @@ -40,6 +36,11 @@ This tutorial consists of the following steps: - `Compare Performance of the Original and Quantized Models <#compare-performance-of-the-original-and-quantized-models>`__ - `Compare results on four pictures <#compare-results-on-four-pictures>`__ +.. code:: ipython3 + + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" + .. code:: ipython3 from pathlib import Path @@ -52,10 +53,9 @@ This tutorial consists of the following steps: DATA_DIR.mkdir(exist_ok=True) MODEL_DIR.mkdir(exist_ok=True) -Prepare the Model `⇑ <#top>`__ +Prepare the Model ############################################################################################################################### - Model preparation stage has the following steps: - Download a PyTorch model @@ -78,10 +78,10 @@ Model preparation stage has the following steps: Cloning into 'pytorch-cifar-models'... remote: Enumerating objects: 282, done. remote: Counting objects: 100% (281/281), done. - remote: Compressing objects: 100% (95/95), done. - remote: Total 282 (delta 136), reused 269 (delta 129), pack-reused 1 - Receiving objects: 100% (282/282), 9.22 MiB | 4.67 MiB/s, done. - Resolving deltas: 100% (136/136), done. + remote: Compressing objects: 100% (96/96), done. + remote: Total 282 (delta 135), reused 269 (delta 128), pack-reused 1 + Receiving objects: 100% (282/282), 9.22 MiB | 3.92 MiB/s, done. + Resolving deltas: 100% (135/135), done. .. 
code:: ipython3 @@ -92,30 +92,47 @@ Model preparation stage has the following steps: OpenVINO supports PyTorch models via conversion to OpenVINO Intermediate Representation format using model conversion Python API. -``mo.convert_model`` accept PyTorch model instance and convert it into +``ov.convert_model`` accept PyTorch model instance and convert it into ``openvino.runtime.Model`` representation of model in OpenVINO. Optionally, you may specify ``example_input`` which serves as a helper for model tracing and ``input_shape`` for converting the model with static shape. The converted model is ready to be loaded on a device for inference and can be saved on a disk for next usage via the -``serialize`` function. More details about model conversion Python API +``save_model`` function. More details about model conversion Python API can be found on this -`page `__. +`page `__. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize + import openvino as ov model.eval() - ov_model = mo.convert_model(model, input_shape=[1,3,32,32]) + ov_model = ov.convert_model(model, input=[1,3,32,32]) - serialize(ov_model, MODEL_DIR / "mobilenet_v2.xml") + ov.save_model(ov_model, MODEL_DIR / "mobilenet_v2.xml") -Prepare Dataset `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 2023-09-08 23:00:34.215999: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:00:34.251815: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:00:34.795978: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + + +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + + +Prepare Dataset +############################################################################################################################### We will use `CIFAR10 `__ dataset from @@ -156,10 +173,9 @@ Preprocessing for model obtained from training Extracting ../data/datasets/cifar10/cifar-10-python.tar.gz to ../data/datasets/cifar10 -Perform Quantization `⇑ <#top>`__ +Perform Quantization ############################################################################################################################### - `NNCF `__ provides a suite of advanced algorithms for Neural Networks inference optimization in OpenVINO with minimal accuracy drop. We will use 8-bit quantization in @@ -168,13 +184,12 @@ MobileNetV2. The optimization process contains the following steps: 1. Create a Dataset for quantization. 2. Run ``nncf.quantize`` for getting an optimized model. -3. Serialize an OpenVINO IR model, using the - ``openvino.runtime.serialize`` function. +3. Serialize an OpenVINO IR model, using the ``openvino.save_model`` + function. 
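
Condensed into a single snippet, and assuming ``ov_model``, ``val_loader`` and ``transform_fn`` as defined in this notebook, the whole flow looks roughly like this (the following sections walk through each step in detail):

.. code:: ipython3

    import nncf
    import openvino as ov

    # 1. Wrap the validation data loader into an nncf.Dataset;
    #    transform_fn extracts the model input from each dataset sample.
    quantization_dataset = nncf.Dataset(val_loader, transform_fn)

    # 2. Run post-training quantization on the converted OpenVINO model.
    quant_ov_model = nncf.quantize(ov_model, quantization_dataset)

    # 3. Save the quantized model as OpenVINO IR.
    ov.save_model(quant_ov_model, MODEL_DIR / "quantized_mobilenet_v2.xml")
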
-Create Dataset for Validation `⇑ <#top>`__ +Create Dataset for Validation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - NNCF is compatible with ``torch.utils.data.DataLoader`` interface. For performing quantization it should be passed into ``nncf.Dataset`` object with transformation function, which prepares input data to fit into @@ -191,22 +206,15 @@ model during quantization, in our case, to pick input tensor from pair quantization_dataset = nncf.Dataset(val_loader, transform_fn) - -.. parsed-literal:: - - INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino - - -Run nncf.quantize for Getting an Optimized Model `⇑ <#top>`__ +Run nncf.quantize for Getting an Optimized Model ############################################################################################################################### - ``nncf.quantize`` function accepts model and prepared quantization dataset for performing basic quantization. Optionally, additional parameters like ``subset_size``, ``preset``, ``ignored_scope`` can be provided to improve quantization result if applicable. More details about supported parameters can be found on this -`page `__ +`page `__ .. code:: ipython3 @@ -215,26 +223,24 @@ about supported parameters can be found on this .. parsed-literal:: - Statistics collection: 100%|██████████| 300/300 [00:08<00:00, 35.62it/s] - Biases correction: 100%|██████████| 36/36 [00:01<00:00, 19.40it/s] + Statistics collection: 100%|██████████| 300/300 [00:08<00:00, 35.19it/s] + Biases correction: 100%|██████████| 36/36 [00:01<00:00, 21.91it/s] -Serialize an OpenVINO IR model `⇑ <#top>`__ +Serialize an OpenVINO IR model ############################################################################################################################### - -Similar to ``mo.convert_model``, quantized model is -``openvino.runtime.Model`` object which ready to be loaded into device -and can be serialized on disk using ``openvino.runtime.serialize``. +Similar to ``ov.convert_model``, quantized model is ``ov.Model`` object +which ready to be loaded into device and can be serialized on disk using +``ov.save_model``. .. code:: ipython3 - serialize(quant_ov_model, MODEL_DIR / "quantized_mobilenet_v2.xml") + ov.save_model(quant_ov_model, MODEL_DIR / "quantized_mobilenet_v2.xml") -Compare Accuracy of the Original and Quantized Models `⇑ <#top>`__ +Compare Accuracy of the Original and Quantized Models ############################################################################################################################### - .. code:: ipython3 from tqdm.notebook import tqdm @@ -250,19 +256,16 @@ Compare Accuracy of the Original and Quantized Models `⇑ <#top>`__ total += 1 return correct / total -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - from openvino.runtime import Core - - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -283,9 +286,7 @@ Select device from dropdown list for running inference using OpenVINO: .. 
code:: ipython3 - from openvino.runtime import Core - - core = Core() + core = ov.Core() compiled_model = core.compile_model(ov_model, device.value) optimized_compiled_model = core.compile_model(quant_ov_model, device.value) @@ -314,20 +315,19 @@ Select device from dropdown list for running inference using OpenVINO: .. parsed-literal:: Accuracy of the original model: 93.61% - Accuracy of the optimized model: 93.51% + Accuracy of the optimized model: 93.54% -Compare Performance of the Original and Quantized Models `⇑ <#top>`__ +Compare Performance of the Original and Quantized Models ############################################################################################################################### - Finally, measure the inference performance of the ``FP32`` and ``INT8`` models, using `Benchmark -Tool `__ +Tool `__ - an inference performance measurement tool in OpenVINO. .. note:: - + For more accurate performance, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark @@ -335,7 +335,6 @@ Tool `__ -############################################################################################################################### + /bin/bash: benchmark_app: command not found +Compare results on four pictures +############################################################################################################################### + .. code:: ipython3 # Define all possible labels from the CIFAR10 dataset @@ -586,5 +448,5 @@ Compare results on four pictures `⇑ <#top>`__ -.. image:: 113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_29_2.png +.. image:: 113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_30_2.png diff --git a/docs/notebooks/113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_29_2.png b/docs/notebooks/113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_30_2.png similarity index 100% rename from docs/notebooks/113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_29_2.png rename to docs/notebooks/113-image-classification-quantization-with-output_files/113-image-classification-quantization-with-output_30_2.png diff --git a/docs/notebooks/115-async-api-with-output.rst b/docs/notebooks/115-async-api-with-output.rst index f8b9eb906a5a50..0e36317dc3c612 100644 --- a/docs/notebooks/115-async-api-with-output.rst +++ b/docs/notebooks/115-async-api-with-output.rst @@ -1,8 +1,6 @@ Asynchronous Inference with OpenVINO™ ===================================== - - This notebook demonstrates how to use the `Async API `__ for asynchronous execution with OpenVINO. @@ -13,10 +11,7 @@ device is busy with inference, the application can perform other tasks in parallel (for example, populating inputs or scheduling other requests) rather than wait for the current inference to complete first. - -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Prepare model and data processing <#prepare-model-and-data-processing>`__ @@ -39,13 +34,12 @@ requests) rather than wait for the current inference to complete first. 
- `Setting Callback <#setting-callback>`__ - `Test the performance with AsyncInferQueue <#test-the-performance-with-asyncinferqueue>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q opencv-python matplotlib .. code:: ipython3 @@ -53,7 +47,6 @@ Imports `⇑ <#top>`__ import cv2 import time import numpy as np - from openvino.runtime import Core, AsyncInferQueue import openvino as ov from IPython import display import matplotlib.pyplot as plt @@ -67,15 +60,14 @@ Imports `⇑ <#top>`__ import notebook_utils as utils -Prepare model and data processing `⇑ <#top>`__ +Prepare model and data processing ############################################################################################################################### - -Download test model `⇑ <#top>`__ +Download test model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -We use a pre-trained model from OpenVINO’s -`Open Model Zoo `__ to start the +We use a pre-trained model from OpenVINO’s `Open Model +Zoo `__ to start the test. In this case, the model will be executed to detect the person in each frame of the video. @@ -110,31 +102,29 @@ each frame of the video. -Load the model `⇑ <#top>`__ +Load the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # initialize OpenVINO runtime - ie = Core() + core = ov.Core() # read the network and corresponding weights from file - model = ie.read_model(model=model_path) + model = core.read_model(model=model_path) # compile the model for the CPU (you can choose manually CPU, GPU etc.) # or let the engine choose the best available device (AUTO) - compiled_model = ie.compile_model(model=model, device_name="CPU") + compiled_model = core.compile_model(model=model, device_name="CPU") # get input node input_layer_ir = model.input(0) N, C, H, W = input_layer_ir.shape shape = (H, W) -Create functions for data processing `⇑ <#top>`__ +Create functions for data processing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 def preprocess(image): @@ -174,25 +164,25 @@ Create functions for data processing `⇑ <#top>`__ cv2.putText(image, str(round(fps, 2)) + " fps", (5, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 3) return image -Get the test video `⇑ <#top>`__ +Get the test video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 video_path = 'https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/video/CEO%20Pat%20Gelsinger%20on%20Leading%20Intel.mp4' -How to improve the throughput of video processing `⇑ <#top>`__ +How to improve the throughput of video processing ############################################################################################################################### Below, we compare the performance of the synchronous and async-based approaches: -Sync Mode (default) `⇑ <#top>`__ +Sync Mode (default) +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Let us see how video processing works with the default approach. Using the synchronous approach, the frame is -captured with OpenCV and then immediately processed: +Let us see how video processing works with the default approach. Using +the synchronous approach, the frame is captured with OpenCV and then +immediately processed: .. figure:: https://user-images.githubusercontent.com/91237924/168452573-d354ea5b-7966-44e5-813d-f9053be4338a.png :alt: drawing @@ -277,10 +267,9 @@ captured with OpenCV and then immediately processed: player.stop() return sync_fps -Test performance in Sync Mode `⇑ <#top>`__ +Test performance in Sync Mode +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 sync_fps = sync_api(source=video_path, flip=False, fps=30, use_popup=False, skip_first_frames=800) @@ -294,13 +283,12 @@ Test performance in Sync Mode `⇑ <#top>`__ .. parsed-literal:: Source ended - average throuput in sync mode: 37.71 fps + average throuput in sync mode: 38.75 fps -Async Mode `⇑ <#top>`__ +Async Mode +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Let us see how the OpenVINO Async API can improve the overall frame rate of an application. The key advantage of the Async approach is as follows: while a device is busy with the inference, the application can @@ -413,10 +401,9 @@ pipeline (decoding vs inference) and not by the sum of the stages. player.stop() return async_fps -Test the performance in Async Mode `⇑ <#top>`__ +Test the performance in Async Mode +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 async_fps = async_api(source=video_path, flip=False, fps=30, use_popup=False, skip_first_frames=800) @@ -430,13 +417,12 @@ Test the performance in Async Mode `⇑ <#top>`__ .. parsed-literal:: Source ended - average throuput in async mode: 73.36 fps + average throuput in async mode: 71.45 fps -Compare the performance `⇑ <#top>`__ +Compare the performance +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 width = 0.4 @@ -462,21 +448,19 @@ Compare the performance `⇑ <#top>`__ .. image:: 115-async-api-with-output_files/115-async-api-with-output_21_0.png -``AsyncInferQueue`` `⇑ <#top>`__ +``AsyncInferQueue`` ############################################################################################################################### - Asynchronous mode pipelines can be supported with the -`AsyncInferQueue `__ +```AsyncInferQueue`` `__ wrapper class. 
This class automatically spawns the pool of ``InferRequest`` objects (also called “jobs”) and provides synchronization mechanisms to control the flow of the pipeline. It is a simpler way to manage the infer request queue in Asynchronous mode. -Setting Callback `⇑ <#top>`__ +Setting Callback +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - When ``callback`` is set, any job that ends inference calls upon the Python function. The ``callback`` function must have two arguments: one is the request that calls the ``callback``, which provides the @@ -524,7 +508,7 @@ the possibility of passing runtime values. None """ # Create infer requests queue - infer_queue = AsyncInferQueue(compiled_model, 2) + infer_queue = ov.AsyncInferQueue(compiled_model, 2) infer_queue.set_callback(callback) player = None try: @@ -551,10 +535,9 @@ the possibility of passing runtime values. infer_queue.wait_all() player.stop() -Test the performance with ``AsyncInferQueue`` `⇑ <#top>`__ +Test the performance with ``AsyncInferQueue`` +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 frame_number = 0 @@ -569,5 +552,5 @@ Test the performance with ``AsyncInferQueue`` `⇑ <#top>`__ .. parsed-literal:: - average throughput in async mode with async infer queue: 103.73 fps + average throughput in async mode with async infer queue: 102.86 fps diff --git a/docs/notebooks/115-async-api-with-output_files/115-async-api-with-output_21_0.png b/docs/notebooks/115-async-api-with-output_files/115-async-api-with-output_21_0.png index 60870d8f5256c0..106617e80a951d 100644 --- a/docs/notebooks/115-async-api-with-output_files/115-async-api-with-output_21_0.png +++ b/docs/notebooks/115-async-api-with-output_files/115-async-api-with-output_21_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b94a4331de267d1151abf422719316d87b59282125c1cb8471a99e34d561bdd1 -size 30455 +oid sha256:a0f473d81de64bea167c31dbd9a671d68826ac1ae89a58a7b244c5bcc79198bb +size 30454 diff --git a/docs/notebooks/116-sparsity-optimization-with-output.rst b/docs/notebooks/116-sparsity-optimization-with-output.rst index c5ccc6437e9a41..97930314759c41 100644 --- a/docs/notebooks/116-sparsity-optimization-with-output.rst +++ b/docs/notebooks/116-sparsity-optimization-with-output.rst @@ -1,18 +1,18 @@ Accelerate Inference of Sparse Transformer Models with OpenVINO™ and 4th Gen Intel® Xeon® Scalable Processors ============================================================================================================= - - This tutorial demonstrates how to improve performance of sparse Transformer models with `OpenVINO `__ on 4th Gen Intel® Xeon® Scalable processors. -The tutorial downloads `a BERT-base model `__ -which has been quantized, sparsified, and tuned for `SST2 datasets `__ using +The tutorial downloads `a BERT-base +model `__ +which has been quantized, sparsified, and tuned for `SST2 +datasets `__ using `Optimum-Intel `__. It demonstrates the inference performance advantage on 4th Gen Intel® Xeon® Scalable Processors by running it with `Sparse Weight -Decompression `__, +Decompression `__, a runtime option that seizes model sparsity for efficiency. The notebook consists of the following steps: @@ -21,9 +21,7 @@ consists of the following steps: integration with Hugging Face Optimum. - Compare sparse 8-bit vs. dense 8-bit inference performance. -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Prerequisites <#prerequisites>`__ - `Imports <#imports>`__ @@ -34,19 +32,17 @@ consists of the following steps: - `Benchmark quantized sparse inference performance <#benchmark-quantized-sparse-inference-performance>`__ - `When this might be helpful <#when-this-might-be-helpful>`__ -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q "git+https://github.com/huggingface/optimum-intel.git" datasets onnx onnxruntime -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import shutil @@ -59,10 +55,10 @@ Imports `⇑ <#top>`__ .. parsed-literal:: - 2023-08-15 22:55:04.775263: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:55:04.809127: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 23:03:46.012098: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:03:46.047135: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:55:05.351203: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 23:03:46.594018: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT .. parsed-literal:: @@ -73,10 +69,12 @@ Imports `⇑ <#top>`__ .. parsed-literal:: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations + warnings.warn( -Download, quantize and sparsify the model, using Hugging Face Optimum API. `⇑ <#top>`__ -############################################################################################################################### +Download, quantize and sparsify the model, using Hugging Face Optimum API ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The first step is to download a quantized sparse transformers which has been translated to OpenVINO IR. Then, it will be put through a @@ -108,8 +106,6 @@ model card on Hugging Face. Compiling the model... 
Set CACHE_DIR to /opt/home/k8sworker/.cache/huggingface/hub/models--OpenVINO--bert-base-uncased-sst2-int8-unstructured80/snapshots/dc44eb46300882463d50ee847e0f6485bad3cdad/model_cache - Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers - pip install xformers. .. parsed-literal:: @@ -143,14 +139,14 @@ the IRs into a single folder. -Benchmark quantized dense inference performance `⇑ <#top>`__ +Benchmark quantized dense inference performance ############################################################################################################################### -Benchmark dense inference performance using parallel execution on four CPU cores -to simulate a small instance in the cloud infrastructure. Sequence -length is dependent on use cases, 16 is common for conversational AI -while 160 for question answering task. It is set to 64 as an example. It -is recommended to tune based on your applications. +Benchmark dense inference performance using parallel execution on four +CPU cores to simulate a small instance in the cloud infrastructure. +Sequence length is dependent on use cases, 16 is common for +conversational AI while 160 for question answering task. It is set to 64 +as an example. It is recommended to tune based on your applications. .. code:: ipython3 @@ -175,83 +171,11 @@ is recommended to tune based on your applications. To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 74.55 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] input_ids (node: input_ids) : i64 / [...] / [?,?] - [ INFO ] attention_mask (node: attention_mask) : i64 / [...] / [?,?] - [ INFO ] token_type_ids (node: token_type_ids) : i64 / [...] / [?,?] - [ INFO ] Model outputs: - [ INFO ] logits (node: logits) : f32 / [...] / [?,2] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [ INFO ] Reshaping model: 'input_ids': [1,64], 'attention_mask': [1,64], 'token_type_ids': [1,64] - [ INFO ] Reshape model took 26.03 ms - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] input_ids (node: input_ids) : i64 / [...] / [1,64] - [ INFO ] attention_mask (node: attention_mask) : i64 / [...] / [1,64] - [ INFO ] token_type_ids (node: token_type_ids) : i64 / [...] / [1,64] - [ INFO ] Model outputs: - [ INFO ] logits (node: logits) : f32 / [...] 
/ [1,2] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 1231.43 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4 - [ INFO ] NUM_STREAMS: 4 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 4 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'input_ids'!. This input will be filled with random values! - [ WARNING ] No input files were given for input 'attention_mask'!. This input will be filled with random values! - [ WARNING ] No input files were given for input 'token_type_ids'!. This input will be filled with random values! - [ INFO ] Fill input 'input_ids' with random values - [ INFO ] Fill input 'attention_mask' with random values - [ INFO ] Fill input 'token_type_ids' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 31.02 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 8896 iterations - [ INFO ] Duration: 60044.19 ms - [ INFO ] Latency: - [ INFO ] Median: 26.82 ms - [ INFO ] Average: 26.87 ms - [ INFO ] Min: 25.13 ms - [ INFO ] Max: 38.04 ms - [ INFO ] Throughput: 148.16 FPS - - -Benchmark quantized sparse inference performance `⇑ <#top>`__ -############################################################################################################################### + /bin/bash: benchmark_app: command not found + +Benchmark quantized sparse inference performance +############################################################################################################################### To enable sparse weight decompression feature, users can add it to runtime config like below. ``CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE`` @@ -283,83 +207,11 @@ for which a layer will be enabled. To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 63.96 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] input_ids (node: input_ids) : i64 / [...] / [?,?] 
- [ INFO ] attention_mask (node: attention_mask) : i64 / [...] / [?,?] - [ INFO ] token_type_ids (node: token_type_ids) : i64 / [...] / [?,?] - [ INFO ] Model outputs: - [ INFO ] logits (node: logits) : f32 / [...] / [?,2] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [ INFO ] Reshaping model: 'input_ids': [1,64], 'attention_mask': [1,64], 'token_type_ids': [1,64] - [ INFO ] Reshape model took 26.17 ms - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] input_ids (node: input_ids) : i64 / [...] / [1,64] - [ INFO ] attention_mask (node: attention_mask) : i64 / [...] / [1,64] - [ INFO ] token_type_ids (node: token_type_ids) : i64 / [...] / [1,64] - [ INFO ] Model outputs: - [ INFO ] logits (node: logits) : f32 / [...] / [1,2] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 1252.94 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4 - [ INFO ] NUM_STREAMS: 4 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 4 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'input_ids'!. This input will be filled with random values! - [ WARNING ] No input files were given for input 'attention_mask'!. This input will be filled with random values! - [ WARNING ] No input files were given for input 'token_type_ids'!. This input will be filled with random values! - [ INFO ] Fill input 'input_ids' with random values - [ INFO ] Fill input 'attention_mask' with random values - [ INFO ] Fill input 'token_type_ids' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 30.12 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 8840 iterations - [ INFO ] Duration: 60036.27 ms - [ INFO ] Latency: - [ INFO ] Median: 26.83 ms - [ INFO ] Average: 26.88 ms - [ INFO ] Min: 26.11 ms - [ INFO ] Max: 40.45 ms - [ INFO ] Throughput: 147.24 FPS - - -When this might be helpful `⇑ <#top>`__ -############################################################################################################################### + /bin/bash: benchmark_app: command not found + +When this might be helpful +############################################################################################################################### This feature can improve inference performance for models with sparse weights in the scenarios when the model is deployed to handle multiple @@ -369,5 +221,7 @@ small sequence length, for example, 32 and lower. 
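
As a sketch (not one of this notebook's cells), the same option can also be set directly in a plain OpenVINO Python application when compiling the model; the IR path and the ``0.8`` threshold below are only examples:

.. code:: ipython3

    import openvino as ov

    core = ov.Core()
    model = core.read_model("quantized_sparse_weights/openvino_model.xml")  # example path

    # Enable Sparse Weight Decompression for layers whose weights are at least
    # 80% sparse. The speed-up is expected on 4th Gen Intel Xeon Scalable CPUs.
    compiled_model = core.compile_model(
        model, "CPU", {"CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE": "0.8"}
    )
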
For more details about asynchronous inference with OpenVINO, refer to the following documentation: -- `Deployment Optimization Guide `__ -- `Inference Request API `__ +- `Deployment Optimization + Guide `__ +- `Inference Request + API `__ diff --git a/docs/notebooks/117-model-server-with-output.rst b/docs/notebooks/117-model-server-with-output.rst index 272e3b3bdca60d..390de056a564ce 100644 --- a/docs/notebooks/117-model-server-with-output.rst +++ b/docs/notebooks/117-model-server-with-output.rst @@ -1,12 +1,10 @@ Hello Model Server ================== - - Introduction to OpenVINO™ Model Server (OVMS). What is Model Serving? ----------------------- +############################################################################################################################### A model server hosts models and makes them accessible to software components over standard network protocols. A client sends a request to @@ -14,56 +12,54 @@ the model server, which performs inference and sends a response back to the client. Model serving offers many advantages for efficient model deployment: -- Remote inference enables using lightweight clients with only the - necessary functions to perform API calls to edge or cloud - deployments. -- Applications are independent of the model framework, hardware device, - and infrastructure. -- Client applications in any programming language that supports REST or - gRPC calls can be used to run inference remotely on the model server. -- Clients require fewer updates since client libraries change very - rarely. -- Model topology and weights are not exposed directly to client - applications, making it easier to control access to the model. -- Ideal architecture for microservices-based applications and - deployments in cloud environments – including Kubernetes and - OpenShift clusters. -- Efficient resource utilization with horizontal and vertical inference - scaling. +- Remote inference enables using lightweight clients with only the + necessary functions to perform API calls to edge or cloud + deployments. +- Applications are independent of the model framework, hardware device, + and infrastructure. +- Client applications in any programming language that supports REST or + gRPC calls can be used to run inference remotely on the model server. +- Clients require fewer updates since client libraries change very + rarely. +- Model topology and weights are not exposed directly to client + applications, making it easier to control access to the model. +- Ideal architecture for microservices-based applications and + deployments in cloud environments – including Kubernetes and + OpenShift clusters. +- Efficient resource utilization with horizontal and vertical inference + scaling. |ovms_diagram| -.. 
_top: +**Table of contents:** -**Table of contents**: +- `Serving with OpenVINO Model Server <#serving-with-openvino-model-server>`__ +- `Step 1: Prepare Docker <#step-1-prepare-docker>`__ +- `Step 2: Preparing a Model Repository <#step-2-preparing-a-model-repository>`__ +- `Step 3: Start the Model Server Container <#step-3-start-the-model-server-container>`__ +- `Step 4: Prepare the Example Client Components <#step-4-prepare-the-example-client-components>`__ -- `Serving with OpenVINO Model Server <#serving-with-openvino-model-server1>`__ -- `Step 1: Prepare Docker <#step-1-prepare-docker>`__ -- `Step 2: Preparing a Model Repository <#step-2-preparing-a-model-repository>`__ -- `Step 3: Start the Model Server Container <#start-the-model-server-container>`__ -- `Step 4: Prepare the Example Client Components <#prepare-the-example-client-components>`__ - - - `Prerequisites <#prerequisites>`__ - - `Imports <#imports>`__ - - `Request Model Status <#request-model-status>`__ - - `Request Model Metadata <#request-model-metadata>`__ - - `Load input image <#load-input-image>`__ - - `Request Prediction on a Numpy Array <#request-prediction-on-a-numpy-array>`__ - - `Visualization <#visualization>`__ + - `Prerequisites <#prerequisites>`__ + - `Imports <#imports>`__ + - `Request Model Status <#request-model-status>`__ + - `Request Model Metadata <#request-model-metadata>`__ + - `Load input image <#load-input-image>`__ + - `Request Prediction on a Numpy Array <#request-prediction-on-a-numpy-array>`__ + - `Visualization <#visualization>`__ - `References <#references>`__ .. |ovms_diagram| image:: https://user-images.githubusercontent.com/91237924/215658773-4720df00-3b95-4a84-85a2-40f06138e914.png -Serving with OpenVINO Model Server `⇑ <#top>`__ +Serving with OpenVINO Model Server ############################################################################################################################### -OpenVINO Model Server (OVMS) is a high-performance system for serving models. Implemented in -C++ for scalability and optimized for deployment on Intel architectures, -the model server uses the same architecture and API as TensorFlow -Serving and KServe while applying OpenVINO for inference execution. -Inference service is provided via gRPC or REST API, making deploying new -algorithms and AI experiments easy. +OpenVINO Model Server (OVMS) is a high-performance system for serving +models. Implemented in C++ for scalability and optimized for deployment +on Intel architectures, the model server uses the same architecture and +API as TensorFlow Serving and KServe while applying OpenVINO for +inference execution. Inference service is provided via gRPC or REST API, +making deploying new algorithms and AI experiments easy. .. figure:: https://user-images.githubusercontent.com/91237924/215658767-0e0fc221-aed0-4db1-9a82-6be55f244dba.png :alt: ovms_high_level @@ -72,10 +68,11 @@ algorithms and AI experiments easy. To quickly start using OpenVINO™ Model Server, follow these steps: -Step 1: Prepare Docker `⇑ <#top>`__ +Step 1: Prepare Docker ############################################################################################################################### -Install `Docker Engine `__, including its +Install `Docker Engine `__, +including its `post-installation `__ steps, on your development system. To verify installation, test it, using the following command. When it is ready, it will display a test @@ -112,11 +109,11 @@ image and a message. 
-Step 2: Preparing a Model Repository `⇑ <#top>`__ +Step 2: Preparing a Model Repository ############################################################################################################################### -The models need to be placed and mounted in a particular directory structure and according to -the following rules: +The models need to be placed and mounted in a particular directory +structure and according to the following rules: :: @@ -196,7 +193,7 @@ the following rules: Model Copied to "./models/detection/1". -Step 3: Start the Model Server Container `⇑ <#top>`__ +Step 3: Start the Model Server Container ############################################################################################################################### Pull and start the container: @@ -224,8 +221,8 @@ Check whether the OVMS container is running normally: The required Model Server parameters are listed below. For additional -configuration options, see the -`Model Server Parameters section `__. +configuration options, see the `Model Server Parameters +section `__. .. raw:: html @@ -645,7 +642,7 @@ openvino/model_server:latest If the serving port ``9000`` is already in use, please switch it to another available port on your system. For example:\ ``-p 9020:9000`` -Step 4: Prepare the Example Client Components `⇑ <#top>`__ +Step 4: Prepare the Example Client Components ############################################################################################################################### OpenVINO Model Server exposes two sets of APIs: one compatible with @@ -656,10 +653,9 @@ into existing systems the already leverage one of these APIs for inference. This example will demonstrate how to write a TensorFlow Serving API client for object detection. -Prerequisites `⇑ <#top>`__ +Prerequisites +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Install necessary packages. .. code:: ipython3 @@ -692,10 +688,9 @@ Install necessary packages. You should consider upgrading via the '/home/adrian/repos/openvino_notebooks_adrian/venv/bin/python -m pip install --upgrade pip' command. -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import cv2 @@ -703,10 +698,9 @@ Imports `⇑ <#top>`__ import matplotlib.pyplot as plt from ovmsclient import make_grpc_client -Request Model Status `⇑ <#top>`__ +Request Model Status +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 address = "localhost:9000" @@ -722,10 +716,9 @@ Request Model Status `⇑ <#top>`__ {1: {'state': 'AVAILABLE', 'error_code': 0, 'error_message': 'OK'}} -Request Model Metadata `⇑ <#top>`__ +Request Model Metadata +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 model_metadata = client.get_model_metadata(model_name=model_name) @@ -737,10 +730,9 @@ Request Model Metadata `⇑ <#top>`__ {'model_version': 1, 'inputs': {'image': {'shape': [1, 3, 704, 704], 'dtype': 'DT_FLOAT'}}, 'outputs': {'1469_1470.0': {'shape': [-1], 'dtype': 'DT_FLOAT'}, '1078_1079.0': {'shape': [1000], 'dtype': 'DT_FLOAT'}, '1330_1331.0': {'shape': [36], 'dtype': 'DT_FLOAT'}, 'labels': {'shape': [-1], 'dtype': 'DT_INT32'}, '1267_1268.0': {'shape': [121], 'dtype': 'DT_FLOAT'}, '1141_1142.0': {'shape': [1000], 'dtype': 'DT_FLOAT'}, '1204_1205.0': {'shape': [484], 'dtype': 'DT_FLOAT'}, 'boxes': {'shape': [-1, 5], 'dtype': 'DT_FLOAT'}}} -Load input image `⇑ <#top>`__ +Load input image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Text detection models expect an image in BGR format. @@ -769,10 +761,9 @@ Load input image `⇑ <#top>`__ .. image:: 117-model-server-with-output_files/117-model-server-with-output_20_1.png -Request Prediction on a Numpy Array `⇑ <#top>`__ +Request Prediction on a Numpy Array +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 inputs = {"image": input_image} @@ -795,10 +786,9 @@ Request Prediction on a Numpy Array `⇑ <#top>`__ [2.2261986e+01 4.5406548e+01 1.8868817e+02 1.0225631e+02 3.0407205e-01]] -Visualization `⇑ <#top>`__ +Visualization +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # For each detection, the description is in the [x_min, y_min, x_max, y_max, conf] format: @@ -879,11 +869,10 @@ command: ovms -References `⇑ <#top>`__ +References ############################################################################################################################### - 1. `OpenVINO™ Model Server - documentation `__ + documentation `__ 2. `OpenVINO™ Model Server GitHub repository `__ diff --git a/docs/notebooks/118-optimize-preprocessing-with-output.rst b/docs/notebooks/118-optimize-preprocessing-with-output.rst index cebce914098bd5..2f9a8116753605 100644 --- a/docs/notebooks/118-optimize-preprocessing-with-output.rst +++ b/docs/notebooks/118-optimize-preprocessing-with-output.rst @@ -1,8 +1,6 @@ Optimize Preprocessing ====================== - - When input data does not fit the model input tensor perfectly, additional operations/steps are needed to transform the data to the format expected by the model. This tutorial demonstrates how it could be @@ -11,15 +9,13 @@ instrument, that enables integration of preprocessing steps into an execution graph and performing it on a selected device, which can improve device utilization. For more information about Preprocessing API, see this -`overview `__ +`overview `__ and -`details `__ +`details `__ This tutorial include following steps: - Downloading the model. -- Setup preprocessing with model conversion API, loading the model and - inference with original image. - Setup preprocessing with Preprocessing API, loading the model and inference with original image. - Fitting image to the model input type and inference with prepared @@ -27,9 +23,7 @@ This tutorial include following steps: - Comparing results on one picture. - Comparing performance. -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Settings <#settings>`__ - `Imports <#imports>`__ @@ -39,16 +33,11 @@ This tutorial include following steps: - `Create core <#create-core>`__ - `Check the original parameters of image <#check-the-original-parameters-of-image>`__ -- `Convert model to OpenVINO IR and setup preprocessing steps with model conversion API <#convert-model-to-openvino-ir-and-setup-preprocessing-steps-with-model-conversion-api>`__ - - - `Prepare image <#prepare-image>`__ - - `Compile model and perform inference <#compile-model-and-perform-inference>`__ - - `Setup preprocessing steps with Preprocessing API and perform inference <#setup-preprocessing-steps-with-preprocessing-api-and-perform-inference>`__ - - `Convert model to OpenVINO IR with model conversion API <#convert-model-to-openvino-ir-with-model-conversion-api>`__ + - `Convert model to OpenVINO IR with model conversion API <#convert-model-to-openvino-ir-with-model-conversion-apI>`__ - `Create PrePostProcessor Object <#create-prepostprocessor-object>`__ - - `Declare User’s Data Format <#declare-users-data-format>`__ + - `Declare User’s Data Format <#declare-user’s-data-format>`__ - `Declaring Model Layout <#declaring-model-layout>`__ - `Preprocessing Steps <#preprocessing-steps>`__ - `Integrating Steps into a Model <#integrating-steps-into-a-model>`__ @@ -65,39 +54,40 @@ This tutorial include following steps: - `Compare results on one image <#compare-results-on-one-image>`__ - `Compare performance <#compare-performance>`__ -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### +.. code:: ipython3 -Imports `⇑ <#top>`__ -############################################################################################################################### + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" tensorflow opencv-python matplotlib +Imports +############################################################################################################################### .. code:: ipython3 - import cv2 import time + from pathlib import Path + import cv2 + import matplotlib.pyplot as plt import numpy as np + import openvino as ov import tensorflow as tf - from pathlib import Path - from openvino.tools import mo - import matplotlib.pyplot as plt - from openvino.runtime import Core, serialize .. parsed-literal:: - 2023-08-15 22:57:19.952994: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. - 2023-08-15 22:57:19.987688: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + 2023-09-08 23:04:01.488557: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:04:01.524594: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. 
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. - 2023-08-15 22:57:20.520711: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + 2023-09-08 23:04:02.060166: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT -Setup image and device `⇑ <#top>`__ +Setup image and device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 image_path = "../data/image/coco.jpg" @@ -106,7 +96,7 @@ Setup image and device `⇑ <#top>`__ import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -125,10 +115,9 @@ Setup image and device `⇑ <#top>`__ -Downloading the model `⇑ <#top>`__ +Downloading the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - This tutorial uses the `InceptionResNetV2 `__. The InceptionResNetV2 model is the second of the @@ -158,7 +147,7 @@ and save it to the disk. .. parsed-literal:: - 2023-08-15 22:57:21.888060: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. + 2023-09-08 23:04:03.032233: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... @@ -182,18 +171,16 @@ and save it to the disk. INFO:tensorflow:Assets written to: model/InceptionResNetV2/assets -Create core `⇑ <#top>`__ +Create core +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - core = Core() + core = ov.Core() -Check the original parameters of image `⇑ <#top>`__ +Check the original parameters of image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 image = cv2.imread(image_path) @@ -209,107 +196,10 @@ Check the original parameters of image `⇑ <#top>`__ -.. image:: 118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_13_1.png +.. image:: 118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_14_1.png -Convert model to OpenVINO IR and setup preprocessing steps with model conversion API. `⇑ <#top>`__ -############################################################################################################################### - -To convert a TensorFlow model to OpenVINO IR, use the -``mo.convert_model`` python function of `model conversion -API `__. -The function returns instance of OpenVINO Model class, which is ready to -use in Python interface but can also be serialized to OpenVINO IR format -for future execution using ``openvino.runtime.serialize``. The models -will be saved to the ``./model/ir_model/`` directory. 
- -In this step, some conversions can be setup, which will enable reduction -of work on processing the input data before propagating it through the -network. These conversions will be inserted as additional input -pre-processing sub-graphs into the converted model. - -Setup the following conversions: - -- mean normalization with ``mean_values`` parameter. -- scale with ``scale_values``. -- color conversion, the color format of example image will be ``BGR``, - but the model required ``RGB`` format, so add - ``reverse_input_channels=True`` to process the image into the desired - format. - -Also converting of layout could be specified with ``layout`` option. -More information and parameters described in the `Embedding -Preprocessing Computation -article `__. - -.. code:: ipython3 - - ir_path_mo_preprocess = model_dir / "ir_model" / f"{model_name}_mo_preproc.xml" - - ov_model_mo_preprocess = None - - if ir_path_mo_preprocess.exists(): - ov_model_mo_preprocess = core.read_model(model=ir_path_mo_preprocess) - print(f"Model in OpenVINO format already exists: {ir_path_mo_preprocess}") - else: - ov_model_mo_preprocess = mo.convert_model(saved_model_dir=model_path, - model_name=model_path.name, - mean_values=[127.5,127.5,127.5], - scale_values=[127.5,127.5,127.5], - reverse_input_channels=True, - input_shape=[1,299,299,3]) - serialize(ov_model_mo_preprocess, str(ir_path_mo_preprocess)) - -Prepare image `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - -.. code:: ipython3 - - def prepare_image_mo_preprocess(image_path, model): - img = cv2.imread(filename=image_path) - - input_layer_ir = next(iter(model.inputs)) - - # N, H, W, C = batch size, height, width, number of channels - N, H, W, C = input_layer_ir.shape - # Resize image to the input size expected by the model. - img = cv2.resize(img, (H, W)) - - # Fit image data type to expected by the model value - img = np.float32(img) - - # Reshape to match the input shape expected by the model. - input_tensor = np.expand_dims(img, axis=0) - - return input_tensor - - - mo_pp_input_tensor = prepare_image_mo_preprocess(image_path, ov_model_mo_preprocess) - - print(f"The shape of the image is {mo_pp_input_tensor.shape}") - print(f"The data type of the image is {mo_pp_input_tensor.dtype}") - - -.. parsed-literal:: - - The shape of the image is (1, 299, 299, 3) - The data type of the image is float32 - - -Compile model and perform inference `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - -.. code:: ipython3 - - compiled_model_mo_pp = core.compile_model(model=ov_model_mo_preprocess, device_name=device.value) - - output_layer = compiled_model_mo_pp.output(0) - - result = compiled_model_mo_pp(mo_pp_input_tensor)[output_layer] - -Setup preprocessing steps with Preprocessing API and perform inference. `⇑ <#top>`__ +Setup preprocessing steps with Preprocessing API and perform inference ############################################################################################################################### Intuitively, preprocessing API consists of the following parts: @@ -326,7 +216,7 @@ Graph modifications of a model shall be performed after the model is read from a drive and before it is loaded on the actual device. 
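
As a quick, illustrative sketch of how these parts fit together (the actual
model and preprocessing values for this notebook are configured step by step
in the sections below; ``model.xml`` here is only a placeholder path):

.. code:: ipython3

    # Illustrative sketch only: "model.xml" is a placeholder path; the real model
    # and preprocessing values for this notebook are set up in the next sections.
    import openvino as ov
    from openvino.preprocess import PrePostProcessor, ResizeAlgorithm

    core = ov.Core()
    model = core.read_model("model.xml")

    ppp = PrePostProcessor(model)
    # tensor(): the format of the data the application will actually provide
    ppp.input().tensor().set_element_type(ov.Type.u8)\
        .set_spatial_dynamic_shape()\
        .set_layout(ov.Layout('NHWC'))
    # preprocess(): the conversion steps applied to that data
    ppp.input().preprocess().convert_element_type(ov.Type.f32)\
        .resize(ResizeAlgorithm.RESIZE_LINEAR)\
        .mean([127.5, 127.5, 127.5])\
        .scale([127.5, 127.5, 127.5])
    # model(): what the model itself expects
    ppp.input().model().set_layout(ov.Layout('NHWC'))
    # build(): embed the declared steps into the model graph
    model = ppp.build()
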
Pre-processing support following operations (please, see more details -`here `__) +`here `__) - Mean/Scale Normalization - Converting Precision @@ -335,10 +225,9 @@ Pre-processing support following operations (please, see more details - Color Conversion - Custom Operations -Convert model to OpenVINO IR with model conversion API `⇑ <#top>`__ +Convert model to OpenVINO IR with model conversion API +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The options for preprocessing are not required. .. code:: ipython3 @@ -351,16 +240,15 @@ The options for preprocessing are not required. ppp_model = core.read_model(model=ir_path) print(f"Model in OpenVINO format already exists: {ir_path}") else: - ppp_model = mo.convert_model(saved_model_dir=model_path, - input_shape=[1,299,299,3]) - serialize(ppp_model, str(ir_path)) + ppp_model = ov.convert_model(model_path, + input=[1,299,299,3]) + ov.save_model(ppp_model, str(ir_path)) -Create ``PrePostProcessor`` Object `⇑ <#top>`__ +Create ``PrePostProcessor`` Object +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The -`PrePostProcessor() `__ +```PrePostProcessor()`` `__ class enables specifying the preprocessing and postprocessing steps for a model. @@ -370,10 +258,9 @@ a model. ppp = PrePostProcessor(ppp_model) -Declare User’s Data Format `⇑ <#top>`__ +Declare User’s Data Format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To address particular input of a model/preprocessor, use the ``PrePostProcessor.input(input_name)`` method. If the model has only one input, then simple ``PrePostProcessor.input()`` will get a reference to @@ -384,7 +271,7 @@ about user’s input tensor will be initialized to same data (type/shape/etc) as model’s input parameter. User application can override particular parameters according to application’s data. Refer to the following -`page `__ +`page `__ for more information about parameters for overriding. Below is all the specified input information: @@ -400,37 +287,34 @@ for mean/scale normalization. .. code:: ipython3 - from openvino.runtime import Type, Layout - # setup formant of data - ppp.input().tensor().set_element_type(Type.u8)\ + ppp.input().tensor().set_element_type(ov.Type.u8)\ .set_spatial_dynamic_shape()\ - .set_layout(Layout('NHWC')) + .set_layout(ov.Layout('NHWC')) .. parsed-literal:: - + -Declaring Model Layout `⇑ <#top>`__ +Declaring Model Layout +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Model input already has information about precision and shape. Preprocessing API is not intended to modify this. The only thing that may be specified is input data -`layout `__. +`layout `__. .. code:: ipython3 input_layer_ir = next(iter(ppp_model.inputs)) print(f"The input shape of the model is {input_layer_ir.shape}") - ppp.input().model().set_layout(Layout('NHWC')) + ppp.input().model().set_layout(ov.Layout('NHWC')) .. parsed-literal:: @@ -442,17 +326,16 @@ may be specified is input data .. parsed-literal:: - + -Preprocessing Steps `⇑ <#top>`__ +Preprocessing Steps +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Now, the sequence of preprocessing steps can be defined. 
For more information about preprocessing steps, see -`here `__. +`here `__. Perform the following: @@ -461,7 +344,7 @@ Perform the following: dynamic size, for example, ``{?, 3, ?, ?}`` resize will not know how to resize the picture. Therefore, in this case, target height/ width should be specified. For more details, see also the - `PreProcessSteps.resize() `__. + ```PreProcessSteps.resize()`` `__. - Subtract mean from each channel. - Divide each pixel data to appropriate scale value. @@ -472,7 +355,7 @@ then such conversion will be added explicitly. from openvino.preprocess import ResizeAlgorithm - ppp.input().preprocess().convert_element_type(Type.f32) \ + ppp.input().preprocess().convert_element_type(ov.Type.f32) \ .resize(ResizeAlgorithm.RESIZE_LINEAR)\ .mean([127.5,127.5,127.5])\ .scale([127.5,127.5,127.5]) @@ -482,14 +365,13 @@ then such conversion will be added explicitly. .. parsed-literal:: - + -Integrating Steps into a Model `⇑ <#top>`__ +Integrating Steps into a Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Once the preprocessing steps have been finished, the model can be finally built. It is possible to display ``PrePostProcessor`` configuration for debugging purposes. @@ -513,10 +395,9 @@ configuration for debugging purposes. -Load model and perform inference `⇑ <#top>`__ +Load model and perform inference ############################################################################################################################### - .. code:: ipython3 def prepare_image_api_preprocess(image_path, model=None): @@ -532,23 +413,20 @@ Load model and perform inference `⇑ <#top>`__ ppp_input_tensor = prepare_image_api_preprocess(image_path) results = compiled_model_with_preprocess_api(ppp_input_tensor)[ppp_output_layer][0] -Fit image manually and perform inference `⇑ <#top>`__ +Fit image manually and perform inference ############################################################################################################################### - -Load the model `⇑ <#top>`__ +Load the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 model = core.read_model(model=ir_path) compiled_model = core.compile_model(model=model, device_name=device.value) -Load image and fit it to model input `⇑ <#top>`__ +Load image and fit it to model input +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 def manual_image_preprocessing(path_to_image, compiled_model): @@ -580,24 +458,21 @@ Load image and fit it to model input `⇑ <#top>`__ The data type of the image is float32 -Perform inference `⇑ <#top>`__ +Perform inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 output_layer = compiled_model.output(0) result = compiled_model(input_tensor)[output_layer] -Compare results `⇑ <#top>`__ +Compare results ############################################################################################################################### - -Compare results on one image `⇑ <#top>`__ +Compare results on one image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 def check_results(input_tensor, compiled_model, imagenet_classes): @@ -618,12 +493,6 @@ Compare results on one image `⇑ <#top>`__ imagenet_classes = open("../data/datasets/imagenet/imagenet_2012.txt").read().splitlines() imagenet_classes = ['background'] + imagenet_classes - # get result for inference with preprocessing api - print("Result of inference for preprocessing with Model Optimizer:") - res = check_results(mo_pp_input_tensor, compiled_model_mo_pp, imagenet_classes) - - print("\n") - # get result for inference with preprocessing api print("Result of inference with Preprocessing API:") res = check_results(ppp_input_tensor, compiled_model_with_preprocess_api, imagenet_classes) @@ -637,14 +506,6 @@ Compare results on one image `⇑ <#top>`__ .. parsed-literal:: - Result of inference for preprocessing with Model Optimizer: - n02099601 golden retriever, 0.56439 - n02098413 Lhasa, Lhasa apso, 0.35731 - n02108915 French bulldog, 0.00730 - n02111129 Leonberg, 0.00687 - n04404412 television, television system, 0.00317 - - Result of inference with Preprocessing API: n02099601 golden retriever, 0.80560 n02098413 Lhasa, Lhasa apso, 0.10039 @@ -654,17 +515,16 @@ Compare results on one image `⇑ <#top>`__ Result of inference with manual image setup: - n02098413 Lhasa, Lhasa apso, 0.76848 - n02099601 golden retriever, 0.19304 - n02111129 Leonberg, 0.00725 - n02097047 miniature schnauzer, 0.00290 - n02100877 Irish setter, red setter, 0.00116 + n02098413 Lhasa, Lhasa apso, 0.76843 + n02099601 golden retriever, 0.19322 + n02111129 Leonberg, 0.00720 + n02097047 miniature schnauzer, 0.00287 + n02100877 Irish setter, red setter, 0.00115 -Compare performance `⇑ <#top>`__ +Compare performance +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 def check_performance(compiled_model, preprocessing_function=None): @@ -681,16 +541,9 @@ Compare performance `⇑ <#top>`__ return time_ir, num_images - - time_ir, num_images = check_performance(compiled_model_mo_pp, prepare_image_mo_preprocess) - print( - f"IR model in OpenVINO Runtime/CPU with preprocessing API: {time_ir/num_images:.4f} " - f"seconds per image, FPS: {num_images/time_ir:.2f}" - ) - time_ir, num_images = check_performance(compiled_model, manual_image_preprocessing) print( - f"IR model in OpenVINO Runtime/CPU with preprocessing API: {time_ir/num_images:.4f} " + f"IR model in OpenVINO Runtime/CPU with manual image preprocessing: {time_ir/num_images:.4f} " f"seconds per image, FPS: {num_images/time_ir:.2f}" ) @@ -703,7 +556,6 @@ Compare performance `⇑ <#top>`__ .. 
parsed-literal:: - IR model in OpenVINO Runtime/CPU with preprocessing API: 0.0199 seconds per image, FPS: 50.13 - IR model in OpenVINO Runtime/CPU with preprocessing API: 0.0155 seconds per image, FPS: 64.58 - IR model in OpenVINO Runtime/CPU with preprocessing API: 0.0188 seconds per image, FPS: 53.27 + IR model in OpenVINO Runtime/CPU with manual image preprocessing: 0.0153 seconds per image, FPS: 65.52 + IR model in OpenVINO Runtime/CPU with preprocessing API: 0.0187 seconds per image, FPS: 53.40 diff --git a/docs/notebooks/118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_13_1.png b/docs/notebooks/118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_14_1.png similarity index 100% rename from docs/notebooks/118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_13_1.png rename to docs/notebooks/118-optimize-preprocessing-with-output_files/118-optimize-preprocessing-with-output_14_1.png diff --git a/docs/notebooks/119-tflite-to-openvino-with-output.rst b/docs/notebooks/119-tflite-to-openvino-with-output.rst index 07d330269f823d..1d8af0103e85a8 100644 --- a/docs/notebooks/119-tflite-to-openvino-with-output.rst +++ b/docs/notebooks/119-tflite-to-openvino-with-output.rst @@ -1,8 +1,6 @@ Convert a Tensorflow Lite Model to OpenVINO™ ============================================ - - `TensorFlow Lite `__, often referred to as TFLite, is an open source library developed for deploying machine learning models to edge devices. @@ -10,16 +8,13 @@ machine learning models to edge devices. This short tutorial shows how to convert a TensorFlow Lite `EfficientNet-Lite-B0 `__ image classification model to OpenVINO `Intermediate -Representation `__ -(OpenVINO IR) format, using `Model -Optimizer `__. -After creating the OpenVINO IR, load the model in `OpenVINO -Runtime `__ -and do inference with a sample image. - -.. _top: +Representation `__ +(OpenVINO IR) format, using Model Converter. After creating the OpenVINO +IR, load the model in `OpenVINO +Runtime `__ +and do inference with a sample image. -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -35,17 +30,15 @@ and do inference with a sample image. - `Estimate Model Performance <#estimate-model-performance>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### - -Install requirements `⇑ <#top>`__ +Install requirements +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q opencv-python requests tqdm # Fetch `notebook_utils` module @@ -55,24 +48,21 @@ Install requirements `⇑ <#top>`__ filename='notebook_utils.py' ); -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 from pathlib import Path import numpy as np from PIL import Image - from openvino.runtime import Core, serialize - from openvino.tools import mo + import openvino as ov from notebook_utils import download_file, load_image -Download TFLite model `⇑ <#top>`__ +Download TFLite model ############################################################################################################################### - .. 
code:: ipython3 model_dir = Path("model") @@ -94,31 +84,30 @@ Download TFLite model `⇑ <#top>`__ .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/119-tflite-to-openvino/model/efficientnet_lite0_fp32_2.tflite') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/119-tflite-to-openvino/model/efficientnet_lite0_fp32_2.tflite') -Convert a Model to OpenVINO IR Format `⇑ <#top>`__ +Convert a Model to OpenVINO IR Format ############################################################################################################################### - To convert the TFLite model to OpenVINO IR, model conversion Python API -can be used. ``mo.convert_model`` function accepts the path to the +can be used. ``ov.convert_model`` function accepts the path to the TFLite model and returns an OpenVINO Model class instance which represents this model. The obtained model is ready to use and to be -loaded on a device using ``compile_model`` or can be saved on a disk -using ``serialize`` function, reducing loading time for next running. -Optionally, we can apply compression to the FP16 model weights, using -the ``compress_to_fp16=True`` option and integrate preprocessing using -this approach. For more information about model conversion, see this -`page `__. +loaded on a device using ``ov.compile_model`` or can be saved on a disk +using ``ov.save_model`` function, reducing loading time for next +running. By default, model weights are compressed to FP16 during +serialization by ``ov.save_model``. For more information about model +conversion, see this +`page `__. For TensorFlow Lite models support, refer to this -`tutorial `__. +`tutorial `__. .. code:: ipython3 - ov_model = mo.convert_model(tflite_model_path, compress_to_fp16=True) - serialize(ov_model, ov_model_path) + ov_model = ov.convert_model(tflite_model_path) + ov.save_model(ov_model, ov_model_path) print(f"Model {tflite_model_path} successfully converted and saved to {ov_model_path}") @@ -127,25 +116,23 @@ For TensorFlow Lite models support, refer to this Model model/efficientnet_lite0_fp32_2.tflite successfully converted and saved to model/efficientnet_lite0_fp32_2.xml -Load model using OpenVINO TensorFlow Lite Frontend `⇑ <#top>`__ +Load model using OpenVINO TensorFlow Lite Frontend ############################################################################################################################### - TensorFlow Lite models are supported via ``FrontEnd`` API. You may skip conversion to IR and read models directly by OpenVINO runtime API. For more examples supported formats reading via Frontend API, please look -this `tutorial <002-openvino-api-with-output.html>`__. +this `tutorial <../002-openvino-api>`__. .. code:: ipython3 - core = Core() + core = ov.Core() ov_model = core.read_model(tflite_model_path) -Run OpenVINO model inference `⇑ <#top>`__ +Run OpenVINO model inference ############################################################################################################################### - We can find information about model input preprocessing in its `description `__ on `TensorFlow Hub `__. @@ -158,10 +145,9 @@ on `TensorFlow Hub `__. 
resized_image = image.resize((224, 224)) input_tensor = np.expand_dims((np.array(resized_image).astype(np.float32) - 127) / 128, 0) -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -219,10 +205,11 @@ Select device from dropdown list for running inference using OpenVINO: Predicted label: n02109047 Great Dane with probability 0.715318 -Estimate Model Performance `⇑ <#top>`__ +Estimate Model Performance ############################################################################################################################### -`Benchmark Tool `__ +`Benchmark +Tool `__ is used to measure the inference performance of the model on CPU and GPU. @@ -235,7 +222,6 @@ GPU. benchmark on GPU. Run ``benchmark_app --help`` to see an overview of all command-line options. - .. code:: ipython3 print("Benchmark model inference on CPU") @@ -248,66 +234,5 @@ GPU. .. parsed-literal:: Benchmark model inference on CPU - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] CPU - [ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 9.14 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] images (node: images) : f32 / [...] / [1,224,224,3] - [ INFO ] Model outputs: - [ INFO ] Softmax (node: 61) : f32 / [...] / [1,1000] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 1 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] images (node: images) : u8 / [N,H,W,C] / [1,224,224,3] - [ INFO ] Model outputs: - [ INFO ] Softmax (node: 61) : f32 / [...] / [1,1000] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 151.57 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] NETWORK_NAME: TensorFlow_Lite_Frontend_IR - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] NUM_STREAMS: 6 - [ INFO ] AFFINITY: Affinity.CORE - [ INFO ] INFERENCE_NUM_THREADS: 24 - [ INFO ] PERF_COUNT: False - [ INFO ] INFERENCE_PRECISION_HINT: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] ENABLE_CPU_PINNING: True - [ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE - [ INFO ] ENABLE_HYPER_THREADING: True - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'images'!. This input will be filled with random values! 
- [ INFO ] Fill input 'images' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 15000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 7.60 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 17526 iterations - [ INFO ] Duration: 15005.75 ms - [ INFO ] Latency: - [ INFO ] Median: 5.00 ms - [ INFO ] Average: 5.00 ms - [ INFO ] Min: 3.28 ms - [ INFO ] Max: 14.83 ms - [ INFO ] Throughput: 1167.95 FPS + /bin/bash: benchmark_app: command not found diff --git a/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output.rst b/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output.rst index c1478cc5294bf3..add2c7016faed1 100644 --- a/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output.rst +++ b/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output.rst @@ -1,8 +1,6 @@ Convert a TensorFlow Object Detection Model to OpenVINO™ ======================================================== - - `TensorFlow `__, or TF for short, is an open-source framework for machine learning. @@ -19,16 +17,13 @@ This tutorial shows how to convert a TensorFlow `Faster R-CNN with Resnet-50 V1 `__ object detection model to OpenVINO `Intermediate -Representation `__ -(OpenVINO IR) format, using `Model -Optimizer `__. -After creating the OpenVINO IR, load the model in `OpenVINO +Representation `__ +(OpenVINO IR) format, using Model Converter. After creating the OpenVINO +IR, load the model in `OpenVINO Runtime `__ -and do inference with a sample image. - -.. _top: +and do inference with a sample image. -**Table of contents**: +**Table of contents:** - `Prerequisites <#prerequisites>`__ - `Imports <#imports>`__ @@ -49,15 +44,14 @@ and do inference with a sample image. - `Async inference pipeline <#async-inference-pipeline>`__ - `Integration preprocessing to model <#integration-preprocessing-to-model>`__ -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - Install required packages: .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" "numpy>=1.21.0" "opencv-python" "matplotlib>=3.4,<3.5.3" + !pip install -q "openvino==2023.1.0.dev20230811" "numpy>=1.21.0" "opencv-python" "matplotlib>=3.4,<3.5.3" The notebook uses utility functions. The cell below will download the ``notebook_utils`` Python module from GitHub. @@ -72,10 +66,9 @@ The notebook uses utility functions. The cell below will download the filename="notebook_utils.py", ); -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 # Standard python modules @@ -85,18 +78,15 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + # OpenVINO import + import openvino as ov # Notebook utils module from notebook_utils import download_file - - # OpenVINO modules - from openvino.runtime import Core, serialize - from openvino.tools import mo -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - Define model related variables and create corresponding directories: .. 
code:: ipython3 @@ -121,10 +111,9 @@ Define model related variables and create corresponding directories: tf_model_archive_filename = f"{model_name}.tar.gz" -Download Model from TensorFlow Hub `⇑ <#top>`__ +Download Model from TensorFlow Hub ############################################################################################################################### - Download archive with TensorFlow Object Detection model (`faster_rcnn_resnet50_v1_640x640 `__) from TensorFlow Hub: @@ -148,7 +137,7 @@ from TensorFlow Hub: .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/model/tf/faster_rcnn_resnet50_v1_640x640.tar.gz') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/model/tf/faster_rcnn_resnet50_v1_640x640.tar.gz') @@ -161,56 +150,50 @@ Extract TensorFlow Object Detection model from the downloaded archive: with tarfile.open(tf_model_dir / tf_model_archive_filename) as file: file.extractall(path=tf_model_dir) -Convert Model to OpenVINO IR `⇑ <#top>`__ +Convert Model to OpenVINO IR ############################################################################################################################### - -OpenVINO Model Optimizer Python API can be used to convert the +OpenVINO Model Converter Python API can be used to convert the TensorFlow model to OpenVINO IR. -``mo.convert_model`` function accept path to TensorFlow model and +``ov.convert_model`` function accept path to TensorFlow model and returns OpenVINO Model class instance which represents this model. Also we need to provide model input shape (``input_shape``) that is described at `model overview page on TensorFlow Hub `__. -Optionally, we can apply compression to FP16 model weights using -``compress_to_fp16=True`` option and integrate preprocessing using this -approach. The converted model is ready to load on a device using ``compile_model`` -or saved on disk using the ``serialize`` function to reduce loading time -when the model is run in the future. +or saved on disk using the ``save_model`` function to reduce loading +time when the model is run in the future. -See the `Model Optimizer Developer -Guide `__ -for more information about Model Optimizer and TensorFlow `models -support `__. +See the `Model Converter Developer +Guide `__ +for more information about Model Converter and TensorFlow `models +support `__. .. code:: ipython3 - ov_model = mo.convert_model( - saved_model_dir=tf_model_dir, - input_shape=[[1, 255, 255, 3]] + ov_model = ov.convert_model( + tf_model_dir, + input=[[1, 255, 255, 3]] ) # Save converted OpenVINO IR model to the corresponding directory - serialize(ov_model, openvino_ir_path) + ov.save_model(ov_model, openvino_ir_path) -Test Inference on the Converted Model `⇑ <#top>`__ +Test Inference on the Converted Model ############################################################################################################################### - -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. 
code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -229,20 +212,18 @@ Select device from dropdown list for running inference using OpenVINO: -Load the Model `⇑ <#top>`__ +Load the Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - core = Core() + core = ov.Core() openvino_ir_model = core.read_model(openvino_ir_path) compiled_model = core.compile_model(model=openvino_ir_model, device_name=device.value) -Get Model Information `⇑ <#top>`__ +Get Model Information +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Faster R-CNN with Resnet-50 V1 object detection model has one input - a three-channel image of variable size. The input tensor shape is ``[1, height, width, 3]`` with values in ``[0, 255]``. @@ -308,10 +289,9 @@ for more information about model inputs, outputs and their formats. -Get an Image for Test Inference `⇑ <#top>`__ +Get an Image for Test Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Load and save an image: .. code:: ipython3 @@ -335,7 +315,7 @@ Load and save an image: .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/data/coco_bike.jpg') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/data/coco_bike.jpg') @@ -363,7 +343,7 @@ Read the image, resize and convert it to the input shape of the network: .. parsed-literal:: - + @@ -371,10 +351,9 @@ Read the image, resize and convert it to the input shape of the network: .. image:: 120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_25_1.png -Perform Inference `⇑ <#top>`__ +Perform Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 inference_result = compiled_model(network_input_image) @@ -406,87 +385,86 @@ outputs will be used. .. parsed-literal:: - image_detection_boxes: [[[0.16453631 0.54612625 0.89533776 0.85469896] - [0.6721994 0.01249559 0.98444635 0.53168815] - [0.4910983 0.01171527 0.98045075 0.88644964] + image_detection_boxes: [[[0.1645457 0.54601336 0.8953864 0.85500604] + [0.67189544 0.01240013 0.9843237 0.5308594 ] + [0.49188587 0.0117609 0.98050654 0.8866383 ] ... - [0.5012431 0.5489591 0.6030575 0.61094964] - [0.45808432 0.3619884 0.8841141 0.83722156] - [0.4652153 0.02054662 0.48204365 0.0438836 ]]] - image_detection_classes: [[18. 2. 2. 3. 2. 8. 2. 2. 3. 2. 4. 4. 2. 4. 16. 1. 1. 27. - 2. 8. 62. 2. 2. 4. 4. 2. 41. 18. 4. 2. 4. 18. 2. 2. 4. 27. - 2. 2. 27. 2. 1. 1. 16. 2. 2. 2. 16. 2. 2. 4. 2. 1. 33. 4. - 15. 2. 3. 2. 2. 1. 2. 1. 4. 2. 3. 11. 4. 35. 40. 4. 1. 62. - 2. 2. 4. 36. 4. 36. 1. 31. 77. 2. 36. 1. 51. 1. 34. 3. 90. 2. - 3. 2. 1. 2. 2. 1. 1. 2. 1. 4. 18. 2. 2. 3. 31. 1. 41. 1. - 2. 2. 33. 41. 3. 31. 1. 3. 36. 27. 27. 15. 4. 4. 15. 3. 2. 37. - 1. 35. 27. 4. 36. 88. 4. 2. 3. 15. 2. 4. 2. 1. 3. 3. 27. 4. - 4. 44. 16. 1. 1. 23. 4. 3. 1. 4. 4. 62. 15. 36. 77. 3. 28. 1. - 35. 27. 2. 27. 75. 36. 8. 28. 3. 4. 36. 35. 44. 4. 3. 
1. 2. 1. - 1. 35. 87. 1. 84. 1. 1. 1. 15. 1. 3. 1. 35. 1. 1. 1. 1. 62. - 15. 1. 44. 15. 1. 41. 62. 1. 4. 43. 15. 4. 3. 4. 16. 35. 2. 33. - 3. 14. 62. 34. 41. 2. 35. 4. 18. 3. 15. 1. 27. 87. 1. 4. 19. 21. - 27. 1. 3. 2. 1. 27. 15. 4. 3. 1. 38. 1. 2. 15. 38. 4. 15. 1. - 3. 3. 62. 84. 20. 58. 2. 4. 41. 20. 88. 15. 1. 19. 31. 62. 31. 4. - 14. 1. 8. 18. 15. 2. 4. 2. 2. 2. 31. 84. 2. 15. 28. 3. 27. 18. - 15. 1. 31. 41. 1. 28. 3. 1. 8. 15. 1. 16.]] - image_detection_scores: [[0.9808771 0.9418091 0.9318733 0.8789291 0.8423196 0.5888979 - 0.5630133 0.53731316 0.4974923 0.48222807 0.4673298 0.4398691 - 0.39919445 0.33909947 0.3190495 0.27470118 0.24837914 0.23406433 - 0.23351488 0.22481255 0.22016802 0.20236589 0.19338816 0.14771679 - 0.14576106 0.14285511 0.12738948 0.12668392 0.12027147 0.10873836 - 0.10812037 0.09577218 0.09060974 0.08950701 0.08673717 0.08170561 - 0.08120535 0.0789713 0.06743153 0.06118729 0.06112184 0.05309067 - 0.05216556 0.05023476 0.04783678 0.04460874 0.04213375 0.04042179 - 0.04019568 0.03522961 0.03165065 0.0310733 0.03000823 0.02873152 - 0.02782036 0.02706797 0.0266978 0.02341437 0.02291683 0.02147149 - 0.02130841 0.02099001 0.02032206 0.01978395 0.01961209 0.01902091 - 0.01893682 0.01863261 0.01858075 0.01846547 0.01823624 0.0176264 - 0.01760109 0.01703349 0.01584588 0.01582033 0.01547665 0.01527787 - 0.01522782 0.01430391 0.01428877 0.01422195 0.0141238 0.01411421 - 0.0135575 0.01288707 0.01269312 0.01218521 0.01160688 0.01143213 - 0.01142005 0.01137567 0.0111644 0.01107758 0.0109348 0.01073039 - 0.0106188 0.01016685 0.01010454 0.00983268 0.00977985 0.00967134 - 0.00965687 0.00964259 0.00962718 0.00956944 0.00950549 0.00937742 - 0.00927729 0.00916896 0.00897371 0.00891221 0.00866699 0.00863667 - 0.00855941 0.00836656 0.00835135 0.00816708 0.00795946 0.00793826 - 0.00789131 0.00781442 0.00773429 0.00767627 0.00765273 0.00752015 - 0.00749519 0.00744095 0.00715925 0.00700314 0.00692652 0.00655058 - 0.00643994 0.00641626 0.00629459 0.00628646 0.00627907 0.00612065 - 0.00593393 0.00582955 0.00582755 0.00570769 0.00569362 0.00564996 - 0.00563695 0.00558055 0.00557034 0.00551842 0.00549368 0.00544169 - 0.00544044 0.00542281 0.00540061 0.00525593 0.00524985 0.00515946 - 0.00515553 0.00511156 0.00489827 0.00484957 0.00472266 0.00465891 - 0.00464309 0.00463513 0.00459531 0.00456809 0.0045585 0.00455432 - 0.00443505 0.00443078 0.00440637 0.00422725 0.00416438 0.0041492 - 0.00413432 0.00413151 0.00409415 0.00409274 0.00407757 0.00405691 - 0.00396555 0.00393284 0.00391471 0.00388586 0.00385833 0.00385633 - 0.00385035 0.00379386 0.00378297 0.00378109 0.00377772 0.00370916 - 0.00364531 0.00363934 0.00358231 0.00354156 0.0035037 0.00348796 - 0.00344136 0.00340937 0.00334414 0.00330951 0.00329006 0.00321436 - 0.00320603 0.00312488 0.00309948 0.00307925 0.00307775 0.00306451 - 0.00303381 0.00302188 0.00299367 0.00299316 0.00298596 0.00296609 - 0.00293693 0.00288884 0.0028709 0.00283928 0.00283312 0.00281894 - 0.00276538 0.00276278 0.00270719 0.00268026 0.00258883 0.00258464 - 0.00254383 0.00253249 0.00250638 0.00250605 0.00250558 0.0025017 - 0.00249729 0.00248757 0.00246982 0.00243592 0.0024358 0.00235382 - 0.0023404 0.00233721 0.00233374 0.00233181 0.0023271 0.00230558 - 0.00230428 0.00229607 0.00227586 0.00226048 0.00223509 0.00222384 - 0.00220214 0.00219295 0.00219229 0.00218538 0.00218472 0.00217254 - 0.00216129 0.00214788 0.00213485 0.00213233 0.00208789 0.00206768 - 0.00206485 0.00206409 0.00204371 0.00203812 0.00201267 0.00200125 - 0.00199629 0.00199346 0.00198402 0.00192943 
0.00191091 0.0019036 - 0.0018943 0.00188735 0.00188038 0.00186264 0.00179476 0.00177307 - 0.00176998 0.00176099 0.0017542 0.00174639 0.00171193 0.0017064 - 0.00169167 0.00168484 0.00167157 0.00166569 0.00166213 0.00166009 - 0.00164244 0.00164076 0.00163557 0.00162898 0.00160348 0.00159898]] + [0.43604603 0.59332204 0.4692565 0.6341099 ] + [0.46022677 0.59246916 0.48732638 0.61871874] + [0.47092935 0.4351712 0.5583364 0.5072162 ]]] + image_detection_classes: [[18. 2. 2. 3. 2. 8. 2. 2. 3. 2. 4. 4. 2. 4. 16. 1. 1. 2. + 27. 8. 62. 2. 2. 4. 4. 2. 18. 41. 4. 4. 2. 18. 2. 2. 4. 2. + 27. 2. 27. 2. 1. 2. 16. 1. 16. 2. 2. 2. 2. 16. 2. 2. 4. 2. + 1. 33. 4. 15. 3. 2. 2. 1. 2. 1. 4. 2. 3. 11. 4. 35. 4. 1. + 40. 2. 62. 2. 4. 4. 36. 1. 36. 36. 31. 77. 2. 1. 51. 1. 34. 3. + 2. 3. 90. 2. 1. 2. 1. 2. 1. 1. 2. 4. 18. 2. 3. 2. 31. 1. + 1. 2. 2. 33. 41. 41. 31. 3. 1. 36. 3. 15. 27. 27. 4. 4. 2. 37. + 3. 15. 1. 35. 27. 4. 36. 4. 88. 3. 2. 15. 2. 4. 2. 1. 3. 4. + 27. 4. 3. 16. 44. 1. 1. 23. 4. 1. 4. 3. 4. 15. 62. 36. 77. 3. + 1. 28. 27. 35. 2. 36. 75. 28. 27. 8. 3. 36. 4. 44. 2. 35. 4. 1. + 3. 1. 1. 35. 87. 1. 1. 1. 15. 84. 1. 1. 1. 3. 1. 35. 1. 1. + 1. 62. 15. 1. 15. 44. 1. 41. 1. 62. 4. 4. 3. 43. 16. 35. 15. 2. + 4. 34. 14. 3. 62. 33. 4. 41. 2. 35. 18. 3. 15. 1. 27. 4. 87. 2. + 19. 21. 1. 1. 27. 1. 3. 3. 2. 15. 38. 1. 1. 15. 27. 4. 4. 3. + 84. 38. 1. 15. 3. 20. 62. 58. 41. 20. 2. 4. 88. 62. 15. 31. 1. 31. + 14. 19. 4. 1. 2. 8. 18. 15. 4. 2. 2. 2. 31. 84. 15. 3. 28. 2. + 27. 18. 15. 1. 31. 28. 1. 41. 8. 1. 3. 20.]] + image_detection_scores: [[0.9810079 0.9406672 0.9318088 0.87736803 0.8406418 0.590001 + 0.5544931 0.5395725 0.49390146 0.48142615 0.46272704 0.44070086 + 0.40116653 0.3470845 0.31795666 0.27489564 0.2474634 0.23632632 + 0.23248206 0.22401379 0.21871325 0.20231566 0.19377239 0.14768396 + 0.14555264 0.14337891 0.12709722 0.12582931 0.11867397 0.11002139 + 0.10564936 0.09225632 0.08963246 0.08887175 0.08704519 0.08072548 + 0.08002183 0.07911441 0.0666113 0.06338128 0.06100732 0.06005874 + 0.05798699 0.05364133 0.05204991 0.05011017 0.04850946 0.04709009 + 0.04469202 0.04128509 0.04075823 0.03989557 0.03523415 0.03272378 + 0.03108068 0.02970159 0.02872299 0.02845932 0.02585638 0.02348834 + 0.02330401 0.02148149 0.02133745 0.02086147 0.0203565 0.01959799 + 0.01931953 0.01926655 0.01872199 0.01856231 0.018533 0.01838779 + 0.0181897 0.01780706 0.01727113 0.0166365 0.01586579 0.01579068 + 0.01573388 0.01528254 0.01502856 0.01451417 0.01439991 0.01428939 + 0.01419332 0.01380482 0.01360496 0.01299109 0.01249149 0.01198874 + 0.0114887 0.01145835 0.01144462 0.01139608 0.01113943 0.01108595 + 0.01089338 0.01082359 0.01051233 0.01027331 0.01006837 0.00979451 + 0.00973239 0.00960592 0.00957181 0.00953101 0.00949827 0.00942653 + 0.00942553 0.00931231 0.00907305 0.00887801 0.00884456 0.00881256 + 0.00864554 0.00854315 0.00849876 0.00849663 0.00846909 0.00820139 + 0.00816586 0.00791354 0.0079015 0.00769929 0.00768903 0.00766408 + 0.00766067 0.00764458 0.00745573 0.00721994 0.00706666 0.00700596 + 0.0067884 0.00648051 0.00646964 0.00638165 0.00635813 0.00625102 + 0.00622972 0.00599667 0.00591933 0.00585055 0.00578007 0.00576509 + 0.00572359 0.00560451 0.00558354 0.00556508 0.00553865 0.00548295 + 0.00547358 0.00543471 0.00543379 0.0054083 0.0053792 0.00535764 + 0.00523385 0.00518936 0.00505314 0.00505005 0.00492085 0.00482561 + 0.00471782 0.00470318 0.00464702 0.00461123 0.00458301 0.00457273 + 0.00455804 0.00454316 0.00454089 0.00441311 0.00437611 0.0042632 + 0.00420744 0.00415997 0.00409999 0.00409556 0.00407972 
0.00405195 + 0.00404086 0.00399852 0.00399512 0.00393439 0.00390283 0.00387304 + 0.0038489 0.00382758 0.00380029 0.00379529 0.00376791 0.00374193 + 0.0037119 0.00369629 0.00366445 0.00358808 0.00351782 0.0035044 + 0.00344527 0.00343268 0.00342918 0.0033823 0.00332239 0.00330844 + 0.00329753 0.00327268 0.00315135 0.0031098 0.00308979 0.00308363 + 0.00305497 0.00304868 0.00304043 0.00303659 0.00302582 0.00301236 + 0.0029885 0.00291268 0.00290264 0.00289243 0.00287722 0.00286564 + 0.0028257 0.00282503 0.00275258 0.00274533 0.0027204 0.00268618 + 0.00261918 0.00260795 0.00256593 0.00254094 0.00252855 0.00250768 + 0.00249793 0.00249551 0.00248255 0.00247912 0.00246619 0.00241695 + 0.00240165 0.00236032 0.00235902 0.00234437 0.00234337 0.00233791 + 0.00233535 0.00230773 0.00230558 0.00229112 0.00228888 0.0022631 + 0.00225214 0.00224187 0.00222553 0.00219966 0.00219677 0.00217865 + 0.00217776 0.00215922 0.0021541 0.00214997 0.00212955 0.00211928 + 0.0021005 0.00205066 0.00204869 0.00203888 0.00203537 0.00203026 + 0.00201357 0.00199936 0.00199387 0.00197951 0.00197288 0.00195503 + 0.00194848 0.00192129 0.00189951 0.00187286 0.0018519 0.00182989 + 0.00179158 0.00177909 0.00176328 0.00176319 0.00175034 0.00173788 + 0.00172983 0.00172819 0.00168273 0.0016768 0.00167542 0.00167398 + 0.0016395 0.00163637 0.00163319 0.00162887 0.00162824 0.00162028]] image_detections_num: [300.] -Inference Result Visualization `⇑ <#top>`__ +Inference Result Visualization +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Define utility functions to visualize the inference results .. code:: ipython3 @@ -625,7 +603,7 @@ Zoo `__: .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/data/coco_91cl.txt') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/120-tensorflow-object-detection-to-openvino/data/coco_91cl.txt') @@ -664,26 +642,25 @@ original test image: .. image:: 120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_38_0.png -Next Steps `⇑ <#top>`__ +Next Steps ############################################################################################################################### - This section contains suggestions on how to additionally improve the performance of your application using OpenVINO. -Async inference pipeline `⇑ <#top>`__ +Async inference pipeline +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -The key advantage of the Async API is that when a device is busy with inference, -the application can perform other tasks in parallel (for example, populating inputs or -scheduling other requests) rather than wait for the current inference to -complete first. To understand how to perform async inference using -openvino, refer to the `Async API tutorial <115-async-api-with-output.html>`__. +The key advantage of the Async API is that when a device is busy with +inference, the application can perform other tasks in parallel (for +example, populating inputs or scheduling other requests) rather than +wait for the current inference to complete first. To understand how to +perform async inference using openvino, refer to the `Async API +tutorial <115-async-api-with-output.html>`__. 
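
A minimal sketch of that idea, assuming the ``compiled_model`` and
``network_input_image`` objects created earlier in this notebook, could use
``AsyncInferQueue`` to keep several requests in flight while results are
collected in a callback:

.. code:: ipython3

    # Sketch only: assumes `compiled_model` and `network_input_image` from the
    # previous sections; a real application would feed a new frame per request.
    from openvino.runtime import AsyncInferQueue

    def completion_callback(infer_request, frame_id):
        # called when a request finishes, while other requests keep running
        output = infer_request.get_output_tensor(0).data
        print(f"frame {frame_id}: output shape {output.shape}")

    infer_queue = AsyncInferQueue(compiled_model, jobs=4)
    infer_queue.set_callback(completion_callback)

    for frame_id in range(8):
        infer_queue.start_async({0: network_input_image}, userdata=frame_id)

    infer_queue.wait_all()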
-Integration preprocessing to model `⇑ <#top>`__ +Integration preprocessing to model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Preprocessing API enables making preprocessing a part of the model reducing application code and dependency on additional image processing libraries. The main advantage of Preprocessing API is that preprocessing @@ -694,4 +671,5 @@ utilization. For more information, refer to the `Optimize Preprocessing tutorial <118-optimize-preprocessing-with-output.html>`__ -and to the overview of `Preprocessing API `__ . +and to the overview of `Preprocessing +API `__. diff --git a/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_38_0.png b/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_38_0.png index dff0d72c697c04..58eab9f05da9ae 100644 --- a/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_38_0.png +++ b/docs/notebooks/120-tensorflow-object-detection-to-openvino-with-output_files/120-tensorflow-object-detection-to-openvino-with-output_38_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b7874b5d950db3016c015449880bbc25d55dcbb66b14f1fd54593352f044b3a2 -size 391330 +oid sha256:31311ed17093e4368b19e35fdb57bcc230816766a3908e831be520abe18a263e +size 391413 diff --git a/docs/notebooks/121-convert-to-openvino-with-output.rst b/docs/notebooks/121-convert-to-openvino-with-output.rst index 13bca81bd9e271..4fc417fb304551 100644 --- a/docs/notebooks/121-convert-to-openvino-with-output.rst +++ b/docs/notebooks/121-convert-to-openvino-with-output.rst @@ -2,38 +2,28 @@ OpenVINO™ model conversion API ============================== This notebook shows how to convert a model from original framework -format to OpenVINO Intermediate Representation (IR). +format to OpenVINO Intermediate Representation (IR). Contents: -.. 
_top: +- `OpenVINO IR format <#openvino-ir-format>`__ +- `IR preparation with Python conversion API and Model Optimizer command-line tool <#ir-preparation-with-python-conversion-api-and-model-optimizer-command-line-tool>`__ +- `Fetching example models <#fetching-example-models>`__ +- `Basic conversion <#basic-conversion>`__ +- `Model conversion parameters <#model-conversion-parameters>`__ -**Table of contents**: + - `Setting Input Shapes <#setting-input-shapes>`__ + - `Cutting Off Parts of a Model <#cutting-off-parts-of-a-model>`__ + - `Embedding Preprocessing Computation <#embedding-preprocessing-computation>`__ -- `OpenVINO IR format <#openvino-ir-format>`__ -- `IR preparation with Python conversion API and Model Optimizer - command-line - tool <#ir-preparation-with-python-conversion-api-and-model-optimizer-command-line-tool>`__ -- `Fetching example models <#fetching-example-models>`__ -- `Basic conversion <#basic-conversion>`__ -- `Model conversion parameters <#model-conversion-parameters>`__ + - `Specifying Layout <#specifying-layout>`__ + - `Changing Model Layout <#changing-model-layout>`__ + - `Specifying Mean and Scale Values <#specifying-mean-and-scale-values>`__ + - `Reversing Input Channels <#reversing-input-channels>`__ - - `Setting Input Shapes <#setting-input-shapes>`__ - - `Cutting Off Parts of a Model <#cutting-off-parts-of-a-model>`__ - - `Embedding Preprocessing - Computation <#embedding-preprocessing-computation>`__ + - `Compressing a Model to FP16 <#compressing-a-model-to-fp16>`__ - - `Specifying Layout <#specifying-layout>`__ - - `Changing Model Layout <#changing-model-layout>`__ - - `Specifying Mean and Scale - Values <#specifying-mean-and-scale-values>`__ - - `Reversing Input Channels <#reversing-input-channels>`__ +- `Convert Models Represented as Python Objects <#convert-models-represented-as-python-objects>`__ - - `Compressing a Model to FP16 <#compressing-a-model-to-fp16>`__ - -- `Convert Models Represented as Python - Objects <#convert-models-represented-as-python-objects>`__ - -.. code-block:: ipython3 - :force: +.. code:: # Required imports. Please execute this cell first. ! pip install -q --find-links https://download.pytorch.org/whl/torch_stable.html \ @@ -46,11 +36,18 @@ format to OpenVINO Intermediate Representation (IR). "torchvision==0.14.1; sys_platform == 'darwin'" \ "torchvision==0.14.1+cpu; sys_platform == 'linux' or platform_system == 'Windows'" + +.. parsed-literal:: + + ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. + tensorflow 2.12.0 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.20.2 which is incompatible. + + OpenVINO IR format ------------------- +############################################################################################################################### OpenVINO `Intermediate Representation -(IR) `__ is the +(IR) `__ is the proprietary model format of OpenVINO. It is produced after converting a model with model conversion API. Model conversion API translates the frequently used deep learning operations to their respective similar @@ -60,7 +57,7 @@ an ``.xml`` file, containing information about network topology, and a ``.bin`` file, containing the weights and biases binary data. 
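
As a quick illustration of how that file pair is produced and read back (the
paths below are placeholders; the example models used in this notebook are
fetched in a later section):

.. code:: ipython3

    # Illustration only: "some_model.onnx" is a placeholder path, not a file
    # created by this notebook.
    from openvino.runtime import Core, serialize
    from openvino.tools import mo

    ov_model = mo.convert_model("some_model.onnx")  # in-memory OpenVINO model
    serialize(ov_model, "some_model.xml")           # writes some_model.xml + some_model.bin

    # the IR pair can later be read back without the original framework installed
    ir_model = Core().read_model("some_model.xml")
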
IR preparation with Python conversion API and Model Optimizer command-line tool -------------------------------------------------------------------------------- +############################################################################################################################### There are two ways to convert a model from the original framework format to OpenVINO IR: Python conversion API and Model Optimizer command-line @@ -68,14 +65,14 @@ tool. You can choose one of them based on whichever is most convenient for you. There should not be any differences in the results of model conversion if the same set of parameters is used. For more details, refer to `Model -Preparation `__ +Preparation `__ documentation. -.. code:: ipython3 +.. code:: - # Model Optimizer CLI tool parameters description - - ! mo --help + # Model Optimizer CLI tool parameters description + + ! mo --help .. parsed-literal:: @@ -94,30 +91,29 @@ documentation. a generated IR and output .xml/.bin files. --output_dir OUTPUT_DIR, -o OUTPUT_DIR Directory that stores the generated IR. By default, it - is the directory from where the Model Optimizer is + is the directory from where the Model Conversion is launched. --freeze_placeholder_with_value FREEZE_PLACEHOLDER_WITH_VALUE Replaces input layer with constant node with provided value, for example: "node_name->True". It will be - DEPRECATED in future releases. Use --input option to + DEPRECATED in future releases. Use "input" option to specify a value for freezing. --static_shape Enables IR generation for fixed input shape (folding `ShapeOf` operations and shape-calculating sub-graphs to `Constant`). Changing model input shape using the OpenVINO Runtime API in runtime may fail for such an IR. - --use_new_frontend Force the usage of new Frontend of Model Optimizer for - model conversion into IR. The new Frontend is C++ - based and is available for ONNX* and PaddlePaddle* - models. Model optimizer uses new Frontend for ONNX* - and PaddlePaddle* by default that means - `--use_new_frontend` and `--use_legacy_frontend` - options are not specified. + --use_new_frontend Force the usage of new Frontend for model conversion + into IR. The new Frontend is C++ based and is + available for ONNX* and PaddlePaddle* models. Model + Conversion API uses new Frontend for ONNX* and + PaddlePaddle* by default that means `use_new_frontend` + and `use_legacy_frontend` options are not specified. --use_legacy_frontend - Force the usage of legacy Frontend of Model Optimizer - for model conversion into IR. The legacy Frontend is - Python based and is available for TensorFlow*, ONNX*, - MXNet*, Caffe*, and Kaldi* models. + Force the usage of legacy Frontend for model + conversion into IR. The legacy Frontend is Python + based and is available for TensorFlow*, ONNX*, MXNet*, + Caffe*, and Kaldi* models. --input_model INPUT_MODEL, -w INPUT_MODEL, -m INPUT_MODEL Tensorflow*: a file with a pre-trained model (binary or text .pb file after freezing). Caffe*: a model @@ -170,6 +166,11 @@ documentation. by a comma, for example: [1,3,227,227],[2,4] for a model with two inputs with 4D and 2D shapes. Alternatively, specify shapes with the --input option. + --example_input EXAMPLE_INPUT + Sample of model input in original framework. For + PyTorch it can be torch.Tensor. For Tensorflow it can + be tf.Tensor or numpy.ndarray. For PaddlePaddle it can + be Paddle Variable. --batch BATCH, -b BATCH Set batch size. It applies to 1D or higher dimension inputs. 
The default dimension index for the batch is @@ -285,6 +286,12 @@ documentation. --stream_output [STREAM_OUTPUT] Switch model conversion progress display to a multiline mode. + --share_weights [SHARE_WEIGHTS] + Map memory of weights instead reading files or share + memory from input model. Currently, mapping feature is + provided only for ONNX models that do not require + fallback to the legacy ONNX frontend for the + conversion. TensorFlow*-specific parameters: --input_model_is_text [INPUT_MODEL_IS_TEXT] @@ -370,11 +377,11 @@ documentation. .. code:: ipython3 - # Python conversion API parameters description - from openvino.tools import mo + # Python conversion API parameters description + from openvino.tools import mo - - mo.convert_model(help=True) + + mo.convert_model(help=True) .. parsed-literal:: @@ -395,6 +402,11 @@ documentation. Supported formats of input model: + PaddlePaddle + paddle.hapi.model.Model + paddle.fluid.dygraph.layers.Layer + paddle.fluid.executor.Executor + PyTorch torch.nn.Module torch.jit.ScriptModule @@ -459,6 +471,11 @@ documentation. for each input separated by a comma, for example: [1,3,227,227],[2,4] for a model with two inputs with 4D and 2D shapes. Alternatively, specify shapes with the --input option. + --example_input + Sample of model input in original framework. + For PyTorch it can be torch.Tensor. + For Tensorflow it can be tf.Tensor or numpy.ndarray. + For PaddlePaddle it can be Paddle Variable. --batch Set batch size. It applies to 1D or higher dimension inputs. The default dimension index for the batch is zero. @@ -569,12 +586,18 @@ documentation. Enable model conversion progress display. --stream_output Switch model conversion progress display to a multiline mode. - - PyTorch-specific parameters: - --example_input - Sample of model input in original framework. For PyTorch it can be torch.Tensor. + --share_weights + Map memory of weights instead reading files or share memory from input + model. + Currently, mapping feature is provided only for ONNX models + that do not require fallback to the legacy ONNX frontend for the conversion. + PaddlePaddle-specific parameters: + --example_output + Sample of model output in original framework. For PaddlePaddle it can + be Paddle Variable. + TensorFlow*-specific parameters: --input_model_is_text TensorFlow*: treat the input model file as a text protobuf format. If @@ -654,7 +677,7 @@ documentation. Fetching example models ------------------------ +############################################################################################################################### This notebook uses two models for conversion examples: @@ -699,7 +722,11 @@ NLP model from Hugging Face and export it in ONNX format: .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py:223: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. + 2023-09-08 23:06:13.646146: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 
+ 2023-09-08 23:06:13.679884: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:06:14.259953: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py:223: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask, torch.tensor(torch.finfo(scores.dtype).min) @@ -955,11 +982,13 @@ To convert a model to OpenVINO IR, use the following command: To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin .. code:: ipython3 @@ -991,27 +1020,27 @@ Both Python conversion API and Model Optimizer command-line tool provide the following capabilities: \* overriding original input shapes for model conversion with ``input`` and ``input_shape`` parameters. `Setting Input Shapes -guide `__. +guide `__. \* cutting off unwanted parts of a model (such as unsupported operations and training sub-graphs) using the ``input`` and ``output`` parameters to define new inputs and outputs of the converted model. `Cutting Off Parts of a Model -guide `__. +guide `__. \* inserting additional input pre-processing sub-graphs into the converted model by using the ``mean_values``, ``scales_values``, ``layout``, and other parameters. 
`Embedding Preprocessing Computation -article `__. +article `__. \* compressing the model weights (for example, weights for convolutions and matrix multiplications) to FP16 data type using ``compress_to_fp16`` compression parameter. `Compression of a Model to FP16 -guide `__. +guide `__. If the out-of-the-box conversion (only the ``input_model`` parameter is specified) is not successful, it may be required to use the parameters mentioned above to override input shapes and cut the model. Setting Input Shapes -~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Model conversion is supported for models with dynamic input shapes that contain undefined dimensions. However, if the shape of data is not going @@ -1023,7 +1052,7 @@ up static shapes, model conversion API provides the ``input`` and ``input_shape`` parameters. For more information refer to `Setting Input Shapes -guide `__. +guide `__. .. code:: ipython3 @@ -1041,20 +1070,24 @@ guide `__. +guide `__. .. code:: ipython3 @@ -1180,20 +1217,24 @@ guide `__. +article `__. Specifying Layout -^^^^^^^^^^^^^^^^^ +------------------------------------------------------------------------------------------------------------------------------- Layout defines the meaning of dimensions in a shape and can be specified for both inputs and outputs. Some preprocessing requires to set input layouts, for example, setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB). For the layout syntax, check the `Layout API -overview `__. +overview `__. To specify the layout, you can use the layout option followed by the layout value. @@ -1252,11 +1293,13 @@ Resnet50 model that was exported to the ONNX format: To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin .. 
code:: ipython3 @@ -1268,7 +1311,7 @@ Resnet50 model that was exported to the ONNX format: ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, layout="nchw") Changing Model Layout -^^^^^^^^^^^^^^^^^^^^^ +------------------------------------------------------------------------------------------------------------------------------- Changing the model layout may be necessary if it differs from the one presented by input data. Use either ``layout`` or ``source_layout`` with @@ -1290,20 +1333,24 @@ presented by input data. Use either ``layout`` or ``source_layout`` with To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. 
- [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin .. code:: ipython3 @@ -1318,7 +1365,7 @@ presented by input data. Use either ``layout`` or ``source_layout`` with ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, source_layout="nchw", target_layout="nhwc") Specifying Mean and Scale Values -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +------------------------------------------------------------------------------------------------------------------------------- Model conversion API has the following parameters to specify the values: ``mean_values``, ``scale_values``, ``scale``. Using these parameters, @@ -1341,20 +1388,24 @@ that the preprocessing takes negligible time for inference. To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. 
+ Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin .. code:: ipython3 @@ -1368,7 +1419,7 @@ that the preprocessing takes negligible time for inference. ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, mean_values=[123,117,104], scale_values=[255,255,255]) Reversing Input Channels -^^^^^^^^^^^^^^^^^^^^^^^^ +------------------------------------------------------------------------------------------------------------------------------- Sometimes, input images for your application can be of the ``RGB`` (or ``BGR``) format, and the model is trained on images of the ``BGR`` (or @@ -1389,11 +1440,13 @@ the color channels before inference. To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. 
- [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin .. code:: ipython3 @@ -1405,7 +1458,7 @@ the color channels before inference. ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, reverse_input_channels=True) Compressing a Model to FP16 -~~~~~~~~~~~~~~~~~~~~~~~~~~~ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Optionally all relevant floating-point weights can be compressed to FP16 data type during the model conversion, creating a compressed FP16 model. @@ -1426,13 +1479,13 @@ models, this decrease is negligible. To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) - [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression by removing argument --compress_to_fp16 or set it to false --compress_to_fp16=False. - Find more information about compression to FP16 at https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_FP16_Compression.html + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml - [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin .. code:: ipython3 @@ -1444,7 +1497,7 @@ models, this decrease is negligible. 
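    # compress_to_fp16=True stores the converted weights in FP16 in the generated IR,
    # roughly halving its size on disk; as noted above, the accuracy impact is
    # negligible for most models.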
ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, compress_to_fp16=True) Convert Models Represented as Python Objects --------------------------------------------- +############################################################################################################################### Python conversion API can pass Python model objects, such as a Pytorch model or TensorFlow Keras model directly, without saving them into files @@ -1459,6 +1512,30 @@ training scripts). ov_model = mo.convert_model(pytorch_model) + +.. parsed-literal:: + + WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11. + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... + To disable this warning, you can either: + - Avoid using `tokenizers` before the fork if possible + - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... + To disable this warning, you can either: + - Avoid using `tokenizers` before the fork if possible + - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... + To disable this warning, you can either: + - Avoid using `tokenizers` before the fork if possible + - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) + + +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + + ``convert_model()`` accepts all parameters available in the MO command-line tool. Parameters can be specified by Python classes or string analogs, similar to the command-line tool. diff --git a/docs/notebooks/122-speech-recognition-quantization-wav2vec2-with-output.rst b/docs/notebooks/122-speech-recognition-quantization-wav2vec2-with-output.rst index 4db1ac32fe921f..cd6ebf0ab3dedb 100644 --- a/docs/notebooks/122-speech-recognition-quantization-wav2vec2-with-output.rst +++ b/docs/notebooks/122-speech-recognition-quantization-wav2vec2-with-output.rst @@ -1,8 +1,6 @@ Quantize Speech Recognition Models with accuracy control using NNCF PTQ API =========================================================================== - - This tutorial demonstrates how to apply ``INT8`` quantization with accuracy control to the speech recognition model, known as `Wav2Vec2 `__, @@ -42,19 +40,15 @@ and has the following differences: the Basic 8-bit quantization flow because some of the operations are kept in the original precision. -.. note:: - - Currently, 8-bit quantization with accuracy control in NNCF - is available only for models in OpenVINO representation. - -The steps for the quantization with accuracy control are described -below. +.. +.. note:: + Currently, 8-bit quantization with accuracy control in NNCF is available only for models in OpenVINO representation. -.. _top: +The steps for the quantization with accuracy control are described below. -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Prepare the Model <#prepare-the-model>`__ @@ -65,25 +59,219 @@ below. 
- `Model Usage Example <#model-usage-example>`__ - `Compare Accuracy of the Original and Quantized Models <#compare-accuracy-of-the-original-and-quantized-models>`__ - -.. code:: ipython2 +.. code:: ipython3 # !pip install -q "openvino-dev>=2023.1.0" "nncf>=2.6.0" !pip install -q "openvino==2023.1.0.dev20230811" !pip install git+https://github.com/openvinotoolkit/nncf.git@develop !pip install -q soundfile librosa transformers torch datasets torchmetrics -Imports `⇑ <#top>`__ + +.. parsed-literal:: + + Collecting git+https://github.com/openvinotoolkit/nncf.git@develop + Cloning https://github.com/openvinotoolkit/nncf.git (to revision develop) to /tmp/pip-req-build-o2lphim0 + Running command git clone --filter=blob:none --quiet https://github.com/openvinotoolkit/nncf.git /tmp/pip-req-build-o2lphim0 + Filtering content: 1% (2/142) + Filtering content: 2% (3/142) + Filtering content: 3% (5/142) + Filtering content: 4% (6/142) + Filtering content: 5% (8/142), 10.29 MiB | 16.71 MiB/s + Filtering content: 6% (9/142), 10.29 MiB | 16.71 MiB/s + Filtering content: 7% (10/142), 10.29 MiB | 16.71 MiB/s + Filtering content: 7% (10/142), 12.61 MiB | 8.69 MiB/s + Filtering content: 8% (12/142), 12.61 MiB | 8.69 MiB/s + Filtering content: 9% (13/142), 12.61 MiB | 8.69 MiB/s + Filtering content: 10% (15/142), 14.35 MiB | 7.17 MiB/s + Filtering content: 11% (16/142), 14.35 MiB | 7.17 MiB/s + Filtering content: 12% (18/142), 14.35 MiB | 7.17 MiB/s + Filtering content: 13% (19/142), 17.07 MiB | 6.80 MiB/s + Filtering content: 14% (20/142), 17.07 MiB | 6.80 MiB/s + Filtering content: 15% (22/142), 17.07 MiB | 6.80 MiB/s + Filtering content: 16% (23/142), 17.07 MiB | 6.80 MiB/s + Filtering content: 17% (25/142), 19.78 MiB | 6.42 MiB/s + Filtering content: 18% (26/142), 19.78 MiB | 6.42 MiB/s + Filtering content: 19% (27/142), 19.78 MiB | 6.42 MiB/s + Filtering content: 20% (29/142), 19.78 MiB | 6.42 MiB/s + Filtering content: 21% (30/142), 19.78 MiB | 6.42 MiB/s + Filtering content: 22% (32/142), 22.80 MiB | 6.19 MiB/s + Filtering content: 23% (33/142), 22.80 MiB | 6.19 MiB/s + Filtering content: 24% (35/142), 22.80 MiB | 6.19 MiB/s + Filtering content: 25% (36/142), 22.80 MiB | 6.19 MiB/s + Filtering content: 26% (37/142), 22.80 MiB | 6.19 MiB/s + Filtering content: 26% (37/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 27% (39/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 28% (40/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 29% (42/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 30% (43/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 31% (45/142), 25.18 MiB | 5.93 MiB/s + Filtering content: 32% (46/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 33% (47/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 34% (49/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 35% (50/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 36% (52/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 37% (53/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 38% (54/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 39% (56/142), 27.34 MiB | 5.71 MiB/s + Filtering content: 40% (57/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 41% (59/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 42% (60/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 43% (62/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 44% (63/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 45% (64/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 46% (66/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 47% (67/142), 29.35 MiB | 5.54 MiB/s + Filtering 
content: 48% (69/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 49% (70/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 50% (71/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 51% (73/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 52% (74/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 53% (76/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 54% (77/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 55% (79/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 56% (80/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 57% (81/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 58% (83/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 59% (84/142), 29.35 MiB | 5.54 MiB/s + Filtering content: 60% (86/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 61% (87/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 62% (89/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 63% (90/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 64% (91/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 65% (93/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 66% (94/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 67% (96/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 68% (97/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 69% (98/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 70% (100/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 71% (101/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 72% (103/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 73% (104/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 74% (106/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 75% (107/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 76% (108/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 77% (110/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 78% (111/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 79% (113/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 80% (114/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 81% (116/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 82% (117/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 83% (118/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 84% (120/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 85% (121/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 86% (123/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 87% (124/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 88% (125/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 89% (127/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 90% (128/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 91% (130/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 92% (131/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 93% (133/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 94% (134/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 95% (135/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 96% (137/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 97% (138/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 98% (140/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 99% (141/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 100% (142/142), 31.63 MiB | 4.19 MiB/s + Filtering content: 100% (142/142), 32.00 MiB | 3.57 MiB/s, done. + Resolved https://github.com/openvinotoolkit/nncf.git to commit 90a1e860c93b553fa9684113e02d41d622235c55 + Preparing metadata (setup.py) ... 
- done + Collecting pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd (from nncf==2.5.0.dev0+90a1e860) + Using cached pymoo-0.6.0.1-py3-none-any.whl + Requirement already satisfied: jsonschema>=3.2.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (4.19.0) + Requirement already satisfied: jstyleson>=0.0.2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (0.0.2) + Requirement already satisfied: natsort>=7.1.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (8.4.0) + Requirement already satisfied: networkx<=2.8.2,>=2.6 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (2.8.2) + Requirement already satisfied: ninja<1.11,>=1.10.0.post2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.10.2.4) + Requirement already satisfied: numpy<1.25,>=1.19.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.23.5) + Requirement already satisfied: openvino-telemetry>=2023.1.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (2023.1.1) + Requirement already satisfied: packaging>=20.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (23.1) + Requirement already satisfied: pandas<2.1,>=1.1.5 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (2.0.3) + Requirement already satisfied: psutil in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (5.9.5) + Requirement already satisfied: pydot>=1.4.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.4.2) + Requirement already satisfied: pyparsing<3.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (2.4.7) + Requirement already satisfied: scikit-learn>=0.24.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.3.0) + Requirement already satisfied: scipy<1.11,>=1.3.2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.10.1) + Requirement already satisfied: texttable>=1.6.3 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (1.6.7) + Requirement already satisfied: 
tqdm>=4.54.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from nncf==2.5.0.dev0+90a1e860) (4.66.1) + Requirement already satisfied: attrs>=22.2.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (23.1.0) + Requirement already satisfied: importlib-resources>=1.4.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (6.0.1) + Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (2023.7.1) + Requirement already satisfied: pkgutil-resolve-name>=1.3.10 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (1.3.10) + Requirement already satisfied: referencing>=0.28.4 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (0.30.2) + Requirement already satisfied: rpds-py>=0.7.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (0.10.2) + Requirement already satisfied: python-dateutil>=2.8.2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pandas<2.1,>=1.1.5->nncf==2.5.0.dev0+90a1e860) (2.8.2) + Requirement already satisfied: pytz>=2020.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pandas<2.1,>=1.1.5->nncf==2.5.0.dev0+90a1e860) (2023.3.post1) + Requirement already satisfied: tzdata>=2022.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pandas<2.1,>=1.1.5->nncf==2.5.0.dev0+90a1e860) (2023.3) + Requirement already satisfied: joblib>=1.1.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from scikit-learn>=0.24.0->nncf==2.5.0.dev0+90a1e860) (1.3.2) + Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from scikit-learn>=0.24.0->nncf==2.5.0.dev0+90a1e860) (3.2.0) + Requirement already satisfied: matplotlib>=3 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (3.5.2) + Requirement already satisfied: autograd>=1.4 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (1.6.2) + Collecting cma==3.2.2 (from pymoo@ 
git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) + Using cached cma-3.2.2-py2.py3-none-any.whl (249 kB) + Collecting alive-progress (from pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) + Obtaining dependency information for alive-progress from https://files.pythonhosted.org/packages/e3/02/5d7f9158d69b36fbe9eb0df8fb435008ec881e41bc7d839239004207d807/alive_progress-3.1.4-py3-none-any.whl.metadata + Using cached alive_progress-3.1.4-py3-none-any.whl.metadata (68 kB) + Requirement already satisfied: dill in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (0.3.7) + Collecting Deprecated (from pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) + Obtaining dependency information for Deprecated from https://files.pythonhosted.org/packages/20/8d/778b7d51b981a96554f29136cd59ca7880bf58094338085bcf2a979a0e6a/Deprecated-1.2.14-py2.py3-none-any.whl.metadata + Using cached Deprecated-1.2.14-py2.py3-none-any.whl.metadata (5.4 kB) + Requirement already satisfied: future>=0.15.2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from autograd>=1.4->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (0.18.3) + Requirement already satisfied: zipp>=3.1.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from importlib-resources>=1.4.0->jsonschema>=3.2.0->nncf==2.5.0.dev0+90a1e860) (3.16.2) + Requirement already satisfied: cycler>=0.10 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from matplotlib>=3->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (0.11.0) + Requirement already satisfied: fonttools>=4.22.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from matplotlib>=3->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (4.42.1) + Requirement already satisfied: kiwisolver>=1.0.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from matplotlib>=3->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (1.4.5) + Requirement already satisfied: pillow>=6.2.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from matplotlib>=3->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (10.0.0) + Requirement already satisfied: six>=1.5 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from python-dateutil>=2.8.2->pandas<2.1,>=1.1.5->nncf==2.5.0.dev0+90a1e860) (1.16.0) + Collecting 
about-time==4.2.1 (from alive-progress->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) + Using cached about_time-4.2.1-py3-none-any.whl (13 kB) + Collecting grapheme==0.6.0 (from alive-progress->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) + Using cached grapheme-0.6.0-py3-none-any.whl + Requirement already satisfied: wrapt<2,>=1.10 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from Deprecated->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (1.14.1) + Using cached alive_progress-3.1.4-py3-none-any.whl (75 kB) + Using cached Deprecated-1.2.14-py2.py3-none-any.whl (9.6 kB) + Building wheels for collected packages: nncf + Building wheel for nncf (setup.py) ... - \ | / done + Created wheel for nncf: filename=nncf-2.5.0.dev0+90a1e860-py3-none-any.whl size=1139358 sha256=35a2f1daf4360a3b65a6a2996cca9f15d165f6c25994f64d8ccf10960e7a55bc + Stored in directory: /tmp/pip-ephem-wheel-cache-mdg9hjsd/wheels/6d/17/88/a292ae87701bc65e2e1c63261d22d7fb0e15aa8448ee693d5f + Successfully built nncf + Installing collected packages: grapheme, Deprecated, cma, about-time, alive-progress, pymoo, nncf + Attempting uninstall: cma + Found existing installation: cma 2.7.0 + Uninstalling cma-2.7.0: + Successfully uninstalled cma-2.7.0 + Attempting uninstall: pymoo + Found existing installation: pymoo 0.5.0 + Uninstalling pymoo-0.5.0: + Successfully uninstalled pymoo-0.5.0 + Attempting uninstall: nncf + Found existing installation: nncf 2.5.0 + Uninstalling nncf-2.5.0: + Successfully uninstalled nncf-2.5.0 + Successfully installed Deprecated-1.2.14 about-time-4.2.1 alive-progress-3.1.4 cma-3.2.2 grapheme-0.6.0 nncf-2.5.0.dev0+90a1e860 pymoo-0.6.0.1 + + +Imports ############################################################################################################################### -.. code:: ipython2 +.. code:: ipython3 import numpy as np import torch - + from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor -Prepare the Model `⇑ <#top>`__ + +.. parsed-literal:: + + 2023-09-08 23:07:39.211214: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:07:39.246066: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:07:39.789011: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +Prepare the Model ############################################################################################################################### For instantiating PyTorch model class, @@ -96,26 +284,59 @@ and depends on your internet connection. Additionally, we can create processor class which is responsible for model specific pre- and post-processing steps. -.. code:: ipython2 +.. 
code:: ipython3 BATCH_SIZE = 1 MAX_SEQ_LENGTH = 30480 - - + + torch_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", ctc_loss_reduction="mean") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") -Convert it to the OpenVINO Intermediate Representation (OpenVINO IR) -.. code:: ipython2 +.. parsed-literal:: - import openvino + Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed'] + You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. +Convert it to the OpenVINO Intermediate Representation (OpenVINO IR) + +.. code:: ipython3 + + import openvino as ov + + default_input = torch.zeros([1, MAX_SEQ_LENGTH], dtype=torch.float) - ov_model = openvino.convert_model(torch_model, example_input=default_input) + ov_model = ov.convert_model(torch_model, example_input=default_input) + + +.. parsed-literal:: + + WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11. + + +.. parsed-literal:: + + [ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s. + + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + WARNING:nncf:NNCF provides best results with torch==2.0.1, while current torch version is 1.13.1+cpu. If you encounter issues, consider switching to torch==2.0.1 + + +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:595: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:634: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): + -Prepare LibriSpeech Dataset `⇑ <#top>`__ +Prepare LibriSpeech Dataset ############################################################################################################################### For demonstration purposes, we will use short dummy version of @@ -124,34 +345,34 @@ speed up model evaluation. Model accuracy can be different from reported in the paper. For reproducing original accuracy, use ``librispeech_asr`` dataset. -.. code:: ipython2 +.. 
code:: ipython3

    from datasets import load_dataset
-    
-    
+    
+    
    dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
    test_sample = dataset[0]["audio"]
-    
-    
+    
+    
    # define preprocessing function for converting audio to input values for model
    def map_to_input(batch):
        preprocessed_signal = processor(batch["audio"]["array"], return_tensors="pt", padding="longest", sampling_rate=batch['audio']['sampling_rate'])
        input_values = preprocessed_signal.input_values
        batch['input_values'] = input_values
        return batch
-    
-    
+    
+    
    # apply preprocessing function to dataset and remove audio column, to save memory as we do not need it anymore
    dataset = dataset.map(map_to_input, batched=False, remove_columns=["audio"])

-Prepare calibration dataset `⇑ <#top>`__
+Prepare calibration dataset
 ###############################################################################################################################

-.. code:: ipython2
+.. code:: ipython3

    import nncf
-    
-    
+    
+    
    def transform_fn(data_item):
        """
        Extract the model's input from the data item.
@@ -159,21 +380,21 @@ Prepare calibration dataset `⇑ <#top>`__
        This function should be passed when the data item cannot be used as model's input.
        """
        return np.array(data_item["input_values"])
-    
-    
+    
+    
    calibration_dataset = nncf.Dataset(dataset, transform_fn)

-Prepare validation function `⇑ <#top>`__
+Prepare validation function
 ###############################################################################################################################

 Define the validation function.

-.. code:: ipython2
+.. code:: ipython3

    from torchmetrics import WordErrorRate
    from tqdm.notebook import tqdm
-    
-    
+    
+    
    def validation_fn(model, dataset):
        """
        Calculate and returns a metric for the model.
@@ -185,15 +406,15 @@ Define the validation function.
            logits = model(np.array(sample['input_values']))[output]
            predicted_ids = np.argmax(logits, axis=-1)
            transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
-    
+    
            # update metric on sample result
            wer.update(transcription, [sample['text']])
-    
+    
        result = wer.compute()
-    
+    
        return 1 - result

-Run quantization with accuracy control `⇑ <#top>`__
+Run quantization with accuracy control
 ###############################################################################################################################

 You should provide
@@ -209,17 +430,15 @@ rank layers by their contribution to the accuracy drop. Default value is
 300, and the more samples it has the better ranking, potentially. Here
 we use the value 25 to speed up the execution.

-.. note::
-
-   Execution can take tens of minutes and requires up to 10 GB
-   of free memory
-
+.. note::
+
+   Execution can take tens of minutes and requires up to 10 GB of free memory

-.. code:: ipython2
+.. code:: ipython3

    from nncf.quantization.advanced_parameters import AdvancedAccuracyRestorerParameters
    from nncf.parameters import ModelType
-    
+    
    quantized_model = nncf.quantize_with_accuracy_control(
        ov_model,
        calibration_dataset=calibration_dataset,
@@ -233,77 +452,1533 @@ we use the value 25 to speed up the execution.
        ),
    )

-Model Usage Example `⇑ <#top>`__
-###############################################################################################################################

-.. code:: ipython2
-
-    import IPython.display as ipd
-
-    ipd.Audio(test_sample["array"], rate=16000)

..
code:: ipython2 + INFO:nncf:36 ignored nodes was found by name in the NNCFGraph - core = openvino.Core() - compiled_quantized_model = core.compile_model(model=quantized_model, device_name='CPU') +.. parsed-literal:: - input_data = np.expand_dims(test_sample["array"], axis=0) + Statistics collection: 24%|██▍ | 73/300 [00:23<01:12, 3.12it/s] + Applying Fast Bias correction: 100%|██████████| 74/74 [00:25<00:00, 2.91it/s] -Next, make a prediction. +.. parsed-literal:: -.. code:: ipython2 + INFO:nncf:Validation of initial model was started - predictions = compiled_quantized_model([input_data])[0] - predicted_ids = np.argmax(predictions, axis=-1) - transcription = processor.batch_decode(torch.from_numpy(predicted_ids)) - transcription -Compare Accuracy of the Original and Quantized Models `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: -- Define dataloader for test dataset. -- Define functions to get inference for PyTorch and OpenVINO models. -- Define functions to compute Word Error Rate. + INFO:nncf:Elapsed Time: 00:00:00 -.. code:: ipython2 - # inference function for pytorch - def torch_infer(model, sample): - logits = model(torch.Tensor(sample['input_values'])).logits - # take argmax and decode - predicted_ids = torch.argmax(logits, dim=-1) - transcription = processor.batch_decode(predicted_ids) - return transcription +.. parsed-literal:: + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:62: FutureWarning: Importing `WordErrorRate` from `torchmetrics` was deprecated and will be removed in 2.0. Import `WordErrorRate` from `torchmetrics.text` instead. + _future_warning( - # inference function for openvino - def ov_infer(model, sample): - output = model.output(0) - logits = model(np.array(sample['input_values']))[output] - predicted_ids = np.argmax(logits, axis=-1) - transcription = processor.batch_decode(torch.from_numpy(predicted_ids)) - return transcription - def compute_wer(dataset, model, infer_fn): - wer = WordErrorRate() - for sample in tqdm(dataset): - # run infer function on sample - transcription = infer_fn(model, sample) - # update metric on sample result - wer.update(transcription, [sample['text']]) - # finalize metric calculation - result = wer.compute() - return result +.. parsed-literal:: -Now, compute WER for the original PyTorch model and quantized model. + 0it [00:00, ?it/s] -.. code:: ipython2 - pt_result = compute_wer(dataset, torch_model, torch_infer) - quantized_result = compute_wer(dataset, compiled_quantized_model, ov_infer) - print(f'[PyTorch] Word Error Rate: {pt_result:.4f}') - print(f'[Quantized OpenVino] Word Error Rate: {quantized_result:.4f}') +.. parsed-literal:: + + 0it [00:00, ?it/s] + + +.. parsed-literal:: + + INFO:nncf:Elapsed Time: 00:00:13 + INFO:nncf:Metric of initial model: 0.9469565153121948 + INFO:nncf:Collecting values for each data item using the initial model + + + +.. parsed-literal:: + + 0%| | 0/1 [00:00 + + Your browser does not support the audio element. + + + + + +.. code:: ipython3 + + core = ov.Core() + + compiled_quantized_model = core.compile_model(model=quantized_model, device_name='CPU') + + input_data = np.expand_dims(test_sample["array"], axis=0) + +Next, make a prediction. + +.. 
code:: ipython3 + + predictions = compiled_quantized_model([input_data])[0] + predicted_ids = np.argmax(predictions, axis=-1) + transcription = processor.batch_decode(torch.from_numpy(predicted_ids)) + transcription + + + + +.. parsed-literal:: + + ['A MAN SAID TO THE UNIVERSE SIR I EXIST'] + + + +Compare Accuracy of the Original and Quantized Models +############################################################################################################################### + +- Define dataloader for test dataset. +- Define functions to get inference for PyTorch and OpenVINO models. +- Define functions to compute Word Error Rate. + +.. code:: ipython3 + + # inference function for pytorch + def torch_infer(model, sample): + logits = model(torch.Tensor(sample['input_values'])).logits + # take argmax and decode + predicted_ids = torch.argmax(logits, dim=-1) + transcription = processor.batch_decode(predicted_ids) + return transcription + + + # inference function for openvino + def ov_infer(model, sample): + output = model.output(0) + logits = model(np.array(sample['input_values']))[output] + predicted_ids = np.argmax(logits, axis=-1) + transcription = processor.batch_decode(torch.from_numpy(predicted_ids)) + return transcription + + + def compute_wer(dataset, model, infer_fn): + wer = WordErrorRate() + for sample in tqdm(dataset): + # run infer function on sample + transcription = infer_fn(model, sample) + # update metric on sample result + wer.update(transcription, [sample['text']]) + # finalize metric calculation + result = wer.compute() + return result + +Now, compute WER for the original PyTorch model and quantized model. + +.. code:: ipython3 + + pt_result = compute_wer(dataset, torch_model, torch_infer) + quantized_result = compute_wer(dataset, compiled_quantized_model, ov_infer) + + print(f'[PyTorch] Word Error Rate: {pt_result:.4f}') + print(f'[Quantized OpenVino] Word Error Rate: {quantized_result:.4f}') + + + +.. parsed-literal:: + + 0%| | 0/73 [00:00`__ +- `Get Pytorch model and OpenVINO IR model <#get-pytorch-model-and-openvino-ir-model>`__ +- `Define validator and data loader <#define-validator-and-data-loader>`__ +- `Prepare calibration and validation datasets <#prepare-calibration-and-validation-datasets>`__ +- `Prepare validation function <#prepare-validation-function>`__ +- `Run quantization with accuracy control <#run-quantization-with-accuracy-control>`__ +- `Compare Accuracy and Performance of the Original and Quantized Models <#compare-accuracy-and-performance-of-the-original-and-quantized-models>`__ - -- `Prerequisites <#prerequisites>`__ -- `Get Pytorch model and OpenVINO IR model <#get-pytorch-model-and-openvino-ir-model>`__ -- `Define validator and data loader <#define-validator-and-data-loader>`__ -- `Prepare calibration and validation datasets <#prepare-calibration-and-validation-datasets>`__ -- `Prepare validation function <#prepare-validation-function>`__ -- `Run quantization with accuracy control <#run-quantization-with-accuracy-control>`__ -- `Compare Accuracy and Performance of the Original and Quantized Models <#compare-accuracy-and-performance-of-the-original-and-quantized-models>`__ - -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - Install necessary packages. -.. code:: ipython2 +.. 
code:: ipython3

     !pip install -q "openvino==2023.1.0.dev20230811"
     !pip install git+https://github.com/openvinotoolkit/nncf.git@develop
     !pip install -q "ultralytics==8.0.43"

-Get Pytorch model and OpenVINO IR model `⇑ <#top>`__
+
+.. parsed-literal::
+
+    Collecting git+https://github.com/openvinotoolkit/nncf.git@develop
+      Cloning https://github.com/openvinotoolkit/nncf.git (to revision develop) to /tmp/pip-req-build-q26q169c
+      Running command git clone --filter=blob:none --quiet https://github.com/openvinotoolkit/nncf.git /tmp/pip-req-build-q26q169c
+    Filtering content: 100% (142/142), 32.00 MiB | 3.58 MiB/s, done.
+    Resolved https://github.com/openvinotoolkit/nncf.git to commit 90a1e860c93b553fa9684113e02d41d622235c55
+      Preparing metadata (setup.py) ... - done
+    Collecting pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd (from nncf==2.5.0.dev0+90a1e860)
+      Using cached pymoo-0.6.0.1-py3-none-any.whl
+    Requirement already satisfied: about-time==4.2.1 in 
/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from alive-progress->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (4.2.1) + Requirement already satisfied: grapheme==0.6.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from alive-progress->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (0.6.0) + Requirement already satisfied: wrapt<2,>=1.10 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from Deprecated->pymoo@ git+https://github.com/anyoptimization/pymoo.git@695cb26923903f872c7256a9013609769f3cc2bd->nncf==2.5.0.dev0+90a1e860) (1.14.1) + + +Get Pytorch model and OpenVINO IR model ############################################################################################################################### Generally, PyTorch models represent an instance of the -`torch.nn.Module `__ +```torch.nn.Module`` `__ class, initialized by a state dictionary with model weights. We will use the YOLOv8 nano model (also known as ``yolov8n``) pre-trained on a COCO dataset, which is available in this @@ -84,11 +240,11 @@ In this case, the creators of the model provide an API that enables converting the YOLOv8 model to ONNX and then to OpenVINO IR. Therefore, we do not need to do these steps manually. -.. code:: ipython2 +.. code:: ipython3 import os from pathlib import Path - + from ultralytics import YOLO from ultralytics.yolo.cfg import get_cfg from ultralytics.yolo.data.utils import check_det_dataset @@ -97,29 +253,62 @@ we do not need to do these steps manually. from ultralytics.yolo.utils import DEFAULT_CFG from ultralytics.yolo.utils import ops from ultralytics.yolo.utils.metrics import ConfusionMatrix - + ROOT = os.path.abspath('') - + MODEL_NAME = "yolov8n-seg" - + model = YOLO(f"{ROOT}/{MODEL_NAME}.pt") args = get_cfg(cfg=DEFAULT_CFG) args.data = "coco128-seg.yaml" -Load model. -.. code:: ipython2 +.. parsed-literal:: + + Downloading https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt to /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg.pt... + + - import openvino +.. parsed-literal:: + 0%| | 0.00/6.73M [00:00`__ + + ov_model = ov.Core().read_model(model_path) + + +.. parsed-literal:: + + Ultralytics YOLOv8.0.43 🚀 Python-3.8.10 torch-1.13.1+cpu CPU + YOLOv8n-seg summary (fused): 195 layers, 3404320 parameters, 0 gradients, 12.6 GFLOPs + + PyTorch: starting from /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) ((1, 116, 8400), (1, 32, 160, 160)) (6.7 MB) + + ONNX: starting export with onnx 1.14.1... + ONNX: export success ✅ 0.6s, saved as /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg.onnx (13.1 MB) + + OpenVINO: starting export with openvino 2023.1.0-12050-e33de350633... 
+ OpenVINO: export success ✅ 0.7s, saved as /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg_openvino_model/ (13.3 MB) + + Export complete (1.5s) + Results saved to /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control + Predict: yolo predict task=segment model=/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg_openvino_model imgsz=640 + Validate: yolo val task=segment model=/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/122-quantizing-model-with-accuracy-control/yolov8n-seg_openvino_model imgsz=640 data=coco.yaml + Visualize: https://netron.app + + +Define validator and data loader +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The original model @@ -133,12 +322,12 @@ some parameters overriding to test on custom data. The model has connected the ``ValidatorClass`` method, which creates a validator class instance. -.. code:: ipython2 +.. code:: ipython3 validator = model.ValidatorClass(args) validator.data = check_det_dataset(args.data) data_loader = validator.get_dataloader(f"{DATASETS_DIR}/coco128-seg", 1) - + validator.is_coco = True validator.class_map = ops.coco80_to_coco91_class() validator.names = model.model.names @@ -148,39 +337,51 @@ instance. validator.process = ops.process_mask validator.plot_masks = [] -Prepare calibration and validation datasets `⇑ <#top>`__ + +.. parsed-literal:: + + val: Scanning /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-491/.workspace/scm/datasets/coco128-seg/labels/train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00`__ + +Prepare validation function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -.. code:: ipython2 +.. code:: ipython3 from functools import partial - + import torch from nncf.quantization.advanced_parameters import AdvancedAccuracyRestorerParameters - - + + def validation_ac( - compiled_model: openvino.CompiledModel, + compiled_model: ov.CompiledModel, validation_loader: torch.utils.data.DataLoader, validator: Validator, num_samples: int = None, @@ -191,7 +392,7 @@ Prepare validation function `⇑ <#top>`__ validator.batch_i = 1 validator.confusion_matrix = ConfusionMatrix(nc=validator.nc) num_outputs = len(compiled_model.outputs) - + counter = 0 for batch_i, batch in enumerate(validation_loader): if num_samples is not None and batch_i == num_samples: @@ -214,13 +415,13 @@ Prepare validation function `⇑ <#top>`__ else: stats_metrics = stats["metrics/mAP50-95(M)"] print(f"Validate: dataset length = {counter}, metric value = {stats_metrics:.3f}") - + return stats_metrics - - + + validation_fn = partial(validation_ac, validator=validator) -Run quantization with accuracy control `⇑ <#top>`__ +Run quantization with accuracy control ############################################################################################################################### You should provide @@ -241,7 +442,7 @@ we use the value 25 to speed up the execution. Execution can take tens of minutes and requires up to 15 GB of free memory -.. code:: ipython2 +.. 
code:: ipython3 quantized_model = nncf.quantize_with_accuracy_control( ov_model, @@ -256,51 +457,441 @@ we use the value 25 to speed up the execution. ), ) -Compare Accuracy and Performance of the Original and Quantized Models `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 2023-09-08 23:17:54.173599: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:17:54.207357: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:17:54.764356: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + Statistics collection: 43%|████▎ | 128/300 [00:16<00:22, 7.55it/s] + Applying Fast Bias correction: 100%|██████████| 75/75 [00:04<00:00, 17.89it/s] + +.. parsed-literal:: + + INFO:nncf:Validation of initial model was started + + +.. parsed-literal:: + + INFO:nncf:Elapsed Time: 00:00:00 + Validate: dataset length = 1, metric value = 0.589 + Validate: dataset length = 128, metric value = 0.366 + INFO:nncf:Elapsed Time: 00:00:04 + INFO:nncf:Metric of initial model: 0.36611468358574506 + INFO:nncf:Collecting values for each data item using the initial model + Validate: dataset length = 1, metric value = 0.589 + Validate: dataset length = 1, metric value = 0.622 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.895 + Validate: dataset length = 1, metric value = 0.846 + Validate: dataset length = 1, metric value = 0.365 + Validate: dataset length = 1, metric value = 0.432 + Validate: dataset length = 1, metric value = 0.172 + Validate: dataset length = 1, metric value = 0.771 + Validate: dataset length = 1, metric value = 0.255 + Validate: dataset length = 1, metric value = 0.431 + Validate: dataset length = 1, metric value = 0.399 + Validate: dataset length = 1, metric value = 0.671 + Validate: dataset length = 1, metric value = 0.315 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.895 + Validate: dataset length = 1, metric value = 0.497 + Validate: dataset length = 1, metric value = 0.594 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.074 + Validate: dataset length = 1, metric value = 0.231 + Validate: dataset length = 1, metric value = 0.502 + Validate: dataset length = 1, metric value = 0.347 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.477 + Validate: dataset length = 1, metric value = 0.537 + Validate: dataset length = 1, metric value = 0.344 + Validate: dataset length = 1, metric value = 0.544 + Validate: dataset length = 1, metric value = 0.237 + Validate: dataset length = 1, metric value = 0.109 + Validate: dataset length = 1, metric value = 0.564 + Validate: dataset length = 1, metric value = 0.853 + Validate: dataset length = 1, metric value = 0.306 + Validate: dataset length = 1, metric 
value = 0.416 + Validate: dataset length = 1, metric value = 0.388 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.199 + Validate: dataset length = 1, metric value = 0.323 + Validate: dataset length = 1, metric value = 0.305 + Validate: dataset length = 1, metric value = 0.506 + Validate: dataset length = 1, metric value = 0.319 + Validate: dataset length = 1, metric value = 0.319 + Validate: dataset length = 1, metric value = 0.255 + Validate: dataset length = 1, metric value = 0.487 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.654 + Validate: dataset length = 1, metric value = 0.368 + Validate: dataset length = 1, metric value = 0.730 + Validate: dataset length = 1, metric value = 0.374 + Validate: dataset length = 1, metric value = 0.227 + Validate: dataset length = 1, metric value = 0.500 + Validate: dataset length = 1, metric value = 0.101 + Validate: dataset length = 1, metric value = 0.855 + Validate: dataset length = 1, metric value = 0.430 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.358 + Validate: dataset length = 1, metric value = 0.373 + Validate: dataset length = 1, metric value = 0.692 + Validate: dataset length = 1, metric value = 0.556 + Validate: dataset length = 1, metric value = 0.274 + Validate: dataset length = 1, metric value = 0.670 + Validate: dataset length = 1, metric value = 0.044 + Validate: dataset length = 1, metric value = 0.627 + Validate: dataset length = 1, metric value = 0.945 + Validate: dataset length = 1, metric value = 0.267 + Validate: dataset length = 1, metric value = 0.354 + Validate: dataset length = 1, metric value = 0.265 + Validate: dataset length = 1, metric value = 0.522 + Validate: dataset length = 1, metric value = 0.945 + Validate: dataset length = 1, metric value = 0.394 + Validate: dataset length = 1, metric value = 0.349 + Validate: dataset length = 1, metric value = 0.564 + Validate: dataset length = 1, metric value = 0.094 + Validate: dataset length = 1, metric value = 0.763 + Validate: dataset length = 1, metric value = 0.157 + Validate: dataset length = 1, metric value = 0.531 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.781 + Validate: dataset length = 1, metric value = 0.447 + Validate: dataset length = 1, metric value = 0.562 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.461 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.696 + Validate: dataset length = 1, metric value = 0.378 + Validate: dataset length = 1, metric value = 0.246 + Validate: dataset length = 1, metric value = 0.647 + Validate: dataset length = 1, metric value = 0.367 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.359 + Validate: dataset length = 1, metric value = 0.407 + Validate: dataset length = 1, metric value = 0.191 + Validate: dataset length = 1, metric value = 0.549 + Validate: dataset length = 1, metric value = 0.290 + Validate: dataset length = 1, metric value = 0.166 + Validate: dataset length = 1, metric 
value = 0.131 + Validate: dataset length = 1, metric value = 0.745 + Validate: dataset length = 1, metric value = 0.336 + Validate: dataset length = 1, metric value = 0.248 + Validate: dataset length = 1, metric value = 0.290 + Validate: dataset length = 1, metric value = 0.413 + Validate: dataset length = 1, metric value = 0.790 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.265 + Validate: dataset length = 1, metric value = 0.423 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.039 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.685 + Validate: dataset length = 1, metric value = 0.635 + Validate: dataset length = 1, metric value = 0.829 + Validate: dataset length = 1, metric value = 0.525 + Validate: dataset length = 1, metric value = 0.315 + Validate: dataset length = 1, metric value = 0.348 + Validate: dataset length = 1, metric value = 0.567 + Validate: dataset length = 1, metric value = 0.751 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.557 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.341 + Validate: dataset length = 1, metric value = 0.427 + Validate: dataset length = 1, metric value = 0.846 + INFO:nncf:Elapsed Time: 00:00:05 + INFO:nncf:Validation of quantized model was started + INFO:nncf:Elapsed Time: 00:00:01 + Validate: dataset length = 128, metric value = 0.342 + INFO:nncf:Elapsed Time: 00:00:04 + INFO:nncf:Metric of quantized model: 0.3419095833156649 + INFO:nncf:Collecting values for each data item using the quantized model + Validate: dataset length = 1, metric value = 0.513 + Validate: dataset length = 1, metric value = 0.647 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.895 + Validate: dataset length = 1, metric value = 0.846 + Validate: dataset length = 1, metric value = 0.448 + Validate: dataset length = 1, metric value = 0.426 + Validate: dataset length = 1, metric value = 0.165 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.255 + Validate: dataset length = 1, metric value = 0.464 + Validate: dataset length = 1, metric value = 0.427 + Validate: dataset length = 1, metric value = 0.631 + Validate: dataset length = 1, metric value = 0.307 + Validate: dataset length = 1, metric value = 0.895 + Validate: dataset length = 1, metric value = 0.895 + Validate: dataset length = 1, metric value = 0.531 + Validate: dataset length = 1, metric value = 0.518 + Validate: dataset length = 1, metric value = 0.696 + Validate: dataset length = 1, metric value = 0.647 + Validate: dataset length = 1, metric value = 0.142 + Validate: dataset length = 1, metric value = 0.205 + Validate: dataset length = 1, metric value = 0.487 + Validate: dataset length = 1, metric value = 0.331 + Validate: dataset length = 1, metric value = 0.348 + Validate: dataset length = 1, metric value = 0.415 + Validate: dataset length = 1, metric value = 0.542 + Validate: dataset length = 1, metric value = 0.333 + Validate: dataset length = 1, metric value = 0.489 + Validate: dataset length = 1, metric value = 0.270 + Validate: dataset length = 1, metric value = 0.067 + Validate: dataset length = 1, metric value = 0.564 + Validate: dataset length = 1, metric value = 0.764 + Validate: dataset length = 1, metric value = 0.301 + Validate: 
dataset length = 1, metric value = 0.400 + Validate: dataset length = 1, metric value = 0.392 + Validate: dataset length = 1, metric value = 0.696 + Validate: dataset length = 1, metric value = 0.193 + Validate: dataset length = 1, metric value = 0.199 + Validate: dataset length = 1, metric value = 0.267 + Validate: dataset length = 1, metric value = 0.484 + Validate: dataset length = 1, metric value = 0.299 + Validate: dataset length = 1, metric value = 0.299 + Validate: dataset length = 1, metric value = 0.255 + Validate: dataset length = 1, metric value = 0.431 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.623 + Validate: dataset length = 1, metric value = 0.348 + Validate: dataset length = 1, metric value = 0.763 + Validate: dataset length = 1, metric value = 0.354 + Validate: dataset length = 1, metric value = 0.129 + Validate: dataset length = 1, metric value = 0.507 + Validate: dataset length = 1, metric value = 0.082 + Validate: dataset length = 1, metric value = 0.855 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.381 + Validate: dataset length = 1, metric value = 0.384 + Validate: dataset length = 1, metric value = 0.586 + Validate: dataset length = 1, metric value = 0.503 + Validate: dataset length = 1, metric value = 0.172 + Validate: dataset length = 1, metric value = 0.540 + Validate: dataset length = 1, metric value = 0.027 + Validate: dataset length = 1, metric value = 0.561 + Validate: dataset length = 1, metric value = 0.945 + Validate: dataset length = 1, metric value = 0.170 + Validate: dataset length = 1, metric value = 0.409 + Validate: dataset length = 1, metric value = 0.272 + Validate: dataset length = 1, metric value = 0.507 + Validate: dataset length = 1, metric value = 0.945 + Validate: dataset length = 1, metric value = 0.377 + Validate: dataset length = 1, metric value = 0.343 + Validate: dataset length = 1, metric value = 0.564 + Validate: dataset length = 1, metric value = 0.080 + Validate: dataset length = 1, metric value = 0.721 + Validate: dataset length = 1, metric value = 0.174 + Validate: dataset length = 1, metric value = 0.564 + Validate: dataset length = 1, metric value = 0.497 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.454 + Validate: dataset length = 1, metric value = 0.536 + Validate: dataset length = 1, metric value = 0.647 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.461 + Validate: dataset length = 1, metric value = 0.697 + Validate: dataset length = 1, metric value = 0.746 + Validate: dataset length = 1, metric value = 0.332 + Validate: dataset length = 1, metric value = 0.218 + Validate: dataset length = 1, metric value = 0.547 + Validate: dataset length = 1, metric value = 0.309 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.309 + Validate: dataset length = 1, metric value = 0.423 + Validate: dataset length = 1, metric value = 0.146 + Validate: dataset length = 1, metric value = 0.535 + Validate: dataset length = 1, metric value = 0.274 + Validate: dataset length = 1, metric value = 0.166 + Validate: 
dataset length = 1, metric value = 0.111 + Validate: dataset length = 1, metric value = 0.585 + Validate: dataset length = 1, metric value = 0.351 + Validate: dataset length = 1, metric value = 0.327 + Validate: dataset length = 1, metric value = 0.260 + Validate: dataset length = 1, metric value = 0.411 + Validate: dataset length = 1, metric value = 0.788 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.265 + Validate: dataset length = 1, metric value = 0.442 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.029 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.613 + Validate: dataset length = 1, metric value = 0.610 + Validate: dataset length = 1, metric value = 0.796 + Validate: dataset length = 1, metric value = 0.457 + Validate: dataset length = 1, metric value = 0.323 + Validate: dataset length = 1, metric value = 0.348 + Validate: dataset length = 1, metric value = 0.600 + Validate: dataset length = 1, metric value = 0.854 + Validate: dataset length = 1, metric value = 0.597 + Validate: dataset length = 1, metric value = 0.567 + Validate: dataset length = 1, metric value = 0.995 + Validate: dataset length = 1, metric value = 0.325 + Validate: dataset length = 1, metric value = 0.398 + Validate: dataset length = 1, metric value = 0.796 + INFO:nncf:Elapsed Time: 00:00:04 + INFO:nncf:Accuracy drop: 0.02420510027008016 (DropType.ABSOLUTE) + INFO:nncf:Accuracy drop: 0.02420510027008016 (DropType.ABSOLUTE) + INFO:nncf:Total number of quantized operations in the model: 91 + INFO:nncf:Number of parallel processes to rank quantized operations: 1 + INFO:nncf:ORIGINAL metric is used to rank quantizers + INFO:nncf:Calculating ranking score for groups of quantizers + Validate: dataset length = 25, metric value = 0.523 + Validate: dataset length = 25, metric value = 0.517 + Validate: dataset length = 25, metric value = 0.504 + Validate: dataset length = 25, metric value = 0.516 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.507 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.503 + Validate: dataset length = 25, metric value = 0.504 + Validate: dataset length = 25, metric value = 0.501 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.503 + Validate: dataset length = 25, metric value = 0.500 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.509 + Validate: dataset length = 25, metric value = 0.507 + Validate: dataset length = 25, metric value = 0.506 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.504 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.503 + Validate: dataset length = 25, metric value = 0.503 + Validate: dataset length = 25, metric value = 0.501 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.500 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.508 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.506 + Validate: dataset length = 25, metric value = 0.506 + Validate: dataset length = 25, metric value = 0.501 + Validate: dataset length = 25, 
metric value = 0.500 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.512 + Validate: dataset length = 25, metric value = 0.504 + Validate: dataset length = 25, metric value = 0.510 + Validate: dataset length = 25, metric value = 0.514 + Validate: dataset length = 25, metric value = 0.510 + Validate: dataset length = 25, metric value = 0.508 + Validate: dataset length = 25, metric value = 0.507 + Validate: dataset length = 25, metric value = 0.509 + Validate: dataset length = 25, metric value = 0.495 + Validate: dataset length = 25, metric value = 0.510 + Validate: dataset length = 25, metric value = 0.511 + Validate: dataset length = 25, metric value = 0.502 + Validate: dataset length = 25, metric value = 0.511 + Validate: dataset length = 25, metric value = 0.507 + Validate: dataset length = 25, metric value = 0.506 + Validate: dataset length = 25, metric value = 0.515 + Validate: dataset length = 25, metric value = 0.506 + Validate: dataset length = 25, metric value = 0.499 + Validate: dataset length = 25, metric value = 0.492 + Validate: dataset length = 25, metric value = 0.505 + Validate: dataset length = 25, metric value = 0.499 + Validate: dataset length = 25, metric value = 0.519 + Validate: dataset length = 25, metric value = 0.522 + Validate: dataset length = 25, metric value = 0.516 + INFO:nncf:Elapsed Time: 00:02:45 + INFO:nncf:Changing the scope of quantizer nodes was started + INFO:nncf:Reverted 1 operations to the floating-point precision: + /model.22/Mul_5 + Validate: dataset length = 128, metric value = 0.353 + INFO:nncf:Accuracy drop with the new quantization scope is 0.013362079004897942 (DropType.ABSOLUTE) + INFO:nncf:Reverted 1 operations to the floating-point precision: + /model.1/conv/Conv/WithoutBiases + Validate: dataset length = 128, metric value = 0.353 + INFO:nncf:Accuracy drop with the new quantization scope is 0.013092546237331526 (DropType.ABSOLUTE) + INFO:nncf:Reverted 1 operations to the floating-point precision: + /model.2/cv1/conv/Conv/WithoutBiases + Validate: dataset length = 128, metric value = 0.359 + INFO:nncf:Algorithm completed: achieved required accuracy drop 0.006690894581248108 (DropType.ABSOLUTE) + INFO:nncf:3 out of 91 were reverted back to the floating-point precision: + /model.22/Mul_5 + /model.1/conv/Conv/WithoutBiases + /model.2/cv1/conv/Conv/WithoutBiases + + +Compare Accuracy and Performance of the Original and Quantized Models +############################################################################################################################### Now we can compare metrics of the Original non-quantized OpenVINO IR model and Quantized OpenVINO IR model to make sure that the ``max_drop`` is not exceeded. -.. code:: ipython2 +.. code:: ipython3 - import openvino - - core = openvino.Core() + core = ov.Core() quantized_compiled_model = core.compile_model(model=quantized_model, device_name='CPU') compiled_ov_model = core.compile_model(model=ov_model, device_name='CPU') - + pt_result = validation_ac(compiled_ov_model, data_loader, validator) quantized_result = validation_ac(quantized_compiled_model, data_loader, validator) - - + + print(f'[Original OpenVino]: {pt_result:.4f}') print(f'[Quantized OpenVino]: {quantized_result:.4f}') + +.. 
parsed-literal::
+
+    Validate: dataset length = 128, metric value = 0.368
+    Validate: dataset length = 128, metric value = 0.361
+    [Original OpenVino]: 0.3677
+    [Quantized OpenVino]: 0.3605
+
+
 And compare performance.

-.. code:: ipython2
+.. code:: ipython3

     from pathlib import Path
     # Set model directory
     MODEL_DIR = Path("model")
     MODEL_DIR.mkdir(exist_ok=True)
-    
+
     ir_model_path = MODEL_DIR / 'ir_model.xml'
     quantized_model_path = MODEL_DIR / 'quantized_model.xml'
-    
+
     # Save models to use them in the command-line benchmark app
-    openvino.save_model(ov_model, ir_model_path, compress_to_fp16=False)
-    openvino.save_model(quantized_model, quantized_model_path, compress_to_fp16=False)
+    ov.save_model(ov_model, ir_model_path, compress_to_fp16=False)
+    ov.save_model(quantized_model, quantized_model_path, compress_to_fp16=False)

-.. code:: ipython2
+.. code:: ipython3

     # Inference Original model (OpenVINO IR)
     ! benchmark_app -m $ir_model_path -shape "[1,3,640,640]" -d CPU -api async

-.. code:: ipython2
+
+.. parsed-literal::
+
+    /bin/bash: benchmark_app: command not found
+
+
+.. code:: ipython3

     # Inference Quantized model (OpenVINO IR)
     ! benchmark_app -m $quantized_model_path -shape "[1,3,640,640]" -d CPU -api async
+
+
+.. parsed-literal::
+
+    /bin/bash: benchmark_app: command not found
+
diff --git a/docs/notebooks/201-vision-monodepth-with-output.rst b/docs/notebooks/201-vision-monodepth-with-output.rst
index b26e82ff4e7ff0..e6e3b3f65561eb 100644
--- a/docs/notebooks/201-vision-monodepth-with-output.rst
+++ b/docs/notebooks/201-vision-monodepth-with-output.rst
@@ -1,11 +1,9 @@
 Monodepth Estimation with OpenVINO
 ==================================
 
-
-
 This tutorial demonstrates Monocular Depth Estimation with MidasNet in
 OpenVINO. Model information can be found
-`here `__.
+`here `__.
 
 .. figure:: https://user-images.githubusercontent.com/36741649/127173017-a0bbcf75-db24-4d2c-81b9-616e04ab7cd9.gif
    :alt: monodepth
@@ -13,7 +11,7 @@ OpenVINO. Model information can be found
    monodepth
 
 What is Monodepth?
-~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 Monocular Depth Estimation is the task of estimating scene depth using
 a single image. It has many potential applications in robotics, 3D
@@ -28,11 +26,9 @@ Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot
 Cross-dataset Transfer,” `__ in IEEE
 Transactions on Pattern Analysis and Machine Intelligence, doi:
-``10.1109/TPAMI.2020.3019967``.
+``10.1109/TPAMI.2020.3019967``. 
 
-.. _top:
-
-**Table of contents**:
+**Table of contents:**
 
 - `Preparation <#preparation>`__
 
@@ -56,17 +52,15 @@ Transactions on Pattern Analysis and Machine Intelligence, doi:
 - `Do Inference on a Video and Create Monodepth Video <#do-inference-on-a-video-and-create-monodepth-video>`__
 - `Display Monodepth Video <#display-monodepth-video>`__
 
-Preparation `⇑ <#top>`__
+Preparation
 ###############################################################################################################################
 
-
-Install requirements `⇑ <#top>`__
+Install requirements
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-
 .. code:: ipython3
 
-    !pip install -q "openvino>=2023.0.0"
+    !pip install -q "openvino==2023.1.0.dev20230811"
     !pip install -q matplotlib opencv-python requests tqdm
 
     # Fetch `notebook_utils` module
@@ -81,14 +75,13 @@ Install requirements `⇑ <#top>`__
 
 .. 
parsed-literal:: - ('notebook_utils.py', ) + ('notebook_utils.py', ) -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import time @@ -107,14 +100,13 @@ Imports `⇑ <#top>`__ clear_output, display, ) - from openvino.runtime import Core + import openvino as ov from notebook_utils import download_file, load_image -Download the model `⇑ <#top>`__ +Download the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 model_folder = Path('model') @@ -141,10 +133,9 @@ Download the model `⇑ <#top>`__ model/MiDaS_small.bin: 0%| | 0.00/31.6M [00:00`__ +Functions ############################################################################################################################### - .. code:: ipython3 def normalize_minmax(data): @@ -175,17 +166,16 @@ Functions `⇑ <#top>`__ """ return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB) -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -204,17 +194,16 @@ Select device from dropdown list for running inference using OpenVINO: -Load the Model `⇑ <#top>`__ +Load the Model ############################################################################################################################### - -Load the model in OpenVINO Runtime with ``ie.read_model`` and compile it -for the specified device with ``ie.compile_model``. Get input and output -keys and the expected input shape for the model. +Load the model in OpenVINO Runtime with ``core.read_model`` and compile +it for the specified device with ``core.compile_model``. Get input and +output keys and the expected input shape for the model. .. code:: ipython3 - core = Core() + core = ov.Core() core.set_property({'CACHE_DIR': '../cache'}) model = core.read_model(model_xml_path) compiled_model = core.compile_model(model=model, device_name=device.value) @@ -225,14 +214,12 @@ keys and the expected input shape for the model. network_input_shape = list(input_key.shape) network_image_height, network_image_width = network_input_shape[2:] -Monodepth on Image `⇑ <#top>`__ +Monodepth on Image ############################################################################################################################### - -Load, resize and reshape input image `⇑ <#top>`__ +Load, resize and reshape input image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The input image is read with OpenCV, resized to network input size, and reshaped to (N,C,H,W) (N=number of images, C=number of channels, H=height, W=width). @@ -248,10 +235,9 @@ H=height, W=width). # Reshape the image to network input shape NCHW. 
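     # np.transpose reorders the HWC image to CHW; np.expand_dims then adds the batch dimension N.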
input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) -Do inference on the image `⇑ <#top>`__ +Do inference on the image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Do inference, convert the result to an image, and resize it to the original image shape. @@ -267,10 +253,9 @@ original image shape. # in (width, height), [::-1] reverses the (height, width) shape to match this. result_image = cv2.resize(result_image, image.shape[:2][::-1]) -Display monodepth image `⇑ <#top>`__ +Display monodepth image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 fig, ax = plt.subplots(1, 2, figsize=(20, 15)) @@ -282,18 +267,16 @@ Display monodepth image `⇑ <#top>`__ .. image:: 201-vision-monodepth-with-output_files/201-vision-monodepth-with-output_18_0.png -Monodepth on Video `⇑ <#top>`__ +Monodepth on Video ############################################################################################################################### - By default, only the first 100 frames are processed in order to quickly check that everything works. Change ``NUM_FRAMES`` in the cell below to modify this. Set ``NUM_FRAMES`` to 0 to process the whole video. -Video Settings `⇑ <#top>`__ +Video Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Video source: https://www.youtube.com/watch?v=fu1xcQdJRws (Public Domain) @@ -320,10 +303,9 @@ Video Settings `⇑ <#top>`__ output_directory.mkdir(exist_ok=True) result_video_path = output_directory / f"{Path(VIDEO_FILE).stem}_monodepth.mp4" -Load the Video `⇑ <#top>`__ +Load the Video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Load the video from a ``VIDEO_FILE``, set in the *Video Settings* cell above. Open the video to read the frame width and height and fps, and compute values for these properties for the monodepth video. @@ -359,10 +341,9 @@ compute values for these properties for the monodepth video. The monodepth video will be scaled with a factor 0.5, have width 320, height 180, and run at 15.00 fps -Do Inference on a Video and Create Monodepth Video `⇑ <#top>`__ +Do Inference on a Video and Create Monodepth Video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Initialize variables. @@ -460,14 +441,13 @@ Do Inference on a Video and Create Monodepth Video `⇑ <#top>`__ .. parsed-literal:: - Processed 60 frames in 48.08 seconds. Total FPS (including video processing): 1.25.Inference FPS: 42.27 + Processed 60 frames in 36.11 seconds. Total FPS (including video processing): 1.66.Inference FPS: 42.56 Monodepth Video saved to 'output/Coco%20Walking%20in%20Berkeley_monodepth.mp4'. -Display Monodepth Video `⇑ <#top>`__ +Display Monodepth Video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 video = Video(result_video_path, width=800, embed=True) @@ -489,7 +469,7 @@ Display Monodepth Video `⇑ <#top>`__ .. 
parsed-literal:: Showing monodepth video saved at - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/201-vision-monodepth/output/Coco%20Walking%20in%20Berkeley_monodepth.mp4 + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/201-vision-monodepth/output/Coco%20Walking%20in%20Berkeley_monodepth.mp4 If you cannot see the video in your browser, please click on the following link to download the video diff --git a/docs/notebooks/202-vision-superresolution-image-with-output.rst b/docs/notebooks/202-vision-superresolution-image-with-output.rst index 2cdac3c01bf069..8763cf90095a04 100644 --- a/docs/notebooks/202-vision-superresolution-image-with-output.rst +++ b/docs/notebooks/202-vision-superresolution-image-with-output.rst @@ -1,24 +1,20 @@ Single Image Super Resolution with OpenVINO™ ============================================ - - Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook shows the Single Image Super Resolution (SISR) which takes just one low resolution image. A model called -`single-image-super-resolution-1032 `__, +`single-image-super-resolution-1032 `__, which is available in Open Model Zoo, is used in this tutorial. It is based on the research paper cited below. Y. Liu et al., `“An Attention-Based Approach for Single Image Super Resolution,” `__ 2018 24th International Conference on Pattern Recognition (ICPR), 2018, -pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. +pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -28,7 +24,7 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. - `Select inference device <#select-inference-device>`__ - - `Functions <#functions>`__ + - `Functions <#functions>`__ - `Load the Superresolution Model <#load-the-superresolution-model>`__ - `Load and Show the Input Image <#load-and-show-the-input-image>`__ @@ -39,34 +35,31 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. 
- `Do Inference <#do-inference>`__ - `Show and Save Results <#show-and-save-results>`__ - - `Save Superresolution and Bicubic Image Crop <#save-superresolution-and-bicubic-image-crop>`__ - - `Write Animated GIF with Bicubic/Superresolution Comparison <#write-animated-gif-with-bicubic-superresolution-comparison>`__ - - `Create a Video with Sliding Bicubic/Superresolution Comparison <#create-a-video-with-sliding-bicubic-superresolution-comparison>`__ + - `Save Superresolution and Bicubic Image Crop <#save-superresolution-and-bicubic-image-crop>`__ + - `Write Animated GIF with Bicubic/Superresolution Comparison <#write-animated-gif-with-bicubic-superresolution-comparison>`__ + - `Create a Video with Sliding Bicubic/Superresolution Comparison <#create-a-video-with-sliding-bicubic-superresolution-comparison>`__ - `Superresolution on full input image <#superresolution-on-full-input-image>`__ - `Compute patches <#compute-patches>`__ - - `Do Inference <#do-the-inference>`__ + - `Do Inference <#do-inference>`__ - `Save superresolution image and the bicubic image <#save-superresolution-image-and-the-bicubic-image>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### - -Install requirements `⇑ <#top>`__ +Install requirements +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - !pip install -q "openvino>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q opencv-python !pip install -q pillow matplotlib -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import os @@ -80,7 +73,7 @@ Imports `⇑ <#top>`__ from IPython.display import Image as DisplayImage from IPython.display import Pretty, ProgressBar, clear_output, display from PIL import Image - from openvino.runtime import Core + import openvino as ov .. code:: ipython3 @@ -91,21 +84,19 @@ Imports `⇑ <#top>`__ path.parent.mkdir(parents=True, exist_ok=True) urllib.request.urlretrieve(url, path) -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -Select inference device `⇑ <#top>`__ +Select inference device ------------------------------------------------------------------------------------------------------------------------------- - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -147,10 +138,9 @@ Select device from dropdown list for running inference using OpenVINO: else: print(f'{model_name} already downloaded to {base_model_dir}') -Functions `⇑ <#top>`__ +Functions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray: @@ -209,24 +199,23 @@ Functions `⇑ <#top>`__ """ return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB) -Load the Superresolution Model `⇑ <#top>`__ +Load the Superresolution Model ############################################################################################################################### - The Super Resolution model expects two inputs: the input image and a bicubic interpolation of the input image to the target size of 1920x1080. It returns the super resolution version of the image in 1920x1800 (for the default superresolution model (1032)). -Load the model in OpenVINO Runtime with ``ie.read_model``, compile it -for the specified device with ``ie.compile_model``, and get information -about the network inputs and outputs. +Load the model in OpenVINO Runtime with ``core.read_model``, compile it +for the specified device with ``core.compile_model``, and get +information about the network inputs and outputs. .. code:: ipython3 - ie = Core() - model = ie.read_model(model=model_xml_path) - compiled_model = ie.compile_model(model=model, device_name=device.value) + core = ov.Core() + model = core.read_model(model=model_xml_path) + compiled_model = core.compile_model(model=model, device_name=device.value) # Network inputs and outputs are dictionaries. Get the keys for the # dictionaries. @@ -258,17 +247,15 @@ about the network inputs and outputs. The image sides are upsampled by a factor of 4. The new image is 16 times as large as the original image -Load and Show the Input Image `⇑ <#top>`__ +Load and Show the Input Image ############################################################################################################################### - .. note:: For the best results, use raw images (like ``TIFF``, ``BMP`` or ``PNG``). Compressed images (like ``JPEG``) may appear distorted after processing with the super resolution model. - .. code:: ipython3 IMAGE_PATH = Path("./data/tower.jpg") @@ -297,14 +284,12 @@ Load and Show the Input Image `⇑ <#top>`__ .. image:: 202-vision-superresolution-image-with-output_files/202-vision-superresolution-image-with-output_15_1.png -Superresolution on a Crop of the Image `⇑ <#top>`__ +Superresolution on a Crop of the Image ############################################################################################################################### - -Crop the Input Image once. `⇑ <#top>`__ +Crop the Input Image once. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Crop the network input size. Give the X (width) and Y (height) coordinates for the top left corner of the crop. Set the ``CROP_FACTOR`` variable to 2 to make a crop that is larger than the network input size @@ -351,10 +336,9 @@ as the crop size. .. image:: 202-vision-superresolution-image-with-output_files/202-vision-superresolution-image-with-output_17_1.png -Reshape/Resize Crop for Model Input `⇑ <#top>`__ +Reshape/Resize Crop for Model Input +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The input image is resized to a network input size, and reshaped to (N,C,H,W) (N=number of images, C=number of channels, H=height, W=width). The image is also resized to the network output size, with bicubic @@ -375,10 +359,9 @@ interpolation. This bicubic image is the second input to the network. 
input_image_original = np.expand_dims(image_crop.transpose(2, 0, 1), axis=0) input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0) -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Do inference and convert the inference result to an ``RGB`` image. .. code:: ipython3 @@ -393,10 +376,9 @@ Do inference and convert the inference result to an ``RGB`` image. # Get inference result as numpy array and reshape to image shape and data type result_image = convert_result_to_image(result) -Show and Save Results `⇑ <#top>`__ +Show and Save Results +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Show the bicubic image and the enhanced superresolution image. .. code:: ipython3 @@ -420,10 +402,9 @@ Show the bicubic image and the enhanced superresolution image. .. image:: 202-vision-superresolution-image-with-output_files/202-vision-superresolution-image-with-output_23_1.png -Save Superresolution and Bicubic Image Crop `⇑ <#top>`__ +Save Superresolution and Bicubic Image Crop ------------------------------------------------------------------------------------------------------------------------------- - .. code:: ipython3 # Add a text with "SUPER" or "BICUBIC" to the superresolution or bicubic image. @@ -454,7 +435,7 @@ Save Superresolution and Bicubic Image Crop `⇑ <#top>`__ Write Animated GIF with Bicubic/Superresolution Comparison -`⇑ <#top>`__ +------------------------------------------------------------------------------------------------------------------------------- .. code:: ipython3 @@ -492,18 +473,15 @@ Write Animated GIF with Bicubic/Superresolution Comparison Create a Video with Sliding Bicubic/Superresolution Comparison -`⇑ <#top>`__ +------------------------------------------------------------------------------------------------------------------------------- This may take a while. For the video, the superresolution and bicubic image are resized by a factor of 2 to improve processing speed. This gives an indication of the superresolution effect. The video is saved as an ``.avi`` file. You can click on the link to download the video, or -open it directly from the ``output/`` directory, and play it locally. - -.. note:: - - If you run the example in Google Colab, download video files using the ``Files`` tool. - +open it directly from the ``output/`` directory, and play it locally. > +Note: If you run the example in Google Colab, download video files using +the ``Files`` tool. .. code:: ipython3 @@ -562,10 +540,9 @@ open it directly from the ``output/`` directory, and play it locally. The video has been saved to output/flag_crop_comparison_2x.avi
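A rough sketch of the idea behind the sliding comparison, using hypothetical
names (``superresolution_image`` and ``bicubic_image`` stand for equally sized
``H x W x 3`` NumPy arrays, and ``boundary_x`` for the current divider column);
this is an illustration of the concept, not the notebook's exact implementation:

.. code:: ipython3

    def make_comparison_frame(superresolution_image, bicubic_image, boundary_x):
        """Compose one frame: superresolution pixels to the left of the divider,
        bicubic pixels to the right."""
        frame = bicubic_image.copy()
        frame[:, :boundary_x] = superresolution_image[:, :boundary_x]
        # Draw a thin white line so the divider stays visible.
        frame[:, boundary_x:boundary_x + 2] = 255
        return frame

Sliding ``boundary_x`` across the frame width and writing each composed frame
with ``cv2.VideoWriter`` produces the comparison effect described above.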
-Superresolution on full input image `⇑ <#top>`__ +Superresolution on full input image ############################################################################################################################### - Superresolution on the full image is done by dividing the image into patches of equal size, doing superresolution on each path, and then stitching the resulting patches together again. For this demo, patches @@ -574,10 +551,9 @@ near the border of the image are ignored. Adjust the ``CROPLINES`` setting in the next cell if you see boundary effects. -Compute patches `⇑ <#top>`__ +Compute patches +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Set the number of lines to crop from the network result to prevent @@ -620,12 +596,9 @@ Compute patches `⇑ <#top>`__ The output image will have a width of 11280 and a height of 7280 -.. _do-the-inference: - -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The code below reads one patch of the image at a time. Each patch is reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic @@ -741,14 +714,13 @@ as total time to process each patch. .. parsed-literal:: - Processed 42 patches in 4.78 seconds. Total patches per second (including processing): 8.78. - Inference patches per second: 17.27 + Processed 42 patches in 4.76 seconds. Total patches per second (including processing): 8.82. + Inference patches per second: 17.20 -Save superresolution image and the bicubic image `⇑ <#top>`__ +Save superresolution image and the bicubic image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 full_superresolution_image_path = Path( diff --git a/docs/notebooks/202-vision-superresolution-video-with-output.rst b/docs/notebooks/202-vision-superresolution-video-with-output.rst index c2534fe7a3cc38..34c6fb89f78b0e 100644 --- a/docs/notebooks/202-vision-superresolution-video-with-output.rst +++ b/docs/notebooks/202-vision-superresolution-video-with-output.rst @@ -1,13 +1,11 @@ Video Super Resolution with OpenVINO™ ===================================== - - Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video in 360p resolution. A model called -`single-image-super-resolution-1032 `__, +`single-image-super-resolution-1032 `__, which is available in Open Model Zoo, is used in this tutorial. It is based on the research paper cited below. @@ -22,10 +20,7 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. demo is not optimized for a video. Results may vary depending on the video. - -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ @@ -45,19 +40,19 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. 
- `Do Inference <#do-inference>`__ - `Show Side-by-Side Video of Bicubic and Superresolution Version <#show-side-by-side-video-of-bicubic-and-superresolution-version>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### -Install requirements `⇑ <#top>`__ +Install requirements +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 - !pip install -q "openvino>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q opencv-python !pip install -q "pytube>=12.1.0" -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -76,7 +71,7 @@ Imports `⇑ <#top>`__ clear_output, display, ) - from openvino.runtime import Core + import openvino as ov from pytube import YouTube .. code:: ipython3 @@ -88,10 +83,10 @@ Imports `⇑ <#top>`__ path.parent.mkdir(parents=True, exist_ok=True) urllib.request.urlretrieve(url, path) -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Select inference device `⇑ <#top>`__ +Select inference device ------------------------------------------------------------------------------------------------------------------------------- Select device from dropdown list for running inference using OpenVINO: @@ -100,7 +95,7 @@ Select device from dropdown list for running inference using OpenVINO: import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -148,7 +143,7 @@ Select device from dropdown list for running inference using OpenVINO: single-image-super-resolution-1032 already downloaded to model -Functions `⇑ <#top>`__ +Functions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -167,15 +162,15 @@ Functions `⇑ <#top>`__ result = result.astype(np.uint8) return result -Load the Superresolution Model `⇑ <#top>`__ +Load the Superresolution Model ############################################################################################################################### -Load the model in OpenVINO Runtime with ``ie.read_model`` and compile it -for the specified device with ``ie.compile_model``. +Load the model in OpenVINO Runtime with ``core.read_model`` and compile +it for the specified device with ``core.compile_model``. .. code:: ipython3 - core = Core() + core = ov.Core() model = core.read_model(model=model_xml_path) compiled_model = core.compile_model(model=model, device_name=device.value) @@ -216,7 +211,7 @@ resolution version of the image in 1920x1080. The image sides are upsampled by a factor of 4. The new image is 16 times as large as the original image -Superresolution on Video `⇑ <#top>`__ +Superresolution on Video ############################################################################################################################### Download a YouTube video with ``PyTube`` and enhance the video quality @@ -231,8 +226,7 @@ By default, only the first 100 frames of the video are processed. 
Change should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. - -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -246,7 +240,7 @@ Settings `⇑ <#top>`__ # If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed. FOURCC = cv2.VideoWriter_fourcc(*"vp09") -Download and Prepare Video `⇑ <#top>`__ +Download and Prepare Video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -333,7 +327,7 @@ the superresolution side by side. frameSize=(target_width * 2, target_height), ) -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Read video frames and enhance them with superresolution. Save the @@ -450,16 +444,16 @@ video. .. parsed-literal:: - Processed frame 100. Inference time: 0.06 seconds (16.26 FPS) + Processed frame 100. Inference time: 0.05 seconds (19.34 FPS) .. parsed-literal:: Video's saved to output directory. - Processed 100 frames in 235.27 seconds. Total FPS (including video processing): 0.43. Inference FPS: 17.68. + Processed 100 frames in 235.00 seconds. Total FPS (including video processing): 0.43. Inference FPS: 17.29. -Show Side-by-Side Video of Bicubic and Superresolution Version. `⇑ <#top>`__ +Show Side-by-Side Video of Bicubic and Superresolution Version +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 diff --git a/docs/notebooks/203-meter-reader-with-output.rst b/docs/notebooks/203-meter-reader-with-output.rst index 0efdff356edd3e..cf8fa5961634f5 100644 --- a/docs/notebooks/203-meter-reader-with-output.rst +++ b/docs/notebooks/203-meter-reader-with-output.rst @@ -1,8 +1,6 @@ Industrial Meter Reader ======================= - - This notebook shows how to create a industrial meter reader with OpenVINO Runtime. We use the pre-trained `PPYOLOv2 `__ @@ -21,9 +19,7 @@ to build up a multiple inference task pipeline: workflow -.. _top: - -**Table of contents**: +**Table of contents:** - `Import <#import>`__ - `Prepare the Model and Test Image <#prepare-the-model-and-test-image>`__ @@ -38,11 +34,13 @@ to build up a multiple inference task pipeline: - `Postprocess the models result and calculate the final readings <#postprocess-the-models-result-and-calculate-the-final-readings>`__ - `Get the reading result on the meter picture <#get-the-reading-result-on-the-meter-picture>`__ -- `Try it with your meter photos! <#try-it-with-your-meter-photos>`__ +.. code:: ipython3 -Import `⇑ <#top>`__ -############################################################################################################################### + # Install openvino package + !pip install -q "openvino==2023.1.0.dev20230811" +Import +############################################################################################################################### .. 
code:: ipython3 @@ -59,10 +57,11 @@ Import `⇑ <#top>`__ sys.path.append("../utils") from notebook_utils import download_file, segmentation_map_to_image -Prepare the Model and Test Image `⇑ <#top>`__ +Prepare the Model and Test Image ############################################################################################################################### -Download PPYOLOv2 and DeepLabV3P pre-trained models from PaddlePaddle community. +Download PPYOLOv2 and DeepLabV3P pre-trained models from PaddlePaddle +community. .. code:: ipython3 @@ -134,7 +133,7 @@ Download PPYOLOv2 and DeepLabV3P pre-trained models from PaddlePaddle community. Test Image Saved to "./data". -Configuration `⇑ <#top>`__ +Configuration ############################################################################################################################### Add parameter configuration for reading calculation. @@ -163,7 +162,7 @@ Add parameter configuration for reading calculation. SEG_LABEL = {'background': 0, 'pointer': 1, 'scale': 2} -Load the Models `⇑ <#top>`__ +Load the Models ############################################################################################################################### Define a common class for model loading and inference @@ -206,7 +205,7 @@ Define a common class for model loading and inference result = self.compiled_model(input_image)[self.output_layer] return result -Data Process `⇑ <#top>`__ +Data Process ############################################################################################################################### Including the preprocessing and postprocessing tasks of each model. @@ -536,14 +535,12 @@ Including the preprocessing and postprocessing tasks of each model. readings.append(reading) return readings -Main Function `⇑ <#top>`__ +Main Function ############################################################################################################################### - -Initialize the model and parameters. `⇑ <#top>`__ +Initialize the model and parameters. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -571,7 +568,7 @@ Select device from dropdown list for running inference using OpenVINO: The number of detected meter from detection network can be arbitrary in some scenarios, which means the batch size of segmentation network input is a `dynamic -dimension `__, +dimension `__, and it should be specified as ``-1`` or the ``ov::Dimension()`` instead of a positive number used for static dimensions. In this case, for memory consumption optimization, we can specify the lower and/or upper @@ -604,18 +601,19 @@ bounds of input batch size. .. parsed-literal:: - + -.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_15_1.png +.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_16_1.png -Run meter detection model `⇑ <#top>`__ +Run meter detection model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Detect the location of the meter and prepare the ROI images for segmentation. +Detect the location of the meter and prepare the ROI images for +segmentation. .. code:: ipython3 @@ -653,10 +651,10 @@ Detect the location of the meter and prepare the ROI images for segmentation. -.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_17_1.png +.. 
image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_18_1.png -Run meter segmentation model `⇑ <#top>`__ +Run meter segmentation model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Get the results of segmentation task on detected ROI. @@ -693,14 +691,13 @@ Get the results of segmentation task on detected ROI. -.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_19_1.png +.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_20_1.png -Postprocess the models result and calculate the final readings `⇑ <#top>`__ +Postprocess the models result and calculate the final readings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Use OpenCV function to find the location of the pointer in a -scale map. +Use OpenCV function to find the location of the pointer in a scale map. .. code:: ipython3 @@ -731,13 +728,12 @@ scale map. -.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_21_1.png +.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_22_1.png -Get the reading result on the meter picture `⇑ <#top>`__ +Get the reading result on the meter picture +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Create a final result photo with reading @@ -763,9 +759,8 @@ Get the reading result on the meter picture `⇑ <#top>`__ -.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_23_1.png +.. image:: 203-meter-reader-with-output_files/203-meter-reader-with-output_24_1.png -Try it with your meter photos! `⇑ <#top>`__ +Try it with your meter photos! 
############################################################################################################################### - diff --git a/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_15_1.png b/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_16_1.png similarity index 100% rename from docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_15_1.png rename to docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_16_1.png diff --git a/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_17_1.png b/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_18_1.png similarity index 100% rename from docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_17_1.png rename to docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_18_1.png diff --git a/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_19_1.png b/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_20_1.png similarity index 100% rename from docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_19_1.png rename to docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_20_1.png diff --git a/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_21_1.png b/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_22_1.png similarity index 100% rename from docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_21_1.png rename to docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_22_1.png diff --git a/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_23_1.png b/docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_24_1.png similarity index 100% rename from docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_23_1.png rename to docs/notebooks/203-meter-reader-with-output_files/203-meter-reader-with-output_24_1.png diff --git a/docs/notebooks/204-segmenter-semantic-segmentation-with-output.rst b/docs/notebooks/204-segmenter-semantic-segmentation-with-output.rst index 750508af69864b..b2c11965c9bdeb 100644 --- a/docs/notebooks/204-segmenter-semantic-segmentation-with-output.rst +++ b/docs/notebooks/204-segmenter-semantic-segmentation-with-output.rst @@ -1,8 +1,6 @@ Semantic Segmentation with OpenVINO™ using Segmenter ==================================================== - - Semantic segmentation is a difficult computer vision problem with many applications such as autonomous driving, robotics, augmented reality, and many others. Its goal is to assign labels to each pixel according to @@ -28,27 +26,25 @@ paper: `Segmenter: Transformer for Semantic Segmentation `__ or in the `repository `__. -.. 
_top: - -**Table of contents**: +**Table of contents:** -- `Get and prepare PyTorch model <#get-and-prepare-pytorch-model>`__ - - - `Prerequisites <#prerequisites>`__ - - `Loading PyTorch model <#loading-pytorch-model>`__ +- `Get and prepare PyTorch model <#get-and-prepare-pytorch-model>`__ + + - `Prerequisites <#prerequisites>`__ + - `Loading PyTorch model <#loading-pytorch-model>`__ - `Preparing preprocessing and visualization functions <#preparing-preprocessing-and-visualization-functions>`__ - - `Preprocessing <#preprocessing>`__ - - `Visualization <#visualization>`__ - -- `Validation of inference of original model <#validation-of-inference-of-original-model>`__ -- `Export to ONNX <#export-to-onnx>`__ + - `Preprocessing <#preprocessing>`__ + - `Visualization <#visualization>`__ + +- `Validation of inference of original model <#validation-of-inference-of-original-model>`__ +- `Export to ONNX <#export-to-onnx>`__ - `Convert ONNX model to OpenVINO Intermediate Representation (IR) <#convert-onnx-model-to-openvino-intermediate-representation-ir>`__ -- `Verify converted model inference <#verify-converted-model-inference>`__ - - - `Select inference device <#select-inference-device>`__ +- `Verify converted model inference <#verify-converted-model-inference>`__ + - `Select inference device <#select-inference-device>`__ + - `Benchmarking performance of converted model <#benchmarking-performance-of-converted-model>`__ .. |Segmenteer diagram| image:: https://user-images.githubusercontent.com/24582831/148507554-87eb80bd-02c7-4c31-b102-c6141e231ec8.png @@ -64,10 +60,9 @@ notebook consists of the following steps: - Validating inference of the converted model - Benchmark performance of the converted model -Get and prepare PyTorch model `⇑ <#top>`__ +Get and prepare PyTorch model ############################################################################################################################### - The first thing we’ll need to do is clone `repository `__ containing model and helper functions. We will use Tiny model with mask transformer, that @@ -80,14 +75,13 @@ The code from the repository already contains functions that create model and load weights, but we will need to download config and trained weights (checkpoint) file and add some additional helper functions. -Prerequisites `⇑ <#top>`__ +Prerequisites +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Installing requirements - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q timm "mmsegmentation==0.30.0" einops "mmcv==1.7.1" "timm == 0.4.12" onnx @@ -136,7 +130,7 @@ config for our model. Cloning into 'segmenter'... remote: Enumerating objects: 268, done. remote: Total 268 (delta 0), reused 0 (delta 0), pack-reused 268 - Receiving objects: 100% (268/268), 15.34 MiB | 3.75 MiB/s, done. + Receiving objects: 100% (268/268), 15.34 MiB | 3.91 MiB/s, done. Resolving deltas: 100% (117/117), done. @@ -169,12 +163,11 @@ config for our model. model/variant.yml: 0%| | 0.00/940 [00:00`__ +Loading PyTorch model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - PyTorch models are usually an instance of -`torch.nn.Module `__ +```torch.nn.Module`` `__ class, initialized by a state dictionary containing model weights. 
Typical steps to get the model are therefore: @@ -215,21 +208,19 @@ Load normalization settings from config file. .. parsed-literal:: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details. + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details. warnings.warn( -Preparing preprocessing and visualization functions `⇑ <#top>`__ +Preparing preprocessing and visualization functions ############################################################################################################################### - Now we will define utility functions for preprocessing and visualizing the results. -Preprocessing `⇑ <#top>`__ +Preprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Inference input is tensor with shape ``[1, 3, H, W]`` in ``B, C, H, W`` format, where: @@ -272,10 +263,9 @@ normalized with given mean and standard deviation provided in return im -Visualization `⇑ <#top>`__ +Visualization +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Inference output contains labels assigned to each pixel, so the output in our case is ``[150, H, W]`` in ``CL, H, W`` format where: @@ -317,10 +307,9 @@ corresponding to the inferred labels. return pil_blend -Validation of inference of original model `⇑ <#top>`__ +Validation of inference of original model ############################################################################################################################### - Now that we have everything ready, we can perform segmentation on example image ``coco_hollywood.jpg``. @@ -370,10 +359,9 @@ We can see that model segments the image into meaningful parts. Since we are using tiny variant of model, the result is not as good as it is with larger models, but it already shows nice segmentation performance. -Export to ONNX `⇑ <#top>`__ +Export to ONNX ############################################################################################################################### - Now that we’ve verified that the inference of PyTorch model works, we will first export it to ONNX format. @@ -386,11 +374,9 @@ file and create torch dummy input. Input dimensions are in our case - ``H`` - model input image height - ``W`` - model input image width -.. note:: - - Note that H and W are here fixed to 512, as this is required by the - model. Resizing is done inside the inference function from the - original repository. +Note that H and W are here fixed to 512, as this is required by the +model. 
Resizing is done inside the inference function from the +original repository. After that, we use ``export`` function from PyTorch to convert the model to ONNX. The process can generate some warnings, but they are not a @@ -423,51 +409,49 @@ problem. .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:69: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:69: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if H % patch_size > 0: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:71: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:71: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if W % patch_size > 0: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/vit.py:122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/vit.py:122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if x.shape[1] != pos_embed.shape[1]: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/decoder.py:100: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
+ /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/decoder.py:100: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! masks = rearrange(masks, "b (h w) n -> b n h w", h=int(GS)) - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:85: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:85: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if extra_h > 0: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:87: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/204-segmenter-semantic-segmentation/./segmenter/segm/model/utils.py:87: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if extra_w > 0: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version) - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( -Convert ONNX model to OpenVINO Intermediate Representation (IR). `⇑ <#top>`__ +Convert ONNX model to OpenVINO Intermediate Representation (IR) ############################################################################################################################### While ONNX models are directly supported by OpenVINO runtime, it can be useful to convert them to IR format to take advantage of OpenVINO -optimization tools and features. The ``mo.convert_model`` function of +optimization tools and features. The ``ov.convert_model`` function of `model conversion -API `__ +API `__ can be used. The function returns instance of OpenVINO Model class, which is ready to use in Python interface but can also be serialized to OpenVINO IR format for future execution. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize + import openvino as ov - model = mo.convert_model(str(MODEL_DIR / "segmenter.onnx")) + model = ov.convert_model(str(MODEL_DIR / "segmenter.onnx")) # serialize model for saving IR - serialize(model, str(MODEL_DIR / "segmenter.xml")) + ov.save_model(model, str(MODEL_DIR / "segmenter.xml")) -Verify converted model inference `⇑ <#top>`__ +Verify converted model inference ############################################################################################################################### - To test that model was successfully converted, we can use same inference function from original repository, but we need to make custom class. @@ -477,9 +461,6 @@ any additional custom code required to process input. .. code:: ipython3 - from openvino.runtime import Core - - class SegmenterOV: """ Class containing OpenVINO model with all attributes required to work with inference function. @@ -506,7 +487,7 @@ any additional custom code required to process input. 
:param device: device string for selecting inference device """ # init OpenVino core - core = Core() + core = ov.Core() # read model model_xml = core.read_model(model_path) self.model = core.compile_model(model_xml, device) @@ -536,17 +517,16 @@ any additional custom code required to process input. Now that we have created ``SegmenterOV`` helper class, we can use it in inference function. -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -598,27 +578,27 @@ Select device from dropdown list for running inference using OpenVINO: As we can see, we get the same results as with original model. -Benchmarking performance of converted model `⇑ <#top>`__ +Benchmarking performance of converted model ############################################################################################################################### - Finally, use the OpenVINO `Benchmark -Tool `__ +Tool `__ to measure the inference performance of the model. -Note that for more accurate performance, it is recommended to run -``benchmark_app`` in a terminal/command prompt after closing other -applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark -async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to -benchmark on GPU. Run ``benchmark_app --help`` to see an overview of -all command-line options. - .. note:: - Keep in mind that the authors of original paper used V100 GPU, which - is significantly more powerful than the CPU used to obtain the - following throughput. Therefore, FPS can’t be compared directly. + For more accurate performance, it is recommended to run + ``benchmark_app`` in a terminal/command prompt after closing other + applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark + async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to + benchmark on GPU. Run ``benchmark_app --help`` to see an overview of + all command-line options. + + +Keep in mind that the authors of original paper used V100 GPU, which +is significantly more powerful than the CPU used to obtain the +following throughput. Therefore, FPS can’t be compared directly. .. code:: ipython3 @@ -641,74 +621,5 @@ all command-line options. .. parsed-literal:: - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ WARNING ] Default duration 120 seconds is used for unknown device AUTO - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.1-11005-fa1c41994f3-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ INFO ] AUTO - [ INFO ] Build ................................. 2023.0.1-11005-fa1c41994f3-releases/2023/0 - [ INFO ] - [ INFO ] - [Step 3/11] Setting device configuration - [ WARNING ] Performance hint was not explicitly specified in command line. Device(AUTO) performance hint will be set to PerformanceMode.THROUGHPUT. - [Step 4/11] Reading model files - [ INFO ] Loading model files - [ INFO ] Read model took 16.30 ms - [ INFO ] Original model I/O parameters: - [ INFO ] Model inputs: - [ INFO ] input (node: input) : f32 / [...] / [2,3,512,512] - [ INFO ] Model outputs: - [ INFO ] output (node: output) : f32 / [...] 
/ [2,150,512,512] - [Step 5/11] Resizing model to match image sizes and given batch - [ INFO ] Model batch size: 2 - [Step 6/11] Configuring input of the model - [ INFO ] Model inputs: - [ INFO ] input (node: input) : u8 / [N,C,H,W] / [2,3,512,512] - [ INFO ] Model outputs: - [ INFO ] output (node: output) : f32 / [...] / [2,150,512,512] - [Step 7/11] Loading the model to the device - [ INFO ] Compile model took 343.28 ms - [Step 8/11] Querying optimal runtime parameters - [ INFO ] Model: - [ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] MODEL_PRIORITY: Priority.MEDIUM - [ INFO ] MULTI_DEVICE_PRIORITIES: CPU - [ INFO ] CPU: - [ INFO ] CPU_BIND_THREAD: YES - [ INFO ] CPU_THREADS_NUM: 0 - [ INFO ] CPU_THROUGHPUT_STREAMS: 6 - [ INFO ] DEVICE_ID: - [ INFO ] DUMP_EXEC_GRAPH_AS_DOT: - [ INFO ] DYN_BATCH_ENABLED: NO - [ INFO ] DYN_BATCH_LIMIT: 0 - [ INFO ] ENFORCE_BF16: NO - [ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO - [ INFO ] NETWORK_NAME: torch_jit - [ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 6 - [ INFO ] PERFORMANCE_HINT: THROUGHPUT - [ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0 - [ INFO ] PERF_COUNT: NO - [ INFO ] EXECUTION_DEVICES: ['CPU'] - [Step 9/11] Creating infer requests and preparing input tensors - [ WARNING ] No input files were given for input 'input'!. This input will be filled with random values! - [ INFO ] Fill input 'input' with random values - [Step 10/11] Measuring performance (Start inference asynchronously, 6 inference requests, limits: 120000 ms duration) - [ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop). - [ INFO ] First inference took 227.83 ms - [Step 11/11] Dumping statistics report - [ INFO ] Execution Devices:['CPU'] - [ INFO ] Count: 1332 iterations - [ INFO ] Duration: 120630.17 ms - [ INFO ] Latency: - [ INFO ] Median: 542.28 ms - [ INFO ] Average: 542.75 ms - [ INFO ] Min: 344.15 ms - [ INFO ] Max: 609.17 ms - [ INFO ] Throughput: 22.08 FPS + /bin/bash: benchmark_app: command not found diff --git a/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.jpg b/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.jpg index ba32a83b50c22d..66f1523fe3f410 100644 --- a/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.jpg +++ b/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.jpg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0c356343f35091af2179f96680cff400e4d75c00b8c4f9167b7e77fabab05b41 -size 72356 +oid sha256:7e45440c38464f20541ffc81d464dd9889fabd08058af09c8bbecac3a7cd87c2 +size 72372 diff --git a/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.png b/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.png index cdb7ad49633025..056bb9f567bcb5 100644 --- a/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.png +++ b/docs/notebooks/204-segmenter-semantic-segmentation-with-output_files/204-segmenter-semantic-segmentation-with-output_34_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid 
sha256:96e6a09e60d9d6497f6d1065d1097eb684ac816b970f846cfa82f3c04b40ac90 -size 909691 +oid sha256:7c2084994369e17be27f2bc482a182c3b20d42564b4e2f818ca451e8c2ea6cea +size 909654 diff --git a/docs/notebooks/205-vision-background-removal-with-output.rst b/docs/notebooks/205-vision-background-removal-with-output.rst index 976361459d5696..7604ff5432fd08 100644 --- a/docs/notebooks/205-vision-background-removal-with-output.rst +++ b/docs/notebooks/205-vision-background-removal-with-output.rst @@ -1,8 +1,6 @@ Image Background Removal with U^2-Net and OpenVINO™ =================================================== - - This notebook demonstrates background removal in images using U\ :math:`^2`-Net and OpenVINO. @@ -14,21 +12,18 @@ Detection `__. The PyTorch U\ :math:`^2`-Net model is converted to OpenVINO IR format. The model source is available -`here `__. - +`here `__. -.. _top: - -**Table of contents**: +**Table of contents:** - `Preparation <#preparation>`__ - `Install requirements <#install-requirements>`__ - - `Import the PyTorch Library and U2-Net <#import-the-pytorch-library-and-u2-net>`__ + - `Import the PyTorch Library and U^2-Net <#import-the-pytorch-library-and-u2-net>`__ - `Settings <#settings>`__ - - `Load the U2-Net Model <#load-the-u2-net-model>`__ + - `Load the U^2-Net Model <#load-the-u2-net-model>`__ -- `Convert PyTorch U2-Net model to OpenVINO IR <#convert-pytorch-u2-net-model-to-openvino-ir>`__ +- `Convert PyTorch U^2-Net model to OpenVINO IR <#convert-pytorch-u2-net-model-to-openvino-ir>`__ - `Convert Pytorch model to OpenVINO IR Format <#convert-pytorch-model-to-openvino-ir-format>`__ @@ -41,23 +36,23 @@ The model source is available - `References <#references>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### - -Install requirements `⇑ <#top>`__ +Install requirements +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q torch onnx opencv-python matplotlib !pip install -q gdown -Import the PyTorch Library and U\ :math:`^2`-Net `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +.. _import-the-pytorch-library-and-u2-net: + +Import the PyTorch Library and U\ :math:`^2`-Net ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -70,10 +65,9 @@ Import the PyTorch Library and U\ :math:`^2`-Net `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov import torch from IPython.display import HTML, FileLink, display - from openvino.runtime import Core - from openvino.tools import mo .. code:: ipython3 @@ -93,10 +87,9 @@ Import the PyTorch Library and U\ :math:`^2`-Net `⇑ <#top>`__ from notebook_utils import load_image from model.u2net import U2NET, U2NETP -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - This tutorial supports using the original U\ :math:`^2`-Net salient object detection model, as well as the smaller U2NETP version. 
Two sets of weights are supported for the original model: salient object @@ -134,9 +127,11 @@ detection and human segmentation. MODEL_DIR = "model" model_path = Path(MODEL_DIR) / u2net_model.name / Path(u2net_model.name).with_suffix(".pth") -Load the U\ :math:`^2`-Net Model `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +.. _load-the-u2-net-model: + +Load the U\ :math:`^2`-Net Model ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The U\ :math:`^2`-Net human segmentation model weights are stored on Google Drive. They will be downloaded if they are not present yet. The @@ -164,7 +159,7 @@ next cell loads the model and the pre-trained weights. Downloading... From: https://drive.google.com/uc?id=1rbSTGKAE-MTxBYHd-51l2hMOQPT_7EPy To: <_io.BufferedWriter name='model/u2net_lite/u2net_lite.pth'> - 100%|██████████| 4.68M/4.68M [00:01<00:00, 3.98MB/s] + 100%|██████████| 4.68M/4.68M [00:01<00:00, 4.03MB/s] .. parsed-literal:: @@ -191,63 +186,57 @@ next cell loads the model and the pre-trained weights. .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/nn/functional.py:3734: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead. + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/nn/functional.py:3734: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead. warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.") - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1967: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1967: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.") - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) 
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version) - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.) _C._jit_pass_onnx_graph_shape_type_inference( -Convert PyTorch U\ :math:`^2`-Net model to OpenVINO IR `⇑ <#top>`__ -############################################################################################################################### - +.. _convert-pytorch-u2-net-model-to-openvino-ir: -Convert Pytorch model to OpenVINO IR Format `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - -To convert the Pytorch model to OpenVINO IR format with ``FP16`` -precision, use model conversion Python API . We add the mean values to -the model and scale the input with the standard deviation with -``scale_values`` parameter. With these options, it is not necessary to -normalize input data before propagating it through the network. The mean -and standard deviation values can be found in the -`dataloader `__ -file in the `U^2-Net -repository `__ and multiplied by -255 to support images with pixel values from 0-255. +Convert PyTorch U\ :math:`^2`-Net model to OpenVINO IR +############################################################################################################################### -For more information about model conversion, refer to this -`page `__. +Convert Pytorch model to OpenVINO IR Format +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Executing the following command may take a while. +We use model conversion Python API to convert the Pytorch model to +OpenVINO IR format. Executing the following command may take a while. .. 
code:: ipython3 - model_ir = mo.convert_model( - "u2net.onnx", - mean_values=[123.675, 116.28 , 103.53], - scale_values=[58.395, 57.12 , 57.375], - compress_to_fp16=True - ) + model_ir = ov.convert_model("u2net.onnx") -Load and Pre-Process Input Image `⇑ <#top>`__ +Load and Pre-Process Input Image ############################################################################################################################### - While OpenCV reads images in ``BGR`` format, the OpenVINO IR model expects images in ``RGB``. Therefore, convert the images to ``RGB``, -resize them to ``512 x 512`` and transpose the dimensions to the format -that is expected by the OpenVINO IR model. +resize them to ``512 x 512``, and transpose the dimensions to the format +the OpenVINO IR model expects. + +We add the mean values to the image tensor and scale the input with the +standard deviation. It is called the input data normalization before +propagating it through the network. The mean and standard deviation +values can be found in the +`dataloader `__ +file in the `U^2-Net +repository `__ and multiplied by +255 to support images with pixel values from 0-255. .. code:: ipython3 IMAGE_URI = "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/coco_hollywood.jpg" + + input_mean = np.array([123.675, 116.28 , 103.53]).reshape(1, 3, 1, 1) + input_scale = np.array([58.395, 57.12 , 57.375]).reshape(1, 3, 1, 1) + image = cv2.cvtColor( src=load_image(IMAGE_URI), code=cv2.COLOR_BGR2RGB, @@ -257,18 +246,19 @@ that is expected by the OpenVINO IR model. # Convert the image shape to a shape and a data type expected by the network # for OpenVINO IR model: (1, 3, 512, 512). input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) + + input_image = (input_image - input_mean) / input_scale -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -287,16 +277,15 @@ Select device from dropdown list for running inference using OpenVINO: -Do Inference on OpenVINO IR Model `⇑ <#top>`__ +Do Inference on OpenVINO IR Model ############################################################################################################################### - Load the OpenVINO IR model to OpenVINO Runtime and do inference. .. code:: ipython3 + core = ov.Core() # Load the network to OpenVINO Runtime. - core = Core() compiled_model_ir = core.compile_model(model=model_ir, device_name=device.value) # Get the names of input and output layers. input_layer_ir = compiled_model_ir.input(0) @@ -314,13 +303,12 @@ Load the OpenVINO IR model to OpenVINO Runtime and do inference. .. parsed-literal:: - Inference finished. Inference time: 0.119 seconds, FPS: 8.43. + Inference finished. Inference time: 0.117 seconds, FPS: 8.56. -Visualize Results `⇑ <#top>`__ +Visualize Results ############################################################################################################################### - Show the original image, the segmentation result, and the original image with the background removed. @@ -349,10 +337,9 @@ with the background removed. .. 
image:: 205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_22_0.png -Add a Background Image `⇑ <#top>`__ +Add a Background Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - In the segmentation result, all foreground pixels have a value of 1, all background pixels a value of 0. Replace the background image as follows: @@ -415,14 +402,13 @@ background pixels a value of 0. Replace the background image as follows: The generated image coco_hollywood-wall.jpg is saved in the directory output. You can also download the image by clicking on this link: output/coco_hollywood-wall.jpg
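The compositing itself can be expressed with a few NumPy operations. The
sketch below is illustrative rather than the notebook's exact code: it
assumes ``mask`` is the 0/1 segmentation result already resized to the photo
size, and ``image`` and ``background_image`` are RGB arrays of that same
shape.

.. code:: ipython3

    # Illustrative sketch (variable names are assumptions):
    # keep photo pixels where the mask is 1 (foreground),
    # take pixels from the new background where the mask is 0.
    mask_3ch = np.repeat(mask[:, :, None], 3, axis=2).astype(np.uint8)
    composited = image * mask_3ch + background_image * (1 - mask_3ch)
    composited = composited.astype(np.uint8)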
-References `⇑ <#top>`__ +References ############################################################################################################################### - - `PIP install openvino-dev `__ - `Model Conversion - API `__ + API `__ - `U^2-Net `__ - U^2-Net research paper: `U^2-Net: Going Deeper with Nested U-Structure for Salient Object diff --git a/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_22_0.png b/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_22_0.png index e56801a411862b..590ccf31c72a27 100644 --- a/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_22_0.png +++ b/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_22_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:83192799a2c31d56beff242e7d9b1dae795e6cb3ef9c6f754c8cdc102f3af8dd -size 279567 +oid sha256:bf14dea382ed76564920af34763fa7cef9dbb0f5782590c3766dcd9e59017b8f +size 279572 diff --git a/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_24_0.png b/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_24_0.png index 8fd15b011dc8d5..7d5092c88dbe67 100644 --- a/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_24_0.png +++ b/docs/notebooks/205-vision-background-removal-with-output_files/205-vision-background-removal-with-output_24_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a63381a3ccf5d7a134519b437c3197abb82df80edc51bc8fa51c574a186ac1cc -size 927148 +oid sha256:3c75677da763902da7cc433ead49f1ea41e9b7eb6f45284c911d24c39513163f +size 927043 diff --git a/docs/notebooks/206-vision-paddlegan-anime-with-output.rst b/docs/notebooks/206-vision-paddlegan-anime-with-output.rst index 0dc6a88ad4c929..59b8ea9d1d3e91 100644 --- a/docs/notebooks/206-vision-paddlegan-anime-with-output.rst +++ b/docs/notebooks/206-vision-paddlegan-anime-with-output.rst @@ -1,8 +1,6 @@ Photos to Anime with PaddleGAN and OpenVINO =========================================== - - This tutorial demonstrates converting a `PaddlePaddle/PaddleGAN `__ AnimeGAN model to OpenVINO IR format, and shows inference results on the @@ -16,9 +14,7 @@ documentation `__ @@ -46,15 +42,15 @@ documentation `__ - `References <#references>`__ -Preparation `⇑ <#top>`__ +Preparation ############################################################################################################################### -Install requirements `⇑ <#top>`__ +Install requirements +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q "paddlepaddle==2.5.0" "paddle2onnx>=0.6" !pip install -q "git+https://github.com/PaddlePaddle/PaddleGAN.git" --no-deps @@ -73,7 +69,7 @@ Install requirements `⇑ <#top>`__ scikit-image 0.21.0 requires imageio>=2.27, but you have imageio 2.9.0 which is incompatible. -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. 
code:: ipython3 @@ -87,8 +83,8 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov from IPython.display import HTML, display - from openvino.runtime import Core # PaddlePaddle requires a C++ compiler. If importing the paddle packages fails, # install C++. @@ -117,7 +113,7 @@ Imports `⇑ <#top>`__ ) raise -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -132,7 +128,7 @@ Settings `⇑ <#top>`__ ir_path = model_path.with_suffix(".xml") onnx_path = model_path.with_suffix(".onnx") -Functions `⇑ <#top>`__ +Functions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -147,7 +143,7 @@ Functions `⇑ <#top>`__ image = cv2.resize(image, (max_width, new_height)) return image -Inference on PaddleGAN Model `⇑ <#top>`__ +Inference on PaddleGAN Model ############################################################################################################################### The PaddleGAN @@ -165,7 +161,7 @@ source of the function. .. parsed-literal:: - [08/17 16:13:48] ppgan INFO: Found /opt/home/k8sworker/.cache/ppgan/animeganv2_hayao.pdparams + [09/08 23:30:31] ppgan INFO: Found /opt/home/k8sworker/.cache/ppgan/animeganv2_hayao.pdparams .. code:: ipython3 @@ -243,7 +239,7 @@ cell. The anime image was saved to output/coco_bricks_anime_pg.jpg -Show Inference Results on PaddleGAN model `⇑ <#top>`__ +Show Inference Results on PaddleGAN model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -260,14 +256,14 @@ Show Inference Results on PaddleGAN model `⇑ <#top>`__ .. image:: 206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_15_0.png -Model Conversion to ONNX and OpenVINO IR `⇑ <#top>`__ +Model Conversion to ONNX and OpenVINO IR ############################################################################################################################### Convert the PaddleGAN model to OpenVINO IR by first converting PaddleGAN to ONNX with ``paddle2onnx`` and then converting the ONNX model to OpenVINO IR with model conversion API. -Convert to ONNX `⇑ <#top>`__ +Convert to ONNX +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Exporting to ONNX requires specifying an input shape with PaddlePaddle @@ -300,22 +296,22 @@ succeeds, the output of the next cell will include .. parsed-literal:: - 2023-08-17 16:13:56 [INFO] Static PaddlePaddle model saved in model/paddle_model_static_onnx_temp_dir. + 2023-09-08 23:30:39 [INFO] Static PaddlePaddle model saved in model/paddle_model_static_onnx_temp_dir. [Paddle2ONNX] Start to parse PaddlePaddle model... [Paddle2ONNX] Model file path: model/paddle_model_static_onnx_temp_dir/model.pdmodel [Paddle2ONNX] Paramters file path: model/paddle_model_static_onnx_temp_dir/model.pdiparams [Paddle2ONNX] Start to parsing Paddle model... [Paddle2ONNX] Use opset_version = 11 for ONNX export. [Paddle2ONNX] PaddlePaddle model is exported as ONNX format now. - 2023-08-17 16:13:56 [INFO] ONNX model saved in model/paddlegan_anime.onnx. + 2023-09-08 23:30:39 [INFO] ONNX model saved in model/paddlegan_anime.onnx. .. 
parsed-literal:: - I0817 16:13:56.664121 2277406 interpretercore.cc:237] New Executor is Running. + I0908 23:30:39.290753 670433 interpretercore.cc:237] New Executor is Running. -Convert to OpenVINO IR `⇑ <#top>`__ +Convert to OpenVINO IR +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The OpenVINO IR format enables storing the preprocessing normalization @@ -352,34 +348,25 @@ The ``ResizeToScale`` class is called with ``(256,256)`` as the argument for size. Further analysis shows that this is the minimum size to resize to. The ``ResizeToScale`` class transform resizes images to the size specified in the ``ResizeToScale`` parameters, with width and height as -multiples of 32. +multiples of 32. We will preprocess the images the same way before +feeding them to the converted model. -Once the mean and standard deviation values, and the shape of the model -inputs are known, you can use model conversion API and convert the model -to OpenVINO IR with these values. Use ``FP16`` precision and set log -level to ``CRITICAL`` to ignore warnings that are irrelevant for this -demo. For information about setting the parameters, see this -`page `__. +Now we use model conversion API and convert the model to OpenVINO IR. -**Convert ONNX Model to OpenVINO IR with** `Model Conversion Python API `__ +**Convert ONNX Model to OpenVINO IR with**\ `Model Conversion Python +API `__ .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize - print("Exporting ONNX model to OpenVINO IR... This may take a few minutes.") - model = mo.convert_model( + model = ov.convert_model( onnx_path, - input_shape=[1, 3, target_height, target_width], - mean_values=[127.5,127.5,127.5], - scale_values=[127.5,127.5,127.5], - compress_to_fp16=True + input=[1, 3, target_height, target_width], ) # Serialize model in IR format - serialize(model, str(ir_path)) + ov.save_model(model, str(ir_path)) .. parsed-literal:: @@ -387,7 +374,7 @@ demo. For information about setting the parameters, see this Exporting ONNX model to OpenVINO IR... This may take a few minutes. -Show Inference Results on OpenVINO IR and PaddleGAN Models `⇑ <#top>`__ +Show Inference Results on OpenVINO IR and PaddleGAN Models ############################################################################################################################### If the conversion is successful, the output of model conversion API in @@ -399,7 +386,7 @@ from the PaddleGAN model. However, in order to use the OpenVINO IR model without installing PaddleGAN, it is useful to check what these functions do and extract them. -Create Postprocessing Functions `⇑ <#top>`__ +Create Postprocessing Functions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -443,7 +430,7 @@ OpenVINO IR model dstf = np.uint8(dstf) return dstf -Do Inference on OpenVINO IR Model `⇑ <#top>`__ +Do Inference on OpenVINO IR Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Load the OpenVINO IR model and do inference, following the same steps as @@ -455,7 +442,7 @@ The OpenVINO IR model is generated with an input shape that is computed based on the input image. If you do inference on images with different input shapes, results may differ from the PaddleGAN results. 
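As a rough illustration of the sizing rule described above (shorter side at
least 256, both dimensions rounded to multiples of 32), a helper like the one
below can compute the processing shape. It is a simplified sketch of the
idea, not the actual ``ResizeToScale`` transform from PaddleGAN.

.. code:: ipython3

    def compute_paddlegan_target_size(height, width, min_size=256, multiple=32):
        # Scale so that the shorter side is at least `min_size`.
        scale = max(min_size / height, min_size / width, 1.0)
        new_h, new_w = int(round(height * scale)), int(round(width * scale))
        # Round both dimensions down to multiples of 32 (but never below 32).
        new_h = max(multiple, (new_h // multiple) * multiple)
        new_w = max(multiple, (new_w // multiple) * multiple)
        return new_h, new_w

    # Example: a 450 x 600 photo would be processed at this size.
    print(compute_paddlegan_target_size(450, 600))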
-Select inference device `⇑ <#top>`__ +Select inference device ------------------------------------------------------------------------------------------------------------------------------- Select device from dropdown list for running inference using OpenVINO: @@ -464,7 +451,7 @@ Select device from dropdown list for running inference using OpenVINO: import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -486,7 +473,8 @@ Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 # Load and prepare the IR model. - core = Core() + core = ov.Core() + model = core.read_model(model=ir_path) compiled_model = core.compile_model(model=model, device_name=device.value) input_key = compiled_model.input(0) @@ -498,9 +486,14 @@ Select device from dropdown list for running inference using OpenVINO: image_path = Path("./data/coco_bricks.png") image = cv2.cvtColor(cv2.imread(str(image_path), flags=cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB) - # Step 2. Transform the image (only resize and transpose are still required). + # Step 2. Do preprocess transformations. + # Resize the image resized_image = cv2.resize(image, (target_width, target_height)) input_image = resized_image.transpose(2, 0, 1)[None, :, :, :] + # Normalize the image + input_mean = np.array([127.5,127.5,127.5]).reshape(1, 3, 1, 1) + input_scale = np.array([127.5,127.5,127.5]).reshape(1, 3, 1, 1) + input_image = (input_image - input_mean) / input_scale # Step 3. Do inference. result_ir = compiled_model([input_image])[output_key] @@ -542,10 +535,9 @@ Select device from dropdown list for running inference using OpenVINO: .. image:: 206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_37_0.png -Performance Comparison `⇑ <#top>`__ +Performance Comparison ############################################################################################################################### - Measure the time it takes to do inference on an image. This gives an indication of performance. It is not a perfect measure. Since the PaddleGAN model requires quite a bit of memory for inference, only @@ -586,17 +578,19 @@ measure inference on one image. For more accurate benchmarking, use .. parsed-literal:: - OpenVINO IR model in OpenVINO Runtime/CPU: 0.469 seconds per image, FPS: 2.13 - PaddleGAN model on CPU: 6.121 seconds per image, FPS: 0.16 + OpenVINO IR model in OpenVINO Runtime/CPU: 0.438 seconds per image, FPS: 2.28 + PaddleGAN model on CPU: 6.173 seconds per image, FPS: 0.16 -References `⇑ <#top>`__ +References ############################################################################################################################### - `PaddleGAN `__ - `Paddle2ONNX `__ -- `OpenVINO ONNX support `__ -- `Model Conversion API `__ +- `OpenVINO ONNX + support `__ +- `Model Conversion + API `__ The PaddleGAN code that is shown in this notebook is written by PaddlePaddle Authors and licensed under the Apache 2.0 license. 
The diff --git a/docs/notebooks/206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_37_0.png b/docs/notebooks/206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_37_0.png index 0c03f43011cefc..be078e36f1b340 100644 --- a/docs/notebooks/206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_37_0.png +++ b/docs/notebooks/206-vision-paddlegan-anime-with-output_files/206-vision-paddlegan-anime-with-output_37_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:da8386a7ba2d7aec1fb16cf5ad9bb0665a372c044b8b0950c7c7e2fcbe450e67 -size 1931653 +oid sha256:dcdc8be2139b3dc2632e54917fca3d650ed325ed0cbc018f564e5a5fe69675e3 +size 1931633 diff --git a/docs/notebooks/207-vision-paddlegan-superresolution-with-output.rst b/docs/notebooks/207-vision-paddlegan-superresolution-with-output.rst index b19bfc982c628f..d91bd4632495b5 100644 --- a/docs/notebooks/207-vision-paddlegan-superresolution-with-output.rst +++ b/docs/notebooks/207-vision-paddlegan-superresolution-with-output.rst @@ -1,8 +1,6 @@ Super Resolution with PaddleGAN and OpenVINO™ ============================================= - - This notebook demonstrates converting the RealSR (real-world super-resolution) model from `PaddlePaddle/PaddleGAN `__ @@ -18,9 +16,7 @@ from CVPR 2020. This notebook works best with small images (up to 800x600 resolution). -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -40,18 +36,27 @@ This notebook works best with small images (up to 800x600 resolution). - `Show an Animated GIF <#show-an-animated-gif>`__ - `Create a Comparison Video <#create-a-comparison-video>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 + !pip install -q "openvino==2023.1.0.dev20230811" + !pip install -q "paddlepaddle==2.5.0rc0" "paddle2onnx>=0.6" - -.. code:: ipython3 - + !pip install -q "imageio==2.9.0" "imageio-ffmpeg" "numba>=0.53.1" "easydict" "munch" "natsort" !pip install -q "git+https://github.com/PaddlePaddle/PaddleGAN.git" --no-deps + !pip install -q scikit-image + + +.. parsed-literal:: + + ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. + ppgan 2.1.0 requires imageio==2.9.0, but you have imageio 2.31.3 which is incompatible. + ppgan 2.1.0 requires librosa==0.8.1, but you have librosa 0.10.1 which is incompatible. + ppgan 2.1.0 requires opencv-python<=4.6.0.66, but you have opencv-python 4.8.0.76 which is incompatible. + .. code:: ipython3 @@ -63,21 +68,20 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov import paddle from IPython.display import HTML, FileLink, ProgressBar, clear_output, display from IPython.display import Image as DisplayImage from PIL import Image - from openvino.runtime import Core from paddle.static import InputSpec from ppgan.apps import RealSRPredictor sys.path.append("../utils") from notebook_utils import NotebookAlert -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - .. code:: ipython3 # The filenames of the downloaded and converted models. 
@@ -90,14 +94,12 @@ Settings `⇑ <#top>`__ ir_path = model_path.with_suffix(".xml") onnx_path = model_path.with_suffix(".onnx") -Inference on PaddlePaddle Model `⇑ <#top>`__ +Inference on PaddlePaddle Model ############################################################################################################################### - -Investigate PaddleGAN Model `⇑ <#top>`__ +Investigate PaddleGAN Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The `PaddleGAN documentation `__ explains how to run the model with ``sr.run()`` method. Find out what that @@ -114,7 +116,7 @@ source code. .. parsed-literal:: - [08/15 23:08:25] ppgan INFO: Found /opt/home/k8sworker/.cache/ppgan/DF2K_JPEG.pdparams + [09/08 23:31:15] ppgan INFO: Found /opt/home/k8sworker/.cache/ppgan/DF2K_JPEG.pdparams .. code:: ipython3 @@ -152,10 +154,9 @@ To get more information about how the model looks like, use the # sr.model?? -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To show inference on the PaddlePaddle model, set ``PADDLEGAN_INFERENCE`` to ``True`` in the cell below. Keep in mind that performing inference may take some time. @@ -198,18 +199,16 @@ may take some time. print(f"Inference duration: {duration:.2f} seconds") plt.imshow(result_image); -Convert PaddleGAN Model to ONNX and OpenVINO IR `⇑ <#top>`__ +Convert PaddleGAN Model to ONNX and OpenVINO IR ############################################################################################################################### - To convert the PaddlePaddle model to OpenVINO IR, first convert the model to ONNX, and then convert the ONNX model to the OpenVINO IR format. -Convert PaddlePaddle Model to ONNX `⇑ <#top>`__ +Convert PaddlePaddle Model to ONNX +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Ignore PaddlePaddle warnings: @@ -225,12 +224,12 @@ Convert PaddlePaddle Model to ONNX `⇑ <#top>`__ .. parsed-literal:: - 2023-08-15 23:08:31 [INFO] Static PaddlePaddle model saved in model/paddle_model_static_onnx_temp_dir. + 2023-09-08 23:31:21 [INFO] Static PaddlePaddle model saved in model/paddle_model_static_onnx_temp_dir. .. parsed-literal:: - I0815 23:08:31.864205 2062621 interpretercore.cc:267] New Executor is Running. + I0908 23:31:21.665750 670756 interpretercore.cc:267] New Executor is Running. .. parsed-literal:: @@ -241,32 +240,22 @@ Convert PaddlePaddle Model to ONNX `⇑ <#top>`__ [Paddle2ONNX] Start to parsing Paddle model... [Paddle2ONNX] Use opset_version = 13 for ONNX export. [Paddle2ONNX] PaddlePaddle model is exported as ONNX format now. - 2023-08-15 23:08:35 [INFO] ONNX model saved in model/paddlegan_sr.onnx. - + 2023-09-08 23:31:25 [INFO] ONNX model saved in model/paddlegan_sr.onnx. -Convert ONNX Model to OpenVINO IR with Model Conversion Python API `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -.. code:: ipython3 - - from openvino.tools import mo - from openvino.runtime import serialize - - ## Uncomment the command below to show help, which shows the possible arguments for model conversion API. 
- # mo.convert_model(help=True) +Convert ONNX Model to OpenVINO IR with `Model Conversion Python API `__ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 print("Exporting ONNX model to OpenVINO IR... This may take a few minutes.") - model = mo.convert_model( + model = ov.convert_model( onnx_path, - input_shape=input_shape, - compress_to_fp16=True + input=input_shape ) # Serialize model in IR format - serialize(model, str(ir_path)) + ov.save_model(model, str(ir_path)) .. parsed-literal:: @@ -274,22 +263,20 @@ Convert ONNX Model to OpenVINO IR with Model Conversion Python API `⇑ <#top>`_ Exporting ONNX model to OpenVINO IR... This may take a few minutes. -Do Inference on OpenVINO IR Model `⇑ <#top>`__ +Do Inference on OpenVINO IR Model ############################################################################################################################### - .. code:: ipython3 # Read the network and get input and output names. - core = Core() - # Alternatively, the model obtained from `mo.convert_model()` may be used here + core = ov.Core() + # Alternatively, the model obtained from `ov.convert_model()` may be used here model = core.read_model(model=ir_path) input_layer = model.input(0) -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -333,12 +320,12 @@ Select device from dropdown list for running inference using OpenVINO: .. parsed-literal:: - + -.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_27_1.png +.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_25_1.png .. code:: ipython3 @@ -362,7 +349,7 @@ Select device from dropdown list for running inference using OpenVINO: .. parsed-literal:: - Inference duration: 3.30 seconds + Inference duration: 3.26 seconds .. code:: ipython3 @@ -385,18 +372,17 @@ Select device from dropdown list for running inference using OpenVINO: .. parsed-literal:: - + -.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_1.png +.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_29_1.png -Show an Animated GIF `⇑ <#top>`__ +Show an Animated GIF +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To visualize the difference between the bicubic image and the superresolution image, create an animated GIF image that switches between both versions. @@ -423,15 +409,14 @@ between both versions. -.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_33_0.png +.. image:: 207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_0.png :width: 960px -Create a Comparison Video `⇑ <#top>`__ +Create a Comparison Video +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Create a video with a “slider”, showing the bicubic image to the right and the superresolution image on the left. 
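One simple way to build such a comparison frame is to take the left part of
the frame from the superresolution result and the right part from the bicubic
result, moving the split point from frame to frame. The snippet below is a
minimal sketch of that idea, not the notebook's exact implementation; it
assumes ``superresolution_image`` and ``bicubic_image`` are NumPy arrays of
the same shape.

.. code:: ipython3

    def make_slider_frame(sr_image, bicubic_image, split_fraction):
        """Compose one comparison frame: superresolution on the left of the
        split position, bicubic on the right. `split_fraction` is in [0, 1]."""
        frame = bicubic_image.copy()
        split_x = int(frame.shape[1] * split_fraction)
        frame[:, :split_x] = sr_image[:, :split_x]
        return frame

    # Example: frames for one sweep of the slider from left to right.
    frames = [make_slider_frame(superresolution_image, bicubic_image, f / 30)
              for f in range(31)]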
diff --git a/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_27_1.png b/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_25_1.png similarity index 100% rename from docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_27_1.png rename to docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_25_1.png diff --git a/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_1.png b/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_29_1.png similarity index 100% rename from docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_1.png rename to docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_29_1.png diff --git a/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_33_0.png b/docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_0.png similarity index 100% rename from docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_33_0.png rename to docs/notebooks/207-vision-paddlegan-superresolution-with-output_files/207-vision-paddlegan-superresolution-with-output_31_0.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output.rst b/docs/notebooks/208-optical-character-recognition-with-output.rst index 30524055a60e84..eaa1776aaae672 100644 --- a/docs/notebooks/208-optical-character-recognition-with-output.rst +++ b/docs/notebooks/208-optical-character-recognition-with-output.rst @@ -1,17 +1,15 @@ Optical Character Recognition (OCR) with OpenVINO™ ================================================== - - This tutorial demonstrates how to perform optical character recognition (OCR) with OpenVINO models. It is a continuation of the `004-hello-detection <004-hello-detection-with-output.html>`__ tutorial, which shows only text detection. The -`horizontal-text-detection-0001 `__ +`horizontal-text-detection-0001 `__ and -`text-recognition-resnet `__ +`text-recognition-resnet `__ models are used together for text detection and then text recognition. In this tutorial, Open Model Zoo tools including Model Downloader, Model @@ -21,9 +19,7 @@ Zoo `__. For more information, refer to the `104-model-tools <104-model-tools-with-output.html>`__ tutorial. -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -48,9 +44,13 @@ information, refer to the - `Show the OCR Result per Bounding Box <#show-the-ocr-result-per-bounding-box>`__ - `Print Annotations in Plain Text Format <#print-annotations-in-plain-text-format>`__ -Imports `⇑ <#top>`__ -############################################################################################################################### +.. code:: ipython3 + + # Install openvino-dev package + !pip install -q "openvino-dev==2023.1.0.dev20230811" +Imports +############################################################################################################################### .. 
code:: ipython3 @@ -60,20 +60,19 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np + import openvino as ov from IPython.display import Markdown, display from PIL import Image - from openvino.runtime import Core sys.path.append("../utils") from notebook_utils import load_image -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - .. code:: ipython3 - core = Core() + core = ov.Core() model_dir = Path("model") precision = "FP16" @@ -82,10 +81,9 @@ Settings `⇑ <#top>`__ model_dir.mkdir(exist_ok=True) -Download Models `⇑ <#top>`__ +Download Models ############################################################################################################################### - The next cells will run Model Downloader to download the detection and recognition models. If the models have been downloaded before, they will not be downloaded again. @@ -301,10 +299,9 @@ text-recognition-resnet-fc. # for line in download_result: # print(line) -Convert Models `⇑ <#top>`__ +Convert Models ############################################################################################################################### - The downloaded detection model is an Intel model, which is already in OpenVINO Intermediate Representation (OpenVINO IR) format. The text recognition model is a public model which needs to be converted to @@ -335,27 +332,26 @@ Converting text-recognition-resnet-fc… .. parsed-literal:: ========== Converting text-recognition-resnet-fc to ONNX - Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/text-recognition-resnet-fc --model-path=model/public/text-recognition-resnet-fc --model-name=get_model --import-module=model '--model-param=file_config=r"model/public/text-recognition-resnet-fc/vedastr/configs/resnet_fc.py"' '--model-param=weights=r"model/public/text-recognition-resnet-fc/vedastr/ckpt/resnet_fc.pth"' --input-shape=1,1,32,100 --input-names=input --output-names=output --output-file=model/public/text-recognition-resnet-fc/resnet_fc.onnx + Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/models/public/text-recognition-resnet-fc --model-path=model/public/text-recognition-resnet-fc --model-name=get_model --import-module=model '--model-param=file_config=r"model/public/text-recognition-resnet-fc/vedastr/configs/resnet_fc.py"' '--model-param=weights=r"model/public/text-recognition-resnet-fc/vedastr/ckpt/resnet_fc.pth"' --input-shape=1,1,32,100 --input-names=input --output-names=output --output-file=model/public/text-recognition-resnet-fc/resnet_fc.onnx ONNX check passed successfully. 
========== Converting text-recognition-resnet-fc to IR (FP16) - Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=/tmp/tmppkwl27u7 --model_name=text-recognition-resnet-fc --input=input '--mean_values=input[127.5]' '--scale_values=input[127.5]' --output=output --input_model=model/public/text-recognition-resnet-fc/resnet_fc.onnx '--layout=input(NCHW)' '--input_shape=[1, 1, 32, 100]' --compress_to_fp16=True + Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=model/public/text-recognition-resnet-fc/FP16 --model_name=text-recognition-resnet-fc --input=input '--mean_values=input[127.5]' '--scale_values=input[127.5]' --output=output --input_model=model/public/text-recognition-resnet-fc/resnet_fc.onnx '--layout=input(NCHW)' '--input_shape=[1, 1, 32, 100]' --compress_to_fp16=True - [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression by removing argument --compress_to_fp16 or set it to false --compress_to_fp16=False. - Find more information about compression to FP16 at https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_FP16_Compression.html + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /tmp/tmppkwl27u7/text-recognition-resnet-fc.xml - [ SUCCESS ] BIN file: /tmp/tmppkwl27u7/text-recognition-resnet-fc.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/208-optical-character-recognition/model/public/text-recognition-resnet-fc/FP16/text-recognition-resnet-fc.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/208-optical-character-recognition/model/public/text-recognition-resnet-fc/FP16/text-recognition-resnet-fc.bin -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. 
code:: ipython3 @@ -380,17 +376,15 @@ Select device from dropdown list for running inference using OpenVINO: -Object Detection `⇑ <#top>`__ +Object Detection ############################################################################################################################### - Load a detection model, load an image, do inference and get the detection inference result. -Load a Detection Model `⇑ <#top>`__ +Load a Detection Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 detection_model = core.read_model( @@ -400,10 +394,9 @@ Load a Detection Model `⇑ <#top>`__ detection_input_layer = detection_compiled_model.input(0) -Load an Image `⇑ <#top>`__ +Load an Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # The `image_file` variable can point to a URL or a local image. @@ -424,13 +417,12 @@ Load an Image `⇑ <#top>`__ -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_15_0.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_16_0.png -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Text boxes are detected in the images and returned as blobs of data in the shape of ``[100, 5]``. Each description of detection has the ``[x_min, y_min, x_max, y_max, conf]`` format. @@ -443,10 +435,9 @@ the shape of ``[100, 5]``. Each description of detection has the # Remove zero only boxes. boxes = boxes[~np.all(boxes == 0, axis=1)] -Get Detection Results `⇑ <#top>`__ +Get Detection Results +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 def multiply_by_ratio(ratio_x, ratio_y, box): @@ -513,17 +504,15 @@ Get Detection Results `⇑ <#top>`__ return rgb_image -Text Recognition `⇑ <#top>`__ +Text Recognition ############################################################################################################################### - Load the text recognition model and do inference on the detected boxes from the detection model. -Load Text Recognition Model `⇑ <#top>`__ +Load Text Recognition Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 recognition_model = core.read_model( @@ -538,13 +527,9 @@ Load Text Recognition Model `⇑ <#top>`__ # Get the height and width of the input layer. _, _, H, W = recognition_input_layer.shape - -.. _do-the-inference: - -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Calculate scale for image resizing. 
@@ -588,14 +573,12 @@ Do Inference `⇑ <#top>`__ boxes_with_annotations = list(zip(boxes, annotations)) -Show Results `⇑ <#top>`__ +Show Results ############################################################################################################################### - -Show Detected Text Boxes and OCR Results for the Image `⇑ <#top>`__ +Show Detected Text Boxes and OCR Results for the Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Visualize the result by drawing boxes around recognized text and showing the OCR result from the text recognition model. @@ -606,13 +589,12 @@ the OCR result from the text recognition model. -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_25_0.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_26_0.png -Show the OCR Result per Bounding Box `⇑ <#top>`__ +Show the OCR Result per Bounding Box +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Depending on the image, the OCR result may not be readable in the image with boxes, as displayed in the cell above. Use the code below to display the extracted boxes and the OCR result per box. @@ -624,7 +606,7 @@ display the extracted boxes and the OCR result per box. -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_0.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_0.png @@ -632,7 +614,7 @@ building -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_2.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_2.png @@ -640,7 +622,7 @@ noyce -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_4.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_4.png @@ -648,7 +630,7 @@ noyce -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_6.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_6.png @@ -656,7 +638,7 @@ n -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_8.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_8.png @@ -664,17 +646,16 @@ center -.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_10.png +.. image:: 208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_10.png robert -Print Annotations in Plain Text Format `⇑ <#top>`__ +Print Annotations in Plain Text Format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Print annotations for detected text based on their position in the input image, starting from the upper left corner. 
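Ordering the annotations from the upper left corner typically means sorting
the boxes by their vertical coordinate first and their horizontal coordinate
second. A minimal sketch of that ordering (a strict ``(y, x)`` sort, without
grouping boxes into text lines), using the ``boxes_with_annotations`` pairs
created above and the ``[x_min, y_min, x_max, y_max, conf]`` box layout,
could look as follows:

.. code:: ipython3

    # Sort by y_min first (top to bottom), then by x_min (left to right).
    ordered = sorted(boxes_with_annotations, key=lambda item: (item[0][1], item[0][0]))
    print(" ".join(annotation for _, annotation in ordered))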
diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_15_0.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_16_0.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_15_0.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_16_0.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_25_0.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_26_0.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_25_0.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_26_0.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_0.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_0.jpg similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_0.jpg rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_0.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_0.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_0.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_0.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_0.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_10.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_10.jpg similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_10.jpg rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_10.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_10.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_10.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_10.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_10.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_2.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_2.jpg 
similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_2.jpg rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_2.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_2.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_2.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_2.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_2.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_4.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_4.jpg similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_4.jpg rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_4.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_4.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_4.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_4.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_4.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_6.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_6.jpg similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_6.jpg rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_6.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_6.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_6.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_6.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_6.png diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_8.jpg b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_8.jpg similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_8.jpg rename to 
docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_8.jpg diff --git a/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_8.png b/docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_8.png similarity index 100% rename from docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_27_8.png rename to docs/notebooks/208-optical-character-recognition-with-output_files/208-optical-character-recognition-with-output_28_8.png diff --git a/docs/notebooks/209-handwritten-ocr-with-output.rst b/docs/notebooks/209-handwritten-ocr-with-output.rst index 454802bf631b0e..56413111683090 100644 --- a/docs/notebooks/209-handwritten-ocr-with-output.rst +++ b/docs/notebooks/209-handwritten-ocr-with-output.rst @@ -1,8 +1,6 @@ Handwritten Chinese and Japanese OCR with OpenVINO™ =================================================== - - In this tutorial, we perform optical character recognition (OCR) for handwritten Chinese (simplified) and Japanese. An OCR tutorial using the Latin alphabet is available in `notebook @@ -10,18 +8,17 @@ Latin alphabet is available in `notebook This model is capable of processing only one line of symbols at a time. The models used in this notebook are -`handwritten-japanese-recognition-0001 `__ +`handwritten-japanese-recognition-0001 `__ and -`handwritten-simplified-chinese-0001 `__. +`handwritten-simplified-chinese-0001 `__. To decode model outputs as readable text `kondate_nakayosi `__ and `scut_ept `__ -charlists are used. Both models are available on `Open Model Zoo `__. - -.. _top: +charlists are used. Both models are available on `Open Model +Zoo `__. -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -37,9 +34,13 @@ charlists are used. Both models are available on `Open Model Zoo `__ - `Print the Output <#print-the-output>`__ -Imports `⇑ <#top>`__ -############################################################################################################################### +.. code:: ipython3 + # Install openvino-dev package + !pip install -q "openvino-dev==2023.1.0.dev20230811" + +Imports +############################################################################################################################### .. code:: ipython3 @@ -50,13 +51,12 @@ Imports `⇑ <#top>`__ import cv2 import matplotlib.pyplot as plt import numpy as np - from openvino.runtime import Core + import openvino as ov -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - -Set up all constants and folders used in this notebook +Set up all constants and folders used in this notebook: .. code:: ipython3 @@ -87,10 +87,9 @@ To group files, you have to define the collection. In this case, use demo_image_name="handwritten_japanese_test.png", ) -Select a Language `⇑ <#top>`__ +Select a Language ############################################################################################################################### - Depending on your choice you will need to change a line of code in the cell below. 
@@ -106,10 +105,9 @@ If you want to perform OCR on a text in Japanese, set selected_language = languages.get(language) -Download the Model `⇑ <#top>`__ +Download the Model ############################################################################################################################### - In addition to images and charlists, you need to download the model file. In the sections below, there are cells for downloading either the Chinese or Japanese model. @@ -143,32 +141,28 @@ and downloads the selected model. -Load the Model and Execute `⇑ <#top>`__ +Load the Model and Execute ############################################################################################################################### - When all files are downloaded and language is selected, read and compile the network to run inference. The path to the model is defined based on the selected language. .. code:: ipython3 - core = Core() + core = ov.Core() path_to_model = path_to_model_weights.with_suffix(".xml") model = core.read_model(model=path_to_model) -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() - device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -191,10 +185,9 @@ Select device from dropdown list for running inference using OpenVINO: compiled_model = core.compile_model(model=model, device_name=device.value) -Fetch Information About Input and Output Layers `⇑ <#top>`__ +Fetch Information About Input and Output Layers ############################################################################################################################### - Now that the model is loaded, fetch information about the input and output layers (shape). @@ -203,10 +196,9 @@ output layers (shape). recognition_output_layer = compiled_model.output(0) recognition_input_layer = compiled_model.input(0) -Load an Image `⇑ <#top>`__ +Load an Image ############################################################################################################################### - Next, load an image. The model expects a single-channel image as input, so the image is read in grayscale. @@ -249,10 +241,9 @@ keep letters proportional and meet input shape. # Reshape to network input shape. input_image = resized_image[None, None, :, :] -Visualize Input Image `⇑ <#top>`__ +Visualize Input Image ############################################################################################################################### - After preprocessing, you can display the image. .. code:: ipython3 @@ -263,13 +254,12 @@ After preprocessing, you can display the image. -.. image:: 209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_21_0.png +.. image:: 209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_22_0.png -Prepare Charlist `⇑ <#top>`__ +Prepare Charlist ############################################################################################################################### - The model is loaded and the image is ready. The only element left is the charlist, which is downloaded. You must add a blank symbol at the beginning of the charlist before using it. This is expected for both the @@ -286,10 +276,9 @@ Chinese and Japanese models. 
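For readers new to the blank-symbol convention, the short sketch below illustrates why the blank occupies index 0 of the charlist: CTC-style decoding drops that index and collapses repeated frames. The index values are made up for illustration and this is not a cell from the notebook.

.. code:: ipython3

    import numpy as np

    # Illustrative only: the blank symbol occupies index 0 of the charlist.
    blank_char = "~"
    letters = blank_char + "abc"          # 0 -> blank, 1 -> 'a', 2 -> 'b', 3 -> 'c'

    # Made-up per-frame argmax indexes, as a recognition model would produce.
    frame_indexes = np.array([1, 1, 0, 2, 0, 0, 3])

    # Drop blanks and collapse repeats, the usual CTC-style post-processing.
    decoded, previous = [], 0
    for index in frame_indexes:
        if index != 0 and index != previous:
            decoded.append(letters[index])
        previous = index
    print("".join(decoded))               # -> "abc"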
with open(f"{charlist_folder}/{used_charlist}", "r", encoding="utf-8") as charlist: letters = blank_char + "".join(line.strip() for line in charlist) -Run Inference `⇑ <#top>`__ +Run Inference ############################################################################################################################### - Now, run inference. The ``compiled_model()`` function takes a list with input(s) in the same order as model input(s). Then, fetch the output from output tensors. @@ -299,10 +288,9 @@ from output tensors. # Run inference on the model predictions = compiled_model([input_image])[recognition_output_layer] -Process the Output Data `⇑ <#top>`__ +Process the Output Data ############################################################################################################################### - The output of a model is in the ``W x B x L`` format, where: - W - output sequence length @@ -340,10 +328,9 @@ Finally, get the symbols from corresponding indexes in the charlist. # Assign letters to indexes from the output array. output_text = [letters[letter_index] for letter_index in output_text_indexes] -Print the Output `⇑ <#top>`__ +Print the Output ############################################################################################################################### - Now, having a list of letters predicted by the model, you can display the image with predicted text printed below. @@ -362,5 +349,5 @@ the image with predicted text printed below. -.. image:: 209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_30_1.png +.. image:: 209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_31_1.png diff --git a/docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_21_0.png b/docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_22_0.png similarity index 100% rename from docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_21_0.png rename to docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_22_0.png diff --git a/docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_30_1.png b/docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_31_1.png similarity index 100% rename from docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_30_1.png rename to docs/notebooks/209-handwritten-ocr-with-output_files/209-handwritten-ocr-with-output_31_1.png diff --git a/docs/notebooks/210-slowfast-video-recognition-with-output.rst b/docs/notebooks/210-slowfast-video-recognition-with-output.rst index c2bcfa25c5d064..b7f3fdf9ae7e83 100644 --- a/docs/notebooks/210-slowfast-video-recognition-with-output.rst +++ b/docs/notebooks/210-slowfast-video-recognition-with-output.rst @@ -1,8 +1,6 @@ Video Recognition using SlowFast and OpenVINO™ ============================================== - - Teaching machines to detect, understand and analyze the contents of images has been one of the more well-known and well-studied problems in computer vision. However, analyzing videos to understand what is @@ -40,9 +38,7 @@ This tutorial consists of the following steps .. |image0| image:: https://user-images.githubusercontent.com/34324155/143044111-94676f64-7ba8-4081-9011-f8054bed7030.png -.. 
_top: - -**Table of contents**: +**Table of contents:** - `Prepare PyTorch Model <#prepare-pytorch-model>`__ @@ -50,26 +46,24 @@ This tutorial consists of the following steps - `Imports and Settings <#imports-and-settings>`__ - `Export to ONNX <#export-to-onnx>`__ -- `Convert ONNX to OpenVINO™ Intermediate Representation <#convert-onnx-to-openvino™-intermediate-representation>`__ +- `Convert ONNX to OpenVINO™ Intermediate Representation <#convert-onnx-to-openvino-intermediate-representation>`__ - `Select inference device <#select-inference-device>`__ - `Verify Model Inference <#verify-model-inference>`__ -Prepare PyTorch Model `⇑ <#top>`__ +Prepare PyTorch Model ############################################################################################################################### - -Install necessary packages `⇑ <#top>`__ +Install necessary packages +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 - !pip install fvcore -q + !pip install -q "openvino==2023.1.0.dev20230811" + !pip install -q fvcore -Imports and Settings `⇑ <#top>`__ +Imports and Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import json @@ -81,7 +75,7 @@ Imports and Settings `⇑ <#top>`__ from pathlib import Path from typing import Any, List, Dict from IPython.display import Video - from openvino.runtime.ie_api import CompiledModel + import openvino as ov sys.path.append("../utils") from notebook_utils import download_file @@ -865,7 +859,7 @@ model. frames=frames, num_frames=num_frames, crop_size=crop_size, mean=mean, std=std ) - if isinstance(model, CompiledModel): + if isinstance(model, ov.CompiledModel): # openvino compiled model output_blob = model.output(0) predictions = model(inputs)[output_blob] @@ -920,10 +914,9 @@ inference using the same. The top 5 predictions can be seen below. Predicted labels: archery, throwing axe, playing paintball, golf driving, riding or walking with horse -Export to ONNX `⇑ <#top>`__ +Export to ONNX ############################################################################################################################### - Now that we have obtained our trained model and checked inference with it, we export the PyTorch model to Open Neural Network Exchange(ONNX) format, an open format for representing machine learning models, so that @@ -945,29 +938,24 @@ quantization. export_params=True, ) -Convert ONNX to OpenVINO™ Intermediate Representation `⇑ <#top>`__ +Convert ONNX to OpenVINO Intermediate Representation ############################################################################################################################### - Now that our ONNX model is ready, we can convert it to IR format. In this format, the network is represented using two files: an ``xml`` file describing the network architecture and an accompanying binary file that stores constant values such as convolution weights in a binary format. We can use model conversion API for converting into IR format as -follows. The ``convert_model`` method returns an -``openvino.runtime.Model`` object that can either be compiled and -inferred or serialized. +follows. The ``ov.convert_model`` method returns an ``ov.Model`` object +that can either be compiled and inferred or serialized. .. 
code:: ipython3 - from openvino.runtime import serialize - from openvino.tools import mo - - model = mo.convert_model(ONNX_MODEL_PATH) + model = ov.convert_model(ONNX_MODEL_PATH) IR_PATH = MODEL_DIR / "slowfast-r50.xml" # serialize model for saving IR - serialize(model=model, xml_path=str(IR_PATH)) + ov.save_model(model=model, output_model=str(IR_PATH), compress_to_fp16=False) Next, we read and compile the serialized model using OpenVINO runtime. The ``read_model`` function expects the ``.bin`` weights file to have @@ -977,17 +965,14 @@ using the ``weights`` parameter. .. code:: ipython3 - from openvino.runtime import Core - - core = Core() + core = ov.Core() # read converted model conv_model = core.read_model(str(IR_PATH)) -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -1017,10 +1002,9 @@ Select device from dropdown list for running inference using OpenVINO: # load model on device compiled_model = core.compile_model(model=conv_model, device_name=device.value) -Verify Model Inference `⇑ <#top>`__ +Verify Model Inference ############################################################################################################################### - Using the compiled model, we run inference on the same sample video and print the top 5 predictions again. diff --git a/docs/notebooks/211-speech-to-text-with-output.rst b/docs/notebooks/211-speech-to-text-with-output.rst index 95d919eb6d637f..424c1983689e0d 100644 --- a/docs/notebooks/211-speech-to-text-with-output.rst +++ b/docs/notebooks/211-speech-to-text-with-output.rst @@ -1,8 +1,6 @@ Speech to Text with OpenVINO™ ============================= - - This tutorial demonstrates speech-to-text recognition with OpenVINO. This tutorial uses the `QuartzNet @@ -13,9 +11,7 @@ with Connectionist Temporal Classification (CTC) loss. The model is available from `Open Model Zoo `__. -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Settings <#settings>`__ @@ -43,13 +39,12 @@ Zoo `__. - `Implementation of Decoding <#implementation-of-decoding>`__ - `Run Decoding and Print Output <#run-decoding-and-print-output>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 - !pip install -q "librosa>=0.8.1" + !pip install -q "librosa>=0.8.1" "openvino-dev==2023.1.0.dev20230811" "onnx" .. code:: ipython3 @@ -64,13 +59,11 @@ Imports `⇑ <#top>`__ import librosa.display import numpy as np import scipy - from openvino.runtime import Core, serialize, Tensor - from openvino.tools import mo + import openvino as ov -Settings `⇑ <#top>`__ +Settings ############################################################################################################################### - In this part, all variables used in the notebook are set. .. code:: ipython3 @@ -82,17 +75,16 @@ In this part, all variables used in the notebook are set. precision = "FP16" model_name = "quartznet-15x5-en" -Download and Convert Public Model `⇑ <#top>`__ +Download and Convert Public Model ############################################################################################################################### -If it is your first run, models will be downloaded and converted here. 
It my take a few minutes. -Use ``omz_downloader`` and ``omz_converter``, which are command-line -tools from the ``openvino-dev`` package. +If it is your first run, models will be downloaded and converted here. +It my take a few minutes. Use ``omz_downloader`` and ``omz_converter``, +which are command-line tools from the ``openvino-dev`` package. -Download Model `⇑ <#top>`__ +Download Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The ``omz_downloader`` tool automatically creates a directory structure and downloads the selected model. This step is skipped if the model is already downloaded. The selected model comes from the public directory, @@ -109,10 +101,9 @@ Representation (OpenVINO IR). download_command = f"omz_downloader --name {model_name} --output_dir {download_folder} --precision {precision}" ! $download_command -Convert Model `⇑ <#top>`__ +Convert Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - In previous step, model was downloaded in PyTorch format. Currently, PyTorch models supported in OpenVINO via ONNX exporting, ``torch.onnx.export`` function helps to trace PyTorch model to ONNX and @@ -213,9 +204,9 @@ Intermediate Representation format for applying optimizations. dynamic_axes={"audio_signal": {0: "batch_size", 2: "wave_len"}, "output": {0: "batch_size", 2: "wave_len"}} ) # convert model to OpenVINO Model using model conversion API - ov_model = mo.convert_model(str(onnx_model_path)) - # serialize model to IR for next usage - serialize(ov_model, str(converted_model_path)) + ov_model = ov.convert_model(str(onnx_model_path)) + # save model in IR format for next usage + ov.save_model(ov_model, str(converted_model_path)) .. code:: ipython3 @@ -227,16 +218,14 @@ Intermediate Representation format for applying optimizations. downloaded_model_path = Path("output/public/quartznet-15x5-en/models") convert_model(downloaded_model_path, path_to_converted_model) -Audio Processing `⇑ <#top>`__ +Audio Processing ############################################################################################################################### - Now that the model is converted, load an audio file. -Define constants `⇑ <#top>`__ +Define constants +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - First, locate an audio file and define the alphabet used by the model. This tutorial uses the Latin alphabet beginning with a space symbol and ending with a blank symbol. In this case it will be ``~``, but that @@ -247,10 +236,9 @@ could be any other character. audio_file_name = "edge_to_cloud.ogg" alphabet = " abcdefghijklmnopqrstuvwxyz'~" -Available Audio Formats `⇑ <#top>`__ +Available Audio Formats +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - There are multiple supported audio formats that can be used with the model: @@ -259,10 +247,9 @@ model: ``RF64``, ``SD2``, ``SDS``, ``IRCAM``, ``VOC``, ``W64``, ``WAV``, ``NIST``, ``WAVEX``, ``WVE``, ``XI`` -Load Audio File `⇑ <#top>`__ +Load Audio File +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Load the file after checking a file extension. Pass ``sr`` (stands for a ``sampling rate``) as an additional parameter. 
The model supports files with a ``sampling rate`` of 16 kHz. @@ -291,10 +278,9 @@ Now, you can play your audio file. -Visualize Audio File `⇑ <#top>`__ +Visualize Audio File +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - You can visualize how your audio file presents on a wave plot and spectrogram. @@ -329,10 +315,9 @@ spectrogram. .. image:: 211-speech-to-text-with-output_files/211-speech-to-text-with-output_21_3.png -Change Type of Data `⇑ <#top>`__ +Change Type of Data +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The file loaded in the previous step may contain data in ``float`` type with a range of values between -1 and 1. To generate a viable input, multiply each value by the max value of ``int16`` and convert it to @@ -344,10 +329,9 @@ multiply each value by the max value of ``int16`` and convert it to audio = (audio * (2**15 - 1)) audio = audio.astype(np.int16) -Convert Audio to Mel Spectrum `⇑ <#top>`__ +Convert Audio to Mel Spectrum +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Next, convert the pre-pre-processed audio to `Mel Spectrum `__. For more information on why it needs to be done, refer to `this @@ -385,10 +369,9 @@ article `__ +Run Conversion from Audio to Mel Format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - In this step, convert a current audio file into `Mel scale `__. @@ -396,10 +379,9 @@ scale `__. mel_basis, spec = audio_to_mel(audio=audio.flatten(), sampling_rate=sampling_rate) -Visualize Mel Spectrogram `⇑ <#top>`__ +Visualize Mel Spectrogram +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - For more information about Mel spectrogram, refer to this `article `__. The first image visualizes Mel frequency spectrogram, the second one @@ -421,36 +403,34 @@ presents filter bank for converting Hz to Mels. .. image:: 211-speech-to-text-with-output_files/211-speech-to-text-with-output_29_1.png -Adjust Mel scale to Input `⇑ <#top>`__ +Adjust Mel scale to Input +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Before reading the network, make sure that the input is ready. .. code:: ipython3 audio = mel_to_input(mel_basis=mel_basis, spec=spec) -Load the Model `⇑ <#top>`__ +Load the Model ############################################################################################################################### - Now, you can read and load the network. .. code:: ipython3 - ie = Core() + core = ov.Core() You may run the model on multiple devices. By default, it will load the model on CPU (you can choose manually CPU, GPU etc.) or let the engine choose the best available device (AUTO). To list all available devices that can be used, run -``print(ie.available_devices)`` command. +``print(core.available_devices)`` command. .. code:: ipython3 - print(ie.available_devices) + print(core.available_devices) .. parsed-literal:: @@ -464,8 +444,6 @@ Select device from dropdown list import ipywidgets as widgets - core = Core() - device = widgets.Dropdown( options=core.available_devices + ["AUTO"], value='AUTO', @@ -477,19 +455,18 @@ Select device from dropdown list .. 
code:: ipython3 - model = ie.read_model( + model = core.read_model( model=f"{model_folder}/public/{model_name}/{precision}/{model_name}.xml" ) model_input_layer = model.input(0) shape = model_input_layer.partial_shape shape[2] = -1 model.reshape({model_input_layer: shape}) - compiled_model = ie.compile_model(model=model, device_name=device.value) + compiled_model = core.compile_model(model=model, device_name=device.value) -Do Inference `⇑ <#top>`__ +Do Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Everything is set up. Now, the only thing that remains is passing input to the previously loaded network and running inference. @@ -497,12 +474,11 @@ to the previously loaded network and running inference. output_layer_ir = compiled_model.output(0) - character_probabilities = compiled_model([Tensor(audio)])[output_layer_ir] + character_probabilities = compiled_model([ov.Tensor(audio)])[output_layer_ir] -Read Output `⇑ <#top>`__ +Read Output +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - After inference, you need to reach out the output. The default output format for ``QuartzNet 15x5`` are per-frame probabilities (after LogSoftmax) for every symbol in the alphabet, name - output, shape - @@ -530,10 +506,9 @@ The last step is getting symbols from corresponding indexes in charlist. # Run argmax to pick most possible symbols character_probabilities = np.argmax(character_probabilities, axis=1) -Implementation of Decoding `⇑ <#top>`__ +Implementation of Decoding +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - To decode previously explained output, you need the `Connectionist Temporal Classification (CTC) decode `__ @@ -550,10 +525,9 @@ function. This solution will remove consecutive letters from the output. previous_letter_id = letter_index return ''.join(transcription) -Run Decoding and Print Output `⇑ <#top>`__ +Run Decoding and Print Output +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 transcription = ctc_greedy_decode(character_probabilities) diff --git a/docs/notebooks/212-pyannote-speaker-diarization-with-output.rst b/docs/notebooks/212-pyannote-speaker-diarization-with-output.rst index 2e8af021276050..78ea66c42e7ee4 100644 --- a/docs/notebooks/212-pyannote-speaker-diarization-with-output.rst +++ b/docs/notebooks/212-pyannote-speaker-diarization-with-output.rst @@ -1,8 +1,6 @@ Speaker diarization =================== - - Speaker diarization is the process of partitioning an audio stream containing human speech into homogeneous segments according to the identity of each speaker. It can enhance the readability of an automatic @@ -39,9 +37,7 @@ card `__, `repo `__ and `paper `__. -.. _top: - -**Table of contents**: +**Table of contents:** - `Prerequisites <#prerequisites>`__ - `Prepare pipeline <#prepare-pipeline>`__ @@ -52,10 +48,9 @@ card `__, - `Replace segmentation model with OpenVINO <#replace-segmentation-model-with-openvino>`__ - `Run speaker diarization with OpenVINO <#run-speaker-diarization-with-openvino>`__ -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - .. 
code:: ipython3 !pip install -q -r requirements.txt @@ -65,17 +60,19 @@ Prerequisites `⇑ <#top>`__ DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. - onnx 1.14.0 requires protobuf>=3.20.2, but you have protobuf 3.20.1 which is incompatible. + onnx 1.14.1 requires protobuf>=3.20.2, but you have protobuf 3.20.1 which is incompatible. + onnxconverter-common 1.14.0 requires protobuf==3.20.2, but you have protobuf 3.20.1 which is incompatible. paddlepaddle 2.5.0rc0 requires protobuf>=3.20.2; platform_system != "Windows", but you have protobuf 3.20.1 which is incompatible. + ppgan 2.1.0 requires imageio==2.9.0, but you have imageio 2.31.3 which is incompatible. ppgan 2.1.0 requires librosa==0.8.1, but you have librosa 0.9.2 which is incompatible. ppgan 2.1.0 requires opencv-python<=4.6.0.66, but you have opencv-python 4.8.0.76 which is incompatible. tensorflow 2.12.0 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.20.1 which is incompatible. + tf2onnx 1.15.1 requires protobuf~=3.20.2, but you have protobuf 3.20.1 which is incompatible. -Prepare pipeline `⇑ <#top>`__ +Prepare pipeline ############################################################################################################################### - Traditional Speaker Diarization systems can be generalized into a five-step process: @@ -132,7 +129,6 @@ hub `__. You can log in on HuggingFace Hub in the notebook environment using the following code: - .. code:: python @@ -151,9 +147,17 @@ hub `__. pipeline = Pipeline.from_pretrained("philschmid/pyannote-speaker-diarization-endpoint") -Load test audio file `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 2023-09-08 23:36:40.468953: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:36:40.503440: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:36:41.110289: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +Load test audio file +############################################################################################################################### .. code:: ipython3 @@ -208,10 +212,9 @@ Load test audio file `⇑ <#top>`__ .. 
image:: 212-pyannote-speaker-diarization-with-output_files/212-pyannote-speaker-diarization-with-output_9_1.png -Run inference pipeline `⇑ <#top>`__ +Run inference pipeline ############################################################################################################################### - For running inference, we should provide a path to input audio to the pipeline @@ -231,7 +234,7 @@ pipeline .. parsed-literal:: - Diarization pipeline took 15.37 s + Diarization pipeline took 15.75 s The result of running the pipeline can be represented as a diagram @@ -270,7 +273,7 @@ We can also print each time frame and corresponding speaker: start=27.8s stop=29.5s speaker_SPEAKER_02 -Convert model to OpenVINO Intermediate Representation format. `⇑ <#top>`__ +Convert model to OpenVINO Intermediate Representation format ############################################################################################################################### For best results with OpenVINO, it is recommended to convert the model @@ -310,10 +313,9 @@ with ``openvino.runtime.serialize``. Model successfully converted to IR and saved to pyannote-segmentation.xml -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -338,10 +340,9 @@ Select device from dropdown list for running inference using OpenVINO: -Replace segmentation model with OpenVINO `⇑ <#top>`__ +Replace segmentation model with OpenVINO ############################################################################################################################### - .. code:: ipython3 from openvino.runtime import Core @@ -371,10 +372,9 @@ Replace segmentation model with OpenVINO `⇑ <#top>`__ pipeline._segmentation.infer = infer_segm -Run speaker diarization with OpenVINO `⇑ <#top>`__ +Run speaker diarization with OpenVINO ############################################################################################################################### - .. code:: ipython3 start = time.perf_counter() @@ -388,7 +388,7 @@ Run speaker diarization with OpenVINO `⇑ <#top>`__ .. parsed-literal:: - Diarization pipeline took 14.69 s + Diarization pipeline took 15.15 s .. code:: ipython3 diff --git a/docs/notebooks/213-question-answering-with-output.rst b/docs/notebooks/213-question-answering-with-output.rst index 9b1be824b7a9a9..18951ef96c1162 100644 --- a/docs/notebooks/213-question-answering-with-output.rst +++ b/docs/notebooks/213-question-answering-with-output.rst @@ -1,8 +1,6 @@ Interactive question answering with OpenVINO™ ============================================= - - This demo shows interactive question answering with OpenVINO, using `small BERT-large-like model `__ @@ -11,34 +9,30 @@ larger BERT-large model. The model comes from `Open Model Zoo `__. Final part of this notebook provides live inference results from your inputs. -.. 
_top: - -**Table of contents**: - -- `Imports <#imports>`__ - -- `The model <#the-model>`__ +**Table of contents:** - - `Download the model <#download-the-model>`__ - - `Load the model <#load-the-model>`__ +- `Imports <#imports>`__ +- `The model <#the-model>`__ - - `Select inference device <#select-inference-device>`__ + - `Download the model <#download-the-model>`__ + - `Load the model <#load-the-model>`__ + + - `Select inference device <#select-inference-device>`__ -- `Processing <#processing>`__ +- `Processing <#processing>`__ - - `Preprocessing <#preprocessing>`__ - - `Postprocessing <#postprocessing>`__ - - `Main Processing Function <#main-processing-function>`__ + - `Preprocessing <#preprocessing>`__ + - `Postprocessing <#postprocessing>`__ + - `Main Processing Function <#main-processing-function>`__ + +- `Run <#Run>`__ -- `Run <#run>`__ - - - `Run on local paragraphs <#run-on-local-paragraphs>`__ + - `Run on local paragraphs <#run-on-local-paragraphs>`__ - `Run on websites <#run-on-websites>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import operator @@ -51,14 +45,12 @@ Imports `⇑ <#top>`__ import html_reader as reader import tokens_bert as tokens -The model `⇑ <#top>`__ +The model ############################################################################################################################### - -Download the model `⇑ <#top>`__ +Download the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Use ``omz_downloader``, which is a command-line tool from the ``openvino-dev`` package. The ``omz_downloader`` tool automatically creates a directory structure and downloads the selected model. If the @@ -111,10 +103,9 @@ there is no need to use ``omz_converter``. -Load the model `⇑ <#top>`__ +Load the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Downloaded models are located in a fixed structure, which indicates a vendor, a model name and a precision. Only a few lines of code are required to run the model. First, create an OpenVINO Runtime object. @@ -129,10 +120,9 @@ You can choose ``CPU`` or ``GPU`` for this model. # Read the network and corresponding weights from a file. model = core.read_model(model_path) -Select inference device `⇑ <#top>`__ +Select inference device ------------------------------------------------------------------------------------------------------------------------------- - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -188,10 +178,9 @@ for BERT-large-like model. -Processing `⇑ <#top>`__ +Processing ############################################################################################################################### - NLP models usually take a list of tokens as a standard input. A token is a single word converted to some integer. To provide the proper input, you need the vocabulary for such mapping. You also need to define some @@ -227,10 +216,9 @@ content from provided URLs. # Produce one big context string. return "\n".join(paragraphs) -Preprocessing `⇑ <#top>`__ +Preprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The input size in this case is 384 tokens long. 
The main input (``input_ids``) to used BERT model consists of two parts: question tokens and context tokens separated by some special tokens. @@ -311,10 +299,9 @@ documentation `__ +Postprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The results from the network are raw (logits). Use the softmax function to get the probability distribution. Then, find the best answer in the current part of the context (the highest score) and return the score and @@ -405,10 +392,9 @@ answer should come with the highest score. # Return the part of the context, which is already an answer. return context[answer[1]:answer[2]], answer[0] -Main Processing Function `⇑ <#top>`__ +Main Processing Function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Run question answering on a specific knowledge base (websites) and iterate through the questions. @@ -448,14 +434,12 @@ iterate through the questions. print(f"Score: {score:.2f}") print(f"Time: {end_time - start_time:.2f}s") -Run `⇑ <#top>`__ +Run ############################################################################################################################### - -Run on local paragraphs `⇑ <#top>`__ +Run on local paragraphs +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Change sources to your own to answer your questions. You can use as many sources as you want. Usually, you need to wait a few seconds for the answer, but the longer the context, the longer the waiting time. The @@ -505,10 +489,9 @@ questions in the box.** Time: 0.03s -Run on websites `⇑ <#top>`__ +Run on websites +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - You can also provide URLs. Note that the context (a knowledge base) is built from paragraphs on websites. If some information is outside the paragraphs, the algorithm will not be able to find it. @@ -542,6 +525,6 @@ questions in the box.** Context: ['https://en.wikipedia.org/wiki/OpenVINO'] Question: What does OpenVINO mean? Answer: Open Visual Inference and Neural network Optimization - Score: 0.92 + Score: 0.94 Time: 0.06s diff --git a/docs/notebooks/214-grammar-correction-with-output.rst b/docs/notebooks/214-grammar-correction-with-output.rst index 434aabbacd3490..57ac67054b4c8f 100644 --- a/docs/notebooks/214-grammar-correction-with-output.rst +++ b/docs/notebooks/214-grammar-correction-with-output.rst @@ -1,8 +1,6 @@ Grammatical Error Correction with OpenVINO ========================================== - - AI-based auto-correction products are becoming increasingly popular due to their ease of use, editing speed, and affordability. These products improve the quality of written text in emails, blogs, and chats. @@ -41,11 +39,9 @@ It consists of the following steps: - Download and convert models from a public source using the `OpenVINO integration with Hugging Face Optimum `__. -- Create an inference pipeline for grammatical error checking - -.. _top: +- Create an inference pipeline for grammatical error checking -**Table of contents**: +**Table of contents:** - `How does it work? 
<#how-does-it-work>`__ - `Prerequisites <#prerequisites>`__ @@ -55,12 +51,11 @@ It consists of the following steps: - `Grammar Checker <#grammar-checker>`__ - `Grammar Corrector <#grammar-corrector>`__ -- `Prepare Demo Pipeline <#prepare-demo-pipeline>`__ +- `Prepare Demo Pipeline <#prepare-demo-pipeline>`__ -How does it work? `⇑ <#top>`__ +How does it work? ############################################################################################################################### - A Grammatical Error Correction task can be thought of as a sequence-to-sequence task where a model is trained to take a grammatically incorrect sentence as input and return a grammatically @@ -108,10 +103,9 @@ documentation `__ Now that we know more about FLAN-T5 and RoBERTa, let us get started. 🚀 -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - First, we need to install the `Hugging Face Optimum `__ library accelerated by OpenVINO integration. The Hugging Face Optimum API is a @@ -122,7 +116,7 @@ documentation `__. .. code:: ipython3 - !pip install -q "git+https://github.com/huggingface/optimum-intel.git" onnx onnxruntime + !pip install -q "git+https://github.com/huggingface/optimum-intel.git" "openvino>=2023.0.0" onnx onnxruntime gradio .. parsed-literal:: @@ -132,10 +126,9 @@ documentation `__. [notice] To update, run: pip install --upgrade pip -Download and Convert Models `⇑ <#top>`__ +Download and Convert Models ############################################################################################################################### - Optimum Intel can be used to load optimized models from the `Hugging Face Hub `__ and create pipelines to run an inference with OpenVINO Runtime using Hugging @@ -188,10 +181,9 @@ Tokenizer class and pipelines API are compatible with Optimum models. comet_ml is installed but `COMET_API_KEY` is not set. -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -219,10 +211,9 @@ Select device from dropdown list for running inference using OpenVINO: -Grammar Checker `⇑ <#top>`__ +Grammar Checker +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 grammar_checker_model_id = "textattack/roberta-base-CoLA" @@ -272,10 +263,9 @@ Hugging Face inference pipelines in this Great! Looks like the model can detect errors in the sample. -Grammar Corrector `⇑ <#top>`__ +Grammar Corrector +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The steps for loading the Grammar Corrector model are very similar, except for the model class that is used. Because FLAN-T5 is a sequence-to-sequence text generation model, we should use the @@ -338,10 +328,9 @@ to run it. Nice! The result looks pretty good! -Prepare Demo Pipeline `⇑ <#top>`__ +Prepare Demo Pipeline ############################################################################################################################### - Now let us put everything together and create the pipeline for grammar correction. The pipeline accepts input text, verifies its correctness, and generates the correct version if required. 
It will consist of @@ -435,30 +424,18 @@ several steps: return corrected_text -Let us see it in action. Enter text to be corrected in the text box and -execute the following cells. +Let us see it in action. .. code:: ipython3 - import ipywidgets as widgets + default_text = ( + "Most of the course is about semantic or content of language but there are also interesting" + " topics to be learned from the servicefeatures except statistics in characters in documents.At" + " this point, He introduces herself as his native English speaker and goes on to say that if" + " you contine to work on social scnce" + ) - text_widget = widgets.Textarea(value="Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents." - "At this point, He introduces herself as his native English speaker and goes on to say that if you contine to work on social scnce", - description='your text', layout=widgets.Layout(width="auto")) - text_widget - - - - -.. parsed-literal:: - - Textarea(value='Most of the course is about semantic or content of language but there are also interesting to… - - - -.. code:: ipython3 - - corrected_text = correct_text(text_widget.value, grammar_checker_pipe, grammar_corrector_pipe) + corrected_text = correct_text(default_text, grammar_checker_pipe, grammar_corrector_pipe) .. parsed-literal:: @@ -477,7 +454,7 @@ execute the following cells. .. code:: ipython3 - print(f"input text: {text_widget.value}\n") + print(f"input text: {default_text}\n") print(f'generated text: {corrected_text}') @@ -487,3 +464,30 @@ execute the following cells. generated text: Most of the course is about the semantic content of language but there are also interesting topics to be learned from the service features except statistics in characters in documents. At this point, she introduces herself as a native English speaker and goes on to say that if you continue to work on social science, you will continue to be successful. + +Interactive demo +############################################################################################################################### + +.. code:: ipython3 + + import gradio as gr + + + def correct(text, _=gr.Progress(track_tqdm=True)): + return correct_text(text, grammar_checker_pipe, grammar_corrector_pipe) + + + demo = gr.Interface( + correct, + gr.Textbox(label="Text"), + gr.Textbox(label="Correction"), + examples=[default_text], + allow_flagging="never", + ) + try: + demo.queue().launch(debug=False) + except Exception: + demo.queue().launch(share=True, debug=False) + # if you are launching remotely, specify server_name and server_port + # demo.launch(server_name='your server name', server_port='server port in int') + # Read more in the docs: https://gradio.app/docs/ diff --git a/docs/notebooks/215-image-inpainting-with-output.rst b/docs/notebooks/215-image-inpainting-with-output.rst index c431ee55da9359..5c1a0da0682c49 100644 --- a/docs/notebooks/215-image-inpainting-with-output.rst +++ b/docs/notebooks/215-image-inpainting-with-output.rst @@ -1,19 +1,16 @@ Image In-painting with OpenVINO™ -------------------------------- - - This notebook demonstrates how to use an image in-painting model with OpenVINO, using `GMCNN model `__ from `Open Model Zoo `__. This model, given a tampered image, is able to create something very similar to the original image. The Following pipeline will be used in this notebook. -|pipeline| -.. 
_top: +|pipeline| -**Table of contents**: +**Table of contents:** - `Download the Model <#download-the-model>`__ - `Convert Tensorflow model to OpenVINO IR format <#convert-tensorflow-model-to-openvino-ir-format>`__ @@ -37,20 +34,19 @@ original image. The Following pipeline will be used in this notebook. import matplotlib.pyplot as plt import numpy as np from zipfile import ZipFile - from openvino.tools import mo - from openvino.runtime import Core, Tensor, serialize + import openvino as ov sys.path.append("../utils") import notebook_utils as utils -Download the Model `⇑ <#top>`__ +Download the Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Download ``gmcnn-places2-tf``\ model (this step will be skipped if the model is already downloaded) and then -unzip it. Downloaded model stored in TensorFlow frozen graph format. The -steps how this frozen graph can be obtained from original model -checkpoint can be found in this -`instruction `__ +Download ``gmcnn-places2-tf``\ model (this step will be skipped if the +model is already downloaded) and then unzip it. Downloaded model stored +in TensorFlow frozen graph format. The steps how this frozen graph can +be obtained from original model checkpoint can be found in this +`instruction `__ .. code:: ipython3 @@ -75,14 +71,13 @@ checkpoint can be found in this Already downloaded -Convert Tensorflow model to OpenVINO IR format `⇑ <#top>`__ +Convert Tensorflow model to OpenVINO IR format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The pre-trained model is in TensorFlow format. To use it with OpenVINO, convert it to OpenVINO IR format with model conversion API. For more information about model conversion, see this -`page `__. +`page `__. This step is also skipped if the model is already converted. .. code:: ipython3 @@ -92,8 +87,8 @@ This step is also skipped if the model is already converted. # Run model conversion API to convert model to OpenVINO IR FP32 format, if the IR file does not exist. if not ir_path.exists(): - ov_model = mo.convert_model(model_path, input_shape=[[1,512,680,3],[1,512,680,1]]) - serialize(ov_model, str(ir_path)) + ov_model = ov.convert_model(model_path, input=[[1,512,680,3],[1,512,680,1]]) + ov.save_model(ov_model, str(ir_path)) else: print(f"{ir_path} already exists.") @@ -103,10 +98,9 @@ This step is also skipped if the model is already converted. model/public/ir/frozen_model.xml already exists. -Load the model `⇑ <#top>`__ +Load the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Now, load the OpenVINO IR model and perform as follows: 1. Initialize OpenVINO Runtime (Core). @@ -119,7 +113,7 @@ Only a few lines of code are required to run the model: .. code:: ipython3 - core = Core() + core = ov.Core() # Read the model.xml and weights file model = core.read_model(model=ir_path) @@ -154,10 +148,9 @@ Only a few lines of code are required to run the model: input_layer = compiled_model.input(0) output_layer = compiled_model.output(0) -Determine the input shapes of the model `⇑ <#top>`__ +Determine the input shapes of the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Note that both input shapes are the same. However, the second input has 1 channel (monotone). 
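As a quick sanity check, both input shapes can be inspected directly on the loaded model. The snippet below is illustrative and simply reuses the ``model`` object read above; the expected shapes follow from the ``input`` argument passed to ``ov.convert_model`` earlier.

.. code:: ipython3

    # Illustrative only: print the name and static NHWC shape of each model input.
    for model_input in model.inputs:
        print(model_input.any_name, model_input.shape)
    # Expected: a [1,512,680,3] image input and a [1,512,680,1] single-channel mask input.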
@@ -165,10 +158,9 @@ Note that both input shapes are the same. However, the second input has N, H, W, C = input_layer.shape -Create a square mask `⇑ <#top>`__ +Create a square mask +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Next, create a single channeled mask that will be laid on top of the original image. @@ -209,10 +201,9 @@ original image. .. image:: 215-image-inpainting-with-output_files/215-image-inpainting-with-output_14_0.png -Load and Resize the Image `⇑ <#top>`__ +Load and Resize the Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - This image will be altered by using the mask. You can process any image you like. Just change the URL below. @@ -239,10 +230,9 @@ you like. Just change the URL below. .. image:: 215-image-inpainting-with-output_files/215-image-inpainting-with-output_16_0.png -Generating the Masked Image `⇑ <#top>`__ +Generating the Masked Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - This multiplication of the image and the mask gives the result of the masked image layered on top of the original image. The ``masked_image`` will be the first input to the GMCNN model. @@ -259,10 +249,9 @@ will be the first input to the GMCNN model. .. image:: 215-image-inpainting-with-output_files/215-image-inpainting-with-output_18_0.png -Preprocessing `⇑ <#top>`__ +Preprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The model expects the input dimensions to be ``NHWC``. - masked_image.shape = (512,680,3) —–> model expects = (1,512,680,3) @@ -273,16 +262,15 @@ The model expects the input dimensions to be ``NHWC``. masked_image = masked_image[None, ...] mask = mask[None, ...] -Inference `⇑ <#top>`__ +Inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Do inference with the given masked image and the mask. Then, show the restored image. .. code:: ipython3 - result = compiled_model([Tensor(masked_image.astype(np.float32)), Tensor(mask.astype(np.float32))])[output_layer] + result = compiled_model([ov.Tensor(masked_image.astype(np.float32)), ov.Tensor(mask.astype(np.float32))])[output_layer] result = result.squeeze().astype(np.uint8) plt.figure(figsize=(16, 12)) plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB)); @@ -292,10 +280,9 @@ restored image. .. image:: 215-image-inpainting-with-output_files/215-image-inpainting-with-output_22_0.png -Save the Restored Image `⇑ <#top>`__ +Save the Restored Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Save the restored image to the data directory to download it. .. code:: ipython3 diff --git a/docs/notebooks/216-attention-center-with-output.rst b/docs/notebooks/216-attention-center-with-output.rst index 0e50d17ec85e40..de3d74679a708a 100644 --- a/docs/notebooks/216-attention-center-with-output.rst +++ b/docs/notebooks/216-attention-center-with-output.rst @@ -51,8 +51,6 @@ The attention center model has been trained with images from the `COCO dataset `__ annotated with saliency from the `SALICON dataset `__. -.. _top: - **Table of contents**: - `Imports <#imports>`__ @@ -65,7 +63,7 @@ the `SALICON dataset `__. 
- `Load input image <#load-input-image>`__ - `Get result with OpenVINO IR model <#get-result-with-openvino-ir-model>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### @@ -90,7 +88,7 @@ Imports `⇑ <#top>`__ 2023-08-15 23:14:52.969814: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT -Download the attention-center model `⇑ <#top>`__ +Download the attention-center model ############################################################################################################################### @@ -115,7 +113,7 @@ include model in folder ``./model``. Resolving deltas: 100% (73/73), done. -Convert Tensorflow Lite model to OpenVINO IR format `⇑ <#top>`__ +Convert Tensorflow Lite model to OpenVINO IR format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -153,7 +151,7 @@ find example in IR model saved to model/ir_center_model.xml -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### @@ -185,7 +183,7 @@ Select device from dropdown list for running inference using OpenVINO: compiled_model = core.compile_model(model=model, device_name=device.value) -Prepare image to use with attention-center model `⇑ <#top>`__ +Prepare image to use with attention-center model ############################################################################################################################### @@ -237,7 +235,7 @@ input. plt.imshow(cv2.cvtColor(image_to_print, cv2.COLOR_BGR2RGB)) -Load input image `⇑ <#top>`__ +Load input image ############################################################################################################################### @@ -285,7 +283,7 @@ Upload input image using file loading button .. image:: 216-attention-center-with-output_files/216-attention-center-with-output_14_1.png -Get result with OpenVINO IR model `⇑ <#top>`__ +Get result with OpenVINO IR model ############################################################################################################################### diff --git a/docs/notebooks/217-vision-deblur-with-output.rst b/docs/notebooks/217-vision-deblur-with-output.rst index 1241fab1900fa3..0bedb5c4b6fe12 100644 --- a/docs/notebooks/217-vision-deblur-with-output.rst +++ b/docs/notebooks/217-vision-deblur-with-output.rst @@ -1,13 +1,9 @@ Deblur Photos with DeblurGAN-v2 and OpenVINO™ ============================================= +**Table of contents:** - -.. _top: - -**Table of contents**: - -- `What is deblurring? <#what-is-deblurring>`__ +- `What is deblurring? 
<#what-is-deblurring?>`__ - `Preparations <#preparations>`__ - `Imports <#imports>`__ @@ -16,11 +12,11 @@ Deblur Photos with DeblurGAN-v2 and OpenVINO™ - `Download DeblurGAN-v2 Model <#download-deblurgan-v2-model>`__ - `Prepare model <#prepare-model>`__ - `Convert DeblurGAN-v2 Model to OpenVINO IR format <#convert-deblurgan-v2-model-to-openvino-ir-format>`__ - - `Load the Model <#load-the-model>`__ +- `Load the Model <#load-the-model>`__ - `Deblur Image <#deblur-image>`__ - - `Load, resize and reshape input image <#load-resize-and-reshape-input-image>`__ + - `Load, resize and reshape input image <#load,-resize-and-reshape-input-image>`__ - `Do Inference on the Input Image <#do-inference-on-the-input-image>`__ - `Display results <#display-results>`__ - `Save the deblurred image <#save-the-deblurred-image>`__ @@ -30,12 +26,11 @@ DeblurGAN-v2 in OpenVINO, by first converting the `VITA-Group/DeblurGANv2 `__ model to OpenVINO Intermediate Representation (OpenVINO IR) format. For more information about the model, see the -`documentation `__. +`documentation `__. -What is deblurring? `⇑ <#top>`__ +What is deblurring? +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Deblurring is the task of removing motion blurs that usually occur in photos shot with hand-held cameras when there are moving objects in the scene. Blurs not only reduce the human perception about the quality of @@ -49,14 +44,12 @@ better. `__ +Preparations ############################################################################################################################### - -Imports `⇑ <#top>`__ +Imports +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 import sys @@ -66,15 +59,14 @@ Imports `⇑ <#top>`__ import matplotlib.pyplot as plt import numpy as np from IPython.display import Markdown, display - from openvino.runtime import Core + import openvino as ov sys.path.append("../utils") from notebook_utils import load_image -Settings `⇑ <#top>`__ +Settings +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # A directory where the model will be downloaded. @@ -88,17 +80,16 @@ Settings `⇑ <#top>`__ precision = "FP16" -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 import ipywidgets as widgets - core = Core() + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], @@ -118,10 +109,9 @@ Select device from dropdown list for running inference using OpenVINO: -Download DeblurGAN-v2 Model `⇑ <#top>`__ +Download DeblurGAN-v2 Model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Model defined in `VITA-Group/DeblurGANv2 `__ repository. 
For converting model we should clone this repo and install @@ -175,10 +165,9 @@ Downloading deblurgan-v2… -Prepare model `⇑ <#top>`__ +Prepare model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - DeblurGAN-v2 is PyTorch model for converting it to OpenVINO Intermediate Representation format, we should first instantiate model class and load checkpoint weights. @@ -208,47 +197,58 @@ checkpoint weights. out = (out + 1) / 2 return out -Convert DeblurGAN-v2 Model to OpenVINO IR format `⇑ <#top>`__ +Convert DeblurGAN-v2 Model to OpenVINO IR format +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - For best results with OpenVINO, it is recommended to convert the model to OpenVINO IR format. To convert the PyTorch model, we will use model -conversion Python API. The ``mo.convert_model`` Python function returns +conversion Python API. The ``ov.convert_model`` Python function returns an OpenVINO model ready to load on a device and start making -predictions. We can save it on a disk for next usage with -``openvino.runtime.serialize``. For more information about model -conversion Python API, see this -`page `__. +predictions. We can save the model on the disk for next usage with +``ov.save_model``. For more information about model conversion Python +API, see this +`page `__. Model conversion may take a while. .. code:: ipython3 - from openvino.tools import mo - from openvino.runtime import serialize - deblur_gan_model = DeblurV2("model/public/deblurgan-v2/ckpt/fpn_mobilenet.h5", "fpn_mobilenet") with torch.no_grad(): deblur_gan_model.eval() - ov_model = mo.convert_model(deblur_gan_model, input_shape=[[1,3,736,1312]], compress_to_fp16=(precision == "FP16")) - serialize(ov_model, model_xml_path) + ov_model = ov.convert_model(deblur_gan_model, example_input=torch.ones((1,3,736,1312), dtype=torch.float32), input=[[1,3,736,1312]]) + ov.save_model(ov_model, model_xml_path, compress_to_fp16=(precision == "FP16")) -Load the Model `⇑ <#top>`__ -############################################################################################################################### + +.. parsed-literal:: + + INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino + WARNING:nncf:NNCF provides best results with torch==2.0.1, while current torch version is 1.13.1+cpu. If you encounter issues, consider switching to torch==2.0.1 +.. parsed-literal:: + + No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' + + +.. parsed-literal:: + + WARNING:nncf:You are using DataParallel, which may cause significant performance issues with dynamic graph building. Consider using distributed training (DistributedDataParallel) instead. + + +Load the Model +############################################################################################################################### + Load and compile the DeblurGAN-v2 model in the OpenVINO Runtime with -``ie.read_model`` and compile it for the specified device with -``ie.compile_model``. Get input and output keys and the expected input +``core.read_model`` and compile it for the specified device with +``core.compile_model``. Get input and output keys and the expected input shape for the model. .. 
code:: ipython3 - ie = Core() - model = ie.read_model(model=model_xml_path) - compiled_model = ie.compile_model(model=model, device_name=device.value) + model = core.read_model(model=model_xml_path) + compiled_model = core.compile_model(model=model, device_name=device.value) .. code:: ipython3 @@ -264,7 +264,7 @@ shape for the model. .. parsed-literal:: - + @@ -277,18 +277,16 @@ shape for the model. .. parsed-literal:: - + -Deblur Image `⇑ <#top>`__ +Deblur Image ############################################################################################################################### - -Load, resize and reshape input image `⇑ <#top>`__ +Load, resize and reshape input image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The input image is read by using the default ``load_image`` function from ``notebooks.utils``. Then, resized to meet the network expected input sizes, and reshaped to ``(N, C, H, W)``, where ``N`` is a number @@ -332,10 +330,9 @@ height, and ``W`` is the width. .. image:: 217-vision-deblur-with-output_files/217-vision-deblur-with-output_24_0.png -Do Inference on the Input Image `⇑ <#top>`__ +Do Inference on the Input Image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Do the inference, convert the result to an image shape and resize it to the original image size. @@ -361,10 +358,9 @@ the original image size. .. image:: 217-vision-deblur-with-output_files/217-vision-deblur-with-output_27_0.png -Display results `⇑ <#top>`__ +Display results +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 # Create subplot(r,c) by providing the no. of rows (r), @@ -383,10 +379,9 @@ Display results `⇑ <#top>`__ .. image:: 217-vision-deblur-with-output_files/217-vision-deblur-with-output_29_0.png -Save the deblurred image `⇑ <#top>`__ +Save the deblurred image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Save the output image of the DeblurGAN-v2 model in the current directory. 
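Saving the result is a single OpenCV call. A minimal sketch, assuming the deblurred
output produced by the previous cells is available as a BGR NumPy array named
``resized_result_image`` (the variable name is illustrative, not the notebook's own):

.. code:: ipython3

    import cv2

    # "resized_result_image" is a placeholder for the deblurred result produced above.
    # cv2.imwrite expects a BGR image, so convert with cv2.cvtColor first if the
    # array is in RGB order.
    cv2.imwrite("deblurred_image.png", resized_result_image)
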
diff --git a/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_27_0.png b/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_27_0.png index d6f023b5f3d3e6..16e424835869c9 100644 --- a/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_27_0.png +++ b/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_27_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d4ae5cf320917ad4a6326089fe287a5b20986173a8b8909cb484076aabeef784 -size 223269 +oid sha256:91b0a8b3e8c6f8d5187ace312da0c646b53f1572ecee7c58522bfd2edcc3093b +size 223250 diff --git a/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_29_0.png b/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_29_0.png index eaea9531ac82e1..6de672fa1c0470 100644 --- a/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_29_0.png +++ b/docs/notebooks/217-vision-deblur-with-output_files/217-vision-deblur-with-output_29_0.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:8095d3c856ae1652d7a57df4a1c7e154ddb17aa2b1b53da84c40717ab4ce2da2 -size 768422 +oid sha256:0d744b651cd25842af9c672dedac4a3e17bee6b411e7750be7ee931dbece6edd +size 768425 diff --git a/docs/notebooks/218-vehicle-detection-and-recognition-with-output.rst b/docs/notebooks/218-vehicle-detection-and-recognition-with-output.rst index 2bc8a6cd2e9d94..62d21115c128f0 100644 --- a/docs/notebooks/218-vehicle-detection-and-recognition-with-output.rst +++ b/docs/notebooks/218-vehicle-detection-and-recognition-with-output.rst @@ -1,8 +1,6 @@ Vehicle Detection And Recognition with OpenVINO™ ================================================ - - This tutorial demonstrates how to use two pre-trained models from `Open Model Zoo `__: `vehicle-detection-0200 `__ @@ -19,9 +17,7 @@ As a result, you can get: result -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Download Models <#download-models>`__ @@ -35,15 +31,16 @@ As a result, you can get: - `Detection Processing <#detection-processing>`__ - `Recognize vehicle attributes <#recognize-vehicle-attributes>`__ - - `Recognition processing <#recognition-processing>`__ + + - `Recognition processing <#recognition-processing>`__ + - `Combine two models <#combine-two-models>`__ .. |flowchart| image:: https://user-images.githubusercontent.com/47499836/157867076-9e997781-f9ef-45f6-9a51-b515bbf41048.png -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - Import the required modules. .. code:: ipython3 @@ -61,10 +58,9 @@ Import the required modules. sys.path.append("../utils") import notebook_utils as utils -Download Models `⇑ <#top>`__ +Download Models ############################################################################################################################### - Use ``omz_downloader`` - a command-line tool from the ``openvino-dev`` package. The ``omz_downloader`` tool automatically creates a directory structure and downloads the selected model. This step is skipped if the @@ -73,7 +69,7 @@ directory, which means it must be converted into OpenVINO Intermediate Representation (OpenVINO IR). .. note:: - + To change the model, replace the name of the model in the code below, for example to ``"vehicle-detection-0201"`` or ``"vehicle-detection-0202"``. 
Keep in mind that they support @@ -85,7 +81,6 @@ Representation (OpenVINO IR). ``"FP16"``, and ``"FP16-INT8"``. A different type has a different model size and a precision value. - .. code:: ipython3 # A directory where the model will be downloaded. @@ -140,10 +135,9 @@ Representation (OpenVINO IR). -Load Models `⇑ <#top>`__ +Load Models ############################################################################################################################### - This tutorial requires a detection model and a recognition model. After downloading the models, initialize OpenVINO Runtime, and use ``read_model()`` to read network architecture and weights from ``*.xml`` @@ -201,10 +195,9 @@ specified device. output_keys = compiled_model.output(0) return input_keys, output_keys, compiled_model -Get attributes from model `⇑ <#top>`__ +Get attributes from model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Use ``input_keys.shape`` to get data shapes. .. code:: ipython3 @@ -221,10 +214,9 @@ Use ``input_keys.shape`` to get data shapes. # Get input size - Recognition. height_re, width_re = list(input_key_re.shape)[2:] -Helper function `⇑ <#top>`__ +Helper function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The ``plt_show()`` function is used to show image. .. code:: ipython3 @@ -240,10 +232,9 @@ The ``plt_show()`` function is used to show image. plt.axis("off") plt.imshow(raw_image) -Read and display a test image `⇑ <#top>`__ +Read and display a test image +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The input shape of detection model is ``[1, 3, 256, 256]``. Therefore, you need to resize the image to ``256 x 256``, and expand the batch channel with ``expand_dims`` function. @@ -273,10 +264,9 @@ channel with ``expand_dims`` function. .. image:: 218-vehicle-detection-and-recognition-with-output_files/218-vehicle-detection-and-recognition-with-output_13_0.png -Use the Detection Model to Detect Vehicles `⇑ <#top>`__ +Use the Detection Model to Detect Vehicles ############################################################################################################################### - .. figure:: https://user-images.githubusercontent.com/47499836/157867076-9e997781-f9ef-45f6-9a51-b515bbf41048.png :alt: pipline @@ -306,10 +296,9 @@ Delete unused dims and filter out results that are not used. # Remove zero only boxes. boxes = boxes[~np.all(boxes == 0, axis=1)] -Detection Processing `⇑ <#top>`__ +Detection Processing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - With the function below, you change the ratio to the real position in the image and filter out low-confidence results. @@ -356,10 +345,9 @@ the image and filter out low-confidence results. # Find the position of a car. car_position = crop_images(image_de, resized_image_de, boxes) -Recognize vehicle attributes `⇑ <#top>`__ +Recognize vehicle attributes +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select one of the detected boxes. Then, crop to an area containing a vehicle to test with the recognition model. Again, you need to resize the input image and run inference. @@ -380,8 +368,8 @@ the input image and run inference. .. 
image:: 218-vehicle-detection-and-recognition-with-output_files/218-vehicle-detection-and-recognition-with-output_20_0.png -Recognition processing `⇑ <#top>`__ ------------------------------------------------------------------------------------------------------------------------------------ +Recognition processing +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' The result contains colors of the vehicles (white, gray, yellow, red, green, blue, black) and types of vehicles (car, bus, truck, van). Next, @@ -429,10 +417,9 @@ determine the maximum probability as the result. Attributes:('Gray', 'Car') -Combine two models `⇑ <#top>`__ +Combine two models +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Congratulations! You successfully used a detection model to crop an image with a vehicle and recognize the attributes of a vehicle. diff --git a/docs/notebooks/219-knowledge-graphs-conve-with-output.rst b/docs/notebooks/219-knowledge-graphs-conve-with-output.rst index 67c8776cff2d4a..db8720e971a376 100644 --- a/docs/notebooks/219-knowledge-graphs-conve-with-output.rst +++ b/docs/notebooks/219-knowledge-graphs-conve-with-output.rst @@ -1,8 +1,6 @@ OpenVINO optimizations for Knowledge graphs =========================================== - - The goal of this notebook is to showcase performance optimizations for the ConvE knowledge graph embeddings model using the Intel® Distribution of OpenVINO™ Toolkit. The optimizations process contains the following @@ -13,38 +11,35 @@ steps: 2. Report the inference performance speedup obtained with the optimized OpenVINO model -The ConvE model is an implementation of the paper - -`Convolutional 2D Knowledge Graph Embeddings `__. The +The ConvE model is an implementation of the paper - “Convolutional 2D +Knowledge Graph Embeddings” (https://arxiv.org/abs/1707.01476). The sample dataset can be downloaded from: https://github.com/TimDettmers/ConvE/tree/master/countries/countries_S1 -.. _top: - -**Table of contents**: +**Table of contents:** -- `Windows specific settings <#windows-specific-settings>`__ +- `Windows specific settings <#windows-specific-settings>`__ - `Import the packages needed for successful execution <#import-the-packages-needed-for-successful-execution>`__ - - `Settings: Including path to the serialized model files and input data files <#settings-including-path-to-the-serialized-model-files-and-input-data-files>`__ - - `Download Model Checkpoint <#download-model-checkpoint>`__ - - `Defining the ConvE model class <#defining-the-conve-model-class>`__ - - `Defining the dataloader <#defining-the-dataloader>`__ - - `Evaluate the trained ConvE model <#evaluate-the-trained-conve-model>`__ + - `Settings: Including path to the serialized model files and input data files <#settings:-including-path-to-the-serialized-model-files-and-input-data-files>`__ + - `Download Model Checkpoint <#download-model-checkpoint>`__ + - `Defining the ConvE model class <#defining-the-conve-model-class>`__ + - `Defining the dataloader <#defining-the-dataloader>`__ + - `Evaluate the trained ConvE model <#evaluate-the-trained-conve-model>`__ - `Prediction on the Knowledge graph. 
<#prediction-on-the-knowledge-graph>`__ - `Convert the trained PyTorch model to ONNX format for OpenVINO inference <#convert-the-trained-pytorch-model-to-onnx-format-for-openvino-inference>`__ - - `Evaluate the model performance with OpenVINO <#evaluate-the-model-performance-with-openvino>`__ + - `Evaluate the model performance with OpenVINO <#evaluate-the-model-performance-with-openvino>`__ -- `Select inference device <#select-inference-device>`__ +- `Select inference device <#select-inference-device>`__ - `Determine the platform specific speedup obtained through OpenVINO graph optimizations <#determine-the-platform-specific-speedup-obtained-through-openvino-graph-optimizations>`__ - `Benchmark the converted OpenVINO model using benchmark app <#benchmark-the-converted-openvino-model-using-benchmark-app>`__ - - `Conclusions <#conclusions>`__ + - `Conclusions <#conclusions>`__ - `References <#references>`__ -Windows specific settings `⇑ <#top>`__ +Windows specific settings ############################################################################################################################### - .. code:: ipython3 # On Windows, add the directory that contains cl.exe to the PATH @@ -85,10 +80,9 @@ Windows specific settings `⇑ <#top>`__ os.environ["LIB"] = os.pathsep.join(b.library_dirs) print(f"Added {vs_dir} to PATH") -Import the packages needed for successful execution `⇑ <#top>`__ +Import the packages needed for successful execution ############################################################################################################################### - .. code:: ipython3 import json @@ -107,7 +101,7 @@ Import the packages needed for successful execution `⇑ <#top>`__ sys.path.append("../utils") from notebook_utils import download_file -Settings: Including path to the serialized model files and input data files `⇑ <#top>`__ +Settings: Including path to the serialized model files and input data files +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -148,10 +142,9 @@ Settings: Including path to the serialized model files and input data files `⇑ Using cpu device -Download Model Checkpoint `⇑ <#top>`__ +Download Model Checkpoint +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 model_url = 'https://storage.openvinotoolkit.org/repositories/openvino_notebooks/models/knowledge-graph-embeddings/conve.pt' @@ -169,14 +162,13 @@ Download Model Checkpoint `⇑ <#top>`__ .. parsed-literal:: - PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/219-knowledge-graphs-conve/models/conve.pt') + PosixPath('/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/219-knowledge-graphs-conve/models/conve.pt') -Defining the ConvE model class `⇑ <#top>`__ +Defining the ConvE model class +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. 
code:: ipython3 # Model implementation reference: https://github.com/TimDettmers/ConvE @@ -232,10 +224,9 @@ Defining the ConvE model class `⇑ <#top>`__ pred = torch.nn.functional.softmax(x, dim=1) return pred -Defining the dataloader `⇑ <#top>`__ +Defining the dataloader +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code:: ipython3 class DataLoader(): @@ -282,15 +273,15 @@ Defining the dataloader `⇑ <#top>`__ dp.close() return triples_list -Evaluate the trained ConvE model `⇑ <#top>`__ +Evaluate the trained ConvE model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -First, we will evaluate the model performance using PyTorch. The goal is to make sure there are -no accuracy differences between the original model inference and the -model converted to OpenVINO intermediate representation inference -results. Here, we use a simple accuracy metric to evaluate the model -performance on a test dataset. However, it is typical to use metrics -such as Mean Reciprocal Rank, Hits@10 etc. +First, we will evaluate the model performance using PyTorch. The goal is +to make sure there are no accuracy differences between the original +model inference and the model converted to OpenVINO intermediate +representation inference results. Here, we use a simple accuracy metric +to evaluate the model performance on a test dataset. However, it is +typical to use metrics such as Mean Reciprocal Rank, Hits@10 etc. .. code:: ipython3 @@ -327,18 +318,19 @@ such as Mean Reciprocal Rank, Hits@10 etc. .. parsed-literal:: - Average time taken for inference: 0.6897946198781332 ms + Average time taken for inference: 0.7134974002838135 ms Mean accuracy of the model on the test dataset: 0.875 -Prediction on the Knowledge graph. `⇑ <#top>`__ +Prediction on the Knowledge graph. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Here, we perform the entity prediction on the knowledge graph, as a sample evaluation task. -We pass the source entity ``san_marino`` and relation ``locatedIn`` to -the knowledge graph and obtain the target entity predictions. Expected -predictions are target entities that form a factual triple with the -entity and relation passed as inputs to the knowledge graph. +Here, we perform the entity prediction on the knowledge graph, as a +sample evaluation task. We pass the source entity ``san_marino`` and +relation ``locatedIn`` to the knowledge graph and obtain the target +entity predictions. Expected predictions are target entities that form a +factual triple with the entity and relation passed as inputs to the +knowledge graph. .. code:: ipython3 @@ -364,14 +356,14 @@ entity and relation passed as inputs to the knowledge graph. Source Entity: san_marino, Relation: locatedin, Target entity prediction: europe -Convert the trained PyTorch model to ONNX format for OpenVINO inference `⇑ <#top>`__ +Convert the trained PyTorch model to ONNX format for OpenVINO inference +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -To evaluate performance with OpenVINO, we can -either convert the trained PyTorch model to an intermediate -representation (IR) format or to an ONNX representation. This notebook -uses the ONNX format. 
For more details on model optimization, refer to: -https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html +To evaluate performance with OpenVINO, we can either convert the trained +PyTorch model to an intermediate representation (IR) format or to an +ONNX representation. This notebook uses the ONNX format. For more +details on model optimization, refer to: +https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html .. code:: ipython3 @@ -385,10 +377,9 @@ https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimize Converting the trained conve model to ONNX format -Evaluate the model performance with OpenVINO `⇑ <#top>`__ +Evaluate the model performance with OpenVINO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Now, we evaluate the model performance with the OpenVINO framework. In order to do so, make three main API calls: @@ -404,10 +395,9 @@ Then, the model can be inferred on by using the core = Core() ov_model = core.read_model(model=fp32_onnx_path) -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -462,11 +452,11 @@ Select device from dropdown list for running inference using OpenVINO: .. parsed-literal:: - Average time taken for inference: 1.246631145477295 ms + Average time taken for inference: 1.500864823659261 ms Mean accuracy of the model on the test dataset: 0.10416666666666667 -Determine the platform specific speedup obtained through OpenVINO graph optimizations `⇑ <#top>`__ +Determine the platform specific speedup obtained through OpenVINO graph optimizations +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code:: ipython3 @@ -476,17 +466,16 @@ Determine the platform specific speedup obtained through OpenVINO graph optimiza .. parsed-literal:: - Speedup with OpenVINO optimizations: 0.55 X + Speedup with OpenVINO optimizations: 0.48 X -Benchmark the converted OpenVINO model using benchmark app `⇑ <#top>`__ +Benchmark the converted OpenVINO model using benchmark app +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -The OpenVINO toolkit provides a benchmarking application to -gauge the platform specific runtime performance that can be obtained -under optimal configuration parameters for a given model. For more -details refer to: -https://docs.openvino.ai/2023.1/openvino_inference_engine_tools_benchmark_tool_README.html +The OpenVINO toolkit provides a benchmarking application to gauge the +platform specific runtime performance that can be obtained under optimal +configuration parameters for a given model. For more details refer to: +https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html Here, we use the benchmark application to obtain performance estimates under optimal configuration for the knowledge graph model inference. We @@ -505,40 +494,26 @@ inference can also be obtained by looking at the benchmark app results. .. 
parsed-literal:: Benchmark OpenVINO model using the benchmark app - [Step 1/11] Parsing and validating input arguments - [ INFO ] Parsing input parameters - [Step 2/11] Loading OpenVINO Runtime - [ INFO ] OpenVINO: - [ INFO ] Build ................................. 2023.0.1-11005-fa1c41994f3-releases/2023/0 - [ INFO ] - [ INFO ] Device info: - [ ERROR ] Check 'false' failed at src/inference/src/core.cpp:84: - Device with "device" name is not registered in the OpenVINO Runtime - Traceback (most recent call last): - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/main.py", line 103, in main - benchmark.print_version_info() - File "/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/tools/benchmark/benchmark.py", line 48, in print_version_info - for device, version in self.core.get_versions(self.device).items(): - RuntimeError: Check 'false' failed at src/inference/src/core.cpp:84: - Device with "device" name is not registered in the OpenVINO Runtime - - - -Conclusions `⇑ <#top>`__ + /bin/bash: benchmark_app: command not found + + +Conclusions +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -In this notebook, we convert the trained PyTorch knowledge graph embeddings model to the OpenVINO format. We -confirm that there are no accuracy differences post conversion. We also -perform a sample evaluation on the knowledge graph. Then, we determine -the platform specific speedup in runtime performance that can be -obtained through OpenVINO graph optimizations. To learn more about the -OpenVINO performance optimizations, refer to: -https://docs.openvino.ai/2023.1/openvino_docs_deployment_optimization_guide_dldt_optimization_guide.html +In this notebook, we convert the trained PyTorch knowledge graph +embeddings model to the OpenVINO format. We confirm that there are no +accuracy differences post conversion. We also perform a sample +evaluation on the knowledge graph. Then, we determine the platform +specific speedup in runtime performance that can be obtained through +OpenVINO graph optimizations. To learn more about the OpenVINO +performance optimizations, refer to: +https://docs.openvino.ai/2023.0/openvino_docs_deployment_optimization_guide_dldt_optimization_guide.html -References `⇑ <#top>`__ +References +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -1. Convolutional 2D Knowledge Graph Embeddings, Tim Dettmers et al. (https://arxiv.org/abs/1707.01476) +1. Convolutional 2D Knowledge Graph Embeddings, Tim Dettmers et + al. (https://arxiv.org/abs/1707.01476) 2. Model implementation: https://github.com/TimDettmers/ConvE The ConvE model implementation used in this notebook is licensed under diff --git a/docs/notebooks/220-cross-lingual-books-alignment-with-output.rst b/docs/notebooks/220-cross-lingual-books-alignment-with-output.rst index fd4179634a992a..ffa50b71614e67 100644 --- a/docs/notebooks/220-cross-lingual-books-alignment-with-output.rst +++ b/docs/notebooks/220-cross-lingual-books-alignment-with-output.rst @@ -18,7 +18,7 @@ part of the pipeline - getting vectors from sentences - using the OpenVINO™ framework. 
Pipeline --------- +############################################################################################################################### The notebook guides you through the entire process of creating a parallel book: from obtaining raw texts to building a visualization of @@ -30,7 +30,7 @@ Visualizing the result allows you to identify areas for improvement in the pipeline steps, as indicated in the diagram. Prerequisites -------------- +############################################################################################################################### - ``requests`` - for getting books - ``pysbd`` - for splitting sentences @@ -39,8 +39,6 @@ Prerequisites - ``seaborn`` - for alignment matrix visualization - ``ipywidgets`` - for displaying HTML and JS output in the notebook -.. _top: - **Table of contents**: - `Get Books <#get-books>`__ @@ -67,7 +65,7 @@ Prerequisites DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 -Get Books `⇑ <#top>`__ +Get Books ############################################################################################################################### @@ -213,7 +211,7 @@ which in a raw format looks like this: -Clean Text `⇑ <#top>`__ +Clean Text ############################################################################################################################### @@ -344,7 +342,7 @@ needed. 0%| | 0/3 [00:00`__ +Split Text ############################################################################################################################### @@ -388,7 +386,7 @@ languages. -Get Sentence Embeddings `⇑ <#top>`__ +Get Sentence Embeddings ############################################################################################################################### @@ -478,7 +476,7 @@ best fit. 0%| | 0/34 [00:00`__ +Optimize the Model with OpenVINO +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -557,7 +555,7 @@ model predictions remain within an acceptable tolerance: -Calculate Sentence Alignment `⇑ <#top>`__ +Calculate Sentence Alignment ############################################################################################################################### @@ -683,7 +681,7 @@ will be lists of German sentence numbers. -Postprocess Sentence Alignment `⇑ <#top>`__ +Postprocess Sentence Alignment ############################################################################################################################### @@ -709,7 +707,7 @@ Most likely, English sentence 14 is part of either German sentence 17 or 18. By comparing the similarity using the model, you can choose the most suitable alignment. 
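A minimal sketch of such a tie-break, assuming the sentence embeddings computed
earlier are available as NumPy arrays ``en_embeddings`` and ``de_embeddings``
indexed by sentence number (the names are illustrative):

.. code:: ipython3

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two 1-D embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    english_idx = 14
    german_candidates = [17, 18]

    # Keep the German candidate whose embedding is closest to the English sentence.
    scores = [cosine_similarity(en_embeddings[english_idx], de_embeddings[i])
              for i in german_candidates]
    best_match = german_candidates[int(np.argmax(scores))]
    print(f"English sentence {english_idx} aligns best with German sentence {best_match}")
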
-Visualize Sentence Alignment `⇑ <#top>`__ +Visualize Sentence Alignment ############################################################################################################################### @@ -869,7 +867,7 @@ To read the model from disk, use the ``read_model`` method of the ov_model = core.read_model(ov_model_path) -Speed up Embeddings Computation `⇑ <#top>`__ +Speed up Embeddings Computation ############################################################################################################################### diff --git a/docs/notebooks/221-machine-translation-with-output.rst b/docs/notebooks/221-machine-translation-with-output.rst index b4103a43f252bd..31a2cbc2ba70aa 100644 --- a/docs/notebooks/221-machine-translation-with-output.rst +++ b/docs/notebooks/221-machine-translation-with-output.rst @@ -1,8 +1,6 @@ Machine translation demo ======================== - - This demo utilizes Intel’s pre-trained model that translates from English to German. More information about the model can be found `here `__. @@ -16,18 +14,16 @@ following structure: ```` + *tokenized sentence* + ```` + ```` (```` tokens pad the remaining blank spaces). **Output** After the inference, we have a sequence of up to 200 tokens. -The structure is the same as the one for the input. - -.. _top: +The structure is the same as the one for the input. -**Table of contents**: +**Table of contents:** -- `Downloading model <#downloading-model>`__ -- `Load and configure the model <#load-and-configure-the-model>`__ -- `Select inference device <#select-inference-device>`__ -- `Load tokenizers <#load-tokenizers>`__ -- `Perform translation <#perform-translation>`__ -- `Translate the sentence <#translate-the-sentence>`__ +- `Downloading model <#downloading-model>`__ +- `Load and configure the model <#load-and-configure-the-model>`__ +- `Select inference device <#select-inference-device>`__ +- `Load tokenizers <#load-tokenizers>`__ +- `Perform translation <#perform-translation>`__ +- `Translate the sentence <#translate-the-sentence>`__ - `Test your translation <#test-your-translation>`__ @@ -52,11 +48,11 @@ The structure is the same as the one for the input. import itertools from tokenizers import SentencePieceBPETokenizer -Downloading model `⇑ <#top>`__ +Downloading model ############################################################################################################################### -The following command will download the model to the current directory. Make sure you have run -``pip install openvino-dev`` beforehand. +The following command will download the model to the current directory. +Make sure you have run ``pip install openvino-dev`` beforehand. .. code:: ipython3 @@ -67,37 +63,37 @@ The following command will download the model to the current directory. 
Make sur ################|| Downloading machine-translation-nar-en-de-0002 ||################ - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_tgt/merges.txt + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_tgt/merges.txt - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_tgt/vocab.json + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_tgt/vocab.json - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_src/merges.txt + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_src/merges.txt - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_src/vocab.json + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/tokenizer_src/vocab.json - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP32/machine-translation-nar-en-de-0002.xml + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP32/machine-translation-nar-en-de-0002.xml - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP32/machine-translation-nar-en-de-0002.bin + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP32/machine-translation-nar-en-de-0002.bin - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml + ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml - ========== Downloading /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.bin + ========== Downloading 
/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/221-machine-translation/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.bin -Load and configure the model `⇑ <#top>`__ +Load and configure the model ############################################################################################################################### -The model is now available in the ``intel/`` folder. Below, we load and configure its inputs and -outputs. +The model is now available in the ``intel/`` folder. Below, we load and +configure its inputs and outputs. .. code:: ipython3 @@ -108,10 +104,9 @@ outputs. model.output(output_name) max_tokens = model.input(input_name).shape[1] -Select inference device `⇑ <#top>`__ +Select inference device ############################################################################################################################### - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 @@ -142,10 +137,9 @@ Select device from dropdown list for running inference using OpenVINO: compiled_model = core.compile_model(model, device.value) -Load tokenizers `⇑ <#top>`__ +Load tokenizers ############################################################################################################################### - NLP models usually take a list of tokens as standard input. A token is a single word converted to some integer. To provide the proper input, we need the vocabulary for such mapping. We use ``merges.txt`` to find out @@ -170,7 +164,7 @@ Initialize the tokenizer for the input ``src_tokenizer`` and the output 'intel/machine-translation-nar-en-de-0002/tokenizer_tgt/merges.txt' ) -Perform translation `⇑ <#top>`__ +Perform translation ############################################################################################################################### The following function translates a sentence in English to German. @@ -219,7 +213,7 @@ The following function translates a sentence in English to German. sentence = " ".join(key for key, _ in itertools.groupby(sentence)) return sentence -Translate the sentence `⇑ <#top>`__ +Translate the sentence ############################################################################################################################### The following function is a basic loop that translates sentences. @@ -249,10 +243,11 @@ The following function is a basic loop that translates sentences. # uncomment the following line for a real time translation of your input # run_translator() -Test your translation `⇑ <#top>`__ +Test your translation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Run the following cell with an English sentence to have it translated to German +Run the following cell with an English sentence to have it translated to +German .. 
code:: ipython3 diff --git a/docs/notebooks/222-vision-image-colorization-with-output.rst b/docs/notebooks/222-vision-image-colorization-with-output.rst index 8d11c9030fc30e..b34fcd274e0532 100644 --- a/docs/notebooks/222-vision-image-colorization-with-output.rst +++ b/docs/notebooks/222-vision-image-colorization-with-output.rst @@ -1,7 +1,5 @@ Image Colorization with OpenVINO -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - +================================ This notebook demonstrates how to colorize images with OpenVINO using the Colorization model @@ -23,7 +21,7 @@ Given a grayscale image as input, the model generates colorized version of the image as the output. About Colorization-v2 -^^^^^^^^^^^^^^^^^^^^^ +############################################################################################################################### - The colorization-v2 model is one of the colorization group of models designed to perform image colorization. @@ -32,7 +30,7 @@ About Colorization-v2 A- and B-channels of LAB-image as output. About Colorization-siggraph -^^^^^^^^^^^^^^^^^^^^^^^^^^^ +############################################################################################################################### - The colorization-siggraph model is one of the colorization group of models designed to real-time user-guided image colorization. @@ -44,26 +42,23 @@ About Colorization-siggraph See the `colorization `__ repository for more details. -.. _top: +**Table of contents:** -**Table of contents**: +- `Imports <#imports>`__ +- `Configurations <#configurations>`__ -- `Imports <#imports>`__ -- `Configurations <#configurations>`__ + - `Select inference device <#select-inference-device>`__ - - `Select inference device <#select-inference-device>`__ - -- `Download the model <#download-the-model>`__ -- `Convert the model to OpenVINO IR <#convert-the-model-to-openvino-ir>`__ -- `Loading the Model <#loading-the-model>`__ -- `Utility Functions <#utility-functions>`__ -- `Load the Image <#load-the-image>`__ +- `Download the model <#download-the-model>`__ +- `Convert the model to OpenVINO IR <#convert-the-model-to-openvino-ir>`__ +- `Loading the Model <#loading-the-model>`__ +- `Utility Functions <#utility-functions>`__ +- `Load the Image <#load-the-image>`__ - `Display Colorized Image <#display-colorized-image>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import os @@ -78,10 +73,9 @@ Imports `⇑ <#top>`__ sys.path.append("../utils") import notebook_utils as utils -Configurations `⇑ <#top>`__ +Configurations ############################################################################################################################### - - ``PRECISION`` - {FP16, FP32}, default: FP16. - ``MODEL_DIR`` - directory where the model is to be stored, default: public. @@ -98,11 +92,10 @@ Configurations `⇑ <#top>`__ MODEL_PATH = f"{MODEL_DIR}/public/{MODEL_NAME}/{PRECISION}/{MODEL_NAME}.xml" DATA_DIR = "data" -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -Select device from dropdown list for running inference using OpenVINO: +Select device from dropdown list for running inference using OpenVINO .. 
code:: ipython3 @@ -128,10 +121,9 @@ Select device from dropdown list for running inference using OpenVINO: -Download the model `⇑ <#top>`__ +Download the model ############################################################################################################################### - ``omz_downloader`` downloads model files from online sources and, if necessary, patches them to make them more usable with Model Converter. @@ -178,10 +170,9 @@ above. -Convert the model to OpenVINO IR `⇑ <#top>`__ +Convert the model to OpenVINO IR ############################################################################################################################### - ``omz_converter`` converts the models that are not in the OpenVINO™ IR format into that format using model conversion API. @@ -205,29 +196,28 @@ respectively .. parsed-literal:: ========== Converting colorization-v2 to ONNX - Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=models/public/colorization-v2 --model-name=ECCVGenerator --weights=models/public/colorization-v2/ckpt/colorization-v2-eccv16.pth --import-module=model --input-shape=1,1,256,256 --output-file=models/public/colorization-v2/colorization-v2-eccv16.onnx --input-names=data_l --output-names=color_ab + Conversion to ONNX command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=models/public/colorization-v2 --model-name=ECCVGenerator --weights=models/public/colorization-v2/ckpt/colorization-v2-eccv16.pth --import-module=model --input-shape=1,1,256,256 --output-file=models/public/colorization-v2/colorization-v2-eccv16.onnx --input-names=data_l --output-names=color_ab ONNX check passed successfully. ========== Converting colorization-v2 to IR (FP16) - Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=/tmp/tmp7wsuasz7 --model_name=colorization-v2 --input=data_l --output=color_ab --input_model=models/public/colorization-v2/colorization-v2-eccv16.onnx '--layout=data_l(NCHW)' '--input_shape=[1, 1, 256, 256]' --compress_to_fp16=True + Conversion command: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/python -- /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/bin/mo --framework=onnx --output_dir=models/public/colorization-v2/FP16 --model_name=colorization-v2 --input=data_l --output=color_ab --input_model=models/public/colorization-v2/colorization-v2-eccv16.onnx '--layout=data_l(NCHW)' '--input_shape=[1, 1, 256, 256]' --compress_to_fp16=True - [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression by removing argument --compress_to_fp16 or set it to false --compress_to_fp16=False. 
- Find more information about compression to FP16 at https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_FP16_Compression.html + [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. + Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. - Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.1/openvino_2_0_transition_guide.html + Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. - [ SUCCESS ] XML file: /tmp/tmp7wsuasz7/colorization-v2.xml - [ SUCCESS ] BIN file: /tmp/tmp7wsuasz7/colorization-v2.bin + [ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/222-vision-image-colorization/models/public/colorization-v2/FP16/colorization-v2.xml + [ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/notebooks/222-vision-image-colorization/models/public/colorization-v2/FP16/colorization-v2.bin -Loading the Model `⇑ <#top>`__ +Loading the Model ############################################################################################################################### -Load the model in OpenVINO Runtime with -``ie.read_model`` and compile it for the specified device with -``ie.compile_model``. +Load the model in OpenVINO Runtime with ``ie.read_model`` and compile it +for the specified device with ``ie.compile_model``. .. code:: ipython3 @@ -238,9 +228,9 @@ Load the model in OpenVINO Runtime with output_layer = compiled_model.output(0) N, C, H, W = list(input_layer.shape) -Utility Functions `⇑ <#top>`__ -############################################################################################################################### +Utility Functions +############################################################################################################################### .. code:: ipython3 @@ -316,10 +306,9 @@ Utility Functions `⇑ <#top>`__ plt.show() -Load the Image `⇑ <#top>`__ +Load the Image ############################################################################################################################### - .. code:: ipython3 img_url_0 = "https://user-images.githubusercontent.com/18904157/180923287-20339d01-b1bf-493f-9a0d-55eff997aff1.jpg" @@ -383,9 +372,9 @@ Load the Image `⇑ <#top>`__ color_img_0 = colorize(test_img_0) color_img_1 = colorize(test_img_1) -Display Colorized Image `⇑ <#top>`__ -############################################################################################################################### +Display Colorized Image +############################################################################################################################### .. 
code:: ipython3 diff --git a/docs/notebooks/223-text-prediction-with-output.rst b/docs/notebooks/223-text-prediction-with-output.rst index 97a7a1d8d8542e..835db52c85ec69 100644 --- a/docs/notebooks/223-text-prediction-with-output.rst +++ b/docs/notebooks/223-text-prediction-with-output.rst @@ -1,8 +1,6 @@ Text Prediction with OpenVINO™ ============================== - - This notebook shows text prediction with OpenVINO. This notebook can work in two different modes, Text Generation and Conversation, which the user can select via selecting the model in the Model Selection Section. @@ -72,14 +70,10 @@ above. The Generated response is added to the history with the ``eos_token`` at the end. Additional user input is added to the history, and the sequence is passed back into the model. - -.. _top: - -**Table of contents**: +**Table of contents:** - `Model Selection <#model-selection>`__ - `Load Model <#load-model>`__ - - `Convert Pytorch Model to OpenVINO IR <#convert-pytorch-model-to-openvino-ir>`__ - `Load the model <#load-the-model>`__ @@ -100,17 +94,16 @@ and the sequence is passed back into the model. - `Conversation Class <#conversation-class>`__ - `Conversation with PersonaGPT <#conversation-with-personagpt>`__ -Model Selection `⇑ <#top>`__ +Model Selection ############################################################################################################################### - Select the Model to be used for text generation, GPT-2 and GPT-Neo are used for text generation whereas PersonaGPT is used for Conversation. .. code:: ipython3 # Install Gradio for Interactive Inference and other requirements - !pip install -q "openvino-dev>=2023.0.0" + !pip install -q "openvino==2023.1.0.dev20230811" !pip install -q gradio !pip install -q transformers[torch] onnx @@ -121,7 +114,9 @@ used for text generation whereas PersonaGPT is used for Conversation. DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. - pytorch-lightning 1.6.5 requires protobuf<=3.20.1, but you have protobuf 4.24.0 which is incompatible. + onnxconverter-common 1.14.0 requires protobuf==3.20.2, but you have protobuf 4.24.3 which is incompatible. + pytorch-lightning 1.6.5 requires protobuf<=3.20.1, but you have protobuf 4.24.3 which is incompatible. + tf2onnx 1.15.1 requires protobuf~=3.20.2, but you have protobuf 4.24.3 which is incompatible. .. code:: ipython3 @@ -148,7 +143,7 @@ used for text generation whereas PersonaGPT is used for Conversation. 
-Load Model `⇑ <#top>`__ +Load Model ############################################################################################################################### Download the Selected Model and Tokenizer from HuggingFace @@ -167,9 +162,17 @@ Download the Selected Model and Tokenizer from HuggingFace pt_model = GPTNeoForCausalLM.from_pretrained('EleutherAI/gpt-neo-125M') tokenizer = GPT2TokenizerFast.from_pretrained('EleutherAI/gpt-neo-125M') -Convert Pytorch Model to OpenVINO IR `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 2023-09-08 23:43:01.055206: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-09-08 23:43:01.090531: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-09-08 23:43:01.738604: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + +Convert Pytorch Model to OpenVINO IR +############################################################################################################################### .. figure:: https://user-images.githubusercontent.com/29454499/211261803-784d4791-15cb-4aea-8795-0969dfbb8291.png :alt: conversion_pipeline @@ -190,25 +193,23 @@ found in HuggingFace While ONNX models are directly supported by OpenVINO runtime, it can be useful to convert them to IR format to take advantage of OpenVINO -optimization tools and features. The ``mo.convert_model`` Python +optimization tools and features. The ``ov.convert_model`` Python function of `model conversion -API `__ +API `__ can be used for converting the model. The function returns instance of -OpenVINO Model class, which is ready to use in Python interface but can -also be serialized to OpenVINO IR format for future execution using -``openvino.runtime.serialize``. In our case, the ``compress_to_fp16`` -parameter is enabled for compression model weights to FP16 precision and -also specified dynamic input shapes with a possible shape range (from 1 -token to a maximum length defined in our processing function) for -optimization of memory consumption. +OpenVINO Model class, which is ready to use in Python interface. The +Model can also be save on device in OpenVINO IR format for future +execution using ``ov.save_model``. In our case dynamic input shapes with +a possible shape range (from 1 token to a maximum length defined in our +processing function) are specified for optimization of memory +consumption. .. code:: ipython3 from pathlib import Path - from openvino.runtime import serialize - from openvino.tools import mo from transformers.onnx import export, FeaturesManager + import openvino as ov # define path for saving onnx model onnx_path = Path("model/text_generator.onnx") @@ -228,40 +229,38 @@ optimization of memory consumption. 
# convert model to openvino if model_name.value == "PersonaGPT (Converastional)": - ov_model = mo.convert_model(onnx_path, compress_to_fp16=True, input="input_ids[1,-1],attention_mask[1,-1]") + ov_model = ov.convert_model(onnx_path, input=[('input_ids', [1, -1], ov.Type.i64), ('attention_mask', [1,-1], ov.Type.i64)]) else: - ov_model = mo.convert_model(onnx_path, compress_to_fp16=True, input="input_ids[1,1..128],attention_mask[1,1..128]") + ov_model = ov.convert_model(onnx_path, input=[('input_ids', [1, ov.Dimension(1,128)], ov.Type.i64), ('attention_mask', [1, ov.Dimension(1,128)], ov.Type.i64)]) # serialize openvino model - serialize(ov_model, str(model_path)) + ov.save_model(ov_model, str(model_path)) .. parsed-literal:: - /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py:807: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py:807: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if batch_size <= 0: -Load the model `⇑ <#top>`__ +Load the model +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - We start by building an OpenVINO Core object. Then we read the network architecture and model weights from the ``.xml`` and ``.bin`` files, respectively. Finally, we compile the model for the desired device. -Select inference device `⇑ <#top>`__ +Select inference device ------------------------------------------------------------------------------------------------------------------------------- - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 - from openvino.runtime import Core import ipywidgets as widgets - core = Core() + # initialize openvino core + core = ov.Core() device = widgets.Dropdown( options=core.available_devices + ["AUTO"], @@ -283,9 +282,6 @@ Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 - # initialize openvino core - core = Core() - # read the model and corresponding weights from file model = core.read_model(model_path) @@ -302,19 +298,17 @@ names of the output nodes of the network. In the case of GPT-Neo, we have ``batch size`` and ``sequence length`` as inputs and ``batch size``, ``sequence length`` and ``vocab size`` as outputs. -Pre-Processing `⇑ <#top>`__ +Pre-Processing ############################################################################################################################### - NLP models often take a list of tokens as a standard input. A token is a word or a part of a word mapped to an integer. To provide the proper input, we use a vocabulary file to handle the mapping. So first let’s load the vocabulary file. 
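As a quick illustration of this token-to-integer mapping (a minimal sketch that assumes the ``tokenizer`` loaded in the Load Model section above is available; the notebook's own tokenization helpers are defined in the next cells):

.. code:: ipython3

    # encode a sample sentence into token ids and map them back to text
    sample = "OpenVINO makes inference fast."
    token_ids = tokenizer(sample)["input_ids"]
    print(token_ids)                                   # list of integers
    print(tokenizer.convert_ids_to_tokens(token_ids))  # sub-word pieces
    print(tokenizer.decode(token_ids))                 # reconstructed text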
-Define tokenization `⇑ <#top>`__ +Define tokenization ############################################################################################################################### - .. code:: ipython3 from typing import List, Tuple @@ -344,10 +338,11 @@ at later stage. eos_token_id = tokenizer.eos_token_id eos_token = tokenizer.decode(eos_token_id) -Define Softmax layer `⇑ <#top>`__ +Define Softmax layer +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -A softmax function is used to convert top-k logits into a probability distribution. +A softmax function is used to convert top-k logits into a probability +distribution. .. code:: ipython3 @@ -359,12 +354,12 @@ A softmax function is used to convert top-k logits into a probability distributi summation = e_x.sum(axis=-1, keepdims=True) return e_x / summation -Set the minimum sequence length `⇑ <#top>`__ +Set the minimum sequence length +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -If the minimum sequence length is not reached, the following code will reduce the probability of -the ``eos`` token occurring. This continues the process of generating -the next words. +If the minimum sequence length is not reached, the following code will +reduce the probability of the ``eos`` token occurring. This continues +the process of generating the next words. .. code:: ipython3 @@ -385,11 +380,11 @@ the next words. scores[:, eos_token_id] = -float("inf") return scores -Top-K sampling `⇑ <#top>`__ +Top-K sampling +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -In Top-K sampling, we filter the K most likely next words and redistribute the probability mass among only those -K next words. +In Top-K sampling, we filter the K most likely next words and +redistribute the probability mass among only those K next words. .. code:: ipython3 @@ -414,7 +409,7 @@ K next words. fill_value=filter_value).filled() return filtred_scores -Main Processing Function `⇑ <#top>`__ +Main Processing Function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Generating the predicted sequence. @@ -464,10 +459,11 @@ Generating the predicted sequence. attention_mask = np.concatenate((attention_mask, [[1] * len(next_tokens)]), axis=-1) return input_ids -Inference with GPT-Neo/GPT-2 `⇑ <#top>`__ +Inference with GPT-Neo/GPT-2 ############################################################################################################################### -The ``text`` variable below is the input used to generate a predicted sequence. +The ``text`` variable below is the input used to generate a predicted +sequence. .. code:: ipython3 @@ -496,8 +492,8 @@ The ``text`` variable below is the input used to generate a predicted sequence. Selected Model is PersonaGPT. Please select GPT-Neo or GPT-2 in the first cell to generate text sequences -Conversation with PersonaGPT using OpenVINO™ `⇑ <#top>`__ -############################################################################################################################### +Conversation with PersonaGPT using OpenVINO +===================================================================================== User Input is tokenized with ``eos_token`` concatenated in the end. 
Model input is tokenized text, which serves as initial condition for @@ -512,7 +508,7 @@ The Generated response is added to the history with the ``eos_token`` at the end. Further User Input is added to it and again passed into the model. -Converse Function `⇑ <#top>`__ +Converse Function ############################################################################################################################### Wrapper on generate sequence function to support conversation @@ -557,10 +553,9 @@ Wrapper on generate sequence function to support conversation response = ''.join(tokenizer.batch_decode(history)).split(eos_token)[-2] return response, history -Conversation Class `⇑ <#top>`__ +Conversation Class ############################################################################################################################### - .. code:: ipython3 class Conversation: @@ -582,10 +577,9 @@ Conversation Class `⇑ <#top>`__ self.messages.append(f"PersonaGPT: {response}") return response -Conversation with PersonaGPT `⇑ <#top>`__ +Conversation with PersonaGPT ############################################################################################################################### - This notebook provides two styles of inference, Plain and Interactive. The style of inference can be selected in the next cell. @@ -663,23 +657,23 @@ The style of inference can be selected in the next cell. .. parsed-literal:: Person: Hi,How are you? - PersonaGPT: good, how about you? what do you like to do for fun? + PersonaGPT: good and you? Person: What are you doing? - PersonaGPT: i'm playing some video games. + PersonaGPT: working on my studies Person: I like to dance,do you? - PersonaGPT: i don't have any dancing abilities. + PersonaGPT: i enjoy dance, whats your favorite dance? Person: Can you recommend me some books? - PersonaGPT: anybody can do it if you try. + PersonaGPT: do you like to read? Person: Hi,How are you? - PersonaGPT: good, do you have any hobbies? + PersonaGPT: good and you? Person: What are you doing? - PersonaGPT: i love to cook. + PersonaGPT: what are you doing right now? Person: I like to dance,do you? - PersonaGPT: i don't have any musical abilities. + PersonaGPT: i enjoy dance too. Person: Can you recommend me some books? - PersonaGPT: anybody can do it if you try. + PersonaGPT: what do you like about dance? Person: Hi,How are you? - PersonaGPT: good, do you like cooking? + PersonaGPT: i'm good thanks. Person: What are you doing? - PersonaGPT: i am watching netflix. + PersonaGPT: working on studying. diff --git a/docs/notebooks/224-3D-segmentation-point-clouds-with-output.rst b/docs/notebooks/224-3D-segmentation-point-clouds-with-output.rst index 8dcb6fa3b7f11d..ff203173791738 100644 --- a/docs/notebooks/224-3D-segmentation-point-clouds-with-output.rst +++ b/docs/notebooks/224-3D-segmentation-point-clouds-with-output.rst @@ -1,8 +1,6 @@ Part Segmentation of 3D Point Clouds with OpenVINO™ =================================================== - - This notebook demonstrates how to process `point cloud `__ data and run 3D Part Segmentation with OpenVINO. We use the @@ -10,7 +8,7 @@ Part Segmentation with OpenVINO. We use the detect each part of a chair and return its category. 
PointNet --------- +############################################################################################################################### PointNet was proposed by Charles Ruizhongtai Qi, a researcher at Stanford University in 2016: `PointNet: Deep Learning on Point Sets for @@ -24,9 +22,7 @@ segmentation, to scene semantic parsing. It is highly efficient and effective, showing strong performance on par or even better than state of the art. -.. _top: - -**Table of contents**: +**Table of contents:** - `Imports <#imports>`__ - `Prepare the Model <#prepare-the-model>`__ @@ -36,10 +32,9 @@ of the art. - `Select inference device <#select-inference-device>`__ -Imports `⇑ <#top>`__ +Imports ############################################################################################################################### - .. code:: ipython3 import sys @@ -54,12 +49,12 @@ Imports `⇑ <#top>`__ sys.path.append("../utils") from notebook_utils import download_file -Prepare the Model `⇑ <#top>`__ +Prepare the Model ############################################################################################################################### -Download the pre-trained PointNet ONNX model. This pre-trained model is provided by -`axinc-ai `__, and you can find more -point clouds examples +Download the pre-trained PointNet ONNX model. This pre-trained model is +provided by `axinc-ai `__, and you can +find more point clouds examples `here `__. .. code:: ipython3 @@ -79,7 +74,7 @@ function returns an OpenVINO model ready to load on a device and start making predictions. We can save it on a disk for next usage with ``openvino.runtime.serialize``. For more information about model conversion Python API, see this -`page `__. +`page `__. .. code:: ipython3 @@ -97,10 +92,9 @@ conversion Python API, see this model = core.read_model(model=ir_model_xml) -Data Processing Module `⇑ <#top>`__ +Data Processing Module ############################################################################################################################### - .. code:: ipython3 def load_data(point_file: Union[str, Path]): @@ -153,7 +147,7 @@ Data Processing Module `⇑ <#top>`__ return ax -Visualize the original 3D data `⇑ <#top>`__ +Visualize the original 3D data ############################################################################################################################### The point cloud data can be downloaded from @@ -179,14 +173,14 @@ chair for example. .. image:: 224-3D-segmentation-point-clouds-with-output_files/224-3D-segmentation-point-clouds-with-output_10_0.png -Run inference `⇑ <#top>`__ +Run inference ############################################################################################################################### -Run inference and visualize the results of 3D segmentation. - The input data is a point cloud with -``1 batch size``\ ,\ ``3 axis value`` (x, y, z) and -``arbitrary number of points`` (dynamic shape). - The output data is a -mask with ``1 batch size`` and ``4 classification confidence`` for each -input point. +Run inference and visualize the results of 3D segmentation. - The input +data is a point cloud with ``1 batch size``\ ,\ ``3 axis value`` (x, y, +z) and ``arbitrary number of points`` (dynamic shape). - The output data +is a mask with ``1 batch size`` and ``4 classification confidence`` for +each input point. .. code:: ipython3 @@ -208,10 +202,9 @@ input point. 
output shape: [1,?,4] -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 diff --git a/docs/notebooks/225-stable-diffusion-text-to-image-with-output.rst b/docs/notebooks/225-stable-diffusion-text-to-image-with-output.rst index 255e3b6b2a51cc..bfb59f260ff0d8 100644 --- a/docs/notebooks/225-stable-diffusion-text-to-image-with-output.rst +++ b/docs/notebooks/225-stable-diffusion-text-to-image-with-output.rst @@ -1,8 +1,6 @@ Text-to-Image Generation with Stable Diffusion and OpenVINO™ ============================================================ - - Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from `CompVis `__, `Stability @@ -36,14 +34,11 @@ using OpenVINO. Notebook contains the following steps: -1. Convert PyTorch models to ONNX format. -2. Convert ONNX models to OpenVINO IR format, using model conversion - API. +1. Create pipeline with PyTorch models. +2. Convert models to OpenVINO IR format, using model conversion API. 3. Run Stable Diffusion pipeline with OpenVINO. -.. _top: - -**Table of contents**: +**Table of contents:** - `Prerequisites <#prerequisites>`__ - `Create PyTorch Models pipeline <#create-pytorch-models-pipeline>`__ @@ -59,9 +54,10 @@ Notebook contains the following steps: - `Text-to-Image generation <#text-to-image-generation>`__ - `Image-to-Image generation <#image-to-image-generation>`__ -Prerequisites `⇑ <#top>`__ -############################################################################################################################### +.. - `Interactive demo <#interactive-demo>`__ +Prerequisites +############################################################################################################################### **The following is needed only if you want to use the original model. If not, you do not have to do anything. Just run the notebook.** @@ -80,7 +76,6 @@ not, you do not have to do anything. Just run the notebook.** You can login on Hugging Face Hub in notebook environment, using following code: - .. code:: python @@ -102,21 +97,23 @@ solutions based on Stable Diffusion. .. code:: ipython3 + !pip install -q "openvino==2023.1.0dev20230811" !pip install -q "diffusers[torch]>=0.9.0" !pip install -q "huggingface-hub>=0.9.1" + !pip install -q gradio - -Create PyTorch Models pipeline `⇑ <#top>`__ +Create PyTorch Models pipeline ############################################################################################################################### -``StableDiffusionPipeline`` is an end-to-end inference pipeline that you can use to generate images -from text with just a few lines of code. +``StableDiffusionPipeline`` is an end-to-end inference pipeline that you +can use to generate images from text with just a few lines of code. First, load the pre-trained weights of all components of the model. .. code:: ipython3 from diffusers import StableDiffusionPipeline + import gc pipe = StableDiffusionPipeline.from_pretrained("prompthero/openjourney").to("cpu") text_encoder = pipe.text_encoder @@ -127,31 +124,151 @@ First, load the pre-trained weights of all components of the model. vae.eval() del pipe + gc.collect() + + +.. parsed-literal:: + + 2023-08-29 12:35:30.891928: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. 
You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. + 2023-08-29 12:35:30.933110: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. + To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. + 2023-08-29 12:35:31.755679: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT + + + +.. parsed-literal:: + + Downloading (…)ain/model_index.json: 0%| | 0.00/541 [00:00`__ + +.. parsed-literal:: + + Downloading (…)tokenizer/vocab.json: 0%| | 0.00/1.06M [00:00`__. You need -to provide a model object, input data for model tracing and a path for -saving the model. Optionally, you can provide the target onnx opset for -conversion and other parameters specified in documentation (for example, -input and output names or dynamic shapes). - -While ONNX models are directly supported by OpenVINO™ runtime, it can be -useful to convert them to IR format to take advantage of advanced -OpenVINO optimization tools and features. For converting the model to IR -format and compressing weights to ``FP16`` format, you will use model -conversion API. +Staring from 2023.0 release, OpenVINO supports direct conversion PyTorch +models to OpenVINO IR format. You need to provide a model object and +input data for model tracing. Optionally, you can declare expected input +format for model - shapes, data types. To take advantage of advanced +OpenVINO optimization tools and features, model should be converted to +IR format using ``ov.convert_model`` and saved on disk (by default in +compressed to FP16 weights representation) for next deployment using +``ov.save_model``. The model consists of three important parts: @@ -163,10 +280,9 @@ The model consists of three important parts: Let us convert each part. -Text Encoder `⇑ <#top>`__ +Text Encoder +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The text-encoder is responsible for transforming the input prompt, for example, “a photo of an astronaut riding a horse” into an embedding space that can be understood by the U-Net. It is usually a simple @@ -178,54 +294,50 @@ indexes of tokens from text processed by tokenizer and padded to maximum length accepted by model. Model outputs are two tensors: ``last_hidden_state`` - hidden state from the last MultiHeadAttention layer in the model and ``pooler_out`` - Pooled output for whole model -hidden states. You will use ``opset_version=14``, because model contains -``triu`` operation, supported in ONNX only starting from this opset. +hidden states. .. 
code:: ipython3 - import gc from pathlib import Path import torch + import openvino as ov - TEXT_ENCODER_ONNX_PATH = Path('text_encoder.onnx') - TEXT_ENCODER_OV_PATH = TEXT_ENCODER_ONNX_PATH.with_suffix('.xml') + TEXT_ENCODER_OV_PATH = Path("text_encoder.xml") + def cleanup_torchscript_cache(): + """ + Helper for removing cached model representation + """ + torch._C._jit_clear_class_registry() + torch.jit._recursive.concrete_type_store = torch.jit._recursive.ConcreteTypeStore() + torch.jit._state._clear_class_state() - def convert_encoder_onnx(xtext_encoder: StableDiffusionPipeline, onnx_path:Path): + def convert_encoder(text_encoder: torch.nn.Module, ir_path:Path): """ - Convert Text Encoder model to ONNX. - Function accepts pipeline, prepares example inputs for ONNX conversion via torch.export, + Convert Text Encoder mode. + Function accepts text encoder model, and prepares example inputs for conversion, Parameters: - pipe (StableDiffusionPipeline): Stable Diffusion pipeline - onnx_path (Path): File for storing onnx model + text_encoder (torch.nn.Module): text_encoder model from Stable Diffusion pipeline + ir_path (Path): File for storing model Returns: None """ - if not onnx_path.exists(): - input_ids = torch.ones((1, 77), dtype=torch.long) - # switch model to inference mode - text_encoder.eval() - - # disable gradients calculation for reducing memory consumption - with torch.no_grad(): - # infer model, just to make sure that it works - text_encoder(input_ids) - # export model to ONNX format - torch.onnx.export( - text_encoder, # model instance - input_ids, # inputs for model tracing - onnx_path, # output file for saving result - input_names=['tokens'], # model input name for onnx representation - output_names=['last_hidden_state', 'pooler_out'], # model output names for onnx representation - opset_version=14 # onnx opset version for export - ) - print('Text Encoder successfully converted to ONNX') + input_ids = torch.ones((1, 77), dtype=torch.long) + # switch model to inference mode + text_encoder.eval() + + # disable gradients calculation for reducing memory consumption + with torch.no_grad(): + # Export model to IR format + ov_model = ov.convert_model(text_encoder, example_input=input_ids, input=[(1,77),]) + ov.save_model(ov_model, ir_path) + del ov_model + cleanup_torchscript_cache() + print(f'Text Encoder successfully converted to IR and saved to {ir_path}') if not TEXT_ENCODER_OV_PATH.exists(): - convert_encoder_onnx(text_encoder, TEXT_ENCODER_ONNX_PATH) - !mo --input_model $TEXT_ENCODER_ONNX_PATH --compress_to_fp16 - print('Text Encoder successfully converted to IR') + convert_encoder(text_encoder, TEXT_ENCODER_OV_PATH) else: print(f"Text encoder will be loaded from {TEXT_ENCODER_OV_PATH}") @@ -235,27 +347,44 @@ hidden states. You will use ``opset_version=14``, because model contains .. parsed-literal:: - Text encoder will be loaded from text_encoder.xml + WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11. +.. parsed-literal:: + + [ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s. + /home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py:286: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs! + if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): + /home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py:294: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): + /home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py:326: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): + /home/ea/work/ov_venv/lib/python3.8/site-packages/torch/jit/annotations.py:310: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either. + warnings.warn("TorchScript will treat type annotations of Tensor " .. parsed-literal:: - 13 + Text Encoder successfully converted to IR and saved to text_encoder.xml -U-net `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +.. parsed-literal:: + + 4202 + +U-net ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + Unet model has three inputs: -- ``sample`` - latent image sample from previous step. Generation +- ``sample`` - latent image sample from previous step. Generation process has not been started yet, so you will use random noise. -- ``timestep`` - current scheduler step. -- ``encoder_hidden_state`` - hidden state of text encoder. +- ``timestep`` - current scheduler step. +- ``encoder_hidden_state`` - hidden state of text encoder. Model predicts the ``sample`` state for the next step. @@ -263,56 +392,71 @@ Model predicts the ``sample`` state for the next step. import numpy as np - UNET_ONNX_PATH = Path('unet/unet.onnx') - UNET_OV_PATH = UNET_ONNX_PATH.parents[1] / 'unet.xml' + UNET_OV_PATH = Path('unet.xml') + + dtype_mapping = { + torch.float32: ov.Type.f32, + torch.float64: ov.Type.f64 + } - def convert_unet_onnx(unet:StableDiffusionPipeline, onnx_path:Path): + def convert_unet(unet:torch.nn.Module, ir_path:Path): """ - Convert Unet model to ONNX, then IR format. - Function accepts pipeline, prepares example inputs for ONNX conversion via torch.export, + Convert U-net model to IR format. 
+ Function accepts unet model, prepares example inputs for conversion, Parameters: - pipe (StableDiffusionPipeline): Stable Diffusion pipeline - onnx_path (Path): File for storing onnx model + unet (StableDiffusionPipeline): unet from Stable Diffusion pipeline + ir_path (Path): File for storing model Returns: None """ - if not onnx_path.exists(): - # prepare inputs - encoder_hidden_state = torch.ones((2, 77, 768)) - latents_shape = (2, 4, 512 // 8, 512 // 8) - latents = torch.randn(latents_shape) - t = torch.from_numpy(np.array(1, dtype=float)) - - # model size > 2Gb, it will be represented as onnx with external data files, you will store it in separated directory for avoid a lot of files in current directory - onnx_path.parent.mkdir(exist_ok=True, parents=True) - unet.eval() - - with torch.no_grad(): - torch.onnx.export( - unet, - (latents, t, encoder_hidden_state), str(onnx_path), - input_names=['latent_model_input', 't', 'encoder_hidden_states'], - output_names=['out_sample'] - ) - print('Unet successfully converted to ONNX') + # prepare inputs + encoder_hidden_state = torch.ones((2, 77, 768)) + latents_shape = (2, 4, 512 // 8, 512 // 8) + latents = torch.randn(latents_shape) + t = torch.from_numpy(np.array(1, dtype=float)) + dummy_inputs = (latents, t, encoder_hidden_state) + input_info = [] + for input_tensor in dummy_inputs: + shape = ov.PartialShape(tuple(input_tensor.shape)) + element_type = dtype_mapping[input_tensor.dtype] + input_info.append((shape, element_type)) + + unet.eval() + with torch.no_grad(): + ov_model = ov.convert_model(unet, example_input=dummy_inputs, input=input_info) + ov.save_model(ov_model, ir_path) + del ov_model + cleanup_torchscript_cache() + print(f'Unet successfully converted to IR and saved to {ir_path}') if not UNET_OV_PATH.exists(): - convert_unet_onnx(unet, UNET_ONNX_PATH) - del unet + convert_unet(unet, UNET_OV_PATH) gc.collect() - !mo --input_model $UNET_ONNX_PATH --compress_to_fp16 - print('Unet successfully converted to IR') else: - del unet print(f"Unet will be loaded from {UNET_OV_PATH}") + del unet gc.collect() .. parsed-literal:: - Unet will be loaded from unet.xml + /home/ea/work/diffusers/src/diffusers/models/unet_2d_condition.py:752: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): + /home/ea/work/diffusers/src/diffusers/models/resnet.py:214: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + assert hidden_states.shape[1] == self.channels + /home/ea/work/diffusers/src/diffusers/models/resnet.py:219: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + assert hidden_states.shape[1] == self.channels + /home/ea/work/diffusers/src/diffusers/models/resnet.py:138: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + assert hidden_states.shape[1] == self.channels + /home/ea/work/diffusers/src/diffusers/models/resnet.py:151: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! + if hidden_states.shape[0] >= 64: + + +.. parsed-literal:: + + Unet successfully converted to IR and saved to unet.xml @@ -323,10 +467,9 @@ Model predicts the ``sample`` state for the next step. -VAE `⇑ <#top>`__ +VAE +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - The VAE model has two parts, an encoder and a decoder. The encoder is used to convert the image into a low dimensional latent representation, which will serve as the input to the U-Net model. The decoder, @@ -346,18 +489,16 @@ of the pipeline, it will be better to convert them to separate models. .. code:: ipython3 - VAE_ENCODER_ONNX_PATH = Path('vae_encoder.onnx') - VAE_ENCODER_OV_PATH = VAE_ENCODER_ONNX_PATH.with_suffix('.xml') + VAE_ENCODER_OV_PATH = Path("vae_encoder.xml") - - def convert_vae_encoder_onnx(vae: StableDiffusionPipeline, onnx_path: Path): + def convert_vae_encoder(vae: torch.nn.Module, ir_path: Path): """ - Convert VAE model to ONNX, then IR format. - Function accepts pipeline, creates wrapper class for export only necessary for inference part, - prepares example inputs for ONNX conversion via torch.export, + Convert VAE model for encoding to IR format. + Function accepts vae model, creates wrapper class for export only necessary for inference part, + prepares example inputs for conversion, Parameters: - pipe (StableDiffusionInstructPix2PixPipeline): InstrcutPix2Pix pipeline - onnx_path (Path): File for storing onnx model + vae (torch.nn.Module): VAE model from StableDiffusio pipeline + ir_path (Path): File for storing model Returns: None """ @@ -367,39 +508,33 @@ of the pipeline, it will be better to convert them to separate models. 
self.vae = vae def forward(self, image): - h = self.vae.encoder(image) - moments = self.vae.quant_conv(h) - return moments - - if not onnx_path.exists(): - vae_encoder = VAEEncoderWrapper(vae) - vae_encoder.eval() - image = torch.zeros((1, 3, 512, 512)) - with torch.no_grad(): - torch.onnx.export(vae_encoder, image, onnx_path, input_names=[ - 'init_image'], output_names=['image_latent']) - print('VAE encoder successfully converted to ONNX') + return self.vae.encode(x=image)["latent_dist"].sample() + vae_encoder = VAEEncoderWrapper(vae) + vae_encoder.eval() + image = torch.zeros((1, 3, 512, 512)) + with torch.no_grad(): + ov_model = ov.convert_model(vae_encoder, example_input=image, input=[((1,3,512,512),)]) + ov.save_model(ov_model, ir_path) + del ov_model + cleanup_torchscript_cache() + print(f'VAE encoder successfully converted to IR and saved to {ir_path}') if not VAE_ENCODER_OV_PATH.exists(): - convert_vae_encoder_onnx(vae, VAE_ENCODER_ONNX_PATH) - !mo --input_model $VAE_ENCODER_ONNX_PATH --compress_to_fp16 - print('VAE encoder successfully converted to IR') + convert_vae_encoder(vae, VAE_ENCODER_OV_PATH) else: print(f"VAE encoder will be loaded from {VAE_ENCODER_OV_PATH}") - VAE_DECODER_ONNX_PATH = Path('vae_decoder.onnx') - VAE_DECODER_OV_PATH = VAE_DECODER_ONNX_PATH.with_suffix('.xml') + VAE_DECODER_OV_PATH = Path('vae_decoder.xml') - - def convert_vae_decoder_onnx(vae: StableDiffusionPipeline, onnx_path: Path): + def convert_vae_decoder(vae: torch.nn.Module, ir_path: Path): """ - Convert VAE model to ONNX, then IR format. - Function accepts pipeline, creates wrapper class for export only necessary for inference part, - prepares example inputs for ONNX conversion via torch.export, + Convert VAE model for decoding to IR format. + Function accepts vae model, creates wrapper class for export only necessary for inference part, + prepares example inputs for conversion, Parameters: - pipe (StableDiffusionInstructPix2PixPipeline): InstrcutPix2Pix pipeline - onnx_path (Path): File for storing onnx model + vae (torch.nn.Module): VAE model frm StableDiffusion pipeline + ir_path (Path): File for storing model Returns: None """ @@ -409,44 +544,65 @@ of the pipeline, it will be better to convert them to separate models. self.vae = vae def forward(self, latents): - latents = 1 / 0.18215 * latents return self.vae.decode(latents) + + vae_decoder = VAEDecoderWrapper(vae) + latents = torch.zeros((1, 4, 64, 64)) - if not onnx_path.exists(): - vae_decoder = VAEDecoderWrapper(vae) - latents = torch.zeros((1, 4, 64, 64)) - - vae_decoder.eval() - with torch.no_grad(): - torch.onnx.export(vae_decoder, latents, onnx_path, input_names=[ - 'latents'], output_names=['sample']) - print('VAE decoder successfully converted to ONNX') + vae_decoder.eval() + with torch.no_grad(): + ov_model = ov.convert_model(vae_decoder, example_input=latents, input=[((1,4,64,64),)]) + ov.save_model(ov_model, ir_path) + del ov_model + cleanup_torchscript_cache() + print(f'VAE decoder successfully converted to IR and saved to {ir_path}') if not VAE_DECODER_OV_PATH.exists(): - convert_vae_decoder_onnx(vae, VAE_DECODER_ONNX_PATH) - !mo --input_model $VAE_DECODER_ONNX_PATH --compress_to_fp16 - print('VAE decoder successfully converted to IR') + convert_vae_decoder(vae, VAE_DECODER_OV_PATH) else: print(f"VAE decoder will be loaded from {VAE_DECODER_OV_PATH}") del vae + gc.collect() .. 
parsed-literal:: - VAE encoder will be loaded from vae_encoder.xml - VAE decoder will be loaded from vae_decoder.xml + /home/ea/work/ov_venv/lib/python3.8/site-packages/torch/jit/_trace.py:1084: TracerWarning: Trace had nondeterministic nodes. Did you forget call .eval() on your model? Nodes: + %2493 : Float(1, 4, 64, 64, strides=[16384, 4096, 64, 1], requires_grad=0, device=cpu) = aten::randn(%2487, %2488, %2489, %2490, %2491, %2492) # /home/ea/work/diffusers/src/diffusers/utils/torch_utils.py:79:0 + This may cause errors in trace checking. To disable trace checking, pass check_trace=False to torch.jit.trace() + _check_trace( + /home/ea/work/ov_venv/lib/python3.8/site-packages/torch/jit/_trace.py:1084: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error: + Tensor-likes are not close! + + Mismatched elements: 10371 / 16384 (63.3%) + Greatest absolute difference: 0.0014181137084960938 at index (0, 2, 63, 63) (up to 1e-05 allowed) + Greatest relative difference: 0.006298586412390911 at index (0, 3, 63, 59) (up to 1e-05 allowed) + _check_trace( + + +.. parsed-literal:: + + VAE encoder successfully converted to IR and saved to vae_encoder.xml + VAE decoder successfully converted to IR and saved to vae_decoder.xml -Prepare Inference Pipeline `⇑ <#top>`__ -############################################################################################################################### +.. parsed-literal:: + + 7650 + + + +Prepare Inference Pipeline +############################################################################################################################### + Putting it all together, let us now take a closer look at how the model works in inference by illustrating the logical flow. -.. figure:: https://user-images.githubusercontent.com/29454499/216378932-7a9be39f-cc86-43e4-b072-66372a35d6bd.png +.. figure:: https://user-images.githubusercontent.com/29454499/260981188-c112dd0a-5752-4515-adca-8b09bea5d14a.png :alt: sd-pipeline sd-pipeline @@ -498,7 +654,7 @@ of the variational auto encoder. import cv2 from transformers import CLIPTokenizer - from diffusers.pipeline_utils import DiffusionPipeline + from diffusers.pipelines.pipeline_utils import DiffusionPipeline from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler from openvino.runtime import Model @@ -585,8 +741,8 @@ of the variational auto encoder. self._unet_output = unet.output(0) self._vae_d_output = vae_decoder.output(0) self._vae_e_output = vae_encoder.output(0) if vae_encoder is not None else None - self.height = self.unet.input(0).shape[2] * 8 - self.width = self.unet.input(0).shape[3] * 8 + self.height = 512 + self.width = 512 self.tokenizer = tokenizer def __call__( @@ -594,6 +750,7 @@ of the variational auto encoder. prompt: Union[str, List[str]], image: PIL.Image.Image = None, num_inference_steps: Optional[int] = 50, + negative_prompt: Union[str, List[str]] = None, guidance_scale: Optional[float] = 7.5, eta: Optional[float] = 0.0, output_type: Optional[str] = "pil", @@ -612,6 +769,8 @@ of the variational auto encoder. num_inference_steps (int, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. + negative_prompt (str or List[str]): + The negative prompt or prompts to guide the image generation. 
guidance_scale (float, *optional*, defaults to 7.5): Guidance scale as defined in Classifier-Free Diffusion Guidance(https://arxiv.org/abs/2207.12598). guidance_scale is defined as `w` of equation 2. @@ -635,39 +794,10 @@ of the variational auto encoder. if seed is not None: np.random.seed(seed) - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - img_buffer = [] - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_embeddings = self.text_encoder(text_input.input_ids)[self._text_encoder_output] - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="np" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids)[self._text_encoder_output] - - # For classifier free guidance, you need to do two forward passes. - # Here you concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) + # get prompt text embeddings + text_embeddings = self._encode_prompt(prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt) # set timesteps accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) @@ -706,15 +836,83 @@ of the variational auto encoder. # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs)["prev_sample"].numpy() if gif: - image = self.vae_decoder(latents)[self._vae_d_output] + image = self.vae_decoder(latents * (1 / 0.18215))[self._vae_d_output] image = self.postprocess_image(image, meta, output_type) img_buffer.extend(image) # scale and decode the image latents with vae - image = self.vae_decoder(latents)[self._vae_d_output] + image = self.vae_decoder(latents * (1 / 0.18215))[self._vae_d_output] image = self.postprocess_image(image, meta, output_type) return {"sample": image, 'iterations': img_buffer} + + def _encode_prompt(self, prompt:Union[str, List[str]], num_images_per_prompt:int = 1, do_classifier_free_guidance:bool = True, negative_prompt:Union[str, List[str]] = None): + """ + Encodes the prompt into text encoder hidden states. 
+ + Parameters: + prompt (str or list(str)): prompt to be encoded + num_images_per_prompt (int): number of images that should be generated per prompt + do_classifier_free_guidance (bool): whether to use classifier free guidance or not + negative_prompt (str or list(str)): negative prompt to be encoded + Returns: + text_embeddings (np.ndarray): text encoder hidden states + """ + batch_size = len(prompt) if isinstance(prompt, list) else 1 + + # tokenize input prompts + text_inputs = self.tokenizer( + prompt, + padding="max_length", + max_length=self.tokenizer.model_max_length, + truncation=True, + return_tensors="np", + ) + text_input_ids = text_inputs.input_ids + + text_embeddings = self.text_encoder( + text_input_ids)[self._text_encoder_output] + + # duplicate text embeddings for each generation per prompt + if num_images_per_prompt != 1: + bs_embed, seq_len, _ = text_embeddings.shape + text_embeddings = np.tile( + text_embeddings, (1, num_images_per_prompt, 1)) + text_embeddings = np.reshape( + text_embeddings, (bs_embed * num_images_per_prompt, seq_len, -1)) + + # get unconditional embeddings for classifier free guidance + if do_classifier_free_guidance: + uncond_tokens: List[str] + max_length = text_input_ids.shape[-1] + if negative_prompt is None: + uncond_tokens = [""] * batch_size + elif isinstance(negative_prompt, str): + uncond_tokens = [negative_prompt] + else: + uncond_tokens = negative_prompt + uncond_input = self.tokenizer( + uncond_tokens, + padding="max_length", + max_length=max_length, + truncation=True, + return_tensors="np", + ) + + uncond_embeddings = self.text_encoder(uncond_input.input_ids)[self._text_encoder_output] + + # duplicate unconditional embeddings for each generation per prompt, using mps friendly method + seq_len = uncond_embeddings.shape[1] + uncond_embeddings = np.tile(uncond_embeddings, (1, num_images_per_prompt, 1)) + uncond_embeddings = np.reshape(uncond_embeddings, (batch_size * num_images_per_prompt, seq_len, -1)) + + # For classifier free guidance, we need to do two forward passes. + # Here we concatenate the unconditional and text embeddings into a single batch + # to avoid doing two forward passes + text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) + + return text_embeddings + def prepare_latents(self, image:PIL.Image.Image = None, latent_timestep:torch.Tensor = None): """ @@ -737,10 +935,7 @@ of the variational auto encoder. noise = noise * self.scheduler.sigmas[0].numpy() return noise, {} input_image, meta = preprocess(image) - moments = self.vae_encoder(input_image)[self._vae_e_output] - mean, logvar = np.split(moments, 2, axis=1) - std = np.exp(logvar * 0.5) - latents = (mean + std * np.random.randn(*mean.shape)) * 0.18215 + latents = self.vae_encoder(input_image)[self._vae_e_output] * 0.18215 latents = self.scheduler.add_noise(torch.from_numpy(latents), torch.from_numpy(noise), latent_timestep).numpy() return latents, meta @@ -803,16 +998,14 @@ of the variational auto encoder. return timesteps, num_inference_steps - t_start -Configure Inference Pipeline `⇑ <#top>`__ +Configure Inference Pipeline ############################################################################################################################### - First, you should create instances of OpenVINO Model. .. code:: ipython3 - from openvino.runtime import Core - core = Core() + core = ov.Core() Select device from dropdown list for running inference using OpenVINO. 
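Once a device is chosen with the dropdown defined below, each converted part can be compiled for it individually. The following is a minimal sketch only (it assumes the ``*.xml`` IR files produced earlier in this notebook and the ``device`` widget; the actual pipeline assembly used for generation is shown in the next cells):

.. code:: ipython3

    import numpy as np

    # compile each converted IR for the selected device (illustrative names)
    text_enc_compiled = core.compile_model(str(TEXT_ENCODER_OV_PATH), device.value)
    unet_compiled = core.compile_model(str(UNET_OV_PATH), device.value)
    vae_dec_compiled = core.compile_model(str(VAE_DECODER_OV_PATH), device.value)

    # a compiled model can be called directly; here the text encoder
    # receives a dummy batch of 77 token ids, as used during conversion
    dummy_tokens = np.ones((1, 77), dtype=np.int64)
    hidden_state = text_enc_compiled(dummy_tokens)[text_enc_compiled.output(0)]
    print(hidden_state.shape)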
@@ -822,13 +1015,22 @@ Select device from dropdown list for running inference using OpenVINO. device = widgets.Dropdown( options=core.available_devices + ["AUTO"], - value='AUTO', + value='CPU', description='Device:', disabled=False, ) device + + + +.. parsed-literal:: + + Dropdown(description='Device:', options=('CPU', 'GNA', 'AUTO'), value='CPU') + + + .. code:: ipython3 @@ -867,10 +1069,9 @@ Let us define them and put all components together scheduler=lms ) -Text-to-Image generation `⇑ <#top>`__ +Text-to-Image generation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Now, you can define a text prompt for image generation and run inference pipeline. Optionally, you can also change the random generator seed for latent state initialization and number of steps. @@ -880,12 +1081,16 @@ latent state initialization and number of steps. Consider increasing ``steps`` to get more precise results. A suggested value is ``50``, but it will take longer time to process. - .. code:: ipython3 import ipywidgets as widgets - - text_prompt = widgets.Text(value='cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting, epic composition. A golden daylight, hyper-realistic environment. Hyper and intricate detail, photo-realistic. Cinematic and volumetric light. Epic concept art. Octane render and Unreal Engine, trending on artstation', description='your text') + sample_text = ('cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting, epic composition. ' + 'A golden daylight, hyper-realistic environment. ' + 'Hyper and intricate detail, photo-realistic. ' + 'Cinematic and volumetric light. ' + 'Epic concept art. ' + 'Octane render and Unreal Engine, trending on artstation') + text_prompt = widgets.Text(value=sample_text, description='your text') num_steps = widgets.IntSlider(min=1, max=50, value=20, description='steps:') seed = widgets.IntSlider(min=0, max=10000000, description='seed: ', value=42) widgets.VBox([text_prompt, seed, num_steps]) @@ -968,10 +1173,9 @@ Now is show time! Nice. As you can see, the picture has quite a high definition 🔥. -Image-to-Image generation `⇑ <#top>`__ +Image-to-Image generation +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - Image-to-Image generation, additionally to text prompt, requires providing initial image. Optionally, you can also change ``strength`` parameter, which is a value between 0.0 and 1.0, that controls the @@ -1064,3 +1268,81 @@ semantically consistent with the input. .. image:: 225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_39_1.png + +.. Interactive demo +.. ############################################################################################################################### + +.. .. code:: ipython3 + +.. import gradio as gr +.. import urllib.request + +.. urllib.request.urlretrieve( +.. "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/coco.jpg", +.. "coco.jpg" +.. ) + + +.. def generate_from_text(text, seed, num_steps, _=gr.Progress(track_tqdm=True)): +.. result = ov_pipe(text, num_inference_steps=num_steps, seed=seed) +.. return result["sample"][0] + + +.. def generate_from_image(img, text, seed, num_steps, strength, _=gr.Progress(track_tqdm=True)): +.. 
result = ov_pipe(text, img, num_inference_steps=num_steps, seed=seed, strength=strength) +.. return result["sample"][0] + + +.. with gr.Blocks() as demo: +.. with gr.Tab("Text-to-Image generation"): +.. with gr.Row(): +.. with gr.Column(): +.. text_input = gr.Textbox(lines=3, label="Text") +.. seed_input = gr.Slider(0, 10000000, value=42, label="Seed") +.. steps_input = gr.Slider(1, 50, value=20, step=1, label="Steps") +.. out = gr.Image(label="Result", type="pil") +.. btn = gr.Button() +.. btn.click(generate_from_text, [text_input, seed_input, steps_input], out) +.. gr.Examples([[sample_text, 42, 20]], [text_input, seed_input, steps_input]) +.. with gr.Tab("Image-to-Image generation"): +.. with gr.Row(): +.. with gr.Column(): +.. i2i_input = gr.Image(label="Image", type="pil") +.. i2i_text_input = gr.Textbox(lines=3, label="Text") +.. i2i_seed_input = gr.Slider(0, 1024, value=42, label="Seed") +.. i2i_steps_input = gr.Slider(1, 50, value=10, step=1, label="Steps") +.. strength_input = gr.Slider(0, 1, value=0.5, label="Strength") +.. i2i_out = gr.Image(label="Result") +.. i2i_btn = gr.Button() +.. sample_i2i_text = "amazing watercolor painting" +.. i2i_btn.click( +.. generate_from_image, +.. [i2i_input, i2i_text_input, i2i_seed_input, i2i_steps_input, strength_input], +.. i2i_out, +.. ) +.. gr.Examples( +.. [["coco.jpg", sample_i2i_text, 42, 10, 0.5]], +.. [i2i_input, i2i_text_input, i2i_seed_input, i2i_steps_input, strength_input], +.. ) + +.. try: +.. demo.queue().launch(debug=False) +.. except Exception: +.. demo.queue().launch(share=True, debug=False) +.. # if you are launching remotely, specify server_name and server_port +.. # demo.launch(server_name='your server name', server_port='server port in int') +.. # Read more in the docs: https://gradio.app/docs/ + + +.. .. parsed-literal:: + +.. Running on local URL: http://127.0.0.1:7860 + +.. To create a public link, set `share=True` in `launch()`. + + + +.. .. raw:: html + +..
+ diff --git a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_33_1.png b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_33_1.png index 1052cb35d056a5..be83c18365ed0d 100644 --- a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_33_1.png +++ b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_33_1.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d0f1bbca67c22a713aad717ea3ee14fb453e85d45d542429059bb25fee2b7d6d -size 372493 +oid sha256:56f816275df653fed42eb4d8565dde858c60540f5f4c4efcd45fdd395dea7acc +size 372482 diff --git a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_37_1.png b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_37_1.png index 431d6ff733b8e7..64c4abb841c93b 100644 --- a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_37_1.png +++ b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_37_1.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d8a9811dc8ab8b5f2ef6171ba757ccf9bbe33d0f137824ad0c32380bd74c5ff4 -size 928896 +oid sha256:9b88666d9dc320d32ce7b8c5865b931fe3087b33bbbf5f9e62df21094de13bf4 +size 928958 diff --git a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_39_1.png b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_39_1.png index 73ba6a563cfab8..4c1cb45f6a29d2 100644 --- a/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_39_1.png +++ b/docs/notebooks/225-stable-diffusion-text-to-image-with-output_files/225-stable-diffusion-text-to-image-with-output_39_1.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:a30afadba65d296f28d61a94f3a2a84302cbcc5c25f108a9f2b9105b7e307945 -size 726937 +oid sha256:16a875cb84a2dad46be7de58d29e2778e652827f916e55f167d5479edbc05f8c +size 726871 diff --git a/docs/notebooks/226-yolov7-optimization-with-output.rst b/docs/notebooks/226-yolov7-optimization-with-output.rst index e87f4de95642ce..0f00198465ee14 100644 --- a/docs/notebooks/226-yolov7-optimization-with-output.rst +++ b/docs/notebooks/226-yolov7-optimization-with-output.rst @@ -40,8 +40,6 @@ The tutorial consists of the following steps: - Compare accuracy of the FP32 and quantized models. - Compare performance of the FP32 and quantized models. -.. 
_top: - **Table of contents**: - `Get Pytorch model <#get-pytorch-model>`__ @@ -66,7 +64,7 @@ The tutorial consists of the following steps: - `Validate quantized model accuracy <#validate-quantized-model-accuracy>`__ - `Compare Performance of the Original and Quantized Models <#compare-performance-of-the-original-and-quantized-models>`__ -Get Pytorch model `⇑ <#top>`__ +Get Pytorch model ############################################################################################################################### @@ -85,7 +83,7 @@ to obtain pre-trained model: In this case, the model creators provide a tool that enables converting the YOLOv7 model to ONNX, so we do not need to do these steps manually. -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### @@ -142,7 +140,7 @@ Prerequisites `⇑ <#top>`__ -Check model inference `⇑ <#top>`__ +Check model inference ############################################################################################################################### @@ -183,7 +181,7 @@ result, -Export to ONNX `⇑ <#top>`__ +Export to ONNX ############################################################################################################################### @@ -280,7 +278,7 @@ an end2end ONNX model, you can check this Export complete (2.53s). Visualize with https://github.com/lutzroeder/netron. -Convert ONNX Model to OpenVINO Intermediate Representation (IR). `⇑ <#top>`__ +Convert ONNX Model to OpenVINO Intermediate Representation (IR). ############################################################################################################################### While ONNX models are directly supported by OpenVINO runtime, @@ -300,7 +298,7 @@ to OpenVINO IR format for future execution. # serialize model for saving IR serialize(model, 'model/yolov7-tiny.xml') -Verify model inference `⇑ <#top>`__ +Verify model inference ############################################################################################################################### @@ -308,7 +306,7 @@ To test model work, we create inference pipeline similar to ``detect.py``. The pipeline consists of preprocessing step, inference of OpenVINO model, and results post-processing to get bounding boxes. -Preprocessing `⇑ <#top>`__ +Preprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -389,7 +387,7 @@ To keep specific shape, preprocessing automatically enables padding. COLORS = {name: [np.random.randint(0, 255) for _ in range(3)] for i, name in enumerate(NAMES)} -Postprocessing `⇑ <#top>`__ +Postprocessing +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -471,7 +469,7 @@ algorithm and rescale boxes coordinates to original image size. 
# read converted model model = core.read_model('model/yolov7-tiny.xml') -Select inference device `⇑ <#top>`__ +Select inference device +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -518,11 +516,11 @@ Select device from dropdown list for running inference using OpenVINO: -Verify model accuracy `⇑ <#top>`__ +Verify model accuracy ############################################################################################################################### -Download dataset `⇑ <#top>`__ +Download dataset +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -566,7 +564,7 @@ the original model evaluation scripts. coco2017labels-segments.zip: 0%| | 0.00/169M [00:00`__ +Create dataloader +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -597,7 +595,7 @@ Create dataloader `⇑ <#top>`__ val: Scanning 'coco/val2017' images and labels... 4952 found, 48 missing, 0 empty, 0 corrupted: 100%|██████████| 5000/5000 [00:01<00:00, 2979.40it/s] -Define validation function `⇑ <#top>`__ +Define validation function +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -764,7 +762,7 @@ Validation function reports following list of accuracy metrics: all 5000 36335 0.651 0.506 0.544 0.359 -Optimize model using NNCF Post-training Quantization API `⇑ <#top>`__ +Optimize model using NNCF Post-training Quantization API ############################################################################################################################### @@ -841,7 +839,7 @@ asymmetric quantization of activations. Biases correction: 100%|██████████| 58/58 [00:04<00:00, 14.15it/s] -Validate Quantized model inference `⇑ <#top>`__ +Validate Quantized model inference ############################################################################################################################### @@ -872,7 +870,7 @@ Validate Quantized model inference `⇑ <#top>`__ -Validate quantized model accuracy `⇑ <#top>`__ +Validate quantized model accuracy ############################################################################################################################### @@ -907,7 +905,7 @@ As we can see, model accuracy slightly changed after quantization. However, if we look at the output image, these changes are not significant. -Compare Performance of the Original and Quantized Models `⇑ <#top>`__ +Compare Performance of the Original and Quantized Models ############################################################################################################################### Finally, use the OpenVINO `Benchmark diff --git a/docs/notebooks/227-whisper-subtitles-generation-with-output.rst b/docs/notebooks/227-whisper-subtitles-generation-with-output.rst index 39d210defad500..fde501af45670d 100644 --- a/docs/notebooks/227-whisper-subtitles-generation-with-output.rst +++ b/docs/notebooks/227-whisper-subtitles-generation-with-output.rst @@ -1,8 +1,6 @@ Video Subtitle Generation using Whisper and OpenVINO™ ===================================================== - - `Whisper `__ is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. It is a multi-task @@ -21,136 +19,88 @@ card `__ and GitHub `repository `__. 
In this notebook, we will use Whisper with OpenVINO to generate -subtitles in a sample video. Notebook contains the following steps: 1. -Download the model. 2. Instantiate the PyTorch model pipeline. 3. Export -the ONNX model and convert it to OpenVINO IR, using model conversion -API. 4. Run the Whisper pipeline with OpenVINO models. +subtitles in a sample video. Notebook contains the following steps: -.. _top: +1. Download the model. +2. Instantiate the PyTorch model pipeline. +3. Convert model to OpenVINO IR, using model conversion API. +4. Run the Whisper pipeline with OpenVINO models. -**Table of contents**: +**Table of contents:** -- `Prerequisites <#prerequisites>`__ -- `Instantiate model <#instantiate-model>`__ +- `Prerequisites <#Prerequisites>`__ +- `Instantiate model <#Instantiate-model>`__ - `Convert model to OpenVINO Intermediate Representation (IR) format. <#convert-model-to-openvino-intermediate-representation-ir-format>`__ - `Convert Whisper Encoder to OpenVINO IR <#convert-whisper-encoder-to-openvino-ir>`__ - - `Convert Whisper decoder to OpenVINO IR <#5convert-whisper-decoder-to-openvino-ir>`__ + - `Convert Whisper decoder to OpenVINO IR <#convert-whisper-decoder-to-openvino-ir>`__ - `Prepare inference pipeline <#prepare-inference-pipeline>`__ - `Select inference device <#select-inference-device>`__ - - `Define audio preprocessing <#define-audio-preprocessing>`__ - - `Run video transcription pipeline <#run-video-transcription-pipeline>`__ +.. - `Interactive demo <#interactive-demo>`__ -Prerequisites `⇑ <#top>`__ +Prerequisites ############################################################################################################################### - -Clone and install the model repository. +Install dependencies. .. code:: ipython3 - !pip install -q "openvino-dev>=2023.0.0" - !pip install -q "python-ffmpeg<=1.0.16" moviepy transformers onnx - !pip install -q -I "git+https://github.com/garywu007/pytube.git" + %pip install -q "openvino==2023.1.0.dev20230811" + %pip install -q "python-ffmpeg<=1.0.16" moviepy transformers onnx + %pip install -q -I "git+https://github.com/garywu007/pytube.git" + %pip install -q -U gradio + %pip install -q -I "git+https://github.com/openai/whisper.git@e8622f9afc4eba139bf796c210f5c01081000472" .. parsed-literal:: DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 + Note: you may need to restart the kernel to use updated packages. DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. + ppgan 2.1.0 requires imageio==2.9.0, but you have imageio 2.31.3 which is incompatible. ppgan 2.1.0 requires librosa==0.8.1, but you have librosa 0.9.2 which is incompatible. 
ppgan 2.1.0 requires opencv-python<=4.6.0.66, but you have opencv-python 4.8.0.76 which is incompatible. + Note: you may need to restart the kernel to use updated packages. DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 - - -.. code:: ipython3 - - from pathlib import Path - - REPO_DIR = Path("whisper") - if not REPO_DIR.exists(): - !git clone https://github.com/openai/whisper.git -b v20230124 - !cd whisper && pip install . - - -.. parsed-literal:: - - Cloning into 'whisper'... - remote: Enumerating objects: 589, done. - remote: Counting objects: 100% (367/367), done. - remote: Compressing objects: 100% (82/82), done. - remote: Total 589 (delta 320), reused 288 (delta 285), pack-reused 222 - Receiving objects: 100% (589/589), 8.14 MiB | 4.18 MiB/s, done. - Resolving deltas: 100% (357/357), done. - Note: switching to '55f690af7914c672c69733b7e04ef5a41b2b2774'. - - You are in 'detached HEAD' state. You can look around, make experimental - changes and commit them, and you can discard any commits you make in this - state without impacting any branches by switching back to a branch. - - If you want to create a new branch to retain commits you create, you may - do so (now or later) by using -c with the switch command. Example: - - git switch -c - - Or undo this operation with: - - git switch - - - Turn off this advice by setting config variable advice.detachedHead to false - - Processing /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/notebooks/227-whisper-subtitles-generation/whisper - Preparing metadata (setup.py) ... 
- done - Requirement already satisfied: numpy in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from openai-whisper==20230124) (1.23.5) - Requirement already satisfied: torch in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from openai-whisper==20230124) (1.13.1+cpu) - Requirement already satisfied: tqdm in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from openai-whisper==20230124) (4.66.1) - Collecting more-itertools (from openai-whisper==20230124) - Obtaining dependency information for more-itertools from https://files.pythonhosted.org/packages/5a/cb/6dce742ea14e47d6f565589e859ad225f2a5de576d7696e0623b784e226b/more_itertools-10.1.0-py3-none-any.whl.metadata - Using cached more_itertools-10.1.0-py3-none-any.whl.metadata (33 kB) - Requirement already satisfied: transformers>=4.19.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from openai-whisper==20230124) (4.31.0) - Collecting ffmpeg-python==0.2.0 (from openai-whisper==20230124) - Using cached ffmpeg_python-0.2.0-py3-none-any.whl (25 kB) - Requirement already satisfied: future in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from ffmpeg-python==0.2.0->openai-whisper==20230124) (0.18.3) - Requirement already satisfied: filelock in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (3.12.2) - Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (0.16.4) - Requirement already satisfied: packaging>=20.0 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (23.1) - Requirement already satisfied: pyyaml>=5.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (6.0.1) - Requirement already satisfied: regex!=2019.12.17 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (2023.8.8) - Requirement already satisfied: requests in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (2.31.0) - Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (0.13.3) - Requirement already satisfied: safetensors>=0.3.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from transformers>=4.19.0->openai-whisper==20230124) (0.3.2) - Requirement already satisfied: 
typing-extensions in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from torch->openai-whisper==20230124) (4.7.1) - Requirement already satisfied: fsspec in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.14.1->transformers>=4.19.0->openai-whisper==20230124) (2023.6.0) - Requirement already satisfied: charset-normalizer<4,>=2 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests->transformers>=4.19.0->openai-whisper==20230124) (3.2.0) - Requirement already satisfied: idna<4,>=2.5 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests->transformers>=4.19.0->openai-whisper==20230124) (3.4) - Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests->transformers>=4.19.0->openai-whisper==20230124) (1.26.16) - Requirement already satisfied: certifi>=2017.4.17 in /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages (from requests->transformers>=4.19.0->openai-whisper==20230124) (2023.7.22) - Using cached more_itertools-10.1.0-py3-none-any.whl (55 kB) - Building wheels for collected packages: openai-whisper - Building wheel for openai-whisper (setup.py) ... - \ | done - Created wheel for openai-whisper: filename=openai_whisper-20230124-py3-none-any.whl size=1179305 sha256=4fcfbe9ab46c8d5e7a7fa0c52e896e59bdbc043a743c686acc001c6ed8dc5e65 - Stored in directory: /tmp/pip-ephem-wheel-cache-5a4nqoja/wheels/0c/9d/b6/d90fb003a36a5e4026f7e998e937791cc6a6c6e9abea61d48d - Successfully built openai-whisper + Note: you may need to restart the kernel to use updated packages. + DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 + Note: you may need to restart the kernel to use updated packages. DEPRECATION: pytorch-lightning 1.6.5 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063 - Installing collected packages: more-itertools, ffmpeg-python, openai-whisper - Successfully installed ffmpeg-python-0.2.0 more-itertools-10.1.0 openai-whisper-20230124 + ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. + black 21.7b0 requires tomli<2.0.0,>=0.2.6, but you have tomli 2.0.1 which is incompatible. + google-auth 2.22.0 requires urllib3<2.0, but you have urllib3 2.0.4 which is incompatible. + nncf 2.5.0.dev0+90a1e860 requires networkx<=2.8.2,>=2.6, but you have networkx 3.1 which is incompatible. 
+ onnxconverter-common 1.14.0 requires protobuf==3.20.2, but you have protobuf 4.24.3 which is incompatible. + paddleclas 2.5.1 requires faiss-cpu==1.7.1.post2, but you have faiss-cpu 1.7.4 which is incompatible. + paddleclas 2.5.1 requires gast==0.3.3, but you have gast 0.4.0 which is incompatible. + ppgan 2.1.0 requires imageio==2.9.0, but you have imageio 2.31.3 which is incompatible. + ppgan 2.1.0 requires librosa==0.8.1, but you have librosa 0.9.2 which is incompatible. + ppgan 2.1.0 requires opencv-python<=4.6.0.66, but you have opencv-python 4.8.0.76 which is incompatible. + pyannote-audio 2.0.1 requires networkx<3.0,>=2.6, but you have networkx 3.1 which is incompatible. + pytorch-lightning 1.6.5 requires protobuf<=3.20.1, but you have protobuf 4.24.3 which is incompatible. + tensorflow 2.12.0 requires numpy<1.24,>=1.22, but you have numpy 1.24.4 which is incompatible. + tf2onnx 1.15.1 requires protobuf~=3.20.2, but you have protobuf 4.24.3 which is incompatible. + torchaudio 0.13.1+cpu requires torch==1.13.1, but you have torch 2.0.1 which is incompatible. + torchvision 0.14.1+cpu requires torch==1.13.1, but you have torch 2.0.1 which is incompatible. + Note: you may need to restart the kernel to use updated packages. -Instantiate model `⇑ <#top>`__ +Instantiate model ############################################################################################################################### -Whisper is a Transformer based encoder-decoder model, also referred to as a sequence-to-sequence model. -It maps a sequence of audio spectrogram features to a sequence of text -tokens. First, the raw audio inputs are converted to a log-Mel -spectrogram by action of the feature extractor. Then, the Transformer -encoder encodes the spectrogram to form a sequence of encoder hidden -states. Finally, the decoder autoregressively predicts text tokens, -conditional on both the previous tokens and the encoder hidden states. +Whisper is a Transformer based encoder-decoder model, also referred to +as a sequence-to-sequence model. It maps a sequence of audio spectrogram +features to a sequence of text tokens. First, the raw audio inputs are +converted to a log-Mel spectrogram by action of the feature extractor. +Then, the Transformer encoder encodes the spectrogram to form a sequence +of encoder hidden states. Finally, the decoder autoregressively predicts +text tokens, conditional on both the previous tokens and the encoder +hidden states. You can see the model architecture in the diagram below: @@ -173,90 +123,81 @@ Whisper family. model.eval() pass -Convert model to OpenVINO Intermediate Representation (IR) format. `⇑ <#top>`__ +Convert model to OpenVINO Intermediate Representation (IR) format. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ For best results with OpenVINO, it is recommended to convert the model -to OpenVINO IR format. OpenVINO supports PyTorch via ONNX conversion. We -will use ``torch.onnx.export`` for exporting the ONNX model from -PyTorch. We need to provide initialized model object and example of -inputs for shape inference. We will use ``mo.convert_model`` -functionality to convert the ONNX models. The ``mo.convert_model`` -Python function returns an OpenVINO model ready to load on device and -start making predictions. We can save it on disk for next usage with -``openvino.runtime.serialize``. - -Convert Whisper Encoder to OpenVINO IR `⇑ <#top>`__ +to OpenVINO IR format. 
We need to provide an initialized model object and
+an example of inputs for shape inference. We will use ``ov.convert_model``
+functionality to convert models. The ``ov.convert_model`` Python
+function returns an OpenVINO model ready to load on a device and start
+making predictions. We can save it on disk for later use with
+``ov.save_model``.
+
+Convert Whisper Encoder to OpenVINO IR
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+.. code:: ipython3
+
+    from pathlib import Path
+
+    WHISPER_ENCODER_OV = Path("whisper_encoder.xml")
+    WHISPER_DECODER_OV = Path("whisper_decoder.xml")

 .. code:: ipython3

     import torch
-    from openvino.tools import mo
-    from openvino.runtime import serialize
+    import openvino as ov

     mel = torch.zeros((1, 80, 3000))
     audio_features = model.encoder(mel)
-    torch.onnx.export(
-        model.encoder,
-        mel,
-        "whisper_encoder.onnx",
-        input_names=["mel"],
-        output_names=["output_features"]
-    )
-    encoder_model = mo.convert_model("whisper_encoder.onnx", compress_to_fp16=True)
-    serialize(encoder_model, xml_path="whisper_encoder.xml")
+    encoder_model = ov.convert_model(model.encoder, example_input=mel)
+    ov.save_model(encoder_model, WHISPER_ENCODER_OV)


+.. parsed-literal::
+
+    INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino


 .. parsed-literal::

-    /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-475/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/whisper/model.py:153: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
+    /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/whisper/model.py:166: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
         assert x.shape[1:] == self.positional_embedding.shape, "incorrect audio shape"


-Convert Whisper decoder to OpenVINO IR `⇑ <#top>`__
+Convert Whisper decoder to OpenVINO IR
 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-
 To reduce computational complexity, the decoder uses cached key/value projections in attention modules from the previous steps. We need to
-modify this process for correct tracing to ONNX.
+modify this process for correct tracing.
+
+There are two types of attention modules in the Whisper decoder:
+self-attention, which projects the internal decoder state, and
+cross-attention, which attends to the encoder hidden states. The decoder
+runs autoregressively: each new step takes the token predicted on the
+previous step as input, while it is also conditioned on the encoder
+hidden states computed once before decoding starts. It is therefore
+enough to calculate the cross-attention keys and values on the first
+step and reuse them on subsequent steps, which reduces computational
+complexity. The self-attention keys and values for previously generated
+tokens do not change, so they can be computed only for the current token
+and then concatenated with the previously cached values.
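As a minimal illustration of this caching scheme, independent of the real
Whisper modules, one could cache and reuse keys/values as sketched below.
The toy linear projections and tensor sizes are made up for the example and
are not part of the notebook's actual code.

.. code:: ipython3

    import torch

    def cached_self_attention_kv(key_proj, value_proj, x, kv_cache=None):
        # Project keys/values only for the new token(s), then append them to the cache.
        k, v = key_proj(x), value_proj(x)
        if kv_cache is not None:
            k = torch.cat((kv_cache[0], k), dim=1)
            v = torch.cat((kv_cache[1], v), dim=1)
        return k, v

    def cached_cross_attention_kv(key_proj, value_proj, xa, kv_cache=None):
        # Encoder output does not change during decoding, so compute keys/values once and reuse them.
        if kv_cache is not None and kv_cache[0].shape[1] > 0:
            return kv_cache
        return key_proj(xa), value_proj(xa)

    d = 8
    key_proj, value_proj = torch.nn.Linear(d, d), torch.nn.Linear(d, d)
    xa = torch.randn(1, 4, d)      # stands in for encoder hidden states
    tok1 = torch.randn(1, 1, d)    # first decoded token
    tok2 = torch.randn(1, 1, d)    # next decoded token

    self_kv = cached_self_attention_kv(key_proj, value_proj, tok1)
    self_kv = cached_self_attention_kv(key_proj, value_proj, tok2, self_kv)   # cache grows to length 2

    cross_kv = cached_cross_attention_kv(key_proj, value_proj, xa)
    cross_kv = cached_cross_attention_kv(key_proj, value_proj, xa, cross_kv)  # reused, not recomputed

..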
code:: ipython3 import torch - from typing import Optional, Union, List, Dict + from typing import Optional, Tuple from functools import partial - positional_embeddings_size = model.decoder.positional_embedding.shape[0] - - - def save_to_cache(cache: Dict[str, torch.Tensor], module: str, output: torch.Tensor): - """ - Saving cached attention hidden states for previous tokens. - Parameters: - cache: dictionary with cache. - module: current attention module name. - output: predicted hidden state. - Returns: - output: cached attention hidden state for specified attention module. - """ - if module not in cache or output.shape[1] > positional_embeddings_size: - # save as-is, for the first token or cross attention - cache[module] = output - else: - cache[module] = torch.cat([cache[module], output], dim=1).detach() - return cache[module] - def attention_forward( attention_module, x: torch.Tensor, xa: Optional[torch.Tensor] = None, mask: Optional[torch.Tensor] = None, - kv_cache: Optional[dict] = None, - idx: int = 0 + kv_cache: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, ): """ Override for forward method of decoder attention module with storing cache values explicitly. @@ -273,23 +214,27 @@ modify this process for correct tracing to ONNX. """ q = attention_module.query(x) - if kv_cache is None or xa is None: + if xa is None: # hooks, if installed (i.e. kv_cache is not None), will prepend the cached kv tensors; # otherwise, perform key/value projections for self- or cross-attention as usual. - k = attention_module.key(x if xa is None else xa) - v = attention_module.value(x if xa is None else xa) + k = attention_module.key(x) + v = attention_module.value(x) if kv_cache is not None: - k = save_to_cache(kv_cache, f'k_{idx}', k) - v = save_to_cache(kv_cache, f'v_{idx}', v) + k = torch.cat((kv_cache[0], k), dim=1) + v = torch.cat((kv_cache[1], v), dim=1) + else: - # for cross-attention, calculate keys and values once and reuse in subsequent calls. - k = kv_cache.get(f'k_{idx}', save_to_cache( - kv_cache, f'k_{idx}', attention_module.key(xa))) - v = kv_cache.get(f'v_{idx}', save_to_cache( - kv_cache, f'v_{idx}', attention_module.value(xa))) + if kv_cache is None or kv_cache[0].shape[1] == 0: + # for cross-attention, calculate keys and values once and reuse in subsequent calls. + k = attention_module.key(xa) + v = attention_module.value(xa) + else: + k, v = kv_cache + + kv_cache_new = (k, v) wv, qk = attention_module.qkv_attention(q, k, v, mask) - return attention_module.out(wv), kv_cache + return attention_module.out(wv), kv_cache_new def block_forward( @@ -297,8 +242,7 @@ modify this process for correct tracing to ONNX. x: torch.Tensor, xa: Optional[torch.Tensor] = None, mask: Optional[torch.Tensor] = None, - kv_cache: Optional[dict] = None, - idx: int = 0 + kv_cache: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, ): """ Override for residual block forward method for providing kv_cache to attention module. @@ -308,32 +252,87 @@ modify this process for correct tracing to ONNX. xa: input audio features (Optional). mask: attention mask (Optional). kv_cache: cache for storing attention key values. - idx: index of current residual block for search in kv_cache. 
Returns: x: residual block output kv_cache: updated kv_cache """ - x0, kv_cache = residual_block.attn(residual_block.attn_ln( - x), mask=mask, kv_cache=kv_cache, idx=f'{idx}a') + x0, kv_cache_self = residual_block.attn(residual_block.attn_ln( + x), mask=mask, kv_cache=kv_cache[0]) x = x + x0 if residual_block.cross_attn: - x1, kv_cache = residual_block.cross_attn( - residual_block.cross_attn_ln(x), xa, kv_cache=kv_cache, idx=f'{idx}c') + x1, kv_cache_cross = residual_block.cross_attn( + residual_block.cross_attn_ln(x), xa, kv_cache=kv_cache[1]) x = x + x1 x = x + residual_block.mlp(residual_block.mlp_ln(x)) - return x, kv_cache + return x, (kv_cache_self, kv_cache_cross) + + class CrossAttnKVGetter(torch.nn.Module): + """ + Helper class for scripting approach of caching cross attention key values. + The main idea that they should be calculated once and reused for next steps. + Tracing can not correctly catch condition for that, that is why we need to use scripting for this part of model. + """ + def __init__(self, attn): + super().__init__() + self.attn_key = attn.key + self.attn_value = attn.value + + def forward(self, xa: torch.Tensor, kv_cache: Tuple[torch.Tensor, torch.Tensor]): + if kv_cache is None or kv_cache[0].shape[1] == 0: + # for cross-attention, calculate keys and values once and reuse in subsequent calls. + k = self.attn_key(xa) + v = self.attn_value(xa) + else: + k, v = kv_cache + return k, v + + def crossattention_forward( + attention_module, + x: torch.Tensor, + xa: Optional[torch.Tensor] = None, + mask: Optional[torch.Tensor] = None, + kv_cache: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, + ): + """ + Override for forward method of decoder cross attention module with storing cache values explicitly. + Parameters: + attention_module: current attention module + x: input token ids. + xa: input audio features (Optional). + mask: mask for applying attention (Optional). + kv_cache: dictionary with cached key values for attention modules. + idx: idx for search in kv_cache. + Returns: + attention module output tensor + updated kv_cache + """ + q = attention_module.query(x) + + if xa is None: + # hooks, if installed (i.e. kv_cache is not None), will prepend the cached kv tensors; + # otherwise, perform key/value projections for self- or cross-attention as usual. + k = attention_module.key(x) + v = attention_module.value(x) + else: + k, v = attention_module.kv_getter(xa, kv_cache) + kv_cache_new = (k, v) + + wv, qk = attention_module.qkv_attention(q, k, v, mask) + return attention_module.out(wv), kv_cache_new # update forward functions - for idx, block in enumerate(model.decoder.blocks): - block.forward = partial(block_forward, block, idx=idx) + for _, block in enumerate(model.decoder.blocks): + block.forward = partial(block_forward, block) block.attn.forward = partial(attention_forward, block.attn) if block.cross_attn: - block.cross_attn.forward = partial(attention_forward, block.cross_attn) + kv_getter = CrossAttnKVGetter(block.cross_attn) + block.cross_attn.kv_getter = torch.jit.script(kv_getter) + block.cross_attn.forward = partial(crossattention_forward, block.cross_attn) - def decoder_forward(decoder, x: torch.Tensor, xa: torch.Tensor, kv_cache: Optional[dict] = None): + def decoder_forward(decoder, x: torch.Tensor, xa: torch.Tensor, kv_cache: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor]]] = None): """ Override for decoder forward method. Parameters: @@ -342,19 +341,25 @@ modify this process for correct tracing to ONNX. 
the encoded audio features to be attended on kv_cache: Dict[str, torch.Tensor], attention modules hidden states cache from previous steps """ - offset = next(iter(kv_cache.values())).shape[1] if kv_cache else 0 + if kv_cache is not None: + offset = kv_cache[0][0][0].shape[1] + else: + offset = 0 + kv_cache = [(None, None) for _ in range(len(decoder.blocks))] x = decoder.token_embedding( x) + decoder.positional_embedding[offset: offset + x.shape[-1]] x = x.to(xa.dtype) + kv_cache_upd = [] - for block in decoder.blocks: - x, kv_cache = block(x, xa, mask=decoder.mask, kv_cache=kv_cache) + for block, kv_block_cache in zip(decoder.blocks, kv_cache): + x, kv_block_cache_upd = block(x, xa, mask=decoder.mask, kv_cache=kv_block_cache) + kv_cache_upd.append(tuple(kv_block_cache_upd)) x = decoder.ln(x) logits = ( x @ torch.transpose(decoder.token_embedding.weight.to(x.dtype), 1, 0)).float() - return logits, kv_cache + return logits, tuple(kv_cache_upd) # override decoder forward @@ -362,37 +367,30 @@ modify this process for correct tracing to ONNX. .. code:: ipython3 - tokens = torch.ones((5, 3), dtype=torch.int64) - - logits, kv_cache = model.decoder(tokens, audio_features, kv_cache={}) - kv_cache = {k: v for k, v in kv_cache.items()} - tokens = torch.ones((5, 1), dtype=torch.int64) + encoder_hidden_size = audio_features.shape[2] + kv_cache_init = [((torch.zeros((5, 0, encoder_hidden_size)), torch.zeros((5, 0, encoder_hidden_size))), (torch.zeros((1, 0, encoder_hidden_size)), torch.zeros((1, 0, encoder_hidden_size)))) for _ in range(len(model.decoder.blocks))] .. code:: ipython3 - outputs = [f"out_{k}" for k in kv_cache.keys()] - inputs = [f"in_{k}" for k in kv_cache.keys()] - dynamic_axes = { - "tokens": {0: "beam_size", 1: "seq_len"}, - "audio_features": {0: "beam_size"}, - "logits": {0: "beam_size", 1: "seq_len"}} - dynamic_outs = {o: {0: "beam_size", 1: "prev_seq_len"} for o in outputs} - dynamic_inp = {i: {0: "beam_size", 1: "prev_seq_len"} for i in inputs} - dynamic_axes.update(dynamic_outs) - dynamic_axes.update(dynamic_inp) - torch.onnx.export( - model.decoder, {'x': tokens, 'xa': audio_features, 'kv_cache': kv_cache}, - 'whisper_decoder.onnx', - input_names=["tokens", "audio_features"] + inputs, - output_names=["logits"] + outputs, - dynamic_axes=dynamic_axes - ) + tokens = torch.ones((5, 3), dtype=torch.int64) + logits, kv_cache = model.decoder(tokens, audio_features, kv_cache=kv_cache_init) + + tokens = torch.ones((5, 1), dtype=torch.int64) + decoder_model = ov.convert_model(model.decoder, example_input=(tokens, audio_features, kv_cache)) + decoder_cache_input = decoder_model.inputs[2:] + for i in range(2, len(decoder_cache_input), 4): + decoder_cache_input[i].get_node().set_partial_shape(ov.PartialShape([-1, -1, encoder_hidden_size])) + decoder_cache_input[i + 1].get_node().set_partial_shape(ov.PartialShape([-1, -1, encoder_hidden_size])) + + decoder_model.validate_nodes_and_infer_types() + ov.save_model(decoder_model, WHISPER_DECODER_OV) + del decoder_model .. parsed-literal:: - /tmp/ipykernel_2070841/1737529362.py:18: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
- if module not in cache or output.shape[1] > positional_embeddings_size: + /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-499/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/torch/jit/_trace.py:154: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:486.) + if a.grad is not None: The decoder model autoregressively predicts the next token guided by @@ -402,22 +400,9 @@ tokens and attention hidden states from previous step) are dynamic. For efficient utilization of memory, you define an upper bound for dynamic input shapes. -.. code:: ipython3 - - input_shapes = "tokens[1..5 -1],audio_features[1..5 1500 512]" - for k, v in kv_cache.items(): - if k.endswith('a'): - input_shapes += f",in_{k}[1..5 -1 512]" - decoder_model = mo.convert_model( - input_model="whisper_decoder.onnx", - compress_to_fp16=True, - input=input_shapes) - serialize(decoder_model, "whisper_decoder.xml") - -Prepare inference pipeline `⇑ <#top>`__ +Prepare inference pipeline ############################################################################################################################### - The image below illustrates the pipeline of video transcribing using the Whisper model. @@ -431,231 +416,14 @@ To run the PyTorch Whisper model, we just need to call the original model pipeline for audio transcribing after replacing the original models with OpenVINO IR versions. -.. code:: ipython3 - - class OpenVINOAudioEncoder(torch.nn.Module): - """ - Helper for inference Whisper encoder model with OpenVINO - """ - - def __init__(self, core, model_path, device='CPU'): - super().__init__() - self.model = core.read_model(model_path) - self.compiled_model = core.compile_model(self.model, device) - self.output_blob = self.compiled_model.output(0) - - def forward(self, mel: torch.Tensor): - """ - Inference OpenVINO whisper encoder model. - - Parameters: - mel: input audio fragment mel spectrogram. - Returns: - audio_features: torch tensor with encoded audio features. - """ - return torch.from_numpy(self.compiled_model(mel)[self.output_blob]) - -.. code:: ipython3 - - from openvino.runtime import Core, Tensor - - - class OpenVINOTextDecoder(torch.nn.Module): - """ - Helper for inference OpenVINO decoder model - """ - - def __init__(self, core: Core, model_path: Path, device: str = 'CPU'): - super().__init__() - self._core = core - self.model = core.read_model(model_path) - self._input_names = [inp.any_name for inp in self.model.inputs] - self.compiled_model = core.compile_model(self.model, device) - self.device = device - - def init_past_inputs(self, feed_dict): - """ - Initialize cache input for first step. 
- - Parameters: - feed_dict: Dictonary with inputs for inference - Returns: - feed_dict: updated feed_dict - """ - beam_size = feed_dict['tokens'].shape[0] - audio_len = feed_dict['audio_features'].shape[2] - previous_seq_len = 0 - for name in self._input_names: - if name in ['tokens', 'audio_features']: - continue - feed_dict[name] = Tensor(np.zeros( - (beam_size, previous_seq_len, audio_len), dtype=np.float32)) - return feed_dict - - def preprocess_kv_cache_inputs(self, feed_dict, kv_cache): - """ - Transform kv_cache to inputs - - Parameters: - feed_dict: dictionary with inputs for inference - kv_cache: dictionary with cached attention hidden states from previous step - Returns: - feed_dict: updated feed dictionary with additional inputs - """ - if not kv_cache: - return self.init_past_inputs(feed_dict) - for k, v in kv_cache.items(): - new_k = f'in_{k}' - if new_k in self._input_names: - feed_dict[new_k] = Tensor(v.numpy()) - return feed_dict - - def postprocess_outputs(self, outputs): - """ - Transform model output to format expected by the pipeline - - Parameters: - outputs: outputs: raw inference results. - Returns: - logits: decoder predicted token logits - kv_cache: cached attention hidden states - """ - logits = None - kv_cache = {} - for output_t, out in outputs.items(): - if 'logits' in output_t.get_names(): - logits = torch.from_numpy(out) - else: - tensor_name = output_t.any_name - kv_cache[tensor_name.replace( - 'out_', '')] = torch.from_numpy(out) - return logits, kv_cache - - def forward(self, x: torch.Tensor, xa: torch.Tensor, kv_cache: Optional[dict] = None): - """ - Inference decoder model. - - Parameters: - x: torch.LongTensor, shape = (batch_size, <= n_ctx) the text tokens - xa: torch.Tensor, shape = (batch_size, n_mels, n_audio_ctx) - the encoded audio features to be attended on - kv_cache: Dict[str, torch.Tensor], attention modules hidden states cache from previous steps - Returns: - logits: decoder predicted logits - kv_cache: updated kv_cache with current step hidden states - """ - feed_dict = {'tokens': Tensor(x.numpy()), 'audio_features': Tensor(xa.numpy())} - feed_dict = (self.preprocess_kv_cache_inputs(feed_dict, kv_cache)) - res = self.compiled_model(feed_dict) - return self.postprocess_outputs(res) - -.. 
code:: ipython3 - - from whisper.decoding import DecodingTask, Inference, DecodingOptions, DecodingResult - - - class OpenVINOInference(Inference): - """ - Wrapper for inference interface - """ - - def __init__(self, model: "Whisper", initial_token_length: int): - self.model: "Whisper" = model - self.initial_token_length = initial_token_length - self.kv_cache = {} - - def logits(self, tokens: torch.Tensor, audio_features: torch.Tensor) -> torch.Tensor: - """ - getting logits for given tokens sequence and audio features and save kv_cache - - Parameters: - tokens: input tokens - audio_features: input audio features - Returns: - logits: predicted by decoder logits - """ - if tokens.shape[-1] > self.initial_token_length: - # only need to use the last token except in the first forward pass - tokens = tokens[:, -1:] - logits, self.kv_cache = self.model.decoder( - tokens, audio_features, kv_cache=self.kv_cache) - return logits - - def cleanup_caching(self): - """ - Reset kv_cache to initial state - """ - self.kv_cache = {} - - def rearrange_kv_cache(self, source_indices): - """ - Update hidden states cache for selected sequences - Parameters: - source_indicies: sequences indicies - Returns: - None - """ - for module, tensor in self.kv_cache.items(): - # update the key/value cache to contain the selected sequences - self.kv_cache[module] = tensor[source_indices] - - - class OpenVINODecodingTask(DecodingTask): - """ - Class for decoding using OpenVINO - """ - - def __init__(self, model: "Whisper", options: DecodingOptions): - super().__init__(model, options) - self.inference = OpenVINOInference(model, len(self.initial_tokens)) - - - @torch.no_grad() - def decode(model: "Whisper", mel: torch.Tensor, options: DecodingOptions = DecodingOptions()) -> Union[DecodingResult, List[DecodingResult]]: - """ - Performs decoding of 30-second audio segment(s), provided as Mel spectrogram(s). - - Parameters - ---------- - model: Whisper - the Whisper model instance - - mel: torch.Tensor, shape = (80, 3000) or (*, 80, 3000) - A tensor containing the Mel spectrogram(s) - - options: DecodingOptions - A dataclass that contains all necessary options for decoding 30-second segments - - Returns - ------- - result: Union[DecodingResult, List[DecodingResult]] - The result(s) of decoding contained in `DecodingResult` dataclass instance(s) - """ - single = mel.ndim == 2 - if single: - mel = mel.unsqueeze(0) - - result = OpenVINODecodingTask(model, options).run(mel) - - if single: - result = result[0] - - return result - -.. code:: ipython3 +Select inference device ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - del model.decoder - del model.encoder +Select device from dropdown list for running inference using OpenVINO: .. code:: ipython3 - core = Core() - -Select inference device `⇑ <#top>`__ -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - -Select device from dropdown list for running inference using OpenVINO: + core = ov.Core() .. code:: ipython3 @@ -681,108 +449,16 @@ Select device from dropdown list for running inference using OpenVINO: .. 
code:: ipython3 - from collections import namedtuple - - Parameter = namedtuple('Parameter', ['device']) + from utils import patch_whisper_for_ov_inference, OpenVINOAudioEncoder, OpenVINOTextDecoder - model.encoder = OpenVINOAudioEncoder(core, 'whisper_encoder.xml', device=device.value) - model.decoder = OpenVINOTextDecoder(core, 'whisper_decoder.xml', device=device.value) - model.decode = partial(decode, model) + patch_whisper_for_ov_inference(model) - - def parameters(): - return iter([Parameter(torch.device('cpu'))]) - - - model.parameters = parameters - - - def logits(model, tokens: torch.Tensor, audio_features: torch.Tensor): - """ - Override for logits extraction method - Parameters: - toekns: input tokens - audio_features: input audio features - Returns: - logits: decoder predicted logits - """ - return model.decoder(tokens, audio_features, None)[0] - - - model.logits = partial(logits, model) - -Define audio preprocessing `⇑ <#top>`__ -------------------------------------------------------------------------------------------------------------------------------- + model.encoder = OpenVINOAudioEncoder(core, WHISPER_ENCODER_OV, device=device.value) + model.decoder = OpenVINOTextDecoder(core, WHISPER_DECODER_OV, device=device.value) - -The model expects mono-channel audio with a 16000 Hz sample rate, -represented in floating point range. When the audio from the input video -does not meet these requirements, we will need to apply preprocessing. - -.. code:: ipython3 - - import io - from pathlib import Path - import numpy as np - from scipy.io import wavfile - from pytube import YouTube - from moviepy.editor import VideoFileClip - - - def resample(audio, src_sample_rate, dst_sample_rate): - """ - Resample audio to specific sample rate - - Parameters: - audio: input audio signal - src_sample_rate: source audio sample rate - dst_sample_rate: destination audio sample rate - Returns: - resampled_audio: input audio signal resampled with dst_sample_rate - """ - if src_sample_rate == dst_sample_rate: - return audio - duration = audio.shape[0] / src_sample_rate - resampled_data = np.zeros(shape=(int(duration * dst_sample_rate)), dtype=np.float32) - x_old = np.linspace(0, duration, audio.shape[0], dtype=np.float32) - x_new = np.linspace(0, duration, resampled_data.shape[0], dtype=np.float32) - resampled_audio = np.interp(x_new, x_old, audio) - return resampled_audio.astype(np.float32) - - - def audio_to_float(audio): - """ - convert audio signal to floating point format - """ - return audio.astype(np.float32) / np.iinfo(audio.dtype).max - - - def get_audio(video_file): - """ - Extract audio signal from a given video file, then convert it to float, - then mono-channel format and resample it to the expected sample rate - - Parameters: - video_file: path to input video file - Returns: - resampled_audio: mono-channel float audio signal with 16000 Hz sample rate - extracted from video - """ - input_video = VideoFileClip(str(video_file)) - input_video.audio.write_audiofile(video_file.stem + '.wav', verbose=False, logger=None) - input_audio_file = video_file.stem + '.wav' - sample_rate, audio = wavfile.read( - io.BytesIO(open(input_audio_file, 'rb').read())) - audio = audio_to_float(audio) - if audio.ndim == 2: - audio = audio.mean(axis=1) - resampled_audio = resample(audio, sample_rate, 16000) - return resampled_audio - -Run video transcription pipeline `⇑ <#top>`__ +Run video transcription pipeline 
############################################################################################################################### - Now, we are ready to start transcription. We select a video from YouTube that we want to transcribe. Be patient, as downloading the video may take some time. @@ -811,6 +487,8 @@ take some time. .. code:: ipython3 + from pytube import YouTube + print(f"Downloading video {link.value} started") output_file = Path("downloaded_video.mp4") @@ -827,6 +505,8 @@ take some time. .. code:: ipython3 + from utils import get_audio + audio = get_audio(output_file) Select the task for the model: @@ -857,42 +537,7 @@ Select the task for the model: .. code:: ipython3 - transcription = model.transcribe(audio, beam_size=5, best_of=5, task=task.value) - -.. code:: ipython3 - - def format_timestamp(seconds: float): - """ - format time in srt-file excpected format - """ - assert seconds >= 0, "non-negative timestamp expected" - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - return (f"{hours}:" if hours > 0 else "00:") + f"{minutes:02d}:{seconds:02d},{milliseconds:03d}" - - - def prepare_srt(transcription): - """ - Format transcription into srt file format - """ - segment_lines = [] - for segment in transcription["segments"]: - segment_lines.append(str(segment["id"] + 1) + "\n") - time_start = format_timestamp(segment["start"]) - time_end = format_timestamp(segment["end"]) - time_str = f"{time_start} --> {time_end}\n" - segment_lines.append(time_str) - segment_lines.append(segment["text"] + "\n\n") - return segment_lines + transcription = model.transcribe(audio, task=task.value) "The results will be saved in the ``downloaded_video.srt`` file. SRT is one of the most popular formats for storing subtitles and is compatible @@ -902,6 +547,8 @@ into video files using ``ffmpeg``. .. code:: ipython3 + from utils import prepare_srt + srt_lines = prepare_srt(transcription) # save transcription with output_file.with_suffix(".srt").open("w") as f: @@ -954,8 +601,69 @@ Now let us see the results. Don't tell anyone what you've seen in here. 7 - 00:00:22,000 --> 00:00:30,000 + 00:00:22,000 --> 00:00:23,000 + Oh, my. + + 8 + 00:00:23,000 --> 00:00:24,000 Have you seen what's in there? + 9 + 00:00:24,000 --> 00:00:25,000 + They have intel. + + 10 + 00:00:25,000 --> 00:00:27,000 + This is where it all changes. + + + + +.. Interactive demo +.. ############################################################################################################################### + +.. .. code:: ipython3 + +.. import gradio as gr + + +.. def transcribe(url, task): +.. output_file = Path("downloaded_video.mp4") +.. yt = YouTube(url) +.. yt.streams.get_highest_resolution().download(filename=output_file) +.. audio = get_audio(output_file) +.. transcription = model.transcribe(audio, task=task.lower()) +.. srt_lines = prepare_srt(transcription) +.. with output_file.with_suffix(".srt").open("w") as f: +.. f.writelines(srt_lines) +.. return [str(output_file), str(output_file.with_suffix(".srt"))] + + +.. demo = gr.Interface( +.. transcribe, +.. [gr.Textbox(label="YouTube URL"), gr.Radio(["Transcribe", "Translate"], value="Transcribe")], +.. "video", +.. examples=[["https://youtu.be/kgL5LBM-hFI", "Transcribe"]], +.. allow_flagging="never" +.. ) +.. try: +.. demo.launch(debug=False) +.. except Exception: +.. 
demo.launch(share=True, debug=False) +.. # if you are launching remotely, specify server_name and server_port +.. # demo.launch(server_name='your server name', server_port='server port in int') +.. # Read more in the docs: https://gradio.app/docs/ + + +.. .. parsed-literal:: + +.. Running on local URL: http://127.0.0.1:7860 +.. To create a public link, set `share=True` in `launch()`. + + + +.. .. raw:: html + +..
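The updated notebook imports its helper code (for example ``get_audio``,
``prepare_srt`` and ``patch_whisper_for_ov_inference``) from a separate
``utils`` module instead of defining it inline. The contents of that module
are not shown in this patch; a minimal sketch of the SRT helpers,
reconstructed from the cells removed above, might look like the following.
The actual ``utils.py`` shipped with the notebook may differ.

.. code:: ipython3

    def format_timestamp(seconds: float) -> str:
        """Format a time offset in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
        assert seconds >= 0, "non-negative timestamp expected"
        milliseconds = round(seconds * 1000.0)
        hours, milliseconds = divmod(milliseconds, 3_600_000)
        minutes, milliseconds = divmod(milliseconds, 60_000)
        seconds, milliseconds = divmod(milliseconds, 1_000)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}"


    def prepare_srt(transcription: dict) -> list:
        """Convert a Whisper transcription result into a list of SRT-formatted lines."""
        segment_lines = []
        for segment in transcription["segments"]:
            segment_lines.append(f"{segment['id'] + 1}\n")
            time_start = format_timestamp(segment["start"])
            time_end = format_timestamp(segment["end"])
            segment_lines.append(f"{time_start} --> {time_end}\n")
            segment_lines.append(segment["text"] + "\n\n")
        return segment_lines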
diff --git a/docs/notebooks/notebooks_tags.json b/docs/notebooks/notebooks_tags.json index e6899ffc0c32a3..090e04d579a9a7 100644 --- a/docs/notebooks/notebooks_tags.json +++ b/docs/notebooks/notebooks_tags.json @@ -28,7 +28,6 @@ "105-language-quantize-bert": [ "Benchmark Model", "NNCF", - "Optimize Model", "Pytorch", "Transformers" ], @@ -118,6 +117,16 @@ "Torchvision", "Transformers" ], + "122-speech-recognition-quantization-wav2vec2": [ + "NNCF", + "Pytorch", + "Transformers" + ], + "122-yolov8-quantization-with-accuracy-control": [ + "Benchmark Model", + "NNCF", + "Pytorch" + ], "203-meter-reader": [ "Dynamic Shape", "Reshape Model" @@ -200,6 +209,7 @@ "Download Model" ], "223-text-prediction": [ + "Dynamic Shape", "ONNX", "Transformers" ], @@ -208,8 +218,6 @@ "ONNX" ], "225-stable-diffusion-text-to-image": [ - "ONNX", - "Optimize Model", "Pytorch", "Transformers" ], @@ -219,8 +227,12 @@ "ONNX", "Pytorch" ], - "227-whisper-subtitles-generation": [ - "ONNX", + "227-whisper-convert": [ + "Pytorch" + ], + "227-whisper-nncf-quantize": [ + "Benchmark Model", + "NNCF", "Pytorch", "Transformers" ], @@ -236,11 +248,27 @@ "Transformers" ], "229-distilbert-sequence-classification": [ + "Pytorch", "Transformers" ], - "230-yolov8-optimization": [ + "230-yolov8-instance-segmentation": [ + "Benchmark Model", + "NNCF", + "ONNX", + "Pytorch", + "Reshape Model" + ], + "230-yolov8-keypoint-detection": [ "Benchmark Model", "NNCF", + "ONNX", + "Pytorch", + "Reshape Model" + ], + "230-yolov8-object-detection": [ + "Benchmark Model", + "NNCF", + "ONNX", "Pytorch", "Reshape Model" ], @@ -256,7 +284,7 @@ "Transformers" ], "233-blip-visual-language-processing": [ - "ONNX", + "Dynamic Shape", "Pytorch", "Transformers" ], @@ -265,15 +293,11 @@ "Pytorch" ], "235-controlnet-stable-diffusion": [ - "ONNX", - "Optimize Model", "Pytorch", "Reshape Model", "Transformers" ], "236-stable-diffusion-v2-infinite-zoom": [ - "ONNX", - "Optimize Model", "Pytorch", "Transformers" ], @@ -281,13 +305,9 @@ "ONNX" ], "236-stable-diffusion-v2-text-to-image-demo": [ - "ONNX", - "Optimize Model", "Transformers" ], "236-stable-diffusion-v2-text-to-image": [ - "ONNX", - "Optimize Model", "Pytorch", "Transformers" ], @@ -299,7 +319,6 @@ "Torchvision" ], "238-deep-floyd-if": [ - "Optimize Model", "Pytorch", "Reshape Model" ], @@ -346,6 +365,31 @@ "Pytorch", "Transformers" ], + "250-music-generation": [ + "ONNX", + "Pytorch", + "Transformers" + ], + "251-tiny-sd-image-generation": [ + "Pytorch", + "Transformers" + ], + "252-fastcomposer-image-generation": [ + "Pytorch", + "Torchvision", + "Transformers" + ], + "253-zeroscope-text2video": [ + "Pytorch", + "Transformers" + ], + "254-llm-chatbot": [ + "Async Inference", + "NNCF", + "Pytorch", + "Reshape Model", + "Transformers" + ], "301-tensorflow-training-openvino-nncf": [ "Benchmark Model", "NNCF", diff --git a/docs/notebooks/notebooks_with_colab_buttons.txt b/docs/notebooks/notebooks_with_colab_buttons.txt index 151267b3675aa7..082087e6ac4a32 100644 --- a/docs/notebooks/notebooks_with_colab_buttons.txt +++ b/docs/notebooks/notebooks_with_colab_buttons.txt @@ -17,6 +17,9 @@ 221-machine-translation 223-text-prediction 227-whisper-subtitles-generation +230-yolov8-instance-segmentation +230-yolov8-keypoint-detection +230-yolov8-object-detection 230-yolov8-optimization 232-clip-language-saliency-map 243-tflite-selfie-segmentation diff --git a/docs/tutorials.md b/docs/tutorials.md index bb38b9b40a3e07..b8aa1eee280435 100644 --- a/docs/tutorials.md +++ b/docs/tutorials.md @@ -105,6 +105,10 @@ 
Tutorials that explain how to optimize and quantize models with OpenVINO tools. +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ | `107-speech-recognition-quantization `__ |br| |c107| | Optimize and quantize a pre-trained Data2Vec speech model. | +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ + | `107-speech-recognition-quantization `__ | Optimize and quantize a pre-trained Wav2Vec2 speech model. | + +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ + | `108-gpu-device `__ | Working with GPUs in OpenVINO™ | + +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ | `109-latency-tricks `__ | Performance tricks for latency mode in OpenVINO™. | +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ | `109-throughput-tricks `__ | Performance tricks for throughput mode in OpenVINO™. 
| From fa14ae0a56c9cbfb5c06b4fc0f590b1f3a9c8e8f Mon Sep 17 00:00:00 2001 From: Tatiana Savina Date: Thu, 14 Sep 2023 17:10:01 +0200 Subject: [PATCH 10/14] [DOCS] Optimization images change (#19849) * change images * change workflow * case and description change --- docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg | 4 ++-- docs/_static/images/WHAT_TO_USE.svg | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg b/docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg index 99023d14b6c4b1..c183f387509c1c 100644 --- a/docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg +++ b/docs/_static/images/DEVELOPMENT_FLOW_V3_crunch.svg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d02f98d7e50d663e0f525366b59cac16175ad437ee54147950a62b9bccb85030 -size 413331 +oid sha256:3ea9a60d2b9a1f3056f46b995f9cf5fee83d04e57ada60c60ed7661ab09d08c7 +size 367000 diff --git a/docs/_static/images/WHAT_TO_USE.svg b/docs/_static/images/WHAT_TO_USE.svg index 5a87c4558221db..5cba27ea4ee0eb 100644 --- a/docs/_static/images/WHAT_TO_USE.svg +++ b/docs/_static/images/WHAT_TO_USE.svg @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:b71a90fd9ec78356eef5ef0c9d80831c1439fbfc05d42fc0ad648f4b5aa151aa -size 286982 +oid sha256:2a30e182191979bf0693d9060eee2596313e162042f68bce31859670e21ad0c4 +size 206149 From 9fa65836f0dda321883fe448e240e05f54051dc8 Mon Sep 17 00:00:00 2001 From: Maciej Smyk Date: Thu, 14 Sep 2023 17:52:34 +0200 Subject: [PATCH 11/14] [DOCS] Notebooks Tutorials Page Update for 23.1 (#19854) --- docs/tutorials.md | 57 ++++++++++++++--------------------------------- 1 file changed, 17 insertions(+), 40 deletions(-) diff --git a/docs/tutorials.md b/docs/tutorials.md index b8aa1eee280435..80d939bf162013 100644 --- a/docs/tutorials.md +++ b/docs/tutorials.md @@ -25,7 +25,7 @@ libraries. Notebooks with |binder logo| and |colab logo| buttons can be run without installing anything. Once you have found the tutorial of your interest, just click the button next to -the name of it and the Jupyter notebook will start it in a new tab of a browser. +its name and the Jupyter notebook will start it in a new tab of a browser. .. note:: @@ -36,28 +36,20 @@ the name of it and the Jupyter notebook will start it in a new tab of a browser. on how to run and manage the notebooks on your machine. -**Contents:** +More examples along with additonal details regarding OpenVINO Notebooks are available in +OpenVINO™ Notebooks `Github Repository. `__ -- `Getting Started <#getting-started>`__ +The Jupyter notebooks are categorized into following classes: - - `First steps with OpenVINO <#first-steps-with-openvino>`__ - - `Convert & Optimize <#convert-optimize>`__ - - `Model Demos <#model-demos>`__ - - `Model Training <#model-training>`__ - - `Live Demos <#live-demos>`__ - - `Recommended Tutorials <#recommended-tutorials>`__ - - `Additional Resources <#additional-resources>`__ - - `Contributors <#contributors>`__ - - -Getting Started -================== - -The Jupyter notebooks are categorized into four classes, select one -related to your needs or give them all a try. Good Luck! 
+- `First steps with OpenVINO <#first-steps-with-openvino>`__ +- `Convert & Optimize <#convert-optimize>`__ +- `Model Demos <#model-demos>`__ +- `Model Training <#model-training>`__ +- `Live Demos <#live-demos>`__ +- `Recommended Tutorials <#recommended-tutorials>`__ First steps with OpenVINO -------------------------- +########################## Brief tutorials that demonstrate how to use Python API for inference in OpenVINO. @@ -74,7 +66,7 @@ Brief tutorials that demonstrate how to use Python API for inference in OpenVINO +-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+ Convert & Optimize --------------------- +################### Tutorials that explain how to optimize and quantize models with OpenVINO tools. @@ -141,13 +133,8 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools. +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+ - - - - - Model Demos --------------------- +################### Demos that demonstrate inference on a particular model. @@ -281,7 +268,7 @@ Demos that demonstrate inference on a particular model. Model Training --------------------- +################## Tutorials that include code to train neural networks. @@ -297,7 +284,7 @@ Tutorials that include code to train neural networks. +--------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+ Live Demos --------------------- +################ Live inference demos that run on a webcam or video files. @@ -322,7 +309,7 @@ Live inference demos that run on a webcam or video files. Recommended Tutorials ---------------------- +###################### The following tutorials are guaranteed to provide a great experience with inference in OpenVINO: @@ -365,21 +352,13 @@ The following tutorials are guaranteed to provide a great experience with infere Additional Resources --------------------- +###################### * `OpenVINO™ Notebooks - Github Repository `_ * `Binder documentation `_ * `Google Colab `__ -Contributors --------------------- - -|contributors| - -Made with `contributors-img `__. - - .. |br| raw:: html
@@ -388,8 +367,6 @@ Made with `contributors-img `__. 的人不一了是他有为在责新中任自之我们 -.. |contributors| image:: https://contrib.rocks/image?repo=openvinotoolkit/openvino_notebooks - :target: https://github.com/openvinotoolkit/openvino_notebooks/graphs/contributors .. |n001-img1| image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg :target: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg .. |n002-img1| image:: https://user-images.githubusercontent.com/15709723/127787560-d8ec4d92-b4a0-411f-84aa-007e90faba98.png From 2933ad5a136c1142999188bb10e40d5ac54c98d7 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Fri, 15 Sep 2023 08:08:19 +0200 Subject: [PATCH 12/14] [DOCS] troubleshooting article port port: https://github.com/openvinotoolkit/openvino/pull/19855 --- docs/install_guides/troubleshooting.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/docs/install_guides/troubleshooting.md b/docs/install_guides/troubleshooting.md index e1a6d03be41216..d5c969bee40108 100644 --- a/docs/install_guides/troubleshooting.md +++ b/docs/install_guides/troubleshooting.md @@ -10,7 +10,12 @@ .. _troubleshooting guide for install: -This guide provides general troubleshooting steps and solutions to possible issues that can be encountered while installing and configuring OpenVINO™. +| This guide provides general troubleshooting steps and solutions to possible issues that + may be encountered while installing and configuring OpenVINO™. For a comprehensive + database of support topics on OpenVINO, go to: +| `Support for OpenVINO™ toolkit `__ + + .. _install_for_prc: From ce69d9709a2f87cba1d00e381b2034e3a74f1686 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Fri, 15 Sep 2023 11:50:13 +0200 Subject: [PATCH 13/14] [DOCS] benchmark update 23.1 (#19868) --- .../OV-2023.0-Performance-Data.xlsx | Bin 326534 -> 167725 bytes .../OV-2023.0-system-info-detailed.xlsx | Bin 72517 -> 0 bytes ...m_list.pdf => OV-2023.1-Platform_list.pdf} | Bin 215404 -> 205520 bytes .../OV-2023.1-system-info-detailed.xlsx | Bin 0 -> 67988 bytes .../benchmarks_files/OV-benchmark-data.csv | 44 ++-- docs/benchmarks/performance_benchmarks.md | 6 +- docs/benchmarks/performance_benchmarks_faq.md | 43 ++-- docs/benchmarks/performance_int8_vs_fp32.md | 235 +++++++++--------- 8 files changed, 162 insertions(+), 166 deletions(-) delete mode 100644 docs/_static/benchmarks_files/OV-2023.0-system-info-detailed.xlsx rename docs/_static/benchmarks_files/{OV-2023.0-Platform_list.pdf => OV-2023.1-Platform_list.pdf} (70%) create mode 100644 docs/_static/benchmarks_files/OV-2023.1-system-info-detailed.xlsx diff --git a/docs/_static/benchmarks_files/OV-2023.0-Performance-Data.xlsx b/docs/_static/benchmarks_files/OV-2023.0-Performance-Data.xlsx index 18ced16236b750f599bc8c2682c27f0d98464d09..f4000d00ce058e92e2d896248bba3adb75bc7678 100644 GIT binary patch delta 120367 zcmc$kQ(#?P-|b^HMq^uz?KE~{JB`uUtFdj{wv)z5W7~~wCntU05AXTz&v!2Ny4W{! 
zal%dKKP<3vBj2b0V`M(fH?6X ziIQD|TF!DK^0*?QphJO}TnzOq0F?zwQib>;U=0ZdHmYIdV57MzSsup9fZDV_VCQ^K z?b--=g^0)kJAeE;Q|=7Sf_NGZDkQw@Ap?~23ZE1xtVhWNLWO2}WcNqmGZkqPeI27T zQRBOp$lvXp^8x4s05c7a!W5QvpTF;jmGgE{=qj7cIjhY_n6o#)aUGhbd{QX){UP|x zArlN;BIEz_@B;$iNc~%6l-DoG|61Nz9?6dW?enC}3Z~Gf+KrIy>GzH26jzRPO6*%* z?%&_L)^Sn-`NkFE%42l9?9$fqfy8sm9t%fyJ!-m&SvS=IC>I^z51f}8q8GxjDAwFw zW>cZ!njtSjanNinF-FGG!{XD5Qlf^BG)9HFO(t?c)?vx#@jaFs=!Y%y5tI-`ObZZ} zpI1gz5N8C0xe|bFg=3@UJ7u@qqejn(6sZ04#S_hc2r#vB4uDi}yLn3&o5atYksANN zn>Kms$2hzJ?ja;5YK><*2Tyb6txV(Ku=s}(GNhV`e_uu-sUTSyvEj{i4p~w2_lr^k z*|8x(A-E1il+4Y;_H~fo^*e*Vgx-LGU-<9!Kzfz}apHG^DV@fVChe_NDmic2?uqDBEtn4Yxxa<$iCnZ;4Znh zE8__Q0G@_|SUE&^=s*S4DaH4WZ$cKSCgQh^Q0dm;G8y6b3^FEz5CwmS_uIFvKjrwo z+rg&%L9YL8MtGWh^gxf^HpMA#5I6va$%w@a;IxMT{O|Q2aJQ@3p>bALzse_rO}ecGpDITJ!{dm$V&MR}dES;tJ*J8LLa%lCwpz31qT8= zN=`zrcCARig_x?$=9P7v=2^fPULdjpNrDwo!va{_f{~@RDpO&JShA1k%+_V2$u$)K zi>*|@KaHsaPO_R+jfxfQ*5Nn~pg>QQVnY#cZ^-v`0^_xKz+1+5H_tc7c&+%goyf?)jwq#4mMva4-UrdIGAt zAwj0f_#p4(Vz6S3(n@!ZVG$22@ZgSs?iE}eFWITQR$ti2z*1 zzWm>a=dby{J2|w{ucc9euWwjD@J@r+*R|&CekKQVT@H1GBVd607MU%cYiB~?y)S`* z>nu7L#UaPpz)UZr-zAVRNG9q6Ccd9Qm`(Qi#2g)wFXc1BhmF6;TX?v37lQ8qK+YYt zzcKs+X9*td{%s}wWFN;58xjza#oLntV8NbAZ8eRznY^A2HGdrN=W**c=Fd(NPF6y; z8Xv-G@3Qz6mtl`NV!m3^uOP&~aPzU;0RdaU9+0c<{nwLy*LIF&b+Gsw!9&;{Nxizr z_D^;)NaThj?l0TjRwYU2%fO=?89D1r7T>aIUE)0T7%<~p!1&e1Y7Y(EC*YCiOhKeY+6RL92{a_ z-jTPVu!DWj_9S6!J+VG>{roMyzF=ny$qla)E&0Novdw0oluaQuz)=VXq-;Zy&@pGfJ z#aAUK^(QM}*Z7Smz-^Gb1OXqz+ZNqrS9YU)Q(Oai$WQWhZZeDjo^x=AwlZX+Fu(p? znq z3xwwL2o5DJ6KqcLF1G!Vo?WhMl99Ry_+5Hra(H`emf%X9oJIKeD$=d`Ka^F7V(w2h z6G3t_b)!^;EA;F_+Bfk& zegGgr{rEQQV*lae5GE_*NA}(*+UN7W3`#QA4lh`FZ=J3J5QFjZQoFn&NOqL8Qg0fx zZTX=ru&@N)g76dn-I9UM7GGnfId^Zk;N`dFfLpWvv^58)-V#T5{=ov*nqQFuMfC5p z;HZY|UgK(B^9Q)Hx`zzoW#ND4Xlpo!B;nM!iK3fF>7=ttW`fYqe}g zQmN!p4EI)JE@3QnPEpxcdb~=5G7BaB{g}3lc4m>2ma811>~SN8V@jUcXgygroo`i^-Me6#2mvH{Ddozy ze$1>t!;MdU;hlQz2#~4FyRSMT`L`>eSb2n2M(0=YTbBuv+e%(bDGqhY;!)U%=`W#C zJwKOq7HA+t&Ex^W=zy;e#vMo5))ueF6}I-ndV}123^DX$*{fcFlIL zTfyPBu2sB`_osmWpXP)l2l+>Hx^%T{E|0s+v`b=_z~_rH<*0i76| z6ktpCoJ&EqmMko)(&b(rn)a~RAwo%13I+lVXi3iblKn#dGUo@9+M;tjG~?Z}OM=X3 z!^gwRe#n8nKmTx7$nl{g+q2t+fZuUvlJ$~Ub=mFd&GF=4Nr0(`Q=Y!<+cG-y7yNw4 zGbnNH5E0ozqXG;gAQ`hLKwiuIA#va^?-<}tgf}pPFok7sDbA-(os!FkB+6Z|IQ_^e zCEnKno3%vxi){Sl?L6(>qZN;4c-u6|teOfNNUcHy+)ff&^`}HwG9-@bm<(O-KGdd| z&r!$ZsUSdW*1H0n2rmtyIWyyn20$KC7UCcewX&!Y@Nohu3C3 zyB?YUqzC-EWl@W*@d6VLuT!sv8+8>3BDh#)@9@G-Cx=wxn!Ihjb}7HDuoZg|Iyx|0 z8H00%C=NgyJ~piefJlK$?ZvLR8t^FS_7^QiIcbD-Z+eL@=Rt3m!@?$Bhi~$e)QYp;?L*=&(lXC`*U_ zq8yC=)ff8-rK&4%%gU*T{Gf}V2T@u&r4Y}{#0?m6oJ_me3`FTeVZw6-WA*z@Rw|ZN ziue*~GDbZ{5u&m%>k)xUQ}wvS!{lm}VR$$B{I7W(-}ssO_GDt*S=7-9O=e5uC9nBM%i^=IHlQoV;7NMvJIObA_CJsWyI=K)dW~~$@B_R#S*g7jv_sEFA>Y{#YnNzN@c( z6nv=>>SFW@Hp|*qunTNP=I4viQsizOwxfqxRsL!_qHdO}U`thgi33hkSG9WTenkkY zgxd4gBh(X;wYopoejvnp#jV{lIt>6c_>vM`&A=*tJTQ&Ac`D8uVc6&RXniMMHq4o8 zU}*St*v#wiPsK*>M{9lM+r+gq>CoNlmGh?hFkrKtT!`mYu#)4myR)?ROBo-(N$O^F z{1a{o8yC9oMggSGsZxj|NWO0n?kmhOWfH8p^g?Se7fG4X%1x(W=X~YII}ebkHXCKs zN^~A%$@~+fI0kpjQRu-NRJVT!qqMZuZdatSCu87QofDdh=gVCZP=0tq7Uk`q*wpnb ze$u>h{}IXQj~}{y;oyn<+^NhflW6$7e{3l^Roxv>kD3%F`YWTt@1ixpxCk;ZGGyr%Bi5Lbg>LwF* zxNQM~hsXRQBHgs^BBtB2MFZSK1+J4olr|e$#D)c>N;TB&7Fnf#;S{;U_wx7Ngy+;P zHTwTp5uLtN%ubQi$g@0)O5Bc+CXMn*tCT{qtd?M)VS*_b+c2jVWY^I))%5u}k`#L* z^%CG=R$F#eWGPM1mIJoPG^eM%5H~roDrh4_z9(hlSCi{_v(>SEa86G1^fnV8uVHK` zQrpIckT;)GK_T|rj#6Wj#xg8kx#sp|iD-aj-L5M4IS0u87 zLi0{=n#PAZSHLIQ0Jz+8Hx7PTG@}r!`SF#w>QoKlIgHgqK`Ho)TXM-(G)~91 zxJqjA4*(sw@8iCLa@)Mv=N*z5jdm&Z*S`^TBTBM8Nk!usqAp6I zb@ASMr*;y8@xns>htU`Jx7<%NZ}WC9r{rgdyUcNh>Uz^&;C!CC-VinDK}_zTakBB; 
z8MCIL$cxjI*`jw_VCvv}QNjd`WN-!r`O{mcA`JzE1-SYvftoT8n!Vwfz@2)nB$zA| z^1~c0O-79kb#Gg9$2@o&-U_rS`MMtbN;aDE6+R2{h8=3Xl&nA4m^o9|_5JqhoH_ zZkW7RY?!}?@&;WIW`2D??uWZi2isP#C7IxjOi{s=OPP-hx#0EmX0t;W=ffjo@2Si> zd&;`M4j$G5yb^xTqbna50*I*Lg~U#kc@Q*5(26u<%V;xEx01op({-g%P=hs_?aZS# z13ag7uc*#j@e=AtgmQ(5DCHJEmWV%&O!`&4Nfubj20R(ij)sIly@z4+Rb>x)Y<7y$ z8`Y%gF-tSXyN)84L;|%oZS9LQg`}~W*$5=Ib~A6Cu|ny2|3+%P1Au98ZWurF6HESD zrAm=I;qzWRxUn6D(EzJNqCx0szKaf;H^pjpdT#&TBN%~>jXIr0dzSj8asiL_F({xM((E0 zM3!c2TS>Gjvdkwo+Kq7iV#Ek%+Z(stsWNR1OMA1;c&qnk1mT`>%;?pp`*bQJa0oP$ z2ghvv6%ZN=HEA!5c?)`Q>)3APbTFpez$qM=C-t-~R*|E;809>wR^jrlqLexe=TI^N zi=8ostt^X^4KOYw^9+zmzBuVWf>F|`iJAukqpZ({kbsbd(E=83W1|gq4);9QQ2meu z>-2VCXHE(@;yUrlw_e{Y5*D|(`n!EjP{cK1_|^T<%C(w46zt*+yS9l)mddFHE(hsq zvx}VB-(E=F*yDqi-P4Z8hiGw<^Nhd!`Bses}84M4G&8oR>pEv7AjwTAz@5mM*t z2q6^PiQJ(zyO?clS^{LVr69YZ`^wp+{p$yO|LO&9Z7r_b^h4=GM+L{r+E9V2EKDP} z1jYkA&7y+pF7QSpb=howF1vh)0%_?1y3!YI#E(x8WEh+U+_8X~k)1Br!v8+Ad_&B| zi&AL-4nWN%D@mvOp|9VoQvp}E7TZSvdRupMI&iHe*CQBy(&`l@#MDLRs^ETm{`vK? z!~gNoXka$%4qH!7d>ihe`tO>AovJtoK+PB=`#Q&&rz1C;#Nfg32Y-@FpL4(HZ~vl7 zrw0vP6=8#LZRxooo+fVja@}XLEv0o-SH9-k9DtXW-v}{`36*0vhhck`Jp02Py{&RW zF0!+35RKDs`9eV38FOS?_#lKHqr}}Q_QHcyK+73Dug%3rxo{G(?c_k;qu$(gM!-)V z$$&zc^}c17MTgTZpV%$y%6Rd*FGCaHwBVyr$B)X&KI4&)7F1K++kC%EO@%%mnwO+t z2|yy3Zk^*j>$P($GM?x1U$f>kAZS)xA1qR+O(N^PBx{OOuvEXnAsWLfAB}cpa=>)Eum10fq%5g!gw?p z+e=5~X<2BfhY+*{vEV1TU-d&E`?~aQ;lJ*7b0Q-`s)Ow1q$;}qLdDd&Tr^+k1xTxu zfi11g)5y?3Q0wLOvnI3NWmeJtv^64`!!?xi?k7IAS9ybMK197TO~}{9${*aO&^mM3 zh-!L6sktJ|Lj0jm!z@R+OUE>lh$fdY2lR;H+g2Wzsa4$#qF?ORMJ+XJ`<^|W%9PI4l57+H z2@Z((D25o&+~(qHix=J$7FATNOzdwDp}*h4lL0=dw99!=te{zFb^PT55I9lQ@=&kq z>+oxIoSsvkYpOx@Z3TzU_0@M7oUY-nYX!Ld znf=yxCcCkw{kl}>8mB03UMn4dF%1Q_3*tXO1a?jAzkhySYV)=d-;V$GWFwqfayQsJ zJU~kxm;V0KUeM%%V#T*&p`wl;uN_ ztP}r!)Q#e8q@k+&>+hp(KcI5G_?+BK4s+CM$?(h?eN&lEjhbF28C7>{HBMM+_Qk|? 
ztBJ3emea7~;M$M<4Tj{{zxF4d(co8)C*MFVdK9#4pdvwevEF%oAsGHvMzR%jCs}1* zIQAmDEXA!o*w)H1auni6f3p`F#4E|k_l6_DVZgD~F$W1fxn$pv1F6YLOZ(N^@y03Z z9~$MW_z%s$07g&S{U_0Cjko$2uv@jRfNeow$~Np9ND+X522}r3vjaH)gGcH83*LXx zdl+YbDS?Cf)RRxW?Kx2m8T@U?fzLFHc}SvXD>ENp!i?ztqnwMw|^lEqauG zpNc0PYp@=R{C6)3I&iS}O^|ixZ90RSv8OY)G>29_SvnWrE%L&hmnQl_IZ+;=gK#^K zr(uJ-k_uPcI`>a7;5wG3a|i$X4v}%+z!w>P>5VQ*9WukroKfJ^Ky9h2wbyV2(*`HS zK|mFIJNCf@N4<@n;-iSNE$hOj_^IT%cx?EAM(GV@-k>z`B3KsN)`?fFqtlQZUM^ zsq8)b)ba6djANek9yp$jc=9wzl4nWEj∓wx{*xkMr_wT@nSS4S!!HQl0gk>9BOL zP~Mh3gQxZ@OadW^?T>?8V`n&H{L3zhB*1O^Pu_`i!FaHP+-n2ut0`|E?DLU;{Rnz2 zd6%XJ=bs8(fEZteZf7smirOBV%z`nR;dzhto9ErxeYZl}Ha1TRGO3?3xq8LqaPzk= z+1>D7bUEjwB&Vf#*N9o`=JXw+I!&6x>`Mk@xk;b=Bd0b3*GX{k%?4jt_oXOxYNB0roe z2fxbY-(Urf2+mpAnq?+;-h?>^)fN_Nq!^fw8a_Am+dGdP3B%fIKeE!zd-1xo^dg=*UJl_d4G_@P; zF2l?&04OfhI9b_8c${XxJTkeh)h){kn=xrB42O=^quba-jCn`U7Ti+KYC&>JPT(Yl zr_;(c@q2pvvNQB1kC*az%PdbiTvDW?#oB6oJ{4MR?{s5^y4O}f8&6ZuZ$Hdj3ZfKL zJi;mlBlQ`WdgZ zSs5JNJWbWhsF!47Zv{-!O;!3A6yi8J+_z(7v~k8n1wvdOzjIkHL-5r)TLnq8gNm%6 z0C374P*7?mo+c{G+prI<{bBQo!eH2=ea22Z6oxNqA52258>4)m@cPD|6=L0^?)Eq> zwVa#%#<>Vn&S1aRz zrhi!wcRrc(ySC;|NS7j?EWK7|8o77d#5ZCuaj}rL`XN5QXsvEs+}t@u|7Pne@p5-H;?3DWTj9l z-tVGM(2`b5>uW0l?xYupyM8~ z8P&l3f%9d!i(_5Zp{K1D>+mO{O3TF*$vmVec+wN0IuMbe;YXRz_e<3VB$)Tmid;YZ z!#kX4v~35gC%tQC#7~v;LIxg`#B7_Z`c%>qf#6RHjQo)jpLMj?N+zF}LiE7k&O{b% zi@;D9qu<1s7Gr0H=9|GGCf-%%&v839IYp{!n!`$lu{CqNfQLUFTC*U3&8i1@+QT_Bu_s+u)0Nrr(@E)4ghN2d!JvZR6 zTYBhNs5Q_B!k{gQg=*>2(0|v+%Vi)Xg9fz`s?o!Nzz8x1O~~_rVt3TeFBPiE(?WGa z|5l*dSVyf@=`oC&@GY3&!<2dedqa_D9l<>RhY7Z1#FiID-3OAB!U@ahyZBd9tYUzf8vyg9Yx`k@S zT`%OLz$u3fL|&b}S$?(Ent0Q&p2V}lz5bqCLjy3}P9)N+ZT z8x7ecaH-y5v&p%_2#nE`FY%)lIW5)(#093UfLd!AORR;y0Ne5sqc##_!ayfUh}{vJ z0Nr9O1qurA@69Q!tcEVE0BFf(bVT7(5&*nh}fFAzjX6vTb9k>gSjZ@{ikpa*acjBnK%7WQZciVKf1O!1+FAv`-@yx;sDcrk$7|xIkTa zgP#+1u|Ei#mIQ^fl~C*ssy8h@WLyrhJqVP{=kOr{N4>zqy_~OzzrZr^V{h5m4B{)vy?1Tv~G*Ggjm8LD}ZF1Ew~n$ zp&$g<;=L?m<3?)cOPA;kTK3mkD~rs`#Rjur#lPALv=*9SFo0!w*^G7kZTlJKr|C(if17U2KerR;`HGegCMi3#};l3Z25l>9fFh_%V@AU-nO&6H5J) z(0@BU>JRwiox;JL!%fYT$5ZBK!R}{v4DYaecL-{C2uc$hyXg{_pLz%id>Yt2cwX#g za0LScTHUnPgw0&AR*=fg-c5_x6b9@7kiGo1*>Wvn3(t2Y@ON{H#nC5#BgNsLO-G&( zyTStx4mRh!@EEhumDoeRy~h|{0nlIu=6`K}n{3zcuEPk8&1@S}EOuwXlK_XqX2cG_ z&O`YXacuf|I6ZnjOiL3&;v)a}T1{}I`Y&qjpgrFF7$M-vN-&NR&jwNTHk#f(X{(=~ ztK#10@K5Q}zIXCDnh&9zpC`YpW`*TpQME}l`)e|)E}DbOvT=Gx*Y>0rjuPU$8-EV^ z)k?xNwCv}CUN%khXZqAXd zd^j@a%WpL|x<1RBC?#oKC6JEOY1LUC(>&5Aos`6&_^1!uxgmVFLhLkN5^<@ERp$tifIC_^hqI2 zzVc8eTZednI)M;FWxolgV3vgC%s1~4B*Cm3~ zN;QiW0xjvAMip7CI-eUU34gQfR$LsTC7K91=4zo(G)vC!G+5IPz_BN!4$`(7EbK8c zn$m@2X?goGDSyb)hKn?ZN=8XJJ3_3d%d+2qImkhy!h(7nYc`|9(Lo_!!(to0X`8S_ z!%52l%CAo&-X`YWt>ABuSl4Dnl2zvj{s!h`RF4Z4A-2_No+CJxQ9i0>(L{mAOp**y z;t+%iJNvpF9rDo)x*fy>H2>@9gXTq@T;H)S7?aMWzFFjj!sfG@->ns{)v-nB&BOl! zm$BOdAeX>90u#6Sa01K)2_T{X%`^f40Q{F>?ExBpHZFK=Y@AfvZlf>|eP3z+0r{O^ zZfQV)DnM9Os@gv6et>06Knr$cyGg35|Gu^{O|)sH-M%EljOWbEIb*MHPvhVm2p2u0 z8SobnID}%>^mLbjUq5Si1sov}ZP36d$-oN{;5PsMZGCEZdK49R6aW>e`piHlOR zOp3ui7nJN=*bhFg9L6vXKB*kud{R1r@q6TdTzD?G<7nUYxaMdi&WJE)8L<4&!;g#L zlcfkh9Gt#}?vq@Q2$s6$UwP(7Y|IHMW{(537&chWaD7~cc zp5fm@W0}RdK5hIT6MJg8+C7x2ahy{I)Fpe-zY5Is<>1MQXRm>plWY(AtTihn(Zv6M z-#D!GRC?glSXK!9h1)hES%81K{F9hA6~oUXp>lcwI2p`Nh_2rY;P6y$AHQpStJ=lH zM!k133`LjnSQUp}VQYeBrT%&fOrCzY@;{TXDisRsz)})W7ytk_&X*xY0u_HaE_iKh z>|Nb*+c=uN->Q8Fjw)4iF-w{N|5fcwIkJ=OnxxwmCp|NJM^li^HASjP%0JyTcl#v! 
z0P_&@DElOP01_lE38w5=b`p~pwnXs<1Oa>p2j6$TAO3tjPqa&x7jc&EkBDK8G?qr$ zG)~X-UO>CH8-~C~p#0JrFS|f7-p~zM=?k!z4KkqYGX)or9Z0+u!+#tChje z63vKzDC06=;=?uno5#`l>mS1LX*Rt%&b2%%`Ag7>Mf5h#i}HO~l*eI?c1K3M7nDEa z&n(HV_D3v9&=z8}2EHG)kmbLPw5vQ^?2n3HmtoFET9`)IjT(O|bNOXb;ydBb7><*o zJmHTnzAX5j#c|I6P1)=tN>luXLv0h~ITfXf#oy)rC`DJ}-7b$W(B;zXM0^@)7ia`H z1sz8`f;jv%ELalru4&?ZhT|ej;_2HsNyHc457{KitJ*KGi8v-%&Oc<+YM<+vxLj4< z<^1Q_tlDkK-Fttyq`V4V+{z7jw@J0~X2E6{2lhvQnx}eFR+nSp$`358eu#>dAEH7u z3RyAYHg#2r`pQvRywBjR$o>+J%h&JGA=2pPKeRa>H!fI|rPG2Ju@qOpsrX#g`&#Am zM%cWMjW-dm$Cq`3yJc4_*WyAuWhHu}p&jQeiut%X)_#Bd>65?V#(S~>;zKFgV}{0* zgg3WFgQ9-aG~=(WdJ`!xaI*s?O~f8Ho3W^TUzEJ~=qut(e(kDV4XvCHvfp4UXX%s= zLwrzJY3Ef77VU?>WBx&oaN)AdKJhPyEMX<9`*Uq#7D-kf z?fUa!T#tVY*k4(pGnk~qyZqhwj2t+Q$M<7JrNOtK`qWuDLs8A=w794YH-ZiR*Q3R7L#{9M6@qYa{PPwi5arP5C!?=1`+syinMxl6X zTh`veE8iQxI(QX`C+?|3QFdOjw=40@(Z^3Zw=a5xKhxV)*B;Jo3a?agcoODkOkbwr z4E=xVJWH5D+csb=VxDx*n~scgWZx2~U5qR0mr1fgdt}8kVwBzoe>T z?fTe{tl$Sars_baKJet^PzM~s!IrIaldC$x?VK=gO7aT<;EFQUNnU-@XI_{noL|OM zJxkN;J1fSoPx?E$psuc#;PzoUEcA`?-En`=z+YZ3XW=k!@GCju$nys(pQW6-R8`%A4$hL_SeAoA_ z{xf7FHI2aWO$!ZQZcblYbqq5gK4uScQ+65IXsPEL)UlO4Mz&RIP|G1!du3(6kqv*9 zxPf7zDE$vCTdD0Bf$!3`Qh|`VV{bcv$Oda0D7x>UuhpBfR75sgVj3Q?=t{-lK5etA zzQ|#{Zm;Z`qbS~T@kv|efZo4PEXy}16I*u<-6P#LgNgp;$fEj@M-OS>S&lWC{G&5D zy!X`5?#znZ@iL8nU9ul*v;Dh#W}SZqMy3{&Ck#XE5c1DQ}TuJ9@oXQ%*cGes%Z?7OH{ik8jM0ZBU!Kf#`p{yaOw3 zpoF2sbPeCaDC$aE)dw^QHB8r|meN2;8%_+^C)eM?AWOJu0miu0Z>?s~hdT)+hD&_E zs?|*mP8x9>M_t$3;@Yx_da$sTPXkvP8zoSXMx0XMoWS=Qo2;yUX~Zc??|RM_SgJAN zu0j03^TCAGo4sZgzEN{dOx=H{6h{0gu=E4(?V)Z>-Z}?woi~C1)_H(A_t(w2{V;Rx z3vl%q=KNijOf`oR|I^Nl`8sXc81o)}aTiuTChMN>xJpaZ%(#i+IiBrrW2C7={El7hn`gtu1%VrEafW}KXA|O}G(N`m zK30erj_YAA**}MGj5L@6-?Lk$px;On7-{OLYrD104lx}yf|9dp$M9{>a|Qi+i!A+C z%`kafGfc(GWHj#1s-1_K;Xq)W$1uakVl8et*L+${)ptU3^B$m>Ru7gK3%>&u4ls0X{;<8}grOa!k@t5wAv*=64_VFa-7EVo`>n_95t z7PW}d$VnRw{@bHo%YUn_dTXOW*t%ePn@wr>S{rS<0a(9&GbSg^PGGp8pB*Z1qO`)& zTAPM%0ta*!&6R((Bn`K1VJ-|L)09hjhNQR&xQf7WY-P2|*)uV)s^DS~yx*@tPNXd? zUU-((sn_p@*`|*?rLMi4x>M?U?fRZxW3(b=9(PI&xVAHLn|H#&bPuMxaMdods%O!CddKDF*_DgS(=9Jwy^kXI&Ee4 z&$xZehCO)m_L^a7xB(+P(jsobGAnQg!^9COOP7R!m{ z!18%;UbXcuPlKX$EX*Psd(pd24P)tm(QUn*&EDVzgy$lF(%izcUCy~-D}5)x5cL}8P8%Kpv|Ul!W{J! 
zGhX+^SqqF?M@lv8u?mzP{z%o0dDP4@39FDCUXWjNMBLpOY$s^N=1>ff_}sk=UL8tx zE9{^fJa^BqJVY$!S*p2~=^X70omKJUs}X<8yYvWr^7nQt%!($IF zdsc%CRRCA=M&OrB9N*^|$>Sp7nU9bMxv9=*JtJ`rMKFbYUM1Bf0cT(!y4Lkt(nx=G z3rQIvR0Bbd)OIDIvjt^@RiPj-LEBYZZxjPT9Hkv}ViEhDCT-KOf&beuhVKWBJ}` zEw|(sDk+q$#Zm4js_->TChq(42M~W)$%0hs6@x0oMmF+!RUuyJu#BZhu^=hdBcC6N z+P-1CK53&l)$3q26tx3tV1V~1mjmNl@5&`sM*|9@&`4u!5wL@$PRH6JVzlu+0(a_} z79b`T#!&g?oUF&S_E%y9K z*5W>0t_u#Z4*|-MOv~h=XFh+p)tbswA}C3!QqQnlXjE+hU?HaF^e^9ghH0B{*X#EY zkgyf^2RmxPU#}0Y!i|o*BsS~(>e4hJZn(CEHFH z`+;f=-zNi$72}5pK0G3(t(BTOm{uX7_+l}wn7BOy0}F97fZE+TgbIh@0oC^S0uLnY zK*A0r>{CPy+@FMXMUa$m-24hiSO_mSKN4c`*6|p`oDdnUKT0 z2dd@DUFsYd63YVFc^>pg8p>43R!S^uL&3y`SSjswvz1N+aI+f zSh%k?WvL{lg{U_s33i({qj%xNMa~ScvmI4(WjtJyX9G`y7ssjEjea06o2Rv(9n;pK zP#sLG&?9}ZnAU&e`aVyNX#*Xy13SgE8!|fb9{S7)O$Ii1U~`)+djp$0u(?&9=Lbfo z9=`HKM3?_{M09~^HKDNA@_TA$%-W6Cf-eD=i+3b`>czb{fk2Z^Sgnoz+?|{>z6S}Q zzECa{07$(RC{U}#H*nX)GQNxW`$jmY-w6R;ypZngi{O8vJ~Hn3h>BAkD6y@}bI3uP zzbQelNS}+)FwVX<(*?EGLxXKO(+(!Q)dyPh0^CaX{`=%>;TQSLeC>J)JB2Xdi^aA9 zAAp|02SKFN{kY2j{#AJ;;ovl2+k5Lf44m%3=?#J~8pa|bboKZ*l zP)QEf;1+-D=>`FU3M6IOPwEnbSnwj~PwLpBT%?)w?B0~gSbF#2VXK6RjbkkozI>=M zz$>HP`milLI7|~FQ~h*t<6vuYcL_Kg_maOIM_Xim#3Z4Y?3V5cNJ9k%Tb+igbxJ^f z4=9JNx-YzJ-qw9~Y`dPWLLr^`VzDh2Po9amg<5~zE}aq9wL=GRD^$}4aJwmEqIB~+ zWNeXyr@tsy`3C)iLH}S^GKcqfSU|chk#1BZQ+a-u1?2TIp=935miDJzL58mYtN2xx zw!3;J5_fwLNpCZEDzDw5T4ZwxwoyctP74qY0F6xUp#V!Qo~qQkBNwZ5RzT)%(?xL_ z=U#uMtE)t~NHGIv1uhTKN95Wdd`%09S$eUsM7Hp12pr4nceh0mh)>=2i%D<0IScThYy{`|*3yn6SFhzZ`M{}sA?1FbtC z-vRluQjfy~6SC@z!vd@ov3BtoH=ntsH z^lLhoH~@_B^sLe!-qj%q7TYNt1o}l7+H^&}7g)ABtxoGIBoQx^TjBr914A}@o~}C= z5uVG~4WKE|$LSIyfC{eK$D+ZxfcDVWwOyq%fn;#Op8K}r!c(}9SC%*z6A}m9d7FO* zqF*!t{<|o4M=xy%FiZAFzt`>K;Q0~nx@rP=c<4Vx;{ozY$ZI`^Kdu(TA=ko;b$6Ke zj%=42{iX}?G*(RvJ58Ne^uDBg}Q$pNJ&J0HJj2YK|);$$yvL5in26KwgpJex=8=2 zrb#8aOYnT21)$jrOeN?IfR5oW+$!ewrda^F4rFh3@{_45NrGSQc?$;oKH%3u@>K%e zvcX5Ug}iT`AM&n?Dv(;nXn?#D^4d?CqTz2WOaJvx?Fd<9>~}k3c<)GnI-q~u0qtJI z>6u>qx(ugx5}5#iwrQIcYl|$mNOS8VR<5CZl?xye z2D^a4B!EDHq+ONhT*6_;KxThg*WN<>p~>!G5ib4^sH5BD)D&tDxVmGDq#t+sBH@KW z;jNXE2(}Ox;}R5rEZgRNDb%-OjH_d;;D@>DCRz|*| z+><=tZWY4jekCDnK2ngmK40hCC|1?F49MgVl-iI-a6m=&ozWzR07wPmY*VRi;p__o z)Gtd0%>}T_NKr4h-dHw7>w!2nk=MRIh7H7N06_=2DSZtZ&j%lgHYlVnddXj;zjYv@ z0EAp`y-6Y<4D34{Nil!5qf-;bhLmQGY13BSVeowDcs?BM2K{16fmyj&q?EEw*mWA_ zlk+gg{%DNU-v`(HLwNn8L_YxnA8Fv$;cP_|CK%-SFkJAs~webFo z#+?JFEMX<9j;Y-z!t3L#sP-h4Lj{yM-p)&ndrpSsB%8AR(K~;ZvOG*iT9GaDh`o=~ z3pU*!)pYbc{O>IP6h{{y!u+DTRGOu%x*Ce2{;|YK?@yDa1Sx9cQ&yc=ERJ)I^qQ=+ z0DY-;wLgL!iZjxpaIrs{C1Ht=^92f1q-P^7OwQ8%QB>vvJKoB1TSVt<&L&A-m9V^C zm-#|ZsARrw<4b>@#cT`puRL3((<&udl~0$=W-Kb-7bV|`_aGjKJFHX>cA8CZj&m)~ z%Kecp9llV-i|B2f7v=k~D38ND#P3`WCLG`QHMpB!gSFjjK$D15 zt-M*VSs0-Vf10OyQdYAD7Owoj!s>^pSot9;YTxOqKJWsY8e-{`zZYIhNtmADb)@>| zllpK}C0>7Z^@`Dp^QkGK4CB1y$e>>$5xFrLXY{dOv?EEWrDud>Fg+2GpyjQ8^@Qne2+iAoYo@( zA1>bI@szibf{$OatA7vu8kjZa+1Eq8@TGn9d}; zxdnW_v!-MjH=1+#0&dJjOh$m6s;VI?5q znSdVWES$3ZZI+cR7ochpo-xd0&f>HX*cXw$BMk`6G`x`p{4D+hvEx5}s7~kKtPy5R z{3_L#=32Pfa#2kHR&LJW7;ge63*gk({|}e3+X5h$z&ipHx2}u=%mugQD*|i+3c8{g zuy+gq0D451Aw>cee>pCAZEWmZ>u%e~75-nK@4yNM_Q#eqXLuK4FC57Zf;x#2CtIL@ z78E&>SyQBzqI{p6r`(KNIE{(hZYuxVUd<%|`0H!iDb+~BsF?59KDBLsUo_HF z4a8s2z%PfF10I`&7*<220$2-Q4C(4v5&4e~)&*{jTW3Nu1}SIJxAqSt{HX zUA*TN*M-4r2{qyuP1fX0T-@;ARhC>FzKZ*!GQB;kjH+z-BpCH9ImxQJIgRV)EUut; zq{ml5^8vrcd3k-%V|fl;$e<1UIiyflzx0gjDxMwm>YwLv#d=0uBzO#oHI=>@G`J^x zjB!7&f17hYba6A|PiAKo|1V|ZkLXSP3yQin`ZE%}%Ea&aL9f74`e8hlj~)GZA5H33FmY=i;LD~kn%fy0f5Q$O^uC`KX5Pr=SiJUv#qvc`uf0fW zp%hv$;xkHALS7Yw#n%jGMX#5*-yEJ|K@`dDUwN+@QAw8OBVVs#EXzrZwmCGW5K~sO 
zq%6{Ur4ckJ=j+a-api62@!rmfX$`eZ(aZ9hRYq1DBi3LQTgF+%l8oEizVY_`r|_N+ zfA4F>#YH3ZxP%hs@$HHgsg;O|l25ms!Gd?VeSn_k;t3m%S<;-=4evhIn5fCymio9` z9o#s-z}3Nul$$SZOxD(i)L@}Me2)2rwv6$-DL?U>AseE}ASGR*QFIQjPwdAQkCqkE-@kf5v8cJi2na3OcIbw1g|f3f7ocLSq)wyXd5=%EB+= z1wFf~QBM`DYxH^1FoD;VlwFnlrAqP(j%#vs@2U`}3Ul6>B%WSnsaY1q&4YpoRi#@j zz@NIs60H{d3AldSId|OE;FBlY42!LohtftYZr(sKpKh_DhkH(cZ@5Rqe~gNCje706 zmfT`crtI(yX0wWOp}K-`h^Kefeuv%?5_(n;I-cW1@@JFkqDB;15p^BnLp8ThPGU!{ z<=8ZG;PyROLu;uYTDIqSuGiHWt)eubmK#PSBC_S~m7;Z(I3Wow8ik?V*}7Iy;`o+t zJFe?0*+iRkH1xSR;YCIee-i)cs-X3g(g5_+Ctm2u3h<1UjqYMqYE`9#TA^!GBCOSY z>z8Onb$!R8ltPeqRCS?D_?*^A{$;77?1uYITw8Hv%quwQzYZKHvDtl2 ze0}Ut^EjYG8U;|H!Qi(g8u@4v=`QgnzdtXspXcnY0_tDY{MzR6e{e`l*AGt2Bl~1% z4oTqJCnvr~9Q!u`E27T=uJkB4SV`1f2VVsjOW>>4$>YG62mwC{eFtz$sycvAO2=7pu$NA7Z@Pz}l1K$o)^$mb8@i_WY0TBHC&LL;Ta`v(xF*(Cb-VNO*ik|8#ys zV=&+}2tyk9o7}`8a3C$8(9VHs;EgQb0owRlG8=K%4LIyNk!Pzao#3c-6TN^FdtWB2i(5K)qbpwdr7Bx~@;ne~5-5uAD(I9G<-P$QQt_ z0@)?l6;V$e_t~2B3YB{r?3NY#-@pECWP!$JO1;1eLq7;Q ziM0k_@cYPj+)%T2yVEVvAj|^-0d*F=>Ey{R2m|0jej~pn?`vL!B)%%7^0HVkHJ=rZ zML0I;FTt@RL&c9{!H$@dXNBY7t1@A47luaEdKZS?e}$ntHDplnnHyml5n0O*9eY8ZH|>x(f#tizqrpaAt6|x(K%2X{YD<3+zm43`-AI33N-bg| zCab%!kp}w+5I7>V6u;mxs7O~ zF&Es~_X2+re(sF5YWQ_65;$SBVB4MerKxZPe&AUxux-x`?3S%|(^XA{ z!Gy6j?X+R{?tSSt9vxv-&& zBgtjLrBhZJx-QbSWG|*!@h-km2i}$nrs8<}${6FDvr?s2)4d-7gzd!W6eXC~zq6Qz^S~ zZ!U*{G^EI*Ue8E)*3lTrHn^P5f6$q_m>}aQpCBEGCm(%D;t#n$@h99a@keB<$z+&E zrdMa879cVLtbQkq;ffg9ni~Ny+e+MFkCzV27qx=Y(tujfV-7;r#lp{HR$o!@TTNS4O$7ua{u z3IXO~Mly^4X|ywNWV@#^>1iWQ_w4Q+EzveNG^r-3#O}e|?UU>S>_hCM?33*86iG|A zOo>h;43cFx;7Anz7OUzz-#Ono_1z!uZj-^sC@_h%!2 zI4H_6orOu3Mn}Vc`=}WH@%2Cd(|6Ou=}nlI7xQo$4LC_!98QmhH)T0LJUA$(H_>fa zjI()^^4nFG--ab$^Xr3I9)9BFx5gzC{H=nBuh)~ zx^A!APAz^L<{uXG5w|<%Aur=3F7NeY4_|-BxhFRn?J;;4{kn+rs5lx~KIZAgU z`B27X66p_j@;8sCH?O}74==OX{dqpfvr=BdpqNk3;=Cx|hDCWE=JY!?lu=Or#Gk7q z`*bvnl7zkx(;H+vt&ruv4F{j{aDFr_eqDrlG#rF~>6Fd1SeZ9pCMDmMJY#s66y=3H zy8bekE%WnS{+UHr@3@=dH+HoR?&nh9RjmImj)o~iO@>__e_+U^*@gZz9DJY?a0mlO zKZ4%%k9ydG9|t zOO(!}OG*b>edg8ss-Ewl>E3)Ob#Kq;s!4eNXs#*hhDRhgUcQ_}NmNGlY*7P$ z*gVO~<2(v0FuTtd(q*QHmtj5;qoDqL64&z%+bfK_2IS_Tf7L^$$)aFHMYHP1k70g4 z$&&05_|Xj`#?|yNp50YPnR<4e9%lKBak&bs)gWNnd>38GAy=<|cg}P!#zzXmy$>&w zs2B_;=Rg1MUtWFx$}~;s=6Y8SnuJV$b8^_coB+jT4OrE^&C3BdP|dN^!;i0j0EUzQ z`R~EMMp?=~=XxGo+zx!pH{_%rb*s*0MiNMCPF{V{xA!n;n*q9M;tr=%P~TpgyqTw} znQF@{{jvccrviCDT>T}wW_tfvLwEh^r=fnm9b5VMmG2$CI(`*+9h$m+u}|B7`pxOP z_am7+M^paH9yhPNuIkkYv`NG3-IFlCjz)`AA2IrLlO@sU6Fp{xRr-MrI1&h-jc7g9 zc0RK{-$@kBlkoBvt8GWC-tor_+bT+K669~Y3wPy!+lM=U`9)#(O|!Oxy!xP@cV10m^tQE0qNV5BW7k!IX&P2w_@-g|*6FCVYc&1W8auv!Qh{RyrtLVE zW16R<*1p$tQfsVC+xK19@|Dd&_NFT}9afD^!*@Jo`>to2%CdvD_tSJ+V{E#v?>T`1 zrLhCsP*^Xkv)y_Jw#TOFIF4_draaJwXK8w{F?M(q-}QY<8Gc~6LF;`r{ns4_rpxJV zi@VZ1%-m)?T4;?%^Vhe_cdB)Rw^C1bX<-9-YyDT}SpCPmO%rJsTaH zu6g|CSUGQ;)4yxGUf;7u!saOi2P;^Aitg*+UT=9JxGVdMv3-K!lV|C`wG8KUZ=hb0 z+^4jv2kaiOpR*tS-vYLBG+yi3R$J_QfNNu{4A%%OAkJZux7u;1!D_`3%CnU3Sgy^% zp1lD>8m#6r;F@iJSmG`8Z9G7&HbH~bCXguqd!FIhu3=cM2Eh`kZ5|^~p63FK9Ot(J zRby)UV=REcH56c|Or;byMC)}Lu*PF-U=D!MSUQHqLECIiH3oGJj0Xz@4P3|CWU4e^ z?aXpa8rHs|r9?nt^NiPafzi zhH7(kJUKN+wsmZd{K=U!vg{M{CL>DhG)Z~7+KwIi`MzO)*>LIrNHm>IjBf?41F*w|@Pdp^0IS;O2!i(8hG)Sjec;!F_HF0YzGdN) z@J*iwf?E2TzC33EHfXKJHuTR{v;t7=2L2|~V~N(jvP|E$c>ch{Bhh+ijiG6d4bO8t znoB_`UE+5efVb(pFkZtofl$M=Xfo9etl`x7NjzMCsI-{J6K!SI#@OZ<*Mj?VK|k;f z4JD79uU{*#wss5F8o3^Ub-m?~ee za)T#-fUxvWH`Ll5<1ogl8`!oFV7sc-j95}@A8d1608x2@Id1DkR>0aZ6niUt&90BD zdIO3s!5aGkA2m*Pg?}DSx{X=`*7&786Ssonc^>q?8(7;wu4S8whlk2{L~ESp5J$)L z0?WXx>W0<|!-Lp{&kMB;=Sl4CtM-ELlv-NbnPA>=L%N$I8P{aRGQ7+Ai9H 
zPM&ZHA7DvvTO0zHZ+Digc)~5m#Wc`y1PLX4JLKBgExFdv^$4!(EiZ&?_sa!oE99D? z1zztja_#nj-2-+H*xLZxwFIy|jfH!EQ+o;t zQL8t2#TkwZ*u~{+!r&147_B(6m)!d%f>e*i6wF?io1$pop2ySf^nGuHv z+z51If;e17ZE_YY`L$yqngd8_33z<#qcnJ}#-cF`{J`WIE~36oS806V9&rkPhZjSW zVHdfG1v=q1A!~!jkf#@~f!W$>E0Jr9fQZezLBya4k_S(Y`HJbayIZ{0$n^-X>n$(D z>#8sBhF%9FUie<|`kCCHUOwK-$1C-GFCTx2eB9G&v`SL7FHA?%Oym@$HPlTZn6e2Zl~^P78PW?l5#|wc4vQ?3$ z#yA8#%pvR#-EbXZ+|HPNZF1^}X6RPyC?6N9FIw>GQf8}(vnn6Q5|R`Gi5&Qd6W3?{ zuT_y|jI2}78QI4tC!=E(9FN}E=JD|xbMnTpPF^k_ud#Urt^Ln`LhDK=xT_eqM(cO` zO!v~2K~@WJ^d!6|;XMf_i}-~(+<_LZ;Wc|mAK3$@ds>DD^@Osz3WszT7S2KBNyF|8 zN!G$2-6e(F9g}zj1RA7&15<&siBT>S!wh`zLaK4EBgax5q~<8tMT&;9%)yBdt=t|C zhkBC3t&jtlsP_VYLIqoa9U30MzLg9A7A4w-N()Q>Iu*gVDGqcU9!tXRt;fysc)Yl5_{%(nQPyzW0wAWi+ zn#8TD$_rIpu+OW|@(>Wm!EWx?!MIZQ|yzTtse&&xn~@B!o%XkWM-`fY7pS(!{wGiCf!}$jmX}TnQT! zYWB%hvszVaD>^qoUBq{geNxy-dESQX{upOGI1B1>E$(cSp|Y&c5(K1r+oWRK$N)&j z?!eiR>xof+Lt;IQnxyXz$Zn6x4yO(9LXdw)Lu^tgtmJ&-C3o?0poW0Hlbhcmjf^@# zyc`q_INH(HJxyZ&ie2o-yG3@jo=1l7Ll=L0DYoyR`r4}44MB$S=e>#3#rH;;#1-29 zPq!3J0d;*nOy-*#bDorq?QDe(eV zB%>K0KtNt{C!rjmss*?G>#)&lz3#C1E^Zbw+-$o zBLjUU9*K>>C5TXTNJHcK(Z|tTxHC}L;nLU^PA4=@Iy*ICTy*eQOV%cDd&A>N1)vli zwgeS_ZX6CHhp+8A4c4nMiU6s2hbP0bh@o_WdL(sHwT&CZq!MY~H$&o2cMSFGNoeXb zYq427D8^PNjSpgYJY@p*i1p%I>#@GkpY+vN`|7K$6^uW#`YOR|&F<|zmvf^&4CP0l z_?CsA7g=~K6|AuyfTWU)uwh9oLlxdl6fYcq)pW_yhCM=X+i|FI+auG%pm6X!QxKH& z>GpK^hTK!D6`=t2%&=Ib%4-$%*c6T@8C@rzA|+-|vmkZ>Xi-6Z&n_ zWM5)^)aSiz%X?F{t^xmN1ne16*d|0>s^r-hSG0v~Ko%H(vbAlj-WXd%+fl?(l1B3H{q_puO?4Rubsvg< zpZ~!-P`F!so*Z||(CUhIk{jM`Q8Zxhje%-~k`5=QZHl?GA@s3~&tn`Ak03Km^4T^T z2umlv$oG-Gqp(LAiJxM7m352?tQ?mVJ27m11yCGKw>Gl46WmE~hu{(egR%t9p8RwrA(`**J*o zUsRjxFY*S8-`Y8zyHiot5WwT++D;>(10gPa1u1T4K<$LZbeyP^8-AqFCcBvP}E224Jp z3W`4vr3bCTvuz^hBN0RALV{dr8bJIp_;M_B%6HV_(BpCp%8hMPw;!%70xAFGZ(B!BCugJk{G~#dnAZs1W-|_O z`!Kgt74H761O7f=mLLVHPQ+8Tf%Bx;@>ZdS2@Vl`6Yd>aC4|ZacJ0>_a;rqBqkcHRpm@@T#>&mT^1S#=ata%Xm;j+^;+_vfj9e)( z8W@>Xv*`4O)%{@|XLvBQF9G|2t~wvE@Rzf8;+}xMn`G~f#&HsYT)J;R6dj$qEK-lo z>23M%rR%d|_&ukS<`E9aXC9q2k%bmVTJ*QUu}{UYOQ4%VE8b1lRNRWs_R(ck$eDX& zns?|tsAxnlOI_)b^*quF>L@#VMk`zUM3xe8e&LC0kyW+^V`Z5&ij4KSasX2PxKtz{ zH0EHi`FcMs+NPO^Pog~?x&ZoWIbps^>AJWOQKvTqdW;**rcL~QYz9O4@^Ee`4%#x| zhSHPL%yrDcwJ}Z{W4zK;hixEy+bC?Es-4EM>cWZ zp-H{FpkJzWA+@W%G7x?I5a?nPAXc&{#q&_!R%en_i1Z){$qw!SJ@l3YOpcjs{Y6HL zC1HLIK6WnXGj_Uw-VWrwXNRauHhc7KB4UwAF#75Q{BUP$8Jy0@rhr7;^q3kl{r2al zg&b(^7}v$A`Ba_r#)&&F8Am*|A~ch;Mj`w8tyUHo?bpkO67s}RMalQ$Eqpe=><#>{ zhL7BM0!XNQf_tObj}ips_W8fk6kl8qUj?ltBB9)E;Lbq`J57P|Q_yvosEoabeLtVK zX5Q4Q`d##+DxL6~fhrJZi}PH;_Y7&9WBAe|;1W&8Tu_PmjcB`xl$Tkj!~Dg&?#P&2 zh2ZAOd}Id8*u{fr(cX^ZX1G{cj*C6%uFzhlD4knqiL7Ef5!(<#)d9cAlJ|B#x*#1f zaU1UC+aF5gY8fxeu~dZCQ*^GQ$yH+GmU{e^XC~Y1FYr@%m1)4IYeiF(f`%rEEvRp{YX~O#C~XqNhdOl z6Y>P|f*GMaF^MyH;$=LCDQv7VmsT49<<9MJ=GiG8To?fBSmOVBlbzO`ul`ncGwQz6pXW&C%#^&8uFs_x#^iu^LC zs|kAszztbm|ihxhuTH1`bx*%$D4@Hj4MtnreHN;ly6XtODd#238@HwiE`4?bPy zejZ)u>$;|7i5%Mb4Wq10(Io}$R|iM_?2mz&>Vqq&t~YcFi!}=U=j1DPjWuty{V8h` z$rMqumc~zD+Q7asx^{n8MP?YsjjhGf_FL6;d3#Be5m%1nJND)~QyIHLO4xSf{>K|i z%9MLe)A6=(ikx*X0sYJ!>d5qUhNmf?95YfpXGfA(E$_i1Giz;q>z-Bm2hQd~k@kum z7P-Q6vrKQ>ZFzB&WGTK;A#WQ!jJqElOQcXdkEVJ?$&T^*-HGpa$K zs3iaJV;M($nYi}W zz8hiU1u8O*?)|givCv>x!*<_{pJ4aPec_-#c7{ z7Wy%}Jxfe=JHg}W-HKaM^7z-bKE{t{Qus3B!hv|=LXi##!v&(3ibWuOKN;Djhz2#> zoP~}U_setcW=7$jvT9X|Lb{2Foo5p_%6wnP2YqRQXY)C|L(1xEPq9PG465k_5B}FJJOPA=v=J9rHf7J2X zi)bo8TB*)sqjszJsSp@@@A-5Do|USI)K1E>`q}g4BoDVStK}tC%73oz`;tFYo=L-@ z3B&tbTJal($i1bLx>t!+Y_Qp(x+74~#3*GVzMOY5Po6GYcr9p54h2hu@6brk1pP-1 zLDqLBP;kLJ*eI)y3KGr^%aywPu6J~7{sM`OX!BbaEbLnjH3%Sa*e>2g4xj-Efr|?b 
z`CB7C*7Fx8>xlQ?OqMJO3{{#cc`pyrJEMzRmf0%K4`cMTZbU`#wBP@QqF3zHf`kz8Mewv6C@@Tz1%pl zu{FnOp|AFK45HyBu$xm1i#tE~|4lH!+N4$vF#!dN=1%s>o~(TiP6YhdrY>Dc+a`?( z!z-))Q|~zg{Yx+aqc{$=P*Qm*_#8%U5WrgfZB0f|`Tepk(yW#oMN{{$J>%hew8Q!S zd&yb&EsLR_b-J~MPLNQQUSlDLC z2@`1iibpdXn$_w|U@=dP$-1xwsYmWa(RyGanb^+_7k$>${+v}QQJMv&!f;XuKH_J9 zrkn<{gKFxJFr?E~%%aE`|FWc9Vmg{u&ssz)m_^}_&-^e=gP&Ic<5+j|7B-|MO#XxO5#fVY|@LjgPz4>D3x67JCM zLIuOwS*ZM;ZLmLHNoR`s>Ht=WJ+Khg^0@1)!sb4R5&38N#EB1BuCZ3oTca!*Q;p=v zl7>*OwSztG;qloKh2d7Lj2ttp-c6<$Mf1ZI6NP+AdE-=_=&d#~8ItoaQDHwYoHi8M z8KTtj1AbE01Iuo#G=(KlX~of<>1yAu>{!TOzu+V{I_PRy1* z=gVWaj%;7zbfT^lrgcZ{TNYYA)}C!p9+af&yvW$jeSSetFnct}{X95B4}{O<1;;>V+6eSyWyws_V5#aate+MO($pC@vVc$IayOG&0Syu zSQ^$SiD~wf+m4n?9+?q2a;4nb!?Z1$&wWG-K3Wk8CUvx*ZT<}-L0cV}2nRwt!OQ+Z zJF$Sw{~Bm!oLldTI|qHj4D(S5LALb>v4(H44t7*;oUimejN_bVrjmV{hC0*A#N)+X zG@8_W+LO2VXbCfrtHvE-NJu0}D?Et9u5QyU(kr}`8q{pwX5ME0+IoAwpuqKvx;6My zAokNZrB5(Ds&0g};|Jcy-L@qLB8&)b0Wh9wWcJC_l^DB^zw2WN`1IS8^8;spe}E2e z2vRliNEhEV|S7CT*{qMdnf<%LwF< z>OKf(oolvAkY(cl0zKIXhiq!E&t=k+7PR;5&=cjqk;~u1lTDQ+_kMj=4Rxu7Y2FTWGJuq zSjT=54b6UV?Dp;EN8Bd-%J^+IqP_I)RHq!hCH!2pwi2?IF*7`7#zKGTR2N*;wY^?M z{Qbr7>c_{u^Pi!C8Qb!75o}Jmb0F9pKP+tar2Bl!Fc^!Pa&n_-egAM8z-Az3>Pr2Dkue2NpR%LD3PpIRjh^$8|K ztw={Fy4~*A@3qCTiv~+5=!nnVC#PDbW*pv3k#)LHZD4fOlo{IdC%;q{zFeNqFzqYH zFEM0!?<7jqA-tP$KJ}CQMb#nL0Cc<>EA@{k>Bz(%;u2!1vt*>p#0vvsADMp0+>}0! z@f18tSle_?Q7o`eu>rY2z-JQ&zmB$Y+WGHW@FL5}UVqINpLX_R(DQ0QbsSQxkAEXS zFTY$CTa1>!U*W#5S?yV0>qv0MF``p?L%Frfo?B`YH&gPqQ#g44wCRRIF17Lk#^ewi zrc2nE>uEQb-Nv9;Y5@_3d6yQD5DW*iNcR4g#o$NHh)Iy%80~hTzI#-D=6Hh zJe@G3$6Q3HAEyxVq7xic{mY0?SmwL~fAYyvvS5#lSF4W)4F)GGc+)Iv1HHdj6u~sA$~z)qO@JMn2SmlM z`PC4C{Ek$~-$jVAlXsa6yd77^ILtR@wmM{9q&$|gKo%rqhg4VhYm$`+ol z%`UH4F{__1PJ*BQgx)(K{kfkb5PJ!=T5%Yvg7~~zxGtOC%Kc>3lPLOe)*at;$sYmG z6E*yhCIUKd7`T9A9xHWtpzkkTAF(fpMRasOhEtNca5o6cpSu~y>E%NASwd3le)YP3v036nk zuac0PQ@2K@;KMD_8wmH7VBX0%zDHUAkfY3_lL20`J4?$G-&M7H=RH`%5nr%F;sO^C)h7EYIi(OkTRKl)|1tUeQop7CUC*koQAX@y*pYCmUU`|# zUi+J4>M6m+1lvyvKAWGKe`ZecM-SgfVMdTzEBF>8)?>=3)*5&z>j|U+RjRjXCWhIN zRx2=?OrQ!)UZPGL8VYd-8n@Luv8M;32!Vi6XN^Oog$UyTQpYN68kh^?=Zn$!JzRQG z?fZh%`_pZLtx#Wb$TT_sHG;V}n<8GF+q-$(7MmmaAw)SbN*gPSyG4)-PtMdXKi~T+ zx|=79lVg%vyo+m^EU+W8s|@YkrXA@fi_Kn6#$=Dcqvxo<<}B8RhqAI@{MUCSAaLn- zeWwYMl?*LlnZ{K|#0*Jew~subQn!RMx4wXQ_wwg4UGyg?G>QXLgP5;IhLuamobv=5 z5sAC8I*9foC%v;u^c;QsyHvLs!s+!{Vn;R6qu*S&z;Hhl(Q@H_vn=f%JA0v(8jUS4 zCey|rlEx;uH7ri6w$W=J>aE_dz$-u5a26)O1pOJzcyKr3+u}F86qTSaK`eHGqpFpJ z^=S~1zGk@A-J4Oho|c$%8%B1lR5G`GJW4c5g(f^Xho^_PDf)BW^;S>L!=2WsdO5E8 zsMb((k_?9GzJ}abUI&i7^(>y@!LYn(zK1;e+4SvVGL_JUGT*8#!#m*#fcW@A`Y09Z zS-P~cYi$D{_EBtGMK36K9NB1G{O2xU^DF}~&rypqh>8NeA3a+Q9{94n$N8a!g8k;A zC<0$PHW6goRKNmUeNFoMc{Lk)1^wq39A|y~9TGqpN;}V=>QAL$Ww^X?G~hcI%u&Xl zS?_Rg8c5^URhq$_@v|F&h%h9M1Y?QE2oP)Tk36x#@lAS`e= zu=GxvbPB-DryU6q=) z*3Ru(hmJQdJGi2p9!8b@<}x#_9)sufDo%x>CZ-)_fD_lxDO>n*j(?$_jI@aX%4^qV zJyhV%wDpi%{%y|B9rVV<(=N78G+PJ*iEEg}HAvX(R_c|0WV5B!_aV?l+COE_HNiPx zRnlp21@@fjmvTG{K z+D{!n<&z6FCAq-8Hda4cw$@mr_sDkYyGsn*I9GHLvz%QJ=JD!@z?_gaZw;72?;cm!;iOR!c#Yp%Y}c_!Po}E{otBWU=r{|dikwPlrGIYw!rJQh-PyfrmM+kCgvjqk4Bkdl z>xFF#&S%pCh0bx*G{7;mc)tZquC;uiE3P3I4Mkt7=hkc8Cjkc%R@dkiEymq_=GmP2 z(MsM4kB0xiI&mtt(kYr{DY4$y|4Ty7+z9MVKswbB;E>$lJfs z{;MSbJ>^pNw{meU5ZcyE!f`>BG{W|h2pYA&mV$fszmXfC~-j;8P~ zJdg=(ok7JKQkp22b>~}O9i0K zE{itJl*QK(-$=#4q~3WEJ?u5pz~n0?x{v7EhO;gL#$&ZJYF*j1=pWhn9OcL*&MCgW zTgc3iB3u8tg|f`60s-+n(K^&oG^e4JsQLAvmo5SV6zA~WZM>+Z;g9)w24!+rC9|$6 z;P|=5&fcOoR}0SYp5B8%X9E8crSAf@gBSX8d|Y=zR1mF2)wZn&V8!8Q^>~etR`R8HuDCNT1VfD0Fyjh;?*XL!01vrEqfH7{uKn5`4JpK}tmFm0_^1f3 zXfdALkIpmtYyuZgDn(25?wG8Gw)aDM-~}Foq@%#1W3_ZlzRk8aWvG-JMMWW>qkx4C 
zeRjHlhff38e(-NY;|2|Kqy(5C(<963)OOY=r&Wh%JSs|K#D}O30VN2o)&@N8&oJue zYwH}X3WPC#_>HWv{lo}+lUQ3JxdYv$d4#sH1)V_qpesXTCMJK6Di0((KIO|2;LC&A zW)gC4-u5P?DhJ%RHmXkXD+=F`vpnaxwFsRp5=AQ>vR|4{Q$NSH^3gRV?>KdSs2s; z)i@8V806skHiW~Uki<|Skp?}&zy;!zw6|BON|$oj3}93+@#ip>f}=#X19Lu8+i;7; zV;kK~yTteJW;_)FNm1NhNX(s!Z#7LkgU?RJ?h`v%VYdsmD{zx2g@9Qr@z)O@onAc@ zE)SGDFTUi?%<#(GyXwsuhtrm2?#`$s$0=F2(Uh3B2~S4A!iCb9;6C{gFtT2|90K@r zbk%Pp+!9Y87aV%|+tTUXh$aReoB+KVKlaU9pstK<#w;Bvnt{?PdM_o-Mye-;tXse7 z@W$1jFI_4XV&kFh#<23==n+Pizou-ZeyiLdQu^Az4^G)+^|r71w8z|x!4N86yBOl_ zg6=jY;3d$3lmDDEe6sor44Cxw^bXU{`wo1k+fG#rU6(nUe~r1b3SsqFSo{2Z1;#hy+OQX=n;L+*LaD#MyL&d?mG3BXg7-8F2tN-&wNC(-V5s2 z3}TptcEN<{3MTNTrRH@F^HvyLdTM}*+ZA^i@LET=5*t^@WN|l|*Y{9U6DhtBhjpdq)k*zl z>bTn7bq3f81a6aoFD-$}7U`cNXl@|5Yf{SHWNI5fOI0_%p=xARqwZ;iO`TR>JMp-I zKb9BS{N^`7g0Zjvl)FDw8P>?JcInavbt5B%;e^$!wQXMVg{xetZ5C9K2JFN%?ARQ= zVx!@fuT|g9wl3Usl{^q?e(4LQGe8N1=IZlUHS`o?^iZZ?MGTW{w2-I1+2aM}hCVu>$=*EkD&;t&@R#HAv>;&!!OMJ-*S}0+%>AoC#20 zzfQ%*32sRo6d=>(xSDauK|7r6+ddO6>8Li$TDLsC+>Q>E&_vv|`Q5N3f3izaK*p*2 zy<(U4%FP+yv8jBFeNy=yBiY@~p3S zi97_N*NcurBP5GR4HsB5=GMk6OO&L7<)UnhOo&KlBUv1@oF_;ZB%~iLkWb4Yww{NG zSo0JY(YF5jh9n`72n3<$c&tM#;D#d>hLPCQ@$hw;n58}U(QstP?nY47!=$>SVg8DF zOJa2l591nwDI&QOevP{lF;EsH^R>M2OUFPzhh#RV6ZpnugtU|oML=dL`yOCtH-_b% zY@-WfsogB9yvb%AEh;*GCe%1HB1T8!VP!;fKXXM+S5l;+XrLzMpibyYnf-|2@0%9m zw5!MWY|eoX$%ZkW_1<+4VH37#Z@2nCbC1*)+SC`q#rH3!2c7$lq#{Ped@w0}>21Iz zZ8WZoguosf6d-iYDs2h^VrKd;&K;T z+apgDBFmFt_m+P{t3sHVnS+Yc5m*Bz;{4Ws45Qs?9s;`}`Q)S`DF1`)e_-BZYQ1pM zVr>7>-s^zV)8^b)6{_yi%BvB*|E^d>kC1k+ejztvFP2?U9dhgCEzYWbFOE8{XMON423eU=Z{OyHF#WA@dYkzU%cbOaT1*=^KQ+*5QW0F^Z zZO7@yPIhVKvJm4iQ;|0><|X{v!>ZzR3~=y-PO^2=*2s`|qkUs6<|2QwytD88_7!N5 ztvMef!c_DVosP1st(nbs%c$k`2{%Mqk7XJbxnVVHCs!|$?C0NvAw{7Tv6fYA$b-eG zp7x|+djIEJ$*CNw70eXS^)7+)tMO~fg|E=Do^lf&<~nW^FDDMGOV0N%D2H~HOC)kb z*MEOHruIusI_nAaJ$o++i-d^|C@wxz%%$g3mWJ6t9x7RaC+hc}O%+*l2D3xfKs42= zNj4C?8hT$1E+MDaHKvs73f;4j9%aPdqH4G0yMs%MKk6N?&1ZdMjTQuc^@j6AVlDVG z`SY*V-6uv6_P6W~mUZ;>ZvtdJne5q|s9Ki?2MH&;cfF0FkbgOka}R1o5hXpC3B)a9 z6fY}K>=f!mNB@Rt#b$Gz?v-OTy@xq$oDt?de+y6IuL2{qE8NKAs+!Y%fIh^pzv4@O zwB77`-d@Bsmshgjba~VO=0L-7;Bg)i=$PCL!G^i?aqS;wJH`k`FG#5owI$9oEOVK& z3!5u>3Ix`F%D9R*pm+EC>M6DEZM`8{`Y{9M$TYp3Mk*`Y2IBj((p0hkz&6co{ zd&*?Ak!2x}0f(va3Q2dV`ZJ2RgKm^OdKn+W+@@PPYN}w(yWj(D% zB_X=s^8YNc@oac_X;#^s-djl}@JvUv`PIFbtQoZ^wa+Km4E2Otdm;47gkbSu2(F%F z)5CJW{mM*G1y)qe;v%@{H4Y!m#4H8Z?^u07D3YgfT09Y*PEZNzJo~}Xqp1L)6zsH= z1Xo-nBEW{_C*AW z|FQ;q_Xmd<i!@jmBVNrb7 z;G_SPf<0^bcI)5dtUM#DH8Z6T)Qx_NHO8H{ne(T$K|77gbD+IP=Uf=+yWZ`kcS^WMlYd)GHbq(#C-+#P`H$X_`svSC|-E*~E?*AZ_Ji$%r8wrr- zUUYJG5}vGY?6SMOl0p7o+X~V)W#;6r_h6ICL;^Wxx=fyfkDulq6+TM1mH!mjz3;MG zeb(%e5M93^&puVY#VEsI-M3zvQK-OS3jvpskvj>HsPoQ|VRlWOS=)t_hKJH{|ASK| z{DV{4rCJdFEhOI^Yje8qii1MkKhY3++Fek72TPksBgA)ePLt(K=|H#QA5c}jZuV47 zsI~J{V>9(s+srt$4;{0#|1UaO{do^`ytedYp9sIvR!TTjAd=uf0S>#-0r#eq<7Oc3 z;-pOm>WdqRk@$cA15YMvwdOzFvyU3OewJ(ZmUvJG06&!)RdV6eXGyIO0fSxlymg8>OW-0!kg)G zWLF=aaKz#{`NwKmCL5I5!g(qUL&on}K^UdQQ#I0dS`a^hp1>)PdF45oxU_FSjW2*@ zM7@;p*1@i3O??1Ng@NHA1<^MgH|XT_Gv1E>--K$!O>FHDlTb~iGJqfiB+=t8P?$V0 zFhWq&d$QzeRRBSSRTtP*T;k!=tB14zfZ_>?1O4PTR2Q{jN`x>WaHyP`t zbl!C+6qv0{jx{(}*VXp8Y2QHXzbrWV!vN%lDGqL$ce|{7V|DB5Kq*4uKN%1rKLTmg zQd(7&#E6P`U9|p6jZmmpX(Y$GVUb~lk~cyl>r-CO92w*)osk&_OUpZ58CsPP_aamG8s$5B5pGfvF!4qn)j>HBIva&eqwhQ*^v_F?CW=Yf%S{g9dq(YFB{XgS>$d)yd_snDreXYXb&zsUeA{~?HG zx`jEM&XTDayK)o{cw~S~O2%=#;;Gy}I9yYg1|SzAdF>d0njEMPz-^M$1JHuQ`wr-I zjDmrIu|XsQ^#Rnt_Sh--UM9rgOTSBb`=$+wkdO_AudbM0KGViPTMMaQB3qk&u<*GY za-?w2-o!@Q+Czd`RJNDpL0Z2_hVfJ%8y9SOEls3Q31${KIi>Kd`{fhoM%?8f2Oayh zFV)GyjFGe>ybCbWqC}03c~|Pqv~ZXU+3QF1xb&q;hie0c;j0j-wo(jKEnwT+k#}rw 
zsh>I4*ko)>cgNkx@cD(Q=JZ40LUIa<1xJShh>aQI>58X_20nSiY>1ved2fn>-94P$ zlEK>{Fq`ZR04U&1UPb^^@TSn;pc;M_!GY|5c_mIRH3Yze7#@R%U`7;q^!t>rZ6eGH zksgYKwHDzgab!VBHv=*J`7gsfs-$pSj{-OM4VV3a5KWWjIAe$a_1)3MMZBosb5l6K zZ>#7$CK7#-IEw?eF_$HMDN)U4eBr`1F9EHn?c)gYBew^!`k{=V!$_sd_oDLTwgD7aDtT+Y`2F29oF*nTHV(Pz1M^PPb@iuD1=Kt(?B*g_rQBjpoFy5k3!H8P{T z z0oAV0NdIAD(hPtL>5iJ13}FrU4vvrvT9-2hA4yKP1`suc+k?)96AhHaZwJXaMY6X( zfFya<0)Ug;s1L|MFk$gRD5aZMZM#AG=T}UwNCqG_g@gg-z>`Ol0BTTz;%syAP503N z3UJ_<^%@guJN6MjZ2K4)Ifp5nCiv>tB4{d9Y-)GI;mAJGuW&C3*Sd#WSC zzopSvsMGC|HstxT5ojEB8y?t`KbK*VIz3MW#d0Y)BS*)AKw!T6#@}DD?;V`Ot1n-b zb)yZZ-{L+yKkU9&YH;%#8T+u|TVQZo=B!*TQFh#HUohgNk0Oxtl1<~x3e))*cn1It ziQ^{v(@9TiVr1;mjt^zLOrR8-xOds!Cp=&tPVMZUx_{+; z**NnUs|XI7z}L<;0~vA}^)M&u(hStnPWi04~2mmuks7Qq)UrrHc0& z5^g(B2GbGA#b^(t69|8xBE?Q(EBBC~*@zBcWnA+d-A5<>lcVDKw)F)Pr1y^jP3>_2 zQgEQf8puS2_>3R6D{4`eeH1oqEXnbdd5c)>7A#uu~6ug zoLdxQb;7^*)f_8G7DLUit)Q_`_O~TmrXnycm|`ryy8z88bTxR6sukh-uR>AErV^q4 zG|S|}2TcQ0Ux?q2Sfj;l}_*SH_RY_pPH=>7M zooLy%g8g>TqogQ|P#+=Fh*n6mttpr9meIZVW9e@)hH21ePz3Du2R%gPfO82INDE;6 zuX370tGBUbRaFxAp`Q@-1HYDs9t@wUGo|HV!-9r`a+HN4a|VUo120S(p-;ugk^ z(P#`*WD~Dy#xLgLn(3Mpxdw7kJy`PcwHWS^G$zPW#gMmH-3cqI{W=s;_GnbI+4A!Q z;tSCcSN5m~bbc8s)8&4w%GmNP9RUX>!<^&>5AN5J5?fWxWI78W7{Rqqf2l6}Dq6jC z7a<^4$zR>43+)Pl#HDc~flV*1?)yWrBwR(YtJNwog&gUs9j5Q=7KR>k!NT!FKJAs5 z26$P>9_{(h-4XrG0&4}INoKxZZ)>JW?n@snJ0lxbcg(q7=L@c`&td~W(Cr;xM}wZM z7X@HRc8ml>C+|mqD#$3n8IoyD@U=APgmutHmE0Wx(tZ3%07Q1Cx(#r=?e?tL&{*Ht zZA{f4V(>8sCLv%qsXk=iV-EXbM&%&_N)SabR$*o*Vc8&szxy)9TqpCV0*L-m72m;` zled#WME)iNw3CrQS4;-!4vJ*W2!I+8oHtc=;0qWSv>zB4Du}MDC8MFfrQuIgeKYHS zj9@&q$D#;Y(Ax2@_+cF$&DCz%RoY0fL>yHYWjRW6^TaSy|rCPfy4Dgyybp@3z<1ua8@wA0O|hb3=!W z(N+AO@9)o0)SoXe=VQArZ*LE4uS;89eEc6TFW1k{ugO!FMPp-YkGk?b19JR7I$j^P zbG5fMvwmnv&keLqnT`#-{sQ(Az-cQHUwn*fUWit{*pB7mV`4c{XVi^$O)Zd{ZU(F-3As7|(tM!vS^6|@Be&y;!f3g~4qO|McS z8a=`pVm!bZ_JcOZM>xo=+kn8H6m=za$975)?&{*&V(l!F4r0j_V5OXYmPiq_$;@%M zx+BEbaM^6*8Mwc6+(@EW8PE*jvi|A9s`D?YJgg4E2NoyOo-%}gs!D#|vN@S{79vk9 z&F4J-;bztS75w``z9X-2k;mNp|K5?Gozk1hb`&fU+mzhHU|Eg8rmHWK9{^vN8ZCQ% z>n9EyRnplxtjpMO^i2SCgS9s_1FZdAZgvng?Sd*e`&1(IHDLpd!&Z3)V^YIncg!=x zdcGJV=87|?d>u%gD8bQE7^K4;W5qR%-KoZGF2};oZ!YNH@neylqhp}jt)Bojt*IM8 z9mz6(AyQ|s^KrCd4DO(S)X<=&mta^A6t$>sjDhV69fLUYOKl>I{Q-%?IsGH2^S^{S^SHUR-#u_mOB_?J{hG%|KVjl-0v85Z`qv{G_IZ?jB;vP-5b+x)SnXrPL*gaI< z0hfgn>^>Zh6{4wjJ&)`UExlWs++A!AzZo_p5Ye4$24WmSYJw=NmJt07z2qcrURO`M zwDpB@VDIA8YPA39IHO`+@v*6~b+mYLS#irYT9MJ$G|5lL*SKyNs>1VGeJ7xjP{Tdg zOb1amU=WB=7LH7tP~(1BEreU<0%`*P(>_c~lI;p0T$MC2!?~3`wjGUJXu~ga;ZV7> zF5lcx>u9TScQIUw!fz{NVF>SrXW>~QRGAj}r=~LWLJ!6qRqGnp4`f*!a#FL}m+V(j z#k!^CS(1}}C+^u<2@7I$eQ$7IGW(G;pmNFWOEs$2aXL>{6&*3A0VH4$uW3E96~wiG z$hpyhe2ET8DAb+{Zo!P%|V07;r**~m*XGWPs$FRfsN-v#$} zgdnLLF(?BXMvNJN1n9rB3Yt0R3ob1t-Sc$^5Se?lldYL8l^K(;2j(TdkJEUgqKq>@jEnn5{HvtB}$p_0UC?Xoedz{ZBD*W~^uc zw$*127Bn~~TVv{}u_GWTp7rX%*&ra8*M;Z#gCun4NKX*AjZ;I-PvHo@o{L7`0dedg zBw*qiHZ2d0eeCOc8x$ALLRn-7UCfK ze9!V;L%MGPBAuZg9OM`_XRF>ZFc{@w^8+1YBWkjVH=}?NH)6rJhMP$RX7Hq2jqraj z@Xv?r(1vy6Y2=%9m%q(8##+_Cj`-GaF|or8{_cTwHdAyi%w8g^$r)UBdp*l+zXY zX7$%**BkqR@g3y;FrQ80!oFZ16=kml8W*~75^?Qa42ZeCnEmnpGM7~L``;l*2!qB! z{Z9o)3Nv`ny-ep|GjbGiC6=8h%)n4h7P2ou2W5bZDut-s;x$Qv+ug}Z5c4g(aojN^8pv6kgBI=s6*~Qv4K==&+|6wNE8d4?P6tp7;R8_A*5TbCRqKqb z$+56gs|Hf6d@rU=gB6|gdJjVxOkG?UHAA+sXW+37&5&QkYw&?uNn6!0!t!YaF7yNX zD0A8DU9r67)_0pH4ScI${5{rEY8?#DoojhWPf;5m4t6x7YC^n?k-3i#EWR56^>YeX z#WT*;F@Wd9`C3)Zb5;(EKgEJERkTSb+*luWF$#;JX0IO!n4ETz(Ze(fF_EG zlzpnzvXMuu0tVG?-xe#E$UQ$ilRP#?RcYSj_e`K2yr^UrqBG#3j=@ZSbk_2JDw?Qa8E7^bzvYb>xY!@{kR7-Jz-Kv z(iQf+e7~%@Ig?>Ja%R8jcj;{|VXJMYLTk8yzMlvwz0t_HQ*&I}C80JUZKcOY&gR5! 
z0uJ01xX-|^<_M{#QsoBU@EN(k^JMI9+7m2f{PM(qKU9PL?sF}@bf0T^HYG;f&!%gq ztS=kG%Oeg&&-d(Kk5h#8K*W2n`+H5`XlK zn}ti6KOx0v1EghPi1mzEA%8k!V5)GuwRJ~rOWzs%#psvnm6eg5(ecg$xZb$T(U%4k zb@N@oiTu#v)jR7~>mL+XR0pk=dE!#gJ<|6EcV|#4HBb5SOx+3F0tRzYjwg+JCFpp~ zW8r|kjFI$BoEsfco@ikOmo&G6o8;rcHwXd9uw{vpE=6vncVCO|hKGEa*2#%3LmmtW z+>+pe#U{c@VrI3_&LepmXLPQ;rO<+y(cmPap#}{UWGKbHWR~;ywu&inNZwTLx#E#C zJ{f>Z3)|0%Y(sxPWh>GghU3v63)IJW)Yk*jCg9WEaSpEpWYEwKYTo@6hOql;>?ufx z<*91P3mva#B4c?>00fy63*^=GZ9Sz$=HUmV8KL7<`RuM_n_5X(>aa$3%njPI5_Kjq zj6|=_Hat8DrT>+FHWf$1jx*oEXB{6FGxzs2vM2d=8(bTq-O5XR{;SzRYkKy6cb)@i z9e(vBpR^Xo@TI6R#G9$juXDy8QCna7%ND^FtTpF|F#j@JkV>q&PsBK(k-@k>rrg&4 zwalB{9>O1bBTqgQoRCIXFIm-xSq_ZCTrYy6zimP;D||wT3|G725YGVbcJ6Ml(91CO z%t{q<>qH!LyOtWrNmo zyG-@hy&D1VdIqI%7X7grhLqWMV4DFAU&#JcXS}fp?h54DEzhQGn)OD&q6pvse>zj7 zF=6*=zB5ft_ezzr4`6p*W)QlFd&HApcgqFaa4 zu2;TLwQr{lwBzVk{NW5eJ7x@|CO)1{%Jf;Wa#Gj~^r}&}c7h8!WKA&xMGPn>?d4&9 zEvXI#fBBB0(|Nw}XG^7=jLZ4(+sTjW;+y!4#H_ROHzSjMN*-GN74_$l$x%6H74O=* z|LxjZmG1aKe*m|8_jl&mwaiNzSxg-&$#IA=XfL1FtzA6<$S@okClu5aDb6^YR`Qh~ zFrpoISe%bEPeLQGCon<-0@sY*$cTyLi`78`)HCe;ZvW<;;Be`3YJhZoV2AZyUJEpC z0~CudH6z6^C6lqC_P`DRSKmGT!}aXt4er=KCZ`-nm%W#buVpzAg@w7(GBHL@Z04GeFkxhmW*Kg{pCd(I;Kw!6Ciu@EL>xT^N;(td2z8br?p}bODfxP0eBwExQ=?Jih5P}toBQ-T$ z6*#fj`-jj`L|yJVZ$t-TAOA*;CnAOT?GYktqcg{{T!m=C%sl%ay$L#gkZ=Q0Gx$R$ ziJNa>fERDRJU~4;qvpyI&2_gspY;4Y`F(4@8zM@%X^}J2$CB9S^jMwAv-&JDGvs(c zeAbv#*H^iZc)O9IC5)U(BLj<|L++|a1P7D(d6D!Tdl$~Z=gvE1LJunZsse)%y`+*G zKLxjr;&BmWiW-_P28kmL*Sg@xjG`&u@INtQfcDJ(nt^RJ?aJHb@Vd&)rM4`;V) zG8Dhbj*{~_Zf7HWYBe8bcp&{!1xZ(hM+IEECHKpA0zjpmm3W4e#j<5`11lKEp!6VT#j>iw`?;pCwyw8q z1n%Fq@HY7S(oF|~pKL~dX);RK|BSn{iLoiDJ$%CA(xPm&|J%BxhJ27Zlo^QrJwss9 z*-&E1w>%I7QjgvC{49=x6#W+VyD(k<_(~5m9>TIU*caSJIlC!ZJuU+CMj+#^j{HwB zE6o_`$rk$pE58kwS?HM`106+t;ql@It1K&RSI2ajYrIz2g~Y!hC z)7W#4<|9)rjSMA*zFDxHKM9q!Jjm{o-Hv`WW%8#sKB0lTekKrN@LUfViVrqNTY~wR z+T|6Vxgl7e-HJL_Tbs#t&<0xHl#Z_zp%-^dk@ni!R;3cAs0X}b5s8!My_o!f`#KqP zrH#OB&_2_4g-bmnxS4_e*!jjIIbL@wYjtYXcR2N#&4oS~3uDbHlddj9&al)ncgDcX zS0$=FGHTz3Q=cfk(u;U;MqKe#Gia#$F)MK4D0hSDE&#~N9}x0fpDL1q z(0c%@E=DsY;rts{E}hg6lA@9R88rn5cF+~AsqHLWZ`@@&C~Xku(DRYqbZ-I`wIj?U z#j@RXSLKZq>=&nNPL`vBv}%x^zPr(KPlrqRn7-_Usi)olE9=^$q9~*IH#6U&Zh(uj z5X&w}C>nxjV&(%EF{~s7q(V?j^HnDS(drNK)x+{pW?|mcGKk8KY33_UQ+)7}WnopU zqZD8zuapHJC>$UCW_D`XWjJSN@3+7EyZ3(Ioo~-4>4`BP zy_>lysPe+WcYmFGW8P=Bqn)j9gj7g|ed{;1#?D!iA2j>XqIbsHzYH>)?_SfiwVksh zkdP<(k$pYCNlnMNw>Dwvw*xBhi}k=D5(;>dwugqX7Z=Z4xFnJ;6>z;VrzK$AAkJ8- zma*RW(a?FM^`s9Ac(Czu*!G>tsCQ25#Tf3Z>vHV&q_P4YZuGdGA9xcsLcxs(ICnW> zd*{40K%Y~o(@^+vt}`yp%E!GHjBV(KOz$HjDg8KnI=nDGxdT4w;FBv~sa`?tlTmG% zRU_^JHL@G(mO(*v$f$4D?)~I0P$Rmb2KQ4?<`Y1bHf<`@z^8}@+%3K^04M{kJpu2- zr)K;nqawR;Z4OhoEHYPU=5)_GShQolT)nS~R8T%AfwDBEUcCg=KiyExQ3@(WMtwBq zHPeCRj3pq#7ZudtQ{0a#PjQ2Eqmj{-Q@p>aN<2Pk3_ShJhcT#jIo@t%6j8{7j6Y01 zcj+#c)e{fK@a+O%tnmg_z9{(7vO<&z7nO5^=-)wmG@?J{m-9fXF2u;7PRh*|^yFE@ z^`_Z%nOl^?88R-!g{nj&(1|PDK>jUw$t;48g&qRsWedKP3spK|2U(66ohjn})MDp> zrpKLXa>-zVoaXp=FE)Z!Hm^p3vT`&@}YUfJVyL6nzqa5S7WKE zm=83jeX!&8-B`MVW85R&I)V0=0J(HBwae#`Q|a+iS)(X!`ZGwJkZ5;}1m+?^3;$MFHy{ZRe&-Qayny^x!SwloURa?hdX_^|uKzBN!FDsN95&7C?lr7|ddx%a>QUfEJaay6-razGzd)TbXO>len52idB=)bb>188&i}m^C7l6n z!$8_7pYy{V^v$3r^u6fFP>-;yB1Agz0xsN>g*p)bQghcmzwVdq+L!&44WZ3e7lv~VA#QDbN`_mgs;R#nikVIUiJT=9!Z4s>o7 z&|Y?~b2LOK@Lll=Jc>;69=zij2Qm?Y>HkiMZX5dH$*K)>M`$VV8r+KML`AdqCij*O zP7!Gs@&4r7&WA`76GTe-kcZIA?L1OC|DSYJmgXj^pPlc(rJ(drz?46}j_`%mZDH0p=6PQI(RTGYe!bJJ+BZCqE#JmO&MC5qS{(;3zmCNQzf6g6FfcH3u(zfOLkK7^FlfxC&#>sgfK3ZWMS58qb-E%bRO~a%@{Kk8$l_>Z=Fo)RCpg zB?ZN0DL0pAO{ND$Ynt`5(=Of=18jFL{$;;vNoI;QF7rfZQ$+J7_g5&6lu`4%4eO^DB+NIPA*CHbOr$M}5Km+8)0yDh%- 
z>W++BNr^8@d?<6|B?1xliR@^Tr022#;sYs%@uGNIA6lhc|$b3 z>&y=_cqHq0J~Txc9IvF6sy`dZ7`*+|W$P6GOQ=n7k^DsN?!jc%$CC&taYT(F*2oUU z4tApZDmC3RPhG2NAyX$%aP=LhCTiQwYTIk!szQu;XV+%`N~gt1@}f~ zR}R3zEQ$Ci5&rMjEdOtpJhpc`y=BLGhXQCk^CD1?r+Fa{$(t|&gx7G8zKOKVmV?H?+D1Vw8;?1lX_ ztc?e!^fN(>_BPRt4J|vFyk6;q^2CUYjvn)o63c&QsPgSzM(`_%aEoHMEI!96mGBG- z%->y<7o`E_vY)MK;WIo$o&F$v|K5-ksvtbLkb-*Cl60&CO-|bHC$NTT1`Ezv)_KpJ>9<**57IY#jci9q~NXX4he0Y zPR|bVP$knt#aP|7CvoGPmOL8lOxEZ&MZJOVq$RRL;M8M7t3$%WUcH&hz8?+CNzQP1 zfWBiPA6CF;>$ecxST0mX2p9=4Zu}Im`otxrc@KejO5ALt6nQg+%<{~)&Y>MCkdmgm zwhehB|N6tks^w=b^Pp4(uJ&pv0+MtB6HPtK_RRQ(bI+(V>u-q-Kdn@??TUozrJgum zuj1=V^*#vC-B3aVmg8HuBu96%WpY_k1NJaC!J1AS6_T>lOam)cIKk7EBlt;?&`GyfUU|j6?itub2+jI9!N0oz03mlFzh`<_ZY5Yw@c#vaI0fQQW zo1UkHNb&_x1>Tts{9Q=p3UO*1Y~LOr)t8`WSPiw&j#|Jk=#a`IoOKzz`duA&dSL5I zDuY^BIKt8}-!{mc(jIQ~F^bqO(gVG*Jy3nrS+Np-+fd*$B2*11%C;ws%Rf9*Wrz{o zPRNfLI(U8d0HR@KhWfzG{u{^+NJ>@@yb0*kX*6AgM{!?|uT`>J{_T%S(*GwsmtSFP zn6j&zvKM(Kvyo$6r$9xT0DOs7I?e5eP>;pbAG^to0b;%VO@5DSzFL!f>W} z=U%=#4BV%r&p$CV>+iCKxP7zcF{%PC7%K9lG98w4cdZV*+k%)OMoZ7`0oX~i>JjTj;65!pwl!Xxzd4P)eKjX%_Fg{Wl3c6C^niMJRiQ6l)^I(HP71xfvg zH%`G(XPAPS&HMdz+clJS*La@ekm;U|8t&kET55B;h&CFxp0o^5U-ICR8Q8~L@ovLd z7P4^;%2THl37}4qCEH~YfuHijrNRh0H4?SSZ;y1IE#H>CJFTcQ4S@3~;~C8Qb+jiR zG8o+kb*VM2+q?WC^Sh5ZrqhTF`DG%cNhiyDuS$C=sNrDebSUxWXV%koorI6<7ja#2 znymX@bd>wgH=i7vHweelo(|`E-E=K0MVm|(l$|&VOd)VH?dD{u%EhqfhRMWMP>4EgZq6j`N?U-Etp|-QgSz#MR z;~Q*&mU(suUisFLlAw)`vcE8Kw?mh^*cSq@I0Y=|>xahnMck_n1{ghzKLScfbGdYA;^Amh|u>BA@ z{Xlu$uCQm9x(siV!S8*Ml12ozWX?1xg=ST`a=M3NN?nqwp&sH4y09!$nU#|t+}g%u zvt@UU=$XwNCmvOUBt~Nz=`*Jsthm)il2gQg&j%>oa_)I5rqZ|-s<0)sHXHBI6yc-m z#eQ|t3bNKJ%96Ft?rJc$S{em??kqyz%tI1$v_e-`N)573#Tw&b8%sS&owE6O9v=TG z`sts89HGHMa##}r3``L!sg?o1-W9bK9EhJr=w(Ba1d4Ycv(-N)xU&kwBLv@ryN@^4 zEhTf+tv3-8ZY>K%B&=wZNhb>?8BxXs%N@A3s&Rf5D!h}GfRFZvc{`78{Zi7GwK)-1 z^r5l9xQ##9!LuX;a;add8lRfJX11(8fjJP0UiYY3e| ztl8P~-MISVBu#cwxUG4sij8+3qRLh@~rWj91$g>E0RlueOp?UaiTPH?s@TUt{vL{KQb;LS-MRlgnUZ8;#$+$Q^euJ& zz8^Ut9eCvur0d}3-p2vY=jN}jI^~$;tlR?;XuG!q(8|RPMK}RBgPb{g4Mf3a+vw|g zUZ5{H@u3RqeJYyNc&u2O?OJB~Kf8HkE?zD}T{(`wgr5HZ=W0`yxhQFaiM z?^1)@tUIJTX-Fi8+uQ}3S6}oP8&F6EJ^S<9&sNdf!EDFTJRkM=I>y7s*ETk>9qUfq z7w}Ht>+p~o!WLt6dv`BtFO9ZNm!ZIN6vxn8FokS1wJx9J!H^P`NXbN9`n{44m9zM; zOJZfo+ohit0^epQLy>pNwq$~k)I!!|`0E4neNQ_9rTlLZufB}kujoc|P5XLf_FWh7 zC4OX?#$w7vV_$^cdb41B%kNBBqV^@Z#&ql+_!B4e!C{>`U$c5 zW1Y%%e>9x>zKV%0P&+G|mxDw3C1SlOhQJgKv1_U>2@I*8NKq-xfc>Iu8lok1Ot0}drw?%Z^@uhhh8=`Nx=Lf9Jw5m7M$LN>GGh2 z5$6JYP@%s7Ipq}Q-gi*W+{t56CvpEiS5U2}jqEY!R!`QqBgkH7qWk4&Vp`jfy?MWw z=jNeS!m{QHc~to0Kr&a5A#Ag8=yW&WEq_;)c|P5!{ZuFz;xtw2=mxg@;w-Gc4{DOT zJqnzF)U6Hw-i4Asj6=quDznBoG7dai$dqyxse62{OTH(Hq~5-cc^sndPo{>8gRn{0 z@XCvFYeWWxSFL^O!jW0R!d$Y0i?jck$PQIJ$bb=5n{7hHN03Ho3MUs^k(db86_8Jx zw84Q>2#pT`&rW&Y@c53A?0=<^=I8m=egI{+9J{db#uP&87i&P;PZ6{gE=}W1l;DmSP;>_t~3X>Q`-v(GQ1@ zamu}0`-l6=Ng+J?`tX}*Xarf?KU;Vx`~S$%S@^Ghfw7RY$8x=};puObswu!N%Y-Ua z6$h}AkV)V&yhHOx)m&<){7OG=!$hKUyQ*|mnc1>(4UqQGTQs))+SdCKs57e9s*XA! zb9rJuYt{FQ`(Ah^(%Fs63EC+ljqByEO+n73u!H<6W%53^Y^{j=((uJ2Z(ZO`LgVE+&s+nVZ zhkWTn(HE?_N(k_RD=p$u$^5y~cG$)oIoB^&>R&=S@C&Pnf9pB6Md8YDLIh*~*f2CD zgE3~^E2svC8dz9|xpfUSJ^J0H}ic( z8|DgIZ8<+|*kr>T%a{d)6@oU|n_RuN)>kPt^*4@!oTBKRdkw5CcEQPc_j z%sU&`kIQ+dZ6rv&vLapiArafNdp2qEYmEHmm;$I!127~a0Bk5l8%!cR`NKCVUfin} zGbKwl+GADovVtru?(MXcHftwB*gH<>(%{-+omo&nJA}K8_{2I6Gz$1kolm1-|0$JX zEyZ|lKf7W^2|rX}vd)V#({D6cqNrtPcD&+#AF&+(IKlmI)!V84(^+|8jQl+k&5kBh zFG%tV4!^vunXvV)jkhFlZUTV&Ken+Nm&Jw_cFuT1%(Nczpa>RLStqM{S6#kz_8*

IdKAmFP0Nu(F6?Jhybu{vWEfE#R5kKe!2Bw;Bq7BU<|WGLi1jw;K_(Y1jbj=K=I zr9WQbvJ-6pj12eL)FPdA!P+|avI{2&giFQspOX1s+ho7+Db7rJBav7t?R?jsu*rc@ z+N$c3yy{*aQ$HvD#3Bby`#~d&>$^+)pL&X3vlhJl{rC;ot349r?Pg6I5vC7f1da=G z4!1ble6LgQrf^F5q)y!FT3f+97Jx((p+50*Hk=R!*oMaAjfBNeM=lBhs0J=_y1k~h z@lXpI+~W$T6u0;?sa?Bk)^w=zO0nN6^OC22slE&18vHg6Oph0V?33E~deM@LDI)s< zkcN>%k2aY+sj>w&Qnl z4Gy7r(`>Le%X%KS;_A=n*(MfE=O3yyVIyPhihS|E;?d^{IqKUllz^|(5^%;T!BnH2 z9e^zE1CnHA9UA6Su7VT>GaXMShhx|Rs8=5WI0;jeez9A^`!5LV+8Bz}l^57J9hf{V zy^p&bi=R^YUon*c?YuPW4>t$|u%FryFdQXkyY;KAn^{t)sT4!Hyx3W(Pn^yX@Ktxc zWa}vg&DcpF_e-&)pDbb?Jyxtbj%jnr6~Ukw33{FeMUi^N{4f+D$>;`ONitMXskFD2 zFT*||AwSQ5RnxFrZ+3jVmmIb~-DsdPsyL6X^wUKg%_k=ZYDsB?QjTHJEHpL~twSFfTKH#R z_a~^h9DF&-%(#fkAwmg%2KFmdg~awp(bd~i-{fE5j-Qy^?k?9ohkHBlw*_}oux|gr zH&Vw0ww3p6mg!>No21N)AGUESW%V?L3SVT(Vl&VNNF znm=xH27BroJ#rZRI4#fqv(TBusoZ)4crayCnep97PAkAGrn~<9$WpS+TV#4mXRY+% z6%+cC#G5sIrx|=`DY#WSeUVV^-L0{VJkQAKB7`9o9`(W9laaDDH75%$-Z@%sOhtw! z+|R0`>;sCPg>i-|Gp%_H#!%Xdn=uBcU9GGPO{$qDImYSo=9lXK_Q(1dIzkBe%>oMf z+jmRAd&eb4AmO|RPXXhdloWB^wV{dnJC3~vVQ|g^rK0?NEhyncu58JQ+6Zay&0_FX z2;eIl6ofM^c5wX|jOJai+nRjJBfZ{<^NQsCtG3ZB)Y|rbUf*vw7o|oPOrD9 zm3^Geek+pcsBL~avKy^ldRc?&SWm^G$ufS=Pq89^fGBJeijhX&jmB(`B)pnoWI*q# z1EWe%BE~8`-e%Vv_cR6Ig6wgctlFH7zIPCw5BdAk3l;bQ1$pC%j zl6w5&g(eBKVpYM23x=O(+i5Q{KyUFZ*LwQKf|S1y8FIpVfsNhqoeE^2JI%-xKx^j` z2HcFA*L5(tUfXiZr%ekOJR}ds3o{=qDR0+YsLPcco8Z&tk&HMtq^DtcG@t+EEciJM zGJ}@%VkdpRGW=5L&w1-3=+LlsgQ=WA0~L%NyV1e^0cMj-tH2rzDrYgz(qazsGUeXG zFCRA~s0;5q7WOjlnJ%daVoMcqb}H1;5#XPg!fI>GkB5N)t2(3J;s6G&tUU5+7L>nW z9VxBbDj>~;0yUc>%3EF^>e<)n8|`oGFaRwN?-XE>>T zoEaMeekGlW^Zvc|iT_E_eE-2A-S~GL8d_((5!4BQb)^*}ui%Pkb(}9Bs{c1Nm}=g% zd9v{{NG`={K@74z1Fl{l`~^l}h?{xDe?X(WfKje-Xnyg~-ymK8y@vNM47GQEU)d;K z+*h70oDo!abwEecQ{PxxVxjG?$&|P8o4>>XKKuhi^ZI{b=|-v;tMe?T z;}LdqI`i-;8T$2#ngDiGmF=!hv+)9?iRE==#(hUqAqj%f<@!>8&zh?!4=%W({SHG2 zA5gE{gGcO+_V!CRq_5TqZnm3MZGH>pk}H5g^$|`y_K0GQ1+jX~g3$#j4wP|_ zu!Q)1v5Kn@v#A0E5Vp_)r0pClL0vG?Ua6?!3$V0MzBp3(c^?n0o^vrW&ytNMf1-g1 zON}!-Ffj#o{to9z!+NMlI&v7ja!^h*@3cg>GOtm=fd3~j|95KmA7bEhI?DG8wEZ8U z!693pCH043E-$5StLMjc>+dNIX{0sGqxj9NLVT^<`Y>JIWr|m~zw407P?380#(fJ z{zhhDhLQ1kf0341{FM=34c+ns8p-kdn~_;Aub7bij9~RX9remG*)6;v0V?u+!NjwF zLc{H&7Q3u^rA$4ovw}KHWDgy+mTDr}DECJU=`9a2Bq{pZPiPOPNiYAl3hMVc&>`UW z8rF#ZN&^2MN$~&MSo)g~!v7|OcYjD%heOb%R#bUU2tLgoG4NV!C5^6X|AkKP0U@1T z=k0ECVS8~_mS-Z!mQo{&a_eDulUcpks)_&cq0mWv2Ty((#8bY@FW?%IyO$?29w&FM zFy7)GIF(5>|H}h^Fnr!XZM!TEXsS`8XD$m7-njM~q; zb^MU>mz@*^se$hYowe)pmyEc}OTG3{`IVilb${Bo!o?(lYjOL9Qp(=h$)o7N5r+#) z2>FsuPJf>cp#%Q1bQ6m1A4v>ByClYo^F@fkT~Z)H#D5Hc;Bh?s`cn+{vVgKPZO}Qn z=$uaqm%2Zi<7O+06y!-np4WrE^`oB`W1tV_1@`~M(M_9fRpki^8>L)h{xYt0WoA8nrPL+V&2Ab5q~~v6q@K>$*fzbuewoKb8r3l zFl+XGeDIezA5`csoMw;eQTW<+oG*(ifM-Eih8*DlQF|tLZ4P?n^?R_O`U3MeSYVR# z#HBL6R zKc!Z)IHiVD%GLK+!QplOP^$P`nsHw#LT}93j-TBKkpnhEG9?s|3;tO-Lq4#Ge_B^~W=yEdTuTzmCp-x~z)w{qKeF_A`C-Q;~Js zO9BYSqbfMKdSgqdLH&b_r>x05a30FCSkm-KF2ftN?@!+AN3kTOOC2Ws5?HO<+nqgc zbnu-4@0ljUrsC|o*zA^(EsnJHM3EJ7o2%2f3rqu>)@*xPRv&&ueCcmUpgmjQIoNZK z@(6BCP!b^K)amqGy?e)sI6Hr0qUhyjXxzm02URZw>P=i;I50NJ>QWTsG$uwo-yYqYrO zSyXp=6+6aB>_G1U3qmaHZtw7S$5Yb3`5J)n6~(3i53*;LO{;PD&oW@!z_GI#oAFoz z@{AR;3{PpOlUjxzJq-BW7Z6~r^Wz5zO2 zj%fP#WoLGmSAYEIAXq%L_ai@MgHtditMZ{y)yiolXC}E9e&>A`LyN^Nw$DfS@dLw} zM&h=5kEtr-$2qVpsG(oOog$Crk;Ya4gx$E2CPvgzV@c9FZ|Ol$OqYAM^yT%Xai8FW1ZCH z7&UCnw zlQoo58a3dBh9|>N{CmgC2z({tS$B`{Y7^uzN|d%lXGv2)M0npnOY~lSRZxluSNrh@ z{r906GQRi(vq=RQ7+f{ve-6YXk`@erY#o-(NVs6C+y=L5hD-CQ{VyMYQ!ZPB6kpB- zVZ;R_%dLv3B8>8e&M&=B=Wp`Ma%ig?671t@jwvIW6mSYw_YU^sZzv?=cp8|knDuiD zDSsb49&Ja=e=3(Ybao&jYY>o5VPBoO;;QhH_`xW+)+x@$gN>_ 
z;?ft^9@>?1X!s0~b)5OTEtz;!GX7;hsgYhB&kWw;!sXf@m`^EkubY5<7tMG6F=+18 zhjlZR9YxS5;k=!yC(etn*}-XU+F6b+ZyC|{=G^ED__kH%&h6WoL8W3XgxBsIx{we7 zIZ_LK@VP}1!e^4k1Q=LFgINh;C9r?Q7V>saE5r8^{#8-w)G@U``A)k z9At4wG3!X+869YmlvF^Drs#(~v1|mHn!GUh1Ph#m7^Fx_N-_sI8LU#ruif?w+F;H_ zRQm`G{7V%OA!Bk_(4;=MKT9kqk+-bX^R=&B*jMI4h&POFMovyf6lylCMJPO%<}wJn zRYSt-e~6#J(DzU^mSisy`XazgEaE$Jvth>z3NV%-Cco{Vwk5(0NauxS{$V#Tyn_!aiC>>g8-pE_9xEM78|z(*lDB|pLF7Jz z^N^W`H^Q~igwSc>xV$Ot5*7hEz3(1Y=YNxmJZ}#SIeuzn{p|O6=Xox&*|pZtH;Vf< zjdC8~b9E)+2;_>qTtB~F@v>&A^LDmBKGIq~SJl7nncw|^e!YPI9?)Wwb$GlEv^+lE zi6o9Mua=Nv7KUXRz$UG$7ZM&d%Mw(zPun}#Ik=>;Tepn(4;F3HptivWMDQ*~-`&e} zuw`Xv@6kz_oNjbQ< zu8f}6j;z-Hy&XF0Rn2n4Rh-ppQT!8W+KXdFdMvGtfE|Ft_Q3M&95S;X&-S=`^w{<4 z^m3Z#0CaZ&Pe+|ktI_KnewTkdS=T(?c8}NJwmQ|~&qYMuZh_akXpw;Di`lDEWm4fz z;6C3r6E|MuS+;xsDf9Jv{A1UXDQhP@Ak%fQaLCIufh=M4IZV{@WnBIlWxZ3? zkA3ZoX7x3ZtF$rc-V|om+HOJF^GUBggOI@O4J$35RB2YA@>SZe-qF+6@_F<&8SuZm zeArK86&8N`Gd{ZfI?fz;y*2yYDTnntuy)(^)-?1wFL5ria>(V@$HCjjbwixnPU`SE zdZ?@ZvFmt!<>7U?<1ONKUj3j;kQ?(UW4^7U^w9gT-iF*FtND6HG@;l?r7k2U>;c0Kxj>4h9RK7E-KIH)S= zJ2O#HFZlqYe&${F05uOzbLUN!_RxXbo-&kZkZO{TN4ww&WH%~m3o*WzGD z7tF@@Vys)<=ZDBraHp&qVpfi7u;hh`;NsUtXR`QoB@8NjwkTheSQ6S~feWrHioFv( zJgywkYh>iQ5Tk5CB8b5{!Nn?OzRJ?-^X0OPR*ykx0hYpBS}E4)yh=>hkgc5cS;Apb!n~Tk7)z$ZNC(ej zO1pdYxr0<(|G5@UpQwOBTw0o1>Eb5*#M&;Eu$T^6VvZOdXtSvhdb5Uwj9)c=6WOMw z2@4CY{i)eriI<_ktX->`%%I~rXe(8-uX5JZLqo%1E9-AOj;8M59LFvu zOdUC zuYSP_%-&JA%-^-~A~^@!Ym;)pxCI`>gd^!#+i(gU8e8V1bj#mDag_x}DHZgkIUV=5 z&?qhTm=_a9yFYrwfUA0u7kEZ!12(wOiL1e6!2<|w5XP?ZA$rXR$`V`Bk0m4CApdW6 zyrD5T?dDkkSs|{{=~~wt6h6GNKMx0me^n;dB051U0{*q#QqtUXSJfV{JJwE}CF9fJ zV%f`F!*?mMU9O8Na@M)Ubi9;D<4o5{O^8UeZEgM7sGDjL)MitwW3|+GL=$Mb=?g+S z-&{8hj>m|%^^Swsw5hY?a>EurSmY8u^3uXBOwI<<{}2TWLi<`L!u+1IT*-E_Rlx>P zUMY+DE}D}l+Y3gFf7i30*wpvT{h^y?4ECD?tdN66v{3I;u^{{g7yx3sa^wY{fUab` zQESxZWO=05!s|i# zIF10rpMq(KDk9uTGJl+x@vZXQ5R{qdwL-2Gxu>~QQ+pEgy&rIHd)k6vj{ z+}E0cxc4y!zzh2VDHnlf|C~4m?=Cw`phPxURagf=Kp;%~O!`AVRCK&C+=)elZxSn2 z=gA<1qBs)2)W`5E#ivSQgu0y(StAUDan%Zh;q+y|9?u8iNV$Gbnv=6L3|DsnjzrCM z>CjJ(zknG$sQ62C%!W5Q4$ZF6e73~DrgY6x%fP{D8X?~!V(}o#q+VT-%Ux4esDm-= z9#msZu~iWwmESh|MK(B0A~Bd06BQsQMq^5OgKR%2g<$8xQ;@gJX?HeJ(=xemZBhxq z)-4)9n3;;3NdU!tSyoRw%V%e3*&L29hR?pbRD7RZaxtVZj+eGVYbMz&0?^_Jq+TcIBL@fSOs9CPzv{X}rUkk)}8&&!u#3VndGq_dtEBlkTxQKEmDH3!f zwivbm!dJP^n@4Jcn4}?cSN$`9a{Fnd>GcG{=Z~Cp^rtxOCK>?jtlHX}ui}c}v$W{ngh) z9WM~_lH)mL8;WnA+7auadfmOe-qdvA|A-)sX_^>6{V~)5!(Bx!J96eCvZ zQXogH#N_tg#Qx?3Qz~oY?-=W3$3LM3tQ=3bGHBfU$5+>u)Y8@?*L?fMv#8oxpmEF` zHm!tiW<8j$bY=)D1S_%*;%e&Ua5?C=3zgKcbib@)d_hubjie}4#Fy+;{iyl}K^ncV zEMD~^#j4p;p_u@2B9PPYl99igDu(1RW1?QJWL}v>qTkN1x&7^>zDHp;7uFe7#;I3! 
zeMWMT+%;$YVV-dU$z%83mOH1(onbNOIHEp9Yo}DK{oQqY6DJ` zfV{eY>6Cd)`KUYN*02g8(7PFd5KE?q8ET#^!l{l9jwa*C+G@hIwMv);H0H&MW|_1d zdSty|UWBV|;rR;PefIp-var z5~0_btz0=@)HahGTd6t&u?rd1i6>=fo;tv#1|Bq2sk;Q!OAlj;-6ZCfxOwdoN4LAT z1*SN?jdajeEFy3pbd+Y4b2Hq(oUQ{1TNw^;0{n>Ie(Ilmr>aIWMA2m@z=hfn+!0V9 z1mX~>E)+B*;sc?o-xr~Uw9xDW2ar&wi5)~B>|*mJoKU<`XAQ3M-1Kl+ft5B)w$!a0 zXdr9QKbuF1POKk&;clycD!CZ5Pv&XmwG4`*Y%x!zxwNSOq*y9^Jy{?ImCJ7an3``F z!CN&4PG%*GbR&Y(E+IxfpwbY9n|`%&J#@rcE0;<4 zUmiNW8dzu_7Sk7irTxK7!aV4%O0tM*rI}VvT+{4g-_}{23~$poO}ey=?igjY04nM?;)NA zn$Sp&3!%k+!ffENZI?T?s5bHFEyxKU0gET7B;VwaJEq>k&U;!K!Mo^HY`9gF-^(h; zuWZG7t^y$|Smu-n$a}7^dYM3NdZlsWt|avji+NE z2&r}u=WjomNVq0S@$__hJC%Bx54;WgRx7l<7)ZXXSaIe;T`P&2C_z31G@y1y_+n1bWd!F6w6&N;L{tmBeK@} z={!!S)CyA@&}IJ3nSRCCRn_kzG3x+V%JaOEw%u|XD_4OXB^5!Z#HI>0ufQ=xR=6Kj ziR2Mtrwi(7(Uwb;ve3<E4Rt#(eW3_Q0OB)te1FFF!cYk`{@&Lv(qW%xL$ z=pBgAD`=xw;h1oAc{ciw)51gaH4q<^p#r=<=T?9iAe!nac}Z>&sJlJ1`*%;~8h<%) zf?raW_bq=58xCuHyhLuvk*y8P6m@=H%MUnT%#@GrrQ#P3@P}%sSC$gzIq*UqRvl<0(Ln~H*|NN5DJCOlkwv5V@UwPjeR zZE67?;7%;FaGs~{POYiLA!*zb-Y*}4Nqddul!X&{AnQDKMRW6LlJJi-_7zc6W9H?p z?BXrh32BHI+dAlDhFt`hTA3|Na!4*!<;mMgb#rTm69;8r>fbvZ-%ZOllNG~S(JVL5 zVJk&2$+H@@BP>v&cguiy_nGF^S7h|^2R=__0Q%Tw?H~A%)gVu2nJlIkj}}-ti2EZ9 zt1X{)lk;Rsr}T>P)Nta9q4m^pCmE!xI8bjKjbvNpztauKlBgYd3N;+S&c-K&bIr2j z?8x&$@t4i}z>h}U%3zu-!l4igl!x>$I%94&?Z~W&|56Xo+)Yp4s7INT!5c>U(3rq2`AL>Bz8a#E1996|g=s?C`$EI$&$DgVcu- z#y#oD_1ecLNZpEPgoNlcp6R1Oi!{^W0Z`06^W%`|WA<9FF5EVA%S~QLDlMyxN0qr- zkW~(-2iiAxZQzg`R5sR=scehR;Jic<)G<&^i>)iE?CLRLytt&Z%y^0ByLh(w9r#Km zY)_M&XXAX`R){bJ9Cc#Ge=uQx^Vf8t<36o;3I{S*8;7v-?3ZPaD+~g^a^`?#0#-P- zZPUOCAFc#J;7Dg3W@o+#uJZ{-OGx8KhqhBC32|fhTXSYgHb5x?cJRQ}OI{_(^lbU7 z2Ingdzc)fsT|_i&Ujuk!FybRnjJf$%vF_*Y{$6vRhIL}Rs+S?8K^nNppeX8<>G1r* z#JRfWE60Zqbx@Bv^9Hdeq0r#O1pK4&N!&+>wXFzIiE>Rmm$nxEWhJFDdttdLJ_o-x zmY|co*pqnE4oX7049ssmkxyY7>~FX{rznPOD-LA~31^#vQkZ<{9}2%=$kKNb@)hvnKaVs7Yda~=w%-UZQ@AoH5pMMC zQm}NMvUlsg9P3<@f}M;ifi6@UcM&8FSX1?rkT^pqS|H$ECD&7vD+sC>@i>CrBt&|? 
zs?In>-q@+?AO2w?PJh{=b0vVr^l`GL35e^ZZ=^Lb?PK!>{n`i%d=%9dyM z+XT4nCD2{Dtc_<3ZUr&|;dFqDvo1EbVgm@T!l(Mr058nUiE0QIgscXvkf8fph$lHb zZDUBKOKz&&Y3RC_fo0NC>RhS9-=l+{j<_0tK zArghdu%G)P0pi!DCOY1O`(WbhSNl17Ji$VqUjqrg zCjz*ymW6UwyP`PDCUpH${QM-2!d0K}w4w~c4E$gb0TQa=j#4TLL3<+g7LetI&oh!w zJHEjnMFGuMDbDSkg1YrhC*}>5lmzFMQ$dH@MOuz(!(jK7Dv^OY_Tiu-X*+sZ4foR z`yHE}ev1oCW9c!;044WV9;ryd{A`$;wPcxr6+jkMrX9EtBSJy#6Y<3t0v+=5UMN|> zz$MLB*pO{)S};;Qv7%0!fX$cwe>5C_ur zRV|A3J$l$7n8>wh_Gm#PQQ?L!bYiF~)s7LcAw1)EXH!ylXNDO!vP1>wM^X0m5O-fj zDZlw6sxBqPI;IPKJi{r+mtTd*W*P*_0gAM41QISHBA|ajljK80ogw{Gh`&pL&KOB{#_@DBI^?%A6rGGI@;yL&bLHb=+z|lpdrLCs4EU9gE&TGnU z8%oIE&z}~67kGVNCG=4xk>%!mN^N}IY*wG#Z32BkDJDn8ToOx7$(`Bl?MsCNxeB5T zJ!Ca1CDzH3Vhzy-7H|?cipWQ1C-~9@CWZ9qKj{$;%dIo%0G9l1#zn{3__$w#@C%+& zi#~&C1;2sO$YS`8;~G6GS(7NJu`fE4M$1as!9M=y^0AH~ptHkYHTbV^sqG{S*ct@d zzGG=%@JH|(;_%;2yDhZnL5@LX8Th?Wozr0I-ITN$V}2=T z?$sos6hE0sjFxz}_V=Jx;UDCQ$b-@w=_e2}DkV;7a!ck)0=-QFh#+Dg*UL>#P>{y< z#&qHq+*A~&48ofz#~WX!_$sG-8RTbNntA5-HJWL9>>#PksnB;U`X|61{`;d$V~(&B z`u@%sU%WX?_@6N%-b3BE6aUIKLm>?}3=Ch^JZi68qYrv&`()YsKJQ2Lfca1Ah9dmYR+%!I^BpjygIAa^Vk9MHxbi29k?REK)7Om!YTnW{sB~>K>dY*c?jI8# zLKCl;yX2|)+zus`Wo}`)QEF!yBFi?$E?(Q;Hr=-KpbVi9M=1kcyn=b#3qw?|3= zpBKSN+|M6xSV(~QsS)Zz4yW(t>C?CAN~4>$0aDB}ZT2iA+z8OvwrQU`QRRS6{+BYk zR%=}{BCB{}TWe%-#IWOq(b!$^ zKUm0%;2h2K&s`#q1wBN)j^#YzG!YzAO`8c67Fi#q_ z+GV4=>5KZCmdlCeR`8*sRpEuRkw|7I|wOh;9+H0?WuPW9uvxN79Jy(CO8w2AH5*<;5*EMgM(xl5#{`1ji$xN=o1|)So zLnvWSGQdaF3{zgpWBDJX*k_~V2zTvSRI*xkM*ud(2YimKg=!C-^b>YGdTvsRUQG2S zr@7S=R;+VoYzDEhm62(&21!O7!P4kO)U4xf;UxX)FqE8ylyyWcMTTc$iUb#271S}k zPCXF{oVXGHZxr{>Ptsaf&?}AmSyW-RQCPj0SpcOLD(0<=?xzRJk^3O+Uu_iBP$IFZ zLLuIs8mCpYGg<->n}dwHL|}_n1eXCPS@>S=uo2T2!eLSutXO32htOk!UO(Tj+p6A+ zoU4qB7*Fj3>`+B9+U;&!<4)RhsS>ZX?u!DD6#Qrijxc*hHS+IQj^Aui7D~T8ulZOh z(f}IjM{xENT=SSVOGu^~7~NhmlhCg|vTdPC!4az!Xj(L;WF6KMPT7rE$5RJ%T^7?**H#qC+|LeB2!Bq?<3+5WU@po3{xR{3X(d-Mce;*Prplp2KCH#m zNS`lSnq&P~BXv;6Q}3_IdX(Ix70YU@s=3CONnu&*uLgr!s9Q{}VU5bqWJSXmRsdaM zyBFm6)>@fc{A+Te5AS%%qZFvF9Mgyj?41O=0-jOmy4xx5l{NkT?^HhGVcGxwv3N6h53Hyj zgAkb}<|FE=yW{DEc{iPs<1ma(kzEwe$x=jA$I=Q=M6~QkR4kxptk^<|Xtf@SDo_;k z8sjPEm*Dnf#Av$kt9qr$5End4^yZQ#fDQD*Ga6ILye9gnXn?l8qHwJ<^8jI~hd6@; zHCz-|6=pd7iah)+vX=`i3X^?JxYnE>V%o0u-AAn@howB4#kX-VZ>8cQ0#QbVIk}ht zX6xn@YjX|umiP@DEY6sm?2dlB^XzpjVW4|-n3(1HQ`7?$WE#SpK64a`UvKyKtMeq; zP0rS0Z+VEj?L5IvBAmF)7XU>07UYV)I~XBOkx+-wZ^&SgYV=w~5}yDLxy_rD`48lU zczIb77q&h`=e&}42$R0dIAnP`CD_{(MTs5sf&6Ll_7Kj_Ln;D#hbX+09_|!>I{HEO z^8#sL9o(Ge9m$C%Fqsl(vTtllJ zoeTf(=@oIQW(#ECn$Nz%!&Z{m(S z9Xn|GGJKfmKDt}qYSCZ1?LWu_C%&hd4YwmLCA5cWh}DUioq4KXB8w-(q3e#fB|4d5 zsIBYii%;no-N#*90r1}NqQIY*O5~RIC-&-J#_JUmbZO-1salWab)-3`6O1?Q>?BB^ zkkaI`hPew}sjb6t%ij|Y*31Y%I3NP9mJeJIlsm9vhGm;4XH2N0q4|f%m0GgTR`PWX zA&9O?W-L8b=od-o`U^O65}_rD1dxWChlj^Sf6H-fVI?ju z%&2ZkwYC85`t{TbC3)vU6vi}U*)QvzkP)W>$Z=(G0~6yj_d{hKN05z04*pzxP^k5cDuIqNdcx=;1pHQ^r zM=GKi5aj918LG-Z>a6)E*NJ$o`uCB@W2@i9Ye^vKnGOj+vVS&KHeyEh7Z({u;D1(BDnFhFJTx|4{;fT;)8LG)%t#wWa+uy@!2c&=r1?}Qh zh6M67+}ahR*jm7%5NDT_0mKaC<~CtjfisGDltAiU&K=dq1fMDSni%u>7i0%68D2Q% zFH{WhBsD-M{)#JyPvgdzpU$JA10AY+fuAL-iD4a>L8ru!Q*k{Y4bNjWAg%S!u_-4= zkPom2mK{cSdM^HpmB18sJ!3Cs*oT>?!n1Qh7(PQ~$dNsZcUCb^kSK5Ruu=OjRq|DGnHf`QCSlbM;{@;Lz899n$j zA0OKW;z~&0NLlL|+YY!JgQ_MoM+L!lJ!9;Zrc-L#{+Tr&UHmS7-JBo@^JC7GR=O3* zNKzx?R%?#Yi@dt$ASI630MtuDP63be{Pf1dEPiZG$7x1JR_rtGKsa`q0=0qk!p8dD zf)2ngi~D5xz0}GgNKP)->rkKKj&9TLQc1ZGD*ymg)h#~;IvBbcNG=_f=r0gZhI1Js ztH+b~izG{1UZ4^;2DIVyq|}QbDV`329i1Xx*CeIA(lLjL?q&L?DPR)B5B)nyy{$L8 z1X6sxsm_#OIFDTlz_m7p`bkQyrp-}tX z|6o#kSWWZ>u`;gOi285K_-BP9Gl#>&BmaObSO_LOJVaD`(6rQ%!mPG64iO81Q5bMD 
z4ew*=tx={E3u!-eR&k`!KG(SM}jgMRT^TOyI-3fW2w9tt2pv-rd?(3#e>EeRLm3TS} zt)3S*v+xL?j=~4+_$E=6f7TSH(E(}&Qn-wC(#S%=3<0l`H{8CS^&unohk6`2Ut?g5 z<77Dj4lk_d6GIXhcwlM2N#b`ioQ%;|Mp^4;WZjNVU8F)O*kq-j?x=EBMuj(UF=tb+ z%cNPZb)kZC3v3AQ52dp%YY$|!2B4X z)X;Tsne>xL$~%&)i%!9 zpP~{K^cK+s4BSp25W(BT!=o5hgc-f^XSa}mH@#!90&e(?>6F^7-*otC8m>F0y;}w7TagY?4$Xr*>^~Mi?biS>zdu1Wk@cG3Olcy@B+TbX zaY?nbjp4#WU3rb6<4)*fX?YupYCnX4>w$Id^rB>k$$gVHeU5IvnKKpiMCi}~`)?a{ z{*%~QnnrN#3)^h{d4MC{wgC3z0O0qlDXw1cAQX(+F0Ye7TvJSCIe<0+>d)o^p_sF- znHn2DEX4L;1XzFx_ox04JYhQF6@Sn4QMqh_yyJ@?^YCAw3*0~hX9ytA7g!+q1b1(* z+aQ|C@;98(^5tOdcfOpzK!pz)czm6KxmCd>2lv~E2RY}A z=>1>Uzb@3^eBBnIPjjOaNun>fQTfE(U46zW`?=7-(RR0K+(m;+r?%)4nf<(z1Y1ds z!a%GSpmX2y{j`>@Pah}BE7>(L>h3)zW6#S`eSNP)68QnI|0mXHlK$MgXLIX3f5j!a z7wfc!GRFtZ>3x)$X#15mFKKYngz7C=6f5x>c&)%Vw%MwA?yDq%eZ4v9s6IF!S?@T84dqG%rd~^g`b>$ zjJ@+e#@;&L{~CM$Kl4{a!4RnBcOWVymFHoXkeVZ4to0=8a>{C-*G(~0bBR(qy`;zg zo4;}eT=6&*3%6#QGNu##q9p0ITICNC03>fAgH$k9IgDaW?a-v$@PEIVy4H$%CGr5X z0wq=Oo7F3w#lQ8C!d*6G--4O~xV4?_Rx(=iCS+!AB~S?KYY>D)?|z zBe=fg_;go+2PD5)XdhG+ZY%5%x9D+6EdsujJ@B_{af9zoIbGRO@L{e1;QXv~RNz-s zCrn7ZL9v8xA8trKZnz0$`%OT$DgB5brA^H3w)KjHvzDAC=8u#qO?^=oEHCP*yJkws z2^v-tQM2TC>|)6tC3dyHuE%XT*7dyJB;Fv%B@?kC4I+in71>r|oO zj3_*wxwV}Ar(_{<*&m4z6L!;&%S(W)6=%AKyIs{d&}poNwu{9NE5>yU!l0yReHb?K z4*bcs=?$Y0$gl7BaipTMrJ&*^C*gPn-8SIAz#sFM1VbB+cmGtFN<%3GXQ!`%ZPtQ< z1&htzheXgg7wO4c&wS&Js5+BJD^^;t*AZE%A4hcz?O#LJ;KQB6EAeb@Vbx~e)>)zyCv^5d05;GW;I|O6C`E*I09h;72sf4*~__I89`lp|fihn0qWzN0TFHf_s9WP(y0-n;lDSB(?6ICVVhn&{Q9rD(I%>Ii&=`Czb z{UM+{W9kj&~|yJ8z(qlW!J^IH1XHI@rie-l(cKVtS^lYN{OY$C(%B6f}d_Qql=VbmeG> zH$LqGNtPFF`kY`}soUtnv|0KZLr_};)f3UhzV79Uef#z-Mtg!eS z{ossGTIDlTL=WY#<9CVl>h|Lt>!@nce66g2$@t+gP}27O0$V(N45L-vS;qa1#$e#GW zUFS9=IjK2QHxavOE?qa7RO+J{O51Z?4WXv8*qy&Q z;!Ruzo@$U^jP)L>tCvvhr?J@ll1_6W>t)MdJEA0yv5LVtF4dDifvAUyAMMN^xcXAy zE4v+zMz6m-Eb+FR{JWTZ%EzuXT}^#qlq}-a&Do&(G+qjxVc$uszLNL)ORmv1q(L72V|2Z}I}bx^ZZ($?7D_r5w={~!FZJ?f@CSW@Rb=PmgKTKz ztx$VHrD5;q9sr&;5QOVJJNm**viO_1_vO(2`)PCi+D6;JjZi*gR5SEMs>&<2rqPZa z)vrp8umAED`-gne^9sEF`$^zCh6|ugT3i|FbcdoY=hdOJ6i(AAIy)tQ_F9@LlY3w-%bG*b5J<45?QWYYU32k7ZnDE$=^WkW3fsm5 zn1h3^t$24QZ0q}EH_%(HCqKboLv_y2F7=28n7brplD#yYxCwc;*Wl+49|G7iYMk9h zE+*s>5>_#yOR*Ra&0(jpHvKVJn(Irta_4pJg~q6(80_&j1MHj$Q620nSOtnRhd2qrZhMGs_c1=L=LhQe|hd0})Ow zdo%p^#zI@&Vj8KABjmW=#Kzlp)fSnn=$*0FF9|pi7+D?brMcH(p@99ByI@K0&=c@& z4!%Lwq8_b7@Kz)5Eq|J*S_hKt8@fwc)cJ9ips^d9RDq3Fn({iObS20rlXr51x6M^a+4@GPT%p&`^3)8lyn~G(OXNE1 zV$+lpaIz%70%R8D6aZTJ!^w@afjxeB7X<@}Er^*5W?~?hKQeX^{7YSddZYqk90Q!hwPxzI%#<(zioS zS#RxM1UU7#E{o}m%k}oI%e+bXY!!3#kW}lLV{?kRhG&E)8^FN$Bx6uHxEL(%3-6G? 
zlwWlp!IL0mE-=I(LDb{s{q4*_JAEpUP*SgF6AOjYa~MGLlRXzs!e+2wHr#dve-y`L zh}{M1Xod++`KxS%U0t+4X7EX}71gpY25RF*_V<2=dN5rF#tdhIEfDx$K1Brhp=f8h z85Z28&F1E=0>BCAXcX-7J>-VSMU>lb>{@h}UxxdfE2ebwi5N@9puje`kcb1(Yl#*{ zk?G3t`6~G!IJUOn^$og%Fu-T;GGzX8!7P2|Z43%g3_~Qr$DXl9H0kmPQ9~@jL2xVe za~O9ef1*y###|UV*Pa#yu*c0KX4yX+6ukqGj^jksPXNF)e0|mo*ss|-$Hy$o0z6rG zd3ftA&j&t!XCUsY$J6r(3PH}O2ZNj+6eI*;eAocAdlpD4b@bOEciC&38&E?LMYd=g zf!jejR0OG8q?hPE$S7g=EP{ZU2@w8Ri2#J{)n_mtT9${HG3yNQ&*_y50V9jx+#&}I z?H!`0VF05`bf1b~QmQK|A|-1>Lgjkl6Kn&`8~RRGNwPEyFBtBfoUDWnj;B)dXQBfeF{ zbgMA*$s1PQZ2dzGQ9u0KSfgx-jse2H$&6d1Hvl`iXeW0B>`62TgW{kU{u!0*b^H6H zPU>t@IDhvK5rKMt!rfea70DrsGC#vo_MsN)>x)g-D#{&Hb>EHbaB<-z&@ddqQ^1LKmnMYe z4`>#vxFn4re3^dzL$OUd{!JApjveF&5kO&C-0q+b-CDmA_O^iFt9E`li2JjgxuVYX zRn4Bi0E2T9d4p}E^f^+g>{A1krL7elT5q zI=1a}Y}>ZcH~+oo;g0iEW7J#i{ZQjuYp(gxw>jIFelOALkf*HzhmQW zKoY?eoMJM5QL8wIv8q)r0p){|Oe#~C(U(c(n=0ow)~olZb;ot;IxPrchjm*6$h;K6 zIQLdAQue+E*RqT-Onm${aXXCJ@#dx?9VFZy)@S9%4#)H^I~m-$$Ilp{p@P zuGVW$k$8*njf-Ly%N_*YM}owbHH5=77HJ9)*tio-!sZ=IP|ja#ki&GQMU}W)?6nMP zdv>mD^a{%j>p&m!f*D)5grM30pl1z#POVy$#9%g~;6^oZR2UI3F0ZblKl0l%Hb9pE5rQkS`V9hea6XmY!FiG(fwT4VLp!S&XQpWE@rB3Ai+&%+ z;=UES7^>{L9H`Nku*BLJa@wMP<;A^-<_=A~&0^xeccIbnr5R~JE)K2WMUuckmRZy( z%MFOkeq8747cvNA>~+2uFZ&pNt?6USAHl5>4Dq}Wg2(SW4FV2;(P8}=I_YId2!_GX znKr*CSYIHkU1I?#ZfRf4;?WuyrN=^x{`XfPg|F+gS*#2XNZKA8SIs)z!-ZWu@VC$z zQ2_+c*G=eqXp4klU&o!B1@)L=GgZ;N*fGmrb45NT@ytF1@RScUY=WtSYE!A5irS44 zt1La$>AH`XA$2hTVJK7@VxLFc=B60%+ZLEb)%QyiA>p2_8A$H%6NH9YXaOh{^YW;O zQ=j3X;Jn^5gkYWO%h5aDZwFAc>C)O{6 z`DL;2TVPl$A|jQKbI&UiAszNcgxz08m2igfvc65aO^ZeVQgC|l8BtjV+x;!9ah#^6 zKd0;JLZZ>#$Zl?6vZg|!Th}bXC7N)ldZh_1pZ+q%_^+4|AtW#3^k^Mum82GP@CwkW z`4ugY;$ev>)~^EIAtL)yS+G(R{Pv{%D0r*>+qv+47g?<5mQ>7#sRk`0^xsV2Xg4Xig)bqjoz`0IGT653{JZdc(2{d^ol_^j{!X_w62QJSvkriKHqajw?3 zbp32vR#1uTSI|GblP9CyE+eU*wjwD|P)b?<0+US35!2%~_pUH#B0jlQ^ z8)+#3J6UD3=M5rRg=hfx$;EMh`cnGbl(c#<6%CZc#sMhrkMQ~mM*{vYJsl|$R!pMa zu#K!WaN#&m&jbWX7k1zwG7HAd`ULMnSnT(999E@v^6p;k%I6@kw)l2o@DPNmd0c<6 zbztgnn)6{!lzlI!#05hInZ|r~m`4(PY<*uqC2n^ghZFkd-}5dbJJucN&3?(cDOOSj zSI6Pn0S*AExPq%geR1H$5t*1&WOqT9JnT}si>;X*kCmgDodKiiffw8sS3qsT+eL$e zx082+TBQ_Y#6gJUDXFtnX}GZtuK9`DycUqVRV=9qW|*p~q)icHN+7>6n_3nvDFz9k zSIe=79!V0hkOq-S(>+-ZZoDll0Oy#70)3_~D@bl30vZhq%8Kb$&2K3_B0omr)a+c#$2}c~w;o{dbNeyn_cAGtQiJr?df8~iAqZZc<2$;`o0lAI!b-u40$`#z z{i7{ay>*sNdG>9uoyA~)g3Zh$b?gPOP$VfjSxGNaJG8?GUbq04YZ-o=;Xq7irR0Iy zd&!r;-Y*~r9VL=z%(YHrA#mcVNi|R^<_+?&)mY>~97EYErClS!`8!955-F9)?VB?M z-Hio}3<5FqUH=EhB=ERR{Sj_lIGQc%6meKs=U67ZD3pc=qmaRwM?i21a$} zKEw*@HXZ4Q#w|*e4~J|~ z#tnIBe#x?iS|EOL+HC}NoK7Q_UGCUSX_HSa5>5~hh-`Q4_U4k$G4W{n^FY#kNKO;& z#5H>&v6S1&&R*Ay!mjRD1cL+wpSEF=73}(OQK9CF zz}dT3Dd?mVgDpmugK2Vvg`c%%w#OmTmVzX|E2B7(lbAlkRw82=!)B&>ajH*V*mUaerKK%yn;uuhlbuN3Br2eJr9%Uo-M}=HYgMvoSTS4!@t$T z*%b6OEM+lcisb`*JyXb;kLAoVqBIZ*fWQ`Pw4~}#mt;K8-~BCT zhY6egaX~Ws|2m_4tEI+CI356J`wlL+mXfNScTt*Khk2No>(D(0jNN-O5RlVf#QF5a zNK>e(n+fC@mR2pywyWvvc9S^I>kZ-Dz{wK4n|pq^i*G!JEReo_mZR6rfFFb@f|?(# zQRdJ=Is`ELtbd_>>^ARo9#p-mwugLe11>So0I^SkW!7PvGp-jJs%FsriKtha?cktF zPhS#P8fOj5mxW+0Rp zGGxYc+}9WfYMAy+43IsfVIFl=zc`@e4vF!31PWj;fq)ZhR+MPRmV_@~()EA~M`Z@d zmyZ-L*+?ld#txjslZ1E`M>$q7232i9iftx_glCPIe5e!{ThFOOVkYf%xr{DHLPRMR z{y;o$KaOS`vogwRlR?s(EFltaq%(`^rj6u}uDhjF{(fg4#YyBYDpd&4R)N^G8ver~ zJps6#%n!vfIW%t`DVJt_fIyG37e z6emoLmnh6!F&gb-saUxo;N}eWAfGUeS`1i5;6^-4f`X4>&OzFUfR2~&)PuWQLhX-= zUY?deNbOicfwnIuK0a0k23mF``uT)NkOICWPz%qPbDI137adlN1Dc?!hK0sRIm-QY z;Lv=wFqV?ol3oj#Qz5s+Oo*ub__|9}1o&5HCDv^G@&im8z(3caMe-qaP<~!`>TE0s z9QSOn`{xXS-Ya<4ST??@a8`zD*wW=6;;_P;j4^@YHF|1FhK~2IZ75qr%LM7+JFln8kowC7ag;V4cUGn&lriMLjDOa 
znpv2oDF*##9d}Q`VW1l(-92FanspHr8!EU+z)E$P7b`R7Lk!%{ zM~Z^Mn77RULk^|U-k?Gm9wIYekU|J<8`i0!*+UEl)}moKTLJFBgX<3q2NPG?jbCwh zyohI$FOqT>db}7%a-3GEW_3N*X<_B%ejvsYLD0Q8{5)LrC@P~Z3&WFAR@j({VS|VM ztf8|A#t23NIKEoS{u);10PxQX#&i}XetQo;BB}_^f|@3SrvN` zIq)i-ThI_tWXbjw<>J2KOf^J4rukv!AaKNj89T^SFC9~=E1%m#rcR_UJh_J5&n#3W zymJJt#ospY76y!rp?#qE@n7@`iSv#G82t$8IJ=yVq*ELDTuJuJ0QFFv)AKG!s4`r# ze^kaBIui-ZX0n&Vn0EB6z0*1^UY;51lZ}_9ulI3!$$od1E;Bh3IKawbhTkZbxI0q^ z?`2-n96kSp{z~@t5}AwK`y9L`KOLD%2o(4JNfU`5E7TvY>@AIX56`m>gO^3$`KsGB z49lkrs{RU}^SK560+cvcb^e-olh*-8SAJ9-vJbXCa>}JtK6+*1(~j+O z4}=Gu#|?jjV3QI%HYo14#Fg(FSqKJkm8WOYo(?Oh(wzAjF%GTPU(4kv)$1i#yJa6{ z%z%$*1dXx(Nh$}(CC>Wh+FQnq&n6NskkU37S;gRX1M4 zKc0>+ac=4uz`IrSDEPMsYHiX(Q`Nbrv%2Asxm=bvh^yTf>-UeRLnqMeKTk(A0KnEw z{dcD{nUUN+>3qdq52fix)M2}gF%{t*J5x3DBkK58+OZ5(_YjxeV0#JH>z4deu`@ec z{&aY672Qky(J7GG5SJr_pB6CEHWq_;x#f1B@!JaEg;NV(HiX_kJ9Og~BrzdakGp!` zy9Pg9=U|ipLizL+tv3w)_x8hFV8SMV8%6LnXy`Md*yr{?O9!==Z=Hd?4pa=%73R9k zf0hp3`!zh7CfgAMLNaxcf6^K#u@`k4O|{#u+aC{DQyocvfi z@ByY~4>zZ$7X8n6^!L|l#oA5zUse=OcLzL`3!bgm$EgkZ1x7B~L4fO`3t8Qsty;#k zzb>#FV$}Iyrr+I>JQf0YxM1WeeAAmnBjC+2n;({6)B6q zMWh-H^zRYjaS~x?iatJ2AxJ6KX=u+6k^nKr`(m1Oa)4m{YT~lK@DRb_qlXX1sr{o1PT|u( z`#tD(d(z=H^5>7(JGU&lB3B@zxxsc2l)2{vcqsn!a{!3kVpCKB^Ha# z+HBAbQThJrm3a^EGRmnfKo4Z}X|U%2i8TH+A1r#X=iDxwQCdbvQ23yK3md?4>ejxp z_Jw4rib+KDMWE-OXZ-{H-ynaO&wtDBXlg`~|5<)>{Lk{+d0rU1NC7cKxK{TCXh0c7 zb%Q09`sd$`+n$>DnkwqwM}QmGcCe=A-+eBe(@YFB@rjU_-~O zS?frTJNMUuhn@!7wx3o{$BAG6X(ZBO0zi;jeHMWAKwvz&rDB=kfq)q0{~y}#6bK9f z(9?H0XhRNARKM;kFSQ{=nFss#2gi4UG~Tjone_t4ms8z4qfko9LM~A!>3+uHN>?EX z^bm<#Mj;6iNLaXmImz;J(T9Y8^%J;H0(9%EU@kUoXL}OnAbMHeAmszpWDPVV)jH+q z$PI9N@K%ov|EpB~FCBG7(lw%kHS6m$U^gD%m4UDjQ|vHvEG+}irou7k25_2ddDz=F zNH&<&p4VE+PW9sabFMwV-;#T6?(PyhVmX{^y3tSIXg159;3omvNXBUX<&*WkWR_I+ zST2Lfs?!o2orxh0i@=8Y)|FUYLROx`OFwwn6pz`A`cg(7L<*M5SP6`k&mrt0AIrI zwa|6;OI=KhX@_<>IX&cJynjyh^mV#Lv5#k#C-iOb6Zpkkc9rDm}e0jeCxS+%(7h+MKksrjWv@v%0&Ct}ahAizbMCvzr>na5} z4}LFu`#9gSV8>NnJ~wfkSD1$wu{}I;)n+}DS+6e?k93iOQh;9}zOqtRTzmo3!%T+Z zH3soRVphDT1v%?eK70v3Jdpf3>y!G67T;grlFC_x@e=yPjEy1hJJK-?a6G;o|7^1O z$HMalWRq#AwAd&-ldhWbgY%rf1>$#r(=S5n7E6J3Nbbv9ibRi=`3z4 zpMfbjO6Inw&%k_)0JkRqZEK8bt2hcfyajVcf4P*m zbof<2G+3dWP_X3v8C}9#&oc0D`TDJ$`Oy=ec`;x1KQy#`skb1*q+8vsZK_B&L8g3VqrGc^$r`+!}0!$jb^IE4Ddk_x|RGX34;ubI5S5Zv)Y zL@aY>LMF3z(-L&w=Y^0qY^ALgusFZ zw%P2n+w)!eU$%EQ0AZ2J-yHJlxHkp1+OC?}VXmd-=}}35sc2jUb;b%6I>D02WVSH$ z7oO@?6&0wZ)Nxtr!t}@l~{mSV{kB|%szdjD;X)Nby{2r$WMg+qIC)n?|7uMUU zv-a_|-b2iTu3eT!{$I(aNOmmlcBQY$)eWB7o60em()=aIg@^2u7cmw2$Lti*(7(N- ztSwoRvon|g=3*%m7i6JB z4cTZsRb?)k*3>hF(EM1fn~7mNSai1`urn!ivc(JZL@l`3ieCg>{Zh-){YSBkiLVR1 ze`QGR$R71q?oGoNUln zIgCQnZGgI{!xt5vmNa?ySiZFkOgGb%_x^?hIFs62Q#w6{7iduUw6S2Yr%Lt4>2Wql zlFmHF2d|V`j*>ns*{(1fMocH7M%3KIN%T_3yh1YW@tZ&4z+swyKLqz&8O_%wH6C9S zk(&4jd)X1Jk4hdRL-T*Ua@|JR{u_bkEz1CfSc8SY1UuG_5XkA*$VW63m3i%@eSgs4oI+|7%h}j*8gq6B$J>Y*JGAi%sKIvkp%mo!~VFN&vrz zqvN)4KeN9QN4dLm<#Nw{c#V_>7)=-Ywx$95Q>a4{oyFFM+9K1#X-_N%7V9 z1d9Z@-Nyc7P#{h)6`%pz_Z2=<3KFVWHSu2EPP^R^qyf`OPL1g|!dn%hjLtGW=xM~%7K1e|Zt`9gg!V%pJe4Z|;}67iC5DcI|)I77R%Ov8+{NjVkM3&k8 zo_;aIdvD5DJM8H6v*G{s#wvy8!tuwGLnSod0~1~1DzLqR%o*5IHO zeM!W7Ks^s=d=h6B`tE`aI7Fd@7P{v@T0s(poZU~YiWlNvQoQNRrvXK9(f+nI){OVC ziy6wAi-To^9Z+p?>Z1>e$Jb_1X1cb}j3XSVE_N|zg)m`K-D3C@6H+Akrg(bt@Y=myd1p+{QTO4&kjN?19n zi@tk^goalI!d1J97UGA_RM}OX4O)_|3;&94MOiEYM??Xy!vH$B)e zOMI_rf5!mU(db+h*V!`uvCpi>Fuc&7-)g&%hIE+n_94_0x+$;PGDhA+$yG10ahZkV z-EYd-^U7xuv3mY(Q(dA_ep72Jc3~lv}I2T%c&|PQs~Pi4oY!j037d2o1GX^8MEc@i3-1IiC4!oukimy|t zN&9-<{cbM3b8wMXPn1;|!|9$|9|9igal^tJeg8JWneNvb(rcOWKFs~ayY=LU(Dh2P 
z)jY(cSG!f)`C6&j)Lq)NpvjdHyya$9tjPy3^yvcwvp?bjjzf6s2l=cRMax-!6XZWI zu^ikJM5EjQsp?6;IL* zeQ^34n8J{C^nNkMKF%FWk=;AikdSKoIIt+C(S;y?<=EYz#gdC2GKMlvfWa!5R5(to z5(bjaNsnEz6iqh6+F)M`KYJ%=u`sBAdZl3?ClD{DXx7wTnTY|tWwweR;k^%vlz5d_i1>pF9 zoq8KC8(JRWH>xR~yLBBl3>2YG0_8fnN#>PhwmueW_Aira4thw?@@wBNqgW{=KgBem_0k2yPcAzev~X9IuW4N0gF{ zAHSJo=;z4+y!-`V%aYZxEdeoHQ;mma@6_7zfPa^H|IRMn?mq4He%Cr2c(WK&uAO@Y zu=wkywvI|lR=#ZT$p+BZ#AAqm2IcG&OSV|QOWIH__(0x~dN`pn zvUdA3n|*wq+i)uy96oerk?*$YTOOGF=w&e~kJA6pTEd;wNw(0IW?Govx?@5Yy zb@F;M2=hwKRm?8W>1Jf6I~;1Wgy{oNF>kINEVHlaF}>U6(vB$I*A^@d8*dt0xv!Yt z1m5l(w8<)y&o*7MC;(1^t-zDJRD2B=0Y@hEFJ9^aGRfz`bPK7K{5pi1IRoC=rD=Dc zIZg5xgBS4}fHe3U&TzYu@Wf9gh6y%hLBHQCCftCJ%Zt}n7DK-dDhhD}8A>^g)`ti4 zdYaxC%_DI=rt7Ldlfn0lU3w*N5y{p`rN6?+i-fn!(C_!l&H#s^$UC`mnu6QI;y--p zT6U{-eZy>TcFL#2IQz)xS%n{YT1`NEil+l3%v+08P0xv)hTxs@%Ap^_k~aO)#nw2R zX~lRGZR$T1E%zTNQeDe3NcunsPy==w>wJ(wfCT`i7 zpL^^0)OR@_PryT&pK|w!FU3#a_RIA#qo(H4MqI#0r2cvLtn&*3(E>%*?>vWO+G|kC zg`F+4#-=Kdw4s;pgrOzE<4RvZdhKHVEPN4yDrbsKZSY7Ew`lF@8wz3C%r{P%=XkH& zG@}8&Ke?qw1^ilYUghz&jVreAg@c#VPKBn(7THmwAb_rRhho^QJyVZ@ym2;N4{h9! z`eouEHe_V<>m#elSEkwX&F=mx&oJWUImrwbJBdYTePvUi_WV8W@%A`ECH6VHAaf*- zPg(Fwu*SGf;E zc~8x|762lt2QcDk{;d<>omecLvFI7&1KcV9aNi`gZ^QiYduKc$gV1nfI3qCl{RDI3 zy=U-a3CDAa#B<`@Fm~44r`1G)Jmx(9=3sw8JNFnjhH#L{*kCF?Ehc_Rqot#BI{v|= zzsruXa*1rW<;~xPJWG%N;|jdh6_^m4cJtzu1t71HJia!J*!wl5x#M@_RRibxq|Ct+ z(>^tB(Rzd}>#WVFwbZ_qimH&`F*+T;BlwYO&h=IMFZ+{!oV7>4{9|`(ip)(!)#Kfu z`Mi2F(M)=Qw>ChztTngEC-`}*?cgn2eP^q~+cmG^T8;O5K96%RErZT`$KZ>*{rLI2 z2Vl@~QIj`X;S1Svf2KKor&(sG-5;3xOyF?t$Y3yYImMA8Q1LFh%xZlltKdb!xw@8} zCcZLy^*p-J)}~5wCqafer* z@OMq}!y!gzHsjfZxXLufOSp`yw!zmOE#OvKrHlGl`x(M6mAu5VC*$Oy>w+twd(S=B zunamcuUj*u3p;*FIQYGhlRI6>-=q25uHETk@P?O{yh2bn=Wg<%OHP(3h7A@nS3S>T zGh;Vv`g<8afO3T+-f^1|vypE?;MpU&EX|Vnp>U*ApoJV z|Dh*WWV7S&ysVsc$p8o&~FZ^J&=0;56)F<2E1R%7?e~2Y)79MxyrGUkg=qS#Qd)9t}t&F9UXc`tb zUK+IG>fy5v4)O7FIo8r%*T?x)cOvBToe}QX5of=Vd*hpWpdE1Xd#Q1j#f}orE(ZSX zSj)f?@vPknr-ks=m=nf-Cnc75v^n>8jd2!RUq`$hPqQ!CX~8aTBKm3#B|y`x!{hMj z>Da21`Igj%DU}|?Wj;MEjO0#8oOX3|JGxAIkFvDyuh*}_2`vtY&l3P zBg{_ulB2EGbK@@1cYEW*Y=g!1Lto|`!AYa)Myq08p3b)G z&5_`uXey5-xa^}k6xAQ5WsF6a3i5XFW}Dlm3?*6k7DKQmhG@UsDh+Suw}<8XI>|_K z)Qr)B<%c_{dWp!A(vI+K(jlLprZwb9a-hnmIu{_nBix!P9=nd78^CJn6%Ug8-dhdC6v(~5Ut+^jNin|fWTtW z9=b9phW+Rx*jJdwsblS*jd%{bL!BB+2w|hFK=MvR%2RGi+%u1dL%EOHyb;YHU?n*J z1s43BXVR6@C!V=@34oRmtWemlcsDVh{zrRHI|+P!U}<#`7tn?PT#+)y?3x=LE8Ns< z92B8fe%1BeEL@}wQM&SJM=oajZpn2u&d`;w$EgJH?NGh{O3MdQ4go5IA{egjlRIO5 z$a&&4%cqkmkN|bQ`SdhCZ+d&G<&;79m+Q>3sI&>i-JNu$Oc}k>NRVJoDg{dGujCBPR zF;7`5FM?2wMgab*1*Nn(PO7ad-vQbO?;^?du96EMf?Yf*J6x1nSdw3BH zkix}rs(hz{0x8u-9f=e_PZ`fFP_zt?^95EA2@j#)y47rQolmv%Y{W16*j$hzZu+$r zLP+-0@%m-WA_VSVMIi1S${#xlanS|4AuJ4{#oREC>ZKBO(7>_um1YNOdT~n`y?s z`mDheHqkhY2cWWh=yngwUvqtDZwacDz+g?@hIF)MjgcXMD3?hWNIxUB;5{fCy6Rlb zmt)A~VgaI35D3FA-P*4LjaG=mLbs!#u5{+;TD(^_bJs05yV0%V1CH_a@uEH?ylO5bdvFT_jR>*tgzhuT@Yw%E(8Ub(vrPao!&&2EOn3vcTi>4P~kP=UP zAZ1IpRV~+XJRApo9+MVojceWZJw`)jH<-*zlPWQ`E|pX}mL#Pz%XcxsK2|x&|Gr=T zrPJ-rRDBf2^GiITk>QP(B7FOhWl1!g#h{COrzXhv%=_a-+o3eag8vnQRqK8Z!k%Id z!~&2kkXcEZmHAwO0l{0q$kQ?vY+bAfJSl-@s=ZFeYXOCvyQe`Vu4+hG#Y+_iY(Zz* znQ0b-wY3CX*#<+fAq$a`U}+^to;SM^J1VJS1ren-NGN&NpP@LBM2)3ye9-4SgYXSz)X~ zG$O)}9ms!M&zjsr?TX~9*XmXcr_qGEJjZ0TxGOdL+_N$#*IomE^viAkDk~O^o~rx| zgqNuJi;A7i%>pwnN&MyyV}X|Bn5gn^L&QZ8X%ekM{l+OBS~})rV0!T%gb9TiQvv`r zviSm_ALAX8;WyNr*t(ssm_1LDSvV%T)F3b&c$$r$X_UX5=plY$C#E|~SiP$_bH6hn z=fnb4mAnlOS@#4t9n z>Z6Fh7;JAe;vy$2Bmu=(W5XGr?Fz8e>qMnS(qR#W*7LpM9{}Gi1}Z8oesTBd2I6uZ zP@p1D(R@n&Px(;=-Dn$84aP#MadHJRW}KRd>t2?$myKepS%o!EY+hgL`+TCdkOJ^T 
zl;qS9BK4^QfQK3{*ms6@Y3hM6+bqB&k(u{{PhgQgjZZMNZB)Q_qop`O9tDU%^?a{` zr_h0B{%Wc{wnb8Fs7g5O&wu;*P7V}ZRR|{g8;8Wy1qgO*?@810v=Kdp6?c<1p(@!x z_d&L2&$qlGvsSp>BN176thwhMA}%+shImeJ{I06^-GyEEk4N$2Tavdnz&q!_h#ok? z(rmK{@{Awm5fW_3Cyx$=@oc4Q7Xo+G~4bZV0KrDa$6gLE@wN9Yb7V1Nv2+Jkf+taRJL zK*5$XcM`6&Rpp~ec{R^q9=yZ-ns=pXMgKzOKL{>|zmq<$*KAv4?bg-bh`y-nnPNaj z`PW$1(FV|JtT4f|TUhovSF;213P=uABR6-hhD;9kFELp*4u8V31|;_=Dp4M|E9@!yHdwe5uIQw5ILAery)jxaP+gCoha9b zjkyYI@}&LVI~T#jo6RplUaDYV%nkbSuMDk zrV)RHVOiW+9q3b0ax$=d&+Z=Fgf!|56Vz3e=@ZcT5zM$#sB`F4xAO zKSV9ow_bvSKLB`F9Fm<=;g#6J%cv0AaY?B~ar|^+_fn&jagYWH36lwKe00#Ws*=Ko zzzvJbAxs)|&Unt0IDz$%)y*@GksB6fMrUHHkh@|iXF0}Y}c^Bveg&XMVT}LPXNmuu0*FECUi)*BTP)b^&W$! zb0b#6vlKxr?d8H^7;OghW;(V@w#(}XK>M)E0;@=CXxz!Tn9ptv_XA%hRhca|y!Okb zN_o`GqxGKtz3Z4`9TBB3nHiiWRUgL!MLWl*t#7pTY!TYZaarHO&8Gus;^_|i5~0DE z6v15`IRLR#vrou$okCd%J8r5>o`Sa&v%C{BxVu_moiHj zhMsa3DGh zp)RPh!<~eQ_E+KgbSA1Ty?<9YQ*oQ1tiwXAQUGyeahwi@RD%iiDSh~O6f&T|gF9-Z z0UN-2Kp_0F7Y#3PgVs*)($NN0!enXG0BD-f_M*BIHx@Q2?uK<__$DeP!%{ib5ip-W zhPP2j<$`>}%_q)-K;7^tD#|tv6~Z~i@Ir=JE^2!?%uj20u~>a6mQ4LW1pe&6 zt%kd$2ICVunfOEzR!)xZTFI%F=AFiw;Kn?>O*t4n% zba1`b#;EoF?brKTqq^VbpmJ0@&(x;1H_|w^7NOifO?M=+mg6Gm*Q?EtCZHF??}%0s z0%Z;%s#ZyZ7Z=?ELJSVyw6P|-Dk%rng;u9YTdi18uUd1tZkJqhesaC48s)KMh21@2j_(Xd$VJ^BgbKc; z<=sOmM_`f)Ba zEpboaqBRTrm|1Jj3Y69l%*tXL>v?Da4Sam3>v5zt7bK1&$kfw(Bgms%`&g2n>_f=Z ziL}mQ8v^BG+pM$p&A`J*X%SF^E^X27U+n@$vs{j*}`m-_`c4HLp=amK>2 zuZ6_wwqv?hWf$(gDHj#&z+2QRp8f`3<}o-8uqKn&?nL%ZZAU`=_K*Iap^XJ?V?yL% zNa1xON?YJw>lO4X0Edv8vIFvWlNm4D4bLr~?7I$q@`dXOZ~VCuYXXK)st}jteVn=F z!RD1Gxn8+w7Z2>(q@NkYa7|tKyB7}T%J?ONs_vvf*VKa9Gcb?42WouFRgpa}x%2k# ziOU5A<*_opshuvw#{TGF@2b_mFU~X>vslpqdF8kZON~1Q0J33qgUz8ezKpIo#sxF> zjBNeS+l?Kn^pc@_ut1ZllmSQ0y=K1F+?i?&BQ~L3ET|@rNG6UaoiI%Wv zsb)-Fq&rYgfLn9mhz>#$c{J&M5>29BlMCrbD16No}ert;t?7Y znFgMlq{)g0;_vshu211qowy(fL{J}*M1{a7@kQBq>qT&{XAl+IAu7X7sVNz~FC!Ia z(v9Hsr7uQ#=AlS6h1hbEKk22k?(Wy@o0xLW-J7h303RW$e_WuRBE7z_v^3GO6!&{f zX>hg3u1hZuKLel_HG8b0SktCEO%!`GV-2`8&si%|v0#z`l1Ft4cS9rv_y^`@Y4*qA z-_h%g`&>N>qrXeGD4^qt**lGwM>?g7Fl$~eJ;!ZhmsSZ+T{rOSC2 z?pa<60H)}D(M4bRtG}B*X!{t7)wFEjA;XwCYT?n}{w%6VA>PFNEI4 z^3C!=*A6o1w@KgA3)#$VAV3-=?qg~fa-Apa{<#ndu34Gci5-x&NoHxyyyyI4hBI>_ zt>GZsCJ|tJ*qWrC`aAc{B6sGYhIi7c?a2+k^;S+rtFob(@3&=_BiE|o07wFBJP&Rc zfL&-E9Vgu~k!CD29FyYvw|`7E1*ZroEEvp%$-Gc3j#UKkoLIJ~a~29MR66`j=poLr z4NM_vykqyCLZ7$;#iGGQQuvbV$V}cWWS9}ru#tC1zo==OcsOw#L!US(G2fUe zhrK05T%B#GKItV-MvZ>uO<;a#!v7t_8Nv>ut3k)8(_Jam!8&(I|1kj4{Ne!&$r+G8 z#rX3W6#m4sfAebA{oFvU48i(KwmZ&`ITudB23ewX=l9wM@0Dlr(D3(Wjd#8Yl5Z8@toEz zJh{5xB ze!L06o{3b!ZSoxkn#|)9n_aEtT;Y|L+Lk((^jK7@#wE2r?+w6D~R=ovTgE< zA#QHiQuD4tE}Cl-vkhi3>t(h zM*SicIkAmkqCQaqwWnP7Ex0{tOe(eFf* z4-x=BHxtnZ*$xN$+wu(*UMUJilEhq4RsXIfoKZlN+SbGN%+TQNmGuZMcl3Vr{BUQn8uD~1Nl z903^8WVlSk%1s8s>@dUU@$Zw2Nb&>aN4L1kCk%b&nD9otowKD?H$hrJ0*G1AqrVN^ z+fWXfkYcD0-IIy#8kgjSe?KfGF<}3&nVlBL2Wbg;MzzfS}=0H4;3ed(Yn?YKTJog0q&nhj1tij}qCA zn7eks^lIJ|VJ-ToWm;)6P>PhMbJ;nAe$$&LLF}SC5p6NrF7Ka<_qo1||A@B{=m5X! 
zT;4nP&x(N*f|sqo?Au?pkS#N(ZQ5Pd;vX_>6m)#?;ni2sX6lNKVe`=7*|L4aj9X=u?*cUm`@F!{6w_zsf}@SWz+CJrj#viH;l_QAVUTf-Y8peEiaVzqw{jk z1#m33GIN~YvY}wX_9`_6x?Tm|(^h19`#i>^Q;2)cMu9$VXPJv6)KZB>@T0keQSC)s2z`?e>pWbeZ24+N^Dd0!@zN=jX4FkRABoPLcH(dwQe z%^LWvRbg3?^O>)I4Z%bW#ehDI6nzh_I&wY)Vg9s^$$b^3G3bGCLnx*WvPe-G!ZW|l z{qWyW#@Lw2Ac%#~Uh|gtvsCV2pqYzUWG-ViQJfkp&N{}T``QU|scLuqHALsd{cMvW z7-7F~wm~uJBiP2A(yvm*NsL=iDW?|;&q71Fj5di_G+QOoAK`u65aL4IRHy zb$GC)_XJX1uH+q`>Ho?XOAYYb`#u4yVZd@1FRrhk@V)^5rITU})}j}E?uo^5RK*=s zw$C3AdWWjhaJJbuPl_$8twjCT92^c7N{wHQTe~4pd6x#6oCIK4Z`yKjPnFcelx-ZT za08kzEV)s~J=Iv?JT5)0gyO2@|AORpc!RKlhHLxgWV502&R*2ii|@)&7M=Xd)%kQH z+@csYb{~MfuH)yKQKJTQXaBv|dB`)40gw|=XCBf?Z{BAF#4c37Oz+>%+{rgQeODHH z-TKpk54##)JOE~H8169rnm_+^)85{^FV-Av^wMJYE1v z{vqC0xqm*gFieyEAGXdhI_~yS*Rkz1wr$&1W7}ww#uGKRZQHihps{TyP2=QD-}k@w z+3W1{>GyS3X4d4I>wcd5UKVD=xgVU5y%Rh3`+Cn!AtfT1h%Kj>ePW)YVd>mB^??DX z`o8|MLn@Q#(x|W(i~U@&Mq8~IhjQ|)z@q{cd8pkZtyr>6uB{3fI(2QBxV9>}PFd|V z3!rXa+oAku?egf(>U~wy>#B55KGatIbWV5stT!X;@bHDgB{^93Y~HQ-Gd^~AZ=Qi> zOvyr=>BF5)%5hVY=;`Cl+D&RU8Ozn0HAUN*qqz5*moI&KQj`Mix3*mpQLBcd=y0WR zv-Q$U7NNSAC3RP1{8`{0Ghe^PZGhPa~BJNITijqAi;3D`mkR77yr35&7co#SWGfQfo# zOBsNN?pgl<*RqY)Fgf>^c=LI6mi^{N~{k7^^8V9lD?gUzPsMsSf~QSvYD&JwU*!d*D z27U#j1y-U?6K}Ny@}h5E&eg*OVPXN{3rp`}YoZNfQR*oTS*MqhWGih+ zeSUfOfl0#vYm!cwDl@ z&2ER)Qs!Q3GdH1)wmxWwa!MF6i`P%UEux0S@Cy>(R{IEFs!kSR*SBiasE$ejpJP>k zS56X9LQ#*d{@30GEi>PBPbas}sQxkbfAet@5#jQIke^Dn;wszUT$ZNUdqVMvz;Jp! z1f%G2F20i5ncbBrN6 z1IDfyM!sB}kI#l`GSx2bwy-hh5R%u_`3VVEJN-E(Ygji)Pp(ZsD?V->AaqRhh{rhg zRO~4i)o8tS8wi?uVAZ9$kvnWX4Q8?b((el9|8+ay!si%y@#rp-?J!>EkLr8F7B+Ne zXLSz96@lJO8bW|(E6AedC+PpW0)n3cJh20*N*zPM@o#IO6Fe9+x*tyWQUyrHNN z@|K5A^CtPiW%EvwN@psxmI2svhd0#~1rK(*B(=X%LIXMOmUNQWGH zN-q}qInplS2Yky=U=pi2&TBd02#(#bax(uC^KMX6% z{k8X8 z+r}uZLUr*^{yRt*H(q6ZcwJ{^#o703(DiTy{w1#;iz0gzBq$dC#(r6opHrhm249sD zXCl2CtCsGAefq}D>KQ9v4`1f_pZCAVy0Wg?MLzi6s#jy*A6ySpKJQ+=UJLeY0H@oT z@HCb30v;@&0Mm)UvlyP7*$xw&O2hF5?ghwqJ^E6ey)U+^B>s|1SeU#kc0AC_))_o)cr zCz|$-`AS~jH*=!%j$Z_R|0!q8AxbT-ja)Qopq=Q*pn4r`{CL}+>)-yPRQ7gw@AzQm zDWqh@sy(QEPwq{Bb{X7fJ#2T|@b1~T>8S8gk$QhXbZt%RG7}%lFyuU;lJN}?#J{1a zm^?$B_{nLB&O>Kh)q0ch?h3#$CEbp`vw>Z1FDGmvny}%hCL~Nb+`C!o;}d&oA~^g* z`>CQ{vZ56YF^RCZBx8nYhf5#)Oz}Wm4v)EjIwoWQN0-+yf`jE9tCCU7`{rBl$l7^EQfKy|vE%x}ZqYjj z`)B92>u}48z-J|u-f9$8loXF`>mP6xzd^J0<|fCT%A*elW_^v1W}T{5K9GgT(m&ZN z{EWF=<3%|fj)w2^0;{oWa|Er0k1xz?Qm*{`uPQz|qU%4wd7~C^(R|nd_-T}(xUI*- zboenAd$0rIg7lNkHt@-5@6?t@fKYu4cG8{VX99nE=lO7S+G2J*b{p)@N95#)?u~Ug23)duw zw3eSIUlrMIr_|0Uz5!GhVAIV^IL@xo2Su-3(H%@g>c$G(R1jz|4zn{-X%KX?imL>iiqi~#58#R$@sA#|3!oAm;v0e{N`W-|u1f*xd96Ua(4f`7R2 z$hMP*nFkw9!wykC!-SE5i6SGRu!6wU(f*Ex%@+&aQQwY00YD_&jdBKuW1vn+m*C%r z9eA3G!6=_F&l=(!r8n^?Y1l}KBcP&MSjEc}h^6>Z1MiQjvf*d&BiU!Ipm5zW!JMAt zI~BRc>7b^!V~iu{4zXFHJYm2ER7D>l_4P~fX3?TRlZDwFn&g>A%AyVn_n@eY@1hWr zvj}LRh{h`)0D=;_C+;enGX#n_Wi;7wD{T_KIckl7Aks)vwakMe| zW}t|%lkD&{qj!!#!YV3IcLE)HaN+7ZX2-ZP{+_hxtzz9cSQL!8;)}4j6gTWxn3|RX ziCVmi6P++0SOAQr@X}A!5Us05JzO%3Z)BuU;bGLpfJE%eFlEvFT#RE1HcK6No<}V2&_eoH-cN zQDqIav6i68d0A!ADq~nnqp>37lHe6kxD13x=DCY}6xC>fzagXChlnr3xPK%?)yFXh zK+VIw1I*a-xam(0(@@AA(&tXfXk@BfCh%qJ5k}!?O7bJvD6q8qiLFkFH(_<;Dy8I1 z@H1wl1CorCPjtbe` zxDhOfl_l7Vrv6i{69HxMA)3mrxMd@DVkH;_z`{9-GYMh876@v-ku_0`q?9i3565O^ z?NZ1_N3I$N4yg&gfch0Z2BTU15Cu34LVtp9!A~mdM!~hWsC9CLrA2LqTT#{ZE5bix zT6rn?he-;Fu_aOEuxpbV`PbVdGe)&7)Pp()xK zpbp~tLF3n33~GOxxsaS_6WT>z2MRq-Q`hbQN)uT@L48D-o`v&KehynRn7BI6zVzVT z*E-eUdqNY`^lzrH%ghYn)Ep^b6}>^Z_m$1F%Nb~r!RqStIL!#=8ZUJeytK4=l0_dO z&QK|Fupq7s<<+|;+|eTpK*f|U1Qa$yI8 z-&}drr#>I|AuO(5a#Efwv$snK|w%1KQGeG9Wt3QMhzivHzBv}o5c5y 
z%6~OoVxo{B>iAF0a;hzU&uwqE24E>wFUf`{`AFmA<$m))`nVnKOh&r1=zMFy!xLIM zr`SbM`HXB`=0G?L?$9?>t(aMC(Zd9Dac_bg^;% zIn3>J@&*2+SNvhm%iW`}aa|#uE?eV=nt8heRt;ZA^>yzR;>Us-^Y%5s22iTUpf{vf z$~lka&^7LQOl(7GAyR+d)4aLha^>3YM3gLws(=YP(FAf!23C+Hori)&vqvkLCh*Wm z0`H=4roXzdOlPTSf6mFA4gY=Ktu^4d`MIR5;ZFsvLXmyrRn1+=UB*Ml6ue_`V`xLq zZ5(pArn&hglCvPKv@wJl5rABkR#+4%fE*SQh6FY}OpAJuYMD&}w*WX#qY?GvYWc0} zZLrF$-w&;qhOrP`Uli2g8y-Man4W7QazW3ruCXd_V68Vg!!4~$y0)PxTWJx{jt(LWG z@t2u!?{J^iRMGf#MZwjGhig5-Rk*v_R!dD`(8tI8GM#uy!zF_!_4YT;5d6z!OWOSw z_K{bFpT%c{8Eq>LgS^}9YSpo>-4`o_2r1FBIQwHADu=hxaJ~b_o9{w#J8@diDRH@ih_Y4uOQ2w741fy;29G!jqwW+sVwJCX zQVKqAUEU* zo3soO6(YQ_mW#+A#%?RgkU)G4T>>JNJ|bZZn)?S>9F8jSj{)LvaX9Tj7KIx~RLgJ} z=ZLVnEHzb|eTl)&X&QStFf_YG#*M&GCH|738x8^l2F3d{Rz#8^K}}+XaGnygdN9cd zV~+^2 z*2i#rm}b8lg@ddY6pyOYWI8S=7zPe!)T z9Fb@zs+BXB@25cbJ~{^=`n=)CCI6;+*BD5yK1i(1AszCPb#F3ztvQ~>D7`odpRv2j z&V(Jj-;(t}Me*$4k6q=MS+boPbn*alfC^48ox$-w`TBdNbe=sQ+Ot7#HmXu1DG4KQOdI8ca8xP1sFi-SF?^a3S+U&h1Y`k zc)OA>Ru)|XYSHFBQ{Pz>BeQ;Gu&VGbC>rFsCtMGhZ~sdHo!9R zOX&}nUDjVtct&E5d8Wm@#kQH|6TXTG)Nf>xIT3k*Sj-~7x+`efExABr_5yFVjIr9+ zulw=U>J6Y8+sOkG;GH?MQ`a>eb0TBTLrJ`3IMJ~A52Gd`>27sP#(jQgA&fj9yfiIL zWWkeKX~g3fn-42V4=c&Q_B6VZK)!MKmC%1OFVL9o{J_{CUUP=?d{$W&Oe(@T1AZyf zl-I=5i?At1rT$cya&`TfAWZ>5H%$1vXHvKs-b_|Kui8ItwoLUK=@Oq=zUq zz6LJ45L5)v@s3kZpvee|pI?xon5i5xhD`_6_~J5x>I+xX@yTA%i@q*E)=M+rpLaA6}-)AM?GJBdt z`Bt>I=R+LQP>Q2lf1&6f{P6(=c|sUfWJQltu99Ome*ew+n=tNS(^t&q2U@M^M_Jh< ziq7fOK=XwDw4{1rh=&Ie9Xf(0I&iL_A9O}9n4y$*+5y+6;sRadhi+MoKErnbh&1B# zxJ8M8O3a?=N!Z|0Th(L)Ve_7RwK-GXmtBs&(awF%T8%Kf1|H`re45Se#8P4HiJeu1l*>)%@j#MR3K84#N zx@UowaOYE85T|naYY{q3TqQa`j=QTdT$#MV}kumB0H$|wfF z8p%7}dOl*XQ?-slS8p$VEUti@&=%oYFq%~^x={_ItP<33*)$~*;kI8X6gwL;xB1sR z{W<3NPg3=}ioP>pNTic6cI(lbV-ExG?}iSuEssXu3nQBZJi%I1N)<=%pe`(S^;s;_ zp}O!qTS{%i{ae-Yc=Z?~VDlU}!JEthY&(hpE<{1hy40{`b0TRBH(hwfvaQm_Eqwm6 zLfYFz?DS*A98O|5?9IH4Bnhg>>^R;tS^bUOmb<0ZA-$_M_=CQtpYO4MzGqlIeW$&> zcgg?J;@G3dK`?+olSn4q$xLgTZFpc;@Z5s#aA{TR7VIeVqg$42FmQw&u*O*kXsk9h z(z0-plKQSIH3O5M+Vb<*{`|;Du+|J(}|~uRmWE&DPtqughvOE5yVvXA@_lt zV~_aDA2mVkVBgR25tH>Bk2|$g04NYOcswl-5}74dWwyo&F^3%`@O!}}1a|uBVp@~u zp7HyhDR?I6E4^-!MO>kXJoeO7eMtp}l9y-P`M9Gsy97_6A-0nB^?;L=5o05w6_p(i zMf~qgm7hkJXYPAAA4ciL*QvkTBnO2CMUnuL2n-Hw@t0pz%p{;VJ#gBl0Urh@uf5Wv z?uE1!`Io!{)%M!ZyGsnQqHVXSB5j6<3l%6cUdqPMHsAByG3hUaFOe2%jIIdEs=K2J z$~FVmI!*NIowWCA;cWuAC0LmvHpal1^#&4Z&vU=#~@I~GkQ)r z9hA7OV%8H|gBx8U9BPHMsh$CEuFia~7I*K^ag~hdEk+ltHI>UBDjOF*)P!5K(aLeT z8Q0iHUZDhY2{2Ee9-+GNTtJs!L`)tu-AsJWyd=|r6$x>qG+MfTD!h5&4ilvM^5H&9QDC6X@aQ;QQkrls0835UQJ{tsm{5bH_}>Fbcl4zA zunuxPJ##3rnN+$901IhV5pm^`)ZLfJJiKoAu(+gB$L*>fc=x-8TrY^tz2!Zk;$r4o zmvZ90T&1Uoy-3Oe6r1n56q7j~>673I<^^w6YnAuAXs%7OU~bBL#)+wD+HZi_h9h6+ z)&xdV)0WU7#q1a<7pcXPXxs>Y+BLIY%v4g+%w9d#oE$w?0E{NIW}i+I3RM1Dv{p%D zA5*OPh|Q*|aToP`ICY4Qj49yqZK;-Im-}!X@9oHD)4dIG%>eUrvvuUsCkEog&Vat% zJxk#gQ^89Kq4QD?TqW?Pdj%WxBa^lcF;AgL5a2!ZXB9Jw&+0e93m(Udc7SBI#}vV= zwBTCl8+b7X0AEz4rmcLRmZ9c(!%IhcS@xlb-af%C>8>JrW+Cn9yUwr9dsNPW;85E;M~DnBql7Wmwui6_8-pU`F}YN7W}_BFV38g<9~9V2MmwP1^u7@ zF=T8Jl0>6@Mt;YU}O0>3%lE)HR70^LW7FQ zDx}BSpyV&;xdA~hY+P-rSz-QqLgu#@*j+HJYJ%C%r#q7U`&0JtQxQTY)ePt6hy>OF zkpP<<3=KA2yXRtG+Y$Eg$*zggF7#t-O@$>pxI}G(@9mYKVeBM;ES>mMo7CV3H?`{s zNy&zEo4oYfD|wETXVT2sacU+ue5vzi_A5l(QuB~>Fa3Yr(M2V)W03S08X zIfRJYhIU-%4PBGiqET8u$`(dPxH;9Vl)af?#chBI$08N~S%%h(Rt;@@Z)8u7$U^W+ z@;Ib8X9n6?D@ITXpB63XNvO;{7y^!{@8J5etm=8=9JlTXR0+;A=YR8tn1<%qr3F~$`~ zPc*zzP3T5)c%rNncE#Y3i|VeR-&eRb&B#&-X~<3c%g6bDMm5FPge<-X4JmqzK;QfB}vN=#DVW0XU)U?b@9nDG)BTzQLGgv#6U zkcX6GFuu{HY{JF?XUFqF1pMO{aN#9wlBl$A-t;!`l_hzqH6-l2;Zp96Wx^ZGmY+rI zHe0}2SK 
znp`gFa;x?SzZnk|_Ha@RGg5SQo}2-!*=7SpK-8PZT>zq9#`%19R+Kgc;w{@U!38(C zmb0jwmKdWWNE81?(q)cwr)6h<%2ESP;B_ooIT+s5jj|Y_z+4QV<{gQ1-SIVjWEsSmzkO|P8R9mR;BJ9-m(4EpQ0ez6nxjfaO z7rw$+K}xk|0p0m&0RU;UgZfQVcctcZgr*GU)C7F^ZAG&ce=CYxCvYmptG!14f;9?6 zy>HA`I%jQZV~Jixt2kFY-QBwe9=)eGT$wSZ+|Cz5pGzN-_Xq$*iU@-u9jn))N{x9E zyZ8n^unzdb2F@q*Y03;ozTKsOKWoAsY3exDw6LlTz6#BC@+A3zy68S+?W`m~_QBafy)FhlqcGmq1NtL>H)C#WMab1|pSceCg4&7SlHGvIln z$sUsZZHrV+kp9FOF%l3`O8tqOV|k4SY}oI;zJvyZSU#tC;~@EtHu}XAUP;=kn{K$3 z>1H-!EmZ5~K)8M073bAB8TNkZ&_K<7z!^l^yqdkFwOPquC+N@z-R5)E?IhrH?G$VM z{-aA0ZFBF@wqSkitj&o&klDp|6!EQktnt>GfBiz-;Xph9lK#|mrTb**fj7WW;_FFU z2_83~#G9$(VFM`zd;yoc=Dcld2^F!!Z7K#+D+X^hzOSmJV3m@8!-LV?A60lJ9=GG` z4xfk?hIgWkYAimOTYbhu! zfGnvOZI4uQ(gROT&PVuV;U=yDW4!)2Nh0~&QxxlgD@#D*8O7f9MG>844smtfG2L{- zdG#@|5TF`?>V89gxTbJXRoh!s=0k!IL19Qt7`yTUDRI+|{{iSf(1ZY zRbt$@6cF>W@q^ade;|+)-RseGMdRMp;=zWlZL2$zkQTB+KT?qHW}-ia7)n*-Bwl@d z4R5qGc)AN4N~e~t3~>w53yh&?5;jjdv2#%;t^NG2v(&|O^;OlA4bee|(%G$O+Kw1UWT=Hb89Ai8|SY^>kQ2rB$s<8-owjIGTBvC5B7aC1t1eQRact)Qv^|(nu+x zVnwR{S%RZVZVlzJs@rT~fFNu@`PLVrAkx+f*p})mjeG9VQXKrSeC#T8Ko*x=i%PhA zk7=H8%jU!D&s_K0{-~kqnp${9MV(=@C2)#bl%kP*kPC`h_X?uy0)3m<4J>JF2{3%j z+#rUC{XHfoFDgx9lm2*M66?ncx6$R{z^<+zJm*ttJPR1s|D?RVzmyltygbkYq&yYQ z)b+E!BYpqZTPjC5Ubh#3i-E^`U_C=;DG!!8;CVP16LFNU-H%w4A zx9D>!j-5izU{%IFy9%vuZ^~)?9-ON2|QaN@S+)Saa_UvPyk9v>It}5Zd3}HOaV(aE~BFzh!NK6V(R_LFQu39w7iF`t`<13h~Q9}RI$DkgKZ8|$Fo}h$n)dJ37dAW<0 z?P~iH=u8t{z=OjBQ1Dg1>%zfL*}-i~4l%s(k1cUxff}|GiK30%#kVl=)pHN^yQxBY z-x`O9vjc8oAe=d@CGHk}liU-B8SkIK_qc)IY`%IQQ#&HFcX7~Ndz>_@CzqF#wU_0- zdCXmwq+FGsEy+u2{#lv*fOi)H%~&n~ty=u#F0qvKAY#!DAck|BN#~Al0h26j2$?!2 zKcb?*J_gUC_Kv8>T)gs>4!fz13w0+ZcN-0i`|=!4mtz<)fSMnisH3w0SCz*@#1@#$ z+WC`Cg!R@aLYrJzQvMt=B@z34(#h6GbZyWrI4`_`TA_~uh5u`QT4e08N$zbk z7s-GyPZ*{10@RlbVLj5!Sv#>T_V5GA67ZSEM6XSqukB^05U(nNVHK(prkvKc7aTgTG$!{xMMNmEkC zQ~D&*3oUq;3A8k|I3v7&vmSMw!(`KXgTAHQ5ACRcE2?me07SbLjnq&j^~2nXf%l)Z z8(x&OK9;xOR=CvH{I&f|(#iW(DTs%xpH8A!%nPw*3cVted^=H-g3;78=j)1-vRgZ=k>MHnbtSFoK*_~m`jc2rq)W)yyFOmMwN zHP?&w79q)zc&=~+%d2)t4-C@5R(dKI(NRR;E^JTG=2hWk4C8_OA?${34llt0(#hsP zr=21jqQB#bPEE})*zq*zR==sv6v({=d|#U@02D;MZ?jbRCzcx6&I#}lAc(+G>jfqC{Tvq7pxZa0cRY)?G4(M*MjC%P_)>_L(!AOFu)Ng{Q8o)X!}p%ZX#?D}Rmp2M&T`AR!?k)0h5$Vitu* zOyVO0lV)M{wo%BgaJ#7c0TQ>~C^jmRk4-!Z-7E4W*BWz?F1asf#4Keq_an^PqP?2CQzGe86Ad zTFY&;rOl4r}?rkigeT|w##&oWpIp}zlC9c zr8TB%wpE`mg7_EZ0x&Z7G|P#>!4e` zojL=%A9q?>c>nCUBRde?c}SpbNE%{ocnJvIfKUF3gDsooXcb)?DH2gczA%OF05Ni_ z2B{W_;w$rlc3{}%)ZcdY0c_vJUE5_Bf;;#wbCCFeQO4R9TxvCgJAjsYAsLwfPp#*$ zKXHPdAax)L%!>-9y}E-uxg#P2hEK43uyX?)bR%DFo$p*BPrT9i*mKEd+drgHvzi(qPQzQ;ygG3LUSuhQmG0Ftv-lo9)BG$+0B zr)C}L471Xw5>75&ra*Xn{%1NuMxXhP^)j73BvAFiA9=-L(E_F16?!x1du6@*Pvvl- zR&&p~3T@(C5J~}z8W7!xYHwa7BLd030xB<55;*;nY%=Zc$GmuFOt1$%MsR@?wS5H0 zH4k)PJ;!U8@q2(Hzy)h~ts+UGL^w-24T)(i@$VKcmy-oV2D2T}j;Z9nbsXZ)h8QQ5 z*Gw12U{m#w9l=mCpKlvmH#SO(-x4-^H8w8NOMCc99Vk_QO*j(}r{)#@z>348_>zf| z40HY&RG0Bb!kWTGveXrUC5KlK*_kl8ma{kxM?Hb^OHCQZ8=&dgTEf%CMIoS_w+*aG z0^YdK63Bb5SWnOOQ|i^aa8arB>ERIN$zgv|k=)+D;3Jj#0D=tlKh_Cb1DZ5p13Hho z^%+U9I<&AbL&Xy#x7Y^ly&HwiNPXx!BtSkXDeS6UP5tdF(U!4=8VJ6wu79FjMKWAU} zi44fI&17Z$4B~=q(#R`$Wi)eL3|!YPY14n#?%>7YC+H5@==01XirLY1MiO*3MdesF*QBKNv86^^c#ieNyL4W=^{tB1)qwKu~9oDCy%`LgyTWW!xsD$fzr2GF>g7cDPW4L4@_}& z(Ex3+6#lqPgDv{Gbey*B*%8~!6`W@2Ei3lkw+8LOpvHF6334Kt#i7`-TYn-0&WBJ6 z$??MBb6g}3hnt^(DQ?qaplitU$x#(B#f=o3tsuoM0;ae!!!y$VNpTTBqO?EiE7eB& z;Az@Knu35SuFSGvO3}&LVki#{;NKXRdSvp5@~EwKh)^a}<4w;_e+#3an44~RCM;(A zzwMrK4Q(>uSYIHh47Q$bUM@^8C3liezY_x82nIsRK}1lH9MilUMFIU-h8LLP&Jmmc zWA|9-Fdv?qDRPbPS1Wyr{$uy8~;P)0-_6Xj5iY#WI!|v{@1Q*vUDZ z5L=0RzF=&GHtxu1CS|EcVub7LWOkV!F&9-`L9T>2H4H?tAwJ~ 
za&-36JVUQDe+?hGmKyWK+ScOq8sc}L;j@_OFOS-&D$A%SF-?KbQ6Bv(_`duj`1ng; zg{pyq4?^Xy;49gTjGNur&V#w^lRk@rB;zg0a`i}$e7@_GbiZJPl>?&f=xbwNSqM_`i(%lv1q^0Hff?^!4u@5?h2)g*| zeETn4AVx+YIuxTAJrKO7Y%7buYB?cJDo1dh>OEwfem>R$YP$d;aBXQ1vT&*->Iuwt zKQGQC8NZQqMrUo6_Q)jk{gK;ukwO+yESZSgXa(y%c8Hjl5}N7KZrQfcxM?Qmt&Jae4PCAtfM~O_m?WjtbdYP?AM_Jz!;ccn z$MONZ4q=*a()I!XU+t2$*pZ)cM5J^Iq4qAle1Ye+h2F+2$|E=DUzE>4`34+kW^J(w zMOs-w$|erBU3A}j{_x>8`_2FE`n^I;HZMpqpgU#G>sS8rKo}h8Kr$y2=(%4+tnUmg z62tx&(NGBA9|1Dn2zs(QdoFjfW%$?kTt^k4-)l9+b@m1%fPGIXEi?5baPPb^|C{A* z{3pw8H_hYIn~p#wcH{&dGDqK03UJR2WXFy}^EZXHW&}BTmsabLf@|Ld*IXNa`N%VN zMov^_W z1h~TJ7G1=ETo060A^N2^#viRKwz=hXbD{`RVoK6QBfx)|e{){P%eDsxA7l;khU!>{ z@{)w(A`!8p9;q zCh-mUs4FMP>`!iY#OpC3PCd4W=#P3+ywLz(Gk{dU zK96%(k7TGbenC+fK2jqcE=)b`W#K{P@3Pp-WER z=zTY$*UCoP&Xrp^E#;J39X!z%-p^A5I|h8;ZBD)p`z#eAc)NB=wN{oH$2vToxI_9} z&)HwPU(8(+Zf;@UMNo7^_cC@M>HzW&(^~dPG>ul6A z912syH^T8G2u%cYP9=1uHVSmtW9;OCJ!@7SIoDVI4&YV84`g^SToI(l#;&%ZDmXk2 zIL{;?^{p+mkXE*ad+VaDOUL=^KPG{Zu4fA7X)p#rPRd7TG+h){Eiq=I4S-RxSjT^O zk$67BQm3%HsqIw4`KOOXO~7N(KML=E;WXGuz%7}(+su?H#7pRBtw7XhWQ!jG8yQd**dVascFRP@a_ z0Z79Ngcnk!+sQ+mN4-CnW`NV&BolEXOXmGG`-`edpzHJf>-uuZ{<=Pje_Y?xn9%(l zsRE;6et|?ei#82OVyv>)Iv)=d!!Z_Xa2%sc@eQZNSg!*)z70U_=Z~*Jar(f%{*nQsmR(GCFQrmSYqrz^1+h$?$1PfE2~u%V(bX&(}Q%3b&Dv z3ly0gUXfVw*a}TNRtzYOCYew=MhPoU0EZ>NK(`4>4TT=4ZM{Ep*s8-2s~r5>ot8z} z5(DPQBr8EUSb1{NpXF({cQiC4+U?C9F3r!DQTy3H6~s=<9j%cx3HgcEEXQj*7hMli zPU~9?D8(?n0L*n#e}Ec#3us+9#7S-Jl3-_!GjgO~P|rW`H>XN}M5smH`H}=EY6=mh zp2OsFlQ$2r7HsXn`qiu|M1fV<(3`1K|Em?trre8?O$AD-!RnWQmcv4Bbdj@f8O|l%BJXoR=f;hOIL#&)<4uf z;Yhld?e0y_p~j&$mSqm>d8~FR6G2sV>v4NWGWC0XLjOG(3l5?z2M&P&0tEuodP4ZLXMapd`lflA$=@Qk*$$2Z{>60n=%DXuF zDZ|U^+vaBX$~`j^4?uXtV-gY%t{p(YXjC&VaaY9zlusOfe=u$L?kvVG)2J}AFeLKh zl}Q4WG%>S=pXtZ3z#lfB#qTwW&8Ra;5-dIPwsutCou7_{x+4ZX!m0V_l%p_l$ms?> z@)&XH2LAd#ZV90ObNcK5O2S0q8DH2UYK(4@ekFjACIk^M^~a#lp3fC5`N#il z&Fs?-<0e+-r4d8@m;Xad1!R1elut}MG8eS$=2m)jsk2aHPTT#xu8+7IXAx#hHfw)~ z79UVi>1!aP+!BY$J-?MbE}NmbiZD}Mk`BCP=!Eac_4RLWbIC%7N; zKJMRSF*DfwU}d$35`$ANV3IE|RPy-54)yvJcPtg}Dgz>5L3O14zjz2_i7EqkKHeUIYVqi^{8D=i(%|^R*VAzy#>(oF4rDRT!oJ~ZfI{CYrP_)@z%em2)S2PD zJefRF=$sC?WCID~@|QhT6p3%lCK)efL!K(!^wVrkbcnrqafpvBj4&AnIzHO+zXO+d zwR0Y_!Hv@EZZCX#1?P*AmWdO^C32?!qOT9d+Cpxh-JnpKCg|%qILTlclm7bQw zAS)kqxQj@X$5%v+=Ef?^eN5EZ5bjpDho7~Fr1q7`2e6dO=6UlcMyG~tiSvwWx)R2b zKw(7=nz?t2F7srA-n%B3AAq8yINoMGrXlwVfv((s-24m&Q$<2QL{Qs_%W;8g=q}AF zyzX81?zUXbT9H6L!6S)t?)u(OItvU?zqL%e~3)3rcx}k;i7tg0uxG710k~!Aq(^ z83ha-^k0i0J`Su{L=#_F}c7qSx0lWH~ zROw(ITu$;*8YW1N$NpzKEDRPPZjF$_x(y&3<0@C*P%PjnK%HEH5B_VZv;q2ru}i|X zU$M)N&}djSBGNlYCV3Uf&MatL58}-%YmEclb2*=NXHHy|F=EWyV}Q@XH}GTJH$Kd# zc3I4A*N1(L#eTQ=1wQ;CDR_*lLvLg7KjOI5DidD2VvB zqU@(AW}I_o=UjU9*sli=YDpRe1S~*Ov&D)HVDTru&v7W4YYL^7TnW+ab$Zf~K;bl@ffg<76T#7)0#OKg$xFF~0&TtB>vsCxh zyKZkqgzCE5KK%B71j+ic(sS94T%cQP`2rG*D0JgP;PkjeNZ^VGk>jw6GP8?bO?Fa? zv-u@A#6cI>QF!~0_QM1yS=ed6ru%@Z?JG;_!D(0G#<2gHb2G(B1MqSm^AFBeM>F*7 zj$x%$obyAUyOERfu1nFHLd^9*wmN*v@IRIvsy-C`Mb0O=+}sX4%;&SSGhyiLoPWNw zEqQ=Ogvz~$PwYU5?=5qyR-a=zr1aV6tcHuMQZWA=?lIgUEC)<~V$2h-^y$(hijVfy ztKfbHjwLYFGJ*AQ3Gkg$+0b;%Qv?D+&LBXD(B~W*Cu^p8olN3$;?xt(5MPWk62p{K zOca7hl|}T!Tws^@57(iN(@a@dzb}%~(@|gd(Na!DDtRZM zwr>?)$wdTj z9mypV!TAdVH@@QY6;y9WS1W-7xHS_xC{gCMdrz>@iHWv40;|Hb7ZXD3#7*@H z7Sda0Rvn^$Gyqx{0BHI%{v5@p^&4^R3&)*yfCPczlt7&{J8*|=pbe<~3MS$Iqy6;% zYCky|)hzL^#UB4^zaYw|tBb$dk4BQqnkI67{k4dHTsm9&U+p&%XXU37bLPr<^MCj{ z%c!avwQJK|l9JL$cZYO$cXv0Uz@|Y;8kBA%rMsn55b5qN0qJjT^f~W&<9xr?*uVE! 
zW6d$|dClql6PInX>*yk1`B4MrEm-*_oZqEln}dD7Td?oi4H2Ls{HVvyO4dXlRTuID>7lW}B{bQylSN}Qy zuRI@sV~!|W6gAsp2l2T)Iu&an9P;02Gv&T5v$Eu6IdlsW?Wu!??_1& z+fmW%1%G8I@O}84eEabZ^f*=1h&L|NHSUV3hAKW;jdy*?yW?HCz_`I{5L=zF#UZi6 zL5~Y0{8Bw=V7<|SdQWh|PnP_hgug&wXCS4GV0+{EZ+kukQ@)|~t-wP4(cJ7P=R;v$ zrSob~A+DaE$x3dc^g20x4eiOdx!lt5=}0Kn;RCMH`F?5zguf(*@6wMJfaGuoNDkXp zUbSe6dU+amwvh7q`KOAtx zZ*=UxB7Q00h+q6a5kEGwe?|PnG>ezOC`TxOasUo0fO2g9@JWf(n)+hvxxWo!lK!dw zH{)k}&sX%E@tcvbWvLkWoADb4B7TYTDo#;83UwN;9Z2>-0cW}_BXfTLwN&DER1`BY zD07KJp;|Vkj=K-t=e2F2d*oPQ;ktOPP!+<9uC*c&hQS-qCQjA)5k9 zx|XDxD0C#*MAVkC;UEuy96IfVYOJQsPY?8+F%A99Mm7_!i8fZX{=oonx_cGl zso4JHcyg0zhisWb!Qv^Mz42`~9o;0%+oJpjid%&VfoxY_+wjBZ`zODL>mkB?f8-Jn zknnQ_5`H43UrGsiM=;BW1_)6ke{ z%3|Gr^ML19gj7qP!1RnACKP5s=ay)(T0EX}EGftsfnUAL6EEo`U|gIDmikz`nq}J* zG@88NN1lAQybn)`pYHWpBNb0~i=`aoF2*~)oHcJp;s!9|_5ACKd{V z+2$)$RImN_k{r+epVzKr1s(~}y=zc?8+;3)L;leqaU-W7sE$8*#*7=wDUw%?b9HH; zM$jMgzjOqR0*_K2^4S z!HlY&ASr)O6ZM~PZ^mcEFC1tV`6dhr5mw@n@{+_uVWVvxYmcJG>H`8yK6Md8FF8O9 zQUn{$UkN!au)v=LeeP&bDUSjCKYN{KeE}G3qQfR0!~5lQ=GPn(!V;{Z{@8bxQlnp{ zYXJes;=jR1JcCHlUkf$fRe4vX)(}*@ArW@~bkm%?&7q+23Sx1b%@;ai*$=Ki@)p6+ zZ71XlxU4cm3yi_jO+J!RTy0D~5?_$~v150W(fM(t6Vcu|o6rFyGF$Xxagm z?S7d}VVQBy1vUZ{n)UiA>NU~ZP&=uxzJLwbK4XmET>;#D=l9>4Cu^39Bk6)$(A??G zenK^C4CnbDuL$8y;EWyM_0-c%){yAUnk*348}y>^+qd-8(@(q)bD%FW3v=+Ay{hn^ zQ8v}P_miyS!)LAWWWu|29T>@WeD>b&(jAqVf#b3ZiEqyr}Y?p?E*saNCQ%+qiJ&Ctj za?YiTP7V>n^C;^Rsn6NPRap4;W7!u-NI-4UdDHCxeFl54cx+1I$>vXgMN>oN5~>!h z$|Z3*#bb9E1iAq5o09D#NaaRiwkr>I#u3WO_kmZU3)w`OVGgtFEzD2m*W}|GKg>iP zlW=xqe9oXO7-XXd&k;U(2Tv|653&|#XiL8k;TuovUPvW*sy1;heozR(C1;?+a?YYa zY3H?b`mK?@PrG-#LT`8V?ZAK}mJ`dvdMjJ@quj)IVPtIa4`X~HRG|3Vr+kTCum$MU z*JtmKqLh>&l`MtxB^tjZKvs9B8=yMm)7rGzGSl_h5VVj1sDlFoj%)#dIyk$=!BEGt zs)+z|{DXeu)7rSzuCT5Cb$B`Ipdrgo1C8`hn�Tl@TtR z)(a!PaHMXl$=Q(^(1#>ex634}50z4LsjYo_W9@M=_GT9018Bz<11kIn4ao6F10J|8G~zt2d}@bI+4yv%7>0R*r;tyqQ$IDr_kqT9Qt?Ryo3biY zIy9JwpZ~5>X%c={?3c#X{14_Q~ zC^g$BP}s1K&#!5JygD2}@F?G;%)m%4sI4^muKMJC+W$r%^8Hyl_B{s;>$ET><9ipQ zEVs<-<|VewDOg&59YwXpG|S#VXO2i6FV$Fd(spBm1&)Ub%3Lj&0SW{L>G+=;k#|w! 
zg;SJ;@H(yf6N#ld*oiOD&=M~wnL&28RaCZUtx?2}Od5q@GTmzJcWw?rJ)L+U&^&S_XJOn7(T)3L}XluzgOtEU&K7NeP z-R#fbq{;VR)M=i8dL#fS58qvY@)!_>vcY&*71i|lQHKsYc=ztcPX?(3WJekO9Tkv( zj!``G$=VcqRq8 zmT}~j3&&bm2<^|LuA7z96dGY7|KD;+m zje;R9u0W3;+Riq*X}|{&>7$KTQgT=hW4bEQ>~F);R!HIxYZATs7AxAjZphZ8A9qle zD(pQ^qOk|TuuaH^Ls1y?$_V;>>-)3)Iyg1p1}uwgEiJ_#5O88`wb}pbD)YqqE5W+S zo>#<+fE=tD-dA1t7ef|b@VE7QI1-bh-{S>+z6U+sR3>o)9-umiIiGy1n-Z~7Q%wsM z(12ccf^DOofCel`$lMBj;g_mN^X1fy`Q`N;IKVoZNo;{Dt8`k*%H01ZF` zj>QhcIVXones5=YT$wwrET|a?iv={`=S~8)#hH&%AKf>y_S=I-j{yyMe7C#}FO47X z6|d2WrwP`8_J9TyBANm;py??4UnSnyclPHpArcK8HiFiCHALZ^hx_h(MV8tq=qGFQMzt08 z+@JwAs0B|c$IW+ZkB?3>u?z1{Kjon|2o44iPJz@G!IVelTfI@863AbFC=Zrn>w?GcG=qNR^tD;gWBEh0CGBH|G)9}gbl2YF`zQ8ro zb!WVNa-?4B5Tb#>AeRyu<{$VVIKF?hHsf@KhfQFeEs9Se91$?5;%4hnZWhJ}Wz6r_ z&rZyJu0P{GI?-%5<-ooh8~}1?^4$C+vMTB7I)kvGE5E;Vl^(PAUK5}=%;o~jB)uGJ z4m`$#oQPFUDvvy%=)1>YGI3zyDz)ovG%tUe23s@>ZJU4ATuob>=g&@` z8PSX;pw(z;`rZ!6z!=GYm|hCAr-`cQo=%~!d7WqJ--|7bKo9T>S`9d3r|*Y3ygTxl z>D2G((IU>rPetk3^qsrIWA1ZY-iD7NJ+Qb#-En)jFIh*?g}21G(>whXZPpmii#;|H z!kfe?nqh`>)M&L5?EVVf!0+6GMBusV3Gb-&^%`wavnYnRaVcP7f!vS+(edn#s;m6K z9pcD``zr)_2512FSiHe=q6(UyBS65xSY~B`D_5l9rEyq$r>7;U@WIk9Za6>GF-jZh zDdTWmz3DOBQYbO}K{(V5Faqu&cwG^MJi(}G2j^+7m-8W|Ec$oo_wq}d-FjXZ5WPrd zJy}NFedS>Acqxn8YZKc=HN+JCOM>L0vD?jRp_i*8aG76W@U%ZpYi z_Gw~H@kv8^+Dkhmz;%84G?@7?YvHa;LG2q1>XK7?)U2!%;*DYA6#o)wW)xcuXQ%; ze(twXJ_)MHTTqb%)m?5~Jg=DboK zJ{|)fe8td~%HkNvoKE7;J%&p7n)jUrWrloSd2H`&eJ!yR8>!(V%R@O9{t+C(Oo5@J zFLvwEd*Ta3B5D|?OIKC`?u4>HT4g@rr~Vg)hLp5#dbOYtAcx&X^)-JHTj1}1*@|PN zVFxb03^HflkaWrXkP)}vW~4G8rmLnwXJk>WIiZosc@(4+#l@0h$jo}%`z`);DUm&f z7dW%4|K|E}{MCCRAh6y`e^|||0Q8>Xf6#lJ|A*&?2TO%Ej!n);IWHUSVM^gSnd?R2 zN?AohB8o=LANjwakNQK(=!7I@2G>xx)i-~rk6{ndBzo0Emt+u{#*ZnDkbD!|-aXUe z_O?{p%#uWuq8EiavZ4E^w9`hCp!=RkXACBnt}!7675fd2uNS5HX z3Q;wWdKIX;y~e>L>lrBL8H&8hpYKX*Ty*NBSc4ASnUPvCpt(;yvjMrMo`eoN7G`_X z-VLjQ14<8_K*2VXaKmD0l)5Jdz>mZ9j^@TeZ#h5B)2#53l}<Wj|2l#q;+ zmvfw$;}dk=NfOngRUWB|#|^TIJ8pCKT%*3Jpz2SsXpRn$Ok7{O9bMVr1zFUe67vZx zT=@cL_z;qbO`3kbNDibW=nl8ARYiP>?8*LPXw;=F!eg9oqe} zF&U;q%wx>=NW?zfAX82bhl`J1`lTBzueJ^}IuQDEAD2{4jZWN{!OJ|KVl!lr$@x~m z2BJVQzuT$%BtziRzNue*wSGQ&i)z;ciTO4K^ zfyOoij@1!0@;s?)u_~OfSv#4Vkg3q^8TMD8y=A5dshCeR$xYQi_*Wvk@~Pk! z)YI>tu3_kfjS(iv&RK|{+>JapTz^)}y1{tQ3eZAs*#S|?p$SRKhS(WQ{r4X zw!ClDd0;%X8gt~t6#RKF1_zM<5dT8L50NYX zWye>1IVhT_i#QSGasz&nL0r>Vw~2HTL-$wCBtPV7;enyMW@mPco;y zPe;Tspo*~KxdpJ^E(cItEk99{+cVcINzuzH?{pw+=0M7m8cbL|JA+{@|1je~|9Jtf znP771J)G_W-27+wcQy;}uIf#w%iwMOtnzeU(ObhRR@ce`sXuD?`ZqkOL_C_vsK%kV z1Ov*F5V5BD*PEGqe^P&`nak{P2i_H7_M1Ft?{A2rT`UK4RgID8l@*> z6GMn!ymK({6?O9_)MlYiY$pn1zo`&0$e0t0tKK7e+n0KN(h1Z69G2YJ0vo$bY1kGINL7 zH@MXxtW}A2naOc<7~}gJ_`BE%Y+m$zFx}OJ^Z6h!8D_`zizIJU13KJMuDHp z1er9m4b&r+atWbA)B@-3!nK#pJ1xy-jy)xe&74b_cHV{P&cn`vQ?4Z94WSD)_B|86 z;}DxRjaT(A7@H*oVe|r|APXswNzcSGBQ1XEYeS^_4hO$ldGecJyyb9TXkJ{AO)M&v z#^ZUCBdW3>wmOHAM}=qGh#%HOotvEN2hBvA8Sk!cG$Jf!L<_Fo4w7h>5DLJbcrdW> zEm+Zsq0n}~UOUzV(sYWS>pD_?n9};)?8(iCE{Z}VdP7gf zm{S8y2kFB5#q1yAg~3=a{MY zd}L46an8zjX-G_O&DA$og`_Bb+|~_)5Y}s^mW=3Ku0{?FN#b}mF0Hp3$ACu2@1n@B zqt-JNVj^}fpCltbU*9ar?njM*c3;K|cm$GQ3RY`7NaoNLQvz78D{Zi4jTVl@*&q(= z+WsmM3n620z_aX|1Is@4CZ`-0LtK|{rFojF^G19#mrt?I+2>1m_uCOGSB;@vSQ<4XTRdgu2Wt}Ul#i_kBhcHzSn%2fQzL1;BzpA6TcEdPwdER}OEkEj zXQ_6JVa(}d{hXt~gUsMaU&`f^Quf4sHPOtLJ;p_upgEB9+my(;pF#)+GaiMTUMZMA z=S<~p_6@JWIlqVgq34|6Il1B-FL3N7GhPFZy)r8t!`a!ojrP2T7zhar>)61tH-0M< zS;P1KA}LNUVM@9Wtlf{PlacmV#c?0$f7MPJhDxsMZK|2^gX6mW7t%`#$@%x# z%lY@%OKEl&+z$x9(ElL(xc-OmGX}T0#DZJX5T<7v`hGq}? 
z7uoG=-*`WEvB^Y+o5Vk@eO94vkLW9))tXn#c=9dT5}(ZXghAXbp%vdb#;9fZ2zQ;v zWB35vH$5M#yE+l=7rpNOo8zZPukm$10UFphqcZZGDDlY;Z?ngF$M4z<@yGW96PFl@ zXy*%csz`}?O@F3gZ&JV+ea*CZX{dyjs|6h^h4PhDM)#WecVnQm3`I)PTT7{;H zvLugUb~=r>&PNb#nxV9`IK_ItGa(S} zhAv|zMdh<=h)j(wsA=e!54(%fsA%Rl#{9gCYrnYgi)k&Kg^T*Ei6`yn@M!7GQSQdANw{Hr)Hi5woy*#@(YFGUyxp*O^-#5z z=W>85QvYFijAb!EMeRn()v}V~MFh5B5U66Fn0N?|c^40EE7!_?X zs3ML-l2b7X&x36JbsEs;GA`>!SI>mpR6MAJbmk;lYa$CZav-|RHzbeS`^m(U zuUqRDTq+)!cE&t)%UkQ0?jAT~JWUn%2>Q8|F z?oM8zNAx&)R^ld%?xDraB#+_f7S_N}#;e>8Qk~#=QC-(jAJ5+mvOBUX=lVjNF`N9S z<3$uefaQ5TABygds&obFmEB-T8qMQTu*U|bBf{xQFO=9tW{@Q-> zkZ{R@y|WRsUS%|tPuv6V^im5Hr1xWB$)-(H7`#2Hk>C?m6zR#-_S(`de6*Ew-B&g0 zu_>{xLH+sRFv-Q4=<%}pIfh^Q@ZHVOCbD01Sju%Bx#tovOS|A$Fz9{%L@I`a8+kXl z=^9c|V+yrxLvPj#jZyEktNttX`0143>(){vP~s;oGgPal>o3+kkLWI@#ZU3(-UPV) z?jV|`2u^U5$HwJ)jZy5n7~1gHb*BU6E|?Q8peNG~l1I?f?e!2!d{=KVUVNM@S!1wh znUof6(@3b7olu*?VFC}FWvuL8=)K=dC7LhFq0hcw;8U*Z{`Aqa??-q4VHoX-9CoA% zT5VSaK!2^7kR)0x79Hm*_6ssc!?FS2FQSvcVG$21_@g*PR_V>d=;xJp1axXv}d7+TqmodSzcx8`n&)wAy$6V=vV)%YeCcbemz?KSgZD)q@fD^evKl zc)x@Yq`IP&f1f?vBAxQN8HO#i1SF3xLLFp91BJScCT)fXvi@H0!Cg>rFQ+u3z7a|Z z0X`m>CkateDjxHYJ=Ze~WtRDi`^RPL@rKWLy$Glr%7(EBv8F4#C7g97;C2@% zmWPU+U{KD1PQcD@kzGEorRF_P-zfy2ez>rAxPJAQ#gz-kf$0M5-#)aMSi_C|rVbd9?6#;l414JyuU!@(Dc6?W@2)a{Vz!vj5Ctyza217 zZT7^s7aO0Af5qN1>PID=BX)Z`i!a%uVr5Q5LKH1OgXkOXWufH|D~Z!*5Z0W;XD6~$ zvaO+`N>tfOFa=){rz;CDy?2{yuG(K7N<$ zws{xC1;GH*h)hI}LNtDT0@GkPbH>2zY`=1DKkf$n^e?IZ=%0S_HfHbBu4l@N4R^*B zRAOY~2^XIkl=>^{fQ569{4> zW~e=5;xTVM()o;tvC;F0hj4eC7Sj?T$apZ>xv!_nPjSt=!iNO;{9_r9-1|&tNEPzWCjPc$S0Gm) zzCk(CJ~#enm9s>F^Q0wnN)$~$9?HmcH_ikuYH&agko~h^JRLUsRHfU1{c=3goV>b3 zbA8T;f8tWknMd(v8?K`%zlqo}s8&lS*lRmqqw&Z-MjCwf<(-sQY*9i1f#ZL1VW(}o zxRCb;!9e^^LHrE}P=s@X;PfByJOK_k{l{oD!uVC1EBf`5v<+-XY!*FIl2SnzM^>6H z^0BT=e&US+TY;(&XZt6l@q5Xd_g+_BlHsQqd1;)9RJ%t8s_lw8NfKK4GYR=4QUm#K zn8P%f2z@e>LZT=^jnHu~=!CxbTe+a_&D^wK4;m4V*Kbz%f#4csYdaoV;*!RJb%c{G ziTHChA9e^Z_PfoNb@F;RRsq0%(??z}Qr1xR_QMWn9p6gtVP!FT_-^)TKK0^pREv1v zQR#(=Nk-)&jcTXbe=0ppsv8Wcn(+~;e*yt;FBttukqL(+_!9%<`l}4N{wl+%wxH!k zKpATP2W80pe<{O1ea^8q|LXbsqYPbB*CxUkt{k(>|0#6NdM^L!!@M$eQxx(XgQc0YpNCZoi| zn%^X>9x-37Wl0@&4o*VAn`O(mHOt-*ddi_IN2!7Bf`&^clE5@z=W zmig<^v@`p(Gj}dIEFa+nx*lGw7UNV>H0;-SHY!{-Agbi?LwvPx<@ zLnP{&)@Mg>65GXIqOjC}oDEzALx4E6Qv$ium&QIlV4b9N#!<)+26hCgf>_s5w^%gf z-OySp_L3YV&QPi%OIVCsU|TmSsc1XmD$74+b}gO-LJ8K+{Ps;^(?`4f=ISJ~DwGHT zdaiT!tvIbt=6`*I+O;3TFoo;1;53t@$0Hgnuk1=@J>}i#`>P(v08)j%q?(h~KTX79 zV6Nz8mP_c}#V3&8{k5HF6XoiHpsPbEMN5B!NOwbP`Zod?n1!FCB}R+|)fx-M9qyS8 zW%0`WC?e;%2j<(?iX4FekY)BuE1=DJZ5i_<94>#;YJGZ3flN0-jrvjH?2z-Y@E+o0 zGxebrxmVKAifu8_R4;m{{SL^KPO*Cz*x_};u)x4bwFmn61j@>T+nm#B2oe+Xr}7dp zyeT#@`h#irFYM%m#7YifGfKEt*iHEAR_NYyrhTz2vN`Ta9AyAGJJTa0R1I?asjcGg z2IM@>FmNMiO_|hA2E^@44<*IqCxTo4nA8$vCnF)Nf=4=mmcKwaAC91@ojV60!r45g<`U7OAJ&;NTNj1XK!haTd=oih?~0m!6S>=2{Ada_`M%JVUGi^@96Cn z=D0|l(ND={p{2dS;KWJFH!e#eF>d~h01ey%(7?h>=#JVAR+C3__RR%L41Jx~QeYbR z-R#eNAPu;iWjh#jnY;+30e$Q)p|kOMP7*;sU%spzGBE^vL6J(uQ=-CK#L8LjUGK=w zr02drX@c6`@DA}gHnT|22}rvI5gLRw-TI8vdq2)k7(U5~eT1T!I4$l@nb>5~eo z&rFDt<_bCo15wbA&e1MmwbBhtN~Qe(OS9G;l&ik?YyApg9n0=SLHtsW7fwg7Qnycm zJm4PPl;{G8+lV0z%_K5$=>@6b%cP{PaoSI|z$uszex4q>Yvx@@|Pf<>4ebjKGsMglH{N1Yth56oc8Nu`su=EQ`%9t~Gfudc2|n=*spSj3P~ z)zvBBp6g(gG72>gzXm}1poiX?tlj|$Kv2Dz>`PbVm&cjSPFIU8)%x_oWQ7h=->K=i z`W<-NSgHkAypBlS?fVghJjaq3B1a~!1cs4YS=jQTTWry0_q=nk_~NkUb8JE?Y(R(h z8JP~Sg=X(_4$?F?b?GUD`JBF#I}#MDzuM7-J@~&0oj$XT0?Xg-6-yQBHzB?F$t{^Z! 
z63u>>g?Xg&b{Yt*1>?}a2(gpZJ|W};;f5>ZX}9E53knjeh15bs@4+F^C#@hfMNcn1 zb^?*g#qEVa490b54u9ml%Rgag2Q%FQSoprEQ58J58M)FIs9qSn#>LyWgAdpV)kW0d zqf+224CMWD{#9R)Nx^LwMm-Zlwt*cG7rgDV{yEmwgVz?J^nieOGy|mTeDpLdbiU8e ziTtpEPFRBeLP8!>AWk6x8FEZ1_uamrO|npOWrqp3yt0@CjfKh`oXl+m>)%*8szuQ{ zh3HdxLxZGlQ`IWNmglz$-yaM4$~61&Y2UN zCH{V}V~)oT_G)Yw6xy66ND4lwok88FC1!89)oqTI?}Z9PI6wj~cXJW)y;KtDuh#Ot z8o}u_DD7mioWQ|o*D+l>4SsI~k5zRU74s!iF(E<65$|&tD?&s3I2wa419`mOixoac zJj$wp=#R@H2A36jSdlx*??M`5F*XsT&Mu3)$kBz3sFrS?<*3pqvIGg8BYgt_@82Yi zu|Hn)3SK_~s}RRT2jgC*-8K|R1wGHgyUqrpn=YpCtbGfnwb zOGB4I&p-8b5!a1r5Dm-!QF^EV0MJN~V)HC&{5Jipvcu-kh>WY>^x=#BF_?rwR$$3n zup1c!rzlvum8x#~lxK@PW}~hyvu!IL7t=thpNeH$Senz(yu4m25sZ!q*=J1RF)w$F z;;^-JFL!u%ML`}riC2O+<_a%NvtnX@q(_9U<%`0y?4=k=Yx=Hg;o#%q&N1m*cv8?` zBK@@I_beg5q^8|C%Y!Hd8`mULr}doZzjgJd(O~HZ#1h?sC%I{qb()L4KpK*gkk!17 z<;R-^cn)zVB#F+bX*UE~($EV|RAs}Iud{EbHHug4nZJGt>rD1vJ?5#m+4@FH_n5MK zITWj0znD(TG%7~8Wc4+*e4>~6OV;I{oEH2RRYj|C)_z;yUVSFIJ+bQPNXjmxR*gXA zZ)Ht))2uREhl(8t&}(jf;7@D=Fi*UIcsgpr#~3v_vv_%+8|THc)sa2Zl;d}2D+c>V zS1auWw|_6>AhFt%pwjk&J>p(Oq@oe}L|3axr;C+l(|mm2iNDG{@avGPU_9cxwn!7V z?xQhUUbUgeuSH-m+B<_z)=9k!RMq8Ss6UI@YA<8}`23B?Kw<-vx} zK`CUnVCu9ylcjt+u$E7L7uNu>J&W!}0b`Mg9^RswpCUZC=B`2R7z;|*`bE6aA`L@? zK(7PU^w4V<8@BqM6y2MuLOm8Q|7LpFiQNSXg-&HqKVb#&niGQ9ls(SU!`GKtOv=iR zBXQ-M>dOfpyOSAfU~5Lpb>PsDqL!4`kO$t*23hP+Y0oqP-C{3$YrRl9)6`w8D}dv~ z^E)itK*!eRDE^J{sJzQnqi08Y%C6Uf!p%c;v6t^>#q9+Jbtty?_eTCzR2oq=YNj6O zrEc zG#oHGiD_i#SR3-z_=eLdK9cvFrf$QI5KT4M02k${hmuG0$;d-hh`cY-7A*2Sm<<1Gq^V(jK=oA;sqq?;IQ9UvxwnxNl+9gJ&@=Cz~u zmQT{FMMgX6;ph4eoU!t7wZ%Lem3kw81p(HDz_qo#IVNhdZusu*6ph3(;?MpDI zHjCuv5>xNLB7?r`OFucd9gc2%+E zwQFqkw%QGPy{w1xj3i5;SwK{%3grsaE?Vdp-@I4s=4<7{G-(E%O#R^7<1g_O%LV1q z6lnm$n`L-+75PTvXWC;G;0-%%FCXW}ZP0j>h{lRhgL1rsSOlJT6hP8n|Q>iP6H@Wr&#%BiU;pqSzNbhq|ev z%>|>_=}fT(u?zt!1b@AIkj<|Gn)FX~V?S1{9(mL5%wz~2tz`(#xc(df=4AJZ$83=FjbE?^I{M z-3`W8N?g?Jz<&-KJG?6^NI@U<@$q zI60kqm^Esg^@glO%zO<_7g)-;PZn63Eq&dQr@*TGgB}}eg!AYK1T+m)x|+?65!Dp5EJ{a> zp%8Xv3d4Q%w+ODMHv+(rMG= z1D({i>dyo=xzXcL8%CFqyrZpKy#_N{M}?H;D66eB405VnbHtlZR3d2@CRDq2h&L=$ zkmgTQ#$}}Nmjki%1t#PH!P}bw54|3{vC2O^5 z-0$z-F)JKvTe*t!r1Ny{IYU3J=>M?uGd^9f<@~E5I1uFi6S=X2spA8~Z_m=oIU+UV zzM$E^C?n}mY1XP>;lkbYlS!ZPX$M}BKJ+QhnF=@H~ZtE0~=*(m$8XBF~ynftU)+yuIp zGPJQmjK0yTFQ=#;#Rt2E)ewdX`-|}n6=ZoL!n4mIvIkhTam`hGyjWgYW*eK~t1sJD zA-YO^wca{fg${~Hy3xVmVMn>L`=XxkJvv8jq~!Y8V=`;FsKtzxcZaS}+0N!NEtx9BO|L7f5*xj;Xo zTO^=g^?}WhXMl~T-L{Ec$s+NdPxuvWS}J;jHIPBpDv|WvaUDD~0|q1V4v~gQmgmKU+uJtW zU2S};&(*c{I8LJcWSLTt;BaSGpO}bILVzJUfq8%lCfU-Sr4hq94U2s<qUCbpCf%&i>38KM2$8_%v~jLWSY~qK-@Gg`Mgk$%y%OI z3RCmjQKrfSc`#?*#7nvP4j1a>Dc@@CKxai9f=Rp2V>()rKZwm|41oo0%xLe0HjZ6` zh5VA?&lQ}+i}W9GTzqtP&~LNMCR(8wz~@LCw0DZamRSX=OfKN7!M5~gSs1U9I#jC6 zxv3EoEL2%;GAgudBQB05SxxNNaGS-7-!_iu^xdYrhM%E>q}-rTng+&ZtGD?U@n)(9 z&UcR6b0ill)=8huVY+VYl12~_;OH}pTBtIaE|?4TwsIm5mnW$*5!0pX4IBUP&%Av| zU`%}M>a@f+lExlhluu5at3^xIciJv{6&%QYZZaduc$$+*d!dT3diRf2?A--dPNJ4B z63Y8=0l5KVEv^65Oav-yFUN1Jnf|qLUKb!uX?<|V zEN+Z{uGR{Kb5)-1q_xRUBip{={q{bsQx*80v3f|a$ZLvsX62{J32)_^^hBcCgv8O# z6#(s;u{I#i`UgRYGpjb>WGYS;PJNgbp8|<+tF6PaN|YvKT>&UDS%&4NGcbuAlI@2H zJjUYjX)>#dXM({EAwg(`jnSnf4_ntNk--Zww6#zlu1k=n+@t)o|1f=D(-n+*?rk#u zQZKxT4zZoMLC+}Iu5MH>^s!tiKHIDr6Jk+K^pQwD(=Yb9^3GY?BOsYswTN4Z8*nDM z*k8TvwU4P>H6VvgpKRbgI^7;Uy)H$RDLVDqRCl^PX*qQh(N8Z8@*U!8_@=D1=&{R zDt4`R5Dhs{`ll==E!2B-8frEeu3TQtEPKYK=9+JUYkIb<3#6UVvSb0(31x@VmxY;3 z2!)xi)*N!=bQSzmcL)K)gz6KtFwG4&6m%7Y(Tn2IVSttXn69jjd4C3OYf_5{{*!f3 zSF0wEOB`gHdJ2!z(A6bG*dSvIE(tuaFcGYp+HWwD9IeRA$oYnVX4ux-v$8D4;~>i= zO{%l1HGlKl$nq1vM{$`Ml!X@a*5i@p%`37v?kfv`z8WX+mopVnK%+s+&3JCOOR4|T@vi@ 
zDnq~2lm7qusCgODIB?J>joQ>28-v3Q?mJX^D=XxQ0ybWJ?s+FfMjM`u(G9gB&U^Y} zpuO6qkOA3GO_^Vtr%^(1UInzA1mpx0njr>t;@qoWP_0%5xcMPA04_pL;UaL(#D0FA ztn3p4BdFn)DN!VTj$Qq&e6-5L7PxJy-(6%frSR<@XCRM}w50LjD@m@<#QOdm@6BVi?oz{bcC97^5NGgwY|ov3mNx?wdr6=kaz=g85OEd#`8 zM72yKK0{3+&@{UM<)*ruwy~Ea?@JG(Y)Yp3?W1fj=c;&6jpM3F%hP?L#OeI?w>d8OlhQEibtwQ0 z+$>JM{Zs{vZP!7b@JMCMH#BR%jH;VE;so4|ffTRC$iX?Y;u}s)?4LlW#NX#`NXPH6 zEodvb;{p*=9wYVsx91Xh{zft*PA}mdiCY2A5gJj@AF*5Xn|UF2c@2%)`wg;(583}M zVF4;sspj-|&S}8i7?Zoak!xUs>mB^I)6?gr;ZOfn-z^CRM1@ni>KhHoL=MlG(YRu9 zaV22CXh^9b;`Y}^xq#!e{m;P>NY4?WS^I~OuW}(j?29EF@9_zjw{U1~Wr?RCU$dgb zCFle0EPadbbdb*bEE7Z_ei`igH8YO|F~~lO?oe+0ohJjLj0*5sQAajl5f+Qm*GrB0 zIZy@Z*KxUZT^C3(C5ZnVrwYR2tX1t>7%7oSJ?H_?$8>9IZ=pvP9n=0^jUE$#g*%ND z`wlFFV3A>@;07x;`XRom9n=(6+4Z$_7AHu<(v+>x&1jF?#*uRy%v`s#gBj{CKIiNz zx|)9DNqB=pBaw_AMEPq- z-X4P5>cqvBxVO3sQ(RLFT%d)uKoPk|xRy9Y!vxfr*1_uoST-ax>lawhV9z?_C3TT55kqlV&2Kivz%meW0S00`2r zatdECJnY3V<&pV-A52G~4ABb*- z(-$KCD=^D|N$FQFo5n%#EY+U26IUoO)YfS=fl1#_7#WUPjTU4$XZWAkA5;vkf8HkV zlU^*TKbFL92BQ1DRvxwW?@Yic7;##(2lwE1Q6<#j)nBsvIb#vp!P@Uee}%P((k+3) zqR}a1{c`a06pMywYE=e@JTYKN)Xh0NC3Ew+D__oNwu--RBkjV5YSTIy6t>N(frFiw4OJMWq z3zIVm&rAOk|A%FS)5)KQE|j4j7l#ls&NktVwV;8S!+Q(IaI1ikIj%Qazs z>0NB_^-pY(1P8RX1`+Der8F|i>hBJLrVBI{FmqOv)W@=v)XOrId)14oKVEZ!O!1A( z5dbax)}Yv04tsQZJ*1fqzsFb|3as=-lfCZG-{w=MG@3VtYLXc=17cIp-@-nr+ek@2 z3jE#n{PCuu_VsL;UjCR6p#eID@Ln-H88udIcVhJM(~XtXgu}g~%_BGnHQ>UUZ+3yu zJd|H4Hi?_sgT0&4zD`$V_Tsq^bI46*Dd<&oCD8}sPA1V?Dwa8?ki;?aACT9dpTjXm z$btw4ie9R646mDZdkkDiDyoG$zjnk$74J<+rhVd^{BWOrOY<(l@d4ZW%YR$T2K98)wn6s;m%9_E>H(A#1FB z6#Z<2vT)KI9;=t}>qY;FF?I155J2He|Mz+8;}3a&(eV2)rJO_D;N_`?NioZ= zyw8mmiUuedRYUS$Em_I)2?^`V_9-U~>5G*&U;`J|q; zyl#IaO+W>EstxfSkixyUWvC}N42b!LH2qCzl{y0~E@H?wWf7)b$L+0ag}C7)a_<%B(I**1@C z8SVT~|Dwq^WOz+4l~^@!r$>?$F!EbUy?dD#OO@$IQ)&A1WO1OrGp@nitvs<>d>3^h zMnK-Z*5`@s*X3)Gb6p{uV+flEVZ#LWuM61}n8 zJL|Lv`%$=hW+^V#M9v2-a$|tj-Q%5|1rZd+I_W?=5R{I?bwM~MkP*IMhFoiDPy_WC zEnHc9Gn#vSq7a=W9vTBOg*Zwp|6bkSj0vjuKjyH&wFjpe;c|a8SeA1^ z2ZRF+^V}`SLTCW#P{Gw%M=s$S24WPo`m>TE5<;pm=jdf8(cxE8N za5N9Nc%2q#gzT%XryOjq?j+f>;-N_^qU-4T*yY7F^#WA~-?;nm+lDM)?SO=Hpi3vH2Bn>R6RH8%`78GE- z?GODDF8nL+vPJ4sy^|O@hw9Wi{WBMpaSBE)^{z7F4J*|qcmyUdLMvn-GzLEfYw!fJ z5I$qY__2t0e*5Ic=~6FG*;gIA7T&UAnSaLi)yd9&bwYY%dIGF0DkkEx`2*b;Id9;Y zjiOGS&J#r>E}9FTvt}9heT{_A-e#c9XD+@=3i`ZA@v|6Dr<7#Cx5NjioGX}&t$ZYr zX_RWJ-@Z2^#f`7aGiyO2S>wn-=s4}Ed>!v#0rz5^@2dOd8saL}DQlfOya9b#&MS{s zbpar*@lx!2Zhj0t*`VsTCHWZCAJ!&s+(Xy7IL%h%PVE2;))c6vfl1M^(8}eq%La~W`Uf1RwBD;;iOPO=ub0mrs5|gYte;X!&$VUjrbWEw z_1|AFBc@UkX)hVU z#DphrSUzwLS)QnjrDIhN_3odMmG=Zf?-jg`>J^-h=xtjgnZDYEEuzEUcNF8o*P*H_ z=tvR!`RxV!jymR)bOg_TRGfx=IpN7*@S)CtHq!nowkw9kC$DIDbcs^k;#$(++Vjm@ z4u^2K0Xk^o5UT5py-ure7%9Xv=LAPu+kV48u2BIequ}L?&Lf|jbl?=&_aa>Er znVt6?eMANLQ4SSdJqL;4#@T96yLm6Y0{_0w3OSJ@-jt;RZ_4P{3h_H=o(e=&Z*0o` z(}N5k&0RQA(JHW81g3eOP;UZGc_oGOlVfC=tV9cpapL~PHUHg%EyUGteg|-(gAuQ> z?FlIW^W8FCPUP9@Y7YogzJwpj}?V@hh_a$ zpcS*PoxN8t5lM#NxGR7OPl=QIMF?F16OMZ*_k%#)u2JE~vN0$LV2T}P`xO@7eiwqH z&i#U|c%|pdji$NT8DoC0zsnC@34bN~_Y{z`R0#4OL|rga*Z^c<;#^GvecEtfpGOSI zloxgJ>-?iL2?>L(iE2&U=u_*1&W{loE7M6b5i<{eP1GG8ak0m#nm7f_x55Lm=xs5M zZzcOLWp|aaOR}^)KQQahiZqDdUdoj$LILq&b+dDv2^k!G6y3--QdWqR_Y!A6egFte z#6GzcEaTzrEs zy(&1;P;Bo{^xnW|6bDz99#{;n^@WeZCU4k08QbIHyTz#za>R?e>od#!@&Q3(6w%Bw1s@uTW{NPk)Z6HVuT8ZW2UQStHhnh7eTnMurA`)yRt!{ud*1)+-m z7Aip8dVKtzEP1uRos;KBMmzO<^UMW32{R>1issffaS;w((}m`1+jYkpzc`56F~U+6 zw2)Z#CW=izhortRhNXVkj9E6g#EIk>jrKnTOJ0w#{U5r%DlCpIS~DTIHrBYiy9Rf6 zhd^+GLm;@jySoN=2oNN}-8HzoyJjjm=iECFGY>8F*HrCZ>$hFiy~`Lo{lgIZ>(0`4 z%4B=JY+7GuB;g*Yj-zl%2%FJ#(?Fjpg_UAf;_Eo0DXxlSzC`6_9X6e^u?Oe(#81z| 
z_Qd!M$=*2crsCT488Z|bA(SDyu#d}fG;z?IT)(mfs}fGOrS|ozx3k zK942kda(o}OQ#lJ*BJhy#pjVMKhI}oK0g0`%ads`rD%v1LM^Oalte*|P)RSG)+GZ$ zj6R+7po@Qpsv%N?X1(_&=wtim-^_L=g1Bkt2w#T6?IBY2~eY(1Mncuy~7d0j2%fd+7x9f1X`6wCzSVVb7?dl*TLz}FV6j9I^9@hXcj zdvUhJ!1g3g$@amn83M1NGLy>L{t`Bbp{bR;yJEE;gpQ;^CRvs&7X4t0k!@F}!m;T}yeZDsm# zC@|R}&HK0S$Z1|?%Ni_Mr~YQY!_t%KWls9abRonS!)E@?*ufmAc8ubOgq6pV96>>T z_-R-7`Sz$q5vZ#3>DziFtR206_~m=}g?Hz;4IJ$C4@de2xI|uFLNxDr=DiVihfqU!Uy8$@X((;YcyvL0@C6dYW!Q%E~ zkL%JRVWhpTP4~)sX+b~_zq?F%!+xb&w>-wj(i_DV&aaF7`Sz`CviT$}KSPM54D3B? ze~PD+T5%_5$F^ubD}y<-Qkr~sw;)7LLEYWcxFBXyBf_*BN$=X=<%ycCawdi?k0j0e zpUErz-lx-chY=0CI;HdKrTX)=`dmWf0iNxr4Kw#2_uG?ka}ahQc|*5g+St7#~Z+l{n8RC>M>^l11lp z<+cCFa-$LFu6X%EswRYRYOUTon$M1toPdQ9Mjuer8q#s{xI|K}FVwWl>_uy4@1hZE z;A)fh$&ktRVE=hi7}wn^SC_6+8Y;JwL^~xd>}OK+jUv^$mpgStX^Vm@?BQH>`Xg)p zYH?GatSy=vckw4?9A%$brSEpE42X*C!a-{fmS{wNA(@YqnPzWMXb|`W(BLV(4z=MZ z11bR}t{5XM$RrR_gB66`uC z|CpEaP*08>YE9>dijF6QhOGHAE6>`C<*rkFPy0gr(`1)ylY$ElLT_vV8jg}J@MCuY z_;W#(B6?U=n%+;6pN093%wZwY%n#n;4`2NUzkF?xUuHyqvvJ%%1hWf#|ADN3TjSO_ zb9Y+hOFEO;ppR+VxJxARgXf8U3p4!{3^O9Rvn<&(##$DlyijVo9_V?nWm?rz? za)B*)@&2EUJ0}Pe>GzUwy{LDkY=wTe^IDQ;F73A|$Yk&EJ+XQG9A`ZTq=URviG+V%)+U1M@T&~u0JmRGjIoUww&>R)|)PZGVqJk#Q!=>NwnCmyXG|>eXxxlzqAcJ%@4T$0YQ51eD9z?C$u~#azJN&f2B4e6BFv{rX}HhWvEET^1B6uezAKdz}zlb;fdBMOg5G0?3!l_c~6es(ODkxW`mL4xE;lQj9!Gk zag67vNxpVkQ@8w1%f3VJ*(dzF#K!o`Wm~sq)vCZSO~&bAU-rgm@g50;B#d+zh7VsF=?ChIm)i4r>v*fcZD?1a20~iPyC_+En zr6e_Y@72TH^haa~xk^ZM)@aYUKzO5Kifdx!{o*Qr*q>`tSpR)J+WnFizPr0|*yBq7 z&Ea4|U^%CR>$iH3w3;%uS96y`3r%^QXGzv_cJCb}R&74O@;E_8jP%lG>8b zggPgu?%b@&4r8MACAF*d)yB%kL3&i~%Ph1y!k$Nq5mi0j&3#u&!-r{vmH*;MWcFcvOr}NSf!^G-HN?%! z*+!(|~dQyGxcC!c+6)&~iW&KQ42EAVN{ z+=X`~o>F}O1n4@rz%ICuhAhg#Xw`})xvFXy!WcirW%e&0)(Jp>!Aovwm`(hADK?u>Q7 zLA9r+Rx4l}!?BQ<{qUU-^O0^YKB-E@SbRb)D70NwDj>IAT^Hyn#J%{5FF zJVWA14!$FMv|q`A54jWd`mq`-Zc+ES3}x`i&F9pG)cbY)o57sseDNGDR&14?^?9Xw z#&2w|YSWxu)sLOJBQP;7AJifSE(%ldP6WWbkmO3feepC`7F_$b zW<N9;H4qTZnvf8 z&3L)D{uoznUqZJk5wJK5X_zdLWuw!9Xyl<2ck^#60y-WZUP1=2XX~|L7Fd>>hntW= z!~O3cRxU4~OL@6k)=!EAA|0R5tyzrimaKIuCYP#j^5QiMDAD`vMTVS-&p9DI79C4I zACi4rhV)2>dg<@@vYA$t{drh^!g<*6V!IuZfy#Wc2xYPLa0=#PVQT+*VRYY7dpXE7 zq1y=O;?&GpWi{wyp$%^J4kgwLn>kF;a;3-E?Z}fm6kAic+1sO+4f#sv7VqtIt8&ia z5IPCAqpjyjopuIJgtblF?VeEkWt59TP3A-)~3`>XzheETSG`QfQE!RDSD4Ku@S z^GGjl*`Np^0K_|HVbFK;g1uKrW(z3~Z?RqnpV<-4GiJA#d~Vzk*GMENACQ%s@6o-j z{5IZei6X5D`>Mxmyq}3^+*=OGIBT?iv2D*-EBNJDPyXY8;2yoG8}@Llpzl#OgMyGl zL*tgB_!Z?57lGL(*>y%sCxYyqJ#m>AXwOE$G(#Oh5gl1{Mi1K0_1-pv1V$5MKcsjm zACd&a@n!wd27Qzd%3UPY;~+iI1}2cQ%B>)%in_#WAUd#3vLl8lsO5XN{R;HF4fQ8@ zh~9z9rMyw!S!I7s;o+;(2ytIcWpf2Wiq?D~kchzCiTykz#hoEx(XnZwhv@hvYcb$p zM`l+YtNg86oCdm!Q$C^YvcC;!WnbT+5<(i%!NyctQ}WH$^}Z8~*nI_!wukQCE-&Y) zXUslICNk)=MORvOu3wRKZqG|J<0r^OEM}J}~60{6WxzIVY zeBe*j;I<3gG|hy@@zJ$fT+elD_+}~5mdHlJZe}0YIylBlLMi^bWOJhPEDHWq2VAm+oNCokau+A!Xt!dP~du`$^2<)sJBk#+2k1k>|_vG7I`A+0@ zNvFz}@5rvaneDPwaNs*l|4*Ku`QxikxU-p;p$yxwxg(FkL}`^^F*>Cn`tSLuMUu;6TOF z+m3;c4rT_B{QTosG=lrn1gp;FR;q;~7vh$U+Yw@mQ&?$3@l2b4jcBQ)$u+uPgHy0y zmEp)UuIg%>@EYZ<7wSYi6u;{P5rjL2(mmjt|c^o>t>&>~hEZSwDQrrwi;xf&*w z@*=Oeq13LFc}=2KmrWEwld||1Pz1IW(-VoPQANpdfL4!|>mm#enlVo$5n;E!R4Ut9 z(6uP%__rOygxf6)^W2}k`ce$C9weUIoWFE+vXCu|m;&MQvI0blV^+lMcKP60^N=;+tr&h#f7ki@Up*gqawGXcl!3Vw zcW=G>ePdsu9SQW8-nB?*Y_E}UrF6a?g#h!jyWh`CK3)%JD!!3fIrA|q#+dM(1y`|L zjdNk>X$DI^Oa&+ov=8kKG7vLwws|7gli>yLpd8 zdpZ;z*)<3W5f>}z@;s$HL(aP%gBO$aFs{clWX|d6E}z0 z6Jx`|bm6FSgH}>+9lU47U6K9oE5T;thGnUIFu#<7_ms*h#`9R?pC`>vj7v?)#Wd?jkda1(_3M>8dEUZnF=@rJx)e6rs8Fe10ZeWpE^ke)Wbbw;you*Bodr4F|oht<_macOanvs@FF3=$z?B#AY;!&c*lzH91xF!@_*LF(pv7r-tk5LOh 
zAO4bKONDju2U&ToOo2Mzl5|cc9{G%+b-EiUp$RfBb?mVgHcY5XOF?E01&2r#TRwoK zzYCOR#_ltVB_EVtUKmVy%fJn`x~4#xBMx>ocqaW1AS`AdRP~O-%FBL zmu61rNPRL(-^V=uty|H#0oslD_04R-Db&qJ<&9P)0}`suWUx0GoG9YymK!LL`O9iP zSWdV=U|g4KqiMQ0KS_I?}+ zi)bA#BA&$zxI03_f+u_0#qXH~g0caJX)lGqnM;N$DY$n`hr$8-)4+ONreJ3 zmHnZ4j-X#r&`D(N_{o81JSq z0dRIM`QFk>I)L0SVMC3mN-&;?T;gE=W@9wnz$GWm{;rpFc|{6>svEAWU2-AvF_xnw z^XL11lt(3$6NVRJDq4jq8I#an{d~2*X29EKBm)&}(ggvA6v&Qb+ zPj%HRGBm6J?lF2zx1pWZ9% zv;7rAI0{j-HQi7ylT+rwt_K5N?|y;K6qZBJLAdd?f9q9=qW>d4KVHv1)IXTYEj``} z?GCvh4_L~=6n@~|JkOJKz5dfL{<+B?=uLUpUS6>k{DB1txFAqAd<7gq#^bqV)_;9V1U0KA-9)Y8wu`jvt zG@i-ONkrEk{CZ6Y&bGG8gj(YzhG`to85x<^qOXT zTwgrnjVQA7kfvUuhj=rIclb@6Ohm$>WMhc?9RyD4u&lz%_xF>R?emwrYQx%`GK`zr zb;=1?)3=BzHp6M};8g)N#Unk@3^M#izxZ(Tr)u znh6jO%A6se?+tnG5MNx^XP3`K#21D&juF`*B(YA|>6HH7HGEDDW1gdS{wlU=Ar{=x^hkh!g^c1T)h%5$^)7zI4vlDL20_E3(0k zOOQ&-;EMSdIeNpF5w2&DYi+QH<&rB1hd=X0B*mKS6Op=B2sea5O@t(fwE~Xc0Q8V) zFrL9JDK&o=u$x0o`ZT;@%OOL$%m8pJYAVANW*=4?mv=J~&8bJ5>)bH)9~j@7%IIw1 zbRaqfqM_Diy-z|Coom#Yic7w2Uhu>su_`|0J5*KERm!o_7C;l7!J3@^rxeAPUl_lB zZBix*s+hI7d|_CwUb0E7L?<*Qx_<==!-^vMZ~o#WHQ{&WI0_vCfk~`bfQ)va+Lw1Dde8-a%S*~2U+VFfQ3V1f1T-}Mdk3LXZ(nm&e8m@9G-P?FRB#kSveKHb zLKT%6J`Kd)tN1i|x_I1#dIM_23fh*~f36bgR!MNGF-{Vt`;5Eqp3tZjidWfz3cA zv67HJc(2ID?RNVVQfiGYQ#BiJR&p(kDq{vrs073t3-Q1=MX&5uA+z&`t*fQQ~NaTz&`~3S0-)$j_lfoL9i%%Vg^eKa#+1Hh&`x-C8wD zj9#T~KM=5|PD8is)Pmd(ZD0?HPy5r8bbHdo=ZlfVl?*#UpNl1?Pl}DnQ3?u@jYZKj zbq!5bf>Ywl|3c$f*#PoCo3iURRY-+3S2*IV#FyPxAuM)4E)R56$vDiP(icT?ZN z^`H0DluSUz&EuK|{HTz01qD`00!UPl4za02H#E)4u0eW_s%oW8XQDt2Wj0G4_M-y( zudGfX=jr9tA@M?oO=$*0V2=2|MDQd8K1Rk{8&TaVD7YaFU{I!dXT^ zf-3zbQ2~y`@DC{94nH&C!NaK<@E__cp@1<(NaIxttU+lvuqggDH?W4Gd9n6HU7&!1 zv0t1y)U3KWTnJ$@K05;yxD=>qm_QpO4&;C3#XNVvLmeyyzZJ=!Lqa`;{G;p=fyZ%) zk zn+8n$TL!>gfaEV4DN-M9--=z1t8F&HSFnV82s}1 zpGc?i34|D@3v|Nu*nJo!f2VT(a?5>~<@O;-+*I1}+1b|Z2HUar^4Fy!Y< zmZTTj%$^P>umsfqe0an-F<{4U{`a#RX3!IBgnw-qo#lY~cobmrRv9kUGgLDWNvwIj@kg9-i>!D*U#7u<(5TSCBBX4FZdZv{YG;3f*Z|>Rrnn_Sm=qa< z0QuM?5?PIEBfg*ffmddTe#Y1tE%>zb)hW!Q$`tkEb_(X{O8NZv_8$au4ikuR{5r!D zFjk+ZGb?{G*tI7U&Fij)nMdalLK`O39(=U1;dzfkigF_ZX{b8P3OoGH@fv#DW+mKJ z6cs05V|pTB_5dYDq%hcB-cErt3o~~+GOOii@E~nXFkDR9K?q=hrU?1Buy=JgU$tTB z|KZ}PEBNT`-OcRL(O)Li5yy)ZSYw1F6%6v=$}PADO0I-t*Pwd=tc+f_TsnWdC^O(l zGOfUaT#iQj;9IfZJ&9MuWtB)wKuR-1D#V*u1ibhsRYaY3hjAZ&<)N8zrtp3(9EPtn zE#S6R(}kC2I7((W4TArNhxL+PsQ-C&`^(VGlASVVXt#SHjPRd*{|TTOc!V2)Hv3tH zP04{`sNA9g)(Q2`4plC=7DYh$qyZY=Ry{ILT!8_{eiEzmEm7XQFDJhys&9QEd9(+l00_Y#P@Ie4-!a4hQ6;_|C5d{lRyzjtWpI-P75a(BFa`a)wh|KOk4e02uA_}S z+{~SaUQ!_rF~U;Vv3~{ttIaVsPWD26DP^@t&5bu3ZyD+dTs*Wc_dgkamP{Pm?)W#O z2&*o>fZPg!gu7lrIdPzMI)CzyDa&hs=!UNXZK%0*iOA_fi9@Gr%TP9*{uN^PL~8C~ z$hJ;Ehe_g&f))Dz?%Sd(Z2kD_)s>XbYoUvv8X*mg`H|w)Q^Y_>0rT8wLGEdO%Qd0s zpGioSsWhO0q$Sma&P*jR5%8K~+W-}bx#{4D9$kF@qJUY9kcTwTrY=@XJNp;si*qax zoao8@iT&$=N&JNtfCC_(6cXHO8KN_Qq(=3h_W|-g|p0zOK)|tiO(Y#BC8QMFGHc-Fp5y=l8Xb&%;&D>urkQ?-PU9 zy>AA#L}J6b)jQai%+m9Yk=}E#bd&W&FI79(&lO0d(&~`rR~9Af71)0#w?`U1c;Qk+ zjp@OJh3tpJub`DIi^o+f_s?|)6>s*aBE8Yz3IIe%ah4Uipt*UFnkFLrk1s))|2ygU zs7!)DY6zg=r?&KZj5YkfDM>yVjxI}eDLA11Mz~IIz(f{)tV|$OwCQ$^UhESgpee z3)lI-kyT2p%OWw#zh1wq1kgkgY$N{J+b>jrIZl#j^UL7tqQVuFq0n;ENl4?zV|sA| znrxc$t&=RUw&&JB?V-wrG(1A>0*Sue5C|j*{;6$>Cx8z) zvIi+`xf-N7)W&1kn&>OZKash~Y0p4ZNG!Ni+}C$N*+@~0Jl#ETv;;+4Xl$Gdpn|;B zA~Uxxx>$m5LtD%APi@LmzD-p%SyEQ=j!=PrIB2U{# z?bxXi_O?OA$$5@i+iTgG4XA6>0Y`)}BlZ<5lQh1P1n=x44QJe(E5XIdS)u|b?MZIS zo=Uiv5h?$fWvOjmZa!O29dLcXpn#XWSkSO5#ZcxBjS`@}93vN`xvZD=R^HZ3Xw%&8 zhGP{sgL=5trNi$rqY!H*g0JHs8Kosd5fe^VuR{^JJI`dr2(ow=A1Hn+2Kh-55sM&C zou4PY`{qm6IYtR)>~|)@?bBVzK%+)@M>`QtDq^ax#uTi*y$pp;l2T=yulTi5L|LwhTNQkqSXt)y~M!HUWX~X 
zc9pl#UpGWet5TF!p>g$7&x2b@3Kl6U2fU>6ng&zn!@zeOn$z-u*zkk?cdW(Ml_@Kz z5s1C(0-`MRtiV?qln_Jo0!7J-n!1D`?;!)n_Gn=tdt?p+eFw59P(MI@(!e2@ngt<4 zO13K~nOIT<`BRAX_^T`C($3int7mS|Zi)mmZEgyMH&eqB&8vefBBA{H2C_3@;SKaN zb4qz%f>%Aos%UvqxH|h|zH<@9Ia3K(1Eax>h?M?tWLjDaJe47A_nXl&kwQu!;XrPY zijvZ`2@1fjg>zE^Nkt{y!j;y#Pa!5X*~-Iug5mO%9A=IRB`AC@qlVB(^Ls6ui8+9w zSOP;)5e;Ti*%At$(E>HIAp)BONsYdla^u+BtJBvHHJPKv7mwC$Kc1~vizo|CZJCaN z7L$^zOreli5gJ@;S2vN06c6rU#z7++G$PV8X=&=m5sTvp9KR1gO&Y~a8@=`h zBrIS|v~D%2kx(5z>^O!cco>IWsE;=oHU2{SYQ>aUQ$cN^YVy1x?n{U>Zp>gv!bqr3 z5{KKSj~CQ$Ru%*F@b}^I-nAMzzir)g%G-w*4zJ>l&8GuHRw0Py0uQaMzD})<-lxYq zK~mXKT@KVbh2woO{dpE~LRP@_*X8+fUMydTyK!l_k?cag4x z27!g`!Hkhog*Gv4h~l!aS?J&uIz~B)YSCO4FeGj_KY*N&dFG+*w~`!ku&z5K9eN3;Rg0T!N?7QNx?+q*22)K&?kG zc&vXg05IK}A7Cu@R0S)t`0&P|cs)*80`4R_PB<9kP^fkDFVk-e(MLCDC+smx9f-v( z>0#S&gN1;HLbEpqZRuB6_DoD-q_!B*qLb5qMl+U%flJ2fAWv?_MwZ8NBsHQf(G`;h z57Qpl%={0(MF!&6Suuh4x7@r|)LP8bp{u9%`i zP(G6^;fmY*gu&}Gl5~`mlsa%?U<*IID#*Ark9`aeklwz+s+%e3D#^bRY#c|zAs0Lo ziYPG_69g=}oP08xq~#9Ljwy^%%7k$CSCAkB}7EIO&oJZzRlqb!Cc1R+plp zm`*_I9&dpGuIRjaf-(zb5z28t0Wvk@&3o1s3J(ToP6CEJh1YIv26&DO@Elj=@_1|( zBy&u6I;6&pC@9%pK|;F}BZ!Y5s~}6yYWBG@rLl+9_|P#Yj|PmB-eqDDF3NZc9~V&^ z$}aQeCPN+iiEIKKETl^h+t7Om#~utAdiQy~e<%|iOyM;4Bs!+|9REhbME<|@Hfrv< z3+(-uOyq&xX_ zE=uRhl%hUTQOo9Wz$*UhS}BQR9Bz_=bd`7gkd`GjL}@K?EN(fl%U}T1-Q)0%@*#=; z__*?h`69Xln$GpV>_(iw`Kk}K{{Q25yuVA;jMA7vTzaWJws+Ny<9!R*NIa3NV3^S- zuO|^ZtkjBxCF866!Be-gD|{HwuvBP7&|j)gO%bRwMU;W+O8(Y$Z|;G-!Qv0uSNl4n z!8w6I_Z*0VbLMl}wDVO_;*4pl+D@TA1n2a2i*i@Q`PoK|D#x=FA$h#gUp<8TM7JnLn?NbcWawLLIHLKp$z3p&=UgptxjSOHrmGq4I} z1uooo;Jm&j3{mh3F!PFRA@#zI10!q01X4!i!q5o+(1Z!=`}{w0PUhjr`mE=;Hd8OjS%t%VQ}r;I$wEF0qp4Wz>A1Y82%3hxpy&M$IPoX&yjxgAr7=77FiKAZh+z zu1<0H2Sg^Fq#Xa&rc@fo*RxvIrfo^DHZ*lau7_w)La$x={UO6(EOrE9m?0~RpSJ9KoaEg0^svw8dM$9v~(Fp`~ZSv&9CmtSkyKsS&m61ngOK%rD?rD|kH;lOBHH zUENS&4#j$L*!~z{fI}E<+lQI(1`8R{!;+g#B+#>H{uWz39bda1bZWGBMdLlB2rCV` zS^3}Y2ggtXs59naM1Dv5j8}QerE`c^5KOmMP@CAsKKV7GmLMDhu`c>l zMI0ZtGn$SNAkb4n0E8V5W<`B^9xBLqHTz0yAbH z6)NOJy#IqD+9;L4e2VP?fboDiOV!H|ivDZuh~h^Zo8O9bs{dF!CJ+FC68=Yi+zy{O zKTXkp(?%`v`M=VU{RrVzBgw|uY-t`*o})z-Cw3Y90Ag9 zlmc`h4$y%-%w^t&E=vL(hq4<5av9Q<7Efw__g_$x`e}?Nfuac-#j%)M zwI9c5nT$-Au7=R#fN|mhaR?S=e1(mRa17;>#*9b?y)xJP3klQ1Vu;-Y5`&yirR#^z zHEfYK3MLx$si^a*Xu*}lv|t8z2z7vUWu!>swHCn`4gk*e3gunzG=%vn8NR3`c7`QC};tj;JE$oha78Aff{$L8q?|nBgD}eHw zK_t%7R7JZ0#xkZDG&A$82D_4Z?lT#ZbaAK(Oe~}ieDNcy1P|MA0OK7I8(EgnixHo9 z&*}k&gqT3W5>LrDqU8A&XoOGTOjH@2Pd9#ElFabLI`JEs4c07BP~T+58D=bY-S(RKX7z zdoy;A6(y=s(MLjeaKN)=z`iQsk0D6Cd^MrD7DA;P~gvvD9k0}yPrU{@&v9E zawB}4Ml3FgFExpZ0HGN~G66gg03nHhM*7R%A*|^QCc&S@(FNMfl9P(68{6C$QRm zJQCujf^vl)6cFOZLbN;UncD828gR!vK@Eg=-Q+ae44!$Ix8H8umn#Qtz=ZbZ2-OO* z$BZ5?98t&)AqwYOOXt)rQJ~3g3Eh(OSaK7+cS-F>j1n!T#cX zA|F}J+Qs`S%vqgANk!*V5kGt$5T6g#6B{AHLQRN$Ep9gR-Epv{P6np+(&0DJU>`O$ zfvb~0nr?7q>5;`Z%QBJ^wMo}MYhTL!y#@|)5SI})z99@m&g!X51fw$3?=R%_Z|s6g zTcdbiN~7%$lrK^H-pYOSdp7))l|GRmwFQC}A&pmYo+z>R@$_fR#(-1*&Ptpqf$2|uD!C=oR8 zHlN%i>7HRY9}kM^wzU14GCLlcsY9L)7^}?G0cibL;3d4dL*ryi=jeWDg?Kj4?N?fI zg!ItW^05!90+^7axM~)evpg1=nPMxPh!Bt9Z&Eqk3{zB5gE$d{Sm729d7F{nohHgCLrHx0i10FOh9OF<3x=$tAn{tS#j!MO&sPSE;e~rl zA*w(7Pk*lxZ=$q~XTbbfe&gLsH@vLh`7=D=&GJC`xas!OvgfabS?vpeHpG%g)K?n`f zQR{deEka{4O)^u{2-{Kk=Xl-T-ZToi;rjU)z@&1zj9dd=LgEATeG!}du@hbXvjPFLPvXj6{UD z#bNvT4QOGPM27fXlgm*FQ2ajEyFlg1oqv(E2*00M?n8L~t_Ah7i=Y?MV)WiPN*z>m z)kCvs{F9|+Yu*j+)_%nHZND1WUd|K&`Ql<7MD#Y~p1CbF<{i^!!TK3k*Tv^4OAMJYcR zixAY10NF$k3|b#Ih|wBUo?S_o3h>4I2eK3tQo2w@oqhJ z69QS@$aRt&MxVX(wbk?U%XJNwJp9PcacP60!8conP(DO<>5u7N(5Z`j0ju=%P5bio z3ttrv#4X+P>4mQIk`{R>M!vfyY*v_$vxA83Mu>&@%*1^&ag-njx-j0NVZ+7-`Mq(& 
zhE?ivf=~l4UWJ)Uv*S?fWx$u44g)vd!owMxOg%b8yX~9KV3m`>fbfCcH)ss$qk05%O`!NY8;fM+#NL4sGiu}ji`=LY ziz5>jorubWUJHgMg&Lp1m98%Il`h(?ue9ZnxYV-Yc+^Ld?7lbMNFQrMp=(%G-np5! zJyEGR`@HUcGpH-;IQmB1VKC>Ol2vxg)86?lJO%5j%so}syt4PmA7-uPoC?k-`5X)V z^|0#>&0ER7LcDxWE8AwFUUFvbV@-|Xukv$dU94&PxpqcZpkQu$JFRD<*0pV#BrGW} zA-U!em1pHEjPy-PVMN6?(WklXqnPGPLL3#s0kN*CZ+=bDGBUeR?ymM)^U8e{qG&%E z=^Q;PPHa~2dpwK#7(74IQVTH~o&_QPNrM;x8S3&+`=TG{*OiD7q92ZbvgAPF!}px$ z3n8~rEBy$8$H1(I(8i{Y4e^73ldB(pBo`T(?HK_dVMyKVmd#hIf(yLjf_lMc#7e|g z6S+dGxx#*~6X6g%srf?SV(Qo*LVom!7z5oxerUz2x$&4TRS@!NPAl67`oGS*&TY$F z!UPSbwIe;dNpC51qOLC|WTuBj83>eE8e!Dr4ohltji4nD{NYDyAH#7nOFM6=5SX2z zFpaurNI8)kL*u3F-G-ap6@gl?}++5~a+$q}a>Uf230T zvfeuGmNgn5&sTK-tnI=(5Y{zp^G!k~dxpoVkZ(OvHwk^+($I~@B91QC3U_WtQT}Lx zLV8orN&e+W_75YMvNhG83p0?B&M?4YuZ{VhKhWT^2|-wPuS?@gWv|G`JJr7*^T^*k zzX<-mH86Pnjv@4}xZNFb3`5|wShDjZk}7gQP=6Er;o!ITI-HgGLmNSTtas|}E`|IoWi3pmE!7g|m)Jk`);2vp z7k=YMrH{C{Z`-M!4WfqIsh-2&DTBd$4^0HI?7{LU*+KWZ)(^%P!mH)$_=pJIHTE)}va^Au@ zmBfUjp0mNWn=$dj2<2W1o9YIxW4XyYH>FEhHw~39xtCdWWkKp%q&&U`3HL<~GCa?$ z!QbvpE%iigldOqRWC%&dWzMbeJC7CaIid}Qkl9+NEsOab@V?|7J8*WP4Zoqs=|vlj z1_x85*BUO5E;_+wFjBBC+oe8fGX?K;FDrWV^H>=w)CQZ%}N6U%H&F8Z}p%p+MNpBfK2tb;0oYV-yL2c*BxEC zf@2sMdvJrT=BkK2W+q%I&64^NIj~Rp6)GE7?Bp2L*=%f_*!8}059+j2svH)@P?425C z<=zoJs7XeLs7<3SItNt9qtOdy@e|HeTsHny?-o+`X~!K) zk*}@=JLf20&a8=oYtuh6d)|$)SEvS`UER-bb@K^WxHR?&9?#e!w-`mZm>S9}a-bZw z>1_%HRCV7_vRqo_Q+1bnjLKgIZK#(5i&7r41J%w@Hl}OEN(u(#vMGPI*w1%8?H`kc(azt#b zRgHs#sfrlUjAervhiZcIWKotv2paI4m@5`(mOd8K-NL##KE++MRb-OPy18vwV}2>H z4hXxm+#uVQmJSQ76%DZ(z<8hdA!vnLH>8DP8`RzCxH$}Gm{qU75XEXVq%<}>d1#vY zbtjbO-CXh^q4`}79nA{7H-cG_BXbld zHRmUb3$V$w>Hae2D0rB8+a#|w-ydu!5cw5LFr%TBN=Ae2>YD22EU!(F8+K^P1YnQUIl!H|F&sVuIs<#Kxtu&chcmc zrAd2~UZbF$#4yki`SEgh6;(TxP697BEK5IulNkNId3g<<`BBXiVX(_=yGMwYn1(h% z?_oe4WjIHbmPZn`o(INh7pGSw&xXV|t|gn>uP$S^%%oPIl}Hhe5|e5Zz_M1?v?E^B z1=Ro4<&{jMn`yNLduXn|1SGKY`tTYD8UXcI<{@wHuiXrZO!&p3)Pz}S_y5itDtg>-cdAFlWI?F4n2M5pDrSMm|cKF5RN<&jla|;#fO#OW~ zrz@IfC2L7k^+uE*pZdPb5&L=p-ho@g045k&Hj0QR!+?6)5?To={}tTT0=&ULW0go0 zLcfHg1FGZxQZq>VfK`!>q$)qz$WnhMm9Pc{+O_o44Ra zADnh&iQ!uf$J9ozYUhTo#Aw+&k*sb=M%!sE1W`DQ`z7PpyPeUWWXyR>YCYJEZmzx; z?+ZiHvm{N9o4#PpNcRFzvyJvI6891~Nf>SSTAZS6HS|An7GiNHRO*sO=YJUk%ca*HH_<&VmV$Hi!Q9Sqjza}yE)_*5Qv1KA%$(A=h+RLmD*PP&WwRrr44 z9vbD5>B8D8Vfke8FHzH_>h1G|I{DQ{wz=f01g$1%Zb^cwUjJGv*Ie72#{aWsO|>q) z&$@YD6ymz*#rXqp+uyUCfw?2+gzdOi{|RmMhL4zEQj_Xim~7N$HF&KGU|xYylUAj4 zNDb>eFCnasGk>uiS$~+rM-r&h_6(qD0BBzgT`k&cw}sw3qHDm3&jYEoVYM(LDWgr9 z;y!whFLTRtq3c zm}V6kp*eBQOoKCjJ4}p-E@Hjr^ifaOtHeqTi}3`7|3@PiLHKhqjwR+b^6tyUju0=b zuKvQW9>r@weatGD?2uEPE5($H zcZDp08(pDRYIAl#YjSS{Uu7P7Tg`ku6RYgQ5$jq7Agb~{Vxm7Q$ClZ9ZtP7BCTIJh2!FqZO%6U-!3`MY9aFmNis1 zMLF`duCcf3~vaO0ttDp*O%b^^47_3B>^ zXhWfu(W<|XRt{DRtE{ODbgCYAPGk*@s}TsqP26)^CP-8cRj43y%}$~-8O6VGEP{SJ z$ePxVuxm`F)3um!nwix5HME;>0vIgONLuC5?mzC}?XHg0Zc2y33mCjn%cWP>*VrsiEb)(Dh}j#=3y#WD%RpF}D;$TT@{6m%Z3{}zEI{?W zrQNG5*GxI>nH$FCWh>IuiyFAc^NMT01OKy>O3jYsm@fdw|JSwVzg?#R+vFFOk}dL! 
znKGl(`NZg%TxcGpCjZYhcs20ptqpvkK?^>ySvPy zsHa?ZL={OcIE@ew!722Y=n#hREbSvd)-Ahyo-Y?;2I*Btesy|+H-z`2{hIGG_o$r< z@gBAInXVwM>G}MMEpjhu%9P1H|C+L6!qm8`02A-;i8i`ykPDkpCWQw9Wit6=*P+pF zk)#7^v%R>@fVIcC(fv?m-)7I$@`2UPGV#je0R?KD^!V#SlYVA!%EwZfq0<3P_X9;w z&7UEG>_eAS;X?uM8+J2C=IcxKhkSI`$L#<=;nbQ!6T^p9=J9?E^R8;#&7e9<%MKnA zFY_*m^vt~Fk9C36otpgp;^mi0JBH<_H&OARzxa6$0h1!9#GXw&lOxMl-J|C+`VAKA z+m+W8lG^8WB=RrGK>M<0a$eJJw|^dElz5nkNZPJvtVs;N2gTAt?|LIt{HZ&uRL`xv z(3_o`VK+D<+8^tStrZ7Sq24D+<2x_OY7IR`qFegog;f4Nq)_t*qAVl89lNInQAa1QuFRh6z3~T+-AFGXa~(u z4YUKOibNIx^+-h(yD^O$XF#_SkcqH#x8N<|DF_v0R%Jv$WL6`H)KW^#ptkTu0UbSI z4`OLNV(e5C1Ym5XNB7ZIuwz&^57cIbppDgj2E~wg)iMi?eCT=o!hi-ELl1|xBX5?X zbsqi@UXM0IgsDpWtWe`@^2S7lnjMy3qTc!I(IPIgqDf+d&`czSb2|Cf4%Tk>2}9IQ zIJd`kzBoeSZ>ObwL3rLhng`M=d5>-V7g68A5aS{Jew4mme4wF$s{0%M%0a9=UvAoV zC@k`0F(Udm=^-8J;GaGQF+P2W4{w$6yFhC5U?7i`-&RZ+Rb*WVEMEciRbaBDrKAnH zgy6J)S&AesHD<%=*Sx=?%tqzo7z6c4mMOPaEF14j9-@4KauX9|G<@24qg`W5x!5_{ z0e3?T9@)ZJKiG(ruEVGD#N3DBZ;od~GIwiAKJ16r!Ri z`QX21Bdq!I0gQh5IB*WE>7{e6SZ*E0I(2Kz-{Z&06*Js$ikeHwi+?Yb@aJU=N*FH< z7v?wmpOJ=hvNck!$YR*+i5R_T~%C={zYv5=fXLK0%)qM;Fq?D|gj`oQ#o3~zM-O4G<< z?(;26H?>GkW|T6mSWiZjvdIQROAzU>rKGzdMZ89d{J`fT*VgplFEzUJ(sa zLJ3acb6v-m2b0UvpL#)lIQdHBvU5>(KtA zHkg3OwJa|bnTyOi2tg$;q(mt!0sH$t%LsW9jYeFnbQsx#+Y}aKEki9h%^_kqTK{vM z-0|+BZw-w&rP5F#8@GtCl|THsh+veOgJlpor-uh`i-uoH=v|8)yv)0iwC3;fU$*x@ zy88hz>tM_Ez$Umxfn^bTcM@K+Lo+-#MM1D3wYECy*rG&ZesJm+g$OL8zsw>6aj7t@ zJSrk3Yau!E>KXl#Jrm)`L6&wS zT!Z~(?}Wa$d9uw_P#djwf+B$XEC2M_6CJ@#q8_%J9-AJfn;uBVDDW1TaQ>km6N$u? zGFnz?*LXg1(@KoVp<}SwT&_X^Rg)@bQyw_iwY{;>7#lg;=MV*FP2SwIrbzQqkg+*oc(z$)s3D6}UsY;5uQ`l0HhPJ*_k&0?X=9wI9ZLR|Jy4SSa zD=-r0LH@=lhq+6eT6RyvPEE=Q`}0Szi^=*;%)|q^a)oe}ggAJcoIhESxq{7F-Da8( zYx-ij_(CmyS$d-?r)^#g!%#1AVuAj+Z;>#pOKp+EHpyJn(noxmiX_2}NIy z#usHrXOeIXWTkz&0ObD$RHUJ8@Sg#xX_qZsX*b(BHaTUGa_ELksRI*hCnSJzp^M@? zJQ&!0PkZ<1WbUlqU~gmm>*2re##pMG+%Vu7RzGFeykOJBk*KzB{sYOiT>dXAy6If) z#-Y4d^|s+}jbRF{CuxY~ZpHJfVj2GuRmMyAc0!d$)VDSpsFSR12BKFa$c*{=B+GxuTyXC>gO^oG!MYK`#t#X1;bCjnT1GZK7OtKZ*j?n~J29jf? 
z@(32L#ZukoQLOLvtKRQS9)iH{1)Fq_Tz%u6;J_*R{3U32vz88nY9x+EedDUf`~bB9 zg@8JKKtn(;Uo4`berE*tOJZ=4K<~$mu1i82U8rAmjLQ>QfNKs*S4C&2Ax{$Hb9o%Ewd1_qSyf&(r@2ja z9F)vA&OY67n}z@7ld+&A`dICJxbnR$YV+!>mkD8o6*?3R8Kud(XHw7=tOx@y3Aj{Y zEnczMI_kBL1~Vf(m+o2+T*RI>`Mrd_35d-sEyHP2iFVJhlIU5#qdj+LC4=Vrsi^fG z_~YDm9_uB_8cmjz@W&i@DY!*96*dLoSqN6%!n}%giI5!TQ4@WGw62WFfpTlxEsZj2Wa&Uqd1F=l35Fw-59FKBPlD`sput-6BAGdFFv?iGu z=UVW&W;7y2-^GCCtoGLb8V zq5~&sc@2|!S;3?dV;9_x?UFM?zc4f>$JLN}9V!;jyVSp>@)(qLS$!K1zm)af)-zK@ zjBX^v%*oZ>GjUmX7(iBbo$A+Qx-AeyzdY9jjPIZMGgr2)=64)jb>HNi(YkG!#3PU$ z)kvDJB%$%Q{s5<@?YFMGs=jzPA1%lMda&sVoW#g4ab!< z4*c2ydv7Z%K3L1(pdDYF^&w=vs<^~pc~WC6G-*n#1#zVMgP&=I|LX8aUKB#2Xh6TA zFSZWuMvr6F+c8jYQRNcy^ofXfTW$&H4CXJ6TUt}N2Nt^$} z6T$zz7vl4FcJwsx5#z)8bGkq`lEdfuL2tA3 zdHePt%b(-(Fx67y^Y---lcUS``7N8{{ds(nVx#->INgDj|o99y#7;$b&aya!{@wY4Vf(=i6XQgaj6XQK-Wf)8w0gqY$yzp zcI4`KVa#xLDt$~(f%Z7Sep4Fk$#Knbp}Ie@F1^=qSCZiMMi)CY4S&zi>k7V43&xZz za{-Rbg!6JVn22H&9k^uX`yTB` zXz%P8*NQew`Ffo9p{gUBS_4js;E5lDA$z9ueOu1bltB0X?cLnU3S&9dgCoQKf#(K= zp(^@rbI;I~8tL~EK=^(56>3 zV93e7W>U!y0=$?m1M18cDNSr-o=ew{@jbsu$)Z9Q7KBSG)F;cae1etPQyUwYV8z+h zXi9;Mx;NI%tSq{jcG+1>Jj;V^vOMGT8Znk%wFpijaE+1gEw%~0(_5_T)SM6Im5v-l zN{}+yjD}7h;7!o!@n`53y1oa#__LRCbacPKE85ahjJ97{NWiJ*G*^_zK5ed;M0;`;<=%W17Qh zCNm4iZOn#DKhOvE9?a}vLxgmyIdPQ+qGx4HInV}-IZDxw>M6*a<`&n*SZLC+G3?eu zqjYFw;iKvsjdqcDn7(7!=bYLpx6f}>HE4ous{5{R!V<_))e(ccr zUvqlgn9^vTO`hCco@Gr6cF+eRTz=foSL`?Epryb(iyK6ji?L|V*X$WgJR5OT9l!vx zYcFlR;*wHGJnt%qI59S8P7f52O_;gnPDR98X~d>tu@+3hyqD^4u|nxnwCFG~g|E#y zGWAKHRjxAcfMAU@@4Pg&Q>!pvpK=ise9S7z4&q~@Y(2wf62@wAZy_7*^cw$ymY2YU zHbNf%Gb)K*Q2DeU8*=1Hr^ghFGJj7RL%K;xaO1_0IeAJ%A?#l-ygzDLkn~o5i}=xf zQ#|>(`T6EIfhmpfH@0>?u{8&0_(b8m`A-UKjfW+~{5C zYsH%07>ZTODC6=RxL%Z?x~2DTiE!quI2p>#LREBspp16W=2TPpmu;iFd|+Q~1(ZX_ z(?(L$_GUk}R!i4z#++|wCcaHstOA?%8}&WZh)q}GRP`I%KLcBus9RaN(U@Jh9Es;6 zsADVM-%v_s#7^vniGe%tFm(oUSz^=U)AsiK^znl4!}oD}fAZ^AhVhXN)Y&sX(&g{;{MeeX_!c33e<+d7`T2%7R&Kig2`kEM- zfeGaI^87e1k*&$m?eY3~n_7ea{I1CXz)#QF?(t&IL)Dl*E+9>LHv)TYd0Fsz;OFo9 zK03kI(c$~K`vLrq{~Q=VP7)Xd1po{H;@6c<2+)U)&@BTF05D7d0D$~^$JLVF(bUMw z=zo-v&eh8DQhh1{n;oSG{hc4i!R3?nAck~%cg8Ydi^MQ15vQs7QeK6bDZUdM8Hkkg zNr9rUC`HhQH*x+CFl^Uj3?xb88U7sk)GBe+5lM!ngoZP0gxGgiNy$sLxBG+R-F#Q7 zyI3l8e|)lTx7o;hcPsxoOZ+k?mF^G{>;5T8* zykvZF&J8NGA&B6()?Q-{za6s3k-z=KIcOs|xB=U#Xw*dtRT#wauR8t)avokxgv6Jy zNfJIDWu3BnjB$dnkrX?s<#nWabq53Wf}ugD3fH)V>-YNEjNEwp9`i30NJ!qMZ=_!* z=q>bUb1G!QM_}qg*111SZBlxkYI6?vXMN`L1P}ol9!0vqrCK9T7hf7Z&GRLIrY!zR z#m%c+7fIH!`ml2EOL-ltH^1n@_N=aULmO^+OwdZMl22_gK1}eS7v0(!=ZXc=|S=Ti@11C20*#s;GFq z6r=m|^@YBz+w*;Ue#d^cNPgEp*z@@|k<#;X$MsaZzlloU?R9^DmVmzF^>iW|gS)XEWlYoxdT5@WNFMrbG>u!=8JX6FkM8JS8)3B0)4bX= z6~@fX^-z|y-hrX$7&(mL(Zc7THmlD(and%YN~D*Bdnr*#Jvy!Yl&J@36kl$!OlhA2 zL2dMMmuOXMVq@p;ibfHJxKsFsCA=k}&2f~e43 z%1oB~uQ0@k8Z~FO>O z1&m^C0YhKP7+c;6LtYn75Jkr%P-iVoZO&0`PVPJZi(&hUB8#Asr!2s!8cRKx6k*0AcyIaz9a z4B55xzPt-Mp44*S=PubTcKoGr2Qaf^lj0$&^F_ymvk>k>*ou3h#d@kD>n|Bv`)Uii zHaMguH;K-@MXw0pJ_g#HNhWFaPMLx5h6eeKMrIla7}-U(S1Tt`4&J+Ja{ke*Ne!nz zG#3(CukYiVolC~xE`B`jwmuvs3a4^{HYvmu@AKHBiEMfin+fVs5Idcm*2S>D1 zL5L9%Qo57qO?&VA-CAp&b~lIwSszW~x$0KV=0V0yyc8f^G6l1j%UnS^Rhl=j?pw8U zI6Svq#ISJdOV)B1>`+biEt(MKq(ZRaG65ORN*k8F{1l0fZjwrnhy8&LzJDUIh z#jh|s)MC`HB!0C60D%3U_;qk}vovz}ui(|Tipk(W`1YRv0v~*#@q=A6Oe#F@7FIpe zP;SvY>qZi+S8)sTwM@vq?%7%W1Ei1JYk}-wUevtn`{l~>>nLS=J{*{KNKz$_l-re6 z{wlAbinallJgg}>Z#i+ZDWsCOe3P7j^)xBJ&iebs@bT3mj&h@YayDdV09t=rQ1*^A zlA$Rbafv?CkYAy9@OnEz1;I-lykT1T+;CyIufF$k zRcr#jJsds<)JCzL(5W=kV8dVaMbL;w_h;)2ctXYboq;7$I~7$>LGs9T6fS^5ZAdk? 
z2CY(M)Op-?WL>S{@}EpBv!+aY*;AC@mJ8uO8DLsR9YkwBCpL9ZZyZYJ+NXb}QpD0} z()l^#6-R>c)#eRhxUB-EgR@cur1ATj*$F&$Hf^SeM4zkU_pu$a@vxbHG}yeRg$PVC zOZS)(;!P50z0al8y*|Ne`qAF{|4s*mHz}yJ>PRiUb;^F{jj!iAD2Kd%g{jLFRLJKmul2BtJiTKOO?WkA zbcCr^Qg zB&ku{YYi=2%&uPqtN#R1*v=spsO=zoxS!Inzw?e63w#Jn2n>TY!ga0}5&E26W*#Zw z!u{3eNK9foi7Fq2;7oOhSWr_%iE;|H zC#bk;PM^7}B$slCcl}3VDNwEP`e_X2(>wtx#qVe)bP&nd?1OOEUlVKIktslagkxLh~eK%JiHXSNnR@1mm`?aLG z=!}6TLE;mr(3LHm2R*X>RFm+5S!j{HPf2s8VXcQTp7*0OP^Qi7i;Ba~fmNyvESJu5 z#3_ei4r0qRUyH$J6Payl`+kRsIW4*Mq6P;i}mRO zP*EgltGs@AxNg|i4F&lxkq37l4|v;tS^Q(P5@*Lm`ERwM@=9R)R9SJ7+)fa}cSy{= zFbr(r{oYKJ$9K* zbmq(sZ+U9jRPbhmnEB%!6P&%|4y*+O$i>;sR=|$asCYIedllyZVvg9>4Y;7uk7LeF zs01ehbFC&-*scX;6{lke;U3NWq^wrr7dWm5{+uvX(!nnG^^N}FUHH>423m%F7*ih( z$IomM=X(51D36;(^3`a>O*EU*o9=B!&WnNC014mQWs{kP-3cI|q|A#&;wQjKO%i=5 ziyG*BKnIugu{Gr?YEf1!0~0YA!*#lky{bV$~tOi0BqXz_4)Im*SjOd;KA zW}xFLin}b44?;^__Sb>Fg*7P?F(nPf)QyLT?Xp76c@FJLDyi&pVW9q{K=+-N zc*+4F>~-78SgSo}T(=Y;-|NwiAm4~NF5i_StlH)KmL&m8TrY~CPp3zL%Mx6rcOg&P zfwr$GD5A6*7^?z#*khE96?Nld0V%y+D<>i7YF~NQvMWn1Z$4|^E93SqU|K!|lN=1t z^baIV_m>*k#j(TOQC>nQg=AM$M5q0X1$5TUWZ1j)J3s7PD21ZtltZvmgV{yCTwQ6E zBXU<(O!V>yW2nM)iy*LgRT~s4@#UkDCn@2q-oirLez19Ipd_wQNsU#^L(-*0AUEp- zCM?8R>)NeI%+i=PukzBl*kQs?y~@u?@JHS?GWxI6xS&(?pETr~qY9@2 z*Wlkmq#>X%ZY#@`3!N9P@kl`gkZD>WuQW58qa%R}QVvW0e=yxdI6{hnJ+Q(sL>+fj zW3Qz2ffqKbtAcnn;XAUZUI`k@-obHHP})a-vOmJz>qO`>n*!JwaO03S>c4asT)KCq z)wVLD4|O=!6j5zyTt>A2jny#n5yV`Qd;9_8Gd`=VtitBU_s%RIxtKXL#b)R}Q`DWa zKuDk|KEHdH_4JOGkva8Cy7dvY-Gu$Du@qua{&DN(ndt%j=jRqauo!Y>V zQ3qZcU8l0h^Q2-y+RQEsOl}Hv=Bc-P#{bny{%_dedrdLJ79Id#oeKZ}<^R=NO!e)J z3>6*i&8$uStG;A%Yg!+SC+$GS@}ltk+09N>Qoh?dyAI6H{wKd2s<6=#u(DCl#@Srs{Enyq53|A|4WT-^o3$;aP0g5Rq+fp zOr^mPuO^#MN17aWsGBH*Mr&$7q&}Qar%O1a;`bE)mJPop8z$iD((%Es@T%Hj$I4i9 zXoO)mua8!#y=N6q(FgM>mW`U?it2wm<_xbt%I~9O#hzf$(T-EE*EYhham=`)KfnO{6riwuOc_+7PFnsgz`2c^@r${eYfRDOd2nk-b@0^mH|7#h+ zI%NF#$tF}61>LM3ZY9rECWk8F+M^!AEC=_uEo}Z~ZYlfVSC8taKuBvGQ?J%-(HXf) z1qg4F;EFg6nfKg{Z2xjGk>~g`HCEPzDbg4^ZwNZh#%kU;qfn1BY*}ET-EJ2t_gns# zf;8e*Q`nt^-%M~HhqkDTO2dUQWD2;>P0bq5pL0omrX^xag5Xh24U{4zz}~&ZN$$Wl zim4MO_TyvRjxUrvxLfeE>Znhlm_bclnMuuaFqr$&m)xcxDvcSRY8jI5)YW&yBz9}Z2bv^msG!_ zjy6`&m?!tdD1(;hjd*}3-Tk}=kXu=M3pRgH485LIg6VUo1lrgF^lFN`5ZIa)=qmlL ztKqsJAW14%!fCRr%eN_)S&X>PY1mbP-Hr`#tjI3}S@=y$Q7gE=9aRE|&_hh8kO=V& zJI{hs&>zclbZIXV@83X1eLuuP>sCLeIBu;Oa0ti&U#2*39_56z+4SBmJDhV=cMRb< zkBeZlSy?W{u)xx`$!GK!k(!;5l1n}BKN?tLiC^U32k{3Mle{U{O#=fm47L`T@Kz)+ zL@~xppaB!xmZSS>wYkVtDS_)UF4yCa7ywptQ5rxf%i%J{DX$?zbw(g(w z)@kY{f#JCoilBH_qMQK{9(qe%5*m4*?O2P@RRrr$bQAh}xh#v_;7$IKH6r&u67~x# zccEb1fQ?0V>rq+V&tkzcD+h}D#|N5Ys;(O;>(Nw)>H!;FphOCnciMs2JvmU$9j|!f z30slPYgjW`LJ)2?hi`}^;3dx6Cs5zdtsEyZd#Xv?2f0kjvzLm}&N-BjN4LgU^L_VE zx|;PG@^5aSJufsXHaU$ZkvExd7`v^)t?e2GISK*lARfSZfI@8urNk!>0(Zos`ceA4 z*7D!%zZ&)EH1^6J5Z6sI^t-9%EB%nl?jlZf-cu6~T0sg3!}TWfQ7R&FnFE4f8Tdl- zT0&EBvsyB@K&N&cc%G_~xXOfViJP3EqUX_{y@JBZIg;cH;0I6rR}~(asc1zOtOn4;YEF;vO zuPG@QgT{&W`qD}AxeX{9XGi>T?vm|GO_W-@Pyrt@J;5QYj9|ej9YenrHl3`tpslz z{Y%{T0sm`4~;Tc*Yq?dd)LV&8dEi0Gug?9+1r>aSI_2&=sS8k##I+WmOH=MHP4I@Dku zxFpK}UgnJj!?PbGr$Ov>%18CeitK~^37$5#o8~6e$H)kfKwwAmA%$64U@vr^brBy^ zb?U(*WPhbqku#jZv%D;lZdN1GZr17@ZGvsE=avT9AJzwY?>VfD83W4wCRGR(4nzV$KUJoB;gmZshWx#Kt2K9=y3JZ7!$!Tnx_{h+y62-xi4!T$&PG$>lxO-f zUcbuO*mcxc1os4EW8-N1m5xd7G=9vk34XZCX0d$lUV6Q3-sZqGy2;6PkhpS(osq!r z=Mb#Gxxx=t=$NH3ovc-N?>15sW=% zfJnUZ+F2CO{fBMaH?iGZV?OizE9_Y$K~=1=*yklYx$`&jl(jF>TBt#(1i4pNj_>~~+s<_OR;kvqj_&xDws++rZ)Owuu+hDT0~ zDaPzG^XUFkD-H@mcx=;AX(xEAhXc$_P&^K3ILeAbRGaRjXy3}~d)v0P{e2M`261-%$V5Ijbz&nw=>qv`3<=dS^oQefRYu8b?}uT< znj1KTpZ@7XR#uaAGUgVhxW~DyU}-BwMk)A={K-^IDc3OmKDvpgbWrI@149G`wQ7MQ 
z2t-2`DdcnJHU9zuy;z#aeimvfv%&V0t+WN8OXET9i9ikSrG6s-zJb8ELdG{ZGl=eNd8pJKlS0&++xh5h zr4b_+te7J|p}!A6SRK?T1B~My!g-M7Zy=1~<;rb6wfl}jI)M|e1Lu*!5HybNK?nx+ z6&H$Z|Kn9@wsURCmnn5eiL{%X(*Wv)$WK?4pdsz}>1C@xw;Vs}BuJUEibX&UwCzB5)SS z%z1YKv=8%_Q^ZRjv0Xv6j>Qegme92}nATm1m0c9gMCN`;r&1Gs+%!rYW^RptDkxzH z&^&!3yCEy;c3Gt%lD=epl|X8HzisB6g^_HMMi*V-ifp84P%^jEHVTd&92ElQ;%j3* zuC`bkMI1tgbCGIWzf}Da%;xwzixUGDd;|~@42EN0%{0761ZRAz9LANb`gPb0Rxs_A zQ}z!kfGUfT0*&L9R_nP}Q6}2l!BVM3o~jHeFkGpe!4>ijrE0KDnNk*ikMum*wp;=C z*fs!?X{7+y_$&H{gn)&Bvo46C!+FGlpchQ1768PaPWC~LJ~Zz<7dmUDte+y$sBWQs zeQeCAoWd?u1vi@DE&WS2^yG{**E)MorsW%8#%h7+@9E+n$d=gnGk0F1JTQuj>9af) zd2M*i+XYL}YDvjDC4Gl=wicS_p0~6f;#SN-mOG=%KWZ6l=55`~B%M-uV4*3J?woB6AUAi>b z#k`NN_B5Rs2wE7!l?+xMMT4Jm#YCh|nFTZUXsy9j3RR4)l#~A@#N{>?*9eHwMGyeN zqR+bAL*UxmL<4M5!8&w;fMubqIxjAVLIGrr3J>3l@-!^u7^mxd+Tx1rx*5rpT=;rf zy)zID2iEZ3lF)Le85cyeotY=S0&@3D`3<46%RO8*{@vw8MymT8%ef8`4u1&tH>npw zYYmF(rt+6kb8csIkdL-+iy4ENwa9IwD@_)p#h~%Q$)4M zawq`XCh(GqWV^ohcvWijbLtwV18nCklO7G@k3ggSFcolJc4t@C+Qr2>xlp*5 zH^GHJ18vEiO=ZI@xNaDxI>}4p$#4(aQ{nkC)idE)$o4KWv%H`yynYQHKqr)~+mM-E z$Q(tsoz{lH&$XHfbJQz^S9{jd!;)=eRezi<$af3n8v+xAr@CMiGU)bdrF+KdcsGz_ z%#n}MA3uNF54oL0Uo8Wg_tNCr){PZ|;NU((6fhQ;r(ym++`{{U5jlYXeSku+ic}de z2AE^DYfu785G87!L$vd(+5liixy2?nONt3-)R>j#mZ?0h7sK?IuI4TU&vp7Mq#0*ghAt}N z$}1lz;hwo&+p$2UY@G(eZD5@qJ#-uISVLHK28@93KeRz%N@$|k3+(3mEzpvNCD%}3 z-S(8_eE*v0|F?yHGt=`E4h#SggaZIT{67ob#m3&k!PLme(Scsq-pJB{{=dwBef)I) z;V0FlZ8H8Kbg7>E)V7L3e?WF9XKI~`aw>&msa_@R!xB6wqp{2Qc!6^_`xQC~DDQ>( z8o6vwcCyDzTv(+~dY{e@y0Nu{MpdOv`+K6A>}|exe)<-&GAHF!eUi*PZo1D8w|BdL z9N#C%d_w6k$!n!zR>r5bp(-oiP1{K!oGr3D>zu+|^BfwTtJ)XSs}BHcL(4^1DN=D zdOETVOSl~6n^k(I&(3b)g_IG+d$=$NB;1pY;IhscI(WONIIke1&C4m4HFr4qvt zkL*7scaN-q7#bRmE@wHzjIPsvj3sCMUbk6U%pwU5G`MyWcx2FF7jWDlfC_XWfa|r+ zl;Ts_1(Fwhla@UkC5ii&I41OF<&>SrAgeNZq{YWt>u?9i4K-Cl*9BPWrU>JE--=K* zX@uuD$is^7%)upOemVz80q zODLH;#a~B?j}tss6b(|ez-&b>9v5hrrv%LqHH{bwCt4l7{-auUvWSPc zI4~o>Iu{j&Jo_HXywB!g9bmJ%i%?1#>`Ng+YQYTGXElEI$QTVaqK7kQm#)l1A^=AB zp?u2$>bd!-%`0l0@hB)*{d~#$Z^UEQ^6#7Q2ox3kzoBYPl$lwAF+Ct2c^$^6H{Z3!vjcrDT+J8wW0 zJ53lD%&^E{9F~sZ1yVQAkP=-?uKEiQl%HZ=MexHNdjDdfemjK@AO7Ph35ASKxY&m2>Q0KKWSoTV~9m*0W=QC)5$j-A9NEpnsb)zVe4{0{CwnsLZ zz3W2B+c8-U?KsMmJU!f=qyPVcZMZ_t==Q&7bQ&1}0LcIME8G5}Y>r0uibjr(|AE`+ z|35qd@%2R#W@ECBY6!dZ%;J9~ z%=%1aRqRxu6rrJ^ue(C{o{#!*{dD!G0o@Q0qxrS)d~XE#^nAaK?_cC;$ByjciiTVH z%=%;}+&@R3KQ7e?C}X)e(y^lA>G=VV2vG~XED`H=;|bEzNGP*7*@Wg$iAnK5xcT5= zHE|Qe&t8%zNN2KQqGG0j-ks|`^}#n6XOO$UHDZ15=lK%53)lGFBONQ>JQ8hCzWZ4d zn8}q!G^*9Q6J#Q_J8mYsY;dr=yyOa8u8HzXz4EEQ-I(e^cnAKp=(@n24PwNp*B+}m z@LNUl%9>4vR%VLB=Y2wK(Vr-Wsgnt>EfkaX)Y@9<~R*>XycK!p&LuSm_q4JX6Rngucu z0ilS+1dCR+Dc(|HMiKD0PgiH03|E)J8m_cHJ9fhYFA`-cN~s#Jlo!7kzEiL^UgrAS zAUdsS&fUC>1%xr>#>!o4wUj`nl36~wq0?ow0*ZO|SuYOSL$;aUMKha-%c7rp|3#`A z$XtcMIJbBhkWe}aC4a#oKhoJ1Li z#}UWO^cb%!aZIDIpDsnBVy7suh@%-VPnRxAmNjwAghe=+IOE1@RNN5k*A43g??;m^ zBh9iA*=A8_(x8-vm(PKI`w zj9pFylP{&g4m}pH%owL#%@I(&9sXNf)PR8rI^~ngf&MO&LfGN6C1xaMC%SaIE&5d6 z)8}rAJ}1lbEo5X{_x%|*6U~}MfZoaQhT>j2(Z;`FbmyfP9DwLcQ@&ffk1}yb^_FO&j#Kp6re3Qr2fzkE z?Asgc`%fQxfhT%koF~>Ru{FwU6|q-T!55Ae5K-+^4`&3$zj=x3h) z4j3OG&Kq|`f3g2d-^yTjwI`HoLjY$fYVtpi=u37C-8HKli)Gc@s9P^6j(K{(9F^OO z-jJH}aoM`gfVM7Ob#EulX;+*&obKX{ZN%Xf=R4+JF_cfb+Tz_9UlTu0JibFu&`j0C z9;RV}$k!av>R|sBKib_ zn%i>!9}sVWK6u{>C%xw!UrWSSRFQAI+hf{r{@vd`J<%UrwjLUNE&%x5O|&EEq-xuJ zI5#M_ZE$=*Z#V}V>+c7>th>N!0LOOMu(x$)uuZb@F*8gOVBHxU5BFy&rGJgg^;(|& z%Bk-HROz8__$~!AAw4zXu260(^Jp`qcyfS=GSPzP z6#t+0t^A*@bPvZGm8i6+rPVT)YE8(!+^8*4TuV{8Mr)}(G%X@&sv@;_xoC?NEmsVN z(%ufWhPGo5MJrs>C6=mv>_S5eV+F{InR0D@B5tZSLH~D z&!JC><~4C*A^y#Ro(Z*Gqtw6Obe&JRVJZ2^p*81>jiE2{-op$QVy&-S!MBS!X<7u) 
z8P4FY5F2j7zd&f%qz#4hKQIHGX|p}EzVDeORTtUc_pzgu&lbL!cn>osC=D3=bvBzl z7ueAmPW~WELS+6o1S#4xla&T7@~MWjrK#Yq$p_M17^_FII7}F&UBM%wGC(1H*@)KY zK9A0{LvO^E&vkBkefhM$TqfThrG0m3mF?F*Hei}9LMn_k9_Mwcz~y4eE#r0zd3GR)JpI^@s_Epf ze2r2#SrILn^nv;=e632iyM=4}C9HJxSv%*Evd*9){UmwvG*XbGr3&f#ygcZpQWS_| z&CZn=R#8)p8Je3)9eIaB(d4m7Z@F2Gtf5@i*aUIRSzb#4)$&}K#cH9nMA%p^MvnA;wq$&uW+Wgk%r<1o(p2V3RLwcekD zb;;>5M@I4L8B|oCzSJ;FDU-GmF8AF@)M_wZckrM0r0Tp)V(qD6L&FE^*^*Pu9C+8N zE%|fO+A~(qZ$4vPCP!>^C2A?g{@Kc#b{)Cglrrsz#b;Dw=LdPK!BWaid>s(5Y zNsqDVtXIBO&nBd-=or>+WJBlK$HV;$vnL|5riV*QBWw}Jq!!%b)9!4w{o^M9osUuv zZ*gj`Z}!QFj;c0VXE$x)xi2?F_lm%wF_G9zV7aaZPT0<}jr9)1nqBg~9JH0*W0cq+ z-iUyivu|v|xIsU_;i6&JCS2Z&9uj(%x>QJawmgt;t$9T)DOKg6 zKPT6Zd8_TG?Q8+NpKkp^hz{e-1 z-Tx7a2cKks!8Z=yAt=A-yTLq`e5r~+=nbNm)re87J6_a<&8anqYfn!cE|hY8pkd=6 z9pjS$p~Jhb)8lX&q^y%?1B!CY!Nf}*?w&I3umX6sGrs=1fR?!Gn3#@QFNGlXDsk4t zAx$_|eAwcGb;R>tI5IWjSNNcj9idd%UVllts=ga)9&AoRDlyZF3_&E?zSvJV1jm!g}ZWzm{Fv^>TM(DVP^@fS_eCnry- zvbWB^NL=w{%o?&j4|ac1=*QHm zHpYm5%u^%g@hb)rxCczD>y~AUdCd98wFw{5t=QyE7-uprRM1fb^+o+X@I`qK`JutE z&3Hgt2Z8M@>|6PH6N&#HMLqKn{g}BF{{fG7YG0>PoA1l4Z5^sUz5S(2B z_NOw{xWWT$R#IixI~Q_q!f06Nnz?moaIfe2X2-AgoIc)c=}bSu$m`$>S+7(iXX4Y)2A-Ky_ug539h5yS-C5IFpJDH zi|U3_aitc0A8qOORxX?OR`*>1nX2_5Bp;I|-9OCbdDx3;X(eFmp^c4d!wvVr>s>Rp z1yQi#v?eJ=yWz-Sw3hCI#9%NEHB~&1pGn>)1R0;ALF`wk`xiQ3)iWH@6K+u(h9i}^ z9C7qXn1>%1+gkazA+e>q<3)(tjgj@{rL~LaA~jaMWp&g%(d8R}KzC+O0YNyB7x71t zMKIE};5s1STOvC}7Q9Ecb#=jv0PN+0f;}B6M8B)?WX}L#QkQO>Kb*7_tjp z4s^L==4@Ly{6Me$Cg4{<6Z3r_(ALYvJGy~23n=#3@`QW{gn+$zO$MjV2m`}L0|W{@ zByjacd>g#e@wXIv4r10i=d(%=2sGj;0+QRsBMIcZejUVDV~S@<@!tx6r3#Pe(r65py9altad&rj_u#ILTkyu+-QC>@?jbk?w*bKc`N{je-`@M2tD3du zs4+)XuZtSB>gkJQvU9Nh z52D!~+!+D_zy{iaBmxe2NRnQqYtW}=s9Df>UdXsIr%r_`$g^W+_9e>g`)l;E*?@6; z*~lZa%4*h7t19i-ZSp~3-in(ZbU)7b68`i2T#oxm`09xza3SF7sp$3nUc&W9Ksls( zlUoEwC4VuLdfV&X%IV}M;gdRL<=K3p_O0yfCD4rdcAB=@AmZz}uDN$UM; zgqa#893!B!@6XpIP&hEUd0-C}ig|i%50#cKQ!_U)*k@C=79Y2ypH*HIW>kTPJpAd) zY64!x3#z-aQ13}sRpy8(2mW~m-g`@w^LhX8y)Ww;{W!UngPtr#`FTB6P9&p zD8$frCZ85>A6me*9H6vpr+akae${#-Ql?-}g!+sC1^Q5eDDLf% zj^LBzC#UIgrr#(I^RG(e@SOVNIXAWM(xIQGf4*14XtYPtlk+k7)rh&_Y^|ErkZZa9 z?qLgtjvfHM+dazU3r4aS3&58DG<&4geswCW>3(8tVrx_%hTWeFKjO8-7xT4Ks7s2-^#e`m7QFlmbl#$`bUvzh^HDJ0@nvwU}WG? 
zZBqq`!y(I&D2awTG)9+nP^x&OApKU~Y|BR^5@mc)@BavKAw5GO6$$yhSeEbe!|b8b zzn@A2|2U$Lp29>|?6XZ{>Z8pRlVt1zC_B?HbbPBEc{Ov&5-aScGgeY>}fIQ$rjU+iDePe50xYF!dXAm}bNFfi6?cX^vm%_rFiegmwhpA1F;8LPq}Z z1$Vgf@OiAF6gMvIoXo~CX-7A~Hx&9RPmLxHbm$M)3QUhG`w;7VS1H0*Am==>!qS6L?Q$O@{u z=l#$Et_k6ZbX5_yiVGLjh_tNWoL@Tm#E zVA9I1p6)m7_DbXk+HE&0qD$QfaDmmFZZEyAqM{1?xJzGGNahB=Yx@lXl$3;D>-6fT z?p6XXd8C^%=1-fmQ@R3|AY#e!ksEGEq#gmgEPejCg(dC;ea~P`(;9$I{n!_tngv zs=}VIZcJCYez+_J^niR2pPN$o?TC16{ZnFQ`FnsuA7!8L>M}gt{8t@_0i*}Cau@F| z9h)2P{d5*Yf*_xPd~xt1dJ3i+Q&!vLq843EtnqqaVg57wpqyeJ`E=s<=yyV+eT8pW=qn^* z{Zq5me|$14Gw(CU7T}0@qI-4eP2T=UlE50H7c^sHm|}NMr|<(Je>SHe^!vg`W*fDV z?q|q{(ss&5&j6&>fnogVmV7ST?ePX`kvvh0VDy+YvyLbQkOuR*tHHzQNief;gE^Uy zYb+Hx;>UnU&Ix@|7bMDGV&mqEzC^J+C!xZf%`s>LSDm;lpDz4pFjv!5U*Fj}RAPn(-`I?f}o~9g@DDuFjCuC93EW$-3Ma<*{#Z?`AM2Lg}dqc7syC zpXz0|G+35?pCsoDsoO|9qU1wfe=1 zo6nhYBYI@^8Hfvac)c9}d7f&lg-E^CzHrwcaYvE?d*xt zM>y$SsBb$}dF%Uji+8b(*imliB$CTkNN@-|y41#SUKm?mPb)jv7ji^&wT)p%akW~d zGnEW7F?zpseUTwHSQL)x2WcW+Mig3x%%SnX2#QW_TOgi5piefRHzWZHH^3$tmijkQ z8vrzaAkhns1dB(&YI$%x95LW5QwYv6#74md6%h)DoZ*qC+fBww#07}V?OcIvHs8mU zmBh5;+Y8yfQRX3Kg^I5yAFIyJ2@F_cDMvX@pK~H@FnAY)8R@a~e^qxM_+u1R8=e`r z2$2><4TQZoaG!6M*Izy&`aI;#uUk3S@8{E7uT7MOQ;?!=I>|-{@}0oSi2VYV#eO8( z@*`G0t0{Wlx|e%-o66|~h5ZI!eRxM9P*^miS*#~{UD0)nEpbeDw5+jg5I+fXF?Ivl z3ImzvswDk(AQ&_@JQCg8Eu1j;RgQ#2TAu**97u&&ZF>Vr(3=_@o?EI)XXET0-%1Am zJ}Dlp7Grx+@PfHD{PpJSr)K+Re%ckplZH!22DHy}je5~wEI7J}oM39$>Ww3q@pdsp ze#-2O`yX#zK-QA=!B{V(H7+8lvM(*Pbb1O>16X5=-){*Qzk+oFB;LfQsFD!f0d+ZuNuCl%%&uUjg%(!qTZ+azf*d=S#<36x?aF%5mFcJ7VLc%rdiN;Vok zoYP*)EK)eW$K`R*)8(|k-kTxcF|1O}9ndm+f=TE5FWMzib``myq;-6lY@5gzNSd}$ zcuV5LKuZ&Uhk?_&G6=muQFZ8)Ff`kZjEZ4N`0-NcZrk3z%VHePlx!JX(gTBj_KcS8 zMFyQ0-uPAi0%v7t27_2@Pn1i)q)M0+aFt0q{DY*{ zw;7<$@6IaNotxm#*{fMrBDP?uC=;c%emm|e2qGjjdQG}qEqRX&1>nLxzq^c<`#QQD zPURt#plU2!-^|KFckZ2okrj!EZv;`GC(rzdb`wzm31OnnjGwv_eo{Q&e1AZeU{2=m z=o1~1MHM)Zx7JFtPlZ(Cv2m#F2m$5^FKvT3kuZBR4XVyz&$gl?rRa@Xb9j~HZNBb# zJ0ZOivI7wi!*shYrayl_aBNMk-{mnaL~nyoqWqNL1VkTCtkW2}cgH})VM>;;$z+vx z&Y3&m;UurgqZw=iB;OX>xAD%}+Z24s^T2qJ>jaFS!Xc|~Abp3zoTZD3%LaNuau0@K zLK2iU5qH2-6Ro6%R0C}E94m=9cz%B_KEffIJG3=KPIIn1$5fgX6CfejK9CNEz5su`OFE0x^I*2A(uC^Ggt>{} zKZlz1C7C3J(aX89yA(Kdm1idmZaf7Yi9?W1eiB%3;4D+*LKkxN>KeS^Gxb9-7!k#i z^rxGIZ2MtlIkEa8(+#$O#Dwm$AmlSGfnl)IC7BsZ)75KmI>X=roGheq*cK`R$quXp zlyS)xDk90K!5*sixRy<~WQGCJrpR}un-x)s63Hz=$OpJA!(d=N9=RCH%T;7pd=Ue> zud25&r8|~Fr6Iq4&nFDNnB0a-GFti7f}atDVbsiD)}@Y^ZWN^@8dzCNbH%$$skL|Q z-uNU*-M`+Q8^gfvWjDujGh~Cea66bVVa<}a{!kaoeih) z8EHcF^OaXB5-)!|ju_FgEnKQB4TlvPb?bA9N>^zR+7kj5xfsQw>6c6N{qE!n!3Ej* zqgm1vP;HL57$RBxx?Cc2sf_F0iMv(3jxGx~?h^qec~=p7PVibXH==(4I=XaH4#;Jf zM`~DA;@AR6#M6T2rnVrO-R%yE zV*PUQTZgr=zkHCf>5gJ>EU$Zkag*(Ag#+RvP+43|xF#4)$Ow{T zpWyEv7FZb;1Z$RdZ~~uKdsIc@#F&wxrE1-%DVz zK56PQ-{dMFqAB|lf*97bjl?eg*)vAB248S zp$J>;-zQ`oJ9<+Kx3KKmdkfW62BVv3eQVDYb7%w3tfp~tfR;yVdY2SfwdoMW1;m(Ax^2!*UiwonwL|k*uU1&0=f=P#MFL* z;I^gWcCe(vjjcHW+;&XM-HR`bYH0mUE`IKhVz-sNl9p&oMwHT$c)LH8We{+!iz|7g zAnft2J4=E0R!XmAD8*Wq0=OUXGeDD&j-1QUK~pd+yng9YQyO$b7XU zK6IefUk~IL!r->QsNmB>w-(|Z>%t*RINO(!`~$wFp>9FTY?;;jr`H(OokljZ2Y;Jd zXu!%|0&$_tT`{^*fcH<5*MbuXnXH;S1yL%Z2-xjmrJzVcAomzvF{MIRfjp_QLk9?j zb0lWcNpJRck%H9mN2>1PQjdJR4{aZOs!IH zOh6iWnYnms_OEyx+gY-ZrLk5J^GB}-r)LH)gkrc@ivhZoOvMkNk31_FpW{J?2gG;v zy{Mr<9(*qUrknmzZ(d{?htsfI+MAob5ZPV}MPR(bbM@ohk-lwg&O4;MSpNrQx;e`? 
zGS=w9lpgX;<>s`iwj15YCg{WRFf*0@C_-P#Os=|s4Dh$E~A>xjFoUE}nFXB!^d<1Nu0)p$0mc2QdunG;x%F7`n zgpA+g|q)RDZ zCCactvzNs`<4$2%yE88F&B{8zmuO6{m{|KMa}%759hqc-oU-PG_@7@crthS3^0<`e z=Cb;UNoW^*ckou+wOXbCt2&Ipr_-~W-N$?`?nqdgIpfYBZZ=@>TX!L&!5xEE3km7;suNtS*FR*RWR?#Z9h z2N`xh7~XWo|8VPW%2QRX9rAAb{q|#H&00~tqvE@#>OI@epQhznk5N?N0Oa_W@XrQ> z26r#SeBnXBCN?1mkFwM%pEpPm2`BJ;l*#h!<49N_AK$B$$9Ljh+dhLyXZL1y>FYYJ ztCR9n!LATQ#*UFcfH|#kAiYsv+o4H`0eQNIliyb<`j{s5ah{K zO_v*G>Qa}5AZ8x43WN`K)z<49GrD^Ts6W!~yh5Pb0URIWt`{I1^@>&Z2qXyO7?Qr* zM! zdK*r--W&EN)Ih~*=@-c^vfc`S1%88JR=%10RCw0lhtN674Ro4ic#%s+V<+sr&=cG^ z%P;8e9};#lDvT+xj`Dqnna*lX-$lVWgK-L)D+x3D^bSlK`uT<9zBn%Jwh!^CLf#SU z+Tpz;nVf#d@~H9EPfb0any_lPy;RT*c4xVY_%ua5M0Hry4VnaH!dKscx?$NBx?%q~ zP9g#0ByO&77pVQGj;x#x3YFLDYW@D0LfV+8`(++B$pclABkCZ|isXG4cJ4fD;j3AM zm2VBlP3_4T=c@vohbKK`kG_xHO)ciolRE4lPGh*e{2Y>lKj9Jl#QrslhQF2rahygN zfN74bH_V2NArW!h6XmB><@2Ev)j8xw9ty$c^-vtV;Ifx`Rx%zbexiZ`nt1^zQ=^wF|TiT zkI!?B7hxxMaM!a8Msag0P{O&fJs9;tDp2~nvOR1I8CBm2#6pp`MOu0R;qVqq39-5a zjx}`51P&De<}sOUT@R>i`B$)Rf2vF=47pLWFMtI$2!yj)fIlNCb+-XU&j4y$p0eoE zw`blTzj1Ng1c*dm$YtZIG?%y~`2zE(30zTX%QaL%RVl;9$e${Y2ojwKmj<5Zvt*IK zG9qL7Tj1CioVr5FLs}c0QycbHKq}l48!6cbe|oIX)SJIK6x(F`f!E!p9(St15RJhi zj03zgwVf)>cT^&ykN@Lz@N^tWU_wyj9Ylp^pbQpxcR2?}6+d|+oNv~Jha9_t4x4E6 z6UR`P>l`n@cqMU3WK6uyW5A;zK&<>W9(D`L`PZ9Xu9P&1yERjwL;k!iBQ#P>(wJS( z)&T9~A|scQ`a|3ubwaWb0C@eyvA+5=u?!fT3abIL2_qbi<=3k9pw7bs9l=4yt{mAr zH}8jy_WEFn%!^ziaQMx=7VcF-36d7I6B~wO{*8)}Y#(EW>jv-JLCNkk9#QtrLvcs8fP50hHi8xbjgL;FqS%o_kYU@1TCf5nB`UBf zB1O*2?<1x_WiJEf=7bQf;Vqh2AJjNGv1W~|gjeS%O76ai*Sw77@1G4}qK+oDPC=d3Y+!5yGYu3b#oHkQ6DfG|~%kizfC z`*H{}-USd!$7v8yVOq4HIn{Px(M{k>a_UT@-&$!@(#z-Mvb3a$Ocqzk2n>^RXDf*p zACdA1iMx?FdJy0&menN?&|PowsBKhVpKn*PAem(xmZqRW{4{&meI?|NUcA+oU$a-7 zlyDr7JJh)b?{DZN-0ILzFN87aexhfK-`qpihDuhr7tfV|=fUEFs_y3Z=5)bQjT}0{ z!puVzKxCOvRQ)X~44WAV87TEL7%}wz!2qwLh>;32f+9d*)7%;JI%QhJG||z}FB~ND zB^}Cw0m&am}h>O4Bi2v{#WOrS75gYzOoIcvVp#p#Ov!t0y+JdxH~677!8Dg9U|9f&HLq z@#ujCfnVXQDFi@bPHJmOTw$OeZf-k7Tm(7^fGTmDBOF<}g2pOiQ!8j;Vi2;-W!!b0 zjFo`Vd%)Q%K9wIz>Hrco<9vVW>MCdZP+$F_-`w-84l5YFt_`D;;y9&{rF0BvleDC2Qaj;h1t*NF)TYr4EqA+&^`SQiq`|L(JxD z#=X|kwH1x3q7?;AQ0d#P{aAO zgfB2NR8}G+mkEG67#pfh$38cI^3ZKFmRjgSZ2%CA`hmv?&Zj?p=`ge$elfqturB%} z>r#mZQLhSqM$0=(2QLA_Dwk13n@Km{Iy*aCsa zAoRfUE2V^CqP+608v%ZtIga`7!Pat2NRE#i*-ViJ8)S3dvFw}K8qXF}RAn9lON&U6 z)_Rblb)Z*{%MHOY!Od{nnXwjSp9>37Otw6ST&zn$qfK#6!fEOe)%<1mX87iZ16b>F zxtS`oBI)W=gGCnt5QvXBR+8n?7fg%jZXx^CgX?kIyVO6MIt`ZN<$u z{M3<%S#Cc?tluwa&nyiz=_V+VFx7v|T|r?gCS{~%KQ;3YN;H#`lqnQ%!Bb};2M-B3 z|6&=An->_zXx&O2FD<~wME!jI+uSCdAhC=@K%w{U8UizJ=hg> z1xA;&K1cy#u+^6?zsj_5l}+F@D*~~0t*W0Oz}z9dU%TQomsk_At;|mq^eddO<&Xd7 zHD#w!nn2RDQQ4~uk#JgIXFlB%QkaM~5CVS&s!j|glfaCOI;hEC~tE+nQ2@KFjs?cSNHOh|7$ zwZtW0F>5stt7tH_T!p&wPd!6aUeAaF(L6DnSH2M2%3Bjc)R;=OE{&c<8ZR?m9Ffhf zNm^%#H4U4~ejqCwxI62Fw$$>IX>#6|5sC1eu!8chk;^B%67sUB{AT;2IO^Z$0Wq++ zov-N3byw2B?nKTXz_t=t0+(E-h4ibtwwWS_1m)#bCn~{K8M`??F|l&#h!fq&62?6j zd0!KB%wh~KE0_b)@r}L={4jbkf-rBmR!%m!I0}QiQcWQUQ*;{LMe(Q%T8`>&E9vgI zhHrW&rs4)p3O!fwVTG_bIGc$i^iaIq=d=-td6EY-3ADOOz>BndJTmGoh4Jhl?RE@g z3<09dgz+)aP)Jm1>cd8tdD?;dtdR-xgM!{(xL9*cj81(y4o^FX?C(scgM0$sncoG2 zuY)y=WfO+?3cns+jf=_h`{7I}Tm@35+*=TmObPM(Ql^wFp;oP!JvYl@bn|<|$+bMZ z|7q-|5)_N9271TPU}mjlj3kq!504}rz!Sno(oLAaXSSmNe^Aj^!wuz?K#4oi&8K`f&HQkI6)h_Z`?YT&iCmV;`!82PsafOP!f_dGikn!d`QfkEc0z178=g*}LwA>N!v_>j z)*hMnl+;E67PAjSFRY@mZtZIiNtK%|VqoI{y-$ynH!c!? 
zPGC8zo;@H5ewXqlfIw>vYAkC%_52zi1}jLvTn>;*%S zs_3>Y)y5|;K8RqY?YRgMvP~#!a|xvcFv&s#wx!hiNMwfCaU+tkk}PUQj%r#<`~?%k zEP>uoU-goa$pWfWO@@Ccv3gnUk4f0NMGc@%D}YAY?_ zm_($jG$9=wi$4*fD+nTyk%fk&qmIYD0)-@;Xq2XCuqGA}@ofjF9e#TQC3*`cl6`?i zPKX|mZt*C!brYo<3GdvCt)nX~D@s zLy9)}D-DH_3Iu6~iK#HtLZ9wBlV1m}+>EGgriQ2uilBvAt*xL97RaJNm3|DK=+w{_ zhsx*ENBBE-U2>C*ToMRn8wZ#aQ=!L$8#-Rqgceo#2`Tz9V_N9S7`t=Q6~w>L`JJpR z>jPdAL*2K|0<02$+3rk?Lo-&!e^_lA%jn_06|+i4D1rH zFhM|HjF&S%{wsdbxg3hjRxItWwnRqwK+LK=pw*RV?NNvtnj>A^D#4Wy@%?J22`L=G zWkEX$_F-l6@J9*Wu4=FKjxfQK2DcB+ z#bFdbjYPO%6#fq_@XCCji}VH%=%GLORR##E;fJ0N#T0rN!`)T%vC(jpMqd8$bmg$A=jrpCP$;xk0d^Z67Uf^MI%z_(#B# zEY6O{%goA7%E7_S%)?E}&d$!vNy-7>Waj1tvB^|}ev7k!kZs^F{>gH1b278@{%_h0 zlr5)_xp@9>+F9NlbScGx&%w+MAmw0Z zWd{6Jz{|}3p@*G^nd7g78qldUC+2^096YSd9320f`&(ZTv?cT3Jm<$GI6vh7ckEvQ zd?0&j4A7D+*S{QGyj;xO|A*fcC;zZ|jl8lXibF7%H%{MGQ!GH`N!Q2iebLZAi}Owg1v=YLTD zF35jTe|2brhE-UwSpWW*0B&a1zdCrBd3i}Wxj=IimLDoKKs~CgxNLtH`2+A#>faUl zUy@jRO;D%WH7J#V67)+w1jxb4!TtX`UN&%WBWy>1ufu!prdfv9GVzz?0G=PVM)zch9_vU=s8FB^hSFMBGq^?5)$zpCZxvxex1P{ z51T0*oGZw0%BZeZ$6(vC-{jY{{Jra9a6G->hxw#Pjd{A$<0B>JjWYYet!ptccZ^{l zEHLVVT92{?$5i>(V_;rRPsVK4r2H>;`r$)QJ8Gx;a>t*xb~J0wA;{WKu>9Gx8Unq+ zbH8y|^nN?Mpn5*fyl-zDHIUnJ$I$Nata}=QN(PSHaj2F1AO$$U-q@>(5eM!agKW(^ zlgtN{g#5E!Y$Ul*Z{lt!iLm`|3`&$Wc7ZiP>lZMM*t;>Nm?K=`HG2o_BRe<)-?oB0 zlWFo`+cw~8;vV;dyr9mTD;GhiU_Y*KZ`4O(Aul+I8gDiE zfa)UOE?_F}=kbeGin(W~FSu@S+D=XNfBy-^UrC6GyRJAsUEgUr?;T{kz#l-sB@~ba__!V?Rhmc-_x zLDk8{&oP9)xZ2JAXC_y9duBn2cni2yC~H&@z1`a-F7!L`D!{l2Y|nw1pw5@tnQ)^3 z`~W^KM>@o4dXf9<7T4;y9s9+k4pTT zt_AIkwvAxto%Vq2-PyHbxr%Q4$AQC-Mzr@6=hbL5KBGb+gVJ4d)Hk^z;ZvZY#jXa) zp!CR>m-#~%Kbva}?~Mfn8-^0=WhO=Lj5dG)LC2PPS;sag1KJt2E2 zs)@#SE6+aP>ziCjZ0?0tNCObG!e+uR$he+HsomNZKHaf~YOOD9p}$5l6y&h75y9ob zkp1#-+mop%i=tz%Uas&J;!pYe+1Oj_SQ+2`=66xq_$snU`-@+vz=1QZ*N86>5+(J2 zGVHKBTjW(XsG4S2PA~Y5WSq>vz10zPDD`VcY6dh597&*Ntk6K3Hy6O4XUCI~n;4YI zVOnB~b$f*y@RZ4XtG{ZRGWf5+lF6eDs3?z-tz5fehoT4&0=YJ;MP*BrkRc1dITlG-J66Fc z5wb%Hzt*nIr+A+&JCC_=pItrH!B8;}AA7W+I3PQj=m7EJ0-EYZA3rrmk9f`&XXiN4 zAtv*3=#JPXFOGzlrD#U#_|kfeqxJj0;Wv!n_Iyp@FK zD{bRD@g|JljT}+pZ9wUU-@P`r^J*vdluPqi&u#6wMg5Z7M4ysTw?Wc9Hr@JD01Dzw zABkoQtQV*#xx)WTf2}10_taQDq85@q^k^`FOj2myX4z)maaOzRc_$mHR~@`@u3m0} zu5p!Y4JIb?CUJr;eU)rHmFqci0=aZft{tX-wKy=~(*&Kl+@Pi8sI52HSV{yTUoZNz zVBIJGPw>fa^O~ORwra&Q{)jqs+N-9cY?a^f)-Qppya_o5wyh{a_c zh7y6PcJ?v_Da7ipH1lq)F%I8EeVHZUiA@LP+{C%~O{YPTxq_ zE$5y}vip1mw>(2kt&3@&u#Cj|Rf#rLLrngS%<{dzO7=<9C)!rCtZe7Ai_rlI@6kb9 zxfv}_0^Rik!*_a5!yy^(R-6x?`fHR`rsWYRYP}u$eO8hIgP>NjqV`}co=l>iXW+f< z98SJ?6~m}j(x!9-{||Z?`)CJTe1Qg4N|;=AZzH~QijejJEzu!@6o0M zHGE^ebnupk3L3ZRncI{#2=PE@l4KgTS>B{|#ZLz)TKid2 z0NeJ%TL?#Go4Bb%h^gq}IrqQs{20Z7M%Ew-Brn{)QKm@kt-R4{M`XU8uuoo8vrao! z?Y|z6y|N2GKJ*^Duk1UjE@rFAmTpJOMPPGN;ibr(nUr9kiQ@IPu9?9f($+4E$}ubx ziqPv;7GkNx($%V@lU0&a%5ggyp~*HtdF# zv6ZCVKB<+i3C=+?tJx1`wvkR2|M?U^e;K2*4L18+q&VXY)7a1UZW27=*7^+w__)@` zY#f7^Y|Q8%Hb!?a2p&K{Bd%irY`5aUFV~Ot3YF@8dOGv8)RxmHEYJ{dS=rsjq!RFg zr^aR9{o?=aAUg!v;^v8bN94{2mnqKao6YI#gtP=xK78c7SP0KWxK1xNPX))(1YfR; zK4=v>H`?{vuT*`x4K+l$Y7ae~(gUnfPd%B|vN1IU4RHkj7iwScsZkGKAWywB#sT8W zNeN@CbFrPpwFUamG?EI(*GJ9RaU6q%CV+qkO9r$3*z*c!)6-AvT(DqTCGXmBEe?D{ z*;LltDAvNh2tKox5@>J4k+rLc;V7*3#fBZD(#N0(?y-m7&_j}J5t3Nt-8^UUL<5H_ z5+K`DqtKL6VuCu;Vu;gc;E!WU*~(>!gKBh7dqu|BqKAPcgl~G7$uk;uW`$#zGpkuP zc$w#o<=8E))RdQ%%OkD)EbUjIhB{dqo62Zxid$IHX)ZmHQW$7+b7U(na0HA@(W~%E z=4gy3zLjgk+@f`EB=;wpO_0mpoJc&>#>ee~yaWBmh~CYeZW%~oV7BJxRX7w@+VY#! 
diff --git a/docs/_static/benchmarks_files/OV-2023.1-system-info-detailed.xlsx b/docs/_static/benchmarks_files/OV-2023.1-system-info-detailed.xlsx
new file mode 100644
index 0000000000000000000000000000000000000000..8d6c1bcf022e2048085023a85a026e4b8e83ccd6
GIT binary patch
literal 67988
z8kMg|QO9TTH+W=cN(#CwLs!$lTKVC(DL zk%=J$7pn$qbT^x)F^Yyfa7@X5JSDe_%izC)-D^>jo%EZLe#5JTg@aV|{^fAAhtcj{LZUxr%%~Xb!+=Y3IN3Diy8*(yt%}9dv z2IbZjku--@w@#(`?F^Xa6^XQOt2WG$1bU4HT>Rre#ZGFh^gACtOlEOeJqqqoMK1Uh z5zD1*VFOgS634H{bID1;c+l7%~G&>E~}u?Tn8d2(c9CztxZTa zBF}3CyAZ*MOqwdclFZoA{1SOOffmKMWC`HK1TENeC7TzzZ?J^(5{RD3246Dim2oE- zl?tvy4)TzI4qjK7TrkJ?f9ko3H%>UmJNz1Uu3&nWWtwF?KsLPC!G-=ZD$$(73Uf(W zR6GGM{>g;GzA0VS8Nl2M3ppz~b+Js9eyhmK|MjOJiSTIJ5KViL$ECXMvL^#-i*ubC zFm=u6%lpr_Zh(6GeEKO}>r-hx-DNdZy)4tCg8E6NLnK`c_O6p!u4&qExsJZ^XE?Ry z4w`y7CKv-N<7tf>?QX{lWP4_WI6F>3o7Nf0fVLJp&Nr5-Fq7Kt@U;4J^dsqmPt2}! zFu1!w(q<&5$%URTUp5$PR>sZ~Nd zaZdU8r&x6=Lb13JDoY%q!gnyig8Jzyn)0`*5fSJ3K@%CwaD>WKSvD!QQ5(Iath0~> ztB`*VB(*7KRI}BDu4n-;>;to~mJm8q)&6ZyU&amo>T6gwL~RF*^6q63SvG=1F)>1)=@18eQ#CLjC`l3jmLNJ!BMBk9MbN&ZzY0L5oqqq) zx*-b%nY%_)*$lM*LMuO39r6`7gC$9q@d1HQ5it*C&J*1thdk16Agp9F97T{&LoF2L z3nIe5`-+-@FzN*=@W?bZjKAx!x(aS_esL&rAeIcvL7;x}GOb-(!Rd;!)~SzO+ndO; zM>a`lCd;O7-EXxtlrkwfJDw80bw;l#4(8SmjW0w0(IeCtC;cWFvrN}u^F$}K#jsZ<1&|@TBPIJLO znl$D!q4eve0)1NwIz8tZ4Wm|XJNfo#Y;HVQWq9(IJESlW)@ajW$w_888bSb)M{u@O zLBO;`z~Yi)kzyJR+HdGNWHT0V&a5*kIruwdJ`j|OLCr7*iZF_17;~#`uTs*ZVbB7d z(fa6W)d80#?IcHo0P4mfpvpQU88@ziwI%wB0pmOv^@T>GoC~n>z~l^5eZk^Fy-`I? z;|URxg`55`ip)e}9!xXUf7D5Vx<#(80w|_!S4Sx>a!nP?@XgO8d3o~D?Q(z|C`@V zz-9b+SQGwu^=Zv=faEf*Taxl$!I^9pp+oYxtsdy*;~!eX=&j@TFZMS8_8Q&G`4#um zq&;TQd#fgOm{)YR%tOfurWNODovz5iNcA@S)|ENQO%>__e!In`M`%6aJT{rwSiyUd zA3mpT>4|_a*Pr0A|0L&Tr#r4fZ;a=Zx6Y$@!qemsuCa5Fj^7T?nZJo8rt{ z;PMLH1~biHP^$#ng1fZ5OtzqY>hPPzN+)pvver?aerG=uB9W&3!aX-mI@+=lD+MyDOhRzDF0!^z&YNs{h)Ag|pV-DNaiWSH`J8T!m70L49DR^UQJ| zi@#h=3j0J8?U9)d_A6+g;psq!a$xqH{(0{GU74=axk76KMT0+u+gskjX%7C=1eSIM zUxxT&9Z68iRxJGik%n&l#JG*^*O(h;58F`xu3zWE{>%#n~G-~!9OR0w+|B4wyDvHH=s!|x2L z`clXtg;jJkLof?eRR|d+ApV!@?=9UokGkJWJ)pCNSsXu3$-gf#oxxTx6etJ};pr2P z+}1qBN^txZW;@SkA@OtzJ6h+c!M_+h-F`-At?D zW1!|^-rSnB%3tZn0P?HPy3!fuZkR3|C#vN=stOp(RSM)75x=e~=~D=)(O)tzN} z_ViKsbSu(K*Z$=sQFm^n))1l5x*-?Dq6;F3wE4QlYcY2&FJlA>P3lvVDy52$(Jk3v zvm%yb8~p-jwa>L;QQ4*d5eu6Fk#+WWZebdJJaWY>yqYQ_qgrYriOAp2G*+t4h(NR6 zf~%jShGG{7dr9i-sbXwe9OFHVU3;ei)O?_X+F@sVpkR(u7ZxWHS0iRZ&Vs8nfrE7KB;*%6^0=^sqw<4cJ>i@PL_(}v=7MA2FA`-lL>yD#bfkwLn?x}K>=J4= z-XbeZxRk!}?b=Mur#t+y(};juLV&9KDT=H{R%R)T`G<5u{FTnfjt z*k~ z77P(}`7Mr(WI@588KyA-K|&UGttDzt4#0{tCv|Ajg8t6F)VeD&ms))6W%jS z@IOvPMYgSV(LMRF&89s_G}qSB5D1n=1GN6EEo9eN3f6yI#PIQJ+h(bo(eVIh7yg4T{diRwWl2xreox@QM~4)4QdH1 zLvhw~PFROdpVojP+orvxML3zU=UUwMi7e`ff6@>knVGT8xEjT_fg%PrFoF5Fh-`$b ziB38RbpjY{i)L{#&6*o#PI#^B+Y?3k{N>n({m_hBzZO&U zCAYxC7h$r`QB1Hd)PEbs4H`{Nr0|p%)41Xx{1W~O$OMYmz^3LZFu|LgB}Nm6*B61i z3e&)7D%jLUkahHsu)c4aWE7EMHMV$MEil?{+f;a*WJG;XnJ%Kvqk_z3hSLacEuRWNDJ#Ok3*%(*&PH6lQF%gFfKx}F*$NU5E8&HQ27(;bi|s=8~4G>i7dCsFEy- zDCT+Wkw+=}=B~*K^>@^Q*rx0a^TDO~(I;~?&z1b<>M?|i*^t<($n^D%T2c;0NF0rU5Bdl126vE>oI`tn3&*3)ULXDjaa z0NZHIKgEWBG{P#+F|T%Di_`1oYpezewXIGXzv4Ot2UI3t_dHDsXvrdJnTS6}LE*JXs)Z_*IX z@=e0gJyC3%y1P;wt;NX*cS+nCFZ~Q|@7%5UE7KC%qV?INgY(x;wcd1bvH_m+xMXRY$-(%e-o_6bA( zZ1NM+GergGcQ;8hj9RZ8jS;HK0cgN|O=@G;gjoy|Ya`!v{4&q~?| zGpo`dGeC^A$`a0y z>VS4CAp}*gcD&<*K89bD7@&oW2Ew1s!I=+}^f!ojL5N{HO(N2=8D+NOLAS zIQ78XpdjL;=1CK#xM*39+wJtDhC|R%iG`{5MIxoPE9rH%Q>W7yw+hnLV+f&j9b)Yt zC8{QJOk4?VMGTg6hiXTL*Y%ks>l*f^zD01XH{c*uX@7F8%L!+m|C# z?YVQ({)Ugz@v8UJ&zox~9>m#@eR4iF(I?aS$+`3EH8KK7On!`Bqy4Nmz@b#=XwKQorQ;R5=0eE3@UJ!rN?-fScHo5xQ-vG1M==xXblbg(I zDV6`PT@Qup@T4bGnb*kS{OsLY>g(2|C;sRFj^SR}-5__sD8d>z&o~IVD79=qf<7+| z5L)S8CMPs|nIL5C8GxtFJAcnw>i6)Oud!Lkom?g|YA_#J2z`9O-Zlu>6)wUY)#4^> z(Eq-YSlYeQD+uFR0e~nKu=e<*XWa=wvaSljb7{s+|MM(SRV>Vth)=IiDLFHrz-Psi zXqu1WW<2rZxnyF+n}%P@v5=}Ruf0vGWznLjRhBJW1kSgl&akoME7=N=YeT0<8NN>K 
zdlq9TVmpMrE+S?cBFpH{!!~sJeaI&Jm4HQX_t(Qb|ESy`_qRVJJ%M6ts=YI2)R;($ zK%}v=G)A1)4E@QLz=VRdzV0j6i8ZIJ3F2W_n1j}(16yuD z=a}}ymrP-GDGo%ydg+&c3LzN<}xwyHK!&qk_61Gb;7a&jgSq4r3q7gwM2SrUeA6( zP_|k!f13Nk?Q@ZX5FfiD#|1A36BxY)gJDt<1YbXuW{)ji*9=4_*H-_g@>vo>M4KrB zA-y_i!nkpTld`IFh9YBEVQI`Zy#y!FP-zuLM9<(e{JA|6B=fJUU%Eq$wZ4pJKIffp zf$J&t+3bH8?#6PDwTMT5B_#ZG5K$CHrY_ON=1lGW5{&3%K=+?DbEs&D@VLG$o$5t) z(k>(YW-2zR)^&aH(@@kj^{}RfFa4LE#$Blzy!W8z;V8&@05`&r4V%Xd`@O1fzx|L> zc4O|jMyz{*==N~g*_fme=T#{%*iDW*? zv0`wk8;aFZHx-CwT@-0Bq$Sk2J+x!Mh{TlsjSjDJoM@!f?Ld>@z6{m0%(8`wGkiRa zI^FCaisW$nkR>gs>|9sl#V{jPk9Hypwc}|R*fupq$hm}2-ar?mcl_}LBGa=}-|2XO z$Ko?z5UhzdJbc@CVsxa`(9v>dmr-|TS6v#O?nJJlBhmqeVq93uwJ|k)QwtqbT=xRr zcT`0X1U!eGD7pvSZuJ?T`TfmSogy)obIX}LMCY>Cet$xX#S7H%J2Z$WmOz`p(P6EL zHDdS+(2+09lac;pp{p!is?$FwOAM_vr?FexZX?ViKZ(67x8@ui0}USv|CH+`E_2(e?%~nbX1bgRK#z8)ZvxkC_)Vuy&e6i zL{#q^In$gdk|M$SoQ~?=eS=HqC8(+} zpmyu7S#WdD5}edg?OrmAgS7=Ut}&o2su^Ocx7ikk?(j$jx7(O0w~fylRT%oTjXO|Z zI$_O{VKbndvP%UwCsCoEVKj^8P!Wi;YZI6)-E%J8N1l0$H$E87dACi?8jZX8?RJo0 zR0}HIeakKC&pIVKnlOYyU%*ilPR0qt!nGLUoyWHb#99Mo2dZ;N*dl@8-Qr;7_Pm4J zf6tQnUT~R4ni_R93tmL!gCCuV-P-B*Jt=3oSelEd%@E)43G>pYY&4!~n&9G}0Ay(J z!Mh>(!3XowiTAdEX%(1IGk;> z)s84J%R!5-_16OSHx{xP#?Sk5?QIkINzLouzqS@ANBcqLoGjOI>>-cG@XwoUK@jqB zF;v9W6;h|`oD=Tw7V%L$G;8ASbxo{EtYffWLWcIaXaHb!m2*w=RYFg2)QnjwqgLER z0HrQ=ZP-VGgg;S#wsN5@t&u`=<&Zq{elakupC!#=5j-xM5xRC|0cvv#ZP@GZ(2ZKli{PJU3&VSRO~J7>!SH)!>Z4AT29Gz zg*1)X;nXU|VyAM(gD07!j0kvFm_Y+=uhh*CG_DbW zslNYK`gB+YUE@y5M5A$uVJHe%@z7N>1FqWuVK=)m>I&W}B_bL-t%?9)FdG`S?`AyM z^s$GdwaW)4+o)Kz5D6RTcC~%d(7EOZ@MXoskpwYT@M>=D!1Ni>5fNc&wPHS-!f8|+ z&MCpTIjUP63O#5bY<^%MEVaYINc^8$UtoiREX9cV;!q;fjrgX+UR0;U?hpvn@9i=c z?k%CaR$SdjSx6PGOGqX%^H7=>`~pqc$>*C*Z(-+3EeDeZEDge!oyTA4?6g$Sx#Wp4 zggKx{^DcWh1Sf7QR|bPlprNmI`o7m$WFoUpV5#bjplEuX?D`XnW|Z{+-)<#zd=J#i z<3|dW4P}g#)S}(Ks@=|)L5(uI@BJzn*Q1+Hzdy6?+J=U`o!3+WoX(krg*>!<@xOO+ zpPS+p^`l#j}7fp9J>w3g+swDTTb3A)c+Go(nX{ zdxiK{qDOXv+a~?XaWrCZSJaj2bWSbp1x$pG^*mn={>z`rkhhEif-q|D4KGw?)xKyv zrlnqH$)Z)=X#u@WN{&i@)6;jSrfh1Tn3kC-!o4BuKMhBTdH|N-Hxlj5j%s%a3;CTR zXEyHa{^oC3mo9sWdSp$^>dV~Isf0$aQWkg0oFbC8vkn+eb}i+5chEOqowG@K$mlN# z9}`*qK=fd2Vj(&@VN#FH-wMnY*ie5_KaK-$OzI_TB-g4^9&<`>Q-&2(k%J+ZvecQB zX+``u9W(liAJ=XyE6?CL@{Bo4D)_86Y7=2-ACI(9Pe7q{{I#RTd1p>-bGpkY#LPCv3x(I!b0}hW}kpd5NLnXThA|3LBZC|8`KS*qloYp7fkpQH+D;y>qg^z3Ou!H;et*~?g+a%M_n?;^i2U9 z6rDaLOW5y$hjw! zSOe3E-K6k^=(u_ja%O+Fii?x;)&B3Kp)D?+dYD$jm+4DS!>+E(buK{oZuaC-d<@r; zxlXe_u8+h^%E-~%0z2fqJU!&R)RJ5j))aolnoY{RLii+WQ2f3_qkct95)g+_L;>2Y z45IA>x=aZoN2&}5} zY0SidH>PU4r!#Tk?K0Gh(tPhG`c$g=i~{`%DwhOjD=7=yrb@i8GjSuycI(vC+SG&| zRZ+?dc@WUoT7dqa*@6$fM5kuz%MK!LsK%(*4H|~Sv1`-f07bow^TWPFmf9b50+Qys0s|01KNRXMp~tChxX&G&UdGg3-upb{!i&^8+@P9O4}6=5kqcv&!SHPX zn4J|&iKo$ICQ^^lmd?A{_^puK_@E2Y6$0!Q$`w0$q(ux{<9T1MZuPOCdU0A__ z2Cu(co}Adpg%QtOo2qG}a zH`+>>Fg26`*I-cp0v96MSTaWcK@!@75ui=TiSN4BpY~)E#->}r^R;&}qpdqDa>WlI z;+oGKnwZSqulyIWa6h#O7c)mVNCljK4W#$GU$Lhuu*-b7Q{Jf?b-{l|jmAXql(My_ zluw9!VG3)ygzvvCMb_=rM1s7(qGT-<8*|O7O6e+x69?VUQcr4@Tr)Hw{ovlp<1=p(&>t8D$oAOaCQhZ}JCsH`Jzmszd@ zY8s2ZRhWV!6}YtYPnnG{?cXxnzk%gMo8JJ5(1vYLuHuZ1%S?T!(ftF7&{>J>M`$XT zQ?94eYgN{4o5~S9!B`l8-(&o$5em*J#6$Rex-b>(DiH~jKz08W!$MJ1!-ui>zGYa7 zU9e1tVT2qi9~nH&<@0grjWWX+^|%@u>yr*)#lX9bwve4cby$)tv@YotF9;Vf4^qX( zDIXcdkpqPEjx{`|@^9yB+^fa$UbW8#%zgs&sy@RrSg?iISi$VMy=D$4B$+W>uQ? 
zODG(EW3ASFHT`6F>{S1Qu1y*g6b(@Kg8L?W=u}_KSNlrDl%}@B8y^jZ3-A(-0Sp%; zxN@LdqVsRy6!H(IgJaIe)B*mDlgGgDNn-SzFLC1g%(^7UF(rQI+AYtu66}*^C%^RW z(OAoK^%W9MuU(X(3#X&3Y-^S4lA5f*a8SyF`(W(`c_sy1;J z<>Zw$#w?SdfPKXN-jkJq1;lUhof`v{*PXsqu&p6jCQh`@?g=8~Ps7s*uM0{X(K&Z{ zO2IQp6~URl%}dt;)W{ZU_k3CotXnSST5ud>; z;H-qLy9w?a*4`;@>g4&(zw`;Yfu&jP`))!^2;c%Wc&V{J7Ek%duIoV^rSW<~Ge7n^ zX`3ESNE_wS2F{yYh+yC*GQAy+jkE1nZ;vJd-W$>fzY|jOGSTpPF4klR z!itI<;)?nyjhPhV7qcZB8Q%4ZQkQW4g(cKtcpf8~-nBAew<_npPCq>6oHtR@Ka~uLaT8Ur0P` zQPf|c4UBAnQDeV`Gxgc@yAUEZN05kir`YYG+zt!7T5fU`Ib#N(O z<~k5SDi@_x^`vLB@ZU>Ool(#EvuFU!G#&95Psz>1+pM|9y3*KAUeh0lQqNFfzB0k@ z^M8JugM4MgejsW-8;g=c<>nrKV^ocG{NegfIrrdE4t1ATzHJixcWA^x_RW~IV$Q)S%I ziZyk)>^OE?DJjCBq2&RuD{*nKmhQjl`^chOm@dq_Rbq7%2X=2#mNJ%4IZfRKT=MZ% zzhssVL?X>?5uGAC^+jIb(J)B8(J_=MEafDG9F&KK9yIvJx&ca64f#vwEFZ%9ie0Zl zw%TJJ8n2-lVLLI`9owgH1RR)aL=kW`vpSTH54RFM2MOLU<&^) zzTP@2j;?tZO@d2scXxLuXo9=DySuv&5FEl_A;H}V8W>7MDmdRKMTQ%}{dZm2?6c^KbKAD_`H!BE*6%^8z63nCC73U}R|G|W75%~+t! zKa#e+D$`aU#y8>2L~KaO+8MWk;?bt>3V1yHJ#l=DP5WOY_?7nfrl!~TbxunxgR!f> zGablQggckpi`c~2?oL(z5Mm!+rGqv!$zpMZ0#MzKD7dw@8V;lEsK4X1%@?tYt%d*j zg|<$_4%u3Z&SD74T8Z%aX@T&UH%i3#3%n0603)J$X#aOZpg=EX$zZ#gs~Rq?PxxsKr}wanjRs*v=Ut!=o@tl*kBR3V*kle0%l%j0K*>Hwme zqKCfeXOA5(K@UMuhara|K_zQMgsCc07Au4#C;z()Wd-XPYJ|hu*;v6VFhE>smO|o_xrjdRMpWZhplt=KIat!_h8Z zA*|~kR`dd6+*_1AF+*OFjUTYD0(@5+8XGg&bjk!|y$^KXUt2w;J>^dF&|!M@hh%!2 zwhWBnHV6T)c9Au>CL{H0_g3QjBIEk}2hO+Dm`7TSN0hhi@sDQk$rUSOwIk>ynjN{Y=rtrm;H=?zFPEE)$F+t*HW7MF}_3XhDG1M zn!u)m8WJA>t?W=IEzrFb)G&NOs(N%tcdhpvjJl7n(d&ijE&Tt-bWcBI1Q z(J>6X37v=>d-wJam3>-U{j_eir|?l8irHn>1sj%H0?a2DGNAAk5p3&<%czVq8I+S8 z*T+LBLFJZ@G=pbN2Tb*&t13grhGGTS_T1nrJ4L8fkYtmaFzbsn#FAzePga%N`?8n) z70OMy`p=_f^hJIf=QLZ!?(F|pvPUG_B-kYtfzaWpO5o-mkT;;gDs(U;?1m$q^*-qF z<~k1O^m;#h(aMGr5M{{Jk7Ky<6;M1O$y-$=u%xP$i!VJK zY$mk#;2y;Y1w_(e#!5JRia7Hzm&eqI!NEOsnlY7~Xn}Arw-JV&P=J}`Ub%w|uO0dk zzb2*Eo(&Y-9d4$6;aJu`_JYe-GZyucH^-vC>OPH`Ko>XPVe+~!`UjI^wtrHsDTS6< zzr4{dL{xhKK&+o#Yijy0F?G!~S#S0;Y8QG@nG-07k>!{4QDe>sC4g^ree;!-4_FiE z<0v3{b!I|?=@v=}F$Go4@5ppdI=w9#~+xj}rm z!c|RmH|x5|H3?25_X>L_EcrWK<)oChkTDDF@rfsMD5N+JJu?l@UDRRL$Qp5nHrZu6#!lqd=gI$^(? zl&JLA8W(n(Y@ndqE(uPdf<2DOTvxznM2E}O2y@`aT%!ZM7&b4czUA8qO6oTbQ}xe} z9(f!Tx1}7W^Uw;zk4yNLT5YVg#lSO7@{XB2d)1-fkpcg2D)ybTqxVLVq?z#+|Xcd zK5J(oe1ToSwVtXy?$SMT2C0T^Y(yYlI&R&Gey4ZQtk8(Q;@yk9SFgQe5n8NO{;B7~ z!@0}V?LXri>9dQWRERnnlSM8pGgF%htK+s*mW_6kmm{x0&79*<1zpKfB{!s8(HDIF z1T_&>xfUJjM)mXePr*=XV`*=ZO|7{x+M=WSZ_j{tPw=M6_RMFqQw)=k|4@}dAvl8M z<)|xelwgIZwoOUHy(mOVXf1<_eEff_zCH)d%0|%tSmwR)s35Qz{eQtj$rtjYC-Z?C zW;t(B2k+ceTmFBF__W%_>y4MGuzmAsoQ!qIB3W^sEw|NOQ)d&(_8Qrjtb`cdPm zZQ6x_+Ogym&9PN|=T~tC=e1_kyz<)!#qjBk@u_8P*nD-_1ZfY(+ED8ZT3--aIqK9A z)Vh*&cl=HwTm*I)>FOoM%FQm#ut{3nOG^8cR$c+89A$~nM)h~CojZKgi!(5fD6a)6 zuZ1yh{&PC;{cJthcsiA6pap55Mb+TWln}3Zf2~lJXtE<+z@USTpS`Z#g|4FZ7%#g zVkwwVg7$@9_(1`q&Q%%X(=Q~+hTb!YUz~kpL{rB=Tu~g*;$qd&SBHm1H*_+8LIFXT z`)M)};P3^r#jU^02q6~iq7!!HYUcJ1D$)mA0f~oC_iFC&GOpmF%*3qKpoX2Z`klwA zedt*x0e)H0QJyctYCQyAk5N!6964M`wIk0auRlH4!Ji?! 
z@Za;{4tf1JbB@FT`xjr!UbqoLax<6K$e!&|W@bnV2%4)rvMu9GYWrHQ%3uE&Bk#%r zGl<;4iEX|6Yn-cD9AB~kzCUFw#$Jh<@#xTY~Tb2 z_Wsvl8n792u7IM!AkIF=7Is*?$6|#Z|EbMKT$XFF*`T@mOfX}g0I2OD;nX3`H;1Ai zF$<|;S9qPiRiu-iB1eb&#=jPO@^4)EqS;k^!D2;_2^b5c!t&9LzTg^LW7gO6F`BXk z{#d94d40Z(E+3i__LXB?n0K3P4S1gQOB(HQ+qO@~u}LLH;Klb|JFueMOYor6>xoFS zfMW++K9zgbq0&#(IL_9rMCL*iwRNe$G-K$v1Uu-mTX?IU>?o8Vj=;Zot2bavnU`4Z z5mD4`74=|0*V}F~Fje5wkG2Y+} zs;+KVd}3oTLc?G1E0#-Cc*uIK{M9?4_nU=lkr*lIK52zTA zV7P*E${jiiN*caRvPWl;=_f-tEJ>66kehciPs>*CSkIv^_q0C{Wc^8~L^-EDkPf3#jAOpI0BE|+ zqiEf~md!cB$53WJ#6Dtgy-;-gR;lK@od3j-HT#=CG*yTO68OytOv-9L?~9{b*px=GSm+pO z)6-7EQf_}w^EOq@rp1w@JJdTU*K-^|hWHl5IDdikn1lSPYVIhOe(UG~ zQregjHcHZlz@b`(^&poS`#lPHSX5$bGx(DZI%0mqeu?g90fZHgTz2QQJ>ElFR2n(e|b zAodHr3e?_cNp@O;a!q_ZY-c8&ETa#nV<@>cm-;bEz)uzq^D@{GOrDwcm^ZiYr zyihmjo={DC9*m=qF_?KbTeZX4R)3U}^0uxm{IG2hbt9nq&cZLO9ua%k8mB#2!#?%;+*$ZJKqpMh}voMVdA zd@Sgph+4g;Q|fZLyF;FDX29XrK;^AvLqf#i24Je8N~~;)$JHCq`ay=L!%4iraEyj} z%~p&xPa5YT_UU4|9q>4lGqzZfqd@MCp_|&A25tu&b|$@7sJH&56Mb;&mXFBt!wj`p z0yf-lL2bwj@NQZOBlT9xa2Pq91il2YiBclOcNT6J-J8R}6-k2)$8GdHl8V~nPIz!@ z7XD6VzcwUQDl^v|7)J%|qLxds^Xr`0>U(dXYfP9>SF}TcK!*N=U`Ei*26q+NRJ9?; zMWrx8)w}rXFQE{Tk~FStXNX9P{kor`>PX#Jk595MpH9`Vkk1S+aTFT7Q6~N3jpyH9 z$lFsc&flKLrY@&id!Aoh-d^^Ul#6?cnZsQI zG1;3K1S-rT*$6L@H{73uE>qWesn^pn&IvBNBOiKWlW9TpvVm-r-5d}Hq{~FifzMT_ zKOSO1q?ee?J1J7qkYgkQ-phILP3ih)MO8tC{#sbvD&%}S)JmihKXOi~p8|!g@%HR< z;djJ8r0+k}2idz9;5uHvnhBNQ_-U$zw2Af}$i|Ol^H`Fe zXUC0|&8<#MYtPHPk-?0Qj+j+m^#ifO9bd1Ad0BSJ`B}CKMAU$Ms6uBkDfC;#A}Uqm zqL*ZGgb-2dZRvWU-=90Qe+2$Ov4)6pF+B)que%qgTR-D1J9kQtB&y3sbzhfprlqbj zSZUgJj`S^N6ke0f-!M>?C}2vAR&oVD?CO{Fe~-05Ks5nZEdSk?-~tF;aIm411Bo+d>cP z)FDXKdn(x6kq_T=*9ok*d*kSRQ)5wdQ{NLgt}3`UeVK7ZciJW8k#|Zluy^}DhDlBR z$v_Fi9ZnWe)zpM`Y1Ms}KnGLM-tpt*owVUPhZ7_B{S+BxZwlcRBL)^IHEE!}6pY3cSW{!sn&n>%Ru<9mgGx7uy$05>d! 
z#LzR~jrf=27uk)sTynavHTRh4?Pub<)MUXJ;R{VvJvh_Hw~6x&V&=@q*J5Y%2k2{0 z()J)iv2*Rgb2I7(AhBE6q(X8|;mQsl7DOwFVEn*Awv&-^-Hx^KX!U=`LjPK*wnV6f zTKVFM)mq<2??!CLGT`KBg^hs5a_wvp{)-ObOefqw=le7zdG_AgAZmtW^g2@?e(R-5 zsXN8WjwLQ^)5rH~&+R&^Mj1Zw)m^VmoQF5KyKb;ZUHmy8y~ej)KZL&s7}tb`n_B1% zc+Cz9E^r=ujwW|66ui3PtHgP(pnt(nM#Y|vtJSqfO3(GO#HNbx$yWa53z*l+Kg4Ee zi`$2%@(-!!nj0|rQO<_1+IUINj$3tInnIZLx8r$L8|pflT(en+f8D7a9tv|Oc4v=E&`W(U&whioTl)p0aC30GZn9bkY?49| z8jT%kxIZv!;hx)kJ6s>fhfI{TAaY5+qB0@Rtb#TpD9Ndet?B9vU_zrQiigpYz@=}{ zy8fK_D0q$py6)?K_$DP@)f7gX-V<(xo6IRe%SuB=Jr=L!A*coSK5hL%9#A0t zhssVGEBz6Qg-!s>tXBbqYO4;j`k*dCsx{8(Af&^v6tb8vDu5e9Nq@5+3zXBu)N#a% zgU+i#5P19MpKqX#RTna=pUh+0!jPCm-38w#bP{J5){Ia-WfX>s8mkPRjJx-zTfv2N zo(UwWnLFQ>iYulQaEufWR{IHxT57s7+E)4I-nG8YJ_Y`CJ5}V~MmF}-^!U5=f*k!2 z_;D-7Pa?A+E{PJfPpt=an0z%G;Z^RIWo2Vp#O-DkR(mxz;JU?Q=nw`Us(H?CS$Q&U z#jxk&CKo9BUWOhr(W|c2Z2Su>1xb4O&X2VQjRHk8SL#@$V8%L=by7dFT&r0F9dmD6 zeYx27!>Z$DV2=m87+>@F#{c?oOJANMss(n$1Kjb3V@a;V3ZvF^+hi((^fK12GMZ8yGwFlfk%!A<>p!Mx)>T1zwfuES1Z3e3 z2T>XYTxa_sy&0+<)q5mua_9jFXJ1ENHTDBz$YBv`aN!V=@-ggLk9(WHbNC!070oxG zk?=wsHS9ow_N@PC(z(na>9#`kE=hYdfk+B8vGw|=4VIr4BrMxG)Cx=^Vz(CIfLI`1 z#$gsH#Ige(^d$o=q}?u<)|k-pwLQ>`nV-6WN3r~$H8EY-!gSM2;N<;W*os;HYV}}g zs@l4LzGA{erMV(WG@GNHwslzjBcDpaC8Y5w%))0`v<+x{d^tFkzi#@LoQPxGK$yS)ex>O}O6nEm@LwBGg!$9)=nciv;!|4oG{mWXR3@=wL~Mue1IHX_r+o z&UNL;%S5UxWP;Y+)W{3V0=JpDxy_z8TjnRLl-I8Vc8#`!+WVrb?j_2!=hDE&O#k(o z^{cmtj=h2pQ!p;G4c&UFgTbWdtOZBtYovqDtnBWMi_@G4n zyRX$$cX_XWJa2-KF2gX%6}ZGWAs$GVQJ5wQu`U3HZoT6?x7K)BAeJS^c!LrR+klW)=aZ&`1EE3ESO9%r4ML|89$|v3)pxVQPfy@Qo%>4ra+Oq-a zi1ws8XArt|!GqWG;T0d@`XG`(n-CYLSV-|ko{J9ADOhTM zz>mh{aWrxoi33sQlL2@e*pY8HZSG%l}V{x8BtJ=P2$V> zn^k&p*v7ycwdUoUCSq%xN&-*pzKcbs>Tizox5LmlxyFYOId_Tby=U<}#st6i4mHZY zlQ8Oag=*39 zJPk~?ps^)r0cIvfA)n-=XuY8?;VDKEy8ya)@dGNIvLcEbBB3scBs^YB9j<>&T}I*p zjvHe0HxEROZjVKL8L!-}sZ^|xy4gwh&aBvksCj*L2vue%zNi3+sT&dY=0HsVx#>9% znfh$VsGXzWX`=gzv;GtdQsjJt9ECpSS#l(Wz@y$i3)X`k(lbdj-UUiv_ZsMjbaClDcx-5w_hi4zDroqTm8vXXizpL?!RU#-OHsSBDQ z-@b(>ppIOvH;G%!`f5Xvt-xkgu{g;5A)}89G(xeg(p8e=BDF)JVs)WR(d7m6w~Cxf znIiZE`03PVAGW2z?h2uLFIBuvZ7-G$mvi9n*Z|-j^Edatv)6Y*6P+5%7WP-yZj*A( zLPK5BZ3ndWk~rC1*OEVa-tAuva?0%V(=L?J-qKaUhNZLdc|L55&4;o6*sUC;jH<-ReJycGZ4@>M8oWQ+Mf?z>Co`G@)@f_%W zkY4_5vv`1wa*o*8`z^vwvG~K=!V<-pSaKbEz8h^Z!djn8g zXI{Ugzk(sBF9@$6O7N7Zw2LzZcV?(uzB)rXWM{eXIodI8e#!zpsM#81tGYkGC1Tdf z$1VVTH6%MNKu=f@7;3Rhp~wGvwr`^Zwawj}kP$hgE6cHL`VOpv5k}R-#tV(HK7-b5 z;5!x%Fo~)T*gX#4MLF0`G@!*d2T8$y984QR_8%Rj`mSbj$)3T-LNxZYfn%Can1g9ya z52wIlom4M``F4%Yz<4!9Tqu4J8`nL*P@5LlY(yg;1uh#7Ib`Xw4kQiA-Jqcg8!nwh z86~zql7xZuP6CKBvc?}P~e>LAQ=_7z~Q`HW1rBNWrwy%(b(BwezT z0%m26_8t;)5vJatil7CQ)yJX!aWwN+_m75sukVL3+5|gcfK1exBWT24jPxclIT17Z zvB|*ou8!^8OsI+?mdW;bwe%V_-?f}x9luZw)f(%<5uq9ydnAs7(Q0m4TAhC1_A(> zkPrUt!#RRFhqKO@Af}0IX*9?9l~m4X+JhaHQ!KZEnVGG-d}C(R1WU({x_k~FZTD&& zt*Z~0$6DYWuh)&O^y6AT&&YRn#ec(dN!Ec^br8uJSh3J7tFI4ZT_-D+Oo8(L?jGyL za`WOyN5qi?`(g)=6Mg}>v!rGFU0qOY)cRSp^O%LWCQw*)JRm@*r@Hksf2fa9fuM^@ z$Pxd@;);UxEFBqE98Bzz?Z$~y5aEMXhD08{;^zpgrOab+o5C*m z%Fxm8FPH@BfjOr$rZ2Ni2KRQz<%iZFiZU{oM2v0*NoF`aG`MYK6n6NI{UvJ40@06S zFywx?Fcuz-*@;^n~0w;$^)_7!feHu88O~I ziMrFRgNJ&w&;f4C%^52-<4~0%*e;tgA*(X~0d6Quz+F|guEF85Itd|^MD@u{HShn# zxeR9vlzLu_MX@pa(0TBi^TPZ%XM15_J^(y|3Ps`D%>Q=pyR@<(wV1 zI%F}vPXtY2K;@+34i#urZd4m)3 zA9#c>t^1_%{C9tP-{j%CHs{k#-#(N~x_0OnJ(Ie8Zu)lp%vA!)&=Kj;c7jBQsbTY@ zLh~V}(-_|U!FE!PoeFjKSmzDg6S3&Ve#?Ux`tGQUMEvu#5cCSvEi^gBUIch}d`$5c ze4(jxunbG0_kIy3e$};qSQ@h>#MxX#SrCaOjyWc*%;rq}%)dzRe(7G)halTitQCGW zwXPX0$jyQQRsuI1%uvP(iW>f(Wb=sUWfChwJr9v^z~C+AB-Dj^EZho_uOBx z0OUmc`5)P;A5J{p3rrP&CF%6uXc{Ds6V-f7e+1087Sc(`4)D?t`uy?6=HUR7!!mVr 
z0s=7>EBqEz#~iu31L>xXCcv{f+@;mmU=m~{fKrv$dd^JE6+TX}qv`NXXtFSw!X^v1 zq&$r~e%eoUv$c?lscL~LdpWVddj?@e)Rx9wb=#QXB^&dj`Cce|( zpmy$|UdT9Soqz|j(jqF3P0h<}Nk)VWUYq1!O!*fUFp0j1#~zkjz5JtaKCdZD zT4S&=D)0Vyd+;52mD1p;6zxgh-^XWhAzsq5fk5F{*Fnz)TpAzJW7hJB*+>H=jLgXY ziTUx>vgo&`#>Dnz4L7fy&lxK!6{@X3-P7MO1R3aNq*#wPXw7S>|A*J6*~X?8%fc`r zxb%`xbWG!y#nP$09c>dHn47T&&mEI~3yn)ycy$!5EFdJ3S;*=am4%6+%b#O^ITV1< z`dTF|^)hRCR&Q_>sXI5@Ruv4m&D7)wIhWb=*WZu}4^_{35@Oj~EG+Giw0d%ZTsch@ zG5AmVVz=Qr`IK^`k4bOp!yb1Y>4L2qHckVIt{3$uAO2D2E#zOMdYDa@62*0=4&K5(yAC|5P1SgttOK_flSHhH_ zKhJM2m~h{)>o+_T1~`lbeQw9RnKb(qnWrdNZ(x1e;OmlPjuj=71HI{&p))*sA4mH1 z9t%QYJd!axMtZMjD;DoYN2GiVP8z!ghIoxq+kU^wd*Ent+NsbD~**DWo-`lTg?N&l0+Q|pG zQe8^}Qb2)c+Z=dzHEzPykGHJZF}qCfM&(GlyT*mx)(KRB{BWLN17>1&Oh~64uMbw% zl~5U>U|eM^BY36GWY^!Ciq5x{E1|(~lI6XAS0?I?UCE@fgdp4vc$re>u@u;xsj(*L zP|YCVaLM*mKi>e>XTG|=QTStC`Y5$urA7_D@N=n*{1+cx_}P>PB1Lx1>Vx$h5)o+t zgqSxmeAJRimr0oCm(@i8bihM4)nHwN3gE#|GSCdPfXXRYQ0#_NT0LD2X|m(pZ=wyP zQgvI`Z>}f4#n#kwX7)~Em7z#?a|=F1fBB0q(+pc|t;SmWCylRS*NdIUdDaPDu|n^e z|CIKY#ynG`ZoCGLo+4vTkG^Lr<*c`wCooSJ?BXDvaatK+|8)7XvcNu+d&_hXk=AMm z_Fck~z6zTkflbHezlo7c?mmv<3-J3$FrM>1Lix9almn{kg4*jVtrFSgqUH!1Z=|my zG~TVDO|v1rE%bE+Aj>B57Wq2zSPfWID(ba|jFhSBY+Ko&l=w}ure%20(UehaSk}Y* zecsZC(-IMYk^ya_ZKE#=L^|yuIp5X6u+@8A0;C-G%WToW)Tb={iM5_tu5&BtdlZy3 zaY2p0G5F8k1h0o9g!goKrFuud2)?*FSc*bDgS1z)6NvtcR|Duox9nGQmp&zS`AIm)bxN3r|`5m3Nrj1H} zqMd5#fVgs;_$Y>Cp@3`FK>dWAZv4uP+5K0-xkhm&?`)&UjtPT%^n61Kw|)weq^q_) zNRS^QnLskd_j${9U)_Na#LP+*v(YKb8c?B*`2-cTLhL_c9%)M=efj_7;id8h)lhF) zj{O1KEYZKTJ)-@m>vM|ME0kCc3oy*E$jQb%_(wkR;FG^*1#=pjB}6l{;i;%@GgD#w zc^~0|_P__TPBI|uR=5i46M2sw!C0H;(n1bOU#Q_rLbc_~=fCQVM<9^C_TbgOGPr5X zsWR`=PEXIS)1SzxL#qBF6^o=`nJE>6oHGzft2Yo?{Z&tKQG3D}SQs;etf#1($$M&s z+}jDQ!068K*F~d1)Iy`c(nHcx)J@e=T!i8jq_MIf$f&>FmAJG87D^<)nO83mY0R#N zqXrx~R{(}yp*guZAV`|>jPqFoI}@|8#peBz!YU4SCI%>SuAvCkypImBD6J#U7Se3L z9tq8Ug`Sk7-pInmubR9vjlAQgM4Q zqrWHl4R3eIz4W27*&0{Q_)p+!exrH-xLp*#|IQGCAazlR^}rjkf%^OWVqwmle07@| zd>!}l8mf=tKo+B_QV!(2I`=>(Tid{g4>9VUXun``4DZemGEg+71=XQ|VCU_@uxEai z*C?{5lh`OX^gobFTm&M&DtwN&=1-CMZTeW!%Na|&D1Zuq=lVtAylIRG(azDM-;72d zCuF3@vwK?RC(aM}B2+KU{1Icj>AM3+V9tK8T{f$O)aNs;w}Hbuy6+po08t8WCQjv( zS7W zHQD3F@{hDVI;xL39Fv<_$(j>~F*r`ACx7nRZ^zNB2sul(a|s$Ugv_H?Rkqlr5|z6F zX0MxjD6*^ingr`3rzsqe%ehSz- zviSbo?4Rs0a3WvT$>L@xMzzwXvmzj|^?<24Vw-GfT1xGb=|vOC?KXY=3B8=Wg2A)L z1|E<>PRx&=1r1yVnHoODMYw)Ffl>>T&6#TqN>{DU=CffMaY&xNktfbPs?^}7`8Le` zvSuF0hXS9oE4OT_Ed8-H+p^f8iP(Rr)>(I1OSr2rN^3uS|-4i1^d7nb)}diQA_6WqK0HKKTGv zznw!xr}x%}K$C}jFK#bPH>eugOo0gIxL@Sw8a@~m^k{+mPGWFIMNyQ92gxeEsAg%$ zLUTL?=_n{XUJl~JId8_Te*1}AY?s>i#>2|MjPfph^t`*V7eo_ty>I` zL$9T(IFZ7tSPUAm{6W!We$Q!NK&V~M#dK4<5u=GVCQjZ%n_BEyy>}wc>VW@!o799s zG9W zWphBQMYRnNQPLcGmvsh_9o*-1wK6-7M1_FWc)BCx<|?)9UDs9VF`bYDfaL!Wm+OR4 z^_WoU5Jt&e$xml3JWBaYG*jJ?F4M|xnQGj2)7-<1y=0FDk$A5cG-V3L27)mdb^1`K zyY1B`(_NeBh*Ji&96`t(AlX@aXJ*dh1BUO00jK3c9nE*xT^14U;TBs!k{rQ_#2J$@ zdW0=bmEMXG1&cU=(Is(}UY3I1JU=FD^Qf{0!Lm6;baR{+(vrmpbGiogP)F<;r0;+E z-7}D=hUNhAdfXDHje`V^+0@v2A@pSjX30e88oO=Z1b!(6WQTK%x$%xYW`a?L`cAQJ zQ?$lGKVp!y#-$~3xjA!_Bjb^FHpVkli&DCB%66JYU2#1+VsoG~mQ!Zc=p+fXhqo*j zmU3*U&HVVdUVzZ8`)E~>ql|6)REhBcNkBI?tcUXk)M?Hyr8@Q$$N{10u1+gPFQQTC$$H!|M-nRkU1|3BjTFX{_@GJq9 ztav}PJ@71C{o17(`Eu&v!cYd3aKpF`fZ#oumCR^G&LhPR_a zURK_ox7q23SYy7e=s3#!?npMXyp549J!u|hK2<6_Wwc+ScKA2lD&&CuZ|NBWFC=vU zikj=furX@Y90>FNsRglb4;qkH5wH82Ogl{f{O8o~v$5E|C@J5$aT{i>9yQ-(`gvl2 z{`heGs=2H^uZ^sAISTa<2%yM~I5=x1PT5Bz54%VJQ2IgcIcWVsaVr9}IIe*|$g1&` z%WzaNG=%;I^#g#Bbjg+K{8(Z??Z>gH*e5D 
zKhki=xCc(?_BavA#N{4k|0)N42nT>ybk5QlqH>bB7Tkws@yZN?bVc#?r9knq)i;7m8y}s4o`1!CSSvZWt>-2_#L|OJ;oh{|y%Ze@yu`JR8keFEyzQ*lY!Eh3zC8N*NcDDH&K5uorQ9}D3g~~K~n5L z5~;RYw-YUm-3VXz{ReNlH6~>dw;QM^%S+}v5?;eb5;gAcXod*IvB76f+#ySdy6{T| z2mE+uMo7}3k>HbNiJOG{7P-%am%g5aq`eL>B>M$~r}*5N$H^MR+N~)K^UNa}#P0FY zJCwDu9OI?ghWk$%E}P>ewl}@x6@HKD&t!nRl2_$SyHVFB8fKOV9NQa@@3X%xx8kSE zS`QL$)em_hV#hxvbc+$>$%KCF@X_was*ZwXXA4HiK2HWBckL+yr~ z1Q%JsTnw8PS)XUHUy9XXg0|gye&}+!HjDdWnSDs~rB_lcGcs^mKgI|W^1S=je2}-%7reyXy*_t>lF?~D;$Hon z_@u(p9?xSwV#nJ7qnrAf1)2+W8*fi5cvwboi|%E$Q4Ny^7d~dd43di0+1)V_;hkJ zqAQa_hpvEMAB{tBS`o84aRR0PCRwCU!S%2M{5qZ))ALwrlbv15R~z`QV9iYaV9w^3 zz$WvvS1f+B`WG}o>#P0j*#COsfk4o9_PPm-elX>b8-@ff}^S#h@AbEw*K0sMtS* z<~O}h1%V_Cg(~6iw75|Y(x^c^?+9Y%QUZ3!@b@(U6Gyg$9Q9cB=GlE#Y5^eu2Z@qP zQqMp&--qAgujbSbfp#+6!~(*z4YFoWMH_=Z_qQIUY;e^7M^H%}|GFgEizzB{Q#Rwl zc3D;j9BojyR0mK^_wWz_@2Z|=CvE-;ov%BJ@dDryhh&vF)VrN!>=bAEusntru$LI1 zJw6Z*G-rdFF*a=>o$&!}*6f5S|EDDc2{ublshmFE{LrAvxdx`ZP)BAznk9DrS7ne) zPjU{;yn#bw>pBx8{&v2iJ)7^RGTw2bkX}_Q){#>(>#z5bTAa(7cL2RjaDi^2+rSuj-0{%Sam+ zlV(ZldOGkdl_JIpHs+L8BK>diR9pEw(!FYdj1a6Dx^3pqh<&P0q}^B_qBmCPH!OH4 zIQq^nv@k4V*okw!n(BKIT?S|oZ?YTZ1rB%;3y*SdA=aY&ox|`2wUPr@MnwNmvgLEnHS!wDDz1z zf6D;9BTUZAvdFY+SF>2sdt=9O3bDP}))Wxgww?Wq<$lt2m zg#)r4?;wtJKgysEc)OyD!|69-g*Pl;`Xempb*Dx1XAjiGfJDa0vcRMp}Auj-?})A80mCrz2n zqp7AF?lU{*Ax}Nf*RQy|P4iQiKKj{YKp76Gt@~X46BGZTGn%hq^5@1zm7#sq1eV~b ze1bLqnLhLBlL>l@m>BXA1kDs>>P#V3w9YbF|7O66thyf%G_6mmD6e}}Fy*?n5l}Hu zeEV2i1XieR6hNx&a>P7Hw8Ggg@1x|*kCk>`V6#v|ZDAiwb0385jX0Shw&nRsZ?u?U z1m)h5i~IWGjWjW}PpGh<8?Lug;?&21w@xL~-_?htJkf9+nwK^FDXk5s1OD&VguNFm zR}$U_+>aIj2Jkc^90NF+4tU+Lgtu3VtR!jUf3vL^M>4^yah1p+;%0gw%V%Le6KhEM(!h+|&ulGa*1_YqUHf!Lb}a?XrwA6EmVY zu2sLw8FyDLg1h=sh2JrQ)Mgtap(GZ66cQbsx?HqLJ1P!8twgAE&VbSgCWEtlc<4_@ zJbv2Gg>8(cb}Ud+DWad%jlLp$aWaqYyy0>{ zvvYLyAbd+CX5UkdA~Hv_<895t8I~y02H|}FouX8%%|}MM#4~O0Jo6<={dn`?!C*G0 z!2)#C!-B+UjRD-b z*s{utu_~O_T{xna@~bCDOquxy%!=$!w6g_BOgvMwu=a@Subsbib!q)5OS_=pg=HY@(b1H^sARAJ`5>c*|J^%tu|gyv zqmla37jYpFtECBzeul0Sz<5RW&+zIumcI|9Z_?YmDV9pxG7jb^8RPO z?`FO)%PlBv9+|(S@7aH_b(N3tRJ-%Wex7Hh+qFb53dX!zYnXBVo3_Wiz7;IUyz=4` zK;R>e7?9Q+WXEd*(jubrHneLo&h(d76owkHQ~8hW)x1AEd={zAFWsa*X@!*1*`qBz z*o)t4n@Gmu4jq0`73Q_kCNJ)`%2O^gzfGw!zopH~cM8O;&<(=X+ok!cRwPoYZSqvU zI(~l6`dg|%I6R^+4B8<>b?LmSyUmSRn|y3}$RDe;cs`fF-sP6@h(`JI$M=8>Z4)?& zs%`+LID>MG_H}?6k|b6swbUXg+u*7pZ4i1Y6_p}xUkBhvJroUi$I6p1lX8zWa0o3K zbhmaBa*wp_IcZ)NfD%JdP)|aw8aEf?f7XqUj9qz-;qjoMZGxhHa(3BDbq*XtMDK`T z^jzd=q%%f`zFjtQDCpBm5%*;orp-gA0lPUhxcwhv)w(Z9&DK1R-oZLg#2FkC09}D+ z6Ng)W9@+-#>0lpu=;ad2w1i@_uz1hHvg!go`!Z`*-t;AI86tVkF{VhYMgRe=O?*Z- zZex`ecL&fsLTgaNk)wLgu@1Rn(Uhi=gi+fPHzEPLRYv+l7b+QUGcwQP7Ka=E{)K!k zQ&oAV;bKJFpbkJ<(y*A#l9{LJ2a?cpEgQ@?>Pe$WVpmq&JBHH**hbRLYs&D|mk@lc zQ1CI^{#aU5IC%cc+cYa^p&6#jkaD<5$o+FHizYcpj{;!kV|LQn{=*jPBn-t5h*0bK z)0j-~xEXUrE(K(hPz`up=$acwUMNv-5q$FNaF$vu%IGDiAHQ_aCM+kJ-Q}A9TyHPG zMVpP?z+tA`v8O=_Ou4y@t}wf!Tvw~}9$=ZC64Q4^9kV7t-k6MdwqB(CdR974l~^wl zpHL@icp#YlDM478Hh`6LFE_I)OQW(tpC;Fj{Ii)ru}gu*2wahowpfr1`LB{x;>R1K z-I@EFA_OMCA1dN@avVl|-*2j##bV=cr{5fFGaN2#(@b8gsbUB?-RcHRi@uY_QMJg0 z$(vXIX{A_~WN$y2wP$t2qW;mR`N#BcW;O+;8Vknbv(q_fi`g=+GAhX^i4?T{1)1~J zyBX{H``pc56yF~sJQYb++vb7LmAj$>r*#gdMUIcnpUTXw%i0>Lg(Y!4im58Ii2=x6 z9BZ9+;-un7xRo$cbPHx#Sv8!~4FptWt%_`MJ)x5LDG39o{h|rKRF6~vE&N|_yZ{%a ze3leZ{EAG7=Gin{RcU1?(11^oD-v4EN9AtSb(2`k_D5j&RY@*d8QG=~l$+NO(vPUh zI8%lKC0S(_>)tMMb8Ntis6spN}{ zHm3|h^KnM~dstGgU=gMgL|ZfQ)+JErJCu7MV>r36M&v*9hC&LRw{c3Bc!nc__o&&% zm2`wV`d)AniYe036Li+w@Qj^|sT7t)Fb&4?k_dIE6aze-pO|1wDy?D5a*!lV97T?) 
zKHwy8Ff%4f=?IfkwpAz8O6IEMP@RiE44DizI#f2Jxxr7ANCkfDI-kq5^Q^EE&vfMz zSID;E9rw`qxId0BC(Nc$sKfi`81{zleHU}s`obr2tUp7rH`TEI?$aSm`w?TZ>LfF1 zRwva+C$A~~7Sxtx>{QFR?FKG@{m4RGeA?Skd$;w7 z;e^0yt|Mccf<(lVAam?bli`G5#2Ngh9cBP6bNe$pqYuh&yumX(z8tpR&SoxiMH}>m zpmQ}MLbA#=2iX}d%R#jM=P5sw5WnQDJYcjZ>GU9laqU2?F!2Z5SNV^khWA}5qJ}Ep z^3a65hP%=Ik=Sl0+fBA3VCp?d-5`YcMg4!z+9;{sGNIr-KTMt1T5-Cb)(wdw&@;rle$-XSwAI6xp zKOiS+gcwK-{6SaCN-^>G?LcjokUew{s4-1O)d z5Hn}8-NvKz&Qq1r+?R!OETVwCV}e>jUHK zI6f&Y19A`J4HVN;)jsh^$A#8Zf616(R2XDnc6x*QqnTr`q#q!Qc9#SQ(o{;=UAQg@ zKl>t6vus1kjAjKvw^;9!uR?BXC5+}L$gMVKR774$P2z(Yv9q2bD2q9;cCpCmM={Ll zj|zf!aX#lf4_mV5@3lQWCMGLfXH4}arV~MlE|>vh4?Bw4cF4Xu)bmr#u6#n)lu$Qe zX8t@ghK~n0eKuj`r(yY=%lli6-A%u9$rcKs&j@w~ zR)@C#fEn%U`)bRuhsK`S!~HQ$l)eVm4L(VCv%0pr(WjC6*B6oaDvO`3%VRJHTLSiP z*CR}UnYFg@$Ofj8WBgVjtvT9j&0rsixW%MbA*tCuWB6)^s>(f{!B?4D=0zvu)y>H2 z>my5fA~SCOzCaV5KOiQ(&AUP|J5iOjMoYAH!T=^~aaPm)_+!rQU4m_l`Y^<$Sn`?&^xBpvK&xUb7y3Is?K09r8z_Z0TECc?FTJNo#qGEZaU=; zti!CEb?(r-p?qGllE2(3*R08h^|b=d24Ha+zA>v~Zp9^s$>9W7J*!2nI=xxp3iOE- zsLOIFPeVu_X<=gYi3YU7Z)AB6SgKH0skB|>j9wV~je&Qh`OM$1BG%hX5nY){eat9`GFol>^cA|^RtE%dG z!N5jNyw?U#NO0w9k>aDy38KT?1=KpGX{=Pnuu=p{rfbVxmKe7<%0*5r=yN3uY0F(S zBJLXgE7D6c=oCV64jw(4y4G0G!n2Vp3xwLxmWQRGRv)Yqn=ec*g_tHQF{yjaCQ41| zug?TsjE_Mmb}tvlk$w^X*R}Do)SCreYQtDztCSq~y!X~Vr#SZW&o+@nC<3EI0<~=B zwrZ#25WW_x9)6!Dhp`@qV=6U`D-WTLs9rbIfB6yFNrYC}4|P-qMP&y4#`MCMk-jYs z%?koEAv%s7nMkIJdg=?pZh?`ad(Tg`bbj;C%y5|tsFe9e;X;|kHl~$eBcYci?iK6U zZyuKw0lmP5)CmU2!*W_qaXao>HaqT9G5L?BYu!pksXISY7l3OC{Axib?mrq)me<)ChLnYjqg{94vGf=fmJlzpWIlG#T)^xK|`myeH_&%a&ZVHz= zwcAykT&gZRPr26?^b5<9#C?95hJ07wW zY>@V`tjvQFqzav4w|7xUggCgut}m`#Y^xukM2;p@z}Bl-y%(?Uw0#!#1Z%DCNnyu{ zK!>OXP@rAZYXQ-=tdN^AJ&N}8>|_cdCa=&|)M!zTA{dzXn=`reoks@KA3kK=iqu}T z3BTf5`DZhK-NPYyj)|RtUW%JZ{+BzuXDm9#^UI__^$> zusL3*qw{^h?W_F>w`wF7<~@!tZ<8)qkbo|MNjyMO#_6up&dd6erqDX_p03cYt~7D4 z>|TPH$O{|nFLfK`Ln4}%Qkuw?T;tO*B{BAPt>Nu#wHby8g(rfa%w_&!kaek`jWe-@ z6VEuLXa^Rk1TJ-N)Mvp1+)9-yd-?V4AvRB;O}iq}Ms-1jUSJ-=$Dttq!0IEC=uq8a zN*Eqy{ElPvm#~n?=m@ry)=9FvL=+6G;y?#WIc@L<+y|}UTC>dSBq#G(Dqgr%SXpE8 zu1Z;S~3m4h@RbIe87!X(c8y&7v9%eRI&pLfaLNis& zMnuYv@Egtm0`gWZp@fMyPI}*l+LI#z{?tRXx>Rye#6nNiCE)mMi~q_CqsJxVmM(Zx zdfdElrc>x5IoxgXJS)M)(eT`%;eD$MQnmTk&d=*9t#qCAQ4iYLYV$XuyO_i-L_>wO z$XSO)X}^sKANu&NiTO`X!z30w!Pp`|vV0<=exvb)FrCXTV#hyrnJT*Jy=kAYrM)$! 
zIqcoDuG_+{k~$1DzV!2~Ewq>8X|xP)ySCwEe`QT@K+nHM24f)ry{@ z33cYY+9fL_AG zALy09VdP+g)1(%xyOMBz>)WU_4v3hlN;#q=3ixoY1QyAF+Q#Ydz0CaQhJ_L{^7*rd zq}Mkk9t!Wplp^bz6be&|8pqL=b+8yQkv7VB+P!9^MD)F8G(~($Ow*K_$Y@|f7^R4^ z6vTg-=C1_J+F`xE(YCU+-Ip-Q({YVjDi`yLJzUumv@g`VTyLHaiZ~JTs<;(HNf+pv zBX5E28Y4gQ0=FQAdT)dgyrUo~;LqY?@9oOHzy*dExxy7P^48NZ&;*-+pVv- zdA1#tNyvydC$ycJU!j(o*U2-(RqB=62Kv!md&PgOK5?*jExO%gx->_lBN$h0Q0U#Z z&3opr!vW}Y^H4GlST7843Jb8MK`GQ0yHj_AB3ZBY^%t@2 zVq`M4u^gy*EuRIg9H`xiJLp|4a2b_Sq23GTD&WjTx81PwxsC5SJ(`7NwV7A+OY-_z z$hhLzxT+9Q7rU@9hf&6s^%JV^Bv|;#sssK&P2iv;6JT#RH!rZ;pXzB`CMQVxtV=?l zcr&u1)Yx2`kVqfev2HsHgL;DHb9-_HGLAsw7v06?H;S10uM8!GfO`?<*iK>dxHmZS zbDQ(yU$=ASYseDe)^Jd}36q(O&g4i0NDX!7^*1455|Pr`TBo*c)CU$Fsw5n{jFVcw zhzNrAuMH>%`Q|=K0FBl!(E}Xo@M(3hb|GK2v4VCsMyk(wWMJE{ z&`mHdL;hFpS1eQ{QnC|EPw`v-d)QChK4775jVI`a&}U@{S6yRtS@DLgB(N}%rw+VN zi<-Fv{yl z1r_!%s`GiJi%=t3Ci!i%y9%zKfhXcmXrVbP2KF|5b*ofdX8oOoiLayNWu|aV;t<9;3a^!-`cR2bV}cR5Tn3QV0%Sux!m<^A0wbVCG#upR7MO z4KvP${r&YRjm4qIQc33Q?%O_98t+ zq>(u*OH|Txo!-QzVZ~`N#4vc6;FdX+xxsK9R!k=Qdw6UXmt$o^=&P~>7l<07b$t{h zqV;Y7J(b1ilXucnDrhHrX1Qv^&;GdLFtJmueLzZt~5Sem_2K!lo=iT!l?nwfGwf+~1~Q62&)^ z9Na|JN~KfVj3l~dTxbvVz!iS?8pFI<+L5PBFGyKOJy_Xl#!6yd60FdY=@JMqldWSw z6G6$+mT1(j{WO~49%qkWdVsD3YIL>r3j4i(=?Rk5kp zkR`*d5hCYYZacZjd8AmI;(iAUX45;3qA(%k5#0~{!dhViDQ^X4p8}}pc;a}O? z{_~E6r|kIQkt=K(cxB1bh`kS@uqXJfgqf2~&8wHQDD zj|Ii*rVX0G@|Pz2t#STt8r~nWvQ7uhUxo~m9x+@CHs$A zaVDmDMgLDBeEz2pnqJvZBc{y9|^D%Kn7 zU)ZSba36~Khc3KIx@HTeKEk*G9e1sn#x47{H)V z8WCuA1KY}7Wudb#di(I|Hi(4Z3tT4Uj`~uJH?STOXdX7$q2Q@?l9(925aslUjsz=O zh?J>^@;ie72Pp&?e-xw}x*G06e|x2%X}&^#heMDsB^8>R*EbXzLpw=L>)b~ZI&%SXg z>HpKXzNl&2Y_$RbAdCoIo#Zni-!dwKc)2WFL8uz$JAoUOd36+M?|~-pC{cs9m34e> z?klTyKv)0ccWTL++WF}(FR06;XO<;B!ffs=J(sm;ry~zZQDnaB5&@j3Z1_Tomsx!w zoHeA_5U7h;L|1;;*|j$W2J7UZqd6 zuB%1EoqilUiNJhu%;MrYmbCOkXw43!CQ)L33tKG!u{ zFR>64WsSWz@F_GIzotKr3X-|Dxw$DnXfg3X6guuTyCA4V9@kLh=ht%8f@wcPditNT zT*OP> zSozi501f*-<+gr8ZHnLM?%FQ6dy+75X0lD0esjZ07Fo;5g^v~}Ely-fD@#U7Cr)-Q z{cm^mQV;eTXnTwk|I*C?)b#<{ys*&?7Mx z7T7$p@(Pyu&lrY?v_hI`oV*p=BR8+~W@uCABVH<_>2|qa`yGk>3n1fW#`d2UJP2k; z+1?AR2Nd)2!l+y3 zQg5t2d^s*F{^(77?oj9AN;Ul0`R5NlAi`@iU#v4de}&}P{{?BsHeAjP8K^eyO7K#* zB+m;io@J<0TSP!dEm)W$9B`UZEZ!t&lKQW5jVO$Fk!V5(-n<_y{iRd1s{e8lzA^r(NBg^ zL2)C%;f}tWBl*Jy|0RzUDnFPo;LPoxT9}(+(&Vdw5q|W@w!lW#o9VAuZy{)5;{9sK zKMHcos`Y533JUVWAAFDF07)T@v0as2w_~TO|3;%nr6S-LO$%0-^G-`?L8F~n6eMXQ z+YR)u5vyZB5m5rVuJLZJ+$H@s)nE9ZA^tzbXm6ij=2`c5EGx`3&ee*2m!DUPpNe^x zzg@5^mKOz!9*KpDhT4>`Po_a44i+{^r-d<3`nQ|)t1-ULT`&kQz+Xvx5lHtm4}Ue5 zG1F4W)t2QT&Y^Tk6W1J@q6AY`rRL6scXY?yY*5A$Y(*Rl((5=W>3p~UW8)}f^$hjX z<#`|TL$YVO4ESA3x%91Oy6zVC&c&i5iSE|9M0~Y8w}RqHgdS~Uh%r5U0_6Y;;){>K zSEUVyYM7}Ow87pj5h-TXN`dtQ9ck)deS@*Kn0l>sUFYN2 zB^aU({@g~i-kb!ZL_I1EZZYFaD%`vQ5P}`=W1t#2n}S47{S$Y9jWq~eAg@A>j!kVd zN{oZ-_Y8TEl{iH4`#`I2h_k^*L%koTcY`z~0=`z?@E$~*;Okl#I39tk9wW^J*P8m8B3LaTt|p-8NCH@=xE4S2>@SN=wBbWo0n%M96Svo$8$X_1}ec` zVQ~g6#CdoPhd}xh@6>hpriq>k5^MyBCTUm+FecR3DiZr}a4Ht7~%81pCSzP4_G0P3CvqF$(wNMKC{66q>zpR8g z2&HR^oA?v2?qw)y=DJ#n*FUxzx3(1H>dYV-9Z)25yyHPl#SEfiFfTiZr>UBue4XUyLZg*+cNayC^jI)3K7Q ziPXbDx@;DKbTLL=RU7qz7i)AsVPYoJdQ{%!7x~zHLD$(Dp6j|`Jyr@INZ=?^LR8KA zW?*Ox-L1AKk#AY=6JQ{IefIVfsisIK(EqJKP=3jn;gE2x+)?iLY%|uEY#7%N6Qka> zku4gEMyW>oMlP=ns=j~DLl|{u&}V>cF%qeC2ED9)#v|pP4Q)5o%xjW))MH(XO_KT4 zKQ$7EV!_oC@g%U21-@@4!NQ10`3ACA!c+|wU z+^?;pv+$^ZO2Hk*1y*G9v66g1ZG>^IzMW-(0vgyX-4ciJ3rC-2Fei{^FyFEdY0k?s z$iaIZon`SFRX4QPzPGsV08WkEYif1igSwa#TFD>0qm)|V0I)|X%!jnp;7HtEe?iuY z&H}PT;=5Ub&@rCaOK7`*o+903Ki%95#3Z8o0Wq$o11FyupID^!$vrWxWu}jxvpu9XWvQ`^tsR^2MY9n2>E+h z`HIe96LtRsbX&sWN}V9OvAT3Zhu?}6>%LF<5y5f^28KZlg$o#-seW 
zzN*8FAR+3mSq&~!s@)rOtD#O|7syH*G*YM}I19A)defCozHrG+}Y{W%#p| z`+iD=$QlC~;%}A!!i{yvK2PrT16yH$g^qO(If|79P_rzcm9s2}m>52^qziLYQ>TLI zfr??LrU$^?5*#9BmIm|1iZ{wk9o@q{u{aRzIf1(<=ue*vmn5^|+b5IH^KXQzzm0Wx z!?&d*UzDi+A{F9DvsxyFfrCmBfH(kJ5!W@=XNLYyB%7s|njBCe>JgA|vxVy$knKqKR^~Jz+zJne~SgcHAnPUw<0T=K^8T33|FzK zIs=W-Zx}S~Z}jjL4d>2+v%Wc+XtO%?C?Q<2&TL$=+J5^6TUXfA?=|L%YEmd+rs>Fi z4;B58uHKM|QQ|;j>N@h%8?QzwWZJz@91$6M>Z~Vk##)W-u@T>QOrSB}F1UoLgH!|i zCrSx|5C#+yK)%7yosI$2kT09kFjL$V=SNH~u$0#UVu4+@w@q`yNXm=8kFDeImUwsgYHnVC~)wP66DVe7d!lSvGkFNamWQ)iSy>c3wfaw29hehbaiPu0A+r# z14$O%Yi(~!Y=TX3j$%|4Oy2|nRK6^a#*MLozhExaMU_MQ+dLLj310H8JFKxtW-?N) z_N+MkYF=?xYP-^?(!yIH4%$aOc+lXuK-Xwx|1`Gb#*9l8px(^K;B=`#Jy=}Fs;DRg za?(WBAIeu~Jl`SUN)X!*TKB(Orl~$lZ2nbmQ$OzQ4E#vwuxWQKiUnx; zLlrHyc!}%g4yPzVE~hB{N4bHld;Fp1v>Jyjecp#k&<#lHT4iFGyE=9Rs7+3ZynZoP zU8rIpp+~)g?d@PWcZ?S>uSbbtihn(=5koRY;R*i(?QR?_+>H-xJ<9OS>*jFlX%i3- zU$kCi`-SPG2&99>MY+%FXRN{5By^7qBy5jzPH4lZCDfIn?0Tfwfx=8!P>#_ksJ>>f z1jZsiao~9HeZQ_W$*PMDdS)eORo(<1RT*eR#Gc}mpiylix(;L|t1z$Kac?QTjea59 zVi22}@n7a8<4_6AV1zyKexXZ16*m$d;^G%pnfZZ83j4l*txOc%shx}F3nfL=ZncF? zZj%D()r_y8iXC<+)c%oaEU01JKw%CnsN#L$WOeziRsCW-nQMeu#DL$YP5^3S*Rz9CT;76rXSt5oi{BUdYEd}d4^1^-{Zp8qzZ}6ke zMOwWqn4wkop{F&O5E-;vxv8-isEnujYklfErY z4!Ue2aGDbHo{RKk%fABK<+Vt8Fg`|iGp5o3Qv{F(8FGhV`qDCzec5yN5`GQ34H!HP zqqtD85D7F+EmZ|)bAX8lVE^WQF%}D=e*(t{&RC?26|D=|M5eWmwV0X=qm0Cgx^@pv zZ4(jORrE03o<>Lbk46uyZz$D%{*$WcD5tEwl!2ONHI6tN++tk#=Bzy(<`PMh2gbHj z1gGIh{8^x00MaEIi0is6_y&Ocz=^^1uN3s{v-?T<0Y3em3}$FT)bb7yB(pxsU%VDm zLw&KbKosLZqEI6bDjCfnq%KyD_zcLBUt!ur_CN0YIyat5gM(l14=pgjYH%R_$Dd30 zzXbJ+{jZ<`vSmSUCAg@SR_IvK3d;Qv`{O)mF$5wJn-JT-H{Wj~UrqP^OSQv`A~Uk~ zuEzySyQH;DX^sN3Q|2tGYb^*ktMG{X+ZB|iZ@DpJJK@8)BZ4H%`k_Ob@CL7TFZx5xzyNQv#vwadA`B2tph~pIca7}X;(`xoQ^bqK``xv1JZ-;`v<`eV(>2MNNkOla~E z#hI}fNo82Yyrxe?Sb;H1e47z0S_$q8;JbUP_Wl6AP^;Dt06hC#fhUWq_J0t6Y7ZO& z6&j85RRZCKF*3g?DB|^s6RqZjX7#iyjVLWd0tQr@)tK|=U4tXGu+#LGk{G*qgqCw8 zUdgvCg&dqP_!r#ot5o@c=q|}r={@kB-`IzSSrF({;RR_-O*-`naS`zXW-l!~!p&$L z=tOFO-mgagqnoA_$Uxci4a?5;xMp|MiwMX;CO3NIDgwi)=e!Js7RsRlb(cFP)HzC z^;Sn@MpH-P<|h^c;P^fbiVFNyZvv*XS7wFt+yEmV0gQZ<MVtL zPde@&OCu>d)|XDcXq%14Px?f!!dc$_cu06s{&Y6~{51T`@XQ5YX?@E4eBNOANqd8z z811sn8&{0-MMQ|bn69d=)B?A#SKTNElp@5CW>I+l4A%d;P*WoW?^y}QZR~j7esH^^r zC7?Tt|M3aDQ7y$G7ketR-cP{&l{T^rGdYUpyQ#-7Uwo_tpt{FD1z-}JmR`m9$bb}} ze-HiIy++#EBSZxEz(A5}|5^MWajZ?E4(O50J`#)wWX8>bR#NHi&^wK2UWK<7OsIY8--bPzxJSIgy9`(%;yfR7Yw|a zqQd@)9^b&K6*UoyS6)HzNOy%N5YH;-iRR!6t?b}RRE`qEt|!epC$OIA1P^;;Eb@Vv zM~}c9`(KvryQ!5Ly0%|pvDpB_NP|aM7E8&F>_tBnmQEhvXq9e{mv`wcZ0%+fD~6E^ z)KXiKINpiPOvpvzg=Vz{8ZppebgR+i%LW*HgKyPR+sk{s;$sOWJ|ur5%!u{M%JP-r zuVux%ujuSaWm;F8=w1%dg@^2H){fCxmi7W$v5`5`#o@JiBv9F0#r&3glXlO z%a`n2`|bmGNU_+lUh(cjR7QbbsTsY_j6u{|G0d>7#ETGzSkJ@5jX}C#T+wevz~6N$ z3i9XqaZtB*4qp@JxyWE_wz<{swt+Rqi<3G()(N)X`yk2>VBKVv(gxVe?VHTaWNuh zAqGNA4qc@m*w9$%=kdq;TEh#|ubt+<%fK{a?vN!IhVWZk4`*_tE11yMxL4YvWhPWC zdF%K5KKo^_(l=~Pw~sw<;Tf)~AtW}o(}Qq_>K;EH83z92kjct?cVps+j2J`NI2ibnuW>g|=0zWq~?wMEHH z%CpHVw_Ag|p0`znPe-VtYuH$)vc~RDni3~_40+YkUItjOi3kZrV22?GgCET&cm|hP@Tm=i5FBeT8nJC0exm|R{ zZ+nEnphBM1Xtg&A(Ut`~WE@7}c zcv?Yq#gJ!oovvWVUP9HW(G&O)KmlOwNijqsYE+z5>Bdh>{pvyk1JORhg19K7sPlOVoY8+16# z>e}mF6M3@<#SXi%;l`#FZ<>sZdjMaJioC$dFeh%PS1>z~8JG>0Sx;*@zHVGUOiUTDSvl`JpY87{|J?DZs7o`%c|NJ~2ZS~I;NAx1!YaJQ#< z-|;Ngl|_dD{Q9dstXxMwJy~0Ggi|4^?iQy39VhVM_eGifZD)Y$IaNg^1uvue(k_^^ zUbWXVCTPKGpF$lD8w}k~sq!hJx%1)_1;1O&1`)Z6=M*IcrXQN@CrZj^V?0Lgp8TI- z701T|CcX-g1mn&Oh!{JdKHTQE!NCPcN8n&3E2UavLvN z97B$<=z3w*Sn3(f{?38fS;Ypbe3^@mWKg8l{?_2zG25{4i&jM>Tvi^F zo~@{8F(h1Elhg%2M07YdIg@Bm_PKSb&3FXo>JG=;5It?!>$kY!4hcgU9pk0J?nIj` 
zT_Qe`rNvfdgV{Ezc@HhVc*SPRwd3&JoOy#bDV!4c^;NtO%Q%Ezti+`<@xoru;vl+9 zvi!sidp(YWC{4wRHzkEkMh5gM4%TQoQt zNYuDE0m+5$nQ7Jyovoc}maVP#@UpBhFgZ^inOL0P`{8$}GvIZo(|`RykZg%!#>Q4) z87v!vAIu@bnD*(M4YzDCHgV8Go~ZgKTCQ!cq1%bZBQ3U6z+o7EM=u3lM=v?Bbie&6 zgZlR8ko4Ov`pxT%fy>{IqZRGBLhWIk5~$d+MQ&-tF12^!jpR7=`BH?&mYy@~osm=o znn|BG>Co-C)90OX&*^S;x(wxSm-pxQ4GIZeA8Co!_d~Dl-R)V4(b&hvAK!!9 z`!+?!uHR_3@4YSeolQxxJ2noP4)5htrs9hFxUpi{cPiQ84{?3_n?-K6eXa5xgmGKfx>17{B&0SD&OmpEFvP#N;HGhCVh)%% zl-~EG^ub_jwk+?v7P>{?2ZHFOd_{~Ie61p!FIW=tQ-V~MQmYv;3$FP&QoCpD6&Ta> zBDww3+5(-}>U>BY7~G&Rt5MYk`m#kila}(edU>U4pKa@rhqtcxhu=?xS$sIi55j5(%okG5915@?RdP`o0AblA}84l*e)c z+t-UeK^Y(`!C*_?xuJ3y6*tLGWOI{Y^g-sl@9}`7FJA+LG=4dl3>hYPSN_@6Cj!6t z?&BgWx7c1257$yJ)z?ZYNRlZho%m2fTw@6rFe=seTUd!oD~H(=wQkfooSC0ot3#X` zw`xNqt3r>zD}YH$p0$4O3O!vYJ|n++_5A$msaHt+IL-BWCSby0ZQL{FO&Azf3yC8uGlB+JRWS4`kSFi(AFjue!*R z=CkH@eu3xxA|KD^r+&>(x3}}i8Ew7-cyqIdYtJXw{;EXm;sqmUVr#A3^RmcuSt*b2t8d(QeLJhi-&2jQce=!E#KcoDn9wxb1;hv=<5 zVs?s`W0o^G>A;_|vAasQ#mwjIz{y)aevjN*{0Q^muZkD2C8vp;j`uh+__V#n6JhL^ zPvWZWx68gx*>>YJ+6pr1O|}ih5h1+Jg6J-r`-3?}hlmkUlAEX)k#QTSm+pZZs2IJa zHbGL;rt|4p;)KplfoLuhKX7GcMAlNXmM6bi?5aL6%ZT2o$&5_g+^JD2=LqhXmI)8f zQKv=J&%jBT=iNe3I>w=uOw8B`T%Kd#UmE7^32#k&8jsH-R`&mj9~W?5gz+}Z8}H+U zRm(`h{>o2;IK`aFmu;0=Kj435ncj4i8d=^}hdGc{odrwb%;9F<#qGBc+tU`_pm&?9 zv~9w7YN5dayCZoi&6R_WHMIRV)9!sp$y ztfWp_fi?13>LZPstHl=l8%DmHi8$yVT0s`^svkf&=9wTgDLW!~I7zxGS*8(o5yf$& z)Na=$c9$u%>WGxb*^qF_Tmyj{;J3hZh-T7Z(Q!`!Fhb^r3HUhSSnmijXfnUOJnjoP?gzy?KRZQM7z^ zO~B++-x}?cP71%u&SZRJubU@-Br79=Yhj%sykpp^OGjlZu~aZS8D*fL@8A*MPw=zbMi83}+VS z1*}gn61lEk>)g_52AjEOm;GXeRfM(7u>9hLHi;MJJw)E2=W5tQl)ISJPDl?h#J zPuBv74-p>^No5W;zL@pN*jYn0X$YJ~+-*)RQXR=Pi7hA(uj8A)!r)x{UMs&#MDod~ zv*oZE&PkT5s_9khhmw$pqT7A~$D(ZuzJ!mtVq`18}yLW2W%+iyULV^N$~R!vvn!Dr9x z!ziF$|HS@rGzTAH*K=7@9f%`O9VbnXIwAE1zerqkvxY>#*`v!-bBP`Tjn27 zXbOHNt_%M_Xmv*_UV2l{WN9s^wyhNz)8;8tZ;}&&wec0*AI3bAWn)Yuu!qoQ+lH{- zr5>SKK+}A#Fu8AU-h{PKRTHYYQrC3kzEqX@(v#BlJnQ(gUWlN;&xN2LJYnhL8AP)2 zCM`PVRk3@jW%I@wUzV%NQQUx_Pt(+A0uHN@o1I)?`RH66d3JxQ4z#3FShYbE}foKB`8x#u+TV zav0Y9MGsBaULm=yq6X^W$M}_sZcZR0UZ%P)m8*nZAy2ymRpAn(F6>^0xVTyOYox2P z7wO0iWDDmohE#H9LLO#I{ zfi9H@YMGv#Y9Y_Sy#>WL1WsrX$*D~Ca!c%jcOcbuzQ?ZC7N>`C7o9HzPGC^_lZBn$ z_|-SUZOPa}XVQgHp3x6pj6r6L#NX53iM+c&e1R=FNApdec6rb*4QhJu(NX*)@j$}E z)t>0l(Xa7%f5PAS(G9dWbHf6X;~P3RAbN96kml|Ig)3_RgNy>x*j)$GIrm)3F;V}~ zb!c@PvM!Hq(8tu{uInMKA7(*Ehqx5)8SZSEZ4Pa)`##tipOcO(NAghfi8Qc(pRnV= z57G-!{2?OAeqWX#Hq=wThJxyP7|t?ub-BWwtX#08F}ETwhJW{!-=@MQjCS4%?z$?5 zAW+n(ZBVGdNmca)_ylzh+ti1+-mxBOnR_|lI6e#ZqpijU`^WM- zMm>cTuM_%_<!I|=k~*e%+y=+1%=&i*Vhp>3x2;m zOB|N4)~*6i^dfy7si>-l%e>8%OcP5G^{a23N27SY@2#RI<%9v6sEqsFUT~5-jB#(n z9Gk_Nb#B{pFPpWhyt=*AxwjQznO-dM=w6RKH~d$6ECSHN&Cy`mw&$&_%ZB0g_!!Z2 zy;^BWE(_h9zF54zSsqUZa3Z+hO{n*iB@{KNd?g8854Vv(Ql=IOKeBu9+{qaG=+8L?KZA(8IB@r@&@Rkv6eO zY5sZlUfDG%{uf)x5nFuL*13X!aOAdsKaL|yMtWHLkxts=JKPFq(^OZ=C+}H;onx^_ z)A@trn)2%IQfH3uz{%twn_?Gv#=o3)y-C3ygUb}QcL6N9ZNwyYwW{;=H`ncKBKpNI zJi3~TjOxjJOE1}zbrmlMTuUL)cgFS!ypz2bQ7q1r1a9R)OzF=G%Z|$Pl+F6)hje^# zLvAzxckIRZ_*=hZ1L(Px)I0^}Do&$RSzis@vzQ-sHT|`W*e?3j&93!Cre%WX-bR1= z>*H}U0O_HLfL6s#2AAxxUtu5i9L-F)eI7ECHQtAqOV?I=)sleS|}~jk)VF@SZf1tLku1Qc#Wx zahblAHI+K!gWVX0H>@`|q|?t%Qhy}j9$PF`^REKz77ys^hx|6XgSpjxI7nK~vFVwl zp9Pxmf8d$ddO>InZ+osl?sHWI9j8{U4!_z8`$m9soAtKe3-|P~k{Skspy=+^yP<5m zt(1blXu$npG5Es=tqpa=uFix;_x^1%qK53raop0B%?c_VMeE(!NLc{uiCl`nhN|ui zb$We3lLDf~Vyb6TN@WUr7&8te*w)Gz0eB7EBtk~IKiEPWh>b#O__EB-B6Fcu$lPHc z*<07Z1rpZdd=mbbB7NjPB{GV{pA-&^mV*@Sa`B?Q2=sCDoircZHWGQtA>SA;U($JPr zD){x0=@vk=D9J3pmvMf-yYTm%T?PMeZ{h;kwJ-~Uo9>;JaR?Ww&Si0a%pf3Urt*$mjV~&5=%gX4WDc 
zKl4p-m@w@|!&%PuefvdE>}}M;)sE)cMn;B}gitCcI;I)dLs|o_czsCO?RPoBNmpAA zp*yf5w?)qWt8EazYsMHMDJ7(Z;1}%BfoSARGYc_D`ztGYs`jWjT7pUXHS9(DoM~-8 z)^Py74Jd9B-yw>gJfwQBYp;Z%jXnENp10+Un*4jU3(lK6x%3AI)&`Iitv)38pX@Dfu9r36l1wmeROBm2-gOo9^# zGcY5H%t~0gd(h?^ltCNhLKwu`N-K-JHk*J32F@&z#uQO5NL8@oD~B4GL#6GxNGgRG z6wpc1dZy6@h=nlK5d@*>4*^7MZgY#SpFFhJtp{=y;9;G16xUQ?8ry)J%;PHTg~^uH z_9pp0BQ&Dud^f5OvPHPiRzV|whygk~3j{dmK=#1qr|uR-N3lkrY^j8e{&4%-ZZDEhuN!rvY`5Yxel51;Qs znA&lfMm~~$kIBSM;~|#!>s#dA{{L$z0BFdC%Fod{Q`fhHo@^k_Dw_;7x2Q&NM4iGf zM&rU!Ihvy-eMyx9H8-3rB$K`1HxI2FF^dptE|66+X>FS{Vn>C(lfethSP^??M8qbSp?5RaDC&h0SsIvDZomc~ z#o^9i&DKhIoX?59n&?LfYik*{lqJcMEsl5_5?xBIi9%?f48)wZ%S3|882##&5Bdn7Pt97Zi#Pqf=OQlRW!YAkGj!#1-YQG>*6g3p{VHp^6^V9F>bzoP|83%Xl|Z# z{?M&DN;Imqa;h^B~`0ymHGTg>osIFG#rz0^-mqNj;9^0Qw#*ZE?4&IH9NmusMDJz~Oj7g|v zBLvRpwcQU2BejPZ|BsX zxkh|pOV6jv%)@f6dPDG?esOEyhvJIU&+>xt$O3cjbjlUk< zpFPvrTfba&mw+O1`%q{uP7;DDpkP{PZovF&thr(@f;Z9Dnav!C}n&))s+ z>->Rp*18tvT(fG9b;{EDw%5qCYu zX=t%)$rL(7q23~cOVo?kjKmUcvkeJG%X_^FEOQgO+cQ<0UKDxQCr-$x%bKv~Fh{&oKxn5Mq%3}SlxQ2~Vh6}Q zRkVi@$?_)D#R7S{9dyO6L^vWvE=aeAKad z<6eT6@GpGJg4AQEBJOoIKzspslZ>zU7PGN|ij)~9x?ap`t!^yv*B;!S_AtmEwzP>J zOZ4v1l#{#b=-aVuv^Az0d21=iB5jM(zxp!uS2v!_LUG%{6ZKe-`b#!LpmPtb+ldao znD@F%D8?)w=9-v5@+`+A$y#NoxU0SV`(2vRCTy<=WB3*Y`|L*tO1JQcNMlp3p`bXb zA!UasW{I-Mp2+6sc2bySh#?KqXw{0wnlEEjv8kNNF%8B;3k673iDv|**r$yXFIkJH7f&drcVu)LJc27P2Nlq`;3|NTSX>;BbY6%w8lX-o)6IA#d zrF+A*dSm`x`Cb#$fzGu+Z@I|Q;eHt8-)gB7=)1$d#4FbHm@G$g4@UG^6NPRw`It#E z(pz^sbx?1w$*g_pE`aKV$Ioo>=FIILdV?`V*kyW~aE0@wSW;xY?$t-F+c_F@x9-fD z3?~c5qNBeb>TvJrns#?~UoX8CRPvukV=!35ORFULOigBob!W!7(kxeC590gvdGQ^X zwrvZQtAjr*`|px9YJ5J=i(K$OUXnM*%9clmFj!`<%YUi4uG(Nqjg(_j|30>)PgR+% z!~;xu9m~PBc$p2I{p#XA;LJqW-JSP0tr~Qlr%vlOUsJY9`(58Y|8!s!bgP4U{)R@u zQ|Y58RHr((>#YplD=u9xJ=oiwmlg%iZL&IX%6NfB71`LMwr1jm-r4iBUzb_)jgRVN z+ux9>Wqh=;H&jn*Q(h*vU0MLK+%YeNeC)(z3KH~S4G$FkRSN2k6Vam;D5-Nv|G zAh1<6W$(4_jEsD9{$M1T7N3>*%6o)@PDJS%*4y$o>*cg8@S%D0=8>sDg*ryeh!ZoV zM7{a1=nA-q?3ixbb5$7WDU2q!+gIu6A`z=;9#SK&a>&?yl*HmpsmdUg>kC#aly&0y zA~=BR==;rzWL|uklOc*~g!8Vfc+71U4sv$kD-rl!C4)@okn89me6laz7ZEC{AD@Sv zI$pPT*C3z6VxQ)msbiZR=8qvNKA)v8=ccZ9&o*>#>f_|xE$zkNOVQ1{lywB#>_)>= z<7MXEt+kFqPmKB)Dc)auYKzI&G6$!AC?BgEa6Xl1Gv-92*SWErh_tIb_sb2QLh!0A zM{}0qGzD~+$#vrYAf3I_Wj8lrI`=^R<7ra`ksf@@(`UhoD7^KiSDWVn z>54!Vt2=tJPW&`gycT#um+yBd$u7%4S&*%>W;R8yQys?8F(&?Eq~v17l0HF>tTlI~ z{g|xOp*bHRlHK9sbbYt2^m_e#?DKissg0N>G-%uGwj6vHzU(;`p~WbF17UDV39?d; zH@@=xQ>2k~FY}tNY80QZX*)xwAcZbiaZU;icIG~Y!+{;=>PC!gS5J*W#4J&V8p3sb z@{sa5kuSLLH`d)OUU%!XsT{QOgV$`$T`FvAEc65&R$G0X^;=zHueI`Zt+Vk1^;#%` za_ns-F6^Nf3$6@$omo{cE??kP?MQNFZO^#*Z`B&kBh|DeL+k7+}GgPgUv~NYvB~>o!-cQy z^#uS`$8;#u6rUWP56aqDKAmLq@qRuJUS?zVcxchMA>p135P4!IkijEJQalG!953w8=-Pg${q2t*?& z>t`71H_6p(Z+k?mVaUNHkY^#w+$4N)j#Wyue#qe1<}PD)zfH1;L9rgvASc+5OfAR&31K9jy5qBqf@O)HZL?SrccTIYN+wn*uEs!luH9Cw?~5I_c~ zd;HP?Db^gkKYds4Y@8_qHf8otENoojJWaHY(SwtFna^!ixl|fO+nCbPs%yb5jSgDO zQS_+}#)tjAB@9!=|BY9(Hzd=z$mDb}2SrDtcTw_OTdwKOLQY8BzB!mSsKMN&Wd+r+ z=O_w!12%xaJ?`72d<1^@s~bULfBuKS;!PhWlYhiq<8ycP+^bXD(m*L`4MD1)a4{dP^ZEXczM<3kadmRdc05OZ-P70k_B@*0`FYKG zU%j)2O4s3ab8{SzzUg&;C>xEtvWkP7azz+cb5*%*a7h-f=lAvX8kjG$k6xfJ?mOa< zbMeLK&=+kDf|DMn>hqpe9Mu7^HttS~<=^<%EhJH?RU$QDYX%X{7%rS!rD>yASGHMF z>+!cOVXa|Pu@)D!66HXThfFytJ3p2qtBnreej<5Ps0}F0PxZxh7y-&9&StT8bko(VXw=V|NfszNDA@^APozSqmUtjUG`dH&6?T=23dgzxh@*Q(FLMd;2;#`qGORU0HTJw46E%bFnWrU? 
zhrSq1;MTN7q_*RuJNj3L8?A6RE;WpYF>!I;mL#sUVkkI93}ARP@j9qZ=`oESwoI!K z=_cZyNt9C!O(@-G=mHzXm0B!(vrmSkGJ3pDu&Oq(vGaFDqXxJKBY(mMxXHP2Hfb&IZqU2PmeeIVQnF z)5AD|yoEovUN9syEc&{CDC&o%P-dCBaF^~ z{lh)Kd1{-d-w4^|%~6(YTg^TbVZy~ISbwX4ix+6ybOZ!_@v8i9@)mR838JSomK5r> z1Y(+9J2F1N61~w=us=W?rS277naFYs3PPNyP_w5hUS)1wqa$Z!Oj^o+r@sJ)!3I_;=B zEq9Wft1e=<4^3}{it4)muqE8SAHu)FA8d@1aA<&9RG-=wQ3*YfD&ii`RFVoTr%%U3 z5bdb2{M*jW+{vu8UATaTIKr_ml75EXAAOseYdq;yZQyrm$Q+|=i+&5XJ<~}fJfej# z5qYy88JWUVAD8uWgE?dqB{8$bGx8Be@1iSs3-O!=dY(eFGp2Z!^^=QU5+-_Wh)@Ej zX1vS@I74iTM?xQs4vkMD9P>0zo)wjfF9NUnZrXP${B%16N3J1P#%^M>;B^f7EE#;i zZG}qPkJA24+;Wm-oSt_4$mn(u&O? zt!JW!gq59ReX?>AVduH7BRfCJP7rXm3<3Whw;Zizd=ndO1I~|(Ko_3dr1evc506M*u zvw4tl0}lmgyG;Jn*#c*fc7?_zocnt9G!FMwJ2BkXm3eEqQ#R-Z`z8%Yb5bMke6?bT zLPR*VJghJxCj+W2#lU!?BpH?2WMEfSzL= z*>O&xVStb}!YYNG`zy$q`z(+jg?bmJSOq3Y@+^mFEtYl0@=y+WZmpcGcgB%v@zH9j zy08N+&jdg-&x}g;!x#ODZ25)(KD(4CH8?H1zwB%l%Zkh8haD&Z+Y745s7P^b1y3rw zwnCc~l7d2O1>Z_1ULy{DG<1=5OFxy0VqP#_uz&0-n8i7rG*MUM*2tjlP-#$6k6L+v z?8G{IjGvXcs=zSIpch#~;g;7edX_8p?C(wC1D17;e+#o;ieoW%CU^T{He00HW;fp( zuE#ia+^O=6_=%yv>ZN4AApoUbb~8ShlNfvRoT~=n`785*{7cDTW$PjvNS<|$E+=?D zPX1JOcVum=x2IY_?R9rBi9`Ue%FP{Ry$4yMHvn#(`xpm3U)!WNNJIS)tI`1jQ zm@t{LFIDyyOFB6;>QZz3*?`nKjn?h9ABh;qT2(rlQ6l+Nqgd(#Lub7@l;CDQU<6Ck z4<9GqwtNKZrL$lI51*~=#0mPIG&pz z%S-3Y;;$0Fksv;eq%cMK;V`b(W||RU56Y;O7}K#5rJad>Ehd=E@wOD$GPMQ+vcD@n?M107Ko%OU0Z2E>l(q}Ai277%@7k=bl>WwbqI!x=#V|w{f zl&eN&(c^;A+c)3Ia)e%-{oTMFNzESDanJ4Dlbt!*CmmBsZVB14<-WT?3%nsy5{Y|3 zi23V!gBaWCpBYMsSsS(*!Xv>jCb+=e{A!B{$w65!zwLwc|g z5q0De&0rtX+Ks_6v@U0%EQ&##9|riz7#7Vi3=;(7Ny!6Zzo_k}^EoaI$yLxBK{p2< z*pKpANEC^XCMx?$f9jgGIs!@R#{k+P9?9%4uE2z{=b;f`vQh}caCp>$F@EmUzqE;% z?N#wqO}<6%>y_Y(_Z4ViD{4roUXdWiJ3HyJDhL;fWABxy`ik09SuynVklqf~Gm%yr z#pP6FR+Q85m)~9U2D8idB!F2|wHghb)PC==zsd~Sj;^Ji_0;DiU~oSd)L%gm7s@e0 z1gxTm{nb9xwjpA2LactEj=o{oS%2jmwk5fiP>&z!=#n+4kD+QNLB7A487J^{yw}y= zBJ~Fq8GkPwC3<{hNn|t&E9uyr_1Y-TUR;~4VDhccHd#|TEfU6XSXcxHkg{M>eLA&uV|B_)Hz+IQF<2U6R~fl%tkKm1-ky}Sd; zw_Z{6HOQiq#_F3A&A9VnSr#Ov#%d?^C3u1L9c>9?Z1ov0)nII$CVrf=60XNE=2J_|=OY;xamn0VvC4T}%9(S}$g zP@oNiY;$`l87A?=P~Kt{2g1G{YOlY_NR^7alk8+FALD+ufq0~Xu+N?!W@yqIL6cL= zb?IJn3E7}&R)MSA_^w!aVMZEVwPsY&0}lq#3C$P!4(^u<6zlwgR4{{-~YPyLkZM|Av+EcLqKCct% zuKJdP9s`<9Qw{+_Ccn0v^ACba5#4!?w)dE}g*%8*qVT1g;9Ev!Z8G^ieM|-g>o4eG zDy*k)yjI0y3UDWtW3YoxDmB1C1R5RO>HLj;29~=IFXbl;gry`YKf7B!6A*9>AmIXq zj_L;#+^v4pzBc+@m_Tx27&lstS*8t-4=HC1fpYf(URY$5Dd7X`jGY9lZm5v}g=W}6GDdO_TRL3@F6Wsnm zTI)^>NC7apx3RgnFeYA~v-#x7n49zI4U@eKrCbu8KGG4T&459@;A{o?w53QdPE7~p z)tZB2Bj(g5Cbnh4&B~AX&y%2YYKh)!EWytPI{de`QFa#T2m@@$JjT z@2Kq|Jl#mg@)^XDq#lCq^Dr6HV=>c^+s?xO9%uk<^FtN@A`Av5ARv_g2{cUg?2QZ+ z9PQ1lP5u>cWN4h)?>3@-tdH&C(qBjoay3u@wUM{V2%o5j*NP-8q>mdi*7r4VP{l%v zTP;3M>v+Y;;OD@{iXO+``6d{KK_u+IT~|WkWWY@9_X58?gjAK0J|h*XCG=g=rU=l9 zVNAvgqwux0rbxHKo%$QPA<+37A%$GhX7!xygq+%Rx}_U&0I#rnxLl_ko?fPnTYg|v z`b&u~=)n?a$2p?zU0J+LjV+jAP}dI9kEx8370sDe`crfe+!`VuUeYpWrlA`mi*{NI zG`$sI9rWL?Qvdc&*MZ;R*8KR;*&#D~>hyUJ5i;-9Pw22ZShPvkml$Vm*E=?mV!}=m zKd!IMn7U}S#2w?eK;VO!rP~HW6Skp;mMX-x4`y#INtP!^S}8}J(F)eVo`9GzwYWOy zC$-!%Mqkrz7ixx`B>|_xqMYZVvuDJPP;;N;cG&@?Syrd*Nq`NQB*cu>hZ%-!?6B?@ zubzlMe|r}Ra8e4GNmoJii&domK{FjvT`gGDjk)OVBxz%^&Vtc&{1l|-T^~q5gk>_Y z7gW-2*h0W*qCCI-(9y8ZtqtymVx!!*Q@CbcuxSmC-2f}PO$L{=Wm0xe1|OC!dOI1I zO`KK8xSQdYWIlgEJ|j?P)@eQM90Sn@M>wG{gFQJ?tV>3k+egY-!uR5a`i_+70sU*8<2Ra<|g*Hx-^C*NCk6o1dWF$04PZ z+Hrir$6iPU;U{u*?vNMRUHpaHZwxX?!@bB)5%$8t&jN(fH;DdWV=GFvNqo0O+i-R^ z?bntosWII{sI!Cdev80F@vQ1qCifx5U3Fro&WyKy&opL)`@)S@NTN;mKfmALbV^Q$ zLj$Mu?d9vG+?;9eU9H!XrHa7osgtx}nl+JQUE8dJOcKXG-n`if@t(tlaoLrUA@{I@ zC%*~~W!!1kcKa>-A`^I=~2LI7zh3!F4VJT9|BoMBb 
z@6PGugX6HN$Aob7%?}bALhYeeKHU(>8(N`DkI)UDI$!uWI1dj}&PkY-kWhdbnQbI{ zFSWY53-xSB8Pw++Ca$PVH-##>arK&~XpbVY?dF6ItFfi&}56KFbaAD3v$i*!g z+n96lGCXdWrMt|jA!^_;3OO?ZimCQe70LBqGqdvCaM{;e2ZtkhgG&LkUUOtNW?%hI zH_Dzc11IU7Wd|k*xV|)ajJS)JRqClHcrRHLXYUV~_XcLzYmX_8ePMT%bzw9{xBxYL zgZKs}vk}J#786O5T72w}6B4~8D?w6buDtyk9AWR0khHz0qGs9|3uf<~2yY48G$E^S zo{!Ug!ZPLKA>HLu=~9UzU-cCoBW;$w6LHCdUy}m1wj@9fzw=2NqxJr`WJG zcFXBj=FuX#&`8bs=7d4pMnRUW{mBN3H!{6tF@A(ICH6KYL(G0gltY=u+*zKhmJl*u zquUXMb$frHfc2q{i*2R+sOSC)RUoo7gc09~1mnjo>1ho9qD?CtCu^&i;q%p@ej`d{ z{OV4=zJ#`p%@FR${3~h+B{e!kMzH0cT%@sbbrsU%bl9h}Vy46DxiX(Lo2_-PMQowt zmNO{Pj?+80W|iJ$>1p41&I{DU=DciguH+gJJ$Y@&0U=2_*3;_Mq#`nu_X2B-n9OIg5=vQ7B*;!C!79L2=P2=+;Sski~2yay*sE6B|it zgt$DBO6^IK&*@(?$WP5bVzUQ3-$hFEgdE4*sfqxFM2C#9ya^R6&T(;ns!czVc7vNG z@HadI9jyGGqC#h9l%HS1&A_ZF47uPMXmm?7BfVRs@mV{rhffny0opi(UiRJ$kzrF6 z9xa+#!V(<7YsasLsK`flw;3xI_U5!@7g2g~Xc6*FwvA{1noFG0rz9A;4t(b8EWD($ z)^+B@hj4^f8FyDAVPaZv7^8I6z`4Z-KGkO})2`-V&o$8@*llx5#+0Uf?=gJv=#&2p zG=ejY%?PO^#psk)EE4umR4%jfTO`MDl!LG(8Ok5#y+-l_seEBG9*y(Gd62S`Xq{ot zV8^k4vfI=gl~0}+$dNepJEBtI)KG%>QN0KoC@BKp3k_((HUWfRc^%QZ?$=~s3O?2b z!ei?s$rCeFk zG@>S}()6^IoV3FF9F35;`01Ng^~t^6Xt)ig6;t4QQ~z~-E&E3X-=1Kkt_O!Q!V9Eo z<&cyrws9MdE`vCHr$retKe5WTwcr7DQT^M#_E!0q>5MV z(Vh0N;}0+WNGzA3O15`+KrJ`p`UbKT3>~SG!uc9D4a{X8K zx&cogCBJp(4O(?04Mm+rpFzHhWsymmy3P!on&f@Qn_QyP_wnhDg2`8|~C!MK%*)9<4{8{fR01o@OpNGpmwzN*Fq(W25ccUJ2MhLs*pUl|Ic^*|{HT5s2wFopuMA`doZoQ= zzcsVMWTdn-DSpd)%kXynE8l=VUWL|CVvxI0@Xy0y5JsE%F~1)_nb5~$(&M5j02cD>F+P^@+5kTj* z$+K^KK%0}79AanSZ#i1lBn&78=$sV9CO_05AA?bJWPfwtkgL{9*bas8$032QF;Je} z1t(-9lH(+ZK`jQ5>nQa+OSFNk@-hcn_Umq*z#s&lP6D4~AqTSLg4s~NU99_lLzY05 zZbST7EwXoCq%DW=!&?Q*QNc@h`Y&;jSThbJf(?(_eSa zZ-Zst5c=by&Mxi)Jl{AlYO*6PgkSlPU?DYGL$S#~8KR05Lp0L4{X)tR>l;@H+e9_| z0%7bwg)X-F2X-mN_~LcjwMA}slm=kJ!v`(N^HK4?*ai~TKjOfiBhN!;eJvq0~n2gu8c&1tOw1$In! 
zXkVvL4vAbA$RMqdpRhhIxx~t%B=kBla$WTqX;MX{KYurB_|qVdNE^W-;2fNGiToL;5I?(CZ z8(BKg{cGc26S=g1ZI7u*+oXR-XjeJ$scsg5dF>vwMkP-!l>eHTN228M%N1m(TGGYt z{BF?&XONZM4A)eVooKA5&5h~y;Q7u|2`xT*W=XTGL{RfgszBn7TIG1|+5QpA&P4xI zb$myFio*%7z-eM~qm!#WR#>xFkad)b%TFO8lT%Gg1)qD8ES1{e^hAQP*`Z1rbJ5o9 zN7c`Gw+xABeH`~fw_t|_P6vCIbj|~F@XJn*;;r$)$%V&9HzK|hLwO0M&MRoTfOx6c zG7cQ9bj5{q0~${U%LD65bG7C$S{^Ecc*8@!LN{N?uk;nvv^bah zlVp}Lmom(OKs9v9k~VrgB@1?LiPLq$KQw#;Lvk;92Evf6NJfYSTw_Eaz2n6!8u$Z~ zD8dMXOyzUciSg|sX3o&n51fK;%(so5)Eom1Qkd}q0^|ppY|9IjDdziOgE}Vhv3vtv zVIaX6%X5$XfZztjAc;Wt%Y`9D=)0RSKL~MEbzp-_!n9~~2qcPqmLppdB5GrxesQ~x ztP(qvIdLh6BiVuP|E@Y9bMqjl#UchLx(UdUUN0WkY&4=3%%d}b!gU2==p!tRI(JTh z$|uP|w2oUi05gT(V?tu37{z*TWHU0OAMut9(#;cI@eOD*6{7pPYV{e*)aGsD(&3m3*65}+&s7qB^!-U>!#-3~E^))I-Fl|UqF-$@E zOImK3LqEAQ8}y{iTcF5lK!zGJddzW)b@Z&Q?>iG@I<%_+JAF2rNNp#RE(Ns5B~7nb zoYt{77{2({X;klyG-_YjddK2XvDE_W8fqXev(`iTv9)F$p7Jz(yh(D)>fo=lz9N#g z(JhVYGJlTiJb1V~%fH|V*-EX@ zLlThnQ`Ohb?9H%FBZPu`WK+7rp=h5@Cf98Odffk>(!mvUMz;qrD(V@4fXM&ulePee z#nH%K!N}3^FRX>`k8K88eKYGbH4Rf@C5(^N6L(=1L^V1wL`1~IGj$7^aLYLHl{pe7 zW3tvtNV~L*LW3eEJ;ss>Hp)#9_k+VF^;VX6{@zT;rC{s9D zgyzr*iE&?W^B}^iVn+uayd)2ij%7tfL`?&|+g3VjgD+2yp|<3?2Yqj5coRAbmigQx z9Lt|Q5^PXDdRXF_$Q1|GD^)w1pjYI9Og@a``S7M0v$tc~l>6j5Q%Vz2BR3 zT)rOnVZ^FcAE-L;Sw-;3nvI2)XNbe+zCo_j9V-096EX8YV0b8!a|f*zm&i@%j{jvQ zwtS&ymR*3p&2Y$~t~YW`QI-r{s7?EeNpU1aombXq5=tmu(yi0H)t^y&-6<^`6;ePR zuy7LYPcmGpDE$8RX=)5(Kh>nL2Fk6E58SXIeu*&tO0F0w zmlr=Bxc*^nyuc||Co-X7&egbp^#x1F4k&Q%-i;xL&O*td`p*A$81mG zlZiGlqvSW+ya%xkybA(LyFG-fuYh1_jv^|sR{$lQzFSv5dcs3}*IjlNzb&gU+79Zs z2N~4~?DB%k;2#!*CcwdBO*hl+q&%0D@nVpA);pq8An(dgjZ2gyzfG_|NyA+!Kp(Eg zCZf+g1l?y!O8gd!#~#bXco(N6aX|fJCrye(*-n9f4o4$So;FQ{EOYdL5sPpvVbYDo zsIV^BuLI5r-j6y>Mw)pwqQ#=Xq)ssvF$>a?Eg-g9xo#1q2z9#xa?vJqq%T4JeC4*s zs+2{o73-xtjSTH93A>aCHcv{O4Q4n_i6K_2l0Beu<0p3Jjy^pjO!6D21Ko87g^


literal 0
HcmV?d00001

diff --git a/docs/_static/benchmarks_files/OV-benchmark-data.csv b/docs/_static/benchmarks_files/OV-benchmark-data.csv
index 8fbc20e6e64f3b..ea2a4cd4de414f 100644
--- a/docs/_static/benchmarks_files/OV-benchmark-data.csv
+++ b/docs/_static/benchmarks_files/OV-benchmark-data.csv
@@ -5,7 +5,7 @@ bert-base-cased,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,21.38,,15.11,0.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,32.23,,20.26,0.15061144,0.495859202,214,65,1,214,65,36.18,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,113.18,,45.08,0.344024364,0.905472125,329,125,1,329,125,17.43,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,34.34,,24.04,0.178867029,0.528345686,192,65,1,192,65,30.88,FPS,FPS/$,FPS/TDP,msec.
-bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,51.27,,18.44,0.120345681,1.830973571,426,28,1,426,28,23.31,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,37.98,,13.59,0.077518083,1.356566444,490,28,1,490,28,29.22,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,27.60,,17.56,0.091086845,0.788551831,303,35,1,303,35,42.83,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,34.12,,20.30,0.069915909,0.974827538,488,35,1,488,35,37.09,FPS,FPS/$,FPS/TDP,msec.
@@ -29,7 +29,7 @@ bert-base-cased,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,84.42,
 bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,46.67,,23.26,0.436174206,3.111376,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,73.20,,31.70,0.149385536,2.614246884,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
 bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.43,,2.03,0.022934748,0.7377344,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
-bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,83.69,,37.23,0.196445915,2.988784286,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
 end_rec,,,,,,,,,,,,,,,,,,
 begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.18,,0.38,0.011017319,0.078590206,107,15,1,107,15,863.34,FPS,FPS/$,FPS/TDP,msec.
@@ -37,7 +37,7 @@ bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.97,,1.87,0.013882493,0.045705439,214,65,1,214,65,347.78,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,9.93,,3.74,0.030176091,0.079423471,329,125,1,329,125,155.72,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.36,,2.13,0.017518449,0.051746803,192,65,1,192,65,302.24,FPS,FPS/$,FPS/TDP,msec.
-bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,5.09,,1.63,0.011951214,0.181829179,426,28,1,426,28,219.77,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,3.79,,1.22,0.007726669,0.135216715,490,28,1,490,28,266.14,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.71,,1.61,0.008936965,0.077368581,303,35,1,303,35,412.44,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.35,,1.83,0.006861724,0.095672042,488,35,1,488,35,327.59,FPS,FPS/$,FPS/TDP,msec.
@@ -61,7 +61,7 @@ bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Co
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,5.20,,2.33,0.048610495,0.346754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,3.87,,2.23,0.007899318,0.138238071,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
 bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.44,,0.17,0.002288653,0.073618327,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
-bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,8.63,,3.52,0.02025661,0.308189857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
 end_rec,,,,,,,,,,,,,,,,,,
 begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,11.86,,4.61,0.11084032,0.79066095,107,15,1,107,15,85.88,FPS,FPS/$,FPS/TDP,msec.
@@ -69,7 +69,7 @@ deeplabv3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,22.85,,14.48,0.195256
 deeplabv3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,34.47,,16.42,0.161068212,0.530286114,214,65,1,214,65,33.07,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,94.13,,42.32,0.286105004,0.753028371,329,125,1,329,125,16.23,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,37.18,,21.16,0.193650962,0.572015148,192,65,1,192,65,27.36,FPS,FPS/$,FPS/TDP,msec.
-deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,52.88,,16.51,0.124135164,1.888627857,426,28,1,426,28,19.87,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,30.78,,9.65,0.062807791,1.099136335,490,28,1,490,28,31.13,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,32.02,,18.34,0.105688204,0.914957883,303,35,1,303,35,37.47,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,40.36,,18.54,0.082712269,1.153245355,488,35,1,488,35,27.15,FPS,FPS/$,FPS/TDP,msec.
@@ -93,7 +93,7 @@ deeplabv3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,105.14,48.76
 deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,61.90,,17.22,0.578511308,4.126714,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,49.44,,9.02,0.100894422,1.765652381,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
 deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.81,,2.03,0.024920889,0.801621937,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
-deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,89.16,,24.78,0.209304953,3.184425357,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
 end_rec,,,,,,,,,,,,,,,,,,
 begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,272.36,,133.20,2.545454771,18.15757736,107,15,1,107,15,3.60,FPS,FPS/$,FPS/TDP,msec.
@@ -101,7 +101,7 @@ mobilenet-v2,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,542.97,,451.26,4.6
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,899.54,,499.68,4.203451742,13.8390565,214,65,1,214,65,1.58,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,2804.17,,1285.76,8.523326054,22.43339417,329,125,1,329,125,0.88,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,868.27,,679.32,4.522249945,13.35803061,192,65,1,192,65,1.35,FPS,FPS/$,FPS/TDP,msec.
-mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,1372.54,,531.48,3.221931925,49.01939286,426,28,1,426,28,0.96,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,990.18,,318.31,2.020773404,35.36353457,490,28,1,490,28,1.18,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,741.91,,511.96,2.448553162,21.19747452,303,35,1,303,35,1.84,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,960.08,,614.86,1.967381666,27.43092151,488,35,1,488,35,1.49,FPS,FPS/$,FPS/TDP,msec.
@@ -125,7 +125,7 @@ mobilenet-v2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,1008.77,7
 mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,514.94,,313.82,4.812519626,34.32930667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,1114.75,,268.28,2.275002757,39.81254825,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
 mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,73.89,,44.62,0.38286464,12.31547925,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
-mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,1497.43,,605.84,3.515084507,53.4795,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
 end_rec,,,,,,,,,,,,,,,,,,
 begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,49.96,,14.45,0.466891776,3.330494671,107,15,1,107,15,19.80,FPS,FPS/$,FPS/TDP,msec.
@@ -133,7 +133,7 @@ resnet-50,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,97.49,,51.23,0.833227
 resnet-50,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,145.23,,74.40,0.678644387,2.234306135,214,65,1,214,65,8.18,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,515.94,,140.29,1.568210731,4.127530643,329,125,1,329,125,3.87,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,158.18,,82.33,0.823856129,2.433544259,192,65,1,192,65,7.04,FPS,FPS/$,FPS/TDP,msec.
-resnet-50,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+resnet-50,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,229.98,,61.97,0.539855399,8.213514286,426,28,1,426,28,5.09,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,173.03,,44.88,0.353117963,6.179564359,490,28,1,490,28,6.59,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,122.85,,61.90,0.405448145,3.510022512,303,35,1,303,35,9.95,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,158.96,,76.32,0.325734087,4.541663835,488,35,1,488,35,7.54,FPS,FPS/$,FPS/TDP,msec.
@@ -157,7 +157,7 @@ resnet-50,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,352.09,211.3
 resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,201.96,,71.44,1.887469159,13.46394667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,307.08,,88.49,0.626686058,10.96700601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
 resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,18.58,,6.49,0.09624999,3.096041335,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
-resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
+resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,358.14,,114.23,0.840715023,12.79087857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
 end_rec,,,,,,,,,,,,,,,,,,
 begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
 ssd_mobilenet_v1_coco,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,107.63,,36.80,1.005906996,7.175469906,107,15,1,107,15,9.12,FPS,FPS/$,FPS/TDP,msec.
@@ -165,7 +165,7 @@ ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,212.15,,1 ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,327.95,,171.39,1.532485774,5.045414702,214,65,1,214,65,3.61,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,999.78,,361.81,3.038835794,7.99821581,329,125,1,329,125,1.90,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,343.23,,200.57,1.787633018,5.280392915,192,65,1,192,65,3.11,FPS,FPS/$,FPS/TDP,msec. -ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,518.48,,150.36,1.217089202,18.51714286,426,28,1,426,28,2.24,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,387.68,,101.48,0.791191606,13.8458531,490,28,1,490,28,2.80,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,275.38,,157.24,0.908858835,7.868120772,303,35,1,303,35,4.33,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,367.82,,194.43,0.753734827,10.50921702,488,35,1,488,35,3.35,FPS,FPS/$,FPS/TDP,msec. @@ -189,7 +189,7 @@ ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only, ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,321.29,,138.22,3.002740187,21.41954667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,531.28,,141.48,1.084245141,18.97428996,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,35.69,,14.88,0.184945841,5.949091211,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,690.18,,239.85,1.620143192,24.64932143,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,0.89,,0.23,0.00834996,0.059563049,107,15,1,107,15,1119.27,FPS,FPS/$,FPS/TDP,msec. @@ -197,7 +197,7 @@ ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,1.68,,0.97,0. ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.42,,1.40,0.011301245,0.037207176,214,65,1,214,65,459.48,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,8.23,,2.40,0.025006186,0.065816281,329,125,1,329,125,163.47,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,2.78,,1.56,0.014501179,0.042834252,192,65,1,192,65,362.12,FPS,FPS/$,FPS/TDP,msec. -ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,3.93,,1.00,0.009217819,0.140242536,426,28,1,426,28,278.25,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,2.96,,0.76,0.006043555,0.105762215,490,28,1,490,28,336.81,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.02,,1.13,0.006664408,0.057694728,303,35,1,303,35,563.91,FPS,FPS/$,FPS/TDP,msec. 
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,2.68,,1.49,0.00548551,0.076483685,488,35,1,488,35,405.46,FPS,FPS/$,FPS/TDP,msec. @@ -221,7 +221,7 @@ ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,9.70 ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,0.90,,0.23,0.00837685,0.059754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,2.91,,0.75,0.005937663,0.103909111,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.11,,0.05,0.000582108,0.01872448,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,3.92,,1.00,0.00919669,0.139921071,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,5.46,,1.54,0.051044248,0.364115633,107,15,1,107,15,183.90,FPS,FPS/$,FPS/TDP,msec. @@ -229,7 +229,7 @@ yolo-v3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,10.65,,5.82,0.09106238, yolo-v3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,15.38,,8.31,0.071851432,0.236557023,214,65,1,214,65,74.19,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,51.71,,15.62,0.157162207,0.41365093,329,125,1,329,125,29.85,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,17.56,,9.38,0.091438134,0.270094181,192,65,1,192,65,57.84,FPS,FPS/$,FPS/TDP,msec. -yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,24.16,,6.62,0.056714953,0.8628775,426,28,1,426,28,45.99,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,18.06,,4.86,0.03686323,0.645106519,490,28,1,490,28,57.19,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,12.66,,6.85,0.041784766,0.361736687,303,35,1,303,35,86.80,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,16.86,,8.71,0.034540452,0.48159259,488,35,1,488,35,65.99,FPS,FPS/$,FPS/TDP,msec. @@ -253,7 +253,7 @@ yolo-v3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,63.86,29.69,,0 yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,34.09,,9.18,0.318603364,2.272704,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,55.18,,12.97,0.112605748,1.970600588,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,2.22,,0.74,0.011488536,0.369547916,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,52.21,,13.99,0.122567488,1.864776786,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,54.42,,18.04,0.508553978,3.627685044,107,15,1,107,15,18.25,FPS,FPS/$,FPS/TDP,msec. 
@@ -261,7 +261,7 @@ yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,112.01,,63.79,0.95 yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,167.15,,91.46,0.781091755,2.571594392,214,65,1,214,65,6.72,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,599.98,,196.78,1.823638134,4.799815567,329,125,1,329,125,3.05,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,185.17,,102.47,0.964441757,2.848812574,192,65,1,192,65,5.42,FPS,FPS/$,FPS/TDP,msec. -yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,251.02,,77.47,0.589250939,8.965032143,426,28,1,426,28,4.54,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,186.99,,55.22,0.381609578,6.678167613,490,28,1,490,28,5.74,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,137.72,,76.53,0.454530194,3.934932821,303,35,1,303,35,8.20,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,186.09,,96.75,0.381340728,5.316979292,488,35,1,488,35,6.26,FPS,FPS/$,FPS/TDP,msec. @@ -285,7 +285,7 @@ yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,555.59,29 yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,266.57,,93.47,2.491299065,17.77126667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,383.36,,116.03,0.782374111,13.69154694,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,23.03,,8.30,0.119308014,3.837741122,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,444.34,,149.05,1.043043427,15.86916071,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.49,,0.38,0.013950494,0.099513526,107,15,1,107,15,672.25,FPS,FPS/$,FPS/TDP,msec. @@ -293,7 +293,7 @@ unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,2.43,,1.5 unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,3.62,,2.29,0.016924989,0.055722272,214,65,1,214,65,322.49,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,11.46,,3.96,0.03483307,0.091680639,329,125,1,329,125,121.88,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.95,,2.54,0.020576722,0.06078047,192,65,1,192,65,262.38,FPS,FPS/$,FPS/TDP,msec. -unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,6.56,,1.65,0.015387439,0.234108893,426,28,1,426,28,169.32,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,4.93,,1.23,0.010063508,0.176111389,490,28,1,490,28,209.95,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,3.02,,1.85,0.009976084,0.086364383,303,35,1,303,35,386.94,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.86,,2.43,0.007900388,0.110153975,488,35,1,488,35,282.08,FPS,FPS/$,FPS/TDP,msec. 
@@ -317,7 +317,7 @@ unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only, unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,8.96,,2.57,0.083779393,0.597626333,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,15.42,,3.90,0.031467977,0.550689601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.55,,0.19,0.002833692,0.09115042,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,14.32,,3.81,0.033621549,0.511527857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,24.40,,9.61,0.22800938,1.626466914,107,15,1,107,15,40.45,FPS,FPS/$,FPS/TDP,msec. @@ -325,7 +325,7 @@ yolo_v8n,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,53.51,,33.07,0.4573632 yolo_v8n,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,81.91,,47.09,0.382748211,1.260124878,214,65,1,214,65,13.68,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,248.13,,95.51,0.754209583,1.985079623,329,125,1,329,125,6.70,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,86.77,,53.00,0.451947196,1.334982486,192,65,1,192,65,11.88,FPS,FPS/$,FPS/TDP,msec. -yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,110.84,,40.77,0.260193662,3.958660714,426,28,1,426,28,10.71,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,76.53,,27.31,0.156180065,2.733151145,490,28,1,490,28,13.42,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,71.40,,42.60,0.235643867,2.040002619,303,35,1,303,35,16.51,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,93.44,,53.42,0.19148001,2.669778431,488,35,1,488,35,12.45,FPS,FPS/$,FPS/TDP,msec. @@ -349,7 +349,7 @@ yolo_v8n,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,210.16,143.23 yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,116.50,,51.35,1.088790654,7.766706667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,114.23,,46.77,0.233123263,4.079657102,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec. yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,10.54,,4.68,0.05458634,1.755860615,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec. -yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. +yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,181.83,,76.10,0.426834977,6.493989286,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec. end_rec,,,,,,,,,,,,,,,,,, begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec. bloomz-560m,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec. 
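Note on the CSV rows above: each benchmark record pairs the measured throughput with two derived efficiency figures, which appear to be plain ratios of FPS to the listed platform price (USD) and TDP (watts). Below is a minimal sketch of that relationship, not part of the patch itself, using the newly added ``bert-base-cased`` / i7-1185G7 CPU-only figures as an example; the column interpretation is an assumption, and small deviations are expected because the published FPS value is rounded.

.. code-block:: py

   # Sketch (not part of the patch): check the derived efficiency columns in
   # OV-benchmark-data.csv against the raw throughput, price, and TDP values.
   # Assumes FPS/$ = FPS / price_usd and FPS/TDP = FPS / tdp_watts.
   import math

   # values from the added "bert-base-cased ... i7-1185G7 CPU-only" row
   fps = 51.27          # measured throughput (rounded in the CSV)
   price_usd = 426      # listed platform price
   tdp_watts = 28       # listed thermal design power

   fps_per_dollar = fps / price_usd   # CSV reports ~0.120345681
   fps_per_watt = fps / tdp_watts     # CSV reports ~1.830973571

   assert math.isclose(fps_per_dollar, 0.120345681, rel_tol=1e-3)
   assert math.isclose(fps_per_watt, 1.830973571, rel_tol=1e-3)
   print(f"{fps_per_dollar:.4f} FPS/$  {fps_per_watt:.4f} FPS/TDP")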
diff --git a/docs/benchmarks/performance_benchmarks.md b/docs/benchmarks/performance_benchmarks.md index 9c95329e184726..a322a2a66af593 100644 --- a/docs/benchmarks/performance_benchmarks.md +++ b/docs/benchmarks/performance_benchmarks.md @@ -100,14 +100,14 @@ For a listing of all platforms and configurations used for testing, refer to the .. grid-item:: - .. button-link:: _static/benchmarks_files/OV-2023.0-Platform_list.pdf + .. button-link:: _static/benchmarks_files/OV-2023.1-Platform_list.pdf :color: primary :outline: :expand: :material-regular:`download;1.5em` Click for Hardware Platforms [PDF] - .. button-link:: _static/benchmarks_files/OV-2023.0-system-info-detailed.xlsx + .. button-link:: _static/benchmarks_files/OV-2023.1-system-info-detailed.xlsx :color: primary :outline: :expand: @@ -166,7 +166,7 @@ or `create an account `__ - - OpenAI GPT-2 + * - `BLOOMZ-560M `__ + - BigScience Bloomz & MT0 + - Transformer based llm + - 2048 + * - `GPT-J-6B `__ + - Eleuther AI - Transformer - - 1024 + - 2048 + * - `Llama-2-7b-chat `__ + - Meta AI + - Auto regressive language + - 4096 + * - `Stable-Diffusion-V2-1 `__ + - Hugginface + - Latent Diffusion Model + - 77 * - `bert-base-cased `__ - BERT - question / answer @@ -62,22 +77,6 @@ - DeepLab v3 Tf - semantic segmentation - 513x513 - * - `efficientdet-d0 `__ - - Efficientdet - - classification - - 512x512 - * - `faster_rcnn_resnet50_coco `__ - - Faster RCNN Tf - - object detection - - 600x1024 - * - `inception-v4 `__ - - Inception v4 (aka GoogleNet-V4) - - classification - - 299x299 - * - `mobilenet-ssd `__ - - SSD (MobileNet)_COCO-2017_Caffe - - object detection - - 300x300 * - `mobilenet-v2 `__ - Mobilenet V2 PyTorch - classification @@ -86,6 +85,10 @@ - ResNet-50_v1_ILSVRC-2012 - classification - 224x224 + * - `ssd-mobilenet-v1-coco `__ + - ssd-mobilenet-V1-coco onnx model + - object detection + - 300x300 * - `ssd-resnet34-1200-onnx `__ - ssd-resnet34 onnx model - object detection diff --git a/docs/benchmarks/performance_int8_vs_fp32.md b/docs/benchmarks/performance_int8_vs_fp32.md index b28cb4e0d9796d..6652be32dc21fa 100644 --- a/docs/benchmarks/performance_int8_vs_fp32.md +++ b/docs/benchmarks/performance_int8_vs_fp32.md @@ -1,17 +1,12 @@ -# OpenVINO Accuracy {#openvino_docs_performance_int8_vs_fp32} +# Model Accuracy {#openvino_docs_performance_int8_vs_fp32} @sphinxdirective -.. meta:: - :description: Learn about the differences in absolute accuracy drop for INT8, - FP32 and FP16 representations of models inferred with OpenVINO - on three different platforms. - The following two tables present the absolute accuracy drop calculated as the accuracy difference between OV-accuracy and the original frame work accuracy for FP32, and the same for INT8 and FP16 representations of a model on three platform architectures. Please also refer to notes below table -for more information. +for more information. * A - Intel® Core™ i9-9000K (AVX2), INT8 and FP32 * B - Intel® Xeon® 6338, (VNNI), INT8 and FP32 @@ -27,98 +22,98 @@ for more information. 
- A, INT8 - B, INT8 - C, INT8 - * - GPT-2 - - WikiText_2_raw_gpt2 - - perplexity - - n/a - - n/a - - n/a * - bert-base-cased - SST-2_bert_cased_padded - accuracy - - 1.15% - - 1.51% - - -0.85% + - -3.00% + - -2.00% + - 2.94% * - bert-large-uncased-whole-word-masking-squad-0001 - SQUAD_v1_1_bert_msl384_mql64_ds128_lowercase - F1 - - 0.05% - - 0.11% - - 0.10% + - -0.04% + - 0.03% + - 0.06% * - deeplabv3 - VOC2012_segm - mean_iou - - -0.46% - - -0.23% - - -0.18% - * - efficientdet-d0 - - COCO2017_detection_91cl - - coco_precision - - -0.87% - - -0.56% - - n/a - * - faster_rcnn_resnet50_coco - - COCO2017_detection_91cl_bkgr - - coco_precision - - -0.24% - - -0.24% - - 0.00% - * - inception-v4 - - ImageNet2012_bkgr - - accuracy @ top1 - - -0.06% - - -0.08% - - -0.04% - * - mobilenet-ssd - - VOC2007_detection - - map - - -0.49% - - -0.50% - - -0.47% + - 0.00% + - 0.23% + - -0.13% * - mobilenet-v2 - ImageNet2012 - accuracy @ top1 - - -0.70% - - -1.11% - - -1.05% + - + - 0.97% + - -0.97% * - resnet-50 - ImageNet2012 - accuracy @ top1 - - -0.13% - - -0.11% - - -0.14% + - 0.20% + - 0.12% + - -0.19% + * - ssd-mobilenet-v1-coco + - COCO2017_detection_80cl_bkgr + - coco-precision + - 2.97% + - 0.29% + - -0.31% * - ssd-resnet34-1200 - COCO2017_detection_80cl_bkgr - map - - -0.02% - - -0.03% - - 0.04% + - 0.06% + - 0.06% + - -0.06% * - unet-camvid-onnx-0001 - CamVid_12cl - mean_iou @ mean - - n/a - - 6.40% - - -0.30% + - 6.32% + - -6.40% + - -0.63% * - yolo_v3 - COCO2017_detection_80cl - map - - -0.14% - - -0.01% - - -0.19% + - -0.06% + - -0.21% + - -0.71% * - yolo_v3_tiny - COCO2017_detection_80cl - map - - -0.11% - - -0.13% - - -0.17% + - 0.73% + - 0.21% + - -0.78% * - yolo_v8n - COCO2017_detection_80cl - map - - n/a - - n/a - - n/a + - -0.26% + - -0.22% + - 0.12% + * - bloomz-560m + - ROOTS corpus + - ppl + - + - + - + * - GPT-J-6B + - Pile dataset + - ppl + - + - 4.11 + - 4.11 + * - Llama-2-7b-chat + - Wiki, StackExch, Crawl + - ppl + - + - 3.27 + - 3.27 + * - Stable-Diffusion-V2-1 + - LIAON-5B + - ppl + - + - + - -.. list-table:: Model Accuracy for FP32 and FP16 (Flex-170 only) +.. list-table:: Model Accuracy for FP32 and FP16 (FP16: Flex-170 only) :header-rows: 1 * - OpenVINO™ Model name @@ -127,100 +122,98 @@ for more information. 
- A, FP32 - B, FP32 - C, FP16 - * - GPT-2 - - WikiText_2_raw_gpt2 - - perplexity - - -9.12% - - -9.12% - - -9.12% * - bert-base-cased - SST-2_bert_cased_padded - accuracy - 0.00% - 0.00% - - 0.01% + - 0.00% * - bert-large-uncased-whole-word-masking-squad-0001 - SQUAD_v1_1_bert_msl384_mql64_ds128_lowercase - F1 + - -0.19% - 0.04% - 0.04% - - 0.05% * - deeplabv3 - VOC2012_segm - mean_iou + - 0.49% - 0.00% - 0.00% - - 0.01% - * - efficientdet-d0 - - COCO2017_detection_91cl - - coco_precision - - -0.01% - - 0.02% - - 0.02% - * - faster_rcnn_resnet50_coco - - COCO2017_detection_91cl_bkgr - - coco_precision - - 0.00% - - -0.01% - - 0.03% - * - inception-v4 - - ImageNet2012_bkgr + * - mobilenet-v2 + - ImageNet2012 - accuracy @ top1 - 0.00% - 0.00% - - 0.01% - * - mobilenet-ssd - - VOC2007_detection - - map - - 0.00% - 0.00% - - 0.02% - * - mobilenet-v2 - - ImageNet2012 - - accuracy @ top1 - - -0.08% - - -0.08% - - 0.06% * - resnet-50 - ImageNet2012 - accuracy @ top1 - 0.00% + - -0.02% - 0.00% - - 0.00% + * - ssd-mobilenet-v1-coco + - COCO2017_detection_80cl_bkgr + - coco-precision + - 0.01% + - 0.01% + - 0.01% * - ssd-resnet34-1200 - COCO2017_detection_80cl_bkgr - map - - 0.00% - - 0.00% - - 0.02% + - 0.01% + - 0.06% + - -0.06% * - unet-camvid-onnx-0001 - CamVid_12cl - mean_iou @ mean - - -0.02% - - -0.02% - - 0.05% + - 0.02% + - -6.45% + - 6.45% * - yolo_v3 - COCO2017_detection_80cl - map - - 0.02% - - 0.02% - - 0.03% + - 0.00% + - 0.01% + - 0.01% * - yolo_v3_tiny - COCO2017_detection_80cl - map - - -0.04% - - -0.04% - - 0.03% + - 0.00% + - -0.02% + - 0.02% * - yolo_v8n - COCO2017_detection_80cl - map - 0.00% - 0.00% - - 0.03% + - 0.00% + * - bloomz-560m + - ROOTS corpus + - ppl + - + - 22.89 + - 22.89 + * - GPT-J-6B + - Pile dataset + - ppl + - + - 4.10 + - 4.10 + * - Llama-2-7b-chat + - Wiki, StackExch, Crawl + - ppl + - + - 2.91 + - 2.91 + * - Stable-Diffusion-V2-1 + - LIAON-5B + - ppl + - + - + - -.. note:: - - For all accuracy metrics except perplexity a "-" (minus sign) indicates an accuracy drop. - For perplexity a "-" indicates improved accuracy. +Notes: For all accuracy metrics except perplexity a "-", (minus sign), indicates an accuracy drop. +For perplexity the values do not indicate a deviation from a reference but are the actual measured accuracy for the model. @endsphinxdirective \ No newline at end of file From 2d97a5d59c46ecb94e24bc47ca56e341d167c5d2 Mon Sep 17 00:00:00 2001 From: Maciej Smyk Date: Fri, 15 Sep 2023 15:58:27 +0200 Subject: [PATCH 14/14] [DOCS] Notebooks iframe update for 23.1 --- ...-encodec-audio-compression-with-output.rst | 4 +- ...diffusion-v2-infinite-zoom-with-output.rst | 4 +- ...diffusion-v2-text-to-image-with-output.rst | 4 +- .../237-segment-anything-with-output.rst | 4 +- ...ly-2-instruction-following-with-output.rst | 4 +- ...41-riffusion-text-to-music-with-output.rst | 4 +- ...42-freevc-voice-conversion-with-output.rst | 4 +- ...4-named-entity-recognition-with-output.rst | 4 +- .../248-stable-diffusion-xl-with-output.rst | 8 +- ...1-tiny-sd-image-generation-with-output.rst | 158 +++++++++--------- 10 files changed, 99 insertions(+), 99 deletions(-) diff --git a/docs/notebooks/234-encodec-audio-compression-with-output.rst b/docs/notebooks/234-encodec-audio-compression-with-output.rst index 7e98b009f940ba..cd05bd7302413b 100644 --- a/docs/notebooks/234-encodec-audio-compression-with-output.rst +++ b/docs/notebooks/234-encodec-audio-compression-with-output.rst @@ -587,7 +587,7 @@ like with the original PyTorch models. -.. raw:: html +.. .. raw:: html -
+..
diff --git a/docs/notebooks/236-stable-diffusion-v2-infinite-zoom-with-output.rst b/docs/notebooks/236-stable-diffusion-v2-infinite-zoom-with-output.rst index 75656ed47aa094..7e2ec9efacc6bf 100644 --- a/docs/notebooks/236-stable-diffusion-v2-infinite-zoom-with-output.rst +++ b/docs/notebooks/236-stable-diffusion-v2-infinite-zoom-with-output.rst @@ -1388,7 +1388,7 @@ Run Infinite Zoom video generation `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html -
+..
diff --git a/docs/notebooks/236-stable-diffusion-v2-text-to-image-with-output.rst b/docs/notebooks/236-stable-diffusion-v2-text-to-image-with-output.rst index 33a4df82bfdbfc..885e8893389a01 100644 --- a/docs/notebooks/236-stable-diffusion-v2-text-to-image-with-output.rst +++ b/docs/notebooks/236-stable-diffusion-v2-text-to-image-with-output.rst @@ -1024,7 +1024,7 @@ seed for latent state initialization and number of steps. -.. raw:: html +.. .. raw:: html -
+..
diff --git a/docs/notebooks/237-segment-anything-with-output.rst b/docs/notebooks/237-segment-anything-with-output.rst index 2db34401ec919d..ecf2d8c0373bcb 100644 --- a/docs/notebooks/237-segment-anything-with-output.rst +++ b/docs/notebooks/237-segment-anything-with-output.rst @@ -937,9 +937,9 @@ point. -.. raw:: html +.. .. raw:: html -
+..
Run OpenVINO model in automatic mask generation mode `⇑ <#top>`__ diff --git a/docs/notebooks/240-dolly-2-instruction-following-with-output.rst b/docs/notebooks/240-dolly-2-instruction-following-with-output.rst index 9b450eb9902ce5..7dda0634e62c7a 100644 --- a/docs/notebooks/240-dolly-2-instruction-following-with-output.rst +++ b/docs/notebooks/240-dolly-2-instruction-following-with-output.rst @@ -693,7 +693,7 @@ generation parameters: -.. raw:: html +.. .. raw:: html -
+..
diff --git a/docs/notebooks/241-riffusion-text-to-music-with-output.rst b/docs/notebooks/241-riffusion-text-to-music-with-output.rst index d8eb9cb1462095..121ec4aa61f53b 100644 --- a/docs/notebooks/241-riffusion-text-to-music-with-output.rst +++ b/docs/notebooks/241-riffusion-text-to-music-with-output.rst @@ -751,7 +751,7 @@ Interactive demo `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html -
+..
diff --git a/docs/notebooks/242-freevc-voice-conversion-with-output.rst b/docs/notebooks/242-freevc-voice-conversion-with-output.rst index 0a372bf31c85b7..3e8ac5bdaaff0d 100644 --- a/docs/notebooks/242-freevc-voice-conversion-with-output.rst +++ b/docs/notebooks/242-freevc-voice-conversion-with-output.rst @@ -800,9 +800,9 @@ inference. Use rate corresponding to the value of -.. raw:: html +.. .. raw:: html -
+..
.. code:: ipython3 diff --git a/docs/notebooks/244-named-entity-recognition-with-output.rst b/docs/notebooks/244-named-entity-recognition-with-output.rst index dd6af58fd7bc13..9d49188f04a6f7 100644 --- a/docs/notebooks/244-named-entity-recognition-with-output.rst +++ b/docs/notebooks/244-named-entity-recognition-with-output.rst @@ -439,9 +439,9 @@ text. -.. raw:: html +.. .. raw:: html -
+..
.. parsed-literal:: diff --git a/docs/notebooks/248-stable-diffusion-xl-with-output.rst b/docs/notebooks/248-stable-diffusion-xl-with-output.rst index 594fb4f1a7b6e7..0c0451bd8aaee4 100644 --- a/docs/notebooks/248-stable-diffusion-xl-with-output.rst +++ b/docs/notebooks/248-stable-diffusion-xl-with-output.rst @@ -292,9 +292,9 @@ Text2image Generation Interactive Demo\ `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html -
+..
.. code:: ipython3 @@ -445,9 +445,9 @@ Image2Image Generation Interactive Demo\ `⇑ <#top>`__ -.. raw:: html +.. .. raw:: html -
+..
.. code:: ipython3 diff --git a/docs/notebooks/251-tiny-sd-image-generation-with-output.rst b/docs/notebooks/251-tiny-sd-image-generation-with-output.rst index b2afd5f5c58864..466cb5801d5a35 100644 --- a/docs/notebooks/251-tiny-sd-image-generation-with-output.rst +++ b/docs/notebooks/251-tiny-sd-image-generation-with-output.rst @@ -1056,83 +1056,83 @@ found in this .. image:: 251-tiny-sd-image-generation-with-output_files/251-tiny-sd-image-generation_39_1.png -Interactive Demo `⇑ <#top>`__ -############################################################################################################################### - -.. code:: ipython3 - - import gradio as gr - - sample_img_url = "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/tower.jpg" - - img = load_image(sample_img_url).save("tower.jpg") - - def generate_from_text(text, negative_text, seed, num_steps, _=gr.Progress(track_tqdm=True)): - result = ov_pipe(text, negative_prompt=negative_text, num_inference_steps=num_steps, seed=seed) - return result["sample"][0] - - - def generate_from_image(img, text, negative_text, seed, num_steps, strength, _=gr.Progress(track_tqdm=True)): - result = ov_pipe(text, img, negative_prompt=negative_text, num_inference_steps=num_steps, seed=seed, strength=strength) - return result["sample"][0] - - - with gr.Blocks() as demo: - with gr.Tab("Text-to-Image generation"): - with gr.Row(): - with gr.Column(): - text_input = gr.Textbox(lines=3, label="Positive prompt") - negative_text_input = gr.Textbox(lines=3, label="Negative prompt") - seed_input = gr.Slider(0, 10000000, value=751, label="Seed") - steps_input = gr.Slider(1, 50, value=20, step=1, label="Steps") - out = gr.Image(label="Result", type="pil") - sample_text = "futuristic synthwave city, retro sunset, crystals, spires, volumetric lighting, studio Ghibli style, rendered in unreal engine with clean details" - sample_text2 = "RAW studio photo of tiny cute happy cat in a yellow raincoat in the woods, rain, a character portrait, soft lighting, high resolution, photo realistic, extremely detailed" - negative_sample_text = "" - negative_sample_text2 = "bad anatomy, blurry, noisy, jpeg artifacts, low quality, geometry, mutation, disgusting. 
ugly" - btn = gr.Button() - btn.click(generate_from_text, [text_input, negative_text_input, seed_input, steps_input], out) - gr.Examples([[sample_text, negative_sample_text, 42, 20], [sample_text2, negative_sample_text2, 1561, 25]], [text_input, negative_text_input, seed_input, steps_input]) - with gr.Tab("Image-to-Image generation"): - with gr.Row(): - with gr.Column(): - i2i_input = gr.Image(label="Image", type="pil") - i2i_text_input = gr.Textbox(lines=3, label="Text") - i2i_negative_text_input = gr.Textbox(lines=3, label="Negative prompt") - i2i_seed_input = gr.Slider(0, 10000000, value=42, label="Seed") - i2i_steps_input = gr.Slider(1, 50, value=10, step=1, label="Steps") - strength_input = gr.Slider(0, 1, value=0.5, label="Strength") - i2i_out = gr.Image(label="Result", type="pil") - i2i_btn = gr.Button() - sample_i2i_text = "amazing watercolor painting" - i2i_btn.click( - generate_from_image, - [i2i_input, i2i_text_input, i2i_negative_text_input, i2i_seed_input, i2i_steps_input, strength_input], - i2i_out, - ) - gr.Examples( - [["tower.jpg", sample_i2i_text, "", 6400023, 40, 0.3]], - [i2i_input, i2i_text_input, i2i_negative_text_input, i2i_seed_input, i2i_steps_input, strength_input], - ) - - try: - demo.queue().launch(debug=True) - except Exception: - demo.queue().launch(share=True, debug=True) - # if you are launching remotely, specify server_name and server_port - # demo.launch(server_name='your server name', server_port='server port in int') - # Read more in the docs: https://gradio.app/docs/ - - -.. parsed-literal:: - - Running on local URL: http://127.0.0.1:7860 - - To create a public link, set `share=True` in `launch()`. - - - -.. raw:: html - -
+.. Interactive Demo `⇑ <#top>`__ +.. ############################################################################################################################### + +.. .. code:: ipython3 + +.. import gradio as gr + +.. sample_img_url = "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/tower.jpg" + +.. img = load_image(sample_img_url).save("tower.jpg") + +.. def generate_from_text(text, negative_text, seed, num_steps, _=gr.Progress(track_tqdm=True)): +.. result = ov_pipe(text, negative_prompt=negative_text, num_inference_steps=num_steps, seed=seed) +.. return result["sample"][0] + + +.. def generate_from_image(img, text, negative_text, seed, num_steps, strength, _=gr.Progress(track_tqdm=True)): +.. result = ov_pipe(text, img, negative_prompt=negative_text, num_inference_steps=num_steps, seed=seed, strength=strength) +.. return result["sample"][0] + + +.. with gr.Blocks() as demo: +.. with gr.Tab("Text-to-Image generation"): +.. with gr.Row(): +.. with gr.Column(): +.. text_input = gr.Textbox(lines=3, label="Positive prompt") +.. negative_text_input = gr.Textbox(lines=3, label="Negative prompt") +.. seed_input = gr.Slider(0, 10000000, value=751, label="Seed") +.. steps_input = gr.Slider(1, 50, value=20, step=1, label="Steps") +.. out = gr.Image(label="Result", type="pil") +.. sample_text = "futuristic synthwave city, retro sunset, crystals, spires, volumetric lighting, studio Ghibli style, rendered in unreal engine with clean details" +.. sample_text2 = "RAW studio photo of tiny cute happy cat in a yellow raincoat in the woods, rain, a character portrait, soft lighting, high resolution, photo realistic, extremely detailed" +.. negative_sample_text = "" +.. negative_sample_text2 = "bad anatomy, blurry, noisy, jpeg artifacts, low quality, geometry, mutation, disgusting. ugly" +.. btn = gr.Button() +.. btn.click(generate_from_text, [text_input, negative_text_input, seed_input, steps_input], out) +.. gr.Examples([[sample_text, negative_sample_text, 42, 20], [sample_text2, negative_sample_text2, 1561, 25]], [text_input, negative_text_input, seed_input, steps_input]) +.. with gr.Tab("Image-to-Image generation"): +.. with gr.Row(): +.. with gr.Column(): +.. i2i_input = gr.Image(label="Image", type="pil") +.. i2i_text_input = gr.Textbox(lines=3, label="Text") +.. i2i_negative_text_input = gr.Textbox(lines=3, label="Negative prompt") +.. i2i_seed_input = gr.Slider(0, 10000000, value=42, label="Seed") +.. i2i_steps_input = gr.Slider(1, 50, value=10, step=1, label="Steps") +.. strength_input = gr.Slider(0, 1, value=0.5, label="Strength") +.. i2i_out = gr.Image(label="Result", type="pil") +.. i2i_btn = gr.Button() +.. sample_i2i_text = "amazing watercolor painting" +.. i2i_btn.click( +.. generate_from_image, +.. [i2i_input, i2i_text_input, i2i_negative_text_input, i2i_seed_input, i2i_steps_input, strength_input], +.. i2i_out, +.. ) +.. gr.Examples( +.. [["tower.jpg", sample_i2i_text, "", 6400023, 40, 0.3]], +.. [i2i_input, i2i_text_input, i2i_negative_text_input, i2i_seed_input, i2i_steps_input, strength_input], +.. ) + +.. try: +.. demo.queue().launch(debug=True) +.. except Exception: +.. demo.queue().launch(share=True, debug=True) +.. # if you are launching remotely, specify server_name and server_port +.. # demo.launch(server_name='your server name', server_port='server port in int') +.. # Read more in the docs: https://gradio.app/docs/ + + +.. .. parsed-literal:: + +.. Running on local URL: http://127.0.0.1:7860 + +.. 
To create a public link, set `share=True` in `launch()`. + + + +.. .. raw:: html + +..