Fixing formatting and reference issues
sgolebiewski-intel committed Mar 12, 2024
1 parent 21bf6ab commit 2b3ed45
Showing 44 changed files with 521 additions and 454 deletions.
@@ -5,15 +5,15 @@ Converting a TensorFlow GNMT Model


.. meta::
:description: Learn how to convert a GNMT model
from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

The code described here has been **deprecated!** Avoid using it, as it is a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`.

This tutorial explains how to convert the Google Neural Machine Translation (GNMT) model to the Intermediate Representation (IR).

There are several public implementations of the TensorFlow GNMT model available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository <https://github.com/tensorflow/nmt>`__ to the IR.
@@ -26,7 +26,7 @@ Before converting the model, you need to create a patch file for the repository.
1. Go to a writable directory and create a ``GNMT_inference.patch`` file.
2. Copy the following diff code to the file:

-.. code-block:: cpp
+.. code-block:: py
diff --git a/nmt/inference.py b/nmt/inference.py
index 2cbef07..e185490 100644
@@ -457,7 +457,7 @@ For other examples of transformations with points, refer to the
Generic Front Phase Transformations Enabled with Transformations Configuration File
###################################################################################

-This type of transformation works similarly to the :ref:`Generic Front Phase Transformations <generic_front_phase_transformations)`
+This type of transformation works similarly to the :ref:`Generic Front Phase Transformations <generic_front_phase_transformations>`
but requires a JSON configuration file to enable it, similarly to
:ref:`Node Name Pattern Front Phase Transformations <node_name_pattern_front_phase_transformations>` and
:ref:`Front Phase Transformations Using Start and End Points <start_end_points_front_phase_transformations>`.
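A minimal sketch of such a JSON configuration file, assuming the legacy Model Optimizer transformations-config format (the ``id`` value and ``custom_attributes`` contents here are hypothetical placeholders, not taken from this document):

```json
[
    {
        "custom_attributes": {
            "my_attribute": true
        },
        "id": "MyGenericTransformation",
        "match_kind": "general"
    }
]
```

In the legacy Model Optimizer workflow, a file like this was passed to the conversion tool with the ``--transformations_config`` option.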
@@ -734,9 +734,9 @@ The Model Hosting components install the OpenVINO™ Security Add-on Runtime Doc
How to Use the OpenVINO™ Security Add-on
########################################
-This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable :ref:`set up steps <setup-host>` and :ref:`installation steps <ovsa-install>` before beginning this section.
+This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable :ref:`set up steps <setup_host>` and :ref:`installation steps <install_ovsa>` before beginning this section.
-This document uses the :ref:`face-detection-retail-0004 <../../omz_models_model_face_detection_retail_0044>` model as an example.
+This document uses the :doc:`face-detection-retail-0004 <../../omz_models_model_face_detection_retail_0044>` model as an example.
The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User.
@@ -350,7 +350,8 @@ Example Kernel
Debugging Tips
##############

-**Using ``printf`` in the OpenCL™ Kernels**.
+**Using** ``printf`` **in the OpenCL™ Kernels**.

To debug specific values, use ``printf`` in your kernels.
However, be careful not to print excessively, as this
could generate too much data. The ``printf`` output is typical, so
@@ -9,7 +9,7 @@ Custom OpenVINO Operations
custom operations to support models with operations
not supported by OpenVINO.

-OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application you need to build a separate shared library implemented in C++ first and load it in Python using ``add_extension`` API. Please refer to :ref:`Create library with extensions <create_library_with_extensions>` for more details on library creation and usage. The remining part of this document describes how to implement an operation class.
+OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application you need to build a separate shared library implemented in C++ first and load it in Python using ``add_extension`` API. Please refer to :ref:`Create library with extensions <create_a_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.
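As an illustration of that loading flow, here is a minimal sketch; ``load_custom_extension`` and ``libcustom_ops.so`` are hypothetical names introduced for this example, and the only assumption about ``core`` is that it exposes ``add_extension(path)`` the way ``openvino.Core`` does:

```python
def load_custom_extension(core, library_path):
    """Register a C++ extension shared library with an OpenVINO Core-like object.

    `core` is expected to expose `add_extension(path)`, as `openvino.Core` does.
    `library_path` points to the shared library built from your C++ operation
    sources (for example, a hypothetical "libcustom_ops.so").
    """
    # After this call, the custom operations become visible to model loading.
    core.add_extension(library_path)
    return core
```

With a real installation, this would be called as ``load_custom_extension(openvino.Core(), "/path/to/libcustom_ops.so")`` before reading the model.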

Operation Class
###############

This file was deleted.

@@ -0,0 +1,157 @@
.. {#openvino_docs_ops_internal_AUGRUCell}

AUGRUCell
=========

**Versioned name**: *AUGRUCell*

**Category**: *Sequence processing*

**Short description**: *AUGRUCell* represents a single AUGRU Cell (GRU with attentional
update gate).

**Detailed description**: The main difference between *AUGRUCell* and
:doc:`GRUCell <../sequence/gru-cell-3>` is the additional attention score
input ``A``, which is a multiplier for the update gate.
The AUGRU formula is based on the `paper arXiv:1809.03672 <https://arxiv.org/abs/1809.03672>`__.

.. code-block:: py

   AUGRU formula:
     * - matrix multiplication
     (.) - Hadamard product (element-wise)
     f, g - activation functions
     z - update gate, r - reset gate, h - hidden gate
     a - attention score

     rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr)
     zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz)
     ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # 'linear_before_reset' is False
     zt' = (1 - at) (.) zt   # multiplication by attention score
     Ht = (1 - zt') (.) ht + zt' (.) Ht-1

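The formula above can be sketched in NumPy as follows. This is a minimal illustration only: it assumes ``linear_before_reset`` is ``False``, *sigmoid*/*tanh* activations, and a combined bias tensor ``B`` in zrh gate order; the function names are local to this sketch, not part of OpenVINO:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def augru_cell(X, H, W, R, B, A):
    """One AUGRU step.

    X: [batch, input_size], H: [batch, hidden_size] (previous hidden state),
    W: [3*hidden_size, input_size], R: [3*hidden_size, hidden_size],
    B: [3*hidden_size] (combined weight and recurrence biases),
    A: [batch, 1] attention score. Gate order in W, R, B is zrh.
    """
    Wz, Wr, Wh = np.split(W, 3, axis=0)
    Rz, Rr, Rh = np.split(R, 3, axis=0)
    Bz, Br, Bh = np.split(B, 3)
    r = sigmoid(X @ Wr.T + H @ Rr.T + Br)          # reset gate
    z = sigmoid(X @ Wz.T + H @ Rz.T + Bz)          # update gate
    h = np.tanh(X @ Wh.T + (r * H) @ Rh.T + Bh)    # hidden gate
    z_att = (1.0 - A) * z                          # attention-scaled update gate
    return (1.0 - z_att) * h + z_att * H           # new hidden state Ht
```

Note how an attention score of ``1`` zeroes the scaled update gate, so the new state comes entirely from the hidden gate, while a score of ``0`` reduces this to a plain GRU cell step.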
**Attributes**

* *hidden_size*

  * **Description**: *hidden_size* specifies hidden state size.
  * **Range of values**: a positive integer
  * **Type**: ``int``
  * **Required**: *yes*

* *activations*

  * **Description**: activation functions for gates
  * **Range of values**: *sigmoid*, *tanh*
  * **Type**: a list of strings
  * **Default value**: *sigmoid* for f, *tanh* for g
  * **Required**: *no*

* *activations_alpha, activations_beta*

  * **Description**: *activations_alpha, activations_beta* attributes of functions;
    applicability and meaning of these attributes depends on chosen activation functions
  * **Range of values**: []
  * **Type**: ``float[]``
  * **Default value**: []
  * **Required**: *no*

* *clip*

  * **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping.
    Clipping is performed before activations.
  * **Range of values**: ``0.``
  * **Type**: ``float``
  * **Default value**: ``0.`` (clipping is not applied)
  * **Required**: *no*

* *linear_before_reset*

  * **Description**: *linear_before_reset* flag denotes whether the output of the hidden
    gate is multiplied by the reset gate before or after the linear transformation.
  * **Range of values**: False
  * **Type**: ``boolean``
  * **Default value**: False
  * **Required**: *no*

**Inputs**

* **1**: ``X`` - 2D tensor of type *T* and shape ``[batch_size, input_size]``, input
  data. **Required.**
* **2**: ``H_t`` - 2D tensor of type *T* and shape ``[batch_size, hidden_size]``.
  Input with initial hidden state data. **Required.**
* **3**: ``W`` - 2D tensor of type *T* and shape ``[3 * hidden_size, input_size]``.
  The weights for matrix multiplication, gate order: zrh. **Required.**
* **4**: ``R`` - 2D tensor of type *T* and shape ``[3 * hidden_size, hidden_size]``.
  The recurrence weights for matrix multiplication, gate order: zrh. **Required.**
* **5**: ``B`` - 2D tensor of type *T*. The biases. If *linear_before_reset* is set
  to ``False``, then the shape is ``[3 * hidden_size]``, gate order: zrh. Otherwise
  the shape is ``[4 * hidden_size]`` - the sum of biases for z and r gates (weights and
  recurrence weights), the biases for h gate are placed separately. **Required.**
* **6**: ``A`` - 2D tensor of type *T* and shape ``[batch_size, 1]``, the attention
  score. **Required.**

**Outputs**

* **1**: ``Ho`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, the last output
  value of hidden state.

**Types**

* *T*: any supported floating-point type.

**Example**

.. code-block:: xml
   :force:

   <layer ... type="AUGRUCell" ...>
       <data hidden_size="128"/>
       <input>
           <port id="0"> <!-- `X` input data -->
               <dim>1</dim>
               <dim>16</dim>
           </port>
           <port id="1"> <!-- `H_t` input -->
               <dim>1</dim>
               <dim>128</dim>
           </port>
           <port id="3"> <!-- `W` weights input -->
               <dim>384</dim>
               <dim>16</dim>
           </port>
           <port id="4"> <!-- `R` recurrence weights input -->
               <dim>384</dim>
               <dim>128</dim>
           </port>
           <port id="5"> <!-- `B` bias input -->
               <dim>384</dim>
           </port>
           <port id="6"> <!-- `A` attention score input -->
               <dim>1</dim>
               <dim>1</dim>
           </port>
       </input>
       <output>
           <port id="7"> <!-- `Y` output -->
               <dim>1</dim>
               <dim>4</dim>
               <dim>128</dim>
           </port>
           <port id="8"> <!-- `Ho` output -->
               <dim>1</dim>
               <dim>128</dim>
           </port>
       </output>
   </layer>
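As an illustration of the shape relationships in the example above, the port dimensions can be checked programmatically. This is a minimal sketch using only the standard library; the XML string is abridged from the example (the ``...`` attribute placeholders are dropped so it parses):

```python
import xml.etree.ElementTree as ET

# Abridged from the IR example above.
LAYER_XML = """
<layer type="AUGRUCell">
    <data hidden_size="128"/>
    <input>
        <port id="0"><dim>1</dim><dim>16</dim></port>
        <port id="1"><dim>1</dim><dim>128</dim></port>
        <port id="3"><dim>384</dim><dim>16</dim></port>
        <port id="4"><dim>384</dim><dim>128</dim></port>
        <port id="5"><dim>384</dim></port>
        <port id="6"><dim>1</dim><dim>1</dim></port>
    </input>
</layer>
"""

layer = ET.fromstring(LAYER_XML)
hidden = int(layer.find("data").get("hidden_size"))
dims = {p.get("id"): [int(d.text) for d in p.findall("dim")]
        for p in layer.find("input").findall("port")}

# W, R and B all stack the z, r and h gates, hence the factor of 3.
assert dims["3"][0] == 3 * hidden        # W: [3 * hidden_size, input_size]
assert dims["4"] == [3 * hidden, hidden]  # R: [3 * hidden_size, hidden_size]
assert dims["5"] == [3 * hidden]          # B when linear_before_reset is False
```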
