[Spec] Add alignment mode description and Mersenne-Twister description to RandomUniform-8 (#26142)

### Details:
 - Added new alignment mode description
 - Added explanation of Mersenne-Twister algorithm with examples

### Tickets:
 - None
PiotrKrzem authored Dec 20, 2024
1 parent d8e276d commit 782f002
Showing 1 changed file with 167 additions and 9 deletions.
@@ -15,12 +15,20 @@ RandomUniform
**Detailed description**:

*RandomUniform* operation generates random numbers from a uniform distribution in the range ``[minval, maxval)``.
The generation algorithm is based on an underlying random integer generator that uses either the Philox or the Mersenne-Twister algorithm.
Philox is a counter-based pseudo-random generator, while Mersenne-Twister derives its values from an internal state; both produce uint32 values.
A single invocation of the generator returns four random values, depending on the given initial values: for Philox these are the *key* and *counter*, while for Mersenne-Twister it is a single *state*. *Key* and *counter* are initialized
with the *global_seed* and *op_seed* attributes respectively, while the *state* is initialized using *global_seed* only.

Algorithm selection makes it possible to align the output of OpenVINO's *RandomUniform* operation with the corresponding operations available in TensorFlow and PyTorch.
The *alignment* attribute selects the framework the output should be aligned with. TensorFlow uses the Philox algorithm and PyTorch uses the Mersenne-Twister algorithm.
For TensorFlow, this operation is equivalent to ``tf.raw_ops.RandomUniform(shape, dtype, global_seed, op_seed)`` when ``dtype`` represents a real number, and to ``tf.raw_ops.RandomUniformInt(shape, minval, maxval, dtype, global_seed, op_seed)`` for integer types. Both of these functions are internally executed through ``tf.random.uniform(shape, minval, maxval, dtype, global_seed, op_seed)``, where for a floating-point ``dtype`` the output goes through an additional conversion to reside within the given range.
For PyTorch, this operation is equivalent to ``torch.Tensor(shape, dtype).uniform_(minval, maxval)`` when ``dtype`` represents a real number, and to ``torch.Tensor(shape, dtype).random_(minval, maxval)`` for integer types. Both of these functions are internally executed through ``torch.rand(shape, dtype)`` with the default generator and layout. The seed for these functions is provided by calling ``torch.manual_seed(global_seed)``; the *op_seed* value is ignored.
By default, the output is aligned with TensorFlow (the Philox algorithm). This behavior is backward compatible.
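A minimal sketch of the framework calls referenced above (the shapes, ranges, and seed values are illustrative only):

.. code-block:: python

   import tensorflow as tf
   import torch

   # TensorFlow alignment (Philox): floating-point and integer variants.
   tf_float = tf.random.uniform([2, 3], minval=0.0, maxval=1.0, dtype=tf.float32, seed=150)
   tf_int = tf.random.uniform([2, 3], minval=50, maxval=100, dtype=tf.int32, seed=150)

   # PyTorch alignment (Mersenne-Twister): seeded only through torch.manual_seed;
   # there is no op_seed equivalent, so op_seed is ignored.
   torch.manual_seed(150)
   pt_float = torch.empty(2, 3, dtype=torch.float32).uniform_(0.0, 1.0)
   pt_int = torch.empty(2, 3, dtype=torch.int64).random_(50, 100)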

If both seed values are equal to zero, *RandomUniform* generates a non-deterministic sequence.

**Philox Algorithm Explanation**:

.. math::
@@ -168,7 +176,7 @@ For integer values:
where *x* is a uint32 random value.


Example 1. *RandomUniform* output with ``global_seed`` = 150, ``op_seed`` = 10, ``output_type`` = f32, ``alignment`` = TENSORFLOW:

.. code-block:: xml
   :force:

@@ -179,7 +187,7 @@

   [0.5197197 0.22727466 0.991374 ]]
Example 2. *RandomUniform* output with ``global_seed`` = 80, ``op_seed`` = 100, ``output_type`` = double, ``alignment`` = TENSORFLOW:

.. code-block:: xml
   :force:

@@ -194,7 +202,7 @@

   [2.67008206 2.36423758]]
Example 3. *RandomUniform* output with ``global_seed`` = 80, ``op_seed`` = 100, ``output_type`` = i32, ``alignment`` = TENSORFLOW:

.. code-block:: xml
   :force:

@@ -208,6 +216,148 @@

   output = [[65 70 56]
             [59 82 92]]
-------------------------------------------------------

**Mersenne-Twister Algorithm Explanation**:

Link to the original paper: `Mersenne Twister: A 623-dimensionally equidistributed uniform pseudo-random number generator <https://dl.acm.org/doi/10.1145/272991.272995>`__.

The Mersenne-Twister algorithm generates random numbers by initializing a state array with a seed and then iterating through a series of transformations.
Suppose we want the *n*-th element of the random sequence.

The initial state array is generated recursively using the following formula:

.. math::
state[0] = global_seed & 0xffffffff;
state[i] = 1812433253 * state[i-1] ^ (state[i-1] >> 30) + i
where the value of i cannot exceed 623.
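A minimal Python sketch of this initialization (the uint32 truncation is explicit here, since Python integers are unbounded):

.. code-block:: python

   def mt19937_init(global_seed: int) -> list[int]:
       # Build the 624-element state array from the seed, per the formula above.
       state = [global_seed & 0xFFFFFFFF]
       for i in range(1, 624):
           prev = state[i - 1]
           state.append((1812433253 * (prev ^ (prev >> 30)) + i) & 0xFFFFFFFF)
       return state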

The output is generated by tempering the state array:

.. math::

   y = state[i] \\
   y = y \oplus (y >> u) \\
   y = y \oplus ((y << s) \ \& \ b) \\
   y = y \oplus ((y << t) \ \& \ c) \\
   y = y \oplus (y >> l)

where *u*, *s*, *t*, *l*, *b*, and *c* are constants.
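A matching Python sketch of the tempering step, using the MT19937 constants listed further below:

.. code-block:: python

   U, S, T, L = 11, 7, 15, 18
   B, C = 0x9D2C5680, 0xEFC60000

   def temper(y: int) -> int:
       # Tempering transform applied to a single uint32 state word.
       y ^= y >> U
       y = (y ^ ((y << S) & B)) & 0xFFFFFFFF
       y = (y ^ ((y << T) & C)) & 0xFFFFFFFF
       y ^= y >> L
       return y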

Whenever all state values have been used, a new state array is generated recursively as follows:

.. math::

   current\_state = state[i] \\
   next\_state = state[i+1] \text{ if } i+1 \leq 623 \text{ else } state[0] \\
   next\_m\_state = state[i+m] \text{ if } i+m \leq 623 \text{ else } state[i+m-624] \\
   twisted\_state = (((current\_state \ \& \ 0x80000000) \mid (next\_state \ \& \ 0x7fffffff)) >> 1) \oplus (0x9908b0df \text{ if } next\_state \ \& \ 1 \text{ else } 0) \\
   state[i] = next\_m\_state \oplus twisted\_state

where *m* is a constant; indices wrap modulo 624, consistently with the 624-element state array.
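An equivalent Python sketch of the state regeneration; the in-place update order follows the reference MT19937 implementation:

.. code-block:: python

   M = 397

   def regenerate(state: list[int]) -> None:
       # Twist all 624 state words in place once they have been consumed.
       for i in range(624):
           current = state[i]
           nxt = state[(i + 1) % 624]
           next_m = state[(i + M) % 624]
           twisted = (((current & 0x80000000) | (nxt & 0x7FFFFFFF)) >> 1) \
                     ^ (0x9908B0DF if nxt & 1 else 0)
           state[i] = next_m ^ twisted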

For parity with PyTorch, the values of the constants are set as follows:

.. math::

   u = 11 \\
   s = 7 \\
   b = 0x9d2c5680 \\
   t = 15 \\
   c = 0xefc60000 \\
   l = 18 \\
   m = 397

These values follow the recommendations of the linked paper for MT19937.
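Putting the pieces together, a hypothetical end-to-end sketch (reusing the helper functions from the sketches above and following the reference MT19937 control flow, where the fresh state is twisted before the first draw):

.. code-block:: python

   state = mt19937_init(150)        # seed the state with global_seed = 150 (illustrative)
   regenerate(state)                # the fresh state is regenerated before the first draw
   first_value = temper(state[0])   # first uint32 output of the sequence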

To convert a given unsigned int value (denoted as x below) to a specific type, a simple conversion is performed.
For float32:

.. math::

   mantissa\_digits = 24 \\
   mask = uint32((uint64(1) << mantissa\_digits) - 1) \\
   divisor = float(1) / (uint64(1) << mantissa\_digits) \\
   output = float((x \ \& \ mask) * divisor)

where ``mantissa_digits`` is the significand bit count of float including the implicit bit, equal to ``std::numeric_limits<float>::digits`` == ``FLT_MANT_DIG`` == 24.
For float16:

.. math::

   mantissa\_digits = 11 \\
   mask = uint32((uint64(1) << mantissa\_digits) - 1) \\
   divisor = float(1) / (uint64(1) << mantissa\_digits) \\
   output = float16((x \ \& \ mask) * divisor)

where ``mantissa_digits`` is the significand bit count of float16 including the implicit bit, equal to 11.

For bfloat16:

.. math::

   mantissa\_digits = 8 \\
   mask = uint32((uint64(1) << mantissa\_digits) - 1) \\
   divisor = float(1) / (uint64(1) << mantissa\_digits) \\
   output = bfloat16((x \ \& \ mask) * divisor)

where ``mantissa_digits`` is the significand bit count of bfloat16 including the implicit bit, equal to 8.

For float64 (double precision requires the use of two uint32 values, denoted as *x* and *y* below):

.. math::

   value = (uint64(x) << 32) + y \\
   mantissa\_digits = 53 \\
   mask = (uint64(1) << mantissa\_digits) - 1 \\
   divisor = double(1) / (uint64(1) << mantissa\_digits) \\
   output = double((value \ \& \ mask) * divisor)

where ``mantissa_digits`` is the significand bit count of double including the implicit bit, equal to ``std::numeric_limits<double>::digits`` == ``DBL_MANT_DIG`` == 53.

After the conversion, all of the floating-point values above fall within the range [0, 1). To map them into a given range ``[minval, maxval)``, a simple transformation is performed:

.. math::

   output = x * (maxval - minval) + minval
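A minimal Python sketch of the floating-point conversion, parametrized by ``mantissa_digits`` (24, 11, 8, or 53 for the types above; a second uint32 word is only needed for double):

.. code-block:: python

   def to_unit_float(words: list[int], mantissa_digits: int) -> float:
       # Combine one uint32 word (or two for double), keep the low
       # mantissa_digits bits, and scale the result into [0, 1).
       raw = words[0] if len(words) == 1 else (words[0] << 32) + words[1]
       mask = (1 << mantissa_digits) - 1
       return (raw & mask) * (1.0 / (1 << mantissa_digits))

   def to_range(x: float, minval: float, maxval: float) -> float:
       # Map a [0, 1) value into [minval, maxval).
       return x * (maxval - minval) + minval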
For integer types, no special conversion is performed, except for int64 when either ``minval`` or ``maxval`` exceeds the maximum value of uint32; in that case two uint32 values are concatenated into a single int64 before the values are standardized.
This special behavior (an optimization) matches the output expected from PyTorch, where a concatenation of two uint32 values normally always occurs.
In other words:

.. math::

   \text{if output is of int32 dtype: } output = int32(x) \\
   \text{else if output is of int64 dtype and } minval \leq max(uint32) \text{ and } maxval \leq max(uint32) \text{: } output = int64(x) \\
   \text{else: } output = int64((uint64(x) << 32) + y) \text{ (uses two uint32 values instead of one)} \\
   output = output \bmod (maxval - minval) + minval
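A hedged Python sketch of the integer path, where ``x`` and ``y`` stand for consecutive uint32 outputs of the generator:

.. code-block:: python

   UINT32_MAX = 0xFFFFFFFF

   def to_int(x: int, y: int, minval: int, maxval: int, is_int64: bool) -> int:
       # Use one uint32 word unless the int64 range requires two.
       if not is_int64 or (minval <= UINT32_MAX and maxval <= UINT32_MAX):
           value = x
       else:
           value = (x << 32) + y
       return value % (maxval - minval) + minval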
Example 1. *RandomUniform* output with ``global_seed`` = 150, ``output_type`` = f32, ``alignment`` = PYTORCH:

.. code-block:: xml
   :force:

   input_shape = [ 3, 3 ]
   output = [[0.6789123 0.31274895 0.91842768]
             [0.9312087 0.13456984 0.49623574]
             [0.5082716 0.23938411 0.97856429]]
Example 2. *RandomUniform* output with ``global_seed`` = 80, ``output_type`` = double, ``alignment`` = PYTORCH:

.. code-block:: xml
   :force:

   input_shape = [ 2, 2 ]
   minval = 2
   maxval = 10
   output = [[8.34928537 6.12348725]
             [3.76852914 2.89564172]]
Example 3. *RandomUniform* output with ``global_seed`` = 80, ``output_type`` = i32, ``alignment`` = PYTORCH:

.. code-block:: xml
   :force:

   input_shape = [ 2, 3 ]
   minval = 50
   maxval = 100
   output = [[89 73 68]
             [95 78 61]]
**Attributes**:

@@ -234,6 +384,14 @@
* **Default value**: 0
* **Required**: *Yes*

* ``alignment``

* **Description**: the framework to align the output to.
* **Range of values**: TENSORFLOW, PYTORCH
* **Type**: ``string``
* **Default value**: TENSORFLOW
* **Required**: *No*

**Inputs**:

* **1**: ``shape`` - 1D tensor of type *T_SHAPE* describing output shape. **Required.**
@@ -245,7 +403,7 @@

**Outputs**:

* **1**: A tensor with the type specified by the *output_type* attribute and the shape defined by the ``shape`` input tensor, with values aligned to the framework selected by the ``alignment`` attribute.

**Types**

