diff --git a/docs/articles_en/learn_openvino/openvino_samples/bert_benchmark.rst b/docs/articles_en/learn_openvino/openvino_samples/bert_benchmark.rst
index ce82c582e97f5b..691e6cbfc8fef9 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/bert_benchmark.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/bert_benchmark.rst
@@ -9,7 +9,7 @@ Bert Benchmark Python Sample
 
 
 This sample demonstrates how to estimate performance of a Bert model using Asynchronous
-Inference Request API. Unlike :doc:`demos ` this sample does not have
+Inference Request API. Unlike `demos `__ this sample does not have
 configurable command line arguments. Feel free to modify sample's source code
 to try out different options.
 
diff --git a/docs/articles_en/learn_openvino/openvino_samples/hello_classification.rst b/docs/articles_en/learn_openvino/openvino_samples/hello_classification.rst
index b6eef4b762a031..c51b4545b4919f 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/hello_classification.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/hello_classification.rst
@@ -14,8 +14,8 @@ Synchronous Inference Request API. Before using the sample, refer to the followi
 
 - Models with only one input and output are supported.
 - The sample accepts any file format supported by ``core.read_model``.
-- The sample has been validated with: :doc:`alexnet `,
-  :doc:`googlenet-v1 ` models.
+- The sample has been validated with: `alexnet `__,
+  `googlenet-v1 `__ models.
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
diff --git a/docs/articles_en/learn_openvino/openvino_samples/hello_nv12_input_classification.rst b/docs/articles_en/learn_openvino/openvino_samples/hello_nv12_input_classification.rst
index c9a53ede7229d2..183535fedddcd3 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/hello_nv12_input_classification.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/hello_nv12_input_classification.rst
@@ -15,7 +15,7 @@ with images in NV12 color format using Synchronous Inference Request API. Before
 using the sample, refer to the following requirements:
 
 - The sample accepts any file format supported by ``ov::Core::read_model``.
-- The sample has been validated with: :doc:`alexnet ` model and
+- The sample has been validated with: `alexnet `__ model and
   uncompressed images in the NV12 color format - \*.yuv
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
diff --git a/docs/articles_en/learn_openvino/openvino_samples/hello_reshape_ssd.rst b/docs/articles_en/learn_openvino/openvino_samples/hello_reshape_ssd.rst
index 0b516c797a6d57..3272a6d1014988 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/hello_reshape_ssd.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/hello_reshape_ssd.rst
@@ -16,8 +16,8 @@ using the sample, refer to the following requirements:
 
 - Models with only one input and output are supported.
 - The sample accepts any file format supported by ``core.read_model``.
-- The sample has been validated with: :doc:`mobilenet-ssd `,
-  :doc:`person-detection-retail-0013 `
+- The sample has been validated with: `mobilenet-ssd `__,
+  `person-detection-retail-0013 `__
   models and the NCHW layout format.
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
diff --git a/docs/articles_en/learn_openvino/openvino_samples/image_classification_async.rst b/docs/articles_en/learn_openvino/openvino_samples/image_classification_async.rst
index 2dad59f0ee2f97..18a8136d7600a5 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/image_classification_async.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/image_classification_async.rst
@@ -15,7 +15,8 @@ following requirements:
 
 - Models with only one input and output are supported.
 - The sample accepts any file format supported by ``core.read_model``.
-- The sample has been validated with: :doc:`alexnet `, :doc:`googlenet-v1 ` models.
+- The sample has been validated with: `alexnet `__,
+  `googlenet-v1 `__ models.
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
diff --git a/docs/articles_en/learn_openvino/openvino_samples/sync_benchmark.rst b/docs/articles_en/learn_openvino/openvino_samples/sync_benchmark.rst
index 793bc11c5262e4..f2da10f65797a6 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/sync_benchmark.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/sync_benchmark.rst
@@ -11,14 +11,15 @@ Sync Benchmark Sample
 This sample demonstrates how to estimate performance of a model using Synchronous
 Inference Request API. It makes sense to use synchronous inference only in latency
 oriented scenarios. Models with static input shapes are supported. Unlike
-:doc:`demos ` this sample does not have other configurable command-line
+`demos `__ this sample does not have other configurable command-line
 arguments. Feel free to modify sample's source code to try out different options.
 
 Before using the sample, refer to the following requirements:
 
 - The sample accepts any file format supported by ``core.read_model``.
-- The sample has been validated with: :doc:`alexnet `,
-  :doc:`googlenet-v1 `, :doc:`yolo-v3-tf `,
-  :doc:`face-detection-0200 ` models.
+- The sample has been validated with: `alexnet `__,
+  `googlenet-v1 `__,
+  `yolo-v3-tf `__,
+  `face-detection-0200 `__ models.
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
diff --git a/docs/articles_en/learn_openvino/openvino_samples/throughput_benchmark.rst b/docs/articles_en/learn_openvino/openvino_samples/throughput_benchmark.rst
index ff7d667ca74b1e..a2545361f7ee39 100644
--- a/docs/articles_en/learn_openvino/openvino_samples/throughput_benchmark.rst
+++ b/docs/articles_en/learn_openvino/openvino_samples/throughput_benchmark.rst
@@ -9,7 +9,7 @@ Throughput Benchmark Sample
 
 
 This sample demonstrates how to estimate performance of a model using Asynchronous
-Inference Request API in throughput mode. Unlike :doc:`demos ` this sample
+Inference Request API in throughput mode. Unlike `demos `__ this sample
 does not have other configurable command-line arguments. Feel free to modify sample's
 source code to try out different options.
 
@@ -20,9 +20,10 @@ sets ``uint8``, while the sample uses default model precision which is usually `
 
 Before using the sample, refer to the following requirements:
 
 - The sample accepts any file format supported by ``core.read_model``.
-- The sample has been validated with: :doc:`alexnet `,
-  :doc:`googlenet-v1 `, :doc:`yolo-v3-tf `,
-  :doc:`face-detection-0200 ` models.
+- The sample has been validated with: `alexnet `__,
+  `googlenet-v1 `__,
+  `yolo-v3-tf `__,
+  `face-detection-0200 `__ models.
 - To build the sample, use instructions available at :ref:`Build the Sample Applications `
   section in "Get Started with Samples" guide.
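
For readers of the pages touched by this patch, the sketch below illustrates the Asynchronous Inference Request API in throughput mode that the bert_benchmark and throughput_benchmark articles describe. It is a minimal sketch, not the shipped sample code: the model path "model.xml", the CPU device, the count of 100 requests, and the random input data are assumptions for illustration, and a single static-shape input is assumed.

    # Minimal sketch (not the actual sample code): async inference in throughput mode.
    # Assumptions: an IR model at "model.xml" with one static-shape input, CPU device,
    # random data instead of real inputs, and an arbitrary total of 100 requests.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # any format supported by core.read_model
    compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

    latencies = []

    def on_done(request, frame_id):
        # Called from a worker thread when a request completes.
        latencies.append(request.latency)

    # AsyncInferQueue picks an optimal number of parallel infer requests for the device.
    queue = ov.AsyncInferQueue(compiled)
    queue.set_callback(on_done)

    data = np.random.rand(*compiled.input(0).shape).astype(np.float32)
    for frame_id in range(100):
        queue.start_async({0: data}, userdata=frame_id)
    queue.wait_all()

    print(f"Completed {len(latencies)} inferences, mean latency "
          f"{sum(latencies) / len(latencies):.2f} ms")

The synchronous samples in this patch (sync_benchmark, hello_classification) follow the same read_model/compile_model flow but run inference synchronously, for example by calling the compiled model directly (``compiled(data)``) instead of using a queue of asynchronous requests.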