From 1ca91df28fea0d905a32e7cc0a34010fac6d52d0 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Sun, 16 Jun 2024 15:36:57 +0200
Subject: [PATCH] [DOCS] release final touches (#25034)

---
 .../supported-models.rst                      |  6 +++--
 .../about-openvino/release-notes-openvino.rst |  7 +++---
 docs/sphinx_setup/index.rst                   | 25 ++++++-------------
 3 files changed, 16 insertions(+), 22 deletions(-)

diff --git a/docs/articles_en/about-openvino/compatibility-and-support/supported-models.rst b/docs/articles_en/about-openvino/compatibility-and-support/supported-models.rst
index 3131545954c032..aa4a2a984a3ca0 100644
--- a/docs/articles_en/about-openvino/compatibility-and-support/supported-models.rst
+++ b/docs/articles_en/about-openvino/compatibility-and-support/supported-models.rst
@@ -35,6 +35,8 @@ by OpenVINO may also work properly.
 
 | Note:
+| The results as of June 17 2024, for OpenVINO version 2024.2.
+
 | The validation process involves using OpenVINO, natively or as a backend, to load each model
   onto the designated hardware and execute inference. If no errors are reported and inference
   finishes, the model receives the **passed** status (indicated by a check mark in the table).
 
@@ -43,5 +45,5 @@ by OpenVINO may also work properly.
 | The models come from different public model repositories, such as, OpenVINO Model Zoo, ONNX
   Model Zoo, Pytorch Model Zoo, and HuggingFace.
-| In the precision column, optimum-intel default corresponds to FP32 for small models and INT8
-  for models greater than 1B parameters.
\ No newline at end of file
+| In the precision column, the "optimum-intel default" label corresponds to FP32 for small
+  models and INT8 for models greater than 1B parameters.
\ No newline at end of file

diff --git a/docs/articles_en/about-openvino/release-notes-openvino.rst b/docs/articles_en/about-openvino/release-notes-openvino.rst
index 989be4057b8d3a..63c7fc1cd9197a 100644
--- a/docs/articles_en/about-openvino/release-notes-openvino.rst
+++ b/docs/articles_en/about-openvino/release-notes-openvino.rst
@@ -35,7 +35,7 @@ What's new
   Python Custom Operation empowers users to implement their own specialized operations
   into any model.
 * Notebooks expansion to ensure better coverage for new models. Noteworthy notebooks added:
-  DynamiCrafter, YOLOv10, and Chatbot notebook with Phi-3.
+  DynamiCrafter, YOLOv10, Chatbot notebook with Phi-3, and QWEN2.
 
 * Broader Large Language Model (LLM) support and more model compression techniques.
 
@@ -52,7 +52,7 @@ What's new
 
 * Model Serving Enhancements:
 
-  * OpenVINO Model Server (OVMS) now supports OpenAI-compatible API along with Continuous
+  * Preview: OpenVINO Model Server (OVMS) now supports OpenAI-compatible API along with Continuous
     Batching and PagedAttention, enabling significantly higher throughput for parallel
     inferencing, especially on Intel® Xeon® processors, when serving LLMs to many
     concurrent users.
@@ -61,13 +61,14 @@ What's new
   * Integration of TorchServe through torch.compile OpenVINO backend for easy model deployment,
     provisioning to multiple instances, model versioning, and maintenance.
-  * Addition of the Generate API, a simplified API for text generation using large language
+  * Preview: addition of the Generate API, a simplified API for text generation using large language
     models with only a few lines of code. The API is available through the newly launched
     OpenVINO GenAI package.
 
 * Support for Intel Atom® Processor X Series. For more details, see :doc:`System Requirements
   <./release-notes-openvino/system-requirements>`.
+* Preview: Support for Intel® Xeon® 6 processor.
+
 OpenVINO™ Runtime
 +++++++++++++++++++++++++++++

diff --git a/docs/sphinx_setup/index.rst b/docs/sphinx_setup/index.rst
index f3e06b3f956f13..0c7a58e769d089 100644
--- a/docs/sphinx_setup/index.rst
+++ b/docs/sphinx_setup/index.rst
@@ -14,6 +14,7 @@ and on-device, in the browser or in the cloud.
 
 Check out the `OpenVINO Cheat Sheet. `__
 
+
 .. container::
    :name: ov-homepage-banner
 
@@ -24,31 +25,21 @@ Check out the `OpenVINO Cheat Sheet.
 [The body of this hunk edits the raw HTML of the docs homepage banner
 carousel; the markup itself did not survive extraction. From the surviving
 "+" and "-" markers and slide text: two slides are added, "New Generative AI
 API - Generate text with LLMs in only a few lines of code! (Check out our
 guide)" and "Python custom operations - Implement specialized operations for
 any model out of the box! (Learn more)"; two older slides are removed, "Do
 you like Generative AI? You will love how it performs with OpenVINO! (Check
 out our new notebooks)" and "Boost your AI deep learning interface
 performance. Use Intel's open-source OpenVino toolkit for optimizing and
 deploying deep learning models. (Learn more)". The remaining slides - "An
 open-source toolkit for optimizing and deploying deep learning models. Boost
 your AI deep-learning inference performance! (Learn more)", "Better OpenVINO
 integration with PyTorch! Use PyTorch models directly, without converting
 them first. (Learn more)", and "OpenVINO via PyTorch 2.0 torch.compile() -
 Use OpenVINO directly in PyTorch-native applications! (Learn more)" - carry
 over.]
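
The Generate API mentioned in the release notes above ships in the OpenVINO
GenAI package. A minimal sketch of its use, assuming an LLM already exported
to OpenVINO IR (the local directory name below is a placeholder):

.. code-block:: python

   import openvino_genai as ov_genai

   # Placeholder path: a directory holding an LLM exported to OpenVINO IR,
   # e.g. produced with `optimum-cli export openvino ...`.
   model_path = "TinyLlama-1.1B-Chat-v1.0-ov"

   # Load the model on CPU; another device name such as "GPU" also works
   # if that device is available.
   pipe = ov_genai.LLMPipeline(model_path, "CPU")

   # Generate a completion for the prompt, capped at 100 new tokens.
   print(pipe.generate("The Sun is yellow because", max_new_tokens=100))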
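The torch.compile OpenVINO backend named in the TorchServe item can also be
used directly from PyTorch-native code. A minimal sketch, assuming the
openvino and torchvision packages are installed (the model and input tensor
are placeholders):

.. code-block:: python

   import torch
   import torchvision.models as models

   import openvino.torch  # noqa: F401 - registers the "openvino" backend

   # Placeholder model: any eager-mode PyTorch module works the same way.
   model = models.resnet18(weights=None).eval()

   # Compile with the OpenVINO backend; subsequent calls run through
   # OpenVINO without converting the model ahead of time.
   compiled_model = torch.compile(model, backend="openvino")

   with torch.no_grad():
       output = compiled_model(torch.randn(1, 3, 224, 224))
   print(output.shape)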
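For the OVMS OpenAI-compatible API item, a client-side sketch against the
chat/completions endpoint. The host, port, endpoint path, and model name are
assumptions for illustration; they depend on how the server is deployed:

.. code-block:: python

   import requests

   # Assumed local OVMS deployment serving an LLM registered as "llm";
   # adjust the URL and model name to match your server configuration.
   response = requests.post(
       "http://localhost:8000/v3/chat/completions",
       json={
           "model": "llm",
           "messages": [{"role": "user", "content": "Say hello in one sentence."}],
           "max_tokens": 64,
       },
       timeout=60,
   )
   print(response.json()["choices"][0]["message"]["content"])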