diff --git a/docs/articles_en/openvino-workflow/torch-compile.rst b/docs/articles_en/openvino-workflow/torch-compile.rst
index 28abccb1df32f1..1b90ea9af5eb95 100644
--- a/docs/articles_en/openvino-workflow/torch-compile.rst
+++ b/docs/articles_en/openvino-workflow/torch-compile.rst
@@ -316,7 +316,7 @@ TorchServe Integration
 TorchServe is a performant, flexible, and easy to use tool for serving PyTorch models in production. For more information on the details of TorchServe,
 you can refer to `TorchServe github repository. `__. With OpenVINO ``torch.compile`` integration into TorchServe you can serve PyTorch models in production
 and accelerate them with OpenVINO on various Intel hardware. Detailed instructions on how to use OpenVINO with TorchServe are
-available in `TorchServe examples. `__ and in a `use case app __`.
+available in `TorchServe examples. `__ and in a `use case app `__.
 
 Support for Automatic1111 Stable Diffusion WebUI
 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
@@ -362,7 +362,7 @@ and decrease memory consumption:
 NNCF unlocks the full potential of low-precision OpenVINO kernels due to quantizers placement designed specifically for the OpenVINO. Advanced algorithms
 like ``SmoothQuant`` or ``BiasCorrection`` allows further metrics improvement minimizing the outputs discrepancy between the original and compressed models.
 For further details, please see the `documentation `__
-and a `tutorial `__.
+and a `tutorial `__.
 
 Support for PyTorch 2 export quantization (Preview)
 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
diff --git a/src/plugins/intel_gpu/thirdparty/onednn_gpu b/src/plugins/intel_gpu/thirdparty/onednn_gpu
index 36e090a367a431..0f269193c74663 160000
--- a/src/plugins/intel_gpu/thirdparty/onednn_gpu
+++ b/src/plugins/intel_gpu/thirdparty/onednn_gpu
@@ -1 +1 @@
-Subproject commit 36e090a367a4312a1caa2db9e95fb94d17d7573b
+Subproject commit 0f269193c7466313888d3338209d0d06a22cc6fa