diff --git a/.github/workflows/mac.yml b/.github/workflows/mac.yml
index 5402b79e70..062b83fc27 100644
--- a/.github/workflows/mac.yml
+++ b/.github/workflows/mac.yml
@@ -17,7 +17,7 @@ concurrency:
env:
PYTHON_VERSION: '3.10'
- OV_BRANCH: master
+ OV_BRANCH: 345163f87953fb0dd8dd590257eb7fc84378da8e
OV_TARBALL: ''
jobs:
diff --git a/.github/workflows/windows.yml b/.github/workflows/windows.yml
index e396671b2c..95a713d7a1 100644
--- a/.github/workflows/windows.yml
+++ b/.github/workflows/windows.yml
@@ -17,7 +17,7 @@ concurrency:
env:
PYTHON_VERSION: '3.11'
- OV_BRANCH: master
+ OV_BRANCH: 345163f87953fb0dd8dd590257eb7fc84378da8e
OV_TARBALL: ''
jobs:
diff --git a/README.md b/README.md
index 9d4543bed4..c5cf799973 100644
--- a/README.md
+++ b/README.md
@@ -394,7 +394,7 @@ See [here](https://openvinotoolkit.github.io/openvino_notebooks/?search=Automati
## Additional materials
-- [List of supported models](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/docs/SUPPORTED_MODELS.md) (NOTE: models can work, but were not tried yet)
+- [List of supported models](https://github.com/openvinotoolkit/openvino.genai/blob/master/SUPPORTED_MODELS.md) (NOTE: listed models may work but have not been verified yet)
- [OpenVINO Generative AI workflow](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html)
- [Optimum-intel and OpenVINO](https://huggingface.co/docs/optimum/intel/openvino/export)
diff --git a/src/docs/SUPPORTED_MODELS.md b/SUPPORTED_MODELS.md
similarity index 95%
rename from src/docs/SUPPORTED_MODELS.md
rename to SUPPORTED_MODELS.md
index 44da29ced4..6b45f47890 100644
--- a/src/docs/SUPPORTED_MODELS.md
+++ b/SUPPORTED_MODELS.md
@@ -147,6 +147,8 @@
+> [!NOTE]
+> LoRA adapters are supported.
The pipeline can work with other similar topologies produced by `optimum-intel` with the same model signature. The model is required to have the following inputs after the conversion:
1. `input_ids` contains the tokens.
@@ -165,12 +167,14 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
Architecture |
Text 2 image |
Image 2 image |
+ LoRA support |
Example HuggingFace Models |
Latent Consistency Model |
Supported |
Supported |
+ Supported |
SimianLuo/LCM_Dreamshaper_v7
@@ -181,6 +185,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
Stable Diffusion |
Supported |
Supported |
+ Supported |
CompVis/stable-diffusion-v1-1
@@ -213,6 +218,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
Stable Diffusion XL |
Supported |
Supported |
+ Supported |
stabilityai/stable-diffusion-xl-base-0.9
@@ -225,6 +231,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
Stable Diffusion 3 |
Supported |
Not supported |
+ Not supported |
stabilityai/stable-diffusion-3-medium-diffusers
@@ -237,6 +244,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
Flux |
Supported |
Not supported |
+ Not supported |
black-forest-labs/FLUX.1-schnell
@@ -260,10 +268,12 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
Architecture |
+ LoRA support |
Example HuggingFace Models |
Stable Diffusion |
+ Supported |
|
Stable Diffusion XL |
+ Supported |
|
-
+
@@ -292,11 +311,13 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
Architecture |
Models |
+ LoRA support |
Example HuggingFace Models |
InternVL2 |
InternVL2 |
+ Not supported |
OpenGVLab/InternVL2-1B
@@ -309,6 +330,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
LLaVA |
LLaVA-v1.5 |
+ Not supported |
llava-hf/llava-1.5-7b-hf
@@ -318,6 +340,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
LLaVA-NeXT |
LLaVa-v1.6 |
+ Not supported |
llava-hf/llava-v1.6-mistral-7b-hf
@@ -329,6 +352,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
MiniCPMV |
MiniCPM-V-2_6 |
+ Not supported |
openbmb/MiniCPM-V-2_6
@@ -345,11 +369,13 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
Architecture |
Models |
+ LoRA support |
Example HuggingFace Models |
WhisperForConditionalGeneration |
Whisper |
+ Not supported |
openai/whisper-tiny
@@ -366,6 +392,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
|
Distil-Whisper |
+ Not supported |
distil-whisper/distil-small.en
diff --git a/samples/cpp/visual_language_chat/README.md b/samples/cpp/visual_language_chat/README.md
index 39364d51ee..73baf0088a 100644
--- a/samples/cpp/visual_language_chat/README.md
+++ b/samples/cpp/visual_language_chat/README.md
@@ -29,7 +29,7 @@ Follow [Get Started with Samples](https://docs.openvino.ai/2024/learn-openvino/o
Discrete GPUs (dGPUs) usually provide better performance compared to CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model `llava-hf/llava-v1.6-mistral-7b-hf` can benefit from being run on a dGPU. Modify the source code to change the device for inference to the `GPU`.
-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#visual-language-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#visual-language-models) for the list of supported models.
## Run benchmark:
diff --git a/samples/cpp/whisper_speech_recognition/README.md b/samples/cpp/whisper_speech_recognition/README.md
index d649266613..2ea3322dee 100644
--- a/samples/cpp/whisper_speech_recognition/README.md
+++ b/samples/cpp/whisper_speech_recognition/README.md
@@ -31,7 +31,7 @@ Output:
timestamps: [0, 2] text: How are you doing today?
```
-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
# Whisper pipeline usage
diff --git a/samples/python/whisper_speech_recognition/README.md b/samples/python/whisper_speech_recognition/README.md
index aeb46444bf..5f373df2b7 100644
--- a/samples/python/whisper_speech_recognition/README.md
+++ b/samples/python/whisper_speech_recognition/README.md
@@ -38,7 +38,7 @@ Output:
timestamps: [0, 2] text: How are you doing today?
```
-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
# Whisper pipeline usage