Added information about LoRA support #1504

Merged
2 changes: 1 addition & 1 deletion .github/workflows/mac.yml
@@ -17,7 +17,7 @@ concurrency:

env:
PYTHON_VERSION: '3.10'
-OV_BRANCH: master
+OV_BRANCH: 345163f87953fb0dd8dd590257eb7fc84378da8e
OV_TARBALL: ''

jobs:
2 changes: 1 addition & 1 deletion .github/workflows/windows.yml
@@ -17,7 +17,7 @@ concurrency:

env:
PYTHON_VERSION: '3.11'
-OV_BRANCH: master
+OV_BRANCH: 345163f87953fb0dd8dd590257eb7fc84378da8e
OV_TARBALL: ''

jobs:
2 changes: 1 addition & 1 deletion README.md
@@ -394,7 +394,7 @@ See [here](https://openvinotoolkit.github.io/openvino_notebooks/?search=Automati

## Additional materials

-- [List of supported models](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/docs/SUPPORTED_MODELS.md) (NOTE: models can work, but were not tried yet)
+- [List of supported models](https://github.com/openvinotoolkit/openvino.genai/blob/master/SUPPORTED_MODELS.md) (NOTE: models can work, but were not tried yet)
- [OpenVINO Generative AI workflow](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html)
- [Optimum-intel and OpenVINO](https://huggingface.co/docs/optimum/intel/openvino/export)

29 changes: 28 additions & 1 deletion src/docs/SUPPORTED_MODELS.md → SUPPORTED_MODELS.md
@@ -147,6 +147,8 @@
</tbody>
</table>

+> [!NOTE]
+> LoRA adapters are supported.
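The low-rank idea behind a LoRA adapter can be sketched in plain Python (the helper names below are hypothetical, not the openvino.genai API): a frozen weight matrix `W` is combined with a trained update `scale * (B @ A)`, where `A` is `r x in`, `B` is `out x r`, and the rank `r` is small.

```python
def matmul(x, y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(x), len(y), len(y[0])
    return [[sum(x[i][k] * y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def apply_lora(w, a, b, scale=1.0):
    """Return W + scale * (B @ A); W stays frozen, only A and B are trained."""
    delta = matmul(b, a)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# 2x2 base weight with a rank-1 adapter
w = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 2.0]]          # A: r=1, in=2
b = [[0.5], [0.25]]       # B: out=2, r=1
print(apply_lora(w, a, b, scale=2.0))  # → [[2.0, 2.0], [0.5, 2.0]]
```

Because only the small `A` and `B` matrices differ per adapter, the update can be merged into `W` once (as here) or applied on the fly, which is what makes switching adapters cheap.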

The pipeline can work with other similar topologies produced by `optimum-intel` with the same model signature. The model is required to have the following inputs after the conversion:
1. `input_ids` contains the tokens.
@@ -165,12 +167,14 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
<th>Architecture</th>
<th>Text 2 image</th>
<th>Image 2 image</th>
+<th>LoRA support</th>
<th>Example HuggingFace Models</th>
</tr>
<tr>
<td><code>Latent Consistency Model</code></td>
<td>Supported</td>
<td>Supported</td>
+<td>Supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7"><code>SimianLuo/LCM_Dreamshaper_v7</code></a></li>
@@ -181,6 +185,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
<td><code>Stable Diffusion</code></td>
<td>Supported</td>
<td>Supported</td>
+<td>Supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/CompVis/stable-diffusion-v1-1"><code>CompVis/stable-diffusion-v1-1</code></a></li>
@@ -213,6 +218,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
<td><code>Stable Diffusion XL</code></td>
<td>Supported</td>
<td>Supported</td>
+<td>Supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9"><code>stabilityai/stable-diffusion-xl-base-0.9</code></a></li>
@@ -225,6 +231,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
<td><code>Stable Diffusion 3</code></td>
<td>Supported</td>
<td>Not supported</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers"><code>stabilityai/stable-diffusion-3-medium-diffusers</code></a></li>
@@ -237,6 +244,7 @@ The pipeline can work with other similar topologies produced by `optimum-intel`
<td><code>Flux</code></td>
<td>Supported</td>
<td>Not supported</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/black-forest-labs/FLUX.1-schnell"><code>black-forest-labs/FLUX.1-schnell</code></a></li>
@@ -260,10 +268,12 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tbody style="vertical-align: top;">
<tr>
<th>Architecture</th>
+<th>LoRA support</th>
<th>Example HuggingFace Models</th>
</tr>
<tr>
<td><code>Stable Diffusion</code></td>
+<td>Supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/stabilityai/stable-diffusion-2-inpainting"><code>stabilityai/stable-diffusion-2-inpainting</code></a></li>
@@ -275,13 +285,22 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
</tr>
<tr>
<td><code>Stable Diffusion XL</code></td>
+<td>Supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1"><code>diffusers/stable-diffusion-xl-1.0-inpainting-0.1</code></a></li>
</ul>
</td>
</tr>
-</tr>
+<!-- <tr>
+<td><code>FLUX</code></td>
+<td>Not supported</td>
+<td>
+<ul>
+<li><a href="https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev"><code>black-forest-labs/FLUX.1-Fill-dev</code></a></li>
+</ul>
+</td>
+</tr> -->
</tbody>
</table>

@@ -292,11 +311,13 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tr>
<th>Architecture</th>
<th>Models</th>
+<th>LoRA support</th>
<th>Example HuggingFace Models</th>
</tr>
<tr>
<td><code>InternVL2</code></td>
<td>InternVL2</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/OpenGVLab/InternVL2-1B"><code>OpenGVLab/InternVL2-1B</code></a></li>
@@ -309,6 +330,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tr>
<td><code>LLaVA</code></td>
<td>LLaVA-v1.5</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/llava-hf/llava-1.5-7b-hf"><code>llava-hf/llava-1.5-7b-hf</code></a></li>
@@ -318,6 +340,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tr>
<td><code>LLaVA-NeXT</code></td>
<td>LLaVa-v1.6</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf"><code>llava-hf/llava-v1.6-mistral-7b-hf</code></a></li>
@@ -329,6 +352,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tr>
<td><code>MiniCPMV</code></td>
<td>MiniCPM-V-2_6</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/openbmb/MiniCPM-V-2_6"><code>openbmb/MiniCPM-V-2_6</code></a></li>
@@ -345,11 +369,13 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
<tr>
<th>Architecture</th>
<th>Models</th>
+<th>LoRA support</th>
<th>Example HuggingFace Models</th>
</tr>
<tr>
<td rowspan=2><code>WhisperForConditionalGeneration</code></td>
<td>Whisper</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/openai/whisper-tiny"><code>openai/whisper-tiny</code></a></li>
@@ -366,6 +392,7 @@ In addition to image generation models, `InpaintingPipeline` supports specialize
</tr>
<tr>
<td>Distil-Whisper</td>
+<td>Not supported</td>
<td>
<ul>
<li><a href="https://huggingface.co/distil-whisper/distil-small.en"><code>distil-whisper/distil-small.en</code></a></li>
2 changes: 1 addition & 1 deletion samples/cpp/visual_language_chat/README.md
@@ -29,7 +29,7 @@ Follow [Get Started with Samples](https://docs.openvino.ai/2024/learn-openvino/o

Discrete GPUs (dGPUs) usually provide better performance compared to CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model `llava-hf/llava-v1.6-mistral-7b-hf` can benefit from being run on a dGPU. Modify the source code to change the device for inference to the `GPU`.

-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#visual-language-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#visual-language-models) for the list of supported models.

## Run benchmark:

2 changes: 1 addition & 1 deletion samples/cpp/whisper_speech_recognition/README.md
@@ -31,7 +31,7 @@ Output:
timestamps: [0, 2] text: How are you doing today?
```
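Each output line follows a fixed `timestamps: [...] text: ...` shape, so a downstream script can split it with a small helper (an illustrative sketch, not part of the sample; it assumes integer timestamps as shown above):

```python
import re

def parse_chunk(line):
    """Split 'timestamps: [0, 2] text: ...' into (start, end, text)."""
    m = re.match(r"timestamps: \[(\d+), (\d+)\] text:\s*(.*)", line)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)

print(parse_chunk("timestamps: [0, 2] text: How are you doing today?"))
# → (0, 2, 'How are you doing today?')
```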

-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#whisper-models) for the list of supported models.

# Whisper pipeline usage

2 changes: 1 addition & 1 deletion samples/python/whisper_speech_recognition/README.md
@@ -38,7 +38,7 @@ Output:
timestamps: [0, 2] text: How are you doing today?
```

-See [SUPPORTED_MODELS.md](../../../src/docs/SUPPORTED_MODELS.md#whisper-models) for the list of supported models.
+See [SUPPORTED_MODELS.md](../../../SUPPORTED_MODELS.md#whisper-models) for the list of supported models.

# Whisper pipeline usage
