[Frontend] Add OpenAI Vision API Support #5237

Merged
merged 54 commits into main from gpt4v-fe on Jun 7, 2024

Commits (54)
8ba11d4
initial
ywang96 Jun 2, 2024
d361d20
iterate
ywang96 Jun 3, 2024
fd5aba5
Merge branch 'main' into gpt4v-fe
ywang96 Jun 3, 2024
730cda7
iterate
ywang96 Jun 3, 2024
1c0b89d
iterate
ywang96 Jun 4, 2024
520f5a0
iterate
ywang96 Jun 4, 2024
3a57a6d
iterate
ywang96 Jun 4, 2024
31b941b
adding test
ywang96 Jun 4, 2024
9b3cf48
iterate
ywang96 Jun 4, 2024
af94f8c
docstring
ywang96 Jun 4, 2024
332dd10
remove unused lib
ywang96 Jun 4, 2024
d52a907
revert hardcoded chat template
ywang96 Jun 4, 2024
58746fc
address feedback
ywang96 Jun 4, 2024
99d9197
update pytestmark
ywang96 Jun 4, 2024
0b65271
apply asyncio mark
ywang96 Jun 4, 2024
3a965d9
update doc
ywang96 Jun 4, 2024
f9b9707
update test
ywang96 Jun 5, 2024
04ebbf7
minor doc update
ywang96 Jun 5, 2024
0cdd54f
minor doc update
ywang96 Jun 5, 2024
82a0052
Clarify experiment support
ywang96 Jun 5, 2024
dd01246
note regarding prompt format when using API server
ywang96 Jun 5, 2024
e40da86
Merge branch 'main' into gpt4v-fe
ywang96 Jun 5, 2024
088ad81
fix typo
ywang96 Jun 5, 2024
daa7085
update template
ywang96 Jun 5, 2024
1b32e2f
revert and update token count
ywang96 Jun 5, 2024
c45b34e
update template
ywang96 Jun 5, 2024
d6c1322
update
ywang96 Jun 5, 2024
05fe635
update
ywang96 Jun 5, 2024
938e5c9
template format
ywang96 Jun 5, 2024
b9318bc
correct and add test for multi image
ywang96 Jun 5, 2024
199ced7
fix test
ywang96 Jun 5, 2024
9e686e0
Add unit test for `fetch_image`
DarkLight1337 Jun 5, 2024
d9fbb17
Apply formatter
DarkLight1337 Jun 5, 2024
2833ba0
address feedback
ywang96 Jun 6, 2024
6c365bd
fix notes
ywang96 Jun 6, 2024
26c38f1
use aiohttp
ywang96 Jun 6, 2024
734e50b
fix test
ywang96 Jun 6, 2024
0cd2931
test
ywang96 Jun 6, 2024
561f07f
fix test
ywang96 Jun 6, 2024
481fea8
update test
ywang96 Jun 6, 2024
9585cc6
update fixture
ywang96 Jun 6, 2024
7f9500d
fix field
ywang96 Jun 6, 2024
32d1a25
fix field
ywang96 Jun 6, 2024
1e665b7
format
ywang96 Jun 6, 2024
cce804e
fix image loading
ywang96 Jun 6, 2024
31b219c
revert change that merges fetch and parse
ywang96 Jun 6, 2024
dcf8c8d
add encoded image fixture
ywang96 Jun 6, 2024
a9a9712
Merge branch 'main' into gpt4v-fe
ywang96 Jun 6, 2024
89a452a
update fetch image and remove unused fixture
ywang96 Jun 6, 2024
4e3eca9
cleanup
ywang96 Jun 6, 2024
afadfac
fix fixture
ywang96 Jun 6, 2024
d3bae73
remove unused client close
ywang96 Jun 6, 2024
a149368
add TODO and format
ywang96 Jun 6, 2024
72d4bc4
address comment
ywang96 Jun 7, 2024
68 changes: 67 additions & 1 deletion docs/source/models/vlm.rst
@@ -3,7 +3,7 @@
Using VLMs
==========

This document shows you how to run and serve Vision Language Models (VLMs) using vLLM.
vLLM provides experimental support for Vision Language Models (VLMs). This document shows you how to run and serve these models using vLLM.

Engine Arguments
----------------
@@ -54,3 +54,69 @@ For now, we only support a single image per text prompt. To pass an image to the
print(generated_text)

A code example can be found in `examples/llava_example.py <https://github.com/vllm-project/vllm/blob/main/examples/llava_example.py>`_.

Online OpenAI Vision API Compatible Inference
----------------------------------------------

You can serve vision language models with vLLM's HTTP server, which is compatible with the `OpenAI Vision API <https://platform.openai.com/docs/guides/vision>`_.

.. note::
Currently, vLLM supports only a **single** ``image_url`` input per ``messages`` list. Support for multi-image inputs will be
added in the future.

Below is an example of how to launch the same ``llava-hf/llava-1.5-7b-hf`` model with the vLLM API server.

.. important::
Since the OpenAI Vision API is based on the `Chat <https://platform.openai.com/docs/api-reference/chat>`_ API, a chat template
is **required** to launch the API server if the model's tokenizer does not come with one. In this example, we use the
HuggingFace Llava chat template, which you can find in the example folder `here <https://github.com/vllm-project/vllm/blob/main/examples/template_llava.jinja>`_.

.. code-block:: bash

python -m vllm.entrypoints.openai.api_server \
--model llava-hf/llava-1.5-7b-hf \
--image-input-type pixel_values \
--image-token-id 32000 \
--image-input-shape 1,3,336,336 \
--image-feature-size 576 \
--chat-template template_llava.jinja
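
Once the server is up, you can confirm that it is reachable and serving the expected model before sending any vision requests. Below is a minimal sketch (assuming the server listens on the default ``localhost:8000``) that lists the served models via the OpenAI client:

.. code-block:: python

from openai import OpenAI

# The API key is not verified by the vLLM server; any placeholder value works.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

# llava-hf/llava-1.5-7b-hf should appear among the served models.
for model in client.models.list():
    print(model.id)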

To consume the server, you can use the OpenAI client as in the example below:

.. code-block:: python

from openai import OpenAI
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
chat_response = client.chat.completions.create(
model="llava-hf/llava-1.5-7b-hf",
messages=[{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
},
],
}],
)
print("Chat response:", chat_response)

.. note::

By default, the timeout for fetching images through an HTTP URL is ``5`` seconds. You can override this by setting the environment variable:

.. code-block:: shell

export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>

.. note::
There is no need to format the prompt with the image token ``<image>`` when serving VLMs with the API server, since the prompt will be
processed automatically by the server.
4 changes: 3 additions & 1 deletion docs/source/serving/openai_compatible_server.md
@@ -30,6 +30,8 @@ Please see the [OpenAI API Reference](https://platform.openai.com/docs/api-refer
- Chat: `tools`, and `tool_choice`.
- Completions: `suffix`.

vLLM also provides experimental support for OpenAI Vision API compatible inference. See more details in [Using VLMs](../models/vlm.rst).

## Extra Parameters
vLLM supports a set of parameters that are not part of the OpenAI API.
In order to use them, you can pass them as extra parameters in the OpenAI client.
@@ -120,4 +122,4 @@ It is the callers responsibility to prompt the model with the tool information,

vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.

Please refer to the OpenAI API reference documentation for more information.
23 changes: 23 additions & 0 deletions examples/template_llava.jinja
@@ -0,0 +1,23 @@
{%- if messages[0]['role'] == 'system' -%}
{%- set system_message = messages[0]['content'] -%}
{%- set messages = messages[1:] -%}
{%- else -%}
{% set system_message = '' -%}
{%- endif -%}

{{ bos_token + system_message }}
{%- for message in messages -%}
{%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{%- endif -%}

{%- if message['role'] == 'user' -%}
{{ 'USER: ' + message['content'] + '\n' }}
{%- elif message['role'] == 'assistant' -%}
{{ 'ASSISTANT: ' + message['content'] + eos_token + '\n' }}
{%- endif -%}
{%- endfor -%}

{%- if add_generation_prompt -%}
{{ 'ASSISTANT:' }}
{% endif %}
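
As a quick sanity check of the prompt format this template produces, it can be rendered stand-alone with `jinja2`. A minimal sketch, assuming the file is saved locally as `template_llava.jinja` and using `<s>`/`</s>` as stand-ins for the tokenizer's special tokens:

import jinja2

# raise_exception is referenced by the template but is not a Jinja2 builtin;
# a small shim is assumed here.
def raise_exception(message):
    raise jinja2.exceptions.TemplateError(message)

with open("template_llava.jinja") as f:
    template = jinja2.Template(f.read())

rendered = template.render(
    messages=[{"role": "user", "content": "<image>\nWhat's in this image?"}],
    bos_token="<s>",
    eos_token="</s>",
    add_generation_prompt=True,
    raise_exception=raise_exception,
)
print(rendered)
# Expected output (roughly): "<s>USER: <image>\nWhat's in this image?\nASSISTANT:"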