Enable mypy type checking #3680

Open
simon-mo opened this issue Mar 28, 2024 · 10 comments
Labels
good first issue (Good for newcomers), misc

Comments

@simon-mo
Collaborator

Anything you want to discuss about vllm.

Even though vLLM is type annotated, we have not enabled type checking. It would be useful to add it, even incrementally.
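
As a minimal illustration (hypothetical code, not from vLLM) of the kind of bug that annotations alone do not catch: the snippet below imports and runs without complaint, and only a checker such as mypy reports the violated annotation.

from typing import Optional


def max_tokens(limit: Optional[int]) -> int:
    # Bug: when limit is None this returns None, contradicting the declared
    # "-> int". Nothing fails at runtime until a caller does arithmetic on
    # the result, but mypy reports it immediately, e.g.:
    #   error: Incompatible return value type (got "Optional[int]", expected "int")
    return limit


print(max_tokens(None))  # runs "fine" and silently returns None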

@rkooo567
Collaborator

The remaining checks to enable:

[rkooo567] mypy vllm/engine/*.py --follow-imports=skip --config-file pyproject.toml
[rkooo567] mypy vllm/worker/*.py --follow-imports=skip --config-file pyproject.toml
[rkooo567] mypy vllm/spec_decode/*.py --follow-imports=skip --config-file pyproject.toml
[rkooo567] mypy vllm/model_executor/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/lora/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/entrypoints/openai/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/transformers_utils/configs/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/transformers_utils/tokenizers/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/distributed/device_communicators/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/attention/backends/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/transformers_utils/tokenizer_group/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/core/block/*.py --follow-imports=skip --config-file pyproject.toml
mypy vllm/model_executor/layers/*.py --follow-imports=skip --config-file pyproject.toml

and misc after

@DarkLight1337
Member

DarkLight1337 commented May 31, 2024

For those who are new to the vLLM repo, determining the correct type of each variable is a great way to enhance your understanding of how the code is connected together.

@DarkLight1337
Member

DarkLight1337 commented Jun 30, 2024

Quick update on the latest progress:

However, the effectiveness of type checking is still quite limited because imported types are currently treated as Any due to the follow_imports = "skip" setting:

vllm/pyproject.toml

Lines 46 to 51 in 2be6955

[tool.mypy]
python_version = "3.8"
ignore_missing_imports = true
check_untyped_defs = true
follow_imports = "skip"

My IDE, which runs Pyright in basic mode, still flags a lot of errors, showing that there is much work left to be done.

Moving forward, we should gradually apply follow_imports = "normal" to each directory, fixing errors as we go. I recommend going through the directories in a similar order as @rkooo567 did (these search filters should be able to locate the relevant PRs).
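
As a minimal illustration of what follow_imports = "skip" hides (hypothetical files, not actual vLLM modules): when caller.py is checked on its own with imports skipped, the imported name is typed as Any and the bad assignment goes unreported; with follow_imports = "normal" mypy flags it.

# engine.py
def num_gpus() -> int:
    return 1

# caller.py
from engine import num_gpus

x: str = num_gpus()
# With follow_imports = "skip" (and only caller.py on the command line),
# "engine" is not analyzed, num_gpus is treated as Any, and nothing is
# reported here. With follow_imports = "normal", mypy reports something like:
#   error: Incompatible types in assignment (expression has type "int",
#   variable has type "str")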

@youkaichao
Member

Cool, looking forward to the progress.

@rkooo567
Collaborator

rkooo567 commented Jul 4, 2024

This is a great issue for improving the code quality of the repo, so please feel free to work on it if anyone is interested! One example PR from the past: #4043

@jberkhahn
Contributor

I'm taking a crack at vllm/core, just to get my feet wet with the codebase.

@pooyadavoodi
Contributor

There are errors indicating incompatibilities between AsyncEngineClient and AsyncLLMEngine. Those errors seem similar to what is discussed in the following links (still an open issue):

An example of the error:

vllm/entrypoints/openai/run_batch.py:132: error: Argument 1 to "OpenAIServingChat" has incompatible type "AsyncLLMEngine"; expected "AsyncEngineClient"  [arg-type]
vllm/entrypoints/openai/run_batch.py:132: note: Following member(s) of "AsyncLLMEngine" have conflicts:
vllm/entrypoints/openai/run_batch.py:132: note:     Expected:
vllm/entrypoints/openai/run_batch.py:132: note:         def encode(self, inputs: Union[Union[str, TextPrompt, TokensPrompt], ExplicitEncoderDecoderPrompt], pooling_params: PoolingParams, request_id: str, lora_request: Optional[LoRARequest] = ..., trace_headers: Optional[Mapping[str, str]] = ...) -> Coroutine[Any, Any, AsyncIterator[EmbeddingRequestOutput]]
vllm/entrypoints/openai/run_batch.py:132: note:     Got:
vllm/entrypoints/openai/run_batch.py:132: note:         def encode(self, inputs: Union[Union[str, TextPrompt, TokensPrompt], ExplicitEncoderDecoderPrompt], pooling_params: PoolingParams, request_id: str, lora_request: Optional[LoRARequest] = ..., trace_headers: Optional[Mapping[str, str]] = ...) -> AsyncIterator[EmbeddingRequestOutput]
vllm/entrypoints/openai/run_batch.py:132: note:     Expected:
vllm/entrypoints/openai/run_batch.py:132: note:         def generate(self, inputs: Union[Union[str, TextPrompt, TokensPrompt], ExplicitEncoderDecoderPrompt], sampling_params: SamplingParams, request_id: str, lora_request: Optional[LoRARequest] = ..., trace_headers: Optional[Mapping[str, str]] = ..., prompt_adapter_request: Optional[PromptAdapterRequest] = ...) -> Coroutine[Any, Any, AsyncIterator[RequestOutput]]
vllm/entrypoints/openai/run_batch.py:132: note:     Got:
vllm/entrypoints/openai/run_batch.py:132: note:         def generate(self, inputs: Union[Union[str, TextPrompt, TokensPrompt], ExplicitEncoderDecoderPrompt], sampling_params: SamplingParams, request_id: str, lora_request: Optional[LoRARequest] = ..., trace_headers: Optional[Mapping[str, str]] = ..., prompt_adapter_request: Optional[PromptAdapterRequest] = ...) -> AsyncIterator[RequestOutput]
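
The Coroutine[Any, Any, AsyncIterator[...]] vs AsyncIterator[...] mismatch typically arises when a method declared as an async def with a plain body (a coroutine that returns an async iterator) is implemented by an async generator. A minimal sketch with hypothetical names (not the actual vLLM classes):

from typing import AsyncIterator


class Client:
    async def encode(self, prompt: str) -> AsyncIterator[str]:
        # No yield in the body, so this is a plain coroutine function; mypy
        # types the attribute as
        #   (self, str) -> Coroutine[Any, Any, AsyncIterator[str]]
        raise NotImplementedError


class Engine(Client):
    async def encode(self, prompt: str) -> AsyncIterator[str]:
        # The yield makes this an async generator function; mypy types it as
        #   (self, str) -> AsyncIterator[str]
        # which is why the two signatures are reported as incompatible.
        yield prompt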

@DarkLight1337
Member

There are errors indicating incompatibilities between AsyncEngineClient and AsyncLLMEngine. [...]

I have been working on this for the past few days. Stay tuned for updates!

@DarkLight1337
Member

DarkLight1337 commented Aug 7, 2024

I have been working on this for the past few days. Stay tuned for updates!

Opened #7248.

@jberkhahn
Contributor

Was going to take a crack at vllm/executor next.

russellb added a commit to russellb/vllm that referenced this issue Oct 9, 2024
Resolve mypy warnings under `vllm/entrypoints` and turn on `mypy`
checks for this directory in `format.sh` and in CI.

part of issue vllm-project#3680

Signed-off-by: Russell Bryant <[email protected]>
russellb added a commit to russellb/vllm that referenced this issue Oct 9, 2024
Enable `mypy` checks on `vllm/inputs` and add some casts to fix the
warnings present for this directory.

Part of issue vllm-project#3680

Signed-off-by: Russell Bryant <[email protected]>