
[Bug]: [vllm-openvino]: ValueError: use_cache was set to True but the loaded model only supports use_cache=False. #6473

Closed
HPUedCSLearner opened this issue Jul 16, 2024 · 21 comments
Labels
bug Something isn't working

Comments

@HPUedCSLearner

Your current environment

The output of `python collect_env.py`

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ python collect_env.py 
Collecting environment information...
WARNING 07-16 19:50:52 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             128
On-line CPU(s) list:                0-127
Vendor ID:                          GenuineIntel
Model name:                         INTEL(R) XEON(R) GOLD 6530
CPU family:                         6
Model:                              207
Thread(s) per core:                 2
Core(s) per socket:                 32
Socket(s):                          2
Stepping:                           2
Frequency boost:                    enabled
CPU max MHz:                        2101.0000
CPU min MHz:                        800.0000
BogoMIPS:                           4200.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          3 MiB (64 instances)
L1i cache:                          2 MiB (64 instances)
L2 cache:                           128 MiB (64 instances)
L3 cache:                           320 MiB (2 instances)
NUMA node(s):                       4
NUMA node0 CPU(s):                  0-15,64-79
NUMA node1 CPU(s):                  16-31,80-95
NUMA node2 CPU(s):                  32-47,96-111
NUMA node3 CPU(s):                  48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] torch==2.3.1+cpu
[pip3] transformers==4.42.4
[pip3] triton==3.0.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.3.1+cpu                pypi_0    pypi
[conda] transformers              4.42.4                   pypi_0    pypi
[conda] triton                    3.0.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

1. Bug description

Letting vllm-openvino convert the model to OpenVINO IR at runtime works fine;
however, manually converting the model to OpenVINO IR beforehand results in a use_cache error.

2. Manually convert the model to OpenVINO IR and run it; this produces the error:

Conversion command

optimum-cli export openvino -m Qwen1.5-4B-Chat --task text-generation --weight-format int4   Qwen1.5-4B-Chat-optimum-int4

OpenVINO IR conversion logs

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ 
optimum-cli export openvino \
    -m Qwen1.5-4B-Chat \
    --task text-generation \
    --weight-format int4   \
    Qwen1.5-4B-Chat-optimum-int4
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00,  1.33it/s]
The task `text-generation` was manually specified, and past key values will not be reused in the decoding. if needed, please pass `--task text-generation-with-past` to export using the past key values.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using framework PyTorch: 2.3.1+cpu
Overriding 1 configuration item(s)
        - use_cache -> False
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1116: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if sequence_length != 1:
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:128: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_len > self.max_seq_len_cached:
['input_ids', 'attention_mask', 'position_ids']
Mixed-Precision assignment ━━━━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   9% 2
Mixed-Precision assignment ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 280/280 • 0:00:46 • 0:00:00
INFO:nncf:Statistics of the bitwidth distribution:
┍━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑
│   Num bits (N) │ % all parameters (layers)   │ % ratio-defining parameters (layers)   │
┝━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥
│              8 │ 36% (77 / 282)              │ 20% (75 / 280)                         │
├────────────────┼─────────────────────────────┼────────────────────────────────────────┤
│              4 │ 64% (205 / 282)             │ 80% (205 / 280)                        │
┕━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
Applying Weight Compression ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 282/282 • 0:01:36 • 0:00:00
Replacing `(?!\S)` pattern to `(?:$|[^\S])` in RegexSplit operation

Run command

Use the manually converted model: /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4

VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003

Resulting use_cache error

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$
VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003
WARNING 07-16 19:48:08 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 07-16 19:48:11 api_server.py:212] vLLM API server version 0.5.2
INFO 07-16 19:48:11 api_server.py:213] args: Namespace(host=None, port=10003, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-16 19:48:11 config.py:1374] Downcasting torch.float32 to torch.float16.
INFO 07-16 19:48:11 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', speculative_config=None, tokenizer='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 07-16 19:48:11 openvino_executor.py:132] Only float32 dtype is supported on OpenVINO, casting from torch.float16.
WARNING 07-16 19:48:11 openvino_executor.py:137] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 07-16 19:48:11 openvino_executor.py:159] OpenVINO optimal block size is 32, overriding currently set 16
INFO 07-16 19:48:14 selector.py:121] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 07-16 19:48:14 selector.py:69] Using OpenVINO Attention backend.
WARNING 07-16 19:48:14 openvino.py:130] OpenVINO IR is available for provided model id /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4. This IR will be used for inference as-is, all possible options that may affect model conversion are ignored.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 282, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 224, in run_server
[rank0]:     if llm_engine is not None else AsyncLLMEngine.from_engine_args(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 444, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 373, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 520, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 249, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 150, in __init__
[rank0]:     super().__init__(model_config, cache_config, parallel_config,
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 46, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/openvino_executor.py", line 28, in _init_executor
[rank0]:     self._init_worker()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/openvino_executor.py", line 55, in _init_worker
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/worker/openvino_worker.py", line 199, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/worker/openvino_model_runner.py", line 91, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 210, in get_model
[rank0]:     return OpenVINOCasualLM(model_config, device_config, kv_cache_dtype)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 137, in __init__
[rank0]:     pt_model = OVModelForCausalLM.from_pretrained(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/modeling_base.py", line 427, in from_pretrained
[rank0]:     return from_pretrained_method(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 796, in _from_pretrained
[rank0]:     causal_model = init_cls(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 171, in __init__
[rank0]:     raise_error(self.use_cache, use_cache, "use_cache")
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 159, in raise_error
[rank0]:     raise ValueError(
[rank0]: ValueError: `use_cache` was set to `True` but the loaded model only supports `use_cache=False`. Please load your current model with `use_cache=False` or export the original model once again with `use_cache=True` when calling the `from_pretrained` method. To export your model, simply set `export=True`.

3. However, running vLLM OpenVINO directly with the original model Qwen1.5-4B-Chat works fine:

Run log

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models/Qwen1.5-4B-Chat$ 
VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat \
                --port 10003
WARNING 07-16 19:33:14 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 07-16 19:33:17 api_server.py:212] vLLM API server version 0.5.2
INFO 07-16 19:33:17 api_server.py:213] args: Namespace(host=None, port=10003, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-16 19:33:17 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', speculative_config=None, tokenizer='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/home/yongshuai_wang/models/Qwen1.5-4B-Chat, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 07-16 19:33:17 openvino_executor.py:132] Only float32 dtype is supported on OpenVINO, casting from torch.bfloat16.
WARNING 07-16 19:33:17 openvino_executor.py:137] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 07-16 19:33:17 openvino_executor.py:159] OpenVINO optimal block size is 32, overriding currently set 16
INFO 07-16 19:33:19 selector.py:121] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 07-16 19:33:19 selector.py:69] Using OpenVINO Attention backend.
WARNING 07-16 19:33:20 openvino.py:123] Provided model id /home/yongshuai_wang/models/Qwen1.5-4B-Chat does not contain OpenVINO IR, the model will be converted to IR with default options. If you need to use specific options for model conversion, use optimum-cli export openvino with desired options.
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00,  1.56it/s]
Using framework PyTorch: 2.3.1+cpu
Overriding 1 configuration item(s)
	- use_cache -> True
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1116: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if sequence_length != 1:
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:128: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_len > self.max_seq_len_cached:
['input_ids', 'attention_mask', 'position_ids', 'past_key_values']
INFO:nncf:Statistics of the bitwidth distribution:
┍━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑
│   Num bits (N) │ % all parameters (layers)   │ % ratio-defining parameters (layers)   │
┝━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥
│              8 │ 100% (282 / 282)            │ 100% (282 / 282)                       │
┕━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
Applying Weight Compression ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 282/282 • 0:00:26 • 0:00:00
INFO 07-16 19:34:36 openvino_executor.py:72] # CPU blocks: 2457
INFO 07-16 19:34:47 serving_chat.py:94] Using default chat template:
INFO 07-16 19:34:47 serving_chat.py:94] {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
INFO 07-16 19:34:47 serving_chat.py:94] You are a helpful assistant.<|im_end|>
INFO 07-16 19:34:47 serving_chat.py:94] ' }}{% endif %}{{'<|im_start|>' + message['role'] + '
INFO 07-16 19:34:47 serving_chat.py:94] ' + message['content'] + '<|im_end|>' + '
INFO 07-16 19:34:47 serving_chat.py:94] '}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
INFO 07-16 19:34:47 serving_chat.py:94] ' }}{% endif %}
WARNING 07-16 19:34:48 serving_embedding.py:141] embedding_mode is False. Embedding API will not work.
INFO 07-16 19:34:48 api_server.py:257] Available routes are:
INFO 07-16 19:34:48 api_server.py:262] Route: /openapi.json, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /docs, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /redoc, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /health, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /tokenize, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /detokenize, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/models, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /version, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/chat/completions, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/completions, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/embeddings, Methods: POST
INFO:     Started server process [42639]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:10003 (Press CTRL+C to quit)
INFO 07-16 19:34:58 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 07-16 19:35:08 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
@HPUedCSLearner HPUedCSLearner added the bug Something isn't working label Jul 16, 2024
@DarkLight1337 DarkLight1337 changed the title [Bug]: [vllm-openvio]: ValueError: use_cache was set to True but the loaded model only supports use_cache=False. [Bug]: [vllm-openvino]: ValueError: use_cache was set to True but the loaded model only supports use_cache=False. Jul 16, 2024
@mgoin
Collaborator

mgoin commented Jul 16, 2024

@ilya-lavrenov @helena-intel can you look into this?

@helena-intel
Contributor

helena-intel commented Jul 16, 2024

Hi @HPUedCSLearner, thanks again for the great bug report! Local models should definitely work. Could you check whether it works if you export the model with the task text-generation-with-past instead of text-generation? See https://huggingface.co/docs/optimum/main/intel/openvino/export#decoder-models . The command is then:

optimum-cli export openvino -m Qwen1.5-4B-Chat --task text-generation-with-past --weight-format int4   Qwen1.5-4B-Chat-optimum-int4

This is also the default for CausalLM models, so omitting the --task parameter should also work if you load a model from the Hugging Face hub.
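As a quick sanity check before pointing vLLM at the exported directory, something like the following should load the IR with use_cache=True, which is the mode vLLM's OpenVINO backend requests internally (a minimal sketch, assuming optimum-intel with the OpenVINO extras is installed and the tokenizer was exported alongside the model):

# Sanity check: an IR exported with --task text-generation-with-past should load with use_cache=True.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "Qwen1.5-4B-Chat-optimum-int4"  # directory produced by the optimum-cli command above
model = OVModelForCausalLM.from_pretrained(model_dir, use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))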

@HPUedCSLearner
Author

Thanks a lot, it works after setting --task text-generation-with-past.
Thanks again!

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003
WARNING 07-17 09:39:30 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 07-17 09:39:33 api_server.py:212] vLLM API server version 0.5.2
INFO 07-17 09:39:33 api_server.py:213] args: Namespace(host=None, port=10003, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-17 09:39:33 config.py:1374] Downcasting torch.float32 to torch.float16.
INFO 07-17 09:39:33 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', speculative_config=None, tokenizer='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 07-17 09:39:34 openvino_executor.py:132] Only float32 dtype is supported on OpenVINO, casting from torch.float16.
WARNING 07-17 09:39:34 openvino_executor.py:137] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 07-17 09:39:34 openvino_executor.py:159] OpenVINO optimal block size is 32, overriding currently set 16
INFO 07-17 09:39:36 selector.py:121] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 07-17 09:39:36 selector.py:69] Using OpenVINO Attention backend.
WARNING 07-17 09:39:36 openvino.py:130] OpenVINO IR is available for provided model id /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4. This IR will be used for inference as-is, all possible options that may affect model conversion are ignored.
INFO:nncf:Statistics of the bitwidth distribution:
┍━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑
│ Num bits (N)   │ % all parameters (layers)   │ % ratio-defining parameters (layers)   │
┝━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥
┕━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
INFO 07-17 09:39:40 openvino_executor.py:72] # CPU blocks: 2457
INFO 07-17 09:39:51 serving_chat.py:94] Using default chat template:
INFO 07-17 09:39:51 serving_chat.py:94] {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
INFO 07-17 09:39:51 serving_chat.py:94] You are a helpful assistant.<|im_end|>
INFO 07-17 09:39:51 serving_chat.py:94] ' }}{% endif %}{{'<|im_start|>' + message['role'] + '
INFO 07-17 09:39:51 serving_chat.py:94] ' + message['content'] + '<|im_end|>' + '
INFO 07-17 09:39:51 serving_chat.py:94] '}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
INFO 07-17 09:39:51 serving_chat.py:94] ' }}{% endif %}
WARNING 07-17 09:39:51 serving_embedding.py:141] embedding_mode is False. Embedding API will not work.
INFO 07-17 09:39:51 api_server.py:257] Available routes are:
INFO 07-17 09:39:51 api_server.py:262] Route: /openapi.json, Methods: HEAD, GET
INFO 07-17 09:39:51 api_server.py:262] Route: /docs, Methods: HEAD, GET
INFO 07-17 09:39:51 api_server.py:262] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 07-17 09:39:51 api_server.py:262] Route: /redoc, Methods: HEAD, GET
INFO 07-17 09:39:51 api_server.py:262] Route: /health, Methods: GET
INFO 07-17 09:39:51 api_server.py:262] Route: /tokenize, Methods: POST
INFO 07-17 09:39:51 api_server.py:262] Route: /detokenize, Methods: POST
INFO 07-17 09:39:51 api_server.py:262] Route: /v1/models, Methods: GET
INFO 07-17 09:39:51 api_server.py:262] Route: /version, Methods: GET
INFO 07-17 09:39:51 api_server.py:262] Route: /v1/chat/completions, Methods: POST
INFO 07-17 09:39:51 api_server.py:262] Route: /v1/completions, Methods: POST
INFO 07-17 09:39:51 api_server.py:262] Route: /v1/embeddings, Methods: POST
INFO:     Started server process [242065]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:10003 (Press CTRL+C to quit)
INFO 07-17 09:40:01 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 07-17 09:40:11 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.

@HPUedCSLearner
Author

I have another question; I would be grateful if someone could answer it.

Can anyone tell me why I get the following warnings?

WARNING 07-17 09:39:30 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash: No module named 'vllm.commit_id'

Is it OK to ignore these warnings when I use the vLLM OpenVINO backend?

@HPUedCSLearner
Author

I have another question, about CPU core load; I would be grateful if someone could answer it.

1. CPU NUMA topology

This is my machine's numactl info:

NUMA:                    
  NUMA node(s):          4
  NUMA node0 CPU(s):     0-15,64-79
  NUMA node1 CPU(s):     16-31,80-95
  NUMA node2 CPU(s):     32-47,96-111
  NUMA node3 CPU(s):     48-63,112-127

2. With numactl -m 0 -C 0-15,64-79, only half of the pinned cores show any load

This is my startup command:

VLLM_OPENVINO_KVCACHE_SPACE=100 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
numactl  -m 0 -C 0-15,64-79 \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003

The screenshot shows that only half of the cores I pinned are actually doing work.

(screenshot of CPU utilization)

Is there any way to make all of the cores I pinned do work?

@helena-intel
Contributor

@HPUedCSLearner I'm glad you got it to work! We should mention this in the OpenVINO vLLM installation documentation so other people don't run into the same issue. We'll fix that (maybe together with some other updates in the near future).

packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash: No module named 'vllm.commit_id'
Is it ok if I ignore the WARNING when I use vllm openvino backend?

It is a warning from usage reporting; it's safe to ignore. But if this warning is caused by the OpenVINO backend, we should look into it.

Is there any way, that I can make all the cores I set work?

OpenVINO uses the same number of threads as the number of physical cores it has available. I'm assuming your system uses sub NUMA clustering (SNC). I don't have access to a system with that at the moment and have no experience with that with vLLM. If you have sysadmin access to this system you could consider disabling SNC, so you'll get more cores per NUMA node. Also note that for now vLLM with OpenVINO only works on a single socket.
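A rough way to see this mismatch is to compare the CPUs the process is pinned to with the machine's physical core count (an illustrative sketch only, assuming psutil is installed):

# Rough illustration: OpenVINO sizes its thread pool from physical cores, so of the
# 32 logical CPUs pinned via numactl -C 0-15,64-79 only about 16 end up with work.
import os
import psutil

pinned = sorted(os.sched_getaffinity(0))          # logical CPUs this process may run on
physical_cores = psutil.cpu_count(logical=False)  # physical cores on the machine

print(f"pinned logical CPUs : {len(pinned)} -> {pinned}")
print(f"physical cores total: {physical_cores}")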

@HPUedCSLearner
Author

@HPUedCSLearner I'm glad you got it to work! We should mention this in the OpenVINO vLLM installation documentation so other people don't run into the same issue. We'll fix that (maybe together with some other updates in the near future).

packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash: No module named 'vllm.commit_id'
Is it ok if I ignore the WARNING when I use vllm openvino backend?

It is a warning from usage reporting; it's safe to ignore. But if this warning is caused by the OpenVINO backend, we should look into it.

Is there any way, that I can make all the cores I set work?

OpenVINO uses the same number of threads as the number of physical cores it has available. I'm assuming your system uses sub NUMA clustering (SNC). I don't have access to a system with that at the moment and have no experience with that with vLLM. If you have sysadmin access to this system you could consider disabling SNC, so you'll get more cores per NUMA node. Also note that for now vLLM with OpenVINO only works on a single socket.

Thank you very much for your answer.
I think that if OpenVINO-vLLM could run in a hyper-threading environment, inference should reach higher throughput, because I have tested OpenVINO Model Server, which can keep all cores busy with hyper-threading enabled.
In addition, I have also tested IPEX-LLM, and its throughput is not as high as OpenVINO-vLLM's.
Moreover, if OpenVINO-vLLM could run inference across sockets, it could serve models with larger parameter counts, such as 32B or 72B LLMs.
I hope OpenVINO-vLLM can add these two features soon.

@liuzhipengchd

@HPUedCSLearner Hi, if VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS is not enabled, will inference be very slow? What about concurrency?

@ilya-lavrenov
Contributor

@HPUedCSLearner Hi, if VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS is not enabled, will inference be very slow? What about concurrency?

You can see the difference in #5379 (comment) if you open the spoilers with plots; the FP16 model actually performs even better.

Generally, int8 should give better performance, but we currently have extra optimizations for FP16 weights, which is why FP16 is faster. We are in the process of enabling dynamic quantization using AMX, which will fully utilize the compressed weights.

@BarrinXu

Hi @ilya-lavrenov, I've encountered a strange issue when using a local OpenVINO model. When I send a long prompt (1024 tokens) to an fp16-format OpenVINO model, vLLM crashes without any error log.

Steps to reproduce

  1. convert Llama-2-7b-chat-hf to fp16 openvino format
    optimum-cli export openvino --model Llama-2-7b-chat-hf --weight-format fp16 --task text-generation-with-past Llama-2-7b-chat-ov-fp16
  2. directly run the ov model, with vllm-openvino (0.5.4+openvino) installed.
    python -m vllm.entrypoints.openai.api_server --model /home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16 --port 8000

Output:

INFO 08-27 20:17:52 importing.py:10] Triton not installed; certain GPU-related functions will be not be available.
WARNING 08-27 20:17:52 _custom_ops.py:17] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 08-27 20:17:54 api_server.py:408] vLLM API server version 0.5.4
INFO 08-27 20:17:54 api_server.py:409] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 08-27 20:17:54 config.py:1552] Downcasting torch.float32 to torch.float16.
INFO 08-27 20:17:54 api_server.py:135] Multiprocessing frontend to use ipc:///tmp/a3d06905-cb3e-4942-a44c-dc4fb841a41d for RPC Path.
INFO 08-27 20:17:54 api_server.py:146] Started engine process with PID 858824
INFO 08-27 20:17:55 importing.py:10] Triton not installed; certain GPU-related functions will be not be available.
WARNING 08-27 20:17:55 _custom_ops.py:17] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 08-27 20:17:56 config.py:1552] Downcasting torch.float32 to torch.float16.
INFO 08-27 20:17:56 llm_engine.py:184] Initializing an LLM engine (v0.5.4) with config: model='/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16', speculative_config=None, tokenizer='/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 08-27 20:17:56 openvino_executor.py:133] Only float32 dtype is supported on OpenVINO, casting from torch.float16.
WARNING 08-27 20:17:56 openvino_executor.py:138] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 08-27 20:17:56 openvino_executor.py:160] OpenVINO optimal block size is 32, overriding currently set 16
WARNING 08-27 20:17:56 openvino_executor.py:169] Environment variable VLLM_OPENVINO_KVCACHE_SPACE (GB) for OpenVINO backend is not set, using 4 by default.
INFO 08-27 20:17:58 selector.py:188] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 08-27 20:17:58 selector.py:132] Using OpenVINO Attention backend.
WARNING 08-27 20:17:58 openvino.py:130] OpenVINO IR is available for provided model id /home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16. This IR will be used for inference as-is, all possible options that may affect model conversion are ignored.
INFO 08-27 20:17:59 openvino_executor.py:73] # CPU blocks: 256
INFO 08-27 20:18:01 api_server.py:197] vLLM to use /tmp/tmp7e3febh1 as PROMETHEUS_MULTIPROC_DIR
WARNING 08-27 20:18:01 serving_embedding.py:188] embedding_mode is False. Embedding API will not work.
INFO 08-27 20:18:01 launcher.py:20] Available routes are:
INFO 08-27 20:18:01 launcher.py:28] Route: /openapi.json, Methods: HEAD, GET
INFO 08-27 20:18:01 launcher.py:28] Route: /docs, Methods: HEAD, GET
INFO 08-27 20:18:01 launcher.py:28] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 08-27 20:18:01 launcher.py:28] Route: /redoc, Methods: HEAD, GET
INFO 08-27 20:18:01 launcher.py:28] Route: /health, Methods: GET
INFO 08-27 20:18:01 launcher.py:28] Route: /tokenize, Methods: POST
INFO 08-27 20:18:01 launcher.py:28] Route: /detokenize, Methods: POST
INFO 08-27 20:18:01 launcher.py:28] Route: /v1/models, Methods: GET
INFO 08-27 20:18:01 launcher.py:28] Route: /version, Methods: GET
INFO 08-27 20:18:01 launcher.py:28] Route: /v1/chat/completions, Methods: POST
INFO 08-27 20:18:01 launcher.py:28] Route: /v1/completions, Methods: POST
INFO 08-27 20:18:01 launcher.py:28] Route: /v1/embeddings, Methods: POST
INFO:     Started server process [858756]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
  3. Post a JSON payload to the /v1/completions endpoint.
    This 32-prompt-token JSON payload (built in Python) runs fine:
payload = {
        'model': '/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16',
        'prompt': [i for i in range(1, 33)], # Make a 32-token prompt.
        'min_tokens': 16,
        'max_tokens': 16,
        'stream': True
}

However, this 1024-prompt-token JSON payload fails and causes the vLLM backend to crash without any logging output:

payload = {
        'model': '/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16',
        'prompt': [i for i in range(1, 1025)], # Make a 1024-token prompt.
        'min_tokens': 16,
        'max_tokens': 16,
        'stream': True
}

However, the same 1024-prompt-token JSON payload is served successfully if I use the Hugging Face-format model path directly and let OpenVINO convert the model after vLLM starts.
That is, if we run python -m vllm.entrypoints.openai.api_server --model /home/chxu/llama_env/7b/Llama-2-7b-chat-hf --port 8000 and wait for vLLM to convert the model to OpenVINO format, we can then post the JSON payload successfully.
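For completeness, here is a minimal sketch of how such a payload can be posted from Python (this assumes the requests package; the model path, prompt, and token limits simply mirror the values above):

# Post the streaming completion request used in the reproduction above.
import requests

payload = {
    "model": "/home/chxu/llama_env/7b/Llama-2-7b-chat-ov-fp16",
    "prompt": [i for i in range(1, 1025)],  # 1024-token prompt given as raw token ids
    "min_tokens": 16,
    "max_tokens": 16,
    "stream": True,
}

with requests.post("http://localhost:8000/v1/completions", json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))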

Here's my environment info.

INFO 08-27 20:38:35 importing.py:10] Triton not installed; certain GPU-related functions will be not be available.
WARNING 08-27 20:38:35 _custom_ops.py:17] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.39

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               128
On-line CPU(s) list:                  0-127
Vendor ID:                            GenuineIntel
Model name:                           INTEL(R) XEON(R) PLATINUM 8562Y+
CPU family:                           6
Model:                                207
Thread(s) per core:                   2
Core(s) per socket:                   32
Socket(s):                            2
Stepping:                             2
CPU(s) scaling MHz:                   20%
CPU max MHz:                          4100.0000
CPU min MHz:                          800.0000
BogoMIPS:                             5600.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            3 MiB (64 instances)
L1i cache:                            2 MiB (64 instances)
L2 cache:                             128 MiB (64 instances)
L3 cache:                             120 MiB (2 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s):                    1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] pyzmq==26.1.1
[pip3] torch==2.4.0+cpu
[pip3] transformers==4.43.4
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] pyzmq                     26.1.1                   pypi_0    pypi
[conda] torch                     2.4.0+cpu                pypi_0    pypi
[conda] transformers              4.43.4                   pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.4@baaedfdb2d3f1d70b7dbcde08b083abfe6017a92
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

Many thanks in advance.

@ilya-lavrenov
Contributor

ilya-lavrenov commented Aug 27, 2024

Hi @BarrinXu,
Are you able to reproduce the same issue with the offline_inference.py example?

UPD: I managed to reproduce the issue with long prompts. Could you please try increasing the KV cache size via VLLM_OPENVINO_KVCACHE_SPACE? By default 4 GB is allocated, and increasing the KV cache size resolved the issue for me.

Meanwhile, we are investigating the original issue.

CC @luo-cheng2021
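
For reference, VLLM_OPENVINO_KVCACHE_SPACE is a size in GB read when vLLM starts. A minimal offline-inference sketch that raises it (the 16 GB value and the model name are illustrative assumptions, not values from this thread):

```python
import os

# Assumed value: raise the OpenVINO KV cache budget to 16 GB (the default is 4 GB).
# The variable must be set before vLLM is imported so the backend picks it up.
os.environ["VLLM_OPENVINO_KVCACHE_SPACE"] = "16"

from vllm import LLM, SamplingParams

# Illustrative model; substitute the model that reproduces the issue.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")
params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```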

@BarrinXu

> Hi @BarrinXu, are you able to reproduce the same issue with the offline_inference.py example?
>
> UPD: I managed to reproduce the issue with long prompts. Could you please try increasing the KV cache size via VLLM_OPENVINO_KVCACHE_SPACE? By default 4 GB is allocated, and increasing the KV cache size resolved the issue for me.
>
> Meanwhile, we are investigating the original issue.
>
> CC @luo-cheng2021

Hi @ilya-lavrenov, thanks for your quick reply.
However, even with VLLM_OPENVINO_KVCACHE_SPACE set to 64, vLLM can only serve the first request and still crashes on the second one.
Steps to reproduce (a minimal client sketch is shown below):

  1. Send a request with a 16-token prompt to /v1/completions; it completes successfully.
  2. Then send a request with a 1024-token prompt to /v1/completions; this request crashes vLLM.
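
A sketch of those two requests, assuming the server from this thread is reachable at localhost:8000 and approximating prompt length by repeated words:

```python
import requests

URL = "http://localhost:8000/v1/completions"          # assumed server address
MODEL = "/home/chxu/llama_env/7b/Llama-2-7b-chat-hf"  # model path used elsewhere in this thread

def complete(prompt: str) -> str:
    # Send one completion request and return the generated text.
    resp = requests.post(URL, json={"model": MODEL, "prompt": prompt, "max_tokens": 16})
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

print(complete("hello " * 16))    # short prompt: completes successfully
print(complete("hello " * 1024))  # long prompt: triggers the crash described above
```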

@BarrinXu

BarrinXu commented Aug 28, 2024

@ilya-lavrenov Sorry to bother you again, but there is another strange issue: when running vLLM+OpenVINO with an fp16 model, the memory consumption is 2x higher than expected.
Steps to reproduce:

  1. Run python -m vllm.entrypoints.openai.api_server --model /home/chxu/llama_env/7b/Llama-2-7b-chat-hf --port 8000
  2. Send a single inference request to vLLM to "activate" the model loading.
  3. Wait for the request to complete.
  4. Check the memory consumption.
     At this point, vLLM has used approximately 32 GB (14 GB + 14 GB + 4 GB) of memory, while ideally it should use about 18 GB (14 GB of parameters + 4 GB of KV cache). I have also tried a local OpenVINO-format fp16 model to avoid online conversion, but the memory consumption is the same. A sketch for measuring the process memory is shown below.
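
For step 4, one way to read the server's resident memory is with psutil, which is already present in the environment listed below (the PID is a placeholder; substitute the api_server process ID):

```python
import psutil

# Placeholder PID: replace with the actual vllm api_server process ID.
server = psutil.Process(12345)
rss_gib = server.memory_info().rss / 1024**3
print(f"api_server RSS: {rss_gib:.1f} GiB")
```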

@luo-cheng2021

> Hi @ilya-lavrenov, thanks for your quick reply. However, even with VLLM_OPENVINO_KVCACHE_SPACE set to 64, vLLM can only serve the first request and still crashes on the second one. Steps to reproduce:
>
>   1. Send a request with a 16-token prompt to /v1/completions; it completes successfully.
>   2. Then send a request with a 1024-token prompt to /v1/completions; this request crashes vLLM.

Could you please share the output of pip list? Thanks.

@BarrinXu

BarrinXu commented Aug 28, 2024

> Could you please share the output of pip list? Thanks.

Hi @luo-cheng2021, the output is:

╰─❯ pip list
Package                           Version        Editable project location
--------------------------------- -------------- -------------------------
about-time                        4.2.1
aiohappyeyeballs                  2.4.0
aiohttp                           3.10.5
aiosignal                         1.3.1
alive-progress                    3.1.5
annotated-types                   0.7.0
anyio                             4.4.0
attrs                             24.2.0
audioread                         3.0.1
autograd                          1.6.2
certifi                           2024.7.4
cffi                              1.17.0
charset-normalizer                3.3.2
click                             8.1.7
cloudpickle                       3.0.0
cma                               3.2.2
cmake                             3.30.2
coloredlogs                       15.0.1
contourpy                         1.2.1
cycler                            0.12.1
datasets                          2.21.0
decorator                         5.1.1
Deprecated                        1.2.14
dill                              0.3.8
diskcache                         5.6.3
distro                            1.9.0
fastapi                           0.112.1
filelock                          3.15.4
fonttools                         4.53.1
frozenlist                        1.4.1
fsspec                            2024.6.1
future                            1.0.0
gguf                              0.9.1
grapheme                          0.6.0
h11                               0.14.0
httpcore                          1.0.5
httptools                         0.6.1
httpx                             0.27.0
huggingface-hub                   0.24.6
humanfriendly                     10.0
idna                              3.7
importlib_metadata                8.4.0
interegular                       0.3.3
Jinja2                            3.1.4
jiter                             0.5.0
joblib                            1.4.2
jsonschema                        4.23.0
jsonschema-specifications         2023.12.1
jstyleson                         0.0.2
kiwisolver                        1.4.5
lark                              1.2.2
lazy_loader                       0.4
librosa                           0.10.2.post1
llvmlite                          0.43.0
lm-format-enforcer                0.10.6
markdown-it-py                    3.0.0
MarkupSafe                        2.1.5
matplotlib                        3.9.2
mdurl                             0.1.2
mpmath                            1.3.0
msgpack                           1.0.8
msgspec                           0.18.6
multidict                         6.0.5
multiprocess                      0.70.16
natsort                           8.4.0
nest-asyncio                      1.6.0
networkx                          3.3
ninja                             1.11.1.1
nncf                              2.12.0
numba                             0.60.0
numpy                             1.26.4
onnx                              1.16.2
openai                            1.42.0
openvino                          2024.3.0
openvino-telemetry                2024.1.0
openvino-tokenizers               2024.3.0.0
optimum                           1.21.4
optimum-intel                     1.18.3
outlines                          0.0.46
packaging                         24.1
pandas                            2.2.2
pillow                            10.4.0
pip                               24.2
platformdirs                      4.2.2
pooch                             1.8.2
prometheus_client                 0.20.0
prometheus-fastapi-instrumentator 7.0.0
protobuf                          5.27.3
psutil                            6.0.0
py-cpuinfo                        9.0.0
pyairports                        2.1.1
pyarrow                           17.0.0
pycountry                         24.6.1
pycparser                         2.22
pydantic                          2.8.2
pydantic_core                     2.20.1
pydot                             2.0.0
Pygments                          2.18.0
pymoo                             0.6.1.3
pyparsing                         3.1.2
python-dateutil                   2.9.0.post0
python-dotenv                     1.0.1
pytz                              2024.1
PyYAML                            6.0.2
pyzmq                             26.1.1
referencing                       0.35.1
regex                             2024.7.24
requests                          2.32.3
rich                              13.7.1
rpds-py                           0.20.0
safetensors                       0.4.4
scikit-learn                      1.5.1
scipy                             1.14.1
sentencepiece                     0.2.0
setuptools                        72.1.0
six                               1.16.0
sniffio                           1.3.1
soundfile                         0.12.1
soxr                              0.4.0
starlette                         0.38.2
sympy                             1.13.2
tabulate                          0.9.0
threadpoolctl                     3.5.0
tiktoken                          0.7.0
tokenizers                        0.19.1
torch                             2.4.0+cpu
tqdm                              4.66.5
transformers                      4.43.4
typing_extensions                 4.12.2
tzdata                            2024.1
urllib3                           2.2.2
uvicorn                           0.30.6
uvloop                            0.20.0
vllm                              0.5.4+openvino /home/chxu/project/vllm
watchfiles                        0.23.0
websockets                        13.0
wheel                             0.43.0
wrapt                             1.16.0
xxhash                            3.5.0
yarl                              1.9.4
zipp                              3.20.0

@luo-cheng2021

> @ilya-lavrenov Sorry to bother you again, but there is another strange issue: when running vLLM+OpenVINO with an fp16 model, the memory consumption is 2x higher than expected. [...] At this point, vLLM has used approximately 32 GB (14 GB + 14 GB + 4 GB) of memory, while ideally it should use about 18 GB (14 GB of parameters + 4 GB of KV cache).

Yes, there is an additional packed bf16 weight buffer for the model; this will be fixed soon.

@BarrinXu

BarrinXu commented Aug 28, 2024

Hi @ilya-lavrenov, I've tried the OpenVINO nightly build; it works and can now serve long prompts successfully.
Thanks! However, it still consumes 2x the memory, and I'm looking forward to the fix.

@ilya-lavrenov
Contributor

> Hi @ilya-lavrenov, I've tried the OpenVINO nightly build; it works and can now serve long prompts successfully. Thanks! However, it still consumes 2x the memory, and I'm looking forward to the fix.

The fix for the 2x memory consumption was merged recently (openvinotoolkit/openvino#26103) and will be part of the upcoming nightly package.

@ilya-lavrenov
Contributor

@BarrinXu the latest OpenVINO nightly is out; could you please check the memory consumption?

@BarrinXu

@ilya-lavrenov Yes, the 2x memory consumption is fixed in the latest version. Thanks!

@liuzhipengchd

liuzhipengchd commented Sep 6, 2024

> Also note that for now vLLM with OpenVINO only works on a single socket

Hello, may I ask a question?
Does vLLM with OpenVINO only run inference on a single physical CPU (socket)? If I have a server with two CPU sockets, is the second one unusable?

Thanks
