ImportError: /ramyapra/vllm/vllm/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: #2747
Comments
Uninstall the package called transformer-engine, using the command sketched below. |
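A minimal sketch of that uninstall step, assuming the package was installed with pip:

# remove the conflicting transformer-engine package from the current environment
pip uninstall -y transformer-engine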
I tried this but it didn't work. |
Please post the vllm version and the steps to reproduce this. Which model are you using? What is the CUDA version on the system and in the docker image (if you use one)? |
I am also experiencing this issue. |
I also experience this. |
Any idea? I am also facing this issue. |
Same problem; I am running on Kaggle.com.

Successfully installed aioprometheus-23.12.0 cupy-cuda12x-12.1.0 pynvml-11.5.0 quantile-python-1.1 transformers-4.38.1 triton-2.2.0 vllm-0.3.2 xformers-0.0.23.post1 |
It seems that PyTorch isn't working with CUDA 12.2 on NGC PyTorch 23.10-py3. |
Same issue, anyone able to fix it? CUDA: 12.0.1. UPDATE: solved by downgrading torch to 2.1.2 (command sketched below). |
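A minimal sketch of that downgrade, assuming a pip-managed environment:

# pin torch to the 2.1.x line that the installed vllm build was compiled against
pip install torch==2.1.2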
#2797 is the same |
I'm hitting this same problem :( |
@sudarshan-kamath , |
@RylanSchaeffer can you try installing a matching version of pytorch with vllm? e.g. vllm 0.3.3 with pytorch 2.1. |
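A sketch of that pairing; the versions come from the comment above, and the exact 2.1.x patch release is an assumption:

# install a torch 2.1 build first, then the vllm release compiled against it
pip install torch==2.1.2
pip install vllm==0.3.3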
Versions:
Error:
|
@youkaichao , here's a script I'm using to debug:
|
Are you using a custom built version of pytorch? vLLM is compiled against officially released pytorch, and there is no binary-compatibility promise across pytorch versions. You can try to build vllm yourself: https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source . |
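A minimal sketch of the build-from-source route described in the linked instructions; the clone location is arbitrary:

# build vllm against the torch already present in the environment
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .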
No, I installed using the default command from pytorch itself. I am now trying the following: I deleted my conda environment, added ..., and this is the error I received:
|
I'm deleting my |
I purged my
|
You don't have a valid CUDA installation. Try checking whether one is present; you can install one yourself (a sketch of both steps is below). |
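A hedged sketch of checking for, and installing, a CUDA toolkit inside a conda environment; the specific check and the conda channel/version used here are assumptions, not the commands from the original comment:

# check whether a CUDA compiler is visible in this environment
nvcc --version || echo "no CUDA toolkit found"
# one way to install a toolkit into the active conda env (version is an assumption)
conda install -c nvidia cuda-toolkit=12.1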
@youkaichao thanks for the help! New error:
|
I'm not sure why it says 12.4. I'm uninstalling and trying
This matches
Now trying |
Failed again:
|
It seems to be a problem with your pytorch environment. How did you install pytorch? |
Following the instructions on the pytorch website:
I promise I'm not trying to do something weird. I'm literally trying to install the most vanilla versions of everything. |
You can try to use our docker image and see if it works for you. The script to build the image is also available: https://github.com/vllm-project/vllm/blob/main/Dockerfile . |
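A hedged sketch of running that image; the image name vllm/vllm-openai and the flags shown are the commonly documented ones, not taken from this thread, and the model is just a small placeholder:

# run the prebuilt vLLM OpenAI-compatible server image with GPU access
docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest --model facebook/opt-125m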
Here's what I just tried:
The error:
|
I don't understand how installing pytorch-cuda doesn't install cuda, but I am now going to try |
Error:
For more info, |
If I do
|
@youkaichao can you give us a hint of which pytorch versions do work? Ranges? The highest? Anything? |
can we request pytorch 2.2? It's the fastest! #3742 |
@youkaichao I followed @RylanSchaeffer's advice and I still get an error:
Can you let us know precisely which commands you recommend running? I started a new conda env and it threw the above error anyway. I followed https://docs.vllm.ai/en/latest/getting_started/installation.html and the error remains:
|
OK, it seems this is sensitive to the python version. You have to use 3.9; then the link by Rylan works. Code (see the sketch after the ref below):
ref: https://docs.vllm.ai/en/latest/getting_started/installation.html |
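A hedged reconstruction of the kind of setup that comment describes (python 3.9 plus a matched torch 2.1 / vllm pair); all version pins here are assumptions:

conda create -n vllm_py39 python=3.9 -y
conda activate vllm_py39
pip install torch==2.1.2
pip install vllm==0.3.3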
@youkaichao what version of pytorch is supported then? |
is it pytorch |
If you build vllm from source, it supports (requires) pytorch 2.2 now. |
I'm using pip though. |
|
awesome! any estimate? Thank you! |
@youkaichao since the python versions I am using are fragile, because the current vllm (or the one I use) only works with pytorch 2.1, I was wondering: which hugging face transformers and accelerate versions do we need without breaking vllm? I still need to debug this, but I think this should work:

# for pytorch see doc string at the top of file
install_requires=[
    'dill',
    'networkx>=2.5',
    'scipy',
    'scikit-learn',
    'lark-parser',
    'tensorboard',
    'pandas',
    'progressbar2',
    'requests',
    'aiohttp',
    'numpy',
    'plotly',
    'wandb',
    'matplotlib',
    # 'statsmodels'
    # 'statsmodels==0.12.2'
    # 'statsmodels==0.13.5'
    # - later check why we are not installing it...
    # 'seaborn'
    # 'nltk'
    'twine',
    'torch==2.1.2',  # 2.2 not supported due to vllm, see: https://github.com/vllm-project/vllm/issues/2747
    # 'torchvision',
    # 'torchaudio',
    # 'fairseq',
    # 'trl',
    'transformers==4.39.2',  # my gold-ai-olympiad project uses 4.39.2
    'accelerate==0.29.2',
    # 'peft',
    'datasets==2.18.0',  # 2.18.0
    'bitsandbytes==0.43.0',
    # 'einops',
    'vllm==0.4.0.post1',  # my gold-ai-olympiad project uses 0.4.0.post1, ref: https://github.com/vllm-project/vllm/issues/2747
]
)

and fyi:
For flash attention I have these comments:
|
Please install vllm in a fresh new environment; then you don't need to manage all of this manually (see the sketch below). |
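A minimal sketch of the fresh-environment approach; the environment name and python version are assumptions, and pip resolves the torch pin of whichever vllm release it picks:

# create an isolated environment and let vllm pull in its own pinned torch
conda create -n vllm-fresh python=3.10 -y
conda activate vllm-fresh
pip install vllm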
@youkaichao sorry for the spam. Where can I follow when the release for vllm + pytorch 2.2.2 will be ready? I need it for a special machine I'm using, sadly, as partially documented here. |
@youkaichao does vllm work for python 3.11? |
python 3.11 is supported. see https://github.com/vllm-project/vllm/releases/tag/v0.4.1 . |
Which version of pytorch does that need? If I remember correctly, the past instructions I saw forced me to use python 3.9. |
For a version that works with pytorch 2.2.1 and python 3.11, do this (https://stackoverflow.com/a/78394535/1601580):

conda create -n vllm_test python=3.11
conda activate vllm_test
pip install torch==2.2.1
pip install vllm==0.4.1
# pip install vllm |
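A quick hedged check that the pairing above resolved the undefined-symbol error; the one-liner is an assumption about how to verify it, not part of the original comment:

# importing vllm loads the compiled _C extension against the installed torch
python -c "import torch, vllm; print(torch.__version__, vllm.__version__)"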
@youkaichao vllm wants torch 2.4.0 according to the output of my code:

snap-cluster-setup % pip install vllm
Collecting vllm
Downloading vllm-0.5.4.tar.gz (958 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 958.6/958.6 kB 10.0 MB/s eta 0:00:00
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
Collecting cmake>=3.21
Using cached cmake-3.30.2-py3-none-macosx_11_0_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.whl.metadata (6.1 kB)
Collecting ninja
Using cached ninja-1.11.1.1-py2.py3-none-macosx_10_9_universal2.macosx_10_9_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (5.3 kB)
Collecting packaging
Using cached packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
Collecting setuptools>=49.4.0
Downloading setuptools-72.2.0-py3-none-any.whl.metadata (6.6 kB)
ERROR: Could not find a version that satisfies the requirement torch==2.4.0 (from versions: 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2)
ERROR: No matching distribution found for torch==2.4.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.

Is this a bug in vllm? (I did not request that torch version, so it can't be me.) |
I noticed I have torch 2.2.2; which version of vllm supports that, @youkaichao?

snap-cluster-setup % pip list | grep "torch"
torch 2.2.2 |
For torch 2.1.2, use the above. |
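The thread gives no official mapping for torch 2.2.2 specifically; a hedged way to preview which torch a given vllm release would pull in before installing it (the --dry-run flag needs a reasonably recent pip):

# resolve dependencies without installing anything and look for the torch pin
pip install --dry-run vllm==0.4.1 | grep -i "would install"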
if you want to use flash attention it seems
related: #485 |
If someone knows how to install flash-attn, @RylanSchaeffer, I'd appreciate it ;) |
I always build flash-attention from source: https://github.com/Dao-AILab/flash-attention/tree/main?tab=readme-ov-file#installation-and-features |
I'm having the same issue with torch 2.4 |
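For reference, the linked flash-attention README documents both a pip route and a source build; a hedged sketch of each (the packaging/ninja prerequisites come from that README, not this thread):

# prerequisite build tools
pip install packaging ninja
# either install the published package ...
pip install flash-attn --no-build-isolation
# ... or build from a source checkout
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
python setup.py install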
@brando90 Have you tried using vllm-flash-attn? |
may try the latest version |
I'm trying to run vllm and lm-eval-harness. I'm using vllm 0.2.5. After I'm done installing both, if I try importing vllm I get the following error:
File "/ramyapra/lm-evaluation-harness/lm_eval/models/__init__.py", line 7, in <module> from . import vllm_causallms File "/ramyapra/lm-evaluation-harness/lm_eval/models/vllm_causallms.py", line 16, in <module> from vllm import LLM, SamplingParams File "/ramyapra/vllm/vllm/__init__.py", line 3, in <module> from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs File "/ramyapra/vllm/vllm/engine/arg_utils.py", line 6, in <module> from vllm.config import (CacheConfig, ModelConfig, ParallelConfig, File "/ramyapra/vllm/vllm/config.py", line 9, in <module> from vllm.utils import get_cpu_memory, is_hip File "/ramyapra/vllm/vllm/utils.py", line 8, in <module> from vllm._C import cuda_utils ImportError: /ramyapra/vllm/vllm/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefINS2_6SymIntEEESt8optionalINS2_10ScalarTypeEES6_INS2_6LayoutEES6_INS2_6DeviceEES6_IbES6_INS2_12MemoryFormatEE
I'm using the NGC docker container 23.10-py3.
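Not part of the original report, but a hedged way to confirm that the undefined symbol is a torch ABI mismatch is to demangle it and compare against the torch installed in the container:

# demangle the missing symbol; it resolves to at::_ops::empty_memory_format::call(...), a libtorch symbol
echo '_ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefINS2_6SymIntEEESt8optionalINS2_10ScalarTypeEES6_INS2_6LayoutEES6_INS2_6DeviceEES6_IbES6_INS2_12MemoryFormatEE' | c++filt
# check which torch (and CUDA) is actually installed in the environment
python -c "import torch; print(torch.__version__, torch.version.cuda)"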