
serving mpt-7b-bigdl.yaml crashes when installed with pip install -e .[bigdl-cpu] #58

Closed
xwu99 opened this issue Jan 14, 2024 · 2 comments · Fixed by #61

Comments

xwu99 (Contributor) commented Jan 14, 2024

First create a new conda env with Python 3.9, and install with:

pip install -e .[bigdl-cpu] -f https://developer.intel.com/ipex-whl-stable-cpu -f https://download.pytorch.org/whl/torch_stable.html

Then run:

python inference/serve.py --config_file inference/models/bigdl/mpt-7b-bigdl.yaml --serve_simple

It looks like something got messed up after installing bigdl-llm:

[screenshot: crash traceback]

After I switched to another env, it's OK.

@KepingYan @jiafuzha

xwu99 (Contributor, Author) commented Jan 14, 2024

I did some testing and found that the latest fastapi 0.109 and starlette 0.35.1 cause the crash; after reinstalling fastapi 0.104 and starlette 0.27.0 it works fine.
To reproduce, you need to stop Ray and start over.
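A minimal sketch of the workaround described above, assuming the exact patch versions (0.104.1, 0.27.0) and that forcefully stopping Ray is enough to clear the previous deployment (neither detail is confirmed in this thread):

# stop any running Ray cluster so the previous serve deployment is cleared
ray stop --force
# pin the known-good versions reported above
pip install "fastapi==0.104.1" "starlette==0.27.0"
# start serving again
python inference/serve.py --config_file inference/models/bigdl/mpt-7b-bigdl.yaml --serve_simple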

KepingYan (Contributor) commented

After installing the env via pip install .[cpu] -f https://developer.intel.com/ipex-whl-stable-cpu -f https://download.pytorch.org/whl/torch_stable.html, I also encountered this problem. fastapi is installed together with ray[serve]. I've verified that all versions in the range 0.104 <= fastapi <= 0.109 work well. The above problem will be fixed by #61.
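Since fastapi and starlette are pulled in transitively via ray[serve], it helps to confirm which versions a given environment actually resolved before and after applying the workaround. A quick check (illustrative only; any equivalent command works):

# print the resolved versions of the packages involved
pip show fastapi starlette ray | grep -E "^(Name|Version)"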
