BLD: pin chatglm-cpp version v0.3.x (#1692)
ChengjieLi28 authored Jun 22, 2024
1 parent 5cef7c3 commit 7705d4a
Showing 1 changed file with 1 addition and 1 deletion.
xinference/deploy/docker/Dockerfile
@@ -21,7 +21,7 @@ ARG PIP_INDEX=https://pypi.org/simple
 RUN python -m pip install --upgrade -i "$PIP_INDEX" pip && \
 # uninstall builtin torchvision, and let xinference decide which version to be installed
 pip uninstall -y torchvision torchaudio && \
-CMAKE_ARGS="-DGGML_CUBLAS=ON" pip install -i "$PIP_INDEX" -U chatglm-cpp && \
+CMAKE_ARGS="-DGGML_CUDA=ON" pip install -i "$PIP_INDEX" -U "chatglm-cpp<0.4.0" && \
 # use pre-built whl package for llama-cpp-python, otherwise may core dump when init llama in some envs
 pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121 && \
 cd /opt/inference && \
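For reference, the pinned build from the changed line can be reproduced outside the image with the same flags. This is a minimal sketch mirroring the diff; it assumes a CUDA toolchain is available on the host and uses the default PyPI index rather than the Dockerfile's $PIP_INDEX argument:

    # build chatglm-cpp with CUDA enabled, pinned to the 0.3.x series as in the Dockerfile
    CMAKE_ARGS="-DGGML_CUDA=ON" pip install -U "chatglm-cpp<0.4.0"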
