Add pp_serving example to serving image #11433

Merged
merged 4 commits on Jun 28, 2024
Changes from 1 commit
init pp
hzjane committed Jun 26, 2024

commit b3dacfd2d69f6b558a5b0d87965ddd96dfd4ea78
10 changes: 9 additions & 1 deletion docker/llm/serving/xpu/docker/Dockerfile
@@ -21,12 +21,20 @@ RUN apt-get update && \
pip install outlines==0.0.34 --no-deps && \
pip install interegular cloudpickle diskcache joblib lark nest-asyncio numba scipy && \
# For Qwen series models support
pip install transformers_stream_generator einops tiktoken
pip install transformers_stream_generator einops tiktoken && \
# For pipeline serving support
pip install mpi4py fastapi uvicorn openai && \
# For gradio web UI
pip install gradio && \
git clone https://github.com/intel-analytics/ipex-llm /llm/ipex-llm && \
mkdir -p /llm/pp_serving && \
cp /llm/ipex-llm/python/llm/example/GPU/Pipeline-Parallel-FastAPI/*.py /llm/pp_serving

COPY ./vllm_offline_inference.py /llm/
COPY ./payload-1024.lua /llm/
COPY ./start-vllm-service.sh /llm/
COPY ./benchmark_vllm_throughput.py /llm/
COPY ./start-fastchat-service.sh /llm/
COPY ./start-pp_serving-service.sh /llm/


WORKDIR /llm/
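
For reference, a minimal sketch of how the updated serving image might be built and started; the image tag, container name, mounted model path, and device flags below are illustrative assumptions, not part of this PR.

# Build the serving image from this directory (tag is illustrative)
docker build -t ipex-llm-serving-xpu:pp-example -f Dockerfile .

# Start it with the Intel GPU devices exposed and a host model directory mounted (paths are assumptions)
docker run -itd --net=host --device=/dev/dri \
    -v /path/to/models:/llm/models \
    --name pp-serving-test \
    ipex-llm-serving-xpu:pp-example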
25 changes: 25 additions & 0 deletions docker/llm/serving/xpu/docker/start-pp_serving-service.sh
@@ -0,0 +1,25 @@
source /opt/intel/oneapi/setvars.sh
export no_proxy=localhost
export FI_PROVIDER=tcp
export OMP_NUM_THREADS=32

export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
basekit_root=/opt/intel/oneapi
source $basekit_root/setvars.sh --force
source $basekit_root/ccl/latest/env/vars.sh --force

export USE_XETLA=OFF
# KERNEL_VERSION is not defined elsewhere in this script; deriving it from the running kernel is an assumed fix
export KERNEL_VERSION=${KERNEL_VERSION:-$(uname -r)}
if [[ $KERNEL_VERSION != *"6.5"* ]]; then
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
fi
export TORCH_LLM_ALLREDUCE=0

export NUM_GPUS=2
export IPEX_LLM_QUANTIZE_KV_CACHE=1

export MODEL_PATH="/llm/models/Llama-2-7b-chat-hf"
export low_bit="fp8"
# max requests = max_num * rank_num
export max_num="4"
Contributor review comment on this line: max_num_seqs

cd /llm/pp_serving
CCL_ZE_IPC_EXCHANGE=sockets torchrun --standalone --nnodes=1 --nproc-per-node $NUM_GPUS pipeline_serving.py --repo-id-or-model-path $MODEL_PATH --low-bit $low_bit --max-num-seqs $max_num
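
Once this script has launched the pipeline-parallel FastAPI service, it can be exercised over HTTP. With max_num=4 and NUM_GPUS=2 ranks, the comment above implies at most 4 * 2 = 8 requests served concurrently. A minimal request sketch follows; the port, route, and JSON fields are assumptions here, so check pipeline_serving.py from the Pipeline-Parallel-FastAPI example for the actual serving port and request schema.

# Hypothetical request; endpoint path, port, and payload fields are assumed, not confirmed by this PR
curl -X POST http://localhost:8000/generate/ \
    -H "Content-Type: application/json" \
    -d '{"prompt": "What is AI?", "n_predict": 32}'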