Merge pull request #9108 from Zhengjin-Wang/main
Add instruction for chat.py in bigdl-llm-cpu
Zhengjin-Wang authored Oct 10, 2023
2 parents 1e78b0a + a1aefdb commit 30e3c19
Showing 2 changed files with 36 additions and 1 deletion.
3 changes: 2 additions & 1 deletion docker/llm/inference/cpu/docker/Dockerfile
```diff
@@ -22,7 +22,8 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get update && \
     pip install --pre --upgrade bigdl-llm[all] && \
     pip install --pre --upgrade bigdl-nano && \
     # Download chat.py script
-    wget -P /root https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/portable-executable/chat.py && \
+    pip install --upgrade colorama && \
+    wget -P /root https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/portable-zip/chat.py && \
     export PYTHONUNBUFFERED=1

 ENTRYPOINT ["/bin/bash"]
```
34 changes: 34 additions & 0 deletions docker/llm/inference/cpu/docker/README.md
@@ -32,3 +32,37 @@ sudo docker run -itd \
After the container is booted, you could get into the container through `docker exec`.
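For instance, assuming the container was started with `--name=CONTAINER_NAME` as in the example in this README, you could attach an interactive shell like this (a minimal sketch; adjust the container name to whatever you used):

```shell
# Open an interactive bash shell inside the running container
sudo docker exec -it CONTAINER_NAME /bin/bash
```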

To run inference with `BigDL-LLM` on CPU, you can refer to this [documentation](https://github.com/intel-analytics/BigDL/tree/main/python/llm#cpu-int4).

### Use chat.py

`chat.py` can be used to start a conversation with a specified model. The script is located in the `/root` directory inside the container.

You can download models on the host machine and mount the model directory into the container when starting it.

Here is an example:
```bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT
export MODEL_PATH=/home/llm/models

sudo docker run -itd \
--net=host \
--cpuset-cpus="0-47" \
--cpuset-mems="0" \
--memory="32G" \
--name=CONTAINER_NAME \
--shm-size="16g" \
-v $MODEL_PATH:/llm/models/ \
$DOCKER_IMAGE
```
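Before starting the container, the host directory in `$MODEL_PATH` needs to contain at least one model. A hypothetical way to populate it is to clone a model repository from Hugging Face (the model name below is purely illustrative; any model supported by BigDL-LLM works):

```shell
# Prepare the host directory that will be mounted into the container
mkdir -p /home/llm/models
cd /home/llm/models

# Clone a model repository from Hugging Face (requires git-lfs for the weights)
git lfs install
git clone https://huggingface.co/lmsys/vicuna-7b-v1.5
```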

After entering the container through `docker exec`, you can run chat.py by:
```bash
cd /root
python chat.py --model-path YOUR_MODEL_PATH
```
With the mount from the example above, the command becomes:
```bash
cd /root
python chat.py --model-path /llm/models/MODEL_NAME
```
