refine LLM containers #9109

Merged
merged 1 commit into from
Oct 9, 2023
112 changes: 0 additions & 112 deletions docker/llm/finetune/lora/README.md

This file was deleted.

4 changes: 2 additions & 2 deletions docker/llm/finetune/lora/cpu/docker/README.md
@@ -3,7 +3,7 @@
You can download directly from Dockerhub like:

```bash
docker pull intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT
docker pull intelanalytics/bigdl-llm-finetune-lora-cpu:2.4.0-SNAPSHOT
```

Or build the image from source:
@@ -15,6 +15,6 @@ export HTTPS_PROXY=your_https_proxy
docker build \
--build-arg http_proxy=${HTTP_PROXY} \
--build-arg https_proxy=${HTTPS_PROXY} \
-t intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT \
-t intelanalytics/bigdl-llm-finetune-lora-cpu:2.4.0-SNAPSHOT \
-f ./Dockerfile .
```
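The same tag appears in the pull command, the build command, and the kubernetes values file below; keeping the image reference in one variable avoids drift between them (a sketch; the variable names are our own, not part of the project):

```bash
# Keep the image reference in one place so `docker pull`,
# `docker build -t`, and values.yaml all stay consistent.
IMAGE=intelanalytics/bigdl-llm-finetune-lora-cpu
TAG=2.4.0-SNAPSHOT
FULL_IMAGE="${IMAGE}:${TAG}"
echo "${FULL_IMAGE}"
```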
2 changes: 1 addition & 1 deletion docker/llm/finetune/lora/cpu/kubernetes/values.yaml
@@ -1,4 +1,4 @@
imageName: intelanalytics/bigdl-llm-finetune-cpu:2.4.0-SNAPSHOT
imageName: intelanalytics/bigdl-llm-finetune-lora-cpu:2.4.0-SNAPSHOT
trainerNum: 8
microBatchSize: 8
nfsServerIp: your_nfs_server_ip
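With these values, and assuming one micro-batch per step per trainer (i.e. no gradient accumulation, which is our assumption here), the effective global batch size is trainerNum × microBatchSize; a quick sanity check:

```bash
# Global batch size implied by the values.yaml settings above
# (assumes no gradient accumulation).
trainerNum=8
microBatchSize=8
globalBatchSize=$((trainerNum * microBatchSize))
echo "${globalBatchSize}"
```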
13 changes: 11 additions & 2 deletions docker/llm/finetune/qlora/xpu/docker/README.md
@@ -28,14 +28,18 @@ docker build \
Here, we fine-tune [Llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b) on the [English Quotes](https://huggingface.co/datasets/Abirate/english_quotes) dataset. Please download both, then start a docker container with the files mounted as below:

```bash
export BASE_MODE_PATH=<your_downloaded_base_model_path>
export DATA_PATH=<your_downloaded_data_path>
export BASE_MODE_PATH=your_downloaded_base_model_path
export DATA_PATH=your_downloaded_data_path
export HTTP_PROXY=your_http_proxy
export HTTPS_PROXY=your_https_proxy

docker run -itd \
--net=host \
--device=/dev/dri \
--memory="32G" \
--name=bigdl-llm-fintune-qlora-xpu \
-e http_proxy=${HTTP_PROXY} \
-e https_proxy=${HTTPS_PROXY} \
-v $BASE_MODE_PATH:/model \
-v $DATA_PATH:/data/english_quotes \
--shm-size="16g" \
@@ -45,11 +49,16 @@ docker run -itd \
Downloading the base model and data and mounting them into a docker container, as above, is the standard fine-tuning workflow. For a quick start you can skip this step; in that case, the fine-tuning code will automatically download the needed files:

```bash
export HTTP_PROXY=your_http_proxy
export HTTPS_PROXY=your_https_proxy

docker run -itd \
--net=host \
--device=/dev/dri \
--memory="32G" \
--name=bigdl-llm-fintune-qlora-xpu \
-e http_proxy=${HTTP_PROXY} \
-e https_proxy=${HTTPS_PROXY} \
--shm-size="16g" \
intelanalytics/bigdl-llm-fintune-qlora-xpu:2.4.0-SNAPSHOT
```
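Either way, once the detached container is up, fine-tuning is driven from a shell inside it; a hedged sketch using the container name set by `--name` above (the docker commands are commented out as they require a docker daemon):

```bash
# Container name set by --name in the `docker run` commands above.
CONTAINER=bigdl-llm-fintune-qlora-xpu

# Confirm the container is running (uncomment where docker is available):
# docker ps --filter "name=${CONTAINER}"

# Open a shell inside it to kick off fine-tuning:
# docker exec -it "${CONTAINER}" bash
echo "${CONTAINER}"
```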
4 changes: 3 additions & 1 deletion docker/llm/inference/xpu/docker/Dockerfile
@@ -8,7 +8,9 @@ ENV TZ=Asia/Shanghai
# Disable pip's cache behavior
ARG PIP_NO_CACHE_DIR=false

RUN apt-get update && \
RUN curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2023.PUB | gpg --dearmor | tee /usr/share/keyrings/intel-oneapi-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/intel-oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main " > /etc/apt/sources.list.d/oneAPI.list && \
apt-get update && \
apt-get install -y curl wget git gnupg gpg-agent && \
wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg && \
echo 'deb [arch=amd64,i386 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc' | tee /etc/apt/sources.list.d/intel.gpu.jammy.list && \
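The Dockerfile change above follows the standard `signed-by` pattern for third-party apt repositories: dearmor the vendor's key into `/usr/share/keyrings`, then reference that keyring from the sources-list entry. A generic sketch of the same pattern (the URLs and keyring name here are placeholders for illustration, not Intel's):

```bash
# Generic signed-by apt-repo pattern, as used for oneAPI above.
# KEY_URL and the repository URL are placeholders only.
KEYRING=/usr/share/keyrings/vendor-archive-keyring.gpg
# curl -fsSL "$KEY_URL" | gpg --dearmor | tee "$KEYRING" > /dev/null
LIST_ENTRY="deb [signed-by=${KEYRING}] https://example.com/apt stable main"
echo "${LIST_ENTRY}"
```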