
Remove accelerate 0.23.0 install command in readme and docker (#11333)
ipex-llm's accelerate dependency has been upgraded to 0.23.0, so the explicit `pip install accelerate==0.23.0` command is now redundant and is removed from the README and Docker files.
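Since accelerate 0.23.0 is now pulled in automatically as an ipex-llm dependency, the deleted install lines below are redundant. A minimal sketch (not part of this commit) for confirming which accelerate version ended up in an environment:

```python
# Illustrative only: report the installed accelerate version, if any,
# using package metadata so accelerate itself is never imported.
from importlib.metadata import PackageNotFoundError, version

try:
    print("accelerate", version("accelerate"))
except PackageNotFoundError:
    print("accelerate is not installed")
```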
qiyuangong authored Jun 17, 2024
1 parent ef4b651 commit de4bb97
Showing 17 changed files with 2 additions and 17 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/llm_unit_tests.yml
Original file line number Diff line number Diff line change
@@ -381,7 +381,7 @@ jobs:
shell: bash
run: |
python -m pip uninstall datasets -y
-python -m pip install transformers==4.36.0 datasets peft==0.10.0 accelerate==0.23.0
+python -m pip install transformers==4.36.0 datasets peft==0.10.0
python -m pip install bitsandbytes scipy
# Specific oneapi position on arc ut test machines
if [[ "$RUNNER_OS" == "Linux" ]]; then
1 change: 0 additions & 1 deletion docker/llm/finetune/qlora/cpu/docker/Dockerfile
@@ -50,7 +50,6 @@ RUN mkdir -p /ipex_llm/data && mkdir -p /ipex_llm/model && \
# install huggingface dependencies
pip install datasets transformers==4.36.0 && \
pip install fire peft==0.10.0 && \
-pip install accelerate==0.23.0 && \
pip install bitsandbytes && \
# get qlora example code
cd /ipex_llm && \
1 change: 0 additions & 1 deletion docker/llm/finetune/qlora/cpu/docker/Dockerfile.k8s
@@ -63,7 +63,6 @@ RUN mkdir -p /ipex_llm/data && mkdir -p /ipex_llm/model && \
# install huggingface dependencies
pip install datasets transformers==4.36.0 && \
pip install fire peft==0.10.0 && \
-pip install accelerate==0.23.0 && \
# install basic dependencies
apt-get update && apt-get install -y curl wget gpg gpg-agent && \
# Install Intel oneAPI keys.
2 changes: 1 addition & 1 deletion docker/llm/finetune/xpu/Dockerfile
@@ -41,7 +41,7 @@ RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRO
rm -rf IPEX-LLM && \
# install transformers & peft dependencies
pip install transformers==4.36.0 && \
-pip install peft==0.10.0 datasets accelerate==0.23.0 && \
+pip install peft==0.10.0 datasets && \
pip install bitsandbytes scipy fire && \
# Prepare accelerate config
mkdir -p /root/.cache/huggingface/accelerate && \
@@ -216,7 +216,6 @@ pip install -e .
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# install transformers etc
-pip install accelerate==0.23.0
# to avoid https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
pip install datasets==2.15.0
pip install transformers==4.37.0
1 change: 0 additions & 1 deletion python/llm/example/CPU/QLoRA-FineTuning/README.md
@@ -22,7 +22,6 @@ pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pyt
pip install transformers==4.36.0
pip install peft==0.10.0
pip install datasets
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

@@ -10,7 +10,6 @@ conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install datasets transformers==4.36.0
pip install fire peft==0.10.0
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/DPO/README.md
@@ -19,7 +19,6 @@ conda activate llm
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pip install transformers==4.36.0 datasets
pip install trl peft==0.10.0
-pip install accelerate==0.23.0
pip install bitsandbytes
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/HF-PEFT/README.md
@@ -17,7 +17,6 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
pip install transformers==4.36.0 datasets
pip install fire peft==0.10.0
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/LISA/README.md
@@ -13,7 +13,6 @@ conda create -n llm python=3.11
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install accelerate==0.23.0
pip install bitsandbytes==0.43.0
pip install datasets==2.18.0
pip install --upgrade transformers==4.36.0
1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/LoRA/README.md
@@ -15,7 +15,6 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
pip install transformers==4.36.0 datasets
pip install fire peft==0.10.0
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/QA-LoRA/README.md
@@ -15,7 +15,6 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
pip install transformers==4.36.0 datasets
pip install fire peft==0.10.0
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

@@ -18,7 +18,6 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
pip install transformers==4.36.0 datasets
pip install fire peft==0.10.0
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
# configures OneAPI environment variables
source /opt/intel/oneapi/setvars.sh # necessary to run before installing deepspeed
Expand Down
Original file line number Diff line number Diff line change
@@ -19,7 +19,6 @@ conda activate llm
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pip install transformers==4.36.0 datasets
pip install peft==0.10.0
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

@@ -19,7 +19,6 @@ conda activate llm
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pip install transformers==4.36.0 datasets
pip install peft==0.10.0
-pip install accelerate==0.23.0
pip install bitsandbytes scipy trl
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/ReLora/README.md
@@ -15,7 +15,6 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
pip install transformers==4.36.0 datasets
pip install fire peft==0.10.0
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
-pip install accelerate==0.23.0
pip install bitsandbytes scipy
```

1 change: 0 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/axolotl/README.md
@@ -132,7 +132,6 @@ pip install -e .
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# install transformers etc
-pip install accelerate==0.23.0
# to avoid https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
pip install datasets==2.15.0
pip install transformers==4.37.0
