Fix LLAVA example on CPU (#11271)
* update

* update

* update

* update
jenniew authored Jun 26, 2024
1 parent ca0e69c commit 40fa235
Showing 2 changed files with 6 additions and 8 deletions.
11 changes: 4 additions & 7 deletions python/llm/example/CPU/PyTorch-Models/Model/llava/README.md
@@ -20,13 +20,11 @@ conda activate llm

# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # install dependencies required by llava
pip install transformers==4.36.2

git clone https://github.com/haotian-liu/LLaVA.git # clone the llava library
cp generate.py ./LLaVA/ # copy our example to the LLaVA folder
cd LLaVA # change the working directory to the LLaVA folder
git checkout tags/v1.2.0 -b 1.2.0 # Get the branch which is compatible with transformers 4.36
pip install -e . # Install llava
cd ..
```

On Windows:
@@ -36,13 +34,12 @@ conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
pip install transformers==4.36.2
git clone https://github.com/haotian-liu/LLaVA.git
copy generate.py .\LLaVA\
cd LLaVA
git checkout tags/v1.2.0 -b 1.2.0
pip install -e .
cd ..
```

### 2. Run
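The README steps above pin `transformers==4.36.2` and check out the LLaVA `v1.2.0` tag, which the comments note is the branch compatible with transformers 4.36. A quick post-install sanity check (hypothetical, not part of this commit) could look like this:

```python
# Hypothetical post-install check, not part of this commit: verifies that the
# pinned transformers release and the editable LLaVA install are both importable.
import transformers
import llava  # provided by `pip install -e .` inside the cloned LLaVA repo

assert transformers.__version__.startswith("4.36"), transformers.__version__
print("transformers", transformers.__version__)
print("llava package located at", llava.__file__)
```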
3 changes: 2 additions & 1 deletion python/llm/example/CPU/PyTorch-Models/Model/llava/generate.py
@@ -291,7 +291,8 @@ def get_stopping_criteria(conv, tokenizer, input_ids):
 # Load model
 tokenizer, model, image_processor, _ = load_pretrained_model(model_path=model_path,
                                                              model_base=None,
-                                                             model_name=model_name)
+                                                             model_name=model_name,
+                                                             device_map=None)

 # With only one line to enable IPEX-LLM optimization on model
 model = optimize_model(model)
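The one-line change above passes `device_map=None` to LLaVA's `load_pretrained_model`, so the weights stay on the CPU instead of being dispatched to CUDA devices that do not exist on a CPU-only machine, and the loaded model can then be handed to IPEX-LLM's `optimize_model`. A minimal sketch of that loading path, assuming LLaVA v1.2.0 and `ipex-llm[all]` are installed as in the README and using a placeholder checkpoint path, might look like:

```python
# Minimal sketch of the CPU loading path touched by this commit (assumptions:
# LLaVA v1.2.0 installed with `pip install -e .`, ipex-llm[all] installed,
# and a placeholder checkpoint path; adapt model_path to your own checkpoint).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from ipex_llm import optimize_model

model_path = "liuhaotian/llava-v1.5-7b"           # placeholder checkpoint
model_name = get_model_name_from_path(model_path)

# device_map=None keeps the weights on the CPU instead of letting accelerate
# try to place them on a CUDA device that is absent on a CPU-only machine.
tokenizer, model, image_processor, _ = load_pretrained_model(model_path=model_path,
                                                              model_base=None,
                                                              model_name=model_name,
                                                              device_map=None)

# With only one line to enable IPEX-LLM optimization on the model
model = optimize_model(model)
```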
