From 29ad553f158dff7f99901dc35bf2ab5d3f9215ac Mon Sep 17 00:00:00 2001
From: SONG Ge
Date: Fri, 13 Dec 2024 15:58:37 +0800
Subject: [PATCH 1/2] Update ipex-llm ollama readme to v0.4.6

---
 docs/mddocs/Quickstart/ollama_quickstart.md       | 5 ++++-
 docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index f3f44b23dd0..3264427977b 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -19,7 +19,7 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 > [!NOTE]
 > `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
 >
-> Our current version is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
+> Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
 
 > [!NOTE]
 > Starting from `ipex-llm[cpp]==2.2.0b20240912`, oneAPI dependency of `ipex-llm[cpp]` on Windows will switch from `2024.0.0` to `2024.2.1`.
@@ -80,6 +80,7 @@ You may launch the Ollama service as below:
   export ZES_ENABLE_SYSMAN=1
   source /opt/intel/oneapi/setvars.sh
   export SYCL_CACHE_PERSISTENT=1
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
   # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
@@ -177,6 +178,8 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
 
 - For **Linux users**:
 
   ```bash
+  source /opt/intel/oneapi/setvars.sh
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   export no_proxy=localhost,127.0.0.1
   ./ollama create example -f Modelfile
   ./ollama run example
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
index 62bc07462b4..74e37cfa436 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -19,7 +19,7 @@
 > [!NOTE]
 > `ipex-llm[cpp]==2.2.0b20240826` 版本与官方 ollama 版本 [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) 一致。
 >
-> `ipex-llm[cpp]` 的最新版本与官方 ollama 版本 [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) 一致。
+> `ipex-llm[cpp]` 的最新版本与官方 ollama 版本 [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) 一致。
 
 > [!NOTE]
 > 从 `ipex-llm[cpp]==2.2.0b20240912` 版本开始,Windows 上 `ipex-llm[cpp]` 依赖的 oneAPI 版本已从 `2024.0.0` 更新到 `2024.2.1`。
@@ -80,6 +80,7 @@ IPEX-LLM 现在已支持在 Linux 和 Windows 系统上运行 `Ollama`。
   export ZES_ENABLE_SYSMAN=1
   source /opt/intel/oneapi/setvars.sh
   export SYCL_CACHE_PERSISTENT=1
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
   export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
   # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
@@ -174,6 +175,8 @@ PARAMETER num_predict 64
 
   ```bash
   export no_proxy=localhost,127.0.0.1
+  source /opt/intel/oneapi/setvars.sh
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   ./ollama create example -f Modelfile
   ./ollama run example
   ```

From 5f8b0b6d5390e4d9975fc47eb1c586d4b3a9bcdc Mon Sep 17 00:00:00 2001
From: SONG Ge
Date: Fri, 13 Dec 2024 16:19:10 +0800
Subject: [PATCH 2/2] meet comments

---
 docs/mddocs/Quickstart/ollama_quickstart.md       | 2 +-
 docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/mddocs/Quickstart/ollama_quickstart.md b/docs/mddocs/Quickstart/ollama_quickstart.md
index 3264427977b..9cc09da9069 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.md
@@ -17,7 +17,7 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 
 > [!NOTE]
-> `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
+> `ipex-llm[cpp]==2.2.0b20241204` is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
 >
 > Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
diff --git a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
index 74e37cfa436..bc84cf0f448 100644
--- a/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
+++ b/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -17,7 +17,7 @@
 
 > [!NOTE]
-> `ipex-llm[cpp]==2.2.0b20240826` 版本与官方 ollama 版本 [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) 一致。
+> `ipex-llm[cpp]==2.2.0b20241204` 版本与官方 ollama 版本 [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) 一致。
 >
 > `ipex-llm[cpp]` 的最新版本与官方 ollama 版本 [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) 一致。
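The two patches above keep adding the same pair of setup lines (`source setvars.sh`, then prepend `.` to `LD_LIBRARY_PATH`) in front of each ollama command. A minimal sketch of the combined environment setup they document is below — the oneAPI install path and the in-place ollama binary are assumptions taken from the patched docs, and the guard around `setvars.sh` is added here so the script degrades gracefully on machines without oneAPI:

```shell
# Hedged sketch of the launch environment the patched quickstart describes.
# Assumes a default oneAPI install and an ollama binary in the current directory.
export ZES_ENABLE_SYSMAN=1
export SYCL_CACHE_PERSISTENT=1

# Source oneAPI only if it is installed at the default location (guard added here,
# not present in the docs, so the sketch also runs on machines without oneAPI).
if [ -f /opt/intel/oneapi/setvars.sh ]; then
  . /opt/intel/oneapi/setvars.sh
fi

# Prepend the current directory so the ollama runtime libraries shipped next to
# the binary are found before any system-wide copies.
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
export no_proxy=localhost,127.0.0.1

# Then, per the docs (commented out here since they need the actual binary):
# ./ollama serve
# ./ollama create example -f Modelfile && ./ollama run example
```

Note the order matters: `setvars.sh` overwrites `LD_LIBRARY_PATH`, so the `.` prefix must be added after sourcing it, which is exactly the ordering the patches enforce in each snippet.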