[Ollama] Update ipex-llm ollama readme to v0.4.6 (#12542)
* Update ipex-llm ollama readme to v0.4.6
sgwhat authored Dec 13, 2024
1 parent d20a968 commit 5402fc6
Showing 2 changed files with 10 additions and 4 deletions.
7 changes: 5 additions & 2 deletions docs/mddocs/Quickstart/ollama_quickstart.md
@@ -17,9 +17,9 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
</table>

> [!NOTE]
- > `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
+ > `ipex-llm[cpp]==2.2.0b20241204` is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
>
- > Our current version is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
+ > Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI dependency of `ipex-llm[cpp]` on Windows will switch from `2024.0.0` to `2024.2.1`.
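
Since each `ipex-llm[cpp]` build tracks a specific ollama release, you can pin the build matching the ollama version you need. A minimal sketch, assuming the pip nightly builds named in the note above (quote the requirement so the shell does not expand the brackets):

```bash
# Latest nightly build of the C++ backend; tracks ollama v0.4.6 as of this commit
pip install --pre --upgrade 'ipex-llm[cpp]'

# Pin the older build noted above if you need binaries consistent with ollama v0.3.6
pip install --pre 'ipex-llm[cpp]==2.2.0b20241204'
```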
@@ -80,6 +80,7 @@ You may launch the Ollama service as below:
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
+ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
# [optional] under most circumstances, the following environment variable may improve performance, but it can sometimes cause performance degradation instead
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits Ollama to one GPU, which may improve performance
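
Putting the hunk above together, a full Linux launch sequence might look like the sketch below; `OLLAMA_NUM_GPU=999` and the `./ollama serve` invocation follow the pattern used elsewhere in this quickstart and are assumptions here, not part of this diff:

```bash
# Sketch: launch the Ollama service with the environment from the hunk above.
# Assumes ./ollama was set up per the earlier steps of this quickstart.
export OLLAMA_NUM_GPU=999                  # assumed: offload all layers to the GPU
export no_proxy=localhost,127.0.0.1
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH  # the line this commit adds
./ollama serve
```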
@@ -177,6 +178,8 @@ Then you can create the model in Ollama by running `ollama create example -f Modelfile`
- For **Linux users**:

```bash
+ source /opt/intel/oneapi/setvars.sh
+ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
export no_proxy=localhost,127.0.0.1
./ollama create example -f Modelfile
./ollama run example
```
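
For context, the `Modelfile` consumed by these commands is a plain-text spec; a hypothetical minimal example (the GGUF path is a placeholder, and `num_predict 64` is taken from the zh-CN hunk below) could be written inline like this:

```bash
# Hypothetical Modelfile: replace the FROM path with a real GGUF file on disk
cat > Modelfile <<'EOF'
FROM ./mistral-7b-instruct-v0.1.Q4_K_M.gguf
PARAMETER num_predict 64
EOF
```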
7 changes: 5 additions & 2 deletions docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -17,9 +17,9 @@
</table>

> [!NOTE]
- > `ipex-llm[cpp]==2.2.0b20240826` is consistent with the official ollama release [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39).
+ > `ipex-llm[cpp]==2.2.0b20241204` is consistent with the official ollama release [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6).
>
- > The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6).
+ > The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6).
> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI version that `ipex-llm[cpp]` depends on under Windows has been updated from `2024.0.0` to `2024.2.1`.
@@ -80,6 +80,7 @@ IPEX-LLM now supports running `Ollama` on both Linux and Windows.
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
+ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
# [optional] under most circumstances, the following environment variable may improve performance, but it can sometimes cause performance degradation instead
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits Ollama to one GPU, which may improve performance
@@ -174,6 +175,8 @@ PARAMETER num_predict 64

```bash
export no_proxy=localhost,127.0.0.1
+ source /opt/intel/oneapi/setvars.sh
+ export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
./ollama create example -f Modelfile
./ollama run example
```
