[Ollama] Update ipex-llm ollama readme to v0.4.6 #12542

Merged 2 commits on Dec 13, 2024. Changes shown are from 1 commit.
5 changes: 4 additions & 1 deletion docs/mddocs/Quickstart/ollama_quickstart.md
@@ -19,7 +19,7 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
> [!NOTE]
> `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
Contributor:

Please also update the old version here.

>
-> Our current version is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
+> Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.

> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI dependency of `ipex-llm[cpp]` on Windows will switch from `2024.0.0` to `2024.2.1`.
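Since each `ipex-llm[cpp]` build tracks a specific ollama release, moving to the build matching v0.4.6 is done through pip. A minimal sketch; the exact command form is an assumption based on the broader quickstart, not shown in this diff:

```bash
# Assumed upgrade command (not part of this diff): pull the latest
# pre-release build of ipex-llm's llama.cpp/ollama backend.
# The quotes keep zsh and other shells from glob-expanding the brackets.
pip install --pre --upgrade 'ipex-llm[cpp]'
```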
@@ -80,6 +80,7 @@ You may launch the Ollama service as below:
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
Contributor:

Why do we need this?

Contributor:

This doesn't look very user-friendly. It would be better if we could hide it from users.

Contributor (author):

This is needed to link against the shared libraries when running ollama. I can try to hide it by modifying the compilation method.
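One conventional way to "hide" the export, as suggested above, is to embed a run-time search path at link time so the binary finds its shared libraries next to itself. A hedged sketch using the standard GNU linker `$ORIGIN` mechanism; the object and library names are placeholders, not taken from this PR:

```bash
# Sketch: embed a run-time library search path relative to the executable
# ($ORIGIN) so users no longer need `export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH`.
# `main.o` and `libexample.so` are illustrative placeholders.
g++ main.o -L. -lexample -Wl,-rpath,'$ORIGIN' -o ollama

# Verify the embedded entry (readelf is part of GNU binutils):
readelf -d ollama | grep -Ei 'rpath|runpath'
```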

# [optional] in most cases the following environment variable may improve performance, but it can sometimes cause performance degradation instead
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits the GPUs used, which may improve performance
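Putting the pieces of this hunk together, a Linux launch might look like the sketch below. The final `./ollama serve` line and the `ONEAPI_DEVICE_SELECTOR` value are assumptions based on common ollama/SYCL usage; neither is shown in this diff:

```bash
# Consolidated launch sketch (assumes the ollama binary and its shared
# libraries sit in the current directory).
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH

# [optional] may improve performance, can occasionally degrade it:
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1

# [assumption] a typical way to pin a single GPU with the SYCL runtime:
# export ONEAPI_DEVICE_SELECTOR=level_zero:0

./ollama serve   # assumed serve command; not shown in this diff
```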
@@ -177,6 +178,8 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
- For **Linux users**:

```bash
+source /opt/intel/oneapi/setvars.sh
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
export no_proxy=localhost,127.0.0.1
./ollama create example -f Modelfile
./ollama run example
```
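For concreteness, an end-to-end sketch of the create-and-run step might look like this; the GGUF file name is purely illustrative, while `PARAMETER num_predict 64` is taken from the zh-CN hunk below:

```bash
# Hypothetical Modelfile plus the create/run commands from this hunk.
source /opt/intel/oneapi/setvars.sh
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
export no_proxy=localhost,127.0.0.1

cat > Modelfile <<'EOF'
FROM ./example-model-Q4_K_M.gguf
PARAMETER num_predict 64
EOF

./ollama create example -f Modelfile
./ollama run example
```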
5 changes: 4 additions & 1 deletion docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -19,7 +19,7 @@
> [!NOTE]
> `ipex-llm[cpp]==2.2.0b20240826` is consistent with the official ollama release [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39).
Contributor:

Please also update the old version here.

>
-> The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6).
+> The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6).

> [!NOTE]
> Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI dependency of `ipex-llm[cpp]` on Windows has been updated from `2024.0.0` to `2024.2.1`.
@@ -80,6 +80,7 @@ IPEX-LLM now supports running `Ollama` on both Linux and Windows systems.
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
# [optional] in most cases the following environment variable may improve performance, but it can sometimes cause performance degradation instead
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
# [optional] if you want to run on a single GPU, the command below limits the GPUs used, which may improve performance
@@ -174,6 +175,8 @@ PARAMETER num_predict 64

```bash
export no_proxy=localhost,127.0.0.1
+source /opt/intel/oneapi/setvars.sh
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
./ollama create example -f Modelfile
./ollama run example
```