[Ollama] Update ipex-llm ollama readme to v0.4.6 #12542
Changes from 1 commit
@@ -19,7 +19,7 @@ See the demo of running LLaMA2-7B on Intel Arc GPU below.
 > [!NOTE]
 > `ipex-llm[cpp]==2.2.0b20240826` is consistent with [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39) of ollama.
 >
-> Our current version is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
+> Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.

 > [!NOTE]
 > Starting from `ipex-llm[cpp]==2.2.0b20240912`, oneAPI dependency of `ipex-llm[cpp]` on Windows will switch from `2024.0.0` to `2024.2.1`.
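As the note above indicates, each `ipex-llm[cpp]` build corresponds to a particular ollama release, so pinning an exact build keeps the bundled ollama version fixed. A minimal sketch of that idea, assuming the package is installed from pip as elsewhere in the quickstart (the exact install flags are not part of this PR):

```bash
# Hypothetical example: pin the ipex-llm[cpp] build that matches the ollama
# version you want, instead of always pulling the newest nightly build.
pip install --pre --upgrade "ipex-llm[cpp]==2.2.0b20240826"   # tracks ollama v0.1.39 per the note above

# Or take the latest build, which this PR documents as tracking ollama v0.4.6:
# pip install --pre --upgrade "ipex-llm[cpp]"
```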
@@ -80,6 +80,7 @@ You may launch the Ollama service as below:
 export ZES_ENABLE_SYSMAN=1
 source /opt/intel/oneapi/setvars.sh
 export SYCL_CACHE_PERSISTENT=1
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
Review comments on this line:
- Why do we need this?
- Looks not very friendly for users. It would be better if we could hide this from users.
- This is to link to the shared library when running
 # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
 export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
 # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
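One way to address the reviewers' concern that exporting `LD_LIBRARY_PATH` by hand is unfriendly would be to wrap the environment setup in a small launcher script so users never see it. This is only a sketch of that idea, not part of this PR: the script name `start-ollama.sh` is made up, and it assumes the ipex-llm ollama binary in the current directory is started with `./ollama serve` as in the rest of the quickstart.

```bash
#!/bin/bash
# start-ollama.sh -- hypothetical wrapper that hides the environment setup from users.
# Run it from the directory that contains the ipex-llm ollama binary.
export ZES_ENABLE_SYSMAN=1
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
# Link against the shared libraries shipped next to the binary (the line this PR adds).
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
./ollama serve
```

Users would then run `./start-ollama.sh` instead of exporting the variables themselves.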
@@ -177,6 +178,8 @@ Then you can create the model in Ollama by `ollama create example -f Modelfile`
 - For **Linux users**:

   ```bash
+  source /opt/intel/oneapi/setvars.sh
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   export no_proxy=localhost,127.0.0.1
   ./ollama create example -f Modelfile
   ./ollama run example
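The `Modelfile` referenced by these commands is not shown in this hunk. Below is a minimal, hypothetical sketch of one, written via a shell heredoc; the GGUF path is a placeholder, and only `PARAMETER num_predict 64` is taken from hunk context elsewhere in this diff.

```bash
# Hypothetical Modelfile for the `./ollama create example -f Modelfile` step above.
# The GGUF path is a placeholder; point it at a model file you have locally.
cat > Modelfile <<'EOF'
FROM ./my-model-q4_k_m.gguf
PARAMETER num_predict 64
EOF
```

`FROM` and `PARAMETER` are standard Ollama Modelfile directives; after writing the file, `./ollama create example -f Modelfile` registers the model under the name `example`.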
The corresponding changes in the Chinese version of the README:
@@ -19,7 +19,7 @@
 > [!NOTE]
 > `ipex-llm[cpp]==2.2.0b20240826` is consistent with the official ollama release [v0.1.39](https://github.com/ollama/ollama/releases/tag/v0.1.39).

Review comment on this line:
- Please also update the old version here.

 >
-> The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6).
+> The latest version of `ipex-llm[cpp]` is consistent with the official ollama release [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6).

 > [!NOTE]
 > Starting from `ipex-llm[cpp]==2.2.0b20240912`, the oneAPI dependency of `ipex-llm[cpp]` on Windows has been updated from `2024.0.0` to `2024.2.1`.
@@ -80,6 +80,7 @@ IPEX-LLM now supports running `Ollama` on both Linux and Windows systems.
 export ZES_ENABLE_SYSMAN=1
 source /opt/intel/oneapi/setvars.sh
 export SYCL_CACHE_PERSISTENT=1
+export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
 # [optional] under most circumstances, the following environment variable may improve performance, but sometimes this may also cause performance degradation
 export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
 # [optional] if you want to run on single GPU, use below command to limit GPU may improve performance
@@ -174,6 +175,8 @@ PARAMETER num_predict 64

   ```bash
   export no_proxy=localhost,127.0.0.1
+  source /opt/intel/oneapi/setvars.sh
+  export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
   ./ollama create example -f Modelfile
   ./ollama run example
   ```