[Doc] Update ipex-llm ollama troubleshooting for v0.4.6 #12642

Merged: 3 commits, Jan 2, 2025
Changes from 2 commits
7 changes: 7 additions & 0 deletions docs/mddocs/Quickstart/ollama_quickstart.md
@@ -78,6 +78,7 @@ You may launch the Ollama service as below:
export OLLAMA_NUM_GPU=999
export no_proxy=localhost,127.0.0.1
export ZES_ENABLE_SYSMAN=1

source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
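# Background on the variables above: OLLAMA_NUM_GPU=999 asks Ollama to
# offload all model layers to the Intel GPU, no_proxy keeps local
# connections from going through a proxy, ZES_ENABLE_SYSMAN=1 enables
# the Level Zero Sysman API used for GPU device queries, and
# SYCL_CACHE_PERSISTENT=1 caches compiled SYCL kernels on disk so
# subsequent launches start faster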
# [optional] under most circumstances, the following environment variable may improve performance, but it can sometimes cause performance degradation instead
@@ -227,3 +228,9 @@ If you meet this error, please check your Linux kernel version first. You may en

#### 8. Save GPU memory by specifying `OLLAMA_NUM_PARALLEL=1`
If you have limited GPU memory, use `set OLLAMA_NUM_PARALLEL=1` on Windows or `export OLLAMA_NUM_PARALLEL=1` on Linux before `ollama serve` to reduce GPU usage; a minimal example follows. The default `OLLAMA_NUM_PARALLEL` in upstream Ollama is 4.
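A minimal sketch of this on Linux (assuming the `ollama` binary is in the current directory; adjust the path to your setup):

```bash
# Allow only one concurrent request; fewer parallel slots means a
# smaller per-model context allocation, reducing GPU memory usage
export OLLAMA_NUM_PARALLEL=1

# Start the Ollama service with the lowered parallelism
./ollama serve
```

On Windows, the equivalent is `set OLLAMA_NUM_PARALLEL=1` followed by `ollama serve` in the same terminal.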

#### 9. `cannot open shared object file` error when executing `ollama serve`
Contributor:
I feel it's the same issue as 3?

Contributor:
Can you just reuse 3 instead of creating a new one? Or maybe we just modify 3?

Contributor Author (@sgwhat, Jan 2, 2025):
I think these two errors have different manifestations and are specific to different versions; keeping section 3 could be helpful for users who are still running earlier versions.

Contributor:
Please also update the related Chinese version.

When executing `ollama serve` and `ollama run <model_name>`, if you meet `./ollama: error while loading shared libraries: libsvml.so: cannot open shared object file: No such file or directory` on Linux, or if `ollama serve` and `ollama run <model_name>` show no response on Windows, this is most likely caused by missing SYCL dependencies. Please check:

1. on Windows: that you have installed conda and are in the right conda environment, i.e. one with the oneAPI dependencies installed via pip
2. on Linux: that you have executed `source /opt/intel/oneapi/setvars.sh` before running both `./ollama serve` and `./ollama run <model_name>` (see the sketch after this list)
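
For the Linux case, a minimal sketch of the expected launch sequence looks like the following (the model name `llama3` is only an example; substitute your own):

```bash
# Load the oneAPI environment so the dynamic linker can find
# libsvml.so and the other SYCL runtime libraries
source /opt/intel/oneapi/setvars.sh

# Start the Ollama service from the directory containing the binary
./ollama serve

# In a second terminal, source the same environment again before
# running a model:
#   source /opt/intel/oneapi/setvars.sh
#   ./ollama run llama3
```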