From 2ec45c49d38994b8203d39d4ae2665a8a786342d Mon Sep 17 00:00:00 2001
From: Ruonan Wang
Date: Mon, 22 Apr 2024 22:04:49 +0800
Subject: [PATCH] fix ollama quickstart(#10846)

---
 docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
index a043aa747b3..b911c5d6c51 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/ollama_quickstart.md
@@ -81,6 +81,7 @@ You may launch the Ollama service as below:
 Please set environment variable ``OLLAMA_NUM_GPU`` to ``999`` to make sure all layers of your model are running on Intel GPU, otherwise, some layers may run on CPU.
 ```
+```eval_rst
 .. note::

   To allow the service to accept connections from all IP addresses, use `OLLAMA_HOST=0.0.0.0 ./ollama serve` instead of just `./ollama serve`.
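The patched note describes how the Ollama service is launched. A minimal sketch of that launch, assuming the `ollama` binary sits in the current directory as in the quickstart:

```shell
# Offload all model layers to the Intel GPU (999 = "all layers"),
# otherwise some layers may fall back to the CPU.
export OLLAMA_NUM_GPU=999

# Listen on all interfaces so other machines can connect;
# drop this line to keep the default localhost-only binding.
export OLLAMA_HOST=0.0.0.0

./ollama serve
```

This is a configuration sketch, not part of the patch itself; the variable names and values come from the doc text shown in the hunk above.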