From dfac168d5f03e54a2afcd1688c6961b4e2ead8b9 Mon Sep 17 00:00:00 2001
From: Guancheng Fu <110874468+gc-fu@users.noreply.github.com>
Date: Fri, 17 May 2024 16:52:17 +0800
Subject: [PATCH] fix format/typo (#11067)

---
 docs/readthedocs/source/doc/LLM/Quickstart/index.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/index.rst b/docs/readthedocs/source/doc/LLM/Quickstart/index.rst
index 92cd6d94826..2e82acde52a 100644
--- a/docs/readthedocs/source/doc/LLM/Quickstart/index.rst
+++ b/docs/readthedocs/source/doc/LLM/Quickstart/index.rst
@@ -24,7 +24,7 @@ This section includes efficient guide to show you how to:
 * `Run Ollama with IPEX-LLM on Intel GPU <./ollama_quickstart.html>`_
 * `Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM <./llama3_llamacpp_ollama_quickstart.html>`_
 * `Run IPEX-LLM Serving with FastChat <./fastchat_quickstart.html>`_
-* `Run IPEX-LLM Serving wit vLLM on Intel GPU<./vLLM_quickstart.html>`_
+* `Run IPEX-LLM Serving with vLLM on Intel GPU <./vLLM_quickstart.html>`_
 * `Finetune LLM with Axolotl on Intel GPU <./axolotl_quickstart.html>`_
 * `Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi <./deepspeed_autotp_fastapi_quickstart.html>`_