docs: fix langchain (vllm-project#2736)
mspronesti authored Feb 4, 2024
1 parent 46a462c commit 240b5b1
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions docs/source/serving/serving_with_langchain.rst
@@ -9,13 +9,13 @@ To install langchain, run

.. code-block:: console
-    $ pip install langchain -q
+    $ pip install langchain langchain_community -q
To run inference on a single or multiple GPUs, use the ``VLLM`` class from ``langchain``.

.. code-block:: python
-    from langchain.llms import VLLM
+    from langchain_community.llms import VLLM
llm = VLLM(model="mosaicml/mpt-7b",
trust_remote_code=True, # mandatory for hf models
@@ -28,4 +28,4 @@ To run inference on a single or multiple GPUs, use ``VLLM`` class from ``langcha
print(llm("What is the capital of France ?"))
-Please refer to this `Tutorial <https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/llms/vllm.ipynb>`_ for more details.
+Please refer to this `Tutorial <https://python.langchain.com/docs/integrations/llms/vllm>`_ for more details.
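Putting the corrected import together with the snippet above, a minimal end-to-end sketch might look as follows. This is illustrative only: it assumes ``langchain_community`` and ``vllm`` are installed, and the ``tensor_parallel_size`` argument (for multi-GPU inference) is an assumption based on the ``langchain_community`` ``VLLM`` wrapper rather than part of this diff. Running it downloads the model weights and requires a GPU.

```python
# Sketch of the corrected import path after this commit.
# Assumes `pip install langchain langchain_community vllm` and a GPU.
from langchain_community.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,   # required for some Hugging Face models
    max_new_tokens=128,
    # tensor_parallel_size=4,  # assumption: enables multi-GPU inference
)

print(llm("What is the capital of France ?"))
```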