
Commit

Merge pull request #48 from langchain-ai/mattf/dev-v0.1
update to 0.1, remove deprecated functionality and focus on api catalog backend
mattf authored May 31, 2024
2 parents 53c7448 + e88df10 commit 2719ca5
Showing 25 changed files with 955 additions and 1,467 deletions.
4 changes: 2 additions & 2 deletions libs/ai-endpoints/README.md
@@ -221,7 +221,7 @@ for txt in chain.stream({"input": "Why is a PB&J?"}):

NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over.

- An example model supporting multimodal inputs is `ai-neva-22b`.
+ An example model supporting multimodal inputs is `nvidia/neva-22b`.

These models accept LangChain's standard image formats. Below are examples.

@@ -237,7 +237,7 @@ Initialize the model like so:
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

-llm = ChatNVIDIA(model="ai-neva-22b")
+llm = ChatNVIDIA(model="nvidia/neva-22b")
```

#### Passing an image as a URL
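A minimal sketch of what the image-as-URL payload looks like in LangChain's standard multi-part message format. The URL here is a hypothetical placeholder, and the commented-out invocation assumes the `ChatNVIDIA` instance initialized above plus a valid `NVIDIA_API_KEY` in the environment:

```python
# LangChain's standard image-as-URL content format: a list mixing
# text parts and image_url parts inside a single human message.
image_url = "https://example.com/picture.png"  # hypothetical placeholder URL

multimodal_content = [
    {"type": "text", "text": "Describe what you see in this image."},
    {"type": "image_url", "image_url": {"url": image_url}},
]

# With the llm initialized above, the call would look like:
# from langchain_core.messages import HumanMessage
# response = llm.invoke([HumanMessage(content=multimodal_content)])
```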
@@ -451,7 +451,7 @@
" ]\n",
")\n",
"\n",
-"model = ChatNVIDIA(model=\"ai-mixtral-8x7b-instruct\")\n",
+"model = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")\n",
"\n",
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
Expand Down
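For context, the `{"context": retriever, "question": RunnablePassthrough()}` step in the chain above fans a single input out into a dict: each value in the mapping is applied to the same input. A stdlib-only sketch of that behavior, with a hypothetical `fake_retriever` standing in for a real vector-store retriever:

```python
def fake_retriever(question: str) -> list[str]:
    # Stand-in for a real retriever that would fetch relevant documents.
    return [f"doc about {question}"]

def passthrough(x):
    # Mirrors RunnablePassthrough: returns the input unchanged.
    return x

def map_step(question: str) -> dict:
    # Each value in the mapping receives the same input, producing
    # the dict that the prompt template consumes downstream.
    return {
        "context": fake_retriever(question),
        "question": passthrough(question),
    }

result = map_step("mixtral")
```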
