langchain[major]: LangChain community #3581
@@ -0,0 +1,92 @@
---
sidebar_position: 0
title: Get started
---

import CodeBlock from "@theme/CodeBlock";
import BasicExample from "@examples/guides/expression_language/get_started/basic.ts";
import BasicPromptExample from "@examples/guides/expression_language/get_started/prompt.ts";
import BasicChatModelExample from "@examples/guides/expression_language/get_started/chat_model.ts";
import BasicLLMModelExample from "@examples/guides/expression_language/get_started/llm_model.ts";
import BasicOutputParserExample from "@examples/guides/expression_language/get_started/output_parser.ts";
import BasicRagExample from "@examples/guides/expression_language/get_started/rag.ts";

# Get started

LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallelism, and logging.

## Basic example: prompt + model + output parser

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:

<CodeBlock language="typescript">{BasicExample}</CodeBlock>

:::tip

[LangSmith trace](https://smith.langchain.com/public/dcac6d79-5254-4889-a974-4b3abaf605b4/r)

:::

Notice that in this line we're chaining our prompt, model, and output parser together:

```typescript
const chain = prompt.pipe(model).pipe(outputParser);
```

The `.pipe()` method chains together any number of runnables, passing the output of one through as the input of the next.

Here, the prompt is passed a `topic`, and when invoked it returns a formatted string with the `{topic}` input variable replaced by the string we passed to the invoke call.
That string is then passed as the input to the model, which returns a `BaseMessage` object. Finally, the output parser takes that `BaseMessage` object and returns its content as a string.
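The data flow described above can be sketched in plain TypeScript. This is a simplified, self-contained illustration of how `.pipe()`-style composition works, not LangChain's actual `Runnable` implementation; the `runnable` helper and the stand-in prompt, model, and parser below are all hypothetical:

```typescript
// A minimal runnable: something invokable that can be piped into another runnable.
interface Runnable<In, Out> {
  invoke(input: In): Promise<Out>;
  pipe<Next>(next: Runnable<Out, Next>): Runnable<In, Next>;
}

// Wrap an async function as a runnable. Piping produces a new runnable that
// invokes this one, then feeds the result into `next`.
function runnable<In, Out>(fn: (input: In) => Promise<Out>): Runnable<In, Out> {
  return {
    invoke: fn,
    pipe(next) {
      return runnable(async (input: In) => next.invoke(await fn(input)));
    },
  };
}

// Stand-ins for the prompt template, chat model, and output parser.
const prompt = runnable(
  async ({ topic }: { topic: string }) => `Human: Tell me a short joke about ${topic}`
);
const model = runnable(async (text: string) => ({ content: `(model answer about) ${text}` }));
const outputParser = runnable(async (msg: { content: string }) => msg.content);

const chain = prompt.pipe(model).pipe(outputParser);
console.log(await chain.invoke({ topic: "ice cream" }));
// -> (model answer about) Human: Tell me a short joke about ice cream
```

Each `.pipe()` call only records the composition; nothing runs until `.invoke()` is called on the final chain.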
### 1. Prompt

`prompt` is a `BasePromptTemplate`, which means it takes in an object of template variables and produces a `PromptValue`.
A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or a `ChatModel` (which takes a sequence of messages as input).
It can work with either language model type because it defines logic both for producing `BaseMessage`s and for producing a string.

<CodeBlock language="typescript">{BasicPromptExample}</CodeBlock>

### 2. Model

The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`.

<CodeBlock language="typescript">{BasicChatModelExample}</CodeBlock>

If our model were an `LLM`, it would output a string.

<CodeBlock language="typescript">{BasicLLMModelExample}</CodeBlock>

### 3. Output parser

Lastly, we pass our `model` output to the `outputParser`, which is a `BaseOutputParser`, meaning it takes either a string or a `BaseMessage` as input. The `StringOutputParser` simply converts any input into a string.

<CodeBlock language="typescript">{BasicOutputParserExample}</CodeBlock>

## RAG Search Example

For our next example, we want to run a retrieval-augmented generation chain that adds some context when responding to questions.

<CodeBlock language="typescript">{BasicRagExample}</CodeBlock>

:::tip

[LangSmith trace](https://smith.langchain.com/public/f0205e20-c46f-47cd-a3a4-6a95451f8a25/r)

:::

In this chain we add some extra logic around retrieving context from a vector store.

We first instantiated our model, vector store, and output parser. Then we defined our prompt, which takes in two input variables:

- `context` -> a string returned from our vector store, based on a semantic search over the input.
- `question` -> the question we want to ask.

Next we created a `setupAndRetriever` runnable. It has two components, which return the values required by our prompt:

- `context` -> a `RunnableLambda` that takes the input from the `.invoke()` call, makes a request to our vector store, and returns the first result.
- `question` -> a `RunnablePassthrough`, which simply passes the input through to the next step; in our case it returns it to the key in the object we defined.

Both of these are wrapped inside a `RunnableMap`. This is a special type of runnable that takes an object of runnables and executes them all in parallel.
It then returns an object with the same keys as the input object, but with the values replaced by the output of the runnables.
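The map step's semantics can be sketched with `Promise.all`. This is a simplified stand-in for the behavior described above, not the real `RunnableMap` class; the `runnableMap` helper and the fake retrieval result are hypothetical:

```typescript
// Run every step in parallel against the same input and return an object
// with the same keys, each value replaced by that step's output.
type Step<T> = (input: T) => Promise<string>;

async function runnableMap<T>(
  steps: Record<string, Step<T>>,
  input: T
): Promise<Record<string, string>> {
  const entries = await Promise.all(
    Object.entries(steps).map(async ([key, step]) => [key, await step(input)] as const)
  );
  return Object.fromEntries(entries);
}

const setupAndRetriever = (question: string) =>
  runnableMap(
    {
      // Stand-in for the RunnableLambda that queries the vector store.
      context: async () => "retrieved document text",
      // Stand-in for RunnablePassthrough: return the input unchanged.
      question: async (q) => q,
    },
    question
  );

console.log(await setupAndRetriever("What is in the document?"));
// -> { context: 'retrieved document text', question: 'What is in the document?' }
```

Because both steps start at the same time, a slow retrieval call does not delay the passthrough branch; the combined result is ready as soon as the slowest step finishes.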

Finally, we pass the output of the `setupAndRetriever` to our `prompt`, and then to our `model` and `outputParser` as before.
@@ -9,6 +9,7 @@
   },
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "langchain": "workspace:*"

> **Review bot:** Hey there! 👋 I noticed that this PR introduces a new dependency, "@langchain/community", with a version specified as "workspace:*". This comment is to flag the dependency change for maintainers to review, especially regarding its type (peer/dev/hard). Keep up the great work!
@@ -19,6 +19,7 @@
   "license": "MIT",
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "d3-dsv": "2",

> **Review bot:** Hey there! 👋 I noticed that this PR adds a new hard dependency "@langchain/community" to the project's package.json. Just flagging this for the maintainers to review. Great work overall! 🚀
@@ -17,6 +17,7 @@
   "license": "MIT",
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "d3-dsv": "2",

> **Review bot:** Hey there! 👋 I noticed that this PR adds a new dependency to the project, "@langchain/community". It's important for maintainers to review this change to ensure it aligns with the project's dependency requirements. Thank you for flagging this for review! 🚀
@@ -20,6 +20,7 @@
   "license": "MIT",
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "d3-dsv": "2",

> **Review bot:** Hey there! I noticed that the addition of "@langchain/community" as a dependency in the package.json file might impact the project's peer dependencies. I've flagged this for your review as it's an important change to the project's dependencies. Keep up the great work!
@@ -10,6 +10,7 @@
   },
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "@types/node": "18.15.11",

> **Review bot:** Hey there! 👋 Just wanted to flag for the maintainers that a new dev dependency, "@langchain/community", has been added in this PR. It's always good to review dependency changes to ensure everything is in order. Thanks!
@@ -11,6 +11,7 @@
   },
   "dependencies": {
     "@langchain/anthropic": "workspace:*",
+    "@langchain/community": "workspace:*",
     "@langchain/core": "workspace:*",
     "@langchain/openai": "workspace:*",
     "langchain": "workspace:*"

> **Review bot:** Hey there! I noticed that the addition of "@langchain/community" as a dependency in the package.json file might impact the project's dependencies. I've flagged this for your review to ensure it aligns with the intended dependency type (peer/dev/hard). Keep up the great work!
@@ -20,11 +20,11 @@ maskingParser.addTransformer(piiMaskingTransformer);

 const input =
   "Contact me at [email protected] or 555-123-4567. Also reach me at [email protected]";
-const masked = await maskingParser.parse(input);
+const masked = await maskingParser.mask(input);

 console.log(masked);
 // Contact me at [email-a31e486e324f6] or [phone-da8fc1584f224]. Also reach me at [email-d5b6237633d95]

-const rehydrated = maskingParser.rehydrate(masked);
+const rehydrated = await maskingParser.rehydrate(masked);
 console.log(rehydrated);
 // Contact me at [email protected] or 555-123-4567. Also reach me at [email protected]
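The diff above renames `parse` to `mask` and awaits `rehydrate`. As a rough illustration of the mask/rehydrate round-trip, here is a self-contained sketch; it is not the real `MaskingParser` API, the regex only handles emails, and the sample addresses are made up:

```typescript
// Replace each email with a placeholder token, remembering the mapping so the
// original text can be restored later.
const lookup = new Map<string, string>();
let counter = 0;

async function mask(input: string): Promise<string> {
  return input.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, (match) => {
    const token = `[email-${counter++}]`;
    lookup.set(token, match);
    return token;
  });
}

// Reverse the substitution using the stored lookup table.
async function rehydrate(masked: string): Promise<string> {
  let out = masked;
  for (const [token, original] of lookup) {
    out = out.split(token).join(original);
  }
  return out;
}

const input = "Contact me at jane@example.com or john@example.com";
const masked = await mask(input);
console.log(masked); // -> Contact me at [email-0] or [email-1]
console.log(await rehydrate(masked)); // restores the original string
```

Both functions are async to mirror the awaited calls in the diff; a real transformer would also handle phone numbers and use unguessable tokens.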
@@ -0,0 +1,20 @@
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChatPromptTemplate } from "langchain/prompts";
import { StringOutputParser } from "langchain/schema/output_parser";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);
const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const response = await chain.invoke({
  topic: "ice cream",
});
console.log(response);
/**
Why did the ice cream go to the gym?
Because it wanted to get a little "cone"ditioning!
*/
@@ -0,0 +1,14 @@
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";

const response = await model.invoke(promptAsString);
console.log(response);
/**
AIMessage {
  content: 'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
*/
@@ -0,0 +1,12 @@
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";

const response = await model.invoke(promptAsString);
console.log(response);
/**
Why did the ice cream go to therapy?

Because it was feeling a little rocky road.
*/
@@ -0,0 +1,12 @@
import { AIMessage } from "langchain/schema";
import { StringOutputParser } from "langchain/schema/output_parser";

const outputParser = new StringOutputParser();
const message = new AIMessage(
  'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!'
);
const parsed = await outputParser.invoke(message);
console.log(parsed);
/**
Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!
*/
@@ -0,0 +1,34 @@
import { ChatPromptTemplate } from "langchain/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);
const promptValue = await prompt.invoke({ topic: "ice cream" });
console.log(promptValue);
/**
ChatPromptValue {
  messages: [
    HumanMessage {
      content: 'Tell me a short joke about ice cream',
      name: undefined,
      additional_kwargs: {}
    }
  ]
}
*/
const promptAsMessages = promptValue.toChatMessages();
console.log(promptAsMessages);
/**
[
  HumanMessage {
    content: 'Tell me a short joke about ice cream',
    name: undefined,
    additional_kwargs: {}
  }
]
*/
const promptAsString = promptValue.toString();
console.log(promptAsString);
/**
Human: Tell me a short joke about ice cream
*/
> **Review bot:** Hey there! I noticed the addition of a new dependency, "@langchain/community", in the package.json file. This seems to be a new peer/dev/hard dependency, and I'm flagging this for the maintainers to review. Great work on the PR!