forked from langchain-ai/langchainjs

Commit b8aea27 (parent 398aea1)
feat: New conceptual docs (langchain-ai#7068)

Showing 316 changed files with 56,119 additions and 53,875 deletions.
# Agents

By themselves, language models can't take actions - they just output text. Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions.

[LangGraph](/docs/concepts/architecture#langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. We recommend that you use LangGraph for building agents.

Please see the following resources for more information:

- LangGraph docs on [common agent architectures](https://langchain-ai.github.io/langgraphjs/concepts/agentic_concepts/)
- [Pre-built agents in LangGraph](https://langchain-ai.github.io/langgraphjs/reference/functions/langgraph_prebuilt.createReactAgent.html)
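The reasoning loop described above can be sketched in plain TypeScript, independent of the LangChain or LangGraph APIs: the model proposes either a tool call or a final answer, and the runtime executes tools and feeds results back until a final answer arrives. All names here (`ReasoningEngine`, the scripted "model", the `calculator` tool) are invented for illustration.

```typescript
// A minimal sketch of an agent loop: an LLM decides actions, the runtime executes them.
// Illustrative plain TypeScript; not the LangChain/LangGraph API.
type ModelDecision =
  | { type: "tool_call"; tool: string; input: string }
  | { type: "final_answer"; answer: string };

// Stand-in for a real chat model acting as the reasoning engine.
type ReasoningEngine = (observations: string[]) => ModelDecision;

function runAgent(
  task: string,
  model: ReasoningEngine,
  tools: Record<string, (input: string) => string>,
  maxSteps = 5
): string {
  const observations: string[] = [`task: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const decision = model(observations);
    if (decision.type === "final_answer") {
      return decision.answer;
    }
    // Execute the requested tool and feed the result back to the model.
    const result = tools[decision.tool](decision.input);
    observations.push(`${decision.tool} returned: ${result}`);
  }
  return "Stopped: step limit reached.";
}

// Example: a scripted "model" that calls a toy tool once, then answers from its result.
const answer = runAgent(
  "What is 2 + 2?",
  (obs) =>
    obs.some((o) => o.startsWith("calculator returned"))
      ? { type: "final_answer", answer: obs[obs.length - 1].split(": ")[1] }
      : { type: "tool_call", tool: "calculator", input: "2 + 2" },
  { calculator: () => "4" } // toy tool with a hard-coded result
);
```

A real agent replaces the scripted model with an actual LLM call and the toy tool with real tools, which is exactly what the pre-built LangGraph agents linked above manage for you.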
## Legacy agent concept: AgentExecutor

LangChain previously introduced `AgentExecutor` as a runtime for agents.
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.

### Transitioning from AgentExecutor to LangGraph

If you're currently using `AgentExecutor`, don't worry! We've prepared resources to help you:

1. For those who still need to use `AgentExecutor`, we offer a comprehensive guide on [how to use AgentExecutor](/docs/how_to/agent_executor).

2. However, we strongly recommend transitioning to LangGraph for improved flexibility and control. To facilitate this transition, we've created a detailed [migration guide](/docs/how_to/migrate_agent) to help you move from `AgentExecutor` to LangGraph seamlessly.
import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";

# Architecture

LangChain is a framework that consists of a number of packages.

<ThemedImage
  alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
  sources={{
    light: useBaseUrl("/svg/langchain_stack_062024.svg"),
    dark: useBaseUrl("/svg/langchain_stack_062024_dark.svg"),
  }}
  title="LangChain Framework Overview"
  style={{ width: "100%" }}
/>

## @langchain/core

This package contains base abstractions for different components and ways to compose them together.
The interfaces for core components like chat models, vector stores, tools, and more are defined here.
No third-party integrations are defined here, and the dependencies are kept very lightweight.

## langchain

The main `langchain` package contains chains and retrieval strategies that make up an application's cognitive architecture.
These are NOT third-party integrations.
All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.

## Integration packages

Popular integrations have their own packages (e.g. `@langchain/openai`, `@langchain/anthropic`, etc.) so that they can be properly versioned and appropriately lightweight.

For more information see:

- A list of [integration packages](/docs/integrations/platforms/)
- The [API Reference](https://api.js.langchain.com/), where you can find detailed information about each integration package.

## @langchain/community

This package contains third-party integrations that are maintained by the LangChain community.
Key integration packages are separated out (see above).
It contains integrations for various components (chat models, vector stores, tools, etc.).
All dependencies in this package are optional to keep the package as lightweight as possible.

## @langchain/langgraph

`@langchain/langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

LangGraph exposes high-level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.
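To make the "steps as nodes and edges" idea concrete, here is a toy illustration in plain TypeScript: each node transforms a shared state, and each edge inspects the state to pick the next node. This is a sketch of the concept only; node names and the `runGraph` helper are invented, not the `@langchain/langgraph` API.

```typescript
// Toy illustration of a stateful graph: nodes transform shared state,
// edges choose the next node. Not the @langchain/langgraph API.
interface State {
  messages: string[];
}

type GraphNode = (state: State) => State;

const nodes: Record<string, GraphNode> = {
  model: (s) => ({ messages: [...s.messages, "model: drafting an answer"] }),
  tools: (s) => ({ messages: [...s.messages, "tools: search results"] }),
};

// Edges: given the current state, decide where to go next ("end" stops the run).
const edges: Record<string, (s: State) => string> = {
  model: (s) => (s.messages.some((m) => m.startsWith("tools:")) ? "end" : "tools"),
  tools: () => "model",
};

function runGraph(entry: string, state: State): State {
  let current = entry;
  while (current !== "end") {
    state = nodes[current](state);
    current = edges[current](state);
  }
  return state;
}

// Flow for this input: model -> tools -> model -> end
const finalState = runGraph("model", { messages: ["user: hi"] });
```

Because state and control flow are explicit, the same structure supports cycles (model calls tools, tools return to model), which is what makes graph-based agents more controllable than a fixed chain.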
:::info[Further reading]

- See our LangGraph overview [here](https://langchain-ai.github.io/langgraphjs/concepts/high_level/#core-principles).
- See our LangGraph Academy Course [here](https://academy.langchain.com/courses/intro-to-langgraph).

:::

## LangSmith

A developer platform that lets you debug, test, evaluate, and monitor LLM applications.

For more information, see the [LangSmith documentation](https://docs.smith.langchain.com).
# Callbacks

:::note Prerequisites

- [Runnable interface](/docs/concepts/runnables)

:::

LangChain provides a callback system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.

## Callback events

| Event            | Event Trigger                                | Associated Method      |
| ---------------- | -------------------------------------------- | ---------------------- |
| Chat model start | When a chat model starts                     | `handleChatModelStart` |
| LLM start        | When an LLM starts                           | `handleLLMStart`       |
| LLM new token    | When an LLM or chat model emits a new token  | `handleLLMNewToken`    |
| LLM end          | When an LLM or chat model ends               | `handleLLMEnd`         |
| LLM error        | When an LLM or chat model errors             | `handleLLMError`       |
| Chain start      | When a chain starts running                  | `handleChainStart`     |
| Chain end        | When a chain ends                            | `handleChainEnd`       |
| Chain error      | When a chain errors                          | `handleChainError`     |
| Tool start       | When a tool starts running                   | `handleToolStart`      |
| Tool end         | When a tool ends                             | `handleToolEnd`        |
| Tool error       | When a tool errors                           | `handleToolError`      |
| Retriever start  | When a retriever starts                      | `handleRetrieverStart` |
| Retriever end    | When a retriever ends                        | `handleRetrieverEnd`   |
| Retriever error  | When a retriever errors                      | `handleRetrieverError` |
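As a sketch of how a handler can look, the object below implements two of the methods from the table, and a tiny `emit` function stands in for what LangChain's callback manager does internally (calling each registered handler's method, if present, when an event fires). The method signatures here are deliberately simplified; the real interface passes additional arguments such as run IDs.

```typescript
// Sketch of a callback handler implementing two events from the table,
// with a tiny stand-in for the callback manager's dispatch logic.
// Simplified signatures for illustration; the real interface passes more arguments.
interface SimplifiedHandler {
  handleLLMStart?: (prompt: string) => void;
  handleLLMNewToken?: (token: string) => void;
  handleLLMEnd?: (output: string) => void;
}

const collected: string[] = [];

const handler: SimplifiedHandler = {
  handleLLMStart: (prompt) => collected.push(`start: ${prompt}`),
  handleLLMNewToken: (token) => collected.push(token),
};

// Stand-in for the callback manager: call each handler's method if implemented.
function emit<K extends keyof SimplifiedHandler>(
  handlers: SimplifiedHandler[],
  event: K,
  arg: string
): void {
  for (const h of handlers) {
    h[event]?.(arg);
  }
}

emit([handler], "handleLLMStart", "Tell me a joke");
for (const token of ["Why", " did", "..."]) {
  emit([handler], "handleLLMNewToken", token);
}
emit([handler], "handleLLMEnd", "done"); // handleLLMEnd not implemented: safely skipped
```

Implementing only the methods you care about is the usual pattern: a streaming UI might implement just `handleLLMNewToken`, while a logger implements the start/end/error events.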
## Callback handlers

Callback handlers implement the [BaseCallbackHandler](https://api.js.langchain.com/classes/_langchain_core.callbacks_base.BaseCallbackHandler.html) interface.

At run time, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://api.js.langchain.com/classes/_langchain_core.callbacks_manager.BaseCallbackManager.html)), which is responsible for calling the appropriate method on each "registered" callback handler when an event is triggered.

## Passing callbacks

The `callbacks` property is available on most objects throughout the API (models, tools, agents, etc.) in two different places:

- **Request time callbacks**: Passed at the time of the request in addition to the input data.
  Available on all standard `Runnable` objects. These callbacks are INHERITED by all children
  of the object they are defined on. For example, `await chain.invoke({ number: 25 }, { callbacks: [handler] })`.
- **Constructor callbacks**: `const chain = new TheNameOfSomeChain({ callbacks: [handler] })`. These callbacks
  are passed as arguments to the constructor of the object. The callbacks are scoped
  only to the object they are defined on, and are **not** inherited by any children of the object.

:::warning

Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children
of the object.

:::

If you're creating a custom chain or runnable, remember to propagate request time
callbacks to any child objects.

For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
# Chat history

:::info Prerequisites

- [Messages](/docs/concepts/messages)
- [Chat models](/docs/concepts/chat_models)
- [Tool calling](/docs/concepts/tool_calling)

:::

Chat history is a record of the conversation between the user and the chat model. It is used to maintain context and state throughout the conversation. The chat history is a sequence of [messages](/docs/concepts/messages), each of which is associated with a specific [role](/docs/concepts/messages#role), such as "user", "assistant", "system", or "tool".

## Conversation patterns

![Conversation patterns](/img/conversation_patterns.png)

Most conversations start with a **system message** that sets the context for the conversation. This is followed by a **user message** containing the user's input, and then an **assistant message** containing the model's response.

The **assistant** may respond directly to the user or, if configured with tools, request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.

So a full conversation often involves a combination of two patterns of alternating messages:

1. The **user** and the **assistant** representing a back-and-forth conversation.
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.

## Managing chat history

Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models#context_window).

While processing chat history, it's essential to preserve a correct conversation structure.

Key guidelines for managing chat history:

- The conversation should follow one of these structures:
  - The first message is either a "user" message or a "system" message, followed by a "user" and then an "assistant" message.
  - The last message should be either a "user" message or a "tool" message containing the result of a tool call.
- When using [tool calling](/docs/concepts/tool_calling), a "tool" message should only follow an "assistant" message that requested the tool invocation.
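As an illustration of these guidelines (not LangChain's actual trimming utilities, which are linked under related resources), the sketch below drops the oldest messages first while keeping the system message, and never lets a "tool" message lead the history without the "assistant" message that requested it. The `Message` shape and `trimHistory` helper are invented for this example.

```typescript
// Illustrative trim: keep the system message, drop oldest turns first,
// and never let the history start with an orphaned "tool" message.
// A sketch of the guidelines above, not LangChain's trimming utilities.
interface Message {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

function trimHistory(history: Message[], maxMessages: number): Message[] {
  const system = history.filter((m) => m.role === "system");
  let rest = history.filter((m) => m.role !== "system");

  // Drop oldest messages until within budget (reserving room for the system message).
  while (system.length + rest.length > maxMessages && rest.length > 0) {
    rest = rest.slice(1);
  }
  // A "tool" message must follow the "assistant" message that requested it,
  // so it cannot be the first non-system message.
  while (rest.length > 0 && rest[0].role === "tool") {
    rest = rest.slice(1);
  }
  return [...system, ...rest];
}

const trimmed = trimHistory(
  [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "What's the weather?" },
    { role: "assistant", content: "Calling weather tool..." },
    { role: "tool", content: "Sunny, 22C" },
    { role: "assistant", content: "It's sunny and 22C." },
    { role: "user", content: "Thanks!" },
  ],
  4
);
// The orphaned "tool" result is dropped along with the older turns,
// leaving the system message plus the two most recent messages.
```
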
:::tip

Understanding correct conversation structure is essential for being able to properly implement
[memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/) in chat models.

:::

## Related resources

- [How to trim messages](/docs/how_to/trim_messages/)
- [Memory guide](https://langchain-ai.github.io/langgraphjs/concepts/memory/) for information on implementing short-term and long-term memory in chat models using [LangGraph](https://langchain-ai.github.io/langgraphjs/).