Example code for OpenAI functions agent seems wrong when using memory #3348
Comments
🤖 Hello @humbertaco! I'm Dosu, an automated assistant here to help with your issues, queries, and contributions to the langchainjs repository while we wait for a human maintainer. I see you've raised an issue about the OpenAI functions agent example code in our documentation. I'm currently reviewing it and will get back to you with a detailed response shortly.
🤖 Hello @humbertaco! I'm Dosu, a bot here to assist you with questions, help troubleshoot any issues you encounter, and guide you to become a contributor if you wish. While we wait for a human maintainer to join us, let's dive into your issue together!

Thank you for your detailed observation and question. The order of the messages in the example code for adding memory to the OpenAI functions agent in the LangChainJS documentation is indeed correct: the `chat_history` placeholder, which is represented as `history` in this context, should be placed before the `input` to maintain the sequence of the conversation. Here is the correct code snippet from the `examples/src/chat/memory.ts` file in the repository:

```typescript
const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.",
  ],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);
```

As you can see, the `history` placeholder is placed before the `input` in the `chatPrompt` definition. This ensures that the conversation history is included in the prompt before the new input message, maintaining the correct sequence of the conversation.

Regarding the purpose of the `chat_history` placeholder in the LangChainJS framework, I wasn't able to find a specific answer within the repository. However, in general, the `chat_history` placeholder is used to keep track of the conversation history between the human and the AI. This gives the AI context about previous interactions, which helps it generate more relevant and coherent responses.

I hope this answers your question. If you have any further queries, feel free to ask.
Then, that means that the documentation for the OpenAI functions agent is wrong, doesn't it?
Thanks for the ping! @bracesproul I think @humbertaco is correct, can you fix?
Thanks for addressing this @jacoblee93, @bracesproul. By the way, I think both OpenAI agent pages have this typo (OpenAI functions and OpenAI tool calling).
Good catch. I was in the process of switching over from the conversational LangChain agent to OpenAI tool calling, and was puzzled when the chain seemingly lost context after following the documentation. Switching the order fixed my issue.
Will patch now, thanks!
Thanks @jacoblee93!
No, thank you! Really bad that this slipped through the cracks :(
Hi there! I noticed that the documentation for the OpenAI functions agent was recently updated. While checking it out, something caught my attention.
In the section where memory for previous interactions is added to the agent, the order of the messages seems to be wrong. The example code given in the documentation is the following:
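The documented snippet itself didn't survive in this capture, but from the discussion it placed the `chat_history` placeholder as the last element of the prompt. As a stand-in, here is a sketch of that reported ordering using plain `[role, content]` tuples rather than the real `ChatPromptTemplate` call; the system message text and the exact positions of the other entries are assumptions:

```typescript
// Hedged sketch of the documented (incorrect) ordering, modeled with plain
// tuples instead of LangChain's ChatPromptTemplate. The "placeholder" entries
// stand in for MessagesPlaceholder slots; the system text is hypothetical.
type Slot = [role: string, content: string];

const documentedOrder: Slot[] = [
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
  ["placeholder", "agent_scratchpad"],
  ["placeholder", "chat_history"], // reported bug: history comes last
];

console.log(documentedOrder.map(([role, name]) => `${role}(${name})`).join(" -> "));
```

With history last, the model sees the new question before the prior turns, which breaks the conversational sequence.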
Notice that the `chat_history` is added as the last element in this prompt. The example can be inspected in LangSmith here: https://smith.langchain.com/public/2052d0d1-5640-44bc-9024-d443d46db2a4/r/87a1b796-5a70-46a9-bc57-016a3b6beee1. The image below shows the trace in LangSmith:
As can be seen in the image, the order of the messages that are passed as input to OpenAI is the following:
However, I would expect the order to be this:
This order would be consistent with the sequence of the questions asked by the human (first what the weather in New York is, and then whether a jacket is needed), and it would also respect the alternating human-AI sequence.
It seems to me that when creating the prompt, the code should be like this:
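The proposed snippet is also missing from this capture; here is a sketch of the expected ordering with plain `[role, content]` tuples standing in for `ChatPromptTemplate.fromMessages` with `MessagesPlaceholder` entries (the system message text is a hypothetical placeholder):

```typescript
// Hedged sketch of the proposed (correct) ordering: history first, then the
// new input, and finally the agent scratchpad. Plain tuples stand in for
// LangChain's MessagesPlaceholder slots; the system text is hypothetical.
type Slot = [role: string, content: string];

const correctedOrder: Slot[] = [
  ["system", "You are a helpful assistant."],
  ["placeholder", "chat_history"],  // prior human/AI turns come first
  ["human", "{input}"],             // then the new question
  ["placeholder", "agent_scratchpad"], // then the agent's working notes
];

console.log(correctedOrder.map(([role, name]) => `${role}(${name})`).join(" -> "));
```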
In this way, the chat history would appear first in the conversation, then the input, and finally the agent scratchpad, where the agent writes down its observations while processing the human input.
Is this a typo in the documentation, or is it the intended behavior?