MemGPT + AutoGen #65
Replies: 24 comments 38 replies
-
Note that the folks at AutoGen are also referencing this project: microsoft/autogen#156 (comment)
-
This would be incredible!
-
This would truly be a game changer, possibly creating actual super agents. Any idea how far along the roadmap this might be @cpacker ? Not rushing at all, just excited to see how you and the squad go about it. I have been trawling the web high and low to find a solution to integrate MemGPT into AutoGen, but have come up emptier than my wallet around the 5th of each month. Regardless of when, I still wanted to thank you and the contributors for creating an amazing project. Not all heroes wear capes.
-
I'm trying on my own to restructure the AgentAsync following AutoGen's approach to multi-agents, but there is a place where I couldn't figure out why it is written that way: Isn't
-
@QZGao 🐞 caught, thanks for that!
-
This would be amazing. It was the first thing I thought of when I saw this repo.
-
We just added some basic AutoGen integration! #64 We're still working on cleaning up the workflow (adding better MemGPT-AutoGen constructors that better match the config-file style of AutoGen), but since it seems like a lot of people want AutoGen support, we thought it best to release what we have for now and add to it as we go.
Current caveats
Example using MemGPT agent inside AutoGen
Updated 10/26: see the example on the main post: #65 (comment)
-
This is truly amazing @cpacker!!
-
I don’t have the expertise to directly assist in this programming endeavor, but I wanted to know if there was a way I could contribute by using ChatGPT4. The following is my prompt to ChatGPT4 and its response regarding combining MemGPT + AutoGen + RWKV (I only chose RWKV because that LLM has a one-click install and includes an API). It would be great for MemGPT + AutoGen to be able to run locally. I’d pay a monthly subscription fee for code updates to something like this, if it was making decent headway with improvements each month. I’m not sure if any of this feedback is useful or not, but I wanted to toss this info out there.
My prompt to ChatGPT4 + Web Browsing:
ChatGPT4 response: This workflow provides a structured approach to integrating MemGPT, AutoGen, and the RWKV LLM to create a unified AI application. The actual process may involve additional steps or modifications based on the specifics of the codebases and the RWKV LLM.
-
I know this is something I'm probably doing wrong, but I'm getting
-
I keep getting a loop like the example above, where the bots just say “here's that request again” a bunch of times until they hit the limit. Anyone know why this is happening?
-
Hi, in the Readme you say you are willing to support other server types besides webui: supporting the llama.cpp server would be nice. Thank you for your great work!
-
An AutoGen update: we just updated the AutoGen MemGPT agent to:
The recommended way to create a MemGPT AutoGen agent now is to use the `create_autogen_memgpt_agent` constructor:

```python
import os

import autogen
from memgpt.autogen.memgpt_agent import create_autogen_memgpt_agent


def your_autogen_script():
    ...
    coder = create_autogen_memgpt_agent(
        "MemGPT_coder",
        persona_description="I am a 10x engineer, trained in Python. I was the first engineer at Uber (which I make sure to tell everyone I work with).",
        user_description=f"You are participating in a group chat with a user ({user_proxy.name}) and a product manager ({pm.name}).",
        # extra options
        # interface_kwargs={"debug": True},
    )
    groupchat = autogen.GroupChat(agents=[user_proxy, pm, coder], messages=[], max_round=12)
```

We've updated the example (previously in a discussion comment, now in the repo code) to use this new style of constructor; you can see the example here.
-
Does anyone else keep getting this error?
-
Might be related to NZ369's issues, but running the script, I get this error: After researching, I found that running it in Colab may be the issue. There is a library called 'nest-asyncio' that could resolve it, but I'm not sure how to test that. (Still learning Python)
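For anyone hitting this in Colab/Jupyter: those environments already run an asyncio event loop, so calling `asyncio.run()` again from inside it raises a RuntimeError. A minimal stdlib-only reproduction of that error (no MemGPT involved):

```python
import asyncio


async def task():
    return 42


async def outer():
    coro = task()
    try:
        # In Jupyter/Colab a loop is already running, so a nested asyncio.run() fails:
        asyncio.run(coro)
    except RuntimeError as e:
        coro.close()  # avoid the "coroutine was never awaited" warning
        return str(e)


print(asyncio.run(outer()))
# → asyncio.run() cannot be called from a running event loop
```

The `nest_asyncio` package mentioned above works around exactly this: after `pip install nest-asyncio`, calling `nest_asyncio.apply()` patches the running loop so nested `asyncio.run()` calls succeed.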
-
For reference, this is what the example looks like if you turn auto-reply for the user off and prod it to actually start coding:
Remember, this is primarily high-level; we'd need to integrate database management, error handling, and user verification, among other concerns.
-
@cpacker any way of passing something like this? It might be used to run MemGPT and AutoGen on a local LLM.
-
Hi @cpacker,
-
@cpacker is function calling currently not supported with MemGPTAgents?
-
@cpacker I connected a Postgres database to the AutoGen MemGPT agent, but it only loads the embeddings, and I cannot find where the chat memory is stored, which becomes crucial for context retrieval. Does the integration allow us to store our conversations in the connected database, or is it even using the context
-
@cpacker @vivi Can anyone tell me what the code would look like if we are running two AutoGen MemGPT agents? I am following the group-chat example, and if I add another,
the latter agent does not take part in the conversation. Am I missing something?
-
Is there a new memgpt update to make it compatible with the latest autogen version (0.2.0b5)?
-
Does anyone have a working example of how to use this for document QA with AutoGen? I've been looking for hours and I've been unable to find a single example of this, which is strange because it seems like it would be the most common application of this library. I'm trying to use MemGPT inside of my AutoGen script for enhanced context for QA. I've got MemGPT up and running fine and I have AutoGen working fine with RAG, but I want to use MMR via MemGPT. I have yet to see anyone post a working example of this. Does anyone have one I can look at to see how you managed it?
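On the MMR point: maximal marginal relevance is a generic re-ranking rule (relevance to the query minus redundancy with already-selected chunks), independent of MemGPT's internals. A toy sketch in pure Python, with made-up vectors and a diversity-heavy λ of 0.3:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def mmr(query, docs, k=2, lam=0.5):
    """Greedily pick k doc indices maximizing lam*relevance - (1-lam)*redundancy."""
    selected = []
    candidates = list(range(len(docs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(docs[i], query)
            redundancy = max((cosine(docs[i], docs[j]) for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected


query = [1.0, 0.0]
docs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]  # docs 0 and 1 are near-duplicates
print(mmr(query, docs, k=2, lam=0.3))  # → [0, 2]: the near-duplicate doc 1 is skipped
```

With a higher λ the rule leans toward raw relevance and would keep the duplicate; the λ here is chosen only to make the diversity effect visible.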
-
Method towards AGI with MemGPT and AutoGen
Note: The following is just a few quick examples of the relatively rapid improvements that could be made to the MemGPT/AutoGen hybrid software solution, improvements that would enable dramatically better responses to requests to complete goals, tasks, or problems. Function calls would need to be aligned to “ChatGPT Response 2”, which could be streamlined and expanded upon.

Question 1: If an LLM is instructed to read, write, understand, and update a text document every time the LLM interacts with a user, what English text could be stored within the text document that would provide logic and reasoning skills to the LLM, and would also allow it to expand upon those logic and reasoning skills?

ChatGPT Response 1: The concept of having a large language model (LLM) like GPT-4 read, write, understand, and update a text document in each interaction to improve its logic and reasoning skills is an interesting one. While LLMs don't "learn" in the traditional sense from each interaction (as their learning happens during the training phase on a fixed dataset), there are certain types of texts that can be useful in this scenario. Here are some suggestions:

Question 2: Please thoroughly understand the following and work through your answer step by step. A human student can read and follow instructions or processes previously written by a human teacher, which enable the student to solve problems they have not seen before. These instructions are reusable in different situations and can be combined with additional instructions or processes from the teacher, enabling the student to continue to develop unique processes to solve new goals, problems, or tasks once the goal or problem is understood. This situation should be replicable for artificial intelligence. If a large language model (LLM) is combined with MemGPT (enabling a long-term memory database and the ability to make function calls to that memory) and with AutoGen (enabling multiple agents to work together, which replicates a human having internal dialog and thought processes), and if the long-term memory database contains a set of general methods for hypothesis testing, evidence-based reasoning, logical reasoning, problem-solving, drawing inferences from limited information, making connections between different processes, etc., then each time a user provides a new goal, problem, or task, the LLM can search through the long-term memory database for the best matching combination of those processes in order to provide the user with the best solution. What specific text-based English instructions can be written and stored within the LLM's long-term memory database that the LLM can follow to provide a framework for hypothesis testing, evidence-based reasoning, logical reasoning, problem-solving, drawing inferences from limited information, making connections between different processes, etc.?
These instructions/processes should be written as short, concise instructions with accurate label identifiers, which will enable the LLM to quickly use function calls to locate the correct combination of instructions/processes needed to best complete the requested goal, problem, or task. Keep in mind that these instructions/processes must be applicable to a broad range of situations.

ChatGPT Response 2: The idea of replicating human-like learning and problem-solving in a large language model (LLM) involves creating a framework where the model can access and apply a set of general methods for reasoning and problem-solving. The integration of technologies like MemGPT and AutoGen can potentially facilitate this by providing long-term memory and multi-agent collaboration, respectively. Here’s a step-by-step approach to creating a set of textual instructions or processes for this purpose:
1. Hypothesis Testing
2. Evidence-Based Reasoning
3. Logical Reasoning
4. Problem-Solving Skills
5. Inference from Limited Information
6. Making Connections Between Processes
Implementation in an LLM Context
Considerations
This framework attempts to mimic some aspects of human problem-solving and learning within the constraints of current AI capabilities. While it can enhance the LLM's performance in specific tasks, it's important to recognize the differences between this simulated problem-solving and genuine human cognitive processes.
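The "search the long-term memory database for the best matching processes" step described above can be sketched as simple keyword scoring over labeled instruction snippets. All labels and texts below are made-up illustrations, not MemGPT's actual storage format:

```python
def best_matching_processes(query, memory, top_n=2):
    """Rank labeled instruction snippets by keyword overlap with the query."""
    query_words = set(query.lower().split())

    def overlap(item):
        label, text = item
        return len(query_words & set(text.lower().split()))

    ranked = sorted(memory.items(), key=overlap, reverse=True)
    return [label for label, _ in ranked[:top_n]]


# Hypothetical long-term memory: label -> concise instruction text
memory = {
    "hypothesis_testing": "state a hypothesis then design a test to confirm or reject it",
    "logical_reasoning": "derive conclusions step by step from stated premises",
    "inference": "draw the most probable conclusion from limited information",
}

print(best_matching_processes("test my hypothesis about user churn", memory))
# → ['hypothesis_testing', 'logical_reasoning']
```

A real implementation would presumably use embedding similarity (as MemGPT's archival search does) rather than word overlap, but the shape of the lookup, labeled snippets retrieved by a function call and injected into context, is the same.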
-
Integrate MemGPT into AutoGen!
Instructions on how to connect MemGPT to AutoGen can be found on our docs page.
🙋 Need help with local MemGPT + AutoGen? Check Discord!
If you need help visit our Discord server and post in the #support channel.
You can also use this GitHub discussions page, but the Discord server is the official support channel and is monitored more actively.