How to enable auto-GPT to pick up where it left off after connection failure? #430
Comments
Good question. I don't know yet, but am curious. Bumping this thread in hope someone answers.
I am looking for the same issue. I constantly, and I mean constantly, get "Error communicating with OpenAI". I have a traceback with more details, but the frame details were lost in extraction; what survives is a chain of exceptions ("During handling of the above exception, another exception occurred" several times, ending with "The above exception was the direct cause of the following exception"). This is annoying.
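Until proper resumption exists, transient `APIConnectionError`s like the one described above can often be survived by wrapping the API call in a retry loop with exponential backoff. This is a generic sketch, not code from the Auto-GPT repo; `with_retries` and the demo `flaky` function are invented for illustration.

```python
import random
import time


def with_retries(call, max_attempts=5, base_delay=1.0, retryable=(ConnectionError,)):
    """Retry `call` with exponential backoff plus jitter on transient errors."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the error propagate
            # back off 1s, 2s, 4s, ... with a little random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))


# Demo: a flaky call that fails twice with a connection reset, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection reset by peer")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # → ok
```

In a real integration the `retryable` tuple would name the OpenAI client's connection-error class instead of the built-in `ConnectionError`.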
This is a really good feature that we can work on.
It would need a database, for starters. I noticed they tried to put in Pinecone, but then Pinecone blocked it. I find that the local cache JSON gets corrupted and then it all goes to hell; I am going to try to fix it with a JSON parser. I would propose a local database instead of something like Pinecone, to make it less brittle. SQLite runs on any platform, so it could be a possibility.
Finally, without Pinecone I am stuck with a Pinecone error. If you are working on the SQLite approach, I am happy to contribute.
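A local SQLite-backed memory, as proposed above, could look roughly like the following. `SqliteMemory` is a hypothetical minimal sketch, not an existing Auto-GPT memory backend; it only shows that a durable key-value store needs nothing beyond the standard library.

```python
import sqlite3


class SqliteMemory:
    """Minimal key-value memory store backed by SQLite (hypothetical sketch)."""

    def __init__(self, path=":memory:"):
        # Pass a file path instead of ":memory:" so state survives restarts.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def set(self, key, value):
        self.conn.execute(
            "INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)", (key, value)
        )
        self.conn.commit()

    def get(self, key):
        row = self.conn.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None


mem = SqliteMemory()
mem.set("last_task", "summarize report.txt")
print(mem.get("last_task"))  # → summarize report.txt
```

Unlike the local cache JSON, a half-finished write cannot corrupt the whole store, since each `INSERT OR REPLACE` is its own transaction.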
I have found a solution to this. You must save the whole interaction in a .txt file in /auto_gpt_workspace. It took some coaxing, but I found the command to give it after a restart. Input: "I have saved a log of our previous interactions at ./[your_file_name]; read the file and restore yourself to the stop point of the file. You will need to chunk the file in chunks of 6000 tokens." If it reads the file without the chunking, it goes over the token limit. This is running on Debian on an EC2 free tier. After this prompt, it loaded the whole file into the buffer and crashed with a max-token error.
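The 6000-token chunking that the prompt above asks the agent to perform can also be done up front in plain Python. This sketch approximates token count as roughly four characters per token; a real implementation would use an actual tokenizer (e.g. tiktoken) instead of that heuristic.

```python
def chunk_text(text, max_tokens=6000, chars_per_token=4):
    """Split text into pieces of roughly `max_tokens` tokens each.

    Token count is approximated as ~4 characters per token; swap in a
    real tokenizer for accurate counts.
    """
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


log = "x" * 50_000                       # stand-in for the saved session log
chunks = chunk_text(log, max_tokens=6000)
print(len(chunks))                       # → 3 (50,000 chars / 24,000 per chunk)
```

Feeding the agent one chunk at a time, with a running summary of earlier chunks, keeps each request under the model's context limit.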
Restarting where it left off feels critical. I'm new to the codebase, but if I have time and this isn't fixed, this is the first thing I would try to contribute.
I couldn't agree more!! I'm trying to solve a complex problem using Auto-GPT, and whenever it crashes the task starts from the beginning for no good reason. Following @BaltekLabs's solution, I wanted to add some code to Auto-GPT to do it. First question is if the
Can someone please confirm the function below in
All you have to do is go to the logs file and pre-seed the activity file. That will give you a different beginning every time, because if you just restart, it does the same thing all over again; this way it approaches the task from a different angle at the start. Giving it the activity log to read at the beginning will 100% get you back to where you want to go, but IMO pre-seeding gives the AI a better outlook on what to do next. I think that's better because it's AI: it's sometimes dumb, but it won't do something it knows isn't worth it if it already has knowledge of the failures it's made in the past.
I agree with @itsDesTV. In theory at least, I don't think it sounds complicated. I haven't come across an
Yeah, the one thing I haven't tried is setting a goal to "read activity.txt (by directing it with the file path) and continue progress from there", and then goal number two would be "read 1.txt", which would have 2 or 3 tasks in it to do; at the end it would say, once done with the prior tasks, "read 2.txt". I did something like that a different time, though not quite like this, and it works. You essentially could set a bunch of if/then commands in your goals in those files. For how smart AI is, it honestly pisses me off sometimes, lol. I make such good progress, and then I try to replicate it and it just never works out. And to do what I truly want to do, it would cost so much :/ because for AI to truly be AI, it needs to fail and learn, and fail and learn, just like us. That's why we keep going at it and tweaking all our prompts. But the money it's spending on us is our time!
@itsDesTV Splitting up the agent into multiple tasks sounds a lot like one of the major goals of this issue, and I think of some others as well. The thought is a hierarchy of agents, each with its own

I feel like at some point in the future, some implementation of an LLM will support the ability to get/set a "seed" for a session, for reproducibility. Maybe they will be less powerful, less frequently versioned LLMs; who can say.
@joeflack4 So you technically can now, if you want to look into this! I just haven't dived into it because I simply want to see what I can make one agent do. I've already come to terms with the fact that, right now, one agent is for the most part only useful until it fails. But there HAS to be a way to make it understand. With that being said, multiple separate instances that work together would be wild, and if they can be made to communicate, that would be EVEN better. Here is the thread: #3392 (comment)
Only in theory; the complexity is huge. You need to get the sequential approach working first, i.e. where the parent agent literally sleeps/waits for the sub-agent to finish its task, at most monitoring/observing its progress (think a stream/pipe or just a log file, comparing log messages to stated goals/objectives), with the option to "guide" a sub-agent (modify its goals/observations) or to terminate/restart an agent. I don't see any way to make this work robustly without first getting the sequential use case right. Once you are talking about multiple agents, possibly working in async fashion, the complexity gets huge, because at that point you literally need "manager agents" to carefully watch their agent/team of agents, or they will just run up your API bills ...
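The sequential parent/child pattern described above can be sketched with plain subprocesses: the parent blocks while tailing the sub-agent's output stream and compares log lines against a goal marker. This is a minimal illustration, not Auto-GPT's actual agent API; the child's log format and the `GOAL_REACHED` sentinel are invented for the example.

```python
import subprocess
import sys

# Hypothetical sub-agent: a child process that writes progress lines to stdout.
child_code = """
for step in range(3):
    print(f"step {step} done", flush=True)
print("GOAL_REACHED", flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
    text=True,
)

# The parent sleeps/waits, observing the child's output stream and
# comparing log messages against its stated goal.
for line in proc.stdout:
    line = line.strip()
    print("parent saw:", line)
    if line == "GOAL_REACHED":
        break  # goal met; stop monitoring

proc.wait(timeout=10)  # a real parent could also terminate/restart the child here
```

The same loop is where "guidance" would hook in: on a bad log line the parent could kill the child (`proc.terminate()`) and respawn it with adjusted goals.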
For me it looks like the whole AutoGPT architecture could use a management agent that manages the other agents and the overall progress. For situations where the model gets stuck in a "task loop" (because the task-prioritization agent doesn't look at the memory, and the execution agent cannot influence the task queue), a management agent could oversee the task-creation agent and step in if a task doesn't lead toward our goal, or if it recognizes that it is repeating itself/getting into a loop. @Boostrix, we came to a similar conclusion. To reduce the bill, that agent could get summaries instead of all the tiny details. But then again, another model needs to make the summary (hard to get those bills down).
Work on supporting multiple projects and multiple agents managed by parent agents is in progress (especially the REST/async efforts will inevitably need a way to assign managers to agents, and possibly to teams of agents):
Right, that is why it would be preferable to "checkpoint" state locally inside that agent's workspace, i.e. using shell scripts or at least /some/ meta info on disk that isn't going through the LLM, to help with #3466.
I missed this post, but I think that as a first step, what I suggested in #3933 should be implemented. Pickle is the solution; if there are multiple agents, then it complicates things, but not that much.
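A pickle-based checkpoint of the kind suggested above could look roughly like this. `AgentState`, its fields, and the file layout are hypothetical stand-ins for Auto-GPT's real data structures; the point is only that a full snapshot-and-restore cycle fits in a few lines of the standard library.

```python
import os
import pickle
import tempfile
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Hypothetical snapshot of an agent's progress."""
    goals: list = field(default_factory=list)
    completed: list = field(default_factory=list)
    memory: dict = field(default_factory=dict)


def save_checkpoint(state, path):
    # Serialize the whole state object to disk in one shot.
    with open(path, "wb") as f:
        pickle.dump(state, f)


def load_checkpoint(path):
    # Restore the exact object graph that was saved.
    with open(path, "rb") as f:
        return pickle.load(f)


ckpt = os.path.join(tempfile.gettempdir(), "agent.ckpt")
state = AgentState(goals=["write report"], completed=["gather sources"])
save_checkpoint(state, ckpt)
restored = load_checkpoint(ckpt)
print(restored.completed)  # → ['gather sources']
```

One caveat worth keeping in mind: pickle files should only ever be loaded from the agent's own workspace, since `pickle.load` executes arbitrary code from untrusted data.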
I think we need a draft PR to summarize the main ideas, collated from the different issues/discussions.
@EggheadJr Thanks for that, Egghead. This looks kinda like a gist, but in markdown form. |
This issue was closed automatically because it has been stale for 10 days with no activity. |
Duplicates
Summary 💡
Any thoughts on enabling auto-GPT to pick up where it left off after an error? It has done some noble work, but has never fully completed the tasks I've set for it before getting cut off by this kind of error:
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
Examples 🌈
No response
Motivation 🔦
No response