Not aware of past command+arguments; often enters "forever loops" and repeats the same actions #3644
Comments
Here's another example; ai_goals:
It basically got as far as figuring out it had to write a program but then it went into several rounds of analysis paralysis.
I told it to knock it off via "You have done do_nothing too many times and are not progressing. You need to generate the code." Then it started writing the code. Something similar needs to happen if it starts repeating the same step, or if it returns to an older command and arguments it has already run (at least within the last 20 steps or so). This would solve a lot of the issues I've seen to date with my goals.
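A minimal sketch of that heuristic, assuming a hypothetical agent loop where each step yields a (command, arguments) pair; none of these names (`RepeatDetector`, `clone_repository`) come from the actual Auto-GPT codebase:

```python
from collections import deque

class RepeatDetector:
    """Flags a command+arguments pair that already ran within the last N steps."""

    def __init__(self, window: int = 20):
        self.history: deque = deque(maxlen=window)

    def check(self, command: str, arguments: dict) -> bool:
        """Return True if this exact command+arguments pair was seen recently."""
        key = (command, tuple(sorted(arguments.items())))
        repeated = key in self.history
        self.history.append(key)
        return repeated


detector = RepeatDetector(window=20)
args = {"repository_url": "https://github.com/Significant-Gravitas/Auto-GPT"}
detector.check("clone_repository", args)        # first run: False
if detector.check("clone_repository", args):    # identical second run: True
    # Instead of executing again, feed corrective feedback into the next prompt.
    feedback = ("You have already run this command with the same arguments. "
                "Do not repeat it; choose a different next step.")
```

Using a sorted tuple of the argument items as the dedup key means two argument dicts with the same contents match even if their insertion order differs.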
I do understand that this is what you intended it to do, but the objective you stated is quite clear about it: "Clone auto-gpt so you can make updates to it." In other words, if you are a little more specific in your wording, you can get exactly the desired behavior; you just need to think like a programmer. Don't get me wrong, that won't magically solve your other tasks; those are far too broad, and the system is too limited to explore the solution space. But the point stands: you need to be highly specific in your goals to give the agent a chance to figure out what you want it to do. You cannot state a goal and then implicitly expect it to be carried out in the most efficient manner, taking into account leftovers from previous invocations. For that to work, you need to encode the underlying logic in your task list.
Indeed, EXCELLENT thinking - coming up with these heuristics is exactly what's needed to tell an agent what it should do - see: #3444 (comment)
@Boostrix I think we are saying the same thing. My point is that it would give Auto-GPT a lot more wiggle room if it could handle more abstraction by detecting and focusing on errors, as well as knowing what it has already tried: a bit more statefulness. As I mentioned, I can either be more specific in the goals or provide human input along the way; however, I think with a little effort it could handle higher levels of abstraction a lot better. The whole basis of the Turing model is acknowledging and adapting to the output and using it as input in the next iteration. If these were senses (errors and command history), the agent is missing hearing and touch while trying to navigate reality.
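As a rough illustration of that statefulness, here is one way recent commands and errors could be folded back into the next prompt; the history format and field names are invented for this sketch, not taken from Auto-GPT:

```python
def build_context(goal: str, history: list, max_items: int = 10) -> str:
    """Append recent commands and their outcomes to the prompt so the
    model can see what it already tried and which steps failed."""
    lines = [f"Goal: {goal}", "Recent actions (newest last):"]
    for step in history[-max_items:]:
        status = f"ERROR: {step['error']}" if step.get("error") else "ok"
        lines.append(f"- {step['command']} {step['arguments']} -> {status}")
    lines.append("Do not repeat a command+arguments pair listed above.")
    return "\n".join(lines)


# Example with a fabricated single-step history:
prompt_context = build_context(
    "Clone auto-gpt so you can make updates to it.",
    [{"command": "clone_repository",
      "arguments": {"repository_url": "https://github.com/Significant-Gravitas/Auto-GPT"},
      "error": None}],
)
```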
It would also be very interesting to actually log these issues to a corresponding log file, to look at the tasks/prompts in question and try to learn how this happens. Thanks to the heuristic it should now be easy to detect this case in code and handle it, but obviously that's "after the fact"; it would be more efficient to look at the underlying data/prompts to tell what's going on. (And yes, I have also seen it "bailing out" only to restart tasks it had previously solved successfully.)
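A hedged sketch of what logging these detections to a dedicated file could look like, so the triggering prompt survives for later analysis (the logger name and file path are placeholders):

```python
import json
import logging

loop_logger = logging.getLogger("loop_detector")
loop_logger.addHandler(logging.FileHandler("loop_events.log"))
loop_logger.setLevel(logging.INFO)

def record_loop_event(prompt: str, command: str, arguments: dict) -> None:
    """Persist the triggering prompt and the repeated command so the
    underlying data can be analysed later, not just handled after the fact."""
    loop_logger.info(json.dumps({
        "event": "repeated_command",
        "command": command,
        "arguments": arguments,
        "prompt": prompt,
    }))
```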
@Boostrix agreed. I tried attaching the log, but it overloaded the issue I opened. There might be another way to attach the log, though. I've been away from GitHub for a few years and am picking it back up. Lol. The two types of issues I saw were not ingesting errors and not being stateful and aware of what it had already tried. Both are decent-sized Pandora's boxes... I might tinker with some approaches.
One takeaway here might be to provide agents with an option to also log directly to their workspace, so that parent agents (and/or humans) can look at the log file and see what the agent was doing (but also for context): #430 (comment)
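One possible shape for that, with the workspace path and file naming purely illustrative:

```python
from pathlib import Path
from datetime import datetime, timezone

def log_to_workspace(workspace: Path, agent_name: str, message: str) -> None:
    """Append a timestamped line to an activity log inside the agent's
    workspace, where parent agents and humans can inspect it."""
    log_file = workspace / f"{agent_name}_activity.log"
    timestamp = datetime.now(timezone.utc).isoformat()
    with log_file.open("a", encoding="utf-8") as fh:
        fh.write(f"[{timestamp}] {message}\n")


log_to_workspace(Path("./auto_gpt_workspace"), "Update-Auto-GPT-With-LangChain",
                 "Detected repeated command: clone_repository")
```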
Yeah, a cooperative model, especially in cases of error. I'd rather pre-spawn an agent that picks up these error/troubleshooting tasks to look into the matter, so as not to muck with the goals and tasks of the main body of work.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity. |
Which Operating System are you using?
Windows
Which version of Auto-GPT are you using?
Latest Release
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
There are multiple examples of this. In this case, it tries to clone the Auto-GPT repo twice in a row. If it were aware of its previous command(s), it would know to skip the second attempt.
AI Name: Update-Auto-GPT-With-LangChain
Describe your AI's role: Update-Auto-GPT-With-LangChain is: an AI adept at Python and other programming languages and modules for the purpose of improving Auto-GPT
Goal 1: Clone auto-gpt so you can make updates to it.
Goal 2: Identify where OpenAI is called and replace it with the appropriate LangChain calls
Goal 3: Expand the commandline arguments to include an argument for choosing which AI the caller wants to use
Goal 4: Update the help and usage messages to include this new parameter
Goal 5: Make the commandline argument for choosing which AI the caller wants to use optional. Default to OpenAI as the AI used.
Current behavior 😯
It doesn't track what it has already done and will often repeat the same commands with the same arguments, leading to redundancy as well as "time loops" where it cycles through the same series of steps forever and never tries anything else.
Expected behavior 🤔
It should either locally remember past steps or add them to the prompt, and then either try something different or quit gracefully if it cannot think of anything new to do.
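A minimal sketch of that graceful exit, building on the `RepeatDetector` idea from earlier in the thread; the executor, step source, and repeat threshold are all assumptions for illustration:

```python
def execute(command: str, arguments: dict) -> None:
    # Hypothetical executor; a real agent would dispatch the command here.
    print(f"executing {command} {arguments}")

def run_agent(proposed_steps, detector, max_repeats: int = 3) -> None:
    """Drive the agent loop; quit gracefully once the agent keeps
    proposing command+arguments pairs it has already run."""
    repeats = 0
    for command, arguments in proposed_steps:
        if detector.check(command, arguments):
            repeats += 1
            if repeats >= max_repeats:
                print("No new actions after repeated attempts; quitting gracefully.")
                return
            continue  # skip the duplicate and ask for a different step
        repeats = 0
        execute(command, arguments)
```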
Your prompt 📝
Your Logs 📒