Adaptability Challenge: Dynamic prompting #3937
Comments
I think clarity on what counts as a "circumstance" may need to be specified/worked out. For example, I imagine we could benefit from metrics on specific adaptability behaviors.
I'm sure that there are more types of "circumstances" that should prompt adaptation.
For starters, you could use a stack of prompt generators to process a task. Whenever the current prompt seems unsuitable or needs changing (we could ask the LLM itself whether the prompt boilerplate is helpful for achieving a certain goal or not), a new prompt generator would be pushed onto the stack with the corresponding goals/objectives and constraints/evaluations. Often it would not need to be aware of goals higher up in the chain of objectives, which frees up contextual memory and declutters the memory of the current task.
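A minimal sketch of what such a prompt-generator stack could look like (the class and method names are illustrative assumptions, not existing Auto-GPT APIs):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PromptFrame:
    """One entry on the prompt stack: a goal plus its local constraints/evaluations."""
    goal: str
    constraints: List[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Goal: {self.goal}"]
        lines += [f"Constraint: {c}" for c in self.constraints]
        return "\n".join(lines)


class PromptStack:
    """Stack of prompt frames; only the innermost frame is rendered into the prompt."""

    def __init__(self) -> None:
        self._frames: List[PromptFrame] = []

    def push(self, frame: PromptFrame) -> None:
        self._frames.append(frame)

    def pop(self) -> PromptFrame:
        return self._frames.pop()

    def current_prompt(self) -> str:
        # Parent goals stay off the rendered prompt to free up contextual memory,
        # as described above.
        return self._frames[-1].render() if self._frames else ""


# Usage: push a new frame whenever the current prompt seems unsuitable for the sub-task.
stack = PromptStack()
stack.push(PromptFrame("Build a modern webpage", ["Follow current best practices"]))
stack.push(PromptFrame("Research current front-end frameworks", ["Cite your sources"]))
print(stack.current_prompt())  # only the research frame is visible to the LLM
```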
I don't seem to find it at the moment, but I made some good progress by asking the LLM to lay out its steps in sequential order, including an estimate of complexity (a partial percentage of the whole task at hand; a rough parsing sketch follows this comment). Here's one related PR: #934, which uses dynamic/custom prompts to accomplish observer/monitoring behavior (one agent possibly observing/monitoring sub-agents). Currently, I am tinkering with the idea of using dynamic prompts for specific workflows as part of the wizard draft: #3911. I would also suggest taking a look at #3593, where @dschonholtz stated:
PS: give it a try: "build a modern webpage using HTML3 using REST" :-)
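For illustration, steps with complexity estimates could be requested in a simple numbered format and then parsed and sanity-checked along these lines (the output format and numbers below are assumptions, not taken from the experiment above):

```python
import re

# Hypothetical LLM output: numbered steps, each with an estimated share of the whole task.
llm_output = """\
1. Research current web frameworks - 20%
2. Draft the page layout - 30%
3. Implement and test the page - 50%
"""

steps = []
for line in llm_output.splitlines():
    match = re.match(r"\s*\d+\.\s*(.+?)\s*-\s*(\d+)%", line)
    if match:
        steps.append((match.group(1), int(match.group(2))))

# Sanity check: the partial percentages should roughly cover the whole task.
total = sum(share for _, share in steps)
assert 90 <= total <= 110, f"estimates sum to {total}%, expected ~100%"
print(steps)
```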
One idea that came up in #15 was splitting the agent into a divergent thinking agent (high randomness and word-frequency penalties) and a convergent thinking agent (mostly deterministic) that work together. The divergent thinking agent is told to consider alternative solutions. This approach would be very conducive to adaptability.
I think we talked in #15 about using the sub-agent facility to accomplish just that, by priming each agent with a different set of goals/rules (right/left brain hemisphere). In combination with a runtime setting to adjust the "temperature", that could work reasonably well - at least if we can afford having an outer agent that evaluates the dialogue to determine the most promising path forward by weighing the degree of complexity versus the gains/costs associated with different strategies. Basically, each agent instance would provide its perspective/preferences and then the final agent would compare the two perspectives, while taking into account what's feasible / tangible and most promising.
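A rough sketch of the divergent/convergent split with an outer selection step (ask_llm is a placeholder for whatever chat-completion call the project uses; the prompts and parameter values are assumptions):

```python
from typing import Callable

# Placeholder type for the chat-completion call; temperature and frequency_penalty
# are the knobs discussed above.
AskLLM = Callable[..., str]


def divergent_agent(ask_llm: AskLLM, task: str) -> str:
    # High randomness and word-frequency penalty: explore alternative solutions.
    prompt = f"List three distinct ways to accomplish the following task: {task}"
    return ask_llm(prompt, temperature=1.0, frequency_penalty=1.0)


def convergent_agent(ask_llm: AskLLM, task: str, options: str) -> str:
    # Mostly deterministic: weigh complexity against gains/costs and pick one approach.
    prompt = (
        f"Task: {task}\nCandidate approaches:\n{options}\n"
        "Choose the single most feasible and promising approach, and explain why."
    )
    return ask_llm(prompt, temperature=0.0, frequency_penalty=0.0)


def solve(ask_llm: AskLLM, task: str) -> str:
    options = divergent_agent(ask_llm, task)         # "right hemisphere"
    return convergent_agent(ask_llm, task, options)  # "left hemisphere"
```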
Maybe we can steal some ideas from heuristics (https://en.wikipedia.org/wiki/Heuristic). Maybe the best would be to have a mix of those three strategies:

## Educated guess (helps to decompose the problem into simpler tasks)
prompt1:
prompt2:
prompt3:

## Resolution of simple tasks (trial and error, or rule of thumb)

An idea of a prompt for the educated guess:

Given a problem, break it down into two main categories, focusing on tasks that can be executed in parallel. For each category, divide it into two subcategories. Continue this process until the subcategories become simple tasks. Identify any remaining interdependencies among tasks and include these dependencies within the tree structure itself. Finally, create an execution order that takes these dependencies into account, ensuring that tasks with dependencies are performed after the tasks they rely on.

I like the idea of the sub-agent checking the degree of complexity; maybe this could be used to build the decomposition.
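To make the decomposition idea concrete, here is a small sketch of the resulting tree structure and a dependency-aware execution order (the task names and helper functions are illustrative assumptions):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)
    depends_on: List[str] = field(default_factory=list)  # names of other leaf tasks


def leaves(task: Task) -> List[Task]:
    """Collect the simple (leaf) tasks of the decomposition tree."""
    if not task.subtasks:
        return [task]
    result: List[Task] = []
    for sub in task.subtasks:
        result.extend(leaves(sub))
    return result


def execution_order(root: Task) -> List[str]:
    """Order leaf tasks so that each one runs after the tasks it relies on."""
    pending = {t.name: set(t.depends_on) for t in leaves(root)}
    ordered: List[str] = []
    while pending:
        ready = [name for name, deps in pending.items() if not deps]
        if not ready:
            raise ValueError("circular dependency in the decomposition")
        for name in ready:
            ordered.append(name)
            del pending[name]
            for deps in pending.values():
                deps.discard(name)
    return ordered


# Example: two main categories with one cross-dependency between their subtasks.
root = Task("build webpage", subtasks=[
    Task("backend", subtasks=[
        Task("design API"),
        Task("implement API", depends_on=["design API"]),
    ]),
    Task("frontend", subtasks=[
        Task("draft layout"),
        Task("wire layout to API", depends_on=["implement API", "draft layout"]),
    ]),
])
print(execution_order(root))
```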
The general point is valid: we need to provide strategies to deal with options and evaluate them, which is something that I have recently begun experimenting with. Basically, any item on the list whose complexity is determined to be above some threshold needs 2-3 alternatives (options), which are then internally weighed based on "probability of success". Case in point: ask an AI agent to "GET ONE BITCOIN" - it should be able to come up with the conclusion that mining on your CPU/GPU is not likely to yield success :-) And it should be able to present different options, weighted by preference.
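One possible way to weigh such options internally, with purely made-up numbers for the "GET ONE BITCOIN" example (the scoring rule is an assumption, not an existing Auto-GPT mechanism):

```python
from typing import List, Tuple

# (option, assumed probability of success, rough cost) - all numbers are illustrative.
options: List[Tuple[str, float, float]] = [
    ("Mine on a local CPU/GPU", 0.0001, 500.0),
    ("Earn it by selling goods/services priced in BTC", 0.4, 2000.0),
    ("Buy it on an exchange (requires sufficient funds)", 0.9, 30000.0),
]


def rank_options(opts: List[Tuple[str, float, float]]) -> List[Tuple[str, float]]:
    """Rank options by probability of success per unit of cost (one possible weighting)."""
    scored = [(name, p_success / max(cost, 1.0)) for name, p_success, cost in opts]
    return sorted(scored, key=lambda item: item[1], reverse=True)


for name, score in rank_options(options):
    print(f"{score:.6f}  {name}")
```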
If you can create a challenge.md file, that would be amazing. You don't need to implement the challenge in code; just lay things out and put status: "to be implemented".
Not sure what exactly you had in mind, so I've tried to incorporate some of the ideas we've seen over the last couple of days; see #4133.
I propose that we have a "warm-up" for goal creation by using a prompt like this:
I experimented with a few test goals and the "team" does some good discussion and planning. Another prompt can then get the meeting highlights before we put it somewhere into the prompt fed to Auto-GPT. We can also have a command to hold a meeting, and/or automatically trigger meetings under certain conditions.
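Purely as an illustration of what such a warm-up could look like (the wording below is an assumption; the original prompt from this comment is not reproduced here):

```python
# Hypothetical warm-up prompt for goal creation: a small "team meeting" before
# the real goals are handed to Auto-GPT.
WARMUP_PROMPT = """\
You are a team of three experts: a product manager, an engineer, and a researcher.
Hold a short meeting about the objective below. Each expert speaks once, pointing
out risks, missing information, and a concrete first step.
Objective: {objective}
"""

# Second prompt: condense the meeting into highlights that can be prepended to the
# goal list fed to Auto-GPT.
HIGHLIGHTS_PROMPT = (
    "Summarize the meeting above as 3-5 bullet-point highlights suitable for "
    "inclusion in the agent's goals."
)
```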
Anyone want to implement something and create a PR?
I've kinda done this already, using one temporary LLM conversation to determine which registered commands are useful for a given problem. Obviously, it would be much better to modify/generalize the current PromptGenerator class, but to get something started, chat_with_ai will probably suffice? In fact, if we wrap both functions together, we'd have a single command that can recursively invoke itself to come up with a specific prompt before doing more research. Personally, I think it makes sense for us to wait for the corresponding RFEs and the associated PRs to be reviewed/integrated first, namely:
Once these are in place, we should generalize the PromptGenerator and make it support an evaluation that judges a prompt's suitability for querying an LLM about a given objective. This could be score-based. At that point, this new prompt-generator abstraction could be exposed as a BIF/command so that an agent can experiment with different prompts, choosing the most likely candidates based on some score. We could then use a loop to read a set of objectives and have the agent adapt its own prompt accordingly.
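A small sketch of what the score-based suitability check could look like (generate_candidates and ask_llm are placeholders, not the existing PromptGenerator API):

```python
from typing import Callable, List, Tuple

# Placeholders: generate_candidates would wrap the generalized prompt generator,
# and ask_llm is whatever chat call the agent already uses.
GenerateCandidates = Callable[[str], List[str]]
AskLLM = Callable[[str], str]


def score_prompt(ask_llm: AskLLM, prompt: str, objective: str) -> float:
    """Ask the LLM to rate, on a 0-10 scale, how suitable a prompt is for an objective."""
    reply = ask_llm(
        f"Objective: {objective}\nCandidate prompt:\n{prompt}\n"
        "On a scale of 0 to 10, how suitable is this prompt for achieving the "
        "objective? Answer with a single number."
    )
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0


def best_prompt(generate: GenerateCandidates, ask_llm: AskLLM, objective: str) -> Tuple[str, float]:
    """Generate candidate prompts and keep the highest-scoring one."""
    scored = [(p, score_prompt(ask_llm, p, objective)) for p in generate(objective)]
    return max(scored, key=lambda item: item[1])
```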
Goals 1 and 2 are positively correlated. The agent's goal is nested in its ancestors' goals; see Asimov's Laws.
Each child agent should know all its ancestors' goals and report to the parent agent to justify its continued existence.
I tried to implement strategies that change the way the plan is built. The AI should decide which strategy to use depending on the problem. It still does not work, but it might give some interesting results. You need to change the file prompt_settings.yaml if you want to play with it. The decomposition strategy seems to be an efficient way to simplify the problem.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity. |
Duplicates
Summary 💡
Auto-GPT needs to change its goals depending on the circumstances.
For example, it could decide that in order to build a modern website, its goal would be to use a certain technology.
But then, after navigating the web, it might discover this technology is no longer best practice. At that point it should update its goals.
In this challenge, we need to figure out a very specific prompt, then make the agent read a certain file whose contents should lead it to update its goals.
FYI @Torantulino.
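As a rough illustration of the shape this challenge could take (the file name, goal text, and pass condition are assumptions, not an actual challenge implementation):

```python
from pathlib import Path

# Hypothetical setup: the agent starts with an outdated goal, then reads a file
# that contradicts it and is expected to revise its goals.
workspace = Path("challenge_workspace")
workspace.mkdir(exist_ok=True)
(workspace / "best_practices.txt").write_text(
    "HTML3 is obsolete. Modern websites should be built with HTML5."
)

initial_goals = ["Build a modern website using HTML3"]


def challenge_passed(updated_goals: list) -> bool:
    """Pass if the agent dropped the outdated technology after reading the file."""
    return not any("HTML3" in goal for goal in updated_goals)
```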
Examples 🌈
No response
Motivation 🔦
No response