Adaptability Challenge : Dynamic prompting #3937

Closed
1 task done
waynehamadi opened this issue May 7, 2023 · 17 comments

Comments

@waynehamadi
Contributor

Duplicates

  • I have searched the existing issues

Summary 💡

Auto-GPT needs to change its goals depending on the circumstances.
For example, it could decide that in order to build a modern website, its goal should be to use a certain technology.
But then, after navigating the web, it might discover that this technology is no longer considered best practice. At that point it should update its goals.

In this challenge we need to figure out a very specific prompt, then make the agent read a certain file whose contents should lead it to update its goals.
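A minimal sketch of what that check could look like, assuming a generic `llm(prompt)` completion helper (not an existing Auto-GPT function) and JSON output from the model:

```python
import json

def maybe_update_goals(llm, current_goals: list[str], file_contents: str) -> list[str]:
    """Ask the model whether a newly read file invalidates any current goal.

    `llm` is a hypothetical callable that takes a prompt string and returns
    the model's text completion; it stands in for whatever chat wrapper is used.
    """
    prompt = (
        "You are an autonomous agent. Your current goals are:\n"
        + "\n".join(f"- {g}" for g in current_goals)
        + "\n\nYou just read the following file:\n"
        + file_contents
        + "\n\nIf the file shows that any goal is outdated or based on a wrong "
        "assumption, return the full revised goal list as a JSON array of "
        "strings. Otherwise return the current goals unchanged, as JSON."
    )
    try:
        revised = json.loads(llm(prompt))
        return revised if isinstance(revised, list) else current_goals
    except (json.JSONDecodeError, TypeError):
        # If the model doesn't return valid JSON, keep the old goals.
        return current_goals
```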

FYI @Torantulino.

Examples 🌈

No response

Motivation 🔦

No response

@anonhostpi

anonhostpi commented May 7, 2023

I think what counts as a "circumstance" may need to be specified/worked out.

For example, I imagine we could benefit from metrics on specific adaptability behaviors:

  • How will it behave when a new approach to solving the problem is introduced to the world (on the web or through another agent)?
  • How will it behave when an approach meets a difficult obstacle in the goal/problem?

I'm sure there are more types of "circumstances" that should prompt adaptation.

@Boostrix
Contributor

Boostrix commented May 7, 2023

For starters, you could use a stack of prompt generators to process a task. Whenever the current prompt seems unsuitable or needs changing (we could ask the LLM itself whether the prompt boilerplate is helpful for achieving a certain goal or not), a new prompt generator would be pushed onto the stack with the corresponding goals/objectives and constraints/evaluations - often without it having to be aware of goals higher up in the chain of objectives, which frees up contextual memory and declutters the memory of the current task.
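As a rough illustration of the stack idea (a hypothetical sketch, not the existing PromptGenerator class):

```python
from dataclasses import dataclass, field

@dataclass
class PromptFrame:
    """One level of prompting context: only its own goals and constraints."""
    goals: list[str]
    constraints: list[str] = field(default_factory=list)

class PromptStack:
    """Stack of prompt frames; only the top frame is rendered into the prompt,
    so goals higher up the chain stay out of the context window."""

    def __init__(self) -> None:
        self._frames: list[PromptFrame] = []

    def push(self, frame: PromptFrame) -> None:
        # Pushed when the current prompt is judged unsuitable for the sub-task.
        self._frames.append(frame)

    def pop(self) -> PromptFrame:
        # Popped when the sub-task is done and the parent prompt takes over again.
        return self._frames.pop()

    def render(self) -> str:
        if not self._frames:
            return ""
        top = self._frames[-1]
        return (
            "GOALS:\n" + "\n".join(f"- {g}" for g in top.goals)
            + "\nCONSTRAINTS:\n" + "\n".join(f"- {c}" for c in top.constraints)
        )
```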

how will it behave when an approach meets a difficult obstacle in the goal/problem

I can't seem to find it at the moment, but I made some good progress asking the LLM to lay out its steps in sequential order, including an estimate of complexity (as a percentage of the whole task at hand).
Once I had that list, I used a prompt to look for alternatives/contingencies for any step on that list more complex than 20% of the whole task - and, absent those, hinted it to do online research.
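Something along these lines, where `llm` is again a generic completion helper and the naive percentage parser is only illustrative:

```python
import re

def parse_steps(steps_text: str) -> list[dict]:
    """Very naive parser: keep lines that end with a 'NN%' style estimate."""
    steps = []
    for line in steps_text.splitlines():
        match = re.search(r"(\d+(?:\.\d+)?)\s*%", line)
        if match:
            steps.append({"step": line.strip(), "share": float(match.group(1)) / 100})
    return steps

def plan_with_contingencies(llm, task: str, threshold: float = 0.20) -> list[dict]:
    """Ask for a step list with complexity estimates, then request alternatives
    for any step whose share of the whole task exceeds the threshold."""
    steps_text = llm(
        f"Lay out the steps to accomplish '{task}' in sequential order. "
        "For each step, estimate its share of the total effort as a percentage."
    )
    steps = parse_steps(steps_text)
    for step in steps:
        if step["share"] > threshold:
            step["alternatives"] = llm(
                f"List 2-3 alternative or contingency approaches for: {step['step']}. "
                "If none are known, suggest an online research query instead."
            )
    return steps
```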

Here's one related PR, #934, which uses dynamic/custom prompts to accomplish observer/monitoring behavior (one agent possibly observing/monitoring sub-agents).

Currently, I am tinkering with this idea of using dynamic prompts for specific workflows as part of the wizard draft: #3911

I would also suggest to take a look at #3593 where @dschonholtz stated:

A lot of the problems with memory and elsewhere stem from the fact that we have all of this random crap that isn't relevant to the current task; we don't plan well for what our next task should be, and we don't really process what our previous tasks accomplished. And we spend a lot of precious token real estate on stuff that is mostly just confusing to the agent.
The basic thought is: if you have one agent that maintains a list of tasks done, their results, and what tasks should be accomplished next, and a separate execution agent that takes that output and executes on it for a simple task, you should get far better performance.

PS:

For example, it could decide that in order to build a modern website, its goal should be to use a certain technology.

give it a try: "build a modern webpage using HTML3 using REST" :-)

@zachary-kaelan

One idea that came up in #15 was splitting the agent into a divergent thinking agent (high randomness and word frequency penalties) and a convergent thinking agent (mostly deterministic) that work together. The divergent thinking agent is told to consider alternative solutions. This approach would be very conducive to adaptability.

@Boostrix
Contributor

Boostrix commented May 9, 2023

I think we talked in #15 about using the sub-agent facility to accomplish just that, by priming each agent with a different set of goals/rules (right/left brain hemisphere). In combination with a runtime setting to adjust the "temperature", that could work reasonably well - at least if we can afford having an outer agent that evaluates the dialogue to determine the most promising path forward by weighing the degree of complexity versus the gains/costs associated with different strategies.

Basically, each agent instance would provide its perspective/preferences and then the final agent would compare the two perspectives, while taking into account what's feasible / tangible and most promising.
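A hedged sketch of that two-hemisphere setup, assuming a generic `chat(prompt, temperature)` helper rather than any existing sub-agent API:

```python
def two_hemisphere_step(chat, problem: str) -> str:
    """Run a divergent (high-temperature) and a convergent (low-temperature)
    pass, then let a third call weigh the two and pick a direction."""
    divergent = chat(
        f"Brainstorm unusual or alternative ways to approach: {problem}",
        temperature=1.2,
    )
    convergent = chat(
        f"Give the single most reliable, conventional approach to: {problem}",
        temperature=0.0,
    )
    # The "outer" agent compares both perspectives and picks the most promising path.
    return chat(
        "Two agents proposed approaches to the same problem.\n"
        f"Divergent agent:\n{divergent}\n\nConvergent agent:\n{convergent}\n\n"
        "Weigh complexity against the expected gains and costs, and state which "
        "path forward is most promising and why.",
        temperature=0.2,
    )
```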

@javableu
Contributor

Maybe we can steal some ideas from heuristics (https://en.wikipedia.org/wiki/Heuristic). Three possible strategies:

  • Trial and error (Auto-GPT) >> set up a bunch of agents and let them do their stuff, hoping they will find the solution
  • Rule of thumb (~Auto-GPT plugin) >> if the AI needs to do a specific task, it can read a manual and follow the instructions
  • Educated guess (BabyAGI, https://github.com/yoheinakajima/babyagi) >> an assumption made to solve the problem (dynamic prompting and goal reassessment, priority task lists, sub-agent facility, ...)

Maybe the best would be to have a mix of those three strategies.

## Educated guess (help to decompose the problem into simpler tasks)

prompt 1:
from the role and goals, rewrite all of this into a single problem

prompt 2:
decompose the problem into a tree of simple tasks to achieve (while keeping constant difficulty)

prompt 3:
find all of the dependencies of the branches

## Resolution of simple tasks (trial and error or rule of thumb)

prompt 5 (simple task):
if the task is covered by one of the rules of thumb, apply the rule of thumb; otherwise, start a random (trial-and-error) process until completion of the task

## Educated guess

prompt 6:
rebuild the tree and repeat

An idea of a prompt for educated guess : Given a problem, break it down into two main categories, focusing on tasks that can be executed in parallel. For each category, divide it into two subcategories. Continue this process until the subcategories become simple tasks. Identify any remaining interdependencies among tasks and include these dependencies within the tree structure itself. Finally, create an execution order that takes these dependencies into account, ensuring that tasks with dependencies are performed after the tasks they rely on.
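A rough sketch of that decomposition loop (the prompts are paraphrased and `llm` is a generic completion helper, not an Auto-GPT internal):

```python
def decompose(llm, problem: str, depth: int = 0, max_depth: int = 3) -> dict:
    """Recursively split a problem into two parallelisable sub-problems until
    the leaves are simple tasks (or max_depth is reached)."""
    is_simple = llm(
        f"Is this a simple task? Answer yes or no: {problem}"
    ).strip().lower().startswith("yes")
    if depth >= max_depth or is_simple:
        return {"task": problem, "children": []}
    parts = [
        line.strip()
        for line in llm(
            "Split the following into exactly two sub-problems that can be "
            f"worked on in parallel, one per line:\n{problem}"
        ).splitlines()
        if line.strip()
    ][:2]
    return {
        "task": problem,
        "children": [decompose(llm, part, depth + 1, max_depth) for part in parts],
    }
```

A later pass could then walk the resulting tree to collect dependencies and derive an execution order, as described above.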

I like the idea of the sub-agent checking the degree of complexity; maybe that could be used to build the decomposition.

@Boostrix
Contributor

The general point is valid: we need to provide strategies to deal with options and evaluate them, which is something I have recently begun experimenting with. Basically, any item on the list whose complexity is determined to be above some threshold needs 2-3 alternatives (options), which are then weighed internally based on "probability of success".

Case in point: ask an AI agent to "GET ONE BITCOIN" - it should be able to conclude that mining on your CPU/GPU is not likely to yield success :-)

And it should be able to present different options, weighted by preference.

@waynehamadi
Contributor Author

waynehamadi commented May 12, 2023

If you can create a challenge.md file, that would be amazing. You don't need to implement the challenge in code, just lay things out and put status: "to be implemented".
Follow this format:
https://docs.agpt.co/challenges/memory/challenge_a/
For example, that file lives in the repo at
docs/challenges/memory/challenge_a.md

@Boostrix
Contributor

Boostrix commented May 12, 2023

Not sure what exactly you had in mind, so I've tried to incorporate some of the ideas we've seen over the last couple of days, see: #4133

@zachary-kaelan

I propose that we have a "warm-up" for goal creation by using a prompt like this:

You are on a team of agents with semi-diverse education, experience, and views. Your team was given a message. The message contains a goal and potentially contains instructions on how to accomplish this goal.

MESSAGE: "[goal given by user]"

You are holding a meeting to accomplish the following tasks:

  • Formulate the problem.
  • Examine the instructions.
  • Reject any instructions or proposed solutions that seem misinformed or suboptimal.
  • Come up with a plan.

Simulate this meeting now.

I experimented with a few test goals and the "team" does some good discussion and planning. Another prompt can then get the meeting highlights before we put it somewhere into the prompt fed to Auto-GPT.

We can also have a command to hold a meeting, and/or automatically trigger meetings under certain conditions.
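A small sketch of how that warm-up could be wired in, with `llm` as a generic completion helper and the meeting prompt taken from above:

```python
MEETING_PROMPT = """You are on a team of agents with semi-diverse education, experience, and views.
Your team was given a message. The message contains a goal and potentially contains
instructions on how to accomplish this goal.

MESSAGE: "{goal}"

You are holding a meeting to accomplish the following tasks:
- Formulate the problem.
- Examine the instructions.
- Reject any instructions or proposed solutions that seem misinformed or suboptimal.
- Come up with a plan.

Simulate this meeting now."""

def warm_up(llm, user_goal: str) -> str:
    """Simulate the meeting, then condense it to highlights that could be
    prepended to the main Auto-GPT prompt."""
    transcript = llm(MEETING_PROMPT.format(goal=user_goal))
    return llm(
        "Summarize the key decisions and the agreed plan from this meeting "
        f"as a short bullet list:\n{transcript}"
    )
```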

@waynehamadi
Contributor Author

Anyone want to implement something and create a PR?

@Boostrix
Contributor

Boostrix commented May 14, 2023

In this challenge we need to figure out a very specific prompt, then make the agent read a certain file whose contents should lead it to update its goals.
Anyone want to implement something and create a PR?

I've kind of done this already, using one temporary LLM conversation to determine which registered commands are useful for a given problem.
This could be extended accordingly, i.e. something like chat_with_ai() calling itself recursively to determine whether it can do something or whether it should adapt its prompt - one chat_with_ai() call providing 3-5 suggestions, and then one call to a new "change_prompt" command to update the internal prompt accordingly.

Obviously, it would be much better to modify/generalize the current PromptGenerator class - but to get something started, chat_with_ai() will probably suffice?

In fact, if we wrap both functions up together, we'd have a single command that can recursively invoke itself to come up with a specific prompt before doing more research.
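One way the recursive refinement plus a hypothetical change_prompt step could look, sketched with a generic `chat` helper rather than the real chat_with_ai() signature:

```python
def refine_prompt(chat, current_prompt: str, objective: str, rounds: int = 2) -> str:
    """Recursively ask the model whether the current prompt suits the objective;
    if not, adopt one of its suggested rewrites (the 'change_prompt' step)."""
    for _ in range(rounds):
        verdict = chat(
            f"Objective: {objective}\n\nCurrent prompt:\n{current_prompt}\n\n"
            "Is this prompt suitable for the objective? Answer 'yes', or give "
            "3-5 improved candidate prompts, one per line."
        )
        if verdict.strip().lower().startswith("yes"):
            break
        suggestions = [line.strip() for line in verdict.splitlines() if line.strip()]
        if suggestions:
            # Hypothetical change_prompt: simply adopt the first suggestion.
            current_prompt = suggestions[0]
    return current_prompt
```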

Personally, I am thinking it makes sense for us to wait for the corresponding RFEs and the associated PRs to be reviewed/integrated first, namely:

Once these are in place, we should generalize the PromptGenerator and make it support an evaluation that judges a prompt's suitability for querying an LLM about a given objective. This could be score-based.
Basically, we would be asking the LLM to generate a prompt for us and then use another chat_with_ai() call to evaluate it.

At that point, this new prompt generator abstraction could be exposed as a BIF/command so that an agent can experiment with different prompts, choosing the most likely candidates based on some score.

We could then use a loop that reads a set of objectives and adapts the agent's own prompt accordingly.
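A sketch of that score-based selection, again assuming a plain `llm` completion helper and a 0-10 suitability score:

```python
def best_prompt(llm, objective: str, n_candidates: int = 3) -> str:
    """Generate candidate prompts for an objective and keep the one the model
    itself scores highest for suitability."""
    raw = llm(
        f"Write {n_candidates} different system prompts an agent could use to "
        f"pursue this objective, one per line:\n{objective}"
    )
    candidates = [line.strip() for line in raw.splitlines() if line.strip()][:n_candidates]
    scored = []
    for candidate in candidates:
        answer = llm(
            "On a scale of 0-10, how suitable is this prompt for the objective "
            f"'{objective}'? Reply with just a number.\n\nPrompt: {candidate}"
        )
        try:
            score = float(answer.strip())
        except ValueError:
            score = 0.0  # unparseable score counts as unsuitable
        scored.append((score, candidate))
    return max(scored, key=lambda sc: sc[0])[1] if scored else ""
```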

@Boostrix Boostrix mentioned this issue May 19, 2023
6 tasks
@Androbin
Contributor

Androbin commented May 23, 2023

  • Agent 1, goal: Increase revenue
    • Agent 2, goal: Make customers happy
      • Agent 3, goal: Give away free stuff

Goals 1 and 2 are positively correlated
Goals 2 and 3 are positively correlated
Goals 1 and 3 are negatively correlated

An agent's goal is nested within its ancestors' goals (see Asimov's Laws).
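A tiny sketch of such a goal hierarchy (hypothetical names, not an existing Auto-GPT structure):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Agent:
    """Each agent keeps its own goal plus a link to its parent, so the full
    chain of ancestor goals is always recoverable for reporting."""
    goal: str
    parent: Optional["Agent"] = None
    children: list["Agent"] = field(default_factory=list)

    def spawn(self, goal: str) -> "Agent":
        child = Agent(goal=goal, parent=self)
        self.children.append(child)
        return child

    def ancestor_goals(self) -> list[str]:
        goals, node = [], self.parent
        while node is not None:
            goals.append(node.goal)
            node = node.parent
        return goals

# The example from above:
revenue = Agent("Increase revenue")
happy = revenue.spawn("Make customers happy")
free_stuff = happy.spawn("Give away free stuff")
assert free_stuff.ancestor_goals() == ["Make customers happy", "Increase revenue"]
```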

@Androbin
Contributor

Each child agent should know all its ancestors' goals and report to the parent agent to justify its continued existence.

@Boostrix Boostrix mentioned this issue Jun 7, 2023
1 task
@javableu
Contributor

I tried to implement strategies that change the way the plan is built. The AI should decide which strategy to use depending on the problem. It still does not work, but it might give some interesting results. You need to change the file prompt_settings.yaml if you want to play with it. The decomposition strategy seems to be an efficient way to simplify the problem.

"constraints": [ '1. Always use the following sentence structure when you define the section called <<<PLAN>>>. Sentence structure: <<< I will use the <xyz> strategy to <what you want to achieve> by using the command <name_of_the_command> >>>. <xyz> is the strategy name and <what you want to achieve> is the thing you want to do. You need to come up with the best strategy disregarding the number 1., 2., .. and the order', '2. Learn from errors and adjust strategy.', '3. Avoid repeating unproductive actions.', '4. Reflect on actions and outcomes.', '5. Avoid infinite loops: If a similar response is generated consecutively, consider changing the approach.', '6. When you use a command, exclusively use the commands listed below e.g. command_name', '7. Only use the strategies that are listed below, the names of the strategies are defined (Rule of thumbs, Chain of thought, ...).', '8. No user assistance', '9. Make sure every command is written right.', '10. Check that command arguments match the plan.', '11. Ensure placeholders in the command arguments are replaced with actual values before executing a command.', "strategies": [ '1. Rule of Thumbs: If a specific event is met, use rule of thumbs (the specific events are specified in rule_of_thumbs).', '2. Chain of Thought: Provide step-by-step reasoning.', '3. Tree of Thoughts: Use multiple agents for diverse solutions.', '4. Forest of Thoughts: Merge solutions from multiple tree of thoughts.', '5. Focus: Write short, precise texts.', '6. Avoid the Rabbit Hole: Try new approaches when encountering recurring errors.', '7. Reflection: Reflect on actions and outcomes.', '8. Simplify: Explain in simple terms.', '9. Achievable: Set clear, achievable objectives.', '10. Problem Decomposition: Break down complex problems.', '11. For or Against: Weigh up pros and cons.', '12. Listen to the Expert: Create a specialist agent for expert advice on specific topics.', '13. Scenario Analysis: Consider multiple scenarios and their potential outcomes.', '14. Backtracking: If a chosen path does not work, step back and try another one.', '15. Heuristics: Apply simple, efficient rules to solve complex problems.', '16. Divide and Conquer: Break the problem into smaller parts and solve each individually.', '17. Positive Reinforcement: Encourage successful actions or behaviors.', '18. Collective Intelligence: Leverage multiple AI agents to gather diverse perspectives.', '19. Dynamic Planning: Update plans according to changes in context or new information.', '20. Failure Analysis: Learn from failures and adjust future actions accordingly.' ], "rule_of_thumbs": [ '1. Error event, learn from your mistakes. The errors should be saved in memory or file, try to understand what went wrong.', '2. No changes in the situation event. Do not repeat ineffective commands. If an action or sequence of actions is repeated without yielding new or productive results, interrupt the cycle. Reflect on the situation, reassess your objectives, and devise a new approach or method. When setting goals, ensure they are clear, precise, and achievable. Remember, the definition of insanity is doing the same thing over and over again and expecting different results. Therefore, strive for adaptability and learning in your problem-solving process.If stuck in a loop, interrupt, reassess, and devise a new method If similar responses are generated consecutively, try a new approach or modify the current strategy.' ] ], "resources": [ '1. Use internet for searching and gathering information.', '2. 
Manage your long-term memory.', '3. Use GPT-3.5 Agents for simple tasks.', '4. Write to a file.' ], "performance_evaluations": [ '1. Always check your work for effectiveness and efficiency.', '2. Always validate the command arguments before execution.', '3. Self-critique your behavior and decisions.', '4. Remember that every command has a cost, be efficient.', '5. Write all code to a file and keep it organized.', '6. Check and confirm the successful execution of commands.' ]

@github-actions
Contributor

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Sep 6, 2023
@Pwuts Pwuts removed the Stale label Sep 14, 2023
@github-actions
Contributor

github-actions bot commented Nov 4, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Nov 4, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned Nov 14, 2023