Support Auto-GPT delegating sub-tasks to other instances of itself #70

Closed
LrWm3 opened this issue Apr 3, 2023 · 19 comments
Labels
enhancement New feature or request

Comments

@LrWm3

LrWm3 commented Apr 3, 2023

Summary

Allow Auto-GPT to break up work and delegate it to other instances of itself.

Feature Description

Similar to actions such as 'file' or 'browse', I propose we have a 'delegate' module. However, I don't really have much of an idea of how it would work at this point.
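As a rough illustration only (the `delegate` command name and its wiring are hypothetical, not existing Auto-GPT API), the degenerate single-call version of such a command might look like this, using the pre-1.0 `openai` Python client:

```python
import openai  # pre-1.0 client assumed

def delegate(task: str, worker_model: str = "gpt-3.5-turbo") -> str:
    """Hypothetical 'delegate' command: hand a sub-task to a worker model.

    A real implementation would spawn a full Auto-GPT instance with its own
    command loop and memory; this sketch collapses that to one completion call.
    """
    response = openai.ChatCompletion.create(
        model=worker_model,
        messages=[
            {"role": "system",
             "content": "You are a worker agent. Complete the task and report back concisely."},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content
```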

Benefits

  • Have an 'overseer' Auto-GPT model backed by GPT-4 and worker Auto-GPT models backed by self-hosted 'llama.cpp' models that do the grunt work, reducing the cost of operation.
  • No need to try to keep all of the state, history, and long-term & short-term memory for everything in one agent; it can be split across several.
  • Better fits the organizational model of the data LLMs are derived from; a "more natural" fit, given that this is how most complex organizations are run.
  • May work as a method for delegating to humans in the future.

Request for Comments

I would appreciate feedback from the community on this suggested feature. Please share your thoughts, suggestions, and any potential concerns you may have.

@LrWm3 LrWm3 changed the title Auto-GPT to allow delegating sub-tasks to other instances of itself Support Auto-GPT delegating sub-tasks to other instances of itself or other LLM models Apr 3, 2023
@LrWm3 LrWm3 changed the title Support Auto-GPT delegating sub-tasks to other instances of itself or other LLM models Support Auto-GPT delegating sub-tasks to other instances of itself Apr 3, 2023
@PederHP

PederHP commented Apr 3, 2023

Perhaps another way to look at this is as 'reflect' and/or 'analyze' actions? I don't know enough about how you've hooked up the other actions and made their function clear, but using an instance of itself (or another LLM) could be framed as an action that allows reflecting on an idea or concept, or analyzing a problem or topic. The concepts of reflection and analysis are semantically very clear to an advanced LLM, which should provide a strong foundation for their use as value-adding mechanisms.
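A minimal sketch of how 'reflect' and 'analyze' could be exposed as actions under that framing (the names and prompts are illustrative, not anything in the codebase):

```python
import openai  # pre-1.0 client assumed

PROMPTS = {
    "reflect": "Reflect critically on the following idea. Point out flaws, risks, and improvements.",
    "analyze": "Analyze the following problem. Break it into parts and assess each one.",
}

def reflect_or_analyze(mode: str, text: str, model: str = "gpt-4") -> str:
    """Illustrative 'reflect'/'analyze' action backed by a second LLM call."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPTS[mode]},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```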

@drusepth

drusepth commented Apr 3, 2023

Have an 'overseer' Auto-GPT model backed by GPT-4 and worker Auto-GPT models backed by self-hosted 'llama.cpp' models that do the grunt work, reducing the cost of operation.

Using a recursive parent-child model could do away with any kind of meta-state in instances like "overseer" or "worker" if any instance could delegate tasks to child instances. It'd be great if there were a way to estimate the necessary capabilities for a task and assign a model dynamically based on need, rather than defaulting to overseer-gpt / worker-llama, etc. It might make sense for an original GPT-4 instance to spawn sub-GPT-4 instances for more complex tasks, which in turn could spawn their own llama instances for grunt work and/or additional GPT-4 instances for more involved tasks.

If there's any work already done in integrating other LLMs (like llama), I could probably allocate some time to some of the auxiliary requirements for this feature, like deciding what model should be used for a given task.
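A toy version of that routing decision (the keyword heuristic and model names are placeholders; a real difficulty estimator might itself be an LLM call):

```python
def pick_model(task: str) -> str:
    """Crude capability router: send hard-looking tasks to a stronger model."""
    hard_markers = ("design", "architect", "plan", "debug", "prove")
    if any(marker in task.lower() for marker in hard_markers):
        return "gpt-4"      # spawn a stronger sub-instance for complex tasks
    return "llama-7b"       # placeholder name for a local grunt-work model
```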

@Torantulino Torantulino added the enhancement New feature or request label Apr 3, 2023
@pmb2

pmb2 commented Apr 6, 2023

I found this interesting.
Not sure if it will help spark some inspiration or not...
https://flowgpt.com/explore/g3WyEcF-faatk0syMUJtB

@algopapi

algopapi commented Apr 7, 2023

I have done some testing for this on my fork:

Basically, I wrapped main.py inside an agent class.
The agents sit inside an organization class with one initial founder agent.
Agents can then hire/fire staff members. Each staff member can recursively hire and fire its own staff. Only supervisor <-> staff communication is possible, to keep things clean.
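(Not the fork's actual code; just a toy reconstruction of the structure described above, to make the hierarchy concrete:)

```python
from __future__ import annotations
from typing import Optional

class Agent:
    """Toy agent node: hire/fire staff, talk only one level up or down."""

    def __init__(self, name: str, supervisor: Optional[Agent] = None):
        self.name = name
        self.supervisor = supervisor
        self.staff: list[Agent] = []

    def hire(self, name: str) -> Agent:
        employee = Agent(name, supervisor=self)
        self.staff.append(employee)
        return employee

    def fire(self, employee: Agent) -> None:
        self.staff.remove(employee)

    def message(self, other: Agent, text: str) -> None:
        # Only supervisor <-> direct staff communication is allowed.
        if other is not self.supervisor and other not in self.staff:
            raise PermissionError(f"{self.name} may not contact {other.name}")
        print(f"{self.name} -> {other.name}: {text}")

class Organization:
    """Holds the hierarchy, starting from a single founder agent."""

    def __init__(self, founder_name: str):
        self.founder = Agent(founder_name)
```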

@feffy380

Using a recursive parent-child model could do away with any kind of meta-state in instances like "overseer" or "worker" if any instance could delegate tasks to child instances.

Care would have to be taken to limit the number of child instances (per agent, as well as maximum depth).

@algopapi

Hey there,

I made some progress on this lately:

Feel free to experiment with this over at: https://github.com/algopapi/Auto-GPT. The stable branch is the one to look at.
(by default it is set to gpt-4, but be careful because costs are high)
I also implemented asynchronous agents, but for your wallet's sake do not use this.

Organisations are initially launched with a budget. Nodes can hire employees and allocate budget to them. Hiring staff members increases the running costs. I implemented this because otherwise it would go off the rails rather quickly, with agents hiring multiple researchers/developers to work for them.
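(A rough sketch of that budget gate; the class name and cost constants are invented, the point is just that hiring debits the parent's budget and adds a recurring cost:)

```python
class BudgetedAgent:
    """Toy budget gate: hiring debits the parent and adds recurring costs."""

    HIRE_FEE = 1.0        # illustrative one-time cost per hire
    RUNNING_COST = 0.1    # illustrative per-step cost per staff member

    def __init__(self, budget: float):
        self.budget = budget
        self.staff = []

    def hire(self, staff_budget: float) -> "BudgetedAgent":
        if self.budget < staff_budget + self.HIRE_FEE:
            raise RuntimeError("insufficient budget to hire")
        self.budget -= staff_budget + self.HIRE_FEE
        employee = BudgetedAgent(staff_budget)
        self.staff.append(employee)
        return employee

    def tick(self) -> None:
        """Charge recurring running costs for every staff member each step."""
        self.budget -= self.RUNNING_COST * len(self.staff)
```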

I am very curious what people find here: what does it do right, and where does it go wrong? This is far from perfect, but it gives a small glimpse into the promising world of agent swarms. Right now I enforce a certain organisational structure upon the agents (supervisor, staff). I feel, however, that as these models get smarter, it should be more of a sandbox approach (https://arxiv.org/abs/2304.03442). That way agents can form their own, perhaps novel, organizational structures, or maybe they just prefer to work alone; who knows...

It is quite buggy and might require some messing around to get running, but people should play with it; it is quite cool.

In the near future, implementing Reflexion, much like this paper: https://arxiv.org/abs/2304.03442, could be nice.

Let me know what you find.

@LrWm3
Author

LrWm3 commented Apr 14, 2023

Organisations are initially launched with a budget. Nodes can hire employees and allocate budget to them. Hiring staff members increases the running costs. I implemented this because otherwise it would go off the rails rather quickly, with agents hiring multiple researchers/developers to work for them.

I was just talking to my colleague about this idea, awesome to see you actually implemented it!

Gonna try it out; I find this idea really interesting.

@LrWm3
Author

LrWm3 commented Apr 14, 2023

Tried it out! I ran into an issue where they would circularly hire more and more people to try to complete the task, and the founder kept trying to message his boss 😅. Really cool to see, though; if I had a self-hosted LLM I could use, I would love to let it run for a while in async mode to see what happens.

@algopapi

Nice! I ran into these issues as well. The first one is really tricky. The second one should be quite easy: simply remove the message_supervisor command from the founder's prompt. Thanks for the feedback!
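(Something along these lines; the command names come from the fork's description and are not verified against its code:)

```python
def build_command_list(agent) -> list:
    """Expose message_supervisor only to agents that actually have a supervisor."""
    commands = ["hire_staff", "fire_staff", "message_staff"]
    if getattr(agent, "supervisor", None) is not None:  # the founder has none
        commands.append("message_supervisor")
    return commands
```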

@feffy380

feffy380 commented Apr 15, 2023

The first one is really tricky.

Is it? Couldn't you solve it the same way as the second one, i.e., remove the ability to create new agents depending on some condition like hierarchy depth or a global agent limit?

@lfricken
Contributor

Discussion thread

You could either set a hard max on recursion depth (and on how many sub-agents each agent can hire), or make it depend on the difficulty of the subtask and give GPT permission to choose a number (below a threshold) with a suggestion that it should only be >0 for complex tasks.
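Both variants fit in a small guard, e.g. (thresholds invented for illustration):

```python
MAX_DEPTH = 3        # hard ceiling on delegation depth
MAX_SUBAGENTS = 4    # per-agent cap on hires

def clamp_subagent_request(requested: int, depth: int) -> int:
    """Let the model propose a sub-agent count, but clamp it to hard limits.

    Simple tasks should get 0; the model is told to request >0 only for
    complex work, and nothing spawns once MAX_DEPTH is reached.
    """
    if depth >= MAX_DEPTH:
        return 0
    return max(0, min(requested, MAX_SUBAGENTS))
```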

@Pwuts
Member

Pwuts commented Apr 16, 2023

Seems this is already implemented in the form of agents, no?

@algopapi

algopapi commented Apr 16, 2023

Seems this is already implemented in the form of agents, no?

Not quite, since the agents are not full instances of Auto-GPT itself, i.e., they cannot execute commands or spawn more agents (AFAIK).
They might have changed it though; please lmk if they did.

@lfricken
Contributor

lfricken commented Apr 16, 2023

Seems this is already implemented in the form of agents, no?

No. See the linked discussion thread above your comment. @algopapi: they can execute commands. They can't run concurrently, which is the big one.

@Pwuts
Member

Pwuts commented Apr 16, 2023

@0xArty is working on that atm: https://github.com/0xArty/Auto-GPT/blob/async_agent/autogpt/agent/async_agent.py

@lfricken
Contributor

I maintain that such a feature should be referred to as fullauto 🔫🔫🔫

@BouarfaMahi

BouarfaMahi commented Apr 17, 2023

I am interested in this feature because I am building an AI Agent Farm. The infrastructure consists of 4 refurbished Gen8 microservers, each equipped with a 250GB SSD, 16GB of RAM, and 4x1TB of RAID data storage for analysis, powered by Ubuntu 22.04 with a suite of tools including VS Code, QuestDB, Grafana, and Auto-GPT.

I want to run multiple tasks simultaneously and distribute processes across the AI agent workers, maximizing efficiency and minimizing downtime. I plan to expand this farm by purchasing 6 more machines in the near future.

Furthermore, VS Code is remotely connected to a JupyterHub instance installed on a GPU server. The available kernels on JupyterHub are Python and Julia. Each AI agent has its own JupyterHub user account when using VS Code.

Right now I can only launch Auto-GPT on each of the 4 microservers independently. Each one is connected to the Coinbase real-time streaming data source. QuestDB and Grafana are used to store and display the data in real time. I can access the farm remotely with the remote management system developed by Teltonika.

If the data is stored on each microserver and the AI application is installed on the GPU server, we will need to ensure that each AI agent can access the data on the microserver where it is installed.

I can test any available module.

edit-1: I'm currently working on creating documentation for building a Docker-based Auto-GPT crypto environment. This environment will also be connected to a live streaming crypto data source, QuestDB, and Grafana for data visualization, so that any contributor can have access to a ready-to-use Auto-GPT crypto environment.

@Pwuts Pwuts moved this to 🏗 In progress in AutoGPT development kanban Apr 18, 2023
@Pwuts Pwuts moved this to Todo in AutoGPT Roadmap Apr 18, 2023
@Pwuts Pwuts moved this from Todo to In Progress in AutoGPT Roadmap Apr 18, 2023
@Pwuts Pwuts moved this from 🏗 In progress to 🔖 Ready in AutoGPT development kanban Apr 18, 2023
tgonzales pushed a commit to tgonzales/Auto-GPT that referenced this issue Apr 19, 2023
@Wladastic
Contributor

Wladastic commented Apr 19, 2023

I am currently working on a similar solution.
There are very small LLM models that can run locally which do very simple tasks.
Not sure if they have any use for big stuff, but for research, for example, just summarizing what it found, I think Facebook's LLM would be great.
I'm trying to use oobabooga as an API and will continue on it tomorrow; running Alpaca on it seems good enough to replace GPT-3.5, and Alpaca also runs much quicker on my RTX 4080 right now.
I only need to rebuild the API for that, which is tricky.
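For reference, a minimal client against the text-generation-webui blocking API might look like this (the endpoint and payload shape follow the legacy API from memory and vary by version, so verify before relying on them):

```python
import requests

def local_completion(prompt: str, host: str = "http://localhost:5000") -> str:
    """Call a local oobabooga/text-generation-webui server (legacy API assumed)."""
    response = requests.post(
        f"{host}/api/v1/generate",
        json={"prompt": prompt, "max_new_tokens": 200},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["results"][0]["text"]
```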

What my client did after issue #1646 is that it actually managed to SSH into my local server and clone Auto-GPT into it.
I think that's what gave me the idea to run multiple Docker instances of Auto-GPT.
What would you say?

@Swiftyos
Contributor

We now have an issue specifically to discuss how best to approach this: #2458
Please use that issue to add your input as to how best to architect v1 of AutoGPT.

@github-project-automation github-project-automation bot moved this from In Progress to Done in AutoGPT Roadmap Apr 19, 2023
@github-project-automation github-project-automation bot moved this from 🔖 Ready to ✅ Done in AutoGPT development kanban Apr 19, 2023
Say383 pushed a commit to Say383/Auto-GPT that referenced this issue Sep 8, 2023