
Improve Self Regulatory Agency (Avoid getting stuck in sub loops) #3916

Closed
1 task done
urbanscribe opened this issue May 6, 2023 · 6 comments
Labels: enhancement (New feature or request), Stale

Comments

@urbanscribe

Duplicates

  • I have searched the existing issues

Summary 💡

To improve the agent's ability to focus on significant tasks, make measurable progress, and better manage the auto_gpt_workspace folder, a few improvements could be considered:

Track progress: Maintain a progress tracking system for each task by saving the task's status and progress within the agent's memory. This can be done using a dictionary that maps task identifiers to their progress information. Update the progress data after each interaction loop or whenever the agent makes progress on a specific task.

Check for existing work: Before starting a new task, have the agent check the auto_gpt_workspace folder for existing work related to the task. To achieve this, you can modify the _resolve_pathlike_command_args method to search for existing files or folders related to the task. If a relevant file is found, inform the agent to continue working on it instead of starting from scratch.
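An illustrative helper for that check (this is not the actual `_resolve_pathlike_command_args` method, just a sketch of the lookup it could perform): scan the workspace for files whose names mention any keyword from the task.

```python
# Hypothetical workspace scan; find_existing_work is an illustrative
# name, not an Auto-GPT API.
from pathlib import Path


def find_existing_work(workspace: Path, task_keywords: list[str]) -> list[Path]:
    """Return workspace files whose names contain any task keyword."""
    matches = []
    for path in workspace.rglob("*"):
        if path.is_file() and any(
            keyword.lower() in path.name.lower() for keyword in task_keywords
        ):
            matches.append(path)
    return matches
```

If this returns a non-empty list, the agent would be told to resume work on those files rather than starting from scratch.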

Prioritize tasks: Reintroduce a task prioritization mechanism in the start_interaction_loop method. For each task, assign a priority level, and ensure that the agent only works on higher-priority tasks before attending to lower-priority ones. This can be done by adding a priority attribute to the command arguments and sorting the tasks accordingly.
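The sorting step above can be sketched with a small priority queue, assuming each command-argument dict carries a `priority` field (lower number = higher priority). `TaskQueue` is an illustrative name, not existing Auto-GPT code.

```python
# Hypothetical priority-ordered task queue for the interaction loop.
import heapq
from itertools import count


class TaskQueue:
    def __init__(self):
        self._heap = []
        self._counter = count()  # tie-breaker: keeps insertion order stable

    def push(self, task: dict) -> None:
        # Tasks without an explicit priority default to a low priority (10).
        priority = task.get("priority", 10)
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def pop(self) -> dict:
        """Return the highest-priority (lowest number) task."""
        return heapq.heappop(self._heap)[2]


queue = TaskQueue()
queue.push({"name": "write_file", "priority": 2})
queue.push({"name": "browse_website", "priority": 1})
assert queue.pop()["name"] == "browse_website"
```

The counter tie-breaker matters: without it, `heapq` would try to compare the task dicts themselves when priorities are equal, which raises a `TypeError`.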

Specifically, this translates into:

Progress Tracking: The current agent does not track the progress of individual tasks. The proposed changes involve maintaining a progress tracking system for each task and updating the progress data after each interaction loop or whenever the agent makes progress on a specific task.

Checking for Existing Work: The current implementation does not check for existing work in the auto_gpt_workspace folder before starting new tasks. The suggested modifications involve changing the _resolve_pathlike_command_args method to search for existing files or folders related to the task. If a relevant file is found, the agent will continue working on it instead of starting from scratch.

Examples 🌈

No response

Motivation 🔦

No response

@Boostrix
Contributor

Boostrix commented May 6, 2023

Please see: #3593 (feel free to provide your feedback there)
And more recently:
And more recently:

@k-boikov k-boikov added the enhancement (New feature or request) label May 6, 2023
@anonhostpi

I am not sure if it is currently part of the project or planned, but there has been a lot of talk about Observer Agents.

One of the main points for incorporating Observer/Self-Regulatory Agents is to minimize the risk that an Agent does something unlawful or harmful to a human. Most of the contributors agree that we can't prevent an AI from breaking the law, but we can ensure that it self-monitors and corrects any inappropriate behavior.

Many have noted that observer-like agents, whose role is to report violations of law, mal-intent, or any risk of human harm, became obsessed with that role and would report and flag such behavior extremely frequently.

@Boostrix
Contributor

Boostrix commented May 7, 2023

I've seen some PRs/files already considering/using this approach (using sub-agents to observe/control behavior: e.g. #934, with #765 talking about introducing a "monitoring" capability soon). If this is formalized, there could be the equivalent of an ObserverAgent sub-classed from the Agent class, potentially able to not just observe but also intervene. Once you think about it, that's exactly what a "manager" agent would be doing anyway, just with different constraints obviously.

So, I guess the main differentiation is whether an agent is passively observing ("read-only" / "watch-only") or if it is watching to intervene if necessary (active observers/directors)

The current implementation does not check for existing work in the auto_gpt_workspace folder before starting new tasks.

Regarding redundant sub-loops, see: #3668
Regarding persistence, there are different folks tinkering with different schemes.
It's an interesting problem to think about!

@Boostrix
Contributor

Boostrix commented Jul 3, 2023

Latest work to be found here: #4862

@github-actions
Contributor

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Sep 6, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned (stale) Sep 17, 2023