[issue in need of revisiting] Retrieval Augmentation / Memory #3536
Comments
I will tinker with this shortly.
Sorry, I am new to this. Do I post this here or somewhere else?
@W3Warp, it depends on what you want to achieve.
Help with the issue.
🚧 Current state of the work in progress
To use RAG effectively for long-term memory in AutoGPT, we need to find or create data points during the think-execute cycle which we can use to query a vector DB and enrich the context. Our current view is that the best way to obtain these data points is to implement structured planning/task management. This is tracked in #4107.
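As a rough illustration of the idea above, the current plan step can serve as the data point that queries the vector DB before the next LLM call. Everything here (`ToyVectorStore`, the hash-bucket `embed`, `next_prompt`) is a hypothetical sketch, not AutoGPT's actual interfaces:

```python
# Hypothetical hook in the think-execute cycle: the current plan step is
# the "data point" used to query memory before prompting the LLM.
# The embedding and store are toy stand-ins; only the shape of the hook
# follows the idea described above.

def embed(text: str) -> list[float]:
    # Toy hash-bucket embedding; a real agent would call an embedding model.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

class ToyVectorStore:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Rank stored items by dot product with the query embedding.
        q = embed(query)
        ranked = sorted(
            self.items,
            key=lambda it: -sum(a * b for a, b in zip(q, it[1])),
        )
        return [text for text, _ in ranked[:k]]

def next_prompt(plan_step: str, store: ToyVectorStore) -> str:
    """Enrich the next LLM prompt with memories relevant to the plan step."""
    hits = store.search(plan_step)
    context = "\n".join(f"- {m}" for m in hits)
    return f"Relevant memories:\n{context}\n\nNext step: {plan_step}"
```

The key point is that the query is derived from structured planning state rather than from raw conversation history, which is why #4107 is a prerequisite.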
Problem
Vector memory isn't used effectively.
Related proposals
Related issues (need deduplication)
🔭 Primary objectives
Robust and reliable memorization routines for all relevant types of content
🏗️ 1+3
This covers all processes, functions and pipelines involved in creating memories. We need the content of the memory to be of high quality to allow effective memory querying and to maximize the added value of having these memories in the first place.
TL;DR: garbage in, garbage out -> what goes in must be focused towards relevance and subsequent use.
🏗️ 1
🏗️ 1
🏗️ 3
🏗️ 3
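To make the "garbage in, garbage out" point concrete, a memorization routine might filter out trivial content and split long content into focused chunks before anything is embedded and stored. The names and thresholds (`MIN_LENGTH`, `CHUNK_SIZE`, `memorize`) are illustrative assumptions, not AutoGPT's API:

```python
# Illustrative memorization routine: content is filtered and chunked
# before being stored, so that what goes in stays focused and relevant.
# Thresholds are arbitrary placeholders.

MIN_LENGTH = 20    # skip trivial snippets that would only add noise
CHUNK_SIZE = 200   # split long content into focused, queryable chunks

def chunk(text: str, size: int = CHUNK_SIZE) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def memorize(content: str, store: list[str]) -> int:
    """Store only focused chunks; return how many chunks were kept."""
    content = content.strip()
    if len(content) < MIN_LENGTH:
        return 0  # too short to be useful when retrieved later
    kept = chunk(content)
    store.extend(kept)
    return len(kept)
```

In a real pipeline the filtering step would likely involve summarization or relevance scoring by an LLM rather than a simple length check.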
Good memory search/retrieval based on relevance
🏗️ 1
For a given query (e.g. a prompt or question), we need to be able to find the most relevant memories.
Must be implemented separately for each memory backend provider:
Milvus (the other currently implemented providers are not in this list because they may be moved to plugins)
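The contract each backend has to fulfil is roughly: given a query vector, return the k stored memories most similar to it. A minimal in-memory sketch using cosine similarity (a backend like Milvus would execute the equivalent search natively; `top_k` and the tuple layout are assumptions):

```python
# Minimal relevance search: rank stored memory vectors by cosine
# similarity to the query vector and return the top-k texts.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(
    query: list[float],
    memories: list[tuple[str, list[float]]],
    k: int = 3,
) -> list[str]:
    """Return the k memory texts most similar to the query vector."""
    ranked = sorted(memories, key=lambda m: cosine(query, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```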
Effective LLM context provisioning from memory
🏗️ 2
Once we have an effective system to store and retrieve memories, we can hook this into the agent loop. The goal is to provide the LLM with focused, useful information when it is needed or useful for the next step of a given task.
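One way to picture this hook: retrieved memories (already ranked by relevance) are packed into the prompt under a fixed token budget, most relevant first, so the LLM sees focused information for the next step. The 4-characters-per-token heuristic and function names are assumptions for illustration, not a real tokenizer or AutoGPT code:

```python
# Sketch of context provisioning under a token budget: take ranked
# memories and add them to the context until the budget is spent.

def count_tokens(text: str) -> int:
    # Rough heuristic (~4 chars per token); real code would use a tokenizer.
    return max(1, len(text) // 4)

def build_context(memories: list[str], budget: int) -> str:
    """Add memories (ranked by relevance) until the token budget is spent."""
    selected: list[str] = []
    used = 0
    for mem in memories:
        cost = count_tokens(mem)
        if used + cost > budget:
            break  # ranked input, so everything after is less relevant
        selected.append(mem)
        used += cost
    return "\n".join(f"- {m}" for m in selected)
```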
🏗️ Pull requests
Applies refactoring and restructuring to make way for bigger improvements to the memory system
Restructures the `Agent` to make extension and modification easier

🛠️ Secondary todo's
🏗️ 1
🏗️ 1
🏗️ 3
(see also Code files under 🔭 Primary objectives)
✨ Tertiary todo's
🏗️ 4
📝 Drawing boards
These boards contain drafts, concepts and considerations that form the basis of this subproject. Feel free to comment on them if you have ideas/proposals to improve the workflows or schematics.
If you are curious and have questions, please ask those on Discord.
Related
🚨 Please no discussion in this thread 🚨
This is a collection issue, intended to collect insights, resources and record progress. If you want to join the discussion on this topic, please do so in the Discord.