Enhance chat memory #226
Before utilizing any token-counting memory, such as These memories rely on the functionality of Additionally, the
I dug a little into vector-based memories; here are some takeaways:
What about a background task (like a Kubernetes CronJob) that reads all memories and moves the "old ones" into a vectorstore?
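As a rough illustration of that consolidation idea, here is a minimal Python sketch: messages older than a short-term window are evicted from the hot buffer and indexed into a vector store. All names here (`fake_embed`, `InMemoryVectorStore`, `WINDOW`) are illustrative stand-ins, not part of any real library.

```python
# Hypothetical sketch of a periodic consolidation job (e.g. run as a
# Kubernetes CronJob): messages outside the short-term window are moved
# out of the hot buffer and indexed into a vector store.

from typing import List, Tuple

WINDOW = 4  # number of recent messages kept in short-term memory


def fake_embed(text: str) -> List[float]:
    # Stand-in for a real embedding model: hashes characters into a tiny vector.
    return [sum(ord(c) for c in text) % 97 / 97.0]


class InMemoryVectorStore:
    """Toy vector store: keeps (embedding, text) pairs in a list."""

    def __init__(self) -> None:
        self.entries: List[Tuple[List[float], str]] = []

    def add(self, text: str) -> None:
        self.entries.append((fake_embed(text), text))


def consolidate(buffer: List[str], store: InMemoryVectorStore) -> List[str]:
    """Move everything except the last WINDOW messages into the store."""
    old, recent = buffer[:-WINDOW], buffer[-WINDOW:]
    for msg in old:
        store.add(msg)
    return recent


store = InMemoryVectorStore()
buffer = [f"message {i}" for i in range(10)]
buffer = consolidate(buffer, store)
# buffer now holds the 4 newest messages; the 6 older ones are indexed.
```

A real job would read from the Redis message list and write to an actual vector database, but the partitioning logic would look much the same.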
Neither is simple to implement.
A chatbot is not like a human being. Humans don't inherently retain the order of conversation messages; we rely on short-term memory, which eventually consolidates information into long-term memory, where order becomes less crucial. A chatbot, in contrast, must persistently keep all messages in list order, primarily for later display to the user. It's also worth noting that Redis indexing is limited to hash or JSON structures, not lists, so I cannot simply add an embedding index on a list of chat messages. To support both long-term and short-term memory while preserving message order for display, one solution is to duplicate the messages: one copy is stored in a Redis list, and another in a hash with an embedding index. An alternative approach is to store only the list index in the indexed document, as shown below:

{
    "msg_embedding": [],
    "msg_idx": 0
}

Fetching a message together with its context can then be done by reading the message list with LRANGE, which has an acceptable time complexity of O(S+N) (LRANGE). However, it's important to note the current behavior of langchain's
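To make the dual-write scheme concrete, here is a small in-process Python sketch. Real code would use Redis (RPUSH for the ordered list, HSET plus a vector index for search, LRANGE for context windows); plain Python lists and dicts stand in for Redis here, and `fake_embed` is a placeholder for a real embedding model.

```python
# In-process sketch of the dual-write idea: every message is appended to an
# ordered list (for display) and also recorded as a searchable document that
# stores only its list index alongside its embedding.

from typing import Dict, List

messages: List[str] = []        # stands in for the Redis LIST (ordered history)
index: Dict[str, Dict] = {}     # stands in for Redis HASHes (one doc per message)


def fake_embed(text: str) -> List[float]:
    # Placeholder for a real embedding model.
    return [len(text) / 100.0]


def append_message(text: str) -> None:
    messages.append(text)                       # ~ RPUSH chat:msgs text
    doc_id = f"doc:{len(messages) - 1}"
    index[doc_id] = {                           # ~ HSET doc:{i} ...
        "msg_embedding": fake_embed(text),
        "msg_idx": len(messages) - 1,
    }


def context_around(doc_id: str, radius: int = 1) -> List[str]:
    """Fetch a search hit plus surrounding messages, like LRANGE start stop."""
    i = index[doc_id]["msg_idx"]
    start = max(0, i - radius)
    return messages[start : i + radius + 1]     # ~ LRANGE chat:msgs start stop


for t in ["hi", "hello!", "what's the plan?", "ship it"]:
    append_message(t)
```

After a vector search returns a document, `msg_idx` is all that's needed to pull the ordered context window from the list, which keeps the two copies from drifting apart.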
langchain's memory system is being refactored; I may wait a few more weeks until it stabilizes.
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
Enhance chat memory with long-term memory (summary), searched memory (vector retriever), and short-term memory (buffer window, the current solution), and combine them together (CombinedMemory).
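The proposed layering can be sketched in a few lines of plain Python: each memory source contributes its own prompt variable, and a combiner merges them. The class names below mirror the proposal but are toy stand-ins, not LangChain's actual `CombinedMemory` API.

```python
# Toy sketch of the proposed layout: a rolling summary (long-term), a
# recency window (short-term), and a combiner that merges their outputs.


class SummaryMemory:
    """Long-term: a rolling summary of everything seen so far."""

    def __init__(self) -> None:
        self.summary = ""

    def observe(self, msg: str) -> None:
        # A real implementation would summarize with an LLM; we just append.
        self.summary = (self.summary + " " + msg).strip()

    def load(self) -> dict:
        return {"summary": self.summary}


class BufferWindowMemory:
    """Short-term: the k most recent messages, in order."""

    def __init__(self, k: int = 2) -> None:
        self.k, self.buffer = k, []

    def observe(self, msg: str) -> None:
        self.buffer = (self.buffer + [msg])[-self.k:]

    def load(self) -> dict:
        return {"recent": list(self.buffer)}


class CombinedMemory:
    """Fans writes out to every sub-memory and merges their prompt variables."""

    def __init__(self, memories) -> None:
        self.memories = memories

    def observe(self, msg: str) -> None:
        for m in self.memories:
            m.observe(msg)

    def load(self) -> dict:
        merged = {}
        for m in self.memories:
            merged.update(m.load())
        return merged


memory = CombinedMemory([SummaryMemory(), BufferWindowMemory(k=2)])
for msg in ["hi", "how are you?", "fine thanks"]:
    memory.observe(msg)
prompt_vars = memory.load()
```

A searched-memory layer (vector retriever) would slot in as a third sub-memory whose `load` runs a similarity query against the vector store; the combiner itself does not change.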