
Implement Local Cache and Redis Memory backend #372

Merged: 18 commits, Apr 9, 2023

Conversation

@BillSchumacher (Contributor) commented Apr 7, 2023

Background

Redis is a free and open-source alternative to Pinecone.

We had a discussion here: https://discord.com/channels/1092243196446249134/1093713674679615638

Changes

The memory backend was given an abstract base class.

PineconeMemory now inherits from this ABC.

RedisMemory was added.

Configuration options were added to the Config class.
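The structure described above could be sketched as follows; the method names are assumptions for illustration, not necessarily the PR's exact interface.

```python
# Illustrative sketch of a memory-backend ABC; the method names here are
# assumptions and may not match the PR's exact interface.
import abc


class MemoryBackend(abc.ABC):
    """Interface that concrete backends (Pinecone, Redis, local) implement."""

    @abc.abstractmethod
    def add(self, data: str) -> None:
        """Store a piece of text in memory."""

    @abc.abstractmethod
    def get_relevant(self, data: str, num_relevant: int = 5) -> list:
        """Return the most relevant stored items for the given text."""

    @abc.abstractmethod
    def clear(self) -> None:
        """Wipe the backend's state."""


class ListMemory(MemoryBackend):
    """Trivial concrete backend used only to show the contract."""

    def __init__(self):
        self.items = []

    def add(self, data: str) -> None:
        self.items.append(data)

    def get_relevant(self, data: str, num_relevant: int = 5) -> list:
        # Naive relevance: substring match, capped at num_relevant.
        return [item for item in self.items if data in item][:num_relevant]

    def clear(self) -> None:
        self.items = []
```

With this in place, PineconeMemory and RedisMemory can each subclass the ABC and be swapped behind the same interface.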

Test Plan

I had SuperMario on Discord verify that I did not break Pinecone; I tested Redis locally.

Change Safety

  • I have added tests to cover my changes
  • I have considered potential risks and mitigations for my changes

I didn't see any existing tests for memory; I will attempt to add some later.

@BillSchumacher (Contributor Author)

I should note that there is no reason why we have to wipe out the memory, unless the intent is to reset state.

We could easily save the memory store's state on exit and load on start, or even per add() call.

@BillSchumacher (Contributor Author)

In Redis this is simple, so I went ahead and persisted the vec_num to Redis and added an option to wipe on startup as the default behavior, optionally allowing users to keep their state.
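As a runnable sketch of that wipe-or-restore startup behavior: a small dict-backed stand-in replaces the real Redis client so the logic runs without a server, and the flag name is an assumption.

```python
# Sketch of the startup logic described above. FakeRedis stands in for a
# real redis.Redis client; the "vec_num" key follows the comment, but the
# wipe_redis_on_start flag name is an assumption.


class FakeRedis:
    """Minimal stand-in for the subset of the Redis API used here."""

    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

    def flushall(self):
        self.store.clear()


def init_vec_num(client, wipe_redis_on_start):
    """Restore the persisted vector counter, or reset it when wiping."""
    if wipe_redis_on_start:
        client.flushall()
    existing = client.get("vec_num")
    vec_num = int(existing) if existing is not None else 0
    client.set("vec_num", vec_num)
    return vec_num
```

Defaulting the flag to True keeps the previous "fresh every run" behavior while letting users opt in to persistence.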

@aosan commented Apr 7, 2023

In redis this is simple so I went ahead and persisted the vec_num to redis and added an option to wipe on startup as the default behavior, optionally allowing users to keep their state.

👍
Allowing for persistence is a great idea for any long-running jobs. Cloudflare frequently throws a 401 that ends the session; warm restarts would need a steady state.

@BillSchumacher BillSchumacher changed the title Implement Redis Memory backend Implement Local Cache and Redis Memory backend Apr 7, 2023
@BillSchumacher (Contributor Author)

This now implements a local version using numpy.
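A minimal sketch of what a numpy-backed local memory could look like, assuming embeddings are stored as rows of a float32 matrix and relevance is scored by dot product; the class and method names are illustrative, not the PR's exact code.

```python
# Illustrative numpy-backed local memory; names and scoring are assumptions.
import numpy as np


class LocalCacheSketch:
    def __init__(self, dim):
        self.texts = []
        self.matrix = np.zeros((0, dim), dtype=np.float32)

    def add(self, text, embedding):
        """Append the text and stack its embedding as a new matrix row."""
        self.texts.append(text)
        self.matrix = np.vstack([self.matrix, embedding.astype(np.float32)])

    def get_relevant(self, embedding, k=5):
        """Return the k stored texts with the highest dot-product score."""
        if not self.texts:
            return []
        scores = self.matrix @ embedding
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]
```

The whole store can then be serialized to JSON (hence the orjson requirement mentioned below) for persistence between runs.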

@BillSchumacher (Contributor Author)

It is also set as the default; orjson was added as a requirement.

@BillSchumacher (Contributor Author)

This now avoids import errors for both redis and pinecone. It is possible to remove them from requirements.txt.
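The guarded-import idea can be sketched like this: optional backends are probed with importlib, and selection falls back to the local backend when the package is missing. Function names here are illustrative, not the PR's.

```python
# Sketch of the guarded-import pattern: probe optional dependencies and
# fall back to the local backend if one is absent. Names are illustrative.
import importlib


def backend_available(module_name):
    """Return True if the optional dependency can be imported."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False


def choose_backend(preferred):
    """Fall back to 'local' when the preferred backend's package is absent."""
    optional_deps = {"redis": "redis", "pinecone": "pinecone"}
    if preferred in optional_deps and not backend_available(optional_deps[preferred]):
        return "local"
    return preferred
```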

@dominikdassow
Hi, I see that both you and @cs0lar in #424 work on a memory backend/factory/interface. Great stuff! I'd suggest having a look into creating a strong combined implementation.

I think @cs0lar's implementation is a bit neater and simpler, and has tests. You, @BillSchumacher, have the two additional providers, Redis and local.

This memory interface is a good step 👍🏻 Happy to help and work together here

@supermario-ai commented Apr 8, 2023

@BillSchumacher LocalCache is cruising right now, with no major errors after ~5 minutes of run time. Starting the Redis smoke test.

@Torantulino Torantulino added the enhancement New feature or request label Apr 8, 2023
@supermario-ai
@BillSchumacher Redis: pass.
@ALL: of all the memory types, in my experience, Redis doesn't bork segmentation.

[screenshot]

Pinecone and LocalCache don't do this well with the stock entrepreneur persona & goals.

@supermario-ai
🟢

@jkoelker (Contributor) previously approved these changes Apr 8, 2023 and left a comment:

LGTM; I've used this as-is for a few test setups without issue. Just a suggestion to default the env vars to make the presumed local case easier to set up.

scripts/config.py (outdated, resolved)
Co-authored-by: Jason Kölker <[email protected]>
@BillSchumacher (Contributor Author)

Hi, I see that both you and @cs0lar in #424 work on a memory backend/factory/interface. Great stuff! I'd suggest having a look into creating a strong combined implementation.

I think @cs0lar's implementation is a bit neater and simpler, and has tests. You, @BillSchumacher, have the two additional providers, Redis and local.

This memory interface is a good step 👍🏻 Happy to help and work together here

I'm not sure that having a class for a single function makes sense; if other people like that, I'm good with whatever.

The tests are a start, but we probably need some CI/CD to be able to really test.

@Torantulino (Member)

Taking a look at this.
Thank you for your contribution!

@Torantulino previously approved these changes Apr 9, 2023
@ryanpeach
@Torantulino once you merge this, I'll merge my combo PR back into my QA PR to handle the merge conflict produced, and we will be back in business.

scripts/main.py (outdated, resolved)
Remove pinecone config requirement
@Torantulino (Member)

Running this causes almost all of AutoGPT outputs to not be properly formatted as JSON for me.

Anyone else experiencing this problem?

@BillSchumacher (Contributor Author)

Running this causes almost all of AutoGPT outputs to not be properly formatted as JSON for me.

Anyone else experiencing this problem?

I'm not sure the JSON errors are restricted to this branch; the main thing I'm seeing from a local model is just bad output from the model.

```
relevant memory [[Document {'id': 'bob:0', 'payload': None, 'vector_score': '0.156416118145', 'data': 'Assistant Reply: Understood. My first task is "Search Files" for python scripts. \nResult: Command Error: returned: Unknown command Error: \nHuman Feedback: GENERATE NEXT COMMAND JSON '}]]
```

That is what's coming back from relevant memory.

@Torantulino (Member)

I can see in the readme you mentioned instructions on how to make Redis memory persist between runs, however using your "LocalCache" version, memory is persisting for me by default.

How do users configure this?

```python
memory = RedisMemory(cfg)

if memory is None:
    memory = LocalCache(cfg)
```
Member:

We appear to be missing a call to memory.clear() here.
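A minimal sketch of the point being made; the class and function names are toy stand-ins inferred from the snippet above, not the PR's actual code.

```python
# Sketch showing where a memory.clear() call would go when a fresh start
# is wanted; LocalCache here is a toy stand-in, not the PR's class.


class LocalCache:
    def __init__(self):
        self.data = []

    def add(self, item):
        self.data.append(item)

    def clear(self):
        """Wipe the cached items."""
        self.data = []


def get_memory(wipe_on_start=True):
    memory = LocalCache()
    if wipe_on_start:
        memory.clear()  # the call this review comment says is missing
    return memory
```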

Member:

For the LocalCache

Contributor Author:

Yeah, there were a lot of people who wanted the memory to persist by default, and I just made Pi's implementation work.

I guess a direction one way or the other needs to be picked.

Deleting the memory is easy enough for local; just delete the JSON file.

But there's also a memory index if you wanted to start fresh but keep memory from a different session.

Member:

Interesting, okay.

Perhaps default persistent memory is a good idea in the near future; for now, however, I think it has the side effect of encouraging the AI to repeat its errors.

In my experimenting with this PR, if the AI is reminded of events from its past where it didn't respond in JSON, it's more likely to do so again.

Contributor Author:

It might be prudent to not even save those in memory.
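One way to sketch that idea, assuming replies are expected to be JSON; the helper name is made up for illustration.

```python
# Sketch: only store replies that parse as JSON, so malformed output is
# never fed back to the model as relevant memory. Helper name is made up.
import json


def save_if_valid(memory, reply):
    """Append the reply to memory only when it is well-formed JSON."""
    try:
        json.loads(reply)
    except json.JSONDecodeError:
        return False
    memory.append(reply)
    return True
```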

@BillSchumacher (Contributor Author)

Running this causes almost all of AutoGPT outputs to not be properly formatted as JSON for me.

Anyone else experiencing this problem?

```python
return [result.data for result in results.docs]
```

Might be better
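In context, the suggestion is to hand back plain payload strings rather than full result objects. A self-contained sketch, with the result types mocked for illustration:

```python
# Mocked result types to illustrate the suggestion above: return each
# document's text payload instead of the full result objects.
from dataclasses import dataclass, field


@dataclass
class Document:
    id: str
    data: str


@dataclass
class SearchResults:
    docs: list = field(default_factory=list)


def get_relevant(results):
    # Plain strings are friendlier prompt material than repr()'d objects.
    return [result.data for result in results.docs]
```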

@BillSchumacher (Contributor Author)

Yeah, not saving the bad responses and returning the data seems to be working better for me.

@Torantulino (Member)

Agreed, I think that would make more sense to the Agent.
Though benchmarking later on will tell us for sure.

@Torantulino (Member)

Okay, nice, I think we're pretty close.

Does anything else need to be resolved before we merge this?

@BillSchumacher (Contributor Author)

I thought it was good before but you are thorough =)

@Torantulino (Member)

Gotta look out for all the users!

It's passing all my tests now though.

I have a few more suggestions for tweaks to the memory system now, but they're better suited to their own PR.
Keep an eye out for them and let me know what you think.

@BillSchumacher (Contributor Author)

Will do

Labels: enhancement (New feature or request), high priority
10 participants