Running /attach is very slow for large datasets #371

Closed
sarahwooders opened this issue Nov 8, 2023 · 1 comment
Labels: auto-closed, enhancement (New feature or request)

@sarahwooders (Collaborator)

Now that postgres is integrated, MemGPT can support large archival memory stores. However, loading data into the agent via /attach requires copying data from one table into another (the agent's archival memory table).

Possible solutions:

  • Parallelizing the copy across multiple processes (see the first sketch below)
  • Avoiding the copy entirely by giving agents read-only access to attached data source tables (this would require archival memory search to span multiple tables; see the second sketch below)
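
A minimal sketch of the first option, assuming a Postgres source table with an integer `id` primary key. The `DSN`, `source_table`, and `archival_memory` names are placeholders, not MemGPT's actual schema. Each worker copies a disjoint ID range with a server-side `INSERT ... SELECT`, so rows never round-trip through Python:

```python
from multiprocessing import Pool

import psycopg2

# Hypothetical connection string and worker count for illustration.
DSN = "postgresql://memgpt:memgpt@localhost/memgpt"
N_WORKERS = 4


def copy_range(bounds):
    """Copy one contiguous ID range; each worker opens its own connection."""
    lo, hi = bounds
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO archival_memory (text, embedding)
                SELECT text, embedding
                FROM source_table
                WHERE id >= %s AND id < %s
                """,
                (lo, hi),
            )


def parallel_attach():
    # Find the full ID range once, then split it evenly across workers.
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT min(id), max(id) FROM source_table")
            lo, hi = cur.fetchone()
    step = (hi - lo) // N_WORKERS + 1
    ranges = [(lo + i * step, lo + (i + 1) * step) for i in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        pool.map(copy_range, ranges)


if __name__ == "__main__":
    parallel_attach()
```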
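And a minimal sketch of the second option, assuming pgvector-style embedding columns. Instead of copying anything at /attach time, archival search runs one read-only query spanning the agent's own table plus every attached source table; the table names and the pgvector `<->` (L2 distance) operator are assumptions, not the project's actual query:

```python
import psycopg2

DSN = "postgresql://memgpt:memgpt@localhost/memgpt"  # hypothetical


def archival_search(query_vec, attached_tables, top_k=10):
    """Rank passages across the agent's table and attached tables in place.

    query_vec is a pgvector literal string such as "[0.1, 0.2, ...]".
    Table names must come from a trusted registry, since identifiers are
    interpolated into the SQL and cannot be bound as parameters.
    """
    tables = ["archival_memory"] + list(attached_tables)
    # Each branch gets its own ORDER BY/LIMIT so a per-table vector index
    # can be used; the outer ORDER BY merges the per-table top-k lists.
    branches = " UNION ALL ".join(
        f"(SELECT text, embedding <-> %(q)s::vector AS distance "
        f"FROM {t} ORDER BY distance LIMIT {int(top_k)})"
        for t in tables
    )
    sql = f"{branches} ORDER BY distance LIMIT {int(top_k)}"
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, {"q": query_vec})
            return cur.fetchall()
```

The trade-off: this keeps /attach essentially free, but every archival search then fans out across all attached tables.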

github-actions bot commented Dec 6, 2024

This issue has been automatically closed due to 60 days of inactivity.

github-actions bot closed this as completed Dec 6, 2024