Dependency management #337

Merged: 40 commits, Nov 7, 2023
Commits
89cf976
mark depricated API section
sarahwooders Oct 30, 2023
be6212c
add readme
sarahwooders Oct 31, 2023
b011380
add readme
sarahwooders Oct 31, 2023
59f7b71
add readme
sarahwooders Oct 31, 2023
176538b
add readme
sarahwooders Oct 31, 2023
9905266
add readme
sarahwooders Oct 31, 2023
3606959
add readme
sarahwooders Oct 31, 2023
c48803c
add readme
sarahwooders Oct 31, 2023
40cdb23
add readme
sarahwooders Oct 31, 2023
ff43c98
add readme
sarahwooders Oct 31, 2023
01db319
CLI bug fixes for azure
sarahwooders Oct 31, 2023
a11cef9
check azure before running
sarahwooders Oct 31, 2023
a47d49e
Merge branch 'cpacker:main' into main
sarahwooders Oct 31, 2023
fbe2482
Update README.md
sarahwooders Oct 31, 2023
446a1a1
Update README.md
sarahwooders Oct 31, 2023
1541482
bug fix with persona loading
sarahwooders Oct 31, 2023
5776e30
Merge branch 'main' of github.com:sarahwooders/MemGPT
sarahwooders Oct 31, 2023
d48cf23
Merge branch 'cpacker:main' into main
sarahwooders Oct 31, 2023
7a8eb80
remove print
sarahwooders Oct 31, 2023
9a5ece0
Merge branch 'main' of github.com:sarahwooders/MemGPT
sarahwooders Oct 31, 2023
d3370b3
merge
sarahwooders Nov 3, 2023
c19c2ce
Merge branch 'cpacker:main' into main
sarahwooders Nov 3, 2023
aa6ee71
Merge branch 'cpacker:main' into main
sarahwooders Nov 3, 2023
36bb04d
make errors for cli flags more clear
sarahwooders Nov 3, 2023
6f50db1
format
sarahwooders Nov 3, 2023
4c91a41
Merge branch 'cpacker:main' into main
sarahwooders Nov 3, 2023
dbaf4a0
Merge branch 'cpacker:main' into main
sarahwooders Nov 5, 2023
c86e1c9
fix imports
sarahwooders Nov 5, 2023
e54e762
Merge branch 'cpacker:main' into main
sarahwooders Nov 5, 2023
524a974
Merge branch 'main' of github.com:sarahwooders/MemGPT
sarahwooders Nov 5, 2023
7baf3e7
fix imports
sarahwooders Nov 5, 2023
2fd8795
Merge branch 'main' of github.com:sarahwooders/MemGPT
sarahwooders Nov 5, 2023
4ab4f2d
add prints
sarahwooders Nov 5, 2023
cc94b4e
Merge branch 'main' of github.com:sarahwooders/MemGPT
sarahwooders Nov 6, 2023
78ff874
cleanup dependency management
sarahwooders Nov 6, 2023
7e3385c
move around impots
sarahwooders Nov 6, 2023
0312aeb
dependencies
sarahwooders Nov 6, 2023
c6cf11c
cleanup docs
sarahwooders Nov 6, 2023
112f5f3
Add pip install comment to legacy run
vivi Nov 7, 2023
c8f70ca
formatting
vivi Nov 7, 2023
2 changes: 1 addition & 1 deletion .github/workflows/tests.yml

@@ -43,7 +43,7 @@ jobs:
         env:
           PGVECTOR_TEST_DB_URL: ${{ secrets.PGVECTOR_TEST_DB_URL }}
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
         run: |
-          poetry install
+          poetry install -E postgres -E dev

       - name: Set Poetry config
         env:
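The CI change installs the `postgres` and `dev` extras so the test job can exercise the optional database backend. As a rough illustration (not MemGPT's actual test code), optional-backend tests are often guarded so they skip cleanly when the extra is missing, e.g. with pytest:

```python
import pytest

# Hypothetical test module: skip the whole file if the optional postgres
# driver (installed via the -E postgres extra) is not available.
pg8000 = pytest.importorskip("pg8000")


def test_pg8000_driver_importable():
    # Placeholder check; real tests would exercise the archival storage backend.
    assert hasattr(pg8000, "connect")
```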
8 changes: 3 additions & 5 deletions docs/contributing.md

@@ -6,14 +6,12 @@ First, install Poetry using [the official instructions here](https://python-poet
 Then, you can install MemGPT from source with:
 ```sh
 git clone git@github.com:cpacker/MemGPT.git
-poetry shell
-poetry install
+poetry install -E dev
 ```
 We recommend installing pre-commit to ensure proper formatting during development:
 ```sh
-pip install pre-commit
-pre-commit install
-pre-commit run --all-files
+poetry run pre-commit install
+poetry run pre-commit run --all-files
 ```

 ### Formatting
6 changes: 6 additions & 0 deletions docs/local_llm.md

@@ -6,6 +6,12 @@

 Make sure to check the [local LLM troubleshooting page](../local_llm_faq) to see common issues before raising a new issue or posting on Discord.

+### Installing dependencies
+To install dependencies required for running local models, run:
+```
+pip install 'pymemgpt[local]'
+```
+
 ### Quick overview

 1. Put your own LLM behind a web server API (e.g. [oobabooga web UI](https://github.com/oobabooga/text-generation-webui#starting-the-web-ui))
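The new `pymemgpt[local]` extra keeps local-LLM packages out of the base install. Below is a minimal sketch of the guarded-import pattern such an extra usually pairs with; the helper name and the `transformers` package are assumptions for illustration, not MemGPT's actual code:

```python
def require_local_llm_deps():
    """Import an optional local-LLM dependency, with an actionable error if missing."""
    try:
        import transformers  # assumed to ship with the 'local' extra
    except ImportError as err:
        raise ImportError(
            "Local LLM support requires extra dependencies. "
            "Install them with: pip install 'pymemgpt[local]'"
        ) from err
    return transformers
```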
15 changes: 9 additions & 6 deletions docs/storage.md

@@ -1,16 +1,19 @@
 # Configuring Storage Backends
 MemGPT supports both local and database storage for archival memory. You can configure which storage backend to use via `memgpt configure`. For larger datasets, we recommend using a database backend.

 !!! warning "Switching storage backends"

     MemGPT can only use one storage backend at a time. If you switch from local to database storage, you will need to re-load data and start agents from scratch. We currently do not support migrating between storage backends.

 ## Local
 MemGPT will default to using local storage (saved at `~/.memgpt/archival/` for loaded data sources, and `~/.memgpt/agents/` for agent storage).

 ## Postgres
-In order to use the Postgres backend, you must have a running Postgres database that MemGPT can write to. You can enable the Postgres backend by running `memgpt configure` and selecting `postgres` for archival storage, which will then prompt for the database URI (e.g. `postgresql+pg8000://<USER>:<PASSWORD>@<IP>:5432/<DB_NAME>`)
+In order to use the Postgres backend, you must have a running Postgres database that MemGPT can write to. You can enable the Postgres backend by running `memgpt configure` and selecting `postgres` for archival storage, which will then prompt for the database URI (e.g. `postgresql+pg8000://<USER>:<PASSWORD>@<IP>:5432/<DB_NAME>`). To enable the Postgres backend, make sure to install the required dependencies with:
+```
+pip install 'pymemgpt[postgres]'
+```

 ## Chroma
 (Coming soon)
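The URI format above implies SQLAlchemy with the `pg8000` driver. A quick, standalone connectivity check under that assumption (placeholder credentials; this is not how MemGPT's storage layer itself connects):

```python
from sqlalchemy import create_engine, text

# Same shape as the URI that `memgpt configure` prompts for; swap in real values.
uri = "postgresql+pg8000://memgpt_user:secret@localhost:5432/memgpt"

engine = create_engine(uri)
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # fails fast if the database is unreachable
print("Postgres backend reachable.")
```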
8 changes: 6 additions & 2 deletions memgpt/memory.py

@@ -2,8 +2,6 @@
 import os
 import datetime
 import re
-import faiss
-import numpy as np
 from typing import Optional, List, Tuple

 from .constants import MESSAGE_SUMMARY_WARNING_TOKENS, MEMGPT_DIR

@@ -353,6 +351,8 @@ class DummyArchivalMemoryWithFaiss(DummyArchivalMemory):

     def __init__(self, index=None, archival_memory_database=None, embedding_model="text-embedding-ada-002", k=100):
         if index is None:
+            import faiss
+
             self.index = faiss.IndexFlatL2(1536)  # openai embedding vector size.
         else:
             self.index = index

@@ -366,6 +366,8 @@ def __len__(self):
         return len(self._archive)

     def _insert(self, memory_string, embedding):
+        import numpy as np
+
         print(f"Got an embedding, type {type(embedding)}, len {len(embedding)}")

         self._archive.append(

@@ -394,6 +396,8 @@ def _search(self, query_embedding, query_string, count=None, start=None):

         # query_embedding = get_embedding(query_string, model=self.embedding_model)
         # our wrapped version supports backoff/rate-limits
+        import numpy as np
+
         if query_string in self.embeddings_dict:
             search_result = self.search_results[query_string]
         else:
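The pattern here is to defer heavy optional imports (`faiss`, `numpy`) into the functions that need them, so importing `memgpt.memory` no longer requires those packages. A sketch of the same idea with a friendlier failure mode; the helper and error text are illustrative, not the PR's actual code:

```python
def _require_faiss():
    """Import faiss lazily so the base install works without it."""
    try:
        import faiss
    except ImportError as err:
        raise ImportError(
            "faiss is required for FAISS-backed archival memory. "
            "Install it with, e.g., `pip install faiss-cpu`."
        ) from err
    return faiss


# Usage inside a method, mirroring the deferred import in __init__:
# faiss = _require_faiss()
# index = faiss.IndexFlatL2(1536)
```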
5 changes: 4 additions & 1 deletion memgpt/utils.py

@@ -7,7 +7,6 @@
 import json
 import pytz
 import os
-import faiss
 import tiktoken
 import glob
 import sqlite3

@@ -104,6 +103,8 @@ def parse_json(string):


 def prepare_archival_index(folder):
+    import faiss
+
     index_file = os.path.join(folder, "all_docs.index")
     index = faiss.read_index(index_file)


@@ -308,6 +309,8 @@ async def prepare_archival_index_from_files_compute_embeddings(
         f.write("\n")

     # make the faiss index
+    import faiss
+
     index = faiss.IndexFlatL2(1536)
     data = np.array(embedding_data).astype("float32")
     try:
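For context on the calls being wrapped here, a tiny self-contained FAISS example: build a flat L2 index over 1536-dimensional vectors (the OpenAI embedding size used above) and query it. It requires `faiss-cpu`, and the random vectors stand in for real embeddings:

```python
import faiss
import numpy as np

index = faiss.IndexFlatL2(1536)                # exact L2 search over 1536-dim vectors
data = np.random.rand(8, 1536).astype("float32")
index.add(data)                                # add 8 embedding vectors

distances, ids = index.search(data[:1], k=3)   # 3 nearest neighbours of the first vector
print(ids[0])                                  # the first hit is the vector itself
```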