Dependency management (#337)
* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`. 
* Update docs
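For context, Poetry extras of this kind are declared in `pyproject.toml`. The snippet below is an illustrative sketch only, not the commit's actual configuration; the package names, versions, and groupings are assumptions:

```toml
# Hypothetical sketch of Poetry optional-dependency extras
# (illustrative; not the actual pyproject.toml from this commit).
[tool.poetry.dependencies]
python = ">=3.10,<3.12"
pgvector = { version = "^0.2.3", optional = true }  # assumed version
faiss-cpu = { version = "^1.7", optional = true }   # assumed version
black = { version = "^23.0", optional = true }      # assumed version

[tool.poetry.extras]
postgres = ["pgvector"]
local = ["faiss-cpu"]
dev = ["black"]
```

With extras declared this way, `pip install 'pymemgpt[postgres]'` (or `poetry install -E postgres`) pulls in only that group's dependencies, keeping the base install lightweight.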
sarahwooders authored Nov 7, 2023
1 parent dbfbd10 commit 2c56eaa
Showing 9 changed files with 704 additions and 96 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/tests.yml
@@ -42,7 +42,7 @@ jobs:
PGVECTOR_TEST_DB_URL: ${{ secrets.PGVECTOR_TEST_DB_URL }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
-poetry install
+poetry install -E postgres -E dev
- name: Set Poetry config
env:
8 changes: 3 additions & 5 deletions docs/contributing.md
@@ -6,14 +6,12 @@ First, install Poetry using [the official instructions here](https://python-poet
Then, you can install MemGPT from source with:
```sh
git clone git@github.com:cpacker/MemGPT.git
poetry shell
-poetry install
+poetry install -E dev
```
We recommend installing pre-commit to ensure proper formatting during development:
```sh
pip install pre-commit
-pre-commit install
-pre-commit run --all-files
+poetry run pre-commit install
+poetry run pre-commit run --all-files
```
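pre-commit discovers its hooks from a `.pre-commit-config.yaml` at the repository root. As a minimal illustrative example (the hook choice and `rev` are assumptions, not MemGPT's actual configuration):

```yaml
# Hypothetical minimal pre-commit configuration.
repos:
  - repo: https://github.com/psf/black
    rev: 23.10.1  # assumed pin; use the project's actual revision
    hooks:
      - id: black
```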

### Formatting
6 changes: 6 additions & 0 deletions docs/local_llm.md
@@ -6,6 +6,12 @@

Make sure to check the [local LLM troubleshooting page](../local_llm_faq) to see common issues before raising a new issue or posting on Discord.

+### Installing dependencies
+To install dependencies required for running local models, run:
+```
+pip install 'pymemgpt[local]'
+```

### Quick overview

1. Put your own LLM behind a web server API (e.g. [oobabooga web UI](https://github.com/oobabooga/text-generation-webui#starting-the-web-ui))
15 changes: 9 additions & 6 deletions docs/storage.md
@@ -1,16 +1,19 @@
# Configuring Storage Backends
MemGPT supports both local and database storage for archival memory. You can configure which storage backend to use via `memgpt configure`. For larger datasets, we recommend using a database backend.

!!! warning "Switching storage backends"

MemGPT can only use one storage backend at a time. If you switch from local to database storage, you will need to re-load data and start agents from scratch. We currently do not support migrating between storage backends.

## Local
MemGPT will default to using local storage (saved at `~/.memgpt/archival/` for loaded data sources, and `~/.memgpt/agents/` for agent storage).

## Postgres
In order to use the Postgres backend, you must have a running Postgres database that MemGPT can write to. You can enable the Postgres backend by running `memgpt configure` and selecting `postgres` for archival storage, which will then prompt for the database URI (e.g. `postgresql+pg8000://<USER>:<PASSWORD>@<IP>:5432/<DB_NAME>`). Make sure to first install the required dependencies with:
```
pip install 'pymemgpt[postgres]'
```
## Chroma
(Coming soon)
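As a quick illustration of the URI format the Postgres section above describes, here is a small Python sketch; `make_postgres_uri` is a hypothetical helper for this example, not part of MemGPT:

```python
def make_postgres_uri(user, password, host, db_name, port=5432):
    """Assemble a SQLAlchemy-style Postgres URI in the format
    that `memgpt configure` prompts for (hypothetical helper)."""
    return f"postgresql+pg8000://{user}:{password}@{host}:{port}/{db_name}"

print(make_postgres_uri("memgpt", "secret", "localhost", "memgpt_db"))
# → postgresql+pg8000://memgpt:secret@localhost:5432/memgpt_db
```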
6 changes: 5 additions & 1 deletion memgpt/main.py
@@ -172,7 +172,11 @@ def legacy_run(
if ctx.invoked_subcommand is not None:
return

-typer.secho("Warning: Running legacy run command. Run `memgpt run` instead.", fg=typer.colors.RED, bold=True)
+typer.secho(
+    "Warning: Running legacy run command. You may need to `pip install pymemgpt[legacy] -U`. Run `memgpt run` instead.",
+    fg=typer.colors.RED,
+    bold=True,
+)
if not questionary.confirm("Continue with legacy CLI?", default=False).ask():
return

8 changes: 6 additions & 2 deletions memgpt/memory.py
@@ -2,8 +2,6 @@
import os
import datetime
import re
-import faiss
-import numpy as np
from typing import Optional, List, Tuple

from .constants import MESSAGE_SUMMARY_WARNING_TOKENS, MEMGPT_DIR
@@ -353,6 +351,8 @@ class DummyArchivalMemoryWithFaiss(DummyArchivalMemory):

def __init__(self, index=None, archival_memory_database=None, embedding_model="text-embedding-ada-002", k=100):
if index is None:
+import faiss

self.index = faiss.IndexFlatL2(1536) # openai embedding vector size.
else:
self.index = index
@@ -366,6 +366,8 @@ def __len__(self):
return len(self._archive)

def _insert(self, memory_string, embedding):
+import numpy as np

print(f"Got an embedding, type {type(embedding)}, len {len(embedding)}")

self._archive.append(
@@ -394,6 +396,8 @@ def _search(self, query_embedding, query_string, count=None, start=None):

# query_embedding = get_embedding(query_string, model=self.embedding_model)
# our wrapped version supports backoff/rate-limits
+import numpy as np

if query_string in self.embeddings_dict:
search_result = self.search_results[query_string]
else:
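The hunks above all apply the same deferred-import pattern: heavyweight optional packages (`faiss`, `numpy`) are moved from the top of the module into the functions that need them, so the base install works without them. A minimal self-contained sketch of that pattern, assuming a hypothetical `optional_import` helper and the `local` extra name (neither is actual MemGPT API):

```python
import importlib


def optional_import(module_name, extra):
    """Import a module lazily, pointing users at the pip extra
    that provides it if the import fails (illustrative helper)."""
    try:
        return importlib.import_module(module_name)
    except ImportError as err:
        raise ImportError(
            f"'{module_name}' is required for this feature. "
            f"Install it with: pip install 'pymemgpt[{extra}]'"
        ) from err


def build_index(dim=1536):
    # faiss is only imported when an index is actually built,
    # mirroring the deferred imports in the diff above.
    faiss = optional_import("faiss", "local")
    return faiss.IndexFlatL2(dim)
```

A failed import then surfaces as an actionable error (`pip install 'pymemgpt[local]'`) instead of a bare `ModuleNotFoundError` at startup.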
5 changes: 4 additions & 1 deletion memgpt/utils.py
@@ -7,7 +7,6 @@
import json
import pytz
import os
-import faiss
import tiktoken
import glob
import sqlite3
@@ -104,6 +103,8 @@ def parse_json(string):


def prepare_archival_index(folder):
+import faiss

index_file = os.path.join(folder, "all_docs.index")
index = faiss.read_index(index_file)

@@ -308,6 +309,8 @@ async def prepare_archival_index_from_files_compute_embeddings(
f.write("\n")

# make the faiss index
+import faiss

index = faiss.IndexFlatL2(1536)
data = np.array(embedding_data).astype("float32")
try: