Memory tooling and connector docs #627

Merged · 3 commits · Nov 12, 2024
spiceaidocs/docs/components/data-connectors/memory.md (new file, 26 additions)
---
title: 'Memory Data Connector'
sidebar_label: 'Memory Data Connector'
description: 'Memory Data Connector Documentation'
pagination_prev: null
---

The Memory Data Connector enables configuring an in-memory dataset for tables used or produced by the Spice runtime. Only certain tables, each with a predefined schema, can be defined by this connector:
- `store`: Defines a table that LLMs with [memory tooling](/features/large-language-models/memory) can store data in. Requires `mode: read_write`.

### Examples

```yaml
datasets:
- from: memory:store
name: llm_memory
mode: read_write
columns:
- name: value
embeddings: # Easily make your LLM learnings searchable.
- from: all-MiniLM-L6-v2

embeddings:
- name: all-MiniLM-L6-v2
from: huggingface:huggingface.co/sentence-transformers/all-MiniLM-L6-v2
```
spiceaidocs/docs/features/large-language-models/memory.md (new file, 35 additions)
---
title: 'Language Model Memories'
sidebar_label: 'Memory'
description: 'Learn how LLMs can interact with the Spice runtime.'
sidebar_position: 3
pagination_prev: null
pagination_next: null
---

# Memory Tools

Spice provides memory persistence tools that allow language models to store and retrieve information across conversations. These tools are available through the `memory` tool group.

## Enabling Memory Tools

To enable memory tools for Spice models, you need to:
1. Define a `store` [memory](/components/data-connectors/memory.md) dataset.
2. Specify `memory` in the model's `spice_tools` parameter.

```yaml
datasets:
- from: memory:store
name: llm_memory
mode: read_write

models:
- name: memory-enabled-model
from: openai:gpt-4o
params:
spice_tools: memory, sql # Can be combined with other tool groups
```

## Available Tools
- `store_memory`: Store important information for future reference.
- `load_memory`: Retrieve memories stored during a recent time period.
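
Both tools are invoked by the model itself during a chat. The sketch below is a hypothetical illustration, assuming a local Spice instance with an OpenAI-compatible endpoint at `http://localhost:8090/v1` and the `memory-enabled-model` configured above: the model may call `store_memory` in one conversation and `load_memory` in a later one.

```python
# Hypothetical sketch, not taken from the Spice docs: it assumes a local Spice
# instance exposing an OpenAI-compatible API at http://localhost:8090/v1 and
# the `memory-enabled-model` defined in the configuration above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8090/v1", api_key="unused")

# First conversation: the model may call `store_memory` to persist this fact.
client.chat.completions.create(
    model="memory-enabled-model",
    messages=[{"role": "user", "content": "Remember that our weekly sync moved to Thursdays."}],
)

# A later, separate conversation: the model can call `load_memory` to recall it.
reply = client.chat.completions.create(
    model="memory-enabled-model",
    messages=[{"role": "user", "content": "When is our weekly sync?"}],
)
print(reply.choices[0].message.content)
```

Anything the model stores lands in the `llm_memory` dataset configured above, so stored memories can also be inspected with an ordinary SQL query against that table.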
@@ -1,6 +1,6 @@
---
title: 'Language Model Overrides'
-sidebar_label: 'Default overrides'
+sidebar_label: 'Parameter overrides'
description: 'Learn how to override default LLM hyperparameters in Spice.'
sidebar_position: 1
pagination_prev: null
@@ -21,8 +21,17 @@ models:
spice_tools: auto # Use all available tools
```

To use all built-in tools together with additional tool groups, include the `builtin` group alongside them:
```yaml
models:
- name: full-runtime
from: openai:gpt-4o
params:
spice_tools: builtin, memory
```

### Tool Recursion Limit
When a model requests to call a runtime tool, Spice runs the tool internally and feeds the result back to the model. The `tool_recursion_limit` parameter limits the depth of this internal recursion. By default, Spice will recurse without limit if the model keeps requesting tool calls.

```yaml
models:
  # Illustrative completion of a truncated example; the model name and limit
  # value below are hypothetical.
  - name: limited-recursion-model
    from: openai:gpt-4o
    params:
      spice_tools: auto
      tool_recursion_limit: 3 # Maximum depth of internal tool-call recursion.
```