
rewrite docs
masci committed May 2, 2024
1 parent 2b1e924 commit 2e5b0a7
Showing 8 changed files with 167 additions and 114 deletions.
60 changes: 0 additions & 60 deletions docs/async.md

This file was deleted.

26 changes: 21 additions & 5 deletions docs/prompt.md
@@ -22,29 +22,45 @@ provided by Jinja, Banks supports the following ones, specific for prompt engineering


::: banks.filters.lemmatize.lemmatize
options:
show_root_full_path: false
show_symbol_type_heading: false
show_signature_annotations: false
heading_level: 3

## Extensions

Extensions are custom functions that can be used to add new tags to the template engine.
Banks supports the following ones, specific to prompt engineering.

::: banks.extensions.generate
::: banks.extensions.generate.generate
options:
show_root_heading: false
show_root_full_path: false
show_symbol_type_heading: false
show_signature_annotations: false
heading_level: 3

### `{{canary_word}}`
### `canary_word`

Inserts a canary word into the prompt that can later be checked with `Prompt.canary_leaked()`
to detect whether the original prompt was leaked.

Example:
```python
from banks import Prompt

p = Prompt("{{canary_word}}Hello, World!")
p.text()  # outputs 'BANKS[5f0bbba4]Hello, World!'
```
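The mechanism can be sketched in plain Python (a simplified illustration, not banks' actual implementation; the `BANKS[<hex>]` format is an assumption based on the output above):

```python
# A self-contained sketch of the canary-word idea (not banks' implementation):
# embed a random marker in the prompt, then look for it in model output.
import secrets


def make_canary() -> str:
    # The BANKS[<hex>] format mirrors the example output above (assumed).
    return f"BANKS[{secrets.token_hex(4)}]"


canary = make_canary()
prompt = canary + "Hello, World!"


def canary_leaked(text: str) -> bool:
    return canary in text


print(canary_leaked(prompt))      # True: the prompt itself contains the canary
print(canary_leaked("Hi there"))  # False: no leak detected
```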

## Macros

Macros are a way to implement complex logic in the template itself: think of defining functions, but using Jinja
code instead of Python. Banks provides a set of macros out of the box that are useful in prompt engineering,
for example to generate a prompt and call OpenAI on the fly during the template rendering.
In order to use Banks' macros, you have to import them in your templates, see the examples below.
Before using Banks' macros, you have to import them in your templates, see the examples below.
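Under the hood this is standard Jinja functionality; a minimal sketch with plain `jinja2` (not one of banks' bundled macros — the macro name here is made up) shows the idea:

```python
# A minimal sketch of a Jinja macro: a reusable template "function"
# defined and called in template code rather than in Python.
from jinja2 import Environment

env = Environment()
template = env.from_string(
    "{% macro blog_prompt(topic) -%}"
    "Write a 500-word blog post on {{ topic }}."
    "{%- endmacro %}"
    "{{ blog_prompt('retrogame computing') }}"
)
print(template.render())  # Write a 500-word blog post on retrogame computing.
```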

<h2 class="doc-heading"><code>run_prompt</code></h2>
### `run_prompt`

Similar to `generate`, `run_prompt` will call OpenAI passing the whole block content as the input. The block
content can in turn contain Jinja tags, which makes this macro very powerful. In the example below, during
47 changes: 5 additions & 42 deletions docs/python.md
@@ -1,46 +1,9 @@
## Basic usage

The `Prompt` class is the only thing you need to know about Banks on the Python side. The class can be
initialized with a string containing the prompt template text; you can then invoke the `text` method
on your instance, passing the data needed to render the template, to get back the final prompt.
::: banks.prompt.Prompt
options:
inherited_members: true

A quick example:
```py
from banks import Prompt


p = Prompt("Write a 500-word blog post on {{ topic }}.")
my_topic = "retrogame computing"
print(p.text({"topic": my_topic}))
```

## Loading templates from files

Prompt templates can get quite long, and at some point you might want to store them in files. To avoid the
boilerplate of reading a file and passing its content as a string to the constructor, a `Prompt` can be
initialized with just the name of the template file, provided the file is stored in a folder called
`templates` in the current path:

```
.
└── templates
   └── foo.jinja
```

The code would be the following:

```py
from banks import Prompt


p = Prompt.from_template("foo.jinja")
prompt_text = p.text(data={"foo": "bar"})
```

!!! warning
Banks comes with its own set of default templates (see below), which take precedence over the
ones loaded from the filesystem, so be sure to use different names for your custom
templates.
::: banks.prompt.AsyncPrompt

## Default templates

@@ -54,4 +17,4 @@ Banks' package comes with the following prompt templates ready to be used:
- `run_prompt.jinja`
- `summarize.jinja`

If Banks is properly installed, something like `Prompt.from_template("blog.jinja")` should always work out of the box.
If Banks is properly installed, something like `Prompt.from_template("blog.jinja")` should always work out of the box.
5 changes: 3 additions & 2 deletions mkdocs.yml
@@ -10,7 +10,6 @@ nav:
- Home: 'index.md'
- Python API: 'python.md'
- Prompt API: 'prompt.md'
- asyncio support: 'async.md'

plugins:
- search
@@ -19,8 +18,10 @@ plugins:
python:
paths: [src]
options:
docstring_style: google
show_root_heading: true
show_root_full_path: false
show_root_full_path: true
show_symbol_type_heading: true
show_source: false
show_signature_annotations: true
show_bases: false
3 changes: 3 additions & 0 deletions pyproject.toml
@@ -40,6 +40,8 @@ path = "src/banks/__about__.py"
dependencies = [
"coverage[toml]>=6.5",
"pytest",
"mkdocs-material",
"mkdocstrings[python]",
]

[tool.hatch.envs.default.scripts]
@@ -53,6 +55,7 @@ cov = [
"test-cov",
"cov-report",
]
docs = "mkdocs build"

[[tool.hatch.envs.all.matrix]]
python = ["3.9", "3.10", "3.11", "3.12"]
4 changes: 3 additions & 1 deletion src/banks/extensions/generate.py
@@ -14,7 +14,7 @@
SYSTEM_PROMPT = Prompt("{{canary_word}} You are a helpful assistant.")


class GenerateExtension(Extension):
def generate(model_name: str):
"""
`generate` can be used to call the LiteLLM API passing the tag text as a prompt and get back some content.
@@ -25,6 +25,8 @@ class GenerateExtension(Extension):
```
"""


class GenerateExtension(Extension):
# a set of names that trigger the extension.
tags = {"generate"} # noqa

8 changes: 4 additions & 4 deletions src/banks/filters/lemmatize.py
@@ -17,10 +17,10 @@ def lemmatize(text: str) -> str:
to English.
Example:
```
{{ 'The dog is running' | lemmatize }}
'the dog be run'
```
```
{{ 'The dog is running' | lemmatize }}
'the dog be run'
```
Note:
Simplemma must be manually installed to use this filter
128 changes: 128 additions & 0 deletions src/banks/prompt.py
@@ -11,6 +11,15 @@

class BasePrompt:
def __init__(self, text: str, canary_word: Optional[str] = None) -> None:
"""
Prompt constructor.
Parameters:
text: The template text
canary_word: The string to use for the `{{canary_word}}` extension. If `None`, a default string will be
generated
"""
self._template = env.from_string(text)
self.defaults = {"canary_word": canary_word or generate_canary_word()}

@@ -20,22 +29,141 @@ def _get_context(self, data: Optional[dict]) -> dict:
return data | self.defaults

def canary_leaked(self, text: str) -> bool:
"""
Returns whether the canary word is present in `text`, signalling the prompt might have leaked.
"""
return self.defaults["canary_word"] in text

@classmethod
def from_template(cls, name: str) -> "BasePrompt":
"""
Create a prompt instance from a template.
Prompt templates can get quite long, and at some point you might want to store them in files. To avoid the
boilerplate of reading a file and passing its content as a string to the constructor, a `Prompt` can be
initialized with just the name of the template file, provided the file is stored in a folder called
`templates` in the current path:
```
.
└── templates
   └── foo.jinja
```
The code would be the following:
```py
from banks import Prompt
p = Prompt.from_template("foo.jinja")
prompt_text = p.text(data={"foo": "bar"})
```
!!! warning
Banks comes with its own set of default templates (see below), which take precedence over the
ones loaded from the filesystem, so be sure to use different names for your custom
templates.
Parameters:
name: The name of the template.
Returns:
A new `Prompt` instance.
"""
p = cls("")
p._template = env.get_template(name)
return p


class Prompt(BasePrompt):
"""
The `Prompt` class is the only thing you need to know about Banks on the Python side. The class can be
initialized with a string containing the prompt template text; you can then invoke the `text` method
on your instance, passing the data needed to render the template, to get back the final prompt.
A quick example:
```py
from banks import Prompt
p = Prompt("Write a 500-word blog post on {{ topic }}.")
my_topic = "retrogame computing"
print(p.text({"topic": my_topic}))
```
"""

def text(self, data: Optional[dict] = None) -> str:
"""
Render the prompt using the variables present in `data`.
Parameters:
data: A dictionary containing the context variables.
"""
data = self._get_context(data)
return self._template.render(data)


class AsyncPrompt(BasePrompt):
"""
Banks provides async support through the machinery [provided by Jinja](https://jinja.palletsprojects.com/en/3.0.x/api/#async-support).
Since the Jinja environment is global state in banks, the library can work either with or
without async support, and this must be known before importing anything.
If the application using banks runs within an `asyncio` loop, you can do two things
to optimize banks' execution:
1. Set the environment variable `BANKS_ASYNC_ENABLED=true`.
2. Use the `AsyncPrompt` class that has an awaitable `run` method.
For example, let's render a prompt that contains some calls to the `generate` extension. Those calls
are heavily I/O bound, so other tasks can make progress while the prompt is being
rendered.
Example:
```python
# Enable async support before importing from banks
import os
os.environ["BANKS_ASYNC_ENABLED"] = "true"
# Show logs to see what's happening at runtime
import logging
logging.basicConfig(level=logging.INFO)
import asyncio
from banks import AsyncPrompt
prompt_template = \"\"\"
Generate a tweet about the topic '{{ topic }}' with a positive sentiment.
Examples:
- {% generate "write a tweet with a positive sentiment", "gpt-3.5-turbo" %}
- {% generate "write a tweet with a sad sentiment", "gpt-3.5-turbo" %}
- {% generate "write a tweet with a neutral sentiment", "gpt-3.5-turbo" %}
\"\"\"
async def task(task_id: int, sleep_time: int):
logging.info(f"Task {task_id} is running.")
await asyncio.sleep(sleep_time)
logging.info(f"Task {task_id} done.")
async def main():
p = AsyncPrompt(prompt_template)
# Schedule the prompt rendering along with two executions of 'task', one sleeping for 10 seconds
# and one sleeping for 1 second
results = await asyncio.gather(p.text({"topic": "AI frameworks"}), task(1, 10), task(2, 1))
print("All tasks done, rendered prompt:")
print(results[0])
asyncio.run(main())
```
"""

def __init__(self, text: str) -> None:
super().__init__(text)

