chore: remove deprecated 'generate' and 'run_prompt' (#28)
* chore: remove deprecated 'generate' and 'run_prompt'

* lint
masci authored Nov 30, 2024
1 parent bd9da46 commit cf9dc91
Showing 11 changed files with 1 addition and 204 deletions.
71 changes: 0 additions & 71 deletions docs/examples.md
@@ -4,8 +4,6 @@
- [Create a summarizer prompt](#create-a-summarizer-prompt)
- [Lemmatize text while processing a template](#lemmatize-text-while-processing-a-template)
- [Use an LLM to generate text while rendering a prompt](#use-a-llm-to-generate-a-text-while-rendering-a-prompt)
- [Go meta: create a prompt and `generate` its response](#go-meta-create-a-prompt-and-generate-its-response)
- [Go meta(meta): process an LLM response](#go-metameta-process-a-llm-response)
- [Render a prompt template as chat messages](#render-a-prompt-template-as-chat-messages)
- [Use prompt caching from Anthropic](#use-prompt-caching-from-anthropic)
- [Reuse templates from registries](#reuse-templates-from-registries)
@@ -184,75 +182,6 @@ Examples:
> Banks uses a cache to avoid generating text again for the same template with the same context. By default
> the cache is in-memory but it can be customized.
## Go meta: create a prompt and `generate` its response

We can leverage Jinja's macro system to generate a prompt, send the result to OpenAI and get a response.
Let's bring back the blog writing example:

```py
from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}
Blog post:
{%- endcall -%}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

The snippet above won't print the prompt itself; instead, it will render the prompt text

```
Write a 500-word blog post on climate change
Blog post:
```

and will send it to OpenAI using the `generate` extension, eventually returning its response:

```
Climate change is a phenomenon that has been gaining attention in recent years...
...
```
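The trick that made this work is Jinja's `{% call %}` mechanism: inside a macro, `caller()` returns the rendered body of the call block. The following plain `jinja2` sketch shows the mechanics with a hypothetical `fake_run_prompt` macro that simply echoes the block instead of calling an LLM:

```python
from jinja2 import Environment

env = Environment()

# A macro receives its call-block body via caller(), mimicking the removed
# run_prompt macro; here we echo the rendered body instead of calling OpenAI.
template = env.from_string("""
{%- macro fake_run_prompt() -%}
LLM response to: {{ caller() | trim }}
{%- endmacro -%}
{%- call fake_run_prompt() -%}
Write a 500-word blog post on {{ topic }}
{%- endcall -%}
""")

print(template.render(topic="climate change"))
```

In the real macro, the value returned by `caller()` was what got sent to the model.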

## Go meta(meta): process an LLM response

When generating a response from a prompt template, we can go a step further and
post-process the LLM response by assigning it to a variable and applying filters
to it:

```py
from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{%- set prompt_result %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}
Blog post:
{%- endcall -%}
{%- endset %}
{# nothing is returned at this point: the variable 'prompt_result' contains the result #}
{# let's use the prompt_result variable now #}
{{ prompt_result | upper }}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

The final answer from the LLM will be printed, this time all in uppercase.
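The capture-and-filter pattern above is plain Jinja: a `{% set var %}...{% endset %}` block assigns whatever renders inside it, including a macro call's output, to a variable. A minimal `jinja2` sketch, again with a hypothetical stub macro standing in for the LLM call:

```python
from jinja2 import Environment

env = Environment()

# {% set %}...{% endset %} captures the call block's output into
# prompt_result, which is then piped through the upper filter.
template = env.from_string("""
{%- macro fake_run_prompt() -%}
Response to: {{ caller() | trim }}
{%- endmacro -%}
{%- set prompt_result -%}
{%- call fake_run_prompt() -%}
Write a 500-word blog post on {{ topic }}
{%- endcall -%}
{%- endset -%}
{{ prompt_result | upper }}
""")

print(template.render(topic="climate change"))
```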

## Render a prompt template as chat messages

You'll find yourself feeding an LLM a list of chat messages instead of plain text
17 changes: 0 additions & 17 deletions docs/prompt.md
@@ -93,20 +93,3 @@ Macros are a way to implement complex logic in the template itself, think about
code instead of Python. Banks provides a set of macros out of the box that are useful in prompt engineering,
for example to generate a prompt and call OpenAI on-the-fly, during the template rendering.
Before using Banks' macros, you have to import them in your templates, see the examples below.

### `run_prompt`

Similar to `generate`, `run_prompt` will call OpenAI passing the whole block content as the input. The block
content can in turn contain Jinja tags, which makes this macro very powerful. In the example below, during
the template rendering the value of `{{ topic }}` will be processed before sending the resulting block to
`run_prompt`.

```jinja
{% from "banks_macros.jinja" import run_prompt with context %}
{% call run_prompt() %}
Write a 500-word blog post on {{ topic }}
{% endcall %}
```

When the rendering is done, the entire `call` block will be replaced with OpenAI's response.
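For reference, the deprecation notice shipped in this same commit (see `src/banks/extensions/docs.py`) points to `{% completion %}` as the replacement. A rough migration sketch, assuming the block syntax with a `model` argument and an inner `{% chat %}` block used by the `completion` extension; verify against the current Banks documentation:

```jinja
{% completion model="gpt-3.5-turbo" %}
  {% chat role="user" %}Write a 500-word blog post on {{ topic }}{% endchat %}
{% endcompletion %}
```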
44 changes: 0 additions & 44 deletions docs/python.md
@@ -16,47 +16,3 @@
::: banks.registries.redis.RedisPromptRegistry
options:
inherited_members: true

## Default macros

Banks' package comes with default template macros you can use in your prompts.


### `run_prompt`


We can use `run_prompt` in our templates to generate a prompt, send the result to the LLM and get a response.
Take this prompt for example:

```py
from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}
Blog post:
{%- endcall -%}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```

In this case, Banks will internally generate the prompt text

```
Write a 500-word blog post on climate change
Blog post:
```

but instead of returning it, it will send it to the LLM using the `generate` extension under the
hood, eventually returning the final response:

```
Climate change is a phenomenon that has been gaining attention in recent years...
...
```
5 changes: 1 addition & 4 deletions src/banks/env.py
@@ -1,7 +1,7 @@
# SPDX-FileCopyrightText: 2023-present Massimiliano Pippi <[email protected]>
#
# SPDX-License-Identifier: MIT
from jinja2 import Environment, PackageLoader, select_autoescape
from jinja2 import Environment, select_autoescape

from .config import config
from .filters import cache_control, image, lemmatize, tool
@@ -15,16 +15,13 @@ def _add_extensions(_env):
"""
from .extensions.chat import ChatExtension # pylint: disable=import-outside-toplevel
from .extensions.completion import CompletionExtension # pylint: disable=import-outside-toplevel
from .extensions.generate import GenerateExtension # pylint: disable=import-outside-toplevel

_env.add_extension(ChatExtension)
_env.add_extension(CompletionExtension)
_env.add_extension(GenerateExtension)


# Init the Jinja env
env = Environment(
loader=PackageLoader("banks", "internal"),
autoescape=select_autoescape(
enabled_extensions=("html", "xml"),
default_for_string=False,
17 changes: 0 additions & 17 deletions src/banks/extensions/docs.py
@@ -1,5 +1,4 @@
# This module exists for documentation purpose only
from deprecated import deprecated


def chat(role: str): # pylint: disable=W0613
@@ -38,19 +37,3 @@ def completion(model_name: str): # pylint: disable=W0613
{{ response }}
```
"""


@deprecated(version="1.3.0", reason="This extension is deprecated, use {% completion %} instead.")
def generate(model_name: str): # pylint: disable=W0613
"""
`generate` can be used to call the LiteLLM API passing the tag text as a prompt and get back some content.
Deprecated:
This extension is deprecated, use `{% completion %}` instead.
Example:
```jinja
{% generate "write a tweet with positive sentiment" "gpt-3.5-turbo" %}
Feeling grateful for all the opportunities that come my way! #positivity #productivity
```
"""
5 changes: 0 additions & 5 deletions src/banks/internal/banks_macros.jinja

This file was deleted.

6 changes: 0 additions & 6 deletions src/banks/prompt.py
@@ -177,12 +177,6 @@ class AsyncPrompt(BasePrompt):
prompt_template = \"\"\"
Generate a tweet about the topic '{{ topic }}' with a positive sentiment.
Examples:
- {% generate "write a tweet with a positive sentiment", "gpt-3.5-turbo" %}
- {% generate "write a tweet with a sad sentiment", "gpt-3.5-turbo" %}
- {% generate "write a tweet with a neutral sentiment", "gpt-3.5-turbo" %}
\"\"\"
async def task(task_id: int, sleep_time: int):
11 changes: 0 additions & 11 deletions tests/templates/generate_tweet.jinja

This file was deleted.

7 changes: 0 additions & 7 deletions tests/templates/run_prompt.jinja

This file was deleted.

12 changes: 0 additions & 12 deletions tests/templates/run_prompt_process.jinja

This file was deleted.

10 changes: 0 additions & 10 deletions tests/test_env.py

This file was deleted.
