Mirascope Interface Updates -- v1 #322

Closed
25 tasks done
willbakst opened this issue Jun 11, 2024 · 16 comments · Fixed by #425
Assignees: willbakst
Labels: Feature Request (New feature or request)

Comments

@willbakst (Contributor) commented Jun 11, 2024

Description

TL;DR:

Discussions in #305 and #312 have led me to believe that we need to update our interfaces and that in doing so we can resolve a lot of issues with the current design.

The resulting design will be far more functional and less class based since "calls" aren't inherently stateful. For example, a basic call would look like this:

from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book"""

response = recommend_book("fantasy")
print(response.content)

Basic streaming would look like this:

from mirascope.openai import openai_call

@openai_call(model="gpt-4o", stream=True)
def recommend_book(genre: str):
    """Recommend a {genre} book"""

stream = recommend_book("fantasy")
for chunk in stream:
    print(chunk.content, end="", flush=True)
print(f"Assistant Message: {stream.message_param})

Full Brain Dump:

Discussions in #305 led me to the following two thoughts:

  1. While there is a solution, current Python PEPs don't enable proper type hints
  2. Perhaps the interface is not designed properly.

Point (2) is the result of the principle behind #305 that there should be a separation between the call arguments and the state. What this really means to me is that calls likely shouldn't have any state at all. Instead, they should simply provide maximal convenience around making a single, stateless call to a provider's API. In principle, this is the direction #312 is suggesting.

Right now, we would implement a simple call as such:

from mirascope.openai import OpenAICall

class BookRecommender(OpenAICall):
    prompt_template = "Recommend a {genre} book."

    genre: str

fantasy_recommender = BookRecommender(genre="fantasy")
response = fantasy_recommender.call()
print(response.content)

This makes sense. We've created a fantasy_recommender that we can call multiple times. However, what if we want genre to be dynamic and always provided by the consumer?

from mirascope.openai import OpenAICall

class BookRecommender(OpenAICall):
    prompt_template = "Recommend a {genre} book."

    genre: str

recommender = BookRecommender(genre="fantasy")
fantasy_response = recommender.call()
recommender.genre = "horror"
horror_response = recommender.call()

This makes less sense. Really what we want is something like this (as defined in #305):

from mirascope.base import BaseCallArgs, call_args
from mirascope.openai import OpenAICall

class BookRecommenderArgs(BaseCallArgs):
    genre: str

@call_args(BookRecommenderArgs)
class BookRecommender(OpenAICall):
    prompt_template = "Recommend a {genre} book."

recommender = BookRecommender()
fantasy_response = recommender.call(genre="fantasy")
horror_response = recommender.call(genre="horror")

Unfortunately recommender.call(genre="fantasy") can't be properly typed unless you manually override every single function (which is verbose and not good, see #305).

Instead, I'm thinking that calls should be stateless and written as functions (since calling the API is really just a function, and we're adding additional convenience). Something like:

from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book"""

response = recommend_book("fantasy")
print(response.content)

In any attempt to make these interface updates (without breaking changes, just new interfaces), we need to make sure that we cover all existing functionality.

Existing Functionality Checklist

  • Generating content (call, call_async)
from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book."""

response = recommend_book("fantasy")
print(response.content)
  • Streaming content (stream, stream_async)
from mirascope.openai import openai_call

@openai_call(model="gpt-4o", stream=True)
def recommend_book(genre: str):
    """Recommend a {genre} book."""

stream = recommend_book("fantasy")
for chunk in stream:
    print(chunk.content, end="", flush=True)
  • Message keywords
from mirascope.openai import openai_call
from openai.types.chat import ChatCompletionMessageParam

@openai_call(model="gpt-4o")
def recommend_book(genre: str, messages: list[ChatCompletionMessageParam]):
    """
    SYSTEM: You are the world's greatest librarian.
    MESSAGES: {messages}
    USER: Can you recommend a {genre} book please?
    """
  • Computed fields
from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {lowercase_genre} book."""
    return { "computed_fields": { "lowercase_genre": genre.lower() } }

response = recommend_book("fantasy")
print(response.content)
  • Chaining
from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_author(genre: str):
    """Recommend an author that writes {genre} books."""

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book by {author}."""
    return { "computed_fields": { "author": recommend_author(genre) } }

recommendation = recommend_book("fantasy")
  • Tools
from mirascope.openai import openai_call

def print_book(title: str, author: str):
    """Returns the title and author of a book nicely formatted."""
    return f"{title} by {author}"

@openai_call(model="gpt-4o", tools=[print_book])
def recommend_book(genre: str):
    """Recommend a {genre} book."""

response = recommend_book("fantasy")
if tool := response.tool:
    tool.call()
else:
    print(response.content)
  • Streaming tools
from mirascope.openai import openai_call

def print_book(title: str, author: str):
    """Returns the title and author of a book nicely formatted."""
    return f"{title} by {author}"

@openai_call(model="gpt-4o", stream=True, tools=[print_book])
def recommend_book(genre: str):
    """Recommend a {genre} book."""

stream = recommend_book("fantasy")
for chunk, tool in stream:
    if tool:
        ...  # do something with the tool
    else:
        ...  # do something with the chunk
  • Custom messages
from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """This is now a normal docstring."""
    return { "messages": [{ "role": "user", "content": f"Recommend a {genre} book" }] }
  • Dumping
from mirascope.openai import openai_call

@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book."""

response = recommend_book("fantasy")
print(response.model_dump())  # since calls are functions, this should include what was previously in call.dump()
  • Extracting structured information (extract, extract_async)
from mirascope.openai import openai_call
from pydantic import BaseModel

class Book(BaseModel):
    title: str
    author: str

@openai_call(model="gpt-4o", response_model=Book)
def recommend_book(genre: str):
    """Recommend a {genre} book."""

book = recommend_book("fantasy")
assert isinstance(book, Book)
print(book)
  • Streaming structured information (extract stream, stream_async)
from mirascope.openai import openai_call
from pydantic import BaseModel

class Book(BaseModel):
    title: str
    author: str

@openai_call(model="gpt-4o", stream=True, response_model=Book)
def recommend_book(genre: str):
    """Recommend a {genre} book."""

book_stream = recommend_book("fantasy")
for partial_book in book_stream:
    print(partial_book)
  • FastAPI integration
from fastapi import FastAPI
from mirascope.openai import openai_call
from pydantic import BaseModel

app = FastAPI()

class Book(BaseModel):
    title: str
    author: str

@app.post("/recommend_book")
@openai_call(response_model=Book, model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book"""
  • Ops integrations / client wrappers
from mirascope.openai import openai_call

@openai_call(model="gpt-4o", llm_ops=["logfire"])
def recommend_book(genre: str):
    """Recommend a {genre} book."""
    
response = recommend_book("fantasy")  # logged to logfire
  • Retries
from tenacity import retry, stop_after_attempt
from mirascope.openai import openai_call

@retry(stop=stop_after_attempt(7))
@openai_call(model="gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book."""
  • Dynamic Providers
from mirascope.base import BasePrompt
from mirascope.openai import openai_call

class BookRecommendationPrompt(BasePrompt):
    prompt_template = "Recommend a {genre} book."

    genre: str

prompt = BookRecommendationPrompt(genre="fantasy")
response = prompt.call(openai_call(model="gpt-4o"))
print(response.content)
  • Support for all existing providers
    • OpenAI
    • Anthropic
    • Gemini
    • Groq
    • Mistral
    • Cohere
    • LiteLLM

New Feature Checklist:

  • Output parsing
from mirascope.openai import OpenAICallResponse, openai_call

def format_book_response(response: OpenAICallResponse) -> str:
    return f"Book: {response.content}"

@openai_call(model="gpt-4o", output_parser=format_book_response)
def recommend_book(genre: str):
    """Recommend a {genre} book"""

recommendation = recommend_book("fantasy")
assert isinstance(recommendation, str)
print(recommendation)
  • Support for more stateful workflows (i.e. chat history)
from mirascope.openai import openai_call
from openai.types.chat import ChatCompletionMessageParam
from pydantic import BaseModel

class Mathematician(BaseModel):
    _history: list[ChatCompletionMessageParam]

    def add(self, a: int, b: int) -> int:
        """Adds `a` to `b`."""
        return a + b

    def subtract(...): ...
    def divide(...): ...
    def multiply(...): ...

    @openai_call(model="gpt-4o")
    def _step(self, problem: str) -> LLMCallReturn:
        """
        MESSAGES: {self._history}
        USER:
        Solve the following problem: {problem}.
        You have access to `add`, `subtract`, `divide`, and `multiply` tools.
        Use these tools as necessary to solve the problem.
        """
        return { "tools": [self.add, self.subtract, self.divide, self.multiply] }

    def _solve(self, problem: str):
        response = self._step(problem)
        self._history += [response.user_message_param, response.message_param]
        if tools := response.tools:
            for tool in tools:
                output = tool.call()
                self._history.append(tool.message_param(output))
            return self._solve(problem)
        else:
            return response.content
    
    def run(self):
        while True:
            problem = input("Problem: ")
            if problem in ["quit", "exit"]: break
            solution = self._solve(problem)
            print(solution)

mathematician = Mathematician()
mathematician.run()
#> Problem: ...
@offchan42 commented Jun 12, 2024

Sounds good. But what if I want to both stream and call? Right now it seems like I can only choose one thing.

@willbakst (Contributor, Author)

The short answer is that BaseCall and BaseExtractor classes aren't going away.

All of this is new and shouldn't break anything currently released. The majority of the code will most likely get refactored and shared, so these are simply alternate interfaces for the same functionality. In fact, I imagine I could likely use e.g. openai_call to implement the OpenAICall.call() method for full re-use.

With the new interface, I think you could implement streaming with this more functional interface something like this:

from mirascope.base import prompt_template
from mirascope.openai import openai_call

PROMPT_TEMPLATE = """
This is my prompt template
"""

@openai_call(model="gpt-4o")
@prompt_template(PROMPT_TEMPLATE)
def recommend_book(genre: str):
    ...

@openai_call(model="gpt-4o", stream=True)
@prompt_template(PROMPT_TEMPLATE)
def stream_book_recommendation(genre: str):
    ...

The idea here is that recommend_book (i.e. call) and stream_book_recommendation (i.e. stream) now both have properly typed arguments (genre), properly typed returns, and a shared prompt template.
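
Concretely, usage of the two functions above would presumably look like this (the return types in the comments are my assumption about the proposal, not confirmed API):

response = recommend_book("fantasy")  # a call response (e.g. an OpenAICallResponse)
print(response.content)

stream = stream_book_recommendation("fantasy")  # an iterable stream of chunks
for chunk in stream:
    print(chunk.content, end="", flush=True)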

I also think this new interface will make getting started (hopefully) easier.

This isn't to say that you can't still do whatever you want with the existing classes. It's mostly that I wasn't happy with the outcome of #305 and think this solves a lot of those problems.

Of course, if people are happy with the outlined solution in #305 we can always reopen it.

@willbakst (Contributor, Author) commented Jun 12, 2024

I'm realizing the above becomes a bit annoying / verbose as you (1) add additional complexity like computed fields and (2) expand to all four call, call_async, stream, and stream_async methods.

Alternatively, maybe we could offer something like this:

from mirascope.base import BasePrompt
from mirascope.openai import openai_call

class BookRecommendationPrompt(BasePrompt):
    prompt_template = "Recommend a {genre} book."
    
    genre: str

@openai_call(model="gpt-4o")
@prompt_template(BookRecommendationPrompt)
def recommend_book():
    ...

response = recommend_book("fantasy")
print(response.content)

@openai_call(model="gpt-4o", stream=True)
@prompt_template(BookRecommendationPrompt)
def stream_book_recommendation():
    ...

stream = stream_book_recommendation("fantasy")
for chunk in stream:
    print(chunk.content, end="", flush=True)

I'm curious about the cases in which you want both call and stream. Perhaps in this case the class approach is best.

Recently I've found in most cases I just default to async streaming and build everything out around that flow. With the new interface, that would just be:

import asyncio

from mirascope.openai import openai_call

@openai_call(model="gpt-4o", stream=True, output_parser=str)
async def recommend_book(genre: str):
    """Recommend a {genre} book"""

async def run():
    print(await recommend_book("fantasy"))

asyncio.run(run())

willbakst changed the title from "Mirascope Interface Updates" to "Mirascope New (Additional) Interface Updates" on Jun 12, 2024
@offchan42 commented Jun 12, 2024

Yeah I think if the class stays then there's no problem. I also usually just do streaming like you do.
I support this new interface.

@koxudaxi (Collaborator)

In reply to @willbakst: "@koxudaxi, seems like the same problem here around trying to dynamically type tool_schema but curious if you can see a possibility here."

Yes. In the same way, type checking is no longer possible.

It looks a bit redundant, but from a type-checking point of view this looks better.

class PrintBook(OpenAITool):
    """Prints the title and author of a {genre} book."""

    title: str = Field(..., description="The title of a {genre} book.")
    author: str = Field(...,  description="The author of {genre} book `title`.")

    def call(self):
        return f"{self.title} by {self.author}"

@tool_schema(PrintBook)
def print_book(genre: str) -> ...:
    ...

@openai_call(model="gpt-4o")
def recommend_book(genre: str) -> CallReturn:
    """Recommend a {genre} book"""
    return { "tools": [print_book(genre=genre)] }

As for the rest, it seems very good.

I think calling an external API with just a simple function call, as successfully done by requests, is easy to understand for a wide range of users. (In the sense that it does not retain state.)
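
(To illustrate the analogy: this is the stateless, single-shot call pattern that requests popularized; the URL below is just a placeholder.)

import requests

# All inputs are arguments, the result is the return value, and nothing persists between calls.
response = requests.get("https://example.com/api/books", params={"genre": "fantasy"})
print(response.status_code)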

@koxudaxi (Collaborator)

It doesn't really make sense as Python code, but why not make the prompt in the function body an f-string?
You can reference the arguments directly, so if a name used in the string doesn't exist among the arguments, you get an error right from the syntax/lint check.
Of course, if the same variable name exists in global scope, etc., it will be resolved there instead, so this is not a fundamental solution.
This is just something I noticed; I haven't judged whether it's actually practical.


@koxudaxi (Collaborator)

But if we make it an f-string, it's no longer a docstring, so we can't refer to it via func.__doc__.

@willbakst (Contributor, Author)

@koxudaxi for this to work we will need access to func.__doc__ so unfortunately I don't think the f-string approach will work (even though I wish it could).
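
(A quick illustration of why: an f-string in the function body is an expression, not a constant string literal, so Python never stores it as the docstring.)

def recommend_book(genre: str):
    f"Recommend a {genre} book."  # evaluated and discarded; not a docstring

print(recommend_book.__doc__)
#> None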

For the dynamic tools, I'm going to remove that from this issue and put it in #278 where it belongs.

willbakst changed the title from "Mirascope New (Additional) Interface Updates" to "Mirascope Interface Updates" on Jun 16, 2024
willbakst changed the title from "Mirascope Interface Updates" to "Mirascope Interface Updates -- v1" on Jun 16, 2024
willbakst self-assigned this on Jun 17, 2024
@jimkring (Contributor)

@willbakst I had a thought...

For a developer-oriented tool like Mirascope, I'm finding that using docstrings as the prompt data makes it harder for me to actually document my python code (because I'm using it as data). In such cases, I would prefer to have an argument to a decorator where I can specify prompt information.

@willbakst (Contributor, Author)

@jimkring here's an example interface we could implement for this:

import mirascope.core as mc

PROMPT_TEMPLATE = "Recommend a {genre} book."

@mc.openai.openai_call(model="gpt-4o")
@mc.prompt_template(PROMPT_TEMPLATE)
def recommend_book(genre: str):
    """Normal docstr."""

But I have some questions:

  1. In my mind, the prompt is the best documentation for what the function is doing. Having the prompt as the docstring also means that I can see the prompt and how the call is working anywhere I'm using the function (instead of having to go find the prompt). Can you give me an example where this isn't the case?
  2. The BasePrompt class will still accept the prompt_template class variable, so you can always use the docstring of the prompt class as a fallback from the function definitions.

Here is an example of the BasePrompt docstring approach I'm imagining:

import mirascope.core as mc

class BookRecommendationPrompt(mc.BasePrompt):
    """Normal docstring."""

    prompt_template = "Recommend a {genre} book."

    genre: str

prompt = BookRecommendationPrompt(genre="fantasy")
response = prompt.call(mc.openai.openai_call(model="gpt-4o"))
print(response.content)

Would you still want the mc.prompt_template decorator for writing prompts as above, or is the BasePrompt approach sufficient for when you want the docstring?

@jimkring (Contributor) commented Jun 17, 2024

@willbakst thanks for the thoughtful response and great ideas. Some thoughts:

I agree that for a "pure AI" function that has no body (and is only defined by its signature and prompt), using the docstring works great.

Here are some ideas related to classes:

import mirascope.core as mc
from pydantic import BaseModel, Field

class BookRecommendationPrompt(mc.BasePrompt):
    """Normal docstring."""

    genre: str = ""  # making genre an attribute allows it to be used in the f-string, as showne below
    prompt_template: str = f"Recommend a {genre if genre else ''} book.")

# it might also be nice to allow passing in the `provider` and `model` as args when creating the `mc.BasePrompt` instance.
prompt = BookRecommendationPrompt(genre="fantasy", provider=mc.openai, model="gpt-4o")
response = prompt.call()
print(response.content)

@willbakst (Contributor, Author)

Unfortunately we can't have prompt_template be an f-string with access to another field as you've written. Although it doesn't throw an error, genre will always be evaluated as Literal[""] no matter what you set genre to.

from pydantic import BaseModel

class Prompt(BaseModel):
    genre: str = ""
    prompt_template: str = f"{genre if genre else 'fiction'}"

prompt = Prompt(genre="fantasy")
print(prompt.prompt_template)
#> fiction

With the new interface, I would write your example as follows:

import mirascope.core as mc
from pydantic import computed_field

class BookRecommendationPrompt(mc.BasePrompt):
    """Normal docstr."""

    prompt_template = "Recommend a {conditional_genre} book."

    genre: str = ""

    @computed_field
    @property
    def conditional_genre(self) -> str:
        return self.genre if self.genre else "fiction"

prompt = BookRecommendationPrompt(genre="fantasy")
response = prompt.call("openai", model="gpt-4o")
print(response.content)

Some explanation:

  1. We need conditional_genre here to be a computed_field so that it can properly access genre.
  2. We want prompt_template to be a ClassVar as it is an attribute of the class and not something you set in the constructor (see the sketch below).
  3. The prompt is agnostic to the provider, so we want to provide the provider inside of call and not at construction.
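
Here is a minimal sketch of points (1) and (2) using plain pydantic (BasePrompt itself would handle parsing the template into messages; this only shows the ClassVar + computed_field pattern):

from typing import ClassVar

from pydantic import BaseModel, computed_field

class BookRecommendationPrompt(BaseModel):
    # ClassVar: an attribute of the class, not a constructor argument
    prompt_template: ClassVar[str] = "Recommend a {conditional_genre} book."

    genre: str = ""

    @computed_field
    @property
    def conditional_genre(self) -> str:
        return self.genre if self.genre else "fiction"

prompt = BookRecommendationPrompt()
print(prompt.prompt_template.format(conditional_genre=prompt.conditional_genre))
#> Recommend a fiction book.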

@jimkring (Contributor)

Ah, you're right. Thanks!

@ekaj2 commented Jun 23, 2024

For the initial question here, I haven't had time to go look at how all of our code would look, but you're right that the class interface doesn't make sense in many cases. Would love to simplify even more, although since we're already used to the current design, I'm hesitant to start introducing more choices on how to do things.

There's a certain price I'm willing to pay with a less than ideal method if it means consistency.


On the docstring conversation, FWIW I don't totally grok where the docstring is used in prompts, so I've avoided it entirely as that seems to be "magic".

For example, here it's being used somehow in a validator prompt (and requires a comment to remind that this is the case)?

[Screenshot: a docstring being used as the prompt in a validator, with a comment reminding the reader that it is the prompt]

Another example: in some cases a missing docstring raises a ValueError, e.g.: https://docs.mirascope.io/latest/api/openai/tools/#mirascope.openai.tools.convert_function_to_tool

A search for "docstring" in the docs doesn't show clearly where it's safe and unsafe to use docstrings, so I've never been sure. Since it's not immediately obvious what it's doing, it becomes easy to slip up and add things to prompts.

Even if there are workarounds for it, it seems to be philosophically opposed to Mirascope's no-magic ideal when standard language functionality is hijacked like that.

Last, some of our prompts are 100+ lines long. A concise docstring that can be used in IDEs is often more helpful. Appreciate the workarounds you shared here, I'll be trying that to standardize in our repo!

@willbakst (Contributor, Author)

I agree 100% on consistency. For the v1 interface I want to have a single recommended choice/flow that is consistent.

For the docstrings, this is a failure of our documentation, and we should ensure that this is extremely clear in the v1 documentation on release.

Marrying these two points, I believe that the recommendation should be to use docstrings for all things prompts/tools while providing escape hatches for those who want them (e.g. in the case of really long prompts that you want a simpler docstring for).

There are currently only two uses of docstrings -- prompts and tools. The docstring should describe what the thing does, and in my mind it is the prompt template / tool schema that best describes what the thing does (with an exception for prompts I describe below). Here are the core interface examples to demonstrate this:

Call

These are LLM API calls as typed functions where the prompt template is the docstring. This will be the standard recommendation for making API calls.

from mirascope.core import openai

@openai.call("gpt-4o")
def recommend_book(genre: str):
    """Recommend a {genre} book."""

response = recommend_book("fantasy")
print(response.content)

BasePrompt

The recommended use of BasePrompt is for writing prompts that are agnostic to a particular provider. It provides a run method for running the prompt with the provider of your choice. The prompt template is also the docstring:

from mirascope.core import BasePrompt, openai

class BookRecommendationPrompt(BasePrompt):
    """Recommend a {genre} book."""

    genre: str

prompt = BookRecommendationPrompt(genre="fantasy")
response = prompt.run(openai.call("gpt-4o"))
print(response.content)

Tools

There are class tools and functional tools. In both cases the docstring will be used for constructing the schema:

from mirascope.core import BaseTool
from pydantic import Field

# Functional Tool
def format_book(title: str, author: str):
    """Returns the title and author of a book nicely formatted.

    Args:
        title: The title of the book.
        author: The author of the book. 
    """
    return f"{title} by {author}"

# Class Tool
class FormatBook(BaseTool):
    """Returns the title and author of a book nicely formatted."""

    title: str = Field(..., description="The title of the book")
    author: str = Field(..., description="The author of the book")

In all of these cases, the "magic" is simply that we are using the description of the thing to construct the API call, but ultimately it's just the means of providing the information for us to parse. In the case of prompts, this means parsing them into the messages array. In the case of tools, this means parsing them into the correct tool schema.
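
To make the prompt side of that concrete, here is a rough sketch of the kind of parsing involved (this is not Mirascope's actual parser, just an illustration for the SYSTEM/USER keyword templates shown earlier; it ignores MESSAGES and other keywords):

import inspect
import re

def parse_prompt_messages(fn, **kwargs) -> list[dict]:
    """Parse a SYSTEM/USER-style docstring template into a messages array."""
    template = inspect.cleandoc(fn.__doc__ or "")
    parts = re.split(r"^(SYSTEM|USER|ASSISTANT):", template, flags=re.MULTILINE)
    if len(parts) == 1:  # no role keywords: treat the whole template as a single user message
        return [{"role": "user", "content": template.format(**kwargs)}]
    return [
        {"role": role.lower(), "content": content.strip().format(**kwargs)}
        for role, content in zip(parts[1::2], parts[2::2])
    ]

def recommend_book(genre: str):
    """
    SYSTEM: You are the world's greatest librarian.
    USER: Recommend a {genre} book.
    """

print(parse_prompt_messages(recommend_book, genre="fantasy"))
#> [{'role': 'system', ...}, {'role': 'user', 'content': 'Recommend a fantasy book.'}]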

Any other docstring used anywhere else will continue to operate as a standard docstring just like these ones (the only difference being that they aren't used for prompts/tools).

However, I understand that sometimes prompts can get quite long and you would prefer to have a shorter docstring with a separate prompt template. I also think that there are cases where the docstring for a prompt doesn't necessarily perfectly describe what it does, particularly when using tools.

I imagine support for a prompt template escape hatch would look like this (where it's clear you're no longer using the docstring):

@openai.call("gpt-4o", tools=[format_book], tool_choice="required")
@prompt_template("Recommend a {genre} book")
def recommend_book(genre: str):
    """Recommends a book of a particular genre using OpenAI's gpt-4o model.

    This function will always use the `format_book` tool. This means that `response.content`
    will generally be the empty string since the model will have called a tool instead, which you
    can access and use through the `response.tool` property and `tool.call` method.

    Args:
        genre: The genre of the book to recommend.

    Returns:
        An `OpenAICallResponse` instance
    """

Similarly you could do the same for BasePrompt:

@prompt_template("Recommend a {genre} book")
class BookRecommendationPrompt(BasePrompt):
    """A prompt for recommending a book of a particular genre."""

    genre: str

For tools, my current stance is that we should not provide any other choice but to use the docstring. My reasoning here is that there is a true 1:1 mapping between the tool's docstring and what the tool does, and I can't see a reason to provide another choice here. All of the override examples I tried to work through just felt silly and likely never needed. The LLM will use the tools just like you would, so the docstring you would write for the class/function for yourself is exactly what the LLM should receive.
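
To make this concrete, here is a rough sketch (not Mirascope's actual converter, and it assumes string-typed parameters only) of how a tool function's docstring and signature map onto an OpenAI tool schema:

import inspect

def format_book(title: str, author: str):
    """Returns the title and author of a book nicely formatted."""
    return f"{title} by {author}"

def to_tool_schema(fn) -> dict:
    # Assume every parameter is a string for this sketch.
    params = {name: {"type": "string"} for name in inspect.signature(fn).parameters}
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.cleandoc(fn.__doc__ or ""),  # the docstring becomes the description
            "parameters": {"type": "object", "properties": params, "required": list(params)},
        },
    }

print(to_tool_schema(format_book))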

Would love your thoughts / feedback here @ekaj2 🙏

@willbakst (Contributor, Author)

I implemented the @prompt_template functionality.

We have most things implemented. At this point we are working on full test coverage and documentation so that we can release the v1.0.0 production version.

Right now, v1.0.0-b3 is out as an open beta and can be installed as a pre-release version for those who want to test it out. The documentation should also be live (but it's not latest, so you'll have to manually select the pre-release version).

Any and all feedback here is most welcome and appreciated :)

willbakst mentioned this issue on Aug 17, 2024 (merged)