diff --git a/apps/11_promptflow/README.md b/apps/11_promptflow/README.md
index 3300edb..48cf098 100644
--- a/apps/11_promptflow/README.md
+++ b/apps/11_promptflow/README.md
@@ -36,8 +36,42 @@ $ pip install -r requirements.txt
 
 [Prompt flow > Quick start](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html) provides a quick start guide to Prompt flow.
 Some of the examples are extracted from [github.com/microsoft/promptflow/examples](https://github.com/microsoft/promptflow/tree/main/examples) to guide you through the basic usage of Prompt flow.
 
+**Set up connection**
+
+```shell
+$ cd apps/11_promptflow
+
+# List connections
+$ pf connection list
+
+# Set parameters
+$ AZURE_OPENAI_KEY=
+$ AZURE_OPENAI_ENDPOINT=
+$ CONNECTION_NAME=open_ai_connection
+
+# Delete connection (if needed)
+$ pf connection delete \
+  --name $CONNECTION_NAME
+
+# Create connection
+$ pf connection create \
+  --file connection_azure_openai.yaml \
+  --set api_key=$AZURE_OPENAI_KEY \
+  --set api_base=$AZURE_OPENAI_ENDPOINT \
+  --name $CONNECTION_NAME
+
+# Show connection
+$ pf connection show \
+  --name $CONNECTION_NAME
+```
+
 ### [chat_minimal](https://github.com/microsoft/promptflow/tree/main/examples/flex-flows/chat-minimal)
 
+A chat flow defined as a simple function, demonstrating the minimal code needed for a chat flow.
+
+Prompt flow also provides a tracing feature, which lets you trace the flow of a conversation; this example shows how to use it.
+Details are available in [Tracing](https://microsoft.github.io/promptflow/how-to-guides/tracing/index.html).
+
 **Run as normal Python script**
 
 ```shell
@@ -67,6 +101,8 @@ $ pf run create \
   --stream
 ```
 
+`--column-mapping` maps fields in the JSONL data file to the flow inputs. For more details, refer to [Use column mapping](https://microsoft.github.io/promptflow/how-to-guides/run-and-evaluate-a-flow/use-column-mapping.html).
+
 ### playground_chat
 
 ```shell
@@ -79,30 +115,6 @@ $ pf flow init \
 
 $ cd playground_chat
 
-# Set parameters
-$ CONNECTION_NAME=open_ai_connection
-$ AZURE_OPENAI_KEY=
-$ AZURE_OPENAI_ENDPOINT=
-
-# List connections
-$ pf connection list
-
-
-# Delete connection (if needed)
-$ pf connection delete \
-  --name $CONNECTION_NAME
-
-# Create connection
-$ pf connection create \
-  --file azure_openai.yaml \
-  --set api_key=$AZURE_OPENAI_KEY \
-  --set api_base=$AZURE_OPENAI_ENDPOINT \
-  --name $CONNECTION_NAME
-
-# Show connection
-$ pf connection show \
-  --name $CONNECTION_NAME
-
 # Interact with chat flow
 $ pf flow test \
   --flow . \
@@ -230,6 +242,8 @@ $ pf run create \
 $ pf run show-details --name $RUN_NAME
 ```
 
+[Tutorial: How prompt flow helps on quality improvement](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) provides a detailed guide on using Prompt flow to improve the quality of your LLM applications.
+
 ### [eval-chat-math](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-chat-math)
 
 This example shows how to evaluate the answer of math questions, which can compare the output results with the standard answers numerically.
@@ -237,7 +251,29 @@ Details are available in the [eval-chat-math/README.md](./eval-chat-math/README.md).
 To understand how to operate the flow in VS Code, you can refer to the [Build your high quality LLM apps with Prompt flow](https://www.youtube.com/watch?v=gcIe6nk2gA4).
 This video shows how to evaluate the answer of math questions and guide you to tune the prompts using variants.
-
+### flex_flow_langchain
+
+To guide you through working with LangChain, we provide an example flex flow that wraps a LangChain `AzureChatOpenAI` model in a callable `LangChainRunner` class (see `main.py` below).
+
+```shell
+$ cd apps/11_promptflow/flex_flow_langchain
+$ pf flow test \
+  --flow main:LangChainRunner \
+  --inputs question="What's 2+2?" \
+  --init custom_connection=open_ai_connection
+
+$ RUN_NAME=flex_flow_langchain-$(date +%s)
+$ pf run create \
+  --name $RUN_NAME \
+  --flow . \
+  --data ./data.jsonl \
+  --column-mapping question='${data.question}' \
+  --stream
+
+$ pf run show-details --name $RUN_NAME
+```
+
+
 ## References
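The `connection_azure_openai.yaml` passed to `pf connection create` above is renamed below from `playground_chat/azure_openai.yaml` with 100% similarity, so its contents never appear in this patch. Based on the `--set api_key`/`--set api_base` overrides it accepts, it presumably follows Prompt flow's standard Azure OpenAI connection format; the following is a rough sketch with placeholder values, not the actual file:

```yaml
# Hypothetical sketch of connection_azure_openai.yaml -- the real file is not shown in this diff.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: "<azure-openai-api-key>"      # overridden at create time via --set api_key=$AZURE_OPENAI_KEY
api_base: "<azure-openai-endpoint>"    # overridden at create time via --set api_base=$AZURE_OPENAI_ENDPOINT
api_type: "azure"
api_version: "2024-02-01"              # placeholder; main.py reads api_version from the stored connection
```

Because the `--set` flags override these fields when the connection is created, real secrets never need to be committed to the YAML file itself.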
diff --git a/apps/11_promptflow/playground_chat/azure_openai.yaml b/apps/11_promptflow/connection_azure_openai.yaml
similarity index 100%
rename from apps/11_promptflow/playground_chat/azure_openai.yaml
rename to apps/11_promptflow/connection_azure_openai.yaml
diff --git a/apps/11_promptflow/flex_flow_langchain/data.jsonl b/apps/11_promptflow/flex_flow_langchain/data.jsonl
new file mode 100644
index 0000000..63f316d
--- /dev/null
+++ b/apps/11_promptflow/flex_flow_langchain/data.jsonl
@@ -0,0 +1,2 @@
+{"question": "What's 4+4?"}
+{"question": "What's 4x4?"}
\ No newline at end of file
diff --git a/apps/11_promptflow/flex_flow_langchain/flow.flex.yaml b/apps/11_promptflow/flex_flow_langchain/flow.flex.yaml
new file mode 100644
index 0000000..3abde49
--- /dev/null
+++ b/apps/11_promptflow/flex_flow_langchain/flow.flex.yaml
@@ -0,0 +1,8 @@
+$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
+entry: main:LangChainRunner
+sample:
+  inputs:
+    input: What's 2+2?
+    prediction: What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.
+  init:
+    custom_connection: open_ai_connection
diff --git a/apps/11_promptflow/flex_flow_langchain/main.py b/apps/11_promptflow/flex_flow_langchain/main.py
new file mode 100644
index 0000000..76331e9
--- /dev/null
+++ b/apps/11_promptflow/flex_flow_langchain/main.py
@@ -0,0 +1,52 @@
+from dataclasses import dataclass
+
+from langchain_openai import AzureChatOpenAI
+from promptflow.client import PFClient
+from promptflow.connections import CustomConnection
+from promptflow.tracing import trace
+
+
+@dataclass
+class Result:
+    answer: str
+
+
+class LangChainRunner:
+    def __init__(self, custom_connection: CustomConnection):
+        # https://python.langchain.com/v0.2/docs/integrations/chat/azure_chat_openai/
+        self.llm = AzureChatOpenAI(
+            temperature=0,
+            api_key=custom_connection.secrets["api_key"],
+            api_version=custom_connection.configs["api_version"],
+            azure_endpoint=custom_connection.configs["api_base"],
+            model="gpt-4o",
+        )
+
+    @trace
+    def __call__(
+        self,
+        question: str,
+    ) -> Result:
+        response = self.llm.invoke(
+            [
+                (
+                    "system",
+                    "You are asking me to do some math, I can help with that.",
+                ),
+                ("human", question),
+            ],
+        )
+        return Result(answer=response.content)
+
+
+if __name__ == "__main__":
+    from promptflow.tracing import start_trace
+
+    start_trace()
+    pf = PFClient()
+    connection = pf.connections.get(name="open_ai_connection")
+    runner = LangChainRunner(custom_connection=connection)
+    result = runner(
+        question="What's 2+2?",
+    )
+    print(result)
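Beyond the `pf flow test` and `pf run` commands documented in the README section above, `main.py` carries its own `__main__` block: it starts tracing, fetches the `open_ai_connection` created earlier through `PFClient`, and asks a single question. A sketch of a direct run, assuming that connection and a `gpt-4o` deployment exist (the printed answer is illustrative only):

```shell
$ cd apps/11_promptflow/flex_flow_langchain
$ python main.py
# start_trace() should print a link to the local trace UI before the result appears
Result(answer='2 + 2 equals 4.')
```

Because `__call__` is decorated with `@trace`, the question/answer pair should also show up as a span in the trace UI described in the Tracing docs linked above.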